Build a basic LLM chat app with Solara

In this post, we will build a basic LLM chat app with Solara. Large Language Models (LLMs) have become increasingly popular, and Solara provides several components to work with them. Let’s dive in.

First things first, let’s install Solara.

$ pip install solara

Now, let’s start by creating an app.py that sends a simple message with the content “Hello!” as a user. To do that, we use the ChatBox and ChatMessage components.

import solara

@solara.component
def Page():
    with solara.lab.ChatBox():
        with solara.lab.ChatMessage(user=True, name="User"):
            solara.Markdown("Hello!")

Page()

You can modify the user name and/or the message as you please.
import solara

@solara.component
def Page():
    with solara.lab.ChatBox():
        with solara.lab.ChatMessage(user=True, name="Morpheus"):
            solara.Markdown("Wake up, Neo...")

Page()
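To try any of these snippets, save the code as app.py and serve it with the Solara CLI:

$ solara run app.py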
You can also send a message as an assistant.
import solara

@solara.component
def Page():
    with solara.lab.ChatBox():
        with solara.lab.ChatMessage(user=False, name="Assistant"):
            solara.Markdown("Hello! How can I assist you today?")

Page()
To have a conversation, we create a reactive variable messages where we will store the messages: a list of dictionaries holding the roles (for example, user and assistant) and the message contents.
import solara
from typing import List
from typing_extensions import TypedDict

class MessageDict(TypedDict):
    role: str
    content: str

messages: solara.Reactive[List[MessageDict]] = solara.reactive([])
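As a quick aside (a throwaway snippet, not part of the app): a reactive variable is read and replaced through its .value attribute, so updating the conversation is a plain assignment.

messages.value = [{"role": "user", "content": "Hello!"}]
print(messages.value[-1]["content"])  # prints: Hello!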
We can generate a conversation by adding messages to the reactive variable messages that we previously created and displaying each message one by one.
@solara.component
def Page():
    messages.value = [
        {"role": "user", "content": "Hello!"},
        {"role": "assistant", "content": "Hello! How can I assist you today?"},
    ]
    with solara.lab.ChatBox():
        for item in messages.value:
            with solara.lab.ChatMessage(
                user=item["role"] == "user",
                name="User" if item["role"] == "user" else "Assistant",
            ):
                solara.Markdown(item["content"])

Page()
Let’s now add the possibility to receive messages from the user by adding the ChatInput component and a send function that adds the message to the conversation.
messages: solara.Reactive[List[MessageDict]] = solara.reactive([])

@solara.component
def Page():
    def send(message):
        messages.value = [*messages.value, {"role": "user", "content": message}]

    with solara.lab.ChatBox():
        for item in messages.value:
            with solara.lab.ChatMessage(
                user=item["role"] == "user",
                name="User" if item["role"] == "user" else "Assistant",
            ):
                solara.Markdown(item["content"])
    solara.lab.ChatInput(send_callback=send)

Page()
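One detail worth noticing: send builds a new list instead of appending in place. As in most reactive state libraries, Solara picks up the change when .value is reassigned, so mutating the existing list would not reliably trigger a re-render:

# Triggers an update: .value is reassigned to a new list
messages.value = [*messages.value, {"role": "user", "content": "Hi"}]

# Avoid: in-place mutation does not reassign .value, so the UI may not refresh
messages.value.append({"role": "user", "content": "Hi"})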
Try it out by sending a message.
EchoBot
Up to now we are only displaying the message the user sent. Let’s first simulate a conversation by replying with exactly the same message we receive from the user. To do that, we add a response function that appends an assistant message to the conversation, and a result function that replies with the last message (which will be the one sent by the user) and runs once every time the counter user_message_count changes.
messages: solara.Reactive[List[MessageDict]] = solara.reactive([])

@solara.component
def Page():
    user_message_count = len([m for m in messages.value if m["role"] == "user"])

    def send(message):
        messages.value = [*messages.value, {"role": "user", "content": message}]

    def response(message):
        messages.value = [*messages.value, {"role": "assistant", "content": message}]

    def result():
        if messages.value != []:
            response(messages.value[-1]["content"])

    result = solara.lab.use_task(result, dependencies=[user_message_count])

    with solara.lab.ChatBox():
        for item in messages.value:
            with solara.lab.ChatMessage(
                user=item["role"] == "user",
                name="User" if item["role"] == "user" else "EchoBot",
            ):
                solara.Markdown(item["content"])
    solara.lab.ChatInput(send_callback=send)

Page()
The complete code can be found below.
import solara
from typing import List
from typing_extensions import TypedDict

class MessageDict(TypedDict):
    role: str
    content: str

messages: solara.Reactive[List[MessageDict]] = solara.reactive([])

@solara.component
def Page():
    user_message_count = len([m for m in messages.value if m["role"] == "user"])

    def send(message):
        messages.value = [*messages.value, {"role": "user", "content": message}]

    def response(message):
        messages.value = [*messages.value, {"role": "assistant", "content": message}]

    def result():
        if messages.value != []:
            response(messages.value[-1]["content"])

    result = solara.lab.use_task(result, dependencies=[user_message_count])

    with solara.lab.ChatBox():
        for item in messages.value:
            with solara.lab.ChatMessage(
                user=item["role"] == "user",
                name="User" if item["role"] == "user" else "EchoBot",
            ):
                solara.Markdown(item["content"])
    solara.lab.ChatInput(send_callback=send)

Page()
Up to now, our EchoBot application looks like this. Try it out!
StreamBot
Let’s now build a bot that streams its response message. Let’s first emulate a streamed response with a function that we call response_generator.
# Streamed response emulator
import time
import random

def response_generator():
    response = random.choice(
        [
            "Hello! How can I assist you today?",
            "Hello! If you have any questions or need help with something, feel free to ask.",
        ]
    )
    for word in response.split():
        yield word + " "
        time.sleep(0.05)
Let’s see that it’s working as expected.
for chunk in response_generator():
    print(chunk)
Hello!
How
can
I
assist
you
today?
It works. Notice that for the moment the response_generator function will give one of the two possible responses at random, without considering the user message.
Let’s now create a function that successively appends the chunks to the last message.
def add_chunk_to_ai_message(chunk: str):
    messages.value = [
        *messages.value[:-1],
        {
            "role": "assistant",
            "content": messages.value[-1]["content"] + chunk,
        },
    ]
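To see what this function does in isolation (again, just an illustrative snippet, not part of the app), feed it a few chunks by hand and watch the last message grow:

messages.value = [{"role": "assistant", "content": ""}]
for chunk in ["Hello! ", "How ", "can ", "I ", "help?"]:
    add_chunk_to_ai_message(chunk)
print(messages.value[-1]["content"])  # Hello! How can I help?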
We need to modify the EchoBot code to include this functionality as follows.
messages: solara.Reactive[List[MessageDict]] = solara.reactive([])

@solara.component
def Page():
    user_message_count = len([m for m in messages.value if m["role"] == "user"])

    def send(message):
        messages.value = [*messages.value, {"role": "user", "content": message}]

    def response(message):
        messages.value = [*messages.value, {"role": "assistant", "content": ""}]
        for chunk in response_generator():
            add_chunk_to_ai_message(chunk)

    def result():
        if messages.value != []:
            response(messages.value[-1]["content"])

    result = solara.lab.use_task(result, dependencies=[user_message_count])

    with solara.lab.ChatBox():
        for item in messages.value:
            with solara.lab.ChatMessage(
                user=item["role"] == "user",
                name="User" if item["role"] == "user" else "StreamBot",
            ):
                solara.Markdown(item["content"])
    solara.lab.ChatInput(send_callback=send)

Page()
The complete code can be found below.
import solara
import time
import random
from typing import List
from typing_extensions import TypedDict

class MessageDict(TypedDict):
    role: str
    content: str

messages: solara.Reactive[List[MessageDict]] = solara.reactive([])

# Streamed response emulator
def response_generator():
    response = random.choice(
        [
            "Hello! How can I assist you today?",
            "Hello! If you have any questions or need help with something, feel free to ask.",
        ]
    )
    for word in response.split():
        yield word + " "
        time.sleep(0.05)

def add_chunk_to_ai_message(chunk: str):
    messages.value = [
        *messages.value[:-1],
        {
            "role": "assistant",
            "content": messages.value[-1]["content"] + chunk,
        },
    ]

@solara.component
def Page():
    user_message_count = len([m for m in messages.value if m["role"] == "user"])

    def send(message):
        messages.value = [*messages.value, {"role": "user", "content": message}]

    def response(message):
        messages.value = [*messages.value, {"role": "assistant", "content": ""}]
        for chunk in response_generator():
            add_chunk_to_ai_message(chunk)

    def result():
        if messages.value != []:
            response(messages.value[-1]["content"])

    result = solara.lab.use_task(result, dependencies=[user_message_count])

    with solara.lab.ChatBox():
        for item in messages.value:
            with solara.lab.ChatMessage(
                user=item["role"] == "user",
                name="User" if item["role"] == "user" else "StreamBot",
            ):
                solara.Markdown(item["content"])
    solara.lab.ChatInput(send_callback=send)

Page()
Our StreamBot application looks like this. Try it out!
ChatGPT bot
The StreamBot application doesn’t take the user message into account. To reply with something coherent, let’s use one of OpenAI’s models (in this example, gpt-3.5-turbo).

First, obtain an OpenAI API key and store it as OPENAI_API_KEY=sk-... in a local .env file, which the code below reads.
import os
import openai
from openai import OpenAI
from dotenv import load_dotenv, find_dotenv

_ = load_dotenv(find_dotenv())  # read local .env file
openai.api_key = os.environ["OPENAI_API_KEY"]

client = OpenAI()
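As a side note, the OpenAI() client also reads OPENAI_API_KEY from the environment on its own, so once the .env file is loaded, the explicit openai.api_key assignment is mostly there for clarity.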
Now we can define a new response_generator function that will use OpenAI to give a coherent answer.
def response_generator(message):
    return client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": message},
        ],
        stream=True,
    )
Let’s see that it works (as you can see in the code, we need to extract the content from each chunk and verify that it is not None).
for chunk in response_generator("Hello!"):
    if chunk.choices[0].delta.content is not None:
        print(chunk.choices[0].delta.content)
Hello
!
How
can
I
assist
you
today
?
We need to modify the StreamBot code as follows.
messages: solara.Reactive[List[MessageDict]] = solara.reactive([])

@solara.component
def Page():
    user_message_count = len([m for m in messages.value if m["role"] == "user"])

    def send(message):
        messages.value = [*messages.value, {"role": "user", "content": message}]

    def response(message):
        messages.value = [*messages.value, {"role": "assistant", "content": ""}]
        for chunk in response_generator(message):
            if chunk.choices[0].delta.content is not None:
                add_chunk_to_ai_message(chunk.choices[0].delta.content)

    def result():
        if messages.value != []:
            response(messages.value[-1]["content"])

    result = solara.lab.use_task(result, dependencies=[user_message_count])

    with solara.lab.ChatBox():
        for item in messages.value:
            with solara.lab.ChatMessage(
                user=item["role"] == "user",
                name="User" if item["role"] == "user" else "ChatGPT",
            ):
                solara.Markdown(item["content"])
    solara.lab.ChatInput(send_callback=send)

Page()
The complete code can be found below.
import solara
from typing import List
from typing_extensions import TypedDict
import os
import openai
from openai import OpenAI

openai.api_key = os.environ["OPENAI_API_KEY"]

client = OpenAI()

def response_generator(message):
    return client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            {"role": "user", "content": message},
        ],
        stream=True,
    )

class MessageDict(TypedDict):
    role: str
    content: str

messages: solara.Reactive[List[MessageDict]] = solara.reactive([])

def add_chunk_to_ai_message(chunk: str):
    messages.value = [
        *messages.value[:-1],
        {
            "role": "assistant",
            "content": messages.value[-1]["content"] + chunk,
        },
    ]

@solara.component
def Page():
    user_message_count = len([m for m in messages.value if m["role"] == "user"])

    def send(message):
        messages.value = [*messages.value, {"role": "user", "content": message}]

    def response(message):
        messages.value = [*messages.value, {"role": "assistant", "content": ""}]
        for chunk in response_generator(message):
            if chunk.choices[0].delta.content is not None:
                add_chunk_to_ai_message(chunk.choices[0].delta.content)

    def result():
        if messages.value != []:
            response(messages.value[-1]["content"])

    result = solara.lab.use_task(result, dependencies=[user_message_count])

    with solara.lab.ChatBox():
        for item in messages.value:
            with solara.lab.ChatMessage(
                user=item["role"] == "user",
                name="User" if item["role"] == "user" else "ChatGPT",
            ):
                solara.Markdown(item["content"])
    solara.lab.ChatInput(send_callback=send)

Page()
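One limitation worth noting: response_generator only sends the latest user message, so the model has no memory of earlier turns. A natural extension (a sketch of one possible approach, not part of the original code) is to pass the accumulated history instead:

def response_generator(history: List[MessageDict]):
    # Send the whole conversation so the model keeps context
    return client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[
            {"role": "system", "content": "You are a helpful assistant."},
            *history,
        ],
        stream=True,
    )

In response, you would then call response_generator(messages.value[:-1]) so that the empty assistant placeholder that was just added is not sent along.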