To use Llama 3.2 in a Streamlit chatbot app, you'll need to connect the model to Streamlit's chat interface; in this guide the model runs locally through Ollama. Here's a step-by-step guide to help you set this up:
Prerequisites: Python 3.9 or higher, the Streamlit and ollama Python libraries, and a local Ollama installation to run Llama 3.2.

Steps:

1. Install the necessary libraries: if you haven't installed Streamlit and the ollama Python client yet, you can do so using pip:

pip install streamlit ollama
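Note that Llama 3.2 itself is not a pip package; the ollama library above is only the Python client for a locally running Ollama server. Assuming Ollama is installed and you want the same model tag the app below uses, download the weights once with the Ollama CLI:

ollama pull llama3.2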
2. Set up the model: if Llama 3.2 has specific installation instructions or dependencies (e.g., a model file), make sure to follow them from the official documentation.

3. Create the Streamlit app: create a Python file (e.g., app.py) to define the app and integrate the Llama 3.2 model.
import streamlit as st
import ollama
### Streamlit title and header
st.title("💬 Dinesh Vishe's Chatbot")
st.header("A chatbot powered by Llama 3.2 and Streamlit.")
if "messages" not in st.session_state:
st.session_state["messages"] = [{"role": "assistant", "content": "Hi sir/ Mam, How can I help you? 🔍"}]
### Write Message History
for msg in st.session_state.messages:
    if msg["role"] == "user":
        st.chat_message(msg["role"], avatar="💡").write(msg["content"])
    else:
        st.chat_message(msg["role"], avatar="🔥").write(msg["content"])
## Generator for Streaming Tokens
def generate_response():
    response = ollama.chat(model='llama3.2', stream=True, messages=st.session_state.messages)
    for partial_resp in response:
        token = partial_resp["message"]["content"]
        # Accumulate the full reply in session state so it can be saved to the history afterwards
        st.session_state["full_message"] += token
        yield token
### Handle new user input
if prompt := st.chat_input():
    st.session_state.messages.append({"role": "user", "content": prompt})
    st.chat_message("user", avatar="💡").write(prompt)
    st.session_state["full_message"] = ""
    # Stream the assistant's reply token by token, then save the complete text to the history
    st.chat_message("assistant", avatar="🪔").write_stream(generate_response)
    st.session_state.messages.append({"role": "assistant", "content": st.session_state["full_message"]})
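Finally, start the Ollama server if it isn't already running, then launch the app from a terminal and open the local URL Streamlit prints:

streamlit run app.py

One design note: st.write_stream also returns the full concatenated response, so as an alternative to accumulating tokens in st.session_state["full_message"] you could capture its return value and append that to the message history instead.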