Update LLM chat tutorial to use latest OpenAI chat completions API (#884)

* Update LLM chat tutorial to use latest OpenAI API

* Update LLM chat apps to use latest OpenAI API

* Attribution

Co-authored-by: klurpicolo <[email protected]>

* Rename chat tutorial pages

* Update links to chat tutorials

---------

Co-authored-by: klurpicolo <[email protected]>
Co-authored-by: Debbie Matthews <[email protected]>
3 people authored Nov 16, 2023
1 parent 3980368 commit 116ba9b
Showing 9 changed files with 61 additions and 60 deletions.
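At a glance, the diffs below swap the 0.x module-level OpenAI API for the 1.x client object. Here is a minimal sketch of the new pattern (editorial; the model name and key are placeholders, not from this commit):

```python
# openai>=1.0: construct an explicit client instead of setting openai.api_key,
# and call client.chat.completions.create instead of openai.ChatCompletion.create.
from openai import OpenAI

client = OpenAI(api_key="sk-...")  # placeholder key for illustration

stream = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Hello!"}],
    stream=True,
)
for chunk in stream:
    # chunk.choices[0].delta.content may be None, hence the `or ""` seen in the diffs
    print(chunk.choices[0].delta.content or "", end="")
```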
22 changes: 11 additions & 11 deletions content/kb/tutorials/chat.md
@@ -1,9 +1,9 @@
---
-title: Build conversational apps
+title: Build a basic LLM chat app
slug: /knowledge-base/tutorials/build-conversational-apps
---

-# Build conversational apps
+# Build a basic LLM chat app

## Introduction

@@ -335,12 +335,12 @@ Now let's write the app. We'll use the same code as before, but we'll replace th

```python
import streamlit as st
-import openai
+from openai import OpenAI

st.title("ChatGPT-like clone")

# Set OpenAI API key from Streamlit secrets
-openai.api_key = st.secrets["OPENAI_API_KEY"]
+client = OpenAI(api_key=st.secrets["OPENAI_API_KEY"])

# Set a default model
if "openai_model" not in st.session_state:
```

@@ -371,30 +371,30 @@ if prompt := st.chat_input("What is up?"):
All that's changed is that we've added a default model to `st.session_state` and set our OpenAI API key from Streamlit secrets. Here's where it gets interesting. We can replace our logic from earlier to emulate streaming predetermined responses with the model's responses from OpenAI:

```python
-for response in openai.ChatCompletion.create(
+for response in client.chat.completions.create(
    model=st.session_state["openai_model"],
    messages=[{"role": m["role"], "content": m["content"]} for m in st.session_state.messages],
    stream=True,
):
-    full_response += response.choices[0].delta.get("content", "")
+    full_response += (response.choices[0].delta.content or "")
    message_placeholder.markdown(full_response + "▌")
message_placeholder.markdown(full_response)
st.session_state.messages.append({"role": "assistant", "content": full_response})
```

-Above, we've replaced the list of responses with a call to [`openai.ChatCompletion.create`](https://platform.openai.com/docs/guides/gpt/chat-completions-api). We've set `stream=True` to stream the responses to the frontend. In the API call, we pass the model name we hardcoded in session state and pass the chat history as a list of messages. We also pass the `role` and `content` of each message in the chat history. Finally, OpenAI returns a stream of responses (split into chunks of tokens), which we iterate through and display each chunk.
+Above, we've replaced the list of responses with a call to [`OpenAI().chat.completions.create`](https://platform.openai.com/docs/guides/text-generation/chat-completions-api). We've set `stream=True` to stream the responses to the frontend. In the API call, we pass the model name we hardcoded in session state and pass the chat history as a list of messages. We also pass the `role` and `content` of each message in the chat history. Finally, OpenAI returns a stream of responses (split into chunks of tokens), which we iterate through and display each chunk.
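For comparison, here is a minimal sketch (editorial, not part of this commit) of the same call without streaming under the 1.x SDK; one complete message comes back instead of a sequence of delta chunks. Note also why the loop above uses `delta.content or ""`: streamed chunks can carry a `content` of `None` (the final chunk, for example), which the old `.get("content", "")` handled implicitly.

```python
# Non-streaming counterpart of the call above (openai>=1.0).
# The model name and placeholder key are illustrative assumptions.
from openai import OpenAI

client = OpenAI(api_key="sk-...")  # placeholder key

response = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Say hello."}],
)
print(response.choices[0].message.content)  # the full reply in one object
```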

Putting it all together, here's the full code for our ChatGPT-like app and the result:

<Collapse title="View full code" expanded={false}>

```python
-import openai
+from openai import OpenAI
import streamlit as st

st.title("ChatGPT-like clone")

-openai.api_key = st.secrets["OPENAI_API_KEY"]
+client = OpenAI(api_key=st.secrets["OPENAI_API_KEY"])

if "openai_model" not in st.session_state:
    st.session_state["openai_model"] = "gpt-3.5-turbo"
@@ -414,15 +414,15 @@ if prompt := st.chat_input("What is up?"):
    with st.chat_message("assistant"):
        message_placeholder = st.empty()
        full_response = ""
-        for response in openai.ChatCompletion.create(
+        for response in client.chat.completions.create(
            model=st.session_state["openai_model"],
            messages=[
                {"role": m["role"], "content": m["content"]}
                for m in st.session_state.messages
            ],
            stream=True,
        ):
-            full_response += response.choices[0].delta.get("content", "")
+            full_response += (response.choices[0].delta.content or "")
            message_placeholder.markdown(full_response + "▌")
        message_placeholder.markdown(full_response)
        st.session_state.messages.append({"role": "assistant", "content": full_response})
```

</Collapse>
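The full example reads the API key from `st.secrets`; locally, that corresponds to a `.streamlit/secrets.toml` along these lines (a sketch; the key name matches the code above, the value is a placeholder):

```toml
# .streamlit/secrets.toml (keep this file out of version control)
OPENAI_API_KEY = "sk-..."
```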
4 changes: 2 additions & 2 deletions content/kb/tutorials/index.md
@@ -10,5 +10,5 @@ Our tutorials include step-by-step examples of building different types of apps
- [Connect to data sources](/knowledge-base/tutorials/databases)
- [Session State basics](/knowledge-base/tutorials/session-state)
- [Deploy Streamlit apps](/knowledge-base/tutorials/deploy)
-- [Build conversational apps](/knowledge-base/tutorials/build-conversational-apps)
-- [LLM quickstart](/knowledge-base/tutorials/llm-quickstart)
+- [Build a basic LLM chat app](/knowledge-base/tutorials/build-conversational-apps)
+- [Build an LLM app using LangChain](/knowledge-base/tutorials/llm-quickstart)
73 changes: 37 additions & 36 deletions content/kb/tutorials/llm/index.md
@@ -1,14 +1,15 @@
---
-title: LLM quickstart
+title: Build an LLM app using LangChain
slug: /knowledge-base/tutorials/llm-quickstart
---

-# LLM quickstart
+# Build an LLM app using LangChain

## OpenAI, LangChain, and Streamlit in 18 lines of code

In this tutorial, you will build a Streamlit LLM app that can generate text from a user-provided prompt. This Python app will use the LangChain framework and Streamlit. Optionally, you can deploy your app to [Streamlit Community Cloud](https://streamlit.io/cloud) when you're done.

-*This tutorial is adapted from a blog post by Chanin Nantesanamat: [LangChain tutorial #1: Build an LLM-powered app in 18 lines of code](https://blog.streamlit.io/langchain-tutorial-1-build-an-llm-powered-app-in-18-lines-of-code/).*
+_This tutorial is adapted from a blog post by Chanin Nantesanamat: [LangChain tutorial #1: Build an LLM-powered app in 18 lines of code](https://blog.streamlit.io/langchain-tutorial-1-build-an-llm-powered-app-in-18-lines-of-code/)._

<Cloud src="https://doc-tutorial-llm-18-lines-of-code.streamlit.app/?embed=true" height="600" />

@@ -75,64 +76,64 @@ To start, create a new Python file and save it as `streamlit_app.py` in the root

1. Import the necessary Python libraries.

```python
import streamlit as st
from langchain.llms import OpenAI
```

2. Create the app's title using `st.title`.

```python
st.title('🦜🔗 Quickstart App')
```

3. Add a text input box for the user to enter their OpenAI API key.

```python
openai_api_key = st.sidebar.text_input('OpenAI API Key', type='password')
```

4. Define a function to authenticate to OpenAI API with the user's key, send a prompt, and get an AI-generated response. This function accepts the user's prompt as an argument and displays the AI-generated response in a blue box using `st.info`.

```python
def generate_response(input_text):
    llm = OpenAI(temperature=0.7, openai_api_key=openai_api_key)
    st.info(llm(input_text))
```

5. Finally, use `st.form()` to create a text box (`st.text_area()`) for user input. When the user clicks `Submit`, the `generate_response()` function is called with the user's input as an argument.

```python
with st.form('my_form'):
    text = st.text_area('Enter text:', 'What are the three key pieces of advice for learning how to code?')
    submitted = st.form_submit_button('Submit')
    if not openai_api_key.startswith('sk-'):
        st.warning('Please enter your OpenAI API key!', icon='⚠')
    if submitted and openai_api_key.startswith('sk-'):
        generate_response(text)
```

6. Remember to save your file!
7. Return to your computer's terminal to run the app.

```bash
streamlit run streamlit_app.py
```
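For reference, the snippets from the steps above assemble into the complete `streamlit_app.py` (nothing here beyond what the steps introduce):

```python
import streamlit as st
from langchain.llms import OpenAI

st.title('🦜🔗 Quickstart App')

openai_api_key = st.sidebar.text_input('OpenAI API Key', type='password')

def generate_response(input_text):
    llm = OpenAI(temperature=0.7, openai_api_key=openai_api_key)
    st.info(llm(input_text))

with st.form('my_form'):
    text = st.text_area('Enter text:', 'What are the three key pieces of advice for learning how to code?')
    submitted = st.form_submit_button('Submit')
    if not openai_api_key.startswith('sk-'):
        st.warning('Please enter your OpenAI API key!', icon='⚠')
    if submitted and openai_api_key.startswith('sk-'):
        generate_response(text)
```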

## Deploying the app

To deploy the app to Streamlit Community Cloud, follow these steps:

1. Create a GitHub repository for the app. Your repository should contain two files (a sample `requirements.txt` is sketched at the end of this section):

```
your-repository/
├── streamlit_app.py
└── requirements.txt
```

1. Go to [Streamlit Community Cloud](http://share.streamlit.io), click the `New app` button from your workspace, then specify the repository, branch, and main file path. Optionally, you can customize your app's URL by choosing a custom subdomain.
-2. Click the `Deploy!` button.
+1. Click the `Deploy!` button.

Your app will now be deployed to Streamlit Community Cloud and can be accessed from around the world! 🌎
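As for the `requirements.txt` sketched in the repository layout above, a minimal version might contain just the packages the app imports (unpinned versions are an assumption):

```
streamlit
langchain
openai
```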

2 changes: 1 addition & 1 deletion content/library/api-cheat-sheet.md
@@ -262,7 +262,7 @@ st.color_picker("Pick a color")
>>> st.chat_input("Say something")
```

-Learn how to [build chat-based apps](/knowledge-base/tutorials/build-conversational-apps)
+Learn how to [Build a basic LLM chat app](/knowledge-base/tutorials/build-conversational-apps)

</CodeTile>

2 changes: 1 addition & 1 deletion content/library/api/chat/chat-input.md
@@ -6,7 +6,7 @@ description: st.chat_input displays a chat input widget.

<Tip>

-Read the [Build conversational apps](/knowledge-base/tutorials/build-conversational-apps) tutorial to learn how to use `st.chat_message` and `st.chat_input` to build chat-based apps.
+Read the [Build a basic LLM chat app](/knowledge-base/tutorials/build-conversational-apps) tutorial to learn how to use `st.chat_message` and `st.chat_input` to build chat-based apps.

</Tip>

2 changes: 1 addition & 1 deletion content/library/api/chat/chat-message.md
@@ -6,7 +6,7 @@ description: st.chat_message inserts a chat message container into the app.

<Tip>

-Read the [Build conversational apps](/knowledge-base/tutorials/build-conversational-apps) tutorial to learn how to use `st.chat_message` and `st.chat_input` to build chat-based apps.
+Read the [Build a basic LLM chat app](/knowledge-base/tutorials/build-conversational-apps) tutorial to learn how to use `st.chat_message` and `st.chat_input` to build chat-based apps.

</Tip>

4 changes: 2 additions & 2 deletions content/menu.md
@@ -570,9 +570,9 @@ site_menu:
url: /knowledge-base/tutorials/deploy/kubernetes
- category: Knowledge base / Tutorials / Session State basics
url: /knowledge-base/tutorials/session-state
-- category: Knowledge base / Tutorials / Build conversational apps
+- category: Knowledge base / Tutorials / Build a basic LLM chat app
url: /knowledge-base/tutorials/build-conversational-apps
-- category: Knowledge base / Tutorials / LLM quickstart
+- category: Knowledge base / Tutorials / Build an LLM app using LangChain
url: /knowledge-base/tutorials/llm-quickstart
- category: Knowledge base / Using Streamlit
url: /knowledge-base/using-streamlit
10 changes: 5 additions & 5 deletions python/api-examples-source/chat.llm.py
@@ -1,13 +1,13 @@
-import openai
import streamlit as st
+from openai import OpenAI

st.title("ChatGPT-like clone")
with st.expander("ℹ️ Disclaimer"):
    st.caption(
        "We appreciate your engagement! Please note, this demo is designed to process a maximum of 10 interactions. Thank you for your understanding."
    )

-openai.api_key = st.secrets["OPENAI_API_KEY"]
+client = OpenAI(api_key=st.secrets["OPENAI_API_KEY"])

if "openai_model" not in st.session_state:
st.session_state["openai_model"] = "gpt-3.5-turbo"
@@ -28,7 +28,7 @@
st.info(
    """Notice: The maximum message limit for this demo version has been reached. We value your interest!
    We encourage you to experience further interactions by building your own application with instructions
-    from Streamlit's [Build conversational apps](https://docs.streamlit.io/knowledge-base/tutorials/build-conversational-apps)
+    from Streamlit's [Build a basic LLM chat app](https://docs.streamlit.io/knowledge-base/tutorials/build-conversational-apps)
    tutorial. Thank you for your understanding."""
)
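The guard that triggers this notice is collapsed out of the diff; a hypothetical version of such a cap (the variable name and threshold are assumptions) might look like:

```python
max_messages = 20  # 10 user/assistant exchanges

if len(st.session_state.messages) >= max_messages:
    st.info("Maximum message limit reached for this demo.")
    st.stop()  # end this script run so no further input is processed
```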

@@ -41,15 +41,15 @@
with st.chat_message("assistant"):
    message_placeholder = st.empty()
    full_response = ""
-    for response in openai.ChatCompletion.create(
+    for response in client.chat.completions.create(
        model=st.session_state["openai_model"],
        messages=[
            {"role": m["role"], "content": m["content"]}
            for m in st.session_state.messages
        ],
        stream=True,
    ):
-        full_response += response.choices[0].delta.get("content", "")
+        full_response += (response.choices[0].delta.content or "")
        message_placeholder.markdown(full_response + "▌")
    message_placeholder.markdown(full_response)
    st.session_state.messages.append(
2 changes: 1 addition & 1 deletion python/api-examples-source/requirements.txt
@@ -9,5 +9,5 @@ scipy
altair==4.2.0
pydeck==0.8.0
Faker==19.1.0
-openai==0.27.8
+openai==1.3.0
streamlit-nightly==1.28.1.dev20231026
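To bring an existing environment in line with this pin, the upgrade is the usual (assuming pip):

```bash
pip install --upgrade "openai>=1.3.0"
```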
