POC: AI #65

Open · wants to merge 5 commits into main
37 changes: 37 additions & 0 deletions commands/ai/ai_commands.py
@@ -0,0 +1,37 @@
from discord import app_commands
import discord
from modules.ai.llm import get_response, list_models
from utils.splitter import split_text

def register_commands(
    tree: discord.app_commands.CommandTree,
    guilds: list[discord.Object],
):
    @tree.command(
        name="llm",
        description="Ask the AI a question",
        guilds=guilds,
    )
    @app_commands.describe(question="The question you want to ask the AI")
    async def llm(interaction: discord.Interaction, question: str, model: str = ""):
Contributor:

  1. I think the list of models could already be specified as Discord Choices.
  2. Provide a default one, since users probably don't care about / know the differences between the models.
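A sketch of what that might look like, assuming discord.py 2.x; the model names and the default below are illustrative, not from the PR:

MODEL_CHOICES = [
    app_commands.Choice(name="gpt-4o-mini", value="gpt-4o-mini"),
    app_commands.Choice(name="llama-3-8b-instruct", value="llama-3-8b-instruct"),
]

@tree.command(name="llm", description="Ask the AI a question", guilds=guilds)
@app_commands.describe(question="The question you want to ask the AI")
@app_commands.choices(model=MODEL_CHOICES)
async def llm(
    interaction: discord.Interaction,
    question: str,
    model: str = "gpt-4o-mini",  # assumed default for users who don't pick one
):
    ...

Choices are fixed at registration time, so building MODEL_CHOICES from list_models() would mean calling it once at startup rather than per invocation.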

        await interaction.response.defer()
        try:
            response = get_response(question, model)
            split_response = split_text(response)
            for chunk in split_response:
                await interaction.followup.send(chunk)
        except Exception as e:
            await interaction.followup.send(f"An error occurred: {e}")

    @tree.command(
        name="llm_models",
        description="List available models",
        guilds=guilds,
    )
    async def llm_models(interaction: discord.Interaction):
        await interaction.response.defer()
        try:
            models = list_models()
            await interaction.followup.send("\n".join(models))
        except Exception as e:
            await interaction.followup.send(f"An error occurred: {e}")
Contributor:
Maybe pass ephemeral=True on the error message so only the invoking user sees it.
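A minimal sketch of that change; one caveat is that the first followup after a defer takes on the deferred response's visibility, so the defer itself may also need ephemeral=True for the error to actually be hidden:

        except Exception as e:
            # hidden from the channel; the success path above stays public
            await interaction.followup.send(f"An error occurred: {e}", ephemeral=True)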

2 changes: 2 additions & 0 deletions main.py
@@ -8,6 +8,7 @@
import datetime

# user commands
from commands.ai import ai_commands
from commands.basic import basic_commands
from commands.bbt_count import bbt_count
from commands.config import config_commands
@@ -81,6 +82,7 @@ async def deployment_info(interaction: discord.Interaction):


# * register commands for just the placetw server
ai_commands.register_commands(tree, [placetw_guild])
edit_entry_cmd.register_commands(tree, placetw_guild, client)
restart.register_commands(tree, placetw_guild)
watching.register_commands(tree, placetw_guild, client)
24 changes: 24 additions & 0 deletions modules/ai/llm.py
Contributor:
Will need to fix integration tests for this lol (monkeypatch away OpenAI)
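A sketch of that monkeypatching, assuming a pytest suite; the fixture name and stub values are illustrative. Since llm.py builds the OpenAI client at import time, the environment has to be stubbed before the module is imported:

import pytest

@pytest.fixture(autouse=True)
def fake_llm(monkeypatch):
    # dummy credentials so importing modules.ai.llm can construct the client
    monkeypatch.setenv("OPENAI_API_KEY", "test-key")
    monkeypatch.setenv("OPENAI_BASE_URL", "http://localhost:9")  # never contacted

    import modules.ai.llm as llm

    # stub out the two functions the bot commands actually call
    monkeypatch.setattr(llm, "get_response", lambda prompt, model="": "stubbed reply")
    monkeypatch.setattr(llm, "list_models", lambda: ["fake-model"])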

@@ -0,0 +1,24 @@
import requests
from openai import OpenAI
from openai.types.chat import ChatCompletion
import os
from dotenv import load_dotenv
load_dotenv()

openai = OpenAI(
    base_url=os.getenv("OPENAI_BASE_URL"),
    api_key=os.getenv("OPENAI_API_KEY")
)

def list_models() -> list[str]:
    return [model.id for model in openai.models.list().data]

def get_response(prompt: str, model: str = "") -> str:
    completion: ChatCompletion = openai.chat.completions.create(
        model=model if model else os.getenv("DEFAULT_LLM_MODEL"),
        messages=[
            {"role": "user", "content": prompt}
        ],
        stream=False
    )
    return completion.choices[0].message.content
Contributor:
Needs error handling, pain (a try/except maybe).
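One possible shape for that, as a sketch; openai.APIError is the 1.x SDK's base class for request failures, and the re-raised message is illustrative:

from openai import APIError

def get_response(prompt: str, model: str = "") -> str:
    try:
        completion: ChatCompletion = openai.chat.completions.create(
            model=model if model else os.getenv("DEFAULT_LLM_MODEL"),
            messages=[{"role": "user", "content": prompt}],
            stream=False,
        )
        return completion.choices[0].message.content
    except APIError as e:
        # surface a cleaner message; the command layer's except will relay it
        raise RuntimeError(f"LLM request failed: {e}") from e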

36 changes: 35 additions & 1 deletion poetry.lock

Some generated files are not rendered by default.

1 change: 1 addition & 0 deletions pyproject.toml
@@ -22,6 +22,7 @@
beautifulsoup4 = "^4.12.3"
gitpython = "^3.1.43"
pytest = "^8.2.1"

openai = "^1.35.10"
[build-system]
requires = ["poetry-core"]
build-backend = "poetry.core.masonry.api"
17 changes: 17 additions & 0 deletions utils/splitter.py
Contributor:
Try this one-liner.

Or you could use numpy.array_split for this, though I guess it's a little overkill and you'd need to convert the pieces back into strings later.
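The one-liner itself isn't visible in this view; a fixed-width slicing version might look like the following (note it can split mid-line, unlike the line-aware splitter below), with the numpy variant after it:

import numpy as np

def split_text(text: str, max_length: int = 2000) -> list[str]:
    # fixed-width slices of at most max_length characters
    return [text[i:i + max_length] for i in range(0, len(text), max_length)]

# the numpy.array_split variant: n roughly equal pieces, joined back into strings
pieces = ["".join(part) for part in np.array_split(list("some long text"), 3)]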

@@ -0,0 +1,17 @@


def split_text(text: str, max_length: int = 2000) -> list[str]:
    chunks = []
    current_chunk = ""

    for line in text.splitlines(keepends=True):  # keepends=True keeps the newline characters
        if len(current_chunk) + len(line) <= max_length:
            current_chunk += line
        else:
            chunks.append(current_chunk)
            # note: a single line longer than max_length still becomes one chunk,
            # which can exceed Discord's 2000-character message limit
            current_chunk = line

    if current_chunk:  # Append any remaining text
        chunks.append(current_chunk)

    return chunks