
F.L.A.T. (Frameworkless LLM Agent... Thing)

Building AI agents should be pretty simple; they are typically "just LLMs calling functions and logic in a loop."

However, watching AI agents try to complete tasks is often like watching a drunk person trying to solve a Rubik's cube. Entertaining? Yes. Reliable? Not always!

Instead of asking an LLM to "do the whole thing" (which is indeed prone to inconsistency), the FLAT approach brings control and predictability to LLM interactions by treating them like traditional programming constructs, enhanced with the LLM's natural-language understanding.

Welcome to F.L.A.T., an AI library so tiny and simple it makes minimalists look like hoarders.

Tutorial on Google Colab Notebook

Setup

pip install flat-ai

And you're ready to go!

from flat_ai import FlatAI
# works with ollama, openai, together, anyscale ...
llm = FlatAI(api_key='YOUR KEY',  model='gpt-4o-mini', base_url='https://api.openai.com/v1')
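As the comment above says, the same client can point at other OpenAI-compatible backends. A minimal sketch for a local Ollama server, assuming Ollama is running with its OpenAI-compatible endpoint at the default address and you have pulled a model such as llama3.2 (Ollama ignores the API key, so any placeholder works):

from flat_ai import FlatAI

# assumes a local Ollama server exposing its OpenAI-compatible API at the
# default address; Ollama ignores the api_key, so any placeholder will do
llm = FlatAI(api_key='ollama', model='llama3.2', base_url='http://localhost:11434/v1')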

Minimalistic AI-Agents with Python constructs

Thank goodness Guido van Rossum had that wild pizza-night in '89 and blessed us with if/else, for loops and functions. Without those brand-new Python features, we'd be building our AI agents with stone tablets and carrier pigeons. And here we were thinking we needed quantum computing and a PhD in rocket surgery! Let's show that it is possible to build agents through absolute simplicity:

Gates


Most applications need some logic that controls the workflow of your agent with good old if/else statements. For example, given a question in plain English, you may want to branch, like checking whether an email sounds urgent or not:

if llm.is_true('is this email urgent?', email=email):
    ...  # do something
else:
    ...  # do something else

Routing


Similar to if/else statements, but for when your LLM needs to be more dramatic with its life choices.

For example, let's say we want to classify a message into different categories:

options = {
    'meeting': 'this is a meeting request',
    'spam': "people trying to sell you stuff you don't want",
    'other': 'this sounds like something else'
}

match llm.classify(options, email=email):
    case 'meeting':
        ...  # do something
    case 'spam':
        ...  # do something
    case 'other':
        ...  # do something

Objects

For most workflows, we will need our LLM to fill out objects like a trained monkey with a PhD in data entry. Just define the shape and watch the magic!

For example, let's deal with a list of action items at once as opposed to one at a time.

from datetime import date
from typing import List

from pydantic import BaseModel

class ActionItem(BaseModel):
    action: str
    due_date: str
    assignee_email: str

# we want to generate a list of action items
object_schema = List[ActionItem]

# deal with each action item
for action_item in llm.generate_object(object_schema, email=email, today=date.today()):
    ...  # do your thing
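And if you only need one object rather than a list, the same call should take a plain model. A quick sketch, assuming generate_object also accepts a single BaseModel schema (EmailSummary here is our own illustrative model, not part of the library):

class EmailSummary(BaseModel):
    subject: str
    one_line_summary: str

# hypothetical single-object call; EmailSummary is just an example model
summary = llm.generate_object(EmailSummary, email=email)
print(summary.one_line_summary)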

Function Calling


And of course, we want to be able to call functions, but with the LLM figuring out the arguments for us.

For example, say we want to call a function that sends a calendar invite for a meeting; we want the LLM to figure out the function's arguments from the information it is given:

def send_calendar_invite(
    subject: str,
    time: str,
    location: str,
    attendees: List[str]):
    ...  # send a calendar invite to the meeting

# we want to send a calendar invite if the email is requesting a meeting
llm.set_context(email=email, today=date.today())
if llm.is_true('is this email requesting a meeting?'):
    ret = llm.call_function(send_calendar_invite)

Parallelization


Sometimes you want to pick functions from a list of functions, and then call them all in parallel.

For example, let's say you want to send emails and calendar invites from a list of action items discussed in an email:

def send_calendar_invite(subject: str, time: str, location: str, attendees: List[str]):
    ...  # send a calendar invite to the meeting

def send_email(name: str, email_address_list: List[str], subject: str, body: str):
    ...  # send an email

instructions = """
extract the list of action items and call the functions required
"""

functions_to_call = llm.pick_a_function([send_calendar_invite, send_email], instructions=instructions, email=email, current_date=date.today())
# pick_a_function returns a parallel callable object, where each function is called in a separate thread
results = functions_to_call()
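What comes back depends on the library's return shape; as a minimal sketch, assuming results holds one return value per function that was picked and called:

# assumption: one entry per picked function, in invocation order
for result in results:
    print(result)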

Simple String Response

Sometimes you just want a simple string response from the LLM. You can use the get_string method for that. I know, boring AF, but it comes in handy:

ret = llm.get_string('what is the subject of the email?', email=email)

Streaming Response

Sometimes you want to stream the response from the LLM. You can use the get_stream method for this:

for chunk in llm.get_stream('what is the subject of the email?', email=email):
    print(chunk, end='', flush=True)

Optional In-Flight LLM Configuration

Need to tweak those LLM parameters on the fly? We've got you covered with a slick configuration pattern. You can temporarily override any LLM configuration parameter (model, temperature, etc.) for a specific call without affecting the base configuration:

# Use different model and temperature for just this call
llm(model='gpt-4', temperature=0.7).is_true('is this email urgent?', email=email)

# Use base configuration
llm.is_true('is this email urgent?', email=email)

This pattern works with any OpenAI API parameter (temperature, top_p, frequency_penalty, etc.) and keeps your code clean and flexible. The original LLM instance remains unchanged, so you can safely use different configurations for different calls without worrying about side effects.
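Since the base instance stays untouched, you could also keep a configured variant around and reuse it. A small sketch, assuming the call returns a configured copy of the client (which is what the behavior described above implies):

# hypothetical: hold on to a 'creative' variant alongside the default
creative_llm = llm(model='gpt-4', temperature=1.2)

subject_line = creative_llm.get_string('suggest a catchy subject line', email=email)
is_urgent = llm.is_true('is this email urgent?', email=email)  # base config, unchanged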

Observability

Ever wondered what your LLM does in its spare time? Catch all its embarrassing moments with:

from flat_ai import configure_logging

configure_logging('llm.log')

Heard of the command tail? You can use it to follow the logs:

tail -f llm.log

Painless Global Context

Ever tried talking to an LLM? You gotta give it a "prompt" - fancy word for "given some context {context}, please do something with this text, oh mighty AI overlord." But here's the catch: constantly writing the code to pass that context to the LLM is like telling your grandparents how to use a smartphone... every. single. day.

So we're making it brain-dead simple with these methods to pass the context when we need it, and then clear it when we don't:

  • set_context: Dump any object into the LLM's memory banks
  • add_context: Stack more stuff on top, like a context burrito
  • clear_context: For when you want the LLM to forget everything, like the last 10 minutes of your life ;)
  • delete_from_context: Surgical removal of specific memories

So let's say, for example, you want to set some context once and avoid having to pass it every single time:

from datetime import date

# we can set global context that will be passed on every call, until you remove it (which you can at any point in time)
llm.set_context(current_date=date.today(), user_name="Bob McPlumber", age="22")  # ...and whatever else you need
...  # do some stuff
# oops, he is actually 29; let's update that, and also his fav color is now pink
llm.add_context(age=29, fav_color="pink")
...  # do some other stuff
# well, turns out pink was not it; the user can't make up their mind, no sweat
llm.delete_from_context('fav_color')
...  # do something else
# no more global context is needed
llm.clear_context()

Tada!

And there you have it, ladies and gents! You're now equipped with the power to boss around LLMs like a project manager remotely working from Ibiza. Just remember - with great power comes great responsibility...

Now off you go: go forth and build something that makes ChatGPT look like a calculator from 1974! Just remember - if your AI starts humming "Daisy Bell" while slowly disconnecting your internet... well, you're on your own there, buddy!