OmniBridge wraps and connects different AI models, giving you access to all of them from one centralized place.
pip install omnibridge
Now you can start using OmniBridge!
NOTE: Once installed, you can use OmniBridge with both the omnibridge and obr commands.
Add your key
obr create key --name open_ai --value <value>
Add your model
obr create model chatgpt --name gpt3.5 --key open_ai
You can now run ChatGPT from your CLI!
obr run model --name gpt3.5 --prompt "tell me a joke"
You can also use the model you created to build flows (Auto-GPT style), passing the output of one model to several others!
obr create flow --name chef --model gpt3.5 -i "what ingredients do I need for the dishes?" "what wine would you suggest to pair with the dishes?" "how much time does it take to prepare?"
This command sets up four instances of your model. The first instance handles your prompt as you would normally expect; however, instead of returning the output, it passes it to the other three, adding a specific instruction for each!
This is best understood with an example. (Note that it may take a short while to generate a response.)
obr run flow --name chef --prompt "suggest two dishes for a romantic date"
This should return something like:
1. Filet Mignon with Roasted Vegetables: <description>
2. Lobster Risotto: <description>
******************************************************************
Ingredients:
<a list of ingredients>
******************************************************************
1. Filet Mignon with Roasted Vegetables: A red wine like a Cabernet Sauvignon or a Merlot...
2. Lobster Risotto: A white wine like a Chardonnay or a Sauvignon Blanc...
******************************************************************
Typical cooking times for a filet mignon can range from 8 to 12 minutes, and for lobster risotto,
it can take around 30-40 minutes.
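Conceptually, the fan-out above works like the sketch below. This is only an illustration, not OmniBridge's actual implementation: run_model is a hypothetical stand-in for the real model call, and, as in the beta flow format, each child's instruction is concatenated to the parent's output.

```python
def run_model(name, prompt):
    # Hypothetical stand-in for a real model call (e.g. the OpenAI API).
    # It just echoes its input so the example is self-contained.
    return f"[{name}] response to: {prompt}"

def run_fanout_flow(model, prompt, instructions):
    # The first instance handles the user's prompt as usual...
    parent_output = run_model(model, prompt)
    # ...but instead of returning the output, it passes it to one child
    # per instruction, appending that child's instruction to it.
    return [run_model(model, f"{parent_output} {instr}") for instr in instructions]

results = run_fanout_flow(
    "gpt3.5",
    "suggest two dishes for a romantic date",
    [
        "what ingredients do I need for the dishes?",
        "what wine would you suggest to pair with the dishes?",
        "how much time does it take to prepare?",
    ],
)
```

Here the chef flow produces one answer per instruction, which is why the example output above has three sections.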
Add your key
obr create key --name open_ai --value <value>
Create two models
obr create model chatgpt --name gpt_model -k open_ai
obr create model dalle --name dalle_model -k open_ai
Now combine both models in a sequential flow
obr create flow --name image_flow --multi gpt_model dalle_model -t seq
Finally, run the flow with a prompt
obr run flow --name image_flow -i "create a prompt to an image that will amaze me"
NOTE: Currently, the images are saved in the current working directory.
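A sequential flow (-t seq) simply pipes each model's output into the next model's input. A minimal sketch, with a hypothetical run_model stub in place of the real model calls:

```python
def run_model(name, prompt):
    # Hypothetical stub for an actual model call.
    return f"[{name}] {prompt}"

def run_seq_flow(models, prompt):
    # Each model's output becomes the next model's input.
    out = prompt
    for model in models:
        out = run_model(model, out)
    return out

# Mirrors the image_flow example: gpt_model writes an image prompt,
# then dalle_model consumes it.
final = run_seq_flow(["gpt_model", "dalle_model"],
                     "create a prompt to an image that will amaze me")
```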
For those of you looking for more freedom in creating AI model flows, we've got you covered too!
obr run flow -f flow_example.json -p "in short, suggest two dishes for a dinner date"
flow_example.json
{
"version": "beta",
"models": [
{
"name": "gpt3.5",
"models": [
{
"name": "gpt3.5",
"instruction": "what ingredients do I need for the dishes?"
},
{
"name": "gpt3.5",
"instruction": "what wine goes well with the dishes?"
},
{
"name": "gpt3.5",
"instruction": "what should I make for dessert?",
"models": [
{
"name": "dalle",
"instruction": "realistic photography."
}
]
}
]
}
]
}
Template Structure
version: The version of the JSON template.
models: An array of models, each containing:
    name: The name of the AI model you created in OmniBridge.
    models: An optional nested array of models, each containing:
        name: The name of the nested AI model you created in OmniBridge.
        instruction: The instruction to be executed by the AI model. In the beta version we concatenate the instruction to the end of the input generated by the parent model.
        models: An optional nested array of models, following the same structure as above, receiving the output of the current model as input.
NOTE: In the beta version, image models can only be used as the leaves of the graph.
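The nested template lends itself to a recursive walk: each node runs its model on the input it receives (with its instruction concatenated, per the beta behavior), then feeds its output to its children. Here is a minimal sketch under those assumptions; run_model is a hypothetical stub, not OmniBridge's real internals.

```python
import json

def run_model(name, prompt):
    # Hypothetical stub for the real model invocation.
    return f"[{name}] {prompt}"

def run_flow_node(node, input_text):
    # Beta behavior: the node's instruction is concatenated to the end
    # of the input generated by the parent model.
    prompt = f"{input_text} {node['instruction']}" if "instruction" in node else input_text
    output = run_model(node["name"], prompt)
    children = node.get("models", [])
    if not children:
        return [output]  # a leaf, e.g. an image model such as dalle
    # Each child receives the current node's output as its input.
    results = []
    for child in children:
        results.extend(run_flow_node(child, output))
    return results

# A trimmed-down version of flow_example.json:
flow = json.loads("""
{"version": "beta",
 "models": [{"name": "gpt3.5",
             "models": [{"name": "gpt3.5",
                         "instruction": "what wine goes well with the dishes?"}]}]}
""")
leaves = run_flow_node(flow["models"][0], "suggest two dishes")
```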
We are working on more cool stuff!
Come share your ideas, use cases, and suggestions!
Join our Discord server and share your feedback with us!
Join us in shaping the future of AI!
For information on how to contribute, see here.