| Model Name | Task | Metrics | Domain |
| --- | --- | --- | --- |
| focoos_object365 | Detection | - | Common Objects, 365 classes |
| focoos_rtdetr | Detection | - | Common Objects, 80 classes |
| focoos_cts_medium | Semantic Segmentation | - | Autonomous driving, 30 classes |
| focoos_cts_large | Semantic Segmentation | - | Autonomous driving, 30 classes |
| focoos_ade_nano | Semantic Segmentation | - | Common Scenes, 150 classes |
| focoos_ade_small | Semantic Segmentation | - | Common Scenes, 150 classes |
| focoos_ade_medium | Semantic Segmentation | - | Common Scenes, 150 classes |
| focoos_ade_large | Semantic Segmentation | - | Common Scenes, 150 classes |
| focoos_aeroscapes | Semantic Segmentation | - | Drone Aerial Scenes, 11 classes |
| focoos_isaid_nano | Semantic Segmentation | - | Satellite Imagery, 15 classes |
| focoos_isaid_medium | Semantic Segmentation | - | Satellite Imagery, 15 classes |
Focoos is a comprehensive SDK for computer vision tasks, including object detection, semantic segmentation, and instance segmentation. It provides pre-trained models that can be easily integrated and customized for various applications. Focoos supports both cloud and local inference and enables training on the cloud, making it a versatile tool for developers working across domains such as autonomous driving, common scenes, drone aerial scenes, and satellite imagery.
- Pre-trained Models: A wide range of pre-trained models for different tasks and domains.
- Cloud Inference: Run inference through the Focoos cloud API.
- Cloud Training: Train custom models with the Focoos cloud.
- Multiple Local Inference Runtimes: Support for various inference runtimes including CPU, GPU, Torchscript CUDA, OnnxRuntime CUDA, and OnnxRuntime TensorRT.
- Model Monitoring: Monitor model performance and metrics.
We recommend using UV as a package manager and environment manager for a streamlined dependency management experience. Here’s how to create a new virtual environment with UV:
```bash
pip install uv
uv venv --python 3.12
source .venv/bin/activate
```
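To confirm the new environment is active, a quick sanity check (plain shell commands, nothing Focoos-specific):

```bash
python --version   # should report Python 3.12.x
which python       # should point inside the .venv directory
```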
Focoos models support multiple inference runtimes. To keep the library lightweight and let users rely on their existing environment, optional dependencies (e.g., torch, onnxruntime, tensorrt) are not installed by default. Focoos ships with the following extra dependencies:
- `[torch]`: Torchscript CUDA
- `[cuda]`: OnnxRuntime CUDA
- `[tensorrt]`: OnnxRuntime TensorRT
```bash
uv pip install focoos[cpu] git+https://github.com/FocoosAI/focoos.git
uv pip install focoos[torch] git+https://github.com/FocoosAI/focoos.git
uv pip install focoos[cuda] git+https://github.com/FocoosAI/focoos.git
```
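A minimal smoke test to confirm the package is importable from the active environment (this assumes nothing about the SDK beyond the package name):

```bash
python -c "import focoos; print('focoos imported successfully')"
```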
To perform inference using TensorRT, ensure you have TensorRT version 10.5 installed.
```bash
sudo apt-get install tensorrt
uv pip install focoos[tensorrt] git+https://github.com/FocoosAI/focoos.git
```
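To check which TensorRT version is actually installed, assuming the apt package also provides the Python bindings, something like this should work:

```bash
dpkg -l | grep -i tensorrt                                 # system packages
python -c "import tensorrt; print(tensorrt.__version__)"   # Python bindings, if installed
```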
To run cloud inference with a pre-trained model:

```python
import os

from focoos import Focoos

focoos = Focoos(api_key=os.getenv("FOCOOS_API_KEY"))     # authenticate with your API key
model = focoos.get_remote_model("focoos_object365")      # pre-trained detection model
detections = model.infer("./image.jpg", threshold=0.4)   # remote inference on an image
```
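Printing the returned object is a safe first way to inspect the result. The per-detection attribute names in the loop below (`detections`, `label`, `conf`, `bbox`) are assumptions based on typical detection outputs, so adjust them to the SDK's actual result type:

```python
print(detections)  # show the raw result object returned by infer()

# Hypothetical per-detection access; the attribute names are assumptions.
for det in getattr(detections, "detections", []):
    print(det.label, det.conf, det.bbox)
```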
To try the Gradio demo, set the `FOCOOS_API_KEY_GRADIO` environment variable to your Focoos API key.
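For example, in a POSIX shell (replace the placeholder with your own key):

```bash
export FOCOOS_API_KEY_GRADIO=<your-focoos-api-key>
```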
```bash
uv pip install focoos[dev] git+https://github.com/FocoosAI/focoos.git
gradio gradio/app.py
```
To run inference locally with the same model:

```python
import os

from focoos import Focoos

focoos = Focoos(api_key=os.getenv("FOCOOS_API_KEY"))
model = focoos.get_local_model("focoos_object365")      # load the model for local inference
detections = model.infer("./image.jpg", threshold=0.4)
```
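To get a rough sense of local runtime performance, the call above can be timed with plain Python; the timing code below is generic and not a Focoos API:

```python
import time

# Warm-up run, then a rough latency measurement around infer().
model.infer("./image.jpg", threshold=0.4)
start = time.perf_counter()
detections = model.infer("./image.jpg", threshold=0.4)
print(f"local inference took {(time.perf_counter() - start) * 1000:.1f} ms")
```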
For container support, Focoos offers four different Docker images:
- `focoos-cpu`: CPU only
- `focoos-onnx`: Includes ONNX support
- `focoos-torch`: Includes ONNX and Torchscript support
- `focoos-tensorrt`: Includes ONNX, Torchscript, and TensorRT support
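A hedged sketch of how one of these images might be built and run from the repository root; the Dockerfile location, environment variables, and mount paths below are assumptions rather than the repository's documented Docker workflow:

```bash
# Build the CPU image (Dockerfile path and target name are assumptions).
docker build -t focoos-cpu .

# Run it interactively, passing the API key through and mounting the current directory.
docker run --rm -it \
  -e FOCOOS_API_KEY="$FOCOOS_API_KEY" \
  -v "$(pwd)":/workspace \
  focoos-cpu
```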
This repository also includes a devcontainer configuration for each of the above images. You can launch these devcontainers in Visual Studio Code for a seamless development experience.