A cloud-based Marketing Mix Modeling (MMM) solution deployed on Google Cloud Platform.
```bash
./deploy.sh # Deploy the latest version to production
```
GPT-Bayes consists of two main components:
- MMM Agent Alpha - A specialized GPT model with API integration
  - Interface: MMM Agent Alpha
  - Authentication: [email protected]
  - Function: Provides user interface for MMM analysis
- Backend Service
  - Production URL: https://nextgen-mmm.pymc-labs.com
  - Function: Handles model fitting and parameter management via API endpoints
  - Infrastructure: Hosted on Google Compute Engine (GCE) under the `gpt-bayes` project
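Because model fitting runs asynchronously on the backend, a client typically submits a job and then polls until the result is ready. A minimal, endpoint-agnostic polling helper might look like this sketch (the actual endpoints and status values exposed by the service are defined in `gpt-agent/api_spec.json`, not here):

```python
import time

def poll(fetch_status, interval=2.0, timeout=600.0):
    """Repeatedly call fetch_status() -- any zero-argument function that
    returns a status dict -- until the job finishes or the timeout expires.

    The "completed"/"failed" status values are illustrative assumptions,
    not the documented API contract.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        result = fetch_status()
        if result.get("status") in ("completed", "failed"):
            return result
        time.sleep(interval)
    raise TimeoutError("MMM job did not finish within the timeout")
```

In practice, `fetch_status` would wrap an HTTP GET against the backend's status endpoint for a given task id.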
- `app.py` - Main Flask application
- `test_mmm_async.py` - Local API testing utility
- `nginx/` - NGINX reverse proxy settings
- `dockerfile` - Container specifications
- `start.sh` - Container initialization
- `deploy.sh` - Deployment automation
- `environment.yml` - Development environment specifications
- `gpt-agent/gpt_prompt.md` - System instructions
- `gpt-agent/api_spec.json` - API specifications
- `gpt-agent/knowledge/` - Reference documentation
- `gpt-agent/privacy_policy.md` - Data handling guidelines
- `test-data/` - Example datasets
The application runs on Google Compute Engine (GCE) under the `gpt-bayes` project and is accessible at https://nextgen-mmm.pymc-labs.com.
Use `deploy.sh` to update the application. This script handles:
- Updating the container in Google Artifact Registry (GAR)
- Deploying to the production environment
```bash
./deploy.sh
```
Access the production server:
```bash
gcloud compute ssh gpt-bayes --zone us-central1-a
```
Container management commands:
```bash
# List containers
docker ps -a

# Monitor container logs
docker attach CONTAINER_ID

# Access container shell
docker exec -it CONTAINER_ID /bin/bash
```
Build and publish to Google Artifact Registry:
```bash
gcloud builds submit
```
Note: This updates the container image but doesn't affect the production deployment.
View available Container-Optimized OS images:
```bash
gcloud compute images list --project cos-cloud --no-standard-images
```
Update production container:
```bash
# Clear existing containers
gcloud compute ssh gpt-bayes --zone us-central1-a --command 'docker system prune -f -a'

# Deploy new container
gcloud compute instances update-container gpt-bayes \
  --zone=us-central1-a \
  --container-image=us-central1-docker.pkg.dev/bayes-gpt/gpt-bayes/gpt-bayes:latest
```
Create a new server instance (note that `--zone` takes a zone such as `us-central1-a`, not the bare region; HTTP access is opened with a separate firewall rule, since `gcloud compute instances create` has no firewall flag):

```bash
gcloud compute instances create gpt-bayes \
  --machine-type e2-standard-4 \
  --boot-disk-size 20GB \
  --image image-name \
  --image-project cos-cloud \
  --zone us-central1-a \
  --metadata container-image=your-container-image-name \
  --tags http-server

# Allow inbound HTTP to instances tagged http-server
gcloud compute firewall-rules create allow-http \
  --allow tcp:80 \
  --target-tags http-server
```
Deploy NGINX reverse proxy updates:
```bash
cd nginx
gcloud builds submit
```
Update the backend IP address:

1. Navigate to `nginx/nginx.conf`
2. Modify the `proxy_pass` directive with the new IP. Example:

```
proxy_pass http://35.208.203.115:5000;
```
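For context, that directive sits inside a `server` block. A minimal sketch of what `nginx/nginx.conf` might contain is shown below; the actual file may include TLS termination and other settings not reproduced here:

```nginx
server {
    listen 80;
    server_name nextgen-mmm.pymc-labs.com;

    location / {
        # Forward requests to the Flask backend on the GCE instance
        proxy_pass http://35.208.203.115:5000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```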
Create development environment:
```bash
# Using conda
conda env create -f environment.yml

# Using mamba (faster)
mamba env create -f environment.yml

# Activate environment
conda activate base
```
Launch the development stack:

1. Start Redis:

   ```bash
   redis-server
   ```

2. Start the Celery worker (new terminal):

   ```bash
   celery -A app.celery worker --loglevel=info
   ```

3. Start Flask (new terminal):

   ```bash
   python app.py --port 5001
   ```

4. Run the tests:

   ```bash
   # Test local instance
   python test_mmm_async.py local

   # Test production instance
   python test_mmm_async.py deployed
   ```
The test suite:
- Generates sample MMM data
- Submits to specified API endpoint
- Monitors result generation
- Displays model analytics
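The data-generation step can be illustrated with a small synthetic generator: weekly spend per channel plus a sales series that responds to that spend. This is only a sketch; the actual schema produced by `test_mmm_async.py` may differ:

```python
import random

def make_sample_mmm_data(n_weeks=52, channels=("tv", "radio", "digital"), seed=0):
    """Generate a simple synthetic MMM dataset (illustrative only).

    Each row holds one week's spend per channel and a sales figure that
    is a noisy linear response to total spend.
    """
    rng = random.Random(seed)
    rows = []
    for week in range(n_weeks):
        spend = {ch: round(rng.uniform(0, 100), 2) for ch in channels}
        sales = 100 + sum(0.5 * s for s in spend.values()) + rng.gauss(0, 5)
        rows.append({"week": week, **spend, "sales": round(sales, 2)})
    return rows
```

A dataset like this can then be posted to either the local or the deployed endpoint, mirroring the `local`/`deployed` modes of the test script.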