# LLM Training Environment

This repository provides an optimized LLM training environment featuring accelerated training pipelines, automated Git integration, and real-time feedback. It includes IPython magic functions for an enhanced developer experience, along with built-in performance monitoring.
## Features

- Automated Git integration with timestamped branches
- Visual and audio feedback systems for code execution
- Accelerated training configurations
- Memory optimization techniques
- Cache management systems
### Git Integration

- Automatic branch creation with timestamp-based naming (see the sketch below)
- Configurable auto-save intervals
- Branch cleanup automation
- Push/pull operations with error handling
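A minimal sketch of how the timestamped auto-save flow could work, using plain `subprocess` calls to the `git` CLI. The helper name `autosave_branch` and the use of `zoneinfo` for the IST timestamp are illustrative assumptions, not the repository's actual implementation:

```python
import subprocess
from datetime import datetime
from zoneinfo import ZoneInfo  # stdlib in Python 3.9+; IST handling is an assumption


def autosave_branch() -> str:
    """Create a timestamped branch, commit all changes, and push it.

    Branch names follow the DD-MMM-YYYY-HHMM-IST format used by this setup.
    """
    name = datetime.now(ZoneInfo("Asia/Kolkata")).strftime("%d-%b-%Y-%H%M-IST")
    try:
        subprocess.run(["git", "checkout", "-b", name], check=True)
        subprocess.run(["git", "add", "-A"], check=True)
        subprocess.run(["git", "commit", "-m", f"auto-save {name}"], check=True)
        subprocess.run(["git", "push", "-u", "origin", name], check=True)
    except subprocess.CalledProcessError as err:
        # Surface Git failures instead of silently losing work
        print(f"Auto-save failed: {err}")
    return name
```

Shelling out to `git` keeps the sketch dependency-free; a real implementation might prefer a library such as GitPython.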
### Feedback Systems

- Visual feedback for code execution status
- Audio feedback system (optional)
- Real-time execution status indicators
- Performance metrics display
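One way to wire up the status indicators is IPython's execution events. This hedged sketch registers a `post_run_cell` callback that inspects each cell's result; the dot markers are illustrative:

```python
from IPython import get_ipython


def show_status(result):
    """Print a green dot after a clean cell run, a red dot after an error."""
    ok = result.error_before_exec is None and result.error_in_exec is None
    print("🟢" if ok else "🔴")


ip = get_ipython()
if ip is not None:  # only active inside an IPython/Jupyter session
    ip.events.register("post_run_cell", show_status)
```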
### Training Optimization

- Accelerator integration
- Memory management
- Cache optimization
- Batch processing configurations
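As a rough sketch of how Accelerator integration and batched gradient accumulation fit together (the `train_epoch` helper and its arguments are placeholders, not the repository's API):

```python
from accelerate import Accelerator


def train_epoch(model, optimizer, dataloader, accumulation_steps=4):
    """One epoch with device placement and gradient accumulation via Accelerate."""
    accelerator = Accelerator(gradient_accumulation_steps=accumulation_steps)
    model, optimizer, dataloader = accelerator.prepare(model, optimizer, dataloader)
    for batch in dataloader:
        with accelerator.accumulate(model):
            loss = model(**batch).loss
            accelerator.backward(loss)  # replaces loss.backward()
            optimizer.step()
            optimizer.zero_grad()
```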
### Environment Management

- Conda library updates
- Package version control
- Dependency management
- Cache cleanup utilities
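A hypothetical helper for the update and cleanup steps, shelling out to `conda` and `pip` (both commands exist in recent releases; the function name is an assumption):

```python
import subprocess


def refresh_environment():
    """Update conda packages and purge the pip download cache."""
    subprocess.run(["conda", "update", "--all", "--yes"], check=True)
    subprocess.run(["pip", "cache", "purge"], check=True)  # pip >= 20.1
```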
## Installation

1. Clone the repository:

   ```bash
   git clone https://github.com/username/llm-training-environment.git
   ```

2. Install the required packages:

   ```bash
   pip install -r requirements.txt
   ```

3. Configure environment variables:

   ```bash
   export HF_TOKEN="your_hugging_face_token"
   ```
## Usage

### Auto-Save Magic

```python
%%ap1
# Your code here
# Will automatically save and push to a new branch
```
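For illustration only: a cell magic like `%%ap1` can be registered with IPython's `register_cell_magic`. The `autosave_branch` call below refers to the hypothetical helper sketched under Git Integration, not a shipped API:

```python
from IPython import get_ipython
from IPython.core.magic import register_cell_magic


@register_cell_magic  # must be executed inside an IPython session
def ap1(line, cell):
    """Run the cell, then auto-save on success (illustrative only)."""
    result = get_ipython().run_cell(cell)
    if result.success:
        autosave_branch()  # hypothetical helper from the Git Integration sketch
```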
### Visual Feedback

Visual feedback is enabled automatically for all code execution:

- Green dot = success
- Red dot = error
### Cache Management

```python
# Clear cache
clear_cache()

# Monitor memory usage
print_memory_stats()
```
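`clear_cache()` and `print_memory_stats()` ship with the environment; a minimal equivalent built on standard PyTorch calls might look like:

```python
import gc

import torch


def clear_cache():
    """Collect Python garbage and release unused CUDA cache blocks."""
    gc.collect()
    if torch.cuda.is_available():
        torch.cuda.empty_cache()


def print_memory_stats():
    """Report allocated vs. reserved CUDA memory in GiB."""
    if not torch.cuda.is_available():
        print("CUDA not available")
        return
    allocated = torch.cuda.memory_allocated() / 2**30
    reserved = torch.cuda.memory_reserved() / 2**30
    print(f"allocated: {allocated:.2f} GiB | reserved: {reserved:.2f} GiB")
```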
## Configuration

### Git Settings

- Auto-save interval: 120 seconds (configurable)
- Branch naming format: `DD-MMM-YYYY-HHMM-IST`
- Auto-cleanup of old auto-save branches
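The interval could be driven by a simple repeating timer; this is a sketch under the assumption that the hypothetical `autosave_branch` helper above does the actual Git work:

```python
import threading

AUTO_SAVE_INTERVAL = 120  # seconds, matching the default above


def start_auto_save():
    """Save now, then re-arm the timer for the next interval."""
    autosave_branch()  # hypothetical helper from the Git Integration sketch
    timer = threading.Timer(AUTO_SAVE_INTERVAL, start_auto_save)
    timer.daemon = True  # don't keep the process alive just for auto-saves
    timer.start()
```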
### Training Settings

- Default batch size: 2
- Gradient accumulation steps: 4
- Learning rate: 2e-4
- Weight decay: 0.01
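Expressed as Hugging Face `TrainingArguments` (the `output_dir` is a placeholder), these defaults map onto:

```python
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="outputs",           # placeholder path
    per_device_train_batch_size=2,  # default batch size
    gradient_accumulation_steps=4,
    learning_rate=2e-4,
    weight_decay=0.01,
)
```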
## Requirements

- Python 3.8+
- PyTorch 2.0+
- Accelerate
- Transformers
- MLflow
- Git
## Performance Monitoring

- Memory usage monitoring
- Cache management
- Batch size optimization
- GPU utilization tracking
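Since MLflow is already a requirement, one plausible sketch logs these metrics per training step (`log_gpu_metrics` is an illustrative name; `torch.cuda.utilization()` needs the optional `pynvml` package):

```python
import mlflow
import torch


def log_gpu_metrics(step):
    """Log CUDA memory usage, plus utilization if pynvml is installed."""
    if not torch.cuda.is_available():
        return
    mlflow.log_metric("gpu_mem_allocated_gib",
                      torch.cuda.memory_allocated() / 2**30, step=step)
    mlflow.log_metric("gpu_mem_reserved_gib",
                      torch.cuda.memory_reserved() / 2**30, step=step)
    try:
        mlflow.log_metric("gpu_utilization_pct", torch.cuda.utilization(), step=step)
    except Exception:
        pass  # torch.cuda.utilization() requires pynvml; skip when absent
```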
## Contributing

Contributions are welcome! Please read our Contributing Guidelines for details.
## License

This project is licensed under the MIT License; see the LICENSE file for details.
## Acknowledgments

- Hugging Face team for the Transformers library
- Unsloth team for optimization techniques
- PyTorch team for core functionality