# Roadmap

This is a roadmap of the basic features needed to make AutoCog (and STA) usable. It only covers a few weeks' worth of work, but I rarely have brain-cycles to spend on this project.

Given that I am currently working alone on this project, I am not tracking work using issues and milestones.

In the roadmap below, each minor version integrates the incremental progress of the previous ones: all the v0.4.X milestones are steps toward v0.5. These bugfix-level milestones are subject to reordering (changes of priority) and shifting (introduction of new milestones or actual bugfixes).

| Version | Features | Notes | Tracking |
|---------|----------|-------|----------|
| v0.4 | Structured Thoughts release | 1st version of ST | |
| v0.4.1 | Tests & Fixes | Testing more LLMs and fixing tokenization issues | |
| v0.4.2 | Roadmap & Doc | Needed some organization... | |
| v0.4.3 | FTA: Simplify, Choice Limit, and Norms | | |
| v0.4.4 | Beam Search | Implementation within FTA | |
| v0.4.5 | Python Cogs | Cogs from Python files for tools | |
| v0.4.6 | PDF Reader | Read a PDF piece-by-piece and write a summary | |
| v0.5 | Language Docs | Description of the language | |
| v0.5.1 | Tests & Fixes | Expecting that it will be needed... | |
| v0.5.2 | Low-Level | llama-cpp-python | |
| v0.5.3 | Unified FTA | FTA in one "loop" using the llama-cpp-python low-level API | |
| v0.5.4 | Elementary | Library of elementary "worksheets" (arithmetic: add/mul/div; literacy: spelling, grammar, comprehension) | |
| v0.5.5 | MMLU-Exams | Library of MCQ solvers using different Thought Patterns | |
| v0.5.6 | FTA to BNF | Translate FTA to llama.cpp BNF | |
| v0.6 | Tutorials | Elementary Reader & Writer | |
| v0.6.1 | HuggingFace Transformers | FTA implementation for HGTF | |
| v0.7 | Benchmarking | Evaluate speed and accuracy on Elementary and MMLU-Exams | |
| v0.7.1 | Tooling | Benchmark | |
| v0.8 | Finetuning | Selected foundation LLMs, targeting improved performance on MMLU-Exams | |
| v0.8.1 | Finetune | Tooling | |