You have a collection of meta-prompts designed to guide a Large Language Model (LLM), or a set of LLMs, through:
- Planning a Python project (01_planning.md).
- Generating a detailed plan (see 01_planning_output.md for an example of the final plan).
- Breaking the plan into actionable tasks (02_prompt_chain.md).
- Selecting and queuing tasks (03_task_selection_alt.md, if you want an advanced approach).
- Generating coding prompts and review prompts to be executed in separate LLM sessions (or "instances").
1. Start with 01_planning.md.
   - Copy its content into your LLM (we'll call this the "META INSTANCE").
   - Insert or describe your user input about the Python project you want to build.
   - The LLM will produce a detailed plan similar to 01_planning_output.md.
2. Review the plan.
   - If changes are needed, keep iterating in the META INSTANCE.
   - Ensure all components, tasks, constraints, and acceptance criteria are correct and thorough.
3. Add 02_prompt_chain.md.
   - This meta-prompt tells the LLM how to convert your finalized plan into "execution prompts" and "review prompts."
   - The LLM's output will look something like 02_prompt_chain_output.md, showing a YAML structure of the tasks to do.
4. Open a fresh LLM session (the CODING INSTANCE).
   - Copy the execution prompt from the YAML (generated in step 3) and paste it into this new CODING INSTANCE.
   - Because the coding prompt is fully self-contained, the new LLM (or coding environment) doesn't need the entire plan's context, just the prompt.
   - Let the CODING INSTANCE proceed with implementation details, generate code, discuss improvements, and so on.
5. Implement and review.
   - When the code is done (by the LLM or by you), produce a quick summary or final output.
   - Go back to the META INSTANCE, mark the task as "done," and generate the review prompt.
   - If the review passes, move on to the next task. If not, iterate until the code meets the acceptance criteria.
6. Task selection (optional, advanced).
   - If you want to pick tasks dynamically rather than going in a fixed order, you can use 03_task_selection_alt.md.
   - This meta-prompt looks at your project's current state, sees which tasks are done or blocked, and automatically picks the next best task.
7. Repeat until done.
   - Continue this cycle (generate coding prompt, implement, review) until all tasks from the plan produced with 01_planning.md are finished. That's when you'll have a working Python project.
   - Note: each individual task can be completed in its own fresh chain, which can be cheaper in the long run because of the shorter context if you are using a service like Cline.
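The task structure from step 3 and the selection logic from step 6 can be sketched together in a few lines of Python. This is only an illustration: the field names (`id`, `status`, `depends_on`) and task contents are assumptions for the example, not the actual output format of `02_prompt_chain.md`.

```python
# Hypothetical task records mirroring the kind of YAML task list the
# META INSTANCE might emit -- field names are illustrative assumptions.
tasks = [
    {"id": "T1", "title": "Set up project skeleton", "status": "done",    "depends_on": []},
    {"id": "T2", "title": "Implement config loader", "status": "pending", "depends_on": ["T1"]},
    {"id": "T3", "title": "Add CLI entry point",     "status": "pending", "depends_on": ["T2"]},
    {"id": "T4", "title": "Write unit tests",        "status": "pending", "depends_on": ["T1"]},
]

def next_task(tasks):
    """Return the first pending task whose dependencies are all done,
    emulating what 03_task_selection_alt.md asks the LLM to decide."""
    done = {t["id"] for t in tasks if t["status"] == "done"}
    for t in tasks:
        if t["status"] == "pending" and all(d in done for d in t["depends_on"]):
            return t
    return None  # everything is done or blocked

print(next_task(tasks)["id"])  # T1 is done, so T2 is the first unblocked task
```

In practice the LLM does this reasoning itself from the YAML; the sketch just makes the done/blocked bookkeeping explicit.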
Q: Do I need all these files for a simple project?
A: No, you can remove or merge them. They're mainly for larger or more structured projects.

Q: Why separate the "META INSTANCE" from the "CODING INSTANCE"?
A: Coding LLMs often have limited context windows or get "bogged down" with big instructions. By isolating each coding prompt into a self-contained block, you keep the process clear and reduce confusion. The META INSTANCE retains the high-level plan, while the CODING INSTANCE focuses on one task at a time.

Q: What if the LLM makes mistakes regarding library versions or Python commands?
A: Communicate and correct them. In real software projects, changes and fixes happen constantly. LLMs speed up coding but aren't omniscient. Iterate until the code meets your standards.
These meta-prompts are flexible guidelines. Adapt them to your favorite Python workflow—whether that’s Poetry, pipenv, or standard pip; Click, Typer, or argparse; library, CLI, or web service. The fundamental idea remains:
- Plan thoroughly
- Break tasks down
- Generate self-contained coding prompts
- Review systematically
This ensures a smoother, more maintainable development cycle for your Python projects.