Why These Playbooks Matter
Generative AI, RAG, and Agentic AI are often discussed as if they are interchangeable. They are not.
Each pattern solves a different engineering problem. The practical challenge for teams is knowing when to use a prompt-driven workflow, when to ground answers on source documents, and when to orchestrate multiple steps across roles and tools.
That is why I built three separate Node.js playbooks. Each repo is intentionally small, runnable, and easy to explain. The goal is to make the mechanics visible, not to hide them behind a large platform.
Repos:
- Generative AI: github.com/amiya-pattnaik/generativeAI-engineering-playbook
- RAG: github.com/amiya-pattnaik/rag-engineering-playbook
- Agentic AI: github.com/amiya-pattnaik/agentic-engineering-playbook
The Practical Difference
```
User Need
  |
  +--> Fast content draft from prompt/context
  |      -> Choose GENERATIVE AI
  |
  +--> Answers grounded in source documents with citations
  |      -> Choose RAG
  |
  +--> Multi-step planning + tool orchestration
         -> Choose AGENTIC AI
```
Rule of thumb:
- Generative AI is the quickest path for first-draft output.
- RAG is the safer path when answers must stay grounded in source text.
- Agentic AI is the right path when the work itself is multi-step and role-based.
1. Generative AI Playbook
This repo focuses on single-shot structured generation. The demo app takes a task and some context, then produces QA test cases as JSON.
What it demonstrates:
- Prompt-driven generation from task + context
- Mock mode for safe offline demos
- Optional OpenAI provider mode
- Structured JSON output and basic parsing guardrails
- Scenario runner for repeatable examples
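The flow above can be sketched in a few lines of plain Node.js. This is a minimal illustration, not the repo's actual code: the function names (`buildPrompt`, `callModel`, `parseStructured`) and the mock payload are assumptions made for the sketch, but the shape (prompt in, JSON out, parse guardrail around the result) matches what the playbook demonstrates.

```javascript
// Minimal sketch of prompt-driven structured generation with a mock mode.
// All names here are illustrative, not the playbook's real API.

function buildPrompt(task, context) {
  return [
    "You are a QA assistant. Respond with JSON only.",
    `Task: ${task}`,
    `Context: ${context}`,
    'Output shape: {"testCases": [{"id": "...", "title": "...", "steps": ["..."]}]}',
  ].join("\n");
}

// Provider stub: mock mode returns canned output so the demo runs offline
// without an API key; a real provider call would go in the else branch.
function callModel(prompt, { mock = true } = {}) {
  if (mock) {
    return JSON.stringify({
      testCases: [
        { id: "TC-1", title: "Valid login succeeds", steps: ["open page", "submit valid creds"] },
      ],
    });
  }
  throw new Error("provider mode not configured in this sketch");
}

// Parsing guardrail: never trust raw model output to be valid JSON.
function parseStructured(raw) {
  try {
    const parsed = JSON.parse(raw);
    if (!Array.isArray(parsed.testCases)) throw new Error("missing testCases");
    return parsed;
  } catch (err) {
    return { testCases: [], error: String(err) };
  }
}

const result = parseStructured(
  callModel(buildPrompt("Test the login form", "Login requires email + password"))
);
console.log(result.testCases[0].title); // -> "Valid login succeeds"
```

The guardrail is the important part: even in mock mode, every response goes through the same parse-and-validate path, so switching to a real provider doesn't change the caller's contract.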
Framework-backed v2:
- manual: direct Node.js prompt builder and provider integration
- langchain v2: ChatPromptTemplate plus structured output with LangChain
This is the best repo to explain how an LLM can help with drafting artifacts like:
- test cases
- summaries
- checklists
- requirement refinements
2. RAG Engineering Playbook
This repo focuses on grounded Q&A over a local knowledge base. The demo app indexes policy-style documents, retrieves relevant chunks, then answers only from retrieved context.
What it demonstrates:
- document ingestion and chunking
- query embedding and retrieval
- scoring and threshold checks
- citation validation
- abstention when context is weak or incomplete
- anti-hallucination scenarios
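The retrieve-gate-answer loop can be shown with a toy scorer. This sketch substitutes token overlap for real embeddings to stay dependency-free; the document IDs, threshold value, and function names are illustrative, not the repo's actual implementation.

```javascript
// Toy grounded-Q&A sketch: token-overlap scoring stands in for embeddings,
// and a threshold gate drives abstention. Names are illustrative.

const docs = [
  { id: "policy-1", text: "Refunds are issued within 14 days of purchase." },
  { id: "policy-2", text: "Support is available Monday through Friday." },
];

function tokenize(s) {
  return s.toLowerCase().match(/[a-z]+/g) ?? [];
}

// Score = fraction of query tokens found in the document.
function score(query, doc) {
  const q = tokenize(query);
  const d = new Set(tokenize(doc.text));
  return q.length ? q.filter((t) => d.has(t)).length / q.length : 0;
}

function answer(query, { threshold = 0.4 } = {}) {
  const ranked = docs
    .map((doc) => ({ doc, score: score(query, doc) }))
    .sort((a, b) => b.score - a.score);
  const best = ranked[0];
  // Abstain when the retrieved context is too weak to ground an answer.
  if (best.score < threshold) {
    return { answer: null, abstained: true, citation: null };
  }
  // Answer only from the retrieved text, and cite the source document.
  return { answer: best.doc.text, abstained: false, citation: best.doc.id };
}

const grounded = answer("when are refunds issued");
const weak = answer("what is the meaning of life");
```

The second query scores below the threshold against every document, so the system abstains instead of guessing, which is exactly the anti-hallucination behavior the scenarios exercise.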
Framework-backed v2:
- manual: custom retrieval, scoring, grounding, and abstention logic
- llamaindex v2: parallel implementation using LlamaIndex for indexing and retrieval
This is the best repo to explain how to reduce hallucination risk when the answer must come from known documents rather than model intuition.
3. Agentic AI Engineering Playbook
This repo focuses on orchestration. Instead of one prompt producing one answer, a sequence of specialized roles works through a banking scenario and produces a report.
What it demonstrates:
- scenario-based workflow execution
- context handoff between steps
- engineering, quality, and platform perspectives
- optional test generation and execution
- report artifacts for traceability
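The core mechanic is a role chain with explicit context handoff. The sketch below is a simplified stand-in for the repo's orchestrator: the role names mirror the perspectives listed above, but their outputs and the report shape are invented for illustration.

```javascript
// Sketch of a role-chain orchestrator with explicit context handoff.
// Role names and outputs are illustrative, not the playbook's real API.

const roles = [
  {
    name: "engineering",
    run: (ctx) => ({ ...ctx, design: `design notes for: ${ctx.scenario}` }),
  },
  {
    name: "quality",
    run: (ctx) => ({ ...ctx, tests: [`verify ${ctx.scenario} happy path`] }),
  },
  {
    name: "platform",
    run: (ctx) => ({ ...ctx, deployment: "containerized service" }),
  },
];

// Each role receives the accumulated context and returns an extended copy,
// so later roles build on earlier output; the trace records the handoffs
// for the final report artifact.
function orchestrate(scenario) {
  let ctx = { scenario };
  const trace = [];
  for (const role of roles) {
    ctx = role.run(ctx);
    trace.push(role.name);
  }
  return { report: ctx, trace };
}

const { report, trace } = orchestrate("loan approval flow");
```

Because every step returns a fresh copy of the context rather than mutating shared state, the trace plus the final context is enough to reconstruct who contributed what, which is what makes the report artifacts traceable.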
Framework-backed v2:
- manual: explicit orchestrator with step-by-step flow
- langgraph v2: graph-based orchestration using LangGraph
This is the best repo to explain how AI moves from content generation into coordinated workflow execution.
Why I Kept Both Manual and Framework Versions
Each repo now follows the same pattern:
- a manual implementation that shows the mechanics clearly
- a framework-backed v2 that maps the same idea to a popular ecosystem tool
That split lets teams compare the two side by side: the same behavior, once with the mechanics exposed and once expressed in a mainstream framework.
The manual path proves you understand the underlying architecture:
- prompt construction
- retrieval logic
- scoring and grounding
- orchestration state handoff
The framework-backed path proves you can also work with the tools teams recognize:
- LangChain for generative structured-output workflows
- LlamaIndex for RAG indexing and retrieval
- LangGraph for agent orchestration
Quick Comparison
| Playbook | Best For | Demo Shape | Framework v2 |
|---|---|---|---|
| Generative AI | First-draft content generation | Task + context -> structured output | LangChain |
| RAG | Grounded answers over docs | Retrieve -> gate -> answer -> cite | LlamaIndex |
| Agentic AI | Multi-step workflow execution | Role chain -> reports -> optional tests | LangGraph |
How an Engineering Team Can Use These
- Start with the Generative AI repo when you need lightweight LLM-assisted drafting.
- Move to the RAG repo when accuracy depends on current internal documents.
- Use the Agentic repo when work must flow across multiple roles, tools, or decisions.
In practice, real systems often combine them:
- Generative AI for drafting
- RAG for grounding
- Agentic AI for orchestration
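One way that combination can look, reduced to stubs: an agentic sequence retrieves grounding first, then drafts only from it. Everything here (the knowledge-base entry, step functions, and field names) is invented for illustration; none of it is the repos' actual code.

```javascript
// Illustrative composition of the three patterns: an orchestrated sequence
// (agentic) that retrieves context (RAG) before drafting (generative).

const kb = [{ id: "doc-1", text: "Wire transfers over $10,000 require manual review." }];

const steps = [
  // RAG step: pull grounding context for the task.
  (ctx) => ({
    ...ctx,
    grounding: kb.find((d) => ctx.task.includes("transfer"))?.text ?? null,
  }),
  // Generative step: draft only from retrieved context, abstain otherwise.
  (ctx) => ({
    ...ctx,
    draft: ctx.grounding
      ? `Test that ${ctx.grounding.toLowerCase()}`
      : "insufficient context; abstaining",
  }),
];

// Agentic step: run the sequence with context handoff between steps.
const finalCtx = steps.reduce((ctx, step) => step(ctx), { task: "wire transfer checks" });
```

The division of labor is the point: retrieval decides what the draft may say, generation decides how to say it, and orchestration decides the order in which both happen.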
Closing Thought
These three repos are intentionally small, but together they tell a bigger story.
AI engineering is not one pattern. It is a set of design choices. The right architecture depends on whether you need generation, grounding, orchestration, or some combination of all three.
That is the reason I built these playbooks separately and then aligned them under one umbrella.