Why Another Playbook?
Agentic AI is great for chaining reasoning steps, but teams also need a lightweight, deterministic generative flow they can demo in minutes. This playbook shows how to ship a simple QA helper with clear contracts (JSON output), guardrails, and a mock fallback—no ML background required.
Repos:
- Generative: github.com/amiya-pattnaik/generativeAI-engineering-playbook
- Agentic (for comparison): github.com/amiya-pattnaik/agentic-engineering-playbook
What It Demonstrates
- Structured prompts: System/user messages demand JSON test cases (title, steps, expected_result, risk).
- Guardrails: Server parses/validates JSON; falls back gracefully if the model drifts.
- Mock vs provider: Default mock generator for offline demos; flip to OpenAI by setting OPENAI_API_KEY.
- Scenario picker: UI dropdown loads tasks/contexts from scenarios/*.json via /api/scenarios.
- Single API contract: Client never sees secrets; server handles provider selection and logging/timing.
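The guardrail idea above can be sketched as a parse-and-validate step with a mock fallback. This is a minimal sketch, not the repo's actual code: the field names mirror the JSON contract described above, and `fallbackCases` is an illustrative stand-in for the mock generator's output.

```javascript
// Guardrail sketch: parse the model's reply, validate the JSON contract,
// and fall back to a canned mock response if the model drifts.
const REQUIRED_FIELDS = ["title", "steps", "expected_result", "risk"];

const fallbackCases = [
  {
    title: "Fallback: basic smoke test",
    steps: ["Open the app", "Perform the task"],
    expected_result: "Task completes without errors",
    risk: "low",
  },
];

function isValidCase(tc) {
  return (
    tc !== null &&
    typeof tc === "object" &&
    REQUIRED_FIELDS.every((f) => f in tc)
  );
}

function parseCompletion(raw) {
  try {
    const parsed = JSON.parse(raw);
    const cases = Array.isArray(parsed) ? parsed : parsed.test_cases;
    if (Array.isArray(cases) && cases.length > 0 && cases.every(isValidCase)) {
      return { mode: "provider", cases };
    }
  } catch (_) {
    // malformed JSON: fall through to the mock fallback
  }
  return { mode: "fallback", cases: fallbackCases };
}
```

A well-formed reply passes through untouched; anything else (truncated JSON, missing fields) degrades to the mock set instead of breaking the UI.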
Architecture (small and swappable)
- Node.js + Express API; static HTML/JS UI (no framework).
- The generate route handles input validation, timing, and mode detection.
- Provider abstraction: OpenAI client using response_format: json_object, or the mock generator.
- Scenario files power both the UI picker and CLI batch runs (reports in JSON/Markdown).
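The generate route's responsibilities can be sketched as a plain handler function. This is illustrative only: the real route lives in the repo, so the request shape and the injected `generateCases` function are assumptions, not its actual signatures.

```javascript
// Sketch of the generate handler: validate input, time the generation,
// and report which mode (mock or provider) produced the result.
async function handleGenerate(body, generateCases) {
  if (!body || typeof body.task !== "string" || body.task.trim() === "") {
    return { status: 400, error: "task is required" };
  }

  const started = Date.now();
  const { mode, cases } = await generateCases(body.task, body.context ?? "");

  return {
    status: 200,
    mode,                        // "mock" or "provider"
    elapsed_ms: Date.now() - started,
    cases,                       // structured JSON test cases
  };
}
```

Keeping the handler free of Express specifics makes it easy to unit-test with a stubbed generator.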
Quickstart (demo)
git clone https://github.com/amiya-pattnaik/generativeAI-engineering-playbook.git
cd generativeAI-engineering-playbook/demo-app
cp .env.example .env # leave key empty for mock mode
npm install
npm run dev # open http://localhost:3000
# use the scenario dropdown or type your own task/context, then Generate
Use OpenAI instead of mock:
- Set OPENAI_API_KEY (and optionally OPENAI_MODEL) in .env.
- The server switches to provider mode automatically; the UI stays the same.
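Mode selection can be as small as an environment check. A sketch, not the repo's actual selection logic; the fallback model name here is an illustrative assumption.

```javascript
// Provider mode when a key is present, mock mode otherwise.
// OPENAI_MODEL is optional; the default model name below is illustrative.
function detectMode(env = process.env) {
  if (env.OPENAI_API_KEY && env.OPENAI_API_KEY.trim() !== "") {
    return { mode: "provider", model: env.OPENAI_MODEL || "gpt-4o-mini" };
  }
  return { mode: "mock", model: null };
}
```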
Scenario Runner (batch/evals)
# run all scenarios and emit reports to demo-app/reports/
npm run demo:scenarios
# or single scenario
npm run demo:scenario -- scenarios/login-mfa.json
Outputs include timing, model/mode, and the structured completion (JSON + Markdown).
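A scenario file along these lines would drive both the UI picker and the batch runner. The field names are illustrative, not the repo's exact schema:

```json
{
  "name": "login-mfa",
  "task": "Generate test cases for login with multi-factor authentication",
  "context": "Web app; email/password login followed by a TOTP challenge"
}
```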
Why This Matters for Engineering/QA
- Fast demo, low risk: Mock mode works offline; provider mode is a single env var away.
- Contracts first: JSON outputs are easy to validate and wire into tests or ticket templates.
- Extendable: Add routes for PR review or incident summaries; add providers without touching the UI.
- Eval-friendly: Scenario files double as a golden set; batch runner produces artifacts for comparison.
How to Extend
- Add more scenarios under demo-app/scenarios/ to exercise different flows.
- Plug in other providers by adding providers/<name>.js and switching selection in services/generator.js.
- Add a simple schema check (zod/ajv) before returning responses; log redacted prompts/responses for tracing.
- Hook into CI: run npm run demo:scenarios as a smoke/eval step and diff JSON outputs between builds.
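Diffing structured outputs between builds can be a few lines of Node. A sketch under the assumption that reports are plain JSON objects, as the contract above suggests:

```javascript
// Compare two report objects field-by-field and return the list of
// changed paths, e.g. to flag drift between a baseline build and the
// current one in CI.
function diffReports(baseline, current, prefix = "") {
  const keys = new Set([...Object.keys(baseline), ...Object.keys(current)]);
  const changes = [];
  for (const key of keys) {
    const path = prefix ? `${prefix}.${key}` : key;
    const a = baseline[key];
    const b = current[key];
    if (a && b && typeof a === "object" && typeof b === "object") {
      changes.push(...diffReports(a, b, path)); // recurse into nested objects
    } else if (JSON.stringify(a) !== JSON.stringify(b)) {
      changes.push(path);
    }
  }
  return changes;
}
```

An empty result means the golden set still produces identical output; a non-empty one is a cheap signal that a prompt or provider change shifted behavior.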
Closing Thought
Pair this generative playbook with the agentic one: show the small, contract-first generator alongside the multi-agent chain. Together they cover quick assists and deeper reasoning—both with mock-first defaults and clear ops stories.