Why This Playbook?

Single-shot prompting is useful, but many engineering problems are multi-step. This playbook demonstrates an agentic workflow where specialized roles pass context forward and produce actionable artifacts.

Concept Primer: What Is Agentic AI?

Agentic AI decomposes goals into coordinated steps executed by specialized agents and tools. Instead of one prompt -> one answer, you get a chain with context continuity and audit-friendly outputs.
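
The chain-with-context idea can be sketched in a few lines of Node.js. The agent names and artifact shapes below are illustrative, not the repo's actual API; the point is that each step reads everything produced so far and appends its own artifact.

```javascript
// Minimal sketch of an agent chain with context continuity.
// Each "agent" is just a function that reads the shared context
// and returns its own artifact; names here are illustrative.
const agents = [
  { name: "discovery", run: (ctx) => ({ findings: `scoped ${ctx.goal}` }) },
  { name: "engineering", run: (ctx) => ({ plan: `address ${ctx.discovery.findings}` }) },
  { name: "summary", run: (ctx) => ({ report: Object.keys(ctx).join(", ") }) },
];

function runChain(goal) {
  const context = { goal };
  for (const agent of agents) {
    // Context handoff: each step consumes all prior outputs.
    context[agent.name] = agent.run(context);
  }
  return context; // audit-friendly: every intermediate artifact is retained
}

console.log(runChain("reduce login latency").summary.report);
// → "goal, discovery, engineering"
```

Because the context object accumulates every artifact, the final state doubles as an audit trail.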

Broader Agentic Use Cases

  • Incident triage and coordinated remediation planning.
  • SDLC orchestration across engineering, QA, security, and platform.
  • Change-risk and release-readiness evaluation.
  • Cross-system root-cause analysis with tool integrations.

Demo scope in this repo:

  • A banking workflow: Metrics -> Discovery -> Engineering -> Quality -> Platform -> TestDesigner -> Summary.

Concept Comparison (GenAI vs Agentic vs RAG)

User Need
   |
   +--> Fast content draft from prompt/context
   |      -> Choose GENERATIVE AI
   |
   +--> Multi-step planning + tool orchestration
   |      -> Choose AGENTIC AI
   |
   +--> Answers grounded in source documents with citations
          -> Choose RAG
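
The decision tree above can be encoded as a small routing function. The property names are hypothetical; they just mirror the three branches.

```javascript
// Sketch: route a user need to an approach, mirroring the tree above.
// The `need` flags are illustrative, not a formal taxonomy.
function chooseApproach(need) {
  if (need.requiresPlanning && need.usesTools) return "AGENTIC_AI";
  if (need.mustCiteSources) return "RAG";
  return "GENERATIVE_AI"; // fast content draft from prompt/context
}

console.log(chooseApproach({ requiresPlanning: true, usesTools: true }));
// → "AGENTIC_AI"
```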

What It Demonstrates

  • Scenario-first orchestration with metrics/signals.
  • Multi-agent context handoff.
  • Optional Playwright test generation and execution.
  • Markdown/HTML report generation for traceability.
  • Mock-first reliability with optional provider mode.

Flow

  1. Load scenario, metrics, and optional external signals.
  2. Orchestrator runs agent chain in sequence.
  3. Each agent consumes prior outputs and emits its artifact.
  4. Model calls run in mock or provider mode.
  5. Optional test stage generates/runs Playwright specs.
  6. Final report is rendered as Markdown/HTML.
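
Step 4 (mock vs. provider mode) is the key to mock-first reliability. A hedged sketch, assuming a `callProvider` wrapper you would write around a real SDK; this is not the repo's actual model-call signature:

```javascript
// Sketch of a model call that runs in mock or provider mode.
// `callProvider` is a placeholder for a real client wrapper.
async function callModel(prompt, { mode = "mock" } = {}) {
  if (mode === "mock") {
    // Deterministic stand-in so the chain runs without API keys.
    return `[mock] ${prompt.slice(0, 40)}`;
  }
  return callProvider(prompt); // e.g. an OpenAI client, in provider mode
}
```

Defaulting to mock mode keeps the whole chain runnable and repeatable in CI; provider mode is opt-in.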

ASCII Diagram

Scenario + Metrics + Signals
            |
            v
      Orchestrator (run.js)
            |
            v
 [Metrics] -> [Discovery] -> [Engineering] -> [Quality]
      -> [Platform] -> [TestDesigner] -> [Summary]
            |
            v
   Reports (MD/HTML) + Optional Test Results

Provider Support

  • OpenAI is integrated out of the box.
  • The flow is extensible to Gemini, Claude, and others by adding model clients and extending selection logic in src/models.js.
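
Extending the selection logic might look like the sketch below. The factory names (`createOpenAIClient`, `createGeminiClient`) and the shape of src/models.js are assumptions for illustration only.

```javascript
// Hedged sketch of provider selection; the real src/models.js may differ.
// The create* factories are placeholders for SDK wrappers you would add.
const providers = {
  openai: () => createOpenAIClient(),
  gemini: () => createGeminiClient(),
};

function getModelClient(name) {
  const factory = providers[name];
  if (!factory) throw new Error(`Unknown provider: ${name}`);
  return factory();
}
```

Adding a provider then means registering one more factory, with no changes to the agents themselves.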

Quickstart

git clone https://github.com/amiya-pattnaik/agentic-engineering-playbook.git
cd agentic-engineering-playbook
npm install

node src/run.js scenarios/banking-app.json --metrics data/metrics.json
node src/run.js scenarios/banking-app.json --metrics data/metrics.json --html
node src/run.js scenarios/banking-app.json --metrics data/metrics.json --signals config/signals.json --html --run-tests

Use OpenAI:

cp config/model.example.json config/model.json
node src/run.js scenarios/banking-app.json --metrics data/metrics.json
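
For reference, config/model.json might look like the fragment below. The field names are illustrative; check config/model.example.json in the repo for the actual schema, and keep the API key in an environment variable rather than the file.

```json
{
  "provider": "openai",
  "model": "gpt-4o-mini",
  "apiKeyEnv": "OPENAI_API_KEY"
}
```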

Run and Evaluate

# baseline run
node src/run.js scenarios/banking-app.json --metrics data/metrics.json

# include html report
node src/run.js scenarios/banking-app.json --metrics data/metrics.json --html

# full flow with tests
node src/run.js scenarios/banking-app.json --metrics data/metrics.json --signals config/signals.json --html --run-tests

Closing Thought

Agentic AI becomes practical when each step is explicit, auditable, and connected to real engineering signals instead of prompt-only intuition.