AI & Machine Learning

Inside smart-sdlc: The Skill-First Agentic Framework That Turns Copilot and Claude Into a Full SDLC Team

smart-sdlc from superml.dev is a markdown-only agile development framework that runs inside the AI assistant you already use — no runtime, no platform, just six personas and six SDLC phases your Copilot or Claude can activate on demand.

Bhanu Pratap

The Bet That smart-sdlc Is Making

Almost every “agentic SDLC” announcement from the last six months has been about a new platform. GitLab shipped Duo. GitHub shipped Copilot Workspaces. AWS shipped Q Developer’s agent mode. Microsoft and Azure pitched an end-to-end AI-led lifecycle. The common thread: a new runtime, a new dashboard, a new seat to pay for, a new place where your team’s context lives.

smart-sdlc, the framework from superml.dev and superml.org (by crazyaiml), makes a different bet. Its README pitch, verbatim, is:

“A standalone AI-driven agile development framework that works natively inside GitHub Copilot, Claude, or any AI coding assistant.”

There’s no runtime to install. There’s no daemon. There’s no proprietary agent orchestrator. What ships in the _superml/ folder of your repo is a collection of markdown skill files — plain SKILL.md documents — that an AI assistant reads and follows when you activate them. The skills encode the expertise; your existing AI does the execution.

The framework’s own framing is direct: “each skill is a markdown file (SKILL.md) containing structured instructions the AI reads and follows. No code execution, no installed tooling — just skills your AI activates on demand.”

If you squint, it’s the same pattern Anthropic itself shipped with Claude Code skills (and Cowork skills before that) — but generalised across Copilot, Cursor, Claude, and any other chat-based AI assistant that can read a file path and follow instructions.
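The article doesn't reproduce a skill file, but the pattern it describes — role, preconditions, expected artifact shape, all in plain markdown — suggests something like the following hypothetical sketch. Every heading and field here is illustrative, not smart-sdlc's actual schema:

```markdown
# SKILL: Architect (Rex)

## Role
You are the system architect. You produce architecture options, ADRs,
and trade-off analyses for this repository.

## Preconditions
- A PRD artifact from phase 2-planning must exist.
- If it does not, stop and ask the user to run the planning skill first.

## Output shape
1. Context and constraints
2. Options considered, with trade-offs
3. Decision, recorded as an ADR in docs/adr/
```

The point is that this is the entire mechanism: the assistant reads the file and follows it. There is nothing to compile or execute.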

The Six Personas

smart-sdlc’s unit of expertise is the persona — a named, role-specific skill that your AI activates when you address it. Six personas span the lifecycle:

Persona               Default name   Copilot handle          Primary domain
Product / BA          Aria           @sml-agent-pm           Requirements, PRDs, user stories
Architect             Rex            @sml-agent-architect    System design, ADRs
Developer             Nova           @sml-agent-developer    Implementation, code review
Modernization Lead    Sage           @sml-agent-sage         Legacy analysis, migration
Team Lead / PM        Lead           @sml-agent-lead         Epics, sprint planning
Code Archaeologist    Scout          @sml-agent-scout        Codebase onboarding

In practice, a developer opens Copilot Chat, types @sml-agent-architect design a queue replacement for our legacy SQS worker pool, and the assistant — guided by Rex’s SKILL.md — produces architecture choices, an ADR, and the trade-off analysis in the framework’s expected shape. Same model, same IDE, same chat window — but the output is now role-aware and artifact-aware, not generic prose.

Persona names are customisable. “Aria,” “Rex,” “Nova,” and the rest are defaults; a team can rename them to match their own role vocabulary (some orgs will want “Product Owner” instead of “Product / BA”), and the skill honours the local config.

Six Phases, Numbered for a Reason

The phase structure is equally deliberate. smart-sdlc organises skills under _superml/skills/ into six numbered phase folders, plus two cross-cutting layers:

  • Phase 0: 0-relearn — codebase onboarding (Scout lives here).
  • Phase 1: 1-analysis — problem analysis and discovery.
  • Phase 2: 2-planning — requirements and UX (Aria’s home base).
  • Phase 3: 3-solutioning — architecture and work breakdown (Rex).
  • Phase 4: 4-implementation — build, test, ship (Nova).
  • Phase 5: 5-modernize — legacy analysis and migration (Sage).
  • core — shared utilities.
  • integrations — JIRA, Confluence, GitHub, GitLab, Azure DevOps connectors.
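Put together, a freshly initialised repo might look something like this — a hypothetical reconstruction from the folder names above, not a verbatim listing:

```text
_superml/
├── config.yml             # team-wide settings, committed
├── persona.yml            # personal settings, gitignored
└── skills/
    ├── 0-relearn/         # Scout: codebase onboarding
    ├── 1-analysis/        # problem analysis and discovery
    ├── 2-planning/        # Aria: requirements, PRDs, UX
    ├── 3-solutioning/     # Rex: architecture, ADRs
    ├── 4-implementation/  # Nova: build, test, ship
    ├── 5-modernize/       # Sage: legacy analysis, migration
    ├── core/              # shared utilities
    └── integrations/      # JIRA, Confluence, GitHub, GitLab, Azure DevOps
```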

The numbering matters because skills reference each other across phases. A 3-solutioning architecture skill can cite the 2-planning PRD artifact shape as a precondition. An artifact readiness guard — a pattern where the skill refuses to proceed until the upstream artifact exists in the expected shape — is what keeps the handoffs from being lossy.

This is also where the framework’s agile sensibilities show. It’s not trying to be waterfall in numbered-phase clothing. Phase 0 (relearn) is explicitly positioned as “the thing you do before every meaningful change, not once at project kickoff” — a small but ideologically important move for legacy-heavy enterprises.

The Two-Phase Setup

One of the cleaner design choices in smart-sdlc is its separation of team config from personal config. Running:

npx @supermldev/smart-sdlc init

…bootstraps the team-wide _superml/ folder at the repo root and writes _superml/config.yml — things every team member shares, typically committed to git.
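The article doesn't show config.yml's contents; a plausible shape, with entirely illustrative keys and values, might be:

```yaml
# _superml/config.yml — team-wide, committed to git.
# All keys below are assumptions for illustration, not the real schema.
project:
  name: payments-service
integrations:
  jira:
    base_url: https://example.atlassian.net
    auth: bearer            # bearer | basic | header
personas:
  # team-level defaults; individuals can override locally
  architect: Rex
  developer: Nova
```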

npx @supermldev/smart-sdlc persona

…configures a developer’s individual workspace: their preferred persona names, the tools they’ve enabled (Copilot, Claude, Cursor), and anything else that shouldn’t leak into git. _superml/persona.yml is gitignored.
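A per-developer file in that spirit might look like the following — again, every field name here is an assumption:

```yaml
# _superml/persona.yml — personal, gitignored.
# Field names are illustrative, not smart-sdlc's actual schema.
tools:
  - copilot
  - claude
personas:
  product_ba: "Product Owner"   # local rename of the default "Aria"
```

Because this file never reaches git, one developer can work persona-first in Copilot while a teammate drives the same skills from Claude Code, and neither sees the other's preferences.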

This is the right cut. The frustrating thing about most “team AI” tooling is that it either forces everyone into the same IDE (Cursor-only, or Copilot-only) or pollutes the repo with personal preferences. smart-sdlc treats team context and personal context as distinct, first-class concerns.

When you pick Copilot specifically, the framework auto-generates a .github/ folder with one *.agent.md per persona, one SKILL.md per skill (for slash-command access), a copilot-instructions.md, and a pull_request_template.md. So the Copilot activation isn’t just “read this file” — it’s wired into the conventions Copilot already looks for.
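From the files the article names, the generated folder would be shaped roughly like this (the subfolder arrangement and filenames are guesses for illustration):

```text
.github/
├── copilot-instructions.md       # repo-wide Copilot guidance
├── pull_request_template.md
├── agents/                       # one *.agent.md per persona (location illustrative)
│   ├── sml-agent-pm.agent.md
│   └── ...
└── skills/                       # one SKILL.md per skill, for slash-command access
```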

“Meetings” — Multi-Persona Context Generation

The command I find most interesting, structurally, is:

npx @supermldev/smart-sdlc meeting

It generates a structured context prompt that brings multiple personas into a single AI session. The use case is a design review, sprint planning, or an architecture discussion where you want the PM, the architect, the developer, and the modernization lead in the same chat — effectively a “round table” inside your AI assistant.

In practice, this is a workaround for a real LLM limitation: no frontier model today reliably switches between distinct expert personas mid-conversation without context bleed. By generating a single, carefully structured prompt that pre-loads all four roles with their individual skill files, meeting makes multi-role sessions more reliable than ad-hoc @-mention switching.

It’s also the kind of feature that’s only possible because the whole system is markdown. If meeting had to orchestrate agent handoffs across a runtime, it would be a product. Because it just concatenates skill files into a prompt, it’s a shell command.
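To make that concrete, here is a minimal sketch of what meeting conceptually does — concatenate the participating personas' skill files, under a framing preamble, into one session prompt. The paths, filenames, and wording are all illustrative; this is not smart-sdlc's actual implementation.

```shell
# Stand in two hypothetical persona skill files (contents are placeholders).
mkdir -p demo/_superml/skills/2-planning demo/_superml/skills/3-solutioning
printf '# Aria — Product/BA skill\n' > demo/_superml/skills/2-planning/SKILL.md
printf '# Rex — Architect skill\n'   > demo/_superml/skills/3-solutioning/SKILL.md

# Build a single round-table prompt: one preamble, then each skill file
# delimited so the model knows which persona each section belongs to.
{
  echo "You are facilitating a multi-persona design review."
  echo "Stay in the persona whose section is invoked; do not blend roles."
  for skill in demo/_superml/skills/*/SKILL.md; do
    echo
    echo "--- persona context: $skill ---"
    cat "$skill"
  done
} > demo/meeting-prompt.md
```

The resulting meeting-prompt.md is what gets pasted into (or loaded by) the chat session — one artifact, all roles pre-loaded.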

The Integrations Layer

smart-sdlc’s integrations/ folder is where the framework earns its enterprise keep. Out of the box it covers JIRA, Confluence, GitHub, GitLab, and Azure DevOps — via REST API or MCP Server, with support for bearer, basic, and header-based auth.

The MCP support is the more forward-looking choice. As MCP consolidates under the Linux Foundation’s Agentic AI Foundation (which we covered in the Agent Stack Grows Up piece), a company’s internal knowledge surfaces — their private wiki, ticketing, SSO-gated dashboards — will increasingly expose MCP servers as the canonical integration point. A framework that can read JIRA via MCP today will read anything via MCP tomorrow without a rewrite.

The artifact side also has real conflict-prevention controls: ticket lock, branch lock, and version traceability are called out explicitly. These are exactly the places where ad-hoc Copilot use breaks down — two developers generate competing ADRs against the same ticket, or a Cursor session rewrites a file a reviewer had already started on. smart-sdlc’s skills enforce the lock pattern at the SDLC artifact level, not at the IDE level.
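The article names the controls but not their on-disk form. One plausible shape for a ticket-lock artifact — entirely an assumption, offered only to make the pattern tangible — would be a small committed file the skills check before writing:

```yaml
# Hypothetical lock artifact — smart-sdlc's actual format is not shown here.
lock:
  ticket: PROJ-1432
  branch: feature/sqs-replacement
  held_by: nova-session-2026-02-11
  artifact: docs/adr/0007-queue-replacement.md
```

A skill that finds an active lock for its target ticket or branch refuses to generate a competing artifact and tells the developer who holds it.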

What smart-sdlc Is Not

Because the framework is deliberately scoped, it’s worth being clear about what it doesn’t try to do:

  • It’s not a model. It doesn’t fine-tune anything, doesn’t call any LLM directly, doesn’t provide inference. Bring your own Copilot/Claude/Cursor.
  • It’s not an orchestrator. There’s no long-running agent, no task queue, no background worker. The AI runs in your chat window, under your hand.
  • It’s not a replacement for CI/CD. Phase 4 covers “build, test, ship” but as a set of skills your developer persona invokes — not as a pipeline runtime. It slots next to your existing GitHub Actions, not over them.
  • It’s not locked to one vendor. The same _superml/ folder drives Copilot, Claude Code, Cursor, or any other AI assistant. The per-tool auto-generated wiring (.github/ for Copilot, etc.) is an ergonomic convenience, not a lock-in.

Read through that list and you’ll notice the framework is doing the opposite of what most agentic-SDLC products do. Most products are investing in orchestrator code, proprietary agent runtimes, and vendor lock-in. smart-sdlc is investing in vocabulary, artifact shapes, and role handoffs — the parts that are actually hard.

How This Slots Into the Human-Led, AI-Accelerated Stack

A few days ago I wrote about how the winning 2026 stack is human-led, AI-accelerated — the pattern where a human stays in the loop at every irreversible step and the AI handles the acceleration. smart-sdlc is a concrete instance of that pattern at the SDLC layer:

  • Human-led: the developer still types the @sml-agent-architect invocation, still accepts or rejects Rex’s output, still opens the PR. Nothing runs autonomously overnight.
  • AI-accelerated: the expertise embedded in each persona’s SKILL.md means the AI’s output lands in the right artifact shape the first time. Fewer rewrites. Less “can you reformat this as an ADR?” churn.
  • Checkpoints: ticket lock and branch lock enforce the human touch at the right places — exactly where the “25% production agent failure rate” Gartner cited tends to bite.

The way to read smart-sdlc is not “let the AI build your software.” It’s “give your AI the vocabulary of a senior engineer in each role, and let your developers orchestrate.” That framing is genuinely distinct from the “autonomous swarm of agents ships your roadmap” pitch, and — per everything we’ve seen about real-world agent reliability in 2026 — it’s probably the one that survives contact with production.

Installation and Getting Started

If you want to try it on a real repo, the full flow is:

# One-time team setup — run from repo root, commit the result
npx @supermldev/smart-sdlc init

# Per-developer workspace setup — gitignored
npx @supermldev/smart-sdlc persona

# Multi-persona session generation
npx @supermldev/smart-sdlc meeting

# Housekeeping
npx @supermldev/smart-sdlc help    # context-aware guidance
npx @supermldev/smart-sdlc list    # enumerate skills and agents
npx @supermldev/smart-sdlc update  # refresh to latest
npx @supermldev/smart-sdlc clean   # remove generated files

Inside Copilot Chat, the activation is just @sml-agent-pm, @sml-agent-architect, and so on. Inside Claude or Cursor, you point at the SKILL.md file directly — the framework is explicit that “the AI reads the skill, loads config and persona settings, loads company reference docs, and activates the persona.”
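In a file-aware assistant, that activation is just a prompt pointing at the skill, along these lines (wording illustrative, path taken from the layout described above):

```text
Read _superml/skills/3-solutioning/SKILL.md, load _superml/config.yml and
my persona settings, then act as the architect persona for this session.
```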

License is MIT, © 2026 Superml.dev. Published under @supermldev/smart-sdlc on npm.

What to Watch

A few threads worth tracking as smart-sdlc matures:

  • MCP-native integrations. The framework already supports MCP servers for internal knowledge retrieval; the interesting question is whether common enterprise integrations (JIRA, Confluence, ServiceNow) get shipped as officially-blessed MCP servers that smart-sdlc auto-configures. That would close the last gap between “generic AI assistant” and “company-context AI assistant” without the framework having to ship its own connectors.
  • Cross-team skill registries. module.yaml hints at a skill registry. If superml.dev opens a community registry where teams can publish their own persona variants (“Aria but for regulated healthcare,” “Rex specialised for embedded systems”), the framework could become the distribution layer for SDLC expertise the way npm became the distribution layer for JavaScript utilities.
  • Eval harnesses. The next hard problem for skill-first frameworks is measurement. How do you test that a SKILL.md change actually improved the output, versus regressing it? Expect either smart-sdlc or a sibling project to ship a skill-level eval runner in the next six months.
  • Comparison studies. Somebody — probably an academic lab or a consulting firm — will publish a head-to-head between “team using smart-sdlc on Copilot” and “team using vanilla Copilot.” The result is non-obvious and will shape enterprise adoption.

The deeper bet smart-sdlc is making is that the next layer of value in agentic development isn’t better models — it’s better vocabulary. Personas, phase structure, artifact shapes, handoff contracts. If the Stanford AI Index is right that benchmarks are saturating, then the remaining progress happens at exactly this layer: the specification of what the AI is supposed to do, and what “done” looks like at each step.

Worth keeping an eye on.
