Why the PRD Needs to Change

A Product Requirements Document (PRD) has traditionally served as a coordination artifact between humans: product managers, engineers, designers, and business stakeholders. Its purpose has been to define what a product should do, why it should exist, and how success will be measured. As teams increasingly incorporate agentic AI systems, which can plan, execute, and iterate autonomously, the PRD’s audience changes. It is no longer read only by people; it must now be interpreted and executed by machines.

Agentic AI differs significantly from prompt-based generative systems. Rather than producing a single output in response to a query, agentic systems split goals into tasks, select tools, execute actions, observe results, and revise their plans. This level of self-directed behavior increases the risk posed by unclear requirements: where a human would have stopped, tried to understand, or asked for clarification, an AI agent may simply press ahead, sometimes incorrectly.

This creates both an opportunity and a necessity. The PRD can evolve into a formalized contract between the user and the AI, defining not only what to build, but how the AI is allowed to reason, decide, and act. Such a contract enables reproducibility across models and vendors, reduces operational risk, and aligns AI behavior with business intent, so that the same input can produce the same end product a second time.

From Human Alignment Tool to Machine-Readable Contract

Traditional PRDs tolerate ambiguity because humans are good at reading between the lines. Phrases like “fast,” “scalable,” or “user-friendly” are often left intentionally vague, relying on shared context and iterative discussion. However, natural language ambiguity is a known failure mode for automated systems.

For an agentic AI, ambiguity becomes executable behavior. If a requirement states “optimize performance,” the agent must decide which metric, under what constraints, and at what cost. Without explicit boundaries, the AI may optimize for the wrong outcome, such as reducing latency by removing logging or security checks. A secondary validation step might be crucial for the business yet look disastrous to a performance metric.

Reframing the PRD as a contract introduces several conceptual shifts:

  1. Explicit obligations: What the AI must do.
  2. Explicit constraints: What the AI must not do.
  3. Verification criteria: How success and failure are evaluated.
  4. Termination conditions: When the AI must stop, escalate, or ask for clarification.

This mirrors how software contracts and interface specifications enable independent implementations while preserving consistent behavior.
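
To make these shifts concrete, the four elements can be captured as structured data rather than prose. The sketch below is illustrative only: the schema and field names are assumptions, not an established standard.

```python
# A minimal sketch of the four contract elements as machine-readable data.
# Schema and field names are hypothetical, not an established standard.
contract = {
    "obligations": [
        "Expose POST /todos and GET /todos",
        "Authenticate every request with JWT",
    ],
    "constraints": [
        "Do not modify billing systems",
        "No plaintext secrets in code or logs",
    ],
    "verification": {
        "method": "pytest",
        "pass_condition": "all tests green, coverage >= 0.90",
    },
    "termination": [
        "Stop and report if the schema migration fails",
        "Escalate to a human when a constraint conflicts with an obligation",
    ],
}
```

An agent harness can then treat each key as a distinct phase: obligations drive planning, constraints gate actions, verification gates acceptance, and termination conditions bound the run.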

Design Principles for AI-Consumable PRDs

1. Determinism Over Narrative

Human-oriented PRDs often rely on narrative explanations and emotionally framed evaluations. For AI consumption, narrative should be minimized in favor of deterministic statements.

Human-style requirement

“The system should load quickly for most users.”

AI-contract requirement

“The system must render the initial UI of the start page within 2.0 seconds for at least 95% of requests (the 95th percentile) under a simulated load of 1,000 concurrent users.”

Clear thresholds reduce the AI’s need to infer intent and make behavior more reproducible across models.
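
A deterministic requirement like this can also be verified mechanically. The check below is a sketch: it assumes the latency samples come from a load-test harness (not shown) simulating 1,000 concurrent users, and the thresholds are copied straight from the requirement.

```python
def meets_startup_budget(latencies: list[float]) -> bool:
    """Contract check: at least 95% of requests render within 2.0 seconds.

    `latencies` holds per-request render times in seconds, assumed to be
    collected under the load profile the requirement specifies.
    """
    within_budget = sum(1 for t in latencies if t <= 2.0)
    return within_budget / len(latencies) >= 0.95
```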

2. Explicit Goal Hierarchies

Agentic AI systems typically plan by decomposing goals into sub-tasks. A PRD designed for such systems should expose this hierarchy directly.

Instead of a flat list of features, requirements should be structured as:

  • Primary objective (business outcome)
  • Secondary objectives (supporting outcomes)
  • Non-objectives (explicit exclusions)

This reduces unintended optimization, misplaced focus, and hallucination. Research on objective misalignment shows that agents will exploit underspecified goals if constraints are absent.

“Agentic misalignment makes it possible for models to act similarly to an insider threat, behaving like a previously-trusted coworker or employee who suddenly begins to operate at odds with a company’s objectives.” (anthropic.com)
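
One way to expose this hierarchy to an agent is to encode it directly, non-objectives included. The structure below is an illustrative sketch using the TODO API example from later in this post; the field names are assumptions.

```python
# Hypothetical goal hierarchy. Non-objectives are listed explicitly so
# the agent cannot "helpfully" expand scope on its own.
goals = {
    "primary": "Users can persist and retrieve TODO items via a REST API",
    "secondary": [
        "Responses are machine-readable JSON",
        "Errors carry actionable messages",
    ],
    "non_objectives": [
        "No user interface",
        "No notifications or reminders",
        "No third-party integrations",
    ],
}
```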


3. Constraints as First-Class Requirements

Traditional PRDs often treat constraints as footnotes. For AI agents, constraints must be first-class, machine-readable rules with as little room as possible for interpretation.

Examples:

  • Technology constraints (“Must use MySQL 8.4 LTS for data storage”)
  • Security constraints (“No plaintext secrets or personal data in code or logs”)
  • Organizational constraints (“Do not modify billing systems”)

This aligns with best practices in AI safety, which emphasize bounding action spaces for autonomous systems.
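
Constraints expressed this way can be enforced before an action executes rather than discovered afterwards. The gate below is a sketch under assumed names: the billing/ prefix and the write-gating function are hypothetical stand-ins for whatever rule engine a real agent harness uses.

```python
FORBIDDEN_PATH_PREFIXES = ("billing/",)  # hypothetical protected area

def gate_write(path: str, contents: str) -> None:
    """Refuse constraint-violating writes instead of executing them."""
    if path.startswith(FORBIDDEN_PATH_PREFIXES):
        raise PermissionError(f"contract violation: {path} is off-limits")
    with open(path, "w", encoding="utf-8") as f:
        f.write(contents)
```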


4. Verification and Acceptance Criteria

Humans can negotiate acceptance during reviews. AI agents need predefined acceptance tests.

Every requirement should include:

  • A measurable condition
  • A verification method
  • A pass/fail threshold

This mimics the role of automated tests in continuous integration, which enable repeatable evaluation without human judgment.
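
In practice, each requirement can translate into one automated check that pairs the measurable condition with its verification method and threshold. The pytest sketch below assumes a hypothetical create_todo() function; the names are placeholders, not a real codebase.

```python
import pytest

from todo_api import create_todo  # hypothetical module under test

def test_deadline_must_be_iso_8601():
    # Condition: malformed deadlines are rejected.
    # Method: unit test. Threshold: pass/fail.
    with pytest.raises(ValueError):
        create_todo(title="Write PRD", deadline="next Tuesday")

def test_valid_todo_is_accepted():
    todo = create_todo(title="Write PRD", deadline="2025-06-01T12:00:00")
    assert todo["title"] == "Write PRD"
```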


Example: PRD as a Contract for an Agentic Coding AI

Below is a simplified excerpt from a PRD explicitly designed for an agentic AI tasked with generating a software feature.


Product Requirement Contract (Excerpt)

Objective
Build a REST API endpoint that allows authenticated users to create and retrieve TODO items.

Primary Goal
Enable users to persist TODO items with title, description, and deadline.

Non-Goals

  • No user interface implementation
  • No notification or reminder features
  • No third-party integrations

Functional Requirements

  1. The API must expose POST /todos and GET /todos.
  2. Each TODO must include: id, title, description, deadline, created_datetime.
  3. Requests must be authenticated using JWT.

Constraints

  • Language: Python 3.11
  • Framework: Flask
  • Database: MySQL
  • ORM: SQLAlchemy
  • No direct SQL queries permitted.

Security Requirements

  • Input validation must reject malformed JSON.
  • Deadlines must be ISO-8601 formatted.
  • Authentication failures must return HTTP 401.

Acceptance Criteria

  • Unit tests must cover ≥90% of business logic.
  • All tests must pass using pytest.
  • Linting must pass with flake8 default rules.

Termination Conditions

  • If database schema migration fails, stop execution and report error.
  • If test coverage <90%, do not proceed to final output.

This structure minimizes interpretation and enables the same PRD to be reused across different AI models or providers, improving portability and reproducibility. There is no room for storytelling; the contract sticks to verifiable facts.
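
For illustration, here is a minimal sketch of the POST /todos handler an agent might derive from this contract. It uses the PyJWT package for token checks and stubs out persistence; a conformant implementation would load the secret from configuration and store rows through SQLAlchemy models on MySQL, as the constraints require.

```python
from datetime import datetime
from functools import wraps

import jwt  # PyJWT
from flask import Flask, jsonify, request

app = Flask(__name__)
SECRET_KEY = "dev-only-placeholder"  # must come from config; the contract bans plaintext secrets


def require_jwt(view):
    """Return HTTP 401 on a missing or invalid token, as the contract demands."""
    @wraps(view)
    def wrapper(*args, **kwargs):
        auth = request.headers.get("Authorization", "")
        try:
            token = auth.removeprefix("Bearer ").strip()
            jwt.decode(token, SECRET_KEY, algorithms=["HS256"])
        except jwt.PyJWTError:
            return jsonify({"error": "invalid or missing token"}), 401
        return view(*args, **kwargs)
    return wrapper


@app.post("/todos")
@require_jwt
def create_todo():
    data = request.get_json(silent=True)  # silent=True returns None on malformed JSON
    if data is None:
        return jsonify({"error": "malformed JSON"}), 400
    try:
        deadline = datetime.fromisoformat(data["deadline"])  # enforces ISO-8601
    except (KeyError, TypeError, ValueError):
        return jsonify({"error": "deadline must be ISO-8601"}), 400
    # Persistence stubbed: the contract mandates SQLAlchemy + MySQL here.
    todo = {"title": data.get("title"), "deadline": deadline.isoformat()}
    return jsonify(todo), 201
```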


Model- and Provider-Independence

One motivation for treating the PRD as a contract is to avoid lock-in to a single AI model or vendor. Today’s agentic systems vary significantly in planning depth, tool usage, and error-recovery strategies; with a contract-style PRD, the same specification can be re-run on multiple models and the outcomes compared.

By externalizing intent into a structured PRD:

  • The PRD defines behavior, not the model.
  • Different agents can be evaluated against the same acceptance criteria.
  • Organizations can swap models without rewriting product intent.

This mirrors how open standards enable multiple implementations while preserving interoperability.
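
A small evaluation harness makes this concrete. The sketch below assumes a hypothetical run_agent(model, prd) call and a list of acceptance checks; both are stand-ins, since real agent APIs differ per provider.

```python
from collections.abc import Callable

def evaluate(models: list[str], prd: str,
             run_agent: Callable[[str, str], str],
             checks: list[Callable[[str], bool]]) -> dict[str, float]:
    """Score each model's output against the same acceptance checks."""
    scores = {}
    for model in models:
        output = run_agent(model, prd)  # stand-in for a provider-specific call
        passed = sum(1 for check in checks if check(output))
        scores[model] = passed / len(checks)
    return scores
```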

Implications for Product and Business Stakeholders

Treating the PRD as a contract shifts effort upstream: more time is spent defining constraints and success criteria, but less time is lost correcting misaligned outputs. For business stakeholders, the PRD becomes auditable evidence of intent, a record of what the AI was instructed to do.

This is especially relevant for governance and risk management. Regulators increasingly emphasize traceability and accountability in AI systems, both in what data is fed in and in which model was run. A PRD-as-contract provides a clear artifact linking business intent to AI action.


Brief Notes on Risk, Ethics, and Governance

This is by no means a full review, but a few items came up while writing:

  1. Auditability: A structured PRD enables post-hoc analysis of whether failures stemmed from bad instructions or bad execution, and gives a clearer path to why a decision was taken.
  2. Liability: Explicit constraints reduce ambiguity about responsibility when AI systems cause harm, and new constraints can be added along the way.
  3. Human override: PRDs should specify escalation conditions where human approval is required, or where a manual review is needed before the model may continue.

These align with widely cited AI governance principles emphasizing human oversight and bounded autonomy.

Conclusion

As agentic AI systems move from experimental tools to production actors, the PRD must evolve as one of the main tools for product development. No longer just a communication aid between humans, it becomes a contract that defines, constrains, and verifies autonomous behavior. By prioritizing determinism, explicit constraints, and measurable acceptance criteria, organizations can create PRDs that are portable across AI models, safer to deploy, and better aligned with business goals.

In this framing, the PRD is not diminished by AI; it becomes more important than ever.