Before AI Does Something Important, It Has to Ask You Three Questions.

A simple protocol that keeps humans in control—by forcing AI to pause when meaning is unclear.

These aren't three fixed questions. They're three checkpoints that ask: What are we talking about? Why does it matter? What do we do first? The AI can't act until you answer.

Why This Matters

Modern AI systems don't wait for commands — they initiate actions, trigger workflows, and make decisions. Without a source of human-defined intention, they improvise.

That's not intelligence.
That's automation without direction.

The Human Agency Protocol (HAP) installs the missing layer.
Integrated directly into AI platforms, HAP forces systems to pause when meaning, purpose, or intention is unclear — and resume only once humans decide what the action is for.

With HAP, your AI acts on human intention, not assumptions.

How HAP Works

Stop → Ask → Proceed

1. Stop: The system detects ambiguity, drift, or a skipped stage.

2. Ask: A context-appropriate question is triggered (an Inquiry Blueprint).

3. Proceed: Only after the human resolves the checkpoint can the AI continue.

These checkpoints are enforced. They cannot be bypassed.
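The Stop → Ask → Proceed cycle above can be pictured as a small state machine. This is an illustrative sketch, not code from the HAP specification: the class and method names (`Checkpoint`, `resolve`, `may_proceed`) are hypothetical.

```python
from enum import Enum, auto

class CheckpointState(Enum):
    STOPPED = auto()   # ambiguity detected; execution halted
    ASKING = auto()    # question posed to the human
    RESOLVED = auto()  # human has answered; AI may continue

class Checkpoint:
    """One Stop -> Ask -> Proceed gate. The AI cannot advance
    past the gate until a human resolves it."""

    def __init__(self, question):
        self.question = question
        self.state = CheckpointState.STOPPED
        self.answer = None

    def ask(self):
        self.state = CheckpointState.ASKING
        return self.question

    def resolve(self, human_answer):
        # Only an explicit human answer moves the checkpoint forward.
        self.answer = human_answer
        self.state = CheckpointState.RESOLVED

    def may_proceed(self):
        return self.state is CheckpointState.RESOLVED
```

In this sketch, "cannot be bypassed" simply means the executing system checks `may_proceed()` before every downstream step; the protocol itself enforces this at the platform level.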

Two Inquiry Modes

Human thinking has two rhythms. HAP protects both.

Convergent Mode

For alignment, planning, and execution.

  • Moves meaning → purpose → intention → action
  • No skipping or fast-forwarding
  • Guarantees human-led closure

Reflective Mode

For exploration, insight, and creative depth.

  • Cycles between stages without forcing decisions
  • Protects depth from premature action
  • Guarantees human-anchored reflection

Both modes follow the same rule:
AI cannot advance when orientation is unclear.
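The two modes differ only in which stage transitions they allow. A minimal sketch of the transition rules, under the assumption that Convergent mode permits exactly one forward step and Reflective mode cycles freely among the non-action stages (the function names are hypothetical, not from the spec):

```python
from enum import IntEnum

class Stage(IntEnum):
    MEANING = 1
    PURPOSE = 2
    INTENTION = 3
    ACTION = 4

def convergent_allowed(current, target):
    # Convergent mode: exactly one step forward on the ladder,
    # no skipping and no fast-forwarding.
    return target == current + 1

def reflective_allowed(current, target):
    # Reflective mode: cycle freely between meaning, purpose,
    # and intention, but never force a jump into action.
    return target != Stage.ACTION
```

Under this reading, reaching ACTION always requires Convergent-mode closure; Reflective mode protects depth by making action unreachable.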

The Inquiry Ladder

All productive human inquiry follows the same structure:

1. Meaning: Are we talking about the same thing?

2. Purpose: Why does this matter?

3. Intention: What will we do first?

4. Action: How do we execute?

HAP enforces progress through these stages.
When any stage is unresolved, the system must pause and ask.
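The pause-and-ask rule amounts to finding the first unresolved rung of the ladder. A sketch, assuming stages are tracked as a simple resolved set (the helper name is hypothetical):

```python
# The four ladder stages, in the order HAP enforces them.
LADDER = ["meaning", "purpose", "intention", "action"]

def next_checkpoint(resolved):
    """Return the first unresolved stage in ladder order.
    If any stage is unresolved, the system must pause there
    and ask; None means every stage is resolved."""
    for stage in LADDER:
        if stage not in resolved:
            return stage
    return None
```

Because the scan runs in ladder order, a resolved later stage can never mask an unresolved earlier one: the pause always lands on the lowest open rung.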

Core Principles (v0.3)

Stage Progression Enforcement

AI must follow the ladder in the correct order (Convergent) or controlled cycles (Reflective).
No jumps. No shortcuts. No inference.

Mandatory Human Checkpoints

When meaning, purpose, or intention is unclear, AI must stop and ask.
Explicit confirmation required.

Human-Gated Actions

All downstream actions (publishing, deploying, triggering systems) require resolved stages.
Higher risk → higher required stage.
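"Higher risk → higher required stage" can be read as a lookup from action to the deepest ladder stage that must be resolved first. The action names and their tiers below are invented examples, not part of the protocol:

```python
LADDER = ["meaning", "purpose", "intention", "action"]

# Hypothetical risk tiers: each action names the deepest ladder
# stage that must be resolved before the action may run.
REQUIRED_STAGE = {
    "draft_text": "meaning",    # low risk
    "publish": "intention",     # medium risk
    "deploy": "action",         # high risk
}

def action_allowed(action, resolved_through):
    """True if the ladder has been resolved at least as deep
    as this action's risk tier requires."""
    need = LADDER.index(REQUIRED_STAGE[action])
    have = LADDER.index(resolved_through)
    return have >= need
```

A deploy thus stays blocked until the full ladder is closed, while drafting text only needs shared meaning.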

Two Inquiry Modes

Convergent for decisions; Reflective for depth.
Both obey stop → ask → proceed.

Privacy by Architecture

Only structural signals leave the device.
Never content. Never transcripts.

Blueprinted Questions

Questions follow open, shared Inquiry Blueprints.
AI chooses wording; the protocol enforces timing.

Verified Compliance

Every checkpoint and action can be cryptographically proven (HAP Envelope).
Proof, not trust.
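"Proof, not trust" means a checkpoint record carries evidence a steward can re-check. The actual HAP Envelope format is not shown here; in this sketch a keyed HMAC stands in for whatever signature scheme the specification defines:

```python
import hashlib
import hmac
import json

def seal_envelope(record, key):
    """Wrap a checkpoint record with a verifiable digest.
    'record' is a JSON-serializable dict; 'key' is held by
    the party that attests to the checkpoint."""
    payload = json.dumps(record, sort_keys=True).encode()
    tag = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return {"record": record, "proof": tag}

def verify_envelope(envelope, key):
    """Recompute the digest and compare in constant time.
    Any tampering with the record invalidates the proof."""
    payload = json.dumps(envelope["record"], sort_keys=True).encode()
    expected = hmac.new(key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, envelope["proof"])
```

Note that only the structural record (stage, mode, timestamps) would be sealed, consistent with the privacy principle above: no content, no transcripts.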

Federated Stewardship

Qualified Service Providers enforce the rules without owning data.
No central authority. No extraction.

Why This Matters Now

Automation accelerates. Human intention becomes scarce.

HAP ensures that the right question, asked at the right time, keeps humans in the loop — by design, not by hope.
