# Research Autopilot

Research Autopilot is the auditable automation layer of Codex Research Stack. In v0.6 Experience Preview, it feeds the Research Cockpit and the real-project launch flow rather than replacing the researcher.

It turns a research goal into:

- a visible research route;
- a method-wizard selection;
- a primary research route plus stackable method modules;
- staged tasks;
- AI suggestion cards;
- draft, revision, and accepted method artifacts;
- user accept/reject decisions;
- project fact updates;
- evidence, method, claim, and writing gates;
- recovery actions;
- project memory and event-log entries;
- an auditable run report.

## Autonomy Boundary

Autopilot is an auditable automation layer, not a black-box paper generator.

- AI can propose materials, claims, evidence drafts, method notes, repair actions, and task cards.
- AI cannot mark evidence as verified without required fields.
- AI cannot pass a stage gate directly.
- AI cannot write to project facts until the user accepts a suggestion.
- Accepted method suggestions create draft method artifacts. They do not pass gates automatically.
- Users can create, edit, mark for revision, or confirm method artifacts inside the Autopilot page.
- Rejected suggestions are logged and do not modify the project.
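The boundary above can be sketched as a small accept/reject handler. This is an illustrative sketch, not the actual Autopilot API: the `Project` shape and field names are assumptions; only the rule it encodes (accepted suggestions create draft facts, rejected ones are logged and leave the project untouched) comes from the documentation.

```python
from dataclasses import dataclass, field

@dataclass
class Project:
    # Hypothetical minimal project shape for illustration.
    facts: dict = field(default_factory=dict)
    event_log: list = field(default_factory=list)

def resolve_suggestion(project: Project, suggestion: dict, accepted: bool) -> Project:
    """Apply the autonomy rule: only user-accepted suggestions touch facts."""
    if accepted:
        # Accepted method suggestions become *draft* artifacts;
        # they never pass gates automatically.
        project.facts[suggestion["id"]] = {**suggestion["payload"], "status": "draft"}
        project.event_log.append({"type": "suggestion_accepted", "id": suggestion["id"]})
    else:
        # Rejected suggestions are logged but never modify project facts.
        project.event_log.append({"type": "suggestion_rejected", "id": suggestion["id"]})
    return project
```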

## Method Workflow

v0.4 replaced the static method selector with a composable method workflow. v0.6 keeps that model, uses it in the new project wizard, and surfaces its consequences in Cockpit.

The user answers a method wizard about:

- sources and data;
- inference goals;
- analysis modes;
- evidence and ethics constraints.

The app recommends one primary route and a set of method modules. The user can edit that combination before planning a run.

Primary routes cover:

- literature review;
- qualitative and fieldwork projects;
- survey measurement;
- experiment and quasi-experiment;
- econometrics and causal inference;
- text and corpus analysis;
- network analysis;
- digital-trace and platform research;
- spatial/GIS;
- historical/archive/comparative cases;
- policy or program evaluation;
- computational modeling and ABM;
- mixed methods;
- custom general projects.

Modules add concrete required outputs such as:

- DOI metadata records;
- data provenance notes;
- platform ethics notes;
- variable dictionaries;
- identification strategies;
- model specifications;
- survey sampling frames;
- interview protocols;
- codebooks;
- coding frames;
- network edge definitions;
- spatial unit notes;
- experiment protocols;
- ABM parameters;
- comparative case logic;
- policy evaluation logic;
- figure/table indexes;
- reproducibility bundles;
- writing-quality reports.
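A toy recommender illustrates how wizard answers could map to one primary route plus stackable modules. The rules, answer keys, and identifiers below are assumptions for illustration, not the real recommendation logic; the fixed point from the documentation is the output shape: exactly one primary route, a module list, and a combination the user can still edit.

```python
def recommend(answers: dict) -> dict:
    """Toy route/module recommender; every rule here is illustrative only."""
    route = "literature_review"          # hypothetical default route id
    modules = ["doi_metadata_records"]
    if answers.get("analysis_mode") == "causal":
        route = "econometrics_causal_inference"
        modules += ["identification_strategy", "variable_dictionary"]
    if answers.get("sources") == "interviews":
        route = "qualitative_fieldwork"
        modules += ["interview_protocol", "codebook"]
    if answers.get("ethics_constraints"):
        modules.append("platform_ethics_note")
    # The user can edit this combination before planning a run.
    return {"primary_route": route, "modules": modules}
```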

## Method Artifact Closure

Method artifacts are generic review objects, not specialized editors. The app stores them in `.research-console/objects/method-artifacts.json`; optional readable projections can summarize them, but the JSON object file remains the source of truth.

Gate behavior is fixed:

- missing required method artifact: `block`;
- `draft` or `needs_revision`: `revise`;
- `accepted`: satisfies that method-artifact requirement only.

An accepted method artifact does not override material, evidence, claim, ethics, or writing checks. After editing and confirming an artifact, the user must re-run gate evaluation to update the route-aware blocking reasons.
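The fixed gate behavior above can be expressed as a small evaluation function. This is a minimal sketch: the function name and data shapes are assumptions, but the status-to-decision mapping (missing → `block`, `draft`/`needs_revision` → `revise`, `accepted` → satisfied) follows the rules stated here.

```python
def method_artifact_gate(required_ids: list, artifacts: dict) -> str:
    """Evaluate the method-artifact gate for one stage.

    missing artifact        -> "block"
    draft / needs_revision  -> "revise"
    accepted                -> "pass" (this requirement only)
    """
    decisions = []
    for aid in required_ids:
        artifact = artifacts.get(aid)
        if artifact is None:
            decisions.append("block")
        elif artifact["status"] in ("draft", "needs_revision"):
            decisions.append("revise")
        elif artifact["status"] == "accepted":
            decisions.append("pass")
    # The gate reports the worst decision across all required artifacts.
    severity = {"block": 0, "revise": 1, "pass": 2}
    return min(decisions, key=severity.__getitem__) if decisions else "pass"
```

Note that `"pass"` here satisfies only the method-artifact requirement; material, evidence, claim, ethics, and writing checks still run separately.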

## Stage Model

The route-aware stage order is:

1. Research design
2. Material / data plan
3. Evidence and ethics
4. Method artifacts
5. Analysis and claim binding
6. Writing and export review

Each selected route or module contributes required outputs to these stages. Blocked stages must show user-level repair actions. Missing method artifacts block the relevant gate; draft or needs-revision artifacts keep the gate in revise state until the user completes and confirms them.
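How routes and modules contribute required outputs to the six stages can be sketched as a simple merge. The stage identifiers mirror the list above; the contribution shapes are assumptions made for illustration.

```python
# Route-aware stage order from the documentation, as illustrative ids.
STAGES = [
    "research_design",
    "material_data_plan",
    "evidence_ethics",
    "method_artifacts",
    "analysis_claim_binding",
    "writing_export_review",
]

def required_outputs(route_outputs: dict, module_outputs: list) -> dict:
    """Merge per-stage required outputs from one route and its modules."""
    plan = {stage: [] for stage in STAGES}
    for contribution in (route_outputs, *module_outputs):
        for stage, outputs in contribution.items():
            plan[stage].extend(outputs)
    return plan
```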

## Cockpit Loop

Cockpit consumes launch-wizard and Autopilot output and displays:

- Start Here / Setup Progress for question, route, materials, claims, evidence, method artifacts, gates, and export;
- the latest run and stage state;
- gate decisions and blocking reasons;
- recovery actions generated from block or revise checks;
- project memory linked to decisions, risks, method confirmations, and export checkpoints;
- recent append-only events;
- the AI suggestion source (provider or local rules);
- export readiness.

The Cockpit can resume a run, trigger cross-stage review, update recovery actions, export task cards, and jump to the deeper page where a repair should happen.

## Launch Flow

v0.6 can create a project and seed the first Autopilot route in one local action. The launch wizard accepts a project folder, project name, research question, method wizard answers, method selection, an optional first material, and an optional first candidate claim.

The seeded material remains a draft inbox object. The seeded claim remains unsupported. They are visible in Cockpit, but they do not become evidence and do not pass gates until the user upgrades evidence, binds it to claims, confirms method artifacts, and reruns checks.
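A seeding sketch makes the launch-flow rule concrete. Function and key names are hypothetical; the rule it encodes is from the documentation: the optional first material stays a draft inbox object and the optional first claim stays unsupported until the user upgrades evidence and reruns checks.

```python
def seed_project(wizard_input: dict) -> dict:
    """Seed a project from launch-wizard input (illustrative shapes only)."""
    project = {
        "name": wizard_input["project_name"],
        "question": wizard_input["research_question"],
        "materials": [],
        "claims": [],
    }
    if material := wizard_input.get("first_material"):
        # Seeded material is a draft inbox object, not evidence.
        project["materials"].append({**material, "status": "draft_inbox"})
    if claim := wizard_input.get("first_claim"):
        # Seeded claim starts unsupported; it passes no gates.
        project["claims"].append({**claim, "support": "unsupported"})
    return project
```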

## Provider Model

OpenAI-compatible BYOK configuration is optional. Without a provider, the local project workflow remains usable.

The project stores provider metadata and `key_present`, not raw API keys. Raw keys are saved through the system credential store. Provider output can enrich suggestion-card content, but it cannot directly write facts or pass gates. If a provider call fails, Autopilot records a `provider_error` event and falls back to local rule suggestions.
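The fallback path can be sketched as a small wrapper. The function names are assumptions; the behavior comes from the documentation: on provider failure, append a `provider_error` event and fall back to local rule suggestions, and in neither case does provider output write facts or pass gates.

```python
def suggest(provider_call, local_rules, event_log: list):
    """Return suggestion-card content, preferring the provider.

    Provider output can enrich suggestions but never writes facts or
    passes gates; on failure, log `provider_error` and use local rules.
    """
    try:
        return provider_call()
    except Exception as exc:
        event_log.append({"type": "provider_error", "detail": str(exc)})
        return local_rules()
```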
