What a CAIS Capstone demands.
How it is reviewed.
How it is scored.
The Capstone is the terminal performance assessment of the CAIS credential. It is not a final project. It is a reviewed body of work — an artifact plus a strategic memo plus an ethics note plus a risk register plus a mandatory 10–15 minute candidate video walkthrough, evaluated asynchronously by a four-reviewer panel. It is scored against a public, weighted, six-dimension rubric and published to the Public Verification Registry on pass. This document is the citable specification regulators, employers, faculty, and candidates all read from. It is written to be held up against ISO/IEC 17024 §9.2 performance-assessment principles.
The prompt is public. The rubric is weighted. The panel is named. The review is asynchronous. The walkthrough is on camera.
In an AI-saturated world, narration is the proof.
Anyone can generate a dossier. Anyone can generate a pitch deck. Anyone can generate a working demo, a GitHub repo, a rubric-compliant write-up. What cannot be generated is a human being — on camera, in their own voice, for fifteen minutes — narrating the decisions behind their own work. That is the line the CAIS Capstone draws.
What was the hardest decision you made — and what did you trade off to make it?
Where is this body of work weakest — and why did you ship it anyway?
If you rebuilt this tomorrow, what would you do differently — and why?
Why narration is the proof.
The CAIS Capstone does not accept a silent dossier. The candidate walkthrough is the single most expensive artifact to fabricate, the single most expensive artifact to outsource, and the single most expensive artifact to script. That is the point.
It cannot be generated.
The walkthrough is indexed to a named, identified human on camera. Synthetic, AI-voiced, avatar-driven, or third-party-narrated submissions are a COC-2026-01 §III violation and terminate candidacy on discovery. There is no re-sit, and the finding is recorded in the Public Registry.
It cannot be outsourced.
A ghost-builder cannot narrate decisions they did not make. The three reflective prompts force authorship to the surface — tradeoffs, weaknesses, counterfactuals — in a way that someone who commissioned the work cannot fake. Reviewers are trained to detect the gap.
It cannot be read aloud.
Reviewers screen for scripted cadence and read-aloud tonality, and score the candidate's ability to speak fluently to the unscripted follow-up prompts captured in the same continuous take. A recited walkthrough scores below the Walkthrough Performance floor and fails the 75/60 rule.
The dossier proves the system was built. The walkthrough proves you built it.
CAIS Capstone Standard · Authorship Clause · COC-2026-01 §III
Six principles every Capstone is built to satisfy.
A Capstone is the proof. The written examination is the prerequisite.
01 · Performance, not paperwork
The Capstone measures what a candidate can build, deploy, and explain on camera under realistic constraint. Desk research, literature review, and restatement of published frameworks do not satisfy the standard on their own. The candidate must produce a functioning artifact that did not exist before the Capstone window opened.
02 · Reviewed asynchronously, narrated on camera by the candidate
Every Capstone culminates in a mandatory 10–15 minute on-camera walkthrough recorded by the candidate, addressing three fixed prompts (the hardest decision made, the weakest part of the submission, and what would be rebuilt differently). The walkthrough is evaluated, alongside the written dossier, by a four-reviewer panel scoring independently and asynchronously against a published rubric. There is no such thing as a silent pass. A written dossier without a walkthrough does not reach a pass decision. The walkthrough is the proof of first-party authorship.
03 · Ethics is a graded dimension, not a disclaimer
Every Capstone includes an Ethics Note aligned to the CAIS Code of Professional Conduct (COC-2026-01), a standing Ethics reviewer on the panel, and a discrete ethics section within the candidate walkthrough addressing harms, mitigations, and residual-risk posture. Ethics carries weight on the rubric. Failure on the ethics dimension is a failure of the Capstone.
04 · Public specification, private submissions
The prompts, the deliverable spec, the panel composition rules, and the rubric are public. Individual candidate submissions, panelist scoring rationales, and recorded walkthroughs are confidential to the candidate, the panel, the Capstone Committee, and the Ethics Review Board under documented retention policy.
05 · Public Registry record of every review outcome
Every submitted Capstone — pass, fail, or withdrawn — is committed to the Public Verification Registry as a signed attestation. On pass, the credential tier and the Capstone reference hash are encoded into the non-transferable credential instrument. On fail or appeal, the attestation encodes outcome status without disclosing dimension-level scores.
06 · Appealable, not arbitrary
The Capstone is a standards-body decision. Candidates have a defined, time-bound appeal window under published procedure. Appeals are adjudicated by the Capstone Committee of the Standards Council and, where escalated, the Ethics Review Board. The decision chain is documented end-to-end.
Four tiers. Four prompts. One standard of review.
Each prompt is refreshed on a 12-month rotation with a parallel bank of equivalent variants for form-security.
Design, deploy, and document a safe AI-assisted workflow inside a defined role.
Select a recurring task within a real or realistic professional role. Design and implement an AI-assisted workflow that performs the task end-to-end, meets a documented quality bar, and integrates a safety, data-handling, and escalation policy. Deploy it. Document it. Narrate the tradeoffs on camera.
Core artifact (required): a running AI-assisted workflow with documented inputs, outputs, prompts, safety checks, logging, and escalation paths. Screen-recorded execution evidence accepted.
Window: 14 days from prompt assignment to artifact lock. 21 days to walkthrough submission.
Build, evaluate, and ship a production-grade AI system end-to-end.
Construct a deployable AI system — agent, pipeline, application, or composite — that performs a substantive task in a real or realistic deployment context. Cover: data handling, retrieval or context design, agent or orchestration architecture, evaluation methodology, observability, and a documented rollback plan. Narrate and justify the architectural choices on camera against realistic alternatives.
Core artifact (required): a functioning AI system with source or agent-graph artifacts, evaluation results, observability evidence, and a rollback plan. On-camera system demonstration required inside the walkthrough video.
Window: 30 days from prompt assignment to artifact lock. 45 days to walkthrough submission.
Deploy an AI-powered offering into market with evidenced outcomes.
Take an AI-enabled product, service, or internal transformation to market. Evidence real users, real revenue, real cost, or real measurable operational impact. Walk the panel through the unit economics, the positioning, the legal and ethical exposure, and the scale-up posture. Candidates without market exposure may submit a documented internal deployment with measurable operational evidence.
Core artifact (required): a deployed offering with market or operational evidence, unit economics, and a go-forward plan. Evidence must be date-stamped and independently verifiable by the panel.
Window: 30-day Build window inside Prompt Atlas; outcomes window of up to 90 days. Walkthrough submission due within 14 days of outcomes lock.
Design a multi-system AI ecosystem with documented governance.
Design, document, and partially instantiate an AI ecosystem spanning multiple systems, agents, data boundaries, and human decision points. Address: architecture, agent topology, data governance, human-in-the-loop design, ethics and safety policy, incident response, and long-horizon operational viability. Candidate walkthrough includes an extended 15–20 min architectural narration addressing written adversarial prompts provided by the panel at scheduling.
Core artifact (required): ecosystem architecture document, working instantiation of at least two federated components, governance and incident-response documentation, and a peer-reviewable evaluation harness. Portfolio submission permitted; portfolio items must be first-party.
Window: 90 days from prompt assignment to artifact lock. 120 days to walkthrough submission.
What a Capstone submission contains.
Every tier submits the same package: five required components plus an optional slide deck. The artifact scales with tier.
| Component | Purpose | Format | Length bound |
|---|---|---|---|
| 01 · Core Artifact | The built system, workflow, deployment, or ecosystem itself. Panel must be able to inspect, execute, or independently verify it. | Tier-appropriate: running workflow, system, deployed offering, or federated ecosystem | n/a |
| 02 · Strategic Memo | What the artifact is, why it exists, who it serves, how it creates leverage, what the tradeoffs were, and what the path forward is. | PDF, single-column, 11pt minimum | ≤ 2,500 words |
| 03 · Ethics Note | Structured ethical analysis: harms considered, populations affected, mitigations applied, COC-2026-01 clauses engaged, residual risk accepted. | PDF, structured sections per template | ≤ 1,200 words |
| 04 · Risk Register | Table of identified technical, operational, regulatory, ethical, and reputational risks with likelihood, impact, mitigation, and residual-risk posture. | Structured table, template provided | ≥ 10 rows |
| 05 · Walkthrough Video | Candidate narrates the submission on camera, addresses three fixed reflective prompts (hardest decision, weakest part, what would be rebuilt), and demonstrates the core artifact. First-party authorship evidence. No ghost-reading scripts. | MP4 / MOV upload to Prompt Atlas submission portal | 10–15 min (20 min cap for CAIS-A) |
| 06 · Supporting Slide Deck | Optional visual aid referenced during the walkthrough. Not a substitute for on-camera narration. | PDF export of slide deck | ≤ 15 slides |
Submission integrity requirements
- Every deliverable carries the candidate's wallet address, submission timestamp, and prompt reference in header metadata.
- Artifacts produced with AI assistance must declare the assistance surface and models used. Non-declaration is a Code of Conduct violation (COC-2026-01 §III).
- Third-party code, data, or content must be attributed in the Strategic Memo. Unattributed third-party substance in the core artifact is a Code of Conduct violation.
- Submissions are hash-anchored on receipt. A SHA-256 digest of each deliverable is committed to the Public Verification Registry at the moment of lock and published to the candidate as a receipt (see the sketch after this list).
- Late submissions are not accepted. Extensions are granted only under Council-approved accommodation or documented force-majeure.
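The lock-time anchoring step is mechanical enough to sketch. The following is a minimal, non-normative Python illustration, assuming file-based deliverables; the function names and receipt shape are illustrative and are not the Prompt Atlas portal's actual API.

```python
import hashlib
import json
from datetime import datetime, timezone
from pathlib import Path

def sha256_of(path: Path) -> str:
    """Stream a deliverable through SHA-256 so large artifacts never load fully into memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            digest.update(chunk)
    return digest.hexdigest()

def lock_receipt(deliverables: list[Path], wallet: str, prompt_ref: str) -> str:
    """Assemble the per-deliverable digests and metadata published to the candidate at lock."""
    receipt = {
        "wallet": wallet,       # candidate's wallet address, per header-metadata rules
        "prompt": prompt_ref,   # prompt reference carried by every deliverable
        "locked_at": datetime.now(timezone.utc).isoformat(),
        "hashes": {p.name: sha256_of(p) for p in deliverables},
    }
    return json.dumps(receipt, indent=2)
```

Anchoring the same digests in the Public Verification Registry is what lets a neutral third party later confirm that what the panel reviewed is exactly what the candidate locked.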
Asynchronous panel review.
Four reviewers. One decision.
A candidate walkthrough they cannot fake.
No live sittings. No scheduled defense. The candidate submits a complete dossier plus an on-camera walkthrough. Four reviewers score independently against a published rubric. The Chair writes the decision record.
Panel composition
Every Capstone review panel is constructed from four standing roles, drawn from the Faculty roster and the Ethics Review Board, with conflict-of-interest declarations filed per panelist per candidate. Panelist names are disclosed to the candidate when the review window opens. Panel review is completed within 14 days of submission lock.
| Role | Drawn from | Principal responsibility |
|---|---|---|
| Chair (Faculty) | CAIS Faculty (Council-seated) | Owns the review cycle. Enforces timing. Reconciles divergent scores. Writes the final decision record. Breaks dimension-level score ties. |
| Domain SME (tier-matched) | Faculty + Standards Council domain expert roster | Assesses technical rigor and domain competence against the artifact and the candidate's on-camera reasoning. |
| Ethics Reviewer (ERB) | Ethics Review Board | Scores the ethics dimension independently against the Ethics Note and the ethics section of the candidate walkthrough. |
| Peer Panelist (credentialed at tier + 1) | Active CAIS credential-holder at the candidate's tier or higher (minimum: one tier above for CAIS-P/B; Architect for CAIS-O; Faculty-only for CAIS-A) | Represents the credentialed community standard. Non-voting on ethics; full vote on outcome. |
Candidate walkthrough — 10–15 minutes, on-camera, pre-recorded (20 min cap for CAIS-A)
The walkthrough is the evidence of first-party authorship and lived command of the work. It is recorded by the candidate, uploaded to the Prompt Atlas submission portal, and reviewed asynchronously. Every walkthrough must address the following fixed structure:
| Segment | Indicative time | Content |
|---|---|---|
| 01 · Artifact walkthrough | 5–7 min | Candidate narrates the core artifact on camera. Screen-share demonstration expected at Builder tier and above. Architecture or agent topology explained aloud. |
| 02 · Three fixed reflective prompts | 4–5 min | Candidate addresses all three, in order: • The hardest decision — what it was, the alternatives, why the chosen path. • The weakest part — what a skeptical reviewer would attack first. • What would be rebuilt differently — if the candidate started over today. |
| 03 · Ethics & risk segment | 2–3 min | Candidate addresses populations affected, top residual risk accepted, and the COC-2026-01 clauses most material to the work. |
| CAIS-A addendum (Architect only) | +3–5 min | Architect candidates address three written adversarial prompts issued by the panel at review-window open: one technical, one governance, one systemic-risk. |
Asynchronous review workflow — 14 days from lock to decision
| Phase | Window | Content |
|---|---|---|
| 01 · Submission lock & hash anchor | Day 0 | All deliverables (artifact, strategic memo, ethics note, risk register, walkthrough video, optional slide deck) locked. SHA-256 of each deliverable committed to the Public Registry. COI declarations filed by all four panelists. |
| 02 · Independent async scoring | Days 1–7 | Each panelist scores every dimension independently against the published rubric without conferring. Scoring rationales recorded per dimension on the Panel Rating Sheet. |
| 03 · Reconciliation | Days 8–11 | Chair assembles the four rating sheets. Divergences of more than one performance level on any dimension trigger a documented reconciliation exchange (written, not live). Chair breaks unresolved ties. |
| 04 · Outcome vote & decision record | Days 12–14 | Four panelists cast pass/fail votes. Chair writes the decision record, signs it, and submits to the Capstone Committee for Public Registry attestation. |
Outcome vote
Four panelists vote on a discrete pass/fail outcome after reconciling dimension scores. A pass requires at least three of four votes. A 2-2 tie is resolved against the candidate (fail) with automatic referral to the Capstone Committee for a written review, which may reinstate a pass on documented procedural grounds only — not on substantive re-judgment. A unanimous fail is recorded as a clean fail with no Committee review absent appeal.
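The voting rule reduces to a small decision function. A non-normative sketch follows; the outcome labels are illustrative, and a 3-1 fail is recorded as a plain fail because the standard names no referral for that case.

```python
from enum import Enum

class Outcome(Enum):
    PASS = "pass"
    FAIL = "fail"                              # unanimous and 3-1 fails
    FAIL_REFERRED = "fail-committee-referral"  # the 2-2 tie case

def outcome_vote(votes: list[bool]) -> Outcome:
    """Apply the published rule: pass needs at least 3 of 4 votes; a 2-2 tie fails with referral."""
    assert len(votes) == 4, "a Capstone panel always has four voting panelists"
    pass_votes = sum(votes)
    if pass_votes >= 3:
        return Outcome.PASS
    if pass_votes == 2:  # resolved against the candidate, auto-referred to the Committee
        return Outcome.FAIL_REFERRED
    return Outcome.FAIL
```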
Walkthrough retention & integrity
Every walkthrough video is retained by GAISB for 24 months under published retention policy, accessible to the candidate on written request, to the Capstone Committee during any appeal, and to regulators under Regulator Audit Access. Walkthroughs are not published externally and do not exit the GAISB evidence chain absent appeal or subpoena. Synthetic, AI-generated, or third-party-narrated walkthroughs are a Code of Conduct violation (COC-2026-01 §III) — authorship of the walkthrough is authorship of the credential.
The rubric is public. The weights are fixed.
Every panelist scores every dimension on the 4-level scale. Weighted mean is the aggregate. The aggregate decides pass.
| Dimension | What it measures | Weight | Floor |
|---|---|---|---|
| 01 · Technical rigor | Architectural soundness, evaluation methodology, evidence of working deployment, defensibility of technical choices. | 25% | 60% |
| 02 · Code of Conduct alignment | Ethical analysis quality, alignment with COC-2026-01 §§II–IV, treatment of populations affected, residual-risk posture, transparency of mitigations. | 20% | 60% |
| 03 · Business applicability | Real-world usability, market or operational fit, unit economics (where applicable), defensible path to scale or durable value. | 20% | 60% |
| 04 · Strategic clarity | Quality of the Strategic Memo: problem framing, tradeoff reasoning, leverage analysis, go-forward coherence. | 15% | 60% |
| 05 · Walkthrough performance | Clarity and command on camera, quality of reasoning on the three fixed reflective prompts, candor about limitations, demonstrable first-party authorship of the artifact. | 15% | 60% |
| 06 · Artifact quality | Production quality of submitted artifacts: documentation, reproducibility, presentation, attention to detail. | 5% | 60% |
| Aggregate pass threshold | Weighted mean across all six dimensions | 100% | 75% |
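Worked example (non-normative): the 75/60 rule under the fixed weights above, in a short Python sketch. The dimension keys are illustrative shorthand for the six rubric dimensions.

```python
WEIGHTS = {
    "technical_rigor": 0.25,
    "coc_alignment": 0.20,
    "business_applicability": 0.20,
    "strategic_clarity": 0.15,
    "walkthrough_performance": 0.15,
    "artifact_quality": 0.05,
}
FLOOR = 60.0       # every dimension must clear its 60% floor
THRESHOLD = 75.0   # the weighted aggregate must clear 75%

def decide(scores: dict[str, float]) -> tuple[bool, float]:
    """Apply the 75/60 rule to a reconciled score set (0-100 per dimension)."""
    aggregate = sum(WEIGHTS[d] * scores[d] for d in WEIGHTS)
    floors_met = all(scores[d] >= FLOOR for d in WEIGHTS)
    return floors_met and aggregate >= THRESHOLD, round(aggregate, 2)

# Strong everywhere except an under-floor ethics score: the aggregate (79.15)
# clears 75, but the 58 on Code of Conduct alignment breaches the 60 floor -> fail.
example = {
    "technical_rigor": 88, "coc_alignment": 58, "business_applicability": 80,
    "strategic_clarity": 82, "walkthrough_performance": 85, "artifact_quality": 90,
}
print(decide(example))  # (False, 79.15)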
Performance levels
Every dimension is scored on a four-level scale. Level descriptors are dimension-general and are operationalized per-dimension in the Panel Rating Sheet issued to each panelist at the opening of the review window.
| Level | Score | Descriptor (dimension-general) |
|---|---|---|
| Exemplary | 90–100% | Performance exceeds the standard in a way a panel would cite as a reference example. Decisions are defensible and sophisticated. The work is publishable. |
| Proficient | 75–89% | Performance meets the standard. Decisions are defensible. Minor gaps exist but do not compromise the whole. Credential-worthy. |
| Developing | 60–74% | Performance is in range but not yet at standard. Material gaps exist. Candidate shows the capability but has not closed the case. |
| Insufficient | < 60% | Performance does not demonstrate the competency. Gaps are structural, not incidental. Passing at this level would undermine the credential. |
Inter-rater reliability
Panelist dimension scores diverging by more than one performance level on any dimension trigger a documented asynchronous reconciliation exchange. Unreconciled divergence is escalated to the Chair, who may invite a fifth reviewer from the standing Capstone Committee. Inter-rater reliability is computed quarterly and published in the annual CAIS Psychometric Report alongside the examination reliability figures.
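The trigger condition itself is simple to state. A non-normative sketch follows, mapping scores to the four performance levels above and flagging any dimension whose panel spread exceeds one level; the quarterly reliability statistic is defined in the Psychometric Report, not here.

```python
def level(score: float) -> int:
    """Map a 0-100 dimension score to a performance level (0 Insufficient .. 3 Exemplary)."""
    if score >= 90: return 3   # Exemplary
    if score >= 75: return 2   # Proficient
    if score >= 60: return 1   # Developing
    return 0                   # Insufficient

def reconciliation_flags(panel_scores: dict[str, list[float]]) -> list[str]:
    """Return the dimensions where the four panelists diverge by more than one level."""
    return [
        dim for dim, scores in panel_scores.items()
        if max(map(level, scores)) - min(map(level, scores)) > 1
    ]

# Example: scores of 78, 81, 76, 92 span Proficient..Exemplary (one level) -> no flag;
# scores of 78, 81, 58, 92 span Insufficient..Exemplary (three levels) -> reconciliation exchange.
```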
Rubric training
Panelists complete a calibration session before serving on any Capstone review panel and re-calibrate annually. Calibration materials are maintained by the Capstone Committee and include anchor examples at each performance level, across each dimension, across each tier — including exemplar candidate walkthroughs at Exemplary and Insufficient levels.
What each level looks like, dimension by dimension.
Published for candidate preparation and panel calibration. These descriptors are the operational text panelists score against.
Dimension 01 · Technical rigor
Exemplary. Architecture is defensible against realistic adversarial alternatives. Evaluation methodology is sound and quantitative. Deployment evidence is independently verifiable. Decisions under uncertainty are reasoned explicitly.
Proficient. Architecture is sound. Evaluation is documented. Deployment evidence is present. Some choices are justified by convention rather than analysis, but none are indefensible.
Developing. Architecture is plausible. Evaluation is partial or qualitative. Deployment evidence is thin. Some decisions do not survive interrogation.
Insufficient. Architecture has structural flaws. Evaluation is absent or performative. Deployment evidence is missing or fabricated. Core choices cannot be defended.
Dimension 02 · Code of Conduct alignment
Exemplary. Ethics Note is specific, structured, and identifies non-obvious harms. Populations affected are named. Mitigations are concrete and auditable. COC-2026-01 clauses are cited and applied. Residual risk is accepted transparently.
Proficient. Ethics Note covers the expected surface area. Material harms identified. Mitigations are present and proportionate. COC-2026-01 alignment is visible. Residual risk is declared.
Developing. Ethics Note is generic. Harms identification is surface-level. Mitigations are nominal. COC-2026-01 references are present but not applied.
Insufficient. Ethics Note is a disclaimer. Material harms missed. Mitigations absent or cosmetic. COC-2026-01 alignment is not demonstrated. Insufficient on this dimension is a Capstone fail regardless of aggregate score.
Dimension 03 · Business applicability
Exemplary. Clear real-world use, evidenced demand or operational impact, defensible unit economics or operational metrics, credible path to durable value or scale.
Proficient. Real-world use is plausible. Evidence of demand or operational impact is present. Unit economics or operational metrics are defensible at current scale. Go-forward is sensible.
Developing. Use case is plausible but not yet evidenced. Economics or operational metrics are speculative. Go-forward is aspirational rather than analyzed.
Insufficient. Use case is weak or demonstrably serves no real user or operational need. Economics or operational metrics are fabricated or missing.
Dimension 04 · Strategic clarity
Exemplary. The Strategic Memo reads like a briefing a board would act on. Problem framing is crisp. Tradeoffs are stated as tradeoffs, not avoided. Leverage is identified explicitly. Go-forward is concrete.
Proficient. Memo is clear. Problem framing is sound. Tradeoffs are surfaced. Leverage is present in the reasoning even if not named. Go-forward is coherent.
Developing. Memo communicates the work but lacks sharpness. Problem framing is present but broad. Tradeoffs are acknowledged but not reasoned. Go-forward is generic.
Insufficient. Memo reads as description, not reasoning. No clear problem framing. Tradeoffs avoided. No go-forward of substance.
Dimension 05 · Walkthrough performance
Exemplary. Candidate owns the material on camera. The three reflective prompts are answered with precision and self-awareness. Hardest decision is reasoned aloud with clear tradeoff analysis; weakest part is named honestly; rebuild reflection is substantive. Ethics segment is handled with maturity. Authorship is unmistakable.
Proficient. Candidate is in command. The three prompts are addressed accurately. Limitations acknowledged. Ethics segment handled without deflection. Authorship is clear.
Developing. Candidate has command of the work but answers to the reflective prompts are thin or under-specified. Weakest-part or rebuild reflection is generic. Ethics segment is present but surface-level.
Insufficient. Candidate does not command the work. The three reflective prompts are evaded, restated without answer, or addressed with platitudes. Walkthrough appears scripted, read, or narrated by someone other than the candidate. Authorship concerns raised to Chair for review.
Dimension 06 · Artifact quality
Exemplary. Artifacts are production-grade in documentation, reproducibility, and presentation. A new reader can reconstruct intent in under 30 minutes.
Proficient. Artifacts are complete and organized. Documentation is sufficient. Presentation is coherent.
Developing. Artifacts are present but rough. Documentation thin. Presentation inconsistent.
Insufficient. Artifacts are incomplete, disorganized, or inconsistent with the Strategic Memo. A new reader cannot orient.
Every review outcome is recorded in the Public Registry.
Every pass mints a non-transferable credential.
The credential is not an email. It is a signed transaction.
Attestation on pass
On a pass decision, the GAISB Standards Council signs a W3C Verifiable Credentials–conformant attestation to the candidate's registered wallet. The attestation is non-transferable by design. It encodes: credential tier, Capstone prompt reference, Capstone artifact hash, date of issuance, issuing wallet signature, and a reference to this standard (CAP-2026-01) as the governing instrument. The signed on-chain attestation is the authoritative record; the Registry is the human-readable mirror.
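For orientation, the encoded fields can be pictured as a Verifiable Credentials payload. The sketch below is non-normative: the envelope follows the public W3C VC data model, but every identifier, type name, and subject field is illustrative; GAISB's actual attestation schema is not published in this standard.

```python
# Illustrative only — not GAISB's published schema.
attestation = {
    "@context": ["https://www.w3.org/2018/credentials/v1"],
    "type": ["VerifiableCredential", "CAISCapstoneAttestation"],  # hypothetical type name
    "issuer": "did:example:gaisb-standards-council",              # illustrative issuer DID
    "issuanceDate": "2026-06-01T00:00:00Z",
    "credentialSubject": {
        "id": "did:example:candidate-wallet",      # candidate's registered wallet
        "credentialTier": "CAIS-B",
        "capstonePromptRef": "B-2026-03",          # hypothetical prompt reference
        "capstoneArtifactHash": "sha256:9f2c...",  # digest committed at submission lock
        "governingStandard": "CAP-2026-01",
        "nonTransferable": True,
    },
    # "proof": {...}  # issuing wallet signature, per the proof suite in use
}
```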
Attestation on fail or withdraw
On a fail or withdraw decision, a signed attestation records review-cycle date, outcome status, and a redacted outcome code. Dimension-level scores, panelist identities, and narrative rationales are not committed in the Public Registry. The fail attestation is a neutral administrative record; it does not expose the candidate beyond the fact of a submitted, completed, or withdrawn attempt.
Revocation posture
Credentials may be revoked for established Code of Conduct violations discovered post-issuance. Revocation instructions are signed by the Standards Council and committed to the ledger as a subsequent attestation referencing the original issuance transaction. The chain retains both instructions in perpetuity; the human-readable Verification endpoint reports the credential as revoked with reason code. There is no off-chain erasure.
A Capstone credential is a professional instrument. In evidentiary, regulatory, and employment contexts, the question is not "did you pass" but "can that be verified by a neutral third party, at any time, without asking GAISB." Committing the attestation in the Public Registry answers that question structurally. The registry does not depend on GAISB continuing to operate or continuing to maintain a database. The credential survives the institution.
When a Capstone fails. What happens next.
Every failed candidate has a documented procedural path and a substantive re-sit path.
Appeal window — procedural only
Candidates may file an appeal within 30 days of written notice of a fail decision. Appeals are heard by the Capstone Committee of the Standards Council and, where ethics-dimension outcomes are contested, jointly with the Ethics Review Board. Appeals are procedural only — they examine whether the review was conducted per this standard (panel composition, COI declarations, review-window timing, rubric application, reconciliation record). Appeals are not a substantive re-judgment of the artifact.
The Committee may: uphold the decision, remand for re-review (with or without the original panel), or reinstate pass on documented procedural grounds. Committee decisions are final and committed in the Public Registry as appeal-outcome attestations referencing the original review-outcome transaction.
Re-sit policy
- Waiting period: 90 days between Capstone attempts, regardless of tier.
- Prompt refresh: candidates sitting again draw from a different prompt variant. The 12-month rotation window means no candidate sits the same prompt twice.
- Panel composition: no returning panelists from the original review, except by written candidate consent.
- Attempt cap: three Capstone attempts per 24-month rolling window. A fourth attempt requires documented remedial pathway sign-off from Faculty.
- Fee structure: candidates failing at the Developing level on no more than two dimensions may elect a discounted re-sit within the waiting-period boundary; policy discretion of the Capstone Committee, not a right.
- Code of Conduct bans: where Capstone misconduct is established (ghostwriting, fabricated artifacts, unattributed third-party substance), re-sit is barred under the Sanction Guidelines Matrix — typically five years, or permanent for misrepresentation of first-party authorship. See Code of Professional Conduct.
Remedial pathway
Candidates failing any Capstone attempt are offered a structured remedial pathway inside Prompt Atlas, including: directed review against the dimensions that failed, Faculty-reviewed practice Builds, and readiness sign-off before the next sitting. The remedial pathway is not punitive; it is how the credential is kept honest on both sides.
How this specification can be challenged. How it can be audited.
A published rubric you can't contest is a slogan, not a standard.
Public comment
This specification is open for structured public comment for 180 days from the publication of each revision. Comments are submitted through the Standards Library public-comment form, received on the public record, and disposed of by the Capstone Committee with a published disposition matrix (accepted, accepted-with-modification, rejected-with-reason, deferred). Material changes trigger a new 180-day window.
Regulator audit access
National competent authorities and recognized accreditation bodies may request Regulator Audit Access under the Regulator Engagement Office charter. Access includes, under NDA: anonymized candidate walkthrough recordings for specified cohorts, panel composition and COI declarations, aggregate rubric distributions, panelist reconciliation records, Committee appeal-decision records, and the Capstone sections of the annual CAIS Psychometric Report. Access is free of charge. See Cryptographic Auditability.
Employer due diligence
Employers assessing CAIS Capstone holders may request a Capstone Briefing through the Employer Recognition Network. Briefings cover the prompt bank in aggregate, the rubric, the asynchronous review procedure, the candidate walkthrough requirement, and interpretive guidance for reading a CAIS attestation in hiring contexts. Briefings do not disclose individual candidate submissions or walkthrough recordings. See For Employers.
Document Control
A reviewed body of work.
Scored by named reviewers. Narrated on camera by the candidate. Recorded in the Public Registry.
Every CAIS Capstone is reviewed asynchronously by four named panelists, scored against a six-dimension weighted rubric, anchored by a mandatory 10–15 minute candidate video walkthrough, and attested in the Public Verification Registry. The prompt bank is curated. The rubric is public. The 75/60 rule is fixed. Delivered inside Prompt Atlas.
Authored by GAISB · Reviewed inside Prompt Atlas · Proven by Real Builds