Technical Whitepaper

BEDAMD: Prompt Architecture
as the Practical Solution
to AI Hallucination

How a portable operating system layered on existing AI models eliminates confabulation, enforces citation, and delivers domain-expert reliability — without new models, new infrastructure, or a nine-figure R&D budget.

Published: April 2026
Version: 1.1
Author: BEDAMD / jngmedia.com
Classification: Public — Credibility Literature

AI language models hallucinate. This is not a controversial statement — it is a documented, widely acknowledged technical limitation of transformer-based large language models. The models generate plausible text by predicting likely token sequences; when their training data is insufficient, ambiguous, or simply absent for a given query, they fill the gap with confident-sounding fiction.

The industry's response has been to throw infrastructure at the problem: retrieval-augmented generation pipelines, fine-tuned domain models, vector databases, agent frameworks, and reinforcement learning from human feedback. These approaches reduce hallucination at the margins. They do not eliminate it. And they introduce substantial cost, complexity, and maintenance overhead.

BEDAMD takes a different approach. Rather than attempting to fix the model, BEDAMD operates as a prompt-architecture layer — a portable operating system that runs on top of existing frontier models (Gemini and Claude) and enforces grounding, citation, and domain-specialist routing through structured prompt architecture alone. No new model. No infrastructure. No data center. Activated by a single loader key pasted into existing AI settings.

The result is domain-expert reliability grounded in a curated 79-volume physical reference library, delivered through six specialist roles managed by a single triage manager, at a cost of prompt tokens and nothing else.

Well. I'll BEDAMD.

Section 01

The Problem: Hallucination Is Not A Bug. It's Architecture.

Large language models do not retrieve facts. They generate text. The distinction matters enormously in practice and is almost universally underappreciated by end users who interact with AI systems through conversational interfaces designed to create the impression of knowledgeable interlocutors.

When a user asks a frontier model about medication interactions, structural load calculations, tenant law, or species toxicity, the model produces a response that sounds authoritative. It may cite plausible-sounding sources. It may use correct technical vocabulary. It may be completely wrong. And it will not hedge unless it has been specifically trained or prompted to do so.

This is not a flaw that can be patched. It is a consequence of how autoregressive language models work: they predict the next most likely token given prior context. In domains where the training corpus is thin, inconsistent, or contradicted by more recent developments, the model's predictions become unreliable in proportion to the gap between what it was trained on and what it is being asked.

The Specific Failure Modes

Three failure modes are particularly consequential for the domains BEDAMD addresses:

Confident confabulation: The model generates specific-sounding false information — wrong drug dosages, incorrect legal statutes, nonexistent part numbers — with the same fluency and confidence it applies to correct information. The user has no signal to distinguish between them.

Temporal drift: The model's knowledge has a training cutoff. Laws change. Drug interactions get updated. New species classifications are published. The model's answer may have been correct at training time and be dangerously incorrect now.

Cross-domain bleed: When a query touches multiple domains — a medical question with legal implications, an engineering problem with financial components — the model attempts to synthesize across its training data with no structured mechanism to ensure each domain is handled by appropriate expertise or sources.

The users most at risk from these failure modes are not enterprise teams with prompt engineers and QA processes. They are individuals making real decisions — health decisions, legal decisions, safety decisions — based on AI responses they have no practical means of verifying.

"The AI isn't lying. It just doesn't know the difference between knowing something and generating something that sounds like knowing something. That's a meaningful distinction when the question is whether to take your child to the emergency room. Well. I'll BEDAMD."

Section 02

Why Current Solutions Fall Short

The AI industry has not ignored the hallucination problem. Significant capital and engineering effort have been applied to it. The approaches fall into four broad categories, each with meaningful limitations:

Retrieval-Augmented Generation (RAG)

RAG systems augment model responses by retrieving relevant text chunks from a vector database and including them in the prompt context. This reduces hallucination when relevant chunks are retrieved accurately. It introduces new failure modes: retrieval quality depends on embedding quality, chunk granularity, and database completeness. Retrieved chunks may be outdated, contextually inappropriate, or misleading when presented without the surrounding material that gives them meaning. RAG systems require infrastructure — vector databases, embedding models, indexing pipelines, and ongoing maintenance. They are probabilistic, not deterministic. A RAG system cannot guarantee that a response about medication interactions is drawn from the Physicians' Desk Reference rather than a forum post that was indexed alongside it.
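
The probabilistic nature of the retrieval step can be made concrete with a toy sketch. This is an illustrative model, not any particular vendor's pipeline: the hand-written vectors stand in for learned embeddings, and the `retrieve` function is a hypothetical stand-in for a real vector search.

```python
# Toy sketch of RAG's retrieval step. Ranking is purely geometric:
# a forum post embedded near the query scores as well as an
# authoritative reference — the failure mode described above.
import math

def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def retrieve(query_vec, chunks, k=2):
    """Return the k chunk texts whose embeddings best match the query."""
    ranked = sorted(chunks, key=lambda c: cosine(query_vec, c["vec"]), reverse=True)
    return [c["text"] for c in ranked[:k]]

chunks = [
    {"text": "PDR monograph excerpt",  "vec": [0.9, 0.1, 0.0]},
    {"text": "forum post on dosing",   "vec": [0.8, 0.2, 0.1]},
    {"text": "unrelated cooking blog", "vec": [0.0, 0.1, 0.9]},
]

print(retrieve([1.0, 0.0, 0.0], chunks))
# → ['PDR monograph excerpt', 'forum post on dosing']
```

Note that the forum post is retrieved right alongside the authoritative source: nothing in the geometry distinguishes provenance, which is exactly the guarantee a similarity search cannot make.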

Fine-Tuning

Domain-specific fine-tuning reduces hallucination by adjusting model weights to reflect curated training data in a target domain. It is expensive, requires sustained access to high-quality labeled data, and must be repeated whenever the underlying base model is updated. Fine-tuned models do not generalize across domains — a medically fine-tuned model has no particular competence in structural engineering or tenant law. Fine-tuning also "bakes in" a knowledge cutoff at the time of training; the model cannot be updated incrementally.

Guardrails and Output Filtering

Post-generation filtering systems attempt to detect and flag hallucinated content after the model produces it. Detection accuracy is imperfect by definition — a system that could reliably detect all hallucinations would not need to allow the model to hallucinate in the first place. Filtering adds latency and cost without addressing the underlying generation problem.

RLHF and Constitutional AI

Reinforcement learning from human feedback and related alignment techniques improve model behavior across many dimensions including factual accuracy. They have not eliminated hallucination and are not expected to — the fundamental limitation is architectural, not behavioral.

The Common Thread

Every current approach to hallucination reduction either requires significant infrastructure investment, introduces new failure modes, or addresses symptoms rather than causes. None of them guarantee that a specific response is grounded in a specific, authoritative source that the user can independently verify.

BEDAMD addresses the cause: the model is not grounded. The solution is to ground it — explicitly, structurally, and verifiably — through prompt architecture.

"Everyone else is building a better guardrail for the edge of the cliff. We moved the road. Well. I'll BEDAMD."

Section 03

The BEDAMD Architecture

BEDAMD is a prompt-architecture operating system. It does not modify the underlying model. It does not require external infrastructure. It operates entirely within the model's context window, establishing a structured behavioral environment through a carefully engineered set of persistent instructions that the model carries into every conversation.

The architecture has three primary components:

The Manager Role

Bea Shepherd, the BEDAMD Manager, is the system's triage and routing layer. Every query enters the system through the Manager, which evaluates the query against the Master Manifest — a structured index of the 79-volume reference library organized by domain — and routes the query to one or more specialist roles. The Manager operates invisibly; the user experiences a single coherent response, not a visible routing decision. The Manager also enforces cross-disciplinary audit requirements: when a query in one domain has safety implications for another (a machining question with medical implications, a legal question with financial consequences), the Manager flags the intersection and ensures the appropriate secondary specialist is engaged.

The Specialist Roles

Six specialist roles operate under the Manager's direction, each grounded in a dedicated stack of physical reference volumes. Each specialist carries domain-specific logic hierarchies that govern how they process and prioritize sources within their domain. Each specialist is instructed to cite sources by book title and chapter or section — not page numbers, which vary by edition, but structural citations that remain valid across printings. Specialists do not improvise. They derive. When a question cannot be answered from their reference stack, they say so.

The Citation Enforcement Layer

Citation is not optional in BEDAMD. Every substantive technical claim must be grounded in a specific volume from the Master Manifest. The instruction architecture establishes re-verification intervals by domain — medical and safety topics require source re-consultation more frequently than legal or logistics topics, reflecting the relative stakes of drift in each domain. This creates what the Grok analysis of the BEDAMD architecture termed "Variable-Rate Grounding" — a dynamic calibration of verification frequency to risk profile.

"The books run the show. The AI is the voice. That's the whole architecture. It fits in a prompt. Well. I'll BEDAMD."

Section 04

The Reference Library: 79 Volumes, Zero Improvisation

The BEDAMD reference library is a curated collection of 79 physical reference volumes organized across eight domains. The curation is the product. Selecting reference materials that are authoritative, internally consistent, widely recognized within their domains, and appropriate for the target user population required 35+ years of hands-on practice across machining, legal research, broadcast operations, and self-sufficiency — the domain expertise that distinguishes a useful reference library from a collection of books that happen to be on a shelf.

The library is intentionally bounded. BEDAMD does not claim to answer every question in every domain — it claims to answer the questions its library covers, accurately, with citations, every time. A bounded system that performs reliably within its bounds is more valuable, and less dangerous, than an unbounded system that performs unreliably everywhere.

Domain Coverage

Domain | Specialist | Key Reference Volumes | Count
Medical & Biological | HAWKEYE | Merck Manual 17th, Netter's Anatomy 5th, PDR 64th, SOF Medical HB, Austere Medicine 3rd | 13
Machining & Engineering | SARGE / CHIEF | Machinery's HB 24th, Marks' Mech Eng 9th, ASM Metals Desk Ed, McMaster-Carr #130 | 12
Law & Procedure | FRANK | Black's Law 9th, Emanuel Torts/CivPro/ConLaw, Full Nolo library (6 volumes) | 11
Physics, Math & Finance | CHIEF | Roark's Stress & Strain 6th, Serway Physics 5th, Ross Corp Finance 10th, Montgomery Applied Stats 4th | 8
Florida Species & Field | DARWIN | NAS Field Guide to Florida, FNPS Wild Edibles 2021, Sibley Birds, NAS Insects & Spiders | 7
Home, Repair & Survival | SARGE | Wiring a House (Pros by Pros) 4th, Plumbing (Pros by Pros) 3rd, Home Comforts, Ency Country Living 50th | 8
Business & Strategy | PENNY | The Art of War (Griffith trans.), The Decision Book (Krogerus), See You at the Top (Ziglar), How to Win Friends (Carnegie) | 6
Reconstruction & Resilience | SARGE / CHIEF | The Knowledge (Dartnell), How to Invent Everything (North), MacWelch Survive Anything/Off Grid | 4

All volumes are physical books, held in the BEDAMD reference library. ISBNs are documented in the Master Manifest v2.2. Citations reference book title and chapter or section — structural references that remain valid across printings and editions.

"Seventy-nine books. Every one of them chosen on purpose. Every one of them on a physical shelf. Every answer traceable back to a specific volume a user can pick up and verify. That's not a feature. That's the whole point. Well. I'll BEDAMD."

Section 05

The Specialist Roster

BEDAMD operates through a Manager and six domain specialists. Each role carries a defined logic hierarchy — a structured sequence of reference consultation that governs how the specialist processes queries within their domain. The hierarchy ensures consistent behavior: physics before specifications before structural analysis before logistics before finance, for example, in CHIEF's domain. The specialist never skips steps and never improvises facts that can be derived from the reference stack.

Character | Call Sign / Role | Domain | Logic Priority
Bea Shepherd | MANAGER | Triage, routing, cross-domain audit, user interface | Master Manifest v2.2 triage; LEAD/PEER assignment; safety audit triggers
CHIEF | Strategy & Engineering | Physics, structural analysis, engineering tolerances, ROI, NPV, statistics, logistics | Physics → Specifications → Structural → Logistics → Finance
SARGE | Shop & Fabrication | Machining, fabrication, electrical, plumbing, sheet metal, survival, field repair | Machinery's → McMaster-Carr → Utilities → Survival reference
HAWKEYE | Medical & Biological | Anatomy, pharmacology, first aid, dental, botanical, field medicine | Symptom Guide → Merck → SOF/Austere → PDR → Botanical
FRANK | Legal & Procedure | Civil procedure, torts, small claims, business law, court forms, tenant rights | Black's Law → Emanuel → Nolo guides → Forms reference
DARWIN | Species & Field | Florida species ID, foraging, toxicity, birds, insects, habitat, field safety | Wild Edibles → NAS Florida → Sibley → Insects & Spiders → Botanical
Penny Ledger | Business & Strategy | Business strategy, operations, interpersonal effectiveness, decision frameworks, life coaching | Art of War → Decision Book → Carnegie → Ziglar → domain reference

The triage convention is LEAD/PEER: for any query touching multiple domains, the most relevant specialist is designated LEAD and provides primary analysis. Additional specialists provide "Peer Notes" addressing their domain's intersection with the query. The Manager makes this assignment invisibly — the user receives a unified response, not a visible committee discussion.

"Six specialists and a manager who has never once been flustered. The right one picks up automatically. You just ask the question. Well. I'll BEDAMD."

Section 06

Triage & Routing Logic

The Manager role implements a structured triage process on every query. The triage operates against the Master Manifest v2.2, which maps subject matter to specialist domains and specific reference volumes. The routing is deterministic in its logic, not probabilistic — the same query will route to the same specialist or specialist combination every time.

Single-Domain Queries

When a query clearly falls within a single specialist's domain — a medication interaction question, a thread specification question, a small claims court procedure question — the Manager routes directly to that specialist. The specialist responds as LEAD with mandatory citation. Responses close with domain-specific formatting: CHIEF and SARGE responses end with a Foreman Note and Next Phase Requirements. Other specialists follow their domain's citation conventions.

Multi-Domain Queries

Real-world questions frequently cross domain boundaries. "Is this repair safe to do myself?" involves both SARGE (the repair procedure) and potentially HAWKEYE (the safety implications) and FRANK (the permit and liability implications). The Manager identifies the primary domain, assigns a LEAD, and flags secondary domains for Peer Notes appended to the primary response. The user receives a single coherent answer that reflects all relevant specialist input.
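
The LEAD/PEER assignment can be sketched as a deterministic function over domain tags. This is a conceptual model only — BEDAMD itself is implemented as prompt instructions inside the model's context window, not as executable software — and the `SPECIALISTS` mapping and function names below are hypothetical illustrations.

```python
# Conceptual model of LEAD/PEER triage. The same domain tags always
# yield the same routing — deterministic, unlike probabilistic
# retrieval. All names here are hypothetical, not product code.

SPECIALISTS = {
    "repair":  "SARGE",
    "medical": "HAWKEYE",
    "legal":   "FRANK",
}

def triage(domains):
    """Primary domain becomes LEAD; secondary domains contribute Peer Notes."""
    lead, *secondary = domains
    return {
        "lead": SPECIALISTS[lead],
        "peer_notes": [SPECIALISTS[d] for d in secondary],
    }

# "Is this repair safe to do myself?" touches repair, medical, and legal:
print(triage(["repair", "medical", "legal"]))
# → {'lead': 'SARGE', 'peer_notes': ['HAWKEYE', 'FRANK']}
```

The user never sees this assignment; they receive one unified response with the Peer Notes folded in.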

The Safety Audit Trigger

When shop or fabrication data involves safety considerations, the Manager automatically triggers a cross-disciplinary audit requiring HAWKEYE or FRANK review as appropriate. This is not optional — it is a mandatory architectural feature that cannot be suppressed by query framing. A question about welding in a confined space triggers both SARGE's fabrication expertise and HAWKEYE's medical review of confined-space hazards. The user doesn't have to know to ask about both.

User Direction

Under normal operation, the Manager handles routing automatically and invisibly. Users who prefer to can direct queries to a specific specialist by name — addressing FRANK directly, for example, for a purely legal question. This capability exists but is not required. The system is designed so that most users never need to think about routing at all.

"You ask. The right expert answers. You didn't have to know which one to ask. That's not convenience — that's the point of having a manager. Well. I'll BEDAMD."

Section 07

Citation Enforcement & Variable-Rate Grounding

Citation in BEDAMD is architectural, not aspirational. The system does not encourage specialists to cite sources — it requires them to, through prompt instructions that make uncited technical claims structurally non-compliant with the specialist's operating parameters.

The citation format is book title plus chapter or section designation. Page numbers are deliberately excluded: page numbers vary by edition, printing, and format. Chapter and section designations are structural — they identify the same material regardless of which edition a user holds. This makes BEDAMD citations independently verifiable by any user who owns the referenced volume.
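
The edition-independence of a structural citation can be illustrated with a small data model. This is hypothetical code — BEDAMD's citations are enforced by prompt instruction, not software — and the section value below is a placeholder, not a real chapter mapping.

```python
# Illustrative model of a structural citation: book title plus
# chapter/section, deliberately omitting page numbers so the same
# reference resolves in any printing. Hypothetical code.
from dataclasses import dataclass

@dataclass(frozen=True)
class Citation:
    title: str    # volume from the Master Manifest
    section: str  # chapter or section designation — never a page number

    def render(self) -> str:
        return f"[{self.title}, {self.section}]"

# Placeholder section for illustration only:
c = Citation("Merck Manual 17th", "Ch. 4")
print(c.render())
# → [Merck Manual 17th, Ch. 4]
```

Because the dataclass carries no page field at all, an edition change cannot invalidate the reference — the constraint is structural, mirroring the citation rule itself.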

Variable-Rate Grounding

Different domains have different tolerance for drift — the gradual accumulation of error that occurs in any system operating over many turns of conversation without actively re-anchoring to source material. BEDAMD implements domain-specific re-verification intervals:

Domain | Re-verification Interval | Rationale
Medical / Safety | Every 4 turns | Physiological monitoring priority; highest stakes for drift
Engineering / Shop | Every 8 turns | Mechanical tolerances — drift can cause structural or safety failure
Legal / Logistics | Every 10 turns | Procedural accuracy; lower rate of consequential drift

At each re-verification interval, the specialist is instructed to explicitly re-consult the Master Manifest and confirm that the current response trajectory remains grounded in the designated reference stack. The first line of a re-verified response states: "Source Verified: [Book Title] [Chapter/Section]." This creates an auditable trail of source verification within the conversation.
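
The interval logic amounts to a simple turn-counter check. The sketch below is a conceptual model of Variable-Rate Grounding — the actual mechanism is a prompt instruction, and all names here are hypothetical.

```python
# Conceptual sketch of Variable-Rate Grounding: each domain carries a
# re-verification interval, and a turn counter determines when the
# specialist must re-consult the Master Manifest. Hypothetical code.

REVERIFY_EVERY = {
    "medical": 4,       # highest stakes for drift
    "engineering": 8,   # tolerances; structural/safety failure risk
    "legal": 10,        # procedural accuracy; slower drift
}

def needs_reverification(domain: str, turn: int) -> bool:
    """True on turns where the specialist must re-anchor to sources."""
    return turn % REVERIFY_EVERY[domain] == 0

# Medical queries re-verify on turns 4, 8, 12, ...
print([t for t in range(1, 13) if needs_reverification("medical", t)])
# → [4, 8, 12]
```

The higher-stakes domain simply hits its re-anchor turns more often — the "dynamic calibration of verification frequency to risk profile" described above.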

"Trust but verify. Except we do the verifying automatically, on a schedule, by domain risk level, and we tell you when we've done it. That's not a feature. That's operational discipline. Well. I'll BEDAMD."

Section 08

Cross-Domain Safety Architecture

One of the most consequential failure modes of domain-specific AI systems is the failure to recognize when a query in one domain has safety implications in another. A machining question about cutting fluid chemistry is also a chemistry exposure question. A plumbing question about drain cleaners is also a chemical safety question. A legal question about business formation is also a financial liability question.

BEDAMD implements mandatory cross-disciplinary safety audits as a structural feature of the Manager role. The audit protocol operates as follows:

Shop data involving chemicals or confined spaces triggers mandatory HAWKEYE cross-reference for exposure, physiological hazard, and first aid protocols.

Medical data involving treatments or interventions triggers mandatory cross-reference with legal liability and informed consent frameworks when relevant.

Field survival or foraging data triggers mandatory HAWKEYE cross-reference for toxicology and DARWIN cross-reference for species confirmation before any consumption guidance is provided.

Financial or business decisions with structural or operational components trigger CHIEF cross-reference for physical feasibility before financial analysis proceeds.
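
These triggers amount to a small rule table mapping hazard attributes to mandatory reviewers. The sketch below is hypothetical code modeling the prompt-level protocol, not the product itself; the attribute names are illustrative.

```python
# Conceptual sketch of the mandatory cross-disciplinary audit rules.
# Each hazard attribute maps to the specialist whose review is forced;
# the check does not depend on how the query is framed. Hypothetical.

AUDIT_RULES = {
    "chemicals": "HAWKEYE",       # exposure, physiological hazard, first aid
    "confined_space": "HAWKEYE",
    "foraging": "DARWIN",         # species confirmation before consumption
    "treatment": "FRANK",         # liability / informed consent, when relevant
    "structural": "CHIEF",        # physical feasibility before finance
}

def mandatory_reviewers(attributes: set[str]) -> set[str]:
    """Return every specialist whose review the query's hazards require."""
    return {AUDIT_RULES[a] for a in attributes if a in AUDIT_RULES}

# Welding in a confined space: SARGE leads, HAWKEYE review is forced.
print(sorted(mandatory_reviewers({"confined_space", "chemicals"})))
# → ['HAWKEYE']
```

Because the lookup runs on the query's attributes rather than its wording, omitting the safety angle from the question does not suppress the audit.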

The Austere Pivot — a feature borrowed from the SOF operational planning model — provides a fallback protocol when standard resources are unavailable. When SARGE or HAWKEYE identify that the user is operating in a resource-constrained environment, the system pivots to improvised solutions derived from The Knowledge (Dartnell) and the MacWelch survival references, maintaining grounded citation even in field-expedient contexts.

"The question you didn't know to ask is often the most important one. We ask it for you. Automatically. Every time. Well. I'll BEDAMD."

Section 09

BEDAMD vs. RAG: A Direct Comparison

Retrieval-Augmented Generation is the most widely deployed approach to hallucination reduction in production AI systems as of 2026. A direct comparison is instructive.

Dimension | BEDAMD | Typical RAG System | Advantage
Knowledge grounding | Deterministic — every fact forced through physical reference library | Probabilistic — retrieves "best match" chunks from vector database | BEDAMD
Latency | Fixed, near-zero (no database round-trip) | +150–800 ms per query for embedding + vector search | BEDAMD
Citation verifiability | Physical book + chapter/section — independently verifiable by user | Retrieved chunk with source URL — may be unavailable, paywalled, or changed | BEDAMD
Infrastructure cost | Zero — prompt tokens only | Vector DB hosting + embedding model + indexing + maintenance | BEDAMD
Privacy & offline resilience | Everything in the prompt — works air-gapped or in austere environments | Requires live database connection | BEDAMD
Anti-drift architecture | Engineered zero-point + domain-specific re-verification intervals | Dependent on prompt engineering and chunk quality | BEDAMD
Raw knowledge volume | Bounded to 79-volume library — by design | Can ingest millions of pages | RAG for volume; BEDAMD for precision
Cross-domain safety audit | Mandatory, architectural, automatic | Must be built and maintained separately | BEDAMD

RAG is the appropriate solution when the knowledge base is large, dynamic, and cannot be curated into a bounded reference set — a corporate document repository, a continuously updated news database, a legal database with millions of case files. BEDAMD is the appropriate solution when the knowledge base is finite, authoritative, and intended to function as a single verified source of truth. The comparison is not that one is better than the other in absolute terms — it is that they solve different problems, and BEDAMD solves the problem it is designed to solve better than RAG does.

"RAG is a search engine. BEDAMD is a disciplined reference librarian who never improvises. Different tools. Different jobs. Know which one you need. Well. I'll BEDAMD."

Section 10

Performance Characteristics & Resource Profile

The BEDAMD prompt architecture has been independently analyzed for token consumption, context window impact, latency, and output characteristics. The following figures are derived from that analysis.

Context Window Footprint

The full BEDAMD operating system — Manager role plus all six specialist roles plus Master Manifest v2.2 — occupies approximately 3,800–4,300 tokens of context. This represents a fixed overhead cost on every conversation turn.

Model Context Window | BEDAMD Overhead | Practical Impact
128k tokens (standard) | ~3.2% | Low — negligible for most use cases
1M tokens (Gemini 1.5+) | <0.5% | Effectively negligible
2M tokens (Gemini 2.0+) | <0.25% | Operationally zero
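
The overhead percentages are simple ratios of prompt footprint to context window, and the arithmetic can be checked directly. The footprint constant below uses the midpoint of the 3,800–4,300-token range quoted above.

```python
# Reproducing the overhead figures: BEDAMD footprint divided by the
# model's context window, expressed as a percentage.

FOOTPRINT = 4_100  # tokens, midpoint of the stated 3,800–4,300 range

for window in (128_000, 1_000_000, 2_000_000):
    overhead = FOOTPRINT / window * 100
    print(f"{window:>9,} tokens: {overhead:.2f}% overhead")
```

The three printed values land at roughly 3.2%, 0.41%, and 0.21%, matching the ~3.2%, <0.5%, and <0.25% figures in the table.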

Latency Profile

First-token latency increases by approximately 5–15% compared to an equivalent unstructured prompt, due to the additional context the model must process before generating a response. On modern frontier model infrastructure, this delta is measured in hundreds of milliseconds — imperceptible in conversational use. There is no retrieval latency because there is no retrieval step.

Output Characteristics

BEDAMD responses are structurally longer than equivalent unstructured responses by approximately 20–40%, due to mandatory citation formatting, Foreman Notes, and domain-appropriate closing elements. This additional verbosity is a feature, not a bug — it is the mechanism by which grounding is surfaced to the user in a verifiable form.

Deployment Context

BEDAMD is designed for consumer deployment on Gemini and Claude personal settings interfaces. In this context, the resource profile is irrelevant to the user — they experience longer, more accurate, citation-grounded responses with no perceptible latency change. The overhead is borne by the model infrastructure they are already paying for through their existing subscription.

"Medium-heavy prompt. Heavyweight results. The math works out in favor of the results. Well. I'll BEDAMD."

Section 11

The Liability Framework: Enhancement, Not Replacement

BEDAMD operates in domains — medical, legal, structural engineering — where the consequences of incorrect information can be severe. The liability framework is not a disclaimer appended to a terms of service document. It is an architectural feature woven into the system's operation and its public-facing presentation.

The core principle is simple: BEDAMD is not a substitute for licensed professionals. It is the most informed version of you walking into a conversation with a licensed professional.

This framing is not defensive hedging. It is an accurate description of what BEDAMD actually does — and why that is genuinely valuable:

Before the appointment: BEDAMD gives users the vocabulary, the framework, and the right questions to ask. A patient who understands the differential diagnosis framework for their symptoms asks better questions. A client who understands civil procedure uses their attorney's time more efficiently.

During the interaction: Users who understand the reference materials their professionals trained on recognize when an explanation is complete and when it warrants follow-up. They are harder to mislead and easier to inform.

After the consultation: Users who can cross-reference professional advice against the same reference materials the professional used make better decisions about compliance, follow-up, and when to seek a second opinion.

BEDAMD specialists are instructed to make the enhancement/replacement distinction explicit within their responses — not only in disclaimers but in the framing of every substantive recommendation. HAWKEYE does not say "you have X condition." HAWKEYE says "this symptom pattern is consistent with X in the Merck Manual — here is what that means for your conversation with your physician." FRANK does not say "you will win this case." FRANK says "the procedural framework from the Nolo guide gives you these options."

Liability Statement

BEDAMD is not a licensed medical provider, legal counsel, structural engineer, financial advisor, or any other licensed professional service. Information provided through BEDAMD is for educational reference purposes only, grounded in the published reference volumes listed in the Master Manifest v2.2. Users should consult appropriately licensed professionals for all medical, legal, financial, structural, and safety decisions. The reference library grounding that distinguishes BEDAMD from standard AI outputs does not confer professional licensure on the system or its outputs.

The disclaimer is not a limitation of BEDAMD's value. It is the honest description of what BEDAMD is — and what it is is genuinely useful.

"Knowing the right questions is half the battle. Knowing enough to recognize a bad answer is the other half. BEDAMD covers both. The professional covers the rest. Well. I'll BEDAMD."

Section 12

Deployment: Three Steps, No Infrastructure

BEDAMD deployment requires no installation, no configuration, no API access, no development environment, and no technical expertise. It requires an active subscription to Gemini or Claude — both widely available consumer AI platforms — and a BEDAMD activation code.

Step 1: Subscribe and receive activation code. BEDAMD subscription provides a single activation code delivered to the subscriber's email. The code contains the complete BEDAMD operating system in encoded form.

Step 2: Paste the code into AI settings. Both Gemini and Claude provide a custom instructions or system prompt field in their personal settings interface. The activation code is pasted into this field. No other configuration is required.

Step 3: Every conversation is now grounded. From the moment the activation code is in place, every conversation on that AI platform routes through the BEDAMD operating system. The Manager is active. The specialists are available. The library is engaged. The user asks questions.

The activation code is portable. A subscriber can deploy BEDAMD on Gemini, Claude, or both simultaneously using the same code. When either platform updates their underlying model, BEDAMD continues to operate — the operating system is model-agnostic. When BEDAMD releases updates, a new activation code is provided and the swap takes thirty seconds.

Platform Compatibility

BEDAMD is designed for and tested on Google Gemini and Anthropic Claude. Both platforms provide the custom instructions interface required for deployment. Both platforms operate large-context frontier models that handle the BEDAMD context footprint without constraint. BEDAMD is not affiliated with Google or Anthropic.

"Copy. Paste. Done. The entire operating system. No PhD required. No DevOps team. No quarterly infrastructure review. Well. I'll BEDAMD."

Section 13

Conclusion

AI hallucination is a solved problem within a bounded domain — if the domain is defined by a curated physical reference library, and if the AI is constrained by a prompt-architecture operating system that enforces grounding, citation, and specialist routing on every response.

BEDAMD does not claim to solve hallucination for all domains, all queries, or all users. It claims to solve it for the domains it covers, the queries its library addresses, and the users who need reliable answers in those domains more than they need unlimited answers in all domains.

That is a narrower claim than the AI industry typically makes. It is also, unusually for the AI industry, a claim that is demonstrably true in practice. Every BEDAMD response is traceable to a specific physical book on a specific shelf. Every citation can be independently verified. Every specialist operates from a defined reference stack with a defined logic hierarchy. The system does what it says it does, every time, within its defined bounds.

For the user trying to understand their child's symptom at ten o'clock at night, the user trying to figure out whether their landlord can legally do what they just did, the user trying to verify whether the fastener they are about to use will hold the load — the question is not whether BEDAMD covers everything. The question is whether it covers this. And within its library, the answer is yes. Grounded, cited, and verifiable.

That is what it was built to do.

Well. I'll BEDAMD.

"Well. I'll BEDAMD."

— The appropriate response upon discovering that the answer was in the books all along.

Read Enough?
The Crew Is Ready.

Request The Full Brief. All six specialists. The whole library.