
Intelligence Analysis Quick Start Guide

5-minute reference for ACH methodology, Structured Analytic Techniques, Admiralty Code, and bias mitigation protocols.

Complete · Quick Start · 16 January 2026 · 15 min read


Purpose: Rapid reference for professional intelligence analysis frameworks
Time to read: 5 minutes
Full document: 05-intelligence-analysis.md


60-Second Overview

  • Structure Over Intuition: Biases cannot be eliminated by training alone—only mitigated through process
  • ACH (Analysis of Competing Hypotheses): 7-step debiasing methodology tests which hypothesis has FEWEST inconsistencies
  • 66 Structured Analytic Techniques: 8 categories from diagnostic to decision support (Heuer & Pherson)
  • Admiralty Code: Two-character rating (Source Reliability + Information Credibility)
  • Words of Estimative Probability: Standardized language ("likely" = 60-80%) prevents misinterpretation

Key Framework: Analysis of Competing Hypotheses (ACH)

                     ACH MATRIX (7 Steps)
┌────────────┬───────────┬───────────┬───────────┬───────────┐
│            │ H1:       │ H2:       │ H3:       │ H4:       │
│  Evidence  │ Accidental│ Deliberate│ Systemic  │ Fabricated│
├────────────┼───────────┼───────────┼───────────┼───────────┤
│ Evidence 1 │     C     │     I     │     C     │     I     │
│ Evidence 2 │     I     │     C     │     C     │     C     │
│ Evidence 3 │     C     │     C     │     I     │     I     │
│ Absence E4 │     I     │     I     │     C     │     C     │
├────────────┼───────────┼───────────┼───────────┼───────────┤
│ TOTALS     │  2C, 2I   │  2C, 2I   │  3C, 1I   │  2C, 2I   │
└────────────┴───────────┴───────────┴───────────┴───────────┘
           CONCLUSION: H3 (fewest inconsistencies)

C = Consistent, I = Inconsistent, N/A = Not applicable

Critical principle: The most likely hypothesis is the one with the FEWEST inconsistencies, not the one with the most consistent evidence
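A minimal sketch of the scoring step behind this matrix, using the illustrative hypothesis and evidence labels above (not data from a real case): tally the I judgments per hypothesis and rank by fewest inconsistencies.

```python
# ACH scoring sketch: rank hypotheses by FEWEST inconsistencies.
# Labels and judgments mirror the illustrative matrix above.
matrix = {
    "Evidence 1": {"H1": "C", "H2": "I", "H3": "C", "H4": "I"},
    "Evidence 2": {"H1": "I", "H2": "C", "H3": "C", "H4": "C"},
    "Evidence 3": {"H1": "C", "H2": "C", "H3": "I", "H4": "I"},
    "Absence E4": {"H1": "I", "H2": "I", "H3": "C", "H4": "C"},
}

def inconsistency_counts(matrix):
    """Count 'I' judgments per hypothesis; 'N/A' cells are simply ignored."""
    counts = {}
    for judgments in matrix.values():
        for hypothesis, rating in judgments.items():
            counts.setdefault(hypothesis, 0)
            if rating == "I":
                counts[hypothesis] += 1
    return counts

counts = inconsistency_counts(matrix)
ranked = sorted(counts.items(), key=lambda item: item[1])
print(ranked)        # [('H3', 1), ('H1', 2), ('H2', 2), ('H4', 2)]
print(ranked[0][0])  # 'H3' -- fewest inconsistencies, most likely
```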


Essential Tools

| Framework      | Purpose               | Source               | Application                             |
|----------------|-----------------------|----------------------|-----------------------------------------|
| ACH            | Hypothesis testing    | CIA (Heuer 1999)     | Contested cases, multiple explanations  |
| Admiralty Code | Source rating         | NATO/Five Eyes       | Every piece of evidence rated           |
| F3EAD          | Operational cycle     | JSOC                 | Fast-moving investigations              |
| 66 SATs        | Structured techniques | Heuer & Pherson 2021 | Bias mitigation, transparency           |
| ICD 203        | Analytic standards    | US ODNI              | Quality control, tradecraft             |

ACH Workflow (7 Steps)

Step 1: Identify Hypotheses

  • Brainstorm all potential explanations
  • Include unlikely hypotheses (disproving strengthens case)
  • Minimum 3-5 hypotheses, maximum ~8 (cognitive load)
  • State as mutually exclusive where possible

Example (misconduct investigation):

  • H1: Accidental policy violation
  • H2: Deliberate but isolated incident
  • H3: Deliberate and part of systemic pattern
  • H4: No violation occurred (misunderstanding)
  • H5: Evidence fabricated/manipulated

Step 2: List Significant Evidence

  • Facts from documents (highest priority)
  • Logical deductions from facts
  • Assumptions (explicitly labeled)
  • Absence of expected evidence (negative evidence)

Critical: Absence of evidence IS evidence (e.g., no documentation = suspicious)

Step 3: Create ACH Matrix

  • Rows: Evidence items
  • Columns: Hypotheses
  • Cells: Consistency assessment (C, I, N/A)

Step 4: Refine Matrix (MOST IMPORTANT STEP)

  • Work ACROSS rows (test one piece of evidence against ALL hypotheses)
  • DO NOT work down columns (recreates confirmation bias)
  • Remove non-diagnostic evidence (consistent with all hypotheses; see the sketch below)
  • Remove clearly disproven hypotheses
  • Add discriminating evidence

Power of ACH: Cross-hypothesis comparison forces objectivity
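The "remove non-diagnostic evidence" check lends itself to automation: an evidence row rated identically against every hypothesis discriminates nothing. A sketch under the same row-of-judgments representation used above (evidence labels are hypothetical):

```python
def non_diagnostic_rows(matrix):
    """Return evidence items whose judgments are identical for every
    hypothesis -- they cannot discriminate and can be set aside."""
    flagged = []
    for evidence, judgments in matrix.items():
        if len(set(judgments.values())) == 1:
            flagged.append(evidence)
    return flagged

# Example: a row rated C against every hypothesis is non-diagnostic.
matrix = {
    "Policy existed":  {"H1": "C", "H2": "C", "H3": "C", "H4": "C"},
    "Logs were wiped": {"H1": "I", "H2": "C", "H3": "C", "H4": "C"},
}
print(non_diagnostic_rows(matrix))  # ['Policy existed']
```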

Step 5: Iterate

  • Collect additional evidence (focused on discriminating between hypotheses)
  • Re-evaluate consistency judgments
  • Seek disconfirming evidence for leading hypothesis
  • Test robustness of inconsistencies

Step 6: Draw Conclusions

  • Identify hypothesis with fewest inconsistencies
  • Rank alternatives
  • Identify diagnostic evidence (what discriminated)
  • State critical assumptions
  • Assign confidence level (High/Moderate/Low)

Report format:

  1. Conclusion (most likely hypothesis)
  2. Alternatives (rank order)
  3. Diagnostic evidence (which evidence was decisive)
  4. Assumptions (what if these are wrong?)
  5. Confidence (see WEP section)

Step 7: Sensitivity Analysis

Question: What would have to change for a different hypothesis to be correct?

  • If evidence X proved unreliable, would conclusion flip?
  • If assumption Y false, would conclusion change?
  • What new evidence would disprove current conclusion?

Output: Identification of "pivot points" - evidence/assumptions that would flip conclusion
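A sketch of how pivot points could be found mechanically, reusing the illustrative matrix from earlier: drop one evidence item at a time and check whether the leading hypothesis still stands alone with the fewest inconsistencies.

```python
# Sensitivity sketch: which evidence items are "pivot points"?
# Uses the same illustrative matrix and C/I coding as the ACH example above.
def inconsistency_counts(matrix):
    counts = {}
    for judgments in matrix.values():
        for hypothesis, rating in judgments.items():
            counts[hypothesis] = counts.get(hypothesis, 0) + (rating == "I")
    return counts

def pivot_points(matrix, baseline):
    """Evidence whose removal means `baseline` no longer stands alone with
    the fewest inconsistencies (the conclusion would soften or flip)."""
    pivots = []
    for evidence in matrix:
        reduced = {e: j for e, j in matrix.items() if e != evidence}
        counts = inconsistency_counts(reduced)
        best = min(counts.values())
        leaders = [h for h, c in counts.items() if c == best]
        if leaders != [baseline]:
            pivots.append(evidence)
    return pivots

matrix = {
    "Evidence 1": {"H1": "C", "H2": "I", "H3": "C", "H4": "I"},
    "Evidence 2": {"H1": "I", "H2": "C", "H3": "C", "H4": "C"},
    "Evidence 3": {"H1": "C", "H2": "C", "H3": "I", "H4": "I"},
    "Absence E4": {"H1": "I", "H2": "I", "H3": "C", "H4": "C"},
}
print(pivot_points(matrix, baseline="H3"))
# ['Evidence 1', 'Evidence 2', 'Absence E4'] -- losing any of these leaves
# H3 tied or beaten, so the conclusion rests heavily on them.
```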


Admiralty Code (Source Rating System)

Format: [Source Reliability][Information Credibility]
Example: A1 = Completely reliable source + Confirmed information (highest confidence)

Source Reliability (First Character)

| Code | Meaning              | Description                                |
|------|----------------------|--------------------------------------------|
| A    | Completely reliable  | History of complete reliability            |
| B    | Usually reliable     | Valid information most of the time         |
| C    | Fairly reliable      | Valid information some of the time         |
| D    | Not usually reliable | Invalid information most of the time       |
| E    | Unreliable           | History of invalid or no valid information |
| F    | Cannot be judged     | New source, no history                     |

Assessment basis: Track record, access to information, motivation, vetting

Information Credibility (Second Character)

| Code | Meaning          | Description                                     |
|------|------------------|-------------------------------------------------|
| 1    | Confirmed        | Corroborated by independent sources             |
| 2    | Probably true    | Consistent with known facts (not corroborated)  |
| 3    | Possibly true    | Not corroborated; reasonably plausible          |
| 4    | Doubtful         | Contradicts known facts or implausible          |
| 5    | Improbable       | Contradicts logic or well-established facts     |
| 6    | Cannot be judged | No basis to evaluate                            |

Assessment basis: Internal consistency, external consistency, plausibility, specificity

Critical Principle: Independent Assessment

Key insight: Source reliability and information credibility assessed INDEPENDENTLY

Why?:

  • A-rated source can provide low-credibility information (deceived, misunderstood)
  • E-rated source can provide high-credibility information (even liars sometimes tell truth)

Examples:

  • A5: Reliable source reports improbable information → Source trustworthy BUT likely deceived
  • E1: Unreliable source reports confirmed information → Use information but be wary of motives
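A minimal sketch of how a two-character Admiralty rating might be represented and validated in software. The letter and number scales follow the tables above; the function name and error handling are implementation choices, not part of the standard.

```python
RELIABILITY = {"A": "Completely reliable", "B": "Usually reliable",
               "C": "Fairly reliable", "D": "Not usually reliable",
               "E": "Unreliable", "F": "Cannot be judged"}
CREDIBILITY = {"1": "Confirmed", "2": "Probably true", "3": "Possibly true",
               "4": "Doubtful", "5": "Improbable", "6": "Cannot be judged"}

def parse_rating(rating: str) -> tuple[str, str]:
    """Split a rating like 'A1' or 'E3' into its two independent parts,
    validating each character against the standard scales."""
    if len(rating) != 2:
        raise ValueError(f"Invalid Admiralty rating: {rating!r}")
    source, info = rating[0].upper(), rating[1]
    if source not in RELIABILITY or info not in CREDIBILITY:
        raise ValueError(f"Invalid Admiralty rating: {rating!r}")
    return RELIABILITY[source], CREDIBILITY[info]

print(parse_rating("A5"))  # ('Completely reliable', 'Improbable')
print(parse_rating("E1"))  # ('Unreliable', 'Confirmed')
```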

Words of Estimative Probability (WEP)

Problem: Vague language ("likely," "probable") interpreted differently by consumers
Solution: Standardized probability ranges

Standard WEP Scale (ICD 203)

| Term                              | Probability Range | Usage                                             |
|-----------------------------------|-------------------|---------------------------------------------------|
| Almost certainly                  | 95-99%            | Very rare to use 100% (acknowledges uncertainty)  |
| Very likely / Highly probable     | 80-95%            | Strong confidence                                 |
| Likely / Probable                 | 60-80%            | More likely than not                              |
| Even chance                       | 40-60%            | Roughly equal likelihood                          |
| Unlikely / Probably not           | 20-40%            | Less likely than not                              |
| Very unlikely / Highly improbable | 5-20%             | Low but not impossible                            |
| Remote / Almost certainly not     | 1-5%              | Very rare                                         |

Confidence Levels (Separate from Probability)

Critical distinction: Probability of event ≠ Confidence in assessment

  • High confidence: High-quality information, strong analytic consensus
  • Moderate confidence: Credible sources, plausible logic, but gaps or alternative interpretations
  • Low confidence: Limited/ambiguous information, significant uncertainties

Example statements:

  • "We assess with high confidence that Event X is unlikely (20%)."
    • Meaning: We're very sure probability is low (not guessing)
  • "We assess with low confidence that Event Y is very likely (85%)."
    • Meaning: Probability seems high but we have significant uncertainties

Common Mistake: Confusing Confidence and Probability

  • "Low confidence Event X will occur" ≠ "Event X is unlikely"
  • Low confidence = high uncertainty (event might be likely OR unlikely)

Mitigation: Always specify BOTH probability AND confidence
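A sketch of how both requirements could be enforced in a reporting tool: map the numeric probability to its WEP term and require an explicit confidence level in the same sentence. The ranges follow the table above; resolving boundary values (e.g. exactly 80%) by taking the higher band is an assumption.

```python
# Map a numeric probability to a WEP term and pair it with an explicitly
# separate confidence level (High / Moderate / Low).
WEP_SCALE = [
    (0.95, "almost certainly"),
    (0.80, "very likely"),
    (0.60, "likely"),
    (0.40, "roughly an even chance"),
    (0.20, "unlikely"),
    (0.05, "very unlikely"),
    (0.01, "almost certainly not"),
]

def wep_term(probability: float) -> str:
    for threshold, term in WEP_SCALE:
        if probability >= threshold:
            return term
    return "almost certainly not"

def estimative_statement(event: str, probability: float, confidence: str) -> str:
    """Force both probability language AND confidence into one sentence."""
    return (f"We assess with {confidence} confidence that {event} is "
            f"{wep_term(probability)} ({probability:.0%}).")

print(estimative_statement("Event X", 0.20, "high"))
# We assess with high confidence that Event X is unlikely (20%).
```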


66 Structured Analytic Techniques (8 Categories)

Category 1: Diagnostic Techniques

  • Key Assumptions Check: Identify and challenge foundational assumptions
  • Quality of Information Check: Assess reliability, credibility, relevance
  • Chronologies and Timelines: Establish factual sequence
  • Network Analysis: Map relationships between entities

Category 2: Contrarian Techniques

  • Devil's Advocacy: Argue against consensus view
  • Red Cell Analysis: Adversarial perspective (CIA Red Cell established 9/12/2001)
  • Pre-mortem Analysis: Assume failure, work backwards to explain why
  • Team A/B: Two teams independently analyze same question

Category 3: Imaginative Techniques

  • Brainstorming: Generate ideas without initial critique
  • Outside-In Thinking: Start with global forces → specific situation
  • Alternative Futures: Develop multiple plausible scenarios

Category 4: Hypothesis Generation and Testing

  • ACH: Matrix-based evaluation (see above)
  • Diagnostic Reasoning: Test which hypothesis best explains evidence
  • Argument Mapping: Visual representation of claims, evidence, rebuttals

Category 5: Assessment of Cause and Effect

  • Key Drivers Analysis: Identify factors most affecting outcome
  • Cross-Impact Matrix: How factors influence each other
  • Bayesian Reasoning: Update probabilities as new evidence emerges
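For the Bayesian Reasoning technique, the core operation is a posterior update from a prior and the likelihood of the new evidence under each hypothesis. A minimal sketch with invented numbers (the prior and likelihoods are illustrative only):

```python
# Bayesian update sketch: revise P(hypothesis) as one new piece of evidence arrives.
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """Posterior P(H | E) from prior P(H) and the likelihoods of the
    evidence under H and under not-H."""
    numerator = p_e_given_h * prior
    marginal = numerator + p_e_given_not_h * (1.0 - prior)
    return numerator / marginal

# Illustrative numbers: prior 30% for "systemic misconduct"; the new evidence
# is three times as likely under that hypothesis as under its alternatives.
posterior = bayes_update(prior=0.30, p_e_given_h=0.60, p_e_given_not_h=0.20)
print(f"{posterior:.0%}")  # 56%
```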

Category 6: Challenge Analysis

  • What If? Analysis: Test impact of specific events
  • High Impact/Low Probability: Focus on catastrophic scenarios

Category 7: Conflict Management

  • Structured Debate: Formal presentation of competing views
  • Delphi Method: Iterative anonymous expert survey

Category 8: Decision Support

  • Decision Matrix: Systematic comparison of options
  • SWOT Analysis: Strengths, Weaknesses, Opportunities, Threats

Selection criteria: Choose based on analytic question type, time available, team size, cognitive bias target


F3EAD Operational Cycle (JSOC Model)

Developed: Joint Special Operations Command, 2003-2011 (Iraq/Afghanistan)
Characteristic: Speed (hours/days, not weeks/months)

Six Phases

Find

  • Develop target intelligence
  • Identify high-value individuals/networks
  • Output: Target nomination

Fix

  • Confirm target location (high confidence)
  • Multi-INT fusion (SIGINT + IMINT + HUMINT)
  • Output: Targeting package

Finish

  • Execute operation (capture/arrest/interdict)
  • Output: Target neutralized, materials captured

Exploit (CRITICAL PHASE)

  • Immediate exploitation of captured materials
  • Phones, computers, documents, biometrics, interrogation
  • Speed matters: Intelligence has short half-life
  • Output: New leads for next cycle

Analyze

  • Deep analysis of exploited materials
  • Pattern analysis, network mapping, gap identification
  • Output: Updated intelligence picture

Disseminate

  • Share intelligence across community
  • Feed back into Find phase
  • Output: Next target nomination

Key: Self-sustaining cycle (each iteration generates inputs for next)

Civilian applications: Law enforcement (organized crime), regulatory enforcement (financial crimes), forensic intelligence (investigations where each interview/document generates leads)


Multi-Source Intelligence Fusion

Eight INT Types

| INT Type | Source                       | Strengths                                   | Weaknesses                                  |
|----------|------------------------------|---------------------------------------------|---------------------------------------------|
| HUMINT   | Recruited agents, interviews | Intent, motivations, insider knowledge      | Deception risk, limited scale               |
| SIGINT   | Intercepted communications   | High volume, real-time, hard to fake        | Encryption, legal constraints               |
| IMINT    | Satellite/aerial imagery     | Objective physical evidence, geo-located    | Interpretation ambiguity, expensive         |
| OSINT    | Public media, social media   | Legal, scalable, diverse perspectives       | Information overload, provenance challenges |
| GEOINT   | IMINT + mapping              | Context, change detection, pattern analysis | Requires GIS expertise, data volume         |
| FININT   | Banking, transactions        | Tracks money flows, identifies networks     | Secrecy jurisdictions, crypto challenges    |
| TECHINT  | Foreign equipment analysis   | Capabilities assessment                     | Requires specialized expertise              |
| MASINT   | Sensors (radar, acoustic)    | Detect events without human/comms           | Highly technical, expensive                 |

Three Fusion Levels

Level 1: Data-Level Fusion (Low-Level)

  • Combine raw data before feature extraction
  • Example: Fuse satellite image with radar return
  • Preserves maximum information

Level 2: Feature-Level Fusion (Mid-Level)

  • Extract features from each source, then combine
  • Example: Vehicle type (IMINT) + radio signature (SIGINT)
  • Reduces data volume, handles asynchronous sources

Level 3: Decision-Level Fusion (High-Level)

  • Each source produces independent assessment, then combine
  • Example: HUMINT says "likely," IMINT says "unlikely," fusion produces weighted average (see the sketch below)
  • Can incorporate subjective judgments
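A sketch of the decision-level case in the example above: each INT produces its own probability, and fusion takes a reliability-weighted average. The probabilities and weights are illustrative assumptions, not doctrine.

```python
# Decision-level fusion sketch: combine per-INT probability assessments
# into one weighted estimate. Weights are illustrative, not standardized.
assessments = [
    {"int": "HUMINT", "p": 0.70, "weight": 0.5},  # "likely", single source
    {"int": "IMINT",  "p": 0.30, "weight": 1.0},  # "unlikely", well-corroborated
]

def fuse(assessments):
    """Weighted average of source-level probabilities."""
    total_weight = sum(a["weight"] for a in assessments)
    return sum(a["p"] * a["weight"] for a in assessments) / total_weight

print(f"{fuse(assessments):.0%}")  # 43%
```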

Contradictory Evidence Handling

Strategies:

  • Discounting (reduce weight of less reliable source; sketched below)
  • Hypothesis expansion (maybe both correct under different interpretations)
  • Seek adjudication (collect third source to break tie)
  • Temporal explanation (situation changed between observations)
  • Deception hypothesis (one source deliberately misled)
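A sketch of the discounting strategy, assuming Admiralty source-reliability grades are mapped to numeric weights; the specific weight values are illustrative and not part of the Admiralty standard.

```python
# Discounting sketch: when two reports conflict, weight each claim by an
# assumed numeric mapping of its source's Admiralty reliability grade.
RELIABILITY_WEIGHT = {"A": 1.0, "B": 0.8, "C": 0.6, "D": 0.4, "E": 0.2, "F": 0.5}

def discounted_vote(reports):
    """reports: list of (claim, reliability_grade).
    Returns the better-supported claim and the weight totals per claim."""
    totals = {}
    for claim, grade in reports:
        totals[claim] = totals.get(claim, 0.0) + RELIABILITY_WEIGHT[grade]
    best = max(totals, key=totals.get)
    return best, totals

conflicting = [("shipment occurred", "B"), ("shipment did not occur", "E")]
print(discounted_vote(conflicting))
# ('shipment occurred', {'shipment occurred': 0.8, 'shipment did not occur': 0.2})
```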

Common Pitfalls (Top 5)

| Mistake                | Impact                                         | Mitigation                                                 |
|------------------------|------------------------------------------------|------------------------------------------------------------|
| Confirmation bias      | Seek confirming evidence, ignore disconfirming | ACH (forces testing all hypotheses), Devil's Advocacy      |
| Anchoring              | Over-rely on first information                 | Delay hypothesis formation, structured brainstorming       |
| Groupthink             | Pressure to conform                            | Red Cell, assign Devil's Advocate role, protect dissenters |
| Mirror imaging         | Assume adversary thinks like you               | Red Cell analysis, cultural expertise                      |
| Single-source reliance | Deception/error vulnerability                  | Multi-INT fusion, Admiralty Code rating                    |

Specific Warning: ACH Misapplication

Research finding (Fisher et al. 2008): ACH did NOT significantly reduce confirmation bias in experiments

Problem: Analysts worked DOWN columns (tested all evidence against one hypothesis), not ACROSS rows

Solution: Training plus process enforcement (work across the rows), combined with ACH's transparency value (even if the debiasing effect is questionable, the audit trail remains valuable)


Quality Gates (When to Pause)

Before Analysis

  • All relevant INT sources consulted?
  • Sources rated with Admiralty Code?
  • Assumptions explicitly identified?
  • Alternative hypotheses generated (minimum 3)?

During Analysis

  • ACH matrix updated as new evidence arrives?
  • Disconfirming evidence actively sought?
  • Red Cell/Devil's Advocate consulted?
  • Contributory factors considered (not just proximate cause)?

Before Dissemination

  • Peer review completed (minimum 3 reviewers)?
  • Management review (policy implications, sourcing)?
  • Tradecraft review (SATs applied correctly)?
  • Confidence level and probability explicitly stated?
  • Sources protected (if sensitive)?

Post-Dissemination

  • Customer feedback obtained?
  • Accuracy tracked (compare assessment to outcome)?
  • Lessons learned documented?
  • Failed predictions analyzed (what went wrong)?

Integration with Phronesis FCIP

ACH Implementation

  • Matrix builder UI: Hypothesis generation, evidence entry, consistency coding
  • Automated diagnostics: Highlight which evidence discriminates between hypotheses
  • Sensitivity analysis: "What if?" calculator
  • Export: ACH matrix → professional assessment report

Admiralty Code

  • Source registry: Track reliability history (A-F rating)
  • Information rating: Credibility assessment (1-6 rating) per piece of evidence
  • Automated weighting: Downweight low-credibility evidence in analyses
  • Audit trail: Every rating justified

Multi-Source Fusion

  • INT type tagging: Classify evidence by collection discipline
  • Contradiction reconciliation: Prompt analyst when sources conflict
  • Bayesian updating: Confidence scores update as new evidence added
  • Source network mapping: Detect circular reporting (Source A cites Source B who cites Source A)
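For the circular-reporting check, the telltale is a cycle in the "cites" graph. A minimal sketch (source names and citation edges are hypothetical):

```python
# Circular-reporting sketch: detect cycles in a directed "cites" graph,
# e.g. Source A cites Source B, who ultimately cites Source A again.
def has_circular_reporting(cites, start):
    """Depth-first walk from `start`; True if we can return to `start`."""
    stack, seen = [start], set()
    while stack:
        node = stack.pop()
        for cited in cites.get(node, []):
            if cited == start:
                return True
            if cited not in seen:
                seen.add(cited)
                stack.append(cited)
    return False

cites = {
    "Source A": ["Source B"],
    "Source B": ["Source C"],
    "Source C": ["Source A"],  # closes the loop: apparent corroboration is circular
    "Source D": ["Source C"],
}
print(has_circular_reporting(cites, "Source A"))  # True
print(has_circular_reporting(cites, "Source D"))  # False
```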

Structured Techniques

  • SAT library: Guided workflows for 66 techniques
  • Technique selection wizard: Question type → recommended SATs
  • Collaboration tools: Devil's Advocacy assignment, Red Cell mode
  • Output templates: Standardized reporting for each SAT

WEP Enforcement

  • Probability language templates: Force selection from WEP scale
  • Confidence level tracking: High/Moderate/Low for every assessment
  • Consistency checking: Flag if language inconsistent with evidence strength

US Intelligence Community Directive 203 (ICD 203)

Issued: January 2, 2015
Applies to: All 18 US IC agencies

Four Core Standards

  1. Objectivity: Base on available information, minimize biases, acknowledge uncertainties
  2. Political Independence: Not influenced by policymaker preferences
  3. Timeliness: Deliver when it can affect decisions
  4. Good Tradecraft: Apply structured techniques, challenge assumptions, seek disconfirming evidence

Nine Tradecraft Standards

  1. Analytic Standards of Objectivity and Independence
  2. Analytic Rigor (apply SATs, critical thinking)
  3. Bias Awareness (identify and mitigate cognitive biases)
  4. Collaboration (colleagues, other agencies, outside experts)
  5. Consistency (judgments logically consistent)
  6. Intellectual Rigor (depth, breadth, sophistication)
  7. Sourcing (cite sources, evaluate quality)
  8. Uncertainty and Confidence (explain basis)
  9. Validation (test against alternatives and new information)

CIA Red Cell Program

Established: September 12, 2001 (day after 9/11)

Mission:

  • Challenge consensus intelligence judgments
  • Provide adversarial perspective
  • Generate "alternative analysis" on demand

Protection mechanisms:

  • Red Cell analysts cannot be penalized for contrarian views
  • Report directly to senior leadership
  • Products labeled "ALTERNATIVE ANALYSIS - RED CELL"

Example products:

  • "What If Jihadists Gained Access to Pakistan's Nuclear Weapons?" (2004)
  • "How Al-Qa'ida Could Strike US Financial System" (2008)

Value: Stress-test assumptions, force consideration of "unthinkable" scenarios


Israeli Intelligence: Mahleket Bakara

Established: 1973 (after Yom Kippur War intelligence failure)

Yom Kippur War failure (October 1973):

  • Israeli intelligence held a firm belief ("the Conception") that Egypt would not attack without air superiority
  • Dismissed mounting evidence of war preparations as bluff
  • Result: Strategic surprise, initial losses

Reform: Department of Control (Mahleket Bakara) created

  • Independent unit within Military Intelligence
  • Must present alternative interpretations to leadership
  • Access to same raw intelligence as Production Division

Key insight: Organizational structure matters more than individual brilliance. Intelligence failures often systemic, not just analyst error. Institutionalize dissent to prevent groupthink.


Resources and Standards

Foundational Texts

  • Heuer, Richards J., Jr. Psychology of Intelligence Analysis (CIA, 1999)
  • Heuer & Pherson. Structured Analytic Techniques (3rd ed., 2021) - 66 techniques
  • Kent, Sherman. Strategic Intelligence for American World Policy (1949)

Doctrine

  • US ODNI. Intelligence Community Directive 203: Analytic Standards (2015)
  • NATO. AJP-2: Allied Joint Doctrine for Intelligence (2016)
  • UK Cabinet Office. Professional Head of Intelligence Assessment Guidance (2010)

Intelligence Failures (Case Studies)

  • Butler Review (2004): UK Iraq WMD failure
  • 9/11 Commission Report (2004): Pre-9/11 intelligence failures
  • Israeli Agranat Commission (1974): Yom Kippur War failure

Academic Research

  • Fisher et al. "Is There an Empirical Basis for Analyst Training?" (2008) - ACH effectiveness critique
  • Tetlock & Gardner. Superforecasting (2015) - Probabilistic forecasting alternative

Document Control

Version: Quick Start 1.0
Date: 2026-01-17
Source: 05-intelligence-analysis.md (48,001 bytes)
Lines: ~497
Format: Reference card (printable)

Next steps: Read full methodology for:

  • Detailed 66 SAT catalog with implementation guidance
  • Multi-source fusion algorithms (Bayesian, Dempster-Shafer, Kalman Filter)
  • F3EAD operational cycle detailed workflows
  • Cognitive bias taxonomy and mitigation strategies
  • Quality control mechanisms (peer review, tradecraft review, customer feedback)
  • Intelligence cycle models (traditional vs. F3EAD comparison)

"Biases cannot be eliminated by training alone—only mitigated through structure and tools. Intelligence analysis is a professional discipline with standards and methods, not intuition."