Intelligence Analysis Quick Start Guide
5-minute reference for ACH methodology, Structured Analytic Techniques, Admiralty Code, and bias mitigation protocols.
Purpose: Rapid reference for professional intelligence analysis frameworks
Time to read: 5 minutes
Full document: 05-intelligence-analysis.md
60-Second Overview
- Structure Over Intuition: Biases cannot be eliminated by training alone—only mitigated through process
- ACH (Analysis of Competing Hypotheses): 7-step debiasing methodology tests which hypothesis has FEWEST inconsistencies
- 66 Structured Analytic Techniques: 8 categories from diagnostic to decision support (Heuer & Pherson)
- Admiralty Code: Two-character rating (Source Reliability + Information Credibility)
- Words of Estimative Probability: Standardized language ("likely" = 60-80%) prevents misinterpretation
Key Framework: Analysis of Competing Hypotheses (ACH)
ACH MATRIX (7 Steps)
┌────────────┬───────────┬───────────┬───────────┬───────────┐
│ Evidence   │ H1:       │ H2:       │ H3:       │ H4:       │
│            │ Accidental│ Deliberate│ Systemic  │ Fabricated│
├────────────┼───────────┼───────────┼───────────┼───────────┤
│ Evidence 1 │     C     │     I     │     C     │     I     │
│ Evidence 2 │     I     │     C     │     C     │     C     │
│ Evidence 3 │     C     │     C     │     I     │     I     │
│ Absence E4 │     I     │     I     │     C     │     C     │
├────────────┼───────────┼───────────┼───────────┼───────────┤
│ TOTALS     │  2C, 2I   │  2C, 2I   │  3C, 1I   │  2C, 2I   │
└────────────┴───────────┴───────────┴───────────┴───────────┘
CONCLUSION: H3 (fewest inconsistencies)
C = Consistent, I = Inconsistent, N/A = Not applicable
Critical principle: Hypothesis with FEWEST inconsistencies most likely, NOT most consistent evidence
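As a minimal sketch of this scoring rule (a hypothetical Python layout, not any official ACH tool), the matrix above reduces to a map of evidence rows to consistency codes; the conclusion is whichever hypothesis accumulates the fewest "I" codes:

```python
# Minimal ACH scoring sketch (hypothetical layout, not an official tool).
# Codes: "C" = consistent, "I" = inconsistent, "NA" = not applicable.
HYPOTHESES = ["H1: Accidental", "H2: Deliberate", "H3: Systemic", "H4: Fabricated"]

MATRIX = {  # evidence item -> one code per hypothesis, in HYPOTHESES order
    "Evidence 1": ["C", "I", "C", "I"],
    "Evidence 2": ["I", "C", "C", "C"],
    "Evidence 3": ["C", "C", "I", "I"],
    "Absence E4": ["I", "I", "C", "C"],
}

def inconsistency_counts(matrix):
    """Count 'I' codes per hypothesis; 'NA' cells contribute nothing."""
    counts = [0] * len(HYPOTHESES)
    for codes in matrix.values():
        for i, code in enumerate(codes):
            if code == "I":
                counts[i] += 1
    return counts

counts = inconsistency_counts(MATRIX)
winner = min(range(len(HYPOTHESES)), key=lambda i: counts[i])
print(dict(zip(HYPOTHESES, counts)))                  # H3 has 1 inconsistency, rest 2
print("Fewest inconsistencies:", HYPOTHESES[winner])  # H3: Systemic
```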
Essential Tools
| Framework | Purpose | Source | Application |
|---|---|---|---|
| ACH | Hypothesis testing | CIA (Heuer 1999) | Contested cases, multiple explanations |
| Admiralty Code | Source rating | NATO/Five Eyes | Every piece of evidence rated |
| F3EAD | Operational cycle | JSOC | Fast-moving investigations |
| 66 SATs | Structured techniques | Heuer & Pherson 2021 | Bias mitigation, transparency |
| ICD 203 | Analytic standards | US ODNI | Quality control, tradecraft |
ACH Workflow (7 Steps)
Step 1: Identify Hypotheses
- Brainstorm all potential explanations
- Include unlikely hypotheses (disproving strengthens case)
- Generate at least 3-5 hypotheses; cap at ~8 (cognitive load)
- State as mutually exclusive where possible
Example (misconduct investigation):
- H1: Accidental policy violation
- H2: Deliberate but isolated incident
- H3: Deliberate and part of systemic pattern
- H4: No violation occurred (misunderstanding)
- H5: Evidence fabricated/manipulated
Step 2: List Significant Evidence
- Facts from documents (highest priority)
- Logical deductions from facts
- Assumptions (explicitly labeled)
- Absence of expected evidence (negative evidence)
Critical: Absence of evidence IS evidence (e.g., no documentation = suspicious)
Step 3: Create ACH Matrix
- Rows: Evidence items
- Columns: Hypotheses
- Cells: Consistency assessment (C, I, N/A)
Step 4: Refine Matrix (MOST IMPORTANT STEP)
- Work ACROSS rows (test one piece of evidence against ALL hypotheses)
- DO NOT work down columns (recreates confirmation bias)
- Remove non-diagnostic evidence (consistent with all hypotheses)
- Remove clearly disproven hypotheses
- Add discriminating evidence
Power of ACH: Cross-hypothesis comparison forces objectivity
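The pruning rule in this step can be sketched against the same hypothetical layout used above: a row coded identically against every hypothesis discriminates nothing and can be dropped.

```python
def drop_non_diagnostic(matrix):
    """Remove evidence rows coded identically against every hypothesis.

    A row that is consistent (or inconsistent) with all hypotheses cannot
    discriminate between them; keeping it only pads the matrix.
    """
    return {evidence: codes
            for evidence, codes in matrix.items()
            if len(set(codes)) > 1}   # keep only rows with mixed codes
```

The row-oriented layout itself nudges the analyst to work across rows: all codes for one evidence item are assigned together before moving to the next item.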
Step 5: Iterate
- Collect additional evidence (focused on discriminating between hypotheses)
- Re-evaluate consistency judgments
- Seek disconfirming evidence for leading hypothesis
- Test robustness of inconsistencies
Step 6: Draw Conclusions
- Identify hypothesis with fewest inconsistencies
- Rank alternatives
- Identify diagnostic evidence (what discriminated)
- State critical assumptions
- Assign confidence level (High/Moderate/Low)
Report format:
- Conclusion (most likely hypothesis)
- Alternatives (rank order)
- Diagnostic evidence (which evidence was decisive)
- Assumptions (what if these are wrong?)
- Confidence (see WEP section)
Step 7: Sensitivity Analysis
Question: What would have to change for a different hypothesis to be correct?
- If evidence X proved unreliable, would conclusion flip?
- If assumption Y false, would conclusion change?
- What new evidence would disprove current conclusion?
Output: Identification of "pivot points" - evidence/assumptions that would flip conclusion
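One way to mechanize the pivot-point search, reusing HYPOTHESES and inconsistency_counts from the scoring sketch above (illustrative only): flip each C/I judgment in turn, re-score, and report any single flip that changes the leading hypothesis.

```python
def pivot_points(matrix):
    """Yield (evidence, hypothesis) pairs where flipping a single C<->I
    judgment changes which hypothesis has the fewest inconsistencies."""
    def leader(m):
        counts = inconsistency_counts(m)
        return min(range(len(counts)), key=lambda i: counts[i])

    baseline = leader(matrix)
    for evidence, codes in matrix.items():
        for i, code in enumerate(codes):
            if code not in ("C", "I"):
                continue                       # skip N/A cells
            flipped = dict(matrix)             # copy rows, replace one
            flipped[evidence] = codes[:i] + [{"C": "I", "I": "C"}[code]] + codes[i + 1:]
            if leader(flipped) != baseline:
                yield evidence, HYPOTHESES[i]

for evidence, hypothesis in pivot_points(MATRIX):
    print(f"Pivot point: re-rating {evidence} against {hypothesis} flips the conclusion")
```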
Admiralty Code (Source Rating System)
Format: [Source Reliability][Information Credibility]
Example: A1 = Completely reliable source + Confirmed information (highest confidence)
Source Reliability (First Character)
| Code | Meaning | Description |
|---|---|---|
| A | Completely reliable | History of complete reliability |
| B | Usually reliable | Valid information most of the time |
| C | Fairly reliable | Valid information some of the time |
| D | Not usually reliable | Invalid information most of the time |
| E | Unreliable | History of invalid or no valid information |
| F | Cannot be judged | New source, no history |
Assessment basis: Track record, access to information, motivation, vetting
Information Credibility (Second Character)
| Code | Meaning | Description |
|---|---|---|
| 1 | Confirmed | Corroborated by independent sources |
| 2 | Probably true | Consistent with known facts (not corroborated) |
| 3 | Possibly true | Not corroborated; reasonably plausible |
| 4 | Doubtful | Contradicts known facts or implausible |
| 5 | Improbable | Contradicts logic or well-established facts |
| 6 | Cannot be judged | No basis to evaluate |
Assessment basis: Internal consistency, external consistency, plausibility, specificity
Critical Principle: Independent Assessment
Key insight: Source reliability and information credibility assessed INDEPENDENTLY
Why?:
- A-rated source can provide low-credibility information (deceived, misunderstood)
- E-rated source can provide high-credibility information (even liars sometimes tell truth)
Examples:
- A5: Reliable source reports improbable information → Source trustworthy BUT likely deceived
- E1: Unreliable source reports confirmed information → Use information but be wary of motives
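A minimal representation keeps the two characters as separate, independently validated fields (a sketch only; the Admiralty Code itself defines no data format):

```python
from dataclasses import dataclass

RELIABILITY = set("ABCDEF")   # source reliability codes, per table above
CREDIBILITY = set("123456")   # information credibility codes, per table above

@dataclass(frozen=True)
class AdmiraltyRating:
    """Two independently assessed characters, e.g. A1, A5, E1, F6."""
    reliability: str
    credibility: str

    def __post_init__(self):
        if self.reliability not in RELIABILITY or self.credibility not in CREDIBILITY:
            raise ValueError(f"invalid rating {self.reliability}{self.credibility}")

    def __str__(self):
        return self.reliability + self.credibility

# Independence in practice: a trustworthy source carrying improbable
# information (A5) and a poor source carrying confirmed information (E1).
print(AdmiraltyRating("A", "5"), AdmiraltyRating("E", "1"))
```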
Words of Estimative Probability (WEP)
Problem: Vague language ("likely," "probable") is interpreted differently by different consumers
Solution: Standardized probability ranges
Standard WEP Scale (adapted from ICD 203)
| Term | Probability Range | Usage |
|---|---|---|
| Almost certainly | 95-99% | Very rare to use 100% (acknowledges uncertainty) |
| Very likely / Highly probable | 80-95% | Strong confidence |
| Likely / Probable | 60-80% | More likely than not |
| Even chance | 40-60% | Roughly equal likelihood |
| Unlikely / Probably not | 20-40% | Less likely than not |
| Very unlikely / Highly improbable | 5-20% | Low but not impossible |
| Remote / Almost certainly not | 1-5% | Very rare |
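A sketch of enforcing the scale in code, using the bands as printed above. Adjacent ranges share endpoints, so boundary handling is an assumption, resolved here by first match from the top:

```python
# Bands as printed in the table above; first match from the top wins.
WEP_SCALE = [
    (0.95, "almost certainly"),
    (0.80, "very likely"),
    (0.60, "likely"),
    (0.40, "even chance"),
    (0.20, "unlikely"),
    (0.05, "very unlikely"),
    (0.01, "remote"),
]

def wep_term(probability):
    """Map a probability to the standardized estimative term."""
    for floor, term in WEP_SCALE:
        if probability >= floor:
            return term
    return "remote"

print(wep_term(0.70))   # likely
print(wep_term(0.15))   # very unlikely
```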
Confidence Levels (Separate from Probability)
Critical distinction: Probability of event ≠ Confidence in assessment
- High confidence: High-quality information, strong analytic consensus
- Moderate confidence: Credible sources, plausible logic, but gaps or alternative interpretations
- Low confidence: Limited/ambiguous information, significant uncertainties
Example statements:
- "We assess with high confidence that Event X is unlikely (20%)."
- Meaning: We're very sure probability is low (not guessing)
- "We assess with low confidence that Event Y is very likely (85%)."
- Meaning: Probability seems high but we have significant uncertainties
Common Mistake: Confusing Confidence and Probability
- "Low confidence Event X will occur" ≠ "Event X is unlikely"
- Low confidence = high uncertainty (event might be likely OR unlikely)
Mitigation: Always specify BOTH probability AND confidence
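One way to make "specify BOTH" unskippable is a record type with no default for either field, reusing wep_term from the sketch above (illustrative only):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Assessment:
    """Incomplete without BOTH fields: neither has a default."""
    probability: float   # 0.0-1.0, rendered via the WEP scale
    confidence: str      # "High" | "Moderate" | "Low"

    def __str__(self):
        return (f"We assess with {self.confidence.lower()} confidence that "
                f"the event is {wep_term(self.probability)} ({self.probability:.0%}).")

print(Assessment(0.20, "High"))
# We assess with high confidence that the event is unlikely (20%).
```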
66 Structured Analytic Techniques (8 Categories)
Category 1: Diagnostic Techniques
- Key Assumptions Check: Identify and challenge foundational assumptions
- Quality of Information Check: Assess reliability, credibility, relevance
- Chronologies and Timelines: Establish factual sequence
- Network Analysis: Map relationships between entities
Category 2: Contrarian Techniques
- Devil's Advocacy: Argue against consensus view
- Red Cell Analysis: Adversarial perspective (CIA Red Cell established 9/12/2001)
- Pre-mortem Analysis: Assume failure, work backwards to explain why
- Team A/B: Two teams independently analyze same question
Category 3: Imaginative Techniques
- Brainstorming: Generate ideas without initial critique
- Outside-In Thinking: Start with global forces → specific situation
- Alternative Futures: Develop multiple plausible scenarios
Category 4: Hypothesis Generation and Testing
- ACH: Matrix-based evaluation (see above)
- Diagnostic Reasoning: Test which hypothesis best explains evidence
- Argument Mapping: Visual representation of claims, evidence, rebuttals
Category 5: Assessment of Cause and Effect
- Key Drivers Analysis: Identify factors most affecting outcome
- Cross-Impact Matrix: How factors influence each other
- Bayesian Reasoning: Update probabilities as new evidence emerges (see sketch below)
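In odds form, a single Bayesian update is one multiplication; the numbers below are invented purely for illustration:

```python
def bayes_update(prior, p_e_given_h, p_e_given_not_h):
    """P(H|E) via odds form: posterior odds = prior odds * likelihood ratio."""
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * (p_e_given_h / p_e_given_not_h)
    return posterior_odds / (1.0 + posterior_odds)

# Invented numbers: evidence three times likelier under H than under not-H
# lifts a 30% prior to roughly 56%.
print(round(bayes_update(0.30, 0.60, 0.20), 2))   # 0.56
```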
Category 6: Challenge Analysis
- What If? Analysis: Test impact of specific events
- High Impact/Low Probability: Focus on catastrophic scenarios
Category 7: Conflict Management
- Structured Debate: Formal presentation of competing views
- Delphi Method: Iterative anonymous expert survey
Category 8: Decision Support
- Decision Matrix: Systematic comparison of options
- SWOT Analysis: Strengths, Weaknesses, Opportunities, Threats
Selection criteria: Choose based on analytic question type, time available, team size, cognitive bias target
F3EAD Operational Cycle (JSOC Model)
Developed: Joint Special Operations Command, 2003-2011 (Iraq/Afghanistan)
Characteristic: Speed (hours/days, not weeks/months)
Six Phases
Find
- Develop target intelligence
- Identify high-value individuals/networks
- Output: Target nomination
Fix
- Confirm target location (high confidence)
- Multi-INT fusion (SIGINT + IMINT + HUMINT)
- Output: Targeting package
Finish
- Execute operation (capture/arrest/interdict)
- Output: Target neutralized, materials captured
Exploit (CRITICAL PHASE)
- Immediate exploitation of captured materials
- Phones, computers, documents, biometrics, interrogation
- Speed matters: Intelligence has short half-life
- Output: New leads for next cycle
Analyze
- Deep analysis of exploited materials
- Pattern analysis, network mapping, gap identification
- Output: Updated intelligence picture
Disseminate
- Share intelligence across community
- Feed back into Find phase
- Output: Next target nomination
Key: Self-sustaining cycle (each iteration generates inputs for next)
Civilian applications: Law enforcement (organized crime), regulatory enforcement (financial crimes), forensic intelligence (investigations where each interview/document generates leads)
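A toy sketch of the cycle's data flow, using the phase outputs listed above (nothing here is operational doctrine):

```python
# Phase -> primary output, as listed above; Disseminate feeds the next Find.
F3EAD = {
    "Find":        "target nomination",
    "Fix":         "targeting package",
    "Finish":      "captured materials",
    "Exploit":     "new leads",
    "Analyze":     "updated intelligence picture",
    "Disseminate": "next target nomination",
}

def run_cycle(iterations):
    """Walk the self-sustaining loop: the last output is the next input."""
    for n in range(1, iterations + 1):
        for phase, output in F3EAD.items():
            print(f"cycle {n}: {phase:<11} -> {output}")

run_cycle(2)
```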
Multi-Source Intelligence Fusion
Eight INT Types
| INT Type | Source | Strengths | Weaknesses |
|---|---|---|---|
| HUMINT | Recruited agents, interviews | Intent, motivations, insider knowledge | Deception risk, limited scale |
| SIGINT | Intercepted communications | High volume, real-time, hard to fake | Encryption, legal constraints |
| IMINT | Satellite/aerial imagery | Objective physical evidence, geo-located | Interpretation ambiguity, expensive |
| OSINT | Public media, social media | Legal, scalable, diverse perspectives | Information overload, provenance challenges |
| GEOINT | IMINT + mapping | Context, change detection, pattern analysis | Requires GIS expertise, data volume |
| FININT | Banking, transactions | Tracks money flows, identifies networks | Secrecy jurisdictions, crypto challenges |
| TECHINT | Foreign equipment analysis | Capabilities assessment | Requires specialized expertise |
| MASINT | Sensors (radar, acoustic) | Detect events without human/comms | Highly technical, expensive |
Three Fusion Levels
Level 1: Data-Level Fusion (Low-Level)
- Combine raw data before feature extraction
- Example: Fuse satellite image with radar return
- Preserves maximum information
Level 2: Feature-Level Fusion (Mid-Level)
- Extract features from each source, then combine
- Example: Vehicle type (IMINT) + radio signature (SIGINT)
- Reduces data volume, handles asynchronous sources
Level 3: Decision-Level Fusion (High-Level)
- Each source produces independent assessment, then combine
- Example: HUMINT says "likely," IMINT says "unlikely"; fusion produces a weighted average (see sketch after this list)
- Can incorporate subjective judgments
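A sketch of the decision-level example above; the reliability weights are illustrative stand-ins, since no fixed weighting scheme is specified:

```python
def fuse_decisions(assessments):
    """Reliability-weighted average of per-source probabilities.
    `assessments` holds (probability, weight) pairs; the weights are an
    illustrative stand-in for source reliability, not fixed doctrine."""
    total = sum(weight for _, weight in assessments)
    return sum(p * weight for p, weight in assessments) / total

# HUMINT "likely" (~0.70), IMINT "unlikely" (~0.30), IMINT weighted higher.
print(round(fuse_decisions([(0.70, 0.4), (0.30, 0.6)]), 2))   # 0.46
```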
Contradictory Evidence Handling
Strategies:
- Discounting (reduce weight of less reliable source)
- Hypothesis expansion (maybe both correct under different interpretations)
- Seek adjudication (collect third source to break tie)
- Temporal explanation (situation changed between observations)
- Deception hypothesis (one source deliberately misled)
Common Pitfalls (Top 5)
| Mistake | Description | Mitigation |
|---|---|---|
| Confirmation bias | Seek confirming evidence, ignore disconfirming | ACH (forces testing all hypotheses), Devil's Advocacy |
| Anchoring | Over-rely on first information | Delay hypothesis formation, structured brainstorming |
| Groupthink | Pressure to conform | Red Cell, assign Devil's Advocate role, protect dissenters |
| Mirror imaging | Assume adversary thinks like you | Red Cell analysis, cultural expertise |
| Single-source reliance | Deception/error vulnerability | Multi-INT fusion, Admiralty Code rating |
Specific Warning: ACH Misapplication
Research finding (Fisher et al. 2008): ACH did NOT significantly reduce confirmation bias in experiments
Problem: Analysts worked DOWN columns (tested all evidence against one hypothesis), not ACROSS rows
Solution: Training plus process enforcement (work across rows). Even where the debiasing effect is questionable, the matrix retains transparency value as an audit trail
Quality Gates (When to Pause)
Before Analysis
- All relevant INT sources consulted?
- Sources rated with Admiralty Code?
- Assumptions explicitly identified?
- Alternative hypotheses generated (minimum 3)?
During Analysis
- ACH matrix updated as new evidence arrives?
- Disconfirming evidence actively sought?
- Red Cell/Devil's Advocate consulted?
- Contributory factors considered (not just proximate cause)?
Before Dissemination
- Peer review completed (minimum 3 reviewers)?
- Management review (policy implications, sourcing)?
- Tradecraft review (SATs applied correctly)?
- Confidence level and probability explicitly stated?
- Sources protected (if sensitive)?
Post-Dissemination
- Customer feedback obtained?
- Accuracy tracked (compare assessment to outcome)?
- Lessons learned documented?
- Failed predictions analyzed (what went wrong)?
Integration with Phronesis FCIP
ACH Implementation
- Matrix builder UI: Hypothesis generation, evidence entry, consistency coding
- Automated diagnostics: Highlight which evidence discriminates between hypotheses
- Sensitivity analysis: "What if?" calculator
- Export: ACH matrix → professional assessment report
Admiralty Code
- Source registry: Track reliability history (A-F rating)
- Information rating: Credibility assessment (1-6 rating) per piece of evidence
- Automated weighting: Downweight low-credibility evidence in analyses
- Audit trail: Every rating justified
Multi-Source Fusion
- INT type tagging: Classify evidence by collection discipline
- Contradiction reconciliation: Prompt analyst when sources conflict
- Bayesian updating: Confidence scores update as new evidence added
- Source network mapping: Detect circular reporting (Source A cites Source B who cites Source A); see the sketch below
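Circular reporting is a cycle in the citation graph, so a depth-first search finds it; the graph below is hypothetical:

```python
def find_circular_reporting(cites):
    """Depth-first search for a cycle in a source citation graph.
    `cites` maps each source to the sources it cites (hypothetical data)."""
    WHITE, GRAY, BLACK = 0, 1, 2
    color = {source: WHITE for source in cites}

    def dfs(node, path):
        color[node] = GRAY                       # on the current path
        for cited in cites.get(node, []):
            if color.get(cited, WHITE) == GRAY:  # back-edge: cycle found
                return path[path.index(cited):] + [cited]
            if color.get(cited, WHITE) == WHITE:
                found = dfs(cited, path + [cited])
                if found:
                    return found
        color[node] = BLACK                      # fully explored
        return None

    for source in list(cites):
        if color[source] == WHITE:
            cycle = dfs(source, [source])
            if cycle:
                return cycle
    return None

print(find_circular_reporting({"A": ["B"], "B": ["A"], "C": ["A"]}))  # ['A', 'B', 'A']
```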
Structured Techniques
- SAT library: Guided workflows for 66 techniques
- Technique selection wizard: Question type → recommended SATs
- Collaboration tools: Devil's Advocacy assignment, Red Cell mode
- Output templates: Standardized reporting for each SAT
WEP Enforcement
- Probability language templates: Force selection from WEP scale
- Confidence level tracking: High/Moderate/Low for every assessment
- Consistency checking: Flag if language inconsistent with evidence strength
US Intelligence Community Directive 203 (ICD 203)
Issued: January 2, 2015
Applies to: All 18 US IC agencies
Four Core Standards
- Objectivity: Base on available information, minimize biases, acknowledge uncertainties
- Political Independence: Not influenced by policymaker preferences
- Timeliness: Deliver when it can affect decisions
- Good Tradecraft: Apply structured techniques, challenge assumptions, seek disconfirming evidence
Nine Tradecraft Standards
- Analytic Standards of Objectivity and Independence
- Analytic Rigor (apply SATs, critical thinking)
- Bias Awareness (identify and mitigate cognitive biases)
- Collaboration (colleagues, other agencies, outside experts)
- Consistency (judgments logically consistent)
- Intellectual Rigor (depth, breadth, sophistication)
- Sourcing (cite sources, evaluate quality)
- Uncertainty and Confidence (explain basis)
- Validation (test against alternatives and new information)
CIA Red Cell Program
Established: September 12, 2001 (day after 9/11)
Mission:
- Challenge consensus intelligence judgments
- Provide adversarial perspective
- Generate "alternative analysis" on demand
Protection mechanisms:
- Red Cell analysts cannot be penalized for contrarian views
- Report directly to senior leadership
- Products labeled "ALTERNATIVE ANALYSIS - RED CELL"
Example products:
- "What If Jihadists Gained Access to Pakistan's Nuclear Weapons?" (2004)
- "How Al-Qa'ida Could Strike US Financial System" (2008)
Value: Stress-test assumptions, force consideration of "unthinkable" scenarios
Israeli Intelligence: Mahleket Bakara
Established: 1973 (after Yom Kippur War intelligence failure)
Yom Kippur War failure (October 1973):
- Israeli intelligence held firm belief ("the Conception") that Egypt would not attack without air superiority
- Dismissed mounting evidence of war preparations as bluff
- Result: Strategic surprise, initial losses
Reform: Department of Control (Mahleket Bakara) created
- Independent unit within Military Intelligence
- Must present alternative interpretations to leadership
- Access to same raw intelligence as Production Division
Key insight: Organizational structure matters more than individual brilliance. Intelligence failures often systemic, not just analyst error. Institutionalize dissent to prevent groupthink.
Resources and Standards
Foundational Texts
- Heuer, Richards J., Jr. Psychology of Intelligence Analysis (CIA, 1999)
- Heuer & Pherson. Structured Analytic Techniques (3rd ed., 2021) - 66 techniques
- Kent, Sherman. Strategic Intelligence for American World Policy (1949)
Doctrine
- US ODNI. Intelligence Community Directive 203: Analytic Standards (2015)
- NATO. AJP-2: Allied Joint Doctrine for Intelligence (2016)
- UK Cabinet Office. Professional Head of Intelligence Assessment Guidance (2010)
Intelligence Failures (Case Studies)
- Butler Review (2004): UK Iraq WMD failure
- 9/11 Commission Report (2004): Pre-9/11 intelligence failures
- Israeli Agranat Commission (1974): Yom Kippur War failure
Academic Research
- Fisher et al. "Is There an Empirical Basis for Analyst Training?" (2008) - ACH effectiveness critique
- Tetlock & Gardner. Superforecasting (2015) - Probabilistic forecasting alternative
Document Control
Version: Quick Start 1.0
Date: 2026-01-17
Source: 05-intelligence-analysis.md (48,001 bytes, ~497 lines)
Format: Reference card (printable)
Next steps: Read full methodology for:
- Detailed 66 SAT catalog with implementation guidance
- Multi-source fusion algorithms (Bayesian, Dempster-Shafer, Kalman Filter)
- F3EAD operational cycle detailed workflows
- Cognitive bias taxonomy and mitigation strategies
- Quality control mechanisms (peer review, tradecraft review, customer feedback)
- Intelligence cycle models (traditional vs. F3EAD comparison)
"Biases cannot be eliminated by training alone—only mitigated through structure and tools. Intelligence analysis is a professional discipline with standards and methods, not intuition."