
Methodology Comparison Matrix

Systematic comparative analysis of six professional investigation methodologies informing forensic intelligence platform design.

Complete | Methodologies | 18 January 2026 | 44 min read

Methodology Comparison Matrix for Forensic Intelligence

Document Version: 1.0
Date: January 2026
Purpose: Comprehensive comparative analysis of six professional investigation methodologies
Relevance: Informs design decisions for Phronesis FCIP v6.0 and forensic intelligence platforms


1. Overview and Purpose

This matrix provides a systematic comparison of six established investigation methodologies:

  1. Police Investigations (UK College of Policing, HOLMES2, IOPC)
  2. Investigative Journalism (ICIJ, OCCRP, ProPublica)
  3. Legal eDiscovery (EDRM, TAR 2.0, Cochrane-influenced systematic approaches)
  4. Regulatory Investigations (GMC, HCPC, NMC, BPS professional standards)
  5. Intelligence Analysis (CIA, UK JIC, NATO, 66 SATs, ACH)
  6. Academic Research (PRISMA, Cochrane, Grounded Theory, Thematic Analysis)

How to Use This Matrix

For System Design:

  • Identify requirements: Which methodology's standards apply to your use case?
  • Select tools: Which methodology's tooling best matches your investigation scale?
  • Design workflows: Which methodology's process structure fits your institutional context?

For Investigation Planning:

  • Match methodology to investigation type using the Decision Tree (Section 9)
  • Understand evidence requirements, quality control standards, timeline expectations
  • Anticipate resource needs (team size, tool costs, time to results)

For Quality Assurance:

  • Benchmark your processes against established standards
  • Identify gaps in current methodology
  • Justify methodological choices to stakeholders

2. Evidence Standards Comparison

2.1 Documentary vs. Testimonial Evidence Priority

| Methodology | Documentary Evidence | Testimonial Evidence | Hierarchy |
| --- | --- | --- | --- |
| Police | 🟡 Moderate priority - Contemporaneous records valued, but witness testimony central | 🟢 High priority - PEACE interviews, Cognitive Interview (46% recall increase) | Equal weight - Both critical, cross-verification required |
| Journalism | 🟢 Highest priority - Official docs systematically prioritized over testimonial | 🟡 Moderate priority - Multiple independent testimonies required for corroboration | Documentary > Testimonial - Explicit hierarchy |
| Legal eDiscovery | 🟢 Highest priority - Hash-certified, metadata-preserved, chain of custody | 🟡 Moderate - Depositions important but documentary evidence is core litigation evidence | Documentary > Testimonial - Legal admissibility priority |
| Regulatory | 🟢 High priority - Contemporaneous clinical/professional records strongest | 🟡 Moderate - Witness statements important but subject to bias assessment | Documentary > Testimonial - Contemporary records trump later accounts |
| Intelligence | 🟢 High priority - SIGINT, IMINT, GEOINT preferred (hard to fake) | 🟡 Variable - HUMINT critical but highest deception risk (Admiralty Code) | Multi-INT fusion - Contradictions require reconciliation |
| Academic | 🟢 High priority - Peer-reviewed sources, official records, archival materials | 🟡 Moderate - Interview data important but requires methodological rigor (IRR testing) | Depends on research design - Qualitative research may prioritize interviews |

Key Insight: Only Journalism and Legal eDiscovery explicitly prioritize documentary over testimonial evidence. Police and Regulatory balance both. Intelligence fuses multiple source types with explicit reliability ratings.
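The Admiralty Code mentioned above rates source reliability (A-F) and information credibility (1-6) on two independent axes: a usually reliable source can still report something doubtful, and vice versa. A minimal sketch of how a platform might represent such ratings; the class and field names are illustrative, not from any cited system:

```python
from dataclasses import dataclass

# NATO/Admiralty scales. The two axes are rated independently.
RELIABILITY = {"A": "Reliable", "B": "Usually reliable", "C": "Fairly reliable",
               "D": "Not usually reliable", "E": "Unreliable", "F": "Cannot be judged"}
CREDIBILITY = {1: "Confirmed", 2: "Probably true", 3: "Possibly true",
               4: "Doubtful", 5: "Improbable", 6: "Cannot be judged"}

@dataclass(frozen=True)
class SourceReport:
    source_id: str
    claim: str
    reliability: str  # A-F, about the source's track record
    credibility: int  # 1-6, about this specific piece of information

    def rating(self) -> str:
        """Combined rating, e.g. 'B2' = usually reliable source, probably true info."""
        return f"{self.reliability}{self.credibility}"

report = SourceReport("HUMINT-014", "Shipment left port on 12 May", "B", 2)
print(report.rating())  # B2
print(RELIABILITY[report.reliability], "/", CREDIBILITY[report.credibility])
```

Keeping the two ratings separate preserves the reconciliation step the matrix describes: a contradiction between a B2 report and an E5 report is resolved very differently from one between two B2 reports.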


2.2 Authentication Requirements

| Methodology | Authentication Standard | Chain of Custody | Digital Integrity | Admissibility Test |
| --- | --- | --- | --- | --- |
| Police | FBI 5-step protocol: Legal collection, detailed description, accurate ID, proper packaging, chain logs | Mandatory - Every transfer logged, signed, timestamped | SHA-256/MD5 hashing, write-blocking, NIST standards | Court admissible - FRE 901 authentication |
| Journalism | 3-step verification: Primary source authentication, multiple corroborations, expert validation | Documented - Provenance tracked, not legally required | Metadata preservation, forensic validation for leaks | Ethical standard - Not legally admissible, reputational risk |
| Legal eDiscovery | FRE 902(13)/(14) - Self-authenticating with certification; SHA-256 hash certification | Automated audit trails - Every access logged, immutable | Dual hash: Collection hash + Production hash; metadata preservation mandatory | Court admissible - Designed for litigation |
| Regulatory | Balance of probabilities - Contemporary records authenticated by custodian testimony | Less formal - Not criminal standard, but provenance tracked | Electronic health records: NHS audit trails, access logs | Tribunal standard - Civil burden, hearsay admissible |
| Intelligence | Admiralty Code - Source reliability (A-F) + Information credibility (1-6) rated independently | Classification controls - Access logs, compartmentalization | Technical validation (SIGINT decryption, IMINT geo-registration) | Not court-focused - Supports operational/policy decisions |
| Academic | Peer review + citation - Source verification, archival confirmation, replication data | Audit trail - Research data management plans, version control | Dataset hashing, repository DOIs (Zenodo, OSF), FAIR principles | Scholarly standard - Not legal admissibility |

Key Takeaway: Legal eDiscovery has the most mature digital authentication standards (hash certification, metadata preservation). Police follow similar standards for court use. Intelligence focuses on source reliability rather than legal admissibility. Journalism and Academic rely on ethical and scholarly standards rather than legal requirements.
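The dual-hash discipline noted for Legal eDiscovery (a collection hash plus a production hash) reduces to computing a SHA-256 digest at each custody stage and comparing them: matching digests demonstrate the file was not altered in between. A minimal sketch using Python's standard library; the filename and file contents are invented for illustration:

```python
import hashlib
import tempfile
from pathlib import Path

def sha256_digest(path: Path) -> str:
    """Stream the file in 1 MiB chunks so large evidence files never load whole into memory."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

# Collection hash: taken when the document is first preserved.
# Production hash: recomputed when the document is produced to the other party.
# Any byte-level change in between yields a different digest.
with tempfile.TemporaryDirectory() as d:
    evidence = Path(d) / "exhibit_0042.pdf"  # illustrative filename
    evidence.write_bytes(b"contemporaneous record")
    collection_hash = sha256_digest(evidence)
    production_hash = sha256_digest(evidence)
    print("intact" if collection_hash == production_hash else "ALTERED")
```

In practice the collection digest would be recorded in a hash manifest alongside the custody log, so any later recomputation can be checked against it.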


2.3 Burden and Standard of Proof

| Methodology | Standard of Proof | Burden | Presumption | Appeal/Review |
| --- | --- | --- | --- | --- |
| Police | Criminal: Beyond reasonable doubt (~95%+ certainty) | Prosecution bears burden throughout | Innocence presumed | CPS review, court trial, IOPC oversight |
| Journalism | Editorial: Confidence in veracity (defensible against libel) | Publication bears burden | No presumption | Editorial review, legal vetting, fact-checking |
| Legal eDiscovery | Civil: Preponderance of evidence (51%+) or Reasonable defensibility (for discovery disputes) | Producing party must show reasonable effort | No presumption of misconduct | Court sanctions for spoliation/bad faith |
| Regulatory | Civil: Balance of probabilities (51%+) | Regulator bears burden | No presumption of impairment | Panel appeals, judicial review |
| Intelligence | Analytic confidence levels (High/Moderate/Low) + Probability estimates (WEP) | Analyst bears burden to justify confidence | No presumption | Peer review, Red Cell challenge, senior review |
| Academic | Scholarly standards - Peer review consensus, replication | Researcher bears burden for claims | Null hypothesis (no effect presumed) | Peer review, post-publication critique, replication |

Critical Distinction: Criminal investigations (Police) require the highest certainty. Civil processes (Legal, Regulatory) use the balance of probabilities. Journalism and Intelligence use confidence-based frameworks. Academic research relies on statistical significance plus peer consensus.


3. Timeline Construction and Temporal Analysis

3.1 Methods and Tools

| Methodology | Timeline Method | Evidence Linking | Temporal Contradiction Detection | Tools |
| --- | --- | --- | --- | --- |
| Police | Chronologies (5WH framework) - What, Who, When, Where, Why, How structure | Every event linked to evidence - Witness statements, exhibits, CCTV timestamps | Gap analysis - Suspicious absences, timeline inconsistencies flag further inquiry | HOLMES2 Dynamic Reasoning Engine, MIR logs |
| Journalism | Document-driven timelines - Events extracted from authenticated documents, not assumptions | Bates-style referencing - Every claim cites specific document (Panama Papers: 11.5M docs) | Cross-document contradiction flagging - Automated + manual review for date conflicts | Apache Tika, Solr, Neo4j, Aleph (4B+ docs), Datashare |
| Legal eDiscovery | 8-step timeline process - Early construction, central repository, objective language, continuous updates | Bates numbers mandatory - Every event cites supporting documents with page/line references | Version tracking + Near-duplicate detection - Identify document evolution, flag inconsistent language changes | Relativity, Everlaw Storybuilder, CaseFleet, TimeMap |
| Regulatory | Three-stage temporal analysis - Facts proven → Standards breached → Current impairment (focus on present, not past) | Evidence bundles per allegation - Each finding tied to specific records, dates, witnesses | Temporal focus on patterns - One-off vs. repeated incidents, remediation timeline assessment | Case management systems (GMC, HCPC databases), Excel-based investigation logs |
| Intelligence | Chronologies as diagnostic SAT - Establish factual sequence, identify assumptions, detect deception | Multi-INT fusion timelines - SIGINT intercepts + IMINT timestamps + HUMINT accounts reconciled | F3EAD cycle exploitation - Each operation generates timeline leads for next cycle (hours, not weeks) | Intelligence databases, timeline analysis software, GEOINT platforms |
| Academic | Event history analysis - Qualitative timelines, process tracing, historical analysis | Citation to sources - Archival references, interview dates, document provenance | Thematic coding of temporal themes - Sequences, turning points, trajectories analyzed qualitatively | QDAS tools (NVivo timeline view, Atlas.ti network chronologies), qualitative timeline software |

Standout Practice: Legal eDiscovery's 8-step timeline process with mandatory Bates linking is the gold standard for audit trails. Intelligence's F3EAD cycle demonstrates the fastest iterative timeline construction (hours vs. weeks).


3.2 Temporal Contradiction Handling

| Methodology | Detection Approach | Resolution Process | Escalation |
| --- | --- | --- | --- |
| Police | Manual review + HOLMES2 automated flagging - Analysts review chronologies, system flags timeline gaps | Investigative action - Collect additional evidence, re-interview witnesses, challenge suspects | SIO review - Senior Investigating Officer assesses significance, adjusts strategy |
| Journalism | Multi-layered verification - Editor review, fact-checker validation, legal vetting before publication | Seek additional sources - Do not publish contradiction unless explained or acknowledged | Editorial decision - Publish with caveat ("conflicting accounts exist") or withhold until resolved |
| Legal eDiscovery | Near-duplicate detection + version diff analysis - Software flags temporal inconsistencies automatically | Discovery dispute - Present contradiction to opposing party, request explanation or admit inconsistency | Court resolution - Judge rules on admissibility, weight of contradictory evidence |
| Regulatory | Case examiner review - Dual decision-makers (professional + lay) assess contradiction's impact on findings | Registrant response - Provide opportunity to explain discrepancies before final hearing | Tribunal decision - Panel weighs credibility, may find some facts proved, others not |
| Intelligence | ACH matrix evaluation - Contradiction = diagnostic evidence (eliminates hypotheses) | Multi-INT reconciliation - Check if temporal difference due to source perspective, timing, or deception | Red Cell review - Alternative explanations for contradiction examined by contrarian team |
| Academic | Triangulation - Compare accounts across sources (documents, interviews, observations) | Reflexive analysis - Researcher acknowledges contradiction, explores meaning (not error to eliminate) | Peer review - Reviewers assess researcher's handling of contradictory data |

Key Difference: Intelligence treats contradictions as diagnostic (actively useful for hypothesis elimination). Police and Legal treat contradictions as investigative leads (triggers for additional evidence gathering). Academic treats contradictions as analytical richness (reveals complexity, not necessarily error).
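The ACH treatment of contradictions as diagnostic can be illustrated with a toy consistency matrix: each evidence item is marked consistent (C), inconsistent (I), or neutral (N) against each hypothesis, and hypotheses that accumulate inconsistencies are eliminated rather than explained away. The hypotheses, evidence items, and markings below are invented for illustration:

```python
# Toy ACH matrix: rows = evidence items, columns = hypotheses.
# "C" = consistent, "I" = inconsistent, "N" = neutral/not applicable.
# ACH rejects hypotheses that evidence contradicts, instead of picking
# the hypothesis with the most support -- contradictions do the work.
hypotheses = ["H1: deliberate", "H2: accidental", "H3: deception"]
matrix = {
    "Timestamped CCTV places subject elsewhere":  ["I", "C", "C"],
    "Contemporaneous email shows prior planning": ["C", "I", "C"],
    "Witness account conflicts with stated times": ["N", "N", "C"],
}

def surviving(matrix, hypotheses):
    """Keep only hypotheses with zero inconsistencies against the evidence."""
    inconsistencies = {h: 0 for h in hypotheses}
    for row in matrix.values():
        for h, mark in zip(hypotheses, row):
            if mark == "I":
                inconsistencies[h] += 1
    return [h for h, count in inconsistencies.items() if count == 0]

print(surviving(matrix, hypotheses))  # ['H3: deception']
```

A real ACH workflow adds source weighting and sensitivity analysis, but the elimination logic is the same: a single solid inconsistency is more diagnostic than many consistent entries.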


4. Quality Control and Peer Review

4.1 Review Structures

| Methodology | Review Stages | Minimum Reviewers | Independence Requirement | Statistical Validation |
| --- | --- | --- | --- | --- |
| Police | 3-tier: First-line supervision (daily) → Second-line (strategic) → Senior management (major incidents) | Minimum 2 - Action supervisor + case officer; Major incidents: 3+ (SIO, Deputy SIO, external review) | PIP-level independence - Reviewers must be higher PIP level than investigator | Not statistical - Professional judgment, case law precedent |
| Journalism | 4-stage: Reporter → Desk editor → Senior editor → Legal review (pre-publication) | Minimum 3 - Reporter + 2 editors; Investigative projects: 5+ (+ fact-checker, lawyer) | Editorial independence - Editors not involved in story development review final product | 3-source rule - Significant claims require 3 independent corroborations |
| Legal eDiscovery | Multi-phase QC: First-pass review → QC sample (5-10%) → Partner/Senior Counsel review → Court scrutiny | Minimum 2 - Reviewer + QC sampler; TAR: Statistical validation required | Independent QC - Different attorney from reviewer samples for quality | Yes - TAR validation: Precision/recall testing, elusion testing, confidence intervals |
| Regulatory | Dual decision-makers + tribunal: Case examiner pair (professional + lay) → ICP/Hearing panel (3-5 members) | Minimum 2 - Case examiners; Hearing: 3 (registrant member, lay member, legal chair) | Lay member independence - Non-professional perspective mandatory | Balance of probabilities - Not statistical, but explicit confidence assessment |
| Intelligence | 4-layer review: Peer review (tradecraft) → Management review (policy) → Red Cell (contrarian) → Customer feedback | Minimum 3 independent reviewers (research finding for reliable QC) | Red Cell independence - Protected from retaliation for contrarian views | ICD 203 standards - Confidence levels (High/Moderate/Low) required, no quantitative validation |
| Academic | 2-3 rounds: Peer review (2-3 reviewers, blinded) → Editorial decision → Post-publication critique | Minimum 2 peer reviewers - Anonymous, expert in field | Double-blind review - Reviewers and authors mutually anonymous (not always enforced) | Yes - IRR testing: Cohen's Kappa ≥0.60 acceptable, ≥0.70 preferred for qualitative coding |

Gold Standard: Intelligence's minimum 3 independent reviewers is research-backed. Academic's Cohen's Kappa β‰₯0.70 provides quantitative reliability measure. Legal's statistical TAR validation demonstrates defensible AI-assisted review.
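The statistical TAR validation referenced above (precision/recall testing, elusion testing) comes down to a few ratios computed over coded samples: precision and recall describe the documents the model marked responsive, while elusion measures how many responsive documents slipped into the discard pile. A minimal sketch with invented sample counts:

```python
def tar_validation(true_pos, false_pos, false_neg, elusion_hits, elusion_sample):
    """
    precision: of documents the model marked responsive, how many truly were.
    recall:    of all truly responsive documents, how many the model found.
    elusion:   responsive rate inside the discard pile (should be near zero).
    """
    precision = true_pos / (true_pos + false_pos)
    recall = true_pos / (true_pos + false_neg)
    elusion = elusion_hits / elusion_sample
    return precision, recall, elusion

# Invented counts for illustration: 1,000 docs coded in the validation
# sample, plus a 500-doc random sample drawn from the discard pile.
p, r, e = tar_validation(true_pos=870, false_pos=130, false_neg=95,
                         elusion_hits=3, elusion_sample=500)
print(f"precision={p:.2%} recall={r:.2%} elusion={e:.2%}")
```

Defensibility in practice also requires confidence intervals around these point estimates, since each is computed from a sample rather than the full corpus.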


4.2 Inter-Rater Reliability Standards

| Methodology | IRR Measure | Acceptable Threshold | Testing Phase | Remediation if Below Threshold |
| --- | --- | --- | --- | --- |
| Police | Supervisor agreement - Qualitative assessment, not quantified | Professional consensus - No formal threshold | Ongoing supervision, spot checks | Retraining, reassignment, SIO intervention |
| Journalism | Editorial consensus - Qualitative, no statistical measure | Editorial judgment - No formal threshold | Pre-publication editorial review | Additional reporting, fact-checking, legal review |
| Legal eDiscovery | QC sampling: % agreement on responsiveness, privilege coding | ≥95% agreement for privilege; ≥90% for responsiveness | QC phase (5-10% sample after initial review) | Re-review batch, retrain reviewers, adjust guidelines |
| Regulatory | Case examiner agreement - Dual review, disagreements escalated | Consensus required - If disagreement, third examiner or ICP decides | Case examiner review stage | Additional evidence sought, legal advice, escalation to panel |
| Intelligence | Peer review consensus - Qualitative, no statistical measure | Tradecraft compliance - ICD 203 standards, no numerical threshold | Peer review before dissemination | Red Cell challenge, additional analysis, alternative hypothesis development |
| Academic | Cohen's Kappa - Statistical IRR for qualitative coding | ≥0.60 acceptable; ≥0.70 preferred; ≥0.80 excellent | Pilot coding (10-20% of corpus) | Refine codebook, retrain coders, re-pilot until threshold met |

Standout: Academic research uses quantitative IRR testing (Cohen's Kappa), providing objective reliability measure. Legal eDiscovery's β‰₯95% privilege agreement reflects high-stakes nature of attorney-client privilege. Intelligence's tradecraft review emphasizes methodological compliance over numerical thresholds.
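Cohen's Kappa corrects raw percent agreement for the agreement two coders would reach by chance: kappa = (p_o - p_e) / (1 - p_e), where p_o is observed agreement and p_e is the chance agreement implied by each coder's label frequencies. A minimal two-coder sketch; the codes and excerpts are invented:

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """kappa = (observed agreement - chance agreement) / (1 - chance agreement)."""
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Chance agreement: sum over labels of the product of each coder's
    # marginal probability of using that label.
    freq_a, freq_b = Counter(coder_a), Counter(coder_b)
    chance = sum((freq_a[label] / n) * (freq_b[label] / n)
                 for label in set(coder_a) | set(coder_b))
    return (observed - chance) / (1 - chance)

# Two coders labelling 10 excerpts with an invented two-code scheme.
a = ["harm", "harm", "neutral", "harm", "neutral",
     "harm", "harm", "neutral", "harm", "harm"]
b = ["harm", "harm", "neutral", "harm", "harm",
     "harm", "harm", "neutral", "harm", "neutral"]
kappa = cohens_kappa(a, b)
print(round(kappa, 2))  # falls below the 0.70 threshold -> refine codebook, re-pilot
```

Note how the chance correction bites: these coders agree on 8 of 10 items (80% raw agreement), yet kappa lands well under 0.70 because both use "harm" so often that much of that agreement is expected by chance.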


5. Bias Mitigation Techniques

5.1 Structured Approaches

| Methodology | Primary Bias | Mitigation Technique | Timing | Effectiveness Evidence |
| --- | --- | --- | --- | --- |
| Police | Confirmation bias - Seeking evidence confirming suspect guilt | CPIA reasonable lines of enquiry - Must pursue evidence towards AND away from suspect | Throughout investigation - Legal obligation from arrest to trial | Moderate - CPIA violations common in miscarriages of justice; structural requirement doesn't eliminate bias |
| Journalism | Confirmation bias - Story angle driving evidence selection | Hypothesis-based framework - Separate facts from assumptions, state testable hypotheses | Story planning stage - Before document review | Moderate-High - Editorial oversight + legal review catch bias; Panama Papers scale (370+ journalists) dilutes individual bias |
| Legal eDiscovery | Selection bias - Producing only favorable documents | TAR 2.0/CAL transparency - Algorithm ranks all docs, opposing party can challenge methodology | Review phase - Continuous active learning adjusts to reviewer feedback | High - Court-validated, statistically tested; opposing party oversight creates adversarial check |
| Regulatory | Professional loyalty bias - Regulators sympathetic to registrant profession | Dual decision-makers - Professional member + lay member required for balance | Case examiner + tribunal stages | Moderate-High - Lay perspective mitigates professional bias; "real prospect test" gatekeeping reduces frivolous cases |
| Intelligence | Confirmation bias, mirror imaging, groupthink | 66 Structured Analytic Techniques (SATs) - ACH, Devil's Advocacy, Red Cell, Pre-mortem | Analysis phase - Multiple SATs applied per assessment | Moderate - Research (Fisher 2008) found no empirical basis for ACH reducing bias; value is transparency, not debiasing |
| Academic | Confirmation bias, researcher expectations | Reflexivity + Audit trails - Researcher acknowledges positioning, documents all decisions | Throughout research - Memoing during data collection/analysis | Moderate - Transparency aids detection by reviewers, but doesn't eliminate bias; replication studies rare |

Critical Finding: Intelligence research (Fisher et al., 2008) challenges the assumption that structured techniques eliminate bias: "Biases cannot be eliminated by training alone—only mitigated through structure and tools." The value lies in transparency (making reasoning auditable) rather than debiasing (eliminating cognitive distortions).


5.2 When Applied and Institutional Protection

| Methodology | Bias Mitigation Timing | Institutional Protection for Contrarian Views | Documentation Requirement |
| --- | --- | --- | --- |
| Police | Before charge decision - CPS review for prosecution bias; Pre-trial - Disclosure of unused material | IOPC independence - External oversight for serious cases; officers can report concerns | Policy File (Major Incidents) - SIO documents all strategic decisions and rationale |
| Journalism | Editorial review cycles - Multiple stages pre-publication | Editorial independence policies - Protect reporters from advertiser/owner pressure | Story memos - Reporters document sourcing, verification, editorial discussions |
| Legal eDiscovery | QC sampling throughout review - Statistical checks catch systematic bias | Court oversight - Judges sanction bad faith discovery conduct | Privilege logs, TAR validation reports - Methodology documented for court scrutiny |
| Regulatory | Case examiner stage - Before tribunal hearing | Lay member independence - Non-professionals cannot be professionally biased | Investigation reports - Evidence, rationale, alternative explanations documented |
| Intelligence | Before dissemination - Peer review, management review, Red Cell challenge | Red Cell cannot be penalized - Institutional protection for contrarian analysis | Analytic line - Assumptions, evidence, alternative hypotheses, confidence levels documented |
| Academic | Peer review pre-publication - Blinded reviewers challenge methodology, interpretations | Academic freedom - Tenure protects researchers from institutional pressure | Audit trails - Research data, coding decisions, reflexive memos, methodological rationale |

Institutional Protection Matters: Intelligence's Red Cell (established Sept 12, 2001) and Israeli Mahleket Bakara (post-Yom Kippur War 1973) demonstrate that organizational structure matters more than individual analyst training. Protecting dissenters from retaliation is critical for bias mitigation.


6. Scale Capabilities and Automation

6.1 Volume Handled and Automation Level

| Methodology | Typical Volume | Maximum Demonstrated Scale | Automation Level | Tool Maturity | Processing Time |
| --- | --- | --- | --- | --- | --- |
| Police | Volume crime: 10-100 docs per case; Major incidents: 1,000-10,000+ docs (HOLMES2) | Stephen Lawrence Inquiry: 100,000+ documents reviewed | Medium - HOLMES2 automated action tracking, timeline construction, pattern detection | High - HOLMES2 mature, cloud-based, 20+ years development | Volume: days-weeks; Major: months-years |
| Journalism | Investigative projects: 100,000-11.5M documents (Panama Papers: 2.6TB) | Pandora Papers: 11.9M documents, 2.9TB (600+ journalists, 150 orgs) | High - Apache Tika extraction, Solr search, Neo4j network analysis, Aleph cross-reference (4B+ docs) | High - ICIJ infrastructure battle-tested, OCCRP Aleph mature | Large projects: 12-18 months for 11M+ docs |
| Legal eDiscovery | Small: <10,000 docs; Medium: 10K-1M; Large: 1M-10M+ | Largest cases: 10M+ documents (antitrust, securities fraud) | Very High - TAR 2.0/CAL (40-60% review reduction), near-duplicate detection, email threading, privilege AI (60-80% time reduction) | Very High - Relativity, Everlaw, DISCO mature, court-validated | Small: days; Medium: weeks-months; Large: 3-12 months |
| Regulatory | Simple: 10-100 docs; Moderate: 100-1,000; Complex: 1,000-10,000 | GMC fitness-to-practice cases: rarely exceed 10,000 documents | Low - Case management systems track workflow, but analysis is manual | Medium - Regulator databases functional but not cutting-edge | Simple: 2-4 months; Complex: 12-24 months |
| Intelligence | Tactical (F3EAD): 100-1,000 docs per cycle; Strategic assessments: 1,000-100,000 sources | NSA PRISM: Billions of communications monitored (collection, not analysis) | Very High - SIGINT automated collection/processing, GEOINT automated change detection, OSINT AI scraping | High - National security budgets enable cutting-edge tech, but classified | F3EAD cycle: hours-days; Strategic: weeks-months |
| Academic | Small qualitative: 10-50 interviews/documents; Systematic reviews: 100-10,000 studies | Cochrane reviews: 10,000+ studies screened (PRISMA) | Medium - QDAS tools (NVivo, Atlas.ti, MAXQDA) for coding; Covidence/Rayyan for systematic review screening | High - QDAS tools mature, but manual coding predominates | Qualitative: 6-18 months; Systematic reviews: 12-24 months |

Scale Champions:

  • Journalism handles largest volumes of heterogeneous documents (11.9M, 2.9TB Pandora Papers).
  • Legal eDiscovery has most mature AI-assisted review (TAR 2.0 court-validated, 40-60% reduction).
  • Intelligence achieves the fastest turnaround (F3EAD cycle: hours) but often works with smaller analytical volumes (collection ≠ analysis).

6.2 Cost-Benefit Analysis by Scale

| Investigation Scale | Recommended Methodology | Rationale | Typical Cost | Time to Results |
| --- | --- | --- | --- | --- |
| Small (<1,000 docs) | Regulatory/Academic - Manual review feasible, structured coding provides rigor | Linear review reasonable; TAR/AI overkill for this scale | $5K-$20K - Investigator time, basic tools (Excel, Word) | 1-3 months |
| Medium (1K-10K docs) | Police/Journalism hybrid - HOLMES2-style case management + document-driven timelines | Systematic workflow essential; automation helpful but not mandatory | $20K-$100K - Case management system, dedicated team (2-5 investigators) | 3-6 months |
| Large (10K-100K docs) | Legal eDiscovery - TAR 2.0, email threading, near-duplicate detection mandatory | Manual review infeasible; AI-assisted review defensible and cost-effective | $100K-$500K - eDiscovery platform license, review team (5-20 attorneys), QC | 6-12 months |
| Very Large (100K-10M+ docs) | Journalism (Aleph/Datashare) or Legal (TAR 2.0) - Network analysis, entity extraction, AI prioritization | Only methodologies proven at this scale; academic/regulatory approaches collapse | $500K-$5M+ - Platform infrastructure, large team (20-100+), computational resources | 12-24 months |

Critical Threshold: 10,000 documents is the inflection point where manual review becomes infeasible and AI-assisted review becomes cost-effective. Legal eDiscovery research indicates TAR 2.0 saves 40-60% of review costs on cases exceeding 10K documents.
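The inflection point can be made concrete with back-of-envelope arithmetic. The review speed, billing rate, and eyes-on fraction below are illustrative assumptions, not figures from this matrix; only the 40-60% reduction range comes from the TAR research cited above:

```python
def review_cost(n_docs, docs_per_hour, hourly_rate, fraction_reviewed=1.0):
    """Linear cost model: hours of eyes-on review multiplied by the billing rate."""
    hours = (n_docs * fraction_reviewed) / docs_per_hour
    return hours * hourly_rate

N = 50_000  # a "large" matter by the table above
# Assumed rates: 50 docs/hour per reviewer at $60/hour (illustrative only).
manual = review_cost(N, docs_per_hour=50, hourly_rate=60)
# TAR-assisted: assume eyes-on review of half the corpus after ranking,
# the midpoint of the 40-60% reduction range cited in the text.
tar = review_cost(N, docs_per_hour=50, hourly_rate=60, fraction_reviewed=0.5)
print(f"manual ~${manual:,.0f}, TAR-assisted ~${tar:,.0f}")
```

Under these assumptions the absolute saving scales linearly with corpus size, which is why the economics flip somewhere around the 10K-document mark once platform and validation overheads are amortized.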


7. Legal Admissibility and Standards Compliance

7.1 Standards Bodies and Court Validation

| Methodology | Standards Body | Court Acceptance | Audit Trail Requirements | Expert Testimony Standard |
| --- | --- | --- | --- | --- |
| Police | College of Policing (UK), IACP (International), FBI (US) | Criminal courts worldwide - PACE, CPIA compliance required for admissibility | Mandatory - Every action, decision, interview logged; Policy File for major incidents | FRE 702/Daubert (US), Ikarian Reefer (UK) - Expert must state methodology, limitations |
| Journalism | No legal standard - Ethical guidelines (SPJ Code, Reuters Trust Principles) but not court-validated | Not court evidence - Used for libel defense ("truth" or "public interest"), not direct admissibility | Ethical obligation - Source protection, fact-checking logs, editorial review notes | Not applicable - Journalists generally not expert witnesses (shield laws) |
| Legal eDiscovery | EDRM (145 countries), Sedona Conference (thought leadership), FRE/FRCP (US), CPR (UK) | Highest court acceptance - TAR 2.0 endorsed, though not compelled, over linear review (Hyles v. NYC 2016), FRE 502(b) safe harbor | Extensive - Hash certification, metadata preservation, privilege logs, production logs, TAR validation reports | FRE 702 + FRE 902(14) - Forensic experts certify data integrity, ESI self-authenticates with certification |
| Regulatory | GMC, HCPC, NMC, BPS (UK), NPDB, FSMB (US) - Professional regulators, not courts | Tribunal standard - Balance of probabilities, civil rules; judicial review possible but rare | Moderate - Investigation reports, case examiner rationale, hearing transcripts | Expert witnesses - Clinical/professional standards experts common, independence required |
| Intelligence | ODNI ICD 203 (US), JIC standards (UK), NATO AJP-2 (alliance) | Not court-focused - Supports policy/military decisions, rarely admissible in court (state secrets) | Classified audit trails - Analytic line, source protection, dissemination controls | Not applicable - Intelligence assessments not for courtroom use |
| Academic | APA, ASA, BPS (discipline-specific), COPE, ICMJE (publication ethics) | Expert testimony - Researchers as expert witnesses; research findings cited in amicus briefs | Scholarly standard - Research data repositories, IRB approvals, replication datasets (increasingly required) | Daubert/Frye - Scientific methodology must be peer-reviewed, testable, accepted in field |

Court-Ready Methodologies: Legal eDiscovery is designed for court admissibility (FRE 902(14), hash certification). Police evidence follows criminal court standards (PACE, CPIA). Regulatory uses civil tribunal standards (balance of probabilities). Intelligence and Journalism are not court-focused (they serve operational and editorial decisions). Academic work is used indirectly (expert witnesses, cited research).


7.2 Precedent Cases and Validation

| Methodology | Landmark Cases/Validation Events | Impact on Standards | Current Status (2026) |
| --- | --- | --- | --- |
| Police | Zubulake v. UBS Warburg (2004) - Duty to preserve when litigation anticipated; Stephen Lawrence Inquiry (1999) - Racism in Met Police, led to PACE reforms | CPIA 1996 obligations strengthened; IOPC established 2018 (replaced IPCC); Body-worn cameras mandated | Mature - Standards stable, incremental improvements (AI video analysis emerging) |
| Journalism | Pentagon Papers (1971) - First Amendment protects publication of classified docs; Panama Papers (2016) - 11.5M docs, no successful libel suits | Established "public interest" defense; validated massive leak journalism methodologies | Evolving - ICIJ model now standard for cross-border investigations; AI-assisted document review growing |
| Legal eDiscovery | Da Silva Moore v. Publicis (2012) - First TAR 1.0 approval; Rio Tinto v. Vale (2015) - TAR 2.0/CAL approved; Hyles v. NYC (2016) - TAR endorsed over linear review but not compelled | TAR now industry standard; AI-assisted review defensible; Proportionality amendment (FRCP 2015) | Mature and evolving - TAR 2.0 standard practice; Generative AI (GPT-4) entering eDiscovery (2023-2026) |
| Regulatory | GMC v. Meadow (2006) - Expert witness overreach; Bawa-Garba (2018) - Manslaughter conviction of doctor, led to "learning not blaming" culture shift | Expert evidence standards tightened; "Real prospect test" clarified; Apology legislation (2019) - saying sorry doesn't admit liability | Stable - 2025-2026 HCPC policy shift away from "last resort" language for striking-off; otherwise stable standards |
| Intelligence | 9/11 Commission (2004) - Intelligence failures led to ODNI creation, ICD 203; Iraq WMD (Butler Review 2004) - UK JIC reforms, Red Teaming institutionalized | ICD 203 tradecraft standards mandated (2015); Red Cell/Mahleket Bakara models adopted widely | Mature - Standards stable; AI/ML integration ongoing but classified; OSINT growing (80-90% of intel in some domains) |
| Academic | Reproducibility Crisis (2010s) - Psychology, social sciences replication failures; PRISMA 2020 - Updated reporting standards | Pre-registration mandated by journals; Open data requirements; IRR reporting standard | Evolving - PRISMA extensions (NMA, QES) in development; AI-assisted coding emerging but controversial |

Trend: All methodologies show increasing automation and AI integration (TAR 2.0 in Legal, OSINT in Intelligence, QDAS in Academic). Transparency and replicability are growing requirements across all domains (PRISMA, ICD 203, CPIA disclosure).


8. Strengths and Weaknesses Matrix

8.1 Comparative Assessment

| Dimension | Police | Journalism | Legal eDiscovery | Regulatory | Intelligence | Academic |
| --- | --- | --- | --- | --- | --- | --- |
| Evidence Authentication | 🟢 Very Strong - FBI 5-step, write-blocking, hash certification | 🟡 Moderate - Ethical rigor but not legal standard | 🟢 Very Strong - FRE 902, dual hash, metadata preservation | 🟡 Moderate - Contemporary records strong, but hearsay admissible | 🟢 Strong - Technical validation (SIGINT, IMINT) but classified | 🟡 Moderate - Peer review, citation, but not legal authentication |
| Timeline Construction | 🟢 Strong - HOLMES2 automated, 5WH framework | 🟢 Strong - Document-driven, Bates-style referencing | 🟢 Very Strong - 8-step process, mandatory evidence linking | 🟡 Moderate - Manual construction, focus on current impairment | 🟢 Strong - F3EAD cycle (hours), multi-INT fusion | 🟡 Moderate - Qualitative timelines, no standardized process |
| Bias Mitigation | 🟡 Moderate - CPIA obligations, but violations common | 🟢 Strong - Editorial layers, 3-source rule, legal review | 🟢 Very Strong - TAR transparency, opposing party oversight | 🟢 Strong - Dual decision-makers (professional + lay) | 🟡 Moderate - SATs transparency, but no empirical debiasing proof | 🟡 Moderate - Reflexivity, audit trails, but individual researcher bias |
| Quality Control | 🟢 Strong - 3-tier review, IOPC oversight | 🟢 Strong - 4-stage editorial review | 🟢 Very Strong - QC sampling (≥95% agreement), statistical validation | 🟢 Strong - Dual case examiners, tribunal review | 🟢 Very Strong - Minimum 3 reviewers, Red Cell challenge | 🟢 Strong - Peer review, Cohen's Kappa ≥0.70 |
| Scalability | 🟡 Moderate - HOLMES2 handles 10K docs well, but manual analysis limits scale | 🟢 Very Strong - Proven at 11.9M docs (Pandora Papers) | 🟢 Very Strong - TAR 2.0 handles 10M+ docs, 40-60% reduction | 🔴 Weak - Manual review limits to <10K docs typically | 🟢 Strong - F3EAD cycle handles large collection volumes, but analysis often smaller | 🟡 Moderate - Systematic reviews handle 10K studies, but qualitative limited to <100 |
| Speed | 🟡 Moderate - Volume crime: weeks; Major incidents: months-years | 🔴 Slow - Large projects: 12-18 months | 🟡 Moderate - Small: days; Large: 6-12 months | 🔴 Slow - Simple: 2-4 months; Complex: 12-24 months | 🟢 Very Fast - F3EAD: hours-days (tactical); Strategic: weeks-months | 🔴 Slow - Qualitative: 6-18 months; Systematic reviews: 12-24 months |
| Legal Defensibility | 🟢 Very Strong - Criminal court standards (beyond reasonable doubt) | 🔴 Weak - Not court-focused, editorial/ethical standards | 🟢 Very Strong - Designed for court admissibility (FRE, FRCP) | 🟢 Strong - Civil tribunal standards (balance of probabilities) | 🔴 Weak - Operational/policy focus, rarely court-admissible | 🟡 Moderate - Expert testimony accepted, but not legal evidence standard |
| Cost Efficiency | 🟡 Moderate - Public funding, but major incidents expensive (£millions) | 🔴 Expensive - Large projects require consortia (Panama Papers: 370+ journalists) | 🔴 Very Expensive - eDiscovery platforms + attorney review ($100K-$5M+) | 🟢 Strong - Public sector funding, smaller scale cases | 🔴 Very Expensive - National security budgets, classified infrastructure | 🟢 Strong - Academic funding models, lower labor costs (grad students) |
| Transparency | 🟡 Moderate - CPIA disclosure required, but Policy Files restricted | 🟢 Strong - Methodology published, sources protected but process transparent | 🟢 Very Strong - Privilege logs, TAR validation, production logs all auditable | 🟢 Strong - Investigation reports, tribunal transcripts public | 🔴 Weak - Classified analytic lines, sources protected (but ICD 203 requires internal transparency) | 🟢 Very Strong - Peer review, replication data, audit trails, open access |
| Tool Maturity | 🟢 Strong - HOLMES2 mature (20+ years), cloud-based | 🟢 Strong - ICIJ/OCCRP tools battle-tested, Aleph 4B+ docs | 🟢 Very Strong - Relativity, Everlaw, DISCO mature, court-validated | 🟡 Moderate - Case management functional but not cutting-edge | 🟢 Strong - National security budgets enable advanced tech (but classified) | 🟢 Strong - NVivo, Atlas.ti, MAXQDA mature, PRISMA/Cochrane frameworks established |

Legend:

  • 🟒 = Strong capability (competitive advantage)
  • 🟑 = Moderate capability (adequate but not exceptional)
  • πŸ”΄ = Weak capability (limitation or disadvantage)

8.2 Summary Scorecard

Methodology | Overall Strengths | Critical Weaknesses | Best Use Case Match
Police | Evidence authentication, quality control, legal defensibility | Bias mitigation (CPIA violations common), speed (major incidents years), cost | Criminal investigations - When court admissibility and chain of custody critical
Journalism | Scalability (11M+ docs), transparency, editorial review | Speed (12-18 months), cost (consortia required), legal defensibility (not court-focused) | Large-scale leaks, public interest investigations - When volume is massive and publication goal
Legal eDiscovery | Scalability, automation (TAR 2.0), legal defensibility, quality control (statistical) | Cost (most expensive), tool complexity | Litigation, regulatory enforcement - When court admissibility and AI-assisted review needed
Regulatory | Bias mitigation (dual decision-makers), quality control, cost efficiency | Scalability (limited to <10K docs), speed (12-24 months), tool maturity | Professional misconduct, standards violations - When current impairment assessment required
Intelligence | Speed (F3EAD: hours), quality control (3+ reviewers, Red Cell), multi-INT fusion | Legal defensibility (not court-focused), transparency (classified), bias mitigation (no empirical proof) | Operational intelligence, time-sensitive analysis - When speed and multi-source fusion critical
Academic | Transparency (peer review, open data), quality control (IRR testing), tool maturity (QDAS) | Speed (6-24 months), scalability (qualitative <100), legal defensibility | Systematic reviews, qualitative research - When peer-reviewed rigor and replicability essential

9. Decision Tree: Which Methodology for Which Investigation?

9.1 Primary Decision Factors

START: What is the investigation goal?

1. COURT ADMISSIBILITY REQUIRED?
   β”œβ”€ YES β†’ Use POLICE or LEGAL eDISCOVERY
   β”‚         β”œβ”€ Criminal prosecution? β†’ POLICE (beyond reasonable doubt)
   β”‚         └─ Civil litigation? β†’ LEGAL eDISCOVERY (balance of probabilities)
   └─ NO β†’ Continue to #2

2. DOCUMENT VOLUME?
   β”œβ”€ <1,000 docs β†’ REGULATORY or ACADEMIC (manual review feasible)
   β”œβ”€ 1K-10K docs β†’ POLICE or JOURNALISM (case management essential)
   β”œβ”€ 10K-100K docs β†’ LEGAL eDISCOVERY (TAR 2.0 mandatory)
   └─ >100K docs β†’ JOURNALISM (Aleph/Datashare) or LEGAL (TAR)

3. TIME SENSITIVITY?
   β”œβ”€ Hours-days β†’ INTELLIGENCE (F3EAD cycle)
   β”œβ”€ Weeks-months β†’ POLICE or LEGAL eDISCOVERY
   └─ Not time-sensitive β†’ REGULATORY or ACADEMIC

4. BUDGET?
   β”œβ”€ <$20K β†’ REGULATORY or ACADEMIC (public/grant funding)
   β”œβ”€ $20K-$100K β†’ POLICE (public funding)
   β”œβ”€ $100K-$500K β†’ LEGAL eDISCOVERY (mid-sized case)
   └─ >$500K β†’ JOURNALISM (consortium) or LEGAL (large case)

5. PROFESSIONAL STANDARDS ASSESSMENT?
   β”œβ”€ YES β†’ REGULATORY (GMC, HCPC, NMC, BPS frameworks)
   └─ NO β†’ Continue to #6

6. PEER-REVIEWED RIGOR REQUIRED?
   β”œβ”€ YES β†’ ACADEMIC (PRISMA, Cochrane, IRR testing)
   └─ NO β†’ Default to POLICE or JOURNALISM
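The decision tree above can be sketched as a single selection function. This is one possible linearization, not the official Phronesis logic: the factor order (court admissibility, then volume, then time sensitivity, standards, and peer review) follows the tree, the budget branch is omitted because it overlaps the other factors, and all names and thresholds are illustrative.

```python
def select_methodology(court: bool = False, criminal: bool = False,
                       docs: int = 0, hours_matter: bool = False,
                       standards_assessment: bool = False,
                       peer_review: bool = False) -> str:
    """Return a primary methodology recommendation for an investigation."""
    if court:                                   # 1. Court admissibility required?
        return "Police" if criminal else "Legal eDiscovery"
    if docs > 100_000:                          # 2. Document volume
        return "Journalism or Legal eDiscovery (TAR)"
    if docs > 10_000:
        return "Legal eDiscovery (TAR 2.0)"
    if hours_matter:                            # 3. Time sensitivity
        return "Intelligence (F3EAD)"
    if standards_assessment:                    # 5. Professional standards?
        return "Regulatory"
    if peer_review:                             # 6. Peer-reviewed rigor?
        return "Academic"
    return "Police or Journalism"               # Default

print(select_methodology(court=True, criminal=True))    # Police
print(select_methodology(docs=50_000))                  # Legal eDiscovery (TAR 2.0)
```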

9.2 Use Case Matching Table

Investigation Type | Primary Methodology | Secondary (Hybrid) | Rationale
Criminal investigation | Police | + Legal eDiscovery (if digital evidence volume >10K) | Court admissibility, chain of custody, beyond reasonable doubt standard
Civil litigation discovery | Legal eDiscovery | + Police (if criminal crossover) | Court admissibility, TAR 2.0 efficiency, statistical validation
Regulatory enforcement (SEC, FDA, FCA) | Legal eDiscovery | + Regulatory (professional standards) | Document volume typically >10K, civil standard, court admissibility
Professional misconduct (medical, legal, psychological) | Regulatory | + Academic (systematic review for precedent) | Current impairment focus, professional standards, balance of probabilities
Institutional accountability (corruption, abuse) | Journalism | + Academic (qualitative analysis for patterns) | Public interest, transparency, large document volumes (leaks)
National security threat assessment | Intelligence | + Academic (open-source research) | Speed (F3EAD), multi-INT fusion, operational focus
Systematic evidence synthesis | Academic | + Journalism (if large document corpus) | Peer-reviewed rigor, PRISMA transparency, IRR testing
Internal corporate investigation | Legal eDiscovery | + Police (if criminal referral likely) | Document volume management, attorney-client privilege, audit trail
Historical analysis (truth commissions) | Academic | + Journalism (document-driven narratives) | Long timescales, qualitative depth, peer review
Complaint triage (initial assessment) | Regulatory | + Intelligence (ACH for complex cases) | Fast decision-making, balance of probabilities, real prospect test

10. Integration Recommendations for Phronesis FCIP

10.1 Hybrid Methodology Framework

Phronesis FCIP should adopt a hybrid methodology integrating strengths across domains:

Investigation Phase | Primary Methodology | Rationale | Phronesis Implementation
Document Assembly | Legal eDiscovery (EDRM) + Academic (PRISMA) | Systematic identification, transparent selection, deduplication | investigate.rs orchestration: PRISMA flow diagram, hash-based deduplication, metadata preservation
Timeline Construction | Legal eDiscovery (8-step) + Police (5WH) | Evidence linking mandatory, objective language, temporal contradiction detection | temporal.rs engine: Bates-style references, event-document linking, gap analysis, version tracking
Contradiction Detection | Intelligence (ACH) + Legal (near-duplicate detection) | Hypothesis testing, multi-document comparison, version diff analysis | S.A.M. contradiction engine: 8 types (TEMPORAL, EVIDENTIARY, MODALITY_SHIFT, etc.) with ACH integration
Bias Mitigation | Intelligence (SATs, Red Cell) + Regulatory (dual decision-makers) | Structured techniques, contrarian review, institutional protection for dissenters | bias.rs engine: 66 SATs catalog, Red Cell mode (alternative explanations), transparency dashboard
Quality Control | Academic (Cohen's Kappa β‰₯0.70) + Legal (QC sampling) | Statistical IRR, peer review, audit trails | qc.rs module: Sample findings for human review, calculate precision/recall, version control all changes
Source Reliability | Intelligence (Admiralty Code) + Journalism (evidence hierarchy) | Source rating (A-F) + information credibility (1-6), documentary > testimonial | evidence.rs: Source reliability ratings, information credibility assessment, provenance tracking
Automation | Legal (TAR 2.0/CAL) + Journalism (Aleph network analysis) | AI-assisted review (40-60% reduction), entity extraction, network graphs | AI prioritization: Continuous active learning ranks documents, entity relationship networks, concept clustering
Professional Standards | Regulatory (GMC/HCPC/NMC/BPS) | Standards mapping, current impairment assessment, contributory factors analysis | professional.rs engine: Code violation detection, severity assessment, remediation tracking
Reporting | Academic (Framework Method) + Intelligence (ICD 203) | Matrix-based analysis (case x theme), confidence levels (High/Moderate/Low), Words of Estimative Probability | Export module: Framework matrices, confidence assessments, PRISMA diagrams, audit package with hashes
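The Source Reliability row above rates sources (A-F) and information credibility (1-6) independently, following the Admiralty Code. A minimal sketch of how that pairing could be represented; the class and descriptor names are illustrative, not the evidence.rs schema, though the A-F and 1-6 scales are the standard NATO ones.

```python
from dataclasses import dataclass

# Standard Admiralty Code descriptors: reliability is about the source's
# track record; credibility is about this specific report.
RELIABILITY = {"A": "Completely reliable", "B": "Usually reliable",
               "C": "Fairly reliable", "D": "Not usually reliable",
               "E": "Unreliable", "F": "Reliability cannot be judged"}
CREDIBILITY = {1: "Confirmed by other sources", 2: "Probably true",
               3: "Possibly true", 4: "Doubtful",
               5: "Improbable", 6: "Truth cannot be judged"}

@dataclass(frozen=True)
class SourceRating:
    reliability: str   # A-F, rated independently of the report's content
    credibility: int   # 1-6, rated independently of the source's history

    def __post_init__(self):
        if self.reliability not in RELIABILITY or self.credibility not in CREDIBILITY:
            raise ValueError("invalid Admiralty rating")

    def label(self) -> str:
        return f"{self.reliability}{self.credibility}"

# A historically reliable source reporting an as-yet-unconfirmed claim:
print(SourceRating("A", 3).label())  # A3
```

Because the two axes are independent, an A-rated source can still carry a 5-rated (improbable) report, and vice versa.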

10.2 Recommended Tool Stack

Function | Tool Category | Specific Recommendation | Rationale
Case Management | Police-inspired | HOLMES2-style architecture (cases, documents, actions, entities, timelines) | Proven at 10K+ docs, systematic workflow
Document Processing | Legal eDiscovery | De-duplication (hash-based), metadata extraction, OCR, email threading | EDRM standards, court-validated
AI Review | Legal eDiscovery | TAR 2.0/Continuous Active Learning | 40-60% review reduction, court-approved
Timeline Engine | Legal + Police | 8-step process with HOLMES2-style timeline view | Evidence linking mandatory, gap analysis
Network Analysis | Journalism | Neo4j-style entity relationship graphs | Proven at Panama Papers scale (11.5M docs)
Contradiction Detection | Intelligence + Legal | ACH matrix + near-duplicate detection + version diff | Hypothesis testing + automated flagging
Coding Framework | Academic | NVivo/Atlas.ti-inspired hierarchical coding with Framework matrices | Systematic qualitative analysis, IRR testing
Quality Control | Academic + Legal | Cohen's Kappa calculation + QC sampling dashboard | Statistical reliability + spot-check validation
Reporting | Academic | PRISMA flow diagram + Framework matrix export + confidence levels | Transparency, auditability, peer review-ready

10.3 Feature Priority Matrix

Feature | Priority | Methodology Source | Implementation Phase | Complexity
Hash certification (SHA-256) | πŸ”΄ Critical | Legal eDiscovery | Phase 1 (Core) | Low
Timeline with evidence linking | πŸ”΄ Critical | Legal + Police | Phase 1 (Core) | Medium
S.A.M. contradiction detection (8 types) | πŸ”΄ Critical | Intelligence + Legal | Phase 1 (Core) | High
Document de-duplication | πŸ”΄ Critical | Legal eDiscovery | Phase 1 (Core) | Low
Metadata preservation | πŸ”΄ Critical | Legal eDiscovery | Phase 1 (Core) | Low
Admiralty Code source ratings | 🟠 High | Intelligence | Phase 2 (Enhanced) | Low
ACH matrix implementation | 🟠 High | Intelligence | Phase 2 (Enhanced) | High
TAR 2.0/CAL prioritization | 🟠 High | Legal eDiscovery | Phase 2 (Enhanced) | Very High
Framework matrix export | 🟠 High | Academic | Phase 2 (Enhanced) | Medium
Entity extraction + network graphs | 🟑 Medium | Journalism | Phase 3 (Advanced) | High
Near-duplicate detection | 🟑 Medium | Legal eDiscovery | Phase 3 (Advanced) | Medium
Cohen's Kappa IRR calculation | 🟑 Medium | Academic | Phase 3 (Advanced) | Low
Red Cell alternative analysis | 🟑 Medium | Intelligence | Phase 3 (Advanced) | Medium
Professional standards mapping | 🟑 Medium | Regulatory | Phase 2 (Enhanced) | Medium
PRISMA flow diagram generation | 🟒 Nice-to-have | Academic | Phase 4 (Polish) | Low
Words of Estimative Probability | 🟒 Nice-to-have | Intelligence | Phase 4 (Polish) | Low

Legend:

  • πŸ”΄ Critical = Core functionality, must-have for MVP
  • 🟠 High = Important for professional use, implement soon
  • 🟑 Medium = Valuable enhancement, implement when resources permit
  • 🟒 Nice-to-have = Refinement, polish, future roadmap

11. Cost and Resource Requirements

11.1 Typical Team Composition by Methodology

Methodology | Small Investigation | Medium Investigation | Large Investigation | Specialized Roles
Police | 1-2 officers (volume crime) | 5-10 officers + SIO + analyst | 20-50+ officers + MIR team (SIO, Deputy, analysts, indexers, disclosure officer) | Forensic specialists, intelligence officers, family liaison
Journalism | 1-3 reporters + editor | 5-15 reporters + editors + fact-checker | 100-600+ reporters (Panama Papers: 370+) + editors + data analysts | Data journalists, digital forensics, legal advisors
Legal eDiscovery | 2-5 attorneys (review + QC) | 10-50 attorneys + project manager + IT support | 50-200+ attorneys + managed review teams + forensic experts | eDiscovery consultants, forensic examiners, privilege specialists
Regulatory | 1 case officer + 2 case examiners | 2-3 investigators + expert witness | 5-10 investigators + multiple experts + legal advisor | Clinical/professional experts, legal advisors, HR specialists
Intelligence | 3-5 analysts + manager (minimum 3 reviewers for QC) | 10-20 analysts + collection management + targeteers | 50-100+ analysts + collectors + all-source fusion + Red Cell | SIGINT analysts, GEOINT specialists, HUMINT handlers, targeteers
Academic | 1-2 researchers (PhD students) + supervisor | 3-5 researchers + PI + research assistants | 10-20 researchers (systematic review team) + librarian + statistician | Research assistants, coders, statisticians, methodologists

Key Insight: Intelligence requires a minimum of 3 independent reviewers for reliable quality control (research-backed). Legal eDiscovery scales to the largest teams (200+ attorneys for 10M+ doc cases). Academic is the most efficient for small-scale investigations (1-2 researchers).


11.2 Time to Results

Methodology | Small (<1K docs) | Medium (1K-10K) | Large (10K-100K) | Very Large (>100K)
Police | 2-4 weeks (volume crime) | 2-6 months (serious crime) | 6-18 months (major incident) | 12-36 months (complex fraud)
Journalism | 1-3 months (single-story investigation) | 6-12 months (investigative series) | 12-18 months (cross-border project) | 12-24 months (Panama Papers: 12 months)
Legal eDiscovery | 1-2 weeks (TAR on small set) | 1-3 months (medium case) | 3-12 months (large litigation) | 12-24 months (10M+ docs, complex)
Regulatory | 2-4 months (simple case) | 6-12 months (moderate case) | 12-18 months (complex case) | Rare (regulatory cases rarely exceed 10K docs)
Intelligence | Hours-days (F3EAD tactical cycle) | Weeks (operational intelligence) | 1-6 months (strategic assessment) | Not applicable (collection β‰  analysis)
Academic | 3-6 months (small qualitative study) | 6-12 months (moderate study) | 12-18 months (large qualitative/systematic review) | 18-36 months (very large systematic review)

Fastest: Intelligence (F3EAD: hours-days for tactical). Slowest: Regulatory and Academic (12-36 months for complex). Most scalable: Legal eDiscovery (TAR 2.0 handles 10M+ docs in 12-24 months).


11.3 Tool Costs (2026 Estimates)

Tool Category | Free/Open Source | Mid-Tier Commercial | Enterprise/Premium | Typical Use Case
Case Management | Excel, Google Sheets ($0) | Airtable, Notion ($10-20/user/month) | HOLMES2, Relativity ($50K-$500K/year) | Small: Free; Medium: Mid-tier; Large: Enterprise
Document Processing | Tika, Tesseract OCR ($0) | Nuix, Exterro ($50K-$200K/year) | Relativity Processing (included in platform) | Small: Free; Medium-Large: Commercial
eDiscovery Platform | None viable | Logikcull ($250-$10K/month) | Relativity, Everlaw, DISCO ($100K-$5M/year) | Small: Logikcull; Large: Relativity/Everlaw
QDAS (Qualitative) | QualCoder, Taguette ($0) | Dedoose ($10-30/month) | NVivo, Atlas.ti, MAXQDA ($600-$1,500/license) | Small: Free; Medium-Large: NVivo/Atlas
Network Analysis | Gephi, Cytoscape ($0) | NodeXL ($500/year) | Palantir, i2 Analyst's Notebook ($10K-$100K/year) | Small-Medium: Gephi; Large: Palantir (gov/corp only)
Systematic Review | Rayyan (free tier) ($0-$1K/year) | Covidence ($1K-$5K/year) | DistillerSR, Nested Knowledge ($5K-$20K/year) | Small: Rayyan; Medium-Large: Covidence
Timeline Software | Excel, Google Sheets ($0) | Aeon Timeline, TimelineJS ($50-$100) | CaseFleet, TimeMap ($500-$5K/year) | Small: Excel; Medium-Large: CaseFleet

Cost-Effective Stack for <$5K:

  • Case management: Airtable ($240/year)
  • Document processing: Tika + Tesseract (free)
  • QDAS: NVivo ($1,500 one-time)
  • Network analysis: Gephi (free)
  • Systematic review: Rayyan (free tier)
  • Timeline: Excel/Google Sheets (free)
  • Total: ~$2,000 (viable for small investigations)

Enterprise Stack for Large Investigations ($100K-$1M):

  • eDiscovery platform: Relativity ($200K-$500K/year)
  • Processing: Nuix ($100K/year)
  • Network analysis: i2 Analyst's Notebook ($50K/year)
  • Systematic review: Covidence ($5K/year)
  • Total: $355K-$655K/year (large-scale, court-focused)

12. Hybrid Approach Recommendations

12.1 Common Hybrid Patterns

Hybrid Combination | Use Case | Strengths Combined | Example
Police + Legal eDiscovery | Criminal investigation with large digital evidence volume (>10K docs) | Court admissibility (Police) + TAR 2.0 efficiency (Legal) | Fraud investigation: Police chain of custody + eDiscovery TAR review
Journalism + Academic | Investigative research projects, historical analysis | Large-scale document handling (Journalism) + peer-reviewed rigor (Academic) | Truth commission: Journalism document assembly + Academic qualitative analysis
Legal + Intelligence | Complex civil litigation requiring rapid prioritization | Court admissibility (Legal) + ACH hypothesis testing (Intelligence) | Securities fraud: Legal TAR + Intelligence ACH for multiple defendant theories
Regulatory + Academic | Professional standards violations with precedent research | Current impairment assessment (Regulatory) + systematic evidence synthesis (Academic) | Medical misconduct: Regulatory case + Academic systematic review of prior cases
Intelligence + Police | Counter-terrorism, organized crime | F3EAD speed (Intelligence) + Court admissibility (Police) | Counter-terror: Intelligence F3EAD targeting + Police arrest/prosecution evidence
Journalism + Legal | High-profile public interest litigation | Transparency (Journalism) + Legal defensibility (Legal) | Whistleblower case: Journalism leak analysis + Legal eDiscovery for court

Most Common Hybrid: Police + Legal eDiscovery (digital evidence in criminal cases). Most Effective: Journalism + Academic (combines scale with rigor for research-focused investigations).


12.2 When to Combine Methodologies

Combine methodologies when:

  1. Volume exceeds methodology's typical scale

    • Example: Regulatory case with >10K documents β†’ Add Legal eDiscovery TAR 2.0
  2. Multiple goals require different standards

    • Example: Investigation for both internal discipline (Regulatory) AND criminal prosecution (Police)
  3. Speed and rigor both critical

    • Example: Safeguarding investigation requiring F3EAD speed (Intelligence) + court admissibility (Police)
  4. Cross-border or multi-jurisdictional

    • Example: Journalism collaboration (ICIJ model) + Legal eDiscovery (discovery coordination)
  5. Public interest + legal accountability

    • Example: Journalism transparency (public reporting) + Legal defensibility (litigation)

Don't combine when:

  • Single methodology already handles scale and requirements (avoid unnecessary complexity)
  • Methodologies have conflicting standards (e.g., Journalism source protection vs. Police disclosure obligations)
  • Budget insufficient for dual tooling

13. Conclusion and Key Takeaways

13.1 Methodology Selection Summary

If you need...

  • Criminal court admissibility β†’ Police (beyond reasonable doubt, PACE/CPIA compliance)
  • Civil court admissibility β†’ Legal eDiscovery (balance of probabilities, TAR validated)
  • Large-scale document handling (>100K) β†’ Journalism (Aleph 4B+ docs) or Legal (TAR 2.0)
  • Fastest results (hours-days) β†’ Intelligence (F3EAD cycle)
  • Professional standards assessment β†’ Regulatory (GMC/HCPC/NMC/BPS)
  • Peer-reviewed rigor β†’ Academic (PRISMA, Cochrane, IRR β‰₯0.70)
  • Transparency and public accountability β†’ Journalism (editorial review, publication) or Academic (open data)
  • Cost-efficiency (<$20K) β†’ Regulatory or Academic (public/grant funding)
  • Bias mitigation with institutional protection β†’ Intelligence (Red Cell, minimum 3 reviewers)
  • Statistical validation of findings β†’ Legal (TAR precision/recall) or Academic (Cohen's Kappa)

13.2 Critical Insights for Forensic Intelligence Platforms

  1. No single methodology handles all requirements. Phronesis FCIP must adopt a hybrid approach integrating:

    • Legal eDiscovery (evidence authentication, timeline construction, TAR automation)
    • Intelligence (bias mitigation via SATs, ACH, Red Cell, Admiralty Code)
    • Academic (quality control via IRR testing, Framework Method, audit trails)
    • Regulatory (professional standards, current impairment assessment)
  2. Roughly 10,000 documents is the inflection point where manual review becomes infeasible and AI-assisted review (TAR 2.0) becomes cost-effective (40-60% reduction).

  3. Transparency β‰  Debiasing. Intelligence research (Fisher 2008) found no empirical basis for SATs eliminating bias. Their value lies in making reasoning auditable, not bias-free. Combine them with peer review and contrarian challenge (Red Cell).

  4. A minimum of 3 independent reviewers is required for reliable quality control (Intelligence finding, research-backed).

  5. Evidence linking is non-negotiable. Every timeline event, contradiction, and finding must cite supporting documents (Bates numbers, page/line references). Legal eDiscovery's 8-step timeline and Journalism's document-driven approach are the gold standards.

  6. Hash certification (SHA-256) + metadata preservation are baseline requirements for evidence integrity. Legal eDiscovery standards (FRE 902(14), dual hash) should be adopted universally.

  7. Source reliability (Admiralty Code) is independent of information credibility. A reliable source can report bad information (it may have been deceived); an unreliable source can report true information (the broken-clock problem). Rate the two independently.

  8. Words of Estimative Probability (WEP) should replace vague language ("likely" means 60-80%, not whatever the reader assumes). Separate probability (likelihood of the event) from confidence (quality of the evidence).

  9. Court validation matters. Legal eDiscovery (TAR judicially endorsed in cases such as Hyles v. NYC 2016) and Police (PACE/CPIA compliance) have the strongest legal defensibility. If court admissibility is possible, design for it from the start.

  10. Tool maturity varies dramatically. Legal eDiscovery (Relativity, Everlaw), Journalism (Aleph), Academic (NVivo, Atlas.ti) have mature, battle-tested tools. Regulatory lags significantly (Excel-based case management common).
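A WEP lookup (insight 8 above) can be sketched as a small banded table. The "likely" band follows the document's 60-80% convention; the other band edges are illustrative defaults, since exact thresholds vary between agencies. Note how the final line reports probability and confidence separately.

```python
# Upper bounds (exclusive) for each estimative term; β‰₯0.95 is "almost certain".
WEP_BANDS = [(0.05, "almost no chance"), (0.20, "very unlikely"),
             (0.45, "unlikely"), (0.60, "roughly even chance"),
             (0.80, "likely"), (0.95, "very likely")]

def wep(p: float) -> str:
    """Map a probability estimate to its estimative-language term."""
    for upper, term in WEP_BANDS:
        if p < upper:
            return term
    return "almost certain"

# Probability (likelihood of the event) and confidence (quality of the
# evidence) are distinct axes and are reported side by side:
print(f"{wep(0.70)} (moderate confidence)")  # likely (moderate confidence)
```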


13.3 Implementation Priorities for Phronesis FCIP

Phase 1 (Core - MVP):

  1. Hash certification (SHA-256) for evidence integrity
  2. Timeline engine with mandatory evidence linking (Legal 8-step + Police 5WH)
  3. S.A.M. contradiction detection (8 types: TEMPORAL, EVIDENTIARY, MODALITY_SHIFT, etc.)
  4. Document de-duplication (hash-based, EDRM standard)
  5. Metadata preservation (Legal eDiscovery standard)
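Phase 1 items 1 and 4 combine naturally: each document receives a SHA-256 content hash at intake, and exact duplicates collapse to one canonical copy while every ingest path stays on the audit trail. A minimal stdlib sketch; the function and field names are illustrative, not the Phronesis schema.

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Content hash recorded at intake for later integrity checks."""
    return hashlib.sha256(data).hexdigest()

def deduplicate(docs: dict[str, bytes]) -> dict[str, list[str]]:
    """Map each unique content hash to every path that carried that content."""
    index: dict[str, list[str]] = {}
    for path, data in docs.items():
        index.setdefault(sha256_of(data), []).append(path)
    return index

corpus = {
    "inbox/report_v1.pdf": b"final report",
    "archive/report_copy.pdf": b"final report",   # exact duplicate
    "inbox/memo.txt": b"meeting memo",
}
index = deduplicate(corpus)
print(len(index))  # 2 unique documents; duplicate paths preserved for the audit trail
```

Re-hashing a document later and comparing against the recorded digest is the integrity check that hash certification enables.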

Phase 2 (Enhanced - Professional Use):

  6. Admiralty Code source reliability ratings (Intelligence)
  7. ACH matrix implementation (Intelligence hypothesis testing)
  8. Professional standards mapping (Regulatory: GMC/HCPC/NMC/BPS)
  9. Framework matrix export (Academic: case x theme analysis)
  10. TAR 2.0/Continuous Active Learning prioritization (Legal eDiscovery)
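The continuous active learning loop behind Phase 2's TAR 2.0 item works by always reviewing the currently top-ranked documents, folding each coding decision back into the model. A toy sketch: a real system uses a proper classifier (e.g. logistic regression over text features); the word-overlap scorer and the tiny corpus here are invented to keep the sketch dependency-free.

```python
def score(doc: str, relevant: list[str]) -> float:
    """Toy relevance score: word overlap with documents already coded relevant."""
    rel_words = set(w for d in relevant for w in d.split())
    words = set(doc.split())
    return len(words & rel_words) / (len(words) or 1)

def cal_loop(docs: dict[str, str], seed_relevant: list[str],
             review, batch: int = 2, rounds: int = 3) -> list[str]:
    """Continuous active learning: rank, review the top batch, retrain, repeat."""
    relevant = list(seed_relevant)
    unreviewed = dict(docs)
    for _ in range(rounds):
        if not unreviewed:
            break
        ranked = sorted(unreviewed, key=lambda k: score(unreviewed[k], relevant),
                        reverse=True)
        for doc_id in ranked[:batch]:
            if review(unreviewed[doc_id]):        # human coding decision
                relevant.append(unreviewed[doc_id])
            del unreviewed[doc_id]
    return relevant

docs = {f"d{i}": text for i, text in enumerate(
    ["wire transfer offshore account", "lunch menu options",
     "offshore account opened", "holiday schedule", "transfer approved offshore"])}
found = cal_loop(docs, seed_relevant=["offshore wire transfer"],
                 review=lambda d: "offshore" in d)
print(len(found))  # 4: the seed plus the three offshore docs, surfaced first
```

The review-effort saving comes from stopping the loop once the recall target is met, leaving the low-ranked tail unreviewed.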

Phase 3 (Advanced - Specialized Features):

  11. Entity extraction + network graphs (Journalism: Neo4j-style)
  12. Near-duplicate detection + version diff (Legal eDiscovery)
  13. Cohen's Kappa IRR calculation (Academic quality control)
  14. Red Cell alternative analysis mode (Intelligence contrarian review)
  15. Quality control dashboard with precision/recall metrics
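The Cohen's Kappa calculation in Phase 3 uses the standard formula ΞΊ = (p_o - p_e) / (1 - p_e), observed agreement corrected for chance agreement, checked against the document's Academic threshold of ΞΊ β‰₯ 0.70. A minimal two-rater sketch with invented coding data:

```python
from collections import Counter

def cohens_kappa(rater_a: list[str], rater_b: list[str]) -> float:
    """Cohen's Kappa for two raters coding the same items."""
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n        # observed agreement
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    p_e = sum(counts_a[c] * counts_b[c] for c in counts_a) / n**2  # chance agreement
    return (p_o - p_e) / (1 - p_e)

a = ["relevant", "relevant", "irrelevant", "relevant", "irrelevant",
     "relevant", "irrelevant", "relevant", "relevant", "irrelevant"]
b = ["relevant", "relevant", "irrelevant", "irrelevant", "irrelevant",
     "relevant", "irrelevant", "relevant", "relevant", "relevant"]
kappa = cohens_kappa(a, b)
# 80% raw agreement yields kappa β‰ˆ 0.58 here: below the 0.70 bar,
# so these coders would need to reconcile their codebook.
print(f"kappa = {kappa:.2f}, acceptable: {kappa >= 0.70}")
```

This is also why raw percent agreement is a poor QC metric: with skewed code distributions, chance agreement alone can look high.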

Phase 4 (Polish - Reporting Enhancements):

  16. PRISMA flow diagram generation (Academic transparency)
  17. Words of Estimative Probability templates (Intelligence reporting)
  18. Confidence level assessment framework (High/Moderate/Low)
  19. Audit package export (report + hashes + source docs)
  20. Framework matrix visualization (Academic Framework Method)


13.4 Final Recommendation

For Phronesis FCIP: Adopt Legal eDiscovery as architectural foundation (evidence authentication, timeline construction, document management) with targeted enhancements from other methodologies:

  • Intelligence β†’ Bias mitigation (SATs, ACH, Red Cell), source reliability (Admiralty Code)
  • Academic β†’ Quality control (IRR testing, Framework Method), transparency (audit trails)
  • Regulatory β†’ Professional standards (GMC/HCPC/NMC/BPS), current impairment assessment
  • Journalism β†’ Network analysis (entity relationships), large-scale document handling
  • Police β†’ Investigative workflow (5WH framework), chain of custody

This hybrid approach provides:

  • Legal defensibility (FRE 902(14), hash certification, metadata preservation)
  • Scalability (TAR 2.0 handles 10M+ docs)
  • Quality control (IRR testing, peer review, Red Cell challenge)
  • Transparency (audit trails, PRISMA diagrams, Framework matrices)
  • Professional standards compliance (Regulatory frameworks integrated)

Result: A forensic intelligence platform that combines court-validated rigor (Legal) with peer-reviewed quality (Academic) and operational speed (Intelligence), suitable for investigations from 100 documents to 10 million.


Document Version: 1.0
Last Updated: January 2026
Next Review: Annually or upon significant methodology updates (e.g., PRISMA extensions, new TAR court cases)
Maintained By: Phronesis FCIP Research Team
Purpose: Living reference document for methodology selection and system design decisions


Sources and References

All findings in this matrix are synthesized from the six comprehensive methodology research documents:

  1. Police Investigation Workflows - College of Policing, HOLMES2, PEACE, CPIA
  2. Investigative Journalism Methods - ICIJ, OCCRP, Panama Papers, Aleph
  3. Legal eDiscovery Workflows - EDRM, TAR 2.0, Cochrane-influenced approaches
  4. Regulatory Investigations - GMC, HCPC, NMC, BPS professional standards
  5. Intelligence Analysis Methods - CIA, UK JIC, 66 SATs, ACH, ICD 203
  6. Academic Research Methods - PRISMA 2020, Cochrane v6.5, Grounded Theory, QDAS

See individual methodology documents for full citations and source materials.