Intelligence agencies have introverted thinking. They adapt to new threats and technologies, but their entrenched attitudes and methods can make them a danger to themselves, to the public and to peace. Agencies look inwards to secrecy instead of out to the broader world; they struggle to avoid tunnel vision; they resist major organisational change; and the increasing use of AI to judge humans brings greater security but risks embedding the introversion still deeper.
Introverted thinking is one of the top 5 ethical challenges facing intelligence agencies.
February 2026. This article has been updated to cover changes in AI and related practices. Minor style changes have also been made.
Agencies trust secrets above public information
Intelligence researchers and analysts often ignore Open Source Intelligence (OSINT) – the kind of information available to anyone who knows where to look. Without OSINT, analysts can misjudge people and miss clues. (It also adds to tunnel vision – see below.)
The arguments against OSINT:
- “OSINT can’t be trusted.” OSINT is not trusted because its provenance (origin) has not been assessed for reliability, and performing those checks is slow.
- “OSINT is inconvenient.” Intelligence agencies operate behind electronic firewalls that block most OSINT. So, OSINT has to be accessed by different teams or from different parts of the building.
- “We’re already swamped with raw intelligence. We need to make better use of what we have, not add more.” However, the secretly collected intelligence covers only part of the overall picture, and by itself leaves major gaps.
- “Our opponents also have OSINT.” Yes, and they may use it against you. See Paranoia – a 2-page story of work overload and OSINT.
Introverted thinking has improved in the past.
Filtered OSINT has been fed into intelligence systems, and there are now separate research teams that specialise in OSINT.
Introverted thinking can be improved again. Improve the technologies and processes for assessing provenance. Actively question the validity of intelligence that relies primarily on secret sources. Use OSINT specialists from the private sector.
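As an illustration only – invented grades, weights and items, not any agency’s real tool – the sketch below shows one way a filtered OSINT feed could carry a provenance grade with it, loosely echoing the familiar source-reliability (A–F) and information-credibility (1–6) style of grading:

```python
# A minimal, hypothetical sketch of provenance grading for OSINT items.
# The scales loosely echo the familiar source-reliability (A-F) and
# information-credibility (1-6) grades; the weights and items are invented.
from dataclasses import dataclass

RELIABILITY = {"A": 1.0, "B": 0.8, "C": 0.6, "D": 0.4, "E": 0.2, "F": 0.0}
CREDIBILITY = {1: 1.0, 2: 0.8, 3: 0.6, 4: 0.4, 5: 0.2, 6: 0.0}

@dataclass
class OsintItem:
    summary: str
    source_reliability: str   # how dependable this source has been historically
    info_credibility: int     # how well this particular item is corroborated

def provenance_weight(item: OsintItem) -> float:
    """Combine the two grades into a single 0-1 weight an analyst can see."""
    return RELIABILITY[item.source_reliability] * CREDIBILITY[item.info_credibility]

items = [
    OsintItem("Shipping register entry", "B", 2),  # known registry, corroborated
    OsintItem("Anonymous forum post", "E", 5),     # unknown source, uncorroborated
]
for item in items:
    print(f"{item.summary}: provenance weight {provenance_weight(item):.2f}")
```

The point is not the arithmetic but the habit: every item arrives with an explicit statement of where it came from and how well it is corroborated, so analysts can question it rather than discard it unexamined.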
Disasters from tunnel vision – groupthink that feeds on introverted thinking
Faulty intelligence analysis and estimates have led to too many political and military disasters. Some of the blame lies with processes that were not followed, wrong actions or judgements, or ambiguous jargon. But underlying these mistakes there’s a common theme: people in agencies are prone to groupthink and a failure to challenge the big assumptions.
- Where was the warning that Saddam Hussein was intentionally lying about having weapons of mass destruction? The key thinkers ignored the null hypothesis – that he was bluffing about the weapons – and the failure was aggravated by a process error: relying heavily on the “evidence” of a single human source, who subsequently turned out to be unreliable.
Group think besets all large organisations, especially those with long histories and a resistance to recruiting experienced people from other areas. However, within the intelligence profession, people die when it goes wrong.
Introverted thinking has improved in the past. Intelligence analysis, estimation and strategic notice have come a long way since the 1990s. In 1996, improved practices were described by Michael Herman, a former senior GCHQ officer. Since then an academic discipline of intelligence studies has developed, with multiple books, conferences and university courses.
Introverted thinking can be improved again. The customers of intelligence product should be increasingly critical and demanding. It’s in the self-interest of the intelligence services to support this, and it’s something that can be provided independently by the academics and think tanks. It can also be provided by fiction writing, as on this website – see my stories.
Artificial Intelligence embeds introverted thinking
Artificial Intelligence is an essential part of processing raw intelligence, and it provides powerful research tools. For focussed tasks, it can outperform people. And for advanced technology states, it is cheaper than people.
AI in intelligence analysis can collect and process relevant research fast, draft reports, and provide new insights. With it, human analysts can achieve more, and much faster. The ethical impact lies in the risks. It’s not that there’ll be a Terminator (even the best AI is narrow-minded). The danger is that people trust it too readily. For example, AI snoops on all of us and identifies suspects to be put on “wolf lists”.
But:
- AI can be tricked, because it looks backwards at what’s happened before. There is nothing creative in its calculations.
- AI systems are built on human assumptions and models of behaviour, but these are simplifications, often biased towards western cultures and “normal” people. In intelligence, people who are atypical can be flagged as potential threats, while others who have learnt to pretend to be normal go undetected (a toy illustration follows below this list).
- AI systems are repeatedly tweaked to correct anomalies in their results. This can produce a labyrinth of distorted logic.
- Each AI system is focussed on a single measure of success. In the human world, success is dependent on many things, and we change our minds about what those should be. (Despite multiple films and stories, “General AI” doesn’t exist, even in the laboratory. AI is narrow.)
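To make the bias and single-measure points concrete, here is a toy illustration – an invented anomaly scorer with made-up numbers, not any real system. Trained only on “normal” behaviour and judged against a single threshold, it flags a harmless but atypical person and misses someone who has learnt to look normal:

```python
# A toy, hypothetical anomaly scorer: the behaviour data and threshold are invented.
# It knows only what "normal" looked like in its training sample.
import statistics

# Hours active online per night for the "normal" training population (invented).
normal_population = [0.5, 1.0, 0.0, 0.5, 1.5, 1.0, 0.5, 0.0, 1.0, 0.5]
mean = statistics.mean(normal_population)
stdev = statistics.stdev(normal_population)

def anomaly_score(hours_active_at_night: float) -> float:
    """Distance from the 'normal' baseline, in standard deviations."""
    return abs(hours_active_at_night - mean) / stdev

THRESHOLD = 3.0  # the single measure of success: flag anything beyond 3 sigma

people = {
    "night-shift nurse (harmless but atypical)": 7.0,
    "trained operative mimicking routine": 1.0,
}
for name, hours in people.items():
    flagged = anomaly_score(hours) > THRESHOLD
    print(f"{name}: score {anomaly_score(hours):.1f}, flagged: {flagged}")
```

The nurse scores as a glaring anomaly; the operative who mimics the training data sails through. Real systems are far more sophisticated, but the underlying failure mode is the same.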
See Person Lurking – a 2-page story of automated intelligence analysis.
Introverted thinking has improved in the past. The intelligence profession has repeatedly fluctuated between overdependence on technology and underplaying human intelligence. Events like 9/11 triggered adjustments.
Introverted thinking can be improved again, hopefully without a disaster to trigger the change. Build skills at countering AI and at understanding its failures, and build expertise that can counter technology more broadly. Encourage lateral thinking, situational awareness and diversity of thinking. (See Omand-2020 in Related Reading, below.)
Battlefield automation could trigger war
The speed of hypersonic missiles leaves very little time to verify a threat and to decide on a response.
The US military are reputed to be building a new system for Command, Control, Communications and Intelligence (C3I), and it will inevitably make extensive use of AI. For that and similar cases, it’s also inevitable that military forces across the world will research and build automated systems for situations where humans cannot make decisions fast enough. There are major ethical questions about using them other than experimentally, and legal questions about who is responsible for the consequences of the machine’s actions.
The technology would use automated intelligence collection, then apply research and analysis techniques built on predefined logic and “optimised” with AI.
- But what if key sensors send incorrect signals?
- Or the underlying logic of the AI is faulty?
- And what if the AI has been over-refined and takes an unexpected direction?
- Or there is a rogue human operator?
If a battlefield weapon fires incorrectly, then retaliation would be expected, and a tit-for-tat escalation could follow between machines. It would continue until the human commanders realised the mistake and worked out how to stop the sequence.
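A purely illustrative sketch of that feedback loop – invented retaliation logic and numbers, not any real C3I design – shows how two automated systems could trade strikes at machine speed before any commander steps in:

```python
# A hypothetical tit-for-tat loop between two automated battlefield systems.
# The retaliation rule, round counts and strike sizes are invented for illustration.

def automated_response(incoming_strike: int) -> int:
    """Predefined logic: answer every strike with one slightly larger."""
    return incoming_strike + 1

def escalation(first_faulty_strike: int, rounds_before_human_review: int) -> list[int]:
    """Exchange of strikes between two machines until humans intervene."""
    strikes = [first_faulty_strike]  # a faulty sensor or rogue order starts it
    while len(strikes) < rounds_before_human_review:
        strikes.append(automated_response(strikes[-1]))
    return strikes

# One incorrect firing, then five machine-speed rounds before commanders react.
print(escalation(first_faulty_strike=1, rounds_before_human_review=6))
# -> [1, 2, 3, 4, 5, 6]: each response larger than the last, none of it intended.
```

Each machine behaves exactly as designed; the escalation comes from the loop between them, which no single designer owns.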
That’s for battlefield weapons, which are close together. There’s also the nightmare that automated decision-making will be used for strategic nuclear weapons; but that technology has not been “perfected”, yet.
Introverted thinking has improved in the past. Case 1. The deployment of drones involved extensive attention to the legal, political and related ethical challenges. (Many would argue that the ethics have only been partially handled, but that is a distinct difference from no ethics at all.)
Case 2. Between conflicting nations there are communication channels for avoiding crises and defusing them when they occur. In the 1960s a “hot line” was established between the American and Soviet leaderships. There are now also channels between military commands, and for foreign affairs (international politics). Some of these are backchannels via intermediaries.
Case 3. There are norms of behaviour (codes of conduct) for states; and there are also norms for military commands. An example was when President Trump ordered a cruise missile attack on Syria, but first warned Russia.
Introverted thinking can be improved again. The same approaches can be adapted for automated intelligence processes. That includes hotlines, back channels, norms, careful testing of technologies, and legal constraints. There needs to be public awareness of the existence of these safeguards, because unlike during the Cold War we live in a socially connected world where rumours abound and there are players who actively spread misinformation.
Related reading on introverted thinking in intelligence agencies
Intelligence Power in Peace and War, by Michael Herman. Published by Cambridge University Press, 1996. ISBN 978-0521566360. Also available as a free PDF.
Securing the State, by Sir David Omand (a former director of GCHQ, who also served 7 years on the Joint Intelligence Committee). Published by Hurst, 2010. ISBN 978-1-84904-188-1.
How Spies Think, by Sir David Omand. Published by Viking, October 2020. ISBN 978-0-24138-518-0.
Rage Inside the Machine, by Robert Elliott Smith. Published by Bloomsbury Business, 2019. A thoughtful description of how AI works, and of its limitations. Robert Elliott Smith has been a pioneer in AI from the start.
The Perfect Weapon, by David E. Sanger. Published by Crown, 2019. ISBN 978-0-451-49789-5. A careful study of the use of cyber weapons by nations. David Sanger is a security columnist at the New York Times.

(Picture selection inspired by introverted thinking.)
