Intelligence agencies – shed the introverted thinking
1 December 2020
Intelligence agencies suffer from introverted thinking. They adapt to new threats and technologies, but their entrenched attitudes and methods are a danger to themselves, to the public and to peace. Agencies look inwards to secrecy instead of outwards to the broader world, they struggle to avoid tunnel vision, they resist major organisational change, they allow machines to judge humans, and now there are threats from automated decision-making.
Introverted thinking within intelligence agencies is one of their top ethical challenges.
Agencies trust secrets above public information
Intelligence researchers and analysts often ignore Open Source Intelligence. OSINT is the kind of information available to anyone, if they know where to look. Without OSINT, analysts can misjudge people and miss clues. (It also adds to the bubble mentality described in the next section.)
The arguments against OSINT:
“OSINT can’t be trusted.” OSINT is not trusted because its provenance (origin) has not been assessed for reliability, and performing those checks is slow.
“OSINT is inconvenient.” Intelligence agencies operate behind electronic firewalls that block most OSINT. So OSINT has to be accessed by different teams or from different parts of the building.
“We already have ginormous amounts of raw intelligence. We need to make better use of what we have, not add more.”
“Our opponents also have OSINT.” Yes, and they may use it against you.
See Paranoia – a 2-page story of work overload and OSINT.
In the last 15 years there has been a massive rise in the use of electronic interception and image processing, and with it artificial intelligence (AI). The importance of analysis has remained, but for research activities the machines are “on the rise”.
Your opponents also have AI. If their AI is way behind yours, they’ll focus on using human skills to counter your AI. You need the same skills to build defensive security, to fight their counter intelligence activities, and to find your own weaknesses. The conclusion: human expertise is even more important than before.
Introverted thinking has improved in the past. Filtered OSINT has been fed into intelligence systems. There are also separate research teams that specialise in OSINT.
Introverted thinking can be improved again. Improve the technologies and processes. Actively question the validity of intel that relies primarily on secret sources. Use OSINT specialists from the private sector.
Disasters from tunnel vision – group think that feeds on introverted thinking
Faulty intelligence analysis and estimates have led to too many political and military disasters. Part of it comes from processes that were not followed, wrong actions or judgements, or ambiguous jargon. But underlying these mistakes there’s a common theme: people in agencies are prone to group think and a failure to challenge the big assumptions.
Where was the warning that Saddam Hussein might be intentionally lying about having weapons of mass destruction? The key thinkers ignored the null hypothesis: that he was bluffing and the weapons did not exist. That error was aggravated by a process failure: heavy reliance on the “evidence” of a single human source, who subsequently turned out to be unreliable.
Group think besets all large organisations, especially those with long histories and a resistance to recruiting experienced people from other areas. In the intelligence profession, people die when it goes wrong.
Introverted thinking has improved in the past. Intelligence analysis, estimation and strategic notice have come a long way in the last 35 years. In 1996, improved practices were described by Michael Herman, a former senior GCHQ official. Since then an academic discipline of intelligence studies has developed: papers, books, conferences and university courses. The most recent publication was from Sir David Omand, a former director of GCHQ. “How Spies Think” contains detailed guidelines for analysts. It’s also useful reading for the “customers” of intelligence product, in government and beyond.
Introverted thinking can be improved again. The customers of intelligence product should be increasingly critical and demanding. It’s in the self-interest of the intelligence services to support this, and it’s something that can be provided independently by the academics and think tanks and fiction writers. (See more in About Adrian.)
Artificial Intelligence embeds introverted thinking
Artificial Intelligence is an essential part of processing raw intelligence, and it provides powerful research tools. For focussed tasks, it can outperform people. And for advanced technology states, it is cheaper than people.
The ethical impact is risk. It’s not that there’ll be a Terminator (because even the best AI is narrow-minded). The danger is that people trust it too much. For example, AI snoops on all of us. It identifies suspects to be put on “wolf lists” and then tracks them electronically without showing the evidence to humans. (If it showed the evidence, that would break privacy laws.)
But:
AI can be tricked, because it looks backwards at what’s happened before. There is nothing creative in its calculations.
AI systems are built on human assumptions and models of behaviour, but all models are simplifications of the world.
The AI systems are repeatedly tweaked to correct anomalies in their results. This produces a labyrinth of distorted logic.
AI systems are focussed on a single measure of success. In the human world, success depends on many things, and we change our minds about what those should be.
See Person Lurking – a 2-page story of automated intelligence analysis.
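A toy sketch may make those failure modes concrete. Everything in it is invented for illustration (the features, thresholds and data are hypothetical, not drawn from any real system): a “risk scorer” that only looks backwards at past cases and optimises a single cut-off flags an innocent look-alike while missing an adversary who deliberately stays below its thresholds.

```python
# Purely hypothetical sketch of the failure modes above: a watch-list scorer
# that only looks backwards at past cases and optimises a single cut-off.
# All features and numbers are invented for illustration.

PAST_INCIDENTS = [
    # (hours online at night, encrypted messages per day) for known past cases
    (6.0, 40.0),
    (7.5, 55.0),
    (5.5, 38.0),
]

def fit_thresholds(incidents):
    """'Training' is just averaging what happened before - nothing creative."""
    avg_hours = sum(h for h, _ in incidents) / len(incidents)
    avg_msgs = sum(m for _, m in incidents) / len(incidents)
    # A single, arbitrary rule: half the historical average per feature.
    return avg_hours * 0.5, avg_msgs * 0.5

def flag_for_watch_list(person, thresholds):
    """Single measure of success: exceed both cut-offs and you are flagged."""
    hours, msgs = person
    hours_cut, msgs_cut = thresholds
    return hours > hours_cut and msgs > msgs_cut

thresholds = fit_thresholds(PAST_INCIDENTS)

night_shift_worker = (6.5, 45.0)  # innocent, but resembles the past cases
careful_adversary = (1.0, 5.0)    # hostile, but deliberately stays below the cut-offs

print(flag_for_watch_list(night_shift_worker, thresholds))  # True  - a false positive
print(flag_for_watch_list(careful_adversary, thresholds))   # False - the model is tricked
```

The point of the sketch is not the arithmetic but the shape of the system: backward-looking data, a simplified model of behaviour, and one metric of success.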
Introverted thinking has improved in the past. The intelligence profession has repeatedly swung between over-dependence on technology and neglect of human expertise. Events like 9/11 triggered adjustments.
Introverted thinking can be improved again, hopefully without a disaster to trigger change. Build expertise at countering AI and at understanding its failure modes. Encourage lateral thinking, situational awareness and diversity of thinking. (See Omand-2020 in Related Reading, below.)
Battlefield automation could trigger war
The speed of hypersonic missiles gives very little time for verification of a threat, and for deciding on the response.
The US military are building a new system for Command, Control, Communications and Intelligence (C3I). It will inevitably include AI. For that system and others, it is also inevitable that military forces across the world will research and build automated systems for situations where humans cannot make decisions fast enough. There are major ethical questions about using them, other than experimentally. There are also legal questions about who is responsible for the consequences of the machine’s actions.
The technology would use automated intelligence collection, and then apply research and analysis techniques built on predefined logic and “optimised” with AI.
But what if the sensors send incorrect signals?
Or the underlying logic of the AI is faulty?
And what if the AI has been over-refined and takes an unexpected direction?
Or there is a rogue human operator?
If a battlefield weapon fires in error, then retaliation would be expected. A tit-for-tat escalation between machines could follow. It would continue until the human commanders realise the mistake and work out how to stop the sequence.
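To make the escalation dynamic concrete, here is a minimal, purely hypothetical simulation: a single false sensor reading starts the exchange, each side’s predefined logic answers with a slightly stronger response, and the loop only stops when humans step in. The response rule and the numbers are invented for illustration, not taken from any real system or doctrine.

```python
# Deliberately simplified, hypothetical simulation of machine-to-machine escalation.
# "Answer any detected strike with a slightly larger one" stands in for predefined
# response logic; no real system, doctrine or data is implied.

def automated_response(incoming_strength):
    """Predefined logic: always respond, and respond a little harder."""
    return incoming_strength + 1

def simulate_exchange(false_alarm_strength, human_reaction_steps):
    """Run tit-for-tat between two machines until human commanders intervene."""
    strike = false_alarm_strength  # a faulty sensor reading, not a real attack
    for step in range(1, human_reaction_steps + 1):
        side = "A" if step % 2 else "B"
        strike = automated_response(strike)
        print(f"step {step}: side {side} responds with strength {strike}")
    print("human commanders realise the mistake and halt the exchange")

# One false sensor reading, and humans need five decision cycles to react.
simulate_exchange(false_alarm_strength=1, human_reaction_steps=5)
```

The faster the machines and the slower the human decision cycle, the further the exchange runs before anyone can stop it.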
That’s for battlefield weapons, which are close together. There’s also the nightmare that automated decision-making will be used for strategic nuclear weapons; but that technology has not been perfected, yet.
Introverted thinking has improved in the past. Case 1. The deployment of drones involved extensive attention to the legal, political and related ethical challenges. (Many would argue that it has only partially handled the ethics, but that is a distinct difference from no ethics at all.)
Case 2. Between conflicting nations there are communication channels for avoiding crises and defusing them when they occur. In the 1960s a “hot line” was established between the American and Soviet leaders. There are now also channels between military commands, and for foreign affairs (international politics). Some of these are backchannels via intermediaries.
Case 3. There are norms of behaviour (codes of conduct) for states; and there are also norms for military commands. An example was when President Trump ordered a cruise missile attack on Syria, but first warned Russia.
Introverted thinking can be improved again. The same approaches can be adapted for automated intelligence processes. That includes hotlines, back channels, norms, careful testing of technologies, and legal constraints. There needs to be public awareness of the existence of these safeguards, because unlike during the Cold War we live in a socially connected world where rumours abound and there are players who actively spread misinformation.
Related reading on introverted thinking in intelligence agencies
Intelligence Power in Peace and War, by Michael Herman. Published by Cambridge University Press, 1996. ISBN 978-0-521-56636-0. Also available as a free PDF.
Securing the State, by Sir David Omand (a former director of GCHQ, who also served 7 years on the Joint Intelligence Committee). Published by Hurst, 2010, ISBN 978-1-84904-188-1.
How Spies Think, by Sir David Omand. Published by Viking, October 2020, ISBN 978-0-24138-518-0.
Rage Inside the Machine, by Robert Elliott Smith. Published by Bloomsbury Business, 2019. A thoughtful description of how AI works, and its limitations. Robert Elliott Smith has been a pioneer in AI from the start.
The Perfect Weapon, by David E. Sanger. Published by Crown, 2019. ISBN 978-0-451-49789-5. A careful study of the use of cyber weapons by nations. David Sanger is a national security correspondent at the New York Times.