We’re sidelining our ‘unknown knowns’ to make AI ubiquitous

How the Rumsfeld Matrix helps us decode the threats technology brings

Mike Scialom
8 min read · Feb 21, 2024
Games for children now use AI as a marketing tool

Donald Rumsfeld is best known for being both the youngest and the oldest Secretary of Defense in US history — though fortunately there was a gap between his tenures. Just as well, really, because he was a Machiavellian warmonger who caused tremendous suffering and slaughter in the time he did have.

Chicago-born Rumsfeld was Defense Secretary in President Gerald Ford’s administration from 1975 to 1977 (when he was 43), and again under President George W Bush from 2001 to 2006 (when he was in his 70s).

He was in post for 9/11, which was followed with unseemly haste by the Iraq War. Rumsfeld’s green-lighting of the torture of Iraqis remains one of the biggest scandals in modern US history. He perpetrated the lies that Saddam Hussein was developing weapons of mass destruction and that Hussein was allied with Al Qa’ida.

Most of Rumsfeld’s public pronouncements were geopolitical word salads, but in 2002 he said something which entered the collective consciousness and has remained there ever since.

In a 2002 news briefing about the lack of evidence linking the Iraqi government with the supply of weapons of mass destruction to terrorist groups, Rumsfeld made a gnomic remark about known and unknown “knowns” and “unknowns” which has been endlessly analysed, recycled and repeated.

What he said was this: “Reports that say that something hasn’t happened are always interesting to me, because as we know, there are known knowns; there are things we know we know. We also know there are known unknowns; that is to say we know there are some things we do not know. But there are also unknown unknowns — the ones we don’t know we don’t know.”

This “analysis” has been given a name: the Rumsfeld Matrix.

The Rumsfeld Matrix can be used to analyse pretty much any proposition. Take an apparently benign one: “Today is going to be great”. Feed it into the Rumsfeld Matrix, and what comes out? There are known knowns — the day will have 24 hours. There will be the known unknowns — the weather, for instance. And there will be unknown unknowns, such as whether your partner is going to move out and set up shop with a younger lover because of your lazy-ass ways.

Donald Rumsfeld

In fact, the Rumsfeld Matrix wasn’t entirely Rumsfeld’s idea. The concept of “unknown unknowns” was created in 1955 by two American psychologists, Joseph Luft and Harrington Ingham, in their development of the Johari window.

The four quadrants of the Johari window include an open area (things that you and others know about you); a blind area (things that you don’t know about yourself, but others do); a hidden area (things you know about yourself, but others don’t); and an unknown area (things that neither you nor others know about yourself).

The Johari window was used as a technique to help people better understand their relationship with themselves as well as others, but fast forward a bit and it was being used by NASA as they developed a programme for space travel. It helped to assess safety issues, for instance.

Canadian columnist Mark Steyn called the Rumsfeld interpretation “in fact a brilliant distillation of quite a complex matter”. Australian economist and blogger John Quiggin said: “Although the language may be tortured, the basic point is both valid and important.” However, the psychoanalytic philosopher Slavoj Žižek pointed out the key slip — the trio of examples given by Rumsfeld misses out the fourth option, the unknown known.

Žižek exposed this slip and proposed that the unknown known is “that which one intentionally refuses to acknowledge that one knows”.

He suggested that politicians are good at this — Rumsfeld, for instance, made his “known unknown” remarks in the run-up to the Iraq War. He must have known, in 2002, that there weren’t really any weapons of mass destruction — but it was very inconvenient to acknowledge this to himself, not least because he might then have had to acknowledge it to others.

AI-enabled toys are unregulated and already on the market, and there’s no legal framework for what they can and can’t do

Subsequent politicians have taken this intellectual subterfuge to new heights — Boris Johnson, the former prime minister of the UK, must have known that his “oven-ready deal” to leave the EU was totally inedible, but he needed to get Brexit over the line. Donald Trump must know that he’s a crook, but he’s built a political career on the back of being a successful — and legitimate — businessman, and his intentional refusal to acknowledge the truth is very strong. Trump is a master at airbrushing away — even vaporising — the unknown knowns, the inconvenient truths, of his life. But we all do it to some extent — we rationalise and then dismiss inconvenient truths.

George Orwell, in his novel 1984, assumed that the brainwashing process took place in the external world, but in fact we are our own best brainwashers.

There are other applications for the Rumsfeld Matrix. Let’s try it with AI.

Part of the trouble with AI is that there aren’t that many known knowns — it’s too young as a technology. It’s probably not even in its infancy — the reality is more like it’s still being born. But we know that AI makes Siri work, powers ChatGPT, and helps make video games great.

So how about known unknowns? Well, we’ve got a few of those. Take the idea that AI is going to save people from doing a lot of boring jobs. If you’re a lawyer, that’s probably great — you can use a large language model to trawl case law for precedents. But what if you sell tickets in a train station? Or you’re a translator, or even an accountant?

Pretty quickly it becomes a question of what will these millions of people DO when they no longer have a job, or any prospect of getting a job? That’s one big known unknown we’re talking about right there.

But as if that’s not enough to worry about, there are the unknown unknowns of AI. The big question here is: what will happen if AI gets smarter than us?

The prospect of being somehow overtaken by AI is ringing an alarm bell deep in our consciousness. It’s asking: “Are we writing ourselves out of history?” Because AI could run a country far better than a politician — not that hard, if you look at the evidence of the last 10 years, but still a bit… chilling? Armies, transport, education — are we going to hand all of that over to AI? Because that’s where we’re headed, as I found out when I was in my local Sainsbury’s the other day and saw this toy on sale.

AI-enabled Pictionary on sale in Sainsbury’s in Cambridge. Picture: Mike Scialom

On the front of this child’s toy was the following legend:

‘Pictionary — Human sketches, AI guesses!’ And then, in smaller lettering further down the front of the box: ‘Can you predict if the AI will get it right?’

Now please bear in mind that this Pictionary is for ages 8 to 15. What’s happening here is that AI is being normalised to children — at a time when the legal, political and business communities still haven’t decided how (or whether) to legislate the use of AI. They all know that it’s bad when it comes to deepfakes, identity theft and simulations of everything from actors’ voices to writers’ books, and they all say they’re drafting legislation to deal with the threat. But the fact is, folks, they’re already selling AI as a feature to eight-year-old children.

And, of course, to sell stuff you have to lie a little bit, and the biggest lies are the ones that are most difficult to defend against. Take this little bit of text on the front of Pictionary, enlarged from the main picture:

‘App designed with children’s privacy in mind’? The whole point of AI is to make privacy as redundant as the horse and cart!

It says: ‘App designed with children’s privacy in mind.’

This is a straight-up lie, and I’d be happy if Sainsbury’s sued me, because it would give me an opportunity to bring the matter to a wider audience.

When are we ever going to get ahead of technology instead of allowing it to dictate to us, to the point of allowing it to rewire our brains?

The whole point of AI is to learn from your data — the data that, not so long ago, was private: what you read, what shows you watch, what products you buy, what services you use… AI is a clear and present danger to humanity’s current cultural model. Releasing untested technology into consumer markets is always fraught with dangers — and costs, massive costs, if you get it wrong. The car insurance industry was built on that understanding: where’s the AI equivalent? The pharmaceutical industry takes seven years on average to bring a drug to market — why don’t we apply that rigour to new technologies like AI? Social media causes untold suffering for young people: who clears that up? It’s sure as hell not Mark Zuckerberg.

But watch out: Rumsfeld named only three options, and there’s a fourth. Could it be there are unknown knowns with AI? After all, we know that technology can be bad for us — we’ve seen it with social media, which is very bad for us. We sideline things we know to be true: we silo them off into the unknown knowns. Facebook, Instagram, X/Twitter — they’ve disfigured our societies, they’ve warped our ability to function in the world, but we still gravitate towards their allure like moths to a flame, and we still give Silicon Valley a ‘Get Out of Jail Free’ card when it all goes wrong. We blame ourselves, never the purveyors of these digital delusions.

One last thought: what if the world that the current generation of children is growing up in is so pressurised that they welcome AI into their lives with none of the hesitation of today’s crop of adults? Maybe youngsters see in AI the chance to rewire their brains — and want to do that? Maybe, by rewiring their brains away from the preoccupations of us adults — and away from the normalised stress, war, genocide, poverty, greed and nastiness of current Western societies — they see an escape hatch.

Maybe kids will be gravitating happily towards AI as a survival mechanism — to survive US?

But should we let them? Or should we try and rebuild the world in a way that makes them want to be in it — a friendlier, more caring, fairer, more equitable world which is a pleasure to be in, skewed towards joy and justice, rather than the endless horrors currently on offer?

Anyway, if you can’t be bothered to think it through, you could always ask Siri.


Mike Scialom

Journalist, writer; facilitator at Cambridge Open Media