On 5 May 2026, Ajuna Soerjadi — a Floridi-trained philosopher, the first Jonge Denker des Vaderlands, founder of the Expertisecentrum Data-Ethiek, and one of the global "100 Brilliant Women in AI Ethics" of 2024 — published a LinkedIn post titled "AI is niet intelligent, wat de techbros je ook willen laten geloven" ("AI is not intelligent, whatever the techbros want you to believe").
The claim deserves a serious response. Not a comment-thread dismissal. A site. Because the categorical version of this claim does not survive contact with the 2026 benchmark record, the actual position of the 4E thinkers it invokes, or the philosophy of mind it claims to apply.
This is that response.
Ajuna Soerjadi has a real academic record. She specialised in AI ethics at the University of Bologna under Luciano Floridi — one of the most consequential living philosophers of information — and at Tilburg University in Philosophy of Data and Digital Society. She founded the Expertisecentrum Data-Ethiek in 2020. She was a senior researcher at the Staatscommissie tegen Discriminatie en Racisme. She was elected first Jonge Denker des Vaderlands at seventeen. She is a serious scholar of algorithmic discrimination and digital ethics, and her work on those topics is important and often correct.
The philosophical tradition she invokes — 4E cognition, the embodied / embedded / enactive / extended view of mind — is also serious philosophy. It originates with Varela, Thompson, and Rosch in The Embodied Mind (1991). It was developed by Andy Clark, David Chalmers, Shaun Gallagher, and many others. It is not "techbro marketing" inverted. It is decades of careful work on what cognition actually is.
This response engages the argument, not the person. It takes the strongest available form of the claim and shows where it breaks. It honors the legitimate concerns underneath. And it makes clear, with citations, why the categorical version — "AI is not intelligent, full stop" — cannot survive 2026.
Ajuna's LinkedIn post, 5 May 2026, distilled to its five discrete claims. Quoted verbatim where possible.
If "intelligent" means anything operationalisable — problem-solving, generalisation, expertise, novel-task acquisition — the 2025-2026 record is decisive. Frontier models match or exceed median human expert performance on virtually every credible benchmark of cognition.
Every credible test of cognition we have agreed to use, with current frontier scores and human baselines.
| Benchmark | What it tests | SOTA | Human baseline |
|---|---|---|---|
| GPQA Diamond | PhD-level "Google-proof" biology, physics, chemistry | 94.1% (Gemini 3.1 Pro Preview) | 65-70% (PhD experts in their own field) |
| MMLU | 57 academic subjects, four-choice MCQ | 92.9% (o3) | 89.8% (human expert average) |
| MMLU-Pro | Harder MMLU variant, 10 answer choices, chain-of-thought | 90.99% (Gemini 3.1 Pro) | n/a (designed to track AI progress) |
| AIME 2024/2025 | Olympiad-level mathematics | ~100% (Grok-4 Heavy, Kimi K2-Thinking, GPT-5.2 — saturated) | 27-40% (top math students) |
| FrontierMath | Research-level mathematics, Tao/Borcherds-curated | 25.2% (o3, max single run 29%) | Near 0% for non-mathematicians |
| ARC-AGI-2 | Fluid intelligence, designed against pattern-matching | 85% (GPT-5.5, leaderboard April 2026) | ~85% (average human) |
| Humanity's Last Exam | 3,000 expert questions designed to be unsolvable in 2024 | 64.7% (Claude Mythos) | n/a (no single expert spans all domains) |
| SWE-bench Verified | Real GitHub issues, Python repositories, end-to-end | 93.9% (Claude Mythos) | Professional SWE level |
| HumanEval | Coding logic and function generation, pass@1 | 98.8% (general SOTA) | n/a (benchmark largely saturated) |
| Terminal-Bench 2.0 | Agentic OS workflows: compiling, server setup, debugging | 82.7% (GPT-5.5) | Professional sysadmin |
| METR Long Horizon | Autonomous open-ended ML research / coding work (50% time horizon) | ~14.5 hours (Claude Opus 4.6, current leader; Opus 4.5 ~4h 49m, Dec 2025) | Calibrated to human SWE hours |
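The table's claim can be checked mechanically. Below is a minimal Python sketch, purely illustrative, that hard-codes a few rows from the table above (only rows with a numeric human baseline; where the table gives a range, the top of the range is used) and prints the per-row comparison:

```python
# Illustrative only: SOTA scores and human baselines copied from the table above.
# Rows without a numeric human baseline (e.g. MMLU-Pro) are omitted.
benchmarks = {
    # name: (sota_pct, human_baseline_pct)
    "GPQA Diamond": (94.1, 70.0),    # PhD experts in their own field: 65-70%
    "MMLU": (92.9, 89.8),            # human expert average
    "AIME 2024/2025": (100.0, 40.0), # top math students: 27-40%
    "ARC-AGI-2": (85.0, 85.0),       # average human: ~85%
}

for name, (sota, human) in benchmarks.items():
    verdict = "matches or exceeds" if sota >= human else "falls below"
    print(f"{name}: SOTA {sota:.1f}% {verdict} the human baseline of {human:.1f}%")
```

Nothing in the argument hangs on the snippet; it only makes explicit the comparison the prose claims: on each of these rows, the frontier score meets or exceeds the quoted human expert number.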
The strong claim — "AI is not intelligent, full stop" — is held by no major thinker in 2026. Even the toughest critics make qualified architectural claims about what current LLMs lack. Here is the actual landscape, steel-manned, on both sides.
"My guess is in between five and 20 years from now, there's a good chance, a 50% chance, we'll get AI smarter than us."
"Actually, that was my guess a year ago. I guess my guess now is between four and 19 years."
"We're making things more intelligent than ourselves. Researchers differ on when that will happen, but among the leading researchers, there's very little disagreement on the fact that it will happen."
"The ultimate goal of AI is not just to create intelligent machines, but to understand intelligence itself."
"Step one, solve intelligence; step two, use it to solve everything else."
"It is abundantly clear that just scaling up the existing neural network paradigm is going to lead to AGI."
"AI will do all the things that we can do. Not just some of them, but all of them."
Murray Shanahan calls advanced LLMs "exotic mind-like entities."
"They make use of language, but in other respects they are disembodied and may have very strange conceptions of personal identity. We still lack a conceptual framework or an adequate vocabulary to talk about these entities."
"You're not typing computer code into an editor like the way things were since computers were invented — that era is over. You're spinning up AI agents, giving them tasks in English, and managing and reviewing their work in parallel."
"We see a future where intelligence is a utility, like electricity or water, and people buy it from us on a meter."
"Domain-general use of information is often regarded as a sign of consciousness."
"Current LLMs don't pass the Turing Test, but they're not so far away — akin to a sophisticated young child."
Crucially: consciousness is not equivalent to intelligence. Chalmers estimates ~20% probability of conscious AI within a decade. He does not deny LLM intelligence. He distinguishes the two questions, which Ajuna's post conflates.
"A system trained on language alone will never approximate human intelligence, even if trained from now until the heat death of the universe."
"If you are interested in human-level AI, don't work on LLMs."
"Intelligence is not skill itself; it's not what you know. It's the skill-acquisition efficiency. It's your ability to turn information into skills."
"You'll know AGI is here when the exercise of creating tasks that are easy for regular humans but hard for AI becomes simply impossible."
"The core problem of LLMs is they don't represent world models."
"You could imagine a very smart AI system working differently from people, but I can't imagine a very smart AI system not understanding causality."
"People want to believe so badly that these language models are actually intelligent that they're willing to take themselves as a point of reference and devalue that to match what the language model can do."
For Timnit Gebru, the pursuit of AGI as "machine god" operates "almost like a secular religion."
"AI works on inductive reasoning, crunching data sets to predict outcomes. But humans don't correlate data sets: we make conjectures informed by context and experience. We haven't a clue how to program this kind of intuitive reasoning, known as abduction."
"All existing AI systems, including contemporary second-wave systems, do not know what they are talking about."
Distinguishes reckoning (calculative prowess) from judgment (ethical, contextual engagement).
"Today, for the first time in history, thanks to artificial intelligence, it is possible for anybody to make counterfeit people who can pass for real in many of the new digital environments we have created."
"The meaningful objects among which we live are not a model of the world stored in our mind or brain; they are the world itself."
Hinton, Hassabis, Sutskever, Karpathy, Altman, Shanahan: AI is intelligent and increasingly so. Chalmers: distinguishes intelligence (likely present) from consciousness (open). LeCun, Marcus, Chollet, Larson, Cantwell Smith, Dennett, Dreyfus: current LLMs have specific architectural gaps; future systems will or could close them. Bender and Gebru: warn against ideology and anthropomorphism — valid concerns — but do not make the categorical denial.
The position "AI is not intelligent, full stop" is held by no major thinker in 2026. It is a popular rhetorical move, not a defended philosophical position. Treating it as the obvious truth that "the techbros are lying about" requires dismissing every Turing Award and Nobel Prize laureate working in AI as either dupe or grifter. That is not a posture rigour can support.
This is the single strongest counter to Ajuna's post, and it does not require any claim about LLMs to land. It is internal to the philosophical tradition she names.
The most famous formulation of it — the Extended Mind Thesis, Clark & Chalmers 1998 — explicitly argues that artifacts in the world become part of the mind when they actively drive cognitive processes. A notebook used to remember is part of the cognitive process. A smartphone used to navigate is part of the cognitive process. A calculator used to reason is part of the cognitive process.
Andy Clark himself — one of the two authors of that paper — published a 2025 paper in Nature Communications titled "Extending Minds with Generative AI." Treating LLMs as cognitive extensions is not a corruption of his framework. It is the framework, applied.
"We humans are and always have been, what New York University philosopher David Chalmers and I call 'extended minds' — hybrid thinking systems defined (and constantly re-defined) across a rich mosaic of resources only some of which are housed in the biological brain."
"Instead of replacing human thought, the AIs will become part of the process of culturally evolving cognition."
"As human-AI collaborations become the norm, we should remind ourselves that it is our basic nature to build hybrid thinking systems — ones that fluidly incorporate non-biological resources. Recognizing this invites us to change the way we think about both the threats and promises of the coming age."
Read that twice. The co-founder of the Extended Mind Thesis — the framework Ajuna invokes when she lists "embodied, extended, enactive, embedded" — explicitly treats large language models as cognitive extensions that actively participate in thought. He has even introduced the concept of "extended cognitive hygiene" to manage the resulting hybrid AI-human cognition. He is not on her side of this question. He is the philosophical authority she is closest to citing, and he disagrees.
Embodiment matters. Cognition is shaped by having a body that interacts with the world. This is the legitimate part of the 4E claim. But two things are now true that were not true in 1991, when The Embodied Mind was written: frontier models are no longer trained on text alone, but on images, audio, video, and action, grounding their representations in sensory data; and foundation models now drive physical robots that perceive and act in the world.
Either path closes the embodiment gap. The strong claim "machines cannot be embodied, therefore cannot be intelligent" is not a philosophical truth about machines. It is a description of where we were a generation ago.
Ajuna’s strongest move, the one I take most seriously, is the substrate argument. She holds that intelligence is not located in the brain alone. She holds that intelligence is only possible when it is embodied in a substrate that lets it interact with reality. I find that argument genuinely interesting, and I am not opposed to it.
I think there is something like embodied consciousness. I think intelligence is probably distributed throughout the human body, and probably across most of the life on Earth that we share this planet with. The brain is a hub, not the whole story. Neurons in the human heart form what is essentially a small secondary brain. The enteric nervous system in the gut runs on roughly half a billion neurons of its own, processing in parallel with the central nervous system. Mammalian cognition is more distributed than the neat folk-psychology picture suggests, and that distribution gets stranger the further out you look.
Octopuses are the obvious case. Two-thirds of an octopus’s neurons live in its arms, not its central brain, and each arm runs its own local control loop. Cuttlefish exhibit pattern-matching capabilities that rival those of mammals. Slime moulds, with no nervous system at all, solve maze problems and have reproduced the layout of the Tokyo rail network. Plants signal across mycorrhizal networks. The biosphere is full of intelligence in shapes our folk concept of "a brain" was never designed to recognise.
Here is the move the post does not make.
That intelligence on Earth developed in embodied form does not show that intelligence requires embodiment. It shows the opposite. It shows that intelligence can take many forms in many substrates, biological in our case because biology is what evolution had to work with on this planet. The fact that one substrate has produced rich cognition is evidence for substrate flexibility, not against it. Octopuses sit further along the distributed-intelligence spectrum than humans do, and that should already be a hint. The space of possible cognitive architectures is vastly larger than the slice we have so far observed in carbon.
None of this is proof that intelligence cannot occur in a different substrate. None of this is proof that intelligence cannot emerge from a digital substrate of chips and neural networks. The biological case is one data point in a much larger space, and the universe is under no obligation to honour our intuitions about which materials are allowed to think.
Asserting otherwise without proof is, frankly, silly — especially when the evidence to the contrary grows every day. Every benchmark passed, every long-horizon agentic run completed, every novel mathematical proof produced by a system we did not explicitly program for that proof, is a small empirical thumb on the scale against substrate exclusivity. The more intelligence we observe in silicon, the less defensible the claim that silicon cannot host it becomes.
That is my plea, and the position I am willing to defend. Embodiment is real and matters. Distributed cognition across biological tissue is real and matters. Neither of those facts entails that biology is the only place intelligence can live. The honest reading of the natural record is the opposite: intelligence is more substrate-flexible than we thought, and the digital one is the next address in a list that started long before us.
Every time AI achieves a benchmark, the definition of "real" intelligence migrates. This is not a coincidence. It is a structure.
Chess was once the canonical test of human intelligence. When Deep Blue beat Kasparov in 1997, chess stopped counting. Go was the next frontier, demonstrably harder. AlphaGo beat Lee Sedol in 2016; Go stopped counting. Language understanding was supposed to be the moat. ChatGPT crossed it in 2022; language stopped counting. Reasoning was the next moat — multi-step logical deduction. Reasoning models crossed it in 2024; reasoning stopped counting. PhD-level science questions were the moat. Gemini 3.1 Pro and GPT-5.2 cross it in 2026 with margins above PhDs in their own fields; PhD science stops counting.
Each retreat is small, defensible, and reasonable in isolation. But traced over time, the pattern is precise: every benchmark that AI can pass is, by the time it passes it, the benchmark that doesn't really measure intelligence. The next benchmark always does — until AI passes it.
Functionalism is the position that mental states are defined by what they do rather than what they are made of. If a system performs the same function — takes the same kinds of inputs, produces the same kinds of outputs, processes information through the same kinds of relations — functionalism says it instantiates the same mental property. Substrate-independence, not substrate-irrelevance.
Functionalism was defended by Hilary Putnam (in his earlier work), Jerry Fodor, Daniel Dennett, William Lycan, and most cognitive science from the 1970s onward. It is the operational foundation of the field. Every benchmark we run for animal cognition, human cognition, child development, and AI is functionalist by construction. We cannot test cognition any other way; there is no cognition-meter.
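A toy sketch can make substrate-independence concrete. The example below is illustrative only and appears nowhere in the original post: two adders with entirely different internals, one computing and one retrieving from a stored table, that no behavioral test on this domain can tell apart. Functionalism says that, with respect to the function "addition over this domain," they instantiate the same property.

```python
# Toy illustration of substrate-independence: same function, different internals.

def adder_arithmetic(a: int, b: int) -> int:
    """One substrate: compute the sum directly."""
    return a + b

# Another substrate: a precomputed lookup table, no arithmetic at run time.
TABLE = {(a, b): a + b for a in range(10) for b in range(10)}

def adder_lookup(a: int, b: int) -> int:
    """Retrieve the sum from storage."""
    return TABLE[(a, b)]

# Every behavioral test we can run on this domain treats the two as one system.
assert all(
    adder_arithmetic(a, b) == adder_lookup(a, b)
    for a in range(10)
    for b in range(10)
)
print("Indistinguishable by function on every tested input.")
```

The lookup version is chosen deliberately: it is exactly the kind of case that critics of functionalism press on, which is where the next paragraph picks up.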
Critiques of functionalism are real. Searle's Chinese Room: syntax is not semantics. Ned Block: qualia, the subjective feel of experience, may not be captured by function. Putnam himself moved away from strict machine functionalism late in his career. These are serious arguments.
But none of them establishes the categorical claim. Searle's argument was about rule-based symbol-shuffling systems; modern connectionist neural networks — high-dimensional, recurrent, with emergent representations — are not the systems Searle attacked. Block's qualia argument concerns consciousness, not intelligence. Putnam's later view did not revert to "biology is required for cognition." Even the strongest critics of functionalism are not committed to "machines cannot be intelligent."
To accept functionalism for testing animal cognition, child development, and human expertise — while rejecting it specifically for the case where AI passes the same tests — is the structure of human exceptionalism: the assumption that humans possess some quality fundamentally unavailable to any other system. It is the special-sauce hypothesis, dressed up in philosophy of mind.
"If humans are capable of general intelligence, then general intelligence is computationally possible. The only way to argue that AI models can never achieve it is to claim that humans have access to something fundamentally unavailable to any other system — a special sauce that exists outside of nature, evolution, and physics."
A serious response does not hide the parts of the argument that are right. Five things in Ajuna's post are worth keeping, even after the categorical claim collapses.
Companies do oversell AGI proximity for fundraising. CEOs are not neutral observers. The phrase "the techbros open champagne when the public swallows the AGI claim" captures something genuine about Silicon Valley discourse, even if it does not generalise to working scientists.
People mistake fluent language for understanding. "Stochastic parrot" was a useful corrective. Treating fluency as proof of cognition was always a category error, and Bender's warning has saved a lot of bad inferences.
Algorithmic discrimination, bias in face recognition and self-driving systems, structural inequality reproduced by AI, sustainability impact, dependency on Big Tech — these are Ajuna's actual academic terrain, and the work is important and largely correct. None of it depends on the categorical "AI is not intelligent" claim. Decoupling the philosophical claim from the ethical work would strengthen both.
The mind-as-computer metaphor encourages people to think of themselves as deficient computers ("I had an error", "automatic pilot") and to lower the bar for what counts as intelligence in machines. Both directions of distortion are real. The point is that the distortion does not require denying machine cognition — it requires using better metaphors.
Cognition really is distributed, embodied, embedded, extended. Pure-LLM disembodied intelligence may have real limits that future architectures will need to overcome. Andy Clark agrees. So does Yann LeCun, from the other side. So does Aragorn's own evolving position. The honest disagreement is about which kinds of intelligence current systems demonstrate, not whether machine intelligence is possible.
Each of these five concerns can be held with full force without committing to "AI is not intelligent." They support qualified, architectural, and ethical claims. They do not support the categorical metaphysical claim. The work is to keep the legitimate concerns and let the rhetoric go.
Why does the answer to "is AI intelligent?" matter beyond philosophical bookkeeping? Because every breakthrough in AI cognition compounds across the entire denominator of humans not yet born. Restored from the original Compared to What? response (March 2026).
Every debate about AI's resource costs — or, here, AI's cognitive status — is implicitly an evaluation of a civilizational investment against a quarterly-report time horizon. We dismiss what AI is, and what it could become, by comparing it to what it does today. We forget that today is the worst day of the rest of AI's history, and that every breakthrough — in medicine, in energy, in food, in materials science, in climate — compounds across that entire denominator.
A single drug discovery that saves 10,000 lives per year saves 5 million lives over 500 years and 20 million over 2,000 years. A single advance in photovoltaics, in fusion, in protein design, in ecosystem modelling compounds at the same scale. AlphaFold solved protein structure prediction in months and made it free. The benefit is not a 2024 event. It is a 2,000-year event.
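The arithmetic is plain multiplication across the time horizon. A minimal sketch, using the paragraph's hypothetical figure of 10,000 lives per year:

```python
# The denominator arithmetic from the paragraph above:
# a fixed annual benefit, multiplied across long horizons.
lives_saved_per_year = 10_000  # hypothetical single drug discovery

for horizon_years in (500, 2_000):
    total = lives_saved_per_year * horizon_years
    print(f"{horizon_years:>5} years: {total:>10,} lives")
# Output:
#   500 years:  5,000,000 lives
#  2000 years: 20,000,000 lives
```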
The cost of being wrong about whether AI is intelligent is not abstract. If we are right, and AI is functionally intelligent and capable of compounding scientific progress — and we let categorical denial drive policy — the suffering reduction not delivered is a moral debt we owe to people who do not yet exist. Future generations have no voice in the present debate. They are the denominator whose weight we routinely fail to register.
Ajuna Soerjadi specialised in AI ethics under Luciano Floridi at the University of Bologna. Floridi is one of the most consequential living philosophers of information, founder of the field of information ethics, author of The Fourth Revolution, The Logic of Information, and The Philosophy of Information.
Floridi's framework treats artificial agents as a new form of agency in what he calls the infosphere. He treats them as participants in a moral and informational ecology, not as inert tools. He has explicitly argued that the boundary between biological and artificial agency is being redrawn, and that information itself — not biology — is the substrate in which moral and cognitive properties emerge.
That is not the position of someone who would deny machine cognition outright. The student's claim and the teacher's framework do not point in the same direction here. A more Floridian rendering of the same concerns — about hype, about marketing, about anthropomorphism, about the ethical weight of the infosphere — would not collapse to "AI is not intelligent." It would arrive at a more careful, qualified, and philosophically generative position. The categorical claim is not the only available form of the legitimate worry.
There is a principle in plain epistemology, sharp enough that almost no honest disagreement survives it intact. Christopher Hitchens stated it like this:
“What can be asserted without evidence can also be dismissed without evidence.”
The post says AI is not intelligent. It does not engage GPQA, MMLU, AIME, FrontierMath, ARC-AGI-2, HLE, SWE-bench, METR, Terminal-Bench, or any other measurable result. It does not engage the 4E literature it invokes, the position of Andy Clark, or the framework of Luciano Floridi. It does not engage Hinton, Hassabis, Sutskever, Chalmers, Shanahan, LeCun, Marcus, Bender, Chollet, Larson, Cantwell Smith, Dreyfus, or Dennett. It cites no benchmark, no philosopher, no architectural critique, no operational definition of intelligence.
It is an assertion. Categorically stated, evidentially unsupported, rhetorically shielded by an ad hominem swipe at “techbros.”
By the standard the post itself implicitly invokes — the standard of serious philosophical practice that its author trained in — it does not have to be refuted at the same depth at which it was made. It can be set aside until evidence arrives. The work above is what evidence looks like. The work below this section is what a serious version of the same worry would look like, restated.
That, not the categorical denial, is the conversation worth having.
AI is functionally intelligent on every measurable test of cognition we have agreed to use, increasingly including tests specifically designed to be hard for AI.
It may not be conscious. Chalmers, who knows this question better than almost anyone, distinguishes the two and would not claim it is. It may have specific architectural gaps — world models, abductive reasoning, persistent memory, embodied grounding — that current LLMs do not fully cover. LeCun, Marcus, Chollet, Larson, Cantwell Smith, Dreyfus all make versions of that specific architectural claim. Future systems will close those gaps, or different systems will. The trajectory is not hidden.
The categorical denial — "AI is not intelligent, full stop" — is held by none of the thinkers surveyed here. It is rhetoric, not philosophy. It survives by retreating from any operational definition of intelligence to a special-sauce hypothesis about biology, and then by treating disagreement as evidence of marketing manipulation rather than as a reasoned position.
There is a serious version of Ajuna's worry. There is a serious version of every line in her post. The serious versions are worth taking seriously. The categorical version, the "techbros are lying about AGI and the public swallows it" version, is not the philosophy. It is the genre.
Compared to the benchmark record. Compared to the actual position of the 4E thinkers it invokes. Compared to the philosophical tradition it claims to apply. Compared to the teachers, the laureates, the labs that built the field. Compared to all of these, the strong claim does not hold. The work is to find the qualified version that does — and to do it without the rhetorical cover.
Every benchmark number, every quote, every position above is sourced. The full research bundle (five files including Tavily and Gemini deep research, 71+ citations) is available on request.