parpfish 2 days ago

My take on stochastic parrots is similar to the author's concluding section.

This debate isn’t about the computations underlying cognition, it’s about wanting to feel special.

The contention that “it’s a stochastic parrot” usually implied “it’s merely a stochastic parrot, and we know that we must be so much more than that, so obviously this thing falls short”.

But… there never was any compelling proof that we’re anything more than stochastic parrots. Moreover, those folks would say that any explanation of cognition falls short because they can always move the goal posts to make sure that humans are special.

  • jfengel 2 days ago

    The way I see it, the argument comes down to an assertion that as impressive as these technologies are, they are a local maximum that will always have some limitation that keeps them from feeling truly human (to us).

    That's an assertion that has not been proven one way or the other. It's certainly true that progress has leveled off after an extraordinary climb, and most people would say it's still not a fully general intelligence yet. But we don't know if the next step requires incremental work or if it requires a radically different approach.

    So it's just taking a stance on an unproven assertion rather than defining anything fundamental.

    • moritonal 2 days ago

      True, and worth appreciating how humans hit some pretty blunt local maxima that machines have long since surpassed, such as land speed or operations per second.

  • AndrewDucker 2 days ago

    Do you really just complete the next token when speaking? You don't plan ahead, you don't form concepts and then translate them into speech? You don't make models, work with them, carefully consider their interactions, and then work out ways to communicate them?

    Because, with a reasonable understanding of how LLMs work, the way that they produce text is nothing like the way that my mind works.

    • Suppafly a day ago

      >Do you really just complete the next token when speaking? You don't plan ahead, you don't form concepts and then translate them into speech? You don't make models, work with them, carefully consider their interactions, and then work out ways to communicate them?

      A good number of people spend most of their lives operating that way.

  • alganet 2 days ago

    A machine that can, unattended, move the goal posts to make itself look special by inventing new forms of expression would then beat "those folks" for good.

    Not the best human behavior, but certainly human behavior.

    LLMs are built with the intention of mimicry. It's no surprise they look like mimicry. If we show a new trick they can't do, and their makers train it to mimic that, can we be blamed for calling that mimicry... mimicry?

    • golly_ned 2 days ago

      The promise of LLMs isn’t that they appear to be intelligent through mimicry, but that they do understand.

      • gizmo686 2 days ago

        How do LLMs promise to "understand"? Broadly speaking, AI/ML can be divided into two groups: mimicry, where the model is given a corpus of assumed-good data and attempts to generalize it, and reinforcement learning, where the AI is given a high-level fitness function and set loose to optimize it.

        The current generation of LLMs falls pretty heavily into the mimicry family of AI.
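
        A toy sketch of the split (a single-parameter "policy" invented purely for illustration, not any real training loop): mimicry pulls toward whatever the corpus does, RL chases whatever the fitness function pays.

          import random

          # Toy "policy": a single probability p of emitting "yes".

          def mimicry_step(p, corpus, lr=0.1):
              # Imitation: move p toward the frequency in assumed-good data.
              target = corpus.count("yes") / len(corpus)
              return p + lr * (target - p)

          def rl_step(p, reward, lr=0.05):
              # Reinforcement: sample an action, shift p toward what was rewarded.
              said_yes = random.random() < p
              r = reward("yes" if said_yes else "no")
              return min(1.0, max(0.0, p + lr * r * (1 if said_yes else -1)))

          corpus = ["yes", "yes", "no", "yes"]              # imitate this (~0.75 "yes")
          reward = lambda tok: 1.0 if tok == "no" else 0.0  # fitness function pays for "no"

          p_mimic = p_rl = 0.5
          for _ in range(500):
              p_mimic = mimicry_step(p_mimic, corpus)
              p_rl = rl_step(p_rl, reward)
          print(round(p_mimic, 2), round(p_rl, 2))  # roughly 0.75 vs. 0.0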

        • smokel 2 days ago

          This is a false dichotomy.

          You introduce two categories, state that LLMs are not part of one category, and then conclude that they must be in the other. In reality, the distinction between the two classes is not so clear.

          The transformer architecture is quite something, and the number of layers and nodes involved in a typical LLM is staggering. This goes way beyond linear regression.

  • habinero 2 days ago

    In my experience, it's a phrase used to mock the weirdos who anthropomorphize and fetishize a statistics model into some fantasy "intelligence", when it has none by design.

    It's less "humans are special" and more "this ain't it, chief".

    • floydnoel 21 hours ago

      well said. as i saw put recently, "LLMs are calculators for text" and nobody seems to get confused whether calculators are sentient

  • gsf_emergency_2 2 days ago

    >moving the goalposts

    I don't know whether to acknowledge that Kurt Goedel was a very special parrot

    Because "moving the goalposts" (exceedingly intricately, one must add) was indeed a specialty of his (or Cantor's).

    Otherwise, imho, humans are superior to parrots because we can kinda derive "understanding" in the face of paradox.. (also Einstein,Lobachevsky, special parrots?)

    YMMV, I personally haven't tried to get fresh in this specific way with a you-get-whadya-pay-for chatbot..

    (Though some say the average person is also a user of tools like Occam's razor, so maybe we are all stochastic crows?)

  • goatlover 2 days ago

    We're conscious animals who communicate because we navigate social spaces, not because we're completing the next token. I wonder about hackers who think they're nothing more than the latest tech.

    • int_19h 2 days ago

      You postulate it as if these two are mutually exclusive, but it's not at all clear why we can't be "completing the next token" to communicate in order to navigate social spaces. This last part is just where our "training" (as species) comes from, it doesn't really say anything about the mechanism.

      • goatlover 2 days ago

        Because what's motivating our language is a variety of needs, emotions and experiences as social animals. As such we have goals and desires. We're not sitting there waiting to be prompted for some output.

        • int_19h a day ago

          You constantly have input from all your senses, which is effectively your "prompt". If you stick a human into a sensory deprivation tank for long enough, very weird things happen.

    • parpfish 2 days ago

      How do you know we’re not just completing the next token?

      It seems eminently plausible that the way cognition works is to take in current context and select the most appropriate next action/token. In fact, it’s hard to think of a form of cognition that isn’t “given past/context, predict next thing”
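
      A toy sketch of that framing, with bigram counts standing in for "context" (nothing LLM-specific, just the loop):

        from collections import Counter, defaultdict

        # "Given past/context, predict next thing": count which token follows
        # which in a tiny corpus, then generate greedily from the counts.
        corpus = "the cat sat on the mat and the cat slept".split()
        follows = defaultdict(Counter)
        for prev, nxt in zip(corpus, corpus[1:]):
            follows[prev][nxt] += 1

        context = ["the"]
        for _ in range(5):
            nxt = follows[context[-1]].most_common(1)[0][0]  # most likely next token
            context.append(nxt)
        print(" ".join(context))  # -> "the cat sat on the cat"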

      • throwup238 2 days ago

        Philosophers have been arguing a parallel point for centuries. Does intelligence require some sort of (ostensibly human-ish) qualia or does “if it quacks like a duck, it is a duck” apply?

        I think it's better to look at large language models in the context of Wittgenstein. Humans are more than next token predictors because we participate in “language games” through which we experimentally build up a mental model for what each word means. LLMs learn to “rule follow” via a huge corpus of human text but there’s no actual intelligence there (in a Wittgensteinian analysis) because there’s no “participation” beyond RLHF (in which humans are playing the language games for the machine). There’s a lot to unpack there but that’s the gist of my opinion.

        Until we get some rigorous definitions for intelligence or at least break it up into many different facets, I think pie in the sky philosophy is the best we can work with.

      • giardini 2 days ago

        Trivially, because any two of us rarely produce the same "next token"!

        An ensemble of LLMs trained identically would generate the same next token(s) forever. But we don't - we generate different sequences.

        We are not LLMs.

      • goatlover 2 days ago

        If you ignore everything that makes us human to make some sort of analogy between brain activity and LLMs. Let us not forget they are tools we made to serve our goals.

      • bluefirebrand 2 days ago

        > How do you know we’re not just completing the next token

        Because we (humans) weren't born into a world with computers, internet, airplanes, satellites, etc

        "Complete next token" means that everything is already in the data set. It can remix things in interesting ways, sure. But that isn't the same as creating something new

        Edit: I would love to hear someone's idea about how you could "parrot" your way into landing people on the moon without any novel discovery or invention

        • aeonik 2 days ago

          Everything is made out of just protons, neutrons, and electrons, along with some fields that allow interaction (and muons, neutrinos, and a few others).

          Everything that is physical is nothing but remixes and recombinations of a very small set of tokens.

          • bluefirebrand 2 days ago

            > Everything that is physical is nothing but remixes and recombinations of a very small set of tokens.

            We're not talking about "physical" with LLMs, we're talking about knowledge and creativity and reasoning, which are metaphysical.

            The sum total of human knowledge cannot possibly be purely composed of remixes and recombinations, there has to be some baseline that humans invented for there to even be something to remix!

            • aeonik a day ago

              All of that is rooted in physics though.

              Knowledge and creativity absolutely are physical things. It's clear from brain injury studies that there are very localized and specific functions tied to this creativity.

              Drugs also clearly have a very physical effect on these attributes.

          • goatlover 2 days ago

            You're conflating symbolic descriptions with the physical stuff itself.

            • aeonik a day ago

              You're right to flag the distinction between symbols and substance, but I think you're misapplying it here.

              I'm not conflating symbolic systems with the physical substrate: they're obviously different levels of abstraction. What I am saying is that symbolic reasoning, language, creativity, and knowledge all emerge from the same underlying physical processes. They're not magic. They're not floating in some Platonic realm. They’re instantiated in real, measurable patterns, whether in neurons or silicon.

              You can't have metaphysics without physics. And we have solid evidence, from neuroscience, from pharmacology, from evolutionary biology, that the brain's symbolic output is fundamentally a physical phenomenon. Injuries, chemicals, electrical stimulation, they all modulate “metaphysical” experience in completely physical ways.

              Emergence matters here. Yes, atoms aren’t thoughts, but enough atoms arranged the right way do start behaving like a thinking system. That’s the whole point of complex systems theory, chaos theory, and even early AI work like Hofstadter and Dennett. I recommend "Gödel, Escher, Bach", or Melanie Mitchell's "Complexity: A Guided Tour", if you're curious.

              If you're arguing there's something else, some kind of unphysical or non-emergent component to knowledge or creativity, I'd honestly love to hear more, because that's a bold claim. But waving away the physical substrate as irrelevant doesn’t hold up under scrutiny.

    • triceratops 2 days ago

      Everyone's computing the next token. Intelligence is computing the right token.

      • giardini 2 days ago

        What is the "right" token? How do you identify it?

        Best to not assume humans are LLMs.

      • goatlover 2 days ago

        Until we create the next thing, then intelligence will be compared to that. Anyway, I don't think neuroscientists are making this claim.

  • MichaelZuo 2 days ago

    Well it’s even more dismal in reality.

    Gather enough parrots on a stage and at least one can theoretically utter a series of seemingly meaningful word-like sounds that is legitimately novel, that has never been uttered before.

    But I doubt any randomly picked HN user will actually accomplish that before, say, age 40. Most people just don’t ever get enough meaningful speaking opportunities to make that statistically likely. There are just too many tens of billions of people that have already existed and uttered words.

    • anon373839 2 days ago

      That’s not reality, it’s theory.

      • MichaelZuo 2 days ago

        Can you write down the actual argument?

        It seems to be plausible, to me, given enough parrots.

        • thfuran 2 days ago

          Novel utterances happen all the damn time. See https://venturebeat.com/business/15-of-all-google-searches-a... for tangential evidence.

          Edit: actually that looks like it's just an offhand mention of Google's initial report, but I don't really feel like spending more time tracking down details to rebut so silly a claim.

          • MichaelZuo 2 days ago

            Unique gibberish and spelling errors also count as a “unique search” so I don’t see how it relates.

            Do you have an argument that makes sense?

        • anon373839 2 days ago

          This is embarrassing, but I hastily misread your comment as saying something it didn’t say. So just disregard my comment altogether!

  • imtringued 2 days ago

    Except that when I encounter people like you, they are mostly interested in saying that LLMs are the be-all and end-all of intelligence, only to walk their statements back every time a new LLM innovation comes out that proves them wrong.

    Since humans are just stochastic parrots, we don't need to add features or change anything about LLMs. Innovation is for the weak and stupid. We can just scale LLMs by doing nothing except adding more data, parameters and training time.

    The status quo bias is unreal. I don't even know what purpose it serves other than discouraging technological progress. The people claiming to champion LLMs by denying the differences between humans and LLMs are their biggest enemies.

Tossrock 2 days ago

My favorite part of the "stochastic parrot" discourse was all the people repeating it without truly understanding what they were talking about.

  • posnet 2 days ago

    Clearly all the people repeating it without truly understanding it are just simple bots with a big lookup table of canned responses.

    • Tossrock 2 days ago

      Actually I think they're tiny homunculi, trapped in a room full of meaningless symbols but given rules on how to manipulate them.

_heimdall 2 days ago

This argument isn't particularly compelling in my opinion.

I don't actually like the stochastic parrot argument either to be fair.

I feel like the author is ignoring the various knobs (randomization factors may be a better term) applied to the models during inference that are tuned specifically to make the output more believable or appealing.

Turn the knobs too far and the output is unintelligible garbage. Don't turn them far enough and the output feels very robotic or mathematical; it's obvious that the output isn't human. The other risk of not turning the knobs far enough would be copyright infringement, but I don't know if that happens often in practice.
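
One such knob is sampling temperature; a rough sketch of what turning it does, with made-up numbers (not any particular model's API):

  import math, random
  from collections import Counter

  # Temperature reshapes the next-token distribution before sampling:
  # very low -> near-deterministic ("robotic"), very high -> close to noise.
  def sample(probs, temperature):
      logits = {tok: math.log(p) / temperature for tok, p in probs.items()}
      z = sum(math.exp(l) for l in logits.values())
      weights = [math.exp(l) / z for l in logits.values()]
      return random.choices(list(logits), weights=weights)[0]

  next_token = {"the": 0.6, "a": 0.3, "banana": 0.1}
  for t in (0.1, 1.0, 5.0):
      print(t, Counter(sample(next_token, t) for _ in range(1000)))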

Claiming that LLMs aren't stochastic parrots without dealing with the fact that we forced randomization factors into the mix misses a huge potential argument that they are just cleverly disguised stochastic parrots.

  • chongli 2 days ago

    This seems like it was inevitable. Most people do not understand the meaning of the word "stochastic" and so they're likely to simply ignore it in favour of reading the term as "_____ parrot."

    What you have described, a probability distribution with carefully-tuned parameters, is perfectly captured by the word stochastic as it's commonly used by statisticians.

  • amenhotep 2 days ago

    Human brains are similarly finely tuned and have similar knobs, it seems to me. People with no short term memory have the same conversations over and over again. Drunk people tend to be very predictable. There are circuits that give us an overwhelming sense of impending doom, or euphoria, or the conviction that our loved ones have been replaced by imposters. LLMs with very perturbed samplers bear, sometimes, a striking resemblance to people on certain mind-altering substances.

    • _heimdall a day ago

      And that's really the core of the problem: we don't understand well how the human mind works, and we can't really define or identify "intelligence."

      I mentioned I don't like the stochastic parrot argument, and that I find this article's argument lacking. Both are for the same reason, the arguments are making claims that we simply can't make while missing the fundamental understanding of what intelligence really is and how human (and other animals) brains work.

  • mbauman 2 days ago

    Yes, this really seems like an argument between two contrived straw people at the absolute extremes.

nopinsight 2 days ago

For the skeptics: Scoring just 10% or so in Math-Perturb-Hard below the original MATH Level 5 (hardest) dataset seems in line with or actually better than most people would do.

Does that mean most people are merely parrots too?

https://math-perturb.github.io/

https://arxiv.org/abs/2502.06453

Leaderboard: https://math-perturb.github.io/#leaderboard

Anyone who continues to use the parrot metaphor should support it with evidence at least as strong as the “On the Biology of a Large Language Model” research by Anthropic which the article refers to:

https://transformer-circuits.pub/2025/attribution-graphs/bio...

  • _heimdall 2 days ago

    You seem to be coming with the assumption that the difference between parrots and what many would consider intelligence is math, or that math is a reliable indicator of those different groups.

    What makes you believe that is the case?

    • nopinsight 2 days ago

      Solving hard math problems requires understanding the structure of complex mathematical reasoning. No animal is known to be capable of that.

      Most definitions and measurements of intelligence by most laypeople and psychologists include the ability to reason, with mathematical reasoning widely accepted as part of or a proxy for it. They are imperfect but “intelligence” does not have a universally accepted definition.

      Do you have a better measurement or definition?

      • _heimdall 2 days ago

        Math is a contrived system though, there are no fundamental laws of nature that require math to be done the way we do it.

        A human society may develop their own math in a base 13 system, or an entirely different way of representing the same concepts. When they can't solve our base 10 math problems in a way that matches how we expect does that mean they are parrots?

        Part of the problem here is that we still have yet to land on a clear, standard definition of intelligence that most people agree with. We could look to IQ, and all of its problems, but then we should be giving LLMs an IQ test to answer rather than a math test.

        • nopinsight 2 days ago

          The fact that much of physics can be so elegantly described by math suggests the structures of our math could be quite universal, at least in our universe.

          Check out the problems in the MATH dataset, especially Level 5 problems. They are fairly advanced (by most people’s standards) and most are not dependent on which N is used in the base-N system used to solve them. The answers would be different of course, but the structures of the problems and solutions remain largely intact.

          Website for tracking IQ measurements of LLMs:

          https://www.trackingai.org/

          The best one already scores higher than all but the top 10-20% of most populations.

      • timr 2 days ago

        > Solving hard math problems requires understanding the structure of complex mathematical reasoning. No animal is known to be capable of that.

        Except, it doesn't. Maybe some math problems do -- or maybe all of them do, when the text isn't in the training set -- but it turns out that most problems can be solved by a machine that regurgitates text, randomly, from all the math problems ever written down.

        One of the ways that this debate ends in a boring cul-de-sac is that people leap to conclusions about the meaning of the challenges that they're using to define intelligence. "The problem has only been solved by humans before", they exclaim, "therefore, the solution of the problem by machine is a demonstration of human intelligence!"

        We know from first principles what transformer architectures are doing. If the problem can be solved within the constraints of that simple architecture, then by definition, the problem is insufficient to define the limits of capability of a more complex system. It's very tempting to instead conclude that the system is demonstrating mysterious voodoo emergent behavior, but that's a bit like concluding that the magician really did saw the girl in half.

      • bluefirebrand 2 days ago

        > Solving hard math problems requires understanding the structure of mathematical reasoning

        Not when you already know all of the answers and just have to draw a line between the questions and the answers!

        • nopinsight 2 days ago

          Please check out the post on Math-Perturb-Hard conveniently linked to above before making a comment without responding to it.

          A relevant bit:

          “for MATH-P-Hard, we make hard perturbations, i.e., small but fundamental modifications to the problem so that the modified problem cannot be solved using the same method as the original problem. Instead, it requires deeper math understanding and harder problem-solving skills.”

          • bluefirebrand 2 days ago

            Seems like that would explain why it scored 10%, not 100%, to me

            A child could score the same knowing the outcomes and guessing randomly which ones go to which questions

            • nopinsight 2 days ago

              My request:

              “Could you explain this sentence concisely?

              For the skeptics: Scoring just 10% or so in Math-Perturb-Hard below the original MATH Level 5 (hardest) dataset seems in line with or actually better than most people would do.”

              Gemini 2.5 Pro:

              “The sentence argues that even if a model's score drops by about 10% on the "Math-Perturb-Hard" dataset compared to the original "MATH Level 5" (hardest) dataset, this is actually a reasonable, perhaps even good, outcome. It suggests this performance decrease is likely similar to or better than how most humans would perform when facing such modified, difficult math problems.”

            • nkurz 2 days ago

              I think 'nopinsight' and the paper are arguing that the drop is 10%, not that the final score is 10%. For example, Deepseek-R1 dropped from 96.30 to 85.19. Are you actually arguing that a child guessing randomly would be able to score the same, or was this a misunderstanding?

int_19h 2 days ago

The whole "debate" around LMs being stochastic parrots is strictly a philosophical one, because the argument hinges on a very specific definition of intelligence. Thought experiments such as Chinese room make this abundantly clear.

  • gwern 2 days ago

    It was not 'strictly philosophical' in the way some things like the Chinese Room argument are; in Chinese Room, it's stipulated that the Room is pragmatically capable of responding like a native Chinese speaker (it is somehow implemented super-fast and can chat in Mandarin with you and pass a Mandarin Turing Test).

    However, the stochastic parrot arguments (and Gary Marcus in many of his writings) made specific, unambiguous empirical predictions about how LLMs would never be able to do many things, such as 'add numbers'. For example, the original Bender & Koller 2020 paper "Climbing towards NLU: On Meaning, Form, and Understanding in the Age of Data" (which lays out the core ideas that justify the 2021 paper that actually introduced the 'stochastic parrot' rhetoric) made many clear, falsifiable statements about what LLMs would never be able to do; here's one of them: https://aclanthology.org/2020.acl-main.463.pdf#page=14

    > To get a sense of how existing LMs might do at such a task, we let GPT-2 complete the simple arithmetic problem 'Three plus five equals'. The five responses below, created in the same way as above, show that this problem is beyond the current capability of GPT-2, and, we would argue, any pure LM.

    One could say many things about this claim, but not that it is a 'strictly philosophical one'.

  • jfengel 2 days ago

    That's about the only thing the Chinese room makes clear. The argument otherwise strikes me as valueless.

    • jltsiren 2 days ago

      It's important to understand the argument in context.

      Theoretical computer science established early that input-output behavior captures the essence of computation. The causal mechanism underlying the computation does not matter, because all plausible mechanisms seem to be fundamentally equivalent.

      The Chinese room argument showed that this does not extend to intelligence. That intelligence is fundamentally a causal rather than computational concept. That you can't use input-output behavior to tell the difference between an intelligent entity and a hard-coded lookup table.

      On one level, LLMs are literally hard-coded lookup tables. But they are also compressed in a way that leads to emergent structures. If you use the LLM through a chat interface, you are interacting with a Chinese room. But if the LLM has other inputs beyond your prompt, or if it has agency to act on its own instead of waiting for input, it's causally a different system. And if the system can update the model on its own instead of using a fixed lookup table to deal with the environment, this is also a meaningful causal difference.

      • kentonv 2 days ago

        > The Chinese room argument showed that this does not extend to intelligence.

        Searle's argument doesn't actually show anything. It just illustrates a complex system that appears intelligent. Searle then asserts, without any particular reasoning, that the system is not intelligent, simply because, well, how could it be, it's just a bunch of books and a mindless automaton following them?

        It's a circular argument: A non-human system can't be intelligent because, uhh, it's not human.

        This is wrong. The room as a whole is intelligent, and knows Chinese.

        People have, of course, made this argument, since it is obvious. Searle responds by saying "OK, well now imagine that the man in the room memorizes all the books and does the entire computation in his head. Now where's the intelligence???" Ummm, ok, now the man is emulating a system in his head, and the system is intelligent and knows Chinese, even though the man emulating it does not -- just like how a NES emulator can execute NES CPU instructions even though the PC it runs on doesn't implement them.

        Somehow Searle just doesn't comprehend this. I guess he's not a systems engineer.

        As to whether a lookup table can be intelligent: I assert that a lookup table that responds intelligently to every possible query is, in fact, intelligent. Of course, such a lookup table would be infinite, and thus physically impossible to construct.

        • jltsiren 2 days ago

          A lot of the controversy around the Chinese room argument is because people don't talk explicitly about different modes of thinking. One mode is searching for useful concepts and definitions. Another starts from definitions and searches for consequences. The discussion about the nature of intelligence is mostly about the former.

          Intelligence, as we commonly understand it, is something humans have but we currently can't define. Turing proposed a definition based on the observable behavior of a system. We take an aspect of human behavior people consider intelligent and test the behavior of other systems against that. If we can't tell the difference between human behavior and the behavior of an artificial system, we consider the artificial system intelligent.

          Searle used a thought experiment to argue that Turing's definition was not useful. That it did not capture the concept of intelligence in the way people intuitively understand it. If it turns out there was a person speaking Chinese answering the questions, the behavior is clearly intelligent. But if there was only a simple mechanism and a precomputed lookup table, it doesn't feel intelligent.

          Maybe we need a better definition of intelligence. Maybe intelligence in the sense people intuitively understand it is not a useful concept. Or maybe something else. We don't know that, because we don't really understand intelligence.

          • kentonv 2 days ago

            > But if there was only a simple mechanism and a precomputed lookup table, it doesn't feel intelligent.

            I think a flaw of the argument is that the way it is framed makes it sound like the system is simple (like a "lookup table") which tricks people's intuitions into thinking it doesn't sound "intelligent". But the actual algorithm implemented by the "Chinese room" would in fact be insanely complex. In any case, I think Searle's intuition here is simply wrong. The system is in fact intelligent, even if it's just a series of lookup tables.

            • bluefirebrand 2 days ago

              > "But the actual algorithm implemented by the "Chinese room" would in fact be insanely complex. In any case, I think Searle's intuition here is simply wrong. The system is in fact intelligent, even if it's just a series of lookup tables.

              What this sounds like to me is that you don't place much value on the system actually understanding what it is doing. The system does not understand the input or the output, it is just a series of lookup tables

              If you ask it about the input you just gave it, can it remember that input?

              If you ask it to explain your previous input, and explain the output, can it do that? Do those have to be made into new entries in the lookup table first? Does it have the ability to create new entries in the lookup table without being told to do so?

              It seems to me you consider "intelligence" a very low bar

              • kentonv a day ago

                > The system does not understand the input or the output, it is just a series of lookup tables

                What? Why? Of course it understands.

                > If you ask it about the input you just gave it, can it remember that input?

                The system Searle describes has memory, yes.

                Perhaps you are getting at the fact that LLMs performing inference don't have memory, but actually they can be given memory via context. You might argue that this is not the same as human memory, but you don't know this. Maybe the way the brain works is, we spend each day filling our context, and then the training happens when we sleep. If that is true, are humans not intelligent then?

              • adgjlsfhk1 2 days ago

                > If you ask it to explain your previous input, and explain the output, can it do that?

                yes. Searle's fundamental misunderstanding is that "syntax is insufficient for semantics", but this is just nonsense that could only be believed by someone who has never actually tried to derive meaning from syntactic transformation (e.g. coding/writing a proof).

          • Kim_Bruning 2 days ago

            I used to amuse myself thinking up attacks against the Chinese room. One was to have an actual Chinese professor feed answers into the room but force the conclusion that there was no intelligence. Another was to simplify the Chinese room experiment to apply to a Turing machine instead, requiring a very large lookup table which would surely give the game away.

            I think ultimately I decided the Chinese room experiment was actually flawed and didn't reveal what it purported to reveal. From a neurophysiological viewpoint: the Chinese room is very much the Cartesian theater, and Searle places himself as the little man watching the screen. Since the Cartesian theater does not exist, he's never going to see a movie.

            I might be missing a more subtle point of Searle's though; maybe the chinese room experiment should be read differently?

    • chongli 2 days ago

      No, the Chinese Room is essentially the death knell for the Turing Test as a practical tool for evaluating whether an AI is actually intelligent.

      • red75prime 2 days ago

        The Chinese Room didn't show anything. It's a misleading intuition pump that for some reason is being brought up again and again.

        Just think about it. All the person in the room does is mechanical manipulation. The person's understanding or non-understanding of the Chinese language is causally disconnected from everything, including the functioning of the room. There's zero reason to look at their understanding to draw conclusions about the room.

        The second point is that it's somehow about syntactic manipulation specifically. But why? What would change if the person in the room were solving the QM equations of your brain's quantum state? Would that mean a perfect model of your brain doesn't understand the English language?

        • chongli 2 days ago

          The Chinese Room argument is silent on the question of the necessary and sufficient conditions for intelligence, thinking, and understanding. It’s an argument against philosophical functionalism in the theory of mind which states that it is sufficient to compare inputs and outputs of a system to infer intelligence.

          The Chinese Room is also an argument that mere symbolic manipulation is insufficient to model a human mind.

          As for the QM-equations, the many-body problem in QM is your enemy. You would need a computer far larger than the entire universe to simulate the quantum states of a single neuron, never mind a human brain.

          • red75prime 2 days ago

            Again. It's not an argument. It's a misleading intuition pump. Or a failure of philosophy to filter away bullshit, if you will.

            Please, read again what I wrote.

            Regarding "larger than Universe": "the argument" places no restrictions on runtime or space complexity of the algorithm. It's just another intuitive notion: syntactic processing is manageable by a single person, other kinds of processing aren't.

            I'm sorry for the confrontational tone, but I'm really dismayed that this thing keeps floating around and keeps being regarded as a foundational result.

      • int_19h 2 days ago

        Only if you buy into the whole premise, which is dubious to say the least, and is a good example of begging the question.

        • chongli 2 days ago

          What exactly is dubious about faking an AI with a giant lookup table and fooling would-be Turing Test judges with it? Or did you mean the Turing Test is dubious? Because that’s what the Chinese Room showed (back in 1980).

          • int_19h a day ago

            The dubious part is claiming that a large enough lookup table is not intelligent. It's basically asserted on the grounds "well of course it isn't", but no meaningful arguments are presented to this effect.

          • Kim_Bruning a day ago

            Is it just me, or would a giant lookup table fail much weaker tests that you could throw at it? (For instance: just keep asking it to do sums until it runs out.)

            • chongli a day ago

              Well presumably the lookup table can have steps you go through (produce this symbol, then go to row 3568), with state as well, so it’s more like a Turing machine than a single-shot table lookup.
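
              Something like this toy transition table, purely for illustration:

                table = {
                    # (current row, input) -> (output, next row)
                    ("row1", "hello"): ("Hi there.", "row2"),
                    ("row2", "how are you?"): ("Fine, thanks.", "row3568"),
                    ("row3568", "bye"): ("Goodbye.", "row1"),
                }

                state = "row1"
                for user_input in ["hello", "how are you?", "bye"]:
                    output, state = table[(state, user_input)]
                    print(output)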

      • pixl97 2 days ago

        The Chinese Room is a sophisticated way for humans to say they don't understand systematic systems and processes.

        • chongli 2 days ago

          No, I think the Chinese Room is widely misunderstood by non-philosophers. The goal of the argument is not to show that machines are incapable of intelligent behaviour.

          Even a thermostat can show intelligent behaviour. The issue for the thermostat is that all the intelligence has happened ahead of time.

          • pixl97 2 days ago

            I mean that is just talking about probabilistic systems where the probability is either zero or one. When you get into probabilistic systems with a wider range of options than that, and you can feed new data back into the system, you start getting systems that look adaptively intelligent.

            • chongli 2 days ago

              There's nothing inherent to the Chinese Room thought experiment that prohibits the operator inside from using a random number source combined with an arbitrarily sophisticated sequence of lookup tables to produce "stochastic parrot" behaviour. Anyone who has played as a Dungeon Master in D&D has used dice and tables for this.

              Similarly for feedback. All the operator needs to do is log each input in a file marked for that user and then when new input arrives the old input is used as context in the lookup table. Ultimately, arbitrarily sophisticated intelligent behaviour can be produced without the operator ever having any understanding of it.
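
              A toy sketch of that setup (one user and a tiny table, for brevity):

                import random

                # A table keyed on the logged history, plus a dice roll;
                # the operator understands none of the content.
                tables = {
                    (): ["Who goes there?", "Greetings."],
                    ("hello",): ["Well met.", "Hello again."],
                }
                log = []

                def respond(message):
                    options = tables.get(tuple(log), ["(shrugs)"])
                    log.append(message)            # keep old input as context
                    return random.choice(options)  # the dice roll

                print(respond("hello"))
                print(respond("how goes it?"))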

treetalker 4 days ago

> The parrot is dead. Don’t be the shopkeeper.

Continuing the metaphor, we never wanted to work in a pet shop in the first place. We wanted to be … lumberjacks! Floating down the mighty rivers of British Columbia! With our best girls by our side!

skybrian 2 days ago

There’s still a lot to learn about how LLMs do things. They could be doing it in either a deep or a shallow way (parroting information) depending on the task. It’s not something to be settled once and for all.

So what’s “dead?” Overconfidently assuming you can know how an LLM does something without actually investigating it.

agentultra 2 days ago

The conclusion goes into that glassy-eyed realm of, “what if we’re no better than the algorithm?”

Problem is, we don’t even know what makes us think. So you can jump to any conclusion and nobody could really tell if you’re wrong.

We do know how transformers and layers work. They’re algorithms that crunch numbers. A great deal of numbers. And we can use the training set to generate plausible outputs given some input. Yes, stochastic parrot is a reduction of all the technical sophistication in LLMs. But it’s not entirely baseless. At the end of the day it is copying what’s in the training data. In a very clever way.
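
(For a sense of what "crunching numbers" means here: a toy single attention head in numpy, not any production implementation.)

  import numpy as np

  # One attention head over a sequence of 3 token vectors (toy sizes).
  np.random.seed(0)
  d = 4
  x = np.random.randn(3, d)                   # 3 tokens, d-dimensional embeddings
  Wq, Wk, Wv = (np.random.randn(d, d) for _ in range(3))

  q, k, v = x @ Wq, x @ Wk, x @ Wv
  scores = q @ k.T / np.sqrt(d)               # how much each token attends to each
  weights = np.exp(scores) / np.exp(scores).sum(axis=1, keepdims=True)
  out = weights @ v                           # weighted mix of value vectors
  print(out.shape)                            # (3, 4): one updated vector per token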

However, resist the temptation to believe we understand human brains and human thought. And resist the temptation to anthropomorphize algorithms. It’s data and patterns.

jrmg 2 days ago

> For a while, some people dismissed language models as “stochastic parrots”. They said models could just memorise statistical patterns, which they would regurgitate back to users.

> The problem with this theory, is that, alas, it isn’t true.

> If a language model was just a stochastic parrot, when we looked inside to see what was going on, we’d basically find a lookup table. … But it doesn’t look like this.

But does that matter? My understanding is that, if you don’t inject randomness (“heat”) into a model while it’s running, it will always produce the same output for the same input. In effect, a lookup table. The fancy stuff happening inside that the article describes is, in effect, [de]compression of the lookup table.
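
A toy way to see it (a stand-in function for deterministic decoding, nothing to do with any real model):

  # With no injected randomness, generation is a pure function of the input,
  # so over any finite set of prompts it is literally a lookup table.
  def generate(prompt):                        # stand-in for temperature-zero decoding
      return " ".join(reversed(prompt.split()))

  prompts = ["hello there", "the parrot is dead"]
  lookup_table = {p: generate(p) for p in prompts}   # the "decompressed" table

  for p in prompts:
      assert generate(p) == lookup_table[p]    # indistinguishable on these inputs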

Of course, maybe that’s all human intelligence is too (the whole ‘free will is an illusion in a deterministic universe’ argument is all about this) - but just because the internals are fancy and complicated doesn’t mean it’s not a lookup table.

  • red75prime 2 days ago

    Everything can be represented as a lookup table. Well, at least everything we can rigorously reason about. Because set theory can serve as a foundation of mathematics. And relations there are sets of pairs (essentially lookup tables).

    I guess it means that we can throw away the notion that "it can be represented as a lookup table" has some profound meaning. Without further clarifications, at least. Finite/infinite lookup table, can/can't be constructed in time polynomial in the number of entries. Things like that.

alganet 2 days ago

"ESSE É UM ESPERTO", or, "this is a smart one", in portuguese.

So far, LLM models have not demonstrated grasp on dual language phonetic jokes and false cognates.

Humans learn a second language very quickly, and false cognates that work on phonetics are the first steps in doing so, doesn't require a genius to understand.

I am yet to see an LLM that can demonstrate that. They can translate it, or repeat known false cognates, but can't come up with new ones on the spot.

If they do acquire that, we will come up with another creative example of what humans can do that machines can't.

  • NooneAtAll3 2 days ago

    do deaf/mute people recognize phonetic bilingual jokes?

    • _heimdall 2 days ago

      I have a deaf friend who can read lips in two languages. As far as I am aware she can pick up humor of all kinds in both.

      She knows ASL as well, but I don't think she knows any other dialect of sign language (is dialect the right term? I'm not actually sure).

    • alganet 2 days ago

      Sign language in Brazil (Libras) is different from ASL.

      I am sure there are false cognate signs among them, and dual users of both sign languages can appreciate them.

hulitu 4 days ago

> The Parrot Is Dead

The page says "Something has gone terribly wrong :(".

He's not dead, he's resting.

cadamsdotcom 2 days ago

Why the existential crisis?

LLMs are stochastic parrots and so are humans - but humans still get to be special. Humans are more stochastic as we act on far more input than a several-thousand token prompt.

anothernewdude 2 days ago

> If a language model was just a stochastic parrot, when we looked inside to see what was going on, we’d basically find a lookup table

I disagree right away. There are more sophisticated probability models than lookup tables.

> It'd be running a search for the most similar pattern in its training data and copying this.

Also untrue. Sophisticated probability models combine probabilities from all the bits of context, and fuzz similar tokens together via compression (i.e. you don't care which particular token is used, just that a similar one is used).
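
A toy contrast between a literal lookup table and a model that fuzzes similar tokens together via embeddings (vectors chosen by hand, purely illustrative):

  import numpy as np

  # A literal lookup table fails on any history it hasn't seen; a model that
  # combines context via embeddings "fuzzes" similar tokens together and can
  # generalise.
  emb = {
      "cat":  np.array([1.0, 0.0, 0.0]),
      "dog":  np.array([0.9, 0.1, 0.0]),   # "dog" sits near "cat"
      "bird": np.array([0.0, 0.0, 1.0]),
  }
  seen = {("cat",): "sat", ("bird",): "flew"}   # the "training data"

  def lookup_model(context):
      return seen.get(tuple(context), "<unk>")

  def fuzzy_model(context):
      ctx = sum(emb[w] for w in context)
      scores = {nxt: float(ctx @ sum(emb[w] for w in prev))
                for prev, nxt in seen.items()}
      return max(scores, key=scores.get)

  print(lookup_model(["dog"]))   # "<unk>": exact history never seen
  print(fuzzy_model(["dog"]))    # "sat": generalises because dog is near cat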

They're parrots, just better parrots than this person can conceive of.

getnormality 2 days ago

"Stochastic parrot" is a deepity, an ambiguous phrase that blends a defensible but trivial meaning with a more profound but false meaning.

It's true, and trivial, that all next word predictors are stochastic and are designed to generate output based on information from their training data.

The claim that this generation merely "parrots" the training data is more significant, but obviously false if you interact with these models at all.

lukasb 2 days ago

Given LLMs’ OOD performance, the parrot metaphor still looks good to me

devmor 2 days ago

I am getting fairly tired of seeing articles about LLMs that claim “[insert criticism] was wrong” but offer nothing other than the author’s interpretation of a collection of other people’s writings, with limited veracity.

derbOac 2 days ago

This struck me as a strawman argument against the "stochastic parrot" interpretation. I really disagree with this premise in particular: "if a language model was just a stochastic parrot, when we looked inside to see what was going on, we’d basically find a lookup table." I'm not sure how the latter follows from the former at all.

As someone else pointed out, I think there's deep philosophical issues about intelligence and consciousness underlying all this and I'm not sure it can be resolved this way. In some sense, we all might be stochastic parrots — or rather, I don't think the problem can be waved away without deeper and more sophisticated treatments on the topic.

NooneAtAll3 2 days ago

my personal anecdote about stochastic parrot arguments is that the argument itself became so repetitive that its defenders sound as parrots...

kerkeslager 2 days ago

> This kind of circuitry—to plan forwards and back—was learned by the model without explicit instruction; it just emerged from trying to predict the next word in other poems.

This author has no idea what's going on.

The AI didn't just start trying to predict the next word in other poems, it was explicitly instructed to do so. It then sucked in a bunch of poems and parroted them out.

And... the author drastically over-represents its success with a likely cherry-picked example. When I gave Claude lines to rhyme with, it gave me back "flicker" to rhyme with "killer" and "function" to rhyme with "destruction". Of the 10 rhymes I tried, only two actually matched two syllables ("later/creator" and "working/shirking"). I'm not sure how many iterations the author had to run to find a truly unusual rhyme like "rabbit/grab it", but it pretty obviously is selection bias.

And...

I actually agree with the other poster who says that part of this stochastic parrot argument is about humans wanting to feel special. Exceptionalism runs deep: we want to believe our group (be it our nation, our species, etc.) is better than other groups. It's often wrong: I don't think we're particularly unique in a lot of aspects--it's sort of a combination of things that makes us special if we are at all.

AI are obviously stochastic parrots if you know how they work. The research is largely public and unless there's something going on in non-public research, they're all just varieties of stochastic parroting.

But, these systems were designed in part off of how the human brain works. I do not think it's in evidence at all that humans aren't stochastic parrots. The problem is that we don't have a clear definition of what it means to understand something that's clearly distinct from being a stochastic parrot. At a certain level of complexity of stochastic parroting, a stochastic parrot is likely indistinguishable from someone who truly understands concepts.

I think ultimately, the big challenge for AI isn't that it is a stochastic parrot (and it is a stochastic parrot)--I think a sufficiently complex and sufficiently trained stochastic parrot can probably be just as intelligent as a human.

I think the bigger challenge is simply that entire classes of data simply have not been made available to AI, and can't be made available with current technology. Sensory data. The kind of data a baby gets from doing something and seeing what happens. Real-time experimentation. I think a big part of why humans are still ahead of AI is that we have a lot of implicit training we haven't been able to articulate, let alone pass on to AI.

zeofig 2 days ago

I'm so glad We have all Decided this Together and we can now Enjoy the Koolaid

pyfon 2 days ago

Dead parrot is a Monty Python reference. Also where the Python language gets its name.