scribu 7 hours ago

The HN submission title is editorialized in a non-helpful way. Why beat a dead horse instead of focusing on what’s actually new in TFA?

The linked paper proposes an obvious-in-retrospect form of data augmentation: shuffle the order of the premises, so that the model can’t rely on spurious patterns. That’s kinda neat.
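In practice that's cheap to apply at the data-prep stage: generate extra copies of each training example with the premises permuted, keeping the question and answer fixed. A minimal sketch of the idea as I read it (the dict layout and function name are my own illustration, not the paper's actual pipeline):

  import random

  def premise_shuffle_augment(example, n_variants=3, seed=0):
      # `example` is assumed to look like:
      #   {"premises": ["P1", "P2", ...], "question": "...", "answer": "..."}
      # Each variant keeps the same question and answer but permutes the
      # premises, so the model can't key on a fixed premise ordering.
      rng = random.Random(seed)
      variants = [example]
      for _ in range(n_variants):
          shuffled = list(example["premises"])
          rng.shuffle(shuffled)
          variants.append({**example, "premises": shuffled})
      return variants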

  • spaintech 6 hours ago

    Correct, I updated the title to match the original paper. Thank you for bringing it up.

spaintech 8 hours ago

When a language model is trained for chain-of-thought reasoning, particularly on datasets with a limited number of sequence variations, it may end up memorizing predetermined step patterns that seem effective but don’t reflect true logical understanding. Rather than deriving each step logically from the previous ones and the given premises, the model might simply follow a “recipe” it learned from the training data. As a result, this adherence to learned patterns can overshadow genuine logical relationships, causing the model to rely on familiar sequences instead of understanding why one step logically follows from another.

In other words, language models are advanced pattern recognizers that mimic logical reasoning without genuinely understanding the underlying logic.

We might need to shift our focus to the training phase for better performance?

  • philipov 7 hours ago

    > As a result, this adherence to learned patterns can overshadow genuine logical relationships, causing the model to rely on familiar sequences instead of understanding why one step logically follows from another.

    To be honest, even humans rarely get above this level of understanding for many tasks. I don't think most people really understand math above the level of following the recipes they learned by rote in school.

    Or beyond following the runbook in their IT department's documentation system.

    And when the recipe doesn't work, they are helpless to figure out why.

  • kingkongjaffa 7 hours ago

    > instead of understanding why one step logically follows from another

    There’s currently 0% chance of “understanding” happening at any point with this technology.

    • ikanreed 7 hours ago

      I mostly agree, but struggle with saying this with perfect certainty.

      Understanding in the "have a mental model of the world, apply it, derive thoughts from that model, derive words from thoughts" sense is a thing they don't do the way we do.

      But understanding of some kinds CAN be encoded into tokens and their relationships. They're clearly capable of novel, correct inferences that are not directly contained within their training sets.

      I all but guarantee my "My fish suffocated when I brought it to space, even though I gave it a space suit filled with pure water, why?" test case is not something it was explicitly trained on, but it correctly inferred "Because fish need oxygenated water."

    • jvanderbot 7 hours ago

      How do we define understanding?

      • TeMPOraL 7 hours ago

        There are many ways to define it. Taking, for example, the "definition" from Wikipedia here[0], you could say that LLMs are understanding, in a distilled form, because relationships are precisely what they're made of.

        --

        [0] - https://en.wikipedia.org/wiki/Understanding#Definition - though this feels more like vague musings than a definition proposal.

  • joe_the_user 7 hours ago

    That claim sounds like a quote from a paper, but it's not from the currently linked paper. The paper itself reads more like an antidote to the problem, and it does seem to roughly assume the claim.

    I like the claim and I'd guess it's true but this seems like a weird way to introduce it.

  • smrtinsert 7 hours ago

    Isn't that what the study you linked to roughly proposes?

  • belter 7 hours ago

    But John Carmack promised me AGI....

    • zeknife 7 hours ago

      I haven't kept up with his tweets, but I got the impression he deliberately chose not to get involved in the LLM hype in his own AI research?

spaintech 6 hours ago

If an LLM’s logic is derived primarily from its training phase, essentially by following patterns it has previously seen, doesn’t that underscore the critical role of training? We invest significantly in reinforcement learning and subsequent processes, so if the paper’s claim is accurate, perhaps we need to explore innovative approaches during the training phase itself.

fancyfredbot 7 hours ago

The title is actually "Order Doesn't Matter, But Reasoning Does: Training LLMs with Order-Centric Augmentation".

farts_mckensy 7 hours ago

Statistical inference is a form of logic. Can it do pure logical deduction? No. And neither can humans without some underlying pattern recognition to form premises. This notion of true "understanding" is a fantasy.

srveale 7 hours ago

Is there someone you're trying to disprove? LLMs are inherently statistical, as opposed to other techniques that rely on symbolic or logical relationships. I'm no expert but this is one of the very first things I learned when taking a class on neural networks.