• Optional@lemmy.world · 13 hours ago

    Did someone not know this, like, pretty much from day one?

    Not the idiot executives who blew all their budget on AI and made up for it with mass layoffs - the people actually interested in it. Was it not clear that there was no “reasoning” going on?

    • froztbyte@awful.systems · 7 hours ago

      there’s a lot of people (especially here, but not only here) who have had the insight to see this being the case, but there’s also been a lot of boosters and promptfondlers (i.e. people with a vested interest) putting out claims that their precious word vomit machines are actually thinking

      so while this may confirm a known doubt, rigorous scientific testing (and disproving) of the claims is nonetheless a good thing

      • Soyweiser@awful.systems · 23 minutes ago

        No, they do not, I’m afraid. Hell, I didn’t even know that ELIZA caused people to think it could reason (which worried its creator) until a few years ago.

    • khalid_salad@awful.systems · 12 hours ago (edited)

      Well, two responses I have seen to the claim that LLMs are not reasoning are:

      1. we are all just stochastic parrots lmao
      2. maybe intelligence is an emergent ability that will show up eventually (disregard the inability to falsify this and the categorical nonsense that is our definition of “emergent”).

      So I think this research is useful as a response to these, although I think “fuck off, promptfondler” is pretty good too.

    • DarkThoughts@fedia.io · 11 hours ago

      A lot of people still don’t, from what I can gather from some of the comments on “AI” topics. Especially the ones that skew the other way: the “AI” hysteria often comes from people who know fuck all about how the tech works. “Nudifiers”, other generative images, or explicit chats with bots that portray real or underage people are the most common topics attracting emotionally loaded but highly uninformed demands and outrage. Frankly, the whole “AI” topic in the media is massively overblown on both fronts, but I guess it’s good for traffic, and nuance is dead anyway.

      • Optional@lemmy.world · 11 hours ago

        Indeed, although every one of us who has seen a tech hype train once or twice expected nothing less.

        PDAs? Quantum computing. Touch screens. Siri. Cortana. Micropayments. Apps. Synergy of desktop and mobile.

        From the outset this went from “hey that’s kind of neat” to quite possibly toppling some giants of tech in a flash. Now all we have to do is wait for the boards to give huge payouts to the pinheads that drove this shitwagon in here and we can get back to doing cool things without some imaginary fantasy stapled on to it at the explicit instruction of marketing and channel sales.

        • Soyweiser@awful.systems · 19 minutes ago (edited)

          XML was also a tech hype for a bit.

          And I still remember how media outlets hyped up Second Life, forgot about it, then rediscovered it a few months later and started the hype all over again. It was fun.

            • rook@awful.systems · 1 hour ago

              The trackpad and trackpoint of my aging linux laptop stop working if the thing gets its lid shut. The touchscreen continues to work just fine, however. It turns out that while two stupid things can’t make a good thing, they can sometimes cancel each other out.

    • conciselyverbose@sh.itjust.works · 12 hours ago (edited)

      Yes.

      But the lies around them are so excessive that it’s a lot easier for executives of a publicly traded company to make reasonable decisions if they have concrete support to point to.

    • astrsk@fedia.io · 12 hours ago

      Isn’t OpenAI saying that o1 has reasoning as a specific selling point?

        • astrsk@fedia.io · 4 hours ago

          Which is my point and, forgive me, I believe is also the point of the research publication.

      • DarkThoughts@fedia.io · 11 hours ago

        My best guess is that it generates several possible replies and then does some sort of token match to determine which one is most likely to be accurate. Not sure if I’d call that “reasoning”, but I guess it could potentially improve results in some cases. With OpenAI not being so open, it is hard to tell, though. They’ve been overpromising a lot already, so it may as well be complete bullshit.
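
        If that guess is right, the mechanism would be something like best-of-N sampling with a consistency vote. A minimal Python sketch of the idea (the `generate` stub and the voting rule are assumptions, since OpenAI hasn’t published how o1 actually works):

        ```python
        import random
        from collections import Counter

        def generate(prompt: str) -> str:
            # Stand-in for one sampled completion; in reality this would
            # be a call to the model with a nonzero temperature.
            return random.choice(["42", "42", "41"])

        def best_of_n(prompt: str, n: int = 5) -> str:
            # Sample several candidate replies, then keep the answer the
            # candidates agree on most often (a crude "token match").
            candidates = [generate(prompt) for _ in range(n)]
            answer, _count = Counter(candidates).most_common(1)[0]
            return answer

        print(best_of_n("What is 6 * 7?"))
        ```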

        • lunarul@lemmy.world · 8 hours ago

          My best guess is that it generates several possible replies and then does some sort of token match to determine which one is most likely to be accurate.

          Didn’t the previous models already do this?

  • Optional@lemmy.world · 12 hours ago

    We suspect this research is likely part of why Apple pulled out of the recent OpenAI funding round at the last minute.

    Perhaps the AI bros “think” by guessing the next word and hoping it’s convincing. They certainly argue like it.

    🔥

    • lunarul@lemmy.world · 8 hours ago (edited)

      Perhaps the AI bros “think” by guessing the next word and hoping it’s convincing

      Perhaps? Isn’t that the definition of LLMs?

      Edit: oh, I just realized it’s not talking about the LLMs but about their apologists

  • masterplan79th@lemmy.world · 9 hours ago

    When you ask an LLM a reasoning question, you’re not expecting it to think for you; you’re expecting that it has crawled multiple people asking semantically the same question and getting semantically the same answers from other people, which are now encoded in its vectors.

    That’s why you can ask it: because it encodes semantics.
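
    What “encoded in its vectors” usually refers to is embedding similarity: questions with similar meanings land close together in vector space. A toy illustration (the three-number “embeddings” are invented for the example):

    ```python
    import math

    def cosine(a, b):
        # Cosine similarity: close to 1.0 means the vectors point the same way.
        dot = sum(x * y for x, y in zip(a, b))
        norm_a = math.sqrt(sum(x * x for x in a))
        norm_b = math.sqrt(sum(y * y for y in b))
        return dot / (norm_a * norm_b)

    q1 = [0.9, 0.1, 0.2]     # "how do I reverse a list in Python?"
    q2 = [0.85, 0.15, 0.25]  # "what's the way to invert a Python list?"
    other = [0.1, 0.9, 0.3]  # "best pizza toppings"

    print(cosine(q1, q2))     # high: similar phrasings land close together
    print(cosine(q1, other))  # low: unrelated text lands far away
    ```

    Whether that nearest-neighbour geometry over co-occurrence statistics amounts to “semantics” is exactly what the replies below dispute.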

    • self@awful.systems · 6 hours ago

      thank you for bravely rushing in and providing yet another counterexample to the “but nobody’s actually stupid enough to think they’re anything more than statistical language generators” talking point

    • leftzero@lemmynsfw.com · 6 hours ago

      Paraphrasing Neil Gaiman: LLMs don’t give you information; they give you information-shaped sentences.

      They don’t encode semantics. They encode the statistical likelihood that each token will follow a given sequence of tokens.
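
      At its most stripped down, that objective is a toy bigram model (a sketch only; real LLMs condition on long contexts through a neural network, but the training signal has the same shape):

      ```python
      from collections import Counter, defaultdict

      corpus = "the cat sat on the mat the cat ate the fish".split()

      # Count how often each token follows each preceding token.
      follows = defaultdict(Counter)
      for prev, nxt in zip(corpus, corpus[1:]):
          follows[prev][nxt] += 1

      def next_token_probs(prev: str) -> dict:
          counts = follows[prev]
          total = sum(counts.values())
          return {tok: c / total for tok, c in counts.items()}

      # P(next | "the") is pure frequency; nothing here knows what a cat is.
      print(next_token_probs("the"))  # {'cat': 0.5, 'mat': 0.25, 'fish': 0.25}
      ```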

      • LainTrain@lemmy.dbzer0.com · 5 hours ago

        It’s worth pointing out that it does happen to reconstruct information remarkably well considering it’s just likelihood. They’re pretty useful tools like any other; it’s funny, of course, to watch Silicon Valley stumble all over itself chasing the next smartphone.

    • ebu@awful.systems · 8 hours ago

      because it encodes semantics.

      if it really did so, performance wouldn’t swing up or down when you change syntactic or symbolic elements of problems. the only information encoded is language-statistical
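
      which is roughly what the paper tests: keep a word problem’s logic fixed, vary the names and numbers, and check whether accuracy holds. a sketch of the idea (the template is invented for illustration, not taken from the paper):

      ```python
      import random

      TEMPLATE = ("{name} picks {a} apples on Monday and {b} more on Tuesday. "
                  "How many apples does {name} have?")

      def make_variant(rng: random.Random) -> tuple[str, int]:
          # Same underlying problem, different surface form: a model that
          # actually reasoned would score identically across variants.
          name = rng.choice(["Sophie", "Liam", "Priya", "Omar"])
          a, b = rng.randint(2, 50), rng.randint(2, 50)
          return TEMPLATE.format(name=name, a=a, b=b), a + b

      rng = random.Random(0)
      for _ in range(3):
          question, answer = make_variant(rng)
          print(question, "->", answer)
      ```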