lunarul@lemmy.world to TechTakes@awful.systems • LLMs can’t reason — they just crib reasoning-like steps from their training data (English)
My best guess is that it generates several possible replies and then does some sort of token match to determine which one may be the most accurate.
Didn’t the previous models already do this?
Perhaps? Isn’t that the definition of LLMs?
Edit: oh, I just realized it’s not talking about the LLMs, but about their apologists.