Researchers at Apple have come out with a new paper showing that large language models can’t reason — they’re just pattern-matching machines. [arXiv, PDF] This shouldn’t be news to anyone here. We …
My best guess is that it generates several possible replies and then does some sort of token match to determine which one is most likely to be accurate.
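If that guess were right, the mechanism would amount to best-of-n sampling with a consensus vote. Here's a minimal sketch in Python of what that could look like — `generate_reply()` is a hypothetical stand-in for a sampled model completion, and nothing here is claimed to be what OpenAI actually does:

```python
import random
from collections import Counter

def generate_reply(prompt: str) -> str:
    """Hypothetical stand-in for one sampled model completion (temperature > 0)."""
    return random.choice([
        "The answer is 42.",
        "The answer is 42.",
        "The answer is 41.",
    ])

def token_overlap(a: str, b: str) -> int:
    """Crude similarity: total count of tokens the two replies share."""
    return sum((Counter(a.lower().split()) & Counter(b.lower().split())).values())

def pick_reply(prompt: str, n: int = 5) -> str:
    """Sample n replies and keep the one that agrees most with the others."""
    replies = [generate_reply(prompt) for _ in range(n)]
    # Score each reply by its token overlap with every other reply;
    # the reply closest to the consensus wins.
    def score(i: int) -> int:
        return sum(token_overlap(replies[i], replies[j]) for j in range(n) if j != i)
    return replies[max(range(n), key=score)]

print(pick_reply("What is six times seven?"))
```

Selection by token overlap is pure pattern matching, of course — the scoring step never checks whether any reply is actually correct, only which one looks most like the rest.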
Didn’t the previous models already do this?
No idea. I’m not actually using any OpenAI products.