Title is a bit dramatic, but yes, Claude 3 claims to be better than GPT 4 in most ways.

  • @june@lemmy.world · 28 points · 4 months ago

    I just spent some time on Claude 3, and I can see how it might be considered ‘better’ than GPT4, but I quickly found that it tends to lie about itself in subtle ways. When I called it out on an error, it would say things like ‘I’ll strive to be better’. I pointed out that its model doesn’t grow or change based on the conversations it has, so it’s impossible for it to strive to do anything beyond, maybe, that chat. It then went on to show me that it couldn’t even adjust within that chat, doing the same thing 5 more times in 5 different ways.

    I recognize the template it used for the apologies (acknowledge, apologize, state intent to do better in the future), which is appropriate for people or other beings capable of learning, but it is not one. I went from having a good conversation with it about a poem I wrote to being weirdly grossed out by it. GPT does a good job of not pretending to be human, and I appreciate that.

      • @june@lemmy.world · 7 points · 4 months ago

        Yea, that’s what I’m saying, and I don’t like it. I don’t want my LLM acting human; I want it acting like an LLM. My interactions with Claude 3 were very uncanny valley, and that bugged me a lot.

        • @9bananas@lemmy.world · 3 points · 4 months ago

          so you’re basically saying it talked itself squarely into uncanny valley?

          i honestly didn’t consider that would be an issue for LLMs, but in hindsight…yeah, that’s gonna be a problem…

          • @june@lemmy.world · 1 point · 4 months ago

            Yea, that’s exactly what it did. It was bizarre to realize, honestly, because I wouldn’t have expected that reaction to plain text. But here I am.