• @RGB3x3@lemmy.world (28 points, 1 month ago, edited)

      I just tried to have Gemini navigate to the nearest Starbucks and the POS found one 8hrs and 38mins away.

      Absolute trash.

          • @RGB3x3@lemmy.world (7 points, 1 month ago)

            I would totally leave if the “salary to cost of living” ratio wasn’t so damn good.

            I’d move to Germany or the Netherlands or Sweden or Norway so fast if I could afford it.

          • @RGB3x3@lemmy.world (5 points, 1 month ago)

            No VPN, it all has proper location access. I even tried it with a local restaurant that I didn’t think was a chain, and it found one in Tennessee. I’m like 10 minutes away from where I told it to go.

    • IndiBrony (20 points, 1 month ago)

      Despite that, it delivers its results with much applum!

  • @Empricorn@feddit.nl (65 points, 1 month ago)

    Some “AI” LLMs resort to light hallucinations. And then ones like this straight-up gaslight you!

    • @eatCasserole@lemmy.world (50 points, 1 month ago)

      Factual accuracy in LLMs is “an area of active research”, i.e. they haven’t the foggiest how to make them stop spouting nonsense.

      • @Swedneck@discuss.tchncs.de (28 points, 1 month ago)

        duckduckgo figured this out quite a while ago: just fucking summarize wikipedia articles and link to the precise section it lifted text from
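
        For what it’s worth, that grounding approach is simple to sketch. The snippet below is a rough illustration, not DuckDuckGo’s actual pipeline: it pulls a short summary from Wikipedia’s public REST endpoint and attaches the source URL so the answer links back to where the text came from. (Linking to the precise section would additionally need the page’s section list, which is left out here; the function name is made up for the example.)

```python
# Rough sketch of "summarize a source and link back to it"; not DuckDuckGo's
# actual pipeline. Assumes the `requests` package and Wikipedia's public
# REST summary endpoint.
import requests

def summarize_with_source(title: str) -> str:
    url = "https://en.wikipedia.org/api/rest_v1/page/summary/" + title.replace(" ", "_")
    data = requests.get(url, timeout=10).json()
    extract = data["extract"]                          # short plain-text summary of the article
    source = data["content_urls"]["desktop"]["page"]   # canonical link back to the article
    return f"{extract}\n\nSource: {source}"

if __name__ == "__main__":
    print(summarize_with_source("Large language model"))
```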

      • @Excrubulent@slrpnk.net (12 points, 1 month ago, edited)

        Because accuracy requires that you make a reasonable distinction between truth and fiction, and that requires context, meaning, understanding. Hell, full humans aren’t that great at this task. This isn’t a small problem, I don’t think you solve it without creating AGI.

  • Margot Robbie (41 points, 1 month ago)

    Ok, let me try listing words that end in “um” that could be (even tangentially) considered food.

    • Plum
    • Gum
    • Chum
    • Rum
    • Alum
    • Rum, again
    • Sea People

    I think that’s all of them.

    • @TachyonTele@lemm.ee (27 points, 1 month ago)

      There’s going to be an entire generation of people growing up with this and “learning” this way. It’s like every tech company got together and agreed to kill any chance of smart kids.

        • @Maalus@lemmy.world (6 points, 1 month ago)

          How do they know something is obviously wrong when they’re trying to learn it? For “bananum,” sure, but what about anything at school or college?

          • @tigeruppercut@lemmy.zip (1 point, 1 month ago)

            The bananum was my point. Maybe as AI improves there won’t be as many of these obviously wrong answers, but as it stands virtually any Google search gets a shitty wrong answer from AI, so they see tons of this bad info well before college.

  • @paddirn@lemmy.world (28 points, 1 month ago, edited)

    And yet it doesn’t even list ‘Plum’, or did it think ‘Applum’ was just a variation of a plum?

          • @TexasDrunk@lemmy.world (1 point, 1 month ago)

            A lot of folks on the internet don’t get even the most obvious jokes without some sort of sarcasm indicator, because some things are really hard to read in text versus in person. LLMs have no idea what the hell sarcasm is, and there’s definitely some in their training data, especially if they were trained on any of my old Reddit comments.

  • shininghero (17 points, 1 month ago)

    Strawberrum sounds like it’ll be at least 20% abv. I’d like a nice cold glass of that.

  • Sunny' 🌻 (14 points, 1 month ago)

    It’s crazy how bad AI gets if you make it list names ending with a certain pattern. I wonder why that is.

    • @bisby@lemmy.world (11 points, 1 month ago)

      I’m not an expert, but it has something to do with full words vs. partial words. It also can’t play Wordle because it doesn’t have a proper concept of individual letters in that way; it’s trained to only handle full words.
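
      A quick way to see this is to run a word through a tokenizer. This is a rough sketch assuming the open-source tiktoken package; the exact splits vary by model, but the point is that the model receives a few opaque chunks rather than individual letters.

```python
# Rough sketch: a BPE tokenizer chops text into sub-word chunks, so the model
# never sees individual letters. Assumes the open-source `tiktoken` package;
# exact token boundaries differ between models.
import tiktoken

enc = tiktoken.get_encoding("cl100k_base")
ids = enc.encode("strawberry")
pieces = [enc.decode([i]) for i in ids]

print(ids)     # a few integer IDs
print(pieces)  # a few chunks like ['str', 'aw', 'berry'] -- no letter-level view
```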

      • @Swedneck@discuss.tchncs.de (3 points, 1 month ago)

        they don’t even handle full words, it’s just arbitrary groups of characters (including spaces and other stuff like apostrophes, afaik) that are represented to the software as indexes on a list. it literally has no clue what language even is, it’s a glorified calculator that happens to work on words.

          • @Swedneck@discuss.tchncs.de (1 point, 1 month ago)

            not really, a basic calculator doesn’t tend to have variables and stuff like that

            i say it’s a glorified calculator because it’s just getting input in the form of numbers (again, it has no clue what a language or word is) and spitting back out some numbers that are then reconstructed into words, which is precisely how we use calculators.
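
            To make that “glorified calculator” picture concrete, here is a toy sketch. The tiny vocabulary and the fake model are invented for illustration; a real model is a huge learned function, but its inputs and outputs are still just lists of token IDs.

```python
# Toy illustration of the numbers-in / numbers-out view. The vocabulary and
# the fake "model" below are invented for the example; only the decode step
# at the end turns the IDs back into words.
vocab = {0: "I", 1: " like", 2: " plums", 3: "."}
lookup = {text: idx for idx, text in vocab.items()}

def encode(pieces):
    return [lookup[p] for p in pieces]

def decode(ids):
    return "".join(vocab[i] for i in ids)

def fake_model(ids):
    # stand-in for the network: list of ints in, list of ints out
    return ids + [2, 3]

prompt_ids = encode(["I", " like"])   # [0, 1]
output_ids = fake_model(prompt_ids)   # [0, 1, 2, 3]
print(decode(output_ids))             # "I like plums."
```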

    • @Even_Adder@lemmy.dbzer0.com (5 points, 1 month ago)

      It can’t see what tokens it puts out; you would need additional passes on the output for it to get it right. That’s computationally expensive, so I’m pretty sure that didn’t happen here.
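
      A rough sketch of the kind of extra pass being described: once candidates are generated, ordinary string code (which does see letters) can verify the constraint. The candidate list and helper name are made up for illustration, and this only checks the spelling rule, not whether the word is a real fruit.

```python
# Sketch of a post-generation check: plain string code verifies the
# letter-level constraint after the model has produced its candidates.
# The candidate list stands in for whatever the LLM generated.
candidates = ["Plum", "Applum", "Strawberrum", "Coconut", "Capsicum", "Banana"]

def ends_in_um(word: str) -> bool:
    return word.lower().endswith("um")

kept = [w for w in candidates if ends_in_um(w)]
dropped = [w for w in candidates if not ends_in_um(w)]

print("keep:", kept)     # satisfies the letter-level rule (real fruit or not)
print("drop:", dropped)  # e.g. "Coconut" and "Banana" fail the check
```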

        • adderaline (1 point, 1 month ago)

          it chunks text up into tokens, so it isn’t processing the words as if they were composed from letters.

      • @Jesusaurus@lemmy.world (1 point, 1 month ago)

        With the amount of processing it takes to generate the output, a simple pass over the final output before it’s returned would make sense…

    • @blindsight@beehaw.org (5 points, 1 month ago, edited)

      LLMs aren’t really capable of understanding spelling. They’re token prediction machines.

      LLMs have three major components: a massive database of “relatedness” (how closely related the meanings of tokens are), a transformer (figuring out which of the previous words carry the most contextual meaning), and statistical modeling (the likelihood of the next word, like what your cell phone’s keyboard does).

      LLMs don’t have any capability to understand spelling, unless it’s something it’s been specifically trained on, like “color” vs “colour” which is discussed in many training texts.

      "Fruits ending in ‘um’ " or "Australian towns beginning with ‘T’ " aren’t talked about in the training data enough to build a strong enough relatedness database for, so it’s incapable of answering those sorts of questions.

  • @some_guy@lemmy.sdf.org (4 points, 1 month ago)

    Ok, I feel like there have been more than enough articles explaining that these things don’t understand logic. Seriously. Misunderstanding their capabilities at this point is getting old. It’s time to start making stupid painful.