Trust in AI technology and the companies that develop it is dropping, both in the U.S. and around the world, according to new data from Edelman shared first with Axios.

Why it matters: The decline comes as regulators around the world are deciding what rules should apply to the fast-growing industry. “Trust is the currency of the AI era, yet, as it stands, our innovation account is dangerously overdrawn,” Edelman global technology chair Justin Westcott told Axios in an email. “Companies must move beyond the mere mechanics of AI to address its true cost and value — the ‘why’ and ‘for whom.’”

  • TrickDacy@lemmy.world · 8 months ago

    > basically just a spell-checker on steroids.

    I cannot understand this urge to downplay the technology like this. It does not matter that it’s not true intelligence. Why would it?

    If it can convince most people that it has learned information and can repeat it back, that already makes it seem smarter than like half of all currently living humans. And it is convincing.

    • nyan@lemmy.cafe · 8 months ago

      Some people found the primitive ELIZA chatbot from 1966 convincing, but I don’t think anyone would claim it was true AI. Turing Test notwithstanding, I don’t think “convincing people who want to be convinced” should be the minimum test for artificial intelligence. It’s just a categorization glitch.
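
      (For a sense of how little machinery that took: ELIZA was essentially a short list of patterns with canned, pronoun-reflected responses. Below is a rough sketch of that idea in Python; the patterns and replies are invented for illustration and are not ELIZA’s actual DOCTOR script.)

```python
import random
import re

# Swap first- and second-person words so an echoed fragment reads like a reply.
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are", "you": "I", "your": "my"}

# (pattern, possible response templates) pairs, checked in order; the last rule is a catch-all.
RULES = [
    (r"i need (.*)", ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (r"i am (.*)", ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (r"(.*) mother(.*)", ["Tell me more about your mother."]),
    (r"(.*)", ["Please go on.", "How does that make you feel?"]),
]

def reflect(fragment: str) -> str:
    return " ".join(REFLECTIONS.get(word, word) for word in fragment.lower().split())

def respond(text: str) -> str:
    # Return the first matching rule's template, filled with the reflected capture groups.
    for pattern, responses in RULES:
        match = re.match(pattern, text.lower())
        if match:
            groups = [reflect(g) for g in match.groups()]
            return random.choice(responses).format(*groups)
    return "Please go on."

print(respond("I need a break from the news"))    # e.g. "Why do you need a break from the news?"
print(respond("My mother never listens to me"))   # "Tell me more about your mother."
```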

      • TrickDacy@lemmy.world · 8 months ago

        Maybe I’m not stating my point explicitly enough, but it is actually that names and goalposts aren’t very important; cultural impact is. Current AI has already had a lot more impact than any chatbot from the ’60s, and we can only expect that to increase. This tech has rendered the Turing test obsolete, which kind of speaks volumes.

        • nyan@lemmy.cafe · 8 months ago

          Calling a cat a dog won’t make her start jumping into ponds to fetch sticks for you. And calling a glorified autocomplete “intelligence” (artificial or otherwise) doesn’t make it smart.

          Problem is, words have meanings. Well, they do to actual humans, anyway. And associating the word “intelligence” with these stochastic parrots will encourage nontechnical people to believe LLMs actually are intelligent. That’s dangerous—potentially life-threatening. Downplaying the technology is an attempt to prevent this mindset from taking hold. It’s about as effective as bailing the ocean with a teaspoon, yes, but some of us see even that as better than doing nothing.

            • nyan@lemmy.cafe · 8 months ago

              How about taking advice on a medical matter from an LLM? Or asking the appropriate thing to do in a survival situation? Or even seemingly mundane questions like “is it safe to use this [brand name of new model of generator that isn’t in the LLM’s training data] indoors?” Wrong answers to those questions can kill. If a person thinks the LLM is intelligent, they’re more likely to take the bad advice at face value.

              If you ask a human about something important that’s outside their area of competence, they’ll probably refer you to someone they think is knowledgeable. An LLM will happily make something up instead, because it doesn’t understand the stakes.

              The chance of any given query to an LLM killing someone is, admittedly, extremely low, but given a sufficiently large number of queries, it will happen sooner or later.

                • nyan@lemmy.cafe · 8 months ago

                  Half of the human population is of below-average intelligence. They will be that dumb. Guaranteed. And safeguards generally only get added after someone notices that a wrong answer is, in fact, wrong, and complains.

                  In part, I believe someone’s going to die because large corporations will only get serious about controlling what their LLMs spew when faced with criminal charges or a lawsuit that might make a significant gouge in their gross income. Until then, they’re going to, at best, patch around the exact prompts that come up in each subsequent media scandal, which is so easy to get around that some people are likely to do so by accident.

                  (As for humans making up answers, yes, some of them will, but in my experience it’s not all that common—some form of “how would I know?” is a more likely response. Maybe the sample of people I have contact with on a regular basis is statistically skewed. Or maybe it’s a Canadian thing.)

                • Eccitaze@yiffit.net · 8 months ago

                  > if you even ask a person and trust your life to them like that, unless they give you good reason they are reliable, you are a moron. Why would someone expect a machine to be intelligent and experienced like a doctor? That is 100% on them.

                  Insurance companies are already using AI to make medical decisions. We don’t have to speculate about people getting hurt because of AI giving out bad medical advice; it’s already happening, and multiple companies are being sued over it.

                    • TrickDacy@lemmy.world · 8 months ago

                      Somehow we went from me saying this technology shouldn’t be downplayed to “but it’s costing lives already!”

                      Not really sure how that happened, but yeah, it’s obviously shitty that people are irresponsible shitheads, and I think downplaying the technology, or quibbling about whether it’s actually AI or not, is far from helpful in light of such consequences.

            • Krauerking@lemy.lol · 8 months ago

              Because one trained in a particular way could lead people to think it’s intelligent, while also giving incredibly biased information that confirms the biases of those listening.

              It’s creating a digital prophet that just rehashes the biases of its creator.
              That makes it dangerous if it’s regarded as being above the flaws of us humans. People want something smarter than them to tell them what to do, and bestowing that status on a flawed chatbot that simply predicts the most coherent next word, by calling it “intelligent”, is neither safe nor a good representation of what it actually is.
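
              (To make the “predicts the next word” framing concrete, here is a toy sketch of that idea in Python. It is a deliberately crude bigram model with a made-up corpus, nowhere near a real LLM, which uses neural networks over subword tokens and vast training data; but the objective has the same shape: emit a statistically likely next word, with no understanding behind it.)

```python
from collections import Counter, defaultdict

# Tiny, made-up "training" text; purely illustrative.
corpus = (
    "the model predicts the next word "
    "the model repeats the most likely word "
    "people believe the model understands the word"
).split()

# Count how often each word follows each other word (a bigram table).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def generate(start: str, length: int = 8) -> str:
    # Greedily emit the most frequent continuation at every step.
    words = [start]
    for _ in range(length):
        options = following.get(words[-1])
        if not options:
            break
        words.append(options.most_common(1)[0][0])
    return " ".join(words)

# The output looks fluent-ish, but it is produced with zero grasp of meaning.
print(generate("the"))
```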