• Soyweiser@awful.systems · 3 points · 11 hours ago

    On the subject of AI agents, I saw a baffling commercial (or, well, half saw it several times) where they were trying to sell, I think, AI-powered phones. The revolutionary AI-agent use case: you spontaneously feel like meeting up with friends, so you just head over and tell your phone to reschedule everything you had planned for a day later. This baffled me in various ways. Telling your phone this doesn’t actually accomplish anything; you can’t reschedule without input from the other people involved. Sure, I might not want to go to my doctor’s appointment today, but I can’t just tell my phone ‘hey, tell my doc I’ll come tomorrow instead of today.’ And that’s ignoring the fact that, given AI reliability, you’d need to check that it actually did it correctly. It might have gotten the order of the days wrong; this is, after all, a technology that has already failed at the most basic tasks. Just a very strange commercial, disconnected from how modern calendars and time work.

  • swlabr@awful.systems · 18 points · edited · 1 day ago

    Why? Per the poll: “a lack of reliability.” The things being sold as “agents” don’t … work.

    Vendors insist that the users are just holding the agents wrong. Per Bret Taylor of Sierra (and OpenAI):

    Accept that it is imperfect. Rather than say, “Will AI do something wrong”, say, “When it does something wrong, what are the operational mitigations that we’ve put in place to deal with it?”

    I think this illustrates the situation in the LLM market pretty well, not just at the shallow level of the base incentives of the parties involved, but at a deeper level: it shows the general lack of humanity, and the tolerance for dogshit, that the AI companies exhibit and are trying to brainwash everyone into accepting.

    • andrew_bidlaw@sh.itjust.works · 13 points · 1 day ago

      It’s not unlike medical supplements before they were pushed away from real drugs and made to carry all those markings about being just supplements rather than panaceas…

      ATTENTION: LLM is not a real worker. Consult a competent manager before firing everyone around you.

      Snake oil wouldn’t be an unfair comparison on any of these points.

  • Inucune@lemmy.world · 8 points · 1 day ago

    If a human employee makes a mistake, I can find out why, and correct the process used by the employee.

    If an AI makes a mistake, I (currently) cannot see how it arrived at the result, nor can I correct it so that it doesn’t take that action again.