• James R Kirk@startrek.website

    If there was a book or website out there that described something poisonous as not poisonous, and someone believed what was written and became poisoned, I think most reasonable people would point the blame at whoever published the bad information.

    Yet when the bad (potentially deadly in this case!) information comes from ChatGPT, OpenAI gets a pass (including by everyone so far in this comment section) and the blame is placed on the person who was poisoned!

    • artyom@piefed.social

      the blame is placed on the person who was poisoned!

      I think a toxic mentality has infected society: the idea that there’s only ever one person, entity, group, or “side” to blame. It’s OpenAI’s fault for feeding him deadly information, and it’s also his fault for not fact-checking that information. He has paid dearly for his mistake. Has OpenAI?

      That said, can we put aside blame for a second and just agree that OpenAI is feeding dangerous and unchecked information to the masses, and that it should be OpenAI’s responsibility to either figure out how to fix that or (preferably) just stop doing it entirely? I’m not sure whether it’s legal for a company to give out medical advice without a doctor involved, or whether it’s liable for the ramifications of such advice, but it probably should be. They can’t just put a “you should fact-check this info” disclaimer at the bottom and absolve themselves of all responsibility.

    • Cryptagionismisogynist@lemmy.world

      Engineers, including software and AI engineers, have a literal moral duty not to make things that will kill people; see, for example, the Hyatt Regency walkway collapse in Kansas City. This has long been established.

    • bluGill@fedia.io

      Home Depot-type books on DIY wiring have been forcibly recalled for containing deadly information. If you can’t make something safe, that’s a reason not to do it at all.

  • s@piefed.world

    I thought this comment on Ann’s video was interesting:

    I recently read a story about a teacher who got so fed up with students using ChatGPT to “write” their essays that they turned the tables and had their students use ChatGPT to write their essays on a particular subject… and then do manual research into what the AI got wrong. Apparently, almost the entire class stopped using ChatGPT for any of their schoolwork. (And yes, that “almost” is still concerning, but at least ChatGPT got put in its place for a change.)

    • biggerbogboy@sh.itjust.works

      I actually use AI a lot, and I’ve seen that the safeguards aren’t very well managed: I still find situations where it presents completely fabricated information, even after deep search or reasoning. That said, it is also improving; even last year it was way worse.

      Then again, it is also the poisoned dude’s fault for not looking up what these chemicals are, so really both sides bear responsibility.

    • shalafi@lemmy.world

      Agreed. ChatGPT will not tell you sodium bromide is a safe salt substitute. This guy carefully prompted and poked the thing until it said what he wanted to hear. That should be the takeaway: with a little twisting, you can get it to confirm any opinion you like.

      Anybody who doesn’t believe me can try it themselves.

      • biggerbogboy@sh.itjust.works

        It’s difficult to be sure, since GPT-5, the newest model, comes with a new structure: smaller, more specialised models that combine their outputs after being handed the prompt by a different model, the one the user interfaces with first. This is called a mixture of experts.
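        For anyone unfamiliar, here is a rough sketch of what a mixture of experts means, written in Python with made-up expert names and a fake gating function (purely illustrative, not OpenAI’s actual code or architecture): a gate scores several smaller experts for a prompt, the top-scoring ones answer, and their outputs are combined.

            # Illustrative mixture-of-experts sketch -- hypothetical experts, not OpenAI's code.
            import math
            import random

            def softmax(scores):
                """Turn raw gate scores into weights that sum to 1."""
                exps = [math.exp(s) for s in scores]
                total = sum(exps)
                return [e / total for e in exps]

            # Stand-in "experts": each just labels its answer so you can see who responded.
            EXPERTS = {
                "chemistry": lambda prompt: f"[chemistry expert] answer to: {prompt}",
                "medicine":  lambda prompt: f"[medicine expert] answer to: {prompt}",
                "general":   lambda prompt: f"[general expert] answer to: {prompt}",
            }

            def gate(prompt):
                """A real gate is a learned network; here we just fake per-expert scores."""
                rng = random.Random(prompt)  # deterministic per prompt, for the demo
                return softmax([rng.random() for _ in EXPERTS])

            def mixture_of_experts(prompt, top_k=2):
                weights = gate(prompt)
                ranked = sorted(zip(weights, EXPERTS.items()), key=lambda t: t[0], reverse=True)
                # Each selected expert answers independently; nothing in this routing step
                # guarantees that their answers agree -- any consistency or safety check
                # has to happen elsewhere in the system.
                return [(name, round(w, 2), fn(prompt)) for w, (name, fn) in ranked[:top_k]]

            if __name__ == "__main__":
                query = "Is sodium bromide a safe salt substitute?"
                for name, weight, answer in mixture_of_experts(query):
                    print(name, weight, answer)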

        How do you know that OpenAI has made sure the outputs from multiple expert models won’t contradict each other, won’t cause accidental safeguard bypasses, and so on?

        Personally, I trust GPT-4o more; even then, though, I usually check the output against actual research when needed.