• AwesomeLowlander@sh.itjust.works

    The measure, aimed at reducing potential risks created by AI, would have required companies to test their models and publicly disclose their safety protocols to prevent the models from being manipulated to, for example, wipe out the state’s electric grid or help build chemical weapons.

    How exactly do LLMs do that? If you’ve given an LLM’s pseudorandom output control over your electrical grid, no regulation will mitigate your stupidity.
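
    For what it's worth, "pseudorandom" is doing real work here: LLM decoding typically samples from a temperature-scaled distribution, so the same prompt can produce a different output on every run. A minimal, self-contained sketch (the toy logits and token names are invented for illustration):

    ```python
    import math
    import random

    def sample_next_token(logits, temperature=1.0):
        """Temperature-scaled softmax sampling, as in typical LLM
        decoding: the same input can give a different output each call."""
        scaled = [l / temperature for l in logits]
        top = max(scaled)  # subtract the max for numerical stability
        weights = [math.exp(s - top) for s in scaled]
        return random.choices(range(len(logits)), weights=weights, k=1)[0]

    # Toy logits for three hypothetical control tokens:
    # 0 = "open_breaker", 1 = "close_breaker", 2 = "hold"
    logits = [2.0, 1.9, 1.5]
    print([sample_next_token(logits) for _ in range(10)])  # varies run to run
    ```

    Wiring that kind of output directly to a breaker is exactly the stupidity in question.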

    • bamfic@lemmy.world

      Does he even understand the halting problem? I doubt it, but the legislators evidently don't either.
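
      For reference, the halting problem is the classic proof that no general checker for program behavior can exist. A standard diagonalization sketch (the function below is a hypothetical oracle, not something you could actually implement):

      ```python
      def halts(program_source: str, input_data: str) -> bool:
          """Hypothetical total decider: True iff running `program_source`
          on `input_data` eventually halts. Turing showed no such function
          can exist for all programs."""
          raise NotImplementedError("undecidable in the general case")

      # The contradiction, in brief: suppose halts() worked. Define a program
      # D(s) that loops forever when halts(s, s) is True and returns otherwise.
      # Then halts(D, D) is wrong whichever value it returns, so the supposed
      # decider cannot exist.
      ```

      That's presumably the jab: guaranteeing what a program (or model) will never do is, in general, this kind of impossible promise.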

    • oce 🐆

      I think it's more about asking it for the steps to create a bomb, or how to disrupt the grid, for example, which major edges to cut.

        • dual_sport_dork 🐧🗡️@lemmy.world

          That, and the Internet has been teaching people how to create bombs since the dial-up days. I don't predict that LLMs will be either a benefit or a detriment to that particular strain of natural selection.

        • oce 🐆

          Still a public safety issue.

            • oce 🐆

              No, but I think it could make the knowledge more easily available, which increases the risk that it may happen.

                • oce 🐆

                  I think I've heard of it before, but instead of having to remember it, I could just ask an uncensored LLM.

                  • AwesomeLowlander@sh.itjust.works

                    The actual point was, bomb-making instructions have been floating around in search engine results since the days of dial-up. That particular manuscript itself has existed since before the days of the Internet. There's nothing ChatGPT could give you that you couldn't have found by typing the same query into Google. Getting the instructions is literally the easiest, least-effort, least-risk part of building a bomb.

    • UnderpantsWeevil@lemmy.world

      > How exactly do LLMs do that?

      If you hook an LLM up as a replacement for a manual/analog power plant interface and start asking it to intuit decisions from fuzzy inputs, you can create a cascade of errors that results in grid failure.

      > If you’ve given an LLM’s pseudorandom output control over your electrical grid, no regulation will mitigate your stupidity.

      This rule would prevent a business or public regulator from doing such a thing without proving out safeguards.
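
      "Proving out safeguards" can be as unglamorous as never letting model output touch an actuator without a deterministic check. A hypothetical sketch (the action names and limits are invented for illustration):

      ```python
      # Hypothetical guardrail: model output is advisory only, and must pass
      # a deterministic allowlist + bounds check before anything is actuated.
      ALLOWED_ACTIONS = {"hold", "open_breaker", "close_breaker", "shed_load"}
      MAX_LOAD_SHED_MW = 50.0  # invented limit, for illustration

      def validate(action: str, magnitude_mw: float) -> bool:
          """Deterministic safety check, independent of the model."""
          if action not in ALLOWED_ACTIONS:
              return False
          if action == "shed_load" and not 0.0 <= magnitude_mw <= MAX_LOAD_SHED_MW:
              return False
          return True

      def handle_model_suggestion(action: str, magnitude_mw: float = 0.0) -> None:
          if validate(action, magnitude_mw):
              print(f"executing {action} ({magnitude_mw} MW)")  # stand-in actuator
          else:
              print(f"rejected {action!r}: failed deterministic safety check")

      handle_model_suggestion("shed_load", 20.0)    # passes
      handle_model_suggestion("shed_load", 900.0)   # rejected: out of bounds
      handle_model_suggestion("reboot_everything")  # rejected: not allowlisted
      ```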

      And the governor vetoed it.