• NevermindNoMind@lemmy.world · 1 year ago

    For those who haven’t read the article: this is not about hallucinations, it’s about how AI can be used maliciously. Researchers used GPT-4 to create a fake data set for a fake human trial, and the result was convincing. Only advanced techniques were able to show that the data was faked, such as more patient ages ending in 7 or 8 than would be likely in a real sample. The article points out that most peer review does not go that deep into the data to try to spot fakes. The issue here is that a malicious researcher could use AI to generate fake data supporting whatever theory they want and theoretically get it published in a peer-reviewed journal.
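    The last-digit check they used is easy to reproduce. Here is a minimal sketch in Python (the ages are made up by me for illustration; the assumption that final digits of ages should be roughly uniform in a real sample is the same one the researchers relied on):

    ```python
    import numpy as np
    from scipy.stats import chisquare

    # Hypothetical patient ages, fabricated so that too many end in 7 or 8.
    ages = np.array([37, 48, 57, 67, 28, 47, 58, 38, 77, 68,
                     42, 57, 38, 67, 47, 51, 78, 27, 48, 57])

    # Count how often each final digit (0-9) appears.
    last_digits = ages % 10
    counts = np.bincount(last_digits, minlength=10)

    # In genuine data the final digits should be roughly uniform; a
    # chi-square goodness-of-fit test flags large deviations.
    stat, p_value = chisquare(counts)
    print(f"chi2 = {stat:.2f}, p = {p_value:.4f}")
    # A very small p-value means this digit pattern is unlikely by chance.
    ```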

    I don’t have the expertise to assess how much of a problem this is. If someone was that determined, couldn’t they already fake data by hand? Does this just make it easier to do, or is AI better at it thereby increasing the risk? I don’t know, but it’s an interesting data point as we as a society think about what AI is capable of and how it could be used maliciously.

    • AnonStoleMyPants@sopuli.xyz · 1 year ago

      I don’t think this is a problem at all. Are they saying fake data is hard to do? I don’t get it. Why would it be hard to fake data? Get real results and shift the values by some number. That’s it. I mean, obviously if you shift too much then you will have problems, but enough to be credible? Easy.
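      A toy sketch of that shift, with made-up numbers (just to show how little effort it takes):

      ```python
      import numpy as np

      rng = np.random.default_rng(0)

      # Hypothetical "real" measurements.
      real = np.array([101.2, 98.7, 103.4, 99.9, 102.1])

      # "Shift the values by some number": an offset plus a little noise.
      # The shape of the data and its correlations survive, which is why
      # a casual review won't catch it.
      faked = real + 1.5 + rng.normal(0.0, 0.3, size=real.shape)
      print(faked.round(2))
      ```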

      Sure, it is slightly harder to make the data from scratch, but let’s be real here: a TON of the data is just a random CSV file a machine pops out. Why on earth would it be hard to fake?

      Now, human trials are a bit different from some measurement data, but I fail to see why this would be hard, assuming you are an expert in the field.

      A much more prominent problem in science is cherry-picking data. It is very common for someone to make 50 new devices, measure them all, and conveniently leave out half of the measurements. Happens alllll the time

      • appel@whiskers.bim.boats · 1 year ago

        There are some statistical tests and methods that can quite easily spot fake data, from what I remember (the name escapes me, sorry), e.g. checking whether it came from an RNG, or whether the results are too positive given the sample. But you are right that it is often enough to fool the review board and get something published. The data is usually only scrutinized thoroughly with these methods after it has been published.
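        One such test (a sketch of the GRIM test, which may or may not be the one I was thinking of) checks whether a reported mean is even arithmetically possible for integer data with a given sample size:

        ```python
        # GRIM sketch: for integer-valued data (e.g. ages or Likert scores),
        # a reported mean must equal some integer total divided by n.
        def grim_consistent(mean: float, n: int, decimals: int = 2) -> bool:
            """True if `mean` is achievable as (integer sum) / n, simplified
            to checking only the nearest integer total."""
            total = round(mean * n)
            return round(total / n, decimals) == round(mean, decimals)

        print(grim_consistent(3.47, 25))  # False: no integer sum / 25 rounds to 3.47
        print(grim_consistent(3.48, 25))  # True: 87 / 25 = 3.48 exactly
        ```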

        • AbouBenAdhem@lemmy.world · 1 year ago

          There was an infamous case a few years ago, which was caught because the researcher forgot to delete the fake-data-generating formula from the Excel file.

  • BT_7274@lemmy.world · 1 year ago

    How many times do we have to play this game before people realize it’s not a researcher, lawyer, doctor, or anything that has to rely on facts and established, valid data?

    It’s a next-word generator that’s remarkably good at sounding human. Yes, that can often produce accurate-sounding information, but it doesn’t actually “know” anything. Not in any sense that could be relied on.

  • Imacat@lemmy.dbzer0.com · 1 year ago

    I too can produce vaguely plausible fake data sets. Why does everyone get so worried when some software can do it?

    • Mothra@mander.xyz · 1 year ago

      I’m going to guess it’s because millions of people across the globe don’t consult your fake data sets

  • ArugulaZ@kbin.social · 1 year ago

    Wow, it’s getting more and more human every day in its thought process! I wonder when it’ll start coming up with its own conspiracy theories?

  • andrew_bidlaw@sh.itjust.works · 1 year ago

    Wilkinson … has examined several data sets generated by earlier versions of the large language model, which he says lacked convincing elements when scrutinized, because they struggled to capture realistic relationships between variables.

    This revealed a mismatch in many ‘participants’ between designated sex and the sex that would typically be expected from their name. Furthermore, no correlation was found between preoperative and postoperative measures of vision capacity and the eye-imaging test. Wilkinson and Lu also inspected the distribution of numbers in some of the columns in the data set to check for non-random patterns. The eye-imaging values passed this test, but some of the participants’ age values clustered in a way that would be extremely unusual in a genuine data set: there was a disproportionate number of participants whose age values ended with 7 or 8.

    This has “it’s 2 am and the homework is due this morning” energy. It seems they were careless and probably assumed their data wouldn’t be examined at all. Relationships between columns are where forgeries like these will always suffer. It takes a good amount of understanding to fake them, and LLMs lack that unless explicitly guided by a human to take those relationships into account. Otherwise they invent their own, where the post-op condition may depend on a patient’s last name and 7s and 8s are the most popular final digits for ages.
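    A minimal sketch of the kind of cross-column check Wilkinson ran (the column names and numbers here are hypothetical, not the actual study data):

    ```python
    import numpy as np
    from scipy.stats import pearsonr

    rng = np.random.default_rng(1)

    # In a genuine trial, pre- and post-operative vision scores should
    # correlate: patients who start better usually end better.
    preop = rng.normal(0.5, 0.15, 200)
    postop_real = preop + rng.normal(0.2, 0.05, 200)  # plausible relationship
    postop_fake = rng.normal(0.7, 0.15, 200)          # generated independently

    for label, postop in [("related", postop_real), ("independent", postop_fake)]:
        r, p = pearsonr(preop, postop)
        print(f"{label}: r = {r:.2f}, p = {p:.3g}")
    # Near-zero correlation between columns that should be related is a red flag.
    ```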