“… The ‘dirty secret’ of the insurance industry is that most denials can be successfully appealed …”

  • SomeoneSomewhere@lemmy.nz · 3 months ago

    Interesting idea, but I imagine it suffers from similar issues to writing legal opinions: by signing your name to it, you’re swearing that it’s all true. Given AI’s propensity for making things up, you need to check everything.

    I wouldn’t be surprised if ‘knowingly filing a false appeal’ is a reason to boot you off the plan in the first place.

    • Nougat@fedia.io · 3 months ago

      It’s still a lot easier to review and understand something you weren’t able to write than to also write that same thing without knowing how to write it.

      • SomeoneSomewhere@lemmy.nz · 2 months ago

        Indeed. Just need to remember that AI can and will hallucinate entire studies or court cases into existence.

    • RobotToaster@mander.xyz · 3 months ago

      I wouldn’t be surprised if ‘knowingly filing a false appeal’ is a reason to boot you off the plan in the first place.

      For that to be an issue you would have to “know” it was false.

      • chuckleslord@lemmy.world · 3 months ago

        You signed it, verifying that you knew what it entailed. That’s what the comment was pointing out.

        • NιƙƙιDιɱҽʂ@lemmy.world · 3 months ago

Usually, signing something like this affirms that you believe all statements in it to be true. They would have to prove you knowingly lied, not that you were simply wrong, and that is very difficult to prove legally.

          That said, IANAL.

          • SomeoneSomewhere@lemmy.nz · 2 months ago

            ‘Reckless disregard for the truth’ shows up sometimes, especially in e.g. defamation.

            If the AI cites some legal case from 2015 or a random medical article, you probably need to ensure that those articles actually exist, and not simply assume that the AI is right.

            If the AI said that a month’s supply of Fentanyl is the recommended treatment for a headache, no reasonable person is going to believe it. That means that if you say that you believe that, the court isn’t going to consider you a reasonable person.

            IANAL either.

            • NιƙƙιDιɱҽʂ@lemmy.world · 2 months ago

              Hah true, true. If you don’t read the output at all and do the most minimal of research, that’s on you for sure.

              Now excuse me while I pop some Fent, my head is killing me.

        • madcaesar@lemmy.world · 3 months ago

What’s the legal situation if you THINK something is true and you affirm it, but you are wrong? It can’t be the same as lying, since you thought it was true.

          I really wonder what the law says on something like that.


    • Sibbo@sopuli.xyz · 3 months ago

I think that when you use AI to write the appeal and errors turn up even after you checked it, it could still be a case of negligence. Not that I think it necessarily should be, but I can see how one could make that argument.

    • kalkulat@lemmy.worldOP · 3 months ago

      by signing your name to it, you’re swearing that it’s all true.

Lawyers, too, use qualifiers like ‘to the best of our knowledge’ and ‘in our studied opinion’ to indicate that opinions may differ. That’s why judges exist, and some of them are -so reasonable- that they accept a patient can’t be expected to second-guess a hospital’s decision to operate -immediately-.

      These US ‘insurance’ companies are in the business of making money from people’s health problems. In MOST OF THE CIVILIZED WORLD that’s not how health-care works. We, the people of the US, let the system get rigged this way … we have to fix that. Permanently.