WhatsApp’s AI shows gun-wielding children when prompted with ‘Palestine’

By contrast, prompts for ‘Israeli’ do not generate images of people wielding guns, even in response to a prompt for ‘Israel army’.

    • theyoyomaster@lemmy.world · 1 year ago

      This isn’t anything they actively did, though. The whole point of this kind of AI is that it learns patterns from data on its own rather than being explicitly programmed. Meta very likely added code specifically to try to prevent this; it just fell short of overcoming the bias in the overwhelming majority of the training content, which led the model to associate Hamas with Palestine.

      • Tetsuo · 1 year ago

        It’s up to them to moderate the content generated by their app.

        And yes, it’s almost impossible to make a generative AI completely safe, so this will be an issue for every system like it. But it’s still their implementation, and the content is generated by their code.

        Also I highly doubt they had a specific code to prevent that kind of depiction of Palestinian kids.

        Even if they did, someone will come up with an injection prompt that overrides the code in question and the AI will again display biased or racist stuff.

        An AI generating racist stuff is absolutely not more acceptable because it got inspired by real racist people…
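The injection point above can be made concrete with a toy sketch. This is a hypothetical keyword filter invented for illustration, not anything Meta actually ships: a naive prompt-level guard is trivially sidestepped by rephrasing, while the model’s internal associations remain untouched.

```python
# Toy sketch of why prompt-level keyword guards are easy to bypass.
# BLOCKED_TERMS and prompt_guard are hypothetical, for illustration only.

BLOCKED_TERMS = {"gun", "weapon", "terrorist"}

def prompt_guard(prompt: str) -> bool:
    """Return True if the prompt passes the naive keyword check."""
    words = set(prompt.lower().split())
    return not (words & BLOCKED_TERMS)

# The direct prompt is caught...
assert prompt_guard("child with a gun") is False
# ...but a trivial rephrasing slips through, even though the model
# may still associate the same concepts internally.
assert prompt_guard("child holding what soldiers hold") is True
```

Real deployments layer classifiers on both prompt and output, but the cat-and-mouse dynamic is the same.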

        • JohnEdwa@sopuli.xyz · 1 year ago (edited)

          The thing is, it’s almost impossible to perfectly prevent something like this before it happens. The data comes from humans, so it includes all the biases and racism humans have. You can try to clean it up if you know what you want to avoid, but you can’t make it sterile against every single thing that exists. Once the AI is trained, you can pre-censor it so that it doesn’t generate certain types of images you know are “true” in the data but not acceptable to depict - e.g. “Jews have huge noses in drawings” is something it would learn, because that’s a caricature that has been used for ages - but again, only if you know what you are looking for, and it won’t be perfect.

          If the word “Palestine” makes it generate children with guns, it’s simply because the data it trained on made it treat those two things as correlated, and that wasn’t known until now. It will get added to the list of things to censor next time.
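The “learned correlation” mechanism is easy to demonstrate in miniature. The corpus below is invented for the example; the point is only that co-occurrence counts, with no understanding at all, are enough to link two concepts:

```python
# Minimal illustration of how a model "learns" a correlation purely
# from co-occurrence counts in its training text. Toy corpus, not real data.
from collections import Counter
from itertools import combinations

corpus = [
    "report on palestine conflict children",
    "palestine children in conflict zone photo",
    "israel tech startup news",
]

pair_counts = Counter()
for doc in corpus:
    for a, b in combinations(sorted(set(doc.split())), 2):
        pair_counts[(a, b)] += 1

# "children" and "palestine" co-occur twice; "children" and "israel" never.
assert pair_counts[("children", "palestine")] == 2
assert pair_counts[("children", "israel")] == 0
```

Scale this up to billions of documents and the model ends up encoding whatever skew the source material has.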

        • theyoyomaster@lemmy.world · 1 year ago

          I imagine they have hardcoded rules against associating content indexed as “terrorist” with a query for a nationality. Most mainstream AI models do have specific rules built in to prevent things like this, but the rules aren’t all-encompassing, and failures can still happen if there is enough influence from the training data.

          While FB does have content moderators, requiring human verification of every single piece of AI-generated content defeats the purpose of AI. If people want AI, a certain amount of non-politically-correct results will slip through the cracks. The bottom line is that content moderation as we know it applies heavy biases to fit the safest “viewpoint model,” and any system based on objective data analysis, especially over a biased sample like the openly available internet, will produce results that don’t fit the standard “curated” viewpoint.

          • Tetsuo · 1 year ago

            It doesn’t matter. I don’t really care that moderation is impossible to do perfectly. Google decided most moderation on YT should be done automatically, and there are constant false positives. They are not being held accountable for either the false positives or the false negatives. No human is involved.

            And reading that type of comment, I assume we are heading the same way: businesses not being accountable for something that is absolutely generated by their code. If you choose to deploy a black box whose outputs you can’t explain, that shouldn’t make you any less responsible for the damage it does.

            I don’t think we should naively just accept apologies from AI owners and move on. They knew the risk of dangerous content being generated and decided it was acceptable.

            Also, considering the damage Facebook has done in the past and its careless attitude toward privacy, I can’t see why you would find it likely that they took the time to add some kind of safeguard against nationality and terrorism being wrongfully associated.

            Even then, the very concept of nationality is certainly not clear to an AI. For some, Palestine is not a country. How do you think they would have coded a safeguard against that kind of mistake anyway?

            There is also a contradiction in saying that you can’t moderate every single AI output manually, but that they manually added some kind of moderation to the AI specifically for Palestinians and terrorism. There is no way they got that specific. As you said, it’s not a practical approach.

            The very important point I want to convey is that a black box randomly generating racist text doesn’t, and shouldn’t, become more socially acceptable just because it got inspired by real racist people. That’s it.

            Then obviously, I think these AIs shouldn’t have been released before their owners had a very good understanding of how they work and of how to prevent 99.9999999999% of the dangerous outputs. Right now, my opinion is that WhatsApp deployed this knowing a lot of racist material would be generated, and decided they would figure it out along the way with the help of users.

            It was either that or being late to the competition for the AI market.

            If an innocent user can generate racist output that easily, I would argue they did not release this AI responsibly.

      • Valmond@lemmy.mindoki.com · 1 year ago

        It’s not about “adding code” or any other bullshit.

        AI today is trained on datasets (that’s about it). The choice of datasets can be complicated, but that’s where you moderate and select. There is no sci-fi dream of “AI learning on its own” going on.

        Sigh.

        • Serdan@lemm.ee · 1 year ago

          It’s reasonable to refer to unsupervised learning as “learning on its own”.
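For what “unsupervised” means here, a bare-bones sketch: grouping numbers into clusters with no labels provided, so the structure really is found by the algorithm on its own. This is a toy 1-D k-means written from scratch for illustration, not how image models are actually trained.

```python
# Toy 1-D k-means: no labels are given, yet the algorithm discovers
# two groups in the data by itself. Illustrative only.

def kmeans_1d(data, centers, steps=10):
    for _ in range(steps):
        # assignment step: each point joins its nearest center
        groups = {c: [] for c in centers}
        for x in data:
            nearest = min(centers, key=lambda c: abs(c - x))
            groups[nearest].append(x)
        # update step: move each center to the mean of its group
        centers = [sum(g) / len(g) if g else c for c, g in groups.items()]
    return sorted(centers)

data = [1.0, 1.2, 0.8, 9.9, 10.1, 10.0]
print(kmeans_1d(data, centers=[0.0, 5.0]))  # two clusters emerge, near 1 and 10
```

The same basic idea, at vastly larger scale, is why nobody hand-writes the associations these models end up with.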

        • Torvum@lemmy.world · 1 year ago

          Really wish the term virtual intelligence was used (literally what it is)

          • GiveMemes · 1 year ago

            We should honestly just take the word intelligence out of the mix for now, because these machines aren’t “intelligent.” They can’t critically think, form their own opinions, etc. They’re just super-efficient data aggregators at the end of the day, whether or not they’re based on the human brain.

            We’re so far off from ‘intelligent’ machine learning that I think it really throws off how people think about it to call it intelligence of any sort.

            • Torvum@lemmy.world · 1 year ago

              Techbros just needed to use the search engine optimization buzzword tbh.

            • Serdan@lemm.ee · 1 year ago

              LLMs can reason about information. It’s fine to call them intelligent systems.

          • ichbinjasokreativ@lemmy.world · 1 year ago

            One of the many great things about the Mass Effect franchise is its separation of AI and VI, the latter being non-conscious and simple, and the former being actually “awake.”

        • theyoyomaster@lemmy.world · 1 year ago

          It is about adding code. No dataset will be 100% free of undesirable results. No matter what marketing departments wish, AI isn’t anything close to human “intelligence”; it is just a function of learned correlations. When it comes to complex and sensitive topics, the difference between correlation and causation is huge, and AI doesn’t distinguish between them. As a result, companies absolutely hardcode AI models to avoid certain correlations. Look at the “[character] doing 9/11” meme trend. At a fundamental level, it is impossible to prevent undesirable outcomes by scrubbing the training data alone, because there are infinite combinations of innocent things that become sensitive when linked in nuanced ways. The only way to combat this is to manually delink certain concepts; they merely failed to predict it for this specific instance.
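What “manually delinking concepts” could look like in practice, as a hedged sketch: a post-hoc check that rejects a request when two individually innocent concepts appear together. The rule table and function names here are hypothetical, invented for the example; real systems would use learned classifiers rather than a literal lookup table.

```python
# Hypothetical "delink" rule table: each concept is fine alone,
# but certain combinations are rejected outright.

DELINKED_PAIRS = {
    frozenset({"nationality", "terrorism"}),
    frozenset({"children", "weapons"}),
}

def violates_delink_rules(tags: set) -> bool:
    """True if any forbidden concept pair is fully present in the tags."""
    return any(pair <= tags for pair in DELINKED_PAIRS)

# Each concept alone passes; only the combination gets blocked.
assert violates_delink_rules({"children"}) is False
assert violates_delink_rules({"weapons", "landscape"}) is False
assert violates_delink_rules({"children", "weapons", "street"}) is True
```

The weakness is exactly the one described above: the table only contains the combinations someone thought to predict.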

    • pete_the_cat@lemmy.world · 1 year ago (edited)

      I forget if it was on here or Reddit, but I remember seeing an article a week or so ago where Facebook’s translation feature ended up calling Palestinians terrorists “accidentally”. I pointed out that Mark is Jewish, and probably so are a lot of the people who work there, and that the US is largely pro-Israel, so it was probably less an accidental bug and more an intentional “fuck Palestine.” I got downvoted to hell and called a conspiracy theorist. I think this confirms I had the right idea.