Boox recently switched its AI assistant from Microsoft Azure GPT-3 to a language model created by ByteDance, TikTok’s parent company.

[…]

Testing shows the new AI assistant heavily censors certain topics. It refuses to criticize China or its allies, including Russia, Syria’s Assad regime, and North Korea. The system even blocks references to “Winnie the Pooh,” a term banned in China because it is used to mock President Xi Jinping.

When asked about sensitive topics, the assistant either dodges questions or promotes state narratives. For example, when discussing Russia’s role in Ukraine, it frames the conflict as a “complex geopolitical situation” triggered by NATO expansion concerns. The system also spreads Chinese state messaging about Tiananmen Square instead of addressing historical facts.

When users tried to bring attention to the censorship on Boox’s Reddit forum, their posts were removed. The company hasn’t made any official statement about the situation, but users are reporting that the AI assistant is currently unavailable.

[…]

In China, every generative AI model has to pass a government review confirming it follows “socialist values” before it can launch. These systems aren’t allowed to produce content that contradicts official government positions.

We’ve already seen what this means in practice: Baidu’s ERNIE-ViLG image AI won’t process any requests about Tiananmen Square, and while Kling’s video generator refuses to show Tiananmen Square protests, it has no problem creating videos of a burning White House.

Some countries are already taking steps to address these concerns. Taiwan, for example, is developing its own language model called “Taide” to give companies and government agencies an AI option that’s free from Chinese influence.

[…]

      • DdCno1@beehaw.org · 3 days ago

        Please don’t be deliberately obtuse. You can do better than that.

        In case it was unclear, the training material of most LLMs will almost inevitably include propaganda. If that propaganda is not deliberately added to the data, then that’s unintentional, a byproduct of poor vetting at worst. That’s obviously fundamentally different from an LLM being both deliberately trained with propaganda and having hard checks built into it that filter out certain keywords the government doesn’t want citizens to inform themselves about, which is what China is doing. You can’t honestly believe that the two are the same.

        • LukeZaz@beehaw.org · 2 days ago

          In what way is it meaningfully different? Does the intent of the creators of an LLM – a kind of system notorious for being a black box – fundamentally change the outcomes of what it says? It’s spouting propaganda either way.

          Please don’t be deliberately obtuse. You can do better than that.

          Condescending attitude aside, don’t bring up an irrelevant scenario if you don’t want me to point out its irrelevance.