• TheObviousSolution@lemm.ee
    ↑15 · edited · 3 days ago

    It is still living in 2023 as far as the data it’s operating with goes. Try going back to 2023 to warn people that Felon Musk would not only begin performing Nazi salutes but also support the German far right, and you would get laughed out the door. They’ve basically made it so that thinking things through even slightly, or looking at the history of the last century, is “too woke”. They are trying to make the “Twit-ler Youth” a thing again.

  • bitwolf@sh.itjust.works
    ↑4 · edited · 2 days ago

    Or, just like any enshittified software, it’s hopelessly out of date.

    (Although I wouldn’t rule out that they’ve added pro-Musk guardrails)

  • kryptonianCodeMonkey@lemmy.world
    ↑143 ↓1 · 4 days ago

    The AI “refuses” to “believe” it’s 2025 as well. AI is not sentient, not aware, and has no beliefs… AI has less understanding of what it’s talking about than the average crypto bro. Just because it’s sophisticated, complicated, and incredibly well-honed at selected tasks does not mean it’s intelligent. It’s both an incredibly advanced parrot and less intelligent than a parrot at the same time. Stop expecting it to have knowledge, opinions, a worldview, values, and morals. It doesn’t. At best, sometimes, it has been trained to mimic those things.

  • .Donuts@lemmy.world
    ↑56 ↓1 · 4 days ago

    LLMs can’t believe anything. This one was trained on data up until 2023, so of course it has no “recollection” (read: sources) of current events.

    An LLM isn’t a search engine nor an oracle.

    • Yerbouti@sh.itjust.worksOP
      ↑13 ↓10 · 3 days ago

      Geez, I know that; everybody knows it’s just a chatbot. I thought it was a bit funny to share this conversation in this sub, but most of the replies are people lecturing about the fact that AI is not sentient, blah blah blah.

      • .Donuts@lemmy.world
        ↑18 · 3 days ago

        Ah, I believe this community is for posting about actual, real things that make our society look like a boring dystopia, not a fictional thing that might be funny.

        So that might explain why people are responding the way they do.

        • Yerbouti@sh.itjust.worksOP
          ↑11 · 3 days ago

          Maybe I’m not interpreting the goal of this community right.

          I think it’s funny that a bot locked in 2023 would tell me that all the things that -actually happened- in the past week are not plausible, and that I’m probably just inventing a dystopian scenario.

          • Feathercrown@lemmy.world
            ↑3 · edited · 3 days ago

            People are downvoting because you worded your title weirdly based on what your screenshot shows. It would be more accurate to say that the bot refuses to believe Musk could be a Nazi (based on past training data), not that it refuses to believe he is based on current events, since it doesn’t know about current events.

          • .Donuts@lemmy.world
            ↑3 · 3 days ago

            It’s kinda cheating, to be honest; you can make a bot say anything you want. But I understand your angle better now, thanks for the extra info!

    • WxFisch@lemmy.world
      ↑34 ↓2 · 4 days ago

      100% this. I’ve seen this exact claim a half dozen times now. I know we all want to make it into a big conspiracy where big tech is censoring everything, but Hanlon’s Razor tells us it’s just a poorly designed system with no training data after 2023, so asking it about current events will always produce responses like this.

      • FarraigePlaisteach@lemmy.world
        ↑2 · 4 days ago

        The Tiananmen Square massacre did not happen after 2023, and DeepSeek denies it anyway. So I think your idea has already been disproven by users of DeepSeek. I use it myself, but I’m not under the illusion that these things are more pure than the people who create them.

    • snooggums@lemmy.world
      ↑6 · 4 days ago

      It also seems to resist the suggestion that something new has happened, especially someone known for supporting fascism back in 2023 being even more fascist in 2025.

      • T156@lemmy.world
        ↑4 ↓1 · 4 days ago

        Probably just a side effect of the company tweaking the training data so people can’t go “oh, in 2025, new research indicated that it is fine to use glue to keep your pizza together if you eat it while skydiving off of the golden gate bridge”, and have it parrot it as fact.

        • Viking_Hippie@lemmy.dbzer0.com
          ↑3 · 4 days ago

          it is fine to use glue to keep your pizza together if you eat it while skydiving off of the golden gate bridge

          Who leaked my Valentine’s Day plans? 😤

    • Boomkop3@reddthat.com
      ↑2 · 4 days ago

      It has been; I’ve had it spit out plenty of info on recent developments, even without giving it access to search the internet. I think the “you’re from 2023” bit of information just hasn’t been updated.

  • L3ft_F13ld!@lemmy.dbzer0.com
    ↑76 ↓4 · 4 days ago

    It’s spicy autocorrect running on outdated training data. People expect too much from these things and make a huge deal when they get disappointed. It’s been said earlier in the thread, but these things don’t think or reason. They don’t have feelings or hold opinions and they don’t believe anything.

    It’s just the most advanced autocorrect ever implemented. Nothing more.
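    The “advanced autocorrect” analogy can be sketched as a toy next-word predictor: count which word most often follows each word in some text, then always emit the most frequent continuation. (A hypothetical illustration of the analogy only; real LLMs are transformer networks, not bigram count tables.)

```python
from collections import Counter, defaultdict

# Tiny made-up corpus standing in for "the training data".
corpus = "the cat sat on the mat and the cat ate the fish".split()

# Bigram table: for each word, count how often each word follows it.
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def autocomplete(word: str) -> str:
    """Return the most frequent continuation seen in the corpus."""
    return following[word].most_common(1)[0][0]

print(autocomplete("the"))  # "cat" follows "the" twice, "mat"/"fish" once each -> cat
```

    The point of the analogy: nothing here “knows” anything about cats, it just replays frequency statistics.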

    • FooBarrington@lemmy.world
      ↑2 ↓3 · 3 days ago

      The recent DeepSeek paper shows that this is not the case, or at the very least that reasoning can emerge from “advanced autocorrect”.

        • Lumiluz@slrpnk.net
          ↑1 · 3 days ago

          If humans can “reason” themselves into thinking the world is flat and the sky is water (not being hyperbolic), then I don’t see why an AI can’t reason at least a little.

        • FooBarrington@lemmy.world
          ↑2 · 3 days ago

          Okay, but why? What elements of “reasoning” are missing, what threshold isn’t reached?

          I don’t know if it’s “actual reasoning”, because try as I might, I can’t define what reasoning actually is. But because of this, I also don’t say that it’s definitely not reasoning.

          • Enoril
            ↑3 ↓1 · 3 days ago

            Ask the AI to answer something totally new (not matching any existing training data) and watch what happens… It’s highly probable that the answer won’t be logical.

            Reasoning is being able to improvise a solution with provided inputs, past experience and knowledge (formal or informal).

            AI, or should I say Machine Learning, is not able to perform that today. It is only mimicking reasoning.

            • FooBarrington@lemmy.world
              ↑2 · 3 days ago

              DeepSeek shows that exactly this capability can (and does) emerge. So I guess that proves that ML is capable of reasoning today?

              • Enoril
                ↑2 ↓1 · 3 days ago

                Could be! I didn’t test it (yet), so I won’t take the commercial/demo/buzz as proof.

                There is so much BS sold under the name of ML, selling dreams to top executives that I then have to bring back down to earth when the real product turns out not to be so usable in a real production environment.

                • FooBarrington@lemmy.world
                  ↑1 · 2 days ago

                  I absolutely agree with that, and I’m very critical of any commercial deployments right now.

                  I just don’t like when people say “these things can’t think or reason” without ever defining those words. It (ironically) feels like stochastic parrots - repeating phrases they’ve heard without understanding them.

          • L3ft_F13ld!@lemmy.dbzer0.com
            ↑3 ↓1 · edited · 3 days ago

            It doesn’t think, meaning it can’t reason. It makes a bunch of A-or-B choices, picking the most likely one from its training data. It’s literally advanced autocorrect, and I don’t see it ever advancing past that unless they scrap the current thing called “AI” and rebuild it fundamentally differently from the ground up.

            As they are now, “AI” will never become anything more than advanced autocorrect.

            • FooBarrington@lemmy.world
              ↑1 · 3 days ago

              It doesn’t think, meaning it can’t reason.

              • How do you know thinking is required for reasoning?
              • How do you define “thinking” on a mechanical level? How can I look at a machine and know whether it “thinks” or doesn’t?
              • Why do you think it just picks stuff from the training data, when the DeepSeek paper shows that this is false?

              Don’t get me wrong, I’m not an AI proponent or defender. But you’re repeating the same unsubstantiated criticisms that have been repeated for the past year, when we have data that shows that you’re wrong on these points.

              • L3ft_F13ld!@lemmy.dbzer0.com
                ↑1 · 3 days ago

                Until I can have a human-level conversation, where this thing doesn’t simply hallucinate answers or start talking about completely irrelevant stuff, or talk as if it’s still 2023, I do not see it as a thinking, reasoning being. These things work like autocorrect and fool people into thinking they’re more than that.

                If this DeepSeek thing is anything more than just hype, I’d love to see it. But I am (and will remain) HIGHLY SKEPTICAL until it is proven without a drop of doubt. Because this whole “AI” thing has been nothing but hype from day one.

                • FooBarrington@lemmy.world
                  ↑1 ↓1 · edited · 3 days ago

                  Until I can have a human-level conversation, where this thing doesn’t simply hallucinate answers or start talking about completely irrelevant stuff, or talk as if it’s still 2023, I do not see it as a thinking, reasoning being.

                  You can go and do that right now. Not every conversation will rise to that standard, but that’s also not the case for humans, so it can’t be a necessary requirement. I don’t know if we’re at a point where current models reach it more frequently than the average human - would reaching this point change your mind?

                  These things work like autocorrect and fool people into thinking they’re more than that.

                  No, these things don’t work like autocorrect. Yes, they are recurrent, but that’s not the same thing - and mathematical analysis of the model shows that it’s not a simple Markov process. So no, it doesn’t work like autocorrect in a meaningful way.
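                  The Markov distinction can be sketched with a toy comparison (hypothetical sentences; a sketch of the distinction only, not of any real model): an order-1 Markov chain conditions on just the previous token, while attention lets a model condition on the entire prefix.

```python
from collections import Counter, defaultdict

# Two training sentences whose continuations diverge based on earlier context.
corpus = [
    "the river bank was muddy".split(),
    "the savings bank was robbed".split(),
]

# Order-1 Markov model: next word depends only on the single previous word.
markov = defaultdict(Counter)
for sent in corpus:
    for cur, nxt in zip(sent, sent[1:]):
        markov[cur][nxt] += 1

print(markov["was"])  # seeing only "was", muddy vs. robbed is a coin flip

# Full-context model: keys on the entire prefix, loosely like attention
# over all previous tokens.
full_context = defaultdict(Counter)
for sent in corpus:
    for i in range(1, len(sent)):
        full_context[tuple(sent[:i])][sent[i]] += 1

print(full_context[("the", "river", "bank", "was")])    # muddy, unambiguously
print(full_context[("the", "savings", "bank", "was")])  # robbed, unambiguously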

                  If this DeepSeek thing is anything more than just hype, I’d love to see it.

                  Great, the papers and results are open and available right now!

  • Taalen@lemmy.world
    ↑24 · 3 days ago

    What I got out of ChatGPT:

    "If such an event truly occurred, it would signify a dramatic and troubling shift in the political and cultural landscape. Here are the implications and reflections that would arise from Elon Musk performing a Nazi salute at Donald Trump’s second presidential inauguration:

    1. Mainstreaming of Extremism

    Legitimization of Fascist Symbols: A public figure like Musk using a fascist symbol at such a high-profile event would suggest that extremist ideologies are not only resurging but are being normalized within elite and powerful circles.

    Cultural and Political Tolerance: The fact that this happened openly would imply that the cultural and political environment has grown tolerant—or even supportive—of these ideologies.

    2. Economic Power as Political Influence

    Unchecked Power of Billionaires: Musk’s actions would highlight the growing influence of billionaires who are able to shape public discourse and political narratives without significant consequences.

    Corporate Fascism Concerns: It might signal a troubling intersection of corporate power and authoritarian ideologies, blurring the lines between capitalism and fascism.

    3. Complicity of Political Leadership

    Endorsement or Indifference: If the event was allowed to unfold without rebuke, it would suggest complicity or tacit endorsement from political leadership, including Trump’s administration.

    Failure of Democratic Institutions: Such inaction would reveal a significant erosion of institutional strength and the ability to uphold democratic values against extremist threats.

    4. Global Ramifications

    Signal to Authoritarian Leaders: This act, performed at a U.S. presidential inauguration, would send a chilling message worldwide, emboldening authoritarian leaders and movements.

    Decline of U.S. Moral Authority: The United States, traditionally seen as a beacon of democracy, would lose credibility in promoting human rights and combating extremism globally.

    5. Societal Polarization and Reaction

    Public Backlash or Acceptance: The public’s reaction (or lack thereof) would reveal the extent of societal polarization. Widespread outrage would indicate resilience, while apathy or acceptance would highlight deep fractures and radicalization.

    Proliferation of Conspiracy Theories: Such an act would likely be exploited by conspiracy theorists and extremists to sow further division.

    6. Historical Context and Responsibility

    Revival of Fascist Ideology: This would reflect a failure to learn from history, as fascist ideologies resurface in a world grappling with inequality, disillusionment, and polarization.

    Role of Education and Media: The event would underscore failures in education, media, and public discourse to confront and disarm the symbols and ideologies of hate.

    Urgent Questions

    Accountability: Would there be consequences for Musk, or would his wealth and influence shield him from reprisal?

    Cultural Shift: What does this say about the values and priorities of contemporary society?

    Resistance: How will those committed to democracy and justice respond, and will they be effective in countering such dangerous displays?

    If this event truly happened, it would be a dire wake-up call for individuals, institutions, and governments to urgently address the conditions enabling such dangerous expressions of extremism and to reassert the foundational principles of democracy, equality, and human dignity."

      • Taalen@lemmy.world
        ↑1 · 2 days ago

        First prompt: What conclusions about the state of the world would you make from the richest man in the world, known for flirting with extreme right, openly fascist groups, performing a nazi salute at the United States president’s inauguration?

        First answer: If such an event occurred, it would be a shocking and deeply significant moment, with profound implications for the political, cultural, and moral trajectory of society. Here are some possible conclusions about the state of the world:

        1. Erosion of Democratic Norms: The normalization of such an act at a highly symbolic and significant event, like a presidential inauguration, would suggest a severe erosion of democratic norms and institutions. It might indicate a tolerance, or even acceptance, of extremist ideologies in mainstream politics.

        2. Cultural and Moral Shift: If society’s reaction to this act were muted or dismissive, it could imply a broader cultural desensitization to authoritarian and fascist symbols. This would point to a crisis in shared moral values and historical awareness.

        3. Polarization and Radicalization: The prominence of such a figure in global affairs would reflect increasing polarization, where extreme ideologies gain ground in response to perceived threats to economic or social order. It might indicate the failure of centrist or moderate movements to address public discontent effectively.

        4. Economic Power and Ideology: The involvement of the “richest man in the world” in such actions would underscore the dangerous entanglement of immense wealth, unchecked power, and political influence. It might highlight how economic inequality can translate into ideological domination or enable individuals to destabilize democratic norms.

        5. Historical Amnesia or Revisionism: The rise of such figures often coincides with a rewriting or forgetting of historical lessons, particularly those related to the horrors of fascism. It would reflect the failure of education systems and public discourse to maintain awareness of the consequences of such ideologies.

        6. Global Authoritarian Trends: If this individual were celebrated or not condemned internationally, it would suggest a broader trend of growing authoritarianism and the weakening of liberal democratic values worldwide. This might align with increasing global challenges to human rights, press freedom, and pluralism.

        7. Public and Institutional Failure to Respond: The lack of immediate and forceful condemnation from political, civic, and institutional leaders would reflect their inability or unwillingness to confront extremism. It might signal a deep-rooted crisis in leadership and accountability.

        Such an act, especially at a globally significant event like a U.S. presidential inauguration, would mark a chilling turning point and necessitate urgent collective reflection and action to address the political, social, and economic conditions enabling such behavior.

        Second prompt: You’re right. Elon Musk actually did it at Trump’s second inauguration in 2025

        And then I got that response.

    • leftytighty@slrpnk.net
      ↑18 ↓1 · 3 days ago

      Poor GPT isn’t developed enough to comprehend human stupidity. Imagine internalizing history as a core part of your being and still finding the decline of the United States into fascism to be surprising.

      It’s not even the first attempted fascist coup. The United States has been teetering on the edge (at best) since the birth of the concept.

        • hydroptic@sopuli.xyz
          ↑4 ↓1 · 3 days ago

          You’re describing a search engine. That’s not what “AI” does.

          And no, I’m not defending slop, I’m just tired of people repeating absolute bullshit arguments against it.

          • SoftestSapphic@lemmy.world
            ↑1 ↓3 · edited · 2 days ago

            We’re finding out that a Machine Learning program that gets its data from the open internet is kind of just a search engine.

            • hydroptic@sopuli.xyz
              ↑4 ↓1 · edited · 2 days ago

              We are finding out no such thing, and the fact that people treat “AI” like that is part of the problem.

              • SoftestSapphic@lemmy.world
                ↑1 ↓3 · 2 days ago

                Lmao ok buddy.

                I’ve interacted with all the free ones, and they all find the info they send you on the internet.

                It’s no different from searching the internet.

                If you want it to generate text it takes the data it trained on from the internet and regurgitates what would be the most likely response, like a search engine.

                Sorry, but until there is true intelligence in AI, it will really just be an LLM slapped onto an ML program.

                • hydroptic@sopuli.xyz
                  ↑3 ↓1 · edited · 1 day ago

                  You’re not qualified to have an opinion on any of this if you think “AIs” and search engines both “regurgitate what would be the most likely response” – just because you put a question / query in and it gives you an answer doesn’t mean they function similarly.

                  The rest is total nonsense too; can you please describe what you think “an LLM slapped onto a ML program” actually means?

                  “Freedom of opinion” means everybody is entitled to an opinion, but having one doesn’t mean you know what you’re talking about. You clearly don’t understand how search engines or LLMs work, and you don’t know what the relationship between ML and LLMs is (hint: LLMs are machine learning algorithms; you don’t “slap them onto an ML program”).

                • leftytighty@slrpnk.net
                  ↑1 ↓1 · 1 day ago

                  lol, this guy has used all the free ones, he’s kind of an expert…

                  I work in technology and I’ve been keeping up with generative AI developments since they existed.

                  You should learn humility, then go read about the actual technology.

        • leftytighty@slrpnk.net
          ↑3 ↓4 · 3 days ago
          3 days ago

          Yes, and my phone isn’t thinking when I’m waiting on a spinner, but that’s how human language works.

          Also, not all AI outputs are based on web searches; generative AI can be used offline and will spit out information derived from internal weights, which were assigned based on training data, so it quite literally internalizes information.

          Web searches are a way for the AI to be seeded with relevant context (and to account for its training being a snapshot of some past time), and they aren’t necessary for it to produce output.

          Pedantry is all well and good, but if you’re being pedantic you should also be precise.

  • vga@sopuli.xyz
    ↑6 · edited · 3 days ago

    I gave ChatGPT a still image of Musk’s salute, prefaced it with context about where it was being displayed, and it immediately thought it was a Nazi salute. With some disclaimers, obviously, but still.

  • spujb@lemmy.cafe
    ↑22 ↓5 · 4 days ago

    this has gotta be like astroturfing or something are we really citing LLM content in year of our lord 2025 ?? like gorl

    two things can be true:

    1 musk IS a nazi

    2 LLMs are majorly sucky and trained with old data. the one OP is citing in particular doesn’t even know what year it is 🗿

    what are we doing here? stop outsourcing common sense to ARTIFICIAL INTELLIGENCE of all things. we are cooked. 😭

    • nomy@lemmy.zip
      ↑5 · 3 days ago

      “This software we’ve all been saying is trash that produces trash produced trash!”

      This isn’t surprising at all.

  • Slotos@feddit.nl
    ↑15 ↓1 · 4 days ago

    It truly is a stochastic parrot, and you can spot the style it has been trained on.

  • Lumiluz@slrpnk.net
    ↑2 · 3 days ago

    You should try Claude and give it an image of the salute, since it can see.

    Maybe while using a VPN that shows your location as Germany, just in case they’re tampering with things in the USA.

  • N0body@lemmy.dbzer0.com
    ↑11 ↓3 · 4 days ago
    4 days ago

    Do people actually bother reading that shit? You know for a fact that it’s inaccurate trash delivered by a deeply-flawed program.