Seems pretty bad?

  • 𝒍𝒆𝒎𝒂𝒏𝒏@lemmy.one
    1 year ago

    That example someone posted, where the AI refused to explain the oklch CSS functional notation and instead said it doesn’t exist, pretty much exemplifies why this is a bad idea, although I can see how maybe there were good intentions by whoever implemented it.

    In my opinion, the “AI Explain” feature is unnecessary, as I find the MDN contributors already do an excellent job of explaining things as-is, especially in the Examples section under the documentation itself.
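    For reference, oklch() really is part of CSS Color Module Level 4. A minimal sketch of the notation (the selector and specific values here are arbitrary examples, not from MDN):

    ```css
    /* oklch(lightness chroma hue [/ alpha]) — perceptual lightness,
       chroma, and a hue angle in degrees, based on the OKLab color space */
    .example {
      color: oklch(70% 0.15 200);            /* a medium cyan-blue */
      background: oklch(95% 0.02 100 / 0.8); /* near-white, 80% opaque */
    }
    ```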

    • HairHeel@programming.dev
      1 year ago

      maybe there were good intentions by whoever implemented it

      If an executive saying “find ways to use ChatGPT so we can be on the cutting edge” and a developer saying “eh, I guess maybe…” count as good intentions.

  • TheOtherKundotron@lemmy.world
    1 year ago

    This feature is in beta. That issue title is somewhat exaggerated, tbh. Test it if you want, but take everything their beta LLM spits out with a grain of salt.

    • heartlessevil@lemmy.one
      1 year ago

      The “AI Explain” button doesn’t mention that it’s in beta, even in the expanded detail text. More importantly, even once it’s out of beta, LLMs will never be trustworthy references without humans vetting their output. This isn’t a “beta” problem; it’s a “completely misunderstood the problem and solution” problem.

      • key@lemmy.keychat.org
        1 year ago

        It’s crazy how this technology that does nothing more than automatically generate text similar to text humans would write (or whatever else it’s trained on) has so many people convinced it’s a source of expertise on everything.

        There’s nothing in there with a capacity for reasoning or an awareness of fact. It’s the difference between an ALU and a CPU at this point. And a lot of people perfectly aware of that fact are essentially grifting the less savvy masses, who see a black box that sounds smart.

  • miega@lemmy.world
    1 year ago

    I sometimes think that we might currently be at the best AI state we’ll see for the next 20 years or so, until other significant technological improvements are achieved.

    These AIs were trained on human-generated data, but now we’re going to trash the Internet with AI-generated, truth-sounding nonsense, so the same training methods will likely produce worse and worse results.