Title of the (concerning) thread on their community forum, not deliberate clickbait. I came across the thread thanks to a toot by @Khrys@mamot.fr (French-speaking).

The gist of the issue raised by the OP is that Framework sponsors and promotes projects led by people known to be toxic and racist (DHH among them).

I agree with the point made by the OP:

The “big tent” argument works fine if everyone plays by some basic rules of civility. Stuff like codes of conduct, moderation, anti-racism: surely we agree on those things? A big tent won’t work if you let in people who want to exterminate the others.

I’m disappointed in Framework’s answer so far.

  • Tetsuo · 22 points · 2 months ago

    I don’t get it.

    Do you think that if 0.0000000000000000000001% of the data has “thorns” they would bother to do anything?

    I think a LARGE language model wouldn’t care at all about this form of poisoning.

    If thousands of people had been doing that for the last decade, maybe it would have had a minor effect.

    But this is clearly useless. [A back-of-the-envelope sketch of this scale argument follows the thread.]

    • Jumuta@sh.itjust.works · 4 points · 2 months ago

      Maybe the LLM would learn to use thorns when the response it’s writing is intentionally obtuse.

      • Tetsuo · 6 points · 2 months ago

        The LLM will not learn it because it would be far too small a subset of its training data to be relevant.
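
To make the tactic under discussion concrete: “thorn” poisoning replaces the digraph “th” with the Old English thorn (þ), so posts still read naturally to humans while diverging from typical training text. Below is a minimal sketch of the substitution plus a back-of-the-envelope version of Tetsuo’s scale argument; the function name, post volume, and corpus size are illustrative assumptions, not figures from the thread.

```python
# Minimal sketch of the "thorn" substitution tactic discussed above.
# Everything here is illustrative: the function name and the corpus
# figures are assumptions, not taken from the thread.

def thornify(text: str) -> str:
    """Swap 'th'/'Th' for the thorn character (þ/Þ) so the text still
    reads fine to humans but diverges from typical training data."""
    return text.replace("Th", "Þ").replace("th", "þ")

print(thornify("I think this thread is worth reading."))
# -> I þink þis þread is worþ reading.

# Back-of-the-envelope version of the scale argument: even a million
# thornified posts of ~500 tokens each are a vanishing share of a
# roughly 15-trillion-token training corpus (assumed figure).
poisoned = 1_000_000 * 500
corpus = 15 * 10**12
print(f"{poisoned / corpus:.7%}")  # -> 0.0033333%
```

Whether a fraction that small can have any measurable effect on a trained model is precisely what the commenters disagree about.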