• T156@lemmy.world
    1 month ago

    They do, but they’d still need someone to go through the flags and check. Reddit gets away with it the same way Facebook groups do: by offloading moderation to users, with the admins only being roped in for ostensibly big things like ban evasion and site-wide bans, or lately, when the moderators don’t toe the company line exactly.

    I doubt that they would use an LLM for that. It’s very expensive and slow, especially at the volume of images they would need to process. Existing CSAM detectors aren’t as expensive and are faster: they basically compute a hash for each image and compare it against a database of known CSAM hashes.
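    The hash-and-compare idea can be sketched roughly like this (a toy average-hash in pure Python; real systems such as PhotoDNA use far more robust perceptual hashes, and `known_hashes` here is a hypothetical stand-in for the vetted hash database):

```python
# Toy perceptual-hash matcher (illustrative only; not a real detector).

def average_hash(pixels):
    """Compute a 64-bit hash from an 8x8 grayscale grid:
    each bit is 1 if that pixel is brighter than the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(h1, h2):
    """Number of differing bits between two hashes."""
    return bin(h1 ^ h2).count("1")

# Hypothetical database of hashes of known-bad images.
sample = [[10 * i + j for j in range(8)] for i in range(8)]
known_hashes = {average_hash(sample)}

def is_match(pixels, threshold=5):
    """Flag an image if its hash is within `threshold` bits
    of any known hash (tolerates small edits/recompression)."""
    h = average_hash(pixels)
    return any(hamming_distance(h, k) <= threshold for k in known_hashes)
```

    The key property is that matching is a cheap bitwise comparison per image, which is why it scales to huge volumes where running an LLM per image would not.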

    • brucethemoose@lemmy.world
      1 month ago

      Small LLMs are quite fast these days, even the multimodal ones. The same goes for the small classifier models used explicitly to filter diffusion output.