• gravitas_deficiency@sh.itjust.works · 10 months ago

    There are very valid philosophical and ethical reasons not to use it. We’re not just being luddites for the hell of it. In many cases, we’re engineers and scientists with interest, experience, or expertise in neural nets and LLMs ourselves, and we don’t like how fast and loose (in a lot of really, really important ways) all these big companies are playing with their training datasets, nor how they’re actively disregarding any sort of legal or ethical responsibility around the technology writ large.

    • tsonfeir@lemm.ee · 10 months ago

      Likewise. The same could be said about every technology.

      • Feathercrown@lemmy.world · 10 months ago

        Uh, no. Why would that be the case? Every technology has unique upsides and downsides, and the downsides of this one are not being handled correctly; in fact, they’re being exacerbated.