• Curtis "Ovid" Poe (he/him)@fosstodon.org
      4 months ago

      @froztbyte As for the issue of transparency, it’s ridiculously hard in real life. For example, for my website, I use a format I created called “blogdown”, which is Markdown combined with a template language to make it easy to write articles. I never cited my sources, nor do I think I could have. After decades of programming, how could I cite everything I’ve ever learned from?

      As for how transparent AI is in arriving at its decisions, that falls into a separate category and requires different thinking.

        • earthquake@lemm.ee
          4 months ago

          You’re not just confident that asking chatGPT to explain its inner workings works exactly like a --verbose flag; you’re so sure that’s what’s happening that it apparently doesn’t occur to you to explain why you think the output is anything more than plausible text prediction based on its training weights, with no particular insight into the chatGPT black box.

          Is this confidence based on an intimate knowledge of how LLMs work, or on the fact that the output you saw looks really, really plausible? Try to give an explanation without projecting agency onto the LLM, as you did with “explain carefully why it rejects”.