• 12 Posts
  • 139 Comments
Joined 1 year ago
Cake day: July 19th, 2023

  • Mr. Rogers’ channel thumbnails would look like the ones from those full-completion video-gaming channels that involve a controller-holder, sometimes along with a confused roommate or neighbor, logging hour #73 of a 20-hour RPG because they can’t figure out how to get a platinum on the last minigame. There’d be a blurry background of the hand puppets and two headshots: Mr. Rogers smiling, his guest freaking out. I think the titles would be fairly tame, though; I’m imagining, “Another Day in the Neighborhood #112 | An Unexpected Guest, Learning to Tie Shoes”

    Now, where it gets fun is imagining that Lamb Chop could have the same setup. “Lamb Chop & Friends #52 | She’s Unstoppable, So Much Blood, Can We Unsummon Lamb Chop?”




  • It’s almost completely ineffective, sorry. It’s certainly not as effective as exfiltrating weights via neighborly means.

    On Glaze and Nightshade, my prior rant hasn’t yet been invalidated, and there’s no upcoming mathematics that tilts the scales in favor of anti-training techniques. In general, scrapers for training sets are now augmented with alignment models, which test inputs to see how well the tags line up; your example might be rejected as insufficiently normal-cat-like. (There’s a rough sketch of that sort of filter at the end of this comment.)

    I think that “force-feeding” is probably not the right metaphor. At scale, more effort goes into cleaning and tagging than into scraping; most of that “forced” input is destined to be discarded or retagged.
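
    To make the alignment-filter point concrete, here’s a minimal sketch of the kind of check I mean, in Python with CLIP via the Hugging Face transformers library; the model name and the 0.25 threshold are illustrative guesses, not anything a particular scraper is known to use.

    ```python
    # Sketch: drop scraped (image, caption) pairs whose caption doesn't
    # line up with the image, before they ever reach the training set.
    import torch
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    MODEL_NAME = "openai/clip-vit-base-patch32"  # illustrative choice
    model = CLIPModel.from_pretrained(MODEL_NAME)
    processor = CLIPProcessor.from_pretrained(MODEL_NAME)

    def alignment_score(image: Image.Image, caption: str) -> float:
        """Cosine similarity between image and caption in CLIP's shared space."""
        inputs = processor(text=[caption], images=image,
                           return_tensors="pt", padding=True)
        with torch.no_grad():
            out = model(**inputs)
        img = out.image_embeds / out.image_embeds.norm(dim=-1, keepdim=True)
        txt = out.text_embeds / out.text_embeds.norm(dim=-1, keepdim=True)
        return float((img @ txt.T).item())

    def keep(image: Image.Image, caption: str, threshold: float = 0.25) -> bool:
        """Keep the pair only if the tag plausibly matches the image."""
        return alignment_score(image, caption) >= threshold
    ```

    A poisoned “cat” that no longer reads as a cat to the alignment model scores low and gets dropped, which is the “insufficiently normal-cat-like” rejection above.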










  • Hallucinations — which occur when models authoritatively state something that isn’t true (or, in the case of an image or a video, make something that looks…wrong) — are impossible to resolve without new branches of mathematics…

    Finally, honesty. I appreciate that the author understands this, even if they might not have the exact knowledge required to substantiate it. For what it’s worth, the situation is more dire than that; we can’t even describe the new directions required. My fictional-universe theory (FU theory) shows that a knowledge base cannot know whether its facts describe the real world or a fictional world that has a lot in common with it; the toy sketch after this comment gives one way to phrase that. (Humans don’t want to think about this, because of the implication.)
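
    One toy way to phrase that indistinguishability (my gloss, just the standard model-theoretic observation, not the FU-theory write-up itself): treat the knowledge base as a set of sentences KB. Any world that satisfies KB is a legitimate candidate for “the” world being described, and nothing derivable from KB alone can separate two such worlds.

    ```latex
    % Toy illustration (assumed notation, not from the original comment):
    % W_real and W_fic are two interpretations that both satisfy KB.
    % By soundness, anything the knowledge base can derive holds in both,
    % so no conclusion drawn from KB distinguishes the real world from the
    % fictional one.
    \[
      \mathcal{W}_{\mathrm{real}} \models \mathrm{KB},\quad
      \mathcal{W}_{\mathrm{fic}} \models \mathrm{KB},\quad
      \mathrm{KB} \vdash \varphi
      \;\Longrightarrow\;
      \mathcal{W}_{\mathrm{real}} \models \varphi
      \;\text{ and }\;
      \mathcal{W}_{\mathrm{fic}} \models \varphi
    \]
    ```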