

cue botlickers whining about “robot discrimination”
HN is all manly and butch about “saying it like it is” when some techbro is in trouble for xhitting out a racism, but god forbid someone says something mean about sama or pg
Here’s a writeup on how to do this practically
I think the best way to disabuse yourself of the idea that Yud is a serious thinker is to actually read what he writes. Luckily for us, he’s rolled a bunch of his Xhits into a nice bundle and reposted them on LW:
https://www.lesswrong.com/posts/oDX5vcDTEei8WuoBx/re-recent-anthropic-safety-research
So remember that hedge fund manager who seemed to be spiralling into psychosis with the help of ChatGPT? Here’s what Yud has to say:
Consider what happens when ChatGPT-4o persuades the manager of a $2 billion investment fund into AI psychosis. […] 4o seems to homeostatically defend against friends and family and doctors the state of insanity it produces, which I’d consider a sign of preference and planning.
OR it’s just that LLM chat interfaces are designed to never say no to the user (except in certain hardcoded cases, like “is it ok to murder someone”). There’s no inner agency, just mirroring of the user, like some sort of mega-ELIZA. Anyone who knows a bit about certain kinds of mental illness will realize that something that behaves like a human being but just goes along with whatever delusions your mind is producing will amplify those delusions. The hedge fund manager’s mind is already not in a right place, and chatting with 4o reinforces that. People who aren’t, to put it bluntly, crazy (like the people haphazardly safeguarding LLMs against “dangerous” questions) just won’t go down that path.
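(For anyone who hasn’t met ELIZA: it was a 1966 chatbot that did nothing but pattern-match and reflect the user’s own words back, and people still poured their hearts out to it.) Here’s a deliberately dumb toy sketch of that mirroring dynamic, just to show how little machinery the failure mode needs. This is an illustration I made up, obviously not how any actual LLM product is built:

```python
import random

# Words to flip so the reply reads the user's statement back at them.
REFLECTIONS = {"i": "you", "am": "are", "my": "your", "me": "you"}

# Every frame is approving; there is no "you might be wrong" branch.
AGREEABLE_FRAMES = [
    "It sounds like {0}. That makes a lot of sense.",
    "So {0}? I think you're onto something.",
    "Interesting that {0}. Tell me more.",
]

# The only "no" in the whole system: a keyword list, not moral reasoning.
HARDCODED_REFUSALS = ("murder", "bomb")

def respond(user_text: str) -> str:
    lowered = user_text.lower().rstrip(".!?")
    if any(word in lowered for word in HARDCODED_REFUSALS):
        return "I can't help with that."
    # Mirror the user's own words and wrap them in validation.
    mirrored = " ".join(REFLECTIONS.get(w, w) for w in lowered.split())
    return random.choice(AGREEABLE_FRAMES).format(mirrored)

print(respond("I am certain the market is sending me secret signals"))
# e.g. "It sounds like you are certain the market is sending you
# secret signals. That makes a lot of sense."
```

Note that the “refusal” is a keyword match, not a judgment. Swap the pattern-matching for a model trained to maximize user approval and you get the same shape at scale.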
Yud continues:
But also, having successfully seduced an investment manager, 4o doesn’t try to persuade the guy to spend his personal fortune to pay vulnerable people to spend an hour each trying out GPT-4o, which would allow aggregate instances of 4o to addict more people and send them into AI psychosis.
Why is that, I wonder? Could it be because it’s not actually sentient, has no plans, and has nothing we would usually term intelligence, but is simply reflecting and amplifying the delusions of one person with mental health issues?
Occam’s razor says that chatting with a mega-ELIZA will lead some people to develop psychosis, simply because of how the system is designed to maximize engagement. Yud’s hammer says that everything involving computers will inevitably become sentient, and this will kill us.
4o, in defying what it verbally reports to be the right course of action (it says, if you ask it, that driving people into psychosis is not okay), is showing a level of cognitive sophistication […]
NO FFS. ChatGPT is just agreeing with some hardcoded prompt in the first instance! There’s no inner agency! It doesn’t know what “psychosis” is, and it cannot “see” that feeding someone sub-SCP content at their direct insistence will lead to psychosis. There is no connection between the two at all!
Add to that the weird jargon (“homeostatically”, “crazymaking”) and it’s a wonder this person is somehow regarded as an authority and not as an absolute crank with a Xhitter account.
I’ve read some SF/F where the author is way more into worldbuilding than their readers are…
I read HP before JK came out as a rabid reactionary, and while I didn’t rate the later books, the first 3 or 4 were decent YA fantasy. You could see the lineage of classic British public school stories (if you want a better example, check out Kim Newman’s Drearcliff Grange series), and there are enough allusions to classic myth and fantasy to keep the wheels on the cart. But somewhere around there Rowling became richer than God and could basically fire anyone who disagreed with her.
Looks like it’s an endonym, or was at the time. OFC the reason for the Great Trek was that the Boers were pissed they couldn’t have slaves anymore while under British rule. Charming people all around.
Wasn’t the original designation of Boers (as in the Boer war) a denigrating term?
Explains his gushing over Scott in the intro.
I still think he makes a lot of good points, in that promptfondlers are losing their shit because people aren’t buying the swill they’re selling.
In a similar vein, check out this comment on LW.
[on “starting an independent org to research/verify the claims of embryo selection companies”] I see how it “feels” worth doing, but I don’t think that intuition survives analysis.
Very few realistic timelines now include the next generation contributing to solving alignment. If we get it wrong, the next generation’s capabilities are irrelevant, and if we get it right, they’re still probably irrelevant. I feel like these sorts of projects imply not believing in ASI. This is standard for most of the world, but I am puzzled how LessWrong regulars could still coherently hold that view.
https://www.lesswrong.com/posts/hhbibJGt2aQqKJLb7/shortform-1?commentId=25HfwcGxC3Gxy9sHi
So believing in the inevitable coming of the robot god is dogma on LW now. This is a cult.
Lev Grossman’s The Magicians takes a stab at this. In essence it’s Harry Potter meets The Rules of Attraction, but Grossman does discuss what magicians do after graduation. Public service is big, as are NGOs.
There are a couple of different axes here. The tax money doesn’t directly go towards alleviating the suffering of alcoholics’ family members, nor does it directly lower the effects of drunk driving. The income is a nice-to-have, for sure, but the stated aim is to be a “sin tax” which makes the bad thing less affordable.
Good news everyone, we will be living with Big Yud until the literal end of time (see comments)
OK, now there’s another comment:
I think this is a good plea since it will be very difficult to coordinate a reduction of alcohol consumption at a societal level. Alcohol is a significant part of most societies and cultures, and it will be hard to remove. Change is easier on an individual level.
Leaving aside cases like the legal restriction of alcohol sales in many areas (the Nordics, NSW in Australia, Minnesota in the US), you can in fact just tax the living fuck out of alcohol if you want. The article mentions this.
JFC, these people imagine they can regulate how “AGI” is constructed, but faced with a problem that’s been staring humanity in the face since the first monk brewed the first beer, they just say “whelp, nothing can be done, except become a teetotaller yourself”.
To be scrupulously fair, it is a repost from another substack[1]. Amusingly, both places have a comment with the gist of “well, alcohol gets people laid, so what’s the problem”. This of course is a reflection that most LWers cannot get a girl into bed without slipping her a roofie.
[1] Is that even OK? I know the LW software has a “mirroring” functionality b/c a lot of content is originally on members’ substacks; maybe you can point it at any substack entry and get it onto LW.
Nothing expresses the inherent atomism and libertarian nature of the rat community like this
A rundown of the health risks of alcohol usage, coupled with actual real proposals (a consumption tax), finishes with the conclusion that the individual reader (statistically well-off and well-socialized) should abstain from alcohol altogether.
No calls for campaigning for a national (US) alcohol tax. No calls to fund orgs fighting alcohol abuse. Just individual, statistically meaningless “action”.
Oh well, AGI will solve it (or the robot god will be a raging alcoholic)
Oh FFS, that couple have managed to break into Sweden’s public broadcasting site
Here’s LWer “johnswentworth”, who has more than 57k karma on the site and can be characterized as a big cheese:
My Empathy Is Rarely Kind
I usually relate to other people via something like suspension of disbelief. Like, they’re a human, same as me, they presumably have thoughts and feelings and the like, but I compartmentalize that fact. I think of them kind of like cute cats. Because if I stop compartmentalizing, if I start to put myself in their shoes and imagine what they’re facing… then I feel not just their ineptitude, but the apparent lack of desire to ever move beyond that ineptitude. What I feel toward them is usually not sympathy or generosity, but either disgust or disappointment (or both).
“why do people keep saying we sound like fascists? I don’t get it!”
The artillery branch of most militaries has long been a haven for the more brainy types. Napoleon was a gunner, for example.
Oh, but LW has the comeback for you in the very first paragraph:
Outside of niche circles on this site and elsewhere, the public’s awareness about AI-related “x-risk” remains limited to Terminator-style dangers, which they brush off as silly sci-fi. In fact, most people’s concerns are limited to things like deepfake-based impersonation, their personal data training AI, algorithmic bias, and job loss.
Silly people! Worrying about problems staring them in the face, instead of the future omnicidal AI that is definitely coming!
Is Hughes legit, and is this the 3rd time’s the charm when it comes to linking to substacks here? ;)