ive been doin lots of reading bout native american tribes and theres so many stories like “and then they slowly roasted a 6 year old girl to death” or “then they sliced the bottoms off his feet and made him walk over coals” that im like idk mebbe colonialism was an improvement?
Not knowing too much about the middle east, I was like “idk there’s rarely good guys and bad guys, I’m sure it’s complicated, media can be biased, I reserve my opinion till I know more”
But the more I learn the more I’m like ok Israel seems clearly more good here
This extension supports Twitter, Bluesky, Mastodon, Facebook, Reddit, Tumblr, Medium, YouTube, Wikipedia articles, search engine results and all the sites with Disqus comments.
Shinigami Eyes flags the account red. Who even is this person?
-famous sex worker, took some goofy gnome pics like a decade ago, posted about doing acid, and is part of the Effective Altruism cult
It’s like they correctly realized that capitalism is a death cult propagating global suffering, but instead of checking whether anyone else had ever worked out alternatives before them, they came up with the brilliant idea to…pick the right careers and make donations? Without any actual material analysis of why things around the world are the way they are. Also, apparently they have an algorithm that they say can predict world events or something.
What an amazing prediction, no one else could have possibly seen that one coming, I’m sure.
Apparently a bunch of techbros like Sam Bankman-Fried were heavily involved so that’s how you know it’s a joke ideology.
At first blush, there’s a certain amount of logic to this, but then the EA people all “realized that AI is the most dangerous thing ever” and started giving all their money to each other’s AI startups, so whatever it started as, it ended up being a techbro grift.
It’s just utilitarianism with the serial numbers filed off, and the happiness demons are real-life demons
There is a certain logic to the original ideas of effective altruism. If you want to make positive change in the world, you will be more able to do so if you are a powerful person.
The problem is that you can use it as a justification for endlessly pursuing your own personal power. After all, the more powerful you become, the more good you can do! In fact, it makes it easy to justify committing immoral acts in the pursuit of power: think of it from a utilitarian point of view! I am going to do so much good once I’m rich that it will far outweigh the harm I’m doing now. In other words, an okay idea that is used and abused by the greedy and power-hungry to convince themselves they aren’t bad people.
yeah like some charities are more effective than others, and if you’re already an Engels maybe it’s somehow better to bill some extra hours and pay people to “volunteer” at the soup kitchen rather than doing it yourself, but those guys all went fucking nuts about Pascal’s wager instead.
Yeah, there’s the seed of a sensible idea in there, which is just “if you’re going to try to improve the world, you should think carefully about where and how to spend your limited resources to have the most impact.” That’s fine–good, even–but these ghouls have used that idea and their own ideological preconceptions to bootstrap themselves into some absolutely insane positions.
A big part of the problem is that they extend their utility calculations indefinitely into the future with no temporal discounting whatsoever. That leads to a hyperfixation on “existential risks” and trying to optimize for lives 100,000 years in the future at the expense of lives today. Their “leading philosophers” say unhinged things about this: one of them claims that delaying the technological singularity is “costing” us something like 250 billion lives per second, since in the future when we’re an intergalactic civilization with 500 trillion people, that’s how many people will die every second before we have post-Star Trek levels of technology. Somehow this is taken by actual thinking human beings to be a compelling argument instead of a reductio ad absurdum of the position.
The lack of any temporal discounting means that they see things that actually matter–like climate change, lack of healthcare, or the ravages of global capitalism–as fundamentally unimportant. After all, if you can save 30 trillion lives in the year 45,000, who cares if a few billion die in the year 2060? The eventual result is that most of them have talked themselves into thinking that AI “alignment” is where they should be focusing all their resources, since Robot God will either save us all from death forever or usher in human extinction, and thus its future utility numbers are either infinitely positive or infinitely negative. Therefore, they all insist that anyone not giving all their money and time to science-fiction AI grifts like MIRI is fundamentally irrational, and that trying to help actual people who are actually suffering right now is dumb, short-sighted, and based in emotion (pejorative) instead of reason.
They’re fucking infuriating grifters.
Oh, the Gnome lady. Did she do a Mime thing too?
I hate that the gnome pics are what I recognize her from.
What does that first sentence even mean?
browser extension to check if someone is transphobic
Neat!
https://shinigami-eyes.github.io/
It flags transphobes red and trans allies green. Works with reddit, twitter, wikipedia, and I’m sure other social media, idk i don’t really use them much.
Does it say what threw the flag? I’m certainly not seeing anything in the history.
idk what comment someone flagged but there’s plenty of sus ones
https://twitter.com/Aella_Girl/status/1362635548477726721
https://twitter.com/Aella_Girl/status/1707623419666469309
https://twitter.com/Aella_Girl/status/1716709798945685512
Yeah she’s a clown.
that second and third tweet are related. as a Rationalist, she’s not (just) being duped by propaganda. she’s knowingly and willingly supporting a project of racist settler colonialism because she thinks it is right that the greater races should dominate the lesser races.
Woof
That’s very cool. Does it work on Hexbear?
if you see anyone being transphobic on hexbear, report them to the mods; they aren’t welcome here
Shinigami Mods
Not yet, it seems; can’t imagine it would be very hard to get it working with Lemmy, though.
Making it work on Hexbear might be dangerous. Transphobes would literally start getting Death Noted. Wait actually….
One of those rich, fake-job-having pseudo-intellectuals who sit around pondering life inside their mansions and luxury condos