• 6 Posts
  • 372 Comments
Joined 2 years ago
Cake day: August 29th, 2023

  • this explicitly isn’t happening because the private sector is clamoring to get some of that EY expertise

    I mean, Peter Thiel might like him to bend the knee, and I’m sure OpenAI/Anthropic would love to have him as a shill, but idk if they’d actually pay $600K for it. Also, it would be a betrayal of every belief about AI Eliezer claims to have, so in principle it really shouldn’t take lucrative compensation to keep him from it.

    paying me less would require me to do things that take up time and energy in order to get by with a smaller income

    Well… it is an improvement on cults making their members act as the leader’s servants/slaves because the leader’s time/effort is allegedly so valuable!



  • PauseAI leader writes a hard takedown of the EA movement: https://forum.effectivealtruism.org/posts/yoYPkFFx6qPmnGP5i/thoughts-on-my-relationship-to-ea-and-please-donate-to

    They may be a doomer with some crazy beliefs about AI, but they’ve accurately noted that EA is pretty firmly captured by Anthropic and the LLM companies and can’t effectively advocate against them. And they accurately call out the false-balance style and the unevenly enforced tone/decorum norms that stifle the EA and lesswrong forums. Some choice quotes:

    I think, if it survives at all, EA will eventually split into pro-AI industry, who basically become openly bad under the figleaf of Abundance or Singulatarianism, and anti-AI industry, which will be majority advocacy of the type we’re pioneering at PauseAI. I think the only meaningful technical safety work is going to come after capabilities are paused, with actual external regulatory power. The current narrative (that, for example, Anthropic wishes it didn’t have to build) is riddled with holes and it will snap. I wish I could make you see this, because it seems like you should care, but you’re actually the hardest people to convince because you’re the most invested in the broken narrative.

    I don’t think talking with you on this forum with your abstruse culture and rules is the way to bring EA’s heart back to the right place

    You’ve lost the plot, you’re tedious to deal with, and the ROI on talking to you just isn’t there.

    I think you’re using specific demands for rigor (rigor feels virtuous!) to avoid thinking about whether Pause is the right option for yourselves.

    Case in point: EAs wouldn’t come to protests, then they pointed to my protests being small to dismiss Pause as a policy or messaging strategy!

    The author doesn’t really acknowledge that the problems were there from the very founding of EA, but at least they see the problems as they are now. And if they succeed, maybe they’ll help slow the waves of slop and of capital replacing workers with non-functioning LLM agents, so I wish them the best.



  • Last week I posted about Eliezer hating on OpenPhil for having AGI timelines that are too long. He has continued to rage in the comments and replies to his callout post. It turns out he also hates AI 2027!

    https://www.lesswrong.com/posts/ZpguaocJ4y7E3ccuw/contradict-my-take-on-openphil-s-past-ai-beliefs?commentId=3GhNaRbdGto7JrzFT

    I looked at “AI 2027” as a title and shook my head about how that was sacrificing credibility come 2027 on the altar of pretending to be a prophet and picking up some short-term gains at the expense of more cooperative actors. I didn’t bother pushing back because I didn’t expect that to have any effect. I have been yelling at people to shut up about trading their stupid little timelines as if they were astrological signs for as long as that’s been a practice (it has now been replaced by trading made-up numbers for p(doom)).

    When we say it, we are sneering, but when Eliezer calls them stupid little timelines and compares them to astrological signs, it is a top-quality lesswrong comment! Also, a reminder I don’t think anyone here needs: Eliezer is a major contributor to the rationalist attitude of venerating super-forecasters and super-predictors and promoting the idea that rational, smart, well-informed people should be able to put together super accurate predictions!

    So to recap: long timelines are bad and mean you are a stuffy bureaucracy obsessed with credibility, but short timelines are also bad and going to expend the doomers’ credibility; clearly you should just agree with Eliezer’s views, which don’t include any hard timelines or P(doom)s! (As cringey as they are, at least the AI 2027 folks are committing to predictions in a way that can be falsified.)

    Also, the mention of sacrificing credibility makes me think Eliezer is deliberately playing the game of avoiding hard predictions to keep the grift going (as opposed to self-deluding about reasons not to commit to a hard timeline or at least put out some firm P()s).




  • I kinda half agree, but I’m going to push back on at least one point. Originally most of Reddit’s moderation was provided by unpaid volunteers, with paid admins only acting as a last resort. I think this is probably still true even after they purged a bunch of mods who were mad Reddit was being enshittified. And the official paid admins were notoriously slow at purging some really blatantly over-the-line content, like the jailbait subreddit or the original Donald Trump subreddit. The argument is that Reddit benefited, and still benefits, heavily from that free moderation, and that the content users generate is itself valuable; acting like all Reddit users are simply entitled free riders isn’t true.





  • Eliezer is mad that OpenPhil (an EA organization, now called Coefficient Giving)… advocated for longer AI timelines? And apparently he thinks they were unfair to MIRI, or didn’t weight MIRI’s views highly enough? And did so for epistemically invalid reasons? IDK, this post is more of a rant and less clear than classic Sequences content (but is par for the course for the last 5 years of Eliezer’s output). For us sane people, AGI by 2050 is still a pretty radical timeline; it just disagrees with Eliezer’s belief in imminent doom. Also, it is notable that Eliezer has avoided publicly committing to consistent timelines himself (he actually disagrees with efforts like AI 2027), other than a vague certainty that we are near doom.

    link

    Some choice comments:

    I recall being at a private talk hosted by ~2 people that OpenPhil worked closely with and/or thought of as senior advisors, on AI. It was a confidential event so I can’t say who or any specifics, but they were saying that they wanted to take seriously short AI timelines

    Ah yes, they were totally secretly agreeing with your short timelines but couldn’t say so publicly.

    Open Phil decisions were strongly affected by whether they were good according to worldviews where “utter AI ruin” is >10% or timelines are <30 years.

    OpenPhil actually did believe in a pretty large possibility of near-term AGI doom; it just wasn’t high enough, or acted on strongly enough, for Eliezer!

    At a meta level, “publishing, in 2025, a public complaint about OpenPhil’s publicly promoted timelines and how those may have influenced their funding choices” does not seem like it serves any defensible goal.

    Lol, someone noting that Eliezer’s callout post isn’t actually doing anything useful towards Eliezer’s goals.

    It’s not obvious to me that Ajeya’s timelines aged worse than Eliezer’s. In 2020, Ajeya’s median estimate for transformative AI was 2050. […] As far as I know, Eliezer never made official timeline predictions

    Someone actually noting that AGI hasn’t happened yet, so you can’t say a 2050 estimate is wrong! And they also correctly note that Eliezer has been vague on timelines. (Rationalists are theoretically supposed to preregister their predictions in formal statistical language so they can get better at predicting and people can calculate their accuracy… but we’ve all seen how that went with AI 2027. My guess is that, at least on a subconscious level, Eliezer knows harder near-term predictions would eventually ruin the grift.)


  • Image and video generation AI can’t create good, novel art, but it can serve up mediocre remixes of all the standard stuff with only minor defects an acceptable percentage of the time, and that is a value proposition soulless corporate executives are more than eager to take up. And that is just a bonus; I think your fourth and final point is Disney’s real motive: establish a monetary value for their IP served up as slop, so they can squeeze other AI providers for money. Disney was never an ally in this fight.

    The fact that Sam was slippery enough to finagle this deal makes me doubt analysts like Ed Zitron… they may be right from a rational perspective, but if Sam can secure a few major revenue streams and build a moat through nonsense like this Disney deal… Still, it will be tough even if he has another dozen tricks like this one up his sleeve; smaller companies without all of OpenAI’s debts and valuation can undercut his prices.



  • Yud, when journalists ask you “How are you coping?”, they don’t expect you to be “going mad facing the apocalypse”; that is YOUR poor imagination as a writer/empathetic person. They expect you to answer how you are managing your emotions and your stress, or barring that, to give a message of hope or of some desperation. They are trying to engage with you as a real human being, not as a novel character.

    I think the way he reads the question is him telling on himself. He knows he is mounting a half-assed response to the impending apocalypse (going on a podcast tour, making even lower-quality lesswrong posts, making unworkable policy proposals, and continuing to listen to the lib-centrist deep down inside himself by rejecting violence or even direct action against the AI companies that are hurling us towards an apocalypse). He knows a character from one of his stories would have a much cooler response, but that might end up getting him labeled a terrorist and sent to prison or whatever, so instead he rationalizes his current set of actions. This is in fact insane by rationalist standards, so when a journalist asks him a harmless question it sends him down a long trail of rationalizations that includes failing to empathize with the journalist and understand the question.


  • One part in particular pissed me off for being blatantly the opposite of reality:

    and remembering that it’s not about me.

    And so similarly I did not make a great show of regret about having spent my teenage years trying to accelerate the development of self-improving AI.

    Eliezer literally has multiple Sequences posts about his foolish youth, where he nearly destroyed the world trying to jump straight to inventing AI instead of figuring out “AI Friendliness” first!

    I did not neglect to conduct a review of what I did wrong and update my policies; you know some of those updates as the Sequences.

    Nah, you learned nothing from what you did wrong, and your Sequences posts were the very sort of self-aggrandizing bullshit you’re mocking here.

    Should I promote it to the center of my narrative in order to make the whole thing be about my dramatic regretful feelings? Nah. I had AGI concerns to work on instead.

    Eliezer’s “AGI concerns to work on” amounted to a plan for him, personally, to lead a small team that would solve meta-ethics and figure out how to implement those meta-ethics, in a perfectly reliable way, in an AI that didn’t exist yet (for which no theoretical approach existed yet, and for which there wasn’t even an inkling of how to make traction on a theoretical approach). The very plan Eliezer came up with was self-aggrandizing bullshit that made everything about Eliezer.





  • even assuming sufficient computation power, storage space, and knowledge of physics and neurology

    but sufficiently detailed simulation is something we have no reason to think is impossible.

    So, I actually agree with you broadly on the abstract principle, but I’ve increasingly come around to it being computationally intractable for various reasons. But even if functionalism is correct…

    • We don’t have the neurology knowledge to do a neuron-level simulation, and actually simulating all the neural features properly, in full detail, would be extremely computationally expensive: well beyond the biggest supercomputers we have now (see the rough back-of-envelope sketch at the end of this comment), and “Moore’s law” (scare quotes deliberate) has been slowing down enough that I don’t think we’ll get there.

    • A simulation from the physics level up is even more out of reach in terms of computational power required.

    As you say:

    I think there would be other, more efficient means well before we get to that point

    We really really don’t have the neuroscience/cognitive science to find a more efficient way. And it is possible all of the neural features really are that important to overall cognition, so you won’t be able to do it that much more “efficiently” in the first place…

    Lesswrong actually had someone argue that the brain is within an order of magnitude or two of the thermodynamic limit on computational efficiency: https://www.lesswrong.com/posts/xwBuoE9p8GE7RAuhd/brain-efficiency-much-more-than-you-wanted-to-know
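    To put some rough numbers on “well beyond the biggest supercomputers,” here’s the back-of-envelope sketch I mentioned above, in Python. Every parameter is an assumption picked for illustration (the neuron count is the usual textbook figure; the per-update costs and timestep are guesses on the charitable end), not a claim about actual neuroscience:

    ```python
    # Very rough estimate of the compute needed to run a biophysically detailed,
    # neuron-level brain simulation in real time. All parameters are illustrative
    # assumptions, not measured values.

    NEURONS = 8.6e10               # ~86 billion neurons (common textbook figure)
    SYNAPSES_PER_NEURON = 7e3      # assumed average synapse count per neuron
    TIMESTEP_HZ = 1e4              # assumed 0.1 ms integration timestep
    FLOPS_PER_SYNAPSE_UPDATE = 10  # assumed cost of one synapse update per step
    FLOPS_PER_NEURON_UPDATE = 1e3  # assumed cost of membrane/compartment dynamics

    synapse_flops = NEURONS * SYNAPSES_PER_NEURON * TIMESTEP_HZ * FLOPS_PER_SYNAPSE_UPDATE
    neuron_flops = NEURONS * TIMESTEP_HZ * FLOPS_PER_NEURON_UPDATE
    total_flops = synapse_flops + neuron_flops  # sustained FLOP/s for real time

    EXASCALE_MACHINE = 1e18  # roughly the peak of today's largest supercomputers

    print(f"estimated simulation cost: {total_flops:.1e} FLOP/s")
    print(f"exascale supercomputer:    {EXASCALE_MACHINE:.1e} FLOP/s")
    print(f"ratio (sim / machine):     {total_flops / EXASCALE_MACHINE:.0f}x")
    ```

    Even with those fairly charitable numbers you land in the tens of exaFLOP/s sustained, i.e. dozens of the biggest machines on Earth just to keep up in real time, and that’s before dendritic detail, neuromodulation, glia, or anything approaching a physics-level simulation, each of which plausibly adds orders of magnitude.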