Want to wade into the snowy surf of the abyss? Have a sneer percolating in your system but not enough time/energy to make a whole post about it? Go forth and be mid.

Welcome to the Stubsack, your first port of call for learning fresh Awful you’ll near-instantly regret.

Any awful.systems sub may be subsneered in this subthread, techtakes or no.

If your sneer seems higher quality than you thought, feel free to cut’n’paste it into its own post — there’s no quota for posting and the bar really isn’t that high.

The post-Xitter web has spawned so many “esoteric” right-wing freaks, but there’s no appropriate sneer-space for them. I’m talking redscare-ish, reality-challenged “culture critics” who write about everything but understand nothing. I’m talking about reply-guys who make the same 6 tweets about the same 3 subjects. They’re inescapable at this point, yet I don’t see them mocked (as much as they should be).

Like, there was one dude a while back who insisted that women couldn’t be surgeons because they didn’t believe in the moon or in stars? I think each and every one of these guys is uniquely fucked up and if I can’t escape them, I would love to sneer at them.

(Credit and/or blame to David Gerard for starting this. A lot of people didn’t survive January, but at least we did. This also ended up going up on my account’s cake day, too, so that’s cool.)

  • nightsky@awful.systems
    9 points · 16 hours ago

    Very impressed with this comment from the creator of the Zig programming language, regarding dealing with AI slop submissions, and generally about LLMs for coding.

    I should look into Zig again! Technically, I’ve always leaned more towards Rust, because I like its more uncompromising approach to safety, while Zig always seemed to me a bit more middle-of-the-road on that. But I’ve been disappointed by how widespread LLM usage has become in Rust circles, and I fear that its culture might tip over in favor of slop. (But it’s not there yet and I hope it won’t happen!)

    Anyway, I’m ordering the “Introduction to Zig” book…

  • saucerwizard@awful.systems
    13 points · 23 hours ago

    OT: paying the cat tax…again. Please ignore the ash on Hector’s head, it’s an ongoing mystery where that’s been coming from.

  • o7___o7@awful.systems
    9 points · 21 hours ago

    patio11 and tptacek are experts on daycares in Minnesota. This is very on topic for a technology website that eschews politics.

    https://news.ycombinator.com/item?id=46915587

    It’s a real fuckin scum scrum over there(1). Between these dorks and Mozilla Jake, it seems like every nerd-ass fash clown in tech got the memo to talk like an emotionally abusive ex with dying wizard characteristics.

    (1) even more so than usual

    • CinnasVerses@awful.systems
      4 points · 7 hours ago

      I miss when Patrick McKenzie was just sharing an American’s view on Japanese culture and reminding devs that names are not always Firstname Lastname in the Latin alphabet and ‘just’ paying yourself twice the average local income from your business is not a failure. The following is deep twitter pundit brain for a rich white man in Chicago who has lived most of his adult life in Japan and SoCal, referring to social programs for poor brown people in Minnesota:

      I think journalism and civil society should do some genuine soul-searching on how we knew—knew—the state of that pond, but didn’t consider it particularly important or newsworthy until someone started fishing on camera.

      Edit. I also like the HN response which explains that private companies have few responses to fraud except refusing service, but the State of Minnesota can arrest fraudsters, command third parties to provide evidence about them, and send them to prison, so the People of Minnesota require strong evidence before it uses those powers.

      • istewart@awful.systems
        5 points · 7 hours ago

        I knew nothing about, and had no opinion on, daycare facilities in Minnesota run by Somali immigrants, before Trump-supporting media entities decided to make the topic an astroturf issue. On the other hand, I had plenty of experience with people whose worldviews had been severely warped by such coordinated media campaigns. Mr. McKenzie should take some time to reflect on this.

        • CinnasVerses@awful.systems
          3 points · 5 hours ago

          Yes, I think the people who should have opinions beyond “the state government found some fraud and is investigating further cases” are people who live in Minnesota and have connections to daycare or immigrant communities. It’s notorious that the NYT repackages stories by reporters in smaller orgs (or randos on social media) and puts its own spin on them! They don’t have a specific editorial line on social services in the Midwest, just instincts.

  • macroplastic@sh.itjust.works
    12 points · 1 day ago

    Enjoyed this piece from Mission Local on San Francisco’s “March for Billionaires” yesterday.

    Choice excerpts:

    Despite the San Francisco locale, a participant said the event had “grassroots” origins at a “little rationalist restaurant get together” in a “group house” on Shattuck Avenue, subverting any assumptions that Berkeley is all radical hippies.

    Mission Local contributor Benjamin Wachs coined a term for an event in which media observers outnumber participants: a panopticonference. This was close to that. Those in attendance did their best to field questions from the barrage of journalists that backed them into a tree.

    This is where Annie, a young transgender woman who attended the protest in a T-shirt that said “I’m in a polycule with Aella,” first met Kauffman. An impromptu debate ensued, with Annie “aggressively defending billionaires.” It was, participants concluded, worthy of a larger forum.

    “People are just jealous that they are poorer and weaker and uglier,” she said. “We are beautiful. We’re smart. We’re strong… We are supporting the billionaires, here.”

      • fullsquare@awful.systems
        2 points · 4 hours ago

        from bsky photos it looks like the entire gathering was 30 people. t h i r t y  p e o p l e. i might have counted some reporter or someone passing by randomly by accident

    • Soyweiser@awful.systems
      9 points · 19 hours ago

      “People are just jealous that they are poorer and weaker and uglier,”

      Remember when Rationalists pretended to care about truth, steelmanning, ideological Turing tests, etc.?

      (Also implying that billionaires are strong and attractive is funny)

      subverting any assumptions that Berkeley is all radical hippies

      Y’all are still radical hippies. Some hippies just love the boot.

      California is, I believe, the only state to give health insurance to people who come into the country illegally,” Kauffman said nervously. “I think we probably should not be providing that.

      Rationalism, the empathy removal training center.

      “It is the intention of journalists to lie, which is why we need to not do anything to the journalists themselves, but we need to simply remove them as a class,” Annie said. “Just like Germany does to the extremist organizations.”

      Well, Germany certainly did excel at removing classes of people from society

      lol.

      Her political awakening, she added, was watching the press “constantly pump out obviously fake information” against Trump during the 2016 election instead of reporting on the “actual abhorrent views he holds.”

      Converted by Scott. (That ‘people are saying I was wrong but actually I was right’ disclaimer aged worse than the post).

  • fullsquare@awful.systems
    6 points · 1 day ago

    recently learned about electrofuels. it’s a hypothetical rube goldberg scheme where you put in enough energy to propel 5-7 EVs and pull out enough gasoline to fuel one car. it’s sold as a green technology, because now gasoline is green somehow. this spin ignores that it would require a massive buildout of renewables + nuclear, at which point direct electrification of many energy end uses, including transportation, just makes sense. (what the fuck is a train??) it’s also sold as long-term storage for renewables, but i struggle to see how a scheme with less than 30% round-trip efficiency can be considered “storage”. just build more renewables and don’t use them all if needed
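    back-of-the-envelope, with illustrative round numbers rather than figures from any particular study: electrolysis ~70% efficient, CO2 capture + fuel synthesis ~70%, an ICE ~25% tank-to-wheel, versus roughly 90% charging × 85% battery-and-drivetrain for a BEV:

    $$
    \eta_{\text{e-fuel}} \approx 0.7 \times 0.7 \times 0.25 \approx 0.12,
    \qquad
    \eta_{\text{EV}} \approx 0.9 \times 0.85 \approx 0.75,
    \qquad
    \frac{0.75}{0.12} \approx 6,
    $$

    which is roughly where the “electricity for 5-7 EVs per one tank of e-fuel” framing comes from.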

    cui bono?

    it’s a complicated pr campaign by volkswagen group (and some other usual suspects). this is a nonexistent magic solution to a real problem, so it fits a common pattern (which also makes it stubsack material), and it also attempts to shank electric vehicle adoption.

    if anything, it’s backwards, because EVs are adopted faster than the renewables buildout happens (cars don’t last as long as power plants). if realized, this allows volkswagen group to manufacture regular cars for a long, long time even after oil refining stops. originally, it was proposed as a hypothetical luxury product for antique car owners, because it’s physically possible but doesn’t make sense in energy or cost terms. but then someone spun it into a potential regular retail good, and maybe this pr campaign was also part of the reason why the internal combustion car ban was axed at the eu level recently. now that that has happened, they don’t need to push it so hard

    there is something ironic in the fact that the last time this process made sense was in nazi germany, just this time the source of syngas is different

    • rook@awful.systems
      3 points · 17 hours ago

      So, the idea isn’t entirely as stupid as it initially sounds. There are two things that you gain from this approach:

      • You can more easily separate your energy generation and consumption. Power lines are lossy, and there are a lot of very sunny and very windy places that are a long way away from where people actually want to live. Massive HVDC infrastructure buildout isn’t cheap or easy.
      • Energy density of chemical fuels is far higher than that of batteries (rough numbers below). Being able to travel long distances without convenient nearby power sources is useful… long-distance high-speed rail isn’t always convenient to electrify, and long-haul flights and rocketry are Quite Difficult to run on batteries.
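      The rough numbers (ballpark figures, not from the thread): kerosene or gasoline holds on the order of 12 kWh/kg, while lithium-ion cells manage roughly 0.25 kWh/kg,

      $$
      \frac{\sim 12\ \text{kWh/kg}}{\sim 0.25\ \text{kWh/kg}} \approx 50\times,
      $$

      and unlike a fuel tank, a battery doesn’t get lighter as it empties, which is why long-haul aviation and rocketry stay chemical.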

      FWIW, I suspect the cost will end up being even higher, because you’ll start losing the economies of scale that modern vehicle infrastructure has, because normal people will just use EVs.

      It can only ever be an intermediate technology anyway. Artificial photosynthesis and more sophisticated fuel cells seem like much more plausible longer-term futures.

      • fullsquare@awful.systems
        3 points · 11 hours ago

        i think that business logic goes against your first point. spatially: if you have a source of cheap energy and want to make money out of it, instead of making little money (by making fuel), why not make more money (by siting energy-intensive manufacturing there)? this seems to be the current meta, with places like iceland and norway making aluminum and nitrogen fertilizers respectively. this can continue in other places and maybe be extended to some other industries.

        temporally (because there are also sunny and windy days when regular people won’t consume all the energy): this scheme requires cheap electricity, which is needed for cheap hydrogen. this requires a massive renewables buildout, which means electricity is cheap for regular people, which means that every gas stove/heater and car will get replaced with electric ones, both residential and, perhaps faster, industrial (more available loans). this means you have to reinforce the transmission grid anyway.

        this also means cheap hydrogen, and because the main input to its production is electricity, it makes more sense to produce it when electricity is cheap. this means it’s naturally suited to suck up all excess generation (both daily and seasonal), and if electricity production is seasonal then so should be the price of hydrogen. if the price of electricity or hydrogen varies, then some industries can suck it up at greater rates when it’s cheap. i’m thinking here of aluminum smelting (electricity input, daily variation, already done), or ammonia synthesis, or direct reduced iron smelting. i bet there’s more.

        the point is, maybe you get to avoid storing hydrogen to some degree, because you can effectively store energy in finished or semi-finished goods. you can, for example, make some direct reduced iron and just store it when hydrogen is available, then smelt it into steel in an arc furnace when it’s not. fertilizers are already sold in an annual cycle and stored long term, and anyway ammonia is much easier to store than hydrogen. how it plays out will depend on energy/hydrogen costs vs storage costs vs capex-for-overcapacity costs. all together, i think this means that because of the large amount of generation needed, you don’t actually need to store energy this way at all, because when generation is low the electrolyzers turn off, and something will work at all times, probably. when you’re able to do that, you won’t need to

        in terms of scale, first your lunch is eaten by EVs of various shapes, then by the use of hydrogen for transportation (rocketry fits there), then you have to compete with biofuels (a jet engine will take anything that burns without ash and can be pumped). then some methanol will be used as fuel first, because it just works in engines and fuel cells, and it’s a step before hydrocarbon synthesis. only then does synthetic petroleum make sense; this basically leaves some aviation (that won’t use methanol) and military uses

    • istewart@awful.systems
      5 points · 1 day ago

      if realized, this allows volkswagen group to manufacture regular cars for a long, long time even after oil refining stops. originally, it was proposed as a hypothetical luxury product for antique car owners, because it’s physically possible, but doesn’t make sense in energy or cost terms.

      If VW is trying to mainstream this, that tells me they’re scrambling to keep milking the premium end of their portfolio that relies on extravagant IC engines (Porsche, Lamborghini, Audi etc.). Very bad sign for them, as the ID Buzz van looks to be a complete failure to the point of “pausing” production, and VW Commercial Vehicles is their backbone in Europe, much like Ford relies on truck sales in the US. I watched a video a few weeks ago that discussed how their European van/utility vehicle portfolio is aging and totally fragmented, to the point that they are selling rebadged Ford Transit vans manufactured in Turkey. I thought it was bad when they were badge-engineering Dodge Caravans for the US market for a few years, but totally bungling the EV van rollout in Europe is seriously bad business for them.

      It was also hilarious how the rich guys on the Porsche forums were bad-mouthing the rather sexy Mission X EV supercar concept a couple years ago. No matter how cool a 9,000-rpm flat-six is, letting yourself be driven by the guys who just want you to keep making that forever will not stave off everyone else (now including China and Vietnam!).

      • fullsquare@awful.systems
        3 points · 21 hours ago

        i don’t know if they started it. what i suspect is their contribution is the bold claim that electrofuels might be cheaper than regular petrol in the glorious future, while currently they’re much more expensive (30x?). a strict prerequisite for their competitiveness is cheap electricity, but at that point they’re not needed. there was also a Porsche-owned wind-power-to-methanol plant, and while methanol works as a petrol replacement, all the plastics in contact with it must be resistant to it, which is not a given. i guess the main value of it for them is propaganda; they’re not ready for EV manufacture

    • V0ldek@awful.systems
      3 points · 1 day ago

      Wait, so they figured out how to use renewable energy to create something that still generates emissions? Is this a ploy to get Trump on board with renewables?

      • fullsquare@awful.systems
        4 points · 1 day ago

        the point is, as always, to continue doing business as usual (in this case, by inhibiting BEV adoption). that fuel is carbon-neutral but also extraordinarily wasteful. trump’s deal is something called “clean coal”, which isn’t (it suggests carbon capture, but it’s not a thing, they marketed normal emissions control like we have in europe as some unusually green innovation). i think he was also captured by gulf monarchies for the one hour when their representative talked to him

        e: wait it still makes smog so checks out

  • scruiser@awful.systems
    7 points · 1 day ago

    I liked this takedown of METR’s task horizon “research”: https://arachnemag.substack.com/p/the-metr-graph-is-hot-garbage

    In addition to all the complaints I already knew of and had, METR’s methodology for human baselining of tasks was even worse than I realized.

    And you know… I actually kind of respect METR relative to a lot of boosters and doomers for at least attempting hard numbers and not just vibes and anecdotes (METR is the one that did the study showing LLMs actually reduced coders’ productivity even as it made them think it had increased). But the standard for quantifying LLM performance in practical terms is absurdly low.

      • scruiser@awful.systems
        3 points · 12 hours ago

        It was basically the only “empirical” (scare quotes well earned) data they actually used in their “model”. Even then, they decided exponential improvement wasn’t good enough, so they plugged it into a hyper-exponential model that went to infinity in just a few years regardless of the inputs.
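        Schematically (this is the generic shape of such a model, not their exact parameterisation): if the task horizon currently doubles every $d$ months and each successive doubling takes $r < 1$ times as long as the last, then the time to reach any horizon at all is bounded by the geometric series

        $$
        d + rd + r^2 d + \cdots = \frac{d}{1-r},
        $$

        a finite number of months whatever the starting point, which is why the curve blows up “in just a few years regardless of the inputs.”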

        • lurker@awful.systems
          2 points · 8 hours ago

          yeah lmfao it was bad. I thoroughly enjoyed titotal’s takedown of that graph. I can’t believe the documentary versions of that paper on youtube have millions of views and people eating it up

          Those comment sections are gonna be a joy when 2027 and 2028 roll around

      • scruiser@awful.systems
        6 points · 1 day ago

        They absolutely are. I am just giving them a tiny bit of credit for at least attempting academic research on LLM performance. But only a tiny bit: as the blog post I linked discusses, their methodology is really sloppy, not up to the level of most academic research, and wouldn’t get through peer review at most decent journals.

    • mirrorwitch@awful.systems
      9 points · 2 days ago

      I am a better sysadmin than I was before agentic coding because now I can solve problems myself that I would have previously needed to hand off to someone else.

      more fodder for my theory that LLMs are a way to cash in on the artificial isolation caused by the erosion of any real community in late-stage capitalism (or to put it more simply, the “AI” is a maladaptive solution to the problem of not having friends)

  • rook@awful.systems
    9 points · 3 days ago

    Moltbook still going great. Even the enthusiasts are feeling that the shine may have worn off.

    eastside mccarty @eastsidemccarty

    So just to clarify: You created a thing that you now realize you can’t control, and you can’t do anything to secure it, and people that use ClawdBot… err sorry… @openclaw, are own their own to deal with the consequences?! Did I get that right?

    Turns out that combining unsecurable vibe-coded web services and unsecurable chatbots into an unmoderated public platform can be bad. Also, shrugging off problem reports with “i unno” is a bit of a bad look.

    [alt text of the screenshotted exchange:]

    eastside mccarty @eastsidemccarty

    Hey @openclaw team, can you do something about these malicious skills in your registry, ClawHub? Last night, one user, hightowerSeu, published more than 200 malicious skills. Each of these tricks the user into installing malware

    Rajveer @RajveerJolly

    Tried to reach out, no response yet. @steipete please address it

    Peter Steinberger @steipete

    Yeah got any ideas how? There’s about 1 Million things people want me to do, I don’t have a magical team that verifies user generated content. Can shut it down or people us their brain when finding skills.

    Rajveer @RajveerJolly

    Sorry homie I don’t have any idea either. I understand you have a lot on your plate perhaps some sort of flagging feature could do wonders

    Peter Steinberger @steipete

    And who reviews the flags? That would be abused right away too

    eastside mccarty @eastsidemccarty

    So just to clarify: You created a thing that you now realize you can’t control, and you can’t do anything to secure it, and people that use ClawdBot… err sorry… @openclaw, are own their own to deal with the consequences?! Did I get that right?

    Rajveer @RajveerJolly

    I hear you. I guess for now people just need to double and check and verify it all bevause there isn’t a simple solution to this

    • FredFig@awful.systems
      7 points · 2 days ago

      The real damning thing is the speed at which the true believers abandoned this garbage fire. You used to be able to string these guys along for years, and now they can barely keep the trend going for a week. Gonna be real weird when the scammers realize they’ve finally burnt out all the goodwill.

    • nightsky@awful.systems
      13 points · 2 days ago

      there isn’t a simple solution to this

      How about just not creating the problem in the first place. How about that.

    • gerikson@awful.systems
      7 points · 2 days ago

      The people who are worried that Moltbook is where agents are gaining self-consciousness forgot the part of Accelerando where all the AIs were basically scammers (the Slug)

  • dovel@awful.systems
    9 points · 3 days ago

    It seems that Anthropic has vibe coded a C compiler. This one is really good! The generated code is not very efficient: even with all optimizations enabled, it outputs less efficient code than GCC with all optimizations disabled.

    • V0ldek@awful.systems
      5 points · 1 day ago

      Waiting for some promptfondler to complain that this kind of assignment is not really fair for an AI because it has actual requirements.

    • Sailor Sega Saturn@awful.systems
      12 points · 2 days ago

      The first issue filed is called “Hello world does not compile” so you can tell it’s off to a good start. Then the rest of the six pages of issues appear to be mostly spam filed by some AI guy’s rogue chatbot.

    • ________@awful.systems
      8 points · 2 days ago

      Given that it doesn’t have an assembler or linker, and I doubt it implemented its own lexical analyzer, I almost struggle to call this a compiler.

      The claim that it is “from scratch” is misleading, since all of its prior training came from open source.

      Building a small compiler for a simple language (C is pretty simple, especially older versions) is a common learning exercise and not difficult. This is very much another situation where “AI” created an oversimplified version of something, with the details of how it got there hidden, as a way to further push the propaganda that it is so capable.

    • nightsky@awful.systems
      4 points · 2 days ago

      This could be regarded as a neat fun hack, if it wasn’t built by appropriating the entire world of open source software while also destroying the planet with obscene energy and resource consumption.

      And not only do they do all that… it’s also presented by those who wish this to be the future of all software. But for that, a “neat fun hack” just isn’t enough.

      Can LLMs produce software that kinda works? Sure, that’s not new. Just like LLMs can generate books with correct grammar that are vaguely about a given theme. But is such a book worth reading? No. And is this compiler worth using? Also no.

      (And btw, this approach only works with an existing good compiler as an “oracle”, so forget about doing this to create a compiler for a new language. In addition, there’s certainly no other language with as many compilers as C, providing plenty of material for the training set.)
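      For the curious, “GCC as an oracle” in practice boils down to differential testing: compile the same C source with both compilers, run both binaries, and compare the results. A minimal sketch (`vibecc` is a hypothetical stand-in for the vibe-coded compiler’s CLI, not the project’s actual tooling):

      use std::process::Command;

      // Differential test against GCC as the "oracle": compile the same C file
      // with the candidate compiler and with gcc, run both binaries, and check
      // that stdout and exit status agree. `vibecc` is a hypothetical CLI name.
      fn behaves_like_gcc(c_source: &str) -> std::io::Result<bool> {
          for (compiler, out) in [("vibecc", "./a.candidate"), ("gcc", "./a.oracle")] {
              let compiled = Command::new(compiler).args([c_source, "-o", out]).status()?;
              if !compiled.success() {
                  return Ok(false); // one of the two failed to even compile it
              }
          }
          let candidate = Command::new("./a.candidate").output()?;
          let oracle = Command::new("./a.oracle").output()?;
          Ok(candidate.stdout == oracle.stdout && candidate.status == oracle.status)
      }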

      • V0ldek@awful.systems
        5 points · 1 day ago

        This could be regarded as a neat fun hack, if it wasn’t built by appropriating the entire world of open source software

        This shouldn’t be left merely implied: the autoplag trained on GCC, clang, and every single poor undergrad who had to slap together a working C compiler for their compilers course and uploaded it to github, and “learnt” fuckall

    • lagrangeinterpolator@awful.systems
      9 points · 3 days ago

      I wonder what actual experts in compilers think of this. There were some similar claims about vibe coding a browser from scratch that turned out to be a little overheated: https://pivot-to-ai.com/2026/01/27/cursor-lies-about-vibe-coding-a-web-browser-with-ai/

      I do not believe that this demonstrates anything other than that they kept making the AI brute force random shit until it happened to pass all the test cases. The only innovation was that they spent even more money than before. Also, it certainly doesn’t help that GCC is open source, and they have almost certainly trained the model on the GCC source code (which the model can regurgitate poorly into Rust). Hell, even their blog post talks about how half their shit doesn’t work and just calls GCC instead!

      It lacks the 16-bit x86 compiler that is necessary to boot Linux out of real mode. For this, it calls out to GCC (the x86_32 and x86_64 compilers are its own).

      It does not have its own assembler and linker; these are the very last bits that Claude started automating and are still somewhat buggy. The demo video was produced with a GCC assembler and linker.

      I wonder why this blog post was brazen enough to talk about these problems. Perhaps by throwing in a little humility, they can make the hype pill that much easier to swallow.

      Sidenote: Rust seems to be the language of choice for a lot of these vibe coded “projects”, perhaps because they don’t want people immediately accusing them of plagiarism. But Rust syntax still reasonably resembles C-family languages, and in most cases blindly translating C code into Rust kinda works. Now, Rust does have the borrow checker, which requires a lot of thinking to deal with, but I think this is not actually a disadvantage for the AI. Borrow checking is enforced by the compiler, so if you screw up in that department, your code won’t even compile. This is great for an AI that is just brute forcing random shit until it “works”.
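      A tiny illustration of that point (a made-up example, nothing from the repo): the mistake below never reaches runtime, because rustc rejects it outright, so a generate-and-retry loop gets the error signal for free.

      fn main() {
          let mut passes = vec![String::from("lexer"), String::from("parser")];
          let first = &passes[0]; // immutable borrow of `passes`

          // Uncommenting the next line makes the program fail to compile with
          // error[E0502]: cannot borrow `passes` as mutable because it is also
          // borrowed as immutable -- the bug is caught before anything runs.
          // passes.push(String::from("codegen"));

          println!("first pass: {first}");
      }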

      • V0ldek@awful.systems
        5 points · 1 day ago

        I wonder what actual experts in compilers think of this.

        Anthropic doesn’t pay me and I’m not going to look over their pile of garbage for free, but just looking at the structure and READMEs it looks like a reasonable submission from an advanced student in a compilers course: lowering to IR, SSA representation, dominators, phi elimination, some passes like strength reduction. The register allocator is very bad though; I’d expect at least something based on colouring.

        The READMEs are also really annoying to read. They are overlong and they don’t really explain what is going on in the module. There’s no high-level overview of the architecture of the compiler. A lot of it is just redundant. Like, what is this:

        Ye dude, of course it doesn’t depend on the IR, because this is before IR is constructed. Are you just pretending to know how a compiler works? Wait, right, you are, you’re a bot. The last sentence is also hilarious, my brother in christ, what, why is this in the README.

        Now this evaluation only makes sense if the compiler actually works - which it doesn’t. Looking at the filed issues there are glaring disqualifying problems (#177, #172, #171, #167, etc. etc. etc.). Like, those are not “oops, forgot something”, those are “the code responsible for this is broken”. Some of them look truly baffling, like how do you manage to get so many issues of the type “silently does something unexpected on error” when the code is IN RUST, which is explicitly designed to make those errors as hard as possible? Like I’m sorry, but the ones below? These are just “you did not even attempt to fulfill the assignment”.

        It’s also not tested, it has no integration tests (even though the README says it does), which is plain unacceptable. And the unit tests that are there fail so lol, lmao.

        It’s worse than existing industry compilers and it doesn’t offer anything interesting in terms of the implementation. If you’re introducing your own IR and passes you have to have a good enough reason not to just target LLVM. Cranelift is… not great, but they at least have interesting design choices and offer quick unoptimized compilation. This? The only reason you’d write this is if you were indeed a student learning compilers, in which case it’d be a very good experience. You’d probably learn why testing is important, for the rest of your life at least.

      • rook@awful.systems
        7 points · 2 days ago

        I wonder why this blog post was brazen enough to talk about these problems. Perhaps by throwing in a little humility, they can make the hype pill that much easier to swallow.

        I feel this is an artefact of the near-complete collapse of mainstream journalism, combined with modern tech business practices that are about securing investment and cashing out, where every other concern is secondary or even entirely absent. It’s all just selling vibes.

        People only ever report the hype; the investors see everyone else following the hype, panic that they might be left out, and bury you in cash. When it all turns sour and people ask pointed questions about the exact nature of the magic beans you were promising to grow, you can just point at the blog post that no-one read (or at least, that only poor people read, and they’re barely people if you think about it) and point out that you never hid anything.

        • lagrangeinterpolator@awful.systems
          7 points · 2 days ago

          I don’t even think many AI developers realize that we’re in a hype bubble. From what I see, they genuinely believe that the Models Will Improve and that These Issues Will Get Fixed. (I see a lot of faculty in my department who still have these beliefs.)

          What these people do see, however, are a lot of haters who just cannot accept this wonderful new technology for some reason. AI is so magical that they don’t need to listen to the criticisms; surely they’re trivial by comparison to magic, and whatever they are, These Issues Will Get Fixed. But lately they have realized that with the constant embarrassing AI failures (AI surely doesn’t have horrible ethics as well), there are a lot of haters who will swarm the announcement of any AI project now. The haters also tend to be people who actually know stuff and check things (tech journalists are incentivized to not do that), but it doesn’t matter because they’re just random internet commenters, not big news outlets.

          My theory is that now they add a ton of caveats and disclaimers to their announcements in a vain attempt to reduce the backlash. Also if you criticize them, it’s actually your fault that it doesn’t work. It’s Still Early Days. These Issues Will Get Fixed.

      • corbin@awful.systems
        11 points · 2 days ago

        I only sampled some of the docs and interesting-sounding modules. I did not carefully read anything.

        First, the user-facing structure. The compiler is far too configurable; it has lots of options that surely haven’t been tested in combination. The idea of a pipeline is enticing but it’s not actually user-programmable. File headers are guessed using a combination of magic numbers and file extensions. The dog is wagged in the design decisions, which might be fair; anybody writing a new C compiler has to contend with old C code.

        Next, I cannot state enough how generated the internals are. Every hunk of code tastes bland; even when it does things correctly and in a way which resembles a healthy style, the intent seems to be lacking. At best, I might say that the intent is cargo-culted from existing code without a deeper theory; more on that in a moment. Consider these two hunks. The first is generated code from my fork of META II:

        while i < len(self.s) and self.clsWhitespace(ord(self.s[i])): i += 1
        

        And the second is generated code from their C compiler:

        while self.pos < self.input.len() && self.input[self.pos].is_ascii_whitespace() {
            self.pos += 1;
        }
        

        In general, the lexer looks generated, but in all seriousness, lexers might be too simple to fuck up relative to our collective understanding of what they do. There’s also a lot of code which is block-copied from one place to another within a single file, in lists of options or lists of identifiers or lists of operators, and Transformers are known to be good at that sort of copying.

        The backend’s layering is really bad. There’s too much optimization during lowering and assembly. Additionally, there’s not enough optimization in the high-level IR. The result is enormous amounts of spaghetti. There’s a standard algorithm for new backends, NOLTIS, which is based on building mosaics from a collection of low-level tiles; there’s no indication that the assembler uses it.

        The biggest issue is that the codebase is big. The second-biggest issue is that it doesn’t have a Naur-style theory underlying it. A Naur theory is how humans conceptualize the codebase. We care about not only what it does but why it does it. The docs are reasonably accurate descriptions of what’s in each Rust module, as if they were documents to summarize, but struggle to show why certain algorithms were chosen.

        Choice sneer, credit to the late Jessica Walter for the intended reading: It’s one topological sort, implemented here. What could it cost? Ten lines?

        I do not believe that this demonstrates anything other than they kept making the AI brute force random shit until it happened to pass all the test cases.

        That’s the secret: any generative tool which adapts to feedback can do that. Previously, on Lobsters, I linked to a 2006/2007 paper which I’ve used for generating code; it directly uses a random number generator to make programs and also disassembles programs into gene-like snippets which can be recombined with a genetic algorithm. The LLM is a distraction and people only prefer it for the ELIZA Effect; they want that explanation and Naur-style theorizing.

        • V0ldek@awful.systems
          4 points · 1 day ago

          It’s one topological sort, implemented here. What could it cost? Ten lines?

          This one idk, some of it could be more concise but it also has to build the graph first using that weird seemingly custom hashmap as the source. This function, however, is immensely funny
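          For scale, a bare-bones Kahn’s-algorithm version over a plain adjacency list (nothing like the repo’s custom hashmap plumbing, just the textbook shape of the thing) comes out to roughly a dozen lines:

          use std::collections::VecDeque;

          // Textbook Kahn's algorithm: nodes are 0..n, `edges[u]` lists the nodes
          // that depend on u. Returns None if the graph contains a cycle.
          fn topo_sort(edges: &[Vec<usize>]) -> Option<Vec<usize>> {
              let mut indegree = vec![0usize; edges.len()];
              for targets in edges {
                  for &v in targets {
                      indegree[v] += 1;
                  }
              }
              let mut ready: VecDeque<usize> =
                  (0..edges.len()).filter(|&v| indegree[v] == 0).collect();
              let mut order = Vec::with_capacity(edges.len());
              while let Some(u) = ready.pop_front() {
                  order.push(u);
                  for &v in &edges[u] {
                      indegree[v] -= 1;
                      if indegree[v] == 0 {
                          ready.push_back(v);
                      }
                  }
              }
              (order.len() == edges.len()).then_some(order)
          }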

        • V0ldek@awful.systems
          4 points · 1 day ago

          There’s a standard algorithm for new backends, NOLTIS

          I think this makes it sound more cutting-edge and thus less scathing than it should: it’s an algorithm from 2008 and is used by LLVM. Claude not only trained on the paper but on all of LLVM as well.

      • YourNetworkIsHaunted@awful.systems
        3 points · 2 days ago

        I wonder if this is going to hold out long enough for someone to create an obnoxious AI-first language designed to have as obnoxiously picky a compiler as possible, in order to turn runtime errors that the model can’t cope with into compile failures, which it can silently retry until they’re ‘fixed’.

    • blakestacey@awful.systems
      10 points · 2 days ago

      There’s a letter in the book of Asimov’s correspondence that his brother edited where Asimov says that he’d been asked “How close are we to George Orwell’s 1984?” again and again in the years leading up to 1984, to the point that he was sick of it and dreading the actual year 1984, when no one would ask him about anything else. I figure he had a lot of venom built up in his system that came out here.

      He was also a veteran of science-fiction fan club drama, after which he worked in academia, so yeah, he knew sectarian in-fighting.

    • Amoeba_Girl@awful.systems
      12 points · 3 days ago

      I don’t think I disagree with much of what Asimov is saying here! Aside from the silly bits about left infighting and scifi as forecasting (yawn), and the horrible recounting of the Spanish civil war, I’ve made pretty much the same observations about 1984. It’s nihilistic and reactionary, it’s profoundly misogynistic and it reeks of contempt for the working class. It’s also shockingly naive and paradoxically enthusiastic about the workings and effectiveness of propaganda and censorship. There’s certainly nothing prescient about it. It’s baffling to me that it’s still popular with left-leaning people to this day.

      The most generous thing I can say is that the book might have been intended purely as satire, and as such it would at least be coherent. But sadly I don’t think this is how people tend to read it.

    • bitofhope@awful.systems
      10 points · 3 days ago

      Sour indeed. There are some decent observations in there. He correctly notes that the book is dissing Stalinism specifically. Newspeak never became a real problem and superficially similar phenomena don’t mean language is losing its expressive power. And yes, those depictions of working class people have more than a whiff of classism to them.

      Then there’s a lot of complaining about leftist infighting. It’s pretty appropriate for this to be hosted on that site. It’s only anti-revisionism if it comes from the Vanguard Party region of Marxism-Leninism, otherwise it’s just sparkling sectarianism.

      The communists, who were the best organised, won out and Orwell had to leave Spain, for he was convinced that if he did not, he would be killed

      Better organized than the POUM, I’ll give it that. “Won out” is an interesting choice of words to describe any republican faction in the Spanish civil war.

      And then there’s the cringe. No robots and computer? My stories have robots and computer because it’s impossible for someone to always pay attention to spying a bunch of people. The panopticon doesn’t work, actually, because even if at anytime someone could be watching you, they couldn’t possibly be watching you all the time unless they have robots and computer. Also why isn’t this dystopian society more feminist?

      • Amoeba_Girl@awful.systems
        13 points · 3 days ago

        The thing is, the world in 1984 is feminist! as imagined by a bloke who hates feminism. Sex for pleasure is outlawed, makeup and dresses are banned, women look and act like men (and indeed are worse than men) instead of following their womanly nature. It’s a feminist dystopia!

        I mean, if god damned Asimov thinks your book is misogynistic, you know you’ve fucked up!

      • gerikson@awful.systems
        13 points · 3 days ago

        Orwell had Julia working in the novel factory - where machines spliced together romance trash pablum for light entertainment. So he accurately prophesied LLMs.

  • blakestacey@awful.systems
    15 points · 3 days ago

    Ryan Mac:

    Epstein had many known connections to Silicon Valley CEOs, but less known was how he made money from those relationships.

    We did a deep dive into how he got dealflow in Silicon Valley, giving him shots to invest in Coinbase, Palantir, SpaceX and other companies.

    For example, here is Coinbase cofounder Fred Ehrsam in 2014 emailing w/ people around Epstein, including crypto entrepreneur Brock Pierce, asking to meet Epstein before the financier invested $3m in Coinbase.

    Coinbase was a two year old startup. Epstein netted multimillion dollar returns from this.

    Here is Epstein asking Peter Thiel if he should invest in Spotify or Palantir. Thiel was (and still is) Palantir’s chairman and tells Epstein there is “no need to rush.” This is one of several emails where Thiel gives Epstein advice.

    Epstein later invested $40m into one of Thiel’s VC funds.

    One of @ering.bsky.social’s great file finds: Epstein tried to help create an tech fund shortly before he was arrested in 2019 with two tech types. One of his partners, however, was worried about the “optics” of telling founders that Epstein was involved.

    So they suggested Epstein conceal himself.

    At the end of his life, Epstein had assets of around $600m. A large part of that was due to his ability to get in early to hot tech deals. The returns he made off those deals helped fund his lifestyle.

    […]

    While reporting this, I had something happen that’s never happened. A comms rep for one of the co’s disputed my reporting and said what I was telling them was untrue because it was not in Grok, xAI’s chatbot.

    I was looking directly at the files. And this person was using AI to challenge the truth.

    https://bsky.app/profile/rmac.bsky.social/post/3me4wmrgic226

    • Architeuthis@awful.systems
      8 points · 3 days ago

      I was looking directly at the files. And this person was using AI to challenge the truth.

      These are the people who, come next election, will be voting strictly according to an AI’s say-so.

  • CinnasVerses@awful.systems
    9 points · 3 days ago

    They are organizing another Inkhaven in April, maybe because it brings in at least $80,000. I do not recommend committing to spend a month in the presence of our dear friends given their practice of allowing sexual, psychological, and substance abuse in their communities!

    https://www.inkhaven.blog/

      • CinnasVerses@awful.systems
        5 points · 2 days ago

        I don’t know if there has been any on-the-ground journalism about our dear friends, except possibly on the Zizian murders and some of the investigations into misogyny and sexual abuse. What reporter without a bankroll from EA has the money to spend a few months in the Bay Area making friends with introverted bloggers?

        One of the mentors at Inkhaven will be Jesse Singal, who has said good things about pedophiles and KiwiFarms and is worried about so many young people identifying as trans. Does he have prior connections to eugenics or race pseudoscience? Pinkerite says he is Bluesky buddies with Razib Khan (one of the names RationalWiki can no longer mention).