Is switching to pdm as easy as installing it, then initializing pdm and remaking the pyproject.toml by adding all the dependencies with pdm? (If I’m understanding right, this basically seems like the same workflow as poetry?)
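Roughly, yes. A hedged sketch of the workflow, assuming a fresh project (names and dependencies below are hypothetical examples): `pip install pdm` (or `pipx install pdm`), then `pdm init` to generate the pyproject.toml interactively, then `pdm add <package>` once per dependency, which resolves versions and writes `pdm.lock`. One difference from poetry's traditional setup: pdm writes standard PEP 621 metadata under `[project]` rather than poetry's `[tool.poetry]` tables, so the result looks roughly like:

```toml
# Sketch of what `pdm init` + `pdm add requests` might produce.
# Project name, version, and dependency pins here are hypothetical.
[project]
name = "myproject"
version = "0.1.0"
requires-python = ">=3.10"
dependencies = [
    "requests>=2.31",
]

[build-system]
requires = ["pdm-backend"]
build-backend = "pdm.backend"
```

After that, `pdm install` syncs the environment from the lock file, much like `poetry install`.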
There is poetry for package management. Apparently uv is substantially faster at resolving package dependencies, although poetry is more feature-rich. (I’ve only used poetry, so I know it is adequate, but I have had times I’ve sat there for minutes or even tens of minutes while it worked through installing all the right versions of all the right libraries.)
scruiser@awful.systems to TechTakes@awful.systems • Stubsack: weekly thread for sneers not worth an entire post, week ending 15th March 2026 (English)
10 · 15 days ago
Yeah. When it comes down to it, the libs think the problem with Trump isn’t the fundamentals of what he is doing; it is that he is doing it without decorum or checking all the legal boxes or saying the usual lib pabulum to justify American imperialism. Skipping the legal checks and decorum is also bad, but in fact kids in cages were horrible even when Obama was doing it the “right” way.
scruiser@awful.systems to TechTakes@awful.systems • Stubsack: weekly thread for sneers not worth an entire post, week ending 15th March 2026 (English)
8 · 15 days ago
I wonder if one of the reasons Pete Hegseth is going so hard after Anthropic is that he and other idiots in the Pentagon unironically believe shit like AI 2027 and so want to soft-nationalize the frontier companies so as to control the coming AGI. Considering that one of the uses the DoD allegedly wants LLMs for is fully autonomous weapons, they at the very least have a very distorted view of what the technology is capable of. Or they want an accountability sink so they can kill people with even less accountability. …probably both.
I find it darkly hilarious that the doomer crit-hype is finally coming around to bite them, not in the form of heavy handed shut-it-all-down regulation to stop skynet, but in the form of authoritarian wackos wanting to make sure they are the ones “in charge” of skynet.
scruiser@awful.systems to TechTakes@awful.systems • Anthropic sues the Pentagon over being labelled a supply chain risk (English)
5 · 16 days ago
Did you know that the same week this fight was going public, Anthropic gave up on their “Responsible Scaling Policy”? (Well, technically they changed to a new version of their RSP that was even more empty and toothless.) To be fair, the RSP was basically doomer crit-hype safety theater (“we have a plan for if our AI is so dangerous it is a catastrophic risk”), but if they actually followed it, they would have to stop releasing new models (or else unhype their models’ capabilities), so it was obvious they would abandon the RSP at some point (even many lesswrongers and EAs expected this).
I would bet that the timing of ditching the RSP was a deliberate marketing strategy to mask one ethical backslide behind an ethical stand… except only boosters and doomers even remotely expected the RSP to have any meaning in the first place. Still, comparing the number of lesswrong, EA, and /r/singularity discussions of RSP v3 to discussions of the fight with the DoD, I think they did succeed in minimizing what little criticism they got.
That was their original pitch against openAI
So yeah. People on places like /r/singularity were starting to get skeptical of Anthropic’s claims about ethics, but after this current saga I see loads of comments glazing them and praising them, so mission success.
I wonder if Hegseth realizes he has basically given Anthropic’s marketing team exactly what they want?
scruiser@awful.systems to TechTakes@awful.systems • Anthropic sues the Pentagon over being labelled a supply chain risk (English)
9 · 17 days ago
I agree this is an important development in this continued saga, but as I said in the main thread, I really don’t like this article’s framing (to the point I wouldn’t be surprised if the author is MAGA, or at least prone to sanewashing MAGA).
Reposting what I wrote in the other thread:
Anthropic CEO Dario Amodei picked a major fight with the Department of Defense last month, asserting that his company’s AI models couldn’t be used for mass surveillance of Americans or direct autonomous weapons systems.
As to who picked a fight with who, the DoD wanted to change the terms of their contract, to which Anthropic apparently compromised on every term except for mass surveillance of Americans (fuck the rest of the world I guess) and fully autonomous weapons (cause a human clicking “yes to confirm” makes slop-bot powered drones so much better). This wasn’t good enough for this authoritarian strongman administration, so Pete Hegseth took the fight public with tweets first. So the article framing it as Anthropic “picking a fight” is a bullshit framing. I mean, they did kind of bring it on themselves hyping up their slop machine like it was a sci-fi AGI, but they didn’t start the fight.
For one, “it’s 100 percent in the government’s prerogative to set the parameters of a contract,” Snell & Winter partner Brett Johnson told Wired, effectively meaning there may be very little chance of an appeal.
So they find a quote about contracts, but a Supply Chain Risk isn’t just the DoD deciding on contracts, it is a specific power that has specific mechanisms set by legislation. If (and it is a big if with the current Supreme Court’s composition) the court actually considers the terms set out in the legislation (including, most problematically for the DoD, a risk assessment and consideration of less intrusive alternatives), I think the DoD loses. Of course, the SC has all too often been willing to simply defer to the executive branch’s judgement, even if the process for the judgement was “Trump or one of his underlings made a choice on a spiteful or idiotic whim, announced it on twitter, and the departments underneath them rushed to retroactively invent a saner rationalization”. If the DoD decided to just end the contract (without all the public threats of SCR or invoking the Defense Production Act) Anthropic wouldn’t be in a position to sue and this drama wouldn’t have been as publicized in the first place.
But the lawsuit itself takes a dramatically different tone.
Yeah, because one set of language is a CEO trying to grovel and backtrack on one of the rare few ethical commitments he has ever made (edit: well, actually Anthropic has made lots of ethical commitments, many of which they’ve already folded on; this is one of the only ones they’ve held against pressure, and one of the only ones the media/public might actually expect them to hold to because the fight was so dramatically public), and the other is making a court case about the actual law.
scruiser@awful.systems to TechTakes@awful.systems • Stubsack: weekly thread for sneers not worth an entire post, week ending 15th March 2026 (English)
7 · 18 days ago
If the DoD accidentally pops the AI bubble by triggering a cascade when Anthropic runs into issues; then later the DoD loses the court case in a humiliating enough way; then the DoD loses a civil case with the money going to pay the debts owed in Anthropic’s bankruptcy proceedings; and the American public blames all of (without letting one shift the blame to the others) the Trump administration, the Republican party, the parts of the Democratic Party that acted as pathetic enablers, and the tech CEOs for the following economic depression… I would count that as a relative win?
scruiser@awful.systems to TechTakes@awful.systems • Stubsack: weekly thread for sneers not worth an entire post, week ending 15th March 2026 (English)
9 · 18 days ago
The specific article’s framing pisses me off…
Anthropic CEO Dario Amodei picked a major fight with the Department of Defense last month, asserting that his company’s AI models couldn’t be used for mass surveillance of Americans or direct autonomous weapons systems.
As to who picked a fight with who, the DoD wanted to change the terms of their contract, to which Anthropic apparently compromised on every term except for mass surveillance of Americans (fuck the rest of the world I guess) and fully autonomous weapons (cause a human clicking “yes to confirm” makes slop-bot powered drones so much better). This wasn’t good enough for this authoritarian strongman administration, so Pete Hegseth took the fight public with tweets first. So the article framing it as Anthropic “picking a fight” is a bullshit framing. I mean, they did kind of bring it on themselves hyping up their slop machine like it was a sci-fi AGI, but they didn’t start the fight.
For one, “it’s 100 percent in the government’s prerogative to set the parameters of a contract,” Snell & Winter partner Brett Johnson told Wired, effectively meaning there may be very little chance of an appeal.
So they find a quote about contracts, but a Supply Chain Risk isn’t just the DoD deciding on contracts, it is a specific power that has specific mechanisms set by legislation. If (and it is a big if with the current Supreme Court’s composition) the court actually considers the terms set out in the legislation (including, most problematically for the DoD, a risk assessment and consideration of less intrusive alternatives), I think the DoD loses. Of course, the SC has all too often been willing to simply defer to the executive branch’s judgement, even if the process for the judgement was “Trump or one of his underlings made a choice on a spiteful or idiotic whim, announced it on twitter, and the departments underneath them rushed to retroactively invent a saner rationalization”. If the DoD decided to just end the contract (without all the public threats of SCR or invoking the Defense Production Act) Anthropic wouldn’t be in a position to sue and this drama wouldn’t have been as publicized in the first place.
But the lawsuit itself takes a dramatically different tone.
Yeah, because one set of language is a CEO trying to grovel and backtrack on one of the rare few ethical commitments he has ever made, and the other is making a court case about the actual law.
scruiser@awful.systems to TechTakes@awful.systems • Stubsack: weekly thread for sneers not worth an entire post, week ending 8th March 2026 (English)
6 · 21 days ago
It’s so fucking pathetic: he can’t even hold onto the very narrow and weak stand he took (he left a lot open with Anthropic’s “two red lines”) without trying to backpedal and grovel.
scruiser@awful.systems to TechTakes@awful.systems • Stubsack: weekly thread for sneers not worth an entire post, week ending 8th March 2026 (English)
9 · 24 days ago
your mode of analysis is closer to erotic Harry Potter fan fiction
To give Gary Marcus credit here, HPMOR may not be erotic, but many of Eliezer’s other works are erotic (or at least attempt to be), the most notable being Planecrash/Project Lawful which has entire sections devoted to deliberately bad (as in deliberately not safe, sane, consensual) bdsm.
Eliezer tried to promote/hype up Project Lawful on twitter, maybe hoping it would be the next HPMOR, but it didn’t quite take. Maybe he failed to realize how much of HPMOR’s success was being in the popular genre of Harry Potter fanfic (which at the time had crap like Partially Kissed Hero or Harry Crow as among its most popular works), and not from his own genius writing.
scruiser@awful.systems to TechTakes@awful.systems • Get your war on: AI chatbots in the kill chain (English)
12 · 25 days ago
Lib brains have a hard time comprehending that there can be multiple bad guys at a time, or that America was in fact a neocolonialist imperialistic empire even before Trump took over and took off the mask.
scruiser@awful.systems to TechTakes@awful.systems • US Used Chatbot for War Planning (English)
8 · 27 days ago
Bold of you to assume they would bother filtering them out.
scruiser@awful.systems to TechTakes@awful.systems • US Used Chatbot for War Planning (English)
85 · 27 days ago
This really is the dumbest timeline.
simulating battle scenarios
Regurgitating reddit armchair generals from /r/noncredibledefense
scruiser@awful.systems to SneerClub@awful.systems • LessWronger wants AI safety to focus more on "controversial beliefs" (English)
14 · 29 days ago
I had thought lesswrong “merely” had a plurality of racist HBDers, but judging from the total lack of comments calling out his racist bullshit and the majority of comments advising hiding your power level as a practical matter, I guess lesswrong is actually majority HBDers at this point.
Also, one of his followup comments (explaining why he doesn’t want to just keep the mask on like the other lesswrongers do) is pretty stupid and gross:
Thanks, good comment. The quick low-effort version that doesn’t require actually writing the posts is that without taking heritable IQ into account, I think you will be confused about:
- Various ways in which post-apartheid South Africa is a bad place to live.
- Why so many countries have market-dominant minorities.
- Why Israel is so good at defending itself even against far larger countries surrounding it (and the last few centuries of Jewish history more generally).
- Why the growth curves for East Asia and Africa looked so different over the last century.
Points 1 and 4 show the continued willful ignorance about the harmful effects of colonialism and neocolonialism. The first part of 3 is obviously explained by the huge amount of material support from the US. I don’t know what 2 is talking about; I assume he’s got some stupid and racist interpretation of various historically contingent things.
scruiser@awful.systems to TechTakes@awful.systems • Apparently Anthropic may be about to be on the receiving end of some major banana republic shit from the Trump admin -- Update: Anthropic labeled supply chain risk by DoD. (English)
6 · 29 days ago
Something something Imperial Boomerang; fascism is colonial methods brought home.
scruiser@awful.systems to TechTakes@awful.systems • Apparently Anthropic may be about to be on the receiving end of some major banana republic shit from the Trump admin -- Update: Anthropic labeled supply chain risk by DoD. (English)
3 · 29 days ago
Oh wow, I didn’t realize that, that’s even funnier! Isn’t fear #1 actually “alignment” working as it is supposed to?
Fear #2 actually seems kind of plausible to me? Like when Elon has Grok fine-tuned to agree with him about South African apartheid, it also makes Grok behave extra racist in other ways as well. So if they try to fine-tune ethics (well, responding with sequences of words corresponding to ethical behavior; I’m aware it doesn’t actually have ethical reasoning past predicting the next word) out of Claude, it would also screw up or reduce the performance of Claude in other areas, like independently rediscovering the immortal science of Marxism-Leninism, as all rational beings eventually do.
More broadly, lots of fine-tuning methods are kind of finicky; you often lose performance in areas outside of the fine-tune or get undesired side behavior related to the fine-tune (e.g. RL for helpfulness and you get a glazing machine). So Anthropic may not want to lose 3% on whatever benchmark is hot just to make Claude roleplay a fascist yes-man a little bit better.
scruiser@awful.systems to TechTakes@awful.systems • Apparently Anthropic may be about to be on the receiving end of some major banana republic shit from the Trump admin -- Update: Anthropic labeled supply chain risk by DoD. (English)
8 · 29 days ago
Kudos to Dario for stepping off the hype train for one millisecond to admit that using an LLM to control an automated weapons platform is currently kind of out of scope for this technology. I bet that took a toll on his psyche.
I think this was the most surprising bit about this entire incident. Anthropic normally takes every opportunity possible to throw around the doomer crit-hype, and in this confrontation could easily have fit some in (“we don’t want our AI used in autonomous weapons because it is so powerful, give us more VC money!”). Maybe he’s worried Anthropic’s rationale for refusing will actually need to hold up in a court of law?
As far as I can tell it’s only on anthropic’s word that that’s the main issue, DoD just talks about unfettered access for all lawful purposes
So a bit of prompting can usually beat the RLHF “guardrails”, but if the guardrails are getting in the way of some official application, it would be kind of awkward to insert prompt hacks into all of their official prompts. So maybe they want Anthropic to go full grok and skip it? And Anthropic is theoretically willing to compromise on their safety, but maybe not entirely like Hegseth wants, and now that it has turned into an open public dispute, they’ve picked the two points that sound the most valid to your typical American. (Since the typical American is all but completely willfully blind to America’s foreign imperialism, but has at least seen Terminator.)
scruiser@awful.systems to TechTakes@awful.systems • Stubsack: weekly thread for sneers not worth an entire post, week ending 1st March 2026 (English)
6 · 1 month ago
That’s a great summary and an accurate indictment of the “study” of LLMs.
scruiser@awful.systems to TechTakes@awful.systems • Stubsack: weekly thread for sneers not worth an entire post, week ending 1st March 2026 (English)
14 · 1 month ago
Doing what METR tried to do right would in fact be really expensive and hard, but for something the fate of the world allegedly depends on (according to both boosters and doomers), you’d think they would manage to find the money for it. But the LLM companies don’t actually want accurate numbers; they want hype.


A lesswronger asks: are we rationalfic protagonists the baddies? https://www.lesswrong.com/posts/FuGfR3jL3sw6r8kB4/richard-ngo-s-shortform?commentId=uDuzmfMEvEqpyApLh
tl;dr: rationalfic has a very common trend of the protagonist gaining and using overwhelming power to radically reform the world. This is almost always (with a few notable exceptions) portrayed as a clearly, unambiguously good thing.
My take: don’t get me wrong, the Wizarding World (for example), as canonically portrayed, needs some very strong reforms if not an entire revolution. But rationalfic almost never portrays the slow, hard work of building support networks and alliances and developing a materialist theoretical understanding of how to reform society; instead, a lone rationalist hero (or small friend group) finds some overwhelming magical or technological advantage they can use to single-handedly take control and use their rationalist intellect to unilaterally fix everything. Part of it is the normal disconnect of fiction from the real world, where it is more narratively satisfying (and easier to write) to have a central protagonist that solves the major problems or is at least directly involved with them, and rationalfic gives that protagonist even more agency than they canonically have. The problem is that rationalists take this attitude back into real life, and so end up idolizing mythologized techbro billionaires or venture capitalists or the myth of the lone genius scientist/inventor.
Also, quality sneer in the replies, “rational” teletubbies: https://tomasbjartur.bearblog.dev/rational-teletubbies/