

It’s pretty sobering to see the financials laid out like this, and Ed even highlights some areas of uncertainty as though begging someone from OpenAI or Microsoft to provide the information to rebut his conclusions.
At its heart I think that’s the real problem. The right has built up “wokeness” into an all-consuming conspiracy theory that is responsible for everything. That was an effective way to take power, offering simple plans that hurt people large swathes of the voting public already believed had it too good, but now that they’re in power they need to actually do something about this fictitious issue they’ve convinced themselves is at the heart of all problems, and this is what that looks like. There is no simple, common-sense policy that would protect people from “being forced to say DEI shibboleths” or whatever they’re whining about, because nobody is forcing anyone to do that in the first place, but you can’t sweep in on a wave of “antiwokism” and then do nothing about it.
I’m actually reminded of the similar bizarro push against “color revolutions” that seems to animate Putin and some of the other crazies in international politics. Like, it’s pretty obviously bullshit, if for no other reason than that if it were possible to culturally mind-control a people into overthrowing their government by throwing a relatively tiny sum of money at some artists and shouting a lot, there’s no way the CIA would have gone after Kyrgyzstan and Ukraine but not Russia itself. But a lot of Russian foreign policy, including the invasion of Ukraine, seems to be at least partially a response to this imagined threat from a nonexistent conspiracy, and the blood flowing down the Dnipro is the cost the world is paying for that delusion.
Orange site really is out here reinventing hard behaviorism.
“We can’t directly observe internal states beyond our own subjectivity” -> “Let’s try to ignore them and see what we get” -> “We’ve developed a model that doesn’t feature internal states as a meaningful element of cognition” -> “There are no internal states” -> “I know I’m a stochastic parrot, but what are you?”
I think we’re going to see an ongoing level of AI-enabled crapification for coding and especially for spam. I’m guessing there’s going to be enough money from the spam markets to support a level of continued development to keep up to date with new languages and whatever paradigms are in vogue, so vibe coding is probably going to stick around on some level, but I doubt we’re going to see major pushes.
One thing that this has shown is how much of internet content “creation” and “communication” is done entirely for its own sake or to satisfy some kind of algorithm or metric. If nobody cares whether it actually gets read then it makes economic sense to automate the writing as much as possible, and apparently LLMs represent a “good enough” ability to do that for plausible deniability and staving off existential dread in the email mines.
I don’t know, I think we’re just talking about using AI to make the government more efficient, which is basically just the stated policy goal at this point.
How did the last round of manifest destiny work out for anyone who already lived in that “new” land again?
The fact that it appears to be trying to create a symbolic representation of the problem is interesting, since that’s the closest I’ve ever seen this come to actually trying to model something rather than just spewing raw text, but the model itself looks nonsensical, especially for such a simple problem.
Did you use any of that kind of notation in the prompt? Or did some poor squadron of task workers write out a few thousand examples of this notation for river crossing problems in an attempt to give it an internal structure?
Screw that quantum crap, what we really need is good old fashioned augury. Who wants to shell out for some sheep entrails?
I wouldn’t think that our poking and prodding is sufficient to actually impact usage metrics, and even if it is I don’t think diz is using a paid version (not that even the “pro” offerings are actually profitable per query) so at most we’re hastening the financial death spiral.
Besides, they’ve shown an ability to force the narrative of their choosing onto basically any data in order to keep pulling in the new investor money that’s driven this bubble well beyond any sensible assessment of the market’s demand for it.
Now hang on, how many of those conquests were for actual land-grab reasons, and how many were because they expected people to take issue with starting massive offensive wars for land-grab reasons, especially what with the declared intent to ethnically cleanse at least all of Eastern Europe? That’s definitely distinct from planning world conquest, right?
I mean the left has been mostly absent from America in general since at least the Reagan years, so it’s not all that surprising.
If Democrats win the midterm I think they get to shave her head.
I hate you so much for the word “overtonussy”.
Slather us in steak sauce and serve with a baked fucking potato because we are so cooked.
It’s the front-end of the hype cycle. The tech-debt problems will come home to roost in a year or two. The market can remain irrational longer than you can remain solvent.
This is the most VC-pilled possible response to people talking about the difficulties of actually working with LLMs. Who cares about the people who actually have to use this crap, think about what it could mean for Number!
““For once you have tasted flight you shall walk the earth with your eyes turned skyward, for there you have been and there shall you long to return” -Leonardo da Vinci” -Civilization IV narrator.
Apparently including a camera-esque filename in prompts for the latest Midjourney release can make it more photorealistic. Unfortunately it also looks like the distinctive AI art style was pretty key to preventing the usual set of AI-generated image “tells”. Mirrors, hands, teeth, etc. are all very visibly wrong.
Talk about ripped from the headlines!
Wasn’t that just the plot of The Caves of Steel? Or Robots of Dawn, which combined it with some weird sex thing?
I’m not familiar with the cannibal/missionary framing of the puzzle, but reading through it, the increasingly simplified notation reads almost like a comp sci textbook trying to find or outline an algorithm, except for an incredibly simple problem. We also see it once again explicitly acknowledge then implicitly discard part of the problem; in this case it opens by acknowledging that each boat can carry up to 6 people and that each boat needs at least one person, but somehow gets stuck on the pattern that we need to alternate trips left and right and that each trip can only consist of one boat. It’s still pattern matching rather than reasoning, even if the matching gets more sophisticated.
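For what it’s worth, the underlying search really is trivial to mechanize. Here’s a quick brute-force BFS sketch (my own, nothing to do with the model’s output; the state encoding and the `total_m`/`total_c`/`capacity` parameters are my assumptions based on the puzzle as described, with the classic 3-and-3 setup as defaults):

```python
from collections import deque

def solve(total_m=3, total_c=3, capacity=6):
    """BFS over states (missionaries_on_left, cannibals_on_left, boat_on_left)."""
    def safe(m, c):
        # Missionaries must never be outnumbered on either bank.
        return (m == 0 or m >= c) and (total_m - m == 0 or total_m - m >= total_c - c)

    start = (total_m, total_c, True)
    goal = (0, 0, False)
    prev = {start: None}
    q = deque([start])
    while q:
        state = q.popleft()
        if state == goal:
            # Reconstruct the sequence of bank states, start to finish.
            path = []
            while state is not None:
                path.append(state)
                state = prev[state]
            return path[::-1]
        m, c, boat = state
        for dm in range(total_m + 1):
            for dc in range(total_c + 1):
                # The boat needs at least one person and at most `capacity`.
                if not (1 <= dm + dc <= capacity):
                    continue
                nm, nc = (m - dm, c - dc) if boat else (m + dm, c + dc)
                if 0 <= nm <= total_m and 0 <= nc <= total_c and safe(nm, nc):
                    nxt = (nm, nc, not boat)
                    if nxt not in prev:
                        prev[nxt] = state
                        q.append(nxt)
    return None

print(solve())  # with capacity 6, all six people cross in a single trip
```

With a 6-person capacity the shortest solution is a single crossing, which is exactly the kind of degenerate case the left-right alternation pattern it fixated on would obscure.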