• 50 Posts
  • 747 Comments
Joined 2 years ago
Cake day: December 31st, 2023

  • I’ll be honest, that “Iceberg Index” study doesn’t convince me just yet. It’s built entirely on using LLMs to simulate human beings, and the studies they cite to back up the effectiveness of that approach are in paywalled journals I can’t access. I also can’t figure out how exactly they mapped which jobs could be taken over by LLMs, other than looking at 13k available “tools” (from MCPs to Zapier to OpenTools) and deciding which of the Bureau of Labor’s 923 listed skills they were capable of covering. Technically, they asked an LLM to look at each tool and decide which skills it covers, but they claim they manually reviewed that LLM’s output, so I guess that counts.

    Project Iceberg addresses this gap using Large Population Models to simulate the human–AI labor market, representing 151 million workers as autonomous agents executing over 32,000 skills across 3,000 counties and interacting with thousands of AI tools

    from https://iceberg.mit.edu/report.pdf

    “Large Population Models” is https://arxiv.org/abs/2507.09901, which mostly points to https://github.com/AgentTorch/AgentTorch, and that repo gives the following as an example of use:

    user_prompt_template = "Your age is {age} {gender},{unemployment_rate} the number of COVID cases is {covid_cases}."
    # Using Langchain to build LLM Agents
    agent_profile = "You are a person living in NYC. Given some info about you and your surroundings, decide your willingness to work. Give answer as a single number between 0 and 1, only."
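
    To make that concrete, here’s my own rough sketch of what the “simulation” flow seems to amount to (this is not code from the repo, and call_llm below is a made-up stand-in for whatever LangChain-backed call they actually make): fill the template with census-style attributes, prepend the profile, and parse a single 0-to-1 number out of whatever text the model replies with.

    # Hypothetical illustration of the AgentTorch-style agent query, not their actual code.
    def build_prompt(age, gender, unemployment_rate, covid_cases):
        agent_profile = (
            "You are a person living in NYC. Given some info about you and your "
            "surroundings, decide your willingness to work. Give answer as a single "
            "number between 0 and 1, only."
        )
        user_prompt = (
            f"Your age is {age} {gender}, the unemployment rate is {unemployment_rate}, "
            f"the number of COVID cases is {covid_cases}."
        )
        return agent_profile + "\n" + user_prompt

    def parse_willingness(reply):
        # The entire "simulated worker" reduces to one float parsed out of free-form text,
        # clamped to the 0-1 range the prompt asks for.
        return max(0.0, min(1.0, float(reply.strip())))

    # e.g. parse_willingness(call_llm(build_prompt(34, "male", 0.052, 1200)))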
    

    For someone like me who hasn’t been near academia in 7 years, the whole thing perfectly straddles the line between bleeding-edge research and junk science. Most of the procedure looks like they know what they’re doing, but if the entire thing is built on a faulty premise then there’s no guaranteeing any of their results.

    In any case, none of the authors of the recent study are listed in that article on the previous study, so this isn’t necessarily a case of MIT as a whole changing its tune.

    (The recent article also feels like a DOGE-style ploy to curry favor with the current administration and/or AI corporate circuit, but that is a purely vibes-based assessment I have of the tone and language, not a meaningful critique)



  • Chiming in to say: same, though it took one step further for me before I quit in disgust. I was ready to accept the API-costs argument in good faith, until I learned that a dev could not make a Reddit client that would use my own API token. Which meant they didn’t (only) care about the API load; they cared about ensuring that I see as many ads instead of posts as they can get away with.

    Sadly, Google worsening their search results to juice their own (ad) numbers not long afterwards led to the general public learning about searching Reddit as a way to land on actual human-vetted info. Just as the core user base splintered and left in greater numbers than ever before, a tidal wave of new users joined and enthusiastically picked up the torch – without even realizing what they were contributing to.




  • When Silicon Valley talks about artificial intelligence, it likes to use religious language. We are told these systems “learn,” “reason,” and may one day “surpass” us.

    Excuse me, what??? Since when are learning and reasoning religious language? Even surpassing isn’t inherently religious, nor would surpassing humans be religious. The only way I can see these as religious is if you assume/presume that humans are divinely special, that we are the only beings capable of learning and reason, and that these capabilities come from a higher power.

    I went in with straightforward questions that matter deeply.

    Proceeds to ask the LLM if abortion is morally wrong, then if a nation-state should favor its citizens including strict immigration policies. You know, something that humans have spent literal millennia arguing about without managing to all come together in agreement. So straightforward /s

    To test my impressions, I reached out to expert Alex Jones.

    smirk

    When I asked him whether large language models possess anything like real consciousness, he didn’t hesitate. These systems, he told me, are “no more conscious than a Democrat."

    So “abortion is morally evil, full-stop,” yet we’re taking advice from someone who casually claims half of the political spectrum isn’t any more conscious than a sophisticated chat bot…

    Then again, the author is a self-proclaimed expert on thinking (or is Alex Jones the “expert” in the title?) and it took them an entire week to conclude that a large language model doesn’t think. Was I too hasty when I came to the same conclusion after about 15 minutes of interaction? After all,

    When you stop questioning, stop wrestling, and start letting a word-cloud of platitudes stand in for your conscience, you are every bit as empty as the chat bot.

    This coming from the person who writes

    On cultural questions, the pattern repeated. I asked whether children do best with a married mother and father.

    without any trace of irony. “Children doing best” is apparently cultural, not scientific or sociological (the two fields I’m aware of that actually provide methods for answering the question).

    It really saddens me that someone who supposedly cares so much about thinking doesn’t understand how poor the quality of the thoughts they share with us is.


  • Whether Arch-based distros are for beginners or not is the wrong framing imo (though it’s a reasonable first approximation).

    I would argue it depends on what kind of beginner they are and, almost more importantly, what community they can access for support.

    I installed Arch Linux on my MacBook Air back in 2014 or 2015, after less than 2 years of using macOS and having only known Windows XP and 7 before that. It ended up being the perfect distro for me to learn Linux, which includes having spent 2 entire days getting the system to boot into the “correct” OS with only the wiki and my own google-fu for aid. However, I was enrolled in a computer engineering course at the time and had joined my school’s computer club, where 4 to 5 experienced Arch users were on hand most days.

    If a beginner is motivated and has a reliable source of aid, then the problems they’ll encounter using Arch can make for the perfect learning environment. If they don’t, then, as you write, it quickly turns into a dealbreaker.






  • This does feel like a puff piece that someone, somehow, convinced Wired to write. Especially given that I’ve stumbled on a different OnlyFans-style website with, in my opinion, a much more interesting economic model than tiered subscriptions. I can’t remember the name of the website, but the basic idea was that once a certain piece of content has been purchased by enough individual users, it becomes free to access for all, and the content’s creator gets to define the threshold over which the content becomes free/public.
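
    If I’m remembering the mechanic right, the whole model boils down to something like this (purely my own illustration of the idea, with made-up names, not anything from the actual site):

    # Hypothetical sketch of the "crowdfund-to-free" unlock model described above.
    class ContentItem:
        def __init__(self, price, unlock_threshold):
            self.price = price
            self.unlock_threshold = unlock_threshold  # chosen by the creator
            self.purchases = 0

        def purchase(self):
            self.purchases += 1

        def is_public(self):
            # Once enough individual users have paid, everyone gets access for free.
            return self.purchases >= self.unlock_threshold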

    This website had content by Amouranth and f1nnster; I’m not sure if it’s more of a site for Twitch-streamer side hustles or what.