BabaIsPissed [he/him]

  • 0 Posts
  • 11 Comments
Joined 2 years ago
Cake day: June 10th, 2022

  • We consistently find across all our experiments that, across concepts, the frequency of a concept in the pretraining dataset is a strong predictor of the model’s performance on test examples containing that concept. Notably, model performance scales linearly as the concept frequency in pretraining data grows exponentially

    This reminds me of an older paper on how LLMs can’t even do basic math when examples fall outside the training distribution (note that this was GPT-J, and as far as I’m aware no such analysis is possible with GPT-4, I wonder why), so this phenomenon is not exclusive to multimodal stuff (there’s a rough sketch of what the quoted log-linear scaling amounts to at the end of this comment). It’s one thing to pre-train a large-capacity model on a general task that might benefit downstream tasks, but wanting these models to be general purpose is really, really silly.

    I’m of the opinion that we’re approaching a crisis in AI: we’ve hit a barrier on what current approaches are capable of achieving, and no amount of data, labelers, tinkering with architectural minutiae or (god forbid) “prompt engineering” can fix that. My hope is that with the bubble bursting the field will have to reckon with the need for algorithmic and architectural innovation, with more robust standards for what constitutes a proper benchmark and for reproducibility at the very least, and maybe, just maybe, extend its collective knowledge from other fields of study past 1960s neuroscience and explore the ethical and societal implications of its work more deeply than the oftentimes tiny obligatory ethics section of a paper. That is definitely an overgeneralization, so sorry to any researchers out here <3, I’m just disillusioned with the general state of the field.

    You’re correct about the C-suites though: all they needed to see was one of those stupid graphs with the line going up, model capacity on the x axis and performance on the y axis, and their greed did the rest.
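
    As a rough sketch of what the quoted claim cashes out to: “performance scales linearly as concept frequency grows exponentially” just means accuracy is roughly linear in log(frequency), so every fixed bump in accuracy costs roughly 10x more data. The numbers below are made up for illustration, not taken from the paper.

        # Toy illustration of log-linear scaling: accuracy ≈ a * log10(freq) + b.
        # Frequencies and accuracies here are invented, purely to show the shape.
        import numpy as np

        freq = np.array([1e2, 1e3, 1e4, 1e5, 1e6])       # hypothetical concept counts in pretraining data
        acc = np.array([0.12, 0.21, 0.33, 0.41, 0.52])    # hypothetical zero-shot accuracy per concept

        # Fit accuracy against log10(frequency); a near-straight line is the "linear" part
        a, b = np.polyfit(np.log10(freq), acc, deg=1)
        print(f"accuracy ≈ {a:.2f} * log10(freq) + {b:.2f}")

        # Extrapolating shows the grim part: each extra ~0.1 accuracy needs ~10x the data
        for f in (1e7, 1e8):
            print(f"freq = {f:.0e} -> predicted accuracy ≈ {a * np.log10(f) + b:.2f}")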




  • got the Samsung Buds Pro 2 at half price recently and I kind of like them, but they were a bit underwhelming even at that price. I’ve never spent a lot on audio in general, so they were a big improvement over what I had, but there was no “wow” factor or anything. Plus having to install bloatware that asks for every permission under the sun sucks (why the fuck would a settings menu want to know my location???).

    I do think you underestimate how nice the noise cancellation can be, though. I moved to a big city and my hick ass cannot deal with all the fucking noise. Plus I’m clumsy and end up getting wires caught on everything, which means wired stuff also becomes e-waste fairly quickly.






  • u did the thing we designed the game to push you towards doing don’t you feel bad u monster lolololol

    To be fair to the game, that’s only the bait and switch at the very start with Toriel, designed to make the player reload and to introduce the save meta-fuckery with Flowey. From then on, the only incentive to do violence is getting stuck at a puzzle or completionism (which is at the heart of the meta-narrative).

    The commentary on violence by itself is naive, though (even the game points that out at one point), and if you don’t like the characters or roll your eyes at 4th-wall stuff, the whole thing falls apart pretty quickly.



  • just want to say thanks for this. I’ve been trying to get through a bunch of mental hurdles this past year, including social anxiety, but in the last few months I’ve been immersed in work and kind of neglected all that. I have a bunch of family I rarely see coming over for New Year’s, and I kind of lost sight of making the most of it because of some stupid deadline.