Gamer, rider, dev. Interested in anything AI.
Halls of Torment. A $5 game on Steam that's like a Vampire Survivors clone, but with more RPG elements to it.
These are amazing. Dell, Lenovo and I think HP made these tiny things, and they were so much easier to get than Pis during the shortage. Plus they're incredibly fast in comparison.
I’ve got a background in deep learning and I still struggle to understand the attention mechanism. I know it’s a key/value store but I’m not sure what it’s doing to the tensor when it passes through different layers.
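For what it's worth, the core operation is small enough to sketch in a few lines. This is a minimal numpy sketch of scaled dot-product attention, with the learned projection matrices omitted for brevity (so it's self-attention with identity projections, not a full transformer layer):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax: subtract the max before exponentiating.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V):
    # Scores: how strongly each query position attends to each key position.
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    weights = softmax(scores, axis=-1)  # each row sums to 1
    # Output: each position becomes a weighted average of the value vectors.
    return weights @ V

# Toy example: 3 tokens, embedding dim 4.
x = np.random.randn(3, 4)
out = attention(x, x, x)
print(out.shape)  # (3, 4) — same shape in, same shape out
```

So the tensor shape doesn't change as it passes through: each token's vector just gets replaced by a mixture of the other tokens' (value) vectors, with the mixing weights computed from query/key similarity.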
I’m on lemmy.world and the sidebar shows 401 subscribers. Is that just a sub count from the local instance or global?
Also not sure how that would be helpful. If every prompt needs to rip through those tokens first, before predicting a response, it’ll be stupid slow. Even now with llama.cpp, it’s annoying when it pauses to do the context window shuffle thing.
Same. I loved the idea of what VE does but playing the game was just a confusing mess for me. I stick to the same 8 mods I always use.
Any datasets produced before 2022 will be very valuable compared to anything after. Maybe the only way we avoid this is to stick to training LLMs on older data and prompt inject anything newer, rather than training on it.
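The pre-2022-corpus-plus-injection idea could look roughly like this. A hypothetical sketch, assuming some retrieval step supplies the newer documents (the function name and prompt format are illustrative, not any real API):

```python
# Instead of fine-tuning on post-2022 text (which may be AI-contaminated),
# retrieve it at query time and inject it into the prompt as context.
def build_prompt(question: str, recent_docs: list[str]) -> str:
    # recent_docs: newer material we don't want baked into the weights.
    context = "\n\n".join(recent_docs)
    return (
        "Answer using only the context below.\n\n"
        f"Context:\n{context}\n\n"
        f"Question: {question}\nAnswer:"
    )

prompt = build_prompt(
    "What changed in the 2023 release?",
    ["Release notes: the 2023 release switched to a new scheduler."],
)
print(prompt)
```

The trade-off is exactly the latency complaint above: everything injected this way has to be re-processed as prompt tokens on every request.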
Step 1) Have a bike that women want to talk about. I think that’s about it.
When I had a CRF250L, I’d regularly have women come up and ask how heavy it is, because they’re thinking of buying one. I’d put the bike on the ground and show them how to lift it. So… weirdest thing is dropping my bike intentionally to let women pick it up for me.
I hate these filthy neutrals…
Looks like the original base is suffering from some toxic fallout… might be a while. Enjoy building a new colony here!
The advancements in this space have moved so fast, it's hard to build a predictive model of where we'll end up and how fast we'll get there.
Meta releasing LLaMA produced a ton of innovation from open source that showed you could run models nearly at ChatGPT's level with fewer parameters, on smaller and smaller hardware. At the same time, almost every large company you can think of has made integrating generative AI a high strategic priority with blank-cheque budgets. Whole industries (also deeply funded) are popping up around solving context window memory deficiencies, prompt stuffing for better steerability, and better summarization and embedding of your personal or corporate data.
We’re going to see LLM tech everywhere in everything, even if it makes no sense and becomes annoying. After a few years, maybe it’ll seem normal to have a conversation with your shoes?