- cross-posted to:
- hackernews@lemmy.bestiver.se
- fuck_ai@lemmy.world
Does this specify the kinds of AI? Are none of these devs using code completion in their IDEs? Or refactoring tools? Because the bulk of them use AI these days.
that’s not what they mean when they say “ai” though.
the definitions changed since the “ai” hype.
I don’t consider `clang` tools to be AI. They parse the code logically and don’t do blind pattern matching and curve fitting. The rules they use are properly defined in code. If that was AI, then all compilers made with LLVM would be AI.
I’m sure everyone has already explained this to you given the number of downvotes, but algorithms aren’t equal to AI.
Ever since the rise of AI, people seem to have lost the ability to recall things prior to 2019.
I mean, doesn’t it heavily depend on what you refer to as AI?
ML algorithms come very close to LLMs and were, back in the day, referred to as AI. They are also used in code completion.
Also, both of these are algorithms, but with weights defined by data input.
You seriously misunderstand what the acronyms you’re using refer to. I’d suggest some reading before commenting, next time.
If something uses a lot of `if`/`else` statements to do stuff like become a “COM” player in a game, it is called an Expert System. That is essentially what in-game “AI” used to be. That was not an LLM.
Stuff like `clazy` and `clang-tidy` are neither ML nor LLM. They don’t rely on curve fitting or mindless grouping of data points.
Parameters in them are decided based on the programming language specification, and tokenisation is done directly using the features of the language. How the tokens are used is also determined by hard logic rather than fuzzy logic, and that is why the resultant options you get in the completion list end up being valid syntax for said language.
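To make the distinction concrete, here is a toy sketch (illustrative Python, nothing to do with the actual `clang-tidy` internals) of how a rule-based completer works: the allowed suggestions come from hand-written grammar rules, not from trained weights, which is why every suggestion is valid syntax.

```python
# Hypothetical, hand-written grammar rules: which tokens may follow which context.
# Everything here is decided by explicit rules, not by curve fitting on data.
GRAMMAR_RULES = {
    "statement_start": ["if", "while", "for", "return"],
    "after_if": ["("],
    "after_condition": ["{"],
}

def complete(context: str, typed_prefix: str) -> list[str]:
    """Return the completions the grammar allows that match what was typed."""
    allowed = GRAMMAR_RULES.get(context, [])
    return [token for token in allowed if token.startswith(typed_prefix)]

print(complete("statement_start", "wh"))  # ['while'] -- decided by rules, not weights
```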
Now if you are using Cursor for code completion, of course that is AI.
It is not programmed using the features of the language, but iterated on until it produces output that matches the features of the language. It is like putting a billion monkeys in front of typewriters, then selecting the one that makes something Shakespeare-ish and killing off all the others. Then cloning the selected one, and rinse and repeat.
And that is why it takes a stupendously disproportionate amount of energy, time and money to train something that gives an output that could otherwise easily be done better using a simple `bash` script.
No, because AI replaces a human role.
Code completion does not replace a human role, that’s like saying that spell check is AI.
I am not talking about what it does, I am talking about what it is.
And all tools do tend to replace human labor. For example, tractors replaced many farmhands.
The thing we face nowadays, and this is by no means limited to things like AI, is that fewer jobs are created by new tools than old ones are destroyed (in my earlier simile, a tractor needs mechanics and such).
The definition of something is entirely disconnected from its usage (mainly).
And even though everyone now calls LLMs AI, there is plenty of scientific literature on, and plenty of things that have been called, AI before. When it boils down to it, all of these are algorithms.
The thing with machine learning is just that it is an algorithm that fine-tunes itself (which is often blackbox-ish, btw). And strictly speaking, LLMs, commonly referred to as AI, are a subclass of ML built on newer technology.
I did not, and do not, make any statement about the value of that technology or my stance on it.
But these tools are not mere algorithms or ML products; they are LLM-backed.
Emmet has been around since 2015. So it was definitely not LLM-backed.
My friend, nobody says all of them are LLM-backed, but some are.
I’m saying that code completion does not constitute AI and certainly isn’t LLMs.
I then provided an example of why that isn’t the case.
You decided to respond to this by pointing out that some LLM may be involved in some code completion. You didn’t provide an example, though, so who knows if that’s actually true. It seems sort of weird to use an LLM for code completion, as it’s completely unnecessary and entirely inefficient, so I kind of doubt it.
I just want to point this out for a minute, because it sort of feels like you don’t know this: code completion is basically autocomplete for programmers. It’s doing basic string matching, so that if you type `fnc` it completes to `function()`. Hardly the stuff of AI.
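To illustrate just how mundane that is, here is a minimal sketch of that kind of matching (the symbol list and the subsequence rule are made up for illustration, not any particular editor’s implementation):

```python
# Symbols the editor already knows about (in practice, scraped from the open files).
KNOWN_SYMBOLS = ["function", "for", "format", "filter", "find_first"]

def is_subsequence(typed: str, candidate: str) -> bool:
    """True if every typed character appears, in order, inside the candidate."""
    remaining = iter(candidate)
    return all(ch in remaining for ch in typed)

def complete(typed: str) -> list[str]:
    """Plain string matching: no model, no training, no probabilities."""
    return [symbol for symbol in KNOWN_SYMBOLS if is_subsequence(typed, symbol)]

print(complete("fnc"))  # ['function']
```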
The seal looks like this:

Code completion is probably a gray area.
Those models generally have much smaller context windows, so the energy concern isn’t quite as extreme.
You could also reasonably make a claim that the model is legally in the clear as far as licensing, if the training data was entirely open source (non-attribution, non-share-alike, and commercial-allowed) licensed code. (A big “if”)
All of that to say: I don’t think I would label code-completion-using anti-AI devs as hypocrites. I think the general sentiment is less “what the technology does” and more “who it does it to”. Code completion, for the most part, isn’t deskilling labor, or turning experts into chatbot-wrangling accountability sinks.
Like, I don’t think the Luddites would’ve had a problem with an artisan using a knitting frame in their own home. They were too busy fighting against factories locking children inside for 18-hour shifts, getting maimed by the machines or dying trapped in a fire. It was never the technology itself, but the social order that was imposed through the technology.
Lovely writing, I agree 👍
Jesus fuck that’s some goal post moving.
Even yesteryear’s code completion systems (that didn’t rely on LLMs) are, technically speaking, AI systems.
While the term “AI” became the next “crypto” or “Blockchain”, in reality we’ve been using various AI products for the better part of the past 30 years.
They were technically Expert Systems.
AI was the marketing term even then. Now they are LLMs, and AI is still the marketing term.
We used to call the code that determined NPC behaviour AI.
It wasn’t AI as we know it now but it was intended to give vaguely realistic behaviour (such as taking a sensible route from A to B).
Used to?
Lol gramps here thinks bots are AI 💀💀 bro
And honestly lightweight neural nets can make for some interesting enemy behavior as well. I’ve seen a couple games using that and wouldn’t be surprised if it caught on in the future.
“AI” has become synonymous with “Generative AI”
You mean code completion that just parses a file into an AST and does fuzzy string matching against tokens used to build that AST? I would not personally classify that as AI. It’s code that was written by humans and is perfectly understandable by humans. There is no probabilistic component present, there is no generated matrix, there’s no training process, it’s just simple parsing and string matching.
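Roughly, that kind of completion boils down to something like this sketch (Python’s own tokenizer standing in for a real parser front end; the source snippet and names are made up):

```python
import io
import tokenize
from difflib import get_close_matches

# Made-up example source file to complete against.
SOURCE = '''
def fetch_user(user_id):
    user_record = load_record(user_id)
    return user_record
'''

def identifiers(source: str) -> set[str]:
    """Collect name tokens from the source, the way a parser front end would."""
    tokens = tokenize.generate_tokens(io.StringIO(source).readline)
    return {tok.string for tok in tokens if tok.type == tokenize.NAME}

def complete(partial: str, source: str) -> list[str]:
    """Fuzzy string matching of what was typed against names already in the file."""
    return get_close_matches(partial, identifiers(source), n=3, cutoff=0.8)

print(complete("usr_record", SOURCE))  # ['user_record'] -- deterministic, human-readable code
```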
It’s early and I’m tired and probably in a poor mood and being needlessly fussy, so I apologize if this completely misses the point of your comment. I agree that there’s other stuff we’ve been using for ages which could be reasonably classified as “AI,” but I don’t feel like traditional code completion systems fit there.
AI doesn’t have to be probabilistic; a classical computer science definition of AI states that it has to be an actor that reacts to some percepts according to some policy.
By that definition a calculator is AI.
yes we could definitely say that a calculator, technically, is an AI. but we usually don’t think of the calculator as an agent, and it doesn’t really make any decisions, as it just displays the result when prompted
That’s my point. These random definitions of AI that have been come up with by the most pedantic people in existence are not in any way helpful. We should ignore them.
They seek to redefine AI as basically anything that a computer does. This is entirely unhelpful and is only happening because they need to be right on the internet.
These irritating idiots need to go away for they serve no purpose.
but that’s not a redefinition, it was originally defined that way, like back in the 60s, by the people who started this field of research. I think a calculator is a bit of an absurd example, but an NPC that pathfinds towards the player to attack them is still AI, no matter how you look at it
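For what it’s worth, that textbook definition is easy to show in a few lines. A toy sketch (the grid coordinates, names, and greedy policy are made up for illustration; real game AI is fancier):

```python
# The classic agent loop: perceive the player's position, act according to a fixed policy.
def policy(npc_pos: tuple[int, int], player_pos: tuple[int, int]) -> str:
    """Map a percept (relative player position) to an action."""
    dx = player_pos[0] - npc_pos[0]
    dy = player_pos[1] - npc_pos[1]
    if dx == 0 and dy == 0:
        return "attack"
    # Greedy pathfinding: step along the axis with the larger gap.
    if abs(dx) >= abs(dy):
        return "move_east" if dx > 0 else "move_west"
    return "move_north" if dy > 0 else "move_south"

print(policy((0, 0), (3, 1)))  # 'move_east' -- percepts in, action out, no ML anywhere
```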
Here is a frog, please help me split its hairs
I would primarily understand it as being free of generative AI (picture and sound), which is what is most obvious when actually playing a game. I’m personally not against using LLMs for coding if you actually know what you’re doing and properly review the output. However at that point most will come to the conclusion that you could write the code manually anyways and probably save time.
Would using AI to generate samples to get a framework of the product be permitted or not? Is placeholder generation allowed?
Since you would never see it that’s pretty much irrelevant. Clearly this is about AI generated art and AI generated assets
Whether or not you use AI to grey box something is a pointless distinction given the fact that there’s no way to prove it one way or the other.
But it still removes labor from the working class. My point is that the lines are blurry. You practically cannot draw a useful line based on the tooling used.
The AI label needs to be present if the finished product contains AI generated assets. So AI generated code, or AI generated art.
In the example above you grey boxed in AI but then replaced all the assets with ones that humans made. There is no distinction there between doing that and just having literal grey boxes.
You couldn’t require an AI label in that scenario because it would be utterly unenforceable. How would a developer prove if they did or did not use AI for temporary art?
So yes, you can draw a line: does the finished product contain AI generated assets? You don’t like that definition because you’re being pedantic, but your pedantic interpretation isn’t enforceable, so it’s useless.
Is a scene arranged by AI not undesirable since it does not have artistic intent?
I feel like you have never actually developed a game. Because what you’re arguing is just weird. It makes no logical sense.
A grey box is the most basic version of what a game will ever be; it never bears any resemblance to the finished product. It is the most basic, fundamental interpretation of game mechanics and systems. The grey box has no bearing on the final result of the game.
No grey box contains any aspect of artistic intent; the art team are never even involved in its creation, it’s always just developers doing things. Go look up some game blogs.
Personally speaking I don’t care at all about dev tools, as they have always been used. Vibe coding does bother me though - if you don’t know HOW to code, you probably shouldn’t be doing it.
The real issue though is using AI generated assets. If you have a game that uses human made art, story, and music, no one is going to complain about you using AI. Even if you somehow managed to get there via vibe coding.
These are exactly my thoughts. You need to specify. Is a product AI when Windows is used to develop it? Windows is an “AI” product, in the sense that AI assisted in producing it.
Labels are meaningless without sensible rules and enforcement.
Another case of Lemmy users angrily downvoting because they don’t understand how the world works. These are exactly the questions that need to be asked.
Right now, I could slap the label “AI free” on my completely AI generated game and just claim that I interpret it as “the game doesn’t use gen AI while running”.
Removed by mod
When everyone else is selling poison, selling something actually edible is a pretty good move.
AI free art (aka theft free art) is like cruelty free cotton. A lot of people do, and should, care.
You tout an 80 dollar price, entirely ignoring that indie games are often sub-40. Nice ragebait.
Lmao what is this comment?
Are you really conflating the idea that people want art made by people with racism and hard-right politics?
Well Hitler was a painter, checkmate liberals.
He was a Dew.
Yeah he also had a single testicle, what’s your point?
Edit: I’m a dipshit, that’s a different user. Probably just shitposting XP
You’re not a dipshit. I AM a dipshit
I actually got doxing/threats of physical violence here on Lemmy for pushing back on idiots claiming being against AI was like being racist. These people are insufferable, and no matter what they think their intentions are, the consequence of their ideology is a sweeping under the rug of actual injustice, systematic prejudice and violence.
Imagine thinking that disliking AI is racist, or a class war wedge issue.
No one wants AI slop.
Y’all don’t realize how much of a bubble you’re in on the anti-AI front; the vast majority of normies are catching on and using genAI.
Tech bros, for some reason, always trying to use group pressure as a legitimate argument.
That’s literally what the comment above it was doing too though. It’s a very common anti-AI argument to appeal to social proof.
I’m not even advocating; I’m just trying to inform you people, who are clearly in a bubble thinking everyone hates AI. They don’t, and that became crystal clear at my recent family gatherings, where I was the one tech guy.
Since we’re sharing anecdotes, everyone at my recent family gathering was staunchly against AI and I’m the only tech person in it.
Being anti AI doesn’t mean you’re in some isolated bubble.
I think Lemmy in particular has a bad problem of assumed groupthink. It makes sense, since most of us left prior platforms for freedom and decentralization reasons, but stuff like boycotting major companies, switching to Linux, etc. is all much less prevalent than some very vocal people on Lemmy think.
I’m glad you had a different experience, but I see AI use increasing all around me by otherwise nontechy people, whereas I get on Lemmy and you know what the majority of the opinion is here.
This is still “you’re the minority, better follow the herd, just saying”.
No, it’s “you all seem to think everyone else also hates AI, but they don’t” it was directly in response to the comment “no one wants AI slop.”
Thank you for your concern.
“Normies” are idiots. They’re the same people denying the efficacy of vaccines or the veracity of the moon landing. What they do should NOT be used to validate the correctness of something.
That entire statement is a lie. Antivaxers and other conspiracy nuts are just as tiny a minority as the Lemmy bubble. If you want to discredit the “normies”, at least use arguments that are actually true.
Ah yes, hand-wave away everything you don’t like as idiots who deny vaccines and the moon landing.
I am talking about normal normies, not fringe conspiracy normies.
Are you being serious with this reply? Holy shit.