• merc@sh.itjust.works
      1 hour ago

      And also possibly checking in code with subtle logic flaws that won’t be discovered until it’s too late.

    • Daedskin@lemm.ee
      9 hours ago

I like the sentiment of the article; however, this quote really rubs me the wrong way:

      I’m not suggesting we abandon AI tools—that ship has sailed.

      Why would that ship have sailed? No one is forcing you to use an LLM. If, as the article supposes, using an LLM is detrimental, and it’s possible to start having days where you don’t use an LLM, then what’s stopping you from increasing the frequency of those days until you’re not using an LLM at all?

I personally don’t interact with any LLMs, neither at work nor at home, and I don’t have any issue getting work done. Yeah, there was a decently long ramp-up period — maybe about 6 months — when I started on my current project at work where it was more learning than doing; but now I feel like I know the codebase well enough to approach any problem I come up against. I’ve even debugged USB driver stuff, and, while it took a lot of research and reading USB specs, I was able to figure it out without any input from an LLM.

      Maybe it’s just because I’ve never bought into the hype; I just don’t see how people have such a high respect for LLMs. I’m of the opinion that using an LLM has potential only as a truly last resort — and even then will likely not be useful.

      • gamermanh@lemmy.dbzer0.com
        8 hours ago

        Why would that ship have sailed?

Because the tools are here and not going away

        then what’s stopping you from increasing the frequency of those days until you’re not using an LLM at all?

The actually useful shit LLMs can do. Their point is that relying mainly on an LLM hurts you; that doesn’t make it an invalid tool in moderation.

You seem to think of an LLM only as something you can ask questions to; that’s one of their worst capabilities, and far from the only thing they can do.

        • merc@sh.itjust.works
          1 hour ago

Because the tools are here and not going away

          Swiss army knives have had awls for ages. I’ve never used one. The fact that the tool exists doesn’t mean that anybody has to use it.

          The actually useful shit LLMs can do

          Which is?

        • Daedskin@lemm.ee
          3 hours ago

Because the tools are here and not going away

          I agree with this on a global scale; I was thinking about on a personal scale. In the context of the entire world, I do think the tools will be around for a long time before they ever fall out of use.

          The actually useful shit LLMs can do.

          I’ll be the first to admit I don’t know many use cases of LLMs. I don’t use them, so I haven’t explored what they can do. As my experience is simply my own, I’m certain there are uses of LLMs that I hadn’t considered. I’m personally of the opinion that I won’t gain anything out of LLMs that I can’t get elsewhere; however, if a tool helps you more than any other method, then that tool could absolutely be useful.

          • The_Terrible_Humbaba@slrpnk.net
            2 hours ago

            My 2 cents on this.

            I never used LLMs until recently; not for moral or ideological reasons but because I had never felt much need to, and I also remember when ChatGPT originally came out it asked for my phone number, and that’s a hard no from me.

But a few months ago I decided to give it another go (no phone number now), and found it quite useful sometimes. However, before I explain how I use it and why I find it useful, I have to point out that this is only the case because of how crap search engines are nowadays, with pages and pages of trash results and articles.

            Basically, I use it as a rudimentary search engine to help me solve technical problems sometimes, or to clear something up that I’m having a hard time finding good results for. In this way, it’s also useful to get a rudimentary understanding of something, especially when you don’t even know what terms to use to begin searching for something in the first place. However, this has the obvious limitation that you can’t get info for things that are more recent than the training data.

Another thing I can think of is that it might be quite useful if you want to learn and practice another language, since language is what it does best, and it can work as a sort of pen pal, fixing your mistakes if you ask it to.

In addition to all that, I’ve seen people make what are essentially text-based adventure games that allow much more freedom than traditional ones, since you don’t have to plan everything yourself - you can just give it a setting and a set of rules to follow, and it will mould the story as the player progresses. Basically DnD.

            • merc@sh.itjust.works
              11 minutes ago

              Basically, I use it as a rudimentary search engine

              The other day I had a very obscure query where the web page results were very few and completely useless. Reluctantly I looked at the Google LLM-generated “AI Overview” or whatever it’s called. What it came up with was completely plausible, but utter bullshit. After a quick look I could see that it had taken text that answered a similar question, and just weaved some words I was looking for into the answer in a plausible way. Utterly useless, and just ended up wasting my time checking that it was useless.

              Another thing I can think of, is that it might be quite useful if you want to learn and practice another language

              No, it’s terrible at that. Google’s translation tool uses an LLM-based design. It’s terrible because it doesn’t understand the context of a word or phrase.

For instance, a guy might say to his mate: “Hey, you mad cunt!” Plug that into an LLM translation and you don’t know what it might come up with. In some languages it actually translates to something that will translate back to “Hey, you mad cunt”. In Spanish it goes for “Oye, maldita sea”, which is basically “Hey, dammit”, which is not the sense it was used in at all. Shorten that to “Hey, you mad?” and you get the problem that “mad” could be crazy or it could be angry, depending on the context and the dialect. If you were talking with a human, they might ask you for context cues before translating, but LLMs just pick the most probable translation and go with that.

If you use a long conversational interface, it will get more context, but then you run into the problem that there’s no intelligence there. You’re basically conversing with the equivalent of a zombie. Something’s animating the body, but the spark of life is gone. It’s also designed never to be angry, never to be sad, never to be jealous; it’s always perky and pleasant. So it might help you learn a language a bit, but you’re learning the zombified version of the language.

              Basically DnD.

D&D by the world’s worst DM. The key thing a DM brings to a game is that they’re telling a story. They’ve thought about a plot. They have interesting characters that advance that plot. They get to know the players so they know how to subvert their expectations. The hardest thing for a DM to deal with is a player doing something unexpected. When that happens, they need to adjust everything so that what happens still fits in with the world they’re imagining, and try to nudge the players back to the story they’ve built. An LLM will just happily continue generating text that meets the heuristics of a story. But that basically means the players have no real agency. Nothing they do has real consequences, because you can’t affect the plot of the story when there’s no plot to begin with.

And what if you just use an LLM for dialogue in a game where the story/plot was written by a human? That’s fine until the LLM generates a plausible dialogue that’s “wrong”. Say the player is investigating a murder and talks to a guard. In a proper game, the guard might not know the answer, or might know the answer and lie, or might know the answer but not be willing / able to tell the player. But if you put an LLM in there, it can generate a plausible response from a guard, and that plausible response might match one of those scenarios, but it doesn’t have a concept that this guard is “an honest but dumb guard” or “a manipulative guard who was part of the plot”. If the player comes and talks to the guard again, will they still be that same character, or will the LLM generate more plausible dialogue from a guard that goes against the previous “personality” of that guard?

    • Mnemnosyne@sh.itjust.works
      7 hours ago

      “Every time we use a lever to lift a stone, we’re trading long term strength for short term productivity. We’re optimizing for today’s pyramid at the cost of tomorrow’s ability.”

      • julietOscarEcho@sh.itjust.works
        5 hours ago

Precisely. If you train by lifting stones you can still use the lever later, but you’ll be able to lift even heavier things by using both your new strength AND the lever’s mechanical advantage.

By analogy, if you’re using LLMs to do the easy bits in order to spend more time with harder problems, fuckin’ A. But the idea that you can just replace actual coding work with copy-paste is a shitty one. Again by analogy with rock lifting: now you have noodle arms and can’t lift shit if your lever breaks or doesn’t fit under a particular rock or whatever.

        • wizardbeard@lemmy.dbzer0.com
          2 hours ago

          Also: assuming you know what the easy bits are before you actually have experience doing them is a recipe to end up training incorrectly.

I use plenty of tools to assist my programming work. But I learn what I’m doing and why first. Then, once I have that experience, if there’s a piece of code I find myself using frequently or having to look up frequently, I make myself a template (vscode’s snippet features are fucking amazing when you build your own snips well, btw).
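For anyone who hasn’t tried them: VS Code user snippets live in a per-language JSON file (Ctrl+Shift+P, then “Configure User Snippets”). The snippet below is just an illustrative sketch, not one of mine; the name, prefix, and body are made up:

```json
{
  "Guarded entry point": {
    "prefix": "gmain",
    "body": [
      "def main() -> None:",
      "    ${1:pass}",
      "",
      "if __name__ == \"__main__\":",
      "    main()"
    ],
    "description": "Python entry-point boilerplate"
  }
}
```

Typing the prefix and hitting Tab expands the body, with the cursor landing on the `${1:pass}` placeholder so you can fill in the real logic.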

      • AeonFelis@lemmy.world
        4 hours ago

        Actually… Yes? People’s health did deteriorate due to over-reliance on technology over the generations. At least, the health of those who have access to that technology.

      • Ebber@lemmings.world
        6 hours ago

        If you don’t understand how a lever works, then it’s a problem. Should we let any person with an AI design and operate a nuclear power plant?

      • trashgirlfriend@lemmy.world
        5 hours ago

        “If my grandma had wheels she would be a bicycle. We are optimizing today’s grandmas at the sacrifice of tomorrow’s eco friendly transportation.”

    • Hoimo@ani.social
      10 hours ago

Not even. Every time someone lets AI run wild on a problem, they’re trading all the trust I ever had in them for complete garbage that they’re not even personally invested enough in to defend when I criticize their absolute shit code. Don’t submit it for review if you haven’t reviewed it yourself, Darren.

      • wizardbeard@lemmy.dbzer0.com
        2 hours ago

        My company doesn’t even allow AI use, and the amount of times I’ve tried to help a junior diagnose an issue with a simple script they made, only to be told that they don’t actually know what their code does to even begin troubleshooting…

        “Why do you have this line here? Isn’t that redundant?”

        “Well it was in the example I found.”

        “Ok, what does the example do? What is this line for?”

        Crickets.

        I’m not trying to call them out, I’m just hoping that I won’t need to familiarize myself with their whole project and every fucking line in their script to help them, because at that point it’d be easier to just write it myself than try to guide them.

    • Guttural
      11 hours ago

      This guy’s solution to becoming crappier over time is “I’ll drink every day, but abstain one day a week”.

      I’m not convinced that “that ship has sailed” as he puts it.

    • Agent641@lemmy.world
      10 hours ago

      Nahhh, I never would have solved that problem myself, I’d have just googled the shit out of it til I found someone else that had solved it themselves