• Luccus@feddit.org
      link
      fedilink
      arrow-up
      4
      ·
      edit-2
      2 hours ago

      I understand LLMs well enough that I really don’t want to use them because they are inherently incapable of judging the validity of information they are passing along.

      Sometimes it’s wrong. Sometimes it’s right. But they don’t tell you when they’re wrong, and to find out if they were wrong, you now have to do the research you were trying to avoid in the first place.

      I tried programming with it once, because a friend insisted it was good. But it wasn’t, and it was extremely confident while being exceptionally wrong.

  • tauren@lemm.ee
    link
    fedilink
    English
    arrow-up
    2
    arrow-down
    6
    ·
    3 hours ago

    It’s so strange seeing people being proud that they can’t keep up with the technologies.

    • Gabe Bell@lemmy.worldOP
      link
      fedilink
      English
      arrow-up
      6
      arrow-down
      1
      ·
      3 hours ago

      Yeah, that’s just judgemental and presumptive.

      I have quite a lot of shit in my life, and I have actively decided to pay no attention to AI. Not because “I can’t keep up with it” but because after some research into it I decided “it was bullshit and nonsense and not something I need to know about”

      • tauren@lemm.ee
        link
        fedilink
        English
        arrow-up
        1
        arrow-down
        1
        ·
        1 hour ago

        Mate, I don’t know you and I don’t care about you. Stop talking about yourself for a second. You posted a screenshot where the person said “I have never even tried it.” That’s it.

    • Gabe Bell@lemmy.worldOP
      link
      fedilink
      English
      arrow-up
      4
      ·
      3 hours ago

      There’s a difference between not knowing something because of ignorance and not knowing something because you know you don’t need to know it.

      I have no idea how to rebuild a combustion engine.

      Is that something of which I should be ashamed? Or something I have actively chosen not to learn because when will I ever need to know it?

      • Kusimulkku@lemm.ee
        link
        fedilink
        arrow-up
        2
        arrow-down
        1
        ·
        2 hours ago

        Would you be the sort of person to proudly proclaim their lack of knowledge about combustion engines at the time they became a thing?

        • Gabe Bell@lemmy.worldOP
          link
          fedilink
          English
          arrow-up
          2
          ·
          2 hours ago

          To be honest? Yeah.

          In my last job before this one I learned a lot of stuff about a topic I needed to know for that job.

          But now I have a new job I don’t need to know any of that stuff. So I am slowly forgetting it because I don’t use it. And instead I am learning a lot of stuff about things I need for my new job.

          And in the midst of all of this, why would I take the time to learn something I am never going to use. At all. Ever. I have far too much stuff to learn and remember, so why would I need to learn how to plug the camshaft into the reverse socket twink-phlange?

          I am not afraid of technology. It doesn’t scare me. I am not sitting in a cave railing against these kids with their short skirts and their long hair and their music and “they didn’t do these things in my day”.

          I just made what I consider to be a fairly educated judgement call that this is something I don’t need to care about.

          • Kusimulkku@lemm.ee
            link
            fedilink
            arrow-up
            1
            arrow-down
            1
            ·
            2 hours ago

            This isn’t about not needing to learn. If you don’t see any use for this newfangled internal combustion engine, then why learn whether it has a tiny horse inside or whatever. But this telling people with pride how little you know is almost always eyeroll-worthy. Like wow, very cool you don’t know something…

  • glitchdx@lemmy.world
    link
    fedilink
    English
    arrow-up
    10
    ·
    17 hours ago

    Wait, people actually try to use gpt for regular everyday shit?

    I do lorebuilding shit (in which gpt’s “hallucinations” are a feature not a bug), or I’ll just ramble at it while drunk off my ass about whatever my autistic brain is hyperfixated on. I’ve given up on trying to do coding projects, because gpt is even worse at it than I am.

    • bstix@feddit.dk
      link
      fedilink
      arrow-up
      4
      ·
      5 hours ago

      They absolutely do. Some people basically use it instead of Google or whatever. Shopping lists, vacation planning, gift lists, cooking recipes, just about everything.

      It’s great at it, because it’ll bother trawling webpages for all that stuff that you can’t be bothered to spend hours doing. The internet is really so shitified that it’s easier to have a computer do this.

      I hate that it is so. It’s a complete waste of resources, but I understand it.

      It’s a waste of your resources to close popups, set cookie preferences and read five full screens about grandma’s farm before getting to the point: “Preheat the oven to 200 °C and heat the pizza for 15 minutes.”, when ChatGPT could’ve presented it right away without any ads.

  • inclementimmigrant@lemmy.world
    link
    fedilink
    arrow-up
    4
    arrow-down
    1
    ·
    13 hours ago

    I use it somewhat regularly to send snarky emails to coworkers: professional, buzzword-overloaded responses to mundane inquiries.

    I use it every so often to help craft a professional go-fuck-yourself email too.

  • jjjalljs@ttrpg.network
    link
    fedilink
    arrow-up
    61
    arrow-down
    2
    ·
    1 day ago

    I feel like it’s an unpopular take, but people are like “I used ChatGPT to write this email!” and I’m like: you should be able to write an email.

    I think a lot of people are too excited to neglect core skills and let them atrophy. You should know how to communicate. It’s a skill that needs practice.

    • minorkeys@lemmy.world
      link
      fedilink
      arrow-up
      19
      ·
      edit-2
      22 hours ago

      This is a reality: most people will abandon those skills, and many more will never learn them to begin with. I’m actually very worried about children who will grow up learning to communicate with AI and being dependent on it to effectively communicate with people and navigate the world, potentially needing AI as a communication assistant/translator.

      AI is patient, always available, predicts desires and effectively assumes intent. If I type a sentence with spelling mistakes, ChatGPT knows what I meant 99% of the time. This will mean children don’t need to spell or structure sentences correctly to effectively communicate with AI, which means they don’t need to think in a way other human beings can understand, as long as an AI does. The more time kids spend with AI, the less developed their communication skills will be with people. GenZ and GenA already exhibit these issues without AI. Most people experience this when communicating across generations, as language and cultural context change. This will emphasize those differences to a problematic degree.

      Kids will learn to communicate with people and with AI, but those two styles will be radically different. AI communication will be lazy, saying only enough for the AI to understand. With communication history, which is inevitable tbh, and AI improving every day, it can develop a unique communication style for each child: what amounts to a personal language only the child and the AI can understand. AI may learn to understand a child better than their parents do and make the child dependent on AI to effectively communicate, creating a corporate filter on communication between human beings. The implications of this kind of dependency are terrifying. Your own kid talks to you through an AI translator; their teachers, friends, all their relationships could be impacted.

      I have absolutely zero belief that the private interests of these technology owners will benefit anyone other than themselves, and at the expense of human freedom.

    • Soup@lemmy.world
      link
      fedilink
      arrow-up
      5
      ·
      20 hours ago

      I know someone who very likely had ChatGPT write an apology for them once. Blew my mind.

      • Lemminary@lemmy.world
        link
        fedilink
        arrow-up
        3
        ·
        16 hours ago

        I use it to communicate with my landlord sometimes. I can tell ChatGPT all the explicit shit exactly as I mean it and it’ll shower it and comb it all nice and pretty for me. It’s not an apology, but I guess my point is that some people deserve it.

        • Soup@lemmy.world
          link
          fedilink
          arrow-up
          4
          arrow-down
          1
          ·
          5 hours ago

          You don’t think being able to communicate properly and control your language, even/especially for people you don’t like, is a skill you should probably have? It’s not that much more effort.

    • Denvil@lemmy.one
      link
      fedilink
      arrow-up
      2
      arrow-down
      2
      ·
      1 day ago

      I think it is a good learning tool if you use it as such. I use it for help with google sheets functions (not my job or anything important, just something I’m doing), and while it rarely gets a working function out, it can set me on the right track with functions I didn’t even know existed.

        • Halosheep@lemm.ee
          link
          fedilink
          arrow-up
          1
          ·
          3 hours ago

          When you can ask a specific question and get related information in the same amount of time opening the web page for documentation/“online resources” takes, why bother?

      • jjjalljs@ttrpg.network
        link
        fedilink
        arrow-up
        16
        arrow-down
        1
        ·
        1 day ago

        We used to have web forums for that, and they worked pretty okay without the costs of LLMs

        This is a little off topic but we really should, as a species, invest more heavily in public education. People should know how to read and follow instructions, like the docs that come with Google sheets.

  • Lucky_777@lemmy.world
    link
    fedilink
    arrow-up
    5
    ·
    18 hours ago

    Using AI is helpful, but by no means does it replace your brain. Sure, it can write emails and really helps with code, but for anything beyond basic troubleshooting and “short” code snippets, it’s an assistant, not an answer.

    • Lemminary@lemmy.world
      link
      fedilink
      arrow-up
      3
      ·
      16 hours ago

      Yeah, I don’t get the people who think it’ll replace your brain. I find it useful for learning even if it’s not always entirely correct but that’s why I use my brain too. Even if it gets me 60% of the way there, that’s useful.

  • ssillyssadass@lemmy.world
    link
    fedilink
    arrow-up
    7
    arrow-down
    1
    ·
    19 hours ago

    I use ChatGPT mainly for recipes, because I’m bad at that. And it works great, I can tell it “I have this and this and this in my fridge and that and that in my pantry, what can I make?” and it will give me a recipe that I never would have come up with. And it’s always been good stuff.

    And I do learn from it. People say you can’t learn from using AI, but I’ve gotten better at cooking thanks to ChatGPT. Just a while ago I learned about deglazing.

  • TabbsTheBat@pawb.social
    link
    fedilink
    arrow-up
    101
    ·
    1 day ago

    The amount of times I’ve seen a question answered by “I asked chatgpt and blah blah blah”, with the answer being complete bullshit, makes me wonder who thinks asking the bullshit machine™ questions with a concrete answer is a good idea.

    • LarmyOfLone@lemm.ee
      link
      fedilink
      arrow-up
      1
      arrow-down
      1
      ·
      4 hours ago

      We’re in a post-truth world where most web searches about important topics give you bullshit answers. But LLMs have read basically all the articles already and have at least the potential to make deductions and associations about them - like “this belongs to propaganda network 4335”, or “the source of this claim is someone who has engaged in deception before”. Something like a complex fact-check machine.

      This is currently sci-fi, because the models are an ocean wide but can’t think deeply or analyze well; still, if you press GPT about something, it can give you different “perspectives”. The next generations might become more useful at filtering out fake propaganda. So you might get answers that are sourced and referenced, and which can also reference or dispute wrong answers / talking points and their motivation - and possibly the emotional manipulation and logical fallacies they use to deceive you.

    • Tar_Alcaran@sh.itjust.works
      link
      fedilink
      arrow-up
      47
      ·
      1 day ago

      This is your reminder that LLMs are associative models. They produce things that look like other things. If you ask a question, it will produce something that looks like the right answer. It might even BE the right answer, but LLMs care only about looks, not facts.

    • CarbonatedPastaSauce@lemmy.world
      link
      fedilink
      English
      arrow-up
      14
      ·
      1 day ago

      A lot of people really hate uncertainty and just want an answer. They do not care much if the answer is right or not. Being certain is more important than being correct.

      • scintilla@lemm.ee
        link
        fedilink
        arrow-up
        6
        ·
        1 day ago

        Why not just read the first part of a Wikipedia article if they want that, though? It’s not the end-all source, but it’s better than asking the machine known to make things up the same question.

        • CarbonatedPastaSauce@lemmy.world
          link
          fedilink
          English
          arrow-up
          9
          ·
          1 day ago

          Because the AI propaganda machine is not exactly advertising the limitations, and the general public sees LLMs as a beefed up search engine. You and I know that’s laughable, but they don’t. And OpenAI sure doesn’t want to educate people - that would cost them revenue.

    • can@sh.itjust.works
      link
      fedilink
      arrow-up
      5
      ·
      1 day ago

      I don’t see the point either if you’re just going to copy verbatim. OP could always just ask AI themselves if that’s what they wanted.

  • Whats_your_reasoning@lemmy.world
    link
    fedilink
    arrow-up
    12
    arrow-down
    1
    ·
    1 day ago

    Oh hey it’s me! I like using my brain, I like using my own words, I can’t imagine wanting to outsource that stuff to a machine.

    Meanwhile, I have a friend who’s skeptical about the practical uses of LLMs, but who insists that they’re “good for porn.” I can’t help but see modern AI as a massive waste of electricity and water, furthering the destruction of the climate with every use. I don’t even like it being a default on search engines, so the idea of using it just to regularly masturbate feels … extremely selfish. I can see trying it as a novelty, but as a regular occurrence? It’s an incredibly wasteful use of resources just so your dick can feel nice for a few minutes.

    • Foxfire@pawb.social
      link
      fedilink
      English
      arrow-up
      8
      ·
      1 day ago

      Using it for porn sounds funny to me given the whole concept of “rule 34” being pretty ubiquitous. If it exists, there’s porn of it! Even from a completely pragmatic perspective, it sounds like generating pictures of cats. Surely there is a never-ending ocean of cat pictures which you can search and refine, do you really need to bring a hallucination machine into the mix? Maybe your friend has an extremely specific fetish list that nothing else will scratch? That’s all I can think of.

      • Whats_your_reasoning@lemmy.world
        link
        fedilink
        arrow-up
        3
        ·
        edit-2
        24 hours ago

        He says he uses it to do sexual roleplay chats, treats it kinda like a make-your-own-adventure porn story. I don’t know if he’s used it for images.

        • kipo@lemm.ee
          link
          fedilink
          English
          arrow-up
          1
          ·
          2 hours ago

          If he’s using an online model, I hope he used a privacy-respecting VPN, a hardened browser, and didn’t sign up using his email, or else his IP address and identity are now linked to all those chats, and that info could be exposed, traded, or sold to anyone.

    • minorkeys@lemmy.world
      link
      fedilink
      arrow-up
      1
      arrow-down
      5
      ·
      22 hours ago

      Now imagine growing up where using your own words is less effective than having AI speak for you. Would you have not used AI as a kid when it worked better than your own words?

      • FearMeAndDecay@literature.cafe
        link
        fedilink
        English
        arrow-up
        6
        ·
        20 hours ago

        Wdym “using your own words is less effective than having AI speak for you”? Learning how to express yourself and communicate with others is a crucial life skill, and if a kid struggles with that then they should receive the proper education and support to learn, not be given an AI and told to just use that instead.

        • minorkeys@lemmy.world
          link
          fedilink
          arrow-up
          1
          ·
          edit-2
          19 hours ago

          It is, and they should, but that doesn’t mean they will. GenZ and GenA have notable communication and social issues rooted in the technologies of today. Those issues aren’t stopping our use of social media, smartphones or tablets, or stopping tech companies from doubling down on the technologies that cause them. I have no faith they will protect future children when they have refused to protect present children.

          What I mean is that much like parents who already put a tablet or TV in front of their kid to keep them occupied, parents will do the same with AI. When a kid is talking to an AI every day, they will learn to communicate their wants and needs to the AI. But AI has infinite patience, is always available, never makes their kid feel bad, and can effectively infer and accurately assume the intent of a child by pattern-recognizing communication that parents may struggle to understand. Every child would effectively develop a unique language for use with their AI co-parent that really only the AI understands.

          This will happen naturally simply by exposure to AI, which parents seem more than willing to allow as easily as tablets and smartphones and TV. Like siblings where one kid understands the other better than the parent does and translates those needs to the parent. Children raised on AI may end up communicating with their caretakers better through the AI, just like the sibling, but worse: their communication skills with people will suffer because more of their needs are getting met by communicating with AI. They practice communication with AI at the expense of communicating with people.

  • 🇰 🌀 🇱 🇦 🇳 🇦 🇰 🇮 @pawb.social
    link
    fedilink
    English
    arrow-up
    9
    ·
    edit-2
    1 day ago

    I’ve tried a few GenAI things, and didn’t find them to be any different than CleverBot back in the day. A bit better at generating a response that seems normal, but asking it serious questions always generated questionably accurate responses.

    If you just had a discussion with it about what your favorite super hero is, it might sound like an actual average person (including any and all errors about the subject it might spew), but if you try to use it as a knowledge base, it’s going to be bad because it is not intelligent. It does not think. And it’s not trained well enough to only give 100% factual answers, even if it only had 100% factual data entered into it to train on. It can mix two different subjects together and create an entirely new, bogus response.

    • minorkeys@lemmy.world
      link
      fedilink
      arrow-up
      4
      arrow-down
      8
      ·
      22 hours ago

      It’s incredibly effective for task assistance, especially with information that is logical and consistent, like maths, programming languages and hard science. What this means is that you no longer need to learn Excel formulas or programming. You tell it what you want it to do and it spits out the answer 90% of the time. If you don’t see the efficacy of AI, then you’re likely not using it for what it’s currently good at.

      • BURN@lemmy.world
        link
        fedilink
        arrow-up
        8
        arrow-down
        2
        ·
        21 hours ago

        Developer here

        Had to spend 3 weeks fixing a tiny app that a vibe coder built with AI. It required rewriting significant portions of the app from the ground up because AI code is nearly unusable at scale. Debugging is 10x harder, code is undocumented and there is no institutional knowledge of how an internal system works.

        AI code can maybe be ok for a bootstrap single programmer project, but is pretty much useless for real enterprise level development

        • minorkeys@lemmy.world
          link
          fedilink
          arrow-up
          1
          ·
          21 hours ago

          It’s definitely not good for whole programs in one go or complex programming. Businesses hoping to replace coders isn’t really happening. But for bite sized code sections like a simple function or non-coders who need something that does a bespoke task in their life? It seems pretty effective. I don’t know a programming language but decided to try and automate my trading strategies and in a month I’d written a program in Python that automatically trades my opening strategy. I would never have been able to do that without chatGPT. It has effectively reduced the time it takes to have functional code significantly, especially as I need to use APIs which AI has been phenomenal at providing without needing to dig through the documentation.

          It isn’t replacing engineers but it definitely helps save time and can empower non engineers to make useful programs without needing years of schooling.
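
          The kind of “bite-sized” helper described above can be sketched in a few lines. This is a purely hypothetical illustration, not the commenter’s actual program: it assumes a simple moving-average crossover as the “opening strategy”, and the function names are made up for the example.

          ```python
          def moving_average(prices, window):
              """Average of the last `window` prices."""
              return sum(prices[-window:]) / window

          def opening_signal(prices, short=3, long=5):
              """Compare a short-term and a long-term moving average.

              Returns 'buy' when the short average is above the long one,
              'sell' when below, and 'hold' otherwise (or with too little data).
              """
              if len(prices) < long:
                  return "hold"  # not enough history yet
              short_ma = moving_average(prices, short)
              long_ma = moving_average(prices, long)
              if short_ma > long_ma:
                  return "buy"
              if short_ma < long_ma:
                  return "sell"
              return "hold"

          # Rising prices: short-term average pulls ahead of the long-term one.
          print(opening_signal([100, 101, 102, 104, 103, 105, 107]))  # → buy
          ```

          A real version would wrap this in calls to a broker’s API for live prices and order placement, which is exactly the part where, as the commenter says, an LLM can save you from digging through the API documentation.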

  • adam_y@lemmy.world
    link
    fedilink
    English
    arrow-up
    16
    ·
    1 day ago

    Spent this morning reading a thread where someone was following chatGPT instructions to install “Linux” and couldn’t understand why it was failing.

    • Tar_Alcaran@sh.itjust.works
      link
      fedilink
      arrow-up
      13
      arrow-down
      2
      ·
      1 day ago

      Hmm, I find chatGPT is pretty decent at very basic techsupport asked with the correct jargon. Like “How do I add a custom string to cell formatting in excel”.

      It absolutely sucks for anything specific, or asked with the wrong jargon.

      • adam_y@lemmy.world
        link
        fedilink
        English
        arrow-up
        14
        arrow-down
        2
        ·
        edit-2
        1 day ago

        Good for you buddy.

        Edit: sorry that was harsh. I’m just dealing with “every comment is a contrarian comment” day.

        Sure, GPT is good at basic search functionality for obvious things, but why choose that when there are infinitely better and more reliable sources of information?

        There’s a false sense of security coupled to the notion of “asking” an entity.

        Why not engage in a community that can support answers? I’ve found the Linux community (in general) to be really supportive and asking questions is one way of becoming part of that community.

        The forums of the older internet were great at this… creating community out of commonality. Plus, they were largely self-correcting in a way in which LLMs are not.

        So not only are folk being fed gibberish, it is robbing them of the potential to connect with similar humans.

        And sure, it works for some cases, but they seem to be suboptimal, infrequent or very basic.

        • Tar_Alcaran@sh.itjust.works
          link
          fedilink
          arrow-up
          3
          ·
          1 day ago

          Oh, I fully agree with you. One of the main things about asking super basic things is that when it inevitably gets them wrong, at least you won’t waste that much time. And it’s inherently parasitical: basic questions are mostly right with LLMs because thousands of people have answered the basic questions thousands of times.

  • nelly_man@lemmy.world
    link
    fedilink
    English
    arrow-up
    7
    ·
    edit-2
    1 day ago

    I was finally playing around with it for some coding stuff. At first, I was playing around with building the starts of a chess engine, and it did ok for a quick and dirty implementation. It was cool that it could create a zip file with the project files that it was generating, but it couldn’t populate it with some of the earlier prompts. Overall, it didn’t seem that worthwhile for me (as an experienced software engineer who doesn’t have issues starting projects).

    I then uploaded a file from a chess engine that I had already implemented and asked for a code review, and that went better. It identified two minor bugs and was able to explain what the code did. It was also able to generate some other code to make use of this class. When I asked if there were some existing projects that I could have referenced instead of writing this myself, it pointed out a couple others and explained the ways they differed. For code review, it seemed like a useful tool.

    I then asked it for help with a math problem that I had been working on related to a different project. It came up with a way to solve it using dynamic programming, and then I asked it to work through a few examples. At one point, it returned numbers that were far too large, so I asked about how many cases were excluded by the rules. In the response, it showed a realization that something was incorrect, so it gave a new version of the code that corrected the issue. For this one, it was interesting to see it correct its mistake, but it ultimately still relied on me catching it.