Slow June, people voting with their feet amid this AI craze, or something else?

  • wackypants@kbin.social · ↑134 ↓3 · 1 year ago

    It’s summer. Students are on break, lots of people are on vacation, etc. Let’s wait to see if the trend persists before declaring another AI winter.

    • twicetwotimes@lemmy.world · ↑24 · 1 year ago

      Agreed. I think being between academic years is likely a much bigger factor than we realize. I’m a college professor, and at the end of spring quarter we had a lot of conversations with undergrads, grad students, and faculty about how people are actually using AI.

      Literally every undergrad student I spoke with said they use it for every written assignment (for the most part in legitimate, non-cheating educational ways). Most students used it for all or most of their programming assignments. Most use it to summarize challenging or long readings. Some absolutely use it to just do all their work for them, though fewer than you might expect.

      I’d be pretty surprised if there isn’t a significant bounce-back in September.

      • sndrtj@feddit.nl · ↑3 · 1 year ago

        This worries me though. I’ve found chatgpt to be wrong in basically every fact-based question I’ve asked it. Sometimes subtly, sometimes completely, but it always hallucinates. You cannot use it as a source of truth.

        • twicetwotimes@lemmy.world · ↑6 · 1 year ago

          Honestly I feel like at this point its unreliability is kind of helpful for students. They have to learn how to use it most effectively as a tool for producing their own work and not a replacement. In my classes the more relevant “problem” for students is that GPT produces written work that on the surface feels composed and sensible but is actually straight up garbage. That’s good. They turn that in, it’s extremely obvious to me, and they get an F (because that’s the grade AI earned with the garbage paper).

          But they can and should use it for things it’s great at: reword this long sentence I’m having trouble phrasing concisely, help me think of a title for my paper, take my pseudocode and help me turn it into a while loop in R, generate a list of current researchers on this topic and two of their most recent publications, translate this paragraph of writing from Foucault/Marx/Bourdieu/some-good-thinker-and-bad-writer into simpler wording…

          I have a calculator in my pocket even though my teachers assured me I wouldn’t. Students will have access to and use AI forever now. The worry should be that we fail to teach them the difference between a homework-bot and an incredible, versatile tool to leverage.

      • afraid_of_zombies2@lemmy.world · ↑2 · 1 year ago

        I have been using it to do deep dives into subjects, especially text analysis. Do you want to know the entire vocabulary of the Gospel of Mark in the original Greek, for example? 1,080 words. Now how does this compare to a section of Plato’s Republic of the same length? It’s about 6-7x as large.

        So right there we can see why Mark is often viewed as a direct text while Plato is viewed as a more ambiguous writer.
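
        A count like the one described above is easy to reproduce and check without an LLM. A minimal Python sketch of a crude vocabulary ("type") count; the tokenization rule is a simplifying assumption, and a real analysis of Greek would lemmatize inflected forms:

```python
import re

def vocabulary_size(text):
    # Crude "type count": number of distinct word forms, case-folded.
    # Splitting on non-letter characters is a simplification; a real
    # analysis of Greek would lemmatize inflected forms first.
    words = re.findall(r"[^\W\d_]+", text.lower())
    return len(set(words))

sample = "In the beginning was the Word, and the Word was with God."
print(vocabulary_size(sample))  # → 8
```

        Running the same function over the full text of Mark and an equal-length slice of the Republic would yield the kind of ratio quoted above.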

        • InverseParallax@lemmy.world · ↑1 · 1 year ago

          Mark is a direct and terse narrative of a specific segment of Jesus’s life and teachings, while the Republic is an attempt to expound a philosophy and system of government.

          I agree with you, but I’m not sure I’d call him a more ambiguous writer. Mark is a ‘just the facts, ma’am’ notation of near-contemporary verbal histories, with the other gospels being attempts to add on contemporary allegories and legends attributed by different groups to Jesus (or John, who just did his own thing).

          I’d be curious about a comparison of the Apology and Crito, similar narratives of a similar figure in a specific segment of his life (the end of it). They’re fairly direct and terse, as Socrates was portrayed as being direct and terse, but otherwise the styles are similar, as (throw on hard hat) Jesus appears to have been attributed many of the allegories of Socrates in the recorded gospels, which makes sense if you’re trying to appeal to followers of Hellenic religions such as those in Rome and Greece.

    • potustheplant@lemmy.world · ↑4 ↓15 · 1 year ago

      I think you’re being a bit self-centered; it’s always going to be summer somewhere. This is a tool used globally.

      • Smatt@lemmy.world · ↑19 · 1 year ago

        I see your point but:

        1. It’s not always summer somewhere: the Northern and Southern Hemispheres are both in spring/fall for half the year.
        2. The global North has a much larger population than the global South.
      • Bak@lemmy.world · ↑3 ↓1 · 1 year ago

        It’s summer somewhere half the time, but thank you for reminding them the southern hemisphere exists!

  • BonfireOvDreams@lemmy.world · ↑41 · edited · 1 year ago

    It’s not just that the novelty has worn off; it’s progressively gotten less useful. Any goddamn question I ask gets 90,000 qualifiers, and it refuses to provide any data at all. I think OpenAI is so terrified of liability that they have significantly dumbed down its utility in the public release. I can’t even ask ChatGPT to provide a link to a study it references, if it references anything at all rather than making ambiguous statements.

    • Kerfuffle@sh.itjust.works · ↑8 · 1 year ago

      Also, ChatGPT 4 came out but is still only available to people who pay (as far as I know). So using ChatGPT 3 feels like only having access to the leftovers. When it first came out, that was exciting because it felt like progress was going to be rapid, but instead it stagnated. (Luckily, interesting LLM stuff is still happening; it just has nothing to do with OpenAI.)

      • ultranaut@lemmy.world · ↑8 · 1 year ago

        ChatGPT 4 has also noticeably declined in quality since it was released. I use it less because it’s become less useful and more frustrating to use. I think OpenAI has been steadily gimping it, trying to get their costs down and make it respond faster.

      • cybersandwich@lemmy.world · ↑6 · 1 year ago

        I pay for it and it’s… okay for most things. It’s pretty great at nerd stuff though*. Pasting an error code or cryptic log file message with a bit of context and it’s better than googling for 4 days.

        *If you know enough to suss out the obviously wrong shit it produces every once in a while.

        • Kerfuffle@sh.itjust.works · ↑5 · 1 year ago

          Pasting an error code or cryptic log file message with a bit of context and it’s better than googling for 4 days.

          Even with days of searching I can usually find what I’m looking for, unless it’s really obscure. And if something is that obscure, it seems kind of unlikely ChatGPT is going to give a good answer either.

          If you know enough to suss out the obviously wrong shit it produces every once in a while.

          That’s one pretty big problem. If something really is difficult or complex, you likely won’t be able to tell the difference between a wrong answer from ChatGPT and a correct one, unless it says something obviously ridiculous.

          Obviously humans make mistakes too, but at least when you search you see results in context, others can potentially call out or add context to things that might not be correct (or might even be misleading), and so on. With ChatGPT you kind of have to trust it or not.

          • shiftybits@lemmy.world · ↑4 · edited · 1 year ago

            Yeah, if it’s that hard to find, GPT is just going to hallucinate some BS into the response. I use it as a Stack Overflow at times and often run into garbage when I’m trying to solve a truly novel problem. I’ll often try to simplify the problem to something contrived, but I mostly find the output useful as a sort of spark. I can’t say I ever find the raw code it generates useful or all that good.

            It’ll often give wrong answers but some of those can contain useful bits that you can arrange into a solution. It’s cool, but I still think people are oddly enamored with what is really just a talking Google. I don’t think it’s the game changer people are thinking it is.

            • pancakes@sh.itjust.works · ↑2 ↓1 · 1 year ago

              It’s pretty useful if you’re in a more generalist job. I mostly work in visual design, but I sometimes deal with coding and web dev. As someone with a mostly surface understanding of these things, asking gpt to explain exact things that don’t make sense in basic terms or solve basic issues is a huge time saver for me. Googling these issues usually works but takes way longer than getting a tailored response from gpt if you know how to ask.

    • afraid_of_zombies2@lemmy.world · ↑4 ↓1 · edited · 1 year ago

      I got it to give me a book that was still in copyright status by selectively asking for bigger and bigger quotes. Took a while. Now it seems to have cottoned on to that trick.

  • Platomus@lemm.ee · ↑37 · 1 year ago

    It’s because it’s summer and students aren’t using it to cheat on their assignments anymore.

    • TheEllimist@lemmy.world · ↑4 · 1 year ago

      It’s definitely this, except for the kids taking summer classes, who statistically probably have higher instances of cheating.

  • eee@lemm.ee · ↑35 ↓1 · 1 year ago

    Well yeah, it’s kinda cool, but the novelty will wear off. It’s useful sometimes, but it’s not a magic elixir.

  • Poob@lemmy.ca · ↑23 ↓1 · 1 year ago

    It’s really fucking annoying getting “As an AI language model, I don’t have personal opinions, emotions, or preferences. I can provide you with information and different perspectives on…” at the beginning of every prompt, followed by the driest, most bland answer imaginable.

    • theneverfox@pawb.social · ↑8 · 1 year ago

      Yeah, it’s boring as shit. If you want a conversation partner there are better (if less reliable) options out there, and groups like personal.ai that repackage it for conversation. There are even scripts to break through the “guardrails”.

      I love the boring. Every other day, I think “man, I really don’t want to do this annoying task.” I’m not sure if it even saves much time since I have to look over the work, but it’s a hell of a lot less mentally exhausting.

      Plus, it’s fun having it Trumpify speeches. It’s tremendous. I’ve spent hours reading the bigglyest speeches. Historical speeches, speeches about AI, graduation speeches where bears attack midway through… Seriously, it never gets old

    • afraid_of_zombies2@lemmy.world · ↑5 · 1 year ago

      It definitely has its uses, but it also has massive annoyances, as you pointed out. One thing has really bothered me: I asked it a factual question about Mohammed, the founder of Islam. This is how I, a human not from a Muslim background, would answer:

      “Ok wikipedia says this ____”

      It answered in this long-winded way that had all these things like “blessed prophet of Allah”. Basically the answer I would expect from an Imam.

      I lost a lot of trust in it when I saw that. It assumed this authoritative tone. When I heard about that case of a lawyer citing made-up caselaw from it, I took it as confirmation. I don’t know how it happened, but for some questions it has this very authoritative tone, like it knows the answer without any doubt.

  • anlumo@feddit.de · ↑21 · 1 year ago

    For my professional work, the training data is way too outdated by now for ChatGPT to be anywhere near being useful. The browsing feature also can’t make up for it, because it’s pretty bad at Internet search (bad search phrases etc).

    • PupBiru@kbin.social · ↑11 · 1 year ago

      i find even for really complex stuff it’s pretty good as long as you direct it: it can suggest some things, you can do some searching based on that, maybe give it a few links to summarise for you, etc

      it doesn’t do the work for you, but it makes a pretty good assistant that doesn’t quite understand the subject matter

      • anlumo@feddit.de · ↑3 · 1 year ago

        I’m old enough not to need a babysitter to use the Internet for research.

        It even told me a few times that its training data is too outdated and that there has probably been some progress in that area. I have to freaking push it to actually do a web search to update that knowledge, with prompts like “You have web access, use it!”. It then finds a few posts on Stack Overflow I’ve already seen and draws some incorrect conclusions from them.

        I’m way faster on my own.

        • PupBiru@kbin.social · ↑2 ↓1 · edited · 1 year ago

          your experience does not match mine

          which is not saying that your experience is wrong or that you’re using it wrong, however i and many others have managed to get exceptionally good results out of it, and you should be aware of that fact

          referring to these experiences as “needing a babysitter” is needlessly provocative as well; we’re all just talking here: no need to insult the intelligence of anyone that has managed to use the tool in a way that works incredibly well

          i hope that at some point in the future, you’re able to have your experience match ours, and have a similar feeling of “ooooh i see now… wait… OOOOOOH I REALLY SEEEE NOW”

          • anlumo@feddit.de · ↑1 · 1 year ago

            Well, I hope that some day I will have the same experience.

            I think the main problem is that I’m only prompting it with lost causes, when I was unable to find anything on my own with very thorough searches, because there just isn’t an answer available online.

            I don’t go there first, because I’m always afraid of hallucinated answers, which are very common. For example, it often just tries to guess function names of programming libraries. That’s just wasting my time.

          • anlumo@feddit.de · ↑2 · 1 year ago

            In my experience, Bing Chat is even worse, because it skips the part where ChatGPT is trying to come up with something based on the training data and goes straight to bad web searches with incorrect summaries.

  • zeppo@lemmy.world · ↑15 ↓2 · 1 year ago

    I love Stable Diffusion, but I really have no use for ChatGPT. I’m amazed at how good the output can be… I just don’t have a need to generate text like that. Also, OpenAI has been making it steadily worse with ‘safety’ restrictions. I find it super annoying and even insulting when Bing-Sydney goes “THIS CONVERSATION IS OVER”. It’s like being chastised by Facebook or Twitter for being ‘violent’ when you made a joke.

    The ability to generate photographs and illustrations of practically anything, though, is fantastic. My girlfriend has been flagellating me into creating a bunch of really useless crap to promote her business on social media using SD, and I actually enjoy that part. I’ve made thousands of photos of scenery.

    • incogtino@lemmy.zip · ↑1 · 1 year ago

      I use (free) ChatGPT only as tech support (with a large dose of scepticism of the results) so none of the ‘conversational’ limitations bother me

      I didn’t find the image generation AIs as sticky for me, there’s not really anything I do day-to-day that would require a novel image

  • simple@lemmy.world · ↑12 · 1 year ago

    Personally I’ve abandoned ChatGPT in favor of Claude. It’s much more reliable.

  • Meow.tar.gz@lemmy.goblackcat.com · ↑12 · 1 year ago

    ChatGPT has mostly given me very poor or patently wrong answers. Only once did it really surprise me by showing me how I configured BGP routing wrong for a network. I was tearing my hair out and googling endlessly for hours. ChatGPT solved it in 30 seconds or less. I am sure this is the exception rather than the rule though.

    • zeppo@lemmy.world · ↑7 · 1 year ago

      It all depends on the training data. If you pick a topic that it happens to have been well trained on, it will give you accurate, great answers. If not, it just makes things up. It’s been somewhat amusing, or perhaps confounding, seeing people use it thinking it’s an oracle of knowledge and wisdom that knows everything. Maybe someday.

  • froggers@lemmy.world · ↑10 · 1 year ago

    I still use it sometimes, but ohhh boy it can be a wreck. Like I’ve started using the Creation Kit for Bethesda games, and you can bet your ass that anything you ask it, you’ll have to ask again. Countless times it’s a back-and-forth of:

    Me: Hey ChatGPT, how can I do this or where is this feature?

    ChatGPT: Here is something that is either not relevant or just does not exist in the CK.

    Me: Hey that’s not right.

    ChatGPT: Oh sorry, here’s the thing you are looking for. And then it’s still a 50-50 chance of it being real or fake.

    Now I realize that the Creation Kit is kinda niche, and the info on it can be a pain to look up but it’s still annoying to wade through all the shit that it’s throwing in my direction.

    With things that are a lot more popular, it’s a lot better tho. (still not as good as some people want everyone to believe)

    • cassetti@kbin.social · ↑9 · 1 year ago

      Lol, ChatGPT has its pros and cons. For helping me write or refine content, it’s extremely helpful.

      However, I did try to use it to write code for me. I design 3D models using a programming language (OpenSCAD), and the results are hilarious. It knows the syntax (kinda), and if I ask it to do something simple, it will essentially write the code for a general module (declaring key variables for the design), and then it calls a random module that doesn’t exist (like the time it called a module “lerp()”, which is absolutely not a module). This magical module mysteriously does 99% of the design… but ChatGPT won’t give it to me. When I ask it to write the code for lerp(), it gives me something random like this:

      module lerp() { splice(); }

      Where it simply calls up a new module that absolutely does not exist. The results are hilarious; the code totally does not compile or work as intended. It is completely wrong.
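
      For context, “lerp” is the conventional name for linear interpolation, and the real thing is tiny. A hedged sketch in Python (not OpenSCAD, and not the module ChatGPT promised) of what such a helper normally computes:

```python
def lerp(a, b, t):
    # Linear interpolation: returns a when t == 0, b when t == 1,
    # and blends proportionally for values of t in between.
    return a + (b - a) * t

print(lerp(0, 10, 0.25))  # → 2.5
```

      Which makes it all the funnier that ChatGPT kept hiding “99% of the design” behind it.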

      But I think people are working it out of their system - some found novelty in it that wore off fast. Others like myself use it to help embellish product descriptions for ebay listings and such.

    • american_defector@lemmy.world · ↑7 ↓1 · 1 year ago

      I’ve been building a tool that uses ChatGPT behind the scenes and have found that that’s just part of the process of building a prompt and getting the results you want. It also depends on which chat model is being used. If you’re super vague, it’s going to give you rubbish every time. If you go back and forth with it though, you can keep whittling it down to give you better material. If you’re generating content, you can even tell it what format and structure to give the information back in (I learned how to make it give me JSON and markdown only).

      Additionally, you can give ChatGPT a description of what its role is alongside the prompt, if you’re using the API and have control of that kind of thing. I’ve found that can help shape the responses nicely right out of the box.

      ChatGPT is very, very much a “your mileage may vary” tool. It needs to be set up well at the start, but so many companies have haphazardly jumped on using it without putting in enough work prepping it.
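
      As a concrete sketch of the role-plus-prompt setup described above: the message format below follows OpenAI’s chat API, but the role text, prompt, and the helper function are illustrative assumptions, not the commenter’s actual configuration.

```python
def build_messages(system_role, user_prompt):
    # OpenAI-style chat payload: the "system" message shapes the
    # assistant's behavior, the "user" message carries the request.
    return [
        {"role": "system", "content": system_role},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages(
    "You are a product copywriter. Respond in Markdown only.",
    "Write a two-line description of a ceramic mug.",
)
# `messages` would then be passed as the messages argument to the
# chat completions endpoint via the openai client library.
```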

      • cassetti@kbin.social · ↑4 · 1 year ago

        Have you seen the JollyRoger Telco? They’ve started using ChatGPT to help have longer conversations with telemarketing scammers. I might actually re-subscribe to Jolly Roger (I used them previously) if the new updated bots perform well enough.

      • seal_of_approval@sh.itjust.works · ↑2 · 1 year ago

        If you don’t mind me asking, does your tool programmatically do the “whittling down” process by talking to ChatGPT behind the scenes, or does the user still talk to it directly? The former seems like a powerful technique, though tricky to pull off in practice, so I’m curious if anyone has managed it.

        • american_defector@lemmy.world · ↑2 · edited · 1 year ago

          Don’t mind at all! Yeah, it does a ton of the work behind the scenes. I essentially have a prompt I spent quite a bit of time iterating on. Then from there, what the user types gets sent bundled in with my prompt bootstrap. So it reduces the work considerably for the user and dials it in.

          Edit: adding some more context/opinions.

          I think the error that a lot of tools make is that they don’t spend enough time shaping their instructions for the AI. Sure, you can offload a lot of the work to it, but you have to write your own guard rails and instructions. You can tell it things like you would a human, and it will sometimes even fill in the gaps.

          For example, I asked it to give me a data structure back that included an optional “title”. I found that if you left the title blank, ChatGPT took it upon itself to generate a title for you based on the content it wrote.

          A lot of the things I got it to do took time and a ton of test iterations. I was even able to give it a list of exactly how it should structure the content it gave back. Things that I would otherwise do on the programming side, I was able to simply instruct ChatGPT to handle instead.

          • seal_of_approval@sh.itjust.works · ↑1 · 1 year ago

            Ah, interesting. I myself have made my own library to create callable “prompt functions” that prompt the model and validate the JSON outputs, which ensures type-safety and easy integration with normal code.

            Lately, I’ve shifted more towards transforming ChatGPT’s outputs. By orchestrating multiple prompts and adding human influence, I can obtain responses that ChatGPT alone likely wouldn’t have come up with. Though, this has to be balanced with giving it the freedom to pursue a different thought process.

      • 80085@lemmy.world · ↑1 · 1 year ago

        What method did you use to generate only JSON? I’m using it (gpt3.5-turbo) in a prototype application, and even with giving it an example (one-shot prompting) and telling it to only output JSON, it sometimes gives me invalid results. I’ve read that the new function-calling feature is still not guaranteed to produce valid json. Microsoft’s “guidance” (https://github.com/microsoft/guidance) looks like what I need, but I haven’t got around to trying it yet.
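
        A common client-side workaround for the valid-JSON problem described above is to validate the reply and re-prompt on failure. A minimal sketch; the extraction rule (raw reply first, then the widest {...} slice) is an assumption, not a guaranteed fix:

```python
import json

def extract_json(reply):
    # Models sometimes wrap JSON in prose or code fences; try the raw
    # reply first, then the widest {...} slice, otherwise return None
    # so the caller can re-prompt a bounded number of times.
    for candidate in (reply, reply[reply.find("{"): reply.rfind("}") + 1]):
        try:
            return json.loads(candidate)
        except (ValueError, TypeError):
            continue
    return None

print(extract_json('Sure! Here it is: {"name": "demo", "ok": true}'))
```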

    • maiskanzler@feddit.de · ↑4 · 1 year ago

      I recently asked it about Nix Flakes, which were very niche and new during ChatGPT’s training. It was able to give me a reasonable answer in English, but if I asked it in German first, it couldn’t do it. It could reasonably translate the English answer though, after it had generated it. Depending on what language you use to prompt it, you get very different answers, because it doesn’t transfer ideas and concepts between languages or, more generally, between disconnected bodies of source text.

      It is somewhat obvious if you know about the statistical nature of the models they use, but it’s a great example of why these things don’t KNOW things; they just regurgitate what they read in context before.

      • MasterCelebrator@feddit.de · ↑2 · 1 year ago

        I agree. And I think it’s actually far from being “intelligent”. However, it is a very helpful tool for many tasks.

  • binwiederhier@discuss.ntfy.sh · ↑9 · 1 year ago

    I have noticed that I use it less myself. I think honestly though, at least for me, that it is 90% related to the clunky and awkward UI of ChatGPT. If it was easy to natively type the prompt in the browser bar I’d use it much more.

    Plus, the annoying text scrolling thingy … Just show me the answer already, hehe.

    • henrikx@lemmy.dbzer0.com · ↑13 ↓2 · edited · 1 year ago

      The annoying text scrolling can’t be removed because the AI generates one token (roughly one word) at a time, which is what you are seeing.

      • TiffyBelle@lemm.ee · ↑16 ↓1 · 1 year ago

        Sure it can. Finish generating it server-side, then send it as one big chunk to the user.

        To be honest though, ChatGPT is pretty fast at generating text these days compared to how it was at the beginning so it doesn’t bother me as much.

        • Mereo@lemmy.ca · ↑9 · 1 year ago

          GPT-4 isn’t fast yet, so it will frustrate people if they do that.

        • maiskanzler@feddit.de · ↑3 · 1 year ago

          What still bothers me is that it doesn’t scroll smoothly while generating. It’s tons of tiny jumps and hiccups, which makes it very hard to read. I tend to scroll up a little as soon as it has generated a few lines, then read at my own pace. Annoying default behaviour, though.

          • TiffyBelle@lemm.ee · ↑3 · 1 year ago

            Yeah, that’s pretty much what I do if it’s going to be a long block of text. If not, I usually just wait.

            Having it just say “Generating Text…” then give a percentage, then just show the entire thing would be preferable to me. I’d like the option even if it wasn’t default.

    • Gumus@lemmy.world · ↑2 · 1 year ago

      Give phind.com a try. It can be set as your default search provider (manually or with a plugin), so you can just type in the search bar.

  • i_lost_my_bagel@seriously.iamincredibly.gay · ↑8 · 1 year ago

    I tried it for about 20 minutes

    Had it do a few funny things

    Thought huh that’s neat

    Went on with life

    Since then the only times I’ve thought about ChatGPT has been seeing people using it in classes I’m in and just sitting here thinking “this is a fucking introductory course and you’re already cheating?”

    • idolofdust@lemmy.world · ↑2 · 1 year ago

      I’m in discrete mathematics right now and have overheard way too many students hitting a brick wall with the current state of AI chatbots, as if that’s what they’ve used almost exclusively up to this point.

  • Wats0ns@sh.itjust.works · ↑7 · edited · 1 year ago

    OpenAI’s models, including its GPT series, are available via APIs and Microsoft Azure, and so a drop in ChatGPT’s website use may be due to people moving to programmatic interfaces

    I feel like this is an important detail that changes the conclusion of the article: there may be a lot more end users, through third-party apps, but this way of measuring won’t reveal them. This is especially important considering that (correct me if I’m wrong) API users are paying ones!