• Lerios [hy/hym]@hexbear.net
    6 months ago

    why? genuinely who does this help and how does it make google money? it seems like they’re paying for the energy for ai content in exchange for absolutely nothing

    • Awoo [she/her]@hexbear.net
      6 months ago

      The people internally at Google are techbro true believers. If it’s new technology it is inherently good and an improvement.

      • TheDoctor [they/them]@hexbear.net
        6 months ago

        God, it’s sad but you’re probably right. We had to implement something AI-related at work because the board all had massive hard ons for the buzzwords. They literally could not have given less of a shit what we used it for. We had full autonomy as long as ChatGPT ended up in our dependency tree somewhere.

        • homhom9000 [she/her]@hexbear.net
          6 months ago

          Same here. Every all-hands at work emphasizes the need to use AI, except they have no clue what to do with it yet beyond chatbots. But we need to use it right now, or else.

        • Lerios [hy/hym]@hexbear.net
          6 months ago

          yeah same. we have an AI assistant now and every meeting has a ‘gentle reminder’ that the sales people and devs and tech support etc etc should be using it. they’re never specific about what we should be using it for, and the one time i touched it, it didn’t seem like it even had access to our documentation.

          is it really that simple? this is a massive capitalist company, surely they have to understand that they should be acting to improve their material conditions? random libs not understanding shit is fine, but i thought the actual capitalists themselves understood capitalism. exchanging material wealth for like cyberpunk vibes or whatever is genuinely insane.

          • TheDoctor [they/them]@hexbear.net
            6 months ago

            surely they have to understand that they should be acting to improve their material conditions?

            In my experience with execs, they find a guiding principle from a book or a conference speaker and treat it like a personal religion. Everything outside of that is very much vibes-based. There are a lot of conference talks that try to summarize new tech stuff for execs, but it’s very much a short overview followed by practical applications. They don’t understand the stuff experientially unless they happen to do a deep dive on their own.

    • alexandra_kollontai [she/her]@hexbear.net
      6 months ago

      The theory is that people don’t want to click through blue links trying to find a source (or sources) they can trust; rather, they want an instant summarised answer to any question. Google already does instant summarised answers for things like “when is the next public holiday” - generative AI content would expand these instant answers to any question, at the cost of accuracy. Google thinks ChatGPT is taking their market share (which it kinda is, and kinda was a year ago when they started developing this). The big idea of this new feature from Google is to retain market share, which is a prerequisite to making money.

    • blobjim [he/him]@hexbear.net
      6 months ago

      I thought Google was already incorporating some machine learning stuff into the core search algorithm anyway, which would be a much better use than directly making up sentences.