There has been a noticeable shift over the last few months on other operating systems such as Android, iOS, and Windows.

What are your thoughts around how/if integration takes place within Linux?

  • macallikOP
    19 months ago

    Personally, my (uneducated) opinion is that we already have plug-and-play functionality at the program level, i.e. I can add an OpenAI API key to various programs and make them ‘smarter’. Since the Linux experience is often pretty piecemeal as is, this would be a solid enough approach for most.
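    To make that concrete, here's a minimal sketch of the pattern most of these integrations follow: the program reads an OpenAI-style key from the environment and attaches it as a bearer token. The helper name is mine, not any particular library's API:

    ```python
    import os

    def build_headers(api_key=None):
        """Build the auth headers an OpenAI-compatible endpoint expects.

        Falls back to the OPENAI_API_KEY environment variable, which is
        the convention most clients and plugins honor.
        """
        key = api_key or os.environ.get("OPENAI_API_KEY")
        if not key:
            raise RuntimeError("no API key configured")
        return {
            "Authorization": f"Bearer {key}",
            "Content-Type": "application/json",
        }
    ```

    Because the key lives in the environment (or a dotfile), each program can be made ‘smarter’ independently, without any desktop-wide integration.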

    In terms of AI being ingrained within a Desktop Environment, that seems harder for me to imagine… Like how the Office Suite has AI functionality, would the KDE suite of apps allow for cross-program functionality? Would this require a substantial change in system requirements for local processing? Would there be an open-source LLM hosted in the cloud for chat purposes that also mirrors the privacy expectations of the average Linux user?

    I understand people’s apprehension towards Linux distros seemingly chasing the latest fad, but I think it’s also worth hypothesizing about the alternative if AI and LLMs are here to stay and become a differentiator.

    • @nottheengineer@feddit.de
      19 months ago

      LLMs are big, so you either need a powerful PC to run them or use cloud services. Linux users tend to not be fans of either, so it’ll probably take a while before anything big happens.

      Besides, for the things where an LLM actually makes sense (like a copilot-style code generator), there are already implementations.
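      For the privacy-conscious local route, tools like Ollama or a llama.cpp server expose a small HTTP API on localhost. As a sketch, this is roughly the JSON body Ollama's `/api/generate` endpoint takes (the model name is just an example):

      ```python
      import json

      def generate_payload(model, prompt, stream=False):
          """Serialize a request body for a local Ollama-style endpoint.

          Nothing leaves the machine: the client POSTs this to
          http://localhost:11434/api/generate (Ollama's default).
          """
          return json.dumps({"model": model, "prompt": prompt, "stream": stream})
      ```

      The appeal for Linux users is that the whole loop, model weights included, stays on local hardware, at the cost of needing a reasonably powerful machine.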

      • waspentalive
        29 months ago

        I am a Debian user, and I can’t really say I am not a fan of “big”: I have a laptop as my production machine, but I also run as big a file server as I can afford. I would not want an AI that is part of my OS unless it runs locally. I do use ChatGPT and Stable Diffusion, but only for non-critical tasks.