Personally, I’d really like the option of running LLMs locally, but the hardware requirements make it hard. Small models run okay on CPU or low-end GPUs, but anything approaching the capability and usefulness of GPT-4 or DeepSeek requires a hefty GPU setup. Considering how much even old hardware like the Tesla P40 has gone up in price, it’s hard to justify the cost.
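For what it’s worth, here’s roughly what “runs okay on CPU” looks like in practice — a minimal sketch assuming the llama-cpp-python package and a small quantized GGUF model already on disk (the model file and path are just examples, not a recommendation):

```python
# CPU-only local inference with llama-cpp-python.
# Assumes: pip install llama-cpp-python, plus a small quantized
# GGUF model downloaded locally (path below is hypothetical).
from llama_cpp import Llama

llm = Llama(
    model_path="./models/llama-3.2-3b-instruct.Q4_K_M.gguf",  # hypothetical path
    n_ctx=2048,    # modest context window keeps RAM usage reasonable
    n_threads=8,   # roughly match your physical core count
)

out = llm("Q: What is the capital of France? A:", max_tokens=32, stop=["Q:"])
print(out["choices"][0]["text"])
```

A 3–4B model quantized to 4 bits fits in a few GB of RAM and is usable on a laptop; it’s the GPT-4-class models that push you into expensive multi-GPU territory.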
Just dumped ChatGPT for Le Chat. We’ll see how this pans out.
Maybe of interest: https://frame.work/desktop?tab=machine-learning