• cm0002@lemmy.world (OP)
    6 days ago

    From my reading, if you don’t mind sacrificing speed (tokens/sec), you can run models in system RAM. To be usable, though, you’d need at minimum a dual-socket server/workstation for multi-channel RAM, plus enough RAM to fit the model.

    So for something like DeepSeek R1, you’d need upwards of 512 GB of RAM.
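    The speed sacrifice can be ballparked: generating each token requires streaming the active weights out of RAM once, so decode speed is roughly memory bandwidth divided by model size. A back-of-the-envelope sketch (the 400 GB/s bandwidth figure is an assumption for a well-populated dual-socket DDR4 box, not a benchmark, and a mixture-of-experts model like R1 only reads its active experts per token, which helps considerably):

```python
# Rough decode-speed estimate for a model running entirely in system RAM.
# Assumes decoding is memory-bandwidth-bound: every token streams the
# active weights from RAM once. Real throughput will be lower.
def tokens_per_sec(active_weight_gb: float, bandwidth_gb_s: float) -> float:
    return bandwidth_gb_s / active_weight_gb

# Assumed: dual-socket DDR4-3200, 8 channels per socket, ~400 GB/s aggregate.
# A ~400 GB model read densely per token lands around 1 tok/s.
print(f"{tokens_per_sec(400, 400):.1f} tok/s")
```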

    • SmokeyDope@lemmy.worldM
      4 days ago

      You are correct in your understanding. However, the last part of your comment needs a big asterisk: it’s important to consider quantization.

      The full F16 DeepSeek R1 GGUF from Unsloth requires 1.34 TB of RAM. Good luck sourcing the RAM sticks and channels for that.

      The Q4_K_M mid-range quant is 404 GB, which would theoretically fit inside 512 GB of RAM with room left over for context.

      512 GB of RAM is still a lot; in theory you could run an even lower quant of R1 with 256 GB. Not super desirable, but totally doable.
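      The arithmetic behind those sizes is just parameter count times bits per weight. A sketch, assuming R1’s published 671B parameter count and approximate average bits-per-weight values for each quant type (the Q2 figure is a rough estimate):

```python
# GGUF file size ~= parameter_count * bits_per_weight / 8, ignoring the
# small per-tensor overhead. 671B is DeepSeek R1's published parameter
# count; bits-per-weight values are approximate averages per quant type.
PARAMS = 671e9

def gguf_size_gb(bits_per_weight: float) -> float:
    return PARAMS * bits_per_weight / 8 / 1e9

print(f"F16:    {gguf_size_gb(16.0):.0f} GB")  # ~1342 GB, i.e. the 1.34 TB figure
print(f"Q4_K_M: {gguf_size_gb(4.8):.0f} GB")   # ~403 GB, matching the ~404 GB quant
print(f"Q2:     {gguf_size_gb(2.6):.0f} GB")   # ~218 GB, inside a 256 GB box
```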