• @Naz@sh.itjust.works

    Try using a 1-bit LLM to test the article’s claim.

    The perplexity loss is staggering. It's something like 75% of the accuracy lost, or more. It effectively turns a 30-billion-parameter model into a 7-billion-parameter one.

    Highly recommended that you try to replicate their results.
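
    A rough sketch of the kind of replication test being suggested, for anyone who wants to try it: naively quantize a pretrained model's linear weights to ternary {-1, 0, +1} values after training and compare perplexity before and after. The model (facebook/opt-125m) and the evaluation text here are just small stand-ins, not anything from the article.

    ```python
    import torch
    import torch.nn as nn
    from transformers import AutoModelForCausalLM, AutoTokenizer

    def ternarize_(linear: nn.Linear) -> None:
        """Post-training quantization of one weight matrix to {-1, 0, +1} * scale."""
        w = linear.weight.data
        scale = w.abs().mean()
        linear.weight.data = torch.clamp(torch.round(w / (scale + 1e-8)), -1, 1) * scale

    @torch.no_grad()
    def perplexity(model, tokenizer, text: str) -> float:
        enc = tokenizer(text, return_tensors="pt")
        out = model(**enc, labels=enc["input_ids"])
        return torch.exp(out.loss).item()

    tokenizer = AutoTokenizer.from_pretrained("facebook/opt-125m")
    model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m")
    model.eval()

    text = "The quick brown fox jumps over the lazy dog. " * 20  # stand-in eval text

    print("FP perplexity:        ", perplexity(model, tokenizer, text))

    # Ternarize every linear layer after the fact (no retraining),
    # leaving the output head in full precision.
    for name, module in model.named_modules():
        if isinstance(module, nn.Linear) and "lm_head" not in name:
            ternarize_(module)

    print("Ternarized perplexity:", perplexity(model, tokenizer, text))
    ```
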

    • @davidgro@lemmy.world

      But since it takes about 10% of the space (VRAM, etc.), it sounds like they could just start with a larger model and still come out ahead.
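
      Back-of-the-envelope weight-memory arithmetic for that trade-off (weights only, ignoring activations and the KV cache):

      ```python
      GIB = 1024**3

      def weight_gib(params: float, bits_per_weight: float) -> float:
          """Memory needed just to store the weights, in GiB."""
          return params * bits_per_weight / 8 / GIB

      for params in (7e9, 30e9, 300e9):
          fp16 = weight_gib(params, 16)
          ternary = weight_gib(params, 1.58)
          print(f"{params / 1e9:5.0f}B params: FP16 ~{fp16:6.1f} GiB, 1.58-bit ~{ternary:5.1f} GiB")
      ```

      A roughly 300B-parameter ternary model lands in about the same weight-memory ballpark as a 30B FP16 one.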

    • @kromem@lemmy.world

      There’s actually a perplexity improvement, parameter for parameter, with BitNet-1.58, and it increases as the model scales up.

      So yes, the perplexity issues with post-training quantization are real, but if you train the quantization in from the start it comes out better than FP.

      Which makes sense through the lens of the superposition hypothesis, where the weights are really representing a hyperdimensional virtual vector space. If the weights have too much precision, competing features might compromise on fuzzier representations instead of restructuring the virtual network onto better-matching nodes.

      Looking at the data so far, constrained weight precision is probably going to be the future of pretraining within a model generation or two.
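
      A minimal sketch of what "training the quantization in from the start" looks like, in the spirit of BitNet b1.58's absmean ternary weights with a straight-through estimator. This is an illustration of the idea, not the paper's reference implementation:

      ```python
      import torch
      import torch.nn as nn
      import torch.nn.functional as F

      class TernaryLinear(nn.Linear):
          """Keeps full-precision latent weights for the optimizer, but every
          forward pass computes with a ternary {-1, 0, +1} * scale version."""

          def forward(self, x: torch.Tensor) -> torch.Tensor:
              w = self.weight
              scale = w.abs().mean().clamp(min=1e-8)            # absmean scaling
              w_q = torch.clamp(torch.round(w / scale), -1, 1) * scale
              w_q = w + (w_q - w).detach()                      # straight-through estimator
              return F.linear(x, w_q, self.bias)

      # Toy usage: the layer trains normally while only ever computing with ternary weights.
      layer = TernaryLinear(16, 4)
      opt = torch.optim.AdamW(layer.parameters(), lr=1e-3)
      x, target = torch.randn(8, 16), torch.randn(8, 4)
      loss = F.mse_loss(layer(x), target)
      loss.backward()
      opt.step()
      ```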