- cross-posted to:
- geekfrance@lemmy.world
- stablediffusion@lemmy.ml
Stability AI released three new 3b models for coding:
- stablecode-instruct-alpha-3b (context length 4k)
- stablecode-completion-alpha-3b-4k (context length 4k)
- stablecode-completion-alpha-3b (context length 16k)
I haven’t tried any of them yet, since I’m waiting for GGML support in llama.cpp, but the 16k model in particular looks interesting. If anyone wants to share their experience with it, I’d be happy to hear it!
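For anyone who wants to try them in the meantime without GGML, here’s a minimal transformers sketch. It assumes the standard `AutoModelForCausalLM` loading path works for these checkpoints and that `###Instruction`/`###Response` is the right prompt format for the instruct model (both based on my reading of the model cards, so treat them as assumptions):

```python
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

# Model ID from the Stability AI release; swap in one of the completion
# models (e.g. stabilityai/stablecode-completion-alpha-3b) for raw completion.
model_id = "stabilityai/stablecode-instruct-alpha-3b"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.float16).cuda()

# Assumed instruct prompt format; the completion models just take plain code.
prompt = "###Instruction\nWrite a Python function that checks if a number is prime.###Response\n"

inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
tokens = model.generate(**inputs, max_new_tokens=128, temperature=0.2, do_sample=True)
print(tokenizer.decode(tokens[0], skip_special_tokens=True))
```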
I’ve managed to get it running in koboldcpp; I had to add `--forceversion 405` because the model wasn’t being detected properly. Even with q5_1 I was getting an impressive 15 T/s, and the code it produced actually seemed decent. This might be a really good candidate for fine-tuning on large datasets and passing in massive contexts: basically entire small repos, or at least several full files.
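For reference, the invocation was along these lines (the model filename is just an example; the key part is the `--forceversion 405` override, plus `--contextsize` if you want the 16k variant’s full window):

```
python koboldcpp.py --model stablecode-completion-alpha-3b.ggmlv1.q5_1.bin --forceversion 405 --contextsize 16384
```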
Odd that they chose GPT-NeoX as their architecture; I think only CTranslate2 can offload those? I had trouble getting the GPTQ version running in AutoGPTQ… maybe Hugging Face’s TGI would work better.
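If anyone wants to go the CTranslate2 route, here’s a minimal sketch, assuming its GPT-NeoX converter accepts these checkpoints (I haven’t verified that):

```python
# One-time conversion (CLI), quantizing to int8:
#   ct2-transformers-converter --model stabilityai/stablecode-completion-alpha-3b \
#       --output_dir stablecode-ct2 --quantization int8
import ctranslate2
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("stabilityai/stablecode-completion-alpha-3b")
generator = ctranslate2.Generator("stablecode-ct2", device="cuda")  # runs on GPU

prompt = "def fibonacci(n):\n"
tokens = tokenizer.convert_ids_to_tokens(tokenizer.encode(prompt))
result = generator.generate_batch([tokens], max_length=64)

# Decode the generated token IDs back to text (prompt is included by default).
print(tokenizer.decode(result[0].sequences_ids[0]))
```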