We have successfully applied W4A8KV4 quantization with QoQ to an MLA-based model (similar in architecture to DeepSeek-V2) that we implemented from scratch. We then evaluated the quantized model on perplexity as well as a set of downstream benchmarks, and the accuracy held up well.
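For context, here is a minimal fake-quantization sketch of what W4A8 means in our simulation setup: 4-bit per-group weights and 8-bit per-token activations. This is not the QoQ/lmquant API; the group size, symmetric quantization, and layer shapes below are our assumptions for illustration.

```python
import torch

def fake_quant_weight_w4(w: torch.Tensor, group_size: int = 128) -> torch.Tensor:
    """Simulated symmetric 4-bit per-group weight quantization."""
    out_features, in_features = w.shape
    w_grouped = w.reshape(out_features, in_features // group_size, group_size)
    # Per-group scale mapping the max magnitude to the int4 level 7.
    scale = w_grouped.abs().amax(dim=-1, keepdim=True) / 7.0
    q = torch.clamp(torch.round(w_grouped / scale), -8, 7)
    return (q * scale).reshape(out_features, in_features)

def fake_quant_act_a8(x: torch.Tensor) -> torch.Tensor:
    """Simulated symmetric 8-bit per-token activation quantization."""
    scale = x.abs().amax(dim=-1, keepdim=True) / 127.0
    q = torch.clamp(torch.round(x / scale), -128, 127)
    return q * scale

# Example: simulate a W4A8 linear projection (shapes are placeholders).
x = torch.randn(1, 16, 512)   # (batch, seq, hidden)
w = torch.randn(1024, 512)    # (out_features, in_features)
y = fake_quant_act_a8(x) @ fake_quant_weight_w4(w).t()
```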
Now we would like to obtain actual inference speedups with this W4A8KV4 model, not just simulated quantization accuracy.
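For reference, this is roughly how we plan to measure per-token decode latency to quantify any speedup. The model interface here is a placeholder for our from-scratch MLA implementation (assumed to return logits of shape `(batch, seq, vocab)` and to run on CUDA), not a specific library API.

```python
import time
import torch

@torch.inference_mode()
def measure_decode_latency(model, input_ids: torch.Tensor, new_tokens: int = 128) -> float:
    """Greedy-decode `new_tokens` tokens and return average seconds per token."""
    model.eval()
    generated = input_ids
    torch.cuda.synchronize()
    start = time.perf_counter()
    for _ in range(new_tokens):
        logits = model(generated)  # placeholder call: (batch, seq, vocab) logits
        next_token = logits[:, -1, :].argmax(dim=-1, keepdim=True)
        generated = torch.cat([generated, next_token], dim=-1)
    torch.cuda.synchronize()
    return (time.perf_counter() - start) / new_tokens
```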
Any help or suggestions would be greatly appreciated. We look forward to your replies!