The Kaitchup – AI on a Budget
Run a 7.7x Smaller Mixtral-8x7B on Your GPU with AQLM 2-bit Quantization
newsletter.kaitchup.com
Benjamin Marie
Feb 22
This thread is only visible to paid subscribers of The Kaitchup – AI on a Budget
Comments on this post are for paid subscribers