#ai #localllm #selfhostedai #quantization

7+ main precision formats used in AI:
▪️ FP32
▪️ FP16
▪️ BF16
▪️ FP8 (E4M3 / E5M2)
▪️ FP4
▪️ INT8/INT4
▪️ 2-bit (ternary/binary quantization)

General trend: higher precision for training, lower precision for inference.

Save the list and learn more about these formats here: huggingface.co/posts/Kseniase…
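To make the training-vs-inference trade-off concrete, here is a minimal sketch of symmetric INT8 quantization in plain Python. The function names and the per-tensor scale scheme are illustrative assumptions, not any specific library's API; real stacks (e.g. GGUF or PyTorch quantization) use block-wise scales and other refinements.

```python
def quantize_int8(values):
    """Map FP32 values to INT8 codes in [-127, 127] with one per-tensor scale."""
    max_abs = max(abs(v) for v in values)
    scale = max_abs / 127 if max_abs else 1.0
    codes = [max(-127, min(127, round(v / scale))) for v in values]
    return codes, scale

def dequantize_int8(codes, scale):
    """Recover approximate FP32 values from the INT8 codes."""
    return [c * scale for c in codes]

# Toy weight tensor: each value is stored in 1 byte instead of 4,
# at the cost of a small rounding error bounded by scale / 2.
weights = [0.1, -0.52, 0.73, 1.3, -1.29]
codes, scale = quantize_int8(weights)
recovered = dequantize_int8(codes, scale)
max_err = max(abs(w - r) for w, r in zip(weights, recovered))
print(codes)          # small integers, one byte each
print(max_err, scale) # rounding error stays below the scale
```

The same idea extends to INT4 (range [-7, 7]) and the 2-bit ternary case (codes in {-1, 0, 1}), which is why error grows as the bit width shrinks.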
Going to check out the TOR relays you use. Thanks for asking this question, Xavier.
#xmr #monero