HBFP is a hybrid Block Floating-Point (BFP) / Floating-Point (FP) number representation for DNN training, introduced by the ColTraIn (Co-located DNN Training and Inference) team of PARSA and MLO at EPFL. HBFP offers the best of both worlds: the high accuracy of floating-point at the superior hardware density of fixed-point, achieved by performing all dot products in BFP and all other operations in FP32. For a wide variety of models, HBFP matches floating-point's accuracy while enabling hardware implementations that deliver up to 8.5x higher throughput. This repository is for ongoing research on training DNNs with HBFP.
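
To illustrate the idea, the following is a minimal sketch (not the repository's actual implementation) of block floating-point quantization and an HBFP-style dot product: each block of values shares a single exponent and keeps short fixed-point mantissas, the dot product runs on the quantized values, and accumulation stays in FP32. The function names `bfp_quantize` and `hbfp_dot` and the 8-bit mantissa default are illustrative assumptions.

```python
# Illustrative sketch of BFP quantization with a shared per-block exponent.
import numpy as np

def bfp_quantize(block: np.ndarray, mantissa_bits: int = 8) -> np.ndarray:
    """Quantize a 1-D block to block floating-point with a shared exponent."""
    max_abs = np.max(np.abs(block))
    if max_abs == 0.0:
        return np.zeros_like(block, dtype=np.float32)
    # Shared exponent chosen so the largest element fits in the mantissa range.
    shared_exp = np.floor(np.log2(max_abs))
    scale = 2.0 ** (shared_exp - (mantissa_bits - 1))
    # Round each element to an integer mantissa (stored back as float here
    # purely for readability), saturating at the representable range.
    mantissas = np.clip(np.round(block / scale),
                        -(2 ** (mantissa_bits - 1)),
                        2 ** (mantissa_bits - 1) - 1)
    return (mantissas * scale).astype(np.float32)

def hbfp_dot(a: np.ndarray, b: np.ndarray, mantissa_bits: int = 8) -> float:
    """Dot product on BFP-quantized operands; accumulation kept in FP32."""
    return float(np.dot(bfp_quantize(a, mantissa_bits),
                        bfp_quantize(b, mantissa_bits)))

# Example: compare the BFP dot product against the FP32 reference.
rng = np.random.default_rng(0)
x, y = rng.standard_normal(64), rng.standard_normal(64)
print(hbfp_dot(x, y), float(np.dot(x, y)))
```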