Qualcomm Leads Charge With On-Device AI

Powering AI On Mobile Devices Requires New Math And Qualcomm Is Pioneering It

Qualcomm got Stable Diffusion running on a handheld device without internet connectivity by optimizing the AI model to use more efficient INT8 (8-bit integer) precision operations rather than the larger, more complex FP32 (32-bit floating point) operations used to train it. For inference (running the trained model to generate an image from a text prompt), the Stable Diffusion model was quantized, shrinking it to a fraction of its original size with little to no loss in accuracy, while delivering significantly better performance on a low-power Snapdragon AI accelerator (the Qualcomm Hexagon Processor) and requiring far less storage and memory bandwidth.
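To see why quantization cuts storage and bandwidth so sharply, consider a minimal sketch of per-tensor symmetric INT8 quantization in Python with NumPy. This is an illustrative toy, not Qualcomm's actual toolchain; the function names and the simple max-based scale are assumptions for demonstration only.

```python
import numpy as np

def quantize_int8(weights: np.ndarray):
    """Map FP32 weights to INT8 using a single symmetric scale factor.

    Illustrative only: production quantizers (e.g. per-channel, learned
    ranges) are more sophisticated than this max-based scheme.
    """
    scale = np.max(np.abs(weights)) / 127.0
    q = np.clip(np.round(weights / scale), -128, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an FP32 approximation of the original weights."""
    return q.astype(np.float32) * scale

# Hypothetical weight tensor standing in for a model layer.
rng = np.random.default_rng(0)
w = rng.standard_normal(1000).astype(np.float32)

q, s = quantize_int8(w)
w_hat = dequantize(q, s)

# INT8 storage is one quarter of FP32 storage for the same tensor.
print(q.nbytes / w.nbytes)
# Worst-case per-weight error is bounded by half the scale step.
print(float(np.max(np.abs(w - w_hat))))
```

The 4x storage reduction translates directly into lower memory bandwidth per inference, which is why INT8 execution suits a power-constrained mobile accelerator.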