Researchers at ByteDance have published 1.58-bit FLUX, a new approach to compressing AI image generation models that aims to address the deployment challenges of current text-to-image systems. In the paper, the researchers state that while many popular text-to-image models have “demonstrated remarkable generative capabilities,” their immense parameter counts and high memory requirements pose challenges for deployment, particularly on resource-constrained devices such as mobile platforms. To overcome this, the team quantized nearly all of FLUX’s transformer weights to just three values, {-1, 0, +1}, reducing model storage by roughly 8x. “This work introduced 1.58-bit FLUX, in which 99.5% of the transformer parameters are quantized to 1.58 bits.”
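The “1.58 bits” figure comes from information theory: a weight restricted to three values carries log2(3) ≈ 1.58 bits. To make the idea concrete, here is a minimal sketch of ternary weight quantization in PyTorch. It is illustrative only, not ByteDance’s actual method: the per-tensor scaling rule (mean absolute value) is borrowed from the BitNet b1.58 recipe, and the function names are hypothetical.

```python
import torch

def ternary_quantize(w: torch.Tensor, eps: float = 1e-5):
    """Quantize a weight tensor to {-1, 0, +1} plus one per-tensor scale,
    so each weight needs only log2(3) ~ 1.58 bits of storage.

    Illustrative sketch: the mean-absolute-value scale follows the
    BitNet b1.58 recipe and may differ from what 1.58-bit FLUX uses.
    """
    scale = w.abs().mean().clamp(min=eps)    # per-tensor scale factor
    q = torch.round(w / scale).clamp(-1, 1)  # snap each weight to {-1, 0, +1}
    return q, scale

def dequantize(q: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    # Reconstruct an approximation of the original weights.
    return q * scale

# Usage: quantize a random weight matrix and measure reconstruction error.
w = torch.randn(4, 4)
q, s = ternary_quantize(w)
print(q)                                    # entries are only -1.0, 0.0, or 1.0
print((w - dequantize(q, s)).abs().mean())  # mean absolute error
```

Because each quantized weight fits in under 2 bits instead of the 16 used by half-precision floats, this style of compression is where the large storage savings come from.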