DC-SSDAE: Deep Compression Single-Step Diffusion Autoencoder
2025-11-01
We introduce DC-SSDAE, a novel autoencoder framework for efficient high-resolution image generation. By integrating a deep compression encoder for high-ratio spatial reduction, a single-step diffusion decoder for fast reconstruction, and equilibrium matching for stable generative training, DC-SSDAE achieves compact latent representations while preserving perceptual quality. Trained on ImageNet-1K, it replaces flow matching with a time-invariant equilibrium gradient, enabling flexible gradient-descent sampling. This combination addresses optimization challenges in high-compression settings, offering potential speedups in diffusion model pipelines without adversarial losses. The purpose of this project is to show that this architecture can hold its own among state-of-the-art VAE models, and to offer a strong, stable codebase for other VAE researchers to build upon.
932 words
|
5 minutes
Entering The Era of 1-bit AI
2025-10-18
It is obvious that the increasing size of LLMs has created enormous model deployment and energy consumption problems.
1234 words
|
6 minutes
Equilibrium beats Flow: Better Way to Train Diffusion Model
2025-10-11
From now on, when training a diffusion model, use Equilibrium Matching (EqM) to learn the equilibrium (static) gradient of an implicit energy landscape, instead of Flow Matching, which learns a non-equilibrium velocity field that varies over time.
318 words
|
2 minutes
The Phoenix of Neural Networks: Training Sparse Networks from Scratch
2025-10-02
AIs today are still so Dense! I mean it metaphorically and literally.
2789 words
|
14 minutes