Turning the Tide: Cross-Architecture Distillation for Diffusion Large Language Models

Paper Abstract

Diffusion large language models (dLLMs) offer parallel decoding and bidirectional context, but state-of-the-art dLLMs require billions of parameters for competitive performance. While existing distillation methods for dLLMs reduce inference steps within a single architecture, none address cross-architecture knowledge transfer, in which the teacher and student differ in architecture, attention mechanism, and tokenizer. We present TIDE, the first framework for cross-architecture dLLM distillation, comprising three modular components: (1) TIDAL, which jointly modulates distillation strength across training progress and diffusion timestep to account for the teacher's noise-dependent reliability; (2) CompDemo, which enriches the teacher's context via complementary mask splitting to improve predictions under heavy masking; and (3) Reverse CALM, a cross-tokenizer objective that inverts chunk-level likelihood matching, yielding bounded gradients and dual-end noise filtering. Distilling 8B dense and 16B MoE teachers into a 0.6B student via two heterogeneous pipelines outperforms the baseline by an average of 1.53 points across eight benchmarks, yielding notable gains in code generation, where HumanEval scores reach 48.78 compared to 32.3 for the AR baseline.

Turning the Tide: Cross-Architecture Distillation for Diffusion Large Language Models introduces a new framework designed to shrink massive, high-performing diffusion large language models (dLLMs) into much smaller, efficient versions. While previous distillation methods focused on compressing models within the same architecture, this research addresses the more complex challenge of "cross-architecture" distillation, where the teacher and student models differ in their internal structure, attention mechanisms, and tokenizers.

Bridging the Gap Between Models

Distilling knowledge from a large teacher model into a small student model is difficult when the two are fundamentally different. The researchers identified three primary barriers: the teacher's reliability changes depending on the diffusion timestep; heavy masking during training leaves the teacher too little context to provide useful predictions; and mismatched tokenizers make it mathematically difficult to align the models' output distributions. The TIDE framework addresses these issues by integrating three specialized, modular components that work together to guide the student model's learning process.

How the Framework Works

The TIDE framework uses three core components to manage the transfer of knowledge:

  • TIDAL (Scheduling): This component acts as a "pacemaker" for the training process. It adjusts the distillation strength based on both the training progress and the diffusion timestep. By doing this, the student model learns from the teacher only when the teacher's signals are most reliable, avoiding the noise that dominates at high masking levels.
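The paper does not publish TIDAL's exact formula in this summary, but the idea of jointly modulating distillation strength can be sketched with a toy schedule. Everything below (the ramp exponent, the hard timestep cutoff, the multiplicative combination) is an illustrative assumption, not the paper's formulation:

```python
def tidal_weight(progress: float, timestep: float,
                 ramp: float = 5.0, cutoff: float = 0.8) -> float:
    """Toy distillation-strength schedule (illustrative, not the paper's).

    progress: training progress in [0, 1] (1 = end of training).
    timestep: diffusion timestep in [0, 1] (1 = fully masked input).
    Returns a weight in [0, 1] multiplied into the distillation loss.
    """
    # Ramp distillation strength up as training progresses.
    progress_term = 1.0 - (1.0 - progress) ** ramp
    # Down-weight heavily masked timesteps, where the teacher is least reliable;
    # beyond the cutoff the teacher signal is ignored entirely.
    if timestep >= cutoff:
        noise_term = 0.0
    else:
        noise_term = 1.0 - timestep / cutoff
    return progress_term * noise_term
```

The key property this sketch captures is the joint dependence: a teacher signal at a heavily masked timestep is discounted no matter how far training has progressed.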

  • CompDemo (Context): To solve the problem of limited context, this component enriches the teacher's input. It splits the masked tokens into subsets, allowing the teacher to see more information during its forward passes. This provides the student with clearer, more robust targets for learning.
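Complementary mask splitting can be illustrated with a small helper: partition the masked positions into two disjoint subsets, so that in each teacher pass one subset is revealed as extra context while the other is predicted. The function below is a minimal sketch under that reading of the summary; the paper's actual splitting strategy may differ:

```python
import random

def complementary_splits(masked_positions, seed: int = 0):
    """Partition masked positions into two complementary subsets (illustrative).

    In each of two teacher passes, one subset can be revealed as context
    while the teacher predicts the other, so every masked position is
    predicted with the complementary half visible.
    """
    rng = random.Random(seed)
    positions = list(masked_positions)
    rng.shuffle(positions)
    half = len(positions) // 2
    subset_a = sorted(positions[:half])
    subset_b = sorted(positions[half:])
    return subset_a, subset_b
```

Because the subsets are disjoint and jointly cover all masked positions, each teacher pass operates under roughly half the original masking ratio, which is the intuition behind "enriching the teacher's context."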

  • Reverse CALM (Output Alignment): This component handles the challenge of mismatched tokenizers. Rather than mapping individual tokens directly, it aligns text chunks between the two models. By inverting the direction of chunk-level likelihood matching, it yields bounded gradients and filters noise at both ends, producing a more stable training process and avoiding the gradient explosions that often occur when models are poorly aligned.
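The core trick behind chunk-level matching is that two different tokenizers can still agree on character-level spans of the underlying text. The helper below is a toy illustration of that alignment step only; the hypothetical `chunk_logprob` function and the character-span convention are assumptions for illustration, and the reverse objective itself (the part that yields bounded gradients) is not reproduced here:

```python
def chunk_logprob(token_spans, token_logprobs, chunk_start, chunk_end):
    """Sum log-probs of tokens whose character spans lie inside a text chunk.

    token_spans:    list of (start, end) character offsets, one per token,
                    as produced by a model's own tokenizer.
    token_logprobs: per-token log-probabilities from the same model.
    Chunk-level sums are comparable across models even when their
    tokenizations of the chunk differ.
    """
    total = 0.0
    for (start, end), lp in zip(token_spans, token_logprobs):
        if start >= chunk_start and end <= chunk_end:
            total += lp
    return total
```

Because each model sums its own tokens' log-probs over the same character chunk, the teacher and student likelihoods become comparable without any token-to-token mapping between vocabularies.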

Notable Performance Gains

The researchers tested the framework by distilling 8B dense and 16B Mixture-of-Experts (MoE) teacher models into a compact 0.6B student model. The results showed that the distilled models significantly outperformed the non-distilled baseline across eight different benchmarks, including tasks in reasoning, knowledge, and commonsense. Most notably, the distilled models showed a major improvement in code generation, with HumanEval scores reaching 48.78 compared to 32.3 for the autoregressive baseline.

Key Takeaways

The study demonstrates that cross-architecture distillation is not only possible but highly effective for dLLMs. The modular design let the researchers tailor the distillation strategy to the specific needs of different model pipelines. These results suggest that large, complex diffusion models can be compressed into smaller, faster, more deployable versions without significant loss of performance.
