BirdMoE: Reducing Communication Costs for Mixture-of-Experts Training Using Load-Aware Bi-random Quantization
Mixture-of-Experts (MoE) model parallelism is prevalent in training Large Language Models (e.g., ChatGPT). However, the intensive all-to-all collective communication of the MoE layers' intermediate computation results substantially degrades MoE training efficiency. In this paper, we propose BirdMoE...
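Because the record's abstract is truncated, the sketch below only illustrates the MoE all-to-all token exchange it identifies as the bottleneck, assuming a PyTorch expert-parallel setup with balanced expert loads; the function name `dispatch_tokens_to_experts` and the buffer layout are illustrative assumptions, and BirdMoE's load-aware bi-random quantization itself is not reproduced here.

```python
# Minimal sketch of the MoE all-to-all dispatch step; illustrative only,
# not BirdMoE's method. Assumes torch.distributed is initialized.
import torch
import torch.distributed as dist


def dispatch_tokens_to_experts(tokens_per_rank: list[torch.Tensor]) -> list[torch.Tensor]:
    """Exchange routed tokens between expert-parallel ranks.

    tokens_per_rank[i] holds the tokens this rank routes to the experts
    hosted on rank i. After the collective, received[i] holds the tokens
    rank i routed to the experts hosted here.
    """
    # Assumption: balanced expert loads, so send and receive buffers
    # have matching shapes in each direction.
    received = [torch.empty_like(t) for t in tokens_per_rank]
    # This collective moves the MoE layer's intermediate activations
    # across all ranks; it is the traffic the paper targets.
    dist.all_to_all(received, tokens_per_rank)
    return received
```

Run under `torchrun` after `dist.init_process_group`; each MoE layer performs such an exchange twice per forward pass (dispatch and combine), which is why this traffic dominates communication cost.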
| Published in: | 2025 62nd ACM/IEEE Design Automation Conference (DAC), pp. 1-7 |
|---|---|
| Main Authors: | , , , , , , |
| Format: | Conference Proceeding |
| Language: | English |
| Published: | IEEE, 22.06.2025 |
| Subjects: | |
| Online Access: | Get full text |