APA (7th ed.) Citation

Liang, P., Qiao, L., Shi, Y., Zheng, H., Tang, Y., & Li, D. (2025). Memory-efficient tensor parallelism for long-sequence Transformer training. Frontiers of Information Technology & Electronic Engineering, 26(5), 770–787. https://doi.org/10.1631/FITEE.2400602

Chicago (17th ed.) Citation

Liang, Peng, Linbo Qiao, Yanqi Shi, Hao Zheng, Yu Tang, and Dongsheng Li. "Memory-Efficient Tensor Parallelism for Long-Sequence Transformer Training." Frontiers of Information Technology & Electronic Engineering 26, no. 5 (2025): 770–787. https://doi.org/10.1631/FITEE.2400602.

MLA (9th ed.) Citation

Liang, Peng, et al. "Memory-Efficient Tensor Parallelism for Long-Sequence Transformer Training." Frontiers of Information Technology & Electronic Engineering, vol. 26, no. 5, 2025, pp. 770–787, https://doi.org/10.1631/FITEE.2400602.

Warning: These citations may not be 100% accurate; verify them against the published article before use.