Citation in APA style (7th ed.)

Liang, P., Qiao, L., Shi, Y., Zheng, H., Tang, Y., & Li, D. (2025). Memory-efficient tensor parallelism for long-sequence Transformer training. Frontiers of Information Technology & Electronic Engineering, 26(5), 770–787. https://doi.org/10.1631/FITEE.2400602

Citation in Chicago style (17th ed.)

Liang, Peng, Linbo Qiao, Yanqi Shi, Hao Zheng, Yu Tang, and Dongsheng Li. "Memory-Efficient Tensor Parallelism for Long-Sequence Transformer Training." Frontiers of Information Technology & Electronic Engineering 26, no. 5 (2025): 770–787. https://doi.org/10.1631/FITEE.2400602.

Citation in MLA style (9th ed.)

Liang, Peng, et al. "Memory-Efficient Tensor Parallelism for Long-Sequence Transformer Training." Frontiers of Information Technology & Electronic Engineering, vol. 26, no. 5, 2025, pp. 770–787, https://doi.org/10.1631/FITEE.2400602.

Note: These citations are generated automatically and may not fully conform to the citation rules.