This article is organized into three chapters that give an accessible deep dive into large-model techniques:
1. Comparing the technical details of large language models such as GPT, LLaMA, ChatGLM, and Falcon
Studying large language models such as LLaMA, ChatGLM, and Falcon reveals that their technical implementations share a great deal while also differing in distinctive ways. For example, the choice of tokenizer is often tailored to a model's characteristics and target applications; implementations of positional encoding also vary from model to model, with a non-negligible effect on performance. Likewise, the choice of layer normalization and activation function directly affects training speed and accuracy.
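To see the tokenizer point concretely, here is a minimal sketch (my illustration, not the article's code; it assumes the Hugging Face `transformers` package and network access, and uses two well-known vocabularies purely as examples) showing how different tokenizers split the same sentence:

```python
# A minimal sketch: the same text yields different subword splits
# depending on the tokenizer (assumes `pip install transformers`).
from transformers import AutoTokenizer

text = "Large language models tokenize text differently."

# GPT-2 uses byte-level BPE; BERT uses WordPiece. The vocabularies
# here are illustrative stand-ins for the models compared above.
for name in ["gpt2", "bert-base-uncased"]:
    tok = AutoTokenizer.from_pretrained(name)
    print(name, tok.tokenize(text))
```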
![Figure 1](https://nimg.ws.126.net/?url=http%3A%2F%2Fdingyue.ws.126.net%2F2024%2F0601%2Fbc944721j00sedt95000id000hs009tg.jpg&thumbnail=960x2147483647&quality=75&type=jpg)
2. An overview of distributed training techniques for large language models
Distributed techniques play a critical role in training large language models. Data parallelism has multiple workers process different subsets of the data simultaneously, significantly increasing training throughput (a minimal sketch follows). Tensor model parallelism and pipeline parallelism instead distribute different parts of the model itself, further improving the utilization of compute resources, and 3D parallelism combines all three of these dimensions of distribution.
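The sketch below shows the data-parallel idea with PyTorch's DistributedDataParallel (a generic illustration, not this article's code; the toy linear model, dimensions, and single-node `torchrun` launch are all assumptions):

```python
# Minimal data-parallel sketch with PyTorch DDP.
# Launch with: torchrun --nproc_per_node=<num_gpus> train.py
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

dist.init_process_group(backend="nccl")          # one process per GPU
local_rank = int(os.environ["LOCAL_RANK"])       # set by torchrun
torch.cuda.set_device(local_rank)

model = torch.nn.Linear(1024, 1024).cuda()       # toy stand-in for an LLM
model = DDP(model, device_ids=[local_rank])
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)

for step in range(10):
    # In practice each rank reads a different data shard (e.g. via a
    # DistributedSampler); random toy batches stand in for that here.
    x = torch.randn(32, 1024, device="cuda")
    loss = model(x).pow(2).mean()
    opt.zero_grad()
    loss.backward()      # DDP all-reduces gradients across ranks here
    opt.step()

dist.destroy_process_group()
```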
Meanwhile, the Zero Redundancy Optimizer (ZeRO) and its CPU-offloading extension ZeRO-Offload accelerate training further by reducing memory footprint and improving compute efficiency. Mixed precision training balances compute speed against memory usage by combining computations at different numeric precisions. Activation recomputation trades extra compute for memory, and attention-level optimizations such as Flash Attention and Paged Attention further raise training and inference efficiency.
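The mixed-precision recipe is easy to sketch with PyTorch's automatic mixed precision (again a generic example, not the article's own code; it assumes a CUDA GPU): run forward and backward in half precision while scaling the loss so small fp16 gradients don't underflow.

```python
# Mixed precision training sketch with torch.cuda.amp.
import torch

model = torch.nn.Linear(1024, 1024).cuda()       # toy model
opt = torch.optim.AdamW(model.parameters(), lr=1e-4)
scaler = torch.cuda.amp.GradScaler()             # dynamic loss scaling

for step in range(10):
    x = torch.randn(32, 1024, device="cuda")
    with torch.cuda.amp.autocast():              # half-precision forward
        loss = model(x).pow(2).mean()
    opt.zero_grad()
    scaler.scale(loss).backward()                # scale to avoid fp16 underflow
    scaler.step(opt)                             # unscales grads, then steps
    scaler.update()                              # adapts the scale factor
```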
![Figure 2](https://nimg.ws.126.net/?url=http%3A%2F%2Fdingyue.ws.126.net%2F2024%2F0601%2F88ea8b36j00sedt96000id000hs008kg.jpg&thumbnail=960x2147483647&quality=75&type=jpg)
3. Exploring parameter-efficient fine-tuning techniques for large language models
When fine-tuning large language models, using parameters efficiently becomes key. Methods such as Prompt Tuning, Prefix Tuning, and Adapters customize a model by updating only a small fraction of its parameters rather than all of them. LLaMA-Adapter and LoRA refine this idea further, letting a model adapt quickly to new tasks and domains while retaining strong performance.
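The payoff is easy to quantify: freeze the backbone, attach a small trainable module, and count what remains trainable. A toy sketch (the bottleneck adapter and all sizes are hypothetical, purely for illustration):

```python
# Sketch: freeze a backbone and compare trainable parameter counts.
import torch.nn as nn

backbone = nn.TransformerEncoderLayer(d_model=512, nhead=8)  # stand-in layer
for p in backbone.parameters():
    p.requires_grad = False                      # the full model stays frozen

adapter = nn.Sequential(                         # tiny bottleneck (illustrative)
    nn.Linear(512, 16), nn.ReLU(), nn.Linear(16, 512)
)

count = lambda m: sum(p.numel() for p in m.parameters() if p.requires_grad)
print(f"backbone trainable: {count(backbone)}, adapter trainable: {count(adapter)}")
```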
![Figure 3](https://nimg.ws.126.net/?url=http%3A%2F%2Fdingyue.ws.126.net%2F2024%2F0601%2Fd355819aj00sedt96000gd000hs00a9g.jpg&thumbnail=960x2147483647&quality=75&type=jpg)
1. Details of large language models
1.0 Transformers and LLMs
![Figure 4](https://nimg.ws.126.net/?url=http%3A%2F%2Fdingyue.ws.126.net%2F2024%2F0601%2F00886a29j00sedt980024d000hs00k0g.jpg&thumbnail=960x2147483647&quality=75&type=jpg)
1.1 Model architecture
![Figure 5](https://nimg.ws.126.net/?url=http%3A%2F%2Fdingyue.ws.126.net%2F2024%2F0601%2F941d4b05j00sedt980026d000hs00k0g.jpg&thumbnail=960x2147483647&quality=75&type=jpg)
1.2 Training objectives
![Figure 6](https://nimg.ws.126.net/?url=http%3A%2F%2Fdingyue.ws.126.net%2F2024%2F0601%2F8b55f9c1j00sedt99000zd000hs00a0g.jpg&thumbnail=960x2147483647&quality=75&type=jpg)
1.3 Tokenizer
![Figure 7](https://nimg.ws.126.net/?url=http%3A%2F%2Fdingyue.ws.126.net%2F2024%2F0601%2F2d13318fj00sedt99003rd000hs00u0g.jpg&thumbnail=960x2147483647&quality=75&type=jpg)
1.4 Positional encoding
![Figure 8](https://nimg.ws.126.net/?url=http%3A%2F%2Fdingyue.ws.126.net%2F2024%2F0601%2Fd3ee48b5j00sedt9a001rd000hs00k0g.jpg&thumbnail=960x2147483647&quality=75&type=jpg)
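Several of the models compared here (LLaMA and Falcon among them) use rotary position embeddings (RoPE). Below is a minimal sketch of the rotation using the rotate-half pairing convention (one of several equivalent layouts; the tensor shapes are illustrative):

```python
# Minimal rotary position embedding (RoPE) sketch.
# x: (seq_len, n_heads, head_dim); head_dim must be even.
import torch

def rope(x: torch.Tensor, base: float = 10000.0) -> torch.Tensor:
    seq_len, _, head_dim = x.shape
    half = head_dim // 2
    # Per-pair rotation frequencies: theta_i = base^(-2i/d)
    freqs = base ** (-torch.arange(half, dtype=torch.float32) / half)
    angles = torch.arange(seq_len, dtype=torch.float32)[:, None] * freqs
    cos, sin = angles.cos()[:, None, :], angles.sin()[:, None, :]
    x1, x2 = x[..., :half], x[..., half:]
    # Rotate each (x1, x2) pair by its position-dependent angle.
    return torch.cat([x1 * cos - x2 * sin, x1 * sin + x2 * cos], dim=-1)

q = torch.randn(128, 8, 64)
print(rope(q).shape)  # torch.Size([128, 8, 64])
```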
1.5 Layer normalization
![Figure 9](https://nimg.ws.126.net/?url=http%3A%2F%2Fdingyue.ws.126.net%2F2024%2F0601%2F6844788cj00sedt9a0041d000hs0140g.jpg&thumbnail=960x2147483647&quality=75&type=jpg)
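As one concrete point in this design space, LLaMA replaces the standard LayerNorm with RMSNorm, which drops mean-centering and the bias term. A minimal sketch:

```python
# Minimal RMSNorm sketch (the LLaMA-style layer normalization variant).
import torch
import torch.nn as nn

class RMSNorm(nn.Module):
    def __init__(self, dim: int, eps: float = 1e-6):
        super().__init__()
        self.weight = nn.Parameter(torch.ones(dim))  # learned scale, no bias
        self.eps = eps

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Normalize by the root mean square instead of mean and variance.
        rms = x.pow(2).mean(dim=-1, keepdim=True).add(self.eps).rsqrt()
        return x * rms * self.weight

x = torch.randn(2, 16, 512)
print(RMSNorm(512)(x).shape)  # torch.Size([2, 16, 512])
```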
1.6 Activation functions
![Figure 10](https://nimg.ws.126.net/?url=http%3A%2F%2Fdingyue.ws.126.net%2F2024%2F0601%2F79dbbea4j00sedt9b002hd000hs00k0g.jpg&thumbnail=960x2147483647&quality=75&type=jpg)
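Among activation choices, LLaMA-style feed-forward layers use SwiGLU, a SiLU-gated linear unit. A minimal sketch of the gated FFN (the hidden size is an arbitrary example):

```python
# SwiGLU feed-forward sketch (the gated activation used in LLaMA-style FFNs).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SwiGLUFFN(nn.Module):
    def __init__(self, dim: int, hidden: int):
        super().__init__()
        self.w_gate = nn.Linear(dim, hidden, bias=False)
        self.w_up = nn.Linear(dim, hidden, bias=False)
        self.w_down = nn.Linear(hidden, dim, bias=False)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # silu(W_gate x) acts as a learned gate on the parallel up-projection.
        return self.w_down(F.silu(self.w_gate(x)) * self.w_up(x))

print(SwiGLUFFN(512, 1376)(torch.randn(2, 512)).shape)  # torch.Size([2, 512])
```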
1.7 Multi-query Attention and Grouped-query Attention
![Figure 11](https://nimg.ws.126.net/?url=http%3A%2F%2Fdingyue.ws.126.net%2F2024%2F0601%2Fc54ea86ej00sedt9c0049d000hs0140g.jpg&thumbnail=960x2147483647&quality=75&type=jpg)
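The core idea of multi-query (MQA) and grouped-query attention (GQA) is that several query heads share one key/value head, shrinking the KV cache. A schematic single-pass sketch (the head counts are illustrative):

```python
# Grouped-query attention sketch: n_q query heads share n_kv KV heads.
# With n_kv == 1 this degenerates to multi-query attention (MQA).
import torch
import torch.nn.functional as F

batch, seq, n_q, n_kv, d = 2, 128, 8, 2, 64
q = torch.randn(batch, n_q, seq, d)
k = torch.randn(batch, n_kv, seq, d)   # far fewer KV heads -> smaller KV cache
v = torch.randn(batch, n_kv, seq, d)

# Each group of n_q // n_kv query heads attends to the same KV head.
k = k.repeat_interleave(n_q // n_kv, dim=1)
v = v.repeat_interleave(n_q // n_kv, dim=1)

scores = (q @ k.transpose(-2, -1)) / d ** 0.5
out = F.softmax(scores, dim=-1) @ v
print(out.shape)  # torch.Size([2, 8, 128, 64])
```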
1.8 Parallel transformer blocks
![Figure 12](https://nimg.ws.126.net/?url=http%3A%2F%2Fdingyue.ws.126.net%2F2024%2F0601%2Fd2fcac06j00sedt9c0016d000hs00a0g.jpg&thumbnail=960x2147483647&quality=75&type=jpg)
1.9 Summary: training stability
![Figure 13](https://nimg.ws.126.net/?url=http%3A%2F%2Fdingyue.ws.126.net%2F2024%2F0601%2Fb4bf856fj00sedt9d001ad000hs00a0g.jpg&thumbnail=960x2147483647&quality=75&type=jpg)
2. Distributed pre-training of LLMs
![Figure 14](https://nimg.ws.126.net/?url=http%3A%2F%2Fdingyue.ws.126.net%2F2024%2F0601%2Fdf516829j00sedt9d000vd000hs00a0g.jpg&thumbnail=960x2147483647&quality=75&type=jpg)
2.0 Point-to-point and collective communication
![Figure 15](https://nimg.ws.126.net/?url=http%3A%2F%2Fdingyue.ws.126.net%2F2024%2F0601%2F53f3c149j00sedt9d001sd000hs00k0g.jpg&thumbnail=960x2147483647&quality=75&type=jpg)
2.1 Data parallelism
![Figure 16](https://nimg.ws.126.net/?url=http%3A%2F%2Fdingyue.ws.126.net%2F2024%2F0601%2Ff523f4dfj00sedt9e002xd000hs00u0g.jpg&thumbnail=960x2147483647&quality=75&type=jpg)
2.2 Tensor parallelism
![Figure 17](https://nimg.ws.126.net/?url=http%3A%2F%2Fdingyue.ws.126.net%2F2024%2F0601%2Fbfa08336j00sedt9e003yd000dc0230g.jpg&thumbnail=960x2147483647&quality=75&type=jpg)
![Figure 18](https://nimg.ws.126.net/?url=http%3A%2F%2Fdingyue.ws.126.net%2F2024%2F0601%2F7e535bf2j00sedt9f003kd000hs0140g.jpg&thumbnail=960x2147483647&quality=75&type=jpg)
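The arithmetic behind Megatron-style tensor parallelism can be checked on a single device: split a linear layer's weight along its output dimension across "devices", compute partial products, and concatenate (in a real setup the concatenation is an all-gather across GPUs). A one-process simulation:

```python
# Single-process simulation of column-parallel (Megatron-style) tensor
# parallelism: each "device" holds one slice of the weight's output dim.
import torch

x = torch.randn(4, 512)                 # activations, replicated everywhere
W = torch.randn(2048, 512)              # full weight of one linear layer

shards = W.chunk(4, dim=0)              # 4-way split along the output dim
partial = [x @ w.t() for w in shards]   # each device computes its slice
y = torch.cat(partial, dim=-1)          # stands in for the all-gather

assert torch.allclose(y, x @ W.t(), atol=1e-5)
print(y.shape)  # torch.Size([4, 2048])
```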
2.3 Pipeline parallelism
![Figure 19](https://nimg.ws.126.net/?url=http%3A%2F%2Fdingyue.ws.126.net%2F2024%2F0601%2F5efd1acdj00sedt9f0014d000hs00a0g.jpg&thumbnail=960x2147483647&quality=75&type=jpg)
2.4 3D parallelism
![Figure 20](https://nimg.ws.126.net/?url=http%3A%2F%2Fdingyue.ws.126.net%2F2024%2F0601%2F1a3cb569j00sedt9g002fd000hs00k0g.jpg&thumbnail=960x2147483647&quality=75&type=jpg)
2.5 Mixed precision training
![Figure 21](https://nimg.ws.126.net/?url=http%3A%2F%2Fdingyue.ws.126.net%2F2024%2F0601%2F15c9d247j00sedt9h002td000hs00u0g.jpg&thumbnail=960x2147483647&quality=75&type=jpg)
2.6 Activation recomputation
![Figure 22](https://nimg.ws.126.net/?url=http%3A%2F%2Fdingyue.ws.126.net%2F2024%2F0601%2F768d3778j00sedt9h001hd000hs00k0g.jpg&thumbnail=960x2147483647&quality=75&type=jpg)
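PyTorch exposes this memory-for-compute trade directly via `torch.utils.checkpoint`: activations inside the wrapped block are not stored and are recomputed during the backward pass. A minimal sketch (the block and sizes are toy values):

```python
# Activation recomputation sketch: the wrapped block's intermediate
# activations are recomputed in backward instead of being stored.
import torch
from torch.utils.checkpoint import checkpoint

block = torch.nn.Sequential(
    torch.nn.Linear(1024, 4096), torch.nn.GELU(), torch.nn.Linear(4096, 1024)
)

x = torch.randn(8, 1024, requires_grad=True)
y = checkpoint(block, x, use_reentrant=False)  # forward, activations dropped
y.sum().backward()                             # block re-runs forward here
print(x.grad.shape)  # torch.Size([8, 1024])
```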
2.7 ZeRO, the Zero Redundancy Optimizer
![Figure 23](https://nimg.ws.126.net/?url=http%3A%2F%2Fdingyue.ws.126.net%2F2024%2F0601%2F9f33bcb1j00sedt9i002yd000dc0190g.jpg&thumbnail=960x2147483647&quality=75&type=jpg)
2.8 CPU offloading and ZeRO-Offload
![Figure 24](https://nimg.ws.126.net/?url=http%3A%2F%2Fdingyue.ws.126.net%2F2024%2F0601%2F57b82bb4j00sedt9i001gd000hs00a0g.jpg&thumbnail=960x2147483647&quality=75&type=jpg)
2.9 Flash Attention
![Figure 25](https://nimg.ws.126.net/?url=http%3A%2F%2Fdingyue.ws.126.net%2F2024%2F0601%2F06936c00j00sedt9i004fd000hs0140g.jpg&thumbnail=960x2147483647&quality=75&type=jpg)
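In PyTorch 2.x, `torch.nn.functional.scaled_dot_product_attention` can dispatch to a FlashAttention kernel when the hardware and dtypes permit, so adopting the fused path is essentially a one-line swap (which kernel actually runs depends on your environment; the shapes below are illustrative):

```python
# Fused attention sketch: scaled_dot_product_attention may dispatch to
# a FlashAttention kernel on supported GPUs and dtypes (PyTorch 2.x).
import torch
import torch.nn.functional as F

q = torch.randn(2, 8, 1024, 64)   # (batch, heads, seq, head_dim)
k = torch.randn(2, 8, 1024, 64)
v = torch.randn(2, 8, 1024, 64)

out = F.scaled_dot_product_attention(q, k, v, is_causal=True)
print(out.shape)  # torch.Size([2, 8, 1024, 64])
```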
2.10 vLLM: Paged Attention
![Figure 26](https://nimg.ws.126.net/?url=http%3A%2F%2Fdingyue.ws.126.net%2F2024%2F0601%2F2a0b8b8dj00sedt9j000vd000hs00a0g.jpg&thumbnail=960x2147483647&quality=75&type=jpg)
3. Parameter-efficient fine-tuning of LLMs
3.0 Why parameter-efficient fine-tuning?
![Figure 27](https://nimg.ws.126.net/?url=http%3A%2F%2Fdingyue.ws.126.net%2F2024%2F0601%2Fa50c37bcj00sedt9j001md000hs00u0g.jpg&thumbnail=960x2147483647&quality=75&type=jpg)
3.1 Prompt tuning
![Figure 28](https://nimg.ws.126.net/?url=http%3A%2F%2Fdingyue.ws.126.net%2F2024%2F0601%2F6c402f5cj00sedt9k0015d000hs00a0g.jpg&thumbnail=960x2147483647&quality=75&type=jpg)
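Prompt tuning trains only a handful of "soft" prompt vectors prepended to the input embeddings while the whole model stays frozen. A schematic sketch (the vocabulary size, prompt length, and dimensions are toy values):

```python
# Prompt tuning sketch: learn a few soft-prompt vectors; model frozen.
import torch
import torch.nn as nn

d_model, n_virtual = 512, 20
embed = nn.Embedding(32000, d_model)            # stand-in token embeddings
embed.weight.requires_grad_(False)              # backbone stays frozen
soft_prompt = nn.Parameter(torch.randn(n_virtual, d_model) * 0.02)

input_ids = torch.randint(0, 32000, (2, 128))   # toy batch
tok_emb = embed(input_ids)                      # (2, 128, d_model)
# Prepend the learned virtual tokens; only soft_prompt gets gradients.
inputs = torch.cat([soft_prompt.expand(2, -1, -1), tok_emb], dim=1)
print(inputs.shape)  # torch.Size([2, 148, 512])
```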
3.2 Prefix tuning
![Figure 29](https://nimg.ws.126.net/?url=http%3A%2F%2Fdingyue.ws.126.net%2F2024%2F0601%2F8242d607j00sedt9k0012d000hs00a0g.jpg&thumbnail=960x2147483647&quality=75&type=jpg)
3.3 Adapters
![Figure 30](https://nimg.ws.126.net/?url=http%3A%2F%2Fdingyue.ws.126.net%2F2024%2F0601%2Fa3445ce9j00sedt9l000vd000hs00a0g.jpg&thumbnail=960x2147483647&quality=75&type=jpg)
3.4 LLaMA-Adapter
![Figure 31](https://nimg.ws.126.net/?url=http%3A%2F%2Fdingyue.ws.126.net%2F2024%2F0601%2Fdf7c96e3j00sedt9l0013d000hs00a0g.jpg&thumbnail=960x2147483647&quality=75&type=jpg)
3.5 LoRA
![Figure 32](https://nimg.ws.126.net/?url=http%3A%2F%2Fdingyue.ws.126.net%2F2024%2F0601%2Fabfecc25j00sedt9m000vd000hs00a0g.jpg&thumbnail=960x2147483647&quality=75&type=jpg)
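LoRA freezes the pretrained weight W and learns a low-rank update ΔW = (α/r)·B·A, with A initialized randomly and B initialized to zero so training starts exactly at the original model. A minimal sketch (rank, scaling, and sizes are toy values):

```python
# Minimal LoRA sketch: y = x W^T + (alpha/r) * x A^T B^T, with W frozen.
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    def __init__(self, base: nn.Linear, r: int = 8, alpha: int = 16):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)              # freeze the pretrained layer
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # starts at 0
        self.scale = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # The low-rank path adds (x @ A^T) @ B^T on top of the frozen path.
        return self.base(x) + (x @ self.A.t() @ self.B.t()) * self.scale

layer = LoRALinear(nn.Linear(512, 512))
print(layer(torch.randn(4, 512)).shape)  # torch.Size([4, 512])
```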
3.6 Experimental comparison
![Figure 33](https://nimg.ws.126.net/?url=http%3A%2F%2Fdingyue.ws.126.net%2F2024%2F0601%2F72cfdd20j00sedt9m000pd000hs00a0g.jpg&thumbnail=960x2147483647&quality=75&type=jpg)
4. References
![Figure 34](https://nimg.ws.126.net/?url=http%3A%2F%2Fdingyue.ws.126.net%2F2024%2F0601%2F1438d150j00sedt9n002qd000hs00k0g.jpg&thumbnail=960x2147483647&quality=75&type=jpg)
![Figure 35](https://nimg.ws.126.net/?url=http%3A%2F%2Fdingyue.ws.126.net%2F2024%2F0601%2Fdcf8e359j00sedt9n000dd0008c008cg.jpg&thumbnail=960x2147483647&quality=75&type=jpg)
Source: https://mp.weixin.qq.com/s/vNVjrCZ1bfBRi5YGiHxJpw