labml.ai Deep Learning Paper Implementations {Annotated Paper Implementations in PyTorch}
- 1. labml.ai
- 2. labml.ai Deep Learning Paper Implementations
- 3. Sampling Techniques for Language Models
- 4. Multi-Headed Attention (MHA)
- References
1. labml.ai
https://labml.ai/
https://github.com/labmlai
Tools to help deep learning researchers
2. labml.ai Deep Learning Paper Implementations
https://nn.labml.ai/index.html
https://github.com/labmlai/annotated_deep_learning_paper_implementations
Annotated Research Paper Implementations: Transformers, StyleGAN, Stable Diffusion, DDPM/DDIM, LayerNorm, Nucleus Sampling and more
This is a collection of simple PyTorch implementations of neural networks and related algorithms.
3. Sampling Techniques for Language Models
https://nn.labml.ai/sampling/index.html
https://github.com/labmlai/annotated_deep_learning_paper_implementations/tree/master/labml_nn/sampling
Greedy Sampling
Temperature Sampling
Top-k Sampling
Nucleus Sampling
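The sketch below illustrates how these four strategies are commonly combined in a single sampling step. It is a minimal, self-contained example and not the labml_nn code; the function name `sample_next_token`, the 1-D `logits` shape, and the convention that `temperature == 0` means greedy decoding are all illustrative assumptions.

```python
import torch
import torch.nn.functional as F


def sample_next_token(logits: torch.Tensor,
                      temperature: float = 1.0,
                      top_k: int = 0,
                      top_p: float = 0.0) -> torch.Tensor:
    """Sample one token id from `logits` of shape [vocab_size] (illustrative)."""
    # Greedy sampling: pick the highest-probability token (temperature 0 here).
    if temperature == 0.0:
        return logits.argmax(dim=-1)

    # Temperature sampling: scale the logits before the softmax.
    logits = logits / temperature

    # Top-k sampling: keep only the k most likely tokens.
    if top_k > 0:
        kth_value = torch.topk(logits, top_k).values[-1]
        logits = logits.masked_fill(logits < kth_value, float('-inf'))

    # Nucleus (top-p) sampling: keep the smallest set of tokens whose
    # cumulative probability exceeds p, always keeping the top token.
    if top_p > 0.0:
        sorted_logits, sorted_idx = torch.sort(logits, descending=True)
        cum_probs = torch.cumsum(F.softmax(sorted_logits, dim=-1), dim=-1)
        remove = cum_probs > top_p
        remove[1:] = remove[:-1].clone()
        remove[0] = False
        logits[sorted_idx[remove]] = float('-inf')

    probs = F.softmax(logits, dim=-1)
    return torch.multinomial(probs, num_samples=1).squeeze(-1)
```

For example, `sample_next_token(logits, temperature=0.8, top_p=0.9)` applies temperature scaling followed by nucleus sampling, which is a typical configuration for open-ended text generation.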
4. Multi-Headed Attention (MHA)
https://nn.labml.ai/transformers/mha.html
https://github.com/labmlai/annotated_deep_learning_paper_implementations/blob/master/labml_nn/transformers/mha.py
This is a tutorial/implementation of multi-headed attention from the paper Attention Is All You Need, written in PyTorch. The implementation is inspired by The Annotated Transformer.
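To make the idea concrete, here is a minimal sketch of scaled dot-product multi-head attention in plain PyTorch. It is independent of the labml_nn.transformers.mha code; the `[seq_len, batch_size, d_model]` shape convention and all names (`heads`, `d_model`, `d_k`) are illustrative assumptions.

```python
import math
import torch
import torch.nn as nn


class MultiHeadAttention(nn.Module):
    """Illustrative multi-head attention, not the labml_nn implementation."""

    def __init__(self, heads: int, d_model: int):
        super().__init__()
        assert d_model % heads == 0
        self.heads = heads
        self.d_k = d_model // heads
        # Linear projections for queries, keys, values and the final output.
        self.query = nn.Linear(d_model, d_model)
        self.key = nn.Linear(d_model, d_model)
        self.value = nn.Linear(d_model, d_model)
        self.output = nn.Linear(d_model, d_model)

    def forward(self, query, key, value, mask=None):
        seq_len, batch_size, _ = query.shape
        # Project and split into heads: [seq_len, batch_size, heads, d_k].
        q = self.query(query).view(seq_len, batch_size, self.heads, self.d_k)
        k = self.key(key).view(-1, batch_size, self.heads, self.d_k)
        v = self.value(value).view(-1, batch_size, self.heads, self.d_k)

        # Attention scores: [seq_q, seq_k, batch_size, heads].
        scores = torch.einsum('ibhd,jbhd->ijbh', q, k) / math.sqrt(self.d_k)
        if mask is not None:
            scores = scores.masked_fill(mask == 0, float('-inf'))
        attn = scores.softmax(dim=1)

        # Weighted sum of values, then merge the heads back to d_model.
        out = torch.einsum('ijbh,jbhd->ibhd', attn, v)
        out = out.reshape(seq_len, batch_size, -1)
        return self.output(out)
```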
Here is the training code that uses a basic transformer with MHA for NLP auto-regression.
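For auto-regressive language modelling, self-attention is restricted with a causal mask so that each position can only attend to earlier positions. The fragment below is an illustrative usage of the MultiHeadAttention sketch above (not the labml experiment code); all shapes are example values.

```python
import torch

seq_len, batch_size, d_model, heads = 16, 4, 64, 8
x = torch.randn(seq_len, batch_size, d_model)

# Lower-triangular causal mask: position i may only attend to positions j <= i.
# Shape [seq_len, seq_len, 1, 1] broadcasts over batch and heads in the sketch above.
mask = torch.tril(torch.ones(seq_len, seq_len)).view(seq_len, seq_len, 1, 1)

mha = MultiHeadAttention(heads=heads, d_model=d_model)  # class from the sketch above
out = mha(query=x, key=x, value=x, mask=mask)           # [seq_len, batch_size, d_model]
```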
Here is an experiment implementation that trains a simple transformer.