BERT is everywhere but you still don't understand the Transformer? This one article is enough — original version. Visualizing machine learning / neural networks / deep learning... 20201107

20211016

Scaling factor

20211004

[NLP] A detailed explanation of the Transformer model – Zhihu

As used in the paper

20210703

Comparing the GPT model with the Transformer (znevegiveup1's blog, CSDN)


GPT uses the Transformer's Decoder, while BERT uses the Transformer's Encoder. GPT uses the Masked Multi-Head Attention of the decoder: when predicting word u_i from [u_1, u_2, …, u_(i-1)], all the words after u_i are masked out.

20210622

Is the result of Q×Kᵀ also n × 64? (No — it is n × n.) Each component of every row of V then gets multiplied by a weight.

20210620

How many Transformer variants are there? Prof. Qiu Xipeng's team at Fudan University wrote a comprehensive survey (CSDN blog)

A survey of Transformer variants

To build deeper models, a residual connection is placed around each module.

https://jalammar.github.io/illustrated-transformer/

The Illustrated Transformer

Discussions: Hacker News (65 points, 4 comments), Reddit r/MachineLearning (29 points, 3 comments) 
Translations: Chinese (Simplified), Korean 
Watch: MIT’s Deep Learning State of the Art lecture referencing this post

In the previous post, we looked at Attention – a ubiquitous method in modern deep learning models. Attention is a concept that helped improve the performance of neural machine translation applications. In this post, we will look at The Transformer – a model that uses attention to boost the speed with which these models can be trained. The Transformer outperforms the Google Neural Machine Translation model in specific tasks. The biggest benefit, however, comes from how The Transformer lends itself to parallelization. It is in fact Google Cloud’s recommendation to use The Transformer as a reference model to use their Cloud TPU offering. So let’s try to break the model apart and look at how it functions.

The Transformer was proposed in the paper Attention is All You Need. A TensorFlow implementation of it is available as a part of the Tensor2Tensor package. Harvard’s NLP group created a guide annotating the paper with a PyTorch implementation. In this post, we will attempt to oversimplify things a bit and introduce the concepts one by one to hopefully make it easier to understand for people without in-depth knowledge of the subject matter.

A High-Level Look

Let’s begin by looking at the model as a single black box. In a machine translation application, it would take a sentence in one language, and output its translation in another.

Popping open that Optimus Prime goodness, we see an encoding component, a decoding component, and connections between them.

The encoding component is a stack of encoders (the paper stacks six of them on top of each other – there’s nothing magical about the number six, one can definitely experiment with other arrangements). The decoding component is a stack of decoders of the same number.

The encoders are all identical in structure (yet they do not share weights). Each one is broken down into two sub-layers:

The encoder’s inputs first flow through a self-attention layer – a layer that helps the encoder look at other words in the input sentence as it encodes a specific word. We’ll look closer at self-attention later in the post.

The outputs of the self-attention layer are fed to a feed-forward neural network. The exact same feed-forward network is independently applied to each position.

The decoder has both those layers, but between them is an attention layer that helps the decoder focus on relevant parts of the input sentence (similar to what attention does in seq2seq models).

Bringing The Tensors Into The Picture

Now that we’ve seen the major components of the model, let’s start to look at the various vectors/tensors and how they flow between these components to turn the input of a trained model into an output.

As is the case in NLP applications in general, we begin by turning each input word into a vector using an embedding algorithm.

 
Each word is embedded into a vector of size 512. We'll represent those vectors with these simple boxes.
Each word's embedding dimension is 512. (The number of words in the sentence is a separate hyperparameter, not 512 — see below.)

The embedding only happens in the bottom-most encoder. The abstraction that is common to all the encoders is that they receive a list of vectors each of the size 512 – In the bottom encoder that would be the word embeddings, but in other encoders, it would be the output of the encoder that’s directly below. The size of this list is a hyperparameter we can set – basically it would be the length of the longest sentence in our training dataset.

After embedding the words in our input sequence, each of them flows through each of the two layers of the encoder.

Here we begin to see one key property of the Transformer, which is that the word in each position flows through its own path in the encoder. There are dependencies between these paths in the self-attention layer. The feed-forward layer does not have those dependencies, however, and thus the various paths can be executed in parallel while flowing through the feed-forward layer.

Next, we’ll switch up the example to a shorter sentence and we’ll look at what happens in each sub-layer of the encoder.

Now We’re Encoding!

As we’ve mentioned already, an encoder receives a list of vectors as input. It processes this list by passing these vectors into a ‘self-attention’ layer, then into a feed-forward neural network, then sends out the output upwards to the next encoder.


The word at each position passes through a self-attention process. Then, they each pass through a feed-forward neural network -- the exact same network with each vector flowing through it separately.
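To make "the exact same network applied to each position independently" concrete, here is a minimal numpy sketch of a position-wise feed-forward layer. The shapes follow the paper (d_model = 512, inner dimension 2048); the random weights and the function name are my own stand-ins, not the reference implementation.

```python
import numpy as np

def position_wise_ffn(x, W1, b1, W2, b2):
    """Apply the same two-layer network to every position independently.

    x: (seq_len, d_model) -- one row per word coming out of self-attention.
    """
    hidden = np.maximum(0, x @ W1 + b1)   # ReLU, shape (seq_len, d_ff)
    return hidden @ W2 + b2               # back to (seq_len, d_model)

d_model, d_ff, seq_len = 512, 2048, 3
rng = np.random.default_rng(0)
x = rng.normal(size=(seq_len, d_model))
W1, b1 = rng.normal(size=(d_model, d_ff)) * 0.02, np.zeros(d_ff)
W2, b2 = rng.normal(size=(d_ff, d_model)) * 0.02, np.zeros(d_model)
print(position_wise_ffn(x, W1, b1, W2, b2).shape)  # (3, 512)
```

Because each row is transformed independently, all positions can go through this layer in parallel.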

Self-Attention at a High Level

Don’t be fooled by me throwing around the word “self-attention” like it’s a concept everyone should be familiar with. I had personally never come across the concept until reading the Attention is All You Need paper. Let us distill how it works.

Say the following sentence is an input sentence we want to translate:

The animal didn't cross the street because it was too tired

What does “it” in this sentence refer to? Is it referring to the street or to the animal? It’s a simple question to a human, but not as simple to an algorithm.

When the model is processing the word “it”, self-attention allows it to associate “it” with “animal”.

As the model processes each word (each position in the input sequence), self attention allows it to look at other positions in the input sequence for clues that can help lead to a better encoding for this word.

If you’re familiar with RNNs, think of how maintaining a hidden state allows an RNN to incorporate its representation of previous words/vectors it has processed with the current one it’s processing. Self-attention is the method the Transformer uses to bake the “understanding” of other relevant words into the one we’re currently processing.

 
As we are encoding the word "it" in encoder #5 (the top encoder in the stack), part of the attention mechanism was focusing on "The Animal", and baked a part of its representation into the encoding of "it".

Be sure to check out the Tensor2Tensor notebook where you can load a Transformer model, and examine it using this interactive visualization.

Self-Attention in Detail

Let’s first look at how to calculate self-attention using vectors, then proceed to look at how it’s actually implemented – using matrices.

The first step in calculating self-attention is to create three vectors from each of the encoder’s input vectors (in this case, the embedding of each word). So for each word, we create a Query vector, a Key vector, and a Value vector. These vectors are created by multiplying the embedding by three matrices that we trained during the training process.

The three matrices mean nothing when they are randomly initialized; only after training do their parameter values acquire real meaning.

At this point each word can be seen as having four vectors: the embedding plus its q, k, and v.

Notice that these new vectors are smaller in dimension than the embedding vector. Their dimensionality is 64, while the embedding and encoder input/output vectors have dimensionality of 512. They don’t HAVE to be smaller, this is an architecture choice to make the computation of multiheaded attention (mostly) constant.

Shapes: (n × 512) · (512 × 64) = (n × 64), where n is the number of words (each row of X is a word, not a batch entry).

Multiplying x1 by the WQ weight matrix produces q1, the "query" vector associated with that word. We end up creating a "query", a "key", and a "value" projection of each word in the input sentence.


What are the “query”, “key”, and “value” vectors? 

X · W_Q = Q (the "query")

X · W_K = K (the "key")

X · W_V = V (the "value")

At the start, all three vectors can be seen as representations of the word vector itself.

key: like the query, it can be seen as a representation of the word itself.

query · key: gives each other word's weight (score) relative to the current word — a scalar value.

Each value vector (the word's representation) is then multiplied by the score from the previous step.

value is 64-dimensional, and every one of its elements is multiplied by that score.

20210430


They’re abstractions that are useful for calculating and thinking about attention. Once you proceed with reading how attention is calculated below, you’ll know pretty much all you need to know about the role each of these vectors plays.

The second step in calculating self-attention is to calculate a score. Say we’re calculating the self-attention for the first word in this example, “Thinking”. We need to score each word of the input sentence against this word. The score determines how much focus to place on other parts of the input sentence as we encode a word at a certain position.

The score is calculated by taking the dot product of the query vector with the key vector of the respective word we’re scoring. So if we’re processing the self-attention for the word in position #1, the first score would be the dot product of q1 and k1. The second score would be the dot product of q1 and k2.

The third and fourth steps are to divide the scores by 8 (the square root of the dimension of the key vectors used in the paper – 64. This leads to having more stable gradients. There could be other possible values here, but this is the default), then pass the result through a softmax operation. Softmax normalizes the scores so they’re all positive and add up to 1.

This softmax score determines how much each word will be expressed at this position. Clearly the word at this position will have the highest softmax score, but sometimes it’s useful to attend to another word that is relevant to the current word.

The fifth step is to multiply each value vector by the softmax score (in preparation to sum them up). The intuition here is to keep intact the values of the word(s) we want to focus on, and drown-out irrelevant words (by multiplying them by tiny numbers like 0.001, for example).

The sixth step is to sum up the weighted value vectors. This produces the output of the self-attention layer at this position (for the first word).

That concludes the self-attention calculation. The resulting vector is one we can send along to the feed-forward neural network. In the actual implementation, however, this calculation is done in matrix form for faster processing. So let’s look at that now that we’ve seen the intuition of the calculation on the word level.
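Before moving to matrices, here is a minimal numpy sketch of the six steps for one position. The two "words", the random weight matrices, and the names are made up for illustration; only the shapes (512-dim embeddings, 64-dim q/k/v) follow the post.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

d_model, d_k = 512, 64
rng = np.random.default_rng(0)

# Step 1: embeddings for two words ("Thinking", "Machines") and the trained
# weight matrices (random stand-ins here).
x = rng.normal(size=(2, d_model))
WQ, WK, WV = (rng.normal(size=(d_model, d_k)) * 0.02 for _ in range(3))
q, k, v = x @ WQ, x @ WK, x @ WV          # each row holds that word's 64-dim q / k / v

# Steps 2-4, for position #1: score q1 against every key, divide by sqrt(d_k), softmax.
scores = q[0] @ k.T                        # [q1·k1, q1·k2]
weights = softmax(scores / np.sqrt(d_k))

# Steps 5-6: scale each value vector by its weight and sum them up.
z1 = weights @ v                           # the 64-dim self-attention output for position #1
print(weights, z1.shape)
```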

Matrix Calculation of Self-Attention

The first step is to calculate the Query, Key, and Value matrices. We do that by packing our embeddings into a matrix X, and multiplying it by the weight matrices we’ve trained (WQ, WK, WV).

20210430

The weight matrix's dimensions differ from those of the resulting matrix.

(n × 512) · (512 × 64) = (n × 64), where n is the number of words in the sentence (each row of X is a word, not a batch entry).

 
Every row in the X matrix corresponds to a word in the input sentence. We again see the difference in size of the embedding vector (512, or 4 boxes in the figure), and the q/k/v vectors (64, or 3 boxes in the figure)

Finally, since we’re dealing with matrices, we can condense steps two through six in one formula to calculate the outputs of the self-attention layer.

 
The self-attention calculation in matrix form
(n × 64) · (64 × n) = (n × n)
Each column of the second factor (Kᵀ) is one word represented in 64 dimensions — 512 dimensions compressed down to 64, which reduces computation; what the model keeps are the weight matrices.

Z's shape: (n × n) · (n × 64) = (n × 64),

so each row — that is, each word — ends up represented by a 64-dimensional vector.
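A compact numpy sketch of that condensed formula, softmax(QKᵀ/√d_k)·V, assuming a 5-word sentence and randomly initialized weight matrices (an illustration, not the Tensor2Tensor code):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, WQ, WK, WV):
    """Steps two through six condensed into one formula: softmax(QK^T / sqrt(d_k)) V."""
    Q, K, V = X @ WQ, X @ WK, X @ WV           # (n, 64) each
    scores = Q @ K.T / np.sqrt(K.shape[-1])    # (n, n): every word scored against every word
    return softmax(scores) @ V                 # (n, 64)

n, d_model, d_k = 5, 512, 64
rng = np.random.default_rng(1)
X = rng.normal(size=(n, d_model))              # one row per word, not per batch element
Z = self_attention(X, *(rng.normal(size=(d_model, d_k)) * 0.02 for _ in range(3)))
print(Z.shape)  # (5, 64)
```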

The Beast With Many Heads

The paper further refined the self-attention layer by adding a mechanism called “multi-headed” attention. This improves the performance of the attention layer in two ways:

  1. It expands the model’s ability to focus on different positions. Yes, in the example above, z1 contains a little bit of every other encoding, but it could be dominated by the actual word itself. If we’re translating a sentence like “The animal didn’t cross the street because it was too tired”, it would be useful to know which word “it” refers to.

  2. It gives the attention layer multiple “representation subspaces”. As we’ll see next, with multi-headed attention we have not only one, but multiple sets of Query/Key/Value weight matrices (the Transformer uses eight attention heads, so we end up with eight sets for each encoder/decoder). Each of these sets is randomly initialized. Then, after training, each set is used to project the input embeddings (or vectors from lower encoders/decoders) into a different representation subspace.

Each encoder and decoder has eight heads: 8 × 64 = 512.

The encoder outputs an n × 512 matrix, and that same n × 512 matrix is fed to every head. Each row of the matrix represents one word's vector, not one input record. The per-head outputs are merged by concatenating them and multiplying by an extra weight matrix (see below).

20210427


With multi-headed attention, we maintain separate Q/K/V weight matrices for each head, resulting in different Q/K/V matrices. As we did before, we multiply X by the WQ/WK/WV matrices to produce Q/K/V matrices.


If we do the same self-attention calculation we outlined above, just eight different times with different weight matrices, we end up with eight different Z matrices

Question: does each row represent a word or a batch entry? (Each row of X is one word in the sentence.)

This leaves us with a bit of a challenge. The feed-forward layer is not expecting eight matrices – it’s expecting a single matrix (a vector for each word). So we need a way to condense these eight down into a single matrix.

How do we do that? We concat the matrices then multiply them by an additional weight matrix WO.

That’s pretty much all there is to multi-headed self-attention. It’s quite a handful of matrices, I realize. Let me try to put them all in one visual so we can look at them in one place

Is WO's dimension then 512 × 512? (Concatenating eight 64-dimensional heads gives 512, and WO projects that back to 512, so yes.)
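A hedged numpy sketch of how the eight heads are combined (the helper definitions repeat the previous sketch; the 512 × 512 shape of WO below matches the note's guess given 8 heads of size 64):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, WQ, WK, WV):
    Q, K, V = X @ WQ, X @ WK, X @ WV
    return softmax(Q @ K.T / np.sqrt(K.shape[-1])) @ V

def multi_head_self_attention(X, heads, WO):
    """heads: a list of (WQ, WK, WV) triples, one per head; WO: (h*64, 512)."""
    Zs = [self_attention(X, *w) for w in heads]     # eight (n, 64) matrices Z0..Z7
    return np.concatenate(Zs, axis=-1) @ WO         # concat -> (n, 512), then project with WO

n, d_model, d_k, h = 5, 512, 64, 8
rng = np.random.default_rng(2)
X = rng.normal(size=(n, d_model))
heads = [tuple(rng.normal(size=(d_model, d_k)) * 0.02 for _ in range(3)) for _ in range(h)]
WO = rng.normal(size=(h * d_k, d_model)) * 0.02     # 512 x 512, as the note above guesses
print(multi_head_self_attention(X, heads, WO).shape)   # (5, 512)
```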

Now that we have touched upon attention heads, let’s revisit our example from before to see where the different attention heads are focusing as we encode the word “it” in our example sentence:

It seems each row represents one word's vector rather than one input record, and each row follows its own path.

 
As we encode the word "it", one attention head is focusing most on "the animal", while another is focusing on "tired" -- in a sense, the model's representation of the word "it" bakes in some of the representation of both "animal" and "tired".

If we add all the attention heads to the picture, however, things can be harder to interpret:

Representing The Order of The Sequence Using Positional Encoding

One thing that’s missing from the model as we have described it so far is a way to account for the order of the words in the input sequence.

To address this, the transformer adds a vector to each input embedding. These vectors follow a specific pattern that the model learns, which helps it determine the position of each word, or the distance between different words in the sequence. The intuition here is that adding these values to the embeddings provides meaningful distances between the embedding vectors once they’re projected into Q/K/V vectors and during dot-product attention.


To give the model a sense of the order of the words, we add positional encoding vectors -- the values of which follow a specific pattern.

If we assumed the embedding has a dimensionality of 4, the actual positional encodings would look like this:


A real example of positional encoding with a toy embedding size of 4

What might this pattern look like?

The positional encoding is also a 512-dimensional vector (the same size as the embedding).

In the following figure, each row corresponds to the positional encoding for one position. So the first row would be the vector we’d add to the embedding of the first word in an input sequence. Each row contains 512 values – each with a value between 1 and -1. We’ve color-coded them so the pattern is visible.

The vertical axis is the token position; the horizontal axis is the component of the vector.

A real example of positional encoding for 20 words (rows) with an embedding size of 512 (columns). You can see that it appears split in half down the center. That's because the values of the left half are generated by one function (which uses sine), and the right half is generated by another function (which uses cosine). They're then concatenated to form each of the positional encoding vectors.

The formula for positional encoding is described in the paper (section 3.5). You can see the code for generating positional encodings in get_timing_signal_1d(). This is not the only possible method for positional encoding. It, however, gives the advantage of being able to scale to unseen lengths of sequences (e.g. if our trained model is asked to translate a sentence longer than any of those in our training set).
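Here is a short numpy sketch of the sinusoidal encoding from section 3.5 of the paper (interleaved sine/cosine, which differs slightly from the half-and-half layout of the Tensor2Tensor figure above); the function name is mine:

```python
import numpy as np

def positional_encoding(max_len, d_model):
    """PE[pos, 2i] = sin(pos / 10000^(2i/d_model)), PE[pos, 2i+1] = cos(...) -- paper, section 3.5."""
    pos = np.arange(max_len)[:, None]                 # (max_len, 1)
    i = np.arange(0, d_model, 2)[None, :]             # even dimension indices
    angles = pos / np.power(10000.0, i / d_model)     # (max_len, d_model/2)
    pe = np.zeros((max_len, d_model))
    pe[:, 0::2] = np.sin(angles)
    pe[:, 1::2] = np.cos(angles)
    return pe                                         # added element-wise to the embeddings

pe = positional_encoding(20, 512)
print(pe.shape, pe.min(), pe.max())   # (20, 512), values in [-1, 1]
```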

The Residuals

One detail in the architecture of the encoder that we need to mention before moving on, is that each sub-layer (self-attention, ffnn) in each encoder has a residual connection around it, and is followed by a layer-normalization step.

(Figure: each sub-layer is wrapped in a residual connection and followed by layer normalization.)
The residual connection skips around the sub-layer itself.

If we’re to visualize the vectors and the layer-norm operation associated with self attention, it would look like this:

This goes for the sub-layers of the decoder as well. If we’re to think of a Transformer of 2 stacked encoders and decoders, it would look something like this:
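As a rough numpy sketch of this "Add & Norm" wrapper (the learnable scale and bias of layer normalization are omitted for brevity, and the sub-layer functions are placeholders):

```python
import numpy as np

def layer_norm(x, eps=1e-6):
    """Normalize each position's vector to zero mean and unit variance (gain/bias omitted)."""
    mean = x.mean(axis=-1, keepdims=True)
    std = x.std(axis=-1, keepdims=True)
    return (x - mean) / (std + eps)

def sublayer(x, fn):
    """Residual connection around fn, followed by layer normalization: LayerNorm(x + fn(x))."""
    return layer_norm(x + fn(x))

# One encoder layer is two such wrappers (hypothetical callables):
#   x = sublayer(x, self_attention_fn)
#   x = sublayer(x, feed_forward_fn)
```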


 

The Decoder Side

Translation tasks do have a decoder.

Classification tasks generally have no decoder — just a final fully connected layer; BERT then effectively serves as a (contextual) word embedding.

Now that we’ve covered most of the concepts on the encoder side, we basically know how the components of decoders work as well. But let’s take a look at how they work together.

Eight sets of K/V? They are not merged into one? All of the decoders need them. Does each set get weighted again — i.e., is another weight applied on top of each Z?

Is this one more level of abstraction: each Z applies different weights to the individual words, and then different weights are applied to each Z — i.e., each self-attention output is weighted once more? The decoder's input should be the word vectors of the target (translated) sentence.

The encoder stack starts by processing the input sequence. The output of the top encoder is then transformed into a set of attention vectors K and V. These are to be used by each decoder in its “encoder-decoder attention” layer which helps the decoder focus on appropriate places in the input sequence:

One word is output at each step. When the decoder produces the first word, its input is the start-of-sentence symbol, so the decoder input is shifted one position relative to the reference translation. (Does it use the reversed-input trick from seq2seq? Apparently not.)
After finishing the encoding phase, we begin the decoding phase. Each step in the decoding phase outputs an element from the output sequence (the English translation sentence in this case).

The following steps repeat the process until a special symbol is reached indicating the transformer decoder has completed its output. The output of each step is fed to the bottom decoder in the next time step, and the decoders bubble up their decoding results just like the encoders did. And just like we did with the encoder inputs, we embed and add positional encoding to those decoder inputs to indicate the position of each word.

The self attention layers in the decoder operate in a slightly different way than the one in the encoder:                                                                                     

In the decoder, the self-attention layer is only allowed to attend to earlier positions in the output sequence. This is done by masking future positions (setting them to -inf) before the softmax step in the self-attention calculation.

The decoder builds its own Q matrix; K and V are taken over from the output of the encoder stack.

The “Encoder-Decoder Attention” layer works just like multiheaded self-attention, except it creates its Queries matrix from the layer below it, and takes the Keys and Values matrix from the output of the encoder stack.
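A minimal numpy sketch of both decoder attention variants: masked self-attention with a causal (-inf) mask, and encoder-decoder attention whose K and V come from the encoder output. All arrays are random stand-ins:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def attention(Q, K, V, mask=None):
    """softmax(QK^T / sqrt(d_k) + mask) V; the mask holds -inf at forbidden positions."""
    scores = Q @ K.T / np.sqrt(K.shape[-1])
    if mask is not None:
        scores = scores + mask
    return softmax(scores) @ V

rng = np.random.default_rng(3)
t, n, d_k = 4, 6, 64                              # t target positions so far, n source tokens

# Decoder self-attention: mask future positions so position i only sees positions <= i.
causal_mask = np.triu(np.full((t, t), -np.inf), k=1)
dec = rng.normal(size=(t, d_k))
z_self = attention(dec, dec, dec, mask=causal_mask)

# Encoder-decoder attention: queries from the decoder, keys/values from the top encoder's output.
enc_k, enc_v = rng.normal(size=(n, d_k)), rng.normal(size=(n, d_k))
z_cross = attention(dec, enc_k, enc_v)
print(z_self.shape, z_cross.shape)                # (4, 64) (4, 64)
```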

The Final Linear and Softmax Layer

The decoder stack outputs a vector of floats. How do we turn that into a word? That’s the job of the final Linear layer which is followed by a Softmax Layer.

The Linear layer is a simple fully connected neural network that projects the vector produced by the stack of decoders, into a much, much larger vector called a logits vector.

Let’s assume that our model knows 10,000 unique English words (our model’s “output vocabulary”) that it’s learned from its training dataset. This would make the logits vector 10,000 cells wide – each cell corresponding to the score of a unique word. That is how we interpret the output of the model followed by the Linear layer.

The softmax layer then turns those scores into probabilities (all positive, all add up to 1.0). The cell with the highest probability is chosen, and the word associated with it is produced as the output for this time step.

 
This figure starts from the bottom with the vector produced as the output of the decoder stack. It is then turned into an output word.
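A toy numpy sketch of that final step, using the six-word vocabulary introduced in the training recap below; the weights are random here, so the chosen word is meaningless:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

vocab = ["a", "am", "i", "thanks", "student", "<eos>"]   # toy output vocabulary
d_model = 512

rng = np.random.default_rng(4)
decoder_output = rng.normal(size=d_model)                # vector for the current time step
W_vocab = rng.normal(size=(d_model, len(vocab))) * 0.02  # the final Linear layer
b_vocab = np.zeros(len(vocab))

logits = decoder_output @ W_vocab + b_vocab              # one score per vocabulary word
probs = softmax(logits)                                  # all positive, sums to 1.0
print(vocab[int(np.argmax(probs))])                      # word emitted at this time step
```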

Recap Of Training

Now that we’ve covered the entire forward-pass process through a trained Transformer, it would be useful to glance at the intuition of training the model.

During training, an untrained model would go through the exact same forward pass. But since we are training it on a labeled training dataset, we can compare its output with the actual correct output.

To visualize this, let’s assume our output vocabulary only contains six words (“a”, “am”, “i”, “thanks”, “student”, and “<eos>” (short for ‘end of sentence’)).

 
The output vocabulary of our model is created in the preprocessing phase before we even begin training.

Once we define our output vocabulary, we can use a vector of the same width to indicate each word in our vocabulary. This is also known as one-hot encoding. So for example, we can indicate the word “am” using the following vector:

 
Example: one-hot encoding of our output vocabulary
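For instance, with the toy vocabulary above, the one-hot vector for “am” can be built in a couple of lines:

```python
import numpy as np

vocab = ["a", "am", "i", "thanks", "student", "<eos>"]
one_hot_am = np.zeros(len(vocab))
one_hot_am[vocab.index("am")] = 1.0
print(one_hot_am)   # [0. 1. 0. 0. 0. 0.]
```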

Following this recap, let’s discuss the model’s loss function – the metric we are optimizing during the training phase to lead up to a trained and hopefully amazingly accurate model.

The Loss Function

Say we are training our model. Say it’s our first step in the training phase, and we’re training it on a simple example – translating “merci” into “thanks”.

What this means, is that we want the output to be a probability distribution indicating the word “thanks”. But since this model is not yet trained, that’s unlikely to happen just yet.

All parameters are randomly initialized; the word embeddings are randomly initialized as well.

Since the model's parameters (weights) are all initialized randomly, the (untrained) model produces a probability distribution with arbitrary values for each cell/word. We can compare it with the actual output, then tweak all the model's weights using backpropagation to make the output closer to the desired output.

How do you compare two probability distributions? We simply subtract one from the other. For more details, look at cross-entropy and Kullback–Leibler divergence.
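A tiny numeric sketch of that comparison; the "predicted" distribution is made up, and because the target is one-hot, cross-entropy and KL divergence coincide:

```python
import numpy as np

target = np.array([0.0, 0.0, 0.0, 1.0, 0.0, 0.0])        # one-hot for "thanks"
predicted = np.array([0.1, 0.2, 0.2, 0.3, 0.1, 0.1])     # untrained model's guess (made up)

cross_entropy = -np.sum(target * np.log(predicted))      # the loss we backpropagate
kl_divergence = np.sum(target * np.log(np.maximum(target, 1e-12) / predicted))
print(cross_entropy, kl_divergence)                      # identical here because target is one-hot
```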

But note that this is an oversimplified example. More realistically, we’ll use a sentence longer than one word. For example – input: “je suis étudiant” and expected output: “i am a student”. What this really means, is that we want our model to successively output probability distributions where:

  • Each probability distribution is represented by a vector of width vocab_size (6 in our toy example, but more realistically a number like 3,000 or 10,000)
  • The first probability distribution has the highest probability at the cell associated with the word “i”
  • The second probability distribution has the highest probability at the cell associated with the word “am”
  • And so on, until the fifth output distribution indicates ‘<end of sentence>’ symbol, which also has a cell associated with it from the 10,000 element vocabulary.
 
The targeted probability distributions we'll train our model against in the training example for one sample sentence.

After training the model for enough time on a large enough dataset, we would hope the produced probability distributions would look like this:

 
Hopefully upon training, the model would output the right translation we expect. Of course it's no real indication if this phrase was part of the training dataset (see:  cross validation). Notice that every position gets a little bit of probability even if it's unlikely to be the output of that time step -- that's a very useful property of softmax which helps the training process.
Greedy decoding

Now, because the model produces the outputs one at a time, we can assume that the model is selecting the word with the highest probability from that probability distribution and throwing away the rest. That’s one way to do it (called greedy decoding). Another way to do it would be to hold on to, say, the top two words (say, ‘I’ and ‘a’ for example), then in the next step, run the model twice: once assuming the first output position was the word ‘I’, and another time assuming the first output position was the word ‘me’, and whichever version produced less error considering both positions #1 and #2 is kept. We repeat this for positions #2 and #3…etc. This method is called “beam search”, where in our example, beam_size was two (because we compared the results after calculating the beams for positions #1 and #2), and top_beams is also two (since we kept two words). These are both hyperparameters that you can experiment with.
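A schematic sketch of greedy decoding (beam search would keep the top-k prefixes at each step instead of just one); `step_fn` stands in for a full encoder/decoder forward pass and is purely hypothetical:

```python
import numpy as np

def greedy_decode(step_fn, bos_id, eos_id, max_len=20):
    """Pick the highest-probability word at each step and feed it back in.

    step_fn(prefix) -> probability distribution over the vocabulary for the next word.
    """
    prefix = [bos_id]
    while len(prefix) < max_len:
        next_id = int(np.argmax(step_fn(prefix)))
        prefix.append(next_id)
        if next_id == eos_id:
            break
    return prefix[1:]

# Toy step function over a 6-word vocabulary: emits ids 1, 2, 3, then "<eos>" (id 5).
toy = lambda prefix: np.eye(6)[5] if len(prefix) > 3 else np.eye(6)[len(prefix)]
print(greedy_decode(toy, bos_id=0, eos_id=5))   # [1, 2, 3, 5]
```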

Go Forth And Transform

I hope you’ve found this a useful place to start to break the ice with the major concepts of the Transformer. If you want to go deeper, I’d suggest these next steps:

  • Read the Attention Is All You Need paper, the Transformer blog post (Transformer: A Novel Neural Network Architecture for Language Understanding), and the Tensor2Tensor announcement.
  • Watch Łukasz Kaiser’s talk walking through the model and its details
  • Play with the Jupyter Notebook provided as part of the Tensor2Tensor repo
  • Explore the Tensor2Tensor repo.

Follow-up works:

  • Depthwise Separable Convolutions for Neural Machine Translation
  • One Model To Learn Them All
  • Discrete Autoencoders for Sequence Models
  • Generating Wikipedia by Summarizing Long Sequences
  • Image Transformer
  • Training Tips for the Transformer Model
  • Self-Attention with Relative Position Representations
  • Fast Decoding in Sequence Models using Discrete Latent Variables
  • Adafactor: Adaptive Learning Rates with Sublinear Memory Cost

Acknowledgements

Thanks to Illia Polosukhin, Jakob Uszkoreit, Llion Jones , Lukasz Kaiser, Niki Parmar, and Noam Shazeer for providing feedback on earlier versions of this post.

Please hit me up on Twitter for any corrections or feedback.

Written on June 27, 2018
Self-attention: each word is compared with the other words in the same sentence.
N encoders, N decoders.
Three vectors:
q and k together build the weights; v represents the word's own vector; q, k and v together produce the final vector. The values are randomly initialized, and their meaning emerges through training.
1. Each word has an embedding vector; multiplying the embedding by three weight matrices gives three vectors: q, k, v.
Suppose there is only one word:
the word is [1, 512], the weight matrix is [512, 64], and the resulting q is [1, 64].
With N words:
the words are [N, 512], the weight matrix is [512, 64], and the resulting Q is [N, 64].
The weight-matrix parameters are learned during training.
q represents the current word and k represents every word; multiplying q by k gives each word's weight relative to the current word.
Each v is then multiplied by its corresponding weight, sharpening the contrast: highly weighted words stay large, lightly weighted words shrink (multiplied by a tiny number, possibly ignored almost entirely).
Suppose Q×Kᵀ is [2, 2] and V is [2, 3].
Then each column of V (two elements, i.e. the corresponding component of each word) is combined with the two elements of each row of Q×Kᵀ.
The result is [2, 3]: three final components for each of the two words.
2. In actual code everything is processed in matrix form, which amounts to handling many words in parallel.
3. Multi-head attention
Each head has its own set of Q/K/V parameters.
Self-attention means K = V = Q: given an input sentence, every word computes attention against all the words in that sentence. The goal is to learn the dependencies between the words inside the sentence and capture its internal structure.
On why self-attention is used, the paper considers three aspects (per-layer complexity, parallelizability, and learning long-range dependencies) and compares the computational complexity with RNNs and CNNs. When the sequence length n is smaller than the representation dimension d, self-attention has an advantage in per-layer time complexity. When n is large, the authors also propose restricted self-attention, where each word attends only to a window of r words instead of all words. For parallelism, multi-head attention, like a CNN, does not depend on the previous time step and parallelizes well, which beats an RNN. For long-range dependencies, since self-attention computes attention between every pair of words, the maximum path length between any two positions is 1 no matter how far apart they are, so long-range dependencies can be captured.
The role of multiple heads: with a single head, the current word's own weight tends to dominate; adding more heads lets other words closely related to the current word also receive large weights (extracting more features, from different perspectives — 20201106).
Each head projects the input embeddings into a different representation subspace (that is each head's role).
Each encoder has 8 heads; each head outputs an [N, 64] matrix, the eight outputs are concatenated into [N, 512] and then multiplied by a weight matrix.
The resulting [N, 512] goes to the FFNN (fully connected) layer, and the fully connected layer feeds the next encoder.
There are 6 or 12 encoders in total.
Why stack multiple encoders: with too few encoders, the model's capacity may not be enough for the amount of data.
4. The output of the last encoder is passed to all of the decoders (not the [N, 512] matrix itself, but K and V derived from it). In the middle of each decoder there is an attention layer, similar to the one in seq2seq models, that helps the decoder focus on the relevant parts of the sentence.
Named-entity recognition can be compared with translation: the labels play the role of the translated text.
The decoder builds the query vectors Q from its own (target-side) input; Q interacts with K and V to produce the translated output.
During decoding, attention covers only the part of the output sequence that has already been produced; target words that have not yet appeared are unavailable at prediction time and cannot be used.
(Is the target sequence reversed during training, as in some seq2seq setups? Apparently not.)
During training, the produced translation is compared with the reference labels and the parameters are updated via backpropagation.

The output of a lower decoder feeds the decoder above it; every decoder uses the same set of K and V.

Decoding continues until the end-of-sequence symbol is produced.

The final linear layer outputs a vector the size of the vocabulary; after the softmax, the highest-scoring translated word (or label) is taken.

5. Other details
5.1 Each word's position is encoded.
5.2 Residual (shortcut) connections are added, skipping around the self-attention or FFNN sub-layer.
BERT: principles and implementation — Bilibili
https://www.ixigua.com/6889319326990795278/
Problems and follow-ups:
1. Long-range dependency → XLNet
2. Large model size → knowledge distillation
3. Multi-task learning at the word and sentence level
4. MLM (masked language model), word level: given a word's surrounding context, the model is trained on the product of the conditional probabilities of each word appearing.
A language model is the probability distribution defined by these conditional probabilities; masking extends it so that the following context is used as well, not just the preceding context.
Source sequence
Target sequence
To reduce computation, only 15% of the words are masked (predicted).
Only the positions selected for masking contribute to the loss; replacing a fraction of them with random words acts like regularization against overfitting — tolerating some incorrect information gives better generalization.
5. Next-sentence prediction: a binary yes/no prediction. The [CLS] symbol (itself a vector) carries the YES/NO information — whether the current sentence and the next sentence really are consecutive.
label = IsNext
6. Difference from GPT: GPT does not include information from the following words; it is unidirectional.
7. Difference from ELMo: ELMo uses two models, one left-to-right and one right-to-left, but its feature extractor is an LSTM; the Transformer extracts features from long sequences better than an LSTM does.
8. The Transformer contains both an encoder and a decoder; BERT uses only the encoder part.
Transformer structure — Add: residual connection; Norm: layer normalization; Feed Forward: a feed-forward fully connected network. N encoder layers are stacked (8 or 16 units), with each layer's output serving as the next layer's input.
The figures, the formulas, and the code all describe the same thing.
Dividing by √64 keeps the values in a reasonable range.
The padded positions are given a very large negative value so that they have little effect on the softmax (see the sketch below).
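A small numpy sketch of that padding trick: the scores in the pad columns are pushed to a large negative value before the softmax, so the padded positions receive roughly zero attention weight (the scores here are random and assumed to already be scaled by √d_k):

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

# scores: (6, 6) attention scores for a 4-token sentence padded to length 6.
rng = np.random.default_rng(5)
scores = rng.normal(size=(6, 6))
is_pad = np.array([False, False, False, False, True, True])

scores = np.where(is_pad[None, :], -1e9, scores)   # pad columns get a very large negative value
weights = softmax(scores)
print(weights[0].round(3))                         # last two entries ~0: pads barely affect softmax
```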
20210511
Masking mechanisms in the BERT family — Zhihu

Several masking approaches
