STViT-R Code Reading Notes

Contents

I. SwinTransformer

1. Principle

2. Code

II. STViT-R

1. Central idea

2. Code and the paper


No actual training is done here; this is only a code read-through, so it is enough to build the network and run a single forward pass.

I. SwinTransformer

1. Principle

Main idea: the tokens are partitioned by region into windows, and self-attention is computed only among the tokens inside each window.

However, different windows then never interact. To solve this, Swin proposes the shifted-window scheme: between consecutive blocks the window grid is cyclically shifted by half a window, so tokens near window borders can exchange information across windows.
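A minimal sketch of that cyclic shift (assuming the usual Swin implementation based on torch.roll; the masked-attention bookkeeping is omitted):

import torch

# Cyclic shift before window partitioning: tokens near window borders end up in the
# same window in the next block, so information flows across windows.
x = torch.randn(1, 56, 56, 96)           # (B, H, W, C)
shift_size = 3                            # window_size // 2 for window_size 7
shifted_x = torch.roll(x, shifts=(-shift_size, -shift_size), dims=(1, 2))
# After window attention, the shift is undone:
x_back = torch.roll(shifted_x, shifts=(shift_size, shift_size), dims=(1, 2))
assert torch.equal(x, x_back)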

2. Code

1) Uniform window partitioning

x_windows = window_partition(shifted_x, self.window_size)  # nW*B, window_size, window_size, C; window_size = 7; partition into windows -> (64,7,7,96)
x_windows = x_windows.view(-1, self.window_size * self.window_size, C)  # nW*B, window_size*window_size, C -> (64,49,96)
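window_partition itself is not shown in the snippet above; a minimal sketch of the standard Swin helper (assumed here) is:

import torch

def window_partition(x: torch.Tensor, window_size: int) -> torch.Tensor:
    """Split a (B, H, W, C) feature map into non-overlapping windows.
    Returns (num_windows*B, window_size, window_size, C)."""
    B, H, W, C = x.shape
    x = x.view(B, H // window_size, window_size, W // window_size, window_size, C)
    windows = x.permute(0, 1, 3, 2, 4, 5).contiguous().view(-1, window_size, window_size, C)
    return windows

# Example: a 56x56 map with C=96 and window_size=7 gives 64 windows per image.
x = torch.randn(1, 56, 56, 96)
print(window_partition(x, 7).shape)  # torch.Size([64, 7, 7, 96])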

II. STViT-R

1. Central idea

The shallow transformer layers are kept unchanged to extract low-level features, so that the image tokens retain rich spatial information. In the deeper layers, the paper proposes the STGM (Semantic Token Generation Module) to generate semantic tokens: through clustering, the whole image is represented by a small number of tokens carrying high-level semantic information. In the first STGM, the semantic tokens are initialized by intra- and inter-window spatial pooling. Thanks to this spatial initialization, the semantic tokens mainly contain local semantic information and are distributed discretely and uniformly over space. In the following attention layers, besides further clustering, the semantic tokens are also equipped with global cluster centers, and the network can adaptively select a subset of the semantic tokens to focus on global semantic information.

2. Code and the paper

The spatial-pooling initialization of the semantic tokens corresponds to the following code:

xx = x.reshape(B, H // self.window_size, self.window_size, W // self.window_size, self.window_size, C)  # (1,2,7,2,7,384)
windows = xx.permute(0, 1, 3, 2, 4, 5).contiguous().reshape(-1, self.window_size, self.window_size, C).permute(0, 3, 1, 2)  # (4,384,7,7)
shortcut = self.multi_scale(windows)  # B*nW, W*W, C  multi_scale.py --13  (4,9,384)
if self.use_conv_pos:  # False
    shortcut = self.conv_pos(shortcut)
pool_x = self.norm1(shortcut.reshape(B, -1, C)).reshape(-1, self.multi_scale.num_samples, C)  # (4,9,384)

class multi_scale_semantic_token1(nn.Module):
    def __init__(self, sample_window_size):
        super().__init__()
        self.sample_window_size = sample_window_size  # 3
        self.num_samples = sample_window_size * sample_window_size

    def forward(self, x):  # (4,384,7,7)
        B, C, _, _ = x.size()
        pool_x = F.adaptive_max_pool2d(x, (self.sample_window_size, self.sample_window_size)).view(B, C, self.num_samples).transpose(2, 1)  # (4,9,384)
        return pool_x

Note that the pooling is performed within each window. In this code the window size is 7 and the feature map is split into 4 windows, so x is (4,384,7,7) before pooling; pooling each window down to a 3x3 grid gives an output of (4,9,384). As for the parameter settings, the local variant is used, as described in the paper.
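A quick standalone shape check of this per-window pooling (hypothetical tensors, mirroring multi_scale_semantic_token1):

import torch
import torch.nn.functional as F

# 4 windows of size 7x7 with C=384, each pooled to 3x3 samples.
x = torch.randn(4, 384, 7, 7)
pooled = F.adaptive_max_pool2d(x, (3, 3))          # (4, 384, 3, 3)
pool_x = pooled.view(4, 384, 9).transpose(2, 1)    # (4, 9, 384)
print(pool_x.shape)  # torch.Size([4, 9, 384])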

Moreover, as the paper explains, the key/value windows are made larger than the query windows, so the following operation enlarges the original window:

k_windows = F.unfold(x.permute(0, 3, 1, 2), kernel_size=10, stride=4).view(B, C, 10, 10, -1).permute(0, 4, 2, 3, 1)  # (1,4,10,10,384)
k_windows = k_windows.reshape(-1, 100, C)  # (4,100,384)
k_windows = torch.cat([shortcut, k_windows], dim=1)  # (4,109,384)
k_windows = self.norm1(k_windows.reshape(B, -1, C)).reshape(-1, 100+self.multi_scale.num_samples, C)  # (4,109,384)
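The shape arithmetic behind this enlargement, as a standalone check (assuming the 14x14 feature map with C = 384 used above):

import torch
import torch.nn.functional as F

x = torch.randn(1, 14, 14, 384)                    # (B, H, W, C)
patches = F.unfold(x.permute(0, 3, 1, 2),          # (1, 384, 14, 14)
                   kernel_size=10, stride=4)        # (1, 384*10*10, 4): 2x2 sliding positions
k_windows = patches.view(1, 384, 10, 10, -1).permute(0, 4, 2, 3, 1)
print(k_windows.shape)  # torch.Size([1, 4, 10, 10, 384]): one 10x10 key window per 7x7 window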


Equation 1 (the equation figure from the paper is not reproduced here)

The first half corresponds to:

# P
shortcut = self.multi_scale(windows)
# MHA(P, X, X)
pool_x = self.norm1(shortcut.reshape(B, -1, C)).reshape(-1, self.multi_scale.num_samples, C)
if self.shortcut:
    x = shortcut + self.drop_path(self.layer_scale_1 * self.attn(pool_x, k_windows))

The Norm layer is omitted in the notation: the P inside the attention (pool_x) has been normalized, while the P outside the parentheses is the un-normalized shortcut.

The second half corresponds to:

x = x + self.drop_path(self.layer_scale_2 * self.mlp(self.norm2(x)))  # (1,36,384)
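Putting the two halves together, the first STGM step that this code implements can be written as follows (a reconstruction from the code above, not the paper's exact notation; LayerScale and DropPath are omitted):

$$S = P + \mathrm{MHA}\big(\mathrm{Norm}(P),\ \mathrm{Norm}(X_k),\ \mathrm{Norm}(X_k)\big), \qquad S = S + \mathrm{MLP}\big(\mathrm{Norm}(S)\big)$$

where P is the pooled initialization (shortcut) and X_k are the enlarged key/value windows (k_windows, which already have the pooled tokens concatenated in).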

The subsequent semantic-attention layer, where the semantic tokens are optionally combined with the global cluster centers, corresponds to:

elif i == 2:
    if self.use_global:
        semantic_token = blk(semantic_token + self.semantic_token2, torch.cat([semantic_token, x], dim=1))
    else:  # True
        semantic_token = blk(semantic_token, torch.cat([semantic_token, x], dim=1))

The global cluster centers mentioned in the paper are defined as follows (only created when use_global is set):

if self.use_global:
    self.semantic_token2 = nn.Parameter(torch.zeros(1, self.num_samples, embed_dim))
    trunc_normal_(self.semantic_token2, std=.02)

Finally, this corresponds to:

x = shortcut + self.drop_path(self.layer_scale_1 * attn)
x = x + self.drop_path(self.layer_scale_2 * self.mlp(self.norm2(x)))

Note that the blocks between i = 1 and i = 5 are the STGM; at i = 5 the other side of the dumbbell begins.

The corresponding code:

elif i == 5:
    x = blk(x, semantic_token)  # to layers.py--132

As shown by the blue line in the figure, the original image tokens serve as Q, and the semantic tokens produced by the STGM serve as K and V.
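A minimal sketch of this cross-attention, with Q projected from the image tokens and K/V from the semantic tokens (the q/kv/proj layout mirrors the printed Attention modules below; the class name and shapes here are illustrative, not the repository's exact code):

import torch
import torch.nn as nn

class CrossAttention(nn.Module):
    """Image tokens attend to semantic tokens: Q from x, K/V from semantic_token."""
    def __init__(self, dim=384, num_heads=12):
        super().__init__()
        self.num_heads = num_heads
        self.scale = (dim // num_heads) ** -0.5
        self.q = nn.Linear(dim, dim)
        self.kv = nn.Linear(dim, dim * 2)
        self.proj = nn.Linear(dim, dim)

    def forward(self, x, kv_tokens):
        B, N, C = x.shape
        M = kv_tokens.shape[1]
        h = self.num_heads
        q = self.q(x).reshape(B, N, h, C // h).transpose(1, 2)                   # (B, h, N, d)
        kv = self.kv(kv_tokens).reshape(B, M, 2, h, C // h).permute(2, 0, 3, 1, 4)
        k, v = kv[0], kv[1]                                                       # (B, h, M, d)
        attn = (q @ k.transpose(-2, -1)) * self.scale                             # (B, h, N, M)
        out = (attn.softmax(dim=-1) @ v).transpose(1, 2).reshape(B, N, C)
        return self.proj(out)

x = torch.randn(1, 196, 384)              # image tokens
semantic_token = torch.randn(1, 36, 384)  # semantic tokens (4 windows x 9 samples)
print(CrossAttention()(x, semantic_token).shape)  # torch.Size([1, 196, 384])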


Repeating the above process over and over builds the multiple dumbbell structures:

if i == 0:
    x = blk(x)  # (1,196,384)  to swin_transformer -- 242
elif i == 1:
    semantic_token = blk(x)  # to layers.py --179
elif i == 2:
    if self.use_global:  # True
        semantic_token = blk(semantic_token + self.semantic_token2, torch.cat([semantic_token, x], dim=1))  # to layers.py--132
    else:  # True
        semantic_token = blk(semantic_token, torch.cat([semantic_token, x], dim=1))  # to layers.py--132
elif i > 2 and i < 5:
    semantic_token = blk(semantic_token)  # to layers.py--132
elif i == 5:
    x = blk(x, semantic_token)  # to layers.py--132
elif i == 6:
    x = blk(x)
elif i == 7:
    semantic_token = blk(x)
elif i == 8:
    semantic_token = blk(semantic_token, torch.cat([semantic_token, x], dim=1))
elif i > 8 and i < 11:
    semantic_token = blk(semantic_token)
elif i == 11:
    x = blk(x, semantic_token)
elif i == 12:
    x = blk(x)
elif i == 13:
    semantic_token = blk(x)
elif i == 14:
    semantic_token = blk(semantic_token, torch.cat([semantic_token, x], dim=1))
elif i > 14 and i < 17:
    semantic_token = blk(semantic_token)
else:
    x = blk(x, semantic_token)

Printed model structure (tiny):

SwinTransformer((patch_embed): PatchEmbed((proj): Sequential((0): Conv2d_BN((c): Conv2d(3, 48, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)(bn): BatchNorm2d(48, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True))(1): Hardswish()(2): Conv2d_BN((c): Conv2d(48, 96, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)(bn): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True))(3): Hardswish()))(pos_drop): Dropout(p=0.0, inplace=False)(layers): ModuleList((0): BasicLayer(dim=96, input_resolution=(56, 56), depth=2(blocks): ModuleList((0): SwinTransformerBlock(dim=96, input_resolution=(56, 56), num_heads=3, window_size=7, shift_size=0, mlp_ratio=4.0(norm1): LayerNorm((96,), eps=1e-05, elementwise_affine=True)(attn): WindowAttention(dim=96, window_size=(7, 7), num_heads=3(qkv): Linear(in_features=96, out_features=288, bias=True)(attn_drop): Dropout(p=0.0, inplace=False)(proj): Linear(in_features=96, out_features=96, bias=True)(proj_drop): Dropout(p=0.0, inplace=False)(softmax): Softmax(dim=-1))(drop_path): Identity()(norm2): LayerNorm((96,), eps=1e-05, elementwise_affine=True)(mlp): Mlp((fc1): Linear(in_features=96, out_features=384, bias=True)(act): GELU()(fc2): Linear(in_features=384, out_features=96, bias=True)(drop): Dropout(p=0.0, inplace=False)))(1): SwinTransformerBlock(dim=96, input_resolution=(56, 56), num_heads=3, window_size=7, shift_size=3, mlp_ratio=4.0(norm1): LayerNorm((96,), eps=1e-05, elementwise_affine=True)(attn): WindowAttention(dim=96, window_size=(7, 7), num_heads=3(qkv): Linear(in_features=96, out_features=288, bias=True)(attn_drop): Dropout(p=0.0, inplace=False)(proj): Linear(in_features=96, out_features=96, bias=True)(proj_drop): Dropout(p=0.0, inplace=False)(softmax): Softmax(dim=-1))(drop_path): DropPath(drop_prob=0.018)(norm2): LayerNorm((96,), eps=1e-05, elementwise_affine=True)(mlp): Mlp((fc1): Linear(in_features=96, out_features=384, bias=True)(act): GELU()(fc2): Linear(in_features=384, out_features=96, bias=True)(drop): Dropout(p=0.0, inplace=False))))(downsample): PatchMerging(input_resolution=(56, 56), dim=96(reduction): Linear(in_features=384, out_features=192, bias=False)(norm): LayerNorm((384,), eps=1e-05, elementwise_affine=True)))(1): BasicLayer(dim=192, input_resolution=(28, 28), depth=2(blocks): ModuleList((0): SwinTransformerBlock(dim=192, input_resolution=(28, 28), num_heads=6, window_size=7, shift_size=0, mlp_ratio=4.0(norm1): LayerNorm((192,), eps=1e-05, elementwise_affine=True)(attn): WindowAttention(dim=192, window_size=(7, 7), num_heads=6(qkv): Linear(in_features=192, out_features=576, bias=True)(attn_drop): Dropout(p=0.0, inplace=False)(proj): Linear(in_features=192, out_features=192, bias=True)(proj_drop): Dropout(p=0.0, inplace=False)(softmax): Softmax(dim=-1))(drop_path): DropPath(drop_prob=0.036)(norm2): LayerNorm((192,), eps=1e-05, elementwise_affine=True)(mlp): Mlp((fc1): Linear(in_features=192, out_features=768, bias=True)(act): GELU()(fc2): Linear(in_features=768, out_features=192, bias=True)(drop): Dropout(p=0.0, inplace=False)))(1): SwinTransformerBlock(dim=192, input_resolution=(28, 28), num_heads=6, window_size=7, shift_size=3, mlp_ratio=4.0(norm1): LayerNorm((192,), eps=1e-05, elementwise_affine=True)(attn): WindowAttention(dim=192, window_size=(7, 7), num_heads=6(qkv): Linear(in_features=192, out_features=576, bias=True)(attn_drop): Dropout(p=0.0, inplace=False)(proj): Linear(in_features=192, out_features=192, bias=True)(proj_drop): Dropout(p=0.0, 
inplace=False)(softmax): Softmax(dim=-1))(drop_path): DropPath(drop_prob=0.055)(norm2): LayerNorm((192,), eps=1e-05, elementwise_affine=True)(mlp): Mlp((fc1): Linear(in_features=192, out_features=768, bias=True)(act): GELU()(fc2): Linear(in_features=768, out_features=192, bias=True)(drop): Dropout(p=0.0, inplace=False))))(downsample): PatchMerging(input_resolution=(28, 28), dim=192(reduction): Linear(in_features=768, out_features=384, bias=False)(norm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)))(2): Deit((blocks): ModuleList((0): SwinTransformerBlock(dim=384, input_resolution=(14, 14), num_heads=12, window_size=7, shift_size=0, mlp_ratio=4.0(norm1): LayerNorm((384,), eps=1e-05, elementwise_affine=True)(attn): WindowAttention(dim=384, window_size=(7, 7), num_heads=12(qkv): Linear(in_features=384, out_features=1152, bias=True)(attn_drop): Dropout(p=0.0, inplace=False)(proj): Linear(in_features=384, out_features=384, bias=True)(proj_drop): Dropout(p=0.0, inplace=False)(softmax): Softmax(dim=-1))(drop_path): DropPath(drop_prob=0.073)(norm2): LayerNorm((384,), eps=1e-05, elementwise_affine=True)(mlp): Mlp((fc1): Linear(in_features=384, out_features=1536, bias=True)(act): GELU()(fc2): Linear(in_features=1536, out_features=384, bias=True)(drop): Dropout(p=0.0, inplace=False)))(1): SemanticAttentionBlock((norm1): LayerNorm((384,), eps=1e-05, elementwise_affine=True)(multi_scale): multi_scale_semantic_token1()(attn): Attention((q): Linear(in_features=384, out_features=384, bias=True)(kv): Linear(in_features=384, out_features=768, bias=True)(attn_drop): Dropout(p=0.0, inplace=False)(proj): Linear(in_features=384, out_features=384, bias=True)(proj_drop): Dropout(p=0.0, inplace=False))(drop_path): DropPath(drop_prob=0.091)(norm2): LayerNorm((384,), eps=1e-05, elementwise_affine=True)(mlp): Mlp((fc1): Linear(in_features=384, out_features=1536, bias=True)(act): GELU()(drop1): Dropout(p=0.0, inplace=False)(fc2): Linear(in_features=1536, out_features=384, bias=True)(drop2): Dropout(p=0.0, inplace=False)))(2): Block((norm1): LayerNorm((384,), eps=1e-05, elementwise_affine=True)(attn): Attention((q): Linear(in_features=384, out_features=384, bias=True)(kv): Linear(in_features=384, out_features=768, bias=True)(attn_drop): Dropout(p=0.0, inplace=False)(proj): Linear(in_features=384, out_features=384, bias=True)(proj_drop): Dropout(p=0.0, inplace=False))(drop_path): DropPath(drop_prob=0.109)(norm2): LayerNorm((384,), eps=1e-05, elementwise_affine=True)(mlp): Mlp((fc1): Linear(in_features=384, out_features=1536, bias=True)(act): GELU()(drop1): Dropout(p=0.0, inplace=False)(fc2): Linear(in_features=1536, out_features=384, bias=True)(drop2): Dropout(p=0.0, inplace=False)))(3): Block((norm1): LayerNorm((384,), eps=1e-05, elementwise_affine=True)(attn): Attention((q): Linear(in_features=384, out_features=384, bias=True)(kv): Linear(in_features=384, out_features=768, bias=True)(attn_drop): Dropout(p=0.0, inplace=False)(proj): Linear(in_features=384, out_features=384, bias=True)(proj_drop): Dropout(p=0.0, inplace=False))(drop_path): DropPath(drop_prob=0.127)(norm2): LayerNorm((384,), eps=1e-05, elementwise_affine=True)(mlp): Mlp((fc1): Linear(in_features=384, out_features=1536, bias=True)(act): GELU()(drop1): Dropout(p=0.0, inplace=False)(fc2): Linear(in_features=1536, out_features=384, bias=True)(drop2): Dropout(p=0.0, inplace=False)))(4): Block((norm1): LayerNorm((384,), eps=1e-05, elementwise_affine=True)(attn): Attention((q): Linear(in_features=384, out_features=384, bias=True)(kv): 
Linear(in_features=384, out_features=768, bias=True)(attn_drop): Dropout(p=0.0, inplace=False)(proj): Linear(in_features=384, out_features=384, bias=True)(proj_drop): Dropout(p=0.0, inplace=False))(drop_path): DropPath(drop_prob=0.145)(norm2): LayerNorm((384,), eps=1e-05, elementwise_affine=True)(mlp): Mlp((fc1): Linear(in_features=384, out_features=1536, bias=True)(act): GELU()(drop1): Dropout(p=0.0, inplace=False)(fc2): Linear(in_features=1536, out_features=384, bias=True)(drop2): Dropout(p=0.0, inplace=False)))(5): Block((norm1): LayerNorm((384,), eps=1e-05, elementwise_affine=True)(attn): Attention((q): Linear(in_features=384, out_features=384, bias=True)(kv): Linear(in_features=384, out_features=768, bias=True)(attn_drop): Dropout(p=0.0, inplace=False)(proj): Linear(in_features=384, out_features=384, bias=True)(proj_drop): Dropout(p=0.0, inplace=False))(drop_path): DropPath(drop_prob=0.164)(norm2): LayerNorm((384,), eps=1e-05, elementwise_affine=True)(mlp): Mlp((fc1): Linear(in_features=384, out_features=1536, bias=True)(act): GELU()(drop1): Dropout(p=0.0, inplace=False)(fc2): Linear(in_features=1536, out_features=384, bias=True)(drop2): Dropout(p=0.0, inplace=False))))(downsample): PatchMerging(input_resolution=(14, 14), dim=384(reduction): Linear(in_features=1536, out_features=768, bias=False)(norm): LayerNorm((1536,), eps=1e-05, elementwise_affine=True)))(3): BasicLayer(dim=768, input_resolution=(7, 7), depth=2(blocks): ModuleList((0): SwinTransformerBlock(dim=768, input_resolution=(7, 7), num_heads=24, window_size=7, shift_size=0, mlp_ratio=4.0(norm1): LayerNorm((768,), eps=1e-05, elementwise_affine=True)(attn): WindowAttention(dim=768, window_size=(7, 7), num_heads=24(qkv): Linear(in_features=768, out_features=2304, bias=True)(attn_drop): Dropout(p=0.0, inplace=False)(proj): Linear(in_features=768, out_features=768, bias=True)(proj_drop): Dropout(p=0.0, inplace=False)(softmax): Softmax(dim=-1))(drop_path): DropPath(drop_prob=0.182)(norm2): LayerNorm((768,), eps=1e-05, elementwise_affine=True)(mlp): Mlp((fc1): Linear(in_features=768, out_features=3072, bias=True)(act): GELU()(fc2): Linear(in_features=3072, out_features=768, bias=True)(drop): Dropout(p=0.0, inplace=False)))(1): SwinTransformerBlock(dim=768, input_resolution=(7, 7), num_heads=24, window_size=7, shift_size=0, mlp_ratio=4.0(norm1): LayerNorm((768,), eps=1e-05, elementwise_affine=True)(attn): WindowAttention(dim=768, window_size=(7, 7), num_heads=24(qkv): Linear(in_features=768, out_features=2304, bias=True)(attn_drop): Dropout(p=0.0, inplace=False)(proj): Linear(in_features=768, out_features=768, bias=True)(proj_drop): Dropout(p=0.0, inplace=False)(softmax): Softmax(dim=-1))(drop_path): DropPath(drop_prob=0.200)(norm2): LayerNorm((768,), eps=1e-05, elementwise_affine=True)(mlp): Mlp((fc1): Linear(in_features=768, out_features=3072, bias=True)(act): GELU()(fc2): Linear(in_features=3072, out_features=768, bias=True)(drop): Dropout(p=0.0, inplace=False))))))(norm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)(avgpool): AdaptiveAvgPool1d(output_size=1)(head): Linear(in_features=768, out_features=100, bias=True)
)

Network structure:

SwinTransformer((patch_embed): PatchEmbed((proj): Sequential((0): Conv2d_BN((c): Conv2d(3, 48, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)(bn): BatchNorm2d(48, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True))(1): Hardswish()(2): Conv2d_BN((c): Conv2d(48, 96, kernel_size=(3, 3), stride=(2, 2), padding=(1, 1), bias=False)(bn): BatchNorm2d(96, eps=1e-05, momentum=0.1, affine=True, track_running_stats=True))(3): Hardswish()))(pos_drop): Dropout(p=0.0, inplace=False)(layers): ModuleList((0): BasicLayer(dim=96, input_resolution=(56, 56), depth=2(blocks): ModuleList((0): SwinTransformerBlock(dim=96, input_resolution=(56, 56), num_heads=3, window_size=7, shift_size=0, mlp_ratio=4.0(norm1): LayerNorm((96,), eps=1e-05, elementwise_affine=True)(attn): WindowAttention(dim=96, window_size=(7, 7), num_heads=3(qkv): Linear(in_features=96, out_features=288, bias=True)(attn_drop): Dropout(p=0.0, inplace=False)(proj): Linear(in_features=96, out_features=96, bias=True)(proj_drop): Dropout(p=0.0, inplace=False)(softmax): Softmax(dim=-1))(drop_path): Identity()(norm2): LayerNorm((96,), eps=1e-05, elementwise_affine=True)(mlp): Mlp((fc1): Linear(in_features=96, out_features=384, bias=True)(act): GELU()(fc2): Linear(in_features=384, out_features=96, bias=True)(drop): Dropout(p=0.0, inplace=False)))(1): SwinTransformerBlock(dim=96, input_resolution=(56, 56), num_heads=3, window_size=7, shift_size=3, mlp_ratio=4.0(norm1): LayerNorm((96,), eps=1e-05, elementwise_affine=True)(attn): WindowAttention(dim=96, window_size=(7, 7), num_heads=3(qkv): Linear(in_features=96, out_features=288, bias=True)(attn_drop): Dropout(p=0.0, inplace=False)(proj): Linear(in_features=96, out_features=96, bias=True)(proj_drop): Dropout(p=0.0, inplace=False)(softmax): Softmax(dim=-1))(drop_path): DropPath(drop_prob=0.013)(norm2): LayerNorm((96,), eps=1e-05, elementwise_affine=True)(mlp): Mlp((fc1): Linear(in_features=96, out_features=384, bias=True)(act): GELU()(fc2): Linear(in_features=384, out_features=96, bias=True)(drop): Dropout(p=0.0, inplace=False))))(downsample): PatchMerging(input_resolution=(56, 56), dim=96(reduction): Linear(in_features=384, out_features=192, bias=False)(norm): LayerNorm((384,), eps=1e-05, elementwise_affine=True)))(1): BasicLayer(dim=192, input_resolution=(28, 28), depth=2(blocks): ModuleList((0): SwinTransformerBlock(dim=192, input_resolution=(28, 28), num_heads=6, window_size=7, shift_size=0, mlp_ratio=4.0(norm1): LayerNorm((192,), eps=1e-05, elementwise_affine=True)(attn): WindowAttention(dim=192, window_size=(7, 7), num_heads=6(qkv): Linear(in_features=192, out_features=576, bias=True)(attn_drop): Dropout(p=0.0, inplace=False)(proj): Linear(in_features=192, out_features=192, bias=True)(proj_drop): Dropout(p=0.0, inplace=False)(softmax): Softmax(dim=-1))(drop_path): DropPath(drop_prob=0.026)(norm2): LayerNorm((192,), eps=1e-05, elementwise_affine=True)(mlp): Mlp((fc1): Linear(in_features=192, out_features=768, bias=True)(act): GELU()(fc2): Linear(in_features=768, out_features=192, bias=True)(drop): Dropout(p=0.0, inplace=False)))(1): SwinTransformerBlock(dim=192, input_resolution=(28, 28), num_heads=6, window_size=7, shift_size=3, mlp_ratio=4.0(norm1): LayerNorm((192,), eps=1e-05, elementwise_affine=True)(attn): WindowAttention(dim=192, window_size=(7, 7), num_heads=6(qkv): Linear(in_features=192, out_features=576, bias=True)(attn_drop): Dropout(p=0.0, inplace=False)(proj): Linear(in_features=192, out_features=192, bias=True)(proj_drop): Dropout(p=0.0, 
inplace=False)(softmax): Softmax(dim=-1))(drop_path): DropPath(drop_prob=0.039)(norm2): LayerNorm((192,), eps=1e-05, elementwise_affine=True)(mlp): Mlp((fc1): Linear(in_features=192, out_features=768, bias=True)(act): GELU()(fc2): Linear(in_features=768, out_features=192, bias=True)(drop): Dropout(p=0.0, inplace=False))))(downsample): PatchMerging(input_resolution=(28, 28), dim=192(reduction): Linear(in_features=768, out_features=384, bias=False)(norm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)))(2): Deit((blocks): ModuleList((0): SwinTransformerBlock(dim=384, input_resolution=(14, 14), num_heads=12, window_size=7, shift_size=0, mlp_ratio=4.0(norm1): LayerNorm((384,), eps=1e-05, elementwise_affine=True)(attn): WindowAttention(dim=384, window_size=(7, 7), num_heads=12(qkv): Linear(in_features=384, out_features=1152, bias=True)(attn_drop): Dropout(p=0.0, inplace=False)(proj): Linear(in_features=384, out_features=384, bias=True)(proj_drop): Dropout(p=0.0, inplace=False)(softmax): Softmax(dim=-1))(drop_path): DropPath(drop_prob=0.052)(norm2): LayerNorm((384,), eps=1e-05, elementwise_affine=True)(mlp): Mlp((fc1): Linear(in_features=384, out_features=1536, bias=True)(act): GELU()(fc2): Linear(in_features=1536, out_features=384, bias=True)(drop): Dropout(p=0.0, inplace=False)))(1): SemanticAttentionBlock((norm1): LayerNorm((384,), eps=1e-05, elementwise_affine=True)(multi_scale): multi_scale_semantic_token1()(attn): Attention((q): Linear(in_features=384, out_features=384, bias=True)(kv): Linear(in_features=384, out_features=768, bias=True)(attn_drop): Dropout(p=0.0, inplace=False)(proj): Linear(in_features=384, out_features=384, bias=True)(proj_drop): Dropout(p=0.0, inplace=False))(drop_path): DropPath(drop_prob=0.065)(norm2): LayerNorm((384,), eps=1e-05, elementwise_affine=True)(mlp): Mlp((fc1): Linear(in_features=384, out_features=1536, bias=True)(act): GELU()(drop1): Dropout(p=0.0, inplace=False)(fc2): Linear(in_features=1536, out_features=384, bias=True)(drop2): Dropout(p=0.0, inplace=False)))(2): Block((norm1): LayerNorm((384,), eps=1e-05, elementwise_affine=True)(attn): Attention((q): Linear(in_features=384, out_features=384, bias=True)(kv): Linear(in_features=384, out_features=768, bias=True)(attn_drop): Dropout(p=0.0, inplace=False)(proj): Linear(in_features=384, out_features=384, bias=True)(proj_drop): Dropout(p=0.0, inplace=False))(drop_path): DropPath(drop_prob=0.078)(norm2): LayerNorm((384,), eps=1e-05, elementwise_affine=True)(mlp): Mlp((fc1): Linear(in_features=384, out_features=1536, bias=True)(act): GELU()(drop1): Dropout(p=0.0, inplace=False)(fc2): Linear(in_features=1536, out_features=384, bias=True)(drop2): Dropout(p=0.0, inplace=False)))(3): Block((norm1): LayerNorm((384,), eps=1e-05, elementwise_affine=True)(attn): Attention((q): Linear(in_features=384, out_features=384, bias=True)(kv): Linear(in_features=384, out_features=768, bias=True)(attn_drop): Dropout(p=0.0, inplace=False)(proj): Linear(in_features=384, out_features=384, bias=True)(proj_drop): Dropout(p=0.0, inplace=False))(drop_path): DropPath(drop_prob=0.091)(norm2): LayerNorm((384,), eps=1e-05, elementwise_affine=True)(mlp): Mlp((fc1): Linear(in_features=384, out_features=1536, bias=True)(act): GELU()(drop1): Dropout(p=0.0, inplace=False)(fc2): Linear(in_features=1536, out_features=384, bias=True)(drop2): Dropout(p=0.0, inplace=False)))(4): Block((norm1): LayerNorm((384,), eps=1e-05, elementwise_affine=True)(attn): Attention((q): Linear(in_features=384, out_features=384, bias=True)(kv): 
Linear(in_features=384, out_features=768, bias=True)(attn_drop): Dropout(p=0.0, inplace=False)(proj): Linear(in_features=384, out_features=384, bias=True)(proj_drop): Dropout(p=0.0, inplace=False))(drop_path): DropPath(drop_prob=0.104)(norm2): LayerNorm((384,), eps=1e-05, elementwise_affine=True)(mlp): Mlp((fc1): Linear(in_features=384, out_features=1536, bias=True)(act): GELU()(drop1): Dropout(p=0.0, inplace=False)(fc2): Linear(in_features=1536, out_features=384, bias=True)(drop2): Dropout(p=0.0, inplace=False)))(5): Block((norm1): LayerNorm((384,), eps=1e-05, elementwise_affine=True)(attn): Attention((q): Linear(in_features=384, out_features=384, bias=True)(kv): Linear(in_features=384, out_features=768, bias=True)(attn_drop): Dropout(p=0.0, inplace=False)(proj): Linear(in_features=384, out_features=384, bias=True)(proj_drop): Dropout(p=0.0, inplace=False))(drop_path): DropPath(drop_prob=0.117)(norm2): LayerNorm((384,), eps=1e-05, elementwise_affine=True)(mlp): Mlp((fc1): Linear(in_features=384, out_features=1536, bias=True)(act): GELU()(drop1): Dropout(p=0.0, inplace=False)(fc2): Linear(in_features=1536, out_features=384, bias=True)(drop2): Dropout(p=0.0, inplace=False)))(6): SwinTransformerBlock(dim=384, input_resolution=(14, 14), num_heads=12, window_size=7, shift_size=0, mlp_ratio=4.0(norm1): LayerNorm((384,), eps=1e-05, elementwise_affine=True)(attn): WindowAttention(dim=384, window_size=(7, 7), num_heads=12(qkv): Linear(in_features=384, out_features=1152, bias=True)(attn_drop): Dropout(p=0.0, inplace=False)(proj): Linear(in_features=384, out_features=384, bias=True)(proj_drop): Dropout(p=0.0, inplace=False)(softmax): Softmax(dim=-1))(drop_path): DropPath(drop_prob=0.130)(norm2): LayerNorm((384,), eps=1e-05, elementwise_affine=True)(mlp): Mlp((fc1): Linear(in_features=384, out_features=1536, bias=True)(act): GELU()(fc2): Linear(in_features=1536, out_features=384, bias=True)(drop): Dropout(p=0.0, inplace=False)))(7): SemanticAttentionBlock((norm1): LayerNorm((384,), eps=1e-05, elementwise_affine=True)(multi_scale): multi_scale_semantic_token1()(attn): Attention((q): Linear(in_features=384, out_features=384, bias=True)(kv): Linear(in_features=384, out_features=768, bias=True)(attn_drop): Dropout(p=0.0, inplace=False)(proj): Linear(in_features=384, out_features=384, bias=True)(proj_drop): Dropout(p=0.0, inplace=False))(drop_path): DropPath(drop_prob=0.143)(norm2): LayerNorm((384,), eps=1e-05, elementwise_affine=True)(mlp): Mlp((fc1): Linear(in_features=384, out_features=1536, bias=True)(act): GELU()(drop1): Dropout(p=0.0, inplace=False)(fc2): Linear(in_features=1536, out_features=384, bias=True)(drop2): Dropout(p=0.0, inplace=False)))(8): Block((norm1): LayerNorm((384,), eps=1e-05, elementwise_affine=True)(attn): Attention((q): Linear(in_features=384, out_features=384, bias=True)(kv): Linear(in_features=384, out_features=768, bias=True)(attn_drop): Dropout(p=0.0, inplace=False)(proj): Linear(in_features=384, out_features=384, bias=True)(proj_drop): Dropout(p=0.0, inplace=False))(drop_path): DropPath(drop_prob=0.157)(norm2): LayerNorm((384,), eps=1e-05, elementwise_affine=True)(mlp): Mlp((fc1): Linear(in_features=384, out_features=1536, bias=True)(act): GELU()(drop1): Dropout(p=0.0, inplace=False)(fc2): Linear(in_features=1536, out_features=384, bias=True)(drop2): Dropout(p=0.0, inplace=False)))(9): Block((norm1): LayerNorm((384,), eps=1e-05, elementwise_affine=True)(attn): Attention((q): Linear(in_features=384, out_features=384, bias=True)(kv): Linear(in_features=384, out_features=768, 
bias=True)(attn_drop): Dropout(p=0.0, inplace=False)(proj): Linear(in_features=384, out_features=384, bias=True)(proj_drop): Dropout(p=0.0, inplace=False))(drop_path): DropPath(drop_prob=0.170)(norm2): LayerNorm((384,), eps=1e-05, elementwise_affine=True)(mlp): Mlp((fc1): Linear(in_features=384, out_features=1536, bias=True)(act): GELU()(drop1): Dropout(p=0.0, inplace=False)(fc2): Linear(in_features=1536, out_features=384, bias=True)(drop2): Dropout(p=0.0, inplace=False)))(10): Block((norm1): LayerNorm((384,), eps=1e-05, elementwise_affine=True)(attn): Attention((q): Linear(in_features=384, out_features=384, bias=True)(kv): Linear(in_features=384, out_features=768, bias=True)(attn_drop): Dropout(p=0.0, inplace=False)(proj): Linear(in_features=384, out_features=384, bias=True)(proj_drop): Dropout(p=0.0, inplace=False))(drop_path): DropPath(drop_prob=0.183)(norm2): LayerNorm((384,), eps=1e-05, elementwise_affine=True)(mlp): Mlp((fc1): Linear(in_features=384, out_features=1536, bias=True)(act): GELU()(drop1): Dropout(p=0.0, inplace=False)(fc2): Linear(in_features=1536, out_features=384, bias=True)(drop2): Dropout(p=0.0, inplace=False)))(11): Block((norm1): LayerNorm((384,), eps=1e-05, elementwise_affine=True)(attn): Attention((q): Linear(in_features=384, out_features=384, bias=True)(kv): Linear(in_features=384, out_features=768, bias=True)(attn_drop): Dropout(p=0.0, inplace=False)(proj): Linear(in_features=384, out_features=384, bias=True)(proj_drop): Dropout(p=0.0, inplace=False))(drop_path): DropPath(drop_prob=0.196)(norm2): LayerNorm((384,), eps=1e-05, elementwise_affine=True)(mlp): Mlp((fc1): Linear(in_features=384, out_features=1536, bias=True)(act): GELU()(drop1): Dropout(p=0.0, inplace=False)(fc2): Linear(in_features=1536, out_features=384, bias=True)(drop2): Dropout(p=0.0, inplace=False)))(12): SwinTransformerBlock(dim=384, input_resolution=(14, 14), num_heads=12, window_size=7, shift_size=0, mlp_ratio=4.0(norm1): LayerNorm((384,), eps=1e-05, elementwise_affine=True)(attn): WindowAttention(dim=384, window_size=(7, 7), num_heads=12(qkv): Linear(in_features=384, out_features=1152, bias=True)(attn_drop): Dropout(p=0.0, inplace=False)(proj): Linear(in_features=384, out_features=384, bias=True)(proj_drop): Dropout(p=0.0, inplace=False)(softmax): Softmax(dim=-1))(drop_path): DropPath(drop_prob=0.209)(norm2): LayerNorm((384,), eps=1e-05, elementwise_affine=True)(mlp): Mlp((fc1): Linear(in_features=384, out_features=1536, bias=True)(act): GELU()(fc2): Linear(in_features=1536, out_features=384, bias=True)(drop): Dropout(p=0.0, inplace=False)))(13): SemanticAttentionBlock((norm1): LayerNorm((384,), eps=1e-05, elementwise_affine=True)(multi_scale): multi_scale_semantic_token1()(attn): Attention((q): Linear(in_features=384, out_features=384, bias=True)(kv): Linear(in_features=384, out_features=768, bias=True)(attn_drop): Dropout(p=0.0, inplace=False)(proj): Linear(in_features=384, out_features=384, bias=True)(proj_drop): Dropout(p=0.0, inplace=False))(drop_path): DropPath(drop_prob=0.222)(norm2): LayerNorm((384,), eps=1e-05, elementwise_affine=True)(mlp): Mlp((fc1): Linear(in_features=384, out_features=1536, bias=True)(act): GELU()(drop1): Dropout(p=0.0, inplace=False)(fc2): Linear(in_features=1536, out_features=384, bias=True)(drop2): Dropout(p=0.0, inplace=False)))(14): Block((norm1): LayerNorm((384,), eps=1e-05, elementwise_affine=True)(attn): Attention((q): Linear(in_features=384, out_features=384, bias=True)(kv): Linear(in_features=384, out_features=768, bias=True)(attn_drop): Dropout(p=0.0, 
inplace=False)(proj): Linear(in_features=384, out_features=384, bias=True)(proj_drop): Dropout(p=0.0, inplace=False))(drop_path): DropPath(drop_prob=0.235)(norm2): LayerNorm((384,), eps=1e-05, elementwise_affine=True)(mlp): Mlp((fc1): Linear(in_features=384, out_features=1536, bias=True)(act): GELU()(drop1): Dropout(p=0.0, inplace=False)(fc2): Linear(in_features=1536, out_features=384, bias=True)(drop2): Dropout(p=0.0, inplace=False)))(15): Block((norm1): LayerNorm((384,), eps=1e-05, elementwise_affine=True)(attn): Attention((q): Linear(in_features=384, out_features=384, bias=True)(kv): Linear(in_features=384, out_features=768, bias=True)(attn_drop): Dropout(p=0.0, inplace=False)(proj): Linear(in_features=384, out_features=384, bias=True)(proj_drop): Dropout(p=0.0, inplace=False))(drop_path): DropPath(drop_prob=0.248)(norm2): LayerNorm((384,), eps=1e-05, elementwise_affine=True)(mlp): Mlp((fc1): Linear(in_features=384, out_features=1536, bias=True)(act): GELU()(drop1): Dropout(p=0.0, inplace=False)(fc2): Linear(in_features=1536, out_features=384, bias=True)(drop2): Dropout(p=0.0, inplace=False)))(16): Block((norm1): LayerNorm((384,), eps=1e-05, elementwise_affine=True)(attn): Attention((q): Linear(in_features=384, out_features=384, bias=True)(kv): Linear(in_features=384, out_features=768, bias=True)(attn_drop): Dropout(p=0.0, inplace=False)(proj): Linear(in_features=384, out_features=384, bias=True)(proj_drop): Dropout(p=0.0, inplace=False))(drop_path): DropPath(drop_prob=0.261)(norm2): LayerNorm((384,), eps=1e-05, elementwise_affine=True)(mlp): Mlp((fc1): Linear(in_features=384, out_features=1536, bias=True)(act): GELU()(drop1): Dropout(p=0.0, inplace=False)(fc2): Linear(in_features=1536, out_features=384, bias=True)(drop2): Dropout(p=0.0, inplace=False)))(17): Block((norm1): LayerNorm((384,), eps=1e-05, elementwise_affine=True)(attn): Attention((q): Linear(in_features=384, out_features=384, bias=True)(kv): Linear(in_features=384, out_features=768, bias=True)(attn_drop): Dropout(p=0.0, inplace=False)(proj): Linear(in_features=384, out_features=384, bias=True)(proj_drop): Dropout(p=0.0, inplace=False))(drop_path): DropPath(drop_prob=0.274)(norm2): LayerNorm((384,), eps=1e-05, elementwise_affine=True)(mlp): Mlp((fc1): Linear(in_features=384, out_features=1536, bias=True)(act): GELU()(drop1): Dropout(p=0.0, inplace=False)(fc2): Linear(in_features=1536, out_features=384, bias=True)(drop2): Dropout(p=0.0, inplace=False))))(downsample): PatchMerging(input_resolution=(14, 14), dim=384(reduction): Linear(in_features=1536, out_features=768, bias=False)(norm): LayerNorm((1536,), eps=1e-05, elementwise_affine=True)))(3): BasicLayer(dim=768, input_resolution=(7, 7), depth=2(blocks): ModuleList((0): SwinTransformerBlock(dim=768, input_resolution=(7, 7), num_heads=24, window_size=7, shift_size=0, mlp_ratio=4.0(norm1): LayerNorm((768,), eps=1e-05, elementwise_affine=True)(attn): WindowAttention(dim=768, window_size=(7, 7), num_heads=24(qkv): Linear(in_features=768, out_features=2304, bias=True)(attn_drop): Dropout(p=0.0, inplace=False)(proj): Linear(in_features=768, out_features=768, bias=True)(proj_drop): Dropout(p=0.0, inplace=False)(softmax): Softmax(dim=-1))(drop_path): DropPath(drop_prob=0.287)(norm2): LayerNorm((768,), eps=1e-05, elementwise_affine=True)(mlp): Mlp((fc1): Linear(in_features=768, out_features=3072, bias=True)(act): GELU()(fc2): Linear(in_features=3072, out_features=768, bias=True)(drop): Dropout(p=0.0, inplace=False)))(1): SwinTransformerBlock(dim=768, input_resolution=(7, 7), 
num_heads=24, window_size=7, shift_size=0, mlp_ratio=4.0(norm1): LayerNorm((768,), eps=1e-05, elementwise_affine=True)(attn): WindowAttention(dim=768, window_size=(7, 7), num_heads=24(qkv): Linear(in_features=768, out_features=2304, bias=True)(attn_drop): Dropout(p=0.0, inplace=False)(proj): Linear(in_features=768, out_features=768, bias=True)(proj_drop): Dropout(p=0.0, inplace=False)(softmax): Softmax(dim=-1))(drop_path): DropPath(drop_prob=0.300)(norm2): LayerNorm((768,), eps=1e-05, elementwise_affine=True)(mlp): Mlp((fc1): Linear(in_features=768, out_features=3072, bias=True)(act): GELU()(fc2): Linear(in_features=3072, out_features=768, bias=True)(drop): Dropout(p=0.0, inplace=False))))))(norm): LayerNorm((768,), eps=1e-05, elementwise_affine=True)(avgpool): AdaptiveAvgPool1d(output_size=1)(head): Linear(in_features=768, out_features=100, bias=True)
)
