Creating a WordPiece Vocabulary

Table of Contents

    • 1. Brief Introduction
    • 2. Workflow
      • 2.1 Preprocessing
      • 2.2 Counting
      • 2.3 Splitting
      • 2.4 Adding Subwords
    • 3. Code Implementation

This article walks through how to build a WordPiece vocabulary from a given text corpus; the code comes from Google;

1. Brief Introduction

The goal of WordPiece is to exploit the internal structure of words and take full advantage of subwords, striking a good balance between two aims: breaking long words into short pieces to make the text representation more flexible, and keeping the conversion from words to tokens efficient.

The former increases the vocabulary size, while the latter decreases it.

2. Workflow

2.1 Preprocessing

After reading in all of the text, the first step is to preprocess it;

  • For English, we can lowercase every character, strip accents (á becomes a), and then split on whitespace and punctuation (a minimal sketch follows this list);
  • For Chinese, we can convert traditional characters to simplified ones, but the only way to split is character by character. The main improvement available is to bring in an external tokenizer to segment the text before running the later steps, because a single Chinese character cannot be decomposed any further; one might think of decomposing by radicals, but how would their order be determined? The rest of this article focuses on English;
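For English, a minimal preprocessing sketch might look like the following (basic_preprocess is a hypothetical helper, not part of Google's code, which omits preprocessing here):

import re
import unicodedata

def basic_preprocess(text: str) -> list:
    """Lowercases, strips accents, and splits on whitespace and punctuation."""
    text = text.lower()
    # Strip accents: decompose each character, then drop combining marks (á -> a).
    text = unicodedata.normalize('NFD', text)
    text = ''.join(c for c in text if unicodedata.category(c) != 'Mn')
    # Words become tokens; each punctuation mark becomes its own token.
    return re.findall(r"\w+|[^\w\s]", text)

print(basic_preprocess('Héllo, World!'))  # ['hello', ',', 'world', '!']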

Once the text has been cut into word-sized pieces, we move on to the next step;

2.2 Counting

From the preprocessing stage we obtained word-sized pieces. To understand the overall distribution of words, we count each word and sort the counts in descending order. If the corpus is large, we can optimize at this point by filtering out words whose counts are too high or too low, as well as words that are too long.

Since wordpieces are essentially subwords, we must consider a word's basic units in order to convert words into subwords sensibly: if the vocabulary is missing the basic units that make up a word, then that word either cannot be represented at all, or its representation is incomplete and gets confused with other words.

So here we count how many times each individual character occurs across all the words. Again we can optimize: remove characters that occur too rarely, and, since words containing a removed character can no longer be represented, remove those words along with them (a sketch of this cascading filter follows).
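A minimal sketch of this cascading filter (filter_by_char_count is a hypothetical helper; section 3 shows the real get_allowed_chars and filter_input_words):

import collections

def filter_by_char_count(word_counts, min_char_count):
    """Drops rare characters, then drops words containing them."""
    char_counts = collections.Counter()
    for word, count in word_counts.items():
        for char in word:
            char_counts[char] += count  # Weight each char by word frequency.
    allowed = {char for char, n in char_counts.items() if n >= min_char_count}
    # A word survives only if every one of its characters survived.
    return {word: n for word, n in word_counts.items()
            if all(char in allowed for char in word)}

# filter_by_char_count({'hello': 10, 'héllo': 1}, min_char_count=5)
# keeps only 'hello', because 'é' appears just once in total.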

2.3 Splitting

In this step we split each word in the count dictionary. The procedure is as follows:

Place a head pointer at the start of the word and a tail pointer at its end, and look up the substring between the two pointers in the union of the token dictionary and the character dictionary. On a hit, move the head pointer up to the tail pointer and reset the tail pointer to the end of the word; on a miss, move the tail pointer one step toward the head pointer. Repeat until the two pointers meet: if they meet at the end of the word, return the accumulated split indices; if they meet anywhere else, the word cannot be segmented and we return None. This is a greedy longest-match-first scan.

The implementation is as follows:

def get_split_indices(word, curr_tokens, include_joiner_token, joiner):
    indices = []
    start = 0
    while start < len(word):
        end = len(word)
        while end > start:
            subtoken = word[start:end]
            # Subtoken includes the joiner token.
            if include_joiner_token and start > 0:
                subtoken = joiner + subtoken
            # If subtoken is part of vocab, 'end' is a valid start index.
            if subtoken in curr_tokens:
                indices.append(end)
                break
            end -= 1
        if end == start:
            return None
        start = end
    return indices


if __name__ == '__main__':
    res = get_split_indices('hello', ['h', '##e', '##llo', '##o'], True, '##')
    print(res)  # res: [1, 2, 5]

2.4 Adding Subwords

The splitting in the previous step is really a search for the largest matching pieces, but since it is a greedy algorithm, the result is not necessarily optimal. Having found the split indices of each word's largest pieces, we can locate strings of characters that frequently occur together much faster. The procedure: starting from each split index, take the subwords of successively increasing length and build a dictionary counting each such subword, incrementing by the word's count each time.

The number of subwords this enumeration produces is huge, so where necessary we can optimize it, for example by deleting subwords that are too long and subwords whose counts are too small. With that, the subword-adding step is complete;

Note, however, that the subwords are double counted here: whenever we count a long string, all of its shorter prefixes are necessarily counted along with it. So we iterate from long strings down to short ones, and once a long string has enough occurrences to be confirmed as a vocabulary element, we subtract its count from each of its shorter prefixes so that the double counting does no harm (a sketch follows).
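A compressed sketch of this counting-and-decrementing pass (count_candidates is hypothetical and ignores the joiner token for brevity; the full version is learn_with_thresh in section 3):

def count_candidates(word_counts, get_indices, thresh, max_len):
    """Counts subword candidates, then removes the double counting of prefixes."""
    # subtokens[k] maps each length-k candidate to its count.
    subtokens = [dict() for _ in range(max_len + 1)]
    for word, count in word_counts.items():  # Assumes len(word) <= max_len.
        start = 0
        for index in get_indices(word):
            # Every extension of this split point is a candidate,
            # incremented by the word's count as described above.
            for end in range(start + 1, len(word) + 1):
                sub = word[start:end]
                subtokens[len(sub)][sub] = subtokens[len(sub)].get(sub, 0) + count
            start = index
    next_tokens = {}
    # Walk from long candidates down to short ones.
    for length in range(max_len, 0, -1):
        for token, count in subtokens[length].items():
            if count >= thresh:
                next_tokens[token] = count
                # This token enters the vocab, so subtract its count from
                # every shorter prefix that was counted along with it.
                for i in range(1, length):
                    prefix = token[:i]
                    if prefix in subtokens[i]:
                        subtokens[i][prefix] -= count
    return next_tokens

# With word_counts = {'hello': 8}, get_indices = lambda w: range(1, len(w) + 1),
# thresh = 8 and max_len = 5, 'hello' itself is kept, and the counts of its
# prefixes 'h', 'he', ... are reduced to 0 so they are not selected again.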

At the same time, this vocabulary does not necessarily contain the full character dictionary, so we merge the two; the result is the WordPiece vocabulary.

3. Code Implementation

First we preprocess the words; that code is omitted here (section 2.1 gives a minimal sketch);

Here we pass in an iterable and use Counter from the collections library to count every word;

import collections

import numpy as np

def count_words(iterable) -> collections.Counter:
    """Converts an iterable of arrays of words into a `Counter` of word counts."""
    counts = collections.Counter()
    for words in iterable:
        # Convert a RaggedTensor to a flat/dense Tensor.
        words = getattr(words, 'flat_values', words)
        # Flatten any dense tensor
        words = np.reshape(words, [-1])
        counts.update(words)
    # Decode the words if necessary.
    example_word = next(iter(counts.keys()))
    if isinstance(example_word, bytes):
        counts = collections.Counter(
            {word.decode('utf-8'): count for word, count in counts.items()})
    return counts
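For example (a plain-Python call; this assumes the iterable yields lists of already-split words rather than TensorFlow tensors):

res = count_words([['hello', 'world'], ['hello']])
assert res == collections.Counter({'hello': 2, 'world': 1})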

Based on the current word counts and upper_thresh / lower_thresh, determine the search bounds for the frequency threshold;

def get_search_threshs(word_counts, upper_thresh, lower_thresh):
    """Clips the thresholds for binary search based on current word counts.

    The upper threshold parameter typically has a large default value that can
    result in many iterations of unnecessary search. Thus we clip the upper and
    lower bounds of search to the maximum and the minimum wordcount values.

    Args:
        word_counts: list of (string, int) tuples
        upper_thresh: int, upper threshold for binary search
        lower_thresh: int, lower threshold for binary search

    Returns:
        upper_search: int, clipped upper threshold for binary search
        lower_search: int, clipped lower threshold for binary search
    """
    counts = [count for _, count in word_counts]
    max_count = max(counts)
    min_count = min(counts)
    if upper_thresh is None:
        upper_search = max_count
    else:
        upper_search = max_count if max_count < upper_thresh else upper_thresh
    if lower_thresh is None:
        lower_search = min_count
    else:
        lower_search = min_count if min_count > lower_thresh else lower_thresh
    return upper_search, lower_search

Cap the number of unique single-character tokens;

def get_allowed_chars(all_counts, max_unique_chars):
    """Get the top max_unique_chars characters within our wordcounts.

    We want each character to be in the vocabulary so that we can keep splitting
    down to the character level if necessary. However, in order not to inflate
    our vocabulary with rare characters, we only keep the top max_unique_chars
    characters.

    Args:
        all_counts: list of (string, int) tuples
        max_unique_chars: int, maximum number of unique single-character tokens

    Returns:
        set of strings containing top max_unique_chars characters in all_counts
    """
    char_counts = collections.defaultdict(int)
    for word, count in all_counts:
        for char in word:
            char_counts[char] += count
    # Sort by count, then alphabetically.
    sorted_counts = sorted(sorted(char_counts.items(), key=lambda x: x[0]),
                           key=lambda x: x[1], reverse=True)
    allowed_chars = set()
    for i in range(min(len(sorted_counts), max_unique_chars)):
        allowed_chars.add(sorted_counts[i][0])
    return allowed_chars

Combining all_counts with allowed_chars, remove the words containing characters outside allowed_chars, and keep at most the max_input_tokens most frequent words;

def filter_input_words(all_counts, allowed_chars, max_input_tokens):
    """Filters out words with unallowed chars and limits words to max_input_tokens.

    Args:
        all_counts: list of (string, int) tuples
        allowed_chars: list of single-character strings
        max_input_tokens: int, maximum number of tokens accepted as input

    Returns:
        list of (string, int) tuples of filtered wordcounts
    """
    # Ensure that the input is sorted so that if `max_input_tokens` is reached
    # the least common tokens are dropped.
    all_counts = sorted(
        all_counts, key=lambda word_and_count: word_and_count[1], reverse=True)
    filtered_counts = []
    for word, count in all_counts:
        if (max_input_tokens != -1 and
                len(filtered_counts) >= max_input_tokens):
            break
        has_unallowed_chars = False
        for char in word:
            if char not in allowed_chars:
                has_unallowed_chars = True
                break
        if has_unallowed_chars:
            continue
        filtered_counts.append((word, count))
    return filtered_counts

Get the split indices; curr_tokens must be able to segment the word (otherwise None is returned);

def get_split_indices(word, curr_tokens, include_joiner_token, joiner):
    """Gets indices for valid substrings of word, for iterations > 0.

    For iterations > 0, rather than considering every possible substring, we only
    want to consider starting points corresponding to the start of wordpieces in
    the current vocabulary.

    Args:
        word: string we want to split into substrings
        curr_tokens: string to int dict of tokens in vocab (from previous iteration)
        include_joiner_token: bool whether to include joiner token
        joiner: string used to indicate suffixes

    Returns:
        list of ints containing valid starting indices for word
    """
    indices = []
    start = 0
    while start < len(word):
        end = len(word)
        while end > start:
            subtoken = word[start:end]
            # Subtoken includes the joiner token.
            if include_joiner_token and start > 0:
                subtoken = joiner + subtoken
            # If subtoken is part of vocab, 'end' is a valid start index.
            if subtoken in curr_tokens:
                indices.append(end)
                break
            end -= 1
        if end == start:
            return None
        start = end
    return indices

Now the final step;

import collections
from typing import List, Optional

Params = collections.namedtuple('Params', [
    'upper_thresh', 'lower_thresh', 'num_iterations', 'max_input_tokens',
    'max_token_length', 'max_unique_chars', 'vocab_size', 'slack_ratio',
    'include_joiner_token', 'joiner', 'reserved_tokens'
])


def extract_char_tokens(word_counts):
    """Extracts all single-character tokens from word_counts.

    Args:
        word_counts: list of (string, int) tuples

    Returns:
        set of single-character strings contained within word_counts
    """
    seen_chars = set()
    for word, _ in word_counts:
        for char in word:
            seen_chars.add(char)
    return seen_chars


def ensure_all_tokens_exist(input_tokens, output_tokens, include_joiner_token,
                            joiner):
    """Adds all tokens in input_tokens to output_tokens if not already present.

    Args:
        input_tokens: set of strings (tokens) we want to include
        output_tokens: string to int dictionary mapping token to count
        include_joiner_token: bool whether to include joiner token
        joiner: string used to indicate suffixes

    Returns:
        string to int dictionary with all tokens in input_tokens included
    """
    for token in input_tokens:
        if token not in output_tokens:
            output_tokens[token] = 1
        if include_joiner_token:
            joined_token = joiner + token
            if joined_token not in output_tokens:
                output_tokens[joined_token] = 1
    return output_tokens


# get_search_threshs is the same function shown earlier, so it is not repeated.


def get_input_words(word_counts, reserved_tokens, max_token_length):
    """Filters out words that are longer than max_token_length or are reserved.

    Args:
        word_counts: list of (string, int) tuples
        reserved_tokens: list of strings
        max_token_length: int, maximum length of a token

    Returns:
        list of (string, int) tuples of filtered wordcounts
    """
    all_counts = []
    for word, count in word_counts:
        if len(word) > max_token_length or word in reserved_tokens:
            continue
        all_counts.append((word, count))
    return all_counts


def generate_final_vocabulary(reserved_tokens, char_tokens, curr_tokens):
    """Generates final vocab given reserved, single-character, and current tokens.

    Args:
        reserved_tokens: list of strings (tokens) that must be included in vocab
        char_tokens: set of single-character strings
        curr_tokens: string to int dict mapping token to count

    Returns:
        list of strings representing final vocabulary
    """
    sorted_char_tokens = sorted(list(char_tokens))
    vocab_char_arrays = []
    vocab_char_arrays.extend(reserved_tokens)
    vocab_char_arrays.extend(sorted_char_tokens)
    # Sort by count, then alphabetically.
    sorted_tokens = sorted(sorted(curr_tokens.items(), key=lambda x: x[0]),
                           key=lambda x: x[1], reverse=True)
    for token, _ in sorted_tokens:
        vocab_char_arrays.append(token)
    seen_tokens = set()
    # Adding unique tokens to list to maintain sorted order.
    vocab_words = []
    for word in vocab_char_arrays:
        if word in seen_tokens:
            continue
        seen_tokens.add(word)
        vocab_words.append(word)
    return vocab_words


def learn_with_thresh(word_counts, thresh, params):
    """Wordpiece learning algorithm to produce a vocab given frequency threshold.

    Args:
        word_counts: list of (string, int) tuples
        thresh: int, frequency threshold for a token to be included in the vocab
        params: Params namedtuple, parameters for learning

    Returns:
        list of strings, vocabulary generated for the given thresh
    """
    # Set of single-character tokens.
    char_tokens = extract_char_tokens(word_counts)
    curr_tokens = ensure_all_tokens_exist(char_tokens, {},
                                          params.include_joiner_token,
                                          params.joiner)
    for iteration in range(params.num_iterations):
        subtokens = [dict() for _ in range(params.max_token_length + 1)]
        # Populate array with counts of each subtoken.
        for word, count in word_counts:
            if iteration == 0:
                split_indices = range(1, len(word) + 1)
            else:
                split_indices = get_split_indices(word, curr_tokens,
                                                  params.include_joiner_token,
                                                  params.joiner)
                if not split_indices:
                    continue
            start = 0
            for index in split_indices:
                for end in range(start + 1, len(word) + 1):
                    subtoken = word[start:end]
                    length = len(subtoken)
                    if params.include_joiner_token and start > 0:
                        subtoken = params.joiner + subtoken
                    if subtoken in subtokens[length]:
                        # Subtoken exists, increment count.
                        subtokens[length][subtoken] += count
                    else:
                        # New subtoken, add to dict.
                        subtokens[length][subtoken] = count
                start = index

        next_tokens = {}
        # Get all tokens that have a count above the threshold.
        for length in range(params.max_token_length, 0, -1):
            for token, count in subtokens[length].items():
                if count >= thresh:
                    next_tokens[token] = count
                    # Decrement the count of all prefixes.
                    if len(token) > length:  # This token includes the joiner.
                        joiner_len = len(params.joiner)
                        for i in range(1 + joiner_len, length + joiner_len):
                            prefix = token[0:i]
                            if prefix in subtokens[i - joiner_len]:
                                subtokens[i - joiner_len][prefix] -= count
                    else:
                        for i in range(1, length):
                            prefix = token[0:i]
                            if prefix in subtokens[i]:
                                subtokens[i][prefix] -= count

        # Add back single-character tokens.
        curr_tokens = ensure_all_tokens_exist(char_tokens, next_tokens,
                                              params.include_joiner_token,
                                              params.joiner)

    vocab_words = generate_final_vocabulary(params.reserved_tokens, char_tokens,
                                            curr_tokens)
    return vocab_words


def learn_binary_search(word_counts, lower, upper, params):
    """Performs binary search to find wordcount frequency threshold.

    Given upper and lower bounds and a list of (word, count) tuples, performs
    binary search to find the threshold closest to producing a vocabulary
    of size vocab_size.

    Args:
        word_counts: list of (string, int) tuples
        lower: int, lower bound for binary search
        upper: int, upper bound for binary search
        params: Params namedtuple, parameters for learning

    Returns:
        list of strings, vocab that is closest to target vocab_size
    """
    thresh = (upper + lower) // 2
    current_vocab = learn_with_thresh(word_counts, thresh, params)
    current_vocab_size = len(current_vocab)

    # Allow count to be within k% of the target count, where k is slack ratio.
    slack_count = params.slack_ratio * params.vocab_size
    if slack_count < 0:
        slack_count = 0

    is_within_slack = (current_vocab_size <= params.vocab_size) and (
        params.vocab_size - current_vocab_size <= slack_count)

    # We've created a vocab within our goal range (or, ran out of search space).
    if is_within_slack or lower >= upper or thresh <= 1:
        return current_vocab

    current_vocab = None
    if current_vocab_size > params.vocab_size:
        return learn_binary_search(word_counts, thresh + 1, upper, params)
    else:
        return learn_binary_search(word_counts, lower, thresh - 1, params)

Putting it all together:

def learn(word_counts,
          vocab_size: int,
          reserved_tokens: List[str],
          upper_thresh: Optional[int] = int(1e7),
          lower_thresh: Optional[int] = 10,
          num_iterations: int = 4,
          max_input_tokens: Optional[int] = int(5e6),
          max_token_length: int = 50,
          max_unique_chars: int = 1000,
          slack_ratio: float = 0.05,
          include_joiner_token: bool = True,
          joiner: str = '##') -> List[str]:
    """Takes in wordcounts and returns a wordpiece vocabulary.

    Args:
        word_counts: (word, count) pairs as a dictionary, or list of tuples.
        vocab_size: The target vocabulary size. This is the maximum size.
        reserved_tokens: A list of tokens that must be included in the vocabulary.
        upper_thresh: Initial upper bound on the token frequency threshold.
        lower_thresh: Initial lower bound on the token frequency threshold.
        num_iterations: Number of iterations to run.
        max_input_tokens: The maximum number of words in the initial vocabulary.
            The words with the lowest counts are discarded. Use `None` or `-1`
            for "no maximum".
        max_token_length: The maximum token length. Counts for longer words are
            discarded.
        max_unique_chars: The maximum alphabet size. This prevents rare characters
            from inflating the vocabulary. Counts for words containing characters
            outside of the selected alphabet are discarded.
        slack_ratio: The maximum deviation acceptable from `vocab_size` for an
            acceptable vocabulary. The acceptable range of vocabulary sizes is
            from `vocab_size*(1-slack_ratio)` to `vocab_size`.
        include_joiner_token: If true, include the `joiner` token in the output
            vocabulary.
        joiner: The prefix to include on suffix tokens in the output vocabulary.
            Usually "##". For example 'places' could be tokenized as `['place',
            '##s']`.

    Returns:
        list of strings, the final vocabulary.
    """
    if isinstance(word_counts, dict):
        word_counts = word_counts.items()
    params = Params(upper_thresh, lower_thresh, num_iterations, max_input_tokens,
                    max_token_length, max_unique_chars, vocab_size, slack_ratio,
                    include_joiner_token, joiner, reserved_tokens)
    upper_search, lower_search = get_search_threshs(word_counts,
                                                    params.upper_thresh,
                                                    params.lower_thresh)
    all_counts = get_input_words(word_counts, params.reserved_tokens,
                                 params.max_token_length)
    allowed_chars = get_allowed_chars(all_counts, params.max_unique_chars)
    filtered_counts = filter_input_words(all_counts, allowed_chars,
                                         params.max_input_tokens)
    vocab = learn_binary_search(filtered_counts, lower_search, upper_search,
                                params)
    return vocab
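Finally, a minimal end-to-end sketch (the corpus and parameter values here are made up for illustration; with such a tiny corpus the learned vocabulary stays essentially character-level):

corpus = [['the', 'quick', 'brown', 'fox'], ['the', 'lazy', 'dog']]
word_counts = count_words(corpus)
vocab = learn(word_counts,
              vocab_size=100,
              reserved_tokens=['[PAD]', '[UNK]', '[CLS]', '[SEP]'])
# vocab lists the reserved tokens first, then the single characters and
# their '##'-joined forms; a realistically sized corpus would also yield
# frequent multi-character wordpieces.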
