A Comprehensive Overview of NLP Libraries for Python

1. Introduction

Python has rich support for natural language processing libraries. From text processing (tokenizing a text and determining its lemmas), through syntactic analysis (parsing a text and assigning syntactic roles), to semantic processing such as named entity recognition, sentiment analysis, and document classification, everything is provided by at least one library. So where do you start?

The goal of this article is to give an overview of relevant Python libraries for each core NLP task. The libraries are introduced with a brief explanation and a concrete code snippet for the NLP task. Continuing my introductory article about NLP, this article covers only libraries for the core NLP tasks of text processing, syntactic and semantic analysis, and document semantics. In addition, in the NLP utilities category, libraries for corpus management and for datasets are presented.

The following libraries are covered:

  • NLTK
  • TextBlob
  • Spacy
  • SciKit Learn
  • Gensim

This article originally appeared on the blog admantium.com.

2. Core NLP Tasks

2.1 Text Processing

Tasks: tokenization, lemmatization, stemming

The NLTK library provides a complete toolkit for text processing, including tokenization, stemming, and lemmatization.


from nltk.tokenize import sent_tokenize, word_tokenize

paragraph = '''Artificial intelligence was founded as an academic discipline in 1956, and in the years since it has experienced several waves of optimism, followed by disappointment and the loss of funding (known as an "AI winter"), followed by new approaches, success, and renewed funding. AI research has tried and discarded many different approaches, including simulating the brain, modeling human problem solving, formal logic, large databases of knowledge, and imitating animal behavior. In the first decades of the 21st century, highly mathematical and statistical machine learning has dominated the field, and this technique has proved highly successful, helping to solve many challenging problems throughout industry and academia.'''

sentences = []
for sent in sent_tokenize(paragraph):
    sentences.append(word_tokenize(sent))

sentences[0]
# ['Artificial', 'intelligence', 'was', 'founded', 'as', 'an', 'academic', 'discipline'
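
The stemming and lemmatization mentioned above are covered by NLTK as well. Here is a minimal sketch of both, assuming the WordNet data has not been downloaded yet:

from nltk import download as nltk_download
from nltk.stem import PorterStemmer, WordNetLemmatizer

nltk_download('wordnet')  # one-time download of the WordNet data

stemmer = PorterStemmer()
lemmatizer = WordNetLemmatizer()

# stemming strips suffixes heuristically; lemmatization maps to dictionary forms
print(stemmer.stem('running'))
# run
print(lemmatizer.lemmatize('wolves'))
# wolf
print(lemmatizer.lemmatize('running', pos='v'))
# run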

With TextBlob, the same text processing tasks are supported. It differs from NLTK in its higher-level semantic results and its easy-to-use data structures: parsing a sentence already produces rich semantic information.

from textblob import TextBlob

text = '''
Artificial intelligence was founded as an academic discipline in 1956, and in the years since it has experienced several waves of optimism, followed by disappointment and the loss of funding (known as an "AI winter"), followed by new approaches, success, and renewed funding. AI research has tried and discarded many different approaches, including simulating the brain, modeling human problem solving, formal logic, large databases of knowledge, and imitating animal behavior. In the first decades of the 21st century, highly mathematical and statistical machine learning has dominated the field, and this technique has proved highly successful, helping to solve many challenging problems throughout industry and academia.
'''

blob = TextBlob(text)
blob.ngrams()
#[WordList(['Artificial', 'intelligence', 'was']),
# WordList(['intelligence', 'was', 'founded']),
# WordList(['was', 'founded', 'as']),

blob.tokens
# WordList(['Artificial', 'intelligence', 'was', 'founded', 'as', 'an', 'academic', 'discipline', 'in', '1956', ',', 'and', 'in',

With the modern NLP library Spacy, text processing is only the first step in a rich pipeline of mostly semantic tasks. Unlike the other libraries, it requires loading a model for the target language first. Recent models are not heuristics but artificial neural networks, especially transformers, which provide richer abstractions and can be combined better with other models.

import spacy
nlp = spacy.load('en_core_web_lg')

text = '''
Artificial intelligence was founded as an academic discipline in 1956, and in the years since it has experienced several waves of optimism, followed by disappointment and the loss of funding (known as an "AI winter"), followed by new approaches, success, and renewed funding. AI research has tried and discarded many different approaches, including simulating the brain, modeling human problem solving, formal logic, large databases of knowledge, and imitating animal behavior. In the first decades of the 21st century, highly mathematical and statistical machine learning has dominated the field, and this technique has proved highly successful, helping to solve many challenging problems throughout industry and academia.
'''

doc = nlp(text)
tokens = [token for token in doc]

print(tokens)
# [Artificial, intelligence, was, founded, as, an, academic, discipline
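
Lemmas are computed by the same pipeline run, so no extra call is needed. A short sketch reusing the doc object from above (the exact lemma strings depend on the model version):

print([token.lemma_ for token in doc if not token.is_space][:8])
# e.g. ['artificial', 'intelligence', 'be', 'found', 'as', 'an', 'academic', 'discipline']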

3. Syntactic Analysis

Tasks: parsing, part-of-speech tagging, noun phrase extraction

Starting with NLTK, all syntactic tasks are supported. Their output is delivered as Python-native data structures, and it can always be displayed as simple text output.

from nltk.tokenize import word_tokenize
from nltk import pos_tag, RegexpParser

text = '''
Artificial intelligence was founded as an academic discipline in 1956, and in the years since it has experienced several waves of optimism, followed by disappointment and the loss of funding (known as an "AI winter"), followed by new approaches, success, and renewed funding. AI research has tried and discarded many different approaches, including simulating the brain, modeling human problem solving, formal logic, large databases of knowledge, and imitating animal behavior. In the first decades of the 21st century, highly mathematical and statistical machine learning has dominated the field, and this technique has proved highly successful, helping to solve many challenging problems throughout industry and academia.
'''

pos_tag(word_tokenize(text))
# [('Artificial', 'JJ'),
#  ('intelligence', 'NN'),
#  ('was', 'VBD'),
#  ('founded', 'VBN'),
#  ('as', 'IN'),
#  ('an', 'DT'),
#  ('academic', 'JJ'),
#  ('discipline', 'NN'),
# noun chunk parser
# source: https://www.nltk.org/book_1ed/ch07.html

grammar = "NP: {<DT>?<JJ>*<NN>}"
parser = RegexpParser(grammar)
parser.parse(pos_tag(word_tokenize(text)))
#(S
#  (NP Artificial/JJ intelligence/NN)
#  was/VBD
#  founded/VBN
#  as/IN
#  (NP an/DT academic/JJ discipline/NN)
#  in/IN
#  1956/CD

TextBlob provides POS tags right away when it processes a text. With another method call, a parse tree containing rich syntactic information is created.

from textblob import TextBlob

text = '''
Artificial intelligence was founded as an academic discipline in 1956, and in the years since it has experienced several waves of optimism, followed by disappointment and the loss of funding (known as an "AI winter"), followed by new approaches, success, and renewed funding. AI research has tried and discarded many different approaches, including simulating the brain, modeling human problem solving, formal logic, large databases of knowledge, and imitating animal behavior. In the first decades of the 21st century, highly mathematical and statistical machine learning has dominated the field, and this technique has proved highly successful, helping to solve many challenging problems throughout industry and academia.
'''
blob = TextBlob(text)

blob.tags
#[('Artificial', 'JJ'),
# ('intelligence', 'NN'),
# ('was', 'VBD'),
# ('founded', 'VBN'),

blob.parse()
# Artificial/JJ/B-NP/O
# intelligence/NN/I-NP/O
# was/VBD/B-VP/O
# founded/VBN/I-VP/O
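
Noun phrase extraction, the third task named above, is a one-liner in TextBlob. A minimal sketch, assuming the required corpora were fetched once via python -m textblob.download_corpora:

blob.noun_phrases
# e.g. WordList(['artificial intelligence', 'academic discipline', ...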

The Spacy library uses transformer neural networks to support its syntactic tasks.

import spacy
nlp = spacy.load('en_core_web_lg')

for token in nlp(text):
    print(f'{token.text:<20}{token.pos_:>5}{token.tag_:>5}')

#Artificial            ADJ   JJ
#intelligence         NOUN   NN
#was                   AUX  VBD
#founded              VERB  VBN
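
Noun phrases are exposed in Spacy as noun_chunks on a processed document. A minimal sketch; the output will resemble the following, with the exact chunks depending on the model version:

doc = nlp(text)
for chunk in list(doc.noun_chunks)[:3]:
    print(chunk.text)

# Artificial intelligence
# an academic discipline
# the years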

4. Semantic Analysis

Tasks: named entity recognition, word-sense disambiguation, semantic role labeling

Semantic analysis is the area where NLP approaches start to differ. With NLTK, the generated syntactic information is looked up in dictionaries to identify, for example, named entities. Consequently, entities in newer texts may not be recognized.

from nltk import download as nltk_download
from nltk.tokenize import word_tokenize
from nltk import pos_tag, ne_chunk

nltk_download('maxent_ne_chunker')
nltk_download('words')
text = '''
As of 2016, only three nations have flown crewed spacecraft: USSR/Russia, USA, and China. The first crewed spacecraft was Vostok 1, which carried Soviet cosmonaut Yuri Gagarin into space in 1961, and completed a full Earth orbit. There were five other crewed missions which used a Vostok spacecraft. The second crewed spacecraft was named Freedom 7, and it performed a sub-orbital spaceflight in 1961 carrying American astronaut Alan Shepard to an altitude of just over 187 kilometers (116 mi). There were five other crewed missions using Mercury spacecraft.
'''

print(ne_chunk(pos_tag(word_tokenize(text))))
# (S
#   As/IN
#   of/IN
#   [...]
#   (ORGANIZATION USA/NNP)
#   [...]
#   which/WDT
#   carried/VBD
#   (GPE Soviet/JJ)
#   cosmonaut/NN
#   (PERSON Yuri/NNP Gagarin/NNP)
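
For word-sense disambiguation, the second task named above, NLTK ships the classic Lesk algorithm, which again works by dictionary lookup, here against WordNet. A minimal sketch with a made-up example sentence; the chosen synset depends on the lexical overlap and may vary:

from nltk.wsd import lesk
from nltk.tokenize import word_tokenize

sentence = 'The spacecraft completed a full orbit around the earth'
sense = lesk(word_tokenize(sentence), 'orbit')
print(sense, '-', sense.definition())
# e.g. Synset('orbit.n.01') - the (usually elliptical) path described by one celestial body in its revolution about another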

The transformer models used by the Spacy library contain an implicit "timestamp": the time of their training. This determines which texts went into the model, and therefore what the model is able to recognize.

import spacy
nlp = spacy.load('en_core_web_lg')

text = '''
As of 2016, only three nations have flown crewed spacecraft: USSR/Russia, USA, and China. The first crewed spacecraft was Vostok 1, which carried Soviet cosmonaut Yuri Gagarin into space in 1961, and completed a full Earth orbit. There were five other crewed missions which used a Vostok spacecraft. The second crewed spacecraft was named Freedom 7, and it performed a sub-orbital spaceflight in 1961 carrying American astronaut Alan Shepard to an altitude of just over 187 kilometers (116 mi). There were five other crewed missions using Mercury spacecraft.
'''
doc = nlp(text)
for ent in doc.ents:
    print(f'{ent.text:<25}{ent.label_:<15}')
# 2016                   DATE
# only three             CARDINAL
# USSR                   GPE
# Russia                 GPE
# USA                    GPE
# China                  GPE
# first                  ORDINAL
# Vostok 1               PRODUCT
# Soviet                 NORP
# Yuri Gagarin           PERSON

5. Document Semantics

Tasks: text classification, topic modeling, sentiment analysis, toxicity detection

Sentiment analysis is another task where NLP approaches differ: looking up word meanings in a lexicon versus learned word similarities encoded in word or document vectors.

TextBlob comes with built-in sentiment analysis that returns the polarity (the overall positive or negative connotation) and the subjectivity (the degree of personal opinion) of a text.

from textblob import TextBlob

text = '''
Artificial intelligence was founded as an academic discipline in 1956, and in the years since it has experienced several waves of optimism, followed by disappointment and the loss of funding (known as an "AI winter"), followed by new approaches, success, and renewed funding. AI research has tried and discarded many different approaches, including simulating the brain, modeling human problem solving, formal logic, large databases of knowledge, and imitating animal behavior. In the first decades of the 21st century, highly mathematical and statistical machine learning has dominated the field, and this technique has proved highly successful, helping to solve many challenging problems throughout industry and academia.
'''

blob = TextBlob(text)
blob.sentiment
#Sentiment(polarity=0.16180290297937355, subjectivity=0.42155589508530683)
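
NLTK also follows the lexicon approach with its built-in VADER analyzer. A minimal sketch, assuming the vader_lexicon data is downloaded once; the score values shown are illustrative:

from nltk import download as nltk_download
from nltk.sentiment import SentimentIntensityAnalyzer

nltk_download('vader_lexicon')  # one-time lexicon download

sia = SentimentIntensityAnalyzer()
print(sia.polarity_scores('This technique has proved highly successful.'))
# e.g. {'neg': 0.0, 'neu': 0.54, 'pos': 0.46, 'compound': 0.61}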

Spacy does not include text classification out of the box, but it can be added as a separate pipeline step. The code below is rather long and involves several Spacy-internal objects and data structures; a future article will explain it in more detail.

import spacy
from spacy.tokens import DocBin

## train single label categorization from multi-label dataset
## (dataset and get_text stem from the author's corpus setup)
def convert_single_label(dataset, filename):
    db = DocBin()
    nlp = spacy.load('en_core_web_lg')
    for index, fileid in enumerate(dataset):
        cat_dict = {cat: 0 for cat in dataset.categories()}
        cat_dict[dataset.categories(fileid).pop()] = 1
        doc = nlp(get_text(fileid))
        doc.cats = cat_dict
        db.add(doc)
    db.to_disk(filename)

## load trained model and apply to text
nlp = spacy.load('textcat_multilabel_model/model-best')
text = dataset.raw(42)
doc = nlp(text)
estimated_cats = sorted(doc.cats.items(), key=lambda i: float(i[1]), reverse=True)

print(dataset.categories(42))
# ['orange']

print(estimated_cats)
# [('nzdlr', 0.998894989490509), ('money-supply', 0.9969857335090637), ... ('orange', 0.7344251871109009),

SciKit Learn is a general-purpose machine learning library that provides many clustering and classification algorithms. It operates on numeric input only, so the text needs to be vectorized first, for example with Gensim's pre-trained word vectors or with the built-in feature vectorizers. Just as an example, here is a snippet that converts raw text into word vectors and then applies the KMeans clustering algorithm to them.

from sklearn.feature_extraction import DictVectorizer
from sklearn.cluster import KMeans

# dataset['train'] is assumed to hold one feature dictionary per document
vectorizer = DictVectorizer(sparse=False)
x_train = vectorizer.fit_transform(dataset['train'])
kmeans = KMeans(n_clusters=8, random_state=0, n_init="auto").fit(x_train)

print(kmeans.labels_.shape)
# (8551, )

print(kmeans.labels_)
# [4 4 4 ... 6 6 6]
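
Because the dataset variable above comes from the author's larger project, here is a self-contained sketch with made-up toy documents, using the built-in TfidfVectorizer instead of the DictVectorizer:

from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.cluster import KMeans

# four toy documents, two per topic
docs = [
    'artificial intelligence and machine learning',
    'machine learning dominates artificial intelligence research',
    'the spacecraft carried the astronaut into orbit',
    'the crewed mission completed a full earth orbit',
]
x_train = TfidfVectorizer().fit_transform(docs)
kmeans = KMeans(n_clusters=2, random_state=0, n_init='auto').fit(x_train)

print(kmeans.labels_)
# e.g. [0 0 1 1] since the two AI documents and the two space documents cluster together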

Finally, Gensim is a library specialized in topic classification of large-scale corpora. The following snippet loads a built-in dataset, vectorizes the tokens of each document, and runs the LDA clustering algorithm. When running on a CPU only, this can take up to 15 minutes.

# source: https://radimrehurek.com/gensim/auto_examples/tutorials/run_lda.html,
# https://radimrehurek.com/gensim/auto_examples/howtos/run_downloader_api.html
import logging
import gensim.downloader as api
from gensim.corpora import Dictionary
from gensim.models import LdaModel

logging.basicConfig(format='%(asctime)s : %(levelname)s : %(message)s', level=logging.INFO)

docs = api.load('text8')
dictionary = Dictionary(docs)
corpus = [dictionary.doc2bow(doc) for doc in docs]

_ = dictionary[0]
id2word = dictionary.id2token

# Define and train the model
model = LdaModel(
    corpus=corpus,
    id2word=id2word,
    chunksize=2000,
    alpha='auto',
    eta='auto',
    iterations=400,
    num_topics=10,
    passes=20,
    eval_every=None
)

print(model.num_topics)
# 10

print(model.top_topics(corpus)[6])
#  ([(4.201401e-06, 'done'),
#    (4.1998064e-06, 'zero'),
#    (4.1478743e-06, 'eight'),
#    (4.1257395e-06, 'one'),
#    (4.1166854e-06, 'two'),
#    (4.085097e-06, 'six'),
#    (4.080696e-06, 'language'),
#    (4.050306e-06, 'system'),
#    (4.041121e-06, 'network'),
#    (4.0385708e-06, 'internet'),
#    (4.0379923e-06, 'protocol'),
#    (4.035399e-06, 'open'),
#    (4.033435e-06, 'three'),
#    (4.0334166e-06, 'interface'),
#    (4.030141e-06, 'four'),
#    (4.0283044e-06, 'seven'),
#    (4.0163245e-06, 'no'),
#    (4.0149207e-06, 'i'),
#    (4.0072555e-06, 'object'),
#    (4.007036e-06, 'programming')],

6. Utilities

6.1 Corpus Management

NLTK provides corpus readers for plain text, Markdown, JSON, and even Twitter feeds. A reader is created by passing a file path, and it then offers basic statistics as well as iterators over all files found.

from nltk.corpus.reader.plaintext import PlaintextCorpusReader

corpus = PlaintextCorpusReader('wikipedia_articles', r'.*\.txt')

print(corpus.fileids())
# ['AI_alignment.txt', 'AI_safety.txt', 'Artificial_intelligence.txt', 'Machine_learning.txt', ...]

print(len(corpus.sents()))
# 47289

print(len(corpus.words()))
# 1146248

Gensim processes text files into a word-vector representation of each document, which can then be used for its main use case, topic classification. The documents need to be processed by an iterator that wraps directory traversal, and the corpus is then built as a collection of word vectors. This corpus representation, however, is hard to externalize and to reuse with other libraries. The following snippet is an excerpt from above: it loads the dataset included in Gensim and creates a word-vector-based representation.

import gensim.downloader as api
from gensim.corpora import Dictionary

docs = api.load('text8')
dictionary = Dictionary(docs)
corpus = [dictionary.doc2bow(doc) for doc in docs]

print('Number of unique tokens: %d' % len(dictionary))
# Number of unique tokens: 253854

print('Number of documents: %d' % len(corpus))
# Number of documents: 1701

7. Datasets

NLTK provides several ready-to-use datasets, for example Reuters news excerpts, European Parliament proceedings, and open books from the Gutenberg collection. See the complete list of datasets and models.

from nltk.corpus import reuters

print(len(reuters.fileids()))
# 10788

print(reuters.categories()[:43])
# ['acq', 'alum', 'barley', 'bop', 'carcass', 'castor-oil', 'cocoa', 'coconut', 'coconut-oil', 'coffee', 'copper', 'copra-cake', 'corn', 'cotton', 'cotton-oil', 'cpi', 'cpu', 'crude', 'dfl', 'dlr', 'dmk', 'earn', 'fuel', 'gas', 'gnp', 'gold', 'grain', 'groundnut', 'groundnut-oil', 'heat', 'hog', 'housing', 'income', 'instal-debt', 'interest', 'ipi', 'iron-steel', 'jet', 'jobs', 'l-cattle', 'lead', 'lei', 'lin-oil']

SciKit Learn includes datasets from newsgroups, real estate, and even IT intrusion detection; see the complete list. Here is a quick example for the newsgroups dataset.

from sklearn.datasets import fetch_20newsgroups
dataset = fetch_20newsgroups()
dataset.data[1]
# "From: guykuo@carson.u.washington.edu (Guy Kuo)\nSubject: SI Clock Poll - Final Call\nSummary: Final call for SI clock reports\nKeywords: SI,acceleration,clock,upgrade\nArticle-I.D.: shelley.1qvfo9INNc3s\nOrganization: University of Washington\nLines: 11\nNNTP-Posting-Host: carson.u.washington.edu\n\nA fair number of brave souls who upgraded their SI clock oscillator have\nshared their experiences for this poll.

8. Conclusion

For NLP projects in Python, an abundance of library choices exists. To help you get started, this article provided a task-driven overview of NLP with compact library explanations and code snippets. Starting with text processing, you saw how to create tokens and lemmas from a text. Continuing with syntactic analysis, you learned how to generate part-of-speech tags and the grammatical structure of sentences. Arriving at semantics, recognizing named entities in a text, as well as text sentiment, can also be solved in a few lines of code. For the additional tasks of corpus management and accessing pre-structured datasets, you also saw library examples. In summary, this article should give you a good start into your next NLP project when working on core NLP tasks.

The evolution of NLP methods toward neural networks, and in particular large language models, has triggered the creation and adaptation of a whole new set of libraries, starting with text vectorization, neural network definition and training, and the application of language generation models, and more. These models cover all advanced NLP tasks and will be covered in future articles.

Sebastian
