Prompt Engineering Guide [Resource Compilation]: Latest Prompt-Engineering Papers, Recommended Tools and Libraries, Datasets, and Further Reading


1. Paper Collection

The following are the latest papers on prompt engineering, sorted by release date. The list is updated daily as new papers come in, and summaries of these papers are incorporated into the guides above every week.

1.1 Overviews

  • Augmented Language Models: a Survey (Feb 2023)
  • A Survey for In-context Learning (Dec 2022)
  • Towards Reasoning in Large Language Models: A Survey (Dec 2022)
  • Reasoning with Language Model Prompting: A Survey (Dec 2022)
  • Emergent Abilities of Large Language Models (Jun 2022)
  • A Taxonomy of Prompt Modifiers for Text-To-Image Generation (Apr 2022)
  • Pre-train, Prompt, and Predict: A Systematic Survey of Prompting Methods in Natural Language Processing (Jul 2021)

1.2 Approaches

  • Model-tuning Via Prompts Makes NLP Models Adversarially Robust (Mar 2023)
  • Structure Pretraining and Prompt Tuning for Knowledge Graph Transfer (Mar 2023)
  • CoTEVer: Chain of Thought Prompting Annotation Toolkit for Explanation Verification (Mar 2023)
  • Larger language models do in-context learning differently (Mar 2023)
  • OpenICL: An Open-Source Framework for In-context Learning (Mar 2023)
  • Dynamic Prompting: A Unified Framework for Prompt Tuning (Mar 2023)
  • Multitask Prompt Tuning Enables Parameter-Efficient Transfer Learning (Mar 2023)
  • Effectiveness of Data Augmentation for Prefix Tuning with Limited Data (Mar 2023)
  • Mixture of Soft Prompts for Controllable Data Generation (Mar 2023)
  • Prompt, Generate, then Cache: Cascade of Foundation Models makes Strong Few-shot Learners (Mar 2023)
  • How Robust is GPT-3.5 to Predecessors? A Comprehensive Study on Language Understanding Tasks (Mar 2023)
  • Can ChatGPT Understand Too? A Comparative Study on ChatGPT and Fine-tuned BERT (Feb 2023)
  • EvoPrompting: Language Models for Code-Level Neural Architecture Search (Feb 2023)
  • In-Context Instruction Learning (Feb 2023)
  • Chain of Hindsight Aligns Language Models with Feedback (Feb 2023)
  • Language Is Not All You Need: Aligning Perception with Language Models (Feb 2023)
  • Automatic Prompt Augmentation and Selection with Chain-of-Thought from Labeled Data (Feb 2023)
  • Active Prompting with Chain-of-Thought for Large Language Models (Feb 2023)
  • More than you’ve asked for: A Comprehensive Analysis of Novel Prompt Injection Threats to Application-Integrated Large Language Models (Feb 2023)
  • A Prompt Pattern Catalog to Enhance Prompt Engineering with ChatGPT (Feb 2023)
  • Guiding Large Language Models via Directional Stimulus Prompting (Feb 2023)
  • How Does In-Context Learning Help Prompt Tuning? (Feb 2023)
  • Scalable Prompt Generation for Semi-supervised Learning with Language Models (Feb 2023)
  • Bounding the Capabilities of Large Language Models in Open Text Generation with Prompt Constraints (Feb 2023)
  • À-la-carte Prompt Tuning (APT): Combining Distinct Data Via Composable Prompting (Feb 2023)
  • GraphPrompt: Unifying Pre-Training and Downstream Tasks for Graph Neural Networks (Feb 2023)
  • The Capacity for Moral Self-Correction in Large Language Models (Feb 2023)
  • SwitchPrompt: Learning Domain-Specific Gated Soft Prompts for Classification in Low-Resource Domains (Feb 2023)
  • Evaluating the Robustness of Discrete Prompts (Feb 2023)
  • Compositional Exemplars for In-context Learning (Feb 2023)
  • Hard Prompts Made Easy: Gradient-Based Discrete Optimization for Prompt Tuning and Discovery (Feb 2023)
  • Multimodal Chain-of-Thought Reasoning in Language Models (Feb 2023)
  • Large Language Models Can Be Easily Distracted by Irrelevant Context (Feb 2023)
  • Synthetic Prompting: Generating Chain-of-Thought Demonstrations for Large Language Models (Feb 2023)
  • Progressive Prompts: Continual Learning for Language Models (Jan 2023)
  • Batch Prompting: Efficient Inference with LLM APIs (Jan 2023)
  • Demonstrate-Search-Predict: Composing retrieval and language models for knowledge-intensive NLP (Dec 2022)
  • On Second Thought, Let’s Not Think Step by Step! Bias and Toxicity in Zero-Shot Reasoning (Dec 2022)
  • Constitutional AI: Harmlessness from AI Feedback (Dec 2022)
  • Successive Prompting for Decomposing Complex Questions (Dec 2022)
  • Large Language Models are reasoners with Self-Verification (Dec 2022)
  • Discovering Language Model Behaviors with Model-Written Evaluations (Dec 2022)
  • Structured Prompting: Scaling In-Context Learning to 1,000 Examples (Dec 2022)
  • PAL: Program-aided Language Models (Nov 2022)
  • Large Language Models Are Human-Level Prompt Engineers (Nov 2022)
  • Ignore Previous Prompt: Attack Techniques For Language Models (Nov 2022)
  • Machine Generated Text: A Comprehensive Survey of Threat Models and Detection Methods (Nov 2022)
  • Teaching Algorithmic Reasoning via In-context Learning (Nov 2022)
  • Enhancing Self-Consistency and Performance of Pre-Trained Language Models through Natural Language Inference (Nov 2022)
  • Ask Me Anything: A simple strategy for prompting language models (Oct 2022)
  • Recitation-Augmented Language Models (Oct 2022)
  • ReAct: Synergizing Reasoning and Acting in Language Models (Oct 2022)
  • Prompting GPT-3 To Be Reliable (Oct 2022)
  • Decomposed Prompting: A Modular Approach for Solving Complex Tasks (Oct 2022)
  • Language Models Are Greedy Reasoners: A Systematic Formal Analysis of Chain-of-Thought (Oct 2022)
  • Evaluating the Susceptibility of Pre-Trained Language Models via Handcrafted Adversarial Examples (Sep 2022)
  • Dynamic Prompt Learning via Policy Gradient for Semi-structured Mathematical Reasoning (Sep 2022)
  • Promptagator: Few-shot Dense Retrieval From 8 Examples (Sep 2022)
  • Atlas: Few-shot Learning with Retrieval Augmented Language Models (Nov 2022)
  • DocPrompting: Generating Code by Retrieving the Docs (Jul 2022)
  • On the Advance of Making Language Models Better Reasoners (Jun 2022)
  • Large Language Models are Zero-Shot Reasoners (May 2022)
  • Maieutic Prompting: Logically Consistent Reasoning with Recursive Explanations (May 2022)
  • MRKL Systems: A modular, neuro-symbolic architecture that combines large language models, external knowledge sources and discrete reasoning (May 2022)
  • PPT: Pre-trained Prompt Tuning for Few-shot Learning (May 2022)
  • Toxicity Detection with Generative Prompt-based Inference (May 2022)
  • Learning to Transfer Prompts for Text Generation (May 2022)
  • The Unreliability of Explanations in Few-shot Prompting for Textual Reasoning (May 2022)
  • A Taxonomy of Prompt Modifiers for Text-To-Image Generation (Apr 2022)
  • PromptChainer: Chaining Large Language Model Prompts through Visual Programming (Mar 2022)
  • Self-Consistency Improves Chain of Thought Reasoning in Language Models (Mar 2022)
  • Training language models to follow instructions with human feedback (Mar 2022)
  • Rethinking the Role of Demonstrations: What Makes In-Context Learning Work? (Feb 2022)
  • Chain of Thought Prompting Elicits Reasoning in Large Language Models (Jan 2022)
  • Show Your Work: Scratchpads for Intermediate Computation with Language Models (Nov 2021)
  • AI Chains: Transparent and Controllable Human-AI Interaction by Chaining Large Language Model Prompts (Oct 2021)
  • Generated Knowledge Prompting for Commonsense Reasoning (Oct 2021)
  • Multitask Prompted Training Enables Zero-Shot Task Generalization (Oct 2021)
  • Reframing Instructional Prompts to GPTk’s Language (Sep 2021)
  • Design Guidelines for Prompt Engineering Text-to-Image Generative Models (Sep 2021)
  • Fantastically Ordered Prompts and Where to Find Them: Overcoming Few-Shot Prompt Order Sensitivity (Apr 2021)
  • BERTese: Learning to Speak to BERT (Apr 2021)
  • The Power of Scale for Parameter-Efficient Prompt Tuning (Apr 2021)
  • Prompt Programming for Large Language Models: Beyond the Few-Shot Paradigm (Feb 2021)
  • Calibrate Before Use: Improving Few-Shot Performance of Language Models (Feb 2021)
  • Prefix-Tuning: Optimizing Continuous Prompts for Generation (Jan 2021)
  • Learning to Generate Task-Specific Adapters from Task Description (Jan 2021)
  • Making Pre-trained Language Models Better Few-shot Learners (Dec 2020)
  • Learning from Task Descriptions (Nov 2020)
  • AutoPrompt: Eliciting Knowledge from Language Models with Automatically Generated Prompts (Oct 2020)
  • How Can We Know What Language Models Know? (Jul 2020)
  • Language Models are Few-Shot Learners (May 2020)

1.3 Applications

  • Can Generative Pre-trained Transformers (GPT) Pass Assessments in Higher Education Programming Courses? (Mar 2023)
  • SelfCheckGPT: Zero-Resource Black-Box Hallucination Detection for Generative Large Language Models (Mar 2023)
  • ICL-D3IE: In-Context Learning with Diverse Demonstrations Updating for Document Information Extraction (Mar 2023)
  • MathPrompter: Mathematical Reasoning using Large Language Models (Mar 2023)
  • Prompt-Based Learning for Thread Structure Prediction in Cybersecurity Forums (Mar 2023)
  • Choice Over Control: How Users Write with Large Language Models using Diegetic and Non-Diegetic Prompting (Mar 2023)
  • Prompting Large Language Models with Answer Heuristics for Knowledge-based Visual Question Answering (Mar 2023)
  • Soft Prompt Guided Joint Learning for Cross-Domain Sentiment Analysis (Mar 2023)
  • SpeechPrompt v2: Prompt Tuning for Speech Classification Tasks (Mar 2023)
  • Goal Driven Discovery of Distributional Differences via Language Descriptions (Feb 2023)
  • Navigating the Grey Area: Expressions of Overconfidence and Uncertainty in Language Models (Feb 2023)
  • TabGenie: A Toolkit for Table-to-Text Generation (Feb 2023)
  • SGL-PT: A Strong Graph Learner with Graph Prompt Tuning (Feb 2023)
  • Few-Shot Table-to-Text Generation with Prompt-based Adapter (Feb 2023)
  • Language Models Are Few-shot Learners for Prognostic Prediction (Feb 2023)
  • STA: Self-controlled Text Augmentation for Improving Text Classifications (Feb 2023)
  • Check Your Facts and Try Again: Improving Large Language Models with External Knowledge and Automated Feedback (Feb 2023)
  • How Generative AI models such as ChatGPT can be (Mis)Used in SPC Practice, Education, and Research? An Exploratory Study (Feb 2023)
  • Grimm in Wonderland: Prompt Engineering with Midjourney to Illustrate Fairytales (Feb 2023)
  • LabelPrompt: Effective Prompt-based Learning for Relation Classification (Feb 2023)
  • Language Model Crossover: Variation through Few-Shot Prompting (Feb 2023)
  • Prompt Tuning of Deep Neural Networks for Speaker-adaptive Visual Speech Recognition (Feb 2023)
  • The Capacity for Moral Self-Correction in Large Language Models (Feb 2023)
  • Prompting for Multimodal Hateful Meme Classification (Feb 2023)
  • PLACES: Prompting Language Models for Social Conversation Synthesis (Feb 2023)
  • Commonsense-Aware Prompting for Controllable Empathetic Dialogue Generation (Feb 2023)
  • Crawling the Internal Knowledge-Base of Language Models (Jan 2023)
  • Legal Prompt Engineering for Multilingual Legal Judgement Prediction (Dec 2022)
  • Investigating Prompt Engineering in Diffusion Models (Nov 2022)
  • Conversing with Copilot: Exploring Prompt Engineering for Solving CS1 Problems Using Natural Language (Oct 2022)
  • Piloting Copilot and Codex: Hot Temperature, Cold Prompts, or Black Magic? (Oct 2022)
  • Learn to Explain: Multimodal Reasoning via Thought Chains for Science Question Answering (Sep 2022)
  • Plot Writing From Scratch Pre-Trained Language Models (Jul 2022)

1.4 Collections

  • Chain-of-Thought Papers
  • Papers with Code
  • Prompt Papers
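
Two prompting patterns recur throughout the papers above: few-shot prompting with worked demonstrations, and zero-shot chain-of-thought (CoT). As a rough illustration in plain Python (the helper functions are hypothetical, not any paper's reference implementation; the trigger phrase is the one studied in "Large Language Models are Zero-Shot Reasoners"):

```python
# Minimal sketch of two recurring prompt patterns.
# zero_shot_cot and few_shot are hypothetical helper names for illustration.

def zero_shot_cot(question: str) -> str:
    """Zero-shot CoT: append the reasoning trigger phrase so the model
    emits intermediate steps before its final answer."""
    return f"Q: {question}\nA: Let's think step by step."

def few_shot(examples: list[tuple[str, str]], question: str) -> str:
    """Few-shot prompting: prepend worked (question, answer) demonstrations
    before the new question, leaving the final answer for the model."""
    demos = "\n\n".join(f"Q: {q}\nA: {a}" for q, a in examples)
    return f"{demos}\n\nQ: {question}\nA:"

print(zero_shot_cot("A juggler has 16 balls and half are golf balls. How many golf balls?"))
```

The resulting strings would be sent as-is to a completion-style model; the papers above study how the choice, order, and wording of such demonstrations and triggers affect reasoning performance.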

2. Prompt Tools and Libraries

  • AI Test Kitchen
  • betterprompt
  • ClickPrompt
  • DreamStudio
  • DUST
  • Dyno
  • EmergentMind
  • EveryPrompt
  • GPT Index
  • GPTTools
  • hwchase17/adversarial-prompts
  • Interactive Composition Explorer
  • LangChain
  • Lexica
  • loom
  • Metaprompt
  • OpenAI Playground
  • OpenICL
  • OpenPrompt
  • OpenPlayground
  • Playground
  • Prodia
  • Prompt Base
  • Prompt Engine
  • Prompt Generator for OpenAI’s DALL-E 2
  • Promptable
  • PromptInject
  • Prompts.ai
  • PromptPerfect
  • Promptly
  • PromptSource
  • Promptist
  • Scale SpellBook
  • sharegpt
  • ThoughtSource
  • Visual Prompt Builder
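
A common denominator of many tools above (e.g. OpenPrompt, Prompt Engine, PromptSource, LangChain) is reusable prompt templating. The sketch below shows the core idea in plain Python; `PromptTemplate` here is a hypothetical stand-in, not the actual API of any listed library:

```python
# Minimal prompt-template helper: a template with named slots that are
# validated and filled per request. Illustrative only.
import string

class PromptTemplate:
    def __init__(self, template: str):
        self.template = template
        # Collect the named {placeholders} appearing in the template.
        self.variables = {
            field for _, field, _, _ in string.Formatter().parse(template) if field
        }

    def format(self, **kwargs) -> str:
        # Fail early if any required slot is left unfilled.
        missing = self.variables - kwargs.keys()
        if missing:
            raise ValueError(f"missing variables: {sorted(missing)}")
        return self.template.format(**kwargs)

translate = PromptTemplate("Translate the following {language} text to English:\n{text}")
print(translate.format(language="French", text="Bonjour"))
```

Checking for missing slots up front is the main value such libraries add over raw f-strings: a prompt silently rendered with an empty slot is a common and hard-to-debug failure mode.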

3. Datasets

  • Anthropic’s Red Team dataset (paper)
  • Awesome ChatGPT Prompts
  • DiffusionDB
  • Midjourney Prompts
  • P3 - Public Pool of Prompts
  • PartiPrompts
  • Real Toxicity Prompts
  • Stable Diffusion Dataset
  • WritingPrompts

4. Recommended Readings

  • 3 Principles for prompt engineering with GPT-3
  • A beginner-friendly guide to generative language models - LaMBDA guide
  • A Complete Introduction to Prompt Engineering for Large Language Models
  • A Generic Framework for ChatGPT Prompt Engineering
  • An SEO’s guide to ChatGPT prompts
  • AI Content Generation
  • AI’s rise generates new job title: Prompt engineer
  • Awesome ChatGPT Prompts
  • Best 100+ Stable Diffusion Prompts
  • Best practices for prompt engineering with OpenAI API
  • Building GPT-3 applications — beyond the prompt
  • Can AI really be protected from text-based attacks?
  • ChatGPT, AI and GPT-3 Apps and use cases
  • ChatGPT Prompts
  • CMU Advanced NLP 2022: Prompting
  • Common Sense as Dark Matter - Yejin Choi | Stanford MLSys #78
  • Curtis64’s set of prompt gists
  • DALL·E 2 Prompt Engineering Guide
  • DALL·E 2 Preview - Risks and Limitations
  • DALLE Prompt Book
  • DALL-E, Make Me Another Picasso, Please
  • Diffusion Models: A Practical Guide
  • Exploiting GPT-3 Prompts
  • Exploring Prompt Injection Attacks
  • Extrapolating to Unnatural Language Processing with GPT-3’s In-context Learning: The Good, the Bad, and the Mysterious
  • Generative AI with Cohere: Part 1 - Model Prompting
  • Get a Load of This New Job: “Prompt Engineers” Who Act as Psychologists to AI Chatbots
  • Giving GPT-3 a Turing Test
  • GPT-3 & Beyond
  • GPT3 and Prompts: A quick primer
  • Hands-on with Bing’s new ChatGPT-like features
  • How to Draw Anything
  • How to get images that don’t suck
  • How to make LLMs say true things
  • How to perfect your prompt writing for AI generators
  • How to write good prompts
  • If I Was Starting Prompt Engineering in 2023: My 8 Insider Tips
  • Indirect Prompt Injection on Bing Chat
  • Interactive guide to GPT-3 prompt parameters
  • Introduction to Reinforcement Learning with Human Feedback
  • In defense of prompt engineering
  • JailBreaking ChatGPT: Everything You Need to Know
  • Language Models and Prompt Engineering: Systematic Survey of Prompting Methods in NLP
  • Learn Prompting
  • Methods of prompt programming
  • Mysteries of mode collapse
  • NLP for Text-to-Image Generators: Prompt Analysis
  • NLP with Deep Learning CS224N/Ling284 - Lecture 11: Prompting, Instruction Tuning, and RLHF
  • Notes for Prompt Engineering by sw-yx
  • OpenAI Cookbook
  • OpenAI Prompt Examples for several applications
  • Pretrain, Prompt, Predict - A New Paradigm for NLP
  • Prompt Engineer: Tech’s hottest job title?
  • Prompt Engineering 101 - Introduction and resources
  • Prompt Engineering 101: Autocomplete, Zero-shot, One-shot, and Few-shot prompting
  • Prompt Engineering 101
  • Prompt Engineering - A new profession?
  • Prompt Engineering by co:here
  • Prompt Engineering by Microsoft
  • Prompt Engineering: The Career of Future
  • Prompt engineering davinci-003 on our own docs for automated support (Part I)
  • Prompt Engineering Guide: How to Engineer the Perfect Prompts
  • Prompt Engineering in GPT-3
  • Prompt Engineering Template
  • Prompt Engineering Topic by GitHub
  • Prompt Engineering: The Ultimate Guide 2023 [GPT-3 & ChatGPT]
  • Prompt Engineering: From Words to Art
  • Prompt Engineering with OpenAI’s GPT-3 and other LLMs
  • Prompt injection attacks against GPT-3
  • Prompt injection to read out the secret OpenAI API key
  • Prompting: Better Ways of Using Language Models for NLP Tasks
  • Prompting for Few-shot Learning
  • Prompting in NLP: Prompt-based zero-shot learning
  • Prompting Methods with Language Models and Their Applications to Weak Supervision
  • Prompts as Programming by Gwern
  • Reverse Prompt Engineering for Fun and (no) Profit
  • So you want to be a prompt engineer: Critical careers of the future
  • Simulators
  • Start with an Instruction
  • Talking to machines: prompt engineering & injection
  • Tech’s hottest new job: AI whisperer. No coding required
  • The Book - Fed Honeypot
  • The ChatGPT Prompt Book
  • The ChatGPT list of lists: A collection of 3000+ prompts, examples, use-cases, tools, APIs, extensions, fails and other resources
  • The Most Important Job Skill of This Century
  • The Mirror of Language
  • The Waluigi Effect (mega-post)
  • Thoughts and impressions of AI-assisted search from Bing
  • Unleash Your Creativity with Generative AI: Learn How to Build Innovative Products!
  • Unlocking Creativity with Prompt Engineering
  • Using GPT-Eliezer against ChatGPT Jailbreaking
  • What Is ChatGPT Doing … and Why Does It Work?
  • Why is ChatGPT so good?
