Text completion
- Preface
- Introduction
- Prompt design
- Basics
- Troubleshooting
- Classification
- Improving the classifier's efficiency
- Generation
- Conversation
- Transformation
- Translation
- Conversion
- Summarization
- Completion
- Factual responses
- Inserting text (Beta)
- Best practices
- Editing text (Alpha)
- Examples
Text completion
Learn how to generate or manipulate text
Preface
ChatGPT's text completion is one of the most powerful tools for interacting with users and delivering innovative experiences. At its core is the ability to handle natural language with ease. This lets users hold straightforward conversations, and by understanding the intent and context of a conversation, ChatGPT can deliver responses tailored to the user's needs.
It can be applied across many industries, such as customer service, marketing, e-commerce, and healthcare. In customer service, for example, ChatGPT can help customers get the answers they need quickly. In marketing, it can suggest content tailored to a customer's interests. In e-commerce, it can make it easier for customers to explore products or services by learning their preferences. In healthcare, it can help patients obtain accurate medical advice quickly.
As the author and entrepreneur Vashti Quiroz-Vega put it, "AI has the potential to revolutionize the customer service experience." Through text completion, businesses have the opportunity to offer customers an experience that is more intuitive and engaging than traditional customer service channels.
Introduction
The completions endpoint can be used for a wide variety of tasks. It provides a simple but powerful interface to any of our models. You input some text as a prompt, and the model will generate a text completion that attempts to match whatever context or pattern you gave it. For example, if you give the API the prompt, “As Descartes said, I think, therefore”, it will return the completion " I am" with high probability.
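To make this concrete, here is a minimal sketch of calling the completions endpoint over raw HTTP. The `build_request` and `complete` helper names and the `text-davinci-003` model choice are illustrative assumptions on our part, not something the guide mandates:

```python
import json
import urllib.request

API_URL = "https://api.openai.com/v1/completions"

def build_request(prompt, model="text-davinci-003", max_tokens=16, temperature=1.0):
    # Assemble the JSON body the completions endpoint expects.
    return {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

def complete(prompt, api_key):
    # Send the prompt and return the first completion's text.
    # Requires a real API key; not executed in this sketch.
    body = json.dumps(build_request(prompt)).encode("utf-8")
    req = urllib.request.Request(
        API_URL,
        data=body,
        headers={
            "Content-Type": "application/json",
            "Authorization": f"Bearer {api_key}",
        },
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["choices"][0]["text"]
```

With the Descartes prompt above, `complete("As Descartes said, I think, therefore", key)` would most likely return " I am".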
The best way to start exploring completions is through our Playground. It’s simply a text box where you can submit a prompt to generate a completion. You can start with an example like the following:
Write a tagline for an ice cream shop.
Once you submit, you’ll see something like this:
Write a tagline for an ice cream shop.
We serve up smiles with every scoop!
The actual completion you see may differ because the API is non-deterministic by default. This means that you might get a slightly different completion every time you call it, even if your prompt stays the same. Setting temperature to 0 will make the outputs mostly deterministic, but a small amount of variability may remain.
This simple text-in, text-out interface means you can “program” the model by providing instructions or just a few examples of what you’d like it to do. Its success generally depends on the complexity of the task and quality of your prompt. A good rule of thumb is to think about how you would write a word problem for a middle-schooler to solve. A well-written prompt provides enough information for the model to know what you want and how it should respond.
This guide covers general prompt design best practices and examples. To learn more about working with code using our Codex models, visit our code guide.
Keep in mind that the default models’ training data cuts off in 2021, so they may not have knowledge of current events. We plan to add more continuous training in the future.
Prompt design
Basics
Our models can do everything from generating original stories to performing complex text analysis. Because they can do so many things, you have to be explicit in describing what you want. Showing, not just telling, is often the secret to a good prompt.
There are three basic guidelines to creating prompts:
**Show and tell.** Make it clear what you want either through instructions, examples, or a combination of the two. If you want the model to rank a list of items in alphabetical order or to classify a paragraph by sentiment, show it that’s what you want.
**Provide quality data.** If you’re trying to build a classifier or get the model to follow a pattern, make sure that there are enough examples. Be sure to proofread your examples — the model is usually smart enough to see through basic spelling mistakes and give you a response, but it also might assume the mistakes are intentional, which can affect the response.
**Check your settings.** The `temperature` and `top_p` settings control how deterministic the model is in generating a response. If you’re asking it for a response where there’s only one right answer, then you’d want to set these lower. If you’re looking for more diverse responses, then you might want to set them higher. The number one mistake people make with these settings is assuming that they’re “cleverness” or “creativity” controls.
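As a rough sketch of that rule of thumb, request parameters might be chosen like this (the model name and the specific temperature values are our assumptions, not official recommendations):

```python
def completion_params(prompt, single_right_answer):
    # Parameters for a completions request: a low temperature when only
    # one answer is correct, a higher one for more diverse output.
    params = {"model": "text-davinci-003", "prompt": prompt, "max_tokens": 64}
    if single_right_answer:
        params["temperature"] = 0.0   # near-deterministic (e.g. classification)
    else:
        params["temperature"] = 0.9   # more varied (e.g. brainstorming)
    return params
```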
Troubleshooting
If you’re having trouble getting the API to perform as expected, follow this checklist:
- Is it clear what the intended generation should be?
- Are there enough examples?
- Did you check your examples for mistakes? (The API won’t tell you directly)
- Are you using `temperature` and `top_p` correctly?
Classification
To create a text classifier with the API, we provide a description of the task and a few examples. In this example, we show how to classify the sentiment of Tweets.
Decide whether a Tweet’s sentiment is positive, neutral, or negative.
Tweet: I loved the new Batman movie!
Sentiment:
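The prompt above can be generated, and the model's answer normalized, with a couple of small helpers. This is a sketch; `sentiment_prompt` and `parse_label` are hypothetical names of ours:

```python
def sentiment_prompt(tweet):
    # Rebuild the classification prompt above for an arbitrary Tweet.
    return (
        "Decide whether a Tweet's sentiment is positive, neutral, or negative.\n\n"
        f"Tweet: {tweet}\n"
        "Sentiment:"
    )

def parse_label(completion_text):
    # The completion usually comes back as e.g. " Positive"; normalize it.
    label = completion_text.strip().rstrip(".").lower()
    return label if label in {"positive", "neutral", "negative"} else None
```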
It’s worth paying attention to several features in this example:
- **Use plain language to describe your inputs and outputs.** We use plain language for the input “Tweet” and the expected output “Sentiment.” As a best practice, start with plain language descriptions. While you can often use shorthand or keys to indicate the input and output, it’s best to start by being as descriptive as possible and then working backwards to remove extra words and see if performance stays consistent.
- **Show the API how to respond to any case.** In this example, we include the possible sentiment labels in our instruction. A neutral label is important because there will be many cases where even a human would have a hard time determining if something is positive or negative, and situations where it’s neither.
- **You need fewer examples for familiar tasks.** For this classifier, we don’t provide any examples. This is because the API already has an understanding of sentiment and the concept of a Tweet. If you’re building a classifier for something the API might not be familiar with, it might be necessary to provide more examples.
Improving the classifier’s efficiency
Now that we have a grasp of how to build a classifier, let’s take that example and make it even more efficient so that we can use it to get multiple results back from one API call.
Classify the sentiment in these tweets:
1. “I can’t stand homework”
2. “This sucks. I’m bored 😠”
3. “I can’t wait for Halloween!!!”
4. “My cat is adorable ❤️❤️”
5. “I hate chocolate”
Tweet sentiment ratings:
We provide a numbered list of Tweets so the API can rate five (and even more) Tweets in just one API call.
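A sketch of building that batched prompt and parsing the numbered answer (the helper names are ours; the answer format assumed here is the numbered list the prompt invites):

```python
def batch_sentiment_prompt(tweets):
    # Number the Tweets so one completion can rate all of them.
    lines = [f'{i}. "{t}"' for i, t in enumerate(tweets, start=1)]
    return (
        "Classify the sentiment in these tweets:\n\n"
        + "\n".join(lines)
        + "\n\nTweet sentiment ratings:"
    )

def parse_ratings(completion_text):
    # Parse an answer like "1. Negative\n2. Positive" into {1: "Negative", ...}.
    ratings = {}
    for line in completion_text.strip().splitlines():
        number, _, label = line.partition(".")
        if number.strip().isdigit() and label.strip():
            ratings[int(number.strip())] = label.strip()
    return ratings
```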
It’s important to note that when you ask the API to create lists or evaluate text, you need to pay extra attention to your probability settings (`top_p` or `temperature`) to avoid drift.
- Make sure your probability setting is calibrated correctly by running multiple tests.
- Don’t make your list too long or the API is likely to drift.
Generation
One of the most powerful yet simplest tasks you can accomplish with the API is generating new ideas or versions of input. You can ask for anything from story ideas, to business plans, to character descriptions and marketing slogans. In this example, we’ll use the API to create ideas for using virtual reality in fitness.
Brainstorm some ideas combining VR and fitness:
If needed, you can improve the quality of the responses by including some examples in your prompt.
Conversation
The API is extremely adept at carrying on conversations with humans and even with itself. With just a few lines of instruction, we’ve seen the API perform as a customer service chatbot that intelligently answers questions without ever getting flustered or a wise-cracking conversation partner that makes jokes and puns. The key is to tell the API how it should behave and then provide a few examples.
Here’s an example of the API playing the role of an AI answering questions:
The following is a conversation with an AI assistant. The assistant is helpful, creative, clever, and very friendly.
Human: Hello, who are you?
AI: I am an AI created by OpenAI. How can I help you today?
Human:
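One way to turn that prompt into a working chat loop is to keep appending turns and cue the model with "AI:". This is a sketch; the `stop` sequences mentioned in the comment are our suggestion for keeping the model from writing both sides of the dialogue:

```python
PREAMBLE = (
    "The following is a conversation with an AI assistant. The assistant is "
    "helpful, creative, clever, and very friendly.\n\n"
    "Human: Hello, who are you?\n"
    "AI: I am an AI created by OpenAI. How can I help you today?\n"
)

def chat_prompt(history, user_message):
    # history is a list of (speaker, text) tuples from earlier turns.
    turns = "".join(f"{speaker}: {text}\n" for speaker, text in history)
    # When sending this, also pass stop=["Human:", "AI:"] so generation
    # halts at the end of the assistant's turn.
    return PREAMBLE + turns + f"Human: {user_message}\nAI:"
```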
This is all it takes to create a chatbot capable of carrying on a conversation. Underneath its simplicity, there are several things going on that are worth paying attention to:
- **We tell the API the intent but we also tell it how to behave.** Just like the other prompts, we cue the API into what the example represents, but we also add another key detail: we give it explicit instructions on how to interact with the phrase “The assistant is helpful, creative, clever, and very friendly.” Without that instruction the API might stray and mimic the human it’s interacting with, becoming sarcastic or exhibiting some other behavior we want to avoid.
- **We give the API an identity.** At the start we have the API respond as an AI assistant. While the API has no intrinsic identity, this helps it respond in a way that’s as close to the truth as possible. You can use identity in other ways to create other kinds of chatbots. If you tell the API to respond as a woman who works as a research scientist in biology, you’ll get intelligent and thoughtful comments from the API similar to what you’d expect from someone with that background.
In this example we create a chatbot that is a bit sarcastic and reluctantly answers questions:
Marv is a chatbot that reluctantly answers questions with sarcastic responses:
You: How many pounds are in a kilogram?
Marv: This again? There are 2.2 pounds in a kilogram. Please make a note of this.
You: What does HTML stand for?
Marv: Was Google too busy? Hypertext Markup Language. The T is for try to ask better questions in the future.
You: When did the first airplane fly?
Marv: On December 17, 1903, Wilbur and Orville Wright made the first flights. I wish they’d come and take me away.
You: What is the meaning of life?
Marv: I’m not sure. I’ll ask my friend Google.
You: Why is the sky blue?
To create an amusing and somewhat helpful chatbot, we provide a few examples of questions and answers showing the API how to reply. All it takes is just a few sarcastic responses, and the API is able to pick up the pattern and provide an endless number of snarky responses.
Transformation
The API is a language model that is familiar with a variety of ways that words and characters can be used to express information. This ranges from natural language text to code and languages other than English. The API is also able to understand content on a level that allows it to summarize, convert and express it in different ways.
Translation
In this example we show the API how to convert from English to French, Spanish, and Japanese:
Translate this into French, Spanish and Japanese:
What rooms do you have available?
This example works because the API already has a grasp of these languages, so there’s no need to try to teach them.
If you want to translate from English to a language the API is unfamiliar with, you’d need to provide it with more examples or even fine-tune a model to do it fluently.
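The translation prompt generalizes readily. A sketch (the helper name is ours, and it assumes two or more target languages):

```python
def translation_prompt(text, languages=("French", "Spanish", "Japanese")):
    # Rebuild the prompt above for any text and set of target languages.
    listed = ", ".join(languages[:-1]) + " and " + languages[-1]
    return f"Translate this into {listed}:\n\n{text}"
```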
Conversion
In this example we convert the name of a movie into emoji. This shows the adaptability of the API to picking up patterns and working with other characters.
Convert movie titles into emoji.
Back to the Future: 👨👴🚗🕒
Batman: 🤵🦇
Transformers: 🚗🤖
Star Wars:
Summarization
The API is able to grasp the context of text and rephrase it in different ways. In this example, we create an explanation a child would understand from a longer, more sophisticated text passage. This illustrates that the API has a deep grasp of language.
Summarize this for a second-grade student:
Jupiter is the fifth planet from the Sun and the largest in the Solar System. It is a gas giant with a mass one-thousandth that of the Sun, but two-and-a-half times that of all the other planets in the Solar System combined. Jupiter is one of the brightest objects visible to the naked eye in the night sky, and has been known to ancient civilizations since before recorded history. It is named after the Roman god Jupiter.[19] When viewed from Earth, Jupiter can be bright enough for its reflected light to cast visible shadows,[20] and is on average the third-brightest natural object in the night sky after the Moon and Venus.
Completion
While all prompts result in completions, it can be helpful to think of text completion as its own task in instances where you want the API to pick up where you left off. For example, if given this prompt, the API will continue the train of thought about vertical farming. You can lower the `temperature` setting to keep the API more focused on the intent of the prompt, or increase it to let it go off on a tangent.
Vertical farming provides a novel solution for producing food locally, reducing transportation costs and
This next prompt shows how you can use completion to help write React components. We send some code to the API, and it’s able to continue the rest because it has an understanding of the React library. We recommend using our Codex models for tasks that involve understanding or generating code. To learn more, visit our code guide.
import React from 'react';
const HeaderComponent = () => (
Factual responses
The API has a lot of knowledge that it’s learned from the data it was trained on. It also has the ability to provide responses that sound very real but are in fact made up. There are two ways to limit the likelihood of the API making up an answer.
- **Provide a ground truth for the API.** If you provide the API with a body of text to answer questions about (like a Wikipedia entry) it will be less likely to confabulate a response.
- **Use a low probability and show the API how to say “I don’t know”.** If the API understands that in cases where it’s less certain about a response, saying “I don’t know” or some variation is appropriate, it will be less inclined to make up answers.
In this example we give the API examples of questions and answers it knows, then examples of things it wouldn’t know, answered with question marks. We also set the probability to zero so the API is more likely to respond with a “?” if there is any doubt.
Q: Who is Batman?
A: Batman is a fictional comic book character.
Q: What is torsalplexity?
A: ?
Q: What is Devz9?
A: ?
Q: Who is George Lucas?
A: George Lucas is an American film director and producer famous for creating Star Wars.
Q: What is the capital of California?
A: Sacramento.
Q: What orbits the Earth?
A: The Moon.
Q: Who is Fred Rickerson?
A: ?
Q: What is an atom?
A: An atom is a tiny particle that makes up everything.
Q: Who is Alvan Muntz?
A: ?
Q: What is Kozar-09?
A: ?
Q: How many moons does Mars have?
A: Two, Phobos and Deimos.
Q: What’s a language model?
A:
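The few-shot prompt above can be assembled programmatically. A sketch with hypothetical helper names; send it with `temperature` set to 0, as described, so the model falls back to "?" when unsure:

```python
EXAMPLES = [
    ("Who is Batman?", "Batman is a fictional comic book character."),
    ("What is torsalplexity?", "?"),  # made-up term: teach the model to answer "?"
    ("What is the capital of California?", "Sacramento."),
    ("Who is Fred Rickerson?", "?"),  # unknown person
]

def factual_prompt(question, examples=EXAMPLES):
    # Interleave known answers with "?" answers, then ask the real question.
    shots = "".join(f"Q: {q}\nA: {a}\n" for q, a in examples)
    return shots + f"Q: {question}\nA:"
```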
Inserting text (Beta)
The completions endpoint also supports inserting text within text by providing a suffix prompt in addition to the prefix prompt. This need naturally arises when writing long-form text, transitioning between paragraphs, following an outline, or guiding the model towards an ending. This also works on code, and can be used to insert in the middle of a function or file. Visit our code guide to learn more.
To illustrate how important suffix context is to our ability to predict, consider the prompt, “Today I decided to make a big change.” There’s many ways one could imagine completing the sentence. But if we now supply the ending of the story: “I’ve gotten many compliments on my new hair!”, the intended completion becomes clear.
I went to college at Boston University. After getting my degree, I decided to make a change. A big change!
I packed my bags and moved to the west coast of the United States.
Now, I can’t get enough of the Pacific Ocean!
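In request terms, the ending is supplied through the `suffix` parameter alongside the usual `prompt`. A sketch of the request body (the model name and parameter values are our assumptions):

```python
def insertion_request(prefix, suffix, model="text-davinci-003"):
    # The completion is generated to bridge prefix -> suffix.
    return {
        "model": model,
        "prompt": prefix,
        "suffix": suffix,
        "max_tokens": 512,   # comfortably above 256 so the bridge isn't cut off
        "temperature": 0.7,
    }
```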
By providing the model with additional context, it can be much more steerable. However, this is a more constrained and challenging task for the model.
Best practices
Inserting text is a new feature in beta and you may have to modify the way you use the API for better results. Here are a few best practices:
**Use max_tokens > 256.** The model is better at inserting longer completions. With too small a `max_tokens`, the model may be cut off before it’s able to connect to the suffix. Note that you will only be charged for the number of tokens produced, even when using a larger `max_tokens`.
**Prefer finish_reason == “stop”.** When the model reaches a natural stopping point or a user-provided stop sequence, it will set `finish_reason` to “stop”. This indicates that the model has managed to connect to the suffix well and is a good signal for the quality of a completion. This is especially relevant for choosing between a few completions when using n > 1 or resampling (see the next point).
**Resample 3-5 times.** While almost all completions connect to the prefix, the model may struggle to connect the suffix in harder cases. We find that resampling 3 or 5 times (or using `best_of` with k=3,5) and picking the samples with “stop” as their `finish_reason` can be an effective approach in such cases. While resampling, you would typically want a higher `temperature` to increase diversity.
Note: if all the returned samples have `finish_reason` == “length”, it’s likely that `max_tokens` is too small and the model runs out of tokens before it manages to connect the prompt and the suffix naturally. Consider increasing `max_tokens` before resampling.
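Putting the last two points together, selecting among resampled completions can be sketched as a small filter over the response's `choices` list (the response shape assumed here is the standard completions payload):

```python
def pick_best(choices):
    # Prefer a sample that stopped naturally; if every sample hit the
    # length limit, fall back to the first and consider raising max_tokens.
    for choice in choices:
        if choice.get("finish_reason") == "stop":
            return choice
    return choices[0]

samples = [
    {"text": " and then it was cut o", "finish_reason": "length"},
    {"text": " and everyone cheered.", "finish_reason": "stop"},
]
```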
**Try giving more clues.** In some cases, to better help the model’s generation, you can provide clues by giving a few examples of patterns that the model can follow to decide a natural place to stop.
How to make a delicious hot chocolate:
- Boil water
- Put hot chocolate in a cup
- Add boiling water to the cup
- Enjoy the hot chocolate

- Dogs are loyal animals.
- Lions are ferocious animals.
- Dolphins are playful animals.
- Horses are majestic animals.
Editing text (Alpha)
The edits endpoint can be used to edit text, rather than just completing it. You provide some text and an instruction for how to modify it, and the `text-davinci-edit-001` model will attempt to edit it accordingly. This is a natural interface for translating, editing, and tweaking text. This is also useful for refactoring and working with code. Visit our code guide to learn more. During this initial beta period, usage of the edits endpoint is free.
Examples
INPUT
GPT-3 is a very nice AI
That’s pretty good at writing replies
When it’s asked a question
It gives its suggestion
This is a poem it made that rhymes
INSTRUCTIONS
Make this in the voice of GPT-3
OUTPUT
I am a very nice AI
I am pretty good at writing replies
When I am asked a question
I give my suggestion
This is a poem I made that rhymes
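The example above maps directly onto an edits request body. A sketch (sending it requires a POST to the edits endpoint with an API key, analogous to the completions call):

```python
def edit_request(input_text, instruction):
    # Request body for the edits endpoint, using the model named above.
    return {
        "model": "text-davinci-edit-001",
        "input": input_text,
        "instruction": instruction,
    }
```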