GPT Best Practices
This guide shares strategies and tactics for getting better results from GPTs. The methods described here can sometimes be deployed in combination for greater effect. We encourage experimentation to find the methods that work best for you.
Some of the examples demonstrated here currently work only with our most capable model. If you don't yet have access, consider joining the waitlist. In general, if you find that a GPT model fails at a task and a more capable model is available, it's often worth trying again with the more capable model.
Six strategies for getting better results
Write clear instructions
GPTs can’t read your mind. If outputs are too long, ask for brief replies. If outputs are too simple, ask for expert-level writing. If you dislike the format, demonstrate the format you’d like to see. The less GPTs have to guess at what you want, the more likely you’ll get it.
Tactics:
Include details in your query to get more relevant answers
Ask the model to adopt a persona
Use delimiters to clearly indicate distinct parts of the input
Specify the steps required to complete a task
Provide examples
Specify the desired length of the output
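As a concrete illustration of the delimiter tactic, a prompt can wrap user-supplied text in triple quotes so the model can distinguish instructions from the text it should operate on. The helper name and prompt wording below are illustrative choices, not a fixed API:

```python
def build_summary_prompt(article_text: str, max_words: int = 50) -> str:
    """Separate trusted instructions from untrusted input text by
    wrapping the input in triple-quote delimiters."""
    return (
        f"Summarize the text delimited by triple quotes "
        f"in about {max_words} words.\n\n"
        f'"""{article_text}"""'
    )

prompt = build_summary_prompt("GPTs can't read your mind ...", max_words=30)
```

The same pattern also specifies the desired output length (`max_words`), combining two tactics in one prompt.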
Provide reference text
GPTs can confidently invent fake answers, especially when asked about esoteric topics or for citations and URLs. In the same way that a sheet of notes can help a student do better on a test, providing reference text to GPTs can help in answering with fewer fabrications.
Tactics:
Instruct the model to answer using a reference text
Instruct the model to answer with citations from a reference text
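One way to apply these tactics is to number the reference passages and ask for bracketed citations plus an explicit fallback when no answer is present. The `[1]`-style citation format and prompt wording are illustrative:

```python
def build_cited_prompt(question: str, passages: list[str]) -> str:
    """Ask the model to answer only from numbered passages and to cite
    the ones it relies on."""
    # Number each passage so the model can cite it as [1], [2], ...
    refs = "\n".join(f"[{i}] {text}" for i, text in enumerate(passages, start=1))
    return (
        "Answer the question using only the numbered passages below, and "
        "cite each passage you rely on like [1]. If the passages do not "
        'contain the answer, reply "I could not find an answer."\n\n'
        f"{refs}\n\nQuestion: {question}"
    )
```

The fallback instruction gives the model a sanctioned way out, which tends to reduce fabricated answers when the reference text is silent on the question.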
Split complex tasks into simpler subtasks
Just as it is good practice in software engineering to decompose a complex system into a set of modular components, the same is true of tasks submitted to GPTs. Complex tasks tend to have higher error rates than simpler tasks. Furthermore, complex tasks can often be re-defined as a workflow of simpler tasks in which the outputs of earlier tasks are used to construct the inputs to later tasks.
Tactics:
Use intent classification to identify the most relevant instructions for a user query
For dialogue applications that require very long conversations, summarize or filter previous dialogue
Summarize long documents piecewise and construct a full summary recursively
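Piecewise summarization starts by splitting a long document into chunks that fit the model's context window. A minimal sketch of that splitting step, where the chunk size and paragraph-based splitting are illustrative choices:

```python
def chunk_text(text: str, max_chars: int = 1000) -> list[str]:
    """Split on paragraph boundaries, packing paragraphs into chunks no
    longer than max_chars (an oversized paragraph becomes its own chunk)."""
    chunks, current = [], ""
    for para in text.split("\n\n"):
        if current and len(current) + len(para) + 2 > max_chars:
            chunks.append(current)
            current = para
        else:
            current = f"{current}\n\n{para}" if current else para
    if current:
        chunks.append(current)
    return chunks
```

Each chunk can then be summarized independently, the chunk summaries concatenated, and the process repeated recursively until a single summary of the whole document remains.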
Give GPTs time to "think"
If asked to multiply 17 by 28, you might not know it instantly, but can still work it out with time. Similarly, GPTs make more reasoning errors when trying to answer right away, rather than taking time to work out an answer. Asking for a chain of reasoning before an answer can help GPTs reason their way toward correct answers more reliably.
Tactics:
Instruct the model to work out its own solution before rushing to a conclusion
Use inner monologue or a sequence of queries to hide the model's reasoning process
Ask the model if it missed anything on previous passes
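For the inner-monologue tactic, one common pattern is to ask the model to put its working inside tags and then strip those tags before showing the output to the user. The `<reasoning>` tag name here is an arbitrary choice:

```python
import re

def extract_visible_answer(model_output: str) -> str:
    """Remove everything inside <reasoning>...</reasoning> so the user
    sees only the final answer, not the chain of reasoning."""
    visible = re.sub(r"<reasoning>.*?</reasoning>", "", model_output,
                     flags=re.DOTALL)
    return visible.strip()
```

This lets the model reason step by step (improving reliability, as with the 17 × 28 example above) while keeping the intermediate working hidden from the end user.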
Use external tools
Compensate for the weaknesses of GPTs by feeding them the outputs of other tools. For example, a text retrieval system can tell GPTs about relevant documents. A code execution engine can help GPTs do math and run code. If a task can be done more reliably or efficiently by a tool rather than by a GPT, offload it to get the best of both.
Tactics:
Use embeddings-based search to implement efficient knowledge retrieval
Use code execution to perform more accurate calculations or call external APIs
Give the model access to specific functions
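At its core, embeddings-based retrieval ranks document vectors by similarity to a query vector. A toy sketch with hand-written vectors (a real system would obtain the vectors from an embeddings model and typically use a vector index rather than a linear scan):

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two vectors: 1.0 means same direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def top_k_indices(query: list[float], docs: list[list[float]], k: int = 2) -> list[int]:
    """Indices of the k document vectors most similar to the query."""
    ranked = sorted(range(len(docs)),
                    key=lambda i: cosine_similarity(query, docs[i]),
                    reverse=True)
    return ranked[:k]
```

The text of the top-ranked documents is then pasted into the prompt as reference material, combining this strategy with "Provide reference text" above.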
Test changes systematically
Improving performance is easier if you can measure it. In some cases a modification to a prompt will achieve better performance on a few isolated examples but lead to worse overall performance on a more representative set of examples. Therefore, to be sure that a change is net positive to performance, it may be necessary to define a comprehensive test suite (also known as an "eval").
Tactic:
Evaluate model outputs with reference to gold-standard answers
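A minimal eval can be as simple as exact-match accuracy against gold-standard answers. The normalization here (case and surrounding whitespace) is one illustrative choice; harder tasks usually need fuzzier scoring:

```python
def exact_match_accuracy(outputs: list[str], gold: list[str]) -> float:
    """Fraction of model outputs that match the gold answer after
    normalizing case and surrounding whitespace."""
    def norm(s: str) -> str:
        return s.strip().lower()
    hits = sum(norm(o) == norm(g) for o, g in zip(outputs, gold))
    return hits / len(gold)
```

Running the same scored test set before and after a prompt change makes it easy to see whether the change is a net gain on representative examples, not just on a few cherry-picked ones.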
Other Resources
For more inspiration, visit the OpenAI Cookbook, which contains example code and also links to third-party resources such as:
Prompting libraries & tools
Prompting guides
Video courses
Papers on advanced prompting to improve reasoning