Welcome to our series about Prompt Engineering! In this series, we will teach you the ins and outs of prompt engineering. When using Large Language Models, like ChatGPT, the better you formulate your instruction, called a prompt, the better the response will be. The practice of designing good instructions is called prompt engineering. In the upcoming articles, we will help you become a prompt engineer.
Prompt Engineering is an important skill for obtaining the desired output from a model, as the quality of the answers you get depends heavily on the prompts you provide. Generally speaking:
High-quality prompts lead to high-quality answers, while low-quality prompts lead to low-quality answers.
Prompt Template
In a previous article, we provided a template of what a good prompt should look like:
You are a [expert person]. [context]. [task (action verbs)]. Please use a [tone] writing style. The output should be [format]. It should look like the following example: [example].
Although this template covers everything you need for a good prompt, and therefore a good response from the AI model, in this series we will give you the tools and skills to build your own effective prompts. Optionally, you will also get the opportunity to apply these skills using the OpenAI Python package. While some articles focus on the OpenAI API, you can of course also paste the example prompts into the regular ChatGPT desktop version!
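If you want to follow along in code, here is a minimal sketch of how you could send a prompt built from the template above to the OpenAI API. It assumes the openai Python package (version 1.x) is installed and the OPENAI_API_KEY environment variable is set; the model name gpt-4o-mini and the filled-in template values are placeholder choices of our own, not prescribed by this series.
```python
# Minimal sketch: send a prompt to the OpenAI API with the openai Python package (v1.x).
# The model name "gpt-4o-mini" and the template values below are placeholder assumptions.
from openai import OpenAI

client = OpenAI()  # reads the OPENAI_API_KEY environment variable


def get_response(prompt: str) -> str:
    """Send a single-turn prompt and return the model's text reply."""
    completion = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return completion.choices[0].message.content


# Fill in the template slots: [expert person], [context], [task], [tone], [format]
prompt = (
    "You are a data scientist. I am new to data analysis. "
    "Explain what an outlier is. Please use a friendly writing style. "
    "The output should be a short paragraph."
)
print(get_response(prompt))
```
We will reuse this get_response helper in the sketches throughout this article.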
By the end of this introduction, you will be equipped to create effective prompts by following four key principles.
The Key Principles of Prompt Engineering
Prompts are the instructions you provide to the model, guiding the kind of answer you’re seeking. Think of it this way: a model, whether it’s the one you chat with in the ChatGPT desktop version or the one you specify in the API, is like someone unaware of your specific needs or queries. Much like interacting with a person, conveying your intentions or questions clearly is crucial. A poorly structured prompt can result in undesirable responses from the model. This is why it is important to clearly convey your needs to the model through well-written, effective prompts.
Principle 1: Be Specific
The model doesn’t know what your intentions are when you start an interaction. Providing more details about the task you want to perform in your prompt helps it respond more directly and effectively. When creating prompts, give specific, descriptive, and detailed instructions about the context, output length, format, style, and audience. Note that we followed this practice in the template example given at the beginning. Finally, avoid technical jargon and use straightforward language.
In short, avoid vague descriptions when crafting prompts. For example, asking the model to generate a short text about prompt engineering is ineffective because it doesn’t specify how many paragraphs, sentences, or words you want. To make the prompt more effective, explicitly specify an expected output length, such as two sentences, and you’ll see this reflected in the output.
Let’s clarify this with an example. Compare the following two prompts:
- General Prompt:
prompt = "How to filter out outliers in my data?
- Specific Prompt:
prompt = "Explain the main procedure of filtering outliers in a dataset with the Python programming language in 4 bullet points.
The second prompt is more likely to yield a detailed and focused response.
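To see the difference for yourself, you could send both prompts through the get_response helper sketched in the introduction (this reuses our hypothetical setup and is purely optional):
```python
# Compare a general and a specific prompt, reusing the get_response helper sketched earlier.
general_prompt = "How to filter out outliers in my data?"
specific_prompt = (
    "Explain the main procedure of filtering outliers in a dataset "
    "with the Python programming language in 4 bullet points."
)

for label, prompt in [("General", general_prompt), ("Specific", specific_prompt)]:
    print(f"--- {label} prompt ---")
    print(get_response(prompt))
```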
Example Cases
General Prompt: How do I add numbers in Excel?
Specific Prompt: How do I add up a row of dollar amounts in Excel? I want to do this automatically for a whole sheet of rows with all the totals ending up on the right in a column called “Total”.
General Prompt: Summarize the following text: [text]
Specific Prompt: Summarize the report with a maximum of 5 sentences, while focusing on aspects related to AI and data privacy. [text]
Principle 2: Action Verbs
Start with an active verb, such as “Create …”, “Summarise …”, “Structure …”, “Generate …” or “Translate …”.
When giving instructions for a task, choose action verbs that clearly guide the model. Examples include write, complete, explain, describe, or evaluate. Using ambiguous verbs like understand, think, feel, try, and know can confuse the model.
For instance, a prompt asking the model to “think about the issue of demand and supply” is ineffective. While the model may generate some output on the topic, the prompt does not make clear what is expected.
An effective prompt will always use an action verb that clearly instructs the model on what to do.
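As a quick illustration, here is the same idea in code, again reusing the hypothetical get_response helper: a vague verb versus a clear action verb.
```python
# A vague verb versus a clear action verb (reusing the get_response helper from earlier).
vague_prompt = "Think about the issue of demand and supply."
action_prompt = "Explain the issue of demand and supply in 3 sentences."

print(get_response(vague_prompt))   # typically produces an unfocused answer
print(get_response(action_prompt))  # follows a clear, bounded instruction
```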
Principle 3: Iterating on Prompts
Although the prompt template will get you a fairly structured response from the AI, it’s rare to get the perfect response on the first try. Therefore, iteration is key. If the initial response doesn’t meet your expectations, consider it a starting point rather than the final result. You can refine your prompt by asking follow-up questions, specifying more details, or changing the nature of your prompt to make it more accurate. Iterating on your prompt multiple times often leads to progressively better results.
Example Case
Try modifying the prompts and observe how the model’s responses change. Here are a few variations to try:
1. Base Prompt: Explain the concept of machine learning.
This prompt is rather broad and leads to a variety of responses from the AI. If you input this prompt into ChatGPT in five different chats, you will receive different explanations each time. Try it for yourself! Therefore, it is likely you won’t obtain a response that perfectly suits your needs.
2. Changing the Tone: Explain the concept of machine learning in a friendly and simple manner.
For instance, by asking, “Explain the concept of machine learning in a friendly and simple manner,” you’re directing the AI to adjust its language to be more approachable. This can make complex information more accessible, especially if you are not an expert, and want to learn the basics of machine learning without getting intimidated by technical jargon.
3. Adding Constraints: In 3 paragraphs, explain the concept of machine learning in a friendly and simple manner.
By specifying constraints, you focus the AI’s response. This constraint compels the AI to be concise and to prioritize the most important points, making the information easier to digest and recall. In addition, you will get a more structured response.
4. Role-Playing: Act as a data scientist. In 3 paragraphs, explain the concept of machine learning in a friendly and simple manner.
When you instruct the AI to “act as a data scientist”, you’re doing more than just seeking information; you’re initiating a shift in perspective. This approach not only tailors the response more closely to your query but also simulates an interaction where you could be conversing with an expert of your choice, be it a data scientist, doctor, journalist, or professor. It’s like having a specialist on call, ready to engage in a detailed conversation at any moment.
Iterating on prompts is not only beneficial for getting better answers from the AI model; it also encourages you to think critically about the type of information you need and how best to extract it. Observing how the AI’s responses change with different prompts gives you a better understanding of how AI “understands” and processes human language. This practice therefore not only improves your interactions with AI but also builds your communication skills by teaching you how to ask clear, effective questions.
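One way to experiment with these variations is to loop over them and compare the responses side by side, again using the hypothetical get_response helper from the introduction:
```python
# Iterate over progressively refined prompts and compare the responses.
prompt_variations = [
    "Explain the concept of machine learning.",
    "Explain the concept of machine learning in a friendly and simple manner.",
    "In 3 paragraphs, explain the concept of machine learning in a friendly and simple manner.",
    "Act as a data scientist. In 3 paragraphs, explain the concept of machine learning "
    "in a friendly and simple manner.",
]

for i, prompt in enumerate(prompt_variations, start=1):
    print(f"--- Variation {i} ---")
    print(get_response(prompt))
```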
Principle 4: Structure Prompt Components
So far, we have assumed that a prompt only contains instructions. But in most cases, the prompt also contains input data to which the model must respond. For example, when summarizing a text, the prompt contains both the summarization instruction and the text to be summarized. Similarly, for coding-specific questions, the prompt includes both the question about the code and the code itself.
As discussed in Principle 1, the model can only respond to what you instruct it to do and knows nothing about your thoughts. Therefore, it is crucial to clearly distinguish between the different so-called prompt components. For instance, in the case of summarizing, clearly separate the instruction from the text.
To write an effective prompt, always state the instructions at the beginning. Follow this with the text, code, or other content on which these instructions must be executed.
Use this procedure to clearly distinguish the different prompt components:
- Add delimiters such as curly braces {}, square brackets [], backticks (best practice is three: ```), or any other tokens around the various input parts.
- Mention in the prompt which delimiters are used.
This two-step procedure allows the model to know exactly where to find the input in the prompt. Here’s an example of a text summarization prompt where we delimit the input text with triple backticks:
Summarize the text about machine learning delimited by triple backticks with a maximum of 5 bullet points: ```Machine learning is a subset of artificial intelligence that involves the use of algorithms and statistical models to enable computers to perform tasks without explicit instructions. It relies on patterns and inference instead. Common applications of machine learning include recommendation systems, image recognition, and autonomous driving. By training on large datasets, machine learning models can improve their performance over time, making them highly effective for various complex tasks.
```
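In code, you can add the delimiters yourself when constructing the prompt, for example with an f-string. This is again a sketch that reuses the hypothetical get_response helper from the introduction:
```python
# Build a summarization prompt that wraps the input text in triple-backtick delimiters.
text = (
    "Machine learning is a subset of artificial intelligence that involves the use of "
    "algorithms and statistical models to enable computers to perform tasks without "
    "explicit instructions. It relies on patterns and inference instead."
)

prompt = (
    "Summarize the text about machine learning delimited by triple backticks "
    f"with a maximum of 5 bullet points: ```{text}```"
)
print(get_response(prompt))
```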
Conclusion
Well done! We have laid the groundwork for building effective prompts using prompt engineering principles. These include asking precise questions, using action verbs, iterating on prompts, and specifying distinct prompt components.