Welcome to our series about Prompt Engineering! In this series we will teach you all the ins and outs of prompt engineering, or in plain English: how to interact with Generative AI. When using Generative AI, like ChatGPT, the better you formulate your instruction, called a prompt, the better its response will be. The practice of designing good instructions is called prompt engineering. In the upcoming series of articles, we will help you become a prompt engineer.
Why should I learn “prompt engineering”? A model such as ChatGPT bases its answer on the question, the prompt, that is fed to it. A sharp question with clear instructions therefore leads to better answers. In other words, the quality of GAI’s answers depends on the prompts provided. So always hold on to this thought:
High-quality prompts lead to high-quality answers; conversely, low-quality prompts lead to low-quality answers.
That is why in this module we will equip you with a range of strategies for triggering the desired response from GAI. We mainly use ChatGPT, since it is the most commonly used GAI, but the same principles apply to other language models such as Claude, to image generation in Midjourney or DALL-E, or even to our own chatbot 😉
The Key Principles of Prompt Engineering
As mentioned earlier, prompts are the instructions you provide to the model, guiding the kind of answer you’re seeking. Think of it this way: the model is like someone unaware of your specific needs or queries. Much like when interacting with a person, conveying your intentions or questions clearly is crucial. A poorly structured prompt can result in undesirable responses from the model. This is why it is important to clearly convey your needs to the model through well-written prompts.
One mental model you can use (technically not correct, but sufficient for now) is to assume that you are talking to a blank sheet of paper. The model does not know who is in front of it or what they want from it, and fills in its answer with whatever is most likely to answer your question. It literally does what it is asked to do; to make it do what you want, you need to shape its answer through prompt engineering.
Principle 1: Be Specific
The model doesn’t know what your intentions are when you start an interaction. Providing more details about the task you want to perform in your prompt helps the model to understand you better and therefore provide a conclusive answer. So, when you create a prompt, it should be specific, descriptive, and contain detailed instructions about the subject, context, output length, format, style, and audience.
Therefore, avoid vague descriptions when sending a question. For example, a prompt you may already have tried since reading this, “generate a short text about prompt engineering”, is ineffective. Why? What does “a short text” mean? That is ambiguous. What is the subject? Prompt engineering, but what exactly do you want to find out about it? The model fills these gaps in with the most likely interpretation of what you probably mean.
Let’s clarify this with an example. Compare the following two prompts:
- General Prompt:
prompt = "How to filter out outliers in my data?
- Specific Prompt:
prompt = "Explain the main procedure of filtering outliers in a dataset with the Python programming language in 4 bullet points.
Prompt 1 will create a response that probably covers the general procedure for identifying and removing outliers in data. But suppose prompt 2 reflects our actual objective: we were looking for how to do this from a practical point of view, namely via Python. There is a mismatch of interpretation here. Avoid this by being specific about your intentions.
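To illustrate the kind of concrete, practical answer the specific prompt is aiming for, here is a minimal sketch of outlier filtering in Python using the common 1.5 × IQR rule; the column name and the example data are our own illustrative assumptions, not part of the original prompt.

```
import pandas as pd

def filter_outliers_iqr(df: pd.DataFrame, column: str) -> pd.DataFrame:
    """Drop rows whose value in `column` lies outside 1.5 * IQR of the quartiles."""
    q1 = df[column].quantile(0.25)  # first quartile
    q3 = df[column].quantile(0.75)  # third quartile
    iqr = q3 - q1                   # interquartile range
    lower, upper = q1 - 1.5 * iqr, q3 + 1.5 * iqr
    return df[(df[column] >= lower) & (df[column] <= upper)]

# Example usage with a small, made-up dataset containing one obvious outlier
data = pd.DataFrame({"value": [10, 12, 11, 13, 250, 9, 11]})
print(filter_outliers_iqr(data, "value"))
```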
Example Cases
General Prompt: How do I add numbers in Excel?
Specific Prompt: How do I add up a row of dollar amounts in Excel? I want to do this automatically for a whole sheet of rows with all the totals ending up on the right in a column called “Total”.
General Prompt: Summarize the following text: [text]
Specific Prompt: Summarize the report with a maximum of 5 sentences, while focusing on aspects related to AI and data privacy. [text]
Principle 2: Action Verbs
Start with an active verb, such as “Create …”, “Summarise …”, “Structure …”, “Generate …” or “Translate …”. Specificity matters in the task we send to the model as well, so begin each query with a clear active verb.
More generally, when giving instructions for a particular task, choose an action verb with a clear message. Think of write, complete, translate, summarize. Using ambiguous verbs like understand, think, feel, try, and know can confuse the model.
For instance, a prompt asking the model to “think about the issue of demand and supply” is ineffective. While the model may generate some output on the topic, the prompt does not make clear what is expected. Therefore, an effective prompt will always use an action verb that clearly instructs the model on what to do.
Principle 3: Iterating on Prompts
It’s rare to get the perfect response on the first try. Therefore, iteration is key. If the initial response doesn’t meet your expectations, consider it a starting point rather than the final result. You can refine your prompt by asking follow-up questions, specifying more details, or changing the nature of your prompt to make it more accurate. Iterating on your prompt multiple times often leads to progressively better results.
This requires reflecting on your instructions: why isn’t the model responding as I intended or envisioned? How can I clarify my instructions so that the model better understands what I mean?
Maybe you have already seen where we are going and are thinking, “I always ask short, one-line questions.” This relates to the concept of “garbage in, garbage out.” Although language models have become very good and you may be satisfied with their answers, there is good news: much more is possible! We will help you achieve this. A good prompt can sometimes be pages long and require well-thought-out instructions. Don’t worry, this is the exception rather than the rule!
Example Case
Try modifying the prompts and observe how the model’s responses change. Here are a few variations to try:
1. Base Prompt: Explain the concept of machine learning.
This prompt is rather broad and leads to a variety of responses from the AI. If you input this prompt into ChatGPT in five different chats, you will receive different explanations each time. Try it for yourself! Therefore, it is likely you won’t obtain a response that perfectly suits your needs.
2. Changing the Tone: Explain the concept of machine learning in a simple manner to a non-expert.
For instance, by asking, “Explain the concept of machine learning in a simple manner to a non-expert,” you’re directing the AI to adjust its language to be more approachable. This can make complex information more accessible, especially if you are not an expert, and want to learn the basics of machine learning without getting intimidated by technical jargon.
3. Adding Constraints: In 3 paragraphs, explain the concept of machine learning in a friendly and simple manner.
By specifying constraints you focus the AI’s response. The paragraph limit compels the AI to be concise and to prioritize the most important points, making the information easier to digest and recall. In addition, you will get a more structured response.
4. Role-Playing: Act as a data scientist. In 3 paragraphs, explain the concept of machine learning in a friendly and simple manner.
When you instruct the AI, “You are a machine learning professor”, you’re doing more than just seeking information; you’re initiating a shift in perspective.
Wait, a shift in perspective? Think about this carefully. Instructing a GAI model in this way provides enormous flexibility: you can interact with “almost” any persona or approach. This not only tailors the response more closely to your query but also simulates an interaction in which you are conversing with an expert of your choice, be it a data scientist, doctor, journalist, or professor, or even a nineteenth-century tradesman (note that it is the model’s training-data interpretation of that person). This means you can have a suitable person at your help desk, ready to engage in a detailed conversation at any moment.
Let’s combine different concepts: specificity (principle 1) and role-playing. We are not limited to a generic machine learning professor; this professor can be personalized. What is our professor like? How does it respond to a question? For example, a stubborn, critical, and overly fact-focused machine learning professor is a different professor than an engaging, open-minded, metaphor-loving one. A combined prompt could read: “Act as an engaging, open-minded machine learning professor who loves metaphors. In 3 paragraphs, explain the concept of machine learning in a friendly and simple manner.”
An important consideration, as the example above illustrates, is that the model is full of assumptions. Assumptions are necessary for it to function, but they also lead to bias in the model. Therefore, always remain critical of the way bias is present in the model and of how your question deals with this.
Circling back to our main topic: iterating on prompts is not only beneficial for getting improved answers from the AI model, it also encourages you to think critically about the type of information you need and how best to extract it. Observing how the AI’s responses change with different prompts gives you a better understanding of how AI “understands” and processes human language. This practice therefore not only improves your interactions with AI but also builds up your communication skills by teaching you how to ask clear, effective questions.
Principle 4: Structure Prompt Components
So far, we have assumed that a prompt only contains instructions. But in most cases, the prompt also contains input data to which the model must respond. For example, when summarizing a text, the prompt will contain both the request for a summary and the text to be summarized. Similarly, for coding-specific questions, the prompt includes both the question about the code and the code itself.
As discussed in Principle 1, the model can only respond to what you instruct it to do and knows nothing about your thoughts. Therefore, it is crucial to clearly distinguish between the different so-called prompt components. For instance, in the case of summarizing, clearly separate the instruction from the text.
To write an effective prompt always start by describing the instructions at the beginning. Follow this with the text, code, or other content on which these instructions must be executed.
Use this procedure to clearly distinguish the different prompt components:
- Add delimiters such as curly braces {}, square brackets [], backticks (best practice is three: ```), or any other tokens around the various input parts.
- Mention in the prompt which delimiters are used.
This two-step procedure allows the model to know exactly where to find the input in the prompt. Here’s an example of a text summarization prompt where we delimit the input text with triple backticks:
Summarize the text about machine learning delimited by triple backticks with a maximum of 5 bullet points: ```Machine learning is a subset of artificial intelligence that involves the use of algorithms and statistical models to enable computers to perform tasks without explicit instructions. It relies on patterns and inference instead. Common applications of machine learning include recommendation systems, image recognition, and autonomous driving. By training on large datasets, machine learning models can improve their performance over time, making them highly effective for various complex tasks.
```
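If you assemble prompts programmatically, the same two-step procedure carries over directly to code. The sketch below is our own illustration in Python (the function name and example text are assumptions, not part of any particular library): the instruction comes first, the delimiters are named in the instruction, and the input text is wrapped in them.

```
# Three backticks used as the delimiter around the input text
TRIPLE_BACKTICKS = "`" * 3

def build_summary_prompt(text: str, max_bullets: int = 5) -> str:
    """Place the instruction first, then the input text wrapped in triple backticks."""
    instruction = (
        f"Summarize the text delimited by triple backticks "
        f"with a maximum of {max_bullets} bullet points:"
    )
    return f"{instruction}\n{TRIPLE_BACKTICKS}{text}{TRIPLE_BACKTICKS}"

# Example usage with a short placeholder text
example_text = "Machine learning is a subset of artificial intelligence ..."
print(build_summary_prompt(example_text))
```

Keeping the instruction and the delimited input strictly separated in this way makes it unambiguous to the model which part of the prompt is the task and which part is the data.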
Conclusion
Well done! We have laid the groundwork for building efficient prompts using prompt engineering principles. These include asking precise questions, using action verbs, iterating on prompts, and specifying distinct prompt components.
Exercise Set
Understanding the Basics
Define Prompt Engineering
Answer: Prompt engineering is the practice of designing instructions (prompts) for Generative AI models to elicit desired and high-quality responses.
Why is specificity important in prompt engineering?
Answer: Specificity helps the AI model understand the exact requirements and context of the task, leading to more accurate and relevant responses.
List the four key principles of prompt engineering discussed:
- Be Specific
- Use Action Verbs
- Iterate on Prompts
- Structure Prompt Components
Applying the Principles
Be Specific
- General Prompt: “Help me understand calculus.”
- Task: Rewrite the above prompt to make it more specific for a first-year student struggling with the concept of derivatives, requesting a step-by-step explanation.
Example Answer:
Act as a math professor teaching first-year students in a university bachelor’s course. Provide a step-by-step explanation of the basic concept of derivatives, including simple examples to illustrate each step.
Scenario-Based Iteration:
Initial Prompt: “Describe how to take good lecture notes.”
Task: Assume the first response was too general. Iterate on the prompt to focus on methods suitable for large university lectures.
Example Answer:
Describe note-taking methods for first-year students in large university lectures, including tips on active listening and organizing notes for complex subjects.
Structure Prompt Components:
Coding Prompt Example for University Assignments:
Task: Create a structured prompt for debugging the following Java code from a first-year programming assignment, using delimiters.
public class Main {
    public static void main(String[] args) {
        int numbers = {1, 2, 3, 4, 5};
        for(int i = 0; i <= numbers.length; i++) {
            System.out.println(numbers[i]);
        }
    }
}
Answer:
Debug the following Java code from a first-year programming assignment and explain the errors:
```
public class Main {
    public static void main(String[] args) {
        int numbers = {1, 2, 3, 4, 5};
        for(int i = 0; i <= numbers.length; i++) {
            System.out.println(numbers[i]);
        }
    }
}
```
Iteration to Full Prompt
- Prompt: “Explain the human digestive system.”
- Task: Prompt ChatGPT with the prompt above, identify the need for more depth in a specific area (e.g., the role of enzymes), and then create a more zoomed-in prompt following the principles.
Example Answer:
“Explain the role of enzymes in the human digestive system for a first-year Biology class, including their functions in breaking down carbohydrates, proteins, and fats, and their importance in nutrient absorption and overall health.”
Create Your Own Prompt
- Scenario: You are a first-year university student preparing your resume for job applications. You need assistance in writing a compelling personal profile section that highlights your academic achievements and motivation. Below is your initial prompt attempt to seek help from an AI tool like ChatGPT.
- Initial Prompt:
write a CV for a job application
- Task: Given what you have learned so far, write a prompt for the given task.
Example Prompt
Act as an experienced career coach specializing in resume writing for recent graduates. I am currently updating my resume and need assistance creating a personal profile. I want to highlight that I am a motivated student who has completed a Bachelor’s degree in Physics and is now pursuing a Master’s in Applied Physics. Please write a concise personal profile of approximately 100 words that showcases my academic achievements, relevant skills, and enthusiasm, aiming to grab the attention of potential employers in the field of applied physics. Ensure the tone is professional and engaging. Here are some courses that I took: [LIST OF COURSES], some personality traits: [PERSONALITY TRAITS], and some volunteer work that I did: [DESCRIPTION OF VOLUNTEER WORK]