Find helpful tutorials or share inspiring use cases on AI technology for higher education.

Shared content may not reflect the policies of Tilburg University on the use of AI. 

Follow the Thought, Find the Right Answer: Chain-of-Thought Prompting

Thanks for tuning in to the fifth part of the prompt engineering series. We will take one major step forward in this article towards becoming prompt engineers. But, to become proficient, we have to progress step by step, much like AI models. Yes, we are getting close to the point. Let’s explore multi-step and chain-of-thought prompting, which breaks down multiple or complex tasks into smaller pieces to reach the final output.

By the end of this part you will be able to:

  • Understand and implement multi-step prompting to break down complex tasks into simpler, manageable steps for the model.
  • Apply multi-step prompting in scenarios that require logical consistency and multiple stages of transformation.
  • Use chain-of-thought prompting to lead the model through intermediate reasoning steps, improving clarity and reducing errors.

Multi-Step Prompting

Multi-step prompting is a method that directs the behavior of the model to generate effective outputs by breaking down an end goal into a series of smaller, manageable steps. The model works through each step before giving the final output. These steps make the task easier for the model and increase the chance of success. Therefore, this style of prompting is useful for the following types of tasks:

  • Sequential tasks, such as generating a coherent text based on a given outline, benefit from multi-step prompts because they often need to complete a series of steps in a specific order. This is what we also saw in the second article with the Inner Monologue, where we gave step-by-step instructions to create a proof exam assistant.
  • Cognitive tasks, such as assessing the correctness of a solution, can also benefit from using multi-step prompts because they involve cognitive processing, problem-solving, and decision-making.

In short, a multi-step prompt breaks down a complex task into smaller, manageable steps the model must take to achieve the desired result.

Example Multiple Transformations:

Multi-step prompting is particularly useful when the AI model needs to perform multiple transformations simultaneously. It is also beneficial when the task at hand requires logical coherence.

In this context, we instruct the model to create a summary. Typically, this is done with a simple command like “Summarize this article.” However, for a high-quality summary, it is often necessary to incorporate information from earlier sections to make sense of the later sections. This is where a multi-step prompt becomes valuable. By employing a running summary, the model continuously summarizes the text while checking, at any given point in the document, that the content is coherent, contains the information needed to understand later sections, and is logically structured throughout.

Prompt: Transform the uploaded PDF with the following steps:
- Step 1: Divide the document into manageable sections.
- Step 2: Summarize each section individually.
- Step 3: Concatenate the summaries of each section.
- Step 4: Summarize the concatenated section summaries to produce a higher-level summary.

Here we tested the prompt using the literature review of a thesis.
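The four steps of this prompt can also be sketched as a small pipeline in code. In the sketch below, `call_llm` is a hypothetical stand-in for whatever model API you use; it is stubbed to return the first sentence of its input so the example runs on its own.

```python
def call_llm(prompt: str, text: str) -> str:
    """Hypothetical stand-in for a real model call; here it just keeps the first sentence."""
    return text.strip().split(".")[0] + "."

def multi_step_summary(document: str) -> str:
    # Step 1: divide the document into manageable sections (here: paragraphs).
    sections = [s for s in document.split("\n\n") if s.strip()]
    # Step 2: summarize each section individually.
    section_summaries = [call_llm("Summarize this section.", s) for s in sections]
    # Step 3: concatenate the section summaries.
    concatenated = "\n".join(section_summaries)
    # Step 4: summarize the concatenation to produce a higher-level summary.
    return call_llm("Summarize these summaries.", concatenated)

doc = "Supply rises. Prices fall later.\n\nDemand grows. Markets adjust."
print(multi_step_summary(doc))
```

With a real model behind `call_llm`, each step would be a separate request, so earlier summaries can be passed along as context for later sections.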

Classroom Example Case: Combining Multi-Step Prompting and Conditional Prompts

With the techniques of multi-step prompting and conditional prompts, it is possible to create scenario analyses through an AI model. These methods enable both students and teachers to develop hypothetical class case studies on any given subject and to interact and engage with the study material.

Prompt: Create a hypothetical case study for an economics class focusing on supply and demand. I will provide you with the market sector in <> below.

  • Step 1: Outline a real-world scenario involving a specific product or service and include key factors such as price, consumer behavior, and market conditions.

  • Step 2: Check whether the conditions of a functioning market are met (e.g. availability of substitutes, competition, rational consumer behavior).
    • If the conditions are not met, stop short of providing a full case study and instead highlight the specific conditions that are not satisfied.
    • If the conditions are met, provide a set of analysis questions to test students’ knowledge of supply and demand.
Market sector: <Car Industry>
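If you want to generate several of these case studies, the prompt can be kept as a template and filled per market sector. A minimal sketch (the template wording mirrors the prompt above; the function name is our own):

```python
# Template for the multi-step, conditional case-study prompt shown above.
CASE_STUDY_TEMPLATE = """Create a hypothetical case study for an economics class \
focusing on supply and demand.
- Step 1: Outline a real-world scenario involving a specific product or service \
and include key factors such as price, consumer behavior, and market conditions.
- Step 2: Check whether the conditions of a functioning market are met \
(e.g. availability of substitutes, competition, rational consumer behavior).
  - If the conditions are not met, highlight the specific conditions that are not satisfied.
  - If the conditions are met, provide a set of analysis questions to test \
students' knowledge of supply and demand.
Market sector: <{sector}>"""

def build_case_study_prompt(sector: str) -> str:
    """Fill the market sector into the multi-step, conditional prompt template."""
    return CASE_STUDY_TEMPLATE.format(sector=sector)

print(build_case_study_prompt("Car Industry"))
```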

Chain-of-Thought Prompting

Chain-of-Thought (CoT) prompting is a technique designed to improve the reasoning capabilities of large language models (LLMs) by encouraging them to explain their thinking process step by step. Instead of having the model give the final answer right away, you now instruct the model to explain its reasoning before it gives an answer. Research shows that this significantly improves the outcome of the answer. How does it work? Let’s explain that step by step 😉

CoT prompts instruct the LLM to break a problem down into smaller, logical steps. The model is asked to explain each step in detail, forcing it to think about the process that leads to the answer rather than jumping straight to it. These intermediate steps form the “chain of thought”. Writing such a prompt does not have to be hard: adding the phrase “Let’s think step by step” at the end of the prompt may be enough. This simple phrase encourages the model to “reason out loud” and go through the necessary steps.

Zero-shot CoT: Here, the model is encouraged to generate reasoning steps independently, without explicit examples, using a sentence such as “Let’s think step by step”.
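As a minimal illustration, zero-shot CoT can be implemented as a one-line wrapper that appends the trigger phrase to any question (the function name is our own):

```python
def zero_shot_cot(question: str) -> str:
    """Turn a plain question into a zero-shot chain-of-thought prompt."""
    return question.rstrip() + "\n\nLet's think step by step."

# A classic example where models often answer too quickly without reasoning.
prompt = zero_shot_cot(
    "A bat and a ball cost $1.10 in total. The bat costs $1.00 more "
    "than the ball. How much does the ball cost?"
)
print(prompt)
```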

Another way to use CoT is to give the model the chain of thought it must follow to come up with the solution, which is what we just did in multi-step prompting. In other words, we were already implicitly doing chain-of-thought prompting.

CoT is especially effective for tasks that require multiple steps, such as math problems, logic puzzles, and questions that require multiple lines of reasoning. The step-by-step process helps the model handle complex tasks more accurately by focusing the model’s attention on one part of the problem at a time. In addition, CoT’s step-by-step reasoning makes the model’s reasoning process more transparent and easier to interpret.

Model dependency: It is important to note that the effectiveness of CoT prompting is highly dependent on the capabilities of the underlying language model. The technique is also not always effective, especially on tasks that do not involve clear sequential reasoning.


Let’s examine the difference between multi-step and chain-of-thought prompts. In multi-step prompts, the different steps of the task are directly included in the prompt itself, which directs the model’s behavior. Chain-of-thought prompts take a different approach by instructing the model to generate intermediate steps or thoughts in its output while solving the problem. This helps to gain insight into the model’s decision-making. A limitation of chain-of-thought prompting is that a step with flawed reasoning will lead to an unsuccessful result.

Example Case

To illustrate how you can use Chain-of-Thought (CoT) prompting when explaining code, consider a scenario where a language model is asked to analyze and explain a piece of Python code.

Suppose you have the following Python code:

```
def calculate_average(numbers):
    total = sum(numbers)
    count = len(numbers)
    if count == 0:
        return 0
    average = total / count
    return average

my_list = [1, 2, 3, 4, 5]
result = calculate_average(my_list)
print(result)
```

You could simply ask the language model, “Explain this code.” But then the model still has a lot of freedom in how it explains it. Instead, with CoT you can structure the prompt so that the model is forced to think step by step about the questions you have.

Here’s an example of how you might formulate a CoT prompt to explain this code:

Think about the given Python code step by step. First, explain what the calculate_average function does. Then, explain what the variables my_list and result represent. Finally, explain what the print statement will do. Give your explanation step by step with references to the code. The code is given below:
```
def calculate_average(numbers):
    total = sum(numbers)
    count = len(numbers)
    if count == 0:
        return 0
    average = total / count
    return average

my_list = [1, 2, 3, 4, 5]
result = calculate_average(my_list)
print(result)
```

With this prompt, you force the model to analyze the code in logical steps and provide intermediate reasoning, so that it does not immediately jump to the final conclusion.
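The same idea can be applied to any snippet by wrapping it in the step-by-step instructions programmatically. A minimal sketch, with wording adapted from the prompt above (the function name is our own):

```python
def build_code_cot_prompt(code: str) -> str:
    """Wrap a code snippet in step-by-step explanation instructions."""
    instructions = (
        "Think about the given Python code step by step. "
        "First, explain what each function does. "
        "Then, explain what the variables represent. "
        "Finally, explain what the output will be. "
        "Give your explanation step by step with references to the code. "
        "The code is given below:"
    )
    # Fence the code so the model can clearly separate instructions from input.
    return f"{instructions}\n```\n{code.strip()}\n```"

print(build_code_cot_prompt("print('hello')"))
```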

Conclusion

In conclusion, learning multi-step and chain-of-thought prompting is vital for advancing your skills as a prompt engineer. These techniques allow you to break down complex tasks into smaller, manageable steps, which improves the AI’s logical coherence and reduces errors. By understanding the differences and applications of each method, you can steer AI models to produce accurate and high-quality outputs.

As the saying goes, ‘Two heads are better than one,’ which is why we are not done yet and will switch to tree-of-thoughts prompting in the next part.