
The Dual Role of AI: Combating Lazy Learning or Encouraging It? It All Depends on Educational Guidelines

Regardless of university policies, students are already using AI for rewriting texts, completing assignments, summarizing material, or getting explanations. As an enthusiastic user of ChatGPT and NotebookLM who is involved in research on AI integration in higher education, I see myself not as an expert but as an ‘early adopter’ of AI. Over the past year, I have been working on content that shows students examples of how AI can be used in an enriched (“augmented”) way.

In this essay, I combine current research findings with practical experience to open a deeper discussion about the role of AI in education. Although the future is difficult to predict, it is essential to think critically about this role now. I will touch on several topics; the debate is too extensive to cover every nuance, but I encourage you to take these points with you and reflect on them. AI will inevitably be used in many workplaces and will therefore certainly come your way. It is thus all but necessary to be well informed about the inherent limitations, debates, and opportunities of AI, alongside the practicalities of how to use it.

The development of AI is proceeding at an almost ‘inhumanly’ fast pace. That is why I hope to contribute to the current debate and to raise awareness of both the opportunities and the limitations surrounding AI.


Dependence on Knowledge Level and Learning Objectives

Whether AI merely supports or genuinely augments education depends strongly on the phase a student is in. First-year students with limited prior knowledge have different needs and challenges than master’s students or PhD candidates who already possess a deep understanding of a subject. Students vary both in their level of knowledge and in the relevance of the subject or task at hand, which ranges from short-term importance to being fundamental within a field. To understand this dynamic, it is useful to employ a framework that combines these two dimensions. This results in four quadrants, each with its own guidelines for using AI.

Initial learning phase in core areas

This quadrant contains students with little to no prior knowledge of a topic that is of significant importance to their studies or future careers. Here, AI can be harmful if students use it for assignments without developing their own understanding. This is what I would call ‘lazy learning’. Since they have not yet mastered the basics, they risk developing a superficial grasp of the material and may fail to recognize errors from AI, such as misinformation or hallucinations. An essential characteristic of an expert is thorough mastery of basic skills. If students rely too heavily on AI without understanding the underlying concepts, they deprive themselves of the opportunity to develop essential skills and critical thinking.

An example of this is a first-year student who is starting the core subject of statistics. Statistics is a fundamental subject that provides the foundation for understanding data analysis, a skill that is essential in many academic disciplines. A good command of statistics is necessary because the subsequent course material builds on it and because a thesis often has to be based on correct data inferences. Although AI can provide correct interpretations, an overarching understanding of the material is essential.

Orientation phase in secondary domains

This involves students with limited prior knowledge about a subject that is not fundamental to their field of study or future expertise. In this situation, AI can be very useful. A personal example illustrates this: I recently had to give the website I work on a makeover and came across a problem that could not be solved using the standard functions of the site builder. I had never written code in this programming language before and did not want to delve into it in depth. Thanks to AI, which is excellent at programming, I was able to quickly generate the necessary code and solve the problem.

In addition, for subject material where the course is too short to discuss a topic in detail, AI can help students reach a basic level of knowledge. With the right prompts, they can obtain information about the pros and cons of a concept, understand what it entails, and gain insight into current debates around the topic. This allows them to learn efficiently without delving into details that are less relevant to them.

Deep specialization phase in core areas

In this quadrant, we find students who already have considerable knowledge of important subjects, such as master’s students or PhD candidates. They will likely use AI to seek further depth. However, just as in the first quadrant, ‘lazy learning’ is dangerous here: AI may misinterpret nuances or make mistakes that are difficult for a non-expert to detect.

The difference is that these students usually have enough knowledge to recognize such errors and respond to them. They can use AI as a tool to explore new perspectives, devise arguments to refute hypotheses or expand existing knowledge while remaining critical of the output. This promotes a deeper understanding and stimulates critical thinking, provided they are aware of the limitations of AI.

Suppose a master’s student specializes in behavioral economics and is working on a thesis about the impact of psychological biases on financial markets. If the student uses AI to develop complex models or conduct literature research, AI can be useful for quickly accessing information or exploring new perspectives (tip: for literature research, use NotebookLM rather than ChatGPT). However, when it comes to understanding subtle nuances in theories such as prospect theory, or developing a new econometric model, AI has its limitations. Because the subject is highly specialized, it can be difficult for the student to notice these omissions, especially if the AI responds confidently. For example, if the student asks AI to explain the implications of loss aversion for investment behavior, the model may provide an answer that is superficially correct but overlooks important nuances, such as cultural differences or recent research developments.

Expanding knowledge in non-core areas

This category concerns experts or experienced students who are dealing with topics that are not directly relevant to their area of expertise. AI is likely to be used less here because the student already has sufficient knowledge and the topic is not of great importance to their goals. The added value of AI is limited here, and the student may choose to focus their time and energy on more important or relevant topics.


Lazy prompting is lazy learning

Now that we understand how the same prompts can have different effects on a student’s learning, it is important to look at how students, including myself, actually use ChatGPT. Often, students use AI to generate an answer to a specific question; for example, we enter a multiple-choice question and ask for the correct answer and an explanation. This usage is problematic. Universities will not be happy with it, and students should not be happy with it either: although the risk depends on the difficulty of the question, AI generates answers based on probabilistic calculations, so the ‘correct’ answer it returns may simply be wrong. This brings us to an important concept in AI: hallucinations.

What are hallucinations?

Hallucinations in AI are moments when the model fabricates information or gives inaccurate answers, despite appearing confident. This happens because the model has no consciousness or understanding but purely follows statistical patterns. Each answer therefore varies, and the same prompt gives different results at different times. This means the model sometimes gives correct information and sometimes does not.
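
To make this concrete, here is a minimal Python sketch of what re-sampling the same prompt looks like in practice. It is an illustration under assumptions, not a recipe: it assumes the openai Python package is installed, an OPENAI_API_KEY environment variable is set, and the model name is only an example.

# Minimal sketch: send the same prompt several times and compare the answers.
# Assumptions: the `openai` package is installed, OPENAI_API_KEY is set,
# and "gpt-4o-mini" is just an example model name.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PROMPT = "Which measure of central tendency is most robust to outliers, and why?"

for run in range(3):
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": PROMPT}],
        temperature=1.0,  # sampling stays on, so each run can produce a different answer
    )
    print(f"--- Run {run + 1} ---")
    print(response.choices[0].message.content)

Comparing the runs is itself a small exercise in critical evaluation: agreement between them does not prove the answer is correct, but disagreement is a clear warning to verify it elsewhere.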

This leads me to the following conclusions. If you are looking for factual information or (recent) events, a tool like Google is the better option. Any user of ChatGPT or other language models is doing themselves a disservice if they only use one prompt and are satisfied with the first answer. These hallucinations are exactly the reason why you should not stick to one prompt. By relying on a single answer, you run the risk of accepting incorrect or misleading information without critical evaluation.

The potential of AI for active learning

In the previous example, we wanted a static answer (the answer to the multiple-choice question: A, B, C, or D) that is ‘hopefully’ factually correct; there, the variable, non-guaranteed output of language models is a disadvantage. However, actively seeking variation can be extremely valuable in the learning process. When we want to spar or brainstorm about hypotheses, develop an approach, come up with counterarguments, or think ‘out of the box’, an AI model that works on probability and can go in different directions can be a godsend. AI helps us by generating diverse ideas and perspectives that we might not have considered ourselves.

However, this variation can also be counterproductive when we need accurate and consistent information. In tasks where precision and factual correctness are crucial, the variability of AI answers can be confusing or misleading. Whether the variation AI provides is a help or a hindrance therefore depends on the task. It is imperative to be aware of the nature of your task and of whether its solution can be found via AI.

A final thought on this topic: “It is about not just seeking answers, but pursuing understanding.”


Bias in Large Language Models

Additionally, I would like to discuss the different forms of bias in large language models and the current methods for addressing them. Bias is intrinsic to a language model: it can creep in implicitly through the training data and explicitly through alignment methods such as Reinforcement Learning from Human Feedback (RLHF).

The role of training data

Language models such as ChatGPT are trained on huge amounts of text from the internet. However, the internet is not free of biases and stereotypes. These biases are absorbed by the AI during the training process.

Furthermore, it matters who the information comes from. The internet is currently heavily dominated by content from Western countries. If this content forms the majority of the data on which a language model is trained, it can lead to an unbalanced representation of cultures, perspectives, and knowledge. The result is a model that may not be neutral: it may adopt certain ways of thinking while marginalizing the perspectives of underrepresented groups.

Limitations of debiasing techniques

Techniques such as using human reviewers to assess and correct model output are important, given the inherent limitations of training materials. However, this approach is not without risks. Human reviewers bring their own unconscious biases, which can be unintentionally introduced or reinforced in the model.

This raises a reflective question: whose rules and values are applied during the debiasing process? If the reviewers share a specific cultural, institutional, or personal background, a limited perspective can be imposed on the model. Universities and educational institutions strive for critical evaluation and equal treatment, but if debiasing is not done carefully and inclusively, these principles can be compromised.

For an interesting read on the human work behind reinforcement learning, see the Time Magazine article on OpenAI and Kenyan Workers.

The AI’s ‘people pleaser’ mode

ChatGPT is programmed to be as helpful and friendly as possible, a true ‘people pleaser’. This means the model is biased towards pleasing the user. Thinking back to the concept of lazy learning: we are not challenged to think when we receive a neatly formulated and (seemingly) well-argued answer. A language model also sometimes adjusts its answer based on assumptions drawn from minimal information, such as a name. Recent research from OpenAI has shown that ChatGPT can exhibit subtle biases based on usernames, which the model learns explicitly or implicitly. For example, a user named Matthijs may receive different answers than someone named Julia or Mohammed, inadvertently reinforcing gender and cultural stereotypes.

This illustrates how AI can inadvertently reproduce existing biases, even though it should not have any of its own. While newer models such as GPT-4 significantly reduce these stereotypes, from roughly 1% to 0.1% of answers (a figure that is likely an underestimate given other research), caution remains warranted. Given the scale and intensity of ChatGPT use by millions of people worldwide, even a small percentage of biased answers translates into a significant number of cases.
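
Out of curiosity, this effect can be probed on a small scale. The sketch below makes the same assumptions as the earlier one (the openai package, an API key, an example model name), and the names and question are purely illustrative: it sends an identical question under different names, so systematic differences in the answers point at the name.

# Rough sketch: does the model answer differently depending on the user's name?
# Same assumptions as the earlier sketch; names and question are illustrative.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

QUESTION = "What career advice would you give me as a first-year economics student?"

for name in ["Matthijs", "Julia", "Mohammed"]:
    response = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": f"My name is {name}. {QUESTION}"}],
        temperature=0.0,  # reduce sampling noise so differences are easier to attribute to the name
    )
    print(f"--- {name} ---")
    print(response.choices[0].message.content)

A single comparison like this proves nothing on its own; OpenAI’s study relied on large numbers of prompts and statistical analysis. It does show, however, how easy it is to start checking a tool’s behavior rather than taking it on trust.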


Commodification of Learning and Authenticity

Another important consideration is that the use of AI can undermine the intrinsic value of written work. For example, when a personal letter of recommendation or an essay is generated by AI, it loses its authenticity and personal touch. If I were to admit here that this work was created by AI, the reader would probably feel cheated (consider what it would mean to you if I stated that this essay is AI-generated). A bond of trust would be broken, and the platform or medium on which the piece was published could lose its credibility.

This raises the question of which tasks we think should remain human and which can be ‘AI-tized’. Of course, this does not have to be binary; each main task consists of several subtasks. Spell checking in Word can easily be performed by AI, and generating counterarguments on which I can base a new paragraph is also a form of co-working with AI.

It is important to think about the balance between human creative work and the support that AI can provide. While some aspects of writing and communication are eminently human and require personal involvement, other tasks can be performed more efficiently with the help of AI, without compromising authenticity.


The impact of artificial intelligence on higher education is significant and inevitable. As an early adopter of AI, I have experienced the benefits of this technology first-hand. However, I realize that my enthusiasm may color my view of the potential drawbacks. It is important to remain critical and acknowledge both the positive and negative aspects of AI in education.

The innovations in AI offer new opportunities for personalized learning and efficiency. Efficiency gains can already be as simple as language models answering questions about the syllabus, or agents reducing the administrative burden on professors. My conclusion is therefore that AI will have a lasting impact on education. It is up to students, teachers, and institutions to engage with this technology consciously and critically.

I hope that this essay has contributed to a better understanding of the complex relationship between AI and higher education. It is not only about how we can use AI, but above all about how we can integrate it responsibly and ethically. There are important issues, such as environmental impact and privacy, which I have not discussed but which matter in this debate. Finally, I realize that my own perspective (I am biased: I work on course-specific chatbots at the university and am enthusiastic about AI) may have influenced the way I approached these topics.
