Across universities worldwide, a quiet revolution is underway. Generative artificial intelligence (AI) tools such as ChatGPT, Copilot, DeepSeek and Gemini are being used to produce essays, summarise readings, and even conduct complex assignments.
Generative AI is a form of artificial intelligence that can perform a variety of creative tasks across diverse domains, such as art, music and education.
For many university teachers, this raises alarm bells about plagiarism and integrity. While some institutions have rushed to restrict or support AI use, others are still unsure how to respond.
But focusing only on policing misses a bigger issue: whether students are really learning. As an education researcher, I’m interested in how students learn. My colleagues and I recently explored the role AI could play in learning – if universities tried a new way of assessing students.
We found that many traditional forms of assessment in universities remain focused on memorisation and rote learning. These are exactly the tasks that AI performs best.
We argue that it’s time to reconsider what students should be learning. This should include the ability to evaluate and analyse AI-created text, a skill essential for critical thinking.
If that ability is what universities teach and look for in a student, AI will be an opportunity and not a threat.
We’ve suggested some ways that universities can use AI to teach and assess what students really need to know.
Reviewing studies of AI
Universities are under pressure to prepare graduates who are more than just knowledgeable. They need to be self-directed, lifelong learners who are independent, critical thinkers and can solve complex problems. Employers and societies demand graduates who can evaluate information and make sound judgements in a rapidly changing world.
Yet assessment (testing what students know and can do) tends to focus on more basic thinking skills.
Our research took the form of a conceptual literature review, analysing peer-reviewed studies published since the release of the AI tool ChatGPT in late 2022. We examined how generative AI is already being used in higher education, its impact on assessment, and how these practices align (or fail to align) with Bloom’s taxonomy.
Bloom’s taxonomy is a framework widely used in education. It organises cognitive (thinking) skills into levels, from basic (remembering and understanding), to advanced (creating and evaluating).
Several key patterns emerged from our analysis:
Firstly, AI excels at lower-level tasks. Studies show that AI is strong in remembering and understanding. It can generate multiple-choice questions, definitions, or surface explanations quickly and often with high accuracy.
Secondly, AI struggles with higher-order thinking. At the levels of evaluating and creating, its effectiveness drops. For instance, while AI can draft a business plan or a healthcare policy outline, it often lacks contextual nuance, critical judgement and originality.
Thirdly, the role of university teachers is changing. Instead of spending hours designing and grading lower-level assessments, they can now focus on scaffolding tasks that AI cannot master alone, thus promoting analysis, creativity and self-directed learning skills. Self-directed learning is defined as “a process where individuals take initiative to diagnose their learning needs, set learning goals, find resources, choose and implement strategies, and evaluate their outcomes, with or without assistance from others.”
Lastly, the opportunities AI presents seem to outweigh the threats. While concerns about cheating remain real, many studies highlight AI’s potential to become a learning partner. Used well, it can help generate practice questions, provide feedback, and stimulate dialogue (if students are guided to critically engage with its outputs).
These findings prompt universities to move beyond “knowledge checks” and invest in assessments that not only measure deeper learning, but promote it as well.
How to promote critical thinking
So how can universities move forward? Our study points to several clear actions:
- Redesign assessments for higher-order thinking skills: Instead of relying on tasks that AI can complete, university teachers should design authentic, context-rich assessments, such as case studies, portfolios, debates, and projects grounded in local realities.
- Use AI as a partner, not a threat: Students can be asked to critique AI-generated responses, identify gaps, or adapt them for real-world use. This turns AI into a tool for practising analysis and evaluation.
- Build assessment literacy among university teachers: University teachers need support and training to create AI-integrated assessments.
- Promote AI fluency and ethical use: Students must learn not just how to use AI, but how to question it. They must understand its limitations, biases and potential pitfalls. Students should also be made aware that transparency in disclosing AI use can support academic integrity.
- Encourage the development of self-directed learning skills: AI should not replace the student’s effort, but rather support their learning journey. Designing assessment tasks that foster goal-setting, reflection and peer dialogue is therefore crucial for developing lifelong learning habits.
By fostering critical thinking and embracing AI as a tool, universities can turn disruption into opportunity. The goal is not to produce graduates who compete with machines, but to cultivate independent thinkers who can do what machines cannot: reflect, judge, and create meaning. Assessment in the age of AI could become a powerful force for cultivating the kind of graduates our world needs.