Developing ‘Augmented Graduates’ in the AI Era
- Artificial intelligence is not replacing human contributions, but augmenting them, which means that our graduates need to become adept at using algorithmic systems to extend their own capabilities.
- Students must learn to pair technical fluency with human judgment to avoid “cognitive offloading”—the tendency to delegate basic thinking to generative AI.
- Business schools must adapt curricula, design assessment methods, and provide faculty resources that develop graduates who can lead confidently in a technologically evolving world.
Like every major technological development before it, artificial intelligence (AI) is reshaping how we work, learn, and live. While public debate often focuses on how this new technology will lead to significant disruption and job loss, history suggests that we take a more balanced view. Every wave of widespread innovation has generated anxiety, and yet each has also created new job roles, new industries, and new ways of creating value.
For business schools, this new wave raises a fundamental question: How do we develop graduates who are genuinely able to add such value in an environment where the line between AI-enabled capabilities and human-driven contributions is constantly evolving?
Augmentation, Not Just Automation
This question is the basis of “Augmented Leadership: Navigating the New Age of Intelligence,” a recent report released by The Global Alliance in Management Education (CEMS). It draws on expertise from across the CEMS network of business schools and corporate partners to forecast the future that awaits our graduates.
In my contribution to the report, I emphasize a key message: The future of leadership and work will not be defined by automation alone. It also will be defined by AI’s capacity to augment human talents.
Many conversations about AI’s implications assume that the future will be about people versus machines. But the future is far more likely to be about people working with machines to unlock new potential—to complement, not compete with, our strengths. I strongly believe that this mindset must become the foundation of our teaching and learning.
As Evelyne Léonard, a professor at Université Catholique de Louvain in Belgium, notes in the report, “the starting point is that students are ahead of us.” Indeed, AI is already embedded in students’ everyday academic lives, as they use it for everything from conducting research and analysis to drafting assignments to managing their schedules.
When I speak with students across the CEMS Alliance and within my own business school, the Onsi Sawiris School of Business at the American University in Cairo, I am struck by how naturally they engage with technology. While generations who did not grow up amidst rapid digital transformation may see AI as a threat, “digital natives” do not take that view—rather, they see it as a tool. Workers from previous generations might ask, “Will AI take my job?” Students from this generation are more likely to ask, “How can AI help me do my job better?”
This attitude reflects an augmentation mindset—an asset that business educators must deliberately harness and nurture.
Faculty Preparation and Foundational Knowledge
In a practical sense, this means that business schools “must integrate AI, machine learning, data analytics, blockchain, and cybersecurity into curricula in order to prepare students for an increasingly digital workplace,” as I note in the report. “Faculty must also remain knowledgeable about AI to guide students effectively, which is a huge challenge.”
The more faculty keep their knowledge about AI current, the more they can find opportunities to integrate its use across their courses. Capacity-building opportunities, workshops, and faculty learning communities that support peer-to-peer exchanges of experience are effective ways for schools to help faculty stay up-to-date, improve their skills, and experiment with AI tools in teaching or research.
Faculty can approach AI and its implications as learners, just as their students do. For example, faculty at the Onsi Sawiris School of Business experiment with AI tools to simulate market scenarios and analyze large data sets. This keeps them aligned with the latest technological developments, which in turn helps them enrich students’ learning experiences. This dual approach—with faculty learning alongside students—creates a dynamic environment where AI literacy develops organically.
With this preparation, our professors can better teach students to use AI to analyze business case studies, interpret data in context, and see how different tools can inform decision-making. As faculty learn to question assumptions and apply judgment to machine-generated insights, they can teach their students to do the same.
As the late American writer and futurist Alvin Toffler noted, “AI can process data and provide the ‘what,’ but it is up to us to determine the ‘why’ and the ‘how.’” That is a sentiment faculty must convey to every student.
A CEMS seminar on strategic management at the Vienna University of Economics and Business also addresses this issue. The course, Strategic Problem Solving, adopts a behavioral perspective, focusing on the way “strategy” is crafted and carried out in practice while systematically integrating recent advances in the large language models (LLMs) that drive generative AI.
Phillip Nell, who leads the course, recently told me he believes that 75 percent of the content in core courses should be preserved, because students need that foundational knowledge to be able to manage LLMs and judge whether AI outputs are misleading or biased. That said, he also argues that nearly all classic courses should have an LLM element.
“Students need to learn about the performance frontiers of LLMs—what they are good at and what they are not so good at—including how to deal with them,” Nell says. “This is what we are trying to achieve in this course.”
He acknowledges that achieving that goal requires that he test AI’s limitations for himself as he walks students through the learning process. It is “extremely difficult to keep up with the pace of technological change,” he says. “I consider this a joint effort.”
Mitigating the Risks of Cognitive Offloading
While AI offers enormous potential for enhancing our abilities, several contributors to the report caution that it also introduces new risks. One such risk is “cognitive offloading”: the tendency to delegate not only routine tasks but also basic thinking to AI. If our graduates delegate their thinking to AI, they risk gradually eroding the very capabilities that distinguish their contributions.
Kourosh Bahrami, CEO of the global adhesive manufacturer tesa and a member of the CEMS Board, captures this well. “It is not a question of human intelligence,” he says. “The problem lies in human nature: Convenience makes us lazy. If a human being can take the easy path, most of the time we will.”
Kai Riemer, a professor at the University of Sydney Business School, offers a similar perspective, noting that a graduate whose primary skill is prompting a chatbot adds limited value to an organization.
Our comparative advantage will lie in critical thinking, creativity, judgment, and empathy. As Toffler also observed, “The illiterate of the 21st century will not be those who cannot read and write, but those who cannot learn, unlearn, and relearn.”
‘Think First, Prompt Second’
The report emphasizes a simple principle for educators and learners alike: “Think first, prompt second.” This is a message embraced by CEMS’ network of corporate partners, including the global electrical engineering company ABB.
With this idea as a guide, the company tells its workforce, “Build your own ideas, structure your thinking, and only once you’ve hit your limits, then turn to AI,” explains Guillaume Delacour, ABB’s vice president of global people development. “That’s when it becomes a truly valuable partner.” The company views AI as an effective tool to support its employees’ personal and professional development.
Similarly, students should be encouraged to develop their own ideas and arguments, then use AI to test, refine, and extend them. Their individual intellectual contributions must remain visible. AI can be a powerful support, but it should not replace independent thinking.
Leading business schools are already experimenting with new approaches. At Université Catholique de Louvain, for example, students are expected to demonstrate original analysis and problem-solving in their thesis work, making clear distinctions between machine-generated input and their own reasoning.
At the University of Sydney, a “two-lane” assessment model combines reflective use of AI with supervised examinations in which AI cannot be used, ensuring that core competencies are rigorously assessed.
New Skills for an AI-Driven Workplace
Technology alone will not determine our future; how we design, govern, and use AI will. Just as government officials and business leaders must establish ethical frameworks, students must learn to make ethical decisions about how they use AI.
In other words, students need to develop core skills for the AI era, which will include the ability to ask the right questions of data, ensure quality and responsible use, and make sound moral decisions. Students must understand how AI systems work, where their information comes from, and what their limitations are, including embedded biases.
Riemer of the University of Sydney encourages people to “step back and remember [AI] is a tool, not a trustworthy colleague. The challenge is to balance those strengths and weaknesses to benefit the organization.”
This is an objective of Managing Artificial Intelligence, a course at the London School of Economics that asks students to examine the ethical, political, and social implications of AI and to reflect on a central question: How can we harness this technology while preserving what makes us human?
Students themselves are aware of the complex moral challenges that come with the rise of AI. As Léonard at Université Catholique de Louvain reports, many of her students acknowledge that heavy reliance on AI can leave them feeling as if they are “no longer the pilot in the plane.” That is, they feel less connected to their work and less confident in their own understanding.
Our students are calling on business schools to provide clearer guidance on how and when to use AI to support their work and creativity. They also want to know how to deploy this technology to level the playing field in academic assessment, even as regulation struggles to keep pace with technological change.
A Clear Responsibility
Our next generation of graduates will need a combination of technical skill, character, and global perspective to add value to organizations, navigate complexity, and positively impact society. More important, their reasoning, creativity, and empathy must remain irreplaceable.
To instill those skills, business schools will need to evolve into digitally driven, interconnected co-learning spaces that extend beyond campus boundaries. Additionally, they will need to blend their faculty’s academic knowledge with real-world industry engagement.
As business educators, our responsibility is clear: to develop work-ready “augmented” graduates who can lead with confidence in a technologically evolving world. If we fulfill that responsibility, I remain optimistic that our next generation of graduates will be more than ready to combine their technological capability with the deep human insight and strong ethical judgment they’ll need to use AI to amplify their contributions in the workplace and the world.