AI Integration, Not Prohibition

5 May 2026
Why business schools must redesign curriculum and assessment to help students form strong business judgment and prepare for workplaces being reshaped by AI.
  • The real AI ethics challenge in business education is not student misconduct but institutional lag—a growing gap between what schools certify and what professionals are actually expected to do.
  • When and how AI is introduced matters more than whether it is permitted: Undergraduates need sequenced exposure that protects the development of independent judgment, while MBA students need programs that integrate AI into pedagogy rather than hold it at arm’s length.
  • Curriculum and assessment design that makes student reasoning and ownership visible better supports the formation of professional judgment and identity than rules focused on restricting tool use.

Recent reporting on the use of artificial intelligence to cheat on professional exams has drawn renewed attention to questions of academic integrity. One notable recent example was the decision by the Association of Chartered Certified Accountants in the United Kingdom to discontinue most remote exams beginning in 2026. Public discussion of the ACCA’s action has focused on enforcement: tighter rules, stronger surveillance, better detection. For many instructors, the answer to these challenges is to resurrect the in-class blue book exam. Those responses are understandable. But for business schools, they miss the deeper issue.

The real ethical challenge posed by AI is not student misconduct. It is institutional lag.

As AI becomes embedded in professional practice, inherited curricula and assessment models are increasingly misaligned with how judgment, competence, and accountability operate in the workplace. The result is not more cheating, but a growing gap between what schools certify and what professionals are actually expected to do.

Integrating AI Tools Into the MBA Curriculum

AI tools are already part of the working environment that students are preparing to enter. Most MBA students have work experience, understand professional responsibility, and are accustomed to being evaluated on outcomes rather than process alone. In that context, banning AI use from the outset can be counterproductive. The ethics-related question is not whether students use AI, but whether their judgment remains visible and their ownership of their decisions remains clear.

For that reason, in my MBA courses, I permit AI use from the beginning of the term, with explicit requirements that students disclose how they used the tools, justify their decisions, and defend the final products as their own. They learn to treat AI as a tool, not as a substitute for their responsibility. Assessment focuses on whether they can explain and stand behind their work, not on whether they produced it unaided.

Integration, not just permission, should be the standard for MBA-level instruction. MBA instructors should, as much as possible, integrate AI tools into their lesson plans and pedagogy—designing case analyses that assume AI access; building in-class exercises in which students prompt, critique, and revise AI outputs together; and structuring assessments that require students to make visible their path from initial query to defended judgment.

Students consistently report the dissonance they experience when AI use is curtailed in their programs. One student expressed relief at being able to use AI tools in class, saying it had felt counterproductive to rely on them extensively at work while being barred from using them in the classroom.

That relief is diagnostic. MBA students are already inhabiting professional roles, and classroom prohibitions force a split between their working selves and their student selves—a split that is itself an obstacle to the integrated judgment that they are meant to develop. Programs that permit AI grudgingly, or hold it at arm’s length from the curriculum, are not preserving rigor so much as training students for a working environment that no longer exists.

Sequencing AI Use in Undergraduate Courses

Undergraduate education requires a different approach. Undergraduates are still developing basic professional judgment: how to frame problems, how to distinguish explanation from analysis, and how to calibrate confidence to competence. Giving them unrestricted access to powerful AI tools too early risks collapsing that developmental process, but pretending those tools do not exist is neither realistic nor responsible.

As I incorporate AI more deliberately into my undergraduate teaching, I have realized that the solution is not prohibiting its use, but sequencing it.

In my business ethics course, for example, I introduce AI gradually through a sequence of three assignments, each with clearly defined parameters. Across these assignments, the parameters change along three dimensions: access to tools, the complexity of the task, and the visibility of judgment required in the final product.

Students complete these assignments in three phases. In the first, early in the course, they write short essays using a lockdown browser that forces them to work in a closed environment with no access to external sources or AI tools. A typical assignment asks students to explain and take a stance on a moral argument—such as how Milton Friedman could plausibly claim that the sole social responsibility of business is to increase profits.

The goal is not to memorize information, but to build critical thinking: the ability to articulate positions and to understand the difference between stating a claim and defending it. In other words, beginning a course with traditional “blue book” pedagogy has some utility, but it should not be the endpoint.

Later in the term, students are given open-ended analytical assignments in which AI use is explicitly permitted. For these assignments, they are asked to tackle problems that would be difficult to complete within the allotted time without assistance. They must disclose which tools they used; explain why they used them; and identify where human judgment was required to select, revise, or reject AI-generated content.

Finally, to better understand how students reason about AI-assisted work, I designed an in-class assessment in which they use AI to generate essay drafts, make light edits, and then submit and defend the results. In the process, they learn that submitting the work means they are claiming it as their own.

I begin by generating a highly polished answer from a simple set of prompts: I feed the syllabus, the readings, class handouts, and the question itself into a generative AI tool such as ChatGPT or Claude. In other words, I demonstrate to students how easy it is to produce seemingly high-quality work that is not their own.

Then I ask them to refine or challenge that answer by applying their own judgment and creativity. The goal is not to suggest they begin every assignment with an easy, inauthentic version, but to teach them what it means to create work that is authentically their own.

The results are revealing. For instance, out of approximately 80 students in one class, the great majority recognized that the AI-generated draft, however polished, was not their own work. Most produced final essays that departed substantially from the original output; very few submitted anything close to the AI-generated version. Those who gave stronger responses introduced sharper distinctions and anchored their analyses in specific course concepts, class discussions, and independent thinking. Those who submitted weaker responses tended to reproduce the AI answer’s language and its evenhanded balancing of viewpoints.

Beyond assigning poor grades, I generally work with such students during office hours. My goal is to help them learn to make better use of the powerful AI tools at their disposal.

This experiment suggests that the educational value of AI does not turn on whether its use is permitted, but on whether students are required to critically interrogate its outputs and locate their own judgment within, rather than after, the process. When students are asked to engage with AI output critically rather than passively, most develop an intuitive sense of the difference between endorsing a product and owning an argument.

At the end of the course, students work on a complex capstone case that is designed to mirror professional ambiguity. The case involves AdMetric, a fictional analytics firm whose AI-driven marketing tools raise questions about manipulation, transparency, and responsibility across multiple stakeholders. There is no single correct answer. What matters is whether students can integrate facts, values, and constraints into a defensible judgment. AI can assist them with research, scenario testing, and drafting, but students must still take ownership of the positions they uphold.

The contrast across these stages is intentional. When AI is introduced before students have internalized the basic structure of reasoning and accountability, it can substitute for judgment rather than support it. When introduced after those foundations are in place, it can expand perspective, surface tradeoffs, and deepen analysis.

‘From a Dim Flashlight to a Floodlit Room’

Students have described the transition from lockdown-browser prohibition to permitted AI use in strikingly concrete terms. Commenting on the capstone AdMetric case, one student noted that working with AI “introduces additional variables that may otherwise be overlooked,” allowing someone “with no legislative background to identify, cite, and incorporate relevant regulations such as dark pattern laws.” Another emphasized that AI makes it possible to “test multiple scenarios, trial approaches, or lines of reasoning simultaneously,” rather than working through possibilities one at a time under time pressure.

A third captured the difference in experience between the first two assignments more vividly: Using AI, this student wrote, felt “like switching from a dim flashlight to a floodlit room.” Under the lockdown browser conditions, the student was spending most of the time “remembering frameworks, definitions, and examples instead of actually thinking.” With AI, the student was able to apply analytical frameworks step by step and focus on evaluation rather than recall, in a way that helped develop a deeper understanding without shortcutting the learning process. Importantly, AI was “a supplement to thinking, not a replacement.”

At the end of the semester, students expressed similar sentiments about the impact the course had on their view of AI and the role it will play in their future careers. For example, one student wrote that when I told the class they were “only using 5 percent of what AI can actually do,” it changed how they approached not just assignments but problem-solving more broadly, encouraging them to “experiment, ask better questions, and use AI more intentionally and efficiently.”

Through the AI-related exercises, another learned not only about AI but “about myself”—which speaks to the deeper connection between AI engagement and the evolution of their professional selves.

Professional Identity in the Age of AI

Anxiety around AI-enabled cheating is understandable. But if institutions respond only by hardening rules without rethinking curriculum and assessment, they risk solving the wrong problem. The ethical issue is not students’ use of powerful tools, but schools’ failure to adapt their educational models to the realities those models are meant to serve. Rules about acceptable AI use are necessary, but they are not sufficient.

Ethics in business education is less about policing inputs than about shaping capacities—and, with them, professional identity. Business education is not merely about information transfer; it is about professional formation. And in the workplace those professionals are entering, AI already mediates how strategic priorities are set and institutional policies are implemented.

Business schools should therefore be asking different questions. Not “How do we stop students from using AI?” but “At what point does AI use strengthen rather than weaken professional formation?” Not “How do we detect unauthorized assistance?” but “How do we design assessments that require visible judgment and ownership, even when AI is used?”

Far from being an external threat to business education, AI is the environment in which future professionals will work. Treating it as such requires institutional judgment and curricular reform, not just student compliance.

Author’s note: In keeping with the practice I describe above, I used AI tools to test framings, surface counterarguments, and tighten prose. The thesis, the classroom examples, and the positions taken are mine—and I stand behind them.

Author
Michael A. Santoro
Professor of Management and Entrepreneurship, Leavey School of Business, Santa Clara University
The views expressed by contributors to AACSB Insights do not represent an official position of AACSB, unless clearly stated.