From Polished Work to Real Judgment

24 March 2026
AI lets students produce professional-looking work in seconds. Business faculty must make sure students also learn to own—and live with—their decisions.

Sponsored Content

  • AI has raised the baseline for what counts as professional student work in business programs—but it has also made gaps in students’ judgment harder to ignore.
  • When students can generate polished answers instantly, the real learning shifts from producing output to making—and owning—decisions.
  • Business simulations expose students to what one-and-done assignments cannot: how their choices compound, how constraints emerge, and how judgment is forged under pressure.

 
At first glance, the assignments business students are submitting today look impressive. The writing is polished, and the decks look as if they were produced by a mid-level consultant. After years of public pressure to set professional standards, it seems that business programs have finally hit the mark.

But if we take a closer look at the feedback faculty are giving students, week after week, we can see a clear disconnect between the work students are producing and the skills business schools want them to develop. The polish is there, but the thinking is missing.

In other words, today’s students can walk anyone through the what of their deliverables with ease. But the moment they’re asked why they made a specific choice, or how they’d pivot if the market shifted, the logic falls apart.

The Part That’s Disappearing

Artificial intelligence (AI) isn’t the problem. Calculators and spreadsheets didn’t kill the need for students to demonstrate critical thought. When those innovations emerged, students still had to build the models and make sense of the results.

AI, however, changes the learning equation in a different way. With a decent prompt, students can now move directly from asking a question to producing a finished deliverable. They’re effectively bypassing the “messy middle”—that uncomfortable space where they must weigh trade-offs, sit with uncertainty, and actually decide what matters.

Historically, that middle space is where learning has happened. Now, it's becoming optional.

When Output Isn’t the Same as Judgment

AI is exceptionally good at summarizing information and making recommendations that sound “reasonable.” What it can’t do, however, is make a hard call and then live with the fallout.

This becomes obvious when faculty put different student teams in the same environment with the same tools. Their answers to the same business problem start to look alike. Everyone lands on a direction that is safe and defensible, but ultimately generic. Yet meaningful progress in business rarely comes from settling for the “safe” answer; rather, it comes from having the conviction to pick a direction when the data is noisy or incomplete.

That’s where AI tools hit their limit. AI can provide the logic, but it cannot provide conviction. It has no skin in the game. That leaves the hard questions to the human: Which signals can be trusted when they contradict each other? Which risks are actually worth taking? These aren’t output problems—they’re judgment problems. And no language model can shoulder the consequences of a bad decision on a student’s behalf.

The Limits of the One-and-Done Assignment

The “one-and-done” assignment has its place, but it isn’t enough anymore. It’s fine to check whether someone knows a formula or can format a balance sheet, but those types of assessments rarely capture how business decisions compound over time. A single turn-in offers no chance for students to learn from a mistake; it just gives them a chance to put on a performance.

Learning tends to accelerate when early choices carry real consequences. That is where business simulations earn their keep.

Simulations make judgment visible, because they don’t just require students to recommend a strategy—they make them live in that strategy and experience the consequences of their actions. Usually, students begin the simulation with a burst of confidence built on clean and uncomplicated assumptions. Teams might cut prices to grab market share or spend big to outpace a competitor. On a slide deck, those decisions look perfectly defensible.

The real shift occurs a few rounds later, when those choices start to harden into constraints. That pricing move? It’s now a margin trap. A quick decision on capacity has suddenly locked up the cash needed for an emergency pivot. When a competitor reacts in a way no one saw coming, there is no paper to “turn in” and walk away from. Students are forced to deal with the reality of the moment—to determine what to do next and justify that decision.

At that point, students must move their focus away from how their plan was written and toward whether the members of their team actually understand the fallout of their own choices. That is where judgment becomes visible.

A simulation doesn’t care about the theoretical merit of the plan; it forces students to face the mess they made weeks earlier. If they blew their budget on a bad production run in Week 2, they’ll be staring at a red cash flow statement in Week 6. No amount of clever writing fixes that.

This is where the professional gloss finally wears off. Students stop focusing on producing a polished plan and start discovering whether they can manage the consequences of their own decisions.

Raising the Bar

AI has reset the floor for what “good” work looks like. Because of that, the ceiling is now higher. We aren’t just teaching students how to churn out a decent analysis anymore—the tools can do that in seconds. Now, the real work is teaching them how to pick a direction, handle the fallout when things go sideways, and own the outcome.

Companies don’t fail because their new hires can’t generate information. Companies fail when those graduates freeze up or hedge because they’ve never actually been on the hook for a high-stakes call. Students can’t prepare for that kind of responsibility by reading about it; they must prepare by making a mess and having to fix it.

In business schools, the question isn't just whether assignments need a refresh. It's whether we're simply rewarding completion, or actually building environments where students must develop real judgment to earn their degrees and enter the workplace.

Authors
Evan Meyer
Director of Marketing, Capsim Management Simulations Inc.
The views expressed by contributors to AACSB Insights do not represent an official position of AACSB, unless clearly stated.