AI Has Arrived—But There’s No Need for Alarm

Wednesday, June 12, 2024
By Hatim A. Rahman
Humans will still have an edge in terms of creativity. AI’s advantage is its ability to store and process large volumes of data—without getting cranky.
  • As we consider the impact of AI on education, two concerns come to the forefront: how it will disrupt student learning and whether it will displace human educators.
  • Because students can use AI to complete their homework, educators will now likely rely more heavily on in-class participation and exams to assess student learning.
  • Just as airplane pilots learn to use automated systems to augment their skills in the cockpit, business educators can use AI to augment—not replace—their teaching in the classroom.

 
If you read enough about artificial intelligence—especially generative AI like ChatGPT—what you learn might scare you. There are widespread concerns about AI taking over jobs, doing students’ homework for them, and potentially forcing professors to reinvent how to teach their tried-and-true coursework.

While we definitely need to exercise collective caution about how we apply fast-moving AI technologies in our classrooms, it’s safe to say many of these fears are overblown and, worse, misguided. I say this as a professor at Northwestern University’s Kellogg School of Management in Evanston, Illinois—as well as a researcher and expert on the impact of AI on work. My forthcoming book, Inside the Invisible Cage, shows that the impact of AI is not foretold. Rather, this technology promises to amplify our collective—institutional, organizational, and individual—priorities and values.

I’ve thought a lot about this topic, given that each year I teach hundreds of MBA students and interact with a wide variety of business educators. As we consider how AI will impact business education, I see two main areas of potential concern: how AI might disrupt student learning and whether it will displace business educators in the classroom.

However, I believe AI-related fears in both of these areas are overblown. Instead, we have several reasons to believe that this technology will actually enhance the education experience.

AI Is Not the End of Student Learning

That said, we must now be even more thoughtful about our learning assessment methods.

In general, the goal of business educators is to provide new knowledge that encourages students to develop fresh insights, perspectives, and skills—knowledge that inspires students to adopt novel ways to think about and tackle business problems. In doing so, professors have always faced tradeoffs as they balance imparting knowledge with assessing learning, whether they teach financial modeling or marketing frameworks.

The adoption of new technology represents one of the largest tradeoffs faculty have to make, and new tech-enabled tools present both opportunities and challenges when it comes to communicating and assessing new knowledge. Despite all the hype, generative AI isn’t very different from any other technology in this regard. Yes, AI can make it easier for students to complete homework or tests outside of class. Therefore, it’s important for instructors to think more carefully and strategically about how to measure whether students are meeting the course learning objectives.

Practically speaking, the adoption of AI in education will likely lead us to reemphasize in-class measures to assess student learning. This might mean relying more on student participation in class discussions, in-class assignments, and onsite, paper-based tests. (Recall those dreaded blue books of past generations!)


This is hardly the first time educators have had to make such a shift. Just look at courses in quantitative fields, where many technologies have emerged to support (or supplant) learning, from rudimentary calculators to Excel to more powerful modern-day statistical programs.

Even before students had access to these tools, they could always turn to a more knowledgeable older sibling or friend for help getting around homework and out-of-class learning expectations. Hence, mathematics faculty have long adopted assessment methods such as conducting in-class tests and asking students to solve problems at the board.

We faced a similar issue when laptops and tablets became ubiquitous in the classroom. Instructors had to craft thoughtful policies about when and how students could use new technologies in class—whether for taking notes only or for completing in-class exercises. The rules educators set have depended on their teaching and assessment objectives.

Here, I want to emphasize that, as technology continues to evolve, so must educator policies and practices. But we must adapt not to police our students but to ensure that they learn. This process is about evolving our thinking and practices to take new technology into account, not adopting a doom-and-gloom, all-or-nothing mindset.

AI Will Not Replace Business Professors

Instead, human professors and AI tools should collaborate with each other.

When I talk to audiences about anxieties related to AI, I ask them to guess how many jobs have been replaced by technology since 1950. The answer, they’re surprised to hear, is one: the elevator operator. Of the 270 jobs listed in the United States Census that year, that position was the only one that was completely displaced over time.

Still, it’s hard not to worry about what modern technologies like AI might mean for certain professions—arguably even my own job! If students can interact with a business-savvy chatbot that has access to a large number of business textbooks, might business professors go the way of the dodo?

Probably not.

Luckily, it’s easy to find examples of tasks that technology was supposed to obliterate, but didn’t. Remember all the doomsday predictions many people made about physical books upon the advent of electronic readers like Kindle and Nook? That wasn’t so long ago. But the numbers don’t lie: Sales of hardcopy books actually rose after e-books arrived, and print sales still handily beat e-book sales worldwide. It turned out that many people preferred to read, collect, and gift old-school tomes, and e-books might have helped stimulate those sales indirectly.

E-books might be better for some purposes (downloading a text quickly when one can’t reach a physical bookstore), while hardcopy books might be better for others (gifting and collecting). How we use both is not either-or but both-and. Similarly, the contributions of humans and AI are not mutually exclusive. They can work together, collaborating versus competing, for the best outcomes. The goal, then, is to combine the strengths of both.

So what are those strengths? As humans, we excel at things like situational creativity, innovation, and emotional intelligence. Moreover, even though we are not perfect in this regard, moral and ethical decisions should be tied closely to human values and priorities.

AI, on the other hand, excels at speed and efficiency, particularly in situations that are similar to its training data. The technology has the ability to store and process large data volumes over unlimited time periods. It doesn’t get fatigued—or cranky—like we do.


But while AI can do more, and do it faster, the technology is fallible. Therefore, human input and oversight are necessary to ensure the quality and accuracy of its work. Humans must stay in the loop to perform sanity and morality checks on AI systems.

Human supervision is even more critical in situations where human lives and consequential outcomes are entrusted to AI, such as the recent release of self-driving cars onto city streets. Human involvement also provides some transparency and accountability when AI is involved in decisions about issuing credit cards, approving bank loans, and other high-stakes areas where there may be systematic discrimination.

I often point to the example of airplane pilots. The automated systems on airplanes can do almost everything pilots can do, as well as or better than the pilots themselves. However, pilots remain critical operators and overseers of aircraft, as suggested by the latest data from the U.S. Bureau of Labor Statistics, which show an increase in their numbers and their compensation over time.

Pilots learn how to use these systems and incorporate those systems into their work, but they still rely on their ability to fly the planes themselves. In other words, pilots are not competing with automation, but collaborating with it. They use this technology to augment, not replace, their skills.

So it is with business professors, the “pilots” of the classroom. AI can certainly serve as a trusted co-pilot, or more aptly a technical assistant. But we must be careful not to rely on the technology excessively. The goal, again, is to combine the best of human and machine intelligence.

Professors choose topics, craft curricula, and deliver content while using AI to inform those processes and complete specific tasks more efficiently. They can use AI to prepare students for in-class discussions and help students explore a topic in more depth—always, of course, with the professor’s oversight.

Indeed, recent attempts to create AI tutors fell spectacularly short, prompting one researcher to predict that flying cars will be in operation before AI tutors are! How and to what extent professors use AI to augment their work will vary depending on their teaching objectives, their personal preferences, and their institutions’ policies and resources.

AI Will Not Be as Disruptive as Many Predict

But we must be intentional in its use to achieve the best outcomes.

One of the broad takeaways here is this: If we think it is inevitable that AI will “take over” most tasks, in domains ranging from manufacturing to education, we go down the wrong path. Yes, AI will change our lives, and indeed it already has. But, no, the technology won’t displace and disrupt our lives to the extent that the media and many analysts suggest.

Instead of viewing AI as a threat, we must take more control over how it affects us by deciding what we value and prioritize. We must make sure humans stay very much involved in AI’s workflows and decision-making processes.


Moreover, we must be thoughtful and intentional in how and where we apply AI. We must ensure that we have diverse voices contributing to its development and deployment, including those from historically marginalized groups. Otherwise, AI’s operations and outcomes will continue to reflect the experiences of a very narrow group of people.

We also need to be careful to avoid hammer-and-nail thinking. As in, “Now that we have these large language models like GPT-4, what are all the ways we can apply them?” The better questions to ask are “What is the outcome we want to measure, predict, or generate? Is AI the right tool for this? What data can we actually provide the technology to achieve that goal?”

In the business education arena, we can use AI to make learning accessible to more students from more places, and we can assess learning in more and better ways. But we have to make deliberate choices about how to use it and verify whether it’s working as hoped, without bias.

Certainly, we can allow AI to make decisions for us in education and other key areas. But I don’t think most of us would opt for that. Instead, I hope we will take the opportunity to use AI to gain more control over our lives and destinies, to uphold our values and principles to the best of our very human abilities.

In the end, the choice is ours.

Authors
Hatim A. Rahman
Assistant Professor of Management and Organizations, Kellogg School of Management, Northwestern University
The views expressed by contributors to AACSB Insights do not represent an official position of AACSB, unless clearly stated.