How Is AI Changing Education—and Business?
- If students master ChatGPT while they’re in school, they’ll improve the papers they write in their courses—and they’ll know how to use the technology once they’re on the job.
- While AI is becoming increasingly common in retail settings, some consumers don’t trust it, and some companies are using it in unscrupulous ways.
- AI can create amazing graphics for data visualization, but questions persist about the ethics of machine-created art.
Maybe you’re a business owner who has just called one of your suppliers, and you get swift, practically seamless service from the agent at the other end. But when you pay close attention, you realize you’re interacting with a program, not a person. Welcome, artificial intelligence.
Or maybe you’re a college student whose roommate wants you to glance over his latest term paper before he turns it in, and it reads better than anything he’s asked you to edit before. Welcome, ChatGPT.
But, really, how welcome is either? It’s a question that business leaders around the world have been grappling with for years. Artificial intelligence (AI), which once seemed like science fiction, has been expanding its presence in our lives for decades. It now can be found in our homes, our cars, our retail interactions, and many points in between.
At the University of Kentucky’s Gatton College of Business and Economics in Lexington, many professors have begun exploring the possibilities of AI. Some are considering how AI will impact the educational experience, while others are examining how it will transform business operations in a range of fields.
Whether you’re a business student preparing to enter the corporate world or a business leader deciding how or whether to use AI in your organization, you need to understand how this technology might revolutionize the workplace. Here, four Gatton College professors pose key questions about the current state of AI and draw on their own expertise to outline how AI could transform college, the workplace, and the world.
Cheat Sheet or Writing Aid?
One of the most controversial new AI offerings is ChatGPT, a chatbot that answers prompts with detailed written responses. Many professors don’t want their students to use ChatGPT in any context because they consider it a form of academic dishonesty. But Darshak Patel, a co-author of this article, considers it a tool.
“Many students have trouble knowing where to start with something they want to write,” says Patel, director of undergraduate studies and senior lecturer in economics at the Gatton College. “ChatGPT gives them a tool to brainstorm.”
Even so, many professors who aren’t sure how to incorporate AI into their syllabi may turn to detection systems to determine whether students have relied on ChatGPT to write their papers. While some professors allow students to cite ChatGPT as a source, there is still uncertainty about what constitutes “helping yourself to someone else’s work,” says Patel.
Students can be hesitant to draw on the technology because they don’t want to be viewed as cheaters or lazy thinkers, Patel continues. “They’re here to expand their minds. Is this, in fact, blocking them from doing that?”
For his part, Patel plans to encourage the use of ChatGPT when he teaches managerial economics in the one-year MBA program, because he feels that students need to understand the technology. He also believes that ChatGPT can make students better writers, in part because it can undo some of the unintentional harm caused by smartphones and their text and email apps.
“Texting has created this shorthand that is tough to switch back from,” he says. In addition to correcting students’ grammar mistakes, ChatGPT can provide teaching moments as students see poor habits being corrected before their eyes. “They may even be more receptive when ‘reprimanded’ by a tool instead of a teacher.”
Outside of the classroom, ChatGPT can help students write professional résumés and cover letters. When used as “a fantastic complementary resource” in the career center, Patel says, AI can completely change “how we prepare students for getting jobs.”
Patel draws parallels between AI and other breakthrough technology. At one time, professors were wary of allowing students to use the internet, but now it’s an indispensable learning tool. “Why is this so different?” he asks. “We might one day think of this like other ‘scary’ technologies that today are a normal part of life.”
Conclusion: AI can be a valuable tool in the classroom, but both students and faculty must understand how to use it.
Efficient Customer Agent or Unethical Negotiator?
Off campus, AI is becoming increasingly integrated into consumers’ lives. For instance, as automated voices become more commonplace, people grow more comfortable with them. But as AI systems begin to dominate customer service, ethical questions arise.
Aaron Garvey, a co-author of this article as well as an associate professor and Ashland Oil Research Professor of Marketing, studies how people respond differently when they are negotiating with AI company agents. Negotiation is a skill that humans have practiced since kindergarten, Garvey points out. Even then, “we looked at product offers—‘I’ll trade you my apple for that cookie.’”
While experience has taught us to be skeptical about other people’s intentions, we don’t ascribe intentions to AI systems, Garvey says, which leads us to accept offers we might reject from humans.
Garvey cites the classic scenario in which two people must divide a $100 pot. “I make you an offer of what my split and your split of the pot will be, and you decide whether to take it or not. If you don’t, neither of us gets anything,” Garvey explains. “You may not take it if you think I might be greedy—say, going with 70-30—even though you’ll walk away empty-handed. You might prefer to punish me.”
But research shows that people are more likely to accept the low end of a 90-10 split when it’s offered by an AI system. That’s because people don’t assume that machinelike AI has self-interested motives. For the same reason, when booking a ride with Uber, people are more likely to agree to pay a higher price when the one asking for it is an AI instead of a human.
However, using AI for negotiations can backfire for a company in certain situations. “When you as a consumer get a wonderful deal from a person, you may think of the person as benevolent, as generous. A machine? You don’t attribute those good intentions to the AI, and thus don’t reward it.” To promote that sense of goodwill, more companies are humanizing their AIs by presenting them with human faces and giving them names like Ted.
This brings up the question of ethics, says Garvey. “Is it OK for a company to switch off between having AI be more humanlike or more machinelike based on what is to its advantage?” For instance, when the company wants to appear benevolent, it might call its chatbot Ben. When the company wants the customers to accept a 90-10 offer, it might go with the name Robo 5000.
“We’re interested in that,” says Garvey. “We all should be.”
Garvey also has been part of research showing that people are “actually more likely to disclose private, sensitive information to AI than to a human” and that they are more likely to be persuaded by strong flattery from AI. He says, “You don’t feel AI is trying to charm you—even though it is charming you, and in many cases more effectively than a human.”
In other words, Garvey believes, we’re still figuring out how AI is affecting us. “We should all be thinking about our decisions and how we respond when it comes to dealing with AI,” he says. “It’s like we’re in a whole new poker game.”
Conclusion: As AI becomes more sophisticated, consumers must understand how companies are using it, and companies must ensure they are deploying it in an ethical fashion.
Masterful Creator or Art Thief?
The artistic possibilities of AI are explored in a master’s-level data visualization course taught by co-author Dan Stone, accounting professor, Rosenthal Endowed Chair, and former director of the Business Analytics Center.
PowerPoint was a great start for showing data analytics graphics, says Stone, but “AI? It’s phenomenal! You can make remarkably diverse images using free software. You want your accounting numbers presented in the style of Picasso? You can do that.”
Stone recently had students work with DALL-E, an AI system that creates images by responding to natural-language prompts and executing “massive complex searches of databases.” Similar options include programs such as NeuralBlender, Midjourney, and Craiyon.
In one of Stone’s recent classes, students used DALL-E to create visualizations of data pertaining to Shaker Village, a local historic nonprofit organization. Some students found the experience exciting, some found it weird, and some experienced “a landmark change in their thinking about what was possible,” says Stone.
But some were hesitant because they worried that identifying themselves when they used the software might impact their future careers, says Stone. “They seemed to be asking, ‘Is this really OK to use?’”
Even Stone isn’t sure. “Graphic artists and painters make their living generating content, and now by typing 15 words into a program, you instantly have maybe something incredible,” he says. In fact, some artists are suing AI companies, claiming their work was used without permission to train art programs. Stone acknowledges that it’s difficult to know where to draw the line.
“At the same time, how can you ignore all the amazing work we can do with this kind of technology?” he asks. The graphics are so appealing, he says, that when they’re shown in venues from board meetings to conferences, “people will be wide awake who may have otherwise been taking a snooze. Boring traded for engaging.” But he notes that we must question what else we are trading for AI imagery.
Conclusion: AI can be used to provide stunning graphics, but there are still questions about the ethics of machine-created art.
Neutral Analyst or Unreliable Source?
To discover how much humans trust AI, behavioral researcher and co-author Benjamin Commerford has conducted research within the auditing field. Along with three colleagues, he reported the results of a study showing that auditors are more likely to rely on evidence from human sources than from machines.
“We put 170 auditors in a case-based scenario where they are auditing a management estimate that is potentially biased,” says Commerford, an associate professor. “Auditors often receive evidence that conflicts with management’s estimates. Ultimately, we find that auditors propose smaller adjustments to management’s financial estimates when contradictory evidence comes from an AI system.”
Three reasons could explain this difference in trust, Commerford says. First, we find it easier to conceptualize what a human is doing because we can draw on our own experiences and we have a general understanding of how humans make decisions. We are less certain of how algorithms work.
Second, we have a different level of tolerance for errors. We’re quicker to forgive humans than algorithms because we know humans make mistakes. Third, we want to be able to assign blame, and it may be easier to blame Rob in accounting than a robot.
The study also showed that some auditors are averse to using AI because they aren’t sure “they can get their client to book an adjustment on the basis of evidence from an AI system,” adds Commerford.
Conclusion: While AI is becoming a more essential business tool, neither employees nor clients will always put their faith in it.
A Changing Picture
As these observations show, the AI road ahead is paved with questions. If you ask DALL-E to cross a Monet with a flying machine, which artist should receive credit for the resulting image of a lilycopter? If you use ChatGPT as a starting point for a term paper or a business report, are you easing your frustration or stunting your growth as a thinker?
The answers are likely to fascinate and challenge. As Stone asks, “If this is what happened in one or two years, what do you think will happen in the next two years to come? Better yet, let’s ask AI.”