Bot or Not? Lessons on Using AI in the Classroom

Monday, June 3, 2024
By Sara J. Welch
Illustration by iStock/PhonlamaiPhoto
At Baruch College’s Zicklin School of Business, students employ AI tools when they take tests, gather information, and complete homework assignments.
  • Through prompt engineering exercises, students discover how to craft the types of questions that will return helpful answers.
  • Non-native speakers benefit from AI tools that enable them to communicate more clearly and confidently.
  • Students learn that today’s technology can still be limited, unethical, biased—or simply wrong.

Artificial intelligence (AI) tools are playing an increasingly prominent role in both the classroom and the workplace. Such tools include large language models (LLMs), which learn from and generate text, as well as workflow and process automation platforms such as UiPath, Appian, and MuleSoft.

Most universities are exploring the best ways to take advantage of these powerful new technologies. For instance, several years ago, one school used AI to create an experimental teaching assistant that provided automated responses to hundreds of basic questions that students posted about coursework in an online forum—and the students couldn’t even tell they weren’t interacting with a human TA.

Because so many students, faculty, and staff are already using AI tools, it’s essential for institutions of higher learning to develop parameters for what constitutes acceptable usage on their campuses. At Baruch College of the City University of New York, we recently drafted guidelines specifying that students can use generative AI tools such as OpenAI’s ChatGPT, Microsoft’s Copilot, or Google’s Gemini. However, such usage is allowed only when the instructor permits it, when students cite AI as a source, and when students show exactly how they used the technology.

At Baruch’s Zicklin School of Business, we know that if students learn to use AI in the classroom, they will have a better chance of knowing how to leverage it in the workplace. Here, four Zicklin instructors share their insights about what AI can do, how it can be used, and what limitations still exist.

AI Helps Students Think Critically

Yafit Lev-Aretz, an assistant professor in the Zicklin School’s law department, gives students opportunities to experiment with GenAI in her undergraduate business law class. During a recent take-home midterm exam, Lev-Aretz posed a question, then provided an answer that had been generated by AI. By itself, she estimated, that answer would score 60 out of a possible 100 points.

Lev-Aretz had students devise more precise prompts that would return more detailed answers and earn better grades. “I did not ask to see their prompts or provide feedback on them,” she explains. “My goal was simply to encourage them to use this tool, which I think can be extremely helpful in their thinking and writing process.”

She adds, “This engagement and willingness to experiment are precisely the learning outcomes I aim to foster.”

AI Can Be a Source of Information…

AI can quickly provide supplemental knowledge on an almost limitless number of topics, as Curtis Izen tells students in his undergraduate computer information systems honors course. Izen is a senior information specialist in Baruch’s Computing and Technology Center and an adjunct lecturer in the Paul H. Chook Department of Information Systems and Statistics.

“For example, if students don’t know about database management systems, they can ask ChatGPT to explain what a database management system is and provide some examples,” he says.

Instructors might find GenAI equally useful, Izen notes, especially when they’re creating course materials. “They can feed it a paragraph of information and ask it to generate questions,” he says. “It can also change the type of question—say, from a multiple-choice question to a fill-in-the-blank one, or an open-ended question.” It can even help write syllabi, he adds.
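
For instance, an instructor’s prompt might look something like this hypothetical example (not one of Izen’s):

    "Here is a paragraph from my lecture notes on database management
    systems. Write three multiple-choice questions with an answer key,
    then rewrite the first one as a fill-in-the-blank question."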

…But Only With the Right Prompting

AI can help users reduce gaps in their knowledge—if they know how to use it properly, says Danny Park, an adjunct lecturer in the Chook department. Before students begin his graduate course on business analytics and artificial intelligence, Park provides them with a list of prompts they can feed to ChatGPT to bone up on certain topics.

He hand-feeds them the prompts to show them that only properly crafted questions will return helpful answers. Unless users are effective at prompt engineering, Park explains, they might find themselves illustrating the old programming mantra of “garbage in, garbage out.”
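
Park’s actual prompt list isn’t reproduced here, but the contrast he demonstrates typically looks something like this hypothetical pair:

    Vague:   "Tell me about linear regression."
    Crafted: "Explain linear regression to a first-year MBA student in
             three bullet points, define slope and intercept in plain
             language, and give one retail sales example."

The first prompt returns a generic overview; the second specifies the audience, format, and context, so the answer is something a student can actually study from.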

Prompt engineering is also a key part of the marketing analytics MBA courses taught by Joshua Moritz, a lecturer in the Allen G. Aaronson Department of Marketing and International Business.

“Because AI is pretty new, we need to learn how to ‘train’ AI to avoid generating bad results,” Moritz says. “So, students need a stable, accurate benchmark to know if their answers are going to be right or wrong.”

To that end, Moritz first has students use Microsoft Excel to solve a basic statistical problem—say, creating a linear regression from raw data points. Next, students feed the same data into ChatGPT, making requests and asking questions to see if they can generate the same results as they did in Excel. Students keep tweaking the prompts until the ChatGPT answer matches the Excel answer, which Moritz refers to as “the truth.” Once students get the prompts right, they can save them and use them repeatedly to solve the same simple statistical problems.
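
To make the benchmark idea concrete, here is a minimal sketch in Python; it illustrates the concept rather than reproducing material from Moritz’s course, and the data points are invented:

    import numpy as np

    # Hypothetical raw data: ad spend (x) versus unit sales (y)
    x = np.array([1.0, 2.0, 3.0, 4.0, 5.0])
    y = np.array([2.1, 3.9, 6.2, 7.8, 10.1])

    # A degree-1 polynomial fit returns [slope, intercept]
    # for the least-squares regression line.
    slope, intercept = np.polyfit(x, y, 1)
    print(f"benchmark: y = {slope:.4f}x + {intercept:.4f}")

    # Students would paste the same five points into ChatGPT and refine
    # the prompt until its reported slope and intercept match these values.

Excel’s trendline and LINEST function compute the same least-squares coefficients, which is what makes the spreadsheet a reliable independent check.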

For this exercise to work, each student must pay 20 USD per month for ChatGPT Plus, the subscription tier that provides GPT-4, because the free version doesn’t do statistical analysis. “Students can’t share accounts because people write and speak differently. Sometimes small differences such as using ‘the’ versus ‘a’ or making a change in punctuation can generate different answers, believe it or not,” Moritz says.

Even when the prompts are exactly the same, ChatGPT sometimes generates a slightly different and therefore incorrect answer, Moritz says. “I am not clear why this happens,” he admits. “The technology is improving, but it’s not 100 percent. If this were medicine or cybersecurity, getting one answer wrong out of a hundred would not be OK. But this is marketing—you might lose money, but no one will die.”
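
One likely reason: generative models compose answers by sampling each next word from a probability distribution, so identical prompts can produce different outputs unless that randomness is disabled. A toy sketch of the mechanism, with words and probabilities invented for illustration:

    import random

    # Toy next-word distribution; a real model weighs tens of thousands
    # of candidate tokens at every step.
    words = ["rises", "falls", "flattens"]
    weights = [0.5, 0.3, 0.2]

    # Three runs of the "same prompt" can pick three different words.
    for run in range(1, 4):
        word = random.choices(words, weights=weights)[0]
        print(f"run {run}: the forecast {word}")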

AI Aids Non-Native Speakers

One advantage of allowing students to use AI during assignments, says Lev-Aretz, is that it reduces language barriers for those who are not native English speakers. That’s a critical consideration for a school like Baruch, whose students hail from more than 150 countries and speak more than 100 languages.

Izen agrees. For some course assignments, he has students record comments asynchronously through voice-based tools such as the commercial product VoiceThread. (A similar product, Vocat, is an open-source option that was developed at Baruch.) He has found that his students’ public speaking skills improve the more they use the tool.

“Some of my students’ communication was so poor that if I called on them in class, they wouldn’t answer, or they’d mumble and speak so softly I couldn’t understand them,” Izen says. “But now that I’ve been using VoiceThread for several years, I notice that their verbal communication and confidence improve by the end of each course.”

But There Are Downsides

While AI offers enormous benefits to students, it also carries potential risks. One is that it could interfere with the learning process if students use it to complete homework assignments instead of doing the work themselves. Therefore, instructors need to devise strategies to mitigate the possibility that students are merely plagiarizing from AI.

That’s one reason Izen uses VoiceThread in his classrooms. After students make video or voice recordings, they share the files on the learning management system, where VoiceThread is integrated.

Because students must provide answers in their own words and post their sources in text comments, Izen can use these recordings to check whether his students are grasping the basic concepts they might have asked AI to explain. When an assignment involves algorithms or computer programs, students must verbally explain the code or formulas they submitted.

AI Cannot Be Blindly Trusted

In addition, AI poses legal and ethical risks, which means that neither students nor business leaders can simply accept the answers AI tools provide. Baruch instructors make certain students are aware of four risks in particular.

1. AI is limited in what it can do. Park notes that many users seem to ignore the word “artificial” in the phrase “artificial intelligence”—they start to believe AI is superior to human ingenuity, creativity, and wisdom. But he points out that, unlike humans, LLMs cannot provide value that is greater than the quality of the material they were trained on.

2. Legal issues abound. It is not always clear exactly how generative AI products have been trained. In an interview published by The Wall Street Journal in April 2024, OpenAI’s chief technology officer claimed she didn’t know what data Sora, the company’s text-to-video generator, had been trained on. In a subsequent interview with Bloomberg, YouTube’s CEO said that if Sora had been trained on YouTube videos, that would violate his company’s terms of service.

Meanwhile, a slew of newspaper publishers, from The New York Times to the Chicago Tribune to California’s Orange County Register, are suing Microsoft and OpenAI for reusing their articles without permission.

Similarly, actress Scarlett Johansson might take legal action against OpenAI for creating a ChatGPT voice that sounds like the AI character she voiced in the 2013 film “Her,” even though Johansson had declined OpenAI’s offer to voice the assistant herself.

3. AI perpetuates harmful biases. Facial recognition technology has been accused of discrimination against Black and brown people. Ask Freepik—an image bank website that includes an AI image generator—to create a photo of a CEO, and you’ll get nothing but images of white men. A query for an image of a nurse gives you white women only.

4. AI is simply wrong sometimes. Recently, Google had to disable its Gemini text-to-image feature because it generated historically inaccurate images of popes who were female and American Founding Fathers who were Black.

“People use ChatGPT as if it were a search engine, but in most cases, they’d be better off just using Google instead,” Park comments. He adds that his teenage son once failed a math quiz because he plugged the questions into a chatbot, which produced incorrect results, and his son didn’t check the answers.

AI Is Part of the Future

Despite the challenges it presents, AI will be an integral part of tomorrow’s workplace. That’s why the Zicklin School has gone beyond integrating AI into the classroom. We’ve also developed programs—including an undergraduate AI minor and a graduate-level AI certificate—that are designed to help students leverage the technology once they’re on the job. Because AI potentially impacts all business disciplines, we collaborate extensively with philosophy professor Elizabeth Edenberg, an expert on technology and ethics, as we develop these programs.

Armed with this information, our graduates will know how to use AI tools in ways that are effective, useful, ethical, and trustworthy—which will give them and their companies a competitive advantage.

Author
Sara J. Welch
Associate Director of Marketing and Communications, Zicklin School of Business, Baruch College
The views expressed by contributors to AACSB Insights do not represent an official position of AACSB, unless clearly stated.