Why It’s Time to Join ‘Team ChatGPT’

Article
Wednesday, March 8, 2023
By Soumyadeb Chowdhury, Samuel Fosso-Wamba
Many educators view the highly capable chatbot as a threat to learning. In reality, generative AI might be opening up a new era for business education.
  • ChatGPT might make teaching harder for educators, but it also presents opportunities for them to adapt their mindsets, curricula, and pedagogies to create greater value for students.
  • Business schools must train students not only to overcome ChatGPT’s limitations, but also to tap into their “human” skills such as intuition, creativity, and communication.
  • Like the mechanical robots used in manufacturing, generative AI tools will become teammates to human workers, supporting their decision-making and problem-solving.

 
By now, you have no doubt heard of ChatGPT, the chatbot that provides humanlike responses to question prompts. OpenAI, a San Francisco-based research laboratory, introduced the free version of the technology in November 2022, before launching a paid subscription service called ChatGPT Plus in February 2023. Plus remains available even during periods of peak demand, responds more quickly, and is continually updated to provide better responses.

Before launching ChatGPT, OpenAI had already introduced DALL-E, an AI tool that can generate digital images from text-based descriptions. The company estimates that ChatGPT was trained on around 45 terabytes of data gathered from the web.

ChatGPT represents a significant departure from the AI we are all used to. Traditional machine-learning algorithms, for example, find patterns within large datasets and make predictions. We see this predictive capacity, to some extent, in Google and other search engines, which provide autocomplete features and make recommendations to help improve our results.

But ChatGPT does more than just predict. It uses what is called a generative AI language model, which allows it to create new content based on the information it is given. Generative AI output can take the form of images (based on textual input or another image); text such as news articles, poetry, movie scripts, and marketing campaigns; or audio such as new music tracks, sound effects, and voiceovers. The quality of the creative output depends on the quality of the input: both the data on which the model has been trained and the question prompts users write to describe the task they want it to complete.
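To make that prompt-to-output relationship concrete, here is a minimal sketch of how a question prompt is sent to the model behind ChatGPT through OpenAI's Python library (its pre-1.0 interface). The model name, prompt text, and placeholder API key are illustrative assumptions, not details drawn from this article.

    # Minimal, illustrative sketch: send one prompt to the model behind ChatGPT.
    # Assumptions: the openai Python package (pre-1.0 interface), the
    # "gpt-3.5-turbo" model, and a placeholder API key.
    import openai

    openai.api_key = "YOUR_API_KEY"  # placeholder; never hard-code a real key

    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo",  # the model family behind the free ChatGPT at launch
        messages=[
            {"role": "system", "content": "You are a concise business tutor."},
            {"role": "user", "content": "Summarize the risks of relying on AI-generated market research."},
        ],
    )

    # The reply is newly generated text, not a retrieved document; its quality
    # depends on the training data and on how precisely the prompt describes the task.
    print(response.choices[0].message["content"])

Even this tiny example illustrates the broader point: change the prompt and you change the output, so much of the skill lies in asking the right question rather than simply reading the answer.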

This new technology is garnering massive interest and investment. Microsoft has invested billions of U.S. dollars in OpenAI, and Tesla CEO Elon Musk was among the company's early backers. Google and Meta claim to have developed comparable generative AI technologies. With this level of attention, it is highly unlikely that the technology will be a passing fad. It is far more likely that the number of available generative AI-based tools will increase and that they will become more sophisticated—and more monetized—in the future.

Let’s be honest with ourselves: The era of generative AI tools such as ChatGPT has made it much harder for us to teach. But as academic practitioners, we cannot ignore this technology. Instead, we must understand and apply its capabilities and limitations, and we must be open to its potential to evolve and expand our teaching philosophies, practices, and assessments.

Recognizing Generative AI’s Limitations

Although the capabilities of ChatGPT have been widely touted, there are certain things it can't do. For example, its training data includes only information posted in 2021 and before. Therefore, the algorithm cannot answer questions about events that occurred in 2022 and beyond, and the information it draws on is not being kept up to date.

To use the technology effectively, users also must be aware of its other major limitations and their implications. For example:

  • ChatGPT cannot think or critically analyze, and it lacks human emotions. Because its responses are based only on its training data and machine-learning algorithms, it cannot solve human and societal problems.
  • ChatGPT is unable to comprehend the meaning behind the words of either question prompts or its own responses. Therefore, it lacks true originality, insight, or depth.
  • The accuracy, authenticity, and trustworthiness of ChatGPT's output is always questionable. First, there is currently no way for users to understand why and how it generates its responses. Second, the data that ChatGPT is trained on is pulled from the web, which is filled with nonfactual and opinion-based information. The bot cannot differentiate between content based on fact and content based on opinion. Moreover, we must also ask: What counts as a fact?
  • Because of the data used to train the AI, its responses could be biased. OpenAI has taken steps to make the bot’s responses as objective as possible. Even so, the possibility of bias remains.
  • Finally, efforts to make ChatGPT more neutral could lead to worker exploitation. A Time investigation revealed that OpenAI paid Kenyan workers less than 2 USD per hour to label toxic content as abusive, sexist, racist, violent, or offensive. While these labels help build safety and morality into the system, critics have called attention to the “massive supply chains of human labor” required to support the process. These efforts also raise two complex issues. First, how can ChatGPT be considered neutral when the way content is labeled is based on human judgments and does not consider cultural differences? Second, to what extent are businesses using unethical business practices to embed ethics and morality in algorithms and machines?

We will need to design a new wave of teaching practices that help students understand these limitations, encourage cautious use of the information the tool provides, and convey the critical role of ethics and corporate social responsibility in the development and use of such tools.

For instance, teachers could create class activities in which students use ChatGPT to solve problems by asking it questions. Students could then work in groups to critically analyze the authenticity and accuracy of the information ChatGPT provides, before discussing the repercussions and risks of blindly using and trusting its responses in real-life business scenarios.


ChatGPT provides us with the opportunity to engage our students and make learning fun and interesting. Better still, by immersing them in activities that use the technology, we can equip students with the skills to use the tool effectively and cautiously, stoke their curiosity, and inspire them to ask more questions than ever. More important, we can train them to ask the right questions, which is key to co-creating knowledge.

The takeaway: The only way to overcome the issues posed by generative AI tools in any sector is to use those tools. Only then can educators and their students understand the technology’s capabilities and limitations, which will help them to recognize the true value of human skills.

Preparing Students for New Disruptions

While ChatGPT’s negative impact on business education has been hotly debated (and somewhat exaggerated), we need to look at the bigger picture. Even as we manage AI’s disruption in our classrooms, we must also ask: How can we prepare business school graduates for its disruption in the workplace?

Advances in computing, data-driven decision-making, and robotic process automation already have changed many job roles and made others redundant, particularly in manufacturing, customer service, marketing, and rudimentary data entry. As generative AI applications become more powerful and sophisticated, another wave of job displacement will almost certainly occur.

To prepare students for what is to come, we must focus on helping them cultivate knowledge, skills, and competencies that will keep their expertise relevant in the changing job market. The emergence of generative AI tools challenges higher education providers and practitioners to adapt their mindsets, curricula, and pedagogies in ways that create greater value for students.

Just think of how the invention of the calculator forced instructors to change how they taught and assessed students’ mathematical ability and learning. They had to shift from asking students to memorize and manually compute answers to teaching them to understand and explain the process of solving math problems. Similarly, ChatGPT compels us to spend even more time teaching students how to tap their creativity, generate original ideas, hone their higher-order thinking skills, and communicate humanely in different situations, respecting social and cultural values.

Moreover, students must come to recognize that AI does not feel emotions or empathy, and so it lacks contextual (situation-based) understanding and communication skills. To fill that gap, they will need to learn to use their own emotional intelligence—a human quality that is often overlooked and underappreciated.

The takeaway: It is essential that we teach business students creative, communication, and higher-order thinking skills and place greater emphasis on applying these skills to critical thinking and problem-solving. We must ensure that business school graduates can critically analyze ChatGPT’s responses and use those responses cautiously and responsibly to make decisions.  

Tapping AI for Decision-Making

We know that decision-making encompasses three challenges: complexity (due to the many streams of information/data), uncertainty (due to unpredictable events and situations), and equivocality (due to divergent viewpoints and conflicting interests among the people involved).

With its superior computational capabilities, generative AI can deal with complexity via data processing and information consolidation. The other two challenges, however, are better suited to human minds. Human workers can use their tacit experience and intuition to understand the decision-making context. They can use their emotional and social intelligence—their persuasive, negotiation, and communication skills—to come to acceptable solutions. Therefore, business school graduates must learn how to put these intangible and interpersonal skills into practice.


Mechanical robots have become mainstays in manufacturing, where humans provide instructions to the robots, the robots complete manual tasks, and humans then assess the quality of the completed work. Similarly, it now is inevitable that generative AI will act as a teammate that helps human workers complete cognitive tasks.

Business students have come to view digital technologies as tools to help them achieve specific tasks, and not as partners—and certainly not as competitors or threats. Teaching them to view AI as a partner, not to mention preparing them to potentially compete with its abilities, is uncharted territory.

The takeaway: We must give business students as many opportunities as possible to use and test ChatGPT. Only then will they learn to better utilize and value human skills and gain more confidence in their own abilities, as they also learn to overcome the limitations of generative AI. Once they gain these skills, they will be prepared to view AI as a teammate, not just as a tool.

New Era, New Skill Sets

To collaborate effectively with technology such as ChatGPT, students must learn to coordinate tasks, develop contextual understanding, apply their superior intuition, and unlock their imagination and creativity—all at the same time. In other words, adapting to generative AI technologies will require them to do more than just accept and adopt the technology. It will require them to evolve their own skills and mindsets with the technology.

For their part, business schools must work closely with businesses that are adopting generative AI, in order to understand how human jobs will be redesigned and how organizational dynamics will be restructured to accommodate the technology. Through these partnerships, business schools can help define the new sets of skills, knowledge, and capabilities that human workers will need to work effectively with generative AI.

The final takeaway: Like many disruptive technologies that have come before it, generative AI is not going away—it will only grow more sophisticated and more prevalent in human endeavors. To shape its use in the workplace, business schools must tailor their curricula, learning objectives, pedagogy, and course design to accommodate the new technology. They must teach students the skills they will need to work with their AI teammates effectively, cultivate open mindsets, and remain flexible to the needs of organizations and society. 

Authors
Soumyadeb Chowdhury
Associate Professor of Emerging Technology and Sustainability Management, Head of the TBS Center of Excellence Sustainable Development, TBS Education
Samuel Fosso-Wamba
Professor of Business Analytics and Artificial Intelligence, Dean of Research, TBS Education
The views expressed by contributors to AACSB Insights do not represent an official position of AACSB, unless clearly stated.