Advancing More Ethical Artificial Intelligence

Wednesday, February 24, 2021
By Denise Kleinrichert
A business school takes a multidisciplinary approach to teaching students about the critical role of ethics in the deployment of artificial intelligence.

San Francisco has a long history of discovery—from the Gold Rush to the tech revolution. The city also has a history of embracing people-centered social justice. It makes sense, then, that faculty at San Francisco State University (SFSU) would want to combine the two as we explore the implications of one of the next frontiers of discovery: artificial intelligence.

I have found that business schools largely discuss AI within other topic areas, such as product development or marketing; far less often do they examine its implications in multidisciplinary discussions and curricula. To bring that multidisciplinary approach to this emerging technology, SFSU’s Lam Family College of Business, in collaboration with faculty from the computer science and philosophy departments, launched a graduate certificate program in ethical artificial intelligence (EAI) in the fall of 2019. Serving as a bridge between disciplines, the program brings together students and faculty from three of the university’s colleges to form a set of common values around the use of AI in business contexts.

New Ways to Work

Why is an EAI certificate so critical at this moment? Because of the complex set of issues that have emerged with the increasing use of AI in business contexts. Humanity has already witnessed the transformative impact of three technological revolutions, driven by the advent of the steam engine, the electric grid, and digital technologies, respectively. But now that we are in the Fourth Industrial Revolution, as World Economic Forum founder and executive chairman Klaus Schwab calls it, we face greater disruption than ever before.

Nicholas Davis of the Thunderbird School of Global Management at Arizona State University in Phoenix refers to the AI technologies powering this disruption as “cyber-physical systems,” which allow people to outsource more of their work tasks to advanced machines that can “think” and render decisions based on available data. According to futurists’ forecasts, AI will displace or modify many types of jobs across sectors ranging from manufacturing to transportation. At the same time, AI is likely to create a wide range of new jobs, not only in technical sectors such as software engineering, data analytics, and cloud computing, but also in more human-centric industries such as hospitality, construction, healthcare, agriculture, and education.


According to Igor Perisic, chief data officer of LinkedIn, “AI skills are among the fastest-growing skills on LinkedIn, and saw a 190 percent increase from 2015 to 2017.” These skills are in demand globally, with the greatest needs in the United States, China, India, Israel, and Germany. Further, automation and machine-focused processes will continue to grow globally, with researchers estimating “that by 2025, the amount of work done by machines will jump from 29 percent to more than 50 percent.” Many experts agree that, in time, AI will generate more jobs and domains of expertise, not fewer.

The use of artificial intelligence is gaining momentum in nearly every area of business. But as its potential applications multiply, students must also understand the legal and moral ramifications of deploying the technology in different contexts and be aware of its potential for abuse.

Understanding the Ramifications

Our 10-credit-hour EAI graduate certificate is the university’s first multidisciplinary certificate program, bringing together faculty from three fields. Computer science is the base from which AI originates. Business is the primary context in which AI is deployed. Philosophy provides the critical thinking and ethical theory with which students can analyze AI’s complex personal and socioeconomic impacts. I teach the business ethics portion of the program. My co-faculty include Carlos Montemayor and Macy Salzberger from the philosophy department and Dragutin Petkovic from the computer science department.

To earn the certificate, students must complete one three-credit-hour course in each of these three disciplines. In AI Technologies and Applications, students explore the technology behind AI, with content tailored to their specific disciplinary areas of focus. In Ethics and Compliance in Business, they are introduced to issues such as ethical decision-making, regulatory compliance, and stakeholder impact. In Ethical Principles, they examine the ethical, political, and social ramifications of both current and potential future uses of AI. Finally, each student completes a self-reflective research paper on an issue related to the ethical use of AI, under the guidance of a faculty coordinator.

Throughout the curriculum, we ask students to examine the uses and potential abuses of AI in business processes and practices. We pay special attention to how the technology can be used to support people, planet, and profit.


By the end of the program, we want students to have fulfilled several learning objectives. First, they should understand what principles drive AI and how the technology intersects with issues of privacy, security, and transparency. Second, they should know how to make unbiased and transparent business decisions when designing and deploying automated decision-making processes. Third, they should know how to enforce ethical standards and comply with laws and regulations. Finally, we want them to understand the significance of ethical AI to society: how the technology can affect interpersonal development and relationships, business operations, stakeholder welfare, and the organization as a whole.
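
To make that second objective concrete, consider the kind of simple check a student might run on an automated approval process: comparing outcomes across demographic groups, a basic demographic-parity audit. The sketch below (in Python) is purely illustrative and is not drawn from the program's coursework; the sample data, the group labels, and the choice of metric are assumptions made for the example.

    from collections import defaultdict

    def approval_rates(decisions):
        """Return the approval rate for each demographic group."""
        totals, approved = defaultdict(int), defaultdict(int)
        for group, was_approved in decisions:
            totals[group] += 1
            approved[group] += int(was_approved)
        return {g: approved[g] / totals[g] for g in totals}

    def parity_gap(decisions):
        """Largest difference in approval rates across groups (0 means parity)."""
        rates = approval_rates(decisions)
        return max(rates.values()) - min(rates.values())

    # Hypothetical loan decisions: (applicant group, approved?)
    sample = [("A", True), ("A", True), ("A", False),
              ("B", True), ("B", False), ("B", False)]
    print(approval_rates(sample))  # {'A': 0.67, 'B': 0.33}
    print(parity_gap(sample))      # a gap of roughly 0.33 flags a disparity worth investigating

A metric like this does not by itself settle whether a disparity is justified; the point of the certificate is precisely that students pair such quantitative checks with ethical, legal, and stakeholder reasoning.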

For example, in one of our class discussions, we ask students to explore the distinction between the mantra “do no evil” and the mantra “don’t be evil”—the latter being a motto long associated with tech giant Google. That topic alone can inspire a fascinating classroom debate on the value judgments and difficulties inherent to AI. 

In their culminating papers, students have explored a range of topics related to AI’s development, use, and societal impact. These topics have included the use of AI to create “deepfake” images; the moral status of AI; and the question of whether it's possible for AI to have agency and intentionality based on its own knowledge, moral understanding, and basic cognitive skills.

Critical Work

We enrolled 30 students in the first course we offered for the program, which was in philosophy; since then, interest among students and practitioners has risen significantly. Although some students are pursuing only the three-course graduate certificate, most are also completing master’s degrees in one of the program’s three disciplinary areas. As of fall 2020, three students from business, philosophy, and computer science had completed the certificate. In spring 2021, six additional students are scheduled to complete both the certificate and a master’s degree in one of these three disciplines.

As business educators, we need to offer more EAI-focused courses so that students know what it means to design and deploy AI with societal impact in mind. We must equip students with an increasingly critical skill set that combines technical innovation, critical thinking, and ethical business practice. Students will need ample opportunities to explore this emerging field and sharpen their abilities to use AI to support ethical decision-making, product development, and service deployment.

If they are to be visionary and responsible future leaders, our students must have the wisdom to appreciate all of AI’s ethical implications. They must understand that it will be their responsibility to use these emerging technologies in ways that make a positive impact on society. 

Authors
Denise Kleinrichert
Professor and Interim Associate Dean, Lam Family College of Business, San Francisco State University