We Must Prepare to Work With—and Trust—Humanoid AI

Monday, June 30, 2025
By David Steinberg
Illustration by iStock/Robert Wicher
Through the study of semiotics, the practice of making sense of signs and symbols, students learn how and why they extend trust to humanoid robots in the workplace.
  • As humanoid AI becomes more common on the job, students can study semiotics to learn how people communicate through signs and icons—and how those signs might be applied to interactions with robots.
  • By using immersive audiovisual experiences such as movie clips and promotional videos about AI, faculty can train students to spot signs and symbols in the real world.
  • Students who study semiotics gain insights into their own humanity and an understanding of why they do or don’t trust humanoid AI.

 
When I ask undergraduates in my Business Skills courses how generative artificial intelligence will help them do their jobs, they enthusiastically predict that GenAI will aid them in creating presentations, conducting research, writing reports, and upgrading their coding skills. But when I ask them how they would feel about working with humanoid AI, they go quiet. They tell me that they need to be able to trust their co-workers, but many of the current humanoids “weird them out.”

Yet the odds are good they will be collaborating with humanoid AI in the workplace, because this technology has become a strategic priority, particularly for the United States and China. According to an article in Foreign Policy magazine, “Advances in generative AI since 2022 have turbocharged the development of humanoid robots, and this is accelerating.”

In August 2024, the World Robot Conference in Beijing featured a record 27 humanoids. And an article in Forbes magazine predicts that the companies and countries that first introduce efficient humanoid robots will have huge advantages because they will have “the opportunity to remake their communities in a world in which labor costs approach zero.”

Once humanoid teammates become common, it will be useful for workers to understand semiotics, or the study of the symbols, gestures, and images that help us make sense of the world. Linguist Ferdinand de Saussure defines semiotics as a “science that studies the life of signs within society.” By studying semiotics, individuals learn to read cues that are ubiquitous but often go undetected. As these individuals develop a heightened awareness of their own humanity, they create their own augmented reality, as if they are viewing the world through a pair of smart glasses.

In the workplace, an understanding of semiotics sheds light on human-computer interactions. According to research from Erin Chiou and John Lee, “Semiotics helps to explain how people construct meaning from interactions with automation and representations of the automation, which guides and is guided by the process of trusting.” For this reason, a business school curriculum that includes courses on semiotics provides students with a competitive advantage once they’re on the job.

As senior director of studies for the work-integrated degree programs at Heriot-Watt University’s Edinburgh Business School in Scotland, I advise faculty members seeking ways to create immersive audiovisual experiences that resonate with students in virtual classrooms. After five years of teaching semiotics, I have identified three particularly useful learning exercises that students enjoy and that help them truly master this critical subject.

Icons, Indexes, and Symbols

I introduce students to semiotics by teaching them about the three types of signs originally defined by 19th-century philosopher and linguist Charles Sanders Peirce. These definitions were later contemporized by linguistic anthropologist Marcel Danesi, as follows:

  • An icon is a “sign that simulates, replicates, reproduces, imitates, or resembles properties of its referent in some way,” such as a subway map, a computer desktop, or a perfume scent.
  • An index signifies causality or connects “the laws of physical perception to abstract referents.” Examples include an image of a finger pointing at an object; words such as up, down, here, and there; or a surname that signifies someone’s ethnicity.
  • A symbol “stands for a referent in a conventional way and which, therefore, must be learned in context.” A symbol might be an olive branch, a crucifix, or a well-known corporate logo.

Then, I show clips from business-related movies and ask students to identify the signs that appear in the films. For example, “The Big Short” contains scenes loaded with signs, particularly one in which Steve Carell’s character learns from Byron Mann’s character about the bewildering variety of collateralized debt obligations (CDOs). In class, I play the clip twice so students can familiarize themselves with the actors and environments before they start scanning for signs.

After learning about the three types of signs—icons, indexes, and symbols—students watch business-related movie clips to identify the signs that appear in the films.

Icons in this scene include Teppanyaki-style food preparation (slicing and dicing), signifying firms carving up debt to create CDOs. Indexes include people drinking alcohol and placing large bets in a casino, mirroring the way eager first-time home buyers purchased mortgages beyond their means or the way banks took over-leveraged positions with mortgage-backed securities. Symbols include wedding rings worn by both characters, signifying faithfulness to a partner or perhaps to capitalism.

From the movie “Margin Call,” I play a scene in which Zachary Quinto’s character realizes that his investment firm is in peril. Icons include the lens blurring as Quinto’s character anxiously crunches the numbers. One index pops up when the song “Wolves” by the group Phosphorescent plays through Quinto’s earphones. With lyrics such as “Mama, there’s wolves in the house,” the song represents the ruthlessness of the firm and the wild, untamed nature of the financial services sector. Symbols include the “Grand,” the name of the first nightclub that Quinto’s colleagues go to, which also signifies one thousand dollars.

As students identify the signs, I encourage them to determine which of the three definitions best fits the context, although a sign can be more than one type. At this point, critical thinking kicks into gear! If students can make arguments for one sign fitting all three definitions, I celebrate the moment with the class.

Symptoms and Signals of AI

I take semiotics to the next level by playing short videos of some of the world’s most advanced humanoid AI, such as Ameca, Atlas, Optimus, Figure 02, G1, and Pepper. Then, using the polling app Wooclap, I ask students to rate the trustworthiness of each one.

Humans instinctively use the signs discussed above to help them determine trustworthiness. Chiou and Lee note that “symptoms and signals are intrinsic indicators of trust that are incidental to the interaction. Symptoms result from a physical process and can demonstrate competence, intent, and motivation.”

As students watch the videos, they instantly process each humanoid’s tone and word selection as it speaks—if it indeed speaks and has a face. They also note gestures, pauses, eye and facial movements, dexterity, and physical strength. After viewing the video of Ameca, my students often discuss the dissonance between the robot’s soft, humanlike facial expressions and its mechanical ability to access and process large amounts of information in a way that is well beyond human capability.

As students view videos of humanoid AI, they note gestures, pauses, and facial expressions that cause them to assign levels of trustworthiness to the robots.

It’s fascinating to me that students consistently assign high trust levels to Pepper, a humanoid with a childlike appearance and speech. A comment from Alexandre Colle, founder and CEO of home robotics company Konpanion, suggests why a less humanlike robot might earn more trust: “Increasing humanoid resemblance does not necessarily lead to trust. In fact, the more a robot resembles a human, the more it may evoke suspicion. When we assign humanlike qualities to nonhuman agents, we also attribute a certain level of agency to them. The more sophisticated the agent, the greater the perceived agency—leading to heightened expectations of internal motives, and in some cases, distrust or suspicion.”

Even so, humans continue to invest astronomical sums of money into developing AI with anthropomorphic features. For instance, many promotional videos of humanoid AI include close-ups of hand structures and demonstrations of fine motor skills—even though it is not essential that a robot’s grasping structures look human. What does this say about us and the way we see the world?

Levels of AI Autonomy

In the final segment of my class, I give students AI applications—some humanoid, some not—from different industries. I ask students to determine the level of autonomy that they would set for each, using a scale developed by Raja Parasuraman, Tom Sheridan, and Christopher Wickens for types of human interaction with automation. The scale has 10 levels, from the lowest (“The computer offers no assistance; the human must make all decisions and take all actions”) to the highest (“The computer decides everything and acts autonomously, ignoring the human”).
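For instructors who want to run this exercise digitally, the scale lends itself to a simple lookup table. The minimal Python sketch below is purely illustrative: the level wording is paraphrased from Parasuraman, Sheridan, and Wickens rather than quoted, and the example applications and scores are hypothetical stand-ins for student responses.

```python
# Illustrative sketch of the Parasuraman-Sheridan-Wickens 10-level
# autonomy scale. Level wording is paraphrased, not verbatim.
AUTONOMY_LEVELS = {
    1: "Computer offers no assistance; the human makes all decisions.",
    2: "Computer offers a complete set of decision alternatives.",
    3: "Computer narrows the selection down to a few alternatives.",
    4: "Computer suggests one alternative.",
    5: "Computer executes that suggestion if the human approves.",
    6: "Computer allows the human limited time to veto before acting.",
    7: "Computer acts automatically, then necessarily informs the human.",
    8: "Computer acts and informs the human only if asked.",
    9: "Computer acts and informs the human only if it decides to.",
    10: "Computer decides everything and acts autonomously, ignoring the human.",
}

def describe(application: str, level: int) -> str:
    """Pair an AI application with its assigned autonomy level."""
    if level not in AUTONOMY_LEVELS:
        raise ValueError("Level must be between 1 and 10.")
    return f"{application} -> level {level}: {AUTONOMY_LEVELS[level]}"

# Hypothetical student assignments from the classroom exercise:
print(describe("Self-driving car", 8))
print(describe("Medical diagnosis AI", 7))
print(describe("Mental healthcare chatbot", 4))
```

In a polling tool such as Wooclap, each application could simply be posed as a 1-to-10 rating question, with the class median mapped back to the level descriptions above for discussion.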

For example, a self-driving car by its very nature must act autonomously. Students consistently assign this type of AI a score of between 7 and 10, indicating that they only want the system to inform them of its decisions after the fact, only if asked, only if the system decides to, or not at all. By contrast, students say that collaborative robots (cobots) in manufacturing can act autonomously as long as humans are not ignored.

The autonomy level of other applications tends to be dependent on context. Students give high scores to medical diagnosis AI but believe mental healthcare provision needs human oversight for safety reasons. Similarly, students determine the autonomy of military AI by its purpose: They allow higher levels of autonomy to threat monitoring and combat simulations than to warfare systems and strategic decision-making applications.

This exercise helps students think through the factors and situations that lead humans to trust AI—or not.

Safeguards: An AI Mindset

When students have opportunities to explore the symptoms and signals of humanoid AI, they become more prepared for the workplace of the future. But on a deeper level, they also pause to reflect on their own humanity and the world they want to live and work in.

Business school classes on semiotics roam beyond the realms of finance, leadership, operations, and other traditional subjects and enter the realm of humanism. Ethical considerations also come into play. Let’s not kid ourselves: The goal of a business education is to teach students how to found startups or lead companies that dominate their markets and deliver shareholder value.

But AI-related ventures require safeguards akin to those associated with powerful weapons systems. In our classes, we need to encourage students to celebrate the remarkable advances in AI and automation, while also protecting the very species that created AI in the first place.


I hope that my classes do more than inspire students to learn about AI and perhaps someday launch AI startups. I would like my students to understand that we must not reach a point where individual AI systems combine into something so powerful that it overrides human well-being. Moreover, we must not let a few companies, countries, and individuals determine the path ahead; and yet, this is exactly what is happening.

I want my students to realize that we can no longer confidently hold the view that someone else is looking after us. Each one of us now has that responsibility.

The Business School in 2040

Like our students, business faculty must prepare for a future workplace that features humanoid AI. Within 15 years, I fully expect academics to find universities transformed. Let’s imagine what it might be like when we’re working in a business school in 2040.

We enter the atrium of the 10-story intelligent building (icon: human hypothalamus). It automatically scans our nervous system using quantum sensors, connects with our smart clothing and building sensors, and creates a customized comfortable micro-environment for each of us. We are greeted by a burly humanoid who, for security reasons, scans us and our laptop bags. Then, with a welcoming smile and gentle hand motion (symptom and signal), it indicates that we should proceed.

We walk to the coffee kiosk, select our favorite morning drink on a menu screen, and watch the robotic barista arm prepare our order. A cylinder-shaped robot slowly moves past us as it polishes the floor.

After leaving the kiosk, we pass through barriers by swiping our biometric ID cards (icon: human) and then step into the crowded elevator. As the doors close, we note the sounds emanating from our fellow passengers: a light cough, a text message tone, and the faint sound of servo motors turning on and off (index: diversity).

Exiting the elevator, we walk down the glass-walled hall and look inside seminar rooms filled with students and teachers. While most instructors are human, we realize that one class is being led by an AI executive-in-residence (index: division of labor). Two things distinguish this humanoid from those that we saw in the school’s atrium. First, it was not mass produced. Second, colleagues and students think that it looks—and often acts—human.

We turn our attention back to the hallway and see the dean of the business school heading our way. She is wearing her AACSB accreditation seal lapel pin (symbol: business education). We greet her and prepare for the day ahead.


Authors
David Steinberg
Principal, Reykjavik Sky Consulting, and Associate Professor in Contemporary Business Practices and Senior Programme Director of EBS Graduate Apprenticeships, Edinburgh Business School, Heriot-Watt University
The views expressed by contributors to AACSB Insights do not represent an official position of AACSB, unless clearly stated.