Research Roundup: March 2023

Tuesday, March 28, 2023
By AACSB Staff
Addressing negative employee attitudes toward AI, fighting disability bias at work, and setting guidelines for using AI in published scholarship.

Better Bots Are Getting Harder to Detect

New research published in the proceedings of the 56th Hawaii International Conference on System Sciences has confirmed what many already suspect: Advances in artificial intelligence will make online “bots” more difficult to distinguish from humans.

The paper was written by Sippo Rossi, a doctoral fellow at the Centre for Business Data Analytics in the Department of Digitalization at Copenhagen Business School (CBS), with co-authors Odd Harald Auglend and Raghava Rao Mukkamala, also of CBS; Matti Rossi of Aalto University in Finland; and Yongjin Kwon and Jason Thatcher of Temple University in Philadelphia.

The team created a mock Twitter feed with tweets that focused on the topic of Russia’s war in Ukraine. The feed intermixed tweets from actual accounts with tweets from fake profiles. The fake profiles used pictures created through StyleGAN and tweets generated by GPT-3, the same artificial intelligence language model that powers ChatGPT.
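The paper does not include the authors’ generation code, but the mechanics are easy to picture. Below is a minimal sketch, assuming the pre-1.0 openai Python client and the text-davinci-003 GPT-3 completion model available in early 2023; the prompt wording and sampling parameters are illustrative choices, not the study’s.

```python
# Minimal sketch of GPT-3 tweet generation (illustrative; not the study's code).
# Assumes the pre-1.0 `openai` Python client and an API key in OPENAI_API_KEY.
import os
import openai

openai.api_key = os.environ["OPENAI_API_KEY"]

def generate_fake_tweet(topic: str) -> str:
    """Request one short, tweet-length post on `topic` from a GPT-3 model."""
    response = openai.Completion.create(
        model="text-davinci-003",  # a GPT-3 completion model available in early 2023
        prompt=f"Write a short tweet (under 280 characters) about {topic}.",
        max_tokens=60,             # tweets are short; cap the output length
        temperature=0.9,           # higher temperature -> more varied phrasing
    )
    return response.choices[0].text.strip()

print(generate_fake_tweet("the war in Ukraine"))
```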

They then asked 375 participants whether they could accurately differentiate between the artificially generated Twitter accounts and the real ones. It turned out that participants were more likely to label the fake accounts as real than the genuine ones. “One of the real profiles was mislabeled as fake by 41.5 percent of participants who saw it,” says Rossi. “Meanwhile, one of the best-performing fake profiles was labeled as a bot by 10 percent” of participants who saw it.

In the past, Rossi explains, so-called “bots” were easy to spot. But the sophistication and accessibility of AI-generated content—heralded by the advent of ChatGPT—have unsettling implications for society. Bad actors are likely to use the burgeoning technology to engage in political manipulation, misinformation, cyberbullying, and cybercrime via fake social media accounts.


The researchers call for additional studies that look at whether participants can pick out bot-generated comments in discussions on specific news articles. Only by studying this issue closely, they emphasize, will researchers discover methods to verify users’ identities and detect and remove fake accounts.

Says Mukkamala, “It’s essential to consider the potential consequences of these technologies carefully and work towards mitigating these negative impacts.”

How Will Workers Respond to AI?

Another study exploring the implications of artificial intelligence examines a different question: Under what circumstances will employees be most open to using AI tools in the workplace?

The paper, forthcoming in the journal Group & Organization Management, is authored by five scholars interested in organizational behavior and organizational psychology. They include Katerina Bezrukova of the University at Buffalo (UB) in New York; Terri Griffith of Simon Fraser University in British Columbia, Canada; Chester Spell of Rutgers University in Camden, New Jersey; Vincent Rice Jr., also of UB; and Huiru Evangeline Yang of IÉSEG School of Management in Lille, France.

The team conducted a meta-analysis of studies across a range of disciplines, from social psychology to information systems. These studies focused on AI’s application in different contexts, from diagnosing medical conditions to programming robotic “dogs” for dangerous rescue missions.
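For readers unfamiliar with the method: a meta-analysis pools effect sizes across studies, typically weighting each study by the inverse of its variance so that more precise studies count for more. The sketch below shows generic fixed-effect pooling with hypothetical numbers; it illustrates the technique, not the authors’ actual analysis.

```python
# Generic fixed-effect meta-analysis sketch (hypothetical data, not the study's).
# Each study contributes an effect size weighted by the inverse of its variance,
# so more precise studies count for more in the pooled estimate.
import math

# (effect size, variance) pairs -- illustrative values only
studies = [(0.30, 0.02), (0.12, 0.05), (0.45, 0.01), (0.20, 0.04)]

weights = [1.0 / v for _, v in studies]  # inverse-variance weights
pooled = sum(w * e for (e, _), w in zip(studies, weights)) / sum(weights)
se = math.sqrt(1.0 / sum(weights))       # standard error of the pooled effect

print(f"pooled effect = {pooled:.3f}, "
      f"95% CI = [{pooled - 1.96 * se:.3f}, {pooled + 1.96 * se:.3f}]")
```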

The researchers found that whether teams are open to using AI depends on two factors: whether they view the technology positively or negatively and whether they are forced to use it. When team members view AI positively, they tend to be less open to using AI if forced to do so by management. But when they view AI negatively, they become more open to using the technology if doing so is mandatory.


The researchers believe this finding provides an initial guideline for how companies might optimize interactions between human workers and AI platforms such as ChatGPT and its inevitable successors. 

“The answer to the question about people collaborating with AI is more nuanced than simply, ‘Will they or won’t they?’” Bezrukova says. “Managers should be aware of a variety of responses when AI is introduced into the workplace.”

Reducing Ability Bias in the Workplace

As employers place greater emphasis on increasing diversity in the workplace, more large firms are investing in policies that support equity, diversity, and inclusion, as well as offering employee training in unconscious bias. In 2017, companies in the U.S. alone spent 8 billion USD on such programs, according to a McKinsey report.

But a recent study published in Frontiers in Rehabilitation Sciences finds that this money has not had the intended effect of reducing implicit bias. The study was based on a survey jointly conducted by the business school and medical school at the University of Exeter in the United Kingdom.

The 108 survey respondents included workers across southwestern England who either worked in human resources or were involved in making recruitment decisions at their organizations. They answered questions about their own disability status and past interactions with people with disabilities. They also took Harvard University’s Implicit Association Test and completed the Health-Related Quality of Life (HRQOL) survey developed by the U.S. Centers for Disease Control and Prevention. The HRQOL asks questions about respondents’ mobility, self-care, and experiences of pain, anxiety, and depression.
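The Implicit Association Test infers bias from response times: the faster a respondent sorts paired concepts, the stronger the presumed association. Results are commonly summarized as a D score, roughly the difference in mean latencies between “incompatible” and “compatible” sorting blocks divided by the standard deviation of all trials. The sketch below computes this simplified score from hypothetical latencies; the full scoring algorithm adds error penalties and trial filtering.

```python
# Simplified IAT D-score sketch (hypothetical latencies, simplified scoring).
# Positive D => slower responses when pairing "disabled" with "good", which
# the IAT interprets as an implicit preference for non-disabled people.
from statistics import mean, stdev

# Response latencies in milliseconds -- illustrative values only
compatible = [620, 580, 640, 600, 590, 610]    # e.g., "non-disabled" + "good" pairings
incompatible = [780, 820, 750, 800, 790, 810]  # e.g., "disabled" + "good" pairings

# D = (mean incompatible latency - mean compatible latency) / SD of all trials
pooled_sd = stdev(compatible + incompatible)
d_score = (mean(incompatible) - mean(compatible)) / pooled_sd

print(f"D = {d_score:.2f}")  # common cutoffs: ~0.15 slight, ~0.35 moderate, ~0.65 strong
```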

According to the survey, nearly 75 percent of respondents showed some degree of implicit bias favoring non-disabled people over people with disabilities. This finding held whether respondents worked for large companies or for small and medium enterprises (SMEs).

Women tended to show less implicit bias against people with disabilities than men did. Respondents who themselves had disabilities or challenging health conditions also exhibited lower implicit bias, consistent with past research showing that people exhibit less bias against groups with which they identify.


To reduce implicit bias against people living with disabilities, companies must do more to increase disability representation in their workplaces, says Daniel Derbyshire, a postdoctoral research fellow at the university and lead author of the study. He co-authored the work with University of Exeter professors Anne Spencer and Brit Grosskopf and consultant Theo Blackmore.

Changing attitudes and reducing bias in meaningful ways will “require deeper and more structural reimagining of paradigms and modes of thinking with respect to disability,” says Derbyshire. “Addressing negative attitudes toward disabled people in the workplace should be a high priority for policymakers interested in the disability employment gap.”


Research News

■ Cambridge sets ethical guidelines for using AI in scholarship. Cambridge University Press has released its first AI ethics policy, which sets out rules for researchers regarding the use of generative AI tools such as ChatGPT. Not surprisingly, the rules prohibit authors from including an AI platform as an “author” of any paper or book accepted for publication. In addition, the press requires authors who use AI as part of their research to fully disclose that fact, as well as how the technology was deployed.

Cambridge added these new guidelines in response to requests for guidance from researchers, says Mandy Hill, the managing director for academic publishing at the press.

“In prioritizing transparency, accountability, accuracy and originality, we see as much continuity as change in the use of generative AI for research,” says Hill. “We will continue to work with [researchers] as we navigate the potential biases, flaws and compelling opportunities of AI.”

■ Partners make PACT to address climate change. Two universities in the Asia Pacific region have launched an international research center as part of a joint initiative called the Pacific Action for Climate Transitions (PACT). Fiji National University in Suva and Monash University in Melbourne, Australia, will develop and provide solutions to communities that are most vulnerable to extreme climate events—particularly those in Pacific Island nations. Monash Business School will oversee the partnership. 

The PACT center will conduct research intended to help policymakers seeking to fund and adopt mitigation and adaptation measures. Current projects include those focused on enhancing climate resilience and well-being, designing scalable carbon sequestration contracts, and supporting the effective implementation of Fiji’s Climate Change Act 2021.  

Up to this point, most climate research has focused “on either understanding the science of it, trying to push forward with mitigation strategies, or changing consumer behavior,” says Simon Wilkie, head of the Monash Business School and dean of the Faculty of Business and Economics.

The PACT center, he says, will focus on a different question: “How do economies that have been damaged and shocked by climate change transition going forward?”

In addition to conducting research, the center will provide policymakers with training and capacity building. It also will further current Monash University projects in the region, including its World Mosquito Program and Revitalising Informal Settlements and Their Environments (RISE), an initiative that brings clean water and sanitation to 24 informal settlements in Fiji and Indonesia. 


Send press releases, links to studies, PDFs, or other relevant information regarding new and forthcoming research, grants, initiatives, and projects underway to AACSB Insights at [email protected].

The views expressed by contributors to AACSB Insights do not represent an official position of AACSB, unless clearly stated.