In their rush to grasp the business benefits offered by artificial intelligence (AI) and machine learning, are companies missing the ethical risks of deploying these technologies? Evidence of that oversight abounds, not least in a recent study of 450 C-suite executives in which AI was viewed almost exclusively through an opportunity lens.
Aside from a mention of cybersecurity as a threat, the report, Maintaining the Human Connection in an Age of AI from the consulting firm A.T. Kearney, found that 50 percent of executives asked about long-term technology adoption selected AI and machine learning as their best opportunity, up from just 27 percent last year.
The absence of reflection on the ethical aspects of AI is all the more striking given that Paul A. Laudicina, founder and chairman of A.T. Kearney’s Global Business Policy Council and co-author of the report, said in a press statement that the results “suggest that the C-suite believes corporate social responsibility is shifting from an optional activity to a central requirement for successful corporate leadership.”
While they recognize new skills are needed in an age of automation, more than 90 percent of executives surveyed in the study do not anticipate a reduction in workforce size as a result of technological displacement—at least not in the next five years.
AI brings new risks along with its opportunities
As they rush toward what A.T. Kearney calls AI’s “game-changing potential,” CEOs would be wise to weave ethical considerations into their AI strategy, said Elaine Weidman Grunewald, co-founder of the AI Sustainability Center, a multidisciplinary hub launched earlier this year to promote responsible and purpose-driven technology.
“AI will deliver undisputed revenue gains and dramatic cost-saving possibilities to companies,” Weidman Grunewald told TriplePundit. “What it will also deliver is a whole host of new risks, such as privacy intrusion, bias and discrimination. AI is different because it is self-learning and self-scaling, yet governance frameworks that address issues like transparency, explainability and accountability are lagging.”
As defined by the Harvard Business Review, “explainability” refers to machine learning techniques that make it possible for human users to understand, appropriately trust and effectively manage AI.
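To make that concrete, below is a minimal sketch of one common explainability technique, permutation feature importance, using scikit-learn. The dataset and model are illustrative stand-ins, not anything drawn from the report.

```python
# A minimal sketch of permutation feature importance: shuffle each input
# feature in turn and measure how much the model's accuracy drops. A large
# drop means the model leans heavily on that feature. The dataset and
# model below are illustrative (scikit-learn's built-in breast cancer data).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Print the five features the model relies on most.
ranked = sorted(zip(X.columns, result.importances_mean),
                key=lambda pair: -pair[1])
for name, score in ranked[:5]:
    print(f"{name}: {score:.3f}")
```

Output like this gives a human reviewer a starting point for asking whether the model is relying on features it should not.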
Without building and maintaining trust in an increasingly transparent era, companies embracing AI “may compromise all the benefits gained,” Weidman Grunewald noted.
Employees are still wary of AI
But while 85 percent of senior executives classify themselves as AI optimists, according to an EY study released in May, the same survey revealed that employees may not be so keen. Respondents cited employee trust (33 percent) as one of the greatest barriers to AI adoption, even though 87 percent of CEOs and business leaders completely or somewhat trust the technology.
One of employees’ more prevalent concerns is what has been called “snooptech”: employers’ use of AI to monitor their workers. As Jeffrey Hirsch, a law professor at the University of North Carolina, Chapel Hill, recently reported in Fast Company, “Lots of workers are under automated surveillance from their employers.” Hirsch noted that such analyses could affect who is hired, fired, promoted or given raises. In addition, some AI programs can mine employee data to predict future actions, such as who is likely to quit their job.
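For illustration, here is a minimal sketch of the kind of attrition model Hirsch describes: a classifier trained on workplace signals to score who might quit. Everything below is synthetic and the monitoring features are hypothetical; it shows the mechanism, not any employer’s actual system.

```python
# A toy attrition predictor trained on synthetic, hypothetical
# monitoring signals; an illustration of the mechanism only.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 1000
X = np.column_stack([
    rng.poisson(9, n),       # badge swipes per week (hypothetical signal)
    rng.poisson(40, n),      # emails sent per day (hypothetical signal)
    rng.uniform(0, 15, n),   # tenure in years (hypothetical signal)
])

# Synthetic ground truth: in this made-up world, shorter tenure and
# fewer badge swipes make quitting more likely.
logits = 1.5 - 0.1 * X[:, 0] - 0.1 * X[:, 2]
y = (rng.random(n) < 1 / (1 + np.exp(-logits))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)

# Each employee now gets a "likely to quit" probability, the kind of
# score that can quietly shape hiring, firing and promotion decisions.
print("Predicted quit probabilities:", model.predict_proba(X_test[:3])[:, 1])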
Can labor laws keep up?
Labor laws in the U.S. and other parts of the world have not kept up with technological advances and are not ready to deal with this new reality. In Sweden, however, the national Public Employment Service already has its eye on the issue. In a three-year partnership with the AI Sustainability Center, the agency is working to ensure that its increasing use of AI in job matching and other services accounts for societal risks early in the process.
“As a public-sector agency, the stakes are high and being a first mover will be crucial in maintaining public trust,” says the AI Sustainability Center’s other co-founder, Anna Felländer.
Both policymakers and companies will have to step up their game in bringing social and ethical considerations into the use of AI. As 3p has reported, a growing number of companies are taking note and acting to mitigate the risks, in the process finding opportunities to offer solutions. Among them are IBM, Microsoft, Google and Accenture, which recently launched an AI Fairness Tool through its new Applied Intelligence practice.
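What such a tool checks can be shown with a minimal sketch: comparing a model’s positive-prediction rate across demographic groups, a measure known as demographic parity. The data below is made up, and this is a generic illustration, not Accenture’s actual AI Fairness Tool.

```python
# A minimal sketch of one check a fairness tool performs: compare
# positive-prediction rates across demographic groups (demographic
# parity). Made-up data; not Accenture's actual tool.
import numpy as np

def demographic_parity_gap(predictions, groups):
    """Largest difference in positive-prediction rates between groups."""
    rates = {g: predictions[groups == g].mean() for g in np.unique(groups)}
    return max(rates.values()) - min(rates.values()), rates

# Hypothetical loan-approval predictions (1 = approve) and group labels.
preds = np.array([1, 0, 1, 1, 0, 1, 0, 0, 0, 1])
groups = np.array(["a"] * 5 + ["b"] * 5)

gap, rates = demographic_parity_gap(preds, groups)
print(f"approval rates: {rates}, gap: {gap:.2f}")
```

A large gap between groups would flag the model for review before deployment.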
“Today all companies have become data driven. Yet there is a huge maturity spectrum when it comes to understanding the risks and pitfalls of AI,” Felländer says. “Maintaining a human connection requires an in-depth understanding of how the use of AI can impact not just the bottom line, but people and society. That impact can be positive or negative, but understanding it enables you to take action and mitigate risks before they occur.”
Image credit: Gerd Altmann/Pixabay
Based in Florida, Amy has covered sustainability for over 25 years, including for TriplePundit, Reuters Sustainable Business and Ethical Corporation Magazine. She also writes sustainability reports and thought leadership for companies. She is the ghostwriter for Sustainability Leadership: A Swedish Approach to Transforming Your Company, Industry and the World. Connect with Amy on LinkedIn and her Substack newsletter focused on gray divorce, caregiving and other cultural topics.