Artificial intelligence (AI) will likely prove to be one of the most powerful technologies of the 21st century. There is no question AI has the potential to deliver great value to the enterprise. AI technologies have already helped create several of the largest, most profitable and powerful corporations in human history, including Google, Facebook and Amazon. AI can collect, manage and filter enormous amounts of data to analyze and predict behavior, and it is already reshaping the structure and operations of virtually every business.
“We are at an inflection point right now,” says Anita Schjoll, CEO of Iris.ai. “Suddenly, all of this data, these algorithms and computational power are available. Things have come together.”
But despite its potential for ubiquity, examination of the ethical implications of AI technologies within the enterprise is still in its early stages.
Key Ethical Issues with Enterprise AI
How, and why, does AI make its decisions? The data sets are so large, unique and complex that, beyond a certain point, the function of the algorithm begins to elude even its designers. “It is a problem that is already relevant, and it’s going to be much more relevant in the future,” says Tommi Jaakkola, a professor at MIT who specializes in applications for machine learning.
If its developers don’t know why and how AI is “thinking,” that creates a slippery slope as these algorithms grow more complex. That’s one reason why David Tennenhouse, chief research officer at VMware, believes so strongly in the need for enterprises to deliver what he calls “explainable AI,” which provides “chains of reasoning to prove why its decisions are correct.”
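One common way to make a model’s decisions auditable is to use a form whose output can be decomposed into per-feature contributions. The sketch below is a hypothetical illustration, not VMware’s approach: a toy linear scoring model for a network-security decision, where the feature names, weights and threshold are all invented for the example. The point is that the “chain of reasoning” behind each decision can be printed alongside the decision itself.

```python
# Minimal sketch of an "explainable" decision: a linear model whose score
# decomposes into per-feature contributions. All names, weights and the
# threshold below are hypothetical, chosen only for illustration.

def explain_score(features, weights, threshold=0.5):
    """Score an input and return the decision plus each feature's
    contribution, so the reasoning behind the decision is auditable."""
    contributions = {name: weights[name] * value
                     for name, value in features.items()}
    score = sum(contributions.values())
    decision = "flag" if score >= threshold else "allow"
    return decision, score, contributions

# Hypothetical example: scoring a suspicious login attempt.
features = {"failed_logins": 3, "new_device": 1, "off_hours": 0}
weights = {"failed_logins": 0.15, "new_device": 0.25, "off_hours": 0.10}

decision, score, contributions = explain_score(features, weights)
print(decision, round(score, 2))  # flag 0.7
for name, c in sorted(contributions.items(), key=lambda kv: -kv[1]):
    print(f"  {name}: {c:+.2f}")  # e.g.  failed_logins: +0.45
```

Real deep-learning models are far harder to decompose this way, which is exactly the gap that explainable-AI research aims to close; the sketch only shows what an explanation attached to a decision looks like in the simplest case.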
Another ethical conundrum: what happens when AI is required to make a life-or-death decision? Believe it or not, that question needs to be addressed soon. The era of autonomous vehicles is almost upon us. It is only a matter of time before an algorithm determines who lives and who dies in an unavoidable vehicular collision. When that happens, will the enterprise that developed that algorithm be held legally liable for its decision? What, for that matter, will be the legal status of an AI device?
Finally, there is the issue of how AI and automation will affect jobs and, by extension, the economy. Does the adoption of AI mean putting people out of work? Do we sacrifice human financial well-being for a more profitable AI workforce? We’ve all heard the apocalyptic view that AI-driven machines will outsmart humanity and take over the world. But there is a growing chorus of voices agreeing with a recent report stating that AI will boost economic growth in the UK, creating new jobs as others fall away.
AI Rising in the Enterprise
Against this background, AI technology is steadily progressing into the enterprise. Recent Harvard Business Review and MIT Technology Review surveys highlight how the initial use cases of AI predominantly involve managing machine-to-machine scenarios. As a result, enterprise AI usage focuses primarily on back-office functions that rely heavily on computer-to-computer interactions, such as finance and IT operations.
In fact, IT teams are by far the biggest early adopters of enterprise AI. Network security is the largest single use case in the HBR survey at 44 percent. IT departments also use AI to:
- Resolve user technology problems (41 percent);
- Reduce production management work through automation (34 percent); and
- Monitor compliance (34 percent).
As Bask Iyer, chief information officer of VMware, points out, “The Hollywood version of AI happens about one percent of the time. The best AI use cases, such as eliminating mind-numbing back office operations or predictive analytics, are not always that glamorous.” But as the low-hanging fruit for initial adoption of AI in the enterprise, it could still “make our customers’ lives easier,” he says, “helping us to better understand and anticipate their needs. Ultimately, this will translate into billions in revenue and happier customers.”
Workplace Concerns for AI
There is no doubt that enterprise AI has the potential to revolutionize the workplace, but Iyer argues it will take a “people-first” approach for successful integration. Organizations need to train and educate the workforce to understand the basic capabilities of machine learning. Companies also need to build the talent to work alongside AI “colleagues” that will process data and communicate differently from their human counterparts.
Building trust between human and AI employees will also prove especially important. While machines are generally better at managing other machines, it is not clear how well they will manage, or be managed by, human colleagues. Indeed, Iyer predicts that soon:
“Managers will be judged not by what they’re doing as far as implementing AI, but by how well they’re working alongside a robot or an AI predictive analytics machine.”

Bask Iyer, VMware CIO
For this level of business and technology integration to happen, the human workforce will need to trust the non-human AI—and vice versa.
The benefits of enterprise AI technology in the workplace could enable humans to do what they do best. “Machines are great at matching learned patterns,” says Iyer, “but interpreting patterns for results took humans thousands of years to learn. While the big data that fuels AI technology frees us to innovate and collaborate, the idealist in me would like to believe that it takes human intuition to connect the dots and reveal insights.”
That freedom holds the potential for unprecedented breakthroughs that could transform our lives and solve our most challenging problems.