In today’s Fortune CEO Daily column, Fortune president Alan Murray reflected on the social implications of AI. Murray moderated the 2018 Siebel Scholars conference at Stanford University, hosted by C3.ai CEO and Chairman Thomas M. Siebel.
“I came away energized, and further convinced that the ability to collect ever more data from everyone and everything, and to convert that data into intelligence with ever-better machine learning algorithms, will transform society in ways we have only begun to imagine—both for good and for bad,” he said. “AI will be an unprecedentedly powerful tool—but a tool nonetheless.”
Privacy, bias, job disruption, inequality, government controls, and the weaponization of AI are issues that must be addressed by business, government, and society, noted Murray. These topics generated several key questions for him:
- How do we realize the full positive potential of data-driven intelligence without trampling on individuals’ legitimate rights to control the spread and use of their own data?
- How do we instill our sense of values, ethics, and morality into those algorithms, and purge societal biases reflected in the data that trains them?
- How do we defend against the weaponization of AI by bad actors?
- How do we deal with job market disruption as the nature of work is transformed? All the participants agreed new jobs would replace the old, but worried about the massive retraining needed to survive the transition.
- How do we address the inequality that stems from the winner-take-most nature of the digital technology revolution?
- How can democratic societies concerned with responsible development of AI and use of personal data compete against an authoritarian regime like China, where many of those concerns are shunted aside?
Authors Ajay Agrawal (Prediction Machines) and Deirdre Mulligan (Privacy on the Ground), world chess champion and Human Rights Foundation chairman Garry Kasparov, and Naveen Rao, VP and GM of the Intel AI product group, discussed the potential risks and opportunities of AI. With AI automating tasks and restructuring human jobs, people may be able to spend more time on the things they want to do, like creative pursuits. “AI will unleash creativity,” commented Kasparov. Universities will need to adjust curricula and fields of study to prepare for this new generation of AI. This could mean not only more data science, but also a focus on ethics, values, and creative fields.
In a session on the ‘weaponization of AI,’ authors Max Tegmark (Life 3.0) and Pedro Domingos (The Master Algorithm), and Mark Nehmer, the Director of R&D and Tech Transfer DSE/FVE PEO (Acting) of the Defense Security Service, discussed the dangers of AI without oversight. “When we go to war, we can’t hand over warfare to machines. It still takes wisdom to know whether taking a human life is the right thing to do based on the context,” noted Nehmer. At the same time, “AI is an extension of us. AI makes us more intelligent. Whatever we can do with intelligence, we can do more and better with AI,” said Domingos. “We should also be worried – even paranoid – about AI.”
The need to have robust discussions about the role of AI in changing society is urgent, said Tegmark. “Now is the time to have a candid discussion about this […] rather than telling ourselves ‘everything will be fine,’” he said. “If we can amplify our own intelligence with AI, we can create a future that’s dramatically better than today, for the U.S., China and beyond.”