UK-based technology publication TechRadar profiled C3 CEO Thomas M. Siebel in an in-depth interview, describing him as “one of the leading names in AI.” The feature by Catherine Ellis, which explores Siebel’s perspectives on the use of AI for social good and the potential risks of AI across society, also details C3’s technology and applications.
With customers including the U.S. Air Force, Shell, and John Deere, C3 has spent 10 years and a half-billion dollars building an AI platform for industrial-scale applications. C3 applications help reduce greenhouse gas emissions; predict hardware failure for offshore oil rigs, fighter jets, and tractors; and assist banks in preventing money laundering.
Siebel notes that applications such as AI-based precision medicine can bring patients massive benefits. Doctors will be able to analyze health data to predict disease and adverse drug reactions, determine human-specific or genome-specific treatment protocols, and select the optimal pharmaceutical product or combination to treat a specific disease.
Siebel offers an example: if we can accurately predict, for a population the size of the U.S. or the U.K., who is predisposed to develop diabetes in the next five years, we can treat those people clinically now rather than in the emergency room five years from now. “The social and economic implications of that are staggering,” said Siebel.
However, Siebel points out that insurers could use the same information to charge certain patients more or deny coverage to patients with preexisting conditions. “We’re seeing that when it comes to personal identifiable data, corporations are not regulating themselves,” said Siebel, citing Facebook as the most obvious example.
IoT vulnerability is another example of the need for oversight. “There are troubling issues associated with how fragile these systems are, like power systems and banking systems,” Siebel explains. “Electrical power is the bottom of Maslow’s Hierarchy of twenty-first century civilization. All other systems – whether it’s security, food supply, water distribution, defense, financial services – they’re all dependent upon it.”
The role of government in an AI-enabled world is vital, Siebel said: agencies should protect people’s privacy but not attempt to regulate algorithms, which he considers an impossible task.
“The idea that we’re going to have government agencies that are going to regulate AI algorithms is just crazy. When does a computer algorithm become AI? Nobody can draw that line, and if you put some government agency on it, it’s just going to be a big mess. But privacy is something they can protect, and they need to protect,” said Siebel.
Read the full article here.