The rapid advancement of AI poses risks, some real and some exaggerated. But John Villasenor, a faculty co-director of the UCLA Institute for Technology, Law, and Policy and a nonresident senior fellow at the Brookings Institution, argues that the benefits far outweigh the risks.
C3 AI spoke with Villasenor about AI doomsaying, how the emergence of generative AI reminds him of the early days of the commercial internet, the need for businesses to act fast or risk falling behind, and why regulators should tread carefully to avoid stifling innovation.
John Villasenor: The current AI dialogue is dominated by fearmongering. AI, like any technology, will be used for problematic purposes. Take the internet, for example: has it been used for bad things? Of course. But most people would agree that we’re better off with the internet than without it. I think the same goes for AI. The downsides will be far outweighed by the benefits, some that we can anticipate and some that we cannot.
AI has an endless list of applications. Drug discovery in the pharmaceutical industry is a big one. Another application that isn’t talked about enough is cybersecurity. Good cyber defense will require AI because cyberattacks happen at speeds you can’t reasonably defend against without it. And then there’s education. Generative AI could help deliver customized learning in more cost-effective ways. I’ve seen examples of universities trying to ban ChatGPT, and I don’t think that makes any sense. I’ve told my students to use generative AI however they want, but to do so responsibly. If what they turn in is plagiarized, they’ve committed plagiarism.
I’ve sometimes spent hours staring at a blank Word document trying to figure out how to write a few paragraphs. With generative AI and the right prompts, that task can be completed in a few minutes. In many cases the output will still need human refinement, but if generative AI can create a good draft of, say, a press release, a presentation, or a financial filing, companies will be more cost-efficient, which is a competitive advantage. And technology that brings this kind of efficiency is going to generate economic benefits.
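To make that drafting workflow concrete, here is a minimal sketch of asking a generative AI model for a first-draft press release. It assumes the OpenAI Python client with an API key in the environment; the model name, prompt wording, and output handling are illustrative, not prescriptive, and the result is a starting point that still needs human review.

```python
# Minimal sketch: drafting a press release with a generative AI API.
# Assumes the OpenAI Python client (pip install openai) and an API key in the
# OPENAI_API_KEY environment variable. Model choice and prompt text are
# illustrative examples only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

prompt = (
    "Draft a one-page press release announcing our Q3 earnings. "
    "Audience: financial journalists. Tone: factual and concise. "
    "Include a placeholder quote from the CEO and a short 'About' section."
)

response = client.chat.completions.create(
    model="gpt-4o",  # illustrative model choice
    messages=[
        {"role": "system", "content": "You are a corporate communications writer."},
        {"role": "user", "content": prompt},
    ],
)

draft = response.choices[0].message.content
print(draft)  # a first draft to be refined and fact-checked by a human
```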
A thriving AI ecosystem will become an indispensable ingredient of a country’s economic competitiveness. Will some jobs be lost? Yes, but new jobs will emerge. Technology has replaced many of the jobs that existed in the early 1900s, yet our unemployment rate today is not 80%. There were no computer programmers in 1910; today there are millions of them.
If you go back 50 years and compare the technology landscape of 1973 with today’s, it’s unimaginably different. Current college students can reasonably expect to still be in the workforce 50 years from now. So those entering the workforce today need to be agile thinkers, able to identify, respond to, and adapt to the technological disruptions that are coming. They’ll need to be able to distinguish what is pure hype, because not everything presented as revolutionary will be. They need to view these claims with the proper skepticism, but at the same time not be like the company executives in 1996 who believed the internet was just a fad.
I worry that hastily drafted regulation will be over-encompassing and impede opportunities to use AI in innovative ways that are important for economic competitiveness. I’m concerned by the suggestion to create a separate, AI-specific regulatory agency, since there is already a whole thicket of laws and regulations that apply to AI with just as much force as they apply elsewhere. For example, if a company uses an AI algorithm to evaluate job applications and the algorithm discriminates based on a protected characteristic like race or gender, that’s already unlawful under the Civil Rights Act of 1964. It may be that AI creates new issues that fall through the cracks of today’s regulations. In that case, it makes sense to discuss new regulatory or legislative solutions, but always with an eye toward unintended consequences.
If the U.S. impeded people from exploring promising, beneficial AI applications, that would just push innovation overseas. Our higher education system attracts incredible talent from across the world; if we constrain AI research and activity, other countries will gladly welcome those people. That said, if I could push a button and make the U.S. the only country in the world with AI capacity, I wouldn’t want to push it. I think we benefit from a global ecosystem in which AI is worked on and advanced by capable people all over the world. But overly heavy regulation would cost the U.S. its leading role in AI.
Generative AI today is like the internet circa 1996. The internet then was impressive relative to what existed before, but it was nothing compared to what would come 15 or 20 years later. The history of innovation is fascinating because it is often the incumbents who fail to identify and capture the opportunities that later displace them. Some companies that are poised to ride the AI wave will take advantage of it; some will fail to do so for classic innovator’s dilemma reasons. But AI and generative AI will continue to shape the way we work and how we innovate, and businesses that fail to engage with the innovation opportunities they provide risk falling behind.