There are major concerns with the safety of AI, including confabulation by black-box models like ChatGPT, the authors write. Credit: AP / Richard Drew

Alvin Chin is a research scientist at the University of Illinois, Discovery Partners Institute, where he participates in the AI Consulting Practice. Lav Varshney is a professor at the University of Illinois Urbana-Champaign and CEO and founder of Kocree Inc. He was recently a White House fellow, serving on the National Security Council staff.

While the Olympics were created to showcase the best of humanity, tech giants including Google, Meta and Microsoft advertised a world where machines took home the gold.

Cue the backlash.

Google’s “Dear Sydney” campaign was pulled early for implying that a chatbot’s letter to U.S. hurdler Sydney McLaughlin-Levrone was superior to the heartfelt expression of a young superfan. The flood of commercials promoting AI left many viewers puzzled, if not freaked out, about the most profound technological development since at least the advent of the internet.

This doesn’t have to be the case.

AI in some form or another has been around for generations. Over the past 18 months, however, it has seen unprecedented adoption and inserted itself into popular culture. Moreover, AI is now projected to be a general-purpose technology with wide-ranging impacts, akin to the steam engine and electricity.

So, how might we accelerate the widespread adoption of AI in a positive way?

We can look to history for signposts. In describing electricity, political scientist Jeffrey Ding noted that it took more than 50 years before key innovations significantly transformed manufacturing and other industrial sectors. Motivated primarily by commercial concerns, both sides in the “war of the currents” between Thomas Edison and George Westinghouse pushed arguments about the dangers of electricity. Edison even orchestrated the use of Westinghouse’s technology to execute a convicted murderer, hoping to coin “to Westinghouse” as a verb for electrocution. Despite the commercial motives, these concerns were not unfounded, and they quickly led to research and development that improved safety. Still, the combination of fear and genuine safety issues with electricity slowed the technology’s adoption.

There are major concerns with the safety of AI, including confabulation by black-box models like ChatGPT, the confident but erratic actions of AI systems in critical infrastructure when faced with unknown situations, and the risk of data poisoning attacks on AI systems by those bent on wreaking mayhem. As with electricity, more research and development to address these problems will help. We believe there is a need for white-box AI systems that are directly human-controllable and human-understandable, and that require very little data to train. Indeed, we have been developing a powerful AI technique called information lattice learning with exactly these properties.

Addressing fears and providing assurance regarding AI safety is a must. The National Institute of Standards and Technology’s AI safety group, in which we participate, has been charged with thinking through AI safety standards. But can we develop an AI workforce that takes safety as seriously as it does innovation? Land grant university systems like ours can play a pivotal role as we develop new courses, curricula and degree programs to meet the AI moment. An AI education that includes safety engineering should not be restricted to AI specialists; rather, it should extend to everyone who learns the technology and carries it into their own industrial or societal sector.

Policymakers and technologists often speak of an AI triad of data, compute and algorithms as the three things necessary to make AI work. Yet this may be too narrow a view, since AI algorithms are incorporated into larger systems as they spread. We contend there should be an expanded AI safety triad that includes AI operations (AIOps), focused on ensuring that the design, deployment and maintenance of AI algorithms are safe. Professional engineers signing off on systems with AI components should demonstrate knowledge of all three legs of the AI safety triad, and education should cover all three.

Safety was not the only obstacle to the adoption of electricity; workforce and organizational adjustments were also needed. Similarly, adopting AI requires more than a growing number of AI specialists. University education, extension and consulting programs offer a promising way to catalyze technology diffusion by strengthening the workforce and helping C-suite executives reorganize. Such education can enhance the ability of organizations and their workforces to integrate AI into their own fields, whether environmental engineering, music composition, or nutritional science.

To catalyze AI adoption across industries, it is also important that the way we develop AI not lean too far toward simply throwing data and compute at particular problems without theoretical foundations for why the resulting systems work. Ensuring there is well-funded research on theoretical abstractions (as in information lattice learning) is crucial to deep understanding.

The research, teaching, and extension functions of a comprehensive university system like ours can hasten AI technology adoption across industrial sectors in safe, transparent and theory-based ways. Such efforts can strengthen our communities, act as engines of state and national economic growth, and boost national competitiveness.
