Ultimately it will be up to human judgment, not microchip circuits, to harness the power of artificial intelligence properly. Credit: Getty Images/MR.Cole_Photographer

The phenomenon known as artificial intelligence promises solutions to various problems while threatening to deepen others. That view befits a new technology with the capacity to grow smarter and more powerful than any we’ve seen.

Ultimately it will be up to human judgment, not microchip circuits, to harness that power properly. Can we?

Think of the existing positives and negatives of the internet and instant communication, and extrapolate from there. Concerns range from diminished civil liberties and reduced employment to more limited roles for the judgment of medical professionals and, scariest of all, methods of war and violence.

On July 21, representatives of seven leading AI companies in the U.S. appeared with President Joe Biden to broadly pledge new standards. That’s a start. There is a critical role for government regulation.

So far, our real-life glimpses of AI are modest compared to what may lie ahead.

GOOD AND BAD USES

A relatively benign peek: In Suffolk County, a digital government software platform called Granicus uses a combination of AI and human labor to find people who rent homes on Airbnb and other websites. Those property owners are automatically issued notice of the county’s 5.5% per night hotel-motel tax.

A slightly more worrisome glimpse: The growing use of ChatGPT recently caused a widely reported fiasco in a court of law that should serve as a warning. A Manhattan attorney, Steven Schwartz, used the program for legal research and ended up citing bogus cases, fabricated by the chatbot, in his filings. U.S. District Court Judge Kevin Castel said: “The court is presented with an unprecedented circumstance.” The chastened attorney promised there would be no such episodes going forward.

At least that boondoggle could have been avoided by someone checking law books. Other falsehoods might not be so simple to correct, especially if someone is purposely generating them.

New York State, like other governments, signals that it can offer useful oversight. The legislature is setting up a commission to recommend ways to regulate against AI abuses.

Political campaigns fuel special concern. A super PAC promoting GOP presidential candidate Ron DeSantis used an AI version of Donald Trump’s voice in a slick televised ad aired in Iowa attacking the former president. The comments in Trump’s AI-generated voice appeared to be based on a message he posted on his social media site, Truth Social.

Also looming as thorny issues: deepfake photos and videos, especially porn of non-consenting people, and the use of robotics to replace employees.

PROBLEM-SOLVING POTENTIAL

On a parallel track is the rational, problem-solving part of this multifaceted revolution in superior electronic tools.

Earlier this year, Microsoft president Brad Smith spoke at the Vatican. He made the point that even a broom can be used to sweep up or hit someone over the head. As to promise, Smith spoke of how one nonprofit used AI and satellite images in India to show where houses may be most vulnerable to floods and heat. Volunteers fanned out to warn the homeowners, he said, and “share simple steps like covering their roof with a burlap bag to lower the temperature during a heat wave.”

At least that’s a limited solution to one aspect of a rapidly growing climate problem — the kind of program that leading tech thinkers should be prodded and inspired to produce and expand on. But solutions to our most pressing social problems cannot be left solely to for-profit corporations.

AI analysis also holds great promise in diagnosing diseases early. So-called machine learning models can monitor patients’ symptoms and notify doctors of trouble. Data crunched from medical devices can reveal how complicated a course of treatment really is. We’re starting to see it in use. Deep dives and crosschecks of medical records create another exciting frontier. Yet we must ensure that human capital, and the values and traditions of our institutions, are not moved to the margins. AI should aid and guide human intelligence, not replace it.

Sam Altman, chief executive of OpenAI, the company that created ChatGPT, is far from alone in warning that AI use will need tight regulation. It’s a very real concern that as weapons such as drones become “smarter,” they could overrule their human masters as to whom to kill unless closely reined in. Top scientists issued an open letter earlier this year that said: “Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.”

Leaders in the tech industry, government, and other sectors have no choice but to anticipate future problems and abuses and discuss practical ways to head them off. It’s still early enough to think freshly: What practices and policies will keep the tantalizing ascent of AI tilting more to the utopian than the dystopian?

Congress has struggled for years to get a handle on big-tech issues, but the problems seem to elude lawmakers. They desperately need to increase their collective human intelligence to keep up.

No machine, no matter how sophisticated, can select options for us. More than ever, beneficent human vigilance over technology must be the maxim. This could be the governance challenge of a lifetime.

MEMBERS OF THE EDITORIAL BOARD are experienced journalists who offer reasoned opinions, based on facts, to encourage informed debate about the issues facing our community.
