ANN ARBOR, Mich. (Michigan News Source) – A University of Michigan professor testified before the U.S. Senate about the potential dangers artificial intelligence poses to financial markets, including the possibility of AI learning to manipulate markets without explicit instruction.

“Our existing laws, generally speaking, are written based on the assumption that it is people who make decisions,” Michael Wellman, Richard H. Orenstein Division Chair and Lynn A. Conway Collegiate Professor of Computer Science and Engineering, said in September. “When AI makes the decisions, do our laws adequately ensure accountability for those putting the AI to work?”

Bad actors, Wellman warns, could use AI to manipulate markets or extract financial information. But new research shows that algorithms designed to maximize profit, operating independently, could cause just as many problems.

Megan Shearer, a recent Ph.D. graduate from Wellman’s group, conducted a study demonstrating that a trading algorithm could learn to manipulate markets without any explicit instruction from its developers. If an AI devises its own nefarious tactics, Wellman wondered, could developers escape accountability by claiming they never programmed it to behave that way?

“We need to find ways to define intent in the context of an AI system,” Wellman said. “Unfortunately, it’s hard to know how to improve oversight of AI given the industry’s secrecy and the difficulty predicting what might come next.”

AI could also lead to an information monopoly. Firms with the best data will build the most capable AI, gaining a trading advantage by keeping that information hidden. Wellman says this may pose an insider trading problem that it is not yet clear how to regulate.

As a result, Wellman says there is more research to be done before AI is truly ready for real-world markets.

“Simulations of how AI behaves in test environments that resemble real financial systems can help researchers and regulators better understand an algorithm’s behavior before putting it out into the real world,” Wellman said. “We must tease apart which practices and circumstances help versus hurt—and identify market designs or regulations that promote beneficial practices and deter harmful ones.”