The Trump administration is proposing new rules to guide future federal regulation of artificial intelligence used in medicine, transportation and other industries.
But the vagueness of the principles announced by the White House is unlikely to satisfy AI watchdogs who have warned of a lack of accountability as computer systems are deployed to take on human roles in high-risk social settings, such as mortgage lending or job recruitment.
A document from the White House said that in deciding on regulatory action, U.S. agencies “must consider fairness, non-discrimination, openness, transparency, safety, and security.” The rules won't govern how federal agencies such as law enforcement use AI; they apply only to how agencies devise new AI regulations for the private sector. There's a month-long public comment period before the rules take effect.
“These principles are intentionally high-level,” said Lynne Parker, U.S. deputy chief technology officer at the White House's Office of Science and Technology Policy. “We purposely wanted to avoid top-down, one-size-fits-all, blanket regulations.”
The White House said the proposals unveiled Tuesday are meant to promote private sector applications of AI that are safe and fair, while also pushing back against stricter regulations favored by some lawmakers and activists.
Federal agencies such as the Food and Drug Administration will be bound to follow the new AI principles, which makes the rules “the first of their kind from any government,” said Michael Kratsios, the U.S. chief technology officer, in a call with reporters Tuesday.
Rapid advancements in AI technology have raised fresh concern as computers increasingly take on jobs such as diagnosing medical conditions, driving cars, recommending stock investments, judging credit risk and recognizing individual faces in video footage. It's often not clear how AI systems make their decisions, leading to questions of how far to trust them and when to keep humans in the loop.
Kratsios said he hopes the new principles can serve as a template for other Western democratic institutions, such as the European Commission, which has put forward its own AI ethics guidelines, preserving shared values without burdening the tech industry with “innovation-killing” regulations.
That, he said, is “the best way to counter authoritarian uses of AI” by governments that aim to “track, surveil and imprison their own people.” Over the past year, the Trump administration has sought to penalize China for AI uses the U.S. considers abusive.
The U.S. Commerce Department last year blacklisted several Chinese AI firms after the Trump administration said they were implicated in the repression of Muslims in the country's Xinjiang region. On Monday, citing national security concerns, the agency set limits on exporting AI software used to analyze satellite imagery.