On Monday, the White House and the National Institute of Standards and Technology (NIST) issued a federal plan to develop technical standards for artificial intelligence (AI). The plan follows up on a mandate contained in President Trump’s executive order on Maintaining American Leadership in Artificial Intelligence, issued last February.
NIST's AI Standardization Plan
The executive order directed NIST to develop a plan that would: "ensure that technical standards minimize vulnerability to attacks from malicious actors and reflect Federal priorities for innovation, public trust, and public confidence in systems that use AI technologies; and develop international standards to promote and protect those priorities."
The plan is meant to bolster AI standards-related knowledge, leadership and coordination among agencies that develop or use AI. It also states that the government should prioritize efforts that are "inclusive and accessible, open and transparent, consensus-based, globally relevant and nondiscriminatory, and that use multiple approaches."
The NIST plan also promotes focused research on the trustworthiness of AI, the support and expansion of public-private partnerships, and increased collaboration with other countries on this matter. It also includes guidelines for federal agencies on how to adopt AI technologies in support of their missions.
The federal government will also help with the standardization of some AI-development tools "to advance the development and adoption of effective, reliable, robust, and trustworthy AI technologies."
These tools include, but are not limited to: data sets in standardized formats, including metadata for training, validation and testing of AI systems; tools for capturing and representing knowledge and reasoning in AI systems; and fully documented use cases and best practices for AI technologies so that others can learn from them.
CCC 20-Year AI Research Roadmap
Last week, the Computing Community Consortium, which is associated with the National Science Foundation, also released a 20-year Community Roadmap for Artificial Intelligence Research in the United States.
This roadmap is based on three main themes: creating an “AI infrastructure” to serve academia, industry and government; training a capable AI workforce; and investing in basic AI research (the type of research that needs to be done for decades before significant results are seen).
The CCC believes that all of these efforts will require substantial investments from the government, but it noted that the results will be transformative.
Also, NIST isn't really the first agency that comes to mind when it comes to minimizing vulnerabilities and increasing public trust and confidence in AI. I'd think you would want an agency like DARPA to take the lead on those initiatives and provide a set of recommendations, which NIST could then take and work with industry to produce the relevant standards.