As reported recently by NextGov ("NIST to Release New Playbook for AI Best Practices"), experts at the National Institute of Standards and Technology (NIST) are developing a new playbook for AI best practices.
A companion to NIST's AI Risk Management Framework, the playbook will help public and private organizations implement the framework and navigate the pervasive biases that often accompany AI technologies.
A prominent flaw of many AI programs to date is systemic bias. While many programmers watch for cognitive and statistical biases, they can easily overlook the conditions that lead a system to generate biased outcomes.
One key safeguard NIST recommends is a strong human-management element behind AI systems. Human management is a vital principle of governing technology: staying mindful of the human influence, recognized or not, on a system so that it is not used in ways its designers never intended.
Reva Schwartz, a research scientist and principal AI investigator at NIST, told NextGov that NIST has been monitoring three types of bias that emerge in AI systems: statistical, systemic, and human.
While bias often enters AI systems through the mathematics of their algorithms, as with statistical bias, Schwartz said the playbook's approach aims to keep other, more human-centric tendencies from distorting AI technologies as well.
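The playbook itself is guidance rather than code, but the kind of statistical bias described here can be made concrete with a small illustration: the "disparate impact" ratio, a common rule-of-thumb check that compares a model's selection rates across two groups. The function names, sample data, and the 0.8 threshold below are illustrative assumptions, not anything prescribed by NIST.

```python
# Illustrative sketch (not from the NIST playbook): measuring one common
# statistical-bias signal, the disparate-impact ratio between the selection
# rates a model produces for two demographic groups.

def selection_rate(outcomes):
    """Fraction of positive (1) outcomes in a list of 0/1 model decisions."""
    return sum(outcomes) / len(outcomes)

def disparate_impact(group_a, group_b):
    """Ratio of the lower selection rate to the higher one.

    A common rule of thumb (the "four-fifths rule") flags ratios below 0.8
    as a possible sign of statistically biased outcomes.
    """
    rate_a, rate_b = selection_rate(group_a), selection_rate(group_b)
    low, high = min(rate_a, rate_b), max(rate_a, rate_b)
    return low / high if high > 0 else 1.0

# Hypothetical model decisions for two groups (1 = approved, 0 = denied).
group_a = [1, 1, 1, 0, 1, 1, 0, 1]   # selection rate 6/8 = 0.75
group_b = [1, 0, 0, 1, 0, 0, 0, 1]   # selection rate 3/8 = 0.375

ratio = disparate_impact(group_a, group_b)
print(f"disparate impact ratio: {ratio:.2f}")  # prints 0.50
```

A check like this captures only the statistical dimension; the systemic and human biases Schwartz describes require the governance and cultural measures discussed next.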
Two recommendations for private and public groups stand out: promoting a healthy governance structure with clear-cut individual roles and responsibilities, and encouraging a professional culture that supports critical thinking and transparent feedback on AI technologies and products.
Alongside its guidance on monitoring bias, the playbook also highlights the positive impacts of AI systems. NIST has long advocated a human-centric design approach to AI, for example, to better meet users' needs and build confidence in the reliable and ethical deployment of AI technologies.