I’ve been thinking a lot lately about how we manage AI. The IJIS Institute hosted a Justice and Public Safety AI Summit earlier this month in Reston, VA. Like most IJIS events, it attracted an interesting mix of practitioners from solution providers, federal, state, and local agencies, and non-profits, all trying to wrap their heads around the appropriate uses and implications of AI in law enforcement, public safety, and justice. It was a great event; you can see my notes on Threads.
One of the presentations that has really stuck with me was a session on AI Governance by Dennis Chornenky, a former White House senior advisor and now the CEO of Domelabs. With AI evolving so rapidly, I see many organizations and governments shortcutting their traditional IT governance and risk management processes in an effort not to be left behind.
Dennis reminded us that, just like with other technologies, organizations need to take the time to clearly define their AI strategy and policies to make sure they are using AI safely and efficiently. This starts with asking the important questions: “Why are we using AI?”, “How will we oversee safety?”, “What use cases are appropriate?”, and “How will we validate/measure success?”. Then, organizations should assign oversight of these areas to committees for responsible use, ModelOps, and portfolio management, ideally all reporting to an overarching AI governance board.
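To make that a bit more concrete, here is a minimal sketch of how an organization might encode those intake questions and committee sign-offs. It is purely illustrative; the names (AIUseCase, Committee, and so on) are my own invention, not part of any framework Dennis presented.

```python
from dataclasses import dataclass, field
from enum import Enum, auto


class Committee(Enum):
    """Hypothetical review bodies, all reporting to an AI governance board."""
    RESPONSIBLE_USE = auto()
    MODEL_OPS = auto()
    PORTFOLIO = auto()


@dataclass
class AIUseCase:
    """One proposed use of AI, capturing the questions every proposal must answer."""
    name: str
    purpose: str                 # Why are we using AI here?
    safety_oversight: str        # How will we oversee safety?
    success_metrics: list[str]   # How will we validate/measure success?
    approvals: dict[Committee, bool] = field(default_factory=dict)

    def missing_answers(self) -> list[str]:
        """Flag any intake question left blank before review can begin."""
        gaps = []
        if not self.purpose.strip():
            gaps.append("purpose")
        if not self.safety_oversight.strip():
            gaps.append("safety_oversight")
        if not self.success_metrics:
            gaps.append("success_metrics")
        return gaps

    def approved(self) -> bool:
        """A use case proceeds only when every committee has signed off."""
        return all(self.approvals.get(c, False) for c in Committee)


# Example: a (made-up) records-triage pilot moving through the process.
pilot = AIUseCase(
    name="records-triage-pilot",
    purpose="Prioritize incoming records requests",
    safety_oversight="Human review of every AI-ranked queue",
    success_metrics=["median response time", "reviewer override rate"],
)
assert not pilot.missing_answers()
pilot.approvals[Committee.RESPONSIBLE_USE] = True
print(pilot.approved())  # False until ModelOps and portfolio review also sign off
```

The code itself isn’t the point; the point is that every proposed use case answers the same questions and clears the same review gates before it goes anywhere near production.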
If this all sounds familiar, it should: successful organizations have been leveraging IT governance and risk management frameworks for other technologies for many years. As we look to manage our use of AI into the future, it just makes sense not to abandon what we know works.