All-knowing robots are not inevitable, and other lessons on AI use in policing

The Toronto Police Services Board (TPSB) recently approved a policy for use of AI technology in policing, which researchers believe is the first in Canada.

Credit: Mohamed Hassan/Pixabay

Why do we need a policy?

While today's ‘robocops’ are more likely to fall into fountains than to emulate Skynet, there are many concerns about AI use in policing, says UBC law professor Kristen Thomasen. From embedding discrimination to eroding privacy, the technology shouldn’t be seen as inevitable – or even beneficial.

What did they get right?

Worrying uses of the technology, including mass surveillance, can be banned. And having a policy at all is useful: the Vancouver Police Department, by contrast, has for some time been using algorithmic policing systems to determine patrol areas without a comprehensive policy like TPSB’s, says Thomasen.

What needs more work?

Just because a tool exists doesn’t mean we should use it. AI has the potential to do great harm, and policies should approach its use in policing with greater caution, she says. The TPSB policy also makes officers individually accountable for actions informed by AI, when the fault could equally lie with the force’s reliance on a flawed system.

Interview language(s): English, French (Thomasen)