Axon, previously known as Taser, recently unveiled an AI ethics board to guide the company’s use of artificial intelligence (AI). According to the company, the board will convene twice a year to discuss the ethical implications of upcoming Axon products, particularly how their use may affect community policing. Privacy groups have urged the newly formed board to focus on Axon’s facial recognition technology.
With police forces in both China and the UK testing real-time facial recognition in public, the technology’s use in policing has become a contentious debate. The UK has deployed police CCTV cameras with facial recognition to scan for hooligans at soccer matches, while Chinese police have built the technology into sunglasses to scan travelers at train stations.
Steve Tuttle, Axon’s spokesperson, told The Verge that the company hopes the AI ethics board will help Axon navigate the increasingly troubling possibilities associated with facial recognition. Beyond holding the company publicly accountable, he added, the board will help it formulate a code of AI ethics for the law enforcement space. Tuttle stressed that the primary goal is to earn public trust.
Although Axon says it is not building real-time facial recognition for law enforcement, the technology would fit neatly with the company’s new emphasis on body camera video and analytics. Axon’s CEO Rick Smith has previously asserted that real-time recognition could help in extreme situations such as terrorist manhunts and child abductions.
Axon rebranded from Taser International, taking its new name from its cloud platform, which stores photos and videos captured by body cameras worn by police officers. The platform holds more than 20 petabytes (20 million GB) of data, which Axon says makes it the largest custodian of public safety data not just in the US but worldwide.
Facial recognition algorithms have long struggled with gender and racial bias, producing higher error rates for women and non-white subjects. Although some products have achieved roughly equitable error rates across populations, many algorithms still struggle with the problem. An MIT study published earlier this year found significant racial disparities in algorithms from China’s Megvii, IBM, and Microsoft.
In law enforcement, such error rates carry a severe human cost. Higher false-positive rates for African Americans, for instance, would mean more police stops and arrests; worse, they could raise the risk of officers using force.
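To see why unequal false-positive rates matter at scale, the arithmetic above can be sketched in a few lines. This is a minimal illustration, not a model of Axon’s system: the scan count and per-group rates below are hypothetical placeholders chosen only to show how a rate gap compounds into a gap in wrongful flags.

```python
# Illustrative sketch: how a gap in false-positive rates between two
# demographic groups compounds into a gap in wrongful watchlist matches.
# All numbers here are hypothetical, not measured values.

def expected_false_flags(scans: int, false_positive_rate: float) -> float:
    """Expected number of innocent people incorrectly matched."""
    return scans * false_positive_rate

# Suppose each group is scanned 100,000 times by a watchlist system.
scans = 100_000
flags_group_a = expected_false_flags(scans, 0.001)  # 0.1% FPR
flags_group_b = expected_false_flags(scans, 0.010)  # 1.0% FPR

print(flags_group_a)  # 100.0  wrongful flags for group A
print(flags_group_b)  # 1000.0 wrongful flags for group B
```

A tenfold difference in the false-positive rate produces a tenfold difference in wrongful matches, and each wrongful match is a potential police stop.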
In a statement to The Verge, Tuttle stressed the company’s goal of staying ahead of public concerns about AI, saying Axon is focused on AI algorithms whose capabilities over the next five years are hard to imagine today. He added that there will be a constant flow of communication between Axon and the new board, including emails, phone calls, and biannual reports.