Experts Discuss the Real Risks of Police Technology
Experts from the Information Technology and Innovation Foundation (ITIF), a Washington-based research organization, explored police technology’s advances and risks as outlined in the organization’s recently published report on the issue.
The report, “Police Tech: Exploring the Opportunities and Fact-Checking the Criticisms,” details how technologies like AI and robotics can help police prevent and respond to crime. It notes that opponents of police tech have some legitimate concerns, but argues that outright bans are not the solution. Instead, more research, independent testing and governance rules could help mitigate risks.
“There’s plenty of room for technology to transform public safety and law enforcement,” said panel moderator and ITIF Senior Policy Analyst Ashley Johnson. “So, this is exactly what we have discussed and written about in ITIF’s new report that came out this week.”
During the panel, experts discussed the changing landscape of public safety technology, exploring advances in the capabilities of the tech itself and shifts in the way the public perceives its use.
For example, Boston Dynamics Vice President of Policy and Government Relations Brendan Schulman noted that robots are not new to law enforcement, but what has changed is the athletic capability of newer models like Spot, the company’s robot dog. Schulman said it is Spot’s navigational capabilities that make the tool different for public safety officials, along with the integration of artificial intelligence and automation.
This automation capability does not mean the robot can act independently or with its own intent, but rather that it can be directed to complete a complex task, like “open a doorknob,” without an operator remotely steering it through each step.
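To make that distinction concrete, the sketch below contrasts teleoperation, where a human issues every motion command, with task-level direction, where the operator states an objective and onboard software sequences the steps. This is a hypothetical illustration only; none of the class or method names come from Boston Dynamics’ actual SDK.

```python
# Hypothetical sketch of the distinction Schulman describes: "automation" as
# task-level direction rather than moment-to-moment remote control. All names
# here are invented for illustration and do not reflect any real robot SDK.
from dataclasses import dataclass


@dataclass
class Pose:
    x: float
    y: float


class Robot:
    """Toy robot that can either be teleoperated or handed a named task."""

    def teleop_step(self, vx: float, vy: float) -> None:
        # Teleoperation: the human sends every motion command, continuously.
        print(f"moving with velocity ({vx}, {vy})")

    def run_task(self, task: str, target: Pose) -> None:
        # Task-level autonomy: the operator issues one directive and onboard
        # software plans and sequences the individual steps itself.
        print(f"planning and executing '{task}' at ({target.x}, {target.y})")
        for step in ("walk to target", "locate handle", "grasp", "turn", "pull"):
            print("  subtask:", step)


robot = Robot()
robot.teleop_step(0.5, 0.0)               # human in the loop for every motion
robot.run_task("open door", Pose(12, 3))  # human states intent; robot executes
```

In both cases a person decides what the robot does; the difference is only how much of the execution is delegated to onboard software.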
“So, there’s a significant amount of concern that I think is fictitious,” Schulman said, citing the influence of science fiction portrayals of robots. “But then there’s also, I think, a substantial list of concerns that are real and that the industry and the government should address.”
One of the ways these risks can be addressed, as underlined in the report, is through independent testing and research.
For ShotSpotter, a company whose acoustic surveillance technology uses networked audio sensors to detect gunfire incidents, New York University conducted an independent audit of the technology’s privacy implications, which is publicly available on the company’s website.
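Acoustic gunshot location systems of this kind generally rely on multilateration: comparing when the same impulsive sound reaches several time-synchronized sensors. The sketch below illustrates that geometry with simulated data; ShotSpotter’s actual algorithms are proprietary, and every sensor position and measurement here is invented for illustration.

```python
# Generic illustration of acoustic multilateration: estimate where an
# impulsive sound originated from its arrival times at fixed sensors.
import numpy as np
from scipy.optimize import least_squares

C = 343.0  # approximate speed of sound in air, m/s (varies with temperature)

# Assumed sensor positions (x, y) in meters, purely for illustration.
sensors = np.array([[0.0, 0.0], [500.0, 0.0], [0.0, 500.0], [500.0, 500.0]])


def simulate_arrivals(source: np.ndarray, t0: float = 0.0) -> np.ndarray:
    """Arrival time at each sensor for a sound emitted at `source` at t0."""
    return t0 + np.linalg.norm(sensors - source, axis=1) / C


def residuals(params: np.ndarray, arrivals: np.ndarray) -> np.ndarray:
    """Mismatch between observed and predicted arrival times.

    params = (x, y, t0): candidate source position and unknown emission time.
    """
    x, y, t0 = params
    predicted = t0 + np.linalg.norm(sensors - np.array([x, y]), axis=1) / C
    return predicted - arrivals


true_source = np.array([310.0, 120.0])
arrivals = simulate_arrivals(true_source)

# Solve for the (x, y, t0) that best explains the observed arrival times.
fit = least_squares(residuals, x0=[250.0, 250.0, 0.0], args=(arrivals,))
print("estimated source:", fit.x[:2])  # recovers roughly (310, 120)
```

Real deployments must also contend with classifying gunshots versus other impulsive sounds, sensor noise and clock error, which is part of what independent audits examine.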
“We adopted the recommendations from NYU,” said ShotSpotter’s Vice President of Analytics and Forensic Services Tom Chittum. “And their conclusion was that our technology posed an extremely low risk to individual privacy.”
Technology companies that operate in the public safety space can seek independent assessments, like the one conducted by NYU, to reduce risk or outline their own policies for how the company’s products will be used.
However, as Schulman noted of advanced robotic technologies, there is currently an absence of policy in this space, which he believes amplifies public fears.
“I think what would be really useful — either at the department level, or city level or state level, or perhaps federally in terms of guidance to state authorities — is some type of framework,” Schulman said.
He believes that this type of framework could address the weaponization of robots, the use of cameras and warrant requirements for a robot to enter specific premises. Having previously worked in the drone industry, he said he has seen the impact of such frameworks in addressing and alleviating public concerns.
Chittum believes developing effective public policy begins with open dialogue, running tests and using data to address the questions being raised.
As Chittum detailed, the public expects law enforcement officers to perform their jobs more efficiently, more fairly and with greater transparency. Chittum says technology tools — when used responsibly and with oversight — can help law enforcement officials deliver that kind of service.