Artificial Intelligence and Policing in Australia
Dr Teagan Westendorf, Australian Strategic Policy Institute.
For policing agencies, AI is considered a force-multiplying solution not only because it can process more data than human brains conceivably can within required time frames, but also because it can yield data insights that complement the efforts of human teams working on complex analytical problems.
It’s broadly understood that human bias can compromise both policing outcomes and the community trust that enables effective policing.
This can make AI seem like a ‘solution’, but if it’s adopted without an understanding of its limitations and potential errors, it can create new and compounding problems for police. While researchers are fond of analysing ‘human bias’ in systems, the humanity of individual officers also matters greatly to how they do their work and engage with their communities. It’s a strength of the community policing function, not something to be edited out by technology, no matter how powerful the tools or how large the datasets may be.
This insight can help shape how policing works with AI and other new technologies, and how human analysts can prevent coded human bias from running unchecked in AI systems.
The Australian Government is building the necessary policy and regulatory frameworks to pursue the goal of positioning Australia as a ‘global leader in AI’.
Initiatives have included:
• the recent launch of Australia’s AI Action Plan (2021) as part of the Digital Economy Strategy
• the CSIRO Artificial Intelligence Roadmap (2019)
• the Artificial Intelligence Ethics Framework, which includes eight principles designed to ‘ensure AI is safe, secure and reliable’
• over $100 million in investment pledged to develop the expertise and capabilities of an Australian AI workforce and to establish private–public partnerships to develop AI solutions to national challenges.
AI is being broadly conceptualised by the federal government and many private companies as an exciting technological solution that will ‘strengthen the economy and improve the quality of life of all Australians’ by inevitably ‘reshaping virtually every industry, profession and life’.
There’s some truth to that, but how the reshaping occurs depends on choices: for policing, choices about how data and insights are used, and about how direct human judgement and relationships are informed by those technologies rather than discounted and disempowered.
AI development, deployment and monitoring methods and capabilities aren’t always sufficiently understood in either government agencies or client industries.
Key documents all mention that ethics and safety are important, especially in balancing commercial interests and incentives, but none of the government documents cited above mentions how the current limitations of AI technology can compromise those principles of ethical, safe and explainable AI.
Human bias is a key example of this insufficient understanding: AI is often cited as a solution that removes intentional and unconscious biases, but it can in fact learn such biases and propagate them.
Algorithms are trained on historical datasets and then deployed into new ones, meaning that they ‘learn’ the human and other biases encoded in the historical data (itself shaped by human decisions) and apply the learned rules to whatever data environment they’re deployed into. To complicate matters further, currently available AI products can’t render visible all the rules they learned from their training datasets, or all the correlations they continue to learn once deployed on live data.
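To make that mechanism concrete, here’s a minimal sketch (Python with scikit-learn, entirely synthetic data; it reflects no agency’s actual system) of how a model trained on biased historical labels reproduces the bias once deployed:

```python
# Minimal sketch: a classifier trained on synthetically biased
# "historical" records reproduces that bias at deployment.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# The two groups have identical underlying behaviour.
group = rng.integers(0, 2, n)
behaviour = rng.normal(0, 1, n)

# But the historical labels encode over-policing: for the same
# behaviour, group 1 was recorded as offending more often.
p_recorded = 1 / (1 + np.exp(-(behaviour + 1.5 * group - 1)))
recorded_offence = rng.random(n) < p_recorded

X = np.column_stack([behaviour, group])
model = LogisticRegression().fit(X, recorded_offence)

# Deployed on two people with identical behaviour, differing only by group:
print(model.predict_proba([[0.0, 0], [0.0, 1]])[:, 1])
```

Both hypothetical individuals behave identically, yet the model scores the over-policed group as markedly higher risk: the bias lives in the labels, not in the behaviour.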
Even if technical experts argue that it’s the complexity of the algorithms, rather than any inherent functional limit, that prevents comprehensive explainability, and that sufficient staff expertise could overcome it, the practical problem persists: AI correlations, insights and decisions can’t yet be fully explained. The transparency needed to comprehensively understand AI decisions (termed ‘explainability’ in the computer science literature) is not yet available.
We’re not yet able to make AI functionality sufficiently transparent and comprehensible to have confidence in all its applications.
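As a simple illustration of that gap (again synthetic and purely hypothetical), the model below learns an interaction rule; it can report aggregate feature importances, but nothing it exposes enumerates the rules it actually applies to an individual case:

```python
# Sketch of the explainability gap: a tree ensemble's aggregate
# "importances" don't reveal the decision rules it actually learned.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
X = rng.normal(size=(5000, 10))
y = (X[:, 0] * X[:, 1] > 0).astype(int)  # label is an interaction of two features

model = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)

# Global importances flag features 0 and 1 as relevant...
print(np.round(model.feature_importances_, 3))
# ...but they don't expose the interaction rule itself, and the forest's
# learned logic is thousands of nested thresholds, not a readable rule set.
print(sum(t.tree_.node_count for t in model.estimators_), "decision nodes")
```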
When it comes to critical functions that touch individuals’ lives and places in our community, such as policing, this is a deep problem that must be confronted at the design and adoption stage.
It’s problematic that high-level government policy documents often don’t demonstrate a sufficient understanding of the ethical and efficacy limitations of current algorithms in policing scenarios. High-level policy documents usually don’t drill down into particular implementation instances, because determining high-level, guiding policy objectives requires a degree of abstraction. But factors capable of fundamentally compromising and derailing those objectives warrant acknowledgement, to ensure that they’re factored into subsequent policy development on implementation.
If government lacks this understanding when crafting guiding and regulatory frameworks for AI use, there could be major implications for citizens’ rights, even where police officials do understand those limitations. Without a detailed grasp of AI’s strengths and weaknesses, governments will struggle to develop the policy, training guidance and regulatory frameworks for AI in law enforcement and criminal justice that uphold democratic rights and the ethical AI standards Australia mandates.
High-level policy needs to guide the development of detailed implementation policy that takes into account the fact that, because of AI’s limitations, human validation should be integrated with AI decisions and insights wherever human lives and rights are affected. Under the principles of the AI Ethics Framework, it is neither safe nor sufficient to apply human validation belatedly, after a default expectation of problem-free operation.
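In engineering terms, one way to honour that principle is a human-in-the-loop gate in which model output can only ever queue a case for an analyst and never triggers action by itself. The sketch below is illustrative only; the names (`RiskFlag`, `triage`) and the threshold value are hypothetical, not any agency’s design:

```python
# Hypothetical human-in-the-loop gate: the model's score routes a case
# to a human analyst for decision; it never triggers action directly.
from dataclasses import dataclass

@dataclass
class RiskFlag:
    subject_id: str
    score: float      # model output in [0, 1]
    rationale: str    # whatever partial explanation the tool can give

REVIEW_THRESHOLD = 0.7  # assumed value, set by policy, not by the model

def triage(flag: RiskFlag, review_queue: list, audit_log: list) -> None:
    """Route a model output to a human analyst; never act on it directly."""
    audit_log.append(flag)  # every score is retained for later audit
    if flag.score >= REVIEW_THRESHOLD:
        review_queue.append(flag)  # a human decision is required from here
```

The point of the design is that validation happens before action, not after: the threshold only determines what a human reviews first, never what the system does on its own.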
Various types and uses of AI are under consideration.
NSW Police is already using AI to analyse CCTV data. The Face Matching Service program, currently in trial, controversially has access (by virtue of all Australian governments agreeing to participate in the program) to government databases of existing government-issued photo ID documents, enabling comprehensive identity verification that organised crime groups can’t fake.
Queensland Police is already trialling an AI algorithm intended to flag high-risk domestic violence (DV) offenders.
There’s been no public messaging on either the Queensland Police or NSW Police website about how those trials directly incorporate the AI Ethics Framework, nor any mention of it in media interviews about the trials. But there are some immediately obvious, though not comprehensive, efforts to uphold ethical principles. In the NSW trial, ‘privacy protection and security’ are pursued by keeping photo IDs in separate state and territory databases, to which all states and territories have agreed to grant one another access via a central hub, rather than housing all the data in a single database, which would expose all of it to greater cybersecurity risk.
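In greatly simplified form, that hub-and-spoke pattern might look like the sketch below. This is an assumption-laden illustration of the general design described above, not the Face Matching Service’s actual interfaces:

```python
# Illustrative hub-and-spoke verification: records stay in separate
# jurisdiction databases; the hub routes queries and holds no images.
from typing import Protocol

class JurisdictionStore(Protocol):
    """Interface each state or territory database exposes to the hub."""
    def verify(self, document_id: str, photo_hash: str) -> bool: ...

class Hub:
    """Routes verification queries; stores no photo data of its own."""
    def __init__(self, stores: dict[str, JurisdictionStore]):
        self.stores = stores  # e.g. {"NSW": nsw_db, "QLD": qld_db}

    def verify_identity(self, issuing_jurisdiction: str,
                        document_id: str, photo_hash: str) -> bool:
        # Forward the query to the database that issued the document and
        # relay only the match result; images never leave that store.
        return self.stores[issuing_jurisdiction].verify(document_id, photo_hash)
```

Under this arrangement, a breach of the hub yields routing metadata rather than the photo databases themselves, which is the cybersecurity rationale for distributing the data.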
For the Queensland trial, there’s evidence of efforts to include the ethical principles of accountability (police are accountable for, and maintain human oversight of, AI decisions); reliability (the system operates in accordance with its intended purpose of risk assessment); fairness (only known, repeat offenders are assessed by the AI); and human and social wellbeing (via the effort to reduce DV by habitual offenders).
The Queensland Police trial provides a valuable example of how the problems associated with using AI, given its potential to perpetuate bias in application, can be mitigated to a significant extent.
Because proposed law-enforcement applications extend from AI data-processing and matching to decision-making (in the US, for example, deciding whether a person is too high a risk to be eligible for parole; in the Queensland Police example, deciding whether a person is sufficiently high risk to warrant pre-emptive police intervention in the home), developing the policy and regulatory frameworks necessary to guide the ethical, effective and democratically legitimate use of AI algorithms by the public sector in Australia is not only imperative but critically urgent.
Dr Teagan Westendorf is an analyst in the Strategic Policing and Northern Australia programs at ASPI.
The Australian Strategic Policy Institute was formed in 2001 purportedly as an independent, non-partisan think tank. It claims its core aim is to provide the Australian Government with fresh ideas on Australia’s defence, security and strategic policy choices. It is controversial in Australian journalistic circles for its joint funding by the taxpayer-funded Defence Department and weapons manufacturers including Thales and Lockheed Martin.
There has been a long-running legal dispute between ASPI and Australia’s leading investigative news site, Michael West Media.
Artificial intelligence and policing in Australia was released this week. To read or download it in full, go to the ASPI website.