Leading technologists Professor Genevieve Bell and David Thodey have signed an open letter calling for a national conversation about the need to develop a legal and ethical framework for artificial intelligence in Australia.
Bell, director of the 3A Institute and Intel Senior Fellow, and Thodey,
former CEO of Telstra and chair of the CSIRO, raise sound and pressing concerns.
While the concept of artificial intelligence (AI) may seem like the realm of science fiction, the reality is that our lives are already being shaped by AI in ways that are largely unnoticed and almost completely unregulated.
Consider those annoying ads that pop up online when you search Google. How does Google know that you are in the market for that new shirt?
These ads are just one example of content recommendations – you also get them when you log in to Netflix, Google, your social media feed or even a news website. The algorithms and techniques that sit behind them also drive Facebook and Instagram’s facial recognition capability.
Content recommendations are an application of an AI method known as “machine learning” – essentially a set of algorithms that mine the data you (or your children) provide online, whether through your search queries or your social media posts, to “learn you”.
These algorithms engage with every piece of data you leave online to learn about your movements, your likes and dislikes, your social networks and your emotional triggers.
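To make the idea concrete, here is a minimal sketch of one common recommendation technique: scoring content by its similarity to a profile "learned" from a user's past behaviour. The data and feature names are entirely made up, and real systems at companies like Google are vastly more sophisticated; this only illustrates the principle.

```python
# Toy sketch of similarity-based content recommendation.
# All items, features and profile values here are hypothetical.

def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def cosine(a, b):
    """Cosine similarity: 1.0 means identical direction, 0.0 means unrelated."""
    norm = (dot(a, a) ** 0.5) * (dot(b, b) ** 0.5)
    return dot(a, b) / norm if norm else 0.0

# Each piece of content is described by crude "interest" features:
# [fashion, sport, politics]
catalogue = {
    "new shirt ad":   [1.0, 0.0, 0.0],
    "football score": [0.0, 1.0, 0.0],
    "election story": [0.0, 0.1, 1.0],
}

# A user profile built from past clicks and searches - here, mostly fashion
user_profile = [0.9, 0.1, 0.0]

# Rank the catalogue by similarity to the profile; the top item gets shown
ranked = sorted(catalogue,
                key=lambda item: cosine(user_profile, catalogue[item]),
                reverse=True)
print(ranked[0])  # the shirt ad ranks first for this profile
```

The same basic logic, scaled up to billions of data points and far richer features, is what lets an advertiser guess that you are in the market for that shirt.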
At first glance, features like content recommendation may seem relatively harmless, even helpful. However, when applied in an unregulated environment on a massive scale, their application can have profound social consequences – including undermining democratic processes.
Consider the shifting reality for print media and independent journalism over the past five years.
Newspapers and journalists used to be funded by revenue generated by advertising. Today, that advertising spend has all but disappeared from journalism, instead shifting online to global providers such as Google, Facebook and Amazon.
The dominance of online marketing has all but pulled the rug out from underneath a critical plank of democratic societies – independent journalism – and in its place we are left with a world of content that is curated by algorithms to sell you everything from political parties to shoes.
We are only just beginning to understand the profound impact that active manipulation of the online space had on the US election: the sale of personal data to target political messaging through content recommendations, propaganda, troll factories and fake news.
If Australia is to harness the incredible potential of artificial intelligence while minimising the risks, there are many complex practical and ethical issues that need to be considered.
The issue of bias in algorithms has been relatively well publicised. For example, MIT researchers have shown how leading facial analysis software either misidentifies or cannot “see” faces of people of colour. This reflects the reality that if you “train” a computer program to make decisions based on biased training data, the computer’s own future decision making will reflect these biases.
As algorithms are increasingly applied to pursuits such as “predictive policing”, or to identifying at-risk children, the social implications of bias loom large. Researchers are working on ways to correct for this bias, but for now the “objectivity” of artificial intelligence for decision making cannot be assumed simply because a product has gone to market.
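A tiny worked example shows how a model inherits bias from its training data. The scenario and numbers below are invented for illustration: suppose historical policing records over-represent one suburb, not because more crime occurs there but because it was patrolled more heavily.

```python
# Hypothetical illustration: a naive model trained on skewed data
# simply reproduces the skew.
from collections import Counter

# (suburb, arrest_recorded) pairs; suburb A was patrolled four times as often,
# so it accumulated far more recorded arrests regardless of underlying crime
training_data = ([("A", True)] * 40 + [("A", False)] * 10 +
                 [("B", True)] * 10 + [("B", False)] * 40)

def predict(suburb):
    """'Learn' by majority vote over each suburb's historical records."""
    outcomes = Counter(label for s, label in training_data if s == suburb)
    return outcomes.most_common(1)[0][0]

print(predict("A"))  # True  - the model flags suburb A as high risk...
print(predict("B"))  # False - ...purely because of how the data was collected
```

Nothing in the algorithm is malicious; the bias lives entirely in the data it was trained on, which is why biased training data produces biased decisions.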
The issue of explicability is equally critical for anyone concerned about social justice. Explicability relates to a conundrum: humans have been clever enough to develop algorithms that enable machines to perceive, learn, decide and act on their own, yet to date these same machines are unable to explain their decisions and actions to human users.
As “thinking machines” become increasingly involved in justice systems, or in making decisions on social welfare benefits, the need for explicability is clear.
Leading thinkers have noted that the success or failure of artificial intelligence hinges on the critical issue of trust – trust that facilitates data sharing and the multi-disciplinary collaboration on which these technologies depend. For example, breakthroughs in medical research, such as the development of algorithms to identify skin cancers, depend on dermatologists being willing to share hundreds of photos of skin cancers with digital scientists.
Thoughtful, pragmatic regulation on the use and abuse of data is fundamental to trust. It is hard for the public to understand and support data sharing in a world where Cambridge Analytica and other data breach scandals are front of mind.
While the full impact remains to be seen, the European Union has led the way by recently introducing the General Data Protection Regulation (GDPR), which provides a set of basic principles around what data can be collected, from whom, and when and how it can be used. This includes simple but important innovations. For example, the GDPR introduced rules around when and how companies can collect data from children.
Artificial intelligence is no longer the stuff of science fiction. The power these technologies offer for good is extraordinary. However, the risks are also very real.
Australian consumers, researchers and businesses need to engage in a national conversation about what an informed legal and ethical framework for artificial intelligence in Australia would involve.
The future relies on our willingness to engage with these difficult issues today.