AI need not be malevolent to be dangerous

Marcus Alexander Ong

Sunday, September 20th, 2015

Image by Wendelin Jacober

Hollywood has popularised the notion of a malevolent Artificial Intelligence (AI), from the terrifying Skynet in the Terminator films to the all-powerful Matrix. These dangers have been echoed in recent times by a plethora of notable technologists and academics, including such luminaries as Prof. Stephen Hawking, Bill Gates and Elon Musk. However, an AI need not be malevolent, or even 'intelligent', to have dangerous or even disastrous consequences.

Skynet and the Matrix are examples of generalised AI, which means they can act across a broad range of domains: from manufacturing robots and new AIs, to creating strategies for warfare and developing artificial worlds in which humans may interact. Whilst generalised AIs make for great films, we are probably a few decades away from seeing such advanced systems. However, many people may be surprised to learn that domain-specific AIs are already active across a wide range of industries: suggesting films on Netflix and products at the supermarket, executing trades in the financial markets, diagnosing illnesses and identifying genetic diseases. These more 'mundane' examples are not 'intelligent' in the general sense of the word, and they do not have hidden agendas such as the subjugation of humankind. As I will now explain, they can still pose serious risks.

AIs tend to learn in two ways. The first is supervised learning, where a user pre-defines rules for an algorithm to follow, or gives the algorithm labelled examples from which it can derive its own rules. This is often used in classification tasks such as diagnosing illnesses or estimating user preferences for product recommendations. The second is unsupervised learning, where the user simply inputs data and the algorithm attempts to uncover relationships within it. This can be a very powerful tool, as it does not require the user to have pre-defined knowledge of the problem. The corollary is that the results can be unstable and unpredictable.
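As a concrete illustration, here is a minimal sketch of the two paradigms using Python and scikit-learn. The data is randomly generated purely for illustration, and the choice of models is mine, not the article's:

```python
# Minimal sketch of supervised vs unsupervised learning.
# The data here is randomly generated purely for illustration.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.cluster import KMeans

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))             # 200 observations, 2 features
y = (X[:, 0] + X[:, 1] > 0).astype(int)   # known labels, e.g. 'ill'/'healthy'

# Supervised: the algorithm is given labelled examples and learns a rule.
clf = LogisticRegression().fit(X, y)
print(clf.predict(X[:5]))                 # predicted classes for five cases

# Unsupervised: only the raw data is supplied; the algorithm looks for
# structure on its own, with no guarantee the groups mean what we hope.
km = KMeans(n_clusters=2, n_init=10).fit(X)
print(km.labels_[:5])                     # discovered cluster assignments
```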

If we look at finance, machine learning techniques (another name for AI) have been popular in algorithmic and high-frequency trading. In the simplest implementation, a user pre-defines a particular trading strategy and the algorithm simply executes trades in accordance with these rules; this would be an example of supervised learning. However, it restricts the user to established trading strategies, or ones which s/he can conceive. This is potentially limiting and could cause a firm to focus on 'crowded' trades. It also fails to utilise one of the major strengths of machine learning: its ability to discover non-obvious relationships. This could be overcome by using unsupervised learning, where the algorithm's objective is simply to 'make money'. Unfortunately, there is a significant risk that the algorithm could unwittingly develop illegal trading strategies such as layering. In layering, a trader submits 'fake' orders to give other investors a skewed view of the market, forcing it to move; the trader can then profit from these movements. Alternatively, the algorithm could sell a broad range of highly correlated assets, causing the market to collapse, in order to buy them back at a lower price. Both are acts of market manipulation which could be enacted by a naïve AI without any malevolent intention. Even worse, the user may be completely unaware of the illegality without a detailed (post-trade) examination of their trading books.
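To make the point concrete, consider a toy objective function in Python. Nothing below comes from a real trading system; it simply shows that a profit-only reward contains no term distinguishing legitimate trades from manipulative ones such as layering:

```python
# Toy, purely illustrative objective for a trading agent. The reward is
# realised profit and nothing else, so a learner maximising it has no
# incentive to avoid manipulative tactics such as layering: the
# prohibition simply is not represented in the objective.

def profit_only_reward(trades):
    """Cash received minus cash paid; side is +1 for a buy, -1 for a sell."""
    return sum(-side * qty * price for side, qty, price in trades)

# Buy 100 units at 10.0, sell them at 10.5: the reward is 50.0 whether
# the price move was genuine or induced by layered 'fake' orders.
print(profit_only_reward([(+1, 100, 10.0), (-1, 100, 10.5)]))  # 50.0
```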

Another example of the risks posed by a naïve domain-specific AI is in product pricing. Consider the pricing of flights in the airline industry. Prices are set independently by each airline in order to maximise its profits. The airlines could increase their profits by acting in unison as a cartel. Whilst it is illegal to actively 'price fix', it happens indirectly, to a lesser extent, through the monitoring of competitor prices. It would be quite possible for AIs at different airlines to co-operate via this indirect price-transmission route, as the toy simulation below illustrates. Besides trading and price-fixing laws, AIs could also unintentionally breach data protection and privacy laws, amongst others.
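The following sketch uses invented figures and an invented pricing rule; it shows how two algorithms that each respond only to the rival's last observed fare can ratchet prices upward together without any explicit agreement:

```python
# Toy sketch of tacit algorithmic price co-ordination; all figures are
# hypothetical. Each airline's rule: price slightly above the
# competitor's last observed fare, capped at what the market will bear.
CAP = 500.0  # hypothetical ceiling on what passengers will pay

def respond(competitor_fare):
    """No communication and no explicit cartel, just price monitoring."""
    return min(competitor_fare * 1.02, CAP)

fare_a, fare_b = 100.0, 100.0  # both airlines start at the same fare
for _ in range(100):           # 100 rounds of observing and reacting
    fare_a, fare_b = respond(fare_b), respond(fare_a)

print(fare_a, fare_b)          # both fares have ratcheted up to the cap
```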

The issue here is that a lack of 'intelligence' can lead to dangerous consequences, because naïve algorithms are unable to tell the difference between right and wrong. This has also been depicted by Hollywood in the film 'I, Robot', starring Will Smith. Here the central AI (VIKI) decides that humanity has embarked on a path that will lead to its extinction. Since it has been programmed to protect humans, it decides that the only way to keep them safe is to remove their free will. To defeat VIKI, Will Smith's character requires the help of a more 'intelligent' AI (Sonny), which has free will and can understand human behaviour and emotions.

Artificial Intelligence presents many exciting opportunities, but it also poses fundamental questions about our humanity and carries many risks. These risks are not decades or centuries away; they are present now. We must give careful consideration to the safeguards we put in place and to our ability to control these algorithms. For a more detailed look at these issues, I highly recommend 'Superintelligence' by Prof. Nick Bostrom.

