There is a lot of buzz around artificial intelligence these days. In July this year, Pluribus, a poker-playing artificial intelligence (AI), defeated a four-time World Series of Poker champion. While AI systems have been built over the years to beat the best players at checkers, chess, Go and Jeopardy, beating a poker champion is a notable milestone in the progress of AI because of the nature of the game. Unlike the previous games won by AI, poker is a game of hidden information: Pluribus had to know when to bluff, when to call someone else's bluff and when to vary its behaviour. The AI's success is attributed to learning the nuances of poker by playing trillions of hands against itself and evaluating the results.

Pluribus's ability to solve challenges based on hidden information opens up an abundance of possible uses, including cybersecurity, Wall Street trading, and even political negotiations. Within the legal sector, AI tools such as Lex Machina have been used to assist lawyers with their legal strategy, including estimating the chances of winning a case before a particular judge. The broad range of AI capabilities is highly exciting and attractive; however, before these tools are employed on a large scale, it is important to revisit the issues that arise when using and implementing AI.

What is AI?

To put it plainly, AI is the simulation of human intelligence processes by machines, especially computer systems. These processes include learning (acquiring information and rules), reasoning (using rules to reach conclusions) and self-correction.
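
To make those three processes concrete, the sketch below is a deliberately toy example in Python; the data, the threshold rule and the adjustment step are all invented for illustration and do not describe any real AI system.

    # Toy illustration of learning, reasoning and self-correction; all data and rules are invented.
    examples = [(2, False), (3, False), (4, True), (8, True), (9, True), (10, True)]

    # Learning: acquire a simple rule (a threshold) from past examples.
    threshold = sum(value for value, _ in examples) / len(examples)

    def conclude(value, threshold):
        # Reasoning: apply the learned rule to reach a conclusion about a new case.
        return value > threshold

    # Self-correction: compare conclusions against known outcomes and nudge the rule
    # towards the cases it got wrong.
    for value, outcome in examples:
        if conclude(value, threshold) != outcome:
            threshold += 0.5 if not outcome else -0.5

    print(f"learned threshold: {threshold:.1f}")    # 5.5 after one correction
    print(f"conclusion for a new value of 6: {conclude(6, threshold)}")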

Issues with AI

Ethical

  • AI tools can be used to analyse personal data and make inferences. As a result, they can build a detailed profile of an individual even when that individual has provided very little information (a hypothetical sketch of this kind of inference follows this list).
  • This capability may raise red flags with consumers, who worry about AI tools revealing sensitive private information.
  • Companies should consider the implications of their data use, and ensure it is legal, fair, proportionate and just, in order to maintain their consumers' trust.
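
The sketch below is purely hypothetical: the fields, the hard-coded rule and the inferred attribute are invented to show the shape of the problem. Real profiling tools learn such correlations statistically from large datasets rather than hard-coding them.

    # Hypothetical example: inferring an undisclosed fact from a few innocuous data points.
    customer = {
        "postcode": "AB1 2CD",                                # the only details provided
        "purchases": ["pram", "baby formula", "night light"],
    }

    def infer_profile(record):
        profile = dict(record)
        # A correlation learned elsewhere lets the tool guess something the
        # individual never disclosed: here, that they are likely a new parent.
        if any(item in ("pram", "baby formula") for item in record["purchases"]):
            profile["likely_new_parent"] = True
        return profile

    print(infer_profile(customer))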

Bias

  • This is a particularly relevant issue for insurance companies.
  • Bias in AI can arise from training it on potentially discriminatory decisions made by its human predecessors.
  • The AI will then use this potentially biased data to make future decisions.
  • To reduce the risk of discrimination claims, companies will need to audit the data before feeding it to the AI (a minimal audit sketch follows this list).
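
As one illustration of what such an audit might look like, the sketch below checks historical decisions for a large gap in approval rates between groups before the data is used for training. The field names and the 80% rule of thumb are assumptions for illustration, not a legal standard or a complete fairness audit.

    from collections import defaultdict

    # Hypothetical historical decisions; "group" stands in for a protected attribute.
    historical_decisions = [
        {"group": "A", "approved": True},
        {"group": "A", "approved": True},
        {"group": "A", "approved": False},
        {"group": "B", "approved": True},
        {"group": "B", "approved": False},
        {"group": "B", "approved": False},
    ]

    def approval_rates(records):
        counts = defaultdict(lambda: [0, 0])  # group -> [approved, total]
        for record in records:
            counts[record["group"]][0] += record["approved"]
            counts[record["group"]][1] += 1
        return {group: approved / total for group, (approved, total) in counts.items()}

    rates = approval_rates(historical_decisions)
    print(rates)

    # Flag a disparity if any group's approval rate falls below 80% of the highest rate.
    best = max(rates.values())
    flagged = [group for group, rate in rates.items() if rate < 0.8 * best]
    print("groups to review before training:", flagged)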


Copyright

  • AI has the potential to engage in acts of content creation as it mimics human cognition.
  • But if the definition of creation implies consciousness and will, AI works would not qualify for protection.
  • Without copyright protection for AI-created works, there is a concern that investment in AI will decline.
  • To combat this, some countries are considering granting legal subjectivity to AI, thereby protecting AI-produced work.
  • Another less extreme option is to extend the definition of creativity to cover works created by AI. 

As AI programs continue to be developed and their intelligence systems improved, the range of AI-related issues will only broaden. Although it will be exciting to see what challenges AI can overcome next, the issues we have explored here will soon be just the tip of the iceberg.