The report, entitled “AI in the UK: ready, willing and able?”, provides comprehensive coverage of critical issues relevant to the development and use of AI in the UK, such as the potential bias in AI systems; the need for AI systems to be intelligible; funding, education and training in the AI sector; and risk mitigation.
The Chairman of the Select Committee, Lord Clement-Jones, presented a summary of the report at a Law Society event entitled “AI and Ethics: plotting a path to unanswered questions” hosted by Hogan Lovells International LLP on 27 April 2018.
Amongst the many recommendations in the report is that a cross-sector ethical code of conduct for organisations developing and using AI should be drawn up and promoted.
The report recognises the significant potential of AI to contribute to economic productivity and for the UK to be among the world leaders in the field of AI, but finds that there are areas of uncertainty which could dissuade investment and potentially hinder uptake of AI by the general population. The report identifies a number of critical risks presented by AI which would need to be mitigated in order to support the development and growth of AI systems, including:
- the potential bias in AI systems and the need to ensure that the data used is truly reflective of diverse populations;
- the security risks associated with the use of personal data;
- the need for AI systems to be transparent and intelligible; and
- the potential for AI to contribute to social inequality.
Regulation of AI
The report considers whether regulation of AI should be introduced as a mechanism to manage these (and other) risks, but concludes that blanket regulation of AI, at this stage, would be inappropriate given the rapid developments being made in AI, the risk of regulation inhibiting innovation, and the difficulties of successfully designing a one-size-fits-all solution. The report concludes that existing sector-specific regulators are at present best placed to consider the impact of AI on their sectors and any subsequent regulation that may be needed. In this respect, the report acknowledges that in some areas existing legislative frameworks may be sufficient; for example, the Data Protection Bill and the GDPR will go a long way towards addressing the concerns associated with the handling of personal data.
However, the report acknowledges that there may be risks associated with AI which are not adequately covered by existing legislation.
One of the suggested solutions is an overarching code to control behaviours associated with the development and use of AI, presumably with the aim that major tech firms and other AI actors sign up to the code on a voluntary basis. The report suggests that, in time, the code could provide the basis for new statutory regulation, if deemed necessary.
As a starting point, the report sets out five overarching principles that would form the basis of the code:
- AI should be developed for the common good and benefit of humanity.
- AI should operate on the principles of intelligibility and fairness.
- AI should not be used to diminish the data rights or privacy of individuals, families or communities.
- All citizens have the right to be educated to enable them to flourish mentally, emotionally and economically alongside AI.
- The autonomous power to hurt, destroy or deceive human beings should never be vested in AI.
The introduction of a cross-sector AI code is likely to be welcomed by a public that is increasingly reliant on AI in many areas of life. As highlighted by Lord Clement-Jones, an effective code of ethics may be a potential measure to improve public trust in AI and to equip the public to challenge its misuse. However, whether such a code has any actual impact on the behaviour of the dominant tech firms and other key actors in the AI field will depend, firstly, on persuading them to sign up, and secondly on their ongoing compliance.
There is no point introducing a voluntary code so onerous that no one is willing to comply. However, at the same time, such codes must have sufficient teeth to be meaningful. The first hurdle will therefore be for critical actors to agree a set of standards. This exercise will be a huge challenge given the range of relevant organisations and institutions involved in the AI space. The risk in emphasising collaboration is that the resulting code is too flimsy to have any effect, whilst a more forceful approach may risk alienating critical players.
Assuming a sensible set of standards is developed, a suitable body will need to be given the role of monitoring and enforcing the code, with sufficient gravitas to make its “seal of approval” worthwhile for signatories, and the power to ensure that signatories toe the line. The report suggests that the Centre for Data Ethics and Innovation could be one such body.
Exactly how monitoring and enforcement of compliance with the code might be undertaken without statutory powers to investigate, and without the threat of criminal or civil sanctions, is the next challenge, and it will be interesting to see how far such voluntary measures are able to go. For example, would the enforcement body have the authority and resources necessary to scrutinise non-open source algorithms to assess whether those algorithms might be producing discriminatory results, or whether the institutions using them have been sufficiently transparent about how those algorithms determine outcomes? The threat of regulation may in itself be sufficient to motivate key institutions to ensure the code has some weight, but at present it seems unlikely that the Government would act on that threat, for all of the reasons outlined in the report and given the uncertainties surrounding Brexit.
We will watch with interest for responses to this report from both the Government and the dominant tech firms, to see whether the proposals are suitably ambitious to be effective at galvanising further action.