Responsible Innovation - Why it matters to the insurance sector
The last twelve months have seen a proliferation of legal, regulatory, academic and policy papers on innovation, in particular the use of Artificial Intelligence ("AI"). At the heart of the debate is "responsible innovation", a concept that embeds social principles such as ethics, transparency, trustworthiness and accountability into the development and use of complex technology.
"Responsible innovation" at an international level
There has been considerable activity at the European level on the ethical and legal aspects of AI, including discussions about introducing mandatory insurance for high-risk AI systems and the publication by Insurance Europe of a paper on AI and liability. In the UK, the Information Commissioner's Office published a paper on the use of big data and artificial intelligence, and later this year the UK government will launch a new strategy for the commercialisation, development and adoption of "responsible AI". Further afield, Japan published an AI governance report, the USA expanded the scope and membership of its senior body responsible for overseeing AI, and Singapore concluded the first phase of its work to promote the responsible adoption of AI and data analytics by financial institutions.
UK regulatory approach to "responsible innovation"
The range of global and local stakeholders participating in discussions on "responsible innovation" is vast, and it is difficult to predict what the impact on financial services regulation in the UK might be. In the last few years, the PRA and FCA have focused their efforts on understanding emerging technologies, how firms use them and which firms are actively using them. Both regulators continue to devote significant time and resources to analysing the risks and rewards of digitisation in financial services, including the insurance market.
In 2020, the Bank of England surveyed UK banks to understand how the pandemic affected their use of machine learning and data science, concluding that, together with other regulators, it will take "necessary steps" to support the safe adoption of machine learning and data science in financial services. Delivering fair value in the digital age is one of the FCA's key supervisory priorities for the next few years and will see it target product and service quality, the use of data and algorithms in pricing, and the fair treatment of vulnerable customers. The FCA said it will also engage across industry sectors on AI, focusing in particular on machine learning technology and how to "enable safe, appropriate and ethical use of new technologies".
Where might this lead?
The variety in the design and uses of AI, the lack of consensus on how to define it and the fact that the technology continues to evolve all pose obstacles to regulating AI. Many stakeholders have worked on, or continue to develop, frameworks for the ethical development and regulation of AI across sectors, but such a broad approach is likely to create unintended consequences for some sectors. There are arguments for making regulation technology-specific, but also for taking a sectoral approach and designing protections based on how technology is used by providers of products and services in a particular industry. Given the unique aspects of the insurance market, and in particular insurers' dual role as both users of AI and providers of insurance to other users and developers of AI, it is important that the insurance sector contributes to ongoing debates and helps regulators reach a consensus on what "responsible innovation" means for the sector.
A cross-practice team at Clifford Chance advises financial institutions on AI and data ethics. We can help you stay informed of current discussions and provide advice on legal and regulatory aspects of your AI strategy, including on the implementation of data ethics frameworks.