
Early last year, the Office for Product Safety and Standards (the “OPSS”) commissioned the Centre for Strategy and Evaluation Services to carry out a study on the impact of artificial intelligence (“AI”) on product safety. The scope of the study was large, encompassing all manufactured consumer products (except for vehicles, pharmaceuticals and food) and involved the consultation of a number of different stakeholders. The results of the study are contained in a comprehensive report published by the OPSS on 23 May 2022 (the “Report”).
We have outlined the key points below, focusing in particular on the benefits and challenges that AI in consumer products can bring, as well as on whether the current regulatory framework for product safety and liability is sufficient for these types of products.
“Smart” products, “connected” products, and consumer Internet of Things (“IoT”) products – these are all related terms that are used interchangeably with AI, but what does AI really mean? According to the Report, AI is a broad, constantly evolving term which generally points to “machines using statistics to find patterns in large amounts of data” and has “the ability to perform repetitive tasks with data without the need for constant human guidance”. Some examples of AI include voice recognition, facial and image recognition, machine learning and natural language processing.
The Report goes on to identify the key characteristics of AI applications relevant to product safety:
Data needs: AI consumer products require significant amounts of good quality data to function effectively;
Opacity: It is not always clear to a consumer when an AI system is in use and the workings of certain AI consumer products can be opaque; and
Mutability and autonomy: AI systems have the ability to learn and develop over time, instead of relying on explicit instructions, and so they can display autonomy in actions and decision-making.
While there is no doubt that the use of AI in consumer products is on the rise, the study identifies notable differences between the way in which it is used across various product groups. For example, whilst smart speakers commonly use AI (offering features such as speech recognition and voice assistant systems to understand and respond to user requests), domestic appliances are not as advanced in adopting AI into their design – likely due to cost, privacy and awareness barriers.
Despite this, there is little doubt that the use of AI in consumer products will continue to increase over the coming years – particularly as investment and innovation (partly spurred on by the reliance on technology during the COVID-19 pandemic) lead to improvements in both hardware and AI solutions. But what will this mean for product safety and liability?
Objectives
The Report focused on three specific objectives:
Analysing the current and likely future applications of AI in the home, highlighting the advantages and disadvantages for consumers and product safety implications / risks;
Assessing whether the current product safety framework is sufficient for a new generation of products that incorporate AI; and
Examining what factors regulators should consider when responding to the new challenges posed by AI to ensure consumer safety and foster product innovation.
Findings
In relation to product safety, the Report outlines a number of opportunities and challenges that arise when incorporating AI systems into manufactured consumer products.
The Report found that for many existing AI consumer products, the current regulatory framework for product safety and liability and the mechanisms in place to monitor product safety are applicable and sufficient. Having said this, the development of more complex AI systems is likely to mean that gaps in the current UK legislative and regulatory regime become more apparent over the coming years. Some examples include:
Whether AI software is covered by current UK law (including, for example, the General Product Safety Regulations 2005) is unclear, and there is a need to explore what the concept of a ‘product’ really includes;
The introduction of AI in consumer products has resulted in complex supply chains with a number of different economic operators (including software developers) which in turn requires deeper consideration about where responsibility and liability for harms should lie;
The definitions of ‘damage’ and ‘defect’ may also need further consideration, as notions of harm may increasingly include risks with ‘non-physical’ effects – such as damage to personal data or the mental health impacts of products – and not just the physical health and safety effects currently covered;
Focusing on ensuring compliance at the point at which a product is placed on the market may not be sufficient in situations where a product has the potential to change autonomously and over time once in the hands of a consumer; and
Product standards do not currently address the use of AI in consumer products, creating significant challenges for manufacturers, conformity assessment bodies and authorities in understanding what product compliance looks like for these types of products.
While the Report does not explicitly recommend that the UK introduce regulation to fill the gaps identified above, we expect a growing consensus among stakeholders in favour of such regulation in the near future (particularly given the influence that European movements in this area are likely to have on the UK Government). Hogan Lovells is actively monitoring developments in this space; keep an eye out for our future updates.
Authored by Valerie Kenyon, Eshana Subherwal, Vicki Kooner, and Daniel Lee.