Regulating artificial intelligence to promote innovation

Jens-Henrik Jeppesen

Issue Lead for Artificial Intelligence, Digital Economy Committee
Workday
17 Mar 2022

Given its outstanding potential, artificial intelligence (AI) has been called the ‘new electricity’. The technology can improve healthcare, optimise commerce, strengthen energy resilience, enhance employees’ skills and efficiency, and drive human progress overall.

Businesses can use AI technologies across their operations to inform human decisions, streamline job processes and aggregate business data. Governments, too, can use artificial intelligence to design policies, inform cross-sectoral decisions, improve communications with citizens and accelerate public services. For example, Lisbon’s City Hall is using AI to monitor and manage traffic flows, which has reduced response times for emergency medical vehicles.

For artificial intelligence to continue flourishing, it must be governed appropriately. But regulating AI is a difficult balancing act: rules should incentivise innovation, development and deployment across a broad range of areas, all while creating a vibrant market for trustworthy and ethical AI tools.

In Europe, the European Commission has released a proposal for a Regulation on Artificial Intelligence: the AI Act. The proposal is the first of its kind and is likely to set global norms and standards for the development and deployment of AI systems. It will regulate an incredibly broad set of technologies and tools used by companies, citizens and the public sector alike.

The AI Act: Reshaping the Digital Decade

The Commission’s AI Act proposal is carefully crafted, well thought-out and comprehensive, making it a good basis for the legislative process.

For instance, the proposal adopts a much-needed risk-based approach, which categorises usage scenarios along a risk scale and imposes regulatory requirements accordingly. This is crucial: some AI systems pose unacceptable risks while others pose little to no risk, and some are standalone software systems with fundamental rights implications while others are embedded in physical products as health and safety components.

Another essential element of the proposal is the principle of self-assessment, which enables AI providers to comply with the relevant obligations throughout the design and development process. Self-assessment is especially important for the provision of software as a service, where improvements and upgrades are released frequently. A third-party assessment approach would extend time-to-market, as companies would depend on outside assessment bodies to approve each update. By restricting third-party assessment to only a few types of high-risk applications, the AI Act avoids overburdening assessment bodies with cases.

Overall, by setting robust safeguards for AI systems that could pose risks to health, safety or fundamental rights, the legislation can create a prosperous market for reliable and ethical AI systems.

But the AI Act draft also leaves room for improvement. The regulation could be enhanced by tightening the definition of AI itself, which currently includes software and tools not normally associated with AI. Further, the definition of ‘high-risk’ should not inadvertently encompass use scenarios that do not actually produce material risks.

Other challenges in the AI Act arise from the product safety regulatory model chosen by the European Commission: the New Legislative Framework approach. This existing framework, designed to improve market surveillance, is well-suited to AI tools embedded in products such as Internet of Things devices, autonomous cars, robotics or other machinery. However, for many standalone software applications, the main concern is the protection of fundamental rights, not health and safety. For software systems that are constantly updated and improved, a set of process-based rules and requirements would better guide providers of AI systems and the organisations that use these technologies. This would best ensure that fundamental rights concerns are properly addressed when AI systems are developed and used in high-risk scenarios.

Another problem arising from the New Legislative Framework is the AI Act’s assumption that an AI system is handed over to the customer with instructions for use, much like a physical product. In this model, the provider is held responsible for the safety of the product, and the AI Act follows suit: its obligations and requirements fall overwhelmingly on the provider rather than the user. However, in many enterprise and business-to-business use cases, AI systems are deployed and customised under the control of the customer organisation (the user), and it is the user who then determines how the AI system interacts with the data under its control. The allocation of responsibilities in the AI Act should be amended to accommodate this type of deployment scenario.

A vision for the future

The dual objectives behind the EU’s AI strategy – benefiting from the potential of AI while addressing its associated risks – are shared by policymakers in the US and many other countries. Thus, cooperation on artificial intelligence goes beyond Brussels and the AI Act.

Recent international initiatives, such as those promoted by the OECD and the G7, are key to fostering the dialogue that can serve as the basis for global standards and expectations. As leaders of the world economy, the transatlantic partners also have an essential role to play, most notably in the context of the ongoing EU-US Trade and Technology Council (TTC), where AI has been identified as one of the most significant areas for cooperation. These discussions can lead to tangible deliverables and to the common understanding of risk and risk mitigation required for a successful global AI strategy.

If you want to know more about AmCham EU’s position on artificial intelligence, you can read our position paper on the AI Act and our recommendations for the TTC’s Technology Standards Working Group.