In April, the European Union published a draft law on artificial intelligence, which would apply to any company with products or services available within the E.U.
The AI Act aims to protect citizens’ rights while still leaving room for innovation.
Benjamin Mueller, Senior Policy Analyst at the Center for Data Innovation, explained at a Chamber webinar on 31 August that regulators around the world are introducing laws to deal with this rapidly developing technology. The European Union has launched a broad range of initiatives, including GDPR on data privacy, but the law on AI is perhaps the most significant of the E.U.’s tech regulations to date, he said.
“The E.U. initiative is very bold and far-reaching,” Mueller said. “Whether it works remains to be seen.”
He noted that the act runs to 150 pages, much longer than similar proposals in other jurisdictions, and is a reflection of how the bloc is set up.
“The E.U. is an administrative body staffed by lawyers, and the temptation therefore is to create these sweeping blueprints for how a market is supposed to operate, and I think it sometimes becomes divorced from the reality of business operations.”
The E.U. act sorts AI systems into three risk categories: prohibited, high risk and low risk. The high-risk category is probably of most concern to the business community, as it covers a broad range of areas including education, vocational training, employment, and essential public and private services.
Many companies will therefore have to adhere to the new technical and regulatory requirements, some of which are fairly onerous, as Mueller explained. For example, AI systems must be traceable and auditable, and must be subject to human oversight at all times.
Complying with the regulations means that innovators inside the E.U. will face higher upfront costs, which could create a competitive advantage for companies outside Europe.
“This means that AI is more likely to be developed outside the E.U., and the E.U. will become a market where people sell AI,” Mueller said.
Marcos Chow, Head of Technology Enablement, KPMG China/Hong Kong, said that the widespread adoption of AI means it is in use across many industries, often by people without an IT background. Regulators are now trying to catch up and govern its use.
He noted that the E.U.’s draft law is one of several in development globally. Singapore, Japan and UNESCO are among those that have also come up with recommendations and guidelines. While policymaking in this area is still in its infancy, the drafts that have been developed so far are all principle-based, which is a good thing, Chow said.
“Every country is at a different stage of enforcing regulations. Regulators are looking at each other’s work, and there is a convergence coming,” he said.
For example, he said, the key guidelines in the E.U.’s proposals overlap with the seven ethical principles for AI in Hong Kong, including transparency, accountability and oversight. So Hong Kong companies will not have to worry about dealing with very different legal requirements if they are doing business in the E.U.
However, Chow added, complying with the new regulations may be more difficult for smaller companies that have fewer people and resources than large businesses.
He said that businesses in Hong Kong should view AI as a cross-functional topic, not something limited to IT or legal departments.
“Hong Kong companies should ensure that they have developed policies and principles on how AI will be used within the organization,” Chow said, adding that these would need to be updated regularly.