Special Feature
Strengthening AI Governance

As we enter the second half of 2025, the popularity of artificial intelligence (AI) shows no signs of abating, with news and industry updates about this emerging technology continuing to dominate headlines. Increasingly, organizations in Hong Kong are exploring ways to integrate AI into their day-to-day operations, aiming to automate workflows and boost productivity.

Nonetheless, enthusiasm for AI is dampened by the challenges of governing its use, ensuring data security, and complying with laws such as the Personal Data (Privacy) Ordinance (the PDPO).

Compliance checks completed by my Office (the PCPD) in May 2025 revealed that 80% of the organizations examined used AI in their daily operations. A recent study published by the Hong Kong Federation of Trade Unions (HKFTU) further showed that nearly 70% of employees did not regularly or proactively disclose their use of Gen AI to their employers, and more than 40% expressed little concern about the liability of exposing or mishandling personal data or confidential information when using Gen AI.

With these trends in mind, the question arises: how can an organization ensure that its employees use AI safely in the ever-evolving digital landscape, leveraging the benefits of the new technology while safeguarding the interests of both the organization and its employees to create a win-win situation?

 

AI Security

As data is the lifeblood of AI, it is abundantly clear that threats to personal data privacy are among the most concerning risks posed by AI. Some encouragement can be drawn from the PCPD’s 2024 survey, which showed that nearly 70% of enterprises recognized significant privacy risks associated with AI use; however, only 28% of them had an AI security policy in place. Clearly, awareness does not equate to action, leaving some organizations vulnerable to AI security risks and their employees uncertain about what is permissible.

 

The PCPD’s New Guidelines

To help organizations and members of the public address the privacy risks brought by the AI tsunami, the PCPD has published a series of guidance materials and leaflets since 2021. In particular, the “Checklist on Guidelines for the Use of Generative AI by Employees” (the Guidelines) was published in March this year to help organizations develop internal policies or guidelines on the use of generative AI (Gen AI) by employees at work (AI policy) while complying with the relevant requirements of the PDPO. The HKFTU study published in May calls, among other things, for organizations to refer to the Guidelines when formulating an internal AI policy.

The Guidelines recommend that organizations take into account the following areas when developing an internal AI policy.

a) Scope of Permissible Use of Gen AI

Organizations should clearly specify the permitted Gen AI tools and define the permissible purposes for using these tools; for instance, whether employees can use these tools for drafting documents, summarizing information or creating textual, audio and/or visual content. Also, to delineate accountability, organizations should specify whether the AI policy applies to the whole organization or only to specific divisions.

b) Protection of Personal Data Privacy

The Guidelines recommend that organizations provide clear instructions on the “inputs” and “outputs” of Gen AI tools. Specifically, the AI policy should set out the permissible types and amounts of information that may be input into Gen AI tools, the permissible purposes for using AI-generated outputs, the permissible means of storing such information, and the applicable data retention policy and other relevant policies with which employees must comply.

c) Lawful and Ethical Use and Prevention of Bias

An organization’s AI policy should specify that employees must not use Gen AI tools for unlawful or harmful activities. Additionally, it should stipulate that employees, as human reviewers, are responsible for verifying the accuracy of AI-generated outputs and for correcting and reporting any biased or discriminatory outputs that may arise. Organizations should also provide instructions on when and how to watermark or label AI-generated outputs.

d) Data Security

To safeguard data security, the Guidelines recommend that an organization’s AI policy should specify which categories of employees are permitted to use Gen AI tools and the types of devices on which their use is permitted. Employees should use robust user credentials and maintain stringent security settings in these tools. They should also be required to report AI incidents in accordance with the organization’s AI incident response plan.

e) Violations of AI Policy

Lastly, organizations should specify to employees the possible consequences of violating the AI policy. For recommendations on establishing a proper Gen AI governance structure and other relevant considerations, organizations can refer to the PCPD’s “Artificial Intelligence: Model Personal Data Protection Framework,” which was published last year.

 

Practical Tips

In addition, the Guidelines provide several practical tips on supporting employees in using Gen AI tools, including (a) enhancing transparency by regularly informing employees of the AI policy and any updates, (b) providing training and resources, (c) assisting employees with a designated support team, and (d) establishing a feedback mechanism for identifying areas for improvement.

 

Commitment to AI Security and Privacy

With the emergence of increasingly powerful and versatile AI models, organizations across various sectors are incorporating these transformative technologies into their operations. It is high time they established internal AI policies that provide clear guidance to employees on the use of Gen AI at work. By having a comprehensive AI policy, organizations can harness the benefits of AI while maintaining personal data privacy, thereby building trust with customers and stakeholders in the digital era.

 

Ada Chung Lai-ling, Privacy Commissioner for Personal Data
