AI Compliance Checklist
- Adrian Goergen
- Aug 17, 2023
- 2 min read
Updated: Aug 28, 2023
AI can transform businesses and boost productivity, but careless use and missing protocols leave your business vulnerable to lawsuits, data leaks, and reputational damage. To prevent these scenarios, we have developed the following checklist of measures to implement:
☑️ AI Compliance Policy These guidelines set a high-level expectation for how AI tools should be handled in the company. They cover general principles, definitions, acceptable and unacceptable uses, and privacy considerations. All employees using AI tools should accept & sign them. Click here to get a template.
☑️ Introduce an AI Compliance Officer Dedicate a person to ensuring continuous compliance regarding AI tools. Core tasks are:
Create & maintain the company AI policy in collaboration with management / the Board of Directors
Conduct/review the risk assessment for new tools and accept or block their use
Maintain a comprehensive inventory of accepted tools and their intended use
Conduct monthly reviews of changes in regulation
Serve as a contact point for all AI compliance-related requests
Conduct annual training regarding the use of AI tools (can be included in security awareness)
☑️ AI System Introduction Process Before introducing a new tool, conduct a mandatory impact assessment covering how the tool interacts with data, potential risks, and mitigations. Weigh the benefits against the associated risks. The AI Compliance Officer must approve all tools before internal or external use.
☑️ Comprehensive Inventory of AI Tools The list should include each tool's purpose, the data it has access to, who uses it, and any associated risks. Only tools accepted onto this list may be used. Communicate this to employees and encourage open communication about which tools are used to uncover Shadow IT.
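As a minimal illustration, such an inventory can be kept as structured records with an explicit approval flag. The field names and example entries below are assumptions, not a prescribed schema; adapt them to your own policy.

```python
from dataclasses import dataclass, field

@dataclass
class AIToolRecord:
    """One entry in the AI tool inventory (illustrative field names)."""
    name: str
    purpose: str
    data_access: list   # e.g. ["public product info", "source code"]
    users: list         # teams or roles using the tool
    risks: list = field(default_factory=list)
    approved: bool = False  # set by the AI Compliance Officer

# Hypothetical example inventory
inventory = [
    AIToolRecord(
        name="ChatGPT",
        purpose="drafting marketing copy",
        data_access=["public product info"],
        users=["marketing"],
        risks=["prompt data leaves the company"],
        approved=True,
    ),
]

def is_allowed(tool_name: str) -> bool:
    """Only tools on the approved inventory may be used."""
    return any(t.name == tool_name and t.approved for t in inventory)
```

Even a sketch like this makes the "only accepted tools" rule checkable: anything not on the list is blocked by default.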
☑️ Annual or Ad-hoc Risk-Management Reviews, Training & Awareness Providing regular training for employees on the ethical use of AI, recognizing potential issues, and updating them on any new laws and regulations is a key part of creating a compliance-focused culture. The initial risk assessment for AI tools in use should also be reviewed regularly. In case of major changes, ad-hoc training should ensure continuous compliance.
☑️ Declare Whenever AI Is Used for Outside Communication According to the current draft of the EU AI Act, any outside use of AI, for example interacting with customers through a chatbot or generating & editing content (e.g., deepfakes), will need to be declared. We suggest making this common practice already now.
Optional:
☑️ Introduce a Security Layer When Accessing LLM Tools To make sure no customer data, IP, or other confidential data is submitted to third-party AI tools, you can restrict access to those tools so that it happens only through interfaces that remove such data from prompts.
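A minimal sketch of such a layer, assuming a simple pattern-based scrubber: the patterns below (emails, phone numbers, IBANs) are illustrative only and far from a complete data-loss-prevention solution, but they show where a redaction step would sit before a prompt leaves the company.

```python
import re

# Illustrative confidential-data patterns; a real deployment would use a
# proper DLP/PII-detection service rather than a handful of regexes.
PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def redact(prompt: str) -> str:
    """Replace each detected confidential value with a placeholder
    such as [EMAIL] before the prompt is sent to a third-party tool."""
    for label, pattern in PATTERNS.items():
        prompt = pattern.sub(f"[{label}]", prompt)
    return prompt
```

The LLM gateway would call `redact()` on every outgoing prompt, so employees never submit raw confidential data directly to the third-party API.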
☑️ Deploy Models in Your Private Cloud If you need to use sensitive data in your interactions with AI models such as LLMs, we suggest exploring the option of deploying models in your private cloud (e.g., through Azure) to avoid sharing this data with third parties.
Disclaimer: This article was crafted with assistance from ChatGPT.