Why worry about AI Compliance?
- Adrian Goergen
- Aug 17, 2023
- 4 min read
Updated: Sep 18, 2023

AI is not the future; it's the present.
Multi-purpose generative AI tools are reshaping the way we work, introducing a myriad of opportunities and challenges to companies, regulators and the general public.
Nations around the world are debating how to protect their citizens without stifling innovation.
Companies reluctant to use AI tools will be left behind, while companies that adopt them recklessly expose themselves to existential risks.
Technology is developing at breakneck speed. That’s why now is the time to create a robust framework to harness this pivotal moment in history consciously.
Business Impact of Generative AI
Generative AI will fundamentally change the way we work and live. From automating mundane tasks to unlocking completely novel capabilities, the impact is undeniable.
Research suggests that current tools are already able to almost double the productivity of knowledge workers while increasing the quality of their output.
Embracing AI's potential in an organization is about more than adopting the latest tools. It requires a strategic approach to implementation, alignment with data privacy and regulatory requirements, and an understanding of interrelations and ethical considerations.
AI Compliance is not just a legal issue. It has a significant impact on how your business operates and how your customers view you. It will determine whether your company can grow sustainably in a rapidly changing technological environment while avoiding existential risks.
Regulations: A Global Perspective
Governments are striving to create balanced frameworks that ensure citizen safety without inhibiting technological growth.
EU
The EU has been the pioneer in regulating AI, and the EU AI Act, frequently criticised for its strictness, is expected to pass towards the end of 2023. While implementation may take another two years, it is expected to set a global benchmark, much as GDPR did for data privacy.
The EU follows a risk-based approach with four main levels.
Applications with unacceptable risk are strictly prohibited. Examples include social scoring, biometric identification systems, and emotion recognition systems in selected fields like law enforcement, education, and the workplace.
Use cases classified as high-risk will need to demonstrate responsible data governance, transparency towards the user, and appropriate accuracy. They also need to have a risk management system in place. Examples include tools used in education, medicine, and law, as well as many HR functions.
Applications that don’t threaten fundamental rights, such as non-discrimination, freedom of expression, and protection of personal data, are largely left unregulated.
General-purpose models (e.g. GPT-4) that can solve a variety of tasks are currently expected to face dedicated requirements regarding the data they are trained on and what they are allowed to generate. They will also need to label their outputs as AI-generated. Exceptions will be made for research projects.
US
In contrast, the US is approaching AI regulation through a combination of voluntary guidelines and enforceable state laws.
The Blueprint for an AI Bill of Rights and NIST's AI Risk Management Framework have set the tone with non-binding principles focusing on discrimination protection, data privacy, transparency, and explainability. The FTC has begun enforcing consumer protections around AI, and various state-level initiatives are emerging.
As other countries draft their own regulations, three core principles are emerging:
Transparency (declaring when something was AI generated)
Explainability (being able to show how AI got to a conclusion)
Fairness (ensuring models don’t discriminate)
However, these three principles by no means cover all proposed rules, and current guidelines and regulations remain a work in progress, which is why it is crucial to stay informed as developments unfold.
Employee Education and Fear: Clear Guidelines Needed
The uncertain terrain of generative AI has left many companies frozen with indecision, leading some to outright prohibit its use. This reaction stifles the vast potential that these tools can unleash, including enhancements in efficiency, creativity, and innovation.
The key to unlocking this potential lies in empowering employees with proper guidance, not restraining them with fear and uncertainty. Educating staff through targeted training demystifies the complex world of AI, inspiring confidence instead of concern. This process should not be about merely dictating what's allowed, but rather inspiring a profound understanding of the risks, rewards, and responsible practices associated with these tools.
Companies need to provide not just a list of acceptable tools, but a dynamic guidebook that evolves with the technology. This living document should encompass recommended and permissible tools, clear usage guidelines, expected inputs and outputs, and should be revisited regularly to remain aligned with technological advancements and updated regulation.
The integration of generative AI tools doesn't need to be a leap into the unknown. With thoughtful training, clear guidance, and a well-maintained toolbox, companies can foster a secure environment where innovation flourishes and fear is vanquished.
Compliance Solutions: A Strategic Approach
Data privacy and IP risks have always been concerns with third-party tools, but AI has amplified these risks. The allure for employees to share customer data or IP without proper consideration can lead to significant threats.
Many AI models use this data for training, which may inadvertently expose sensitive information to not only the model providers but also other users.
Compliance is not merely about reacting to regulations. It's about creating a robust framework that includes:
Appointing an AI Compliance Officer
Communicating an AI Use Policy
Conducting Regular Training Sessions
Setting Up an AI System Introduction Process
Maintaining a Dynamic Guidebook of Used & Allowed Tools
If possible: Risk Mitigation through Private Cloud Model Hosting
Conclusion: Act Now, Benefit Tomorrow
AI compliance is not a roadblock; it's a path to responsible and sustainable growth. By embracing compliance today, companies can ensure that they are well-positioned to leverage the incredible opportunities AI offers.
Reach out to us if you're working on a solution for AI Compliance or enabling Enterprises to use AI tools.
Disclaimer: This article was crafted with assistance from ChatGPT.