
Responsible AI: How can new technologies respect data privacy?

A key legislative milestone in support of responsible AI use was reached earlier this year, when members of the European Parliament approved the EU’s Artificial Intelligence Act.

The goal? To promote the uptake of human-centric and trustworthy artificial intelligence while supporting innovation.

Key requirements of the act include:

  • Prohibiting use of AI technology for potentially harmful purposes 

  • Varying obligations depending on whether an AI tool is high, limited or minimal risk

  • Transparency requirements around labeling AI content and disclosing training data for providers of Generative AI

As the world’s first comprehensive AI regulation, the landmark legislation will come into force over the next two years, and it’s something every business around the world should be aware of. That’s not just because it impacts any organization that does business within the EU’s 27 member states. It’s also because – much like the GDPR – it will most probably set the stage for similar AI-specific legislation in other regions.

Celonis supports the principles of the Artificial Intelligence Act

As both a user and enabler of AI, Celonis is committed to the responsible development and deployment of the technology. As Anna Rocke, our Director of Privacy, Ethics & Compliance, states:

 “As a SaaS company operating within the European Union, we fully support the principles embodied in the new EU AI Act, which aims to ensure the responsible development and deployment of artificial intelligence technologies. We are committed to these principles, prioritizing transparency, accountability, and the protection of data privacy and data security in all our AI-driven solutions.

“While we respect the goals of the EU AI Act in safeguarding individuals and promoting ethical AI practices, we advocate for the development of practical regulations alongside the responsible advancement of industry, enabling state-of-the-art AI technologies in the EU.”

So, as we start to explore the wider implications of the act, what does responsible AI use really mean in practice? I spoke further with Anna Rocke to discuss the unique data privacy issues that come with using any AI solution, as well as the steps Celonis is taking to ensure AI is used in an ethical and responsible way. 

Responsible AI use revolves around data privacy 

Rocke highlights a variety of ethical questions and data privacy issues that could arise from AI’s ability to process vast amounts of potentially sensitive information and extract insights. She says:  

“Biased or discriminatory outputs occur when the historical data used to train the AI has inherent biases. This usually happens when the data reflects societal biases, errors, or outdated norms.

“There is also the issue of unawareness of data processing by AI, which touches on privacy concerns related to AI technologies. Most people don’t know how extensively AI systems analyze their data, leaving them in the dark about the depth and breadth of personal information these systems access. For instance, AI algorithms can analyze social media activity, online searches, and even personal communications to profile and predict user habits and preferences.

“Lastly, there is the problem that it is extremely difficult to delete personal data: it becomes deeply ingrained into the system during the learning process. AI models aren’t fed data just to process it once and then forget it. The data shapes the algorithm’s understanding, so truly deleting it would mean retraining the whole system. If a user requests to have their personal data deleted, it often requires significant effort to locate and remove that data from complex deep-learning models. This also means there is a longer period during which certain personal data may be at risk of a data breach.”
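The biased-outputs issue Rocke describes can be made concrete with a simple fairness check. As an illustrative sketch only (not a Celonis implementation), the following Python snippet computes the demographic-parity gap – the difference in favorable-outcome rates between groups – on hypothetical classifier predictions. All names and data here are invented for the example:

```python
# Illustrative sketch: one simple fairness metric, the demographic-parity
# gap, computed on hypothetical model predictions. A large gap suggests
# the model treats groups differently and warrants investigation.

def demographic_parity_gap(predictions, groups):
    """Absolute difference between the highest and lowest
    positive-prediction rates across groups."""
    counts = {}
    for pred, group in zip(predictions, groups):
        total, positives = counts.get(group, (0, 0))
        counts[group] = (total + 1, positives + pred)
    rates = {g: p / t for g, (t, p) in counts.items()}
    return max(rates.values()) - min(rates.values())

# Hypothetical outputs of a binary classifier (1 = favorable outcome)
preds = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

# Group "a" receives a favorable outcome 75% of the time, group "b" 25%.
print(f"Demographic parity gap: {demographic_parity_gap(preds, groups):.2f}")
```

Metrics like this only detect one narrow kind of disparity; mitigating bias in practice also requires examining the training data itself, as Rocke notes.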

According to Rocke, general privacy principles have to be applied to AI-enabled systems, since privacy legislation such as the GDPR of course also applies to them. She points out that privacy experts must build up knowledge and understanding of new technologies in order to provide appropriate guidance within their organizations, particularly as technologies such as generative AI become more widely used. Making this connection between data privacy expertise and the use of AI systems is one of the most important steps businesses can take to ensure responsible AI use.

5 key actions for responsible AI use

To ensure AI technologies are used in a way that protects personal information and respects individuals’ privacy rights, Rocke believes a combination of technical advancements, regulations and ethical considerations is required. She sees five key strategies every business should be applying today for responsible AI: 

  1. Using diverse and representative training data

  2. Limiting usage of personal data to what is necessary for the intended purpose

  3. Anonymizing personal data whenever feasible

  4. Implementing techniques to detect and mitigate potential biases

  5. Educating employees involved in AI development about its ethical implications
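Actions 2 and 3 – limiting personal data to what is necessary and anonymizing it where feasible – can be sketched in code. The snippet below is a hypothetical illustration, not a Celonis product feature: it drops fields not needed for the stated purpose and replaces direct identifiers with salted hashes. The record layout and field names are invented, and note that salted hashing is strictly pseudonymization under the GDPR, not full anonymization:

```python
import hashlib

# Hypothetical record; all field names are invented for this example.
record = {
    "name": "Jane Doe",
    "email": "jane@example.com",
    "department": "Finance",
    "invoice_amount": 1200.50,
}

# Data minimization: keep only fields needed for the intended purpose.
NEEDED_FIELDS = {"email", "department", "invoice_amount"}

# Direct identifiers to pseudonymize rather than store in the clear.
IDENTIFIERS = {"email"}

# Placeholder salt; in practice, store and rotate this separately.
SALT = b"example-secret-salt"

def minimize_and_pseudonymize(rec):
    """Drop unneeded fields and replace identifiers with salted hashes."""
    out = {}
    for key, value in rec.items():
        if key not in NEEDED_FIELDS:
            continue  # field not required for the purpose: drop it
        if key in IDENTIFIERS:
            digest = hashlib.sha256(SALT + str(value).encode()).hexdigest()
            out[key] = digest[:16]  # truncated pseudonym
        else:
            out[key] = value
    return out

print(minimize_and_pseudonymize(record))
```

Because the same salt always maps the same identifier to the same pseudonym, records can still be linked for analysis without exposing the identifier itself; deleting the salt breaks that link.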

The Celonis approach to responsible AI

So what is Celonis doing to support the ethical use of AI? 

Rocke explains:

“In a cross-departmental initiative with Legal, Information Security, Data Privacy and Ethics, we developed a governance model for AI to be integrated in our development process. This allows us to build AI systems that are not only technologically advanced but also respectful of human rights. Besides monitoring legal developments and following the general discussion, we are closely listening to our customers to understand their needs specifically related to our services and products.”

Celonis has clear responsible AI principles, which are: 

  • Fairness: Mitigation of potential bias and discriminatory conclusions so that AI systems remain fair to the individuals using, and being impacted by, them.

  • Transparency and explainability: Where applicable and reasonably expected, transparency about and explanation of how the systems work and their potential impacts on users, taking into account users’ roles and expected knowledge.

  • Security and reliability: Provision and adoption of AI in a secure manner, while upholding the required level of performance under a variety of circumstances.

  • Data Privacy: Provision and adoption of AI in compliance with the Celonis Global Privacy Policy to protect personal data and individuals’ rights.

  • Accountability: Evaluating, monitoring and documenting information regarding AI at Celonis.

Now that the EU’s AI Act has been approved, businesses should be on the lookout for updates to the enforcement timeline, and for clarification of which use cases fall into each risk category.

In the meantime, you can learn more about Celonis’ commitment to information security, data privacy and sustainability by visiting our Trust Center.

Bill Detwiler
Senior Communications Strategist and Editor, Celonis Blog

Bill Detwiler is Senior Communications Strategist and Editor of the Celonis blog. He is the former Editor in Chief of TechRepublic, where he hosted the Dynamic Developer podcast and Cracking Open, CNET’s popular online show. Bill is an award-winning journalist who’s covered the tech industry for more than two decades. Prior to his career in the software industry and tech media, he was an IT professional in the social research and energy industries.
