Compliance doesn’t have to be a solo journey. Join Anna Romberg and Julia Haglind in a collaborative exploration of real-world compliance challenges. Share your experiences and challenges; your issue could be featured in the next Compliance Conversations With Anna Romberg & Julia Haglind.
AI seems like an unstoppable, undeniable force, but is it? Compliance Conversations authors Anna Romberg and Julia Haglind explore the implications of incorporating AI into compliance practices and offer a clear warning: We can’t outsource responsibility to machines.
Thank you to all who have provided input to continue our compliance conversations. Based on your input, it seems that many of us are curious about how to approach AI. In this column, we’d like to share some reflections based on numerous conversations with lawyers, attorneys and compliance specialists. We will explore compliance and AI from three perspectives: the perfect match, the data challenge and the regulatory hurdles.
Many professionals in ethics and compliance, as well as legal teams, are now on the verge of a transformative period as AI is incorporated into their workflows. We notice that many are taking their first tentative steps into the AI jungle, testing everything from tools that draft legal texts to chatbots that answer questions about internal governing documents. The general sentiment is that compliance and AI are a perfect match. At conferences, we share ideas about how AI will, to some extent, revolutionize how we design, implement and monitor our ethics and compliance programs and broader legal work.
Naturally, there is great potential, both in terms of workload and overall efficiency, in automating repetitive tasks: document review, contract analysis, due diligence and compliance checks. Many ethics and compliance teams already use AI to translate policy documents and provide real-time translation of virtual trainings and workshops. By utilizing technology, chief ethics and compliance officers hope to free up time for the team to focus on more strategic and proactive work. We dream about a future where data-driven models give us real-time alerts when an employee is likely to engage in a policy violation, for example by mapping bonus-achievement status against travel patterns, a changed sentiment in correspondence with a particular third party (or perhaps a switch from email to WhatsApp) and customer meetings occurring primarily at restaurants outside working hours.
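To make the dream concrete, alerting of this kind could in principle be sketched as a simple weighted score over such signals. Everything below is hypothetical and purely illustrative: the signal names, weights and threshold are our own inventions, not a real or validated model, and any production system would also need lawfully collected, privacy-compliant data.

```python
# Hypothetical sketch: combining compliance risk signals into a single score.
# All signal names, weights and the alert threshold are illustrative only.

SIGNAL_WEIGHTS = {
    "near_bonus_threshold": 0.4,        # bonus target almost reached at period end
    "unusual_travel_pattern": 0.2,      # travel deviating from historical baseline
    "negative_sentiment_shift": 0.2,    # changed tone toward a particular third party
    "switched_to_private_channel": 0.1, # e.g., a move from email to WhatsApp
    "offsite_evening_meetings": 0.1,    # client meetings mostly at restaurants
}

ALERT_THRESHOLD = 0.6  # illustrative cut-off for raising an alert

def risk_score(signals: dict) -> float:
    """Sum the weights of the signals that are present."""
    return sum(w for name, w in SIGNAL_WEIGHTS.items() if signals.get(name))

def should_alert(signals: dict) -> bool:
    """Raise an alert when the combined score reaches the threshold."""
    return risk_score(signals) >= ALERT_THRESHOLD

# Example: three signals present -> score 0.4 + 0.2 + 0.1 = 0.7, alert raised
example = {
    "near_bonus_threshold": True,
    "unusual_travel_pattern": True,
    "switched_to_private_channel": True,
}
```

Even this toy version shows why the data challenge discussed next matters: if the underlying signals (travel records, correspondence metadata) are incomplete or stale, the score is meaningless.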
But before we can even dream of getting there, we have to be realistic about the data challenge. It can probably be summed up as “garbage in, garbage out,” or more elegantly put: AI tools are only as good as the data they are trained on. If the data used to train these tools is incomplete, outdated or simply unclear, the results are often inaccurate or misleading. Despite reminders from authorities (including the latest from the DOJ) that it is critical we incorporate the use of data into our work, this remains a real challenge for many ethics and compliance teams. The data challenge is not unique to us; it confronts many global organizations in general, and it will be hard work for most to realize the potential of “the perfect match.”
Let’s take internal governing documents as an example. Many have realized the power of making them more accessible (and easier to interpret) for employees. This can contribute to everything from reducing risk to improving compliance and increasing efficiency. So a natural first AI area to explore is chatbots that allow employees to ask questions about the contents of these governing documents, such as “What does PEP mean?” or “Can I accept a gift from a client?”
However, we’ve noticed that it quickly becomes apparent that the quality of the internal governing documents needs to be improved for the chatbot to generate correct and relevant answers. You must also tightly limit the bot’s freedom to “hallucinate” or be too creative in its responses. This, in turn, can lead to the response often being “I’m sorry, I can’t answer that,” which naturally provides limited value for the person asking the question.
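The trade-off between wrong answers and frequent refusals can be illustrated with a minimal, hypothetical sketch of how such a chatbot might decide when to decline. Real systems use embeddings and a language model; here, simple keyword overlap stands in for retrieval, and the example policy snippets and threshold are our own inventions:

```python
# Minimal, hypothetical sketch of a policy chatbot that answers only when it
# finds sufficiently relevant text in the governing documents, and otherwise
# refuses. Keyword overlap stands in for real retrieval, purely to illustrate
# the answer-vs-refuse trade-off.

POLICY_SNIPPETS = [
    "PEP means politically exposed person, someone entrusted with a prominent public function.",
    "Employees may not accept gifts from clients worth more than a nominal amount.",
]

# Minimum number of shared words: raising this reduces wrong answers
# but increases "I can't answer that" refusals.
RELEVANCE_THRESHOLD = 2

def answer(question: str) -> str:
    """Return the best-matching policy snippet, or refuse if none is relevant enough."""
    q_words = set(question.lower().split())
    best_snippet, best_overlap = None, 0
    for snippet in POLICY_SNIPPETS:
        overlap = len(q_words & set(snippet.lower().split()))
        if overlap > best_overlap:
            best_snippet, best_overlap = snippet, overlap
    if best_overlap >= RELEVANCE_THRESHOLD:
        return best_snippet
    return "I'm sorry, I can't answer that."
```

Notably, in this crude sketch “Can I accept a gift from a client?” gets answered, while “What does PEP mean?” falls just below the threshold (“mean” doesn’t match “means”) and is refused: exactly the over-cautious behavior, driven by document quality and matching quality, that frustrates employees.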
So, one of the conclusions many are now drawing is that the first thing they need to do is raise the quality — and clarity — of their documents. If even a well-trained AI bot can’t understand what applies, how are employees supposed to? Additionally, both expertise and resources are required to train the chatbot properly to make it the asset it can be, and few legal and compliance teams have the means to do this.
Finally, we need to face the regulatory uncertainty around evolving AI regulation. To some extent, we can apply the same logic as we have for our data privacy compliance programs: We need to be careful about data types, how data is processed and how the data is ultimately used. The tricky thing about AI is that we may not really know what data is used, how it is processed and how the conclusions are generated. That is why AI is so powerful and so risky at the same time. Monitoring employee behaviors and “sentiment” is already becoming increasingly difficult under current data privacy regimes, and applying AI will not change this.
Responsible and ethical AI is something ethics and compliance teams should talk about and invest time in understanding. As with any regulation, the law will only take us as far as its letter; how it is applied responsibly will always depend on the decision makers, and we always come back to two questions: What type of company do we want to be, and what kind of leader do I want to be? Let’s not outsource our personal and corporate responsibility to AI but instead leverage AI to drive efficiency and support informed decision making.
That said, developments are moving at a rapid pace, and we look forward to exploring how the perfect match realizes its potential — and to seeing how legal, ethics and compliance teams and AI collaborate to drive more responsible organizations and more informed decision-making.