While many organizations approach AI governance as a technical challenge, effective risk management starts with people, argues Skillsoft’s Asha Palmer. By understanding how employees at every level interact with AI in their daily work, companies can build more practical and granular risk frameworks. This employee-centric approach makes AI governance both more accurate and more actionable for the workforce that must implement it.
Artificial intelligence (AI) has quickly reshaped parts of our jobs and the industries we work in. But while AI has only recently become a household term, it has been around for decades, tracing back to the mid-20th century, when the term was coined at a Dartmouth workshop. Since then, AI has powered many of the technologies we use, including facial recognition and digital assistants on our smartphones, personalized recommendations on our favorite streaming services and route suggestions in ride-share apps, most of which have been with us for nearly a decade.
As AI and generative AI (GenAI) have advanced at breakneck speed over the past year, the technology has presented both an opportunity for innovation and increased calls for regulation and governance. Earlier this year, the European Parliament adopted the EU AI Act, landmark legislation that aims to regulate the growing field of AI. The first of its kind, the EU AI Act establishes a risk-based framework to ensure the technology is used ethically and that potential risks are mitigated. While its first provisions won’t apply until February 2025, at that point certain AI systems will be banned, and organizations using AI must ensure their employees are adequately trained in AI literacy.
The EU AI Act, along with new laws popping up around the globe, including in the U.S., is an encouraging start, but it is just the beginning. It sets the precedent for responsible AI development and use. Organizations and leaders must be willing to adapt as best practices evolve and update their AI policies accordingly.
A risk-based framework should connect policy to personnel to make the transition smoother and more effective.
Focusing on people unlocks risk-based approach
The EU AI Act exemplifies a risk-based regulatory approach by categorizing AI applications according to their potential impact on society and individuals. Our internal AI governance and compliance programs should be structured the same way. Embracing risk-based frameworks in compliance is a promising strategy because it addresses each employee’s unique challenges, ensuring solutions are relevant and impactful to the individual’s needs.
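To make the categorization concrete, a compliance team might keep a simple register that maps each AI use case to an assumed risk tier. The sketch below is a rough, hypothetical simplification of the Act’s tiers; the use cases, tier descriptions and register are illustrative, not legal guidance:

```python
from enum import Enum

class AIRiskTier(Enum):
    """A rough simplification of the EU AI Act's risk categories."""
    UNACCEPTABLE = "banned outright"
    HIGH = "strict obligations: assessments, documentation, human oversight"
    LIMITED = "transparency obligations"
    MINIMAL = "no additional obligations"

# Hypothetical internal register mapping AI use cases to assumed risk tiers.
ai_use_case_register = {
    "social scoring of individuals": AIRiskTier.UNACCEPTABLE,
    "CV screening for hiring decisions": AIRiskTier.HIGH,
    "customer-facing chatbot": AIRiskTier.LIMITED,
    "email spam filtering": AIRiskTier.MINIMAL,
}

for use_case, tier in ai_use_case_register.items():
    print(f"{use_case}: {tier.name} ({tier.value})")
```

Even a toy inventory like this forces the useful question: which of our applications sit near the top of the pyramid, and who in the organization touches them?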
By centering programs on the risks employees are likely to face and introduce to the organization, leaders can enhance resilience and foster sustainable practices. This involves understanding each employee’s role, career path, location and relevant legislation, enabling a personalized approach based on assumed risks.
Compliance programs built on risk-based frameworks that prioritize employees’ specific needs can navigate extensive legislative change, empowering employees to understand the risks associated with their responsibilities and how to mitigate them. While privacy regulations like the GDPR, Quebec’s Law 25 and the CCPA have become integral to business operations, the global landscape of proposed AI laws remains complex, with no uniform standards and many U.S. states awaiting federal guidance. Risk-based compliance programs help employees across the corporate spectrum recognize how their roles may be specifically affected.
At its core, a risk-based compliance program must grasp the people, places and responsibilities that drive the business. It’s the roles people perform, where they perform them and their associated responsibilities that introduce risk and need mitigation.
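As a minimal, hypothetical sketch of that idea, a per-employee compliance profile could be derived from role, location and responsibilities. The role names, rules and training modules below are invented for illustration, not a prescription:

```python
# Hypothetical sketch: derive a per-employee compliance profile from the
# role someone performs, where they perform it and what they are
# responsible for. Role names, rules and training modules are invented.

HIGH_RISK_RESPONSIBILITIES = {"hiring decisions", "model deployment", "customer data handling"}

def compliance_profile(role: str, location: str, responsibilities: set[str]) -> dict:
    """Return an assumed risk level and suggested training for one employee."""
    elevated = bool(responsibilities & HIGH_RISK_RESPONSIBILITIES)
    training = ["AI literacy fundamentals"]  # baseline for everyone
    if elevated:
        training.append("Responsible AI for high-impact roles")
    if location in {"EU", "EEA"}:
        training.append("EU AI Act obligations overview")
    return {
        "role": role,
        "location": location,
        "risk": "elevated" if elevated else "baseline",
        "training": training,
    }

# Example: a recruiter in the EU who makes hiring decisions.
print(compliance_profile("recruiter", "EU", {"hiring decisions"}))
```

Even this toy model makes the point: the same policy produces different obligations for a recruiter in the EU than for an engineer elsewhere, which is exactly the granularity a risk-based program needs.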
By adopting risk-based frameworks, organizations can ensure that employees, regardless of position, understand the implications of current and future legislation on their specific roles, promoting a culture of compliance and vigilance.
The future of AI regulation and what comes next
As the AI landscape evolves, categorizing risks becomes complex, with AI systems often straddling different risk levels. The dynamic nature of AI development can quickly shift applications from low- to high-risk, requiring companies to be prepared. Collaboration among regulators, technologists and ethicists is crucial for managing these uncertainties.
Regulatory frameworks and company policies must be regularly reassessed and revised to keep pace with innovation and address emerging threats. Even organizations not directly covered by AI laws can learn from new legislation to develop their own AI policies. These should focus on ethical use, bias, fairness and compliance to navigate the legal, reputational and societal challenges of AI deployment.
If you aren’t already, it’s time to be optimistic about the prospects of a responsible AI ecosystem. The EU’s AI Act served as a guiding light when it first made headlines in March, but we are just at the beginning of a journey that began nearly 70 years ago. Luckily, we’re in the driver’s seat. To keep balancing innovation with compliance, we must ensure continued collaboration among lawmakers, technologists and industry stakeholders. Adapting to emerging challenges and trends will help maintain a landscape where AI can flourish responsibly and ethically.