Companies across all industries have arrived at a precarious juncture as AI technology evolves more rapidly than legislators’ ability to rein it in. But that doesn’t mean AI needs to be (or should be) a free-for-all. Kapish Vanvaria and Sarah Y. Liang of EY Americas explore some of the steps businesses can take to head safely into the breach of AI ethics and accountability.
The velocity of technological advancement is accelerating at a pace that defies our traditional mechanisms of adaptation and oversight. The scope of these changes, particularly in the realm of responsible artificial intelligence (AI), remains vast and largely uncharted. As we grapple with the implications of AI, it is becoming increasingly clear that waiting for prescriptive rules and regulations to guide this rapid evolution is not only impractical but also potentially harmful to both society and industry.
Lawmakers around the globe are striving to comprehend the multifaceted impact of AI on specific groups of individuals and on society at large, wrestling with the formidable task of crafting regulations to minimize the risk. However, public policymaking is inherently deliberative and often lags behind the swift currents of technological innovation. As a result, there is growing recognition that companies cannot afford to passively await regulatory clarity. Instead, they must proactively put safeguards in place to enable the responsible deployment of AI technologies.
Current guidelines, such as the White House executive order on AI, the EU’s AI Act, Singapore’s AI governance framework and recent updates to the California Consumer Privacy Act (CCPA) that encompass AI considerations, are examples of reactive policymaking. Some of these guidelines were built with specific use cases in mind and represent a snapshot in time. In this quickly changing environment, a proactive and comprehensive strategy also prepares organizations to manage emerging risks.
To use AI responsibly, organizations should put means and methods in place to create and govern AI across the enterprise, verifying that it is ethical, transparent and accountable and that it ensures fairness and safety for all individuals affected by its use. These means and methods provide measurable and manageable safeguards against regulatory, reputational, business and societal risks. Using AI responsibly creates opportunities for organizations to compete, comply and protect assets in today’s environment and to forge a fair and responsible digital future.
What are leading companies doing?
Many leading tech companies are setting a precedent for corporate self-regulation in the realm of AI. These companies have made significant investments in responsible AI to proactively promote the ethical and responsible development and deployment of AI that aligns with their values and technical standards.
These companies recognize that self-regulation is not just a moral imperative but also a strategic one. By taking the initiative, they are positioning themselves as stewards of responsible AI, minimizing potential costs associated with future regulatory compliance and contributing to shaping a more equitable and sustainable future.
These companies demonstrate that responsible AI is not just a “should do” compliance exercise but a “must do” that drives business and societal value for numerous stakeholders, as outlined below:
- Institutional investors expect companies to demonstrate a thorough understanding of the ethical, legal and societal risks associated with AI adoption. They expect companies to demonstrate how AI initiatives contribute to innovation, efficiency gains and a competitive advantage while minimizing the negative impacts on society and the environment.
- Employees and consumers have a growing expectation for the development and implementation of comprehensive frameworks governing the ethical use of AI, especially related to data privacy, transparency, consent, control, training and job protection. In fact, 81% of employees say AI technology organizations need to self-regulate more and nearly as many (78%) say the government needs to play a bigger role in regulating AI technology, according to a 2023 EY survey.
- Standard setters, governments and regulators are increasingly involved in establishing ethical guidelines and industry standards for AI development and deployment.
- Society at large is expecting companies to take a proactive role in shaping a safe digital future by prioritizing responsible AI practices.
Organizational leaders must consider the specific needs of their stakeholders. Organizations that serve the private sector must integrate AI governance into their business strategies to strengthen customer trust and sustain innovation while adhering to ethical standards. Customer trust is critical to brand reputation and long-term success, and maintaining it requires that AI systems be designed and used in ways that are ethical, transparent and respectful of customer privacy.
Clear policies and procedures, data security measures, transparent AI systems and stakeholder communication are examples of measures that support these efforts. Additionally, in the private sector, there is a strong emphasis on gaining a competitive edge through innovation, so organizations must balance the drive for rapid AI development and deployment with the need to ensure that the technologies are safe, reliable and free of unintended harm.
Public sector-serving organizations must align AI governance with the values of public service, including equity, justice and the protection of public interests. These organizations are held to a higher standard of equity and fairness because they serve diverse populations with varying needs and vulnerabilities. Their AI solutions must not perpetuate or exacerbate existing inequalities, which requires careful design, ongoing monitoring and assessment, and transparent reporting. These organizations also face strong demand for transparency and accountability, meaning that the AI systems they use must be explainable, supported by robust documentation and designed with adequate human-in-the-loop oversight.
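As an illustration only, such documentation and human-in-the-loop expectations can be captured in a lightweight, machine-readable record that travels with each AI system. The sketch below is hypothetical; the record fields and example values are assumptions for illustration, not a standard prescribed by any of the guidelines discussed above.

```python
from dataclasses import dataclass, field
from typing import List


@dataclass
class AISystemRecord:
    """Hypothetical documentation record for an AI system used in public service.

    Captures the explainability, documentation and human-in-the-loop
    expectations described above; all field names are illustrative only.
    """
    system_name: str
    intended_use: str
    affected_populations: List[str]                            # groups the system may impact
    fairness_checks: List[str] = field(default_factory=list)  # e.g., disparate-impact reviews performed
    explanation_method: str = "unspecified"                    # how individual decisions are explained
    human_review_required: bool = True                         # human-in-the-loop before decisions take effect
    documentation_links: List[str] = field(default_factory=list)


# Example: a hypothetical eligibility-screening tool (illustrative values only)
record = AISystemRecord(
    system_name="benefits-eligibility-screener",
    intended_use="Prioritize applications for human caseworker review",
    affected_populations=["benefit applicants"],
    fairness_checks=["quarterly disparate-impact review"],
    explanation_method="reason codes attached to each score",
    human_review_required=True,
)
```

Keeping such a record alongside each system makes it easier to answer transparency and accountability questions consistently, whatever documentation format an organization ultimately adopts.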
Growing with evolving technology
Responsible AI starts with defining the company’s strategy, its North Star.
- Vision: What are the organization’s long-term aspirations and ideal state it wants to reach with the help of AI?
- Mission: What will the organization do or prioritize (specific impact to relevant parties)?
- Values: What are the organization’s values and beliefs when it comes to using AI, and how will they guide the behaviors and decisions of its members? What will the organization do and not do?
- Principles: How will the organization implement its values through responsible AI?
Once the North Star is defined, answering the following questions from both the top down and the bottom up will help companies rightsize their responsible AI governance so they can self-govern without slowing down the business.
- How does our AI strategy reflect our ambition to be industry leaders in responsible AI practices, and what steps are we taking to integrate this into our corporate narrative?
- What groundbreaking AI applications are we exploring to disrupt our industry, and how are we fostering a culture of continuous innovation to maintain our edge?
- How have we ensured the selection of AI providers aligns with our risk management and ethical standards, and what security protocols protect our AI systems?
- What initiatives are we launching to ensure our employees are empowered by AI rather than replaced, and how are we measuring the success of these initiatives?
- How are we balancing the capital allocation between AI development and other strategic investments, and what metrics are we using to track the success of our AI endeavors?
- Are our AI initiatives compliant with data protection and privacy laws, and do we have an incident response plan for AI-related security issues?
- In what ways are we investing in cybersecurity and data privacy to build trust in our AI systems, and how are we communicating this to our customers and partners?
- Does our AI framework provide the necessary flexibility, tools and resources for different project scales, and how do we maintain compliance with evolving regulations without hindering business progress?
Building a risk mitigation strategy for responsible AI is key to advancing effectively in this transformative age. A strategy grounded in responsible AI framework principles enables transparent, manageable and use case-focused development. It is also important to identify risks and mitigation activities throughout the AI development lifecycle so that AI is developed and deployed responsibly. Establishing appropriate governance models to control the AI lifecycle and organize efforts enables effective program management. Defining roles, whether by appointing a chief AI officer or expanding the remits of the organization’s technology leaders, and assigning ownership of priorities to key stakeholders in line with the overall strategy creates a strong organizational structure. This governance must support the growth and development of talent to manage risk and execute the strategy.
Implementing formal governance policies and procedures that align with the overall responsible AI strategy is important. The strategy should clearly outline the ethical application, accountability measures, risk oversight and safeguards over the organization’s assets. These policies should set out the organization’s requirements for risk identification and assessment, mitigation and control measures, incident response, and risk monitoring and reporting. Adequate procedures will govern use case selection, model development and validation, approval workflows, ongoing monitoring, model change management and decommissioning.
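A minimal sketch of how such lifecycle procedures might be encoded as stage gates follows. The stage names and checklist items are illustrative assumptions drawn from the lifecycle described above, not a prescribed framework; each organization would substitute its own policies and sign-offs.

```python
from enum import Enum, auto


class LifecycleStage(Enum):
    """Stages of the AI lifecycle referenced in the procedures above (illustrative)."""
    USE_CASE_SELECTION = auto()
    DEVELOPMENT_AND_VALIDATION = auto()
    APPROVAL = auto()
    MONITORING = auto()
    CHANGE_MANAGEMENT = auto()
    DECOMMISSIONED = auto()


# Hypothetical approval checklist per stage; an organization would tailor these
# to its own requirements (risk assessments, validation evidence, sign-offs, etc.).
STAGE_GATES = {
    LifecycleStage.USE_CASE_SELECTION: ["risk assessment completed", "business owner assigned"],
    LifecycleStage.DEVELOPMENT_AND_VALIDATION: ["validation report reviewed", "bias testing documented"],
    LifecycleStage.APPROVAL: ["risk committee sign-off", "incident response plan in place"],
    LifecycleStage.MONITORING: ["performance thresholds defined", "drift alerts configured"],
    LifecycleStage.CHANGE_MANAGEMENT: ["change impact assessed", "re-validation scheduled"],
    LifecycleStage.DECOMMISSIONED: ["data retention handled", "dependent systems notified"],
}


def gate_passed(stage: LifecycleStage, completed_checks: set) -> bool:
    """Return True only if every required check for the stage has been completed."""
    return set(STAGE_GATES[stage]).issubset(completed_checks)


# Example: a model cannot advance to approval until both approval gates are met.
print(gate_passed(LifecycleStage.APPROVAL, {"risk committee sign-off"}))  # False
```

In practice, these gates would be wired into the organization’s existing approval workflows and monitoring tooling rather than maintained as a hard-coded checklist, but the principle is the same: no model advances to the next stage of the lifecycle until its required controls are evidenced.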
The rapid and accelerating technological growth and innovation fueled by AI demands a proactive approach to responsible AI governance. Companies cannot afford to wait for governments to catch up with the rapid pace of technological change. Instead, they must take the lead in self-regulation, not only because it is the right thing to do but also because it is in their best interest.
Organizations serving the public sector must design and implement measures that promote equity, fairness, transparency and accountability to maintain public trust and uphold public interests. Organizations in the private sector must integrate responsible AI governance to enhance customer relationships and sustain innovation while adhering to ethical standards, mitigating risk while contributing to a positive brand image and a competitive position in the market. By embracing responsible AI, companies can minimize risks, foster trust and contribute to a future where technology serves the greater good. The time to act is now, and the blueprint for self-regulation is clear. It is up to corporate leaders to rise to the occasion and chart a course toward a responsible and sustainable technological future.
Michael Tippett, a senior manager in the EY Risk Consulting practice, contributed to this article.