Corporate Compliance Insights

8 Questions to Help Rightsize Responsible AI Governance

Proactive AI governance can help build stakeholder trust

by Kapish Vanvaria and Sarah Y. Liang
September 9, 2024
in Governance

Companies across all industries have arrived at a precarious juncture as AI technology evolves more rapidly than legislators’ ability to rein it in. But that doesn’t mean AI needs to be (or should be) a free-for-all. Kapish Vanvaria and Sarah Y. Liang of EY Americas explore steps businesses can take to navigate AI ethics and accountability safely.

The velocity of technological advancement is accelerating at a pace that defies our traditional mechanisms of adaptation and oversight. The scope of these changes, particularly in the realm of responsible artificial intelligence (AI), remains vast and largely uncharted territory. As we grapple with the implications of AI, it is becoming increasingly clear that waiting for prescriptive rules and regulations to guide this rapid evolution is not only impractical but also potentially harmful to both society and industry.

Lawmakers around the globe are striving to comprehend AI’s multifaceted impact on specific groups of individuals and on society at large, wrestling with the formidable task of crafting regulations to minimize the risk. However, public policymaking is inherently deliberative and often lags behind the swift currents of technological innovation. As a result, there is growing recognition that companies cannot afford to passively await regulatory clarity. Instead, they must proactively take safeguarding measures to enable responsible deployment of AI technologies.

Current guidelines, such as the White House executive order on AI, the EU’s AI Act, Singapore’s AI governance framework and recent updates to the California Consumer Privacy Act (CCPA) that encompass AI considerations, are examples of reactive policymaking. Some of these guidelines were built with specific use cases in mind and represent a snapshot in time. In this quickly changing environment, a proactive and comprehensive strategy can prepare organizations to manage emerging risks as well.

To engage in responsible use of AI, organizations should put means and methods in place to create and govern the use of AI across the organization, verifying that it is ethical, transparent and accountable and that it ensures fairness and safety for all individuals affected. These means and methods enable measurable and manageable safeguards against regulatory, reputational, business and societal risks. Using AI responsibly drives opportunities for organizations to compete, comply and protect assets in today’s environment and forge a fair and responsible digital future.

What are leading companies doing?

Many leading tech companies are setting a precedent for corporate self-regulation in the realm of AI. These companies have made significant investments in responsible AI to proactively promote the ethical and responsible development and deployment of AI that aligns with their values and technical standards.  

These companies recognize that self-regulation is not just a moral imperative but also a strategic one. By taking the initiative, they are positioning themselves as stewards of responsible AI, minimizing potential costs associated with future regulatory compliance and contributing to shaping a more equitable and sustainable future.

These companies demonstrate that responsible AI is not just a “should do” compliance exercise but that it is a “must do” to drive business and societal value for numerous stakeholders, as outlined below:

  • Institutional investors expect companies to demonstrate a thorough understanding of the ethical, legal and societal risks associated with AI adoption. They expect companies to demonstrate how AI initiatives contribute to innovation, efficiency gains and a competitive advantage while minimizing the negative impacts on society and the environment.
  • Employees and consumers have a growing expectation for the development and implementation of comprehensive frameworks governing the ethical use of AI, especially related to data privacy, transparency, consent, control, training and job protection. In fact, 81% of employees say AI technology organizations need to self-regulate more and nearly as many (78%) say the government needs to play a bigger role in regulating AI technology, according to a 2023 EY survey.
  • Standard setters, governments and regulators are increasingly involved in establishing ethical guidelines and industry standards for AI development and deployment.
  • Society at large is expecting companies to take a proactive role in shaping a safe digital future by prioritizing responsible AI practices.

Organizational leaders must consider the specific needs of their stakeholders. Organizations that serve the private sector must integrate AI governance into their business strategies to build trusted customer relationships and sustain innovation while adhering to ethical standards. Private-sector companies must continue to build and maintain customer trust, which is critical for their brand reputation and long-term success. This involves ensuring that AI systems are designed and used in ways that are ethical, transparent and respectful of customer privacy.

Clear policies and procedures, data security measures, transparent AI systems and stakeholder communication are examples of measures that advance an organization’s efforts. Additionally, in the private sector, there is a strong emphasis on gaining a competitive edge through innovation. Organizations must balance the drive for rapid AI development and deployment with the need to ensure that the technologies are safe and reliable and do not introduce unintended harm.

Public sector-serving organizations must align AI governance with the values of public service, including equity, justice and the protection of public interests. These organizations are held to a higher standard of equity and fairness as they serve diverse populations with varying needs and vulnerabilities. Their AI solutions must ensure that their technologies do not perpetuate or exacerbate existing inequalities through their design, monitoring assessments and reporting. These organizations also face strong demand for transparency and accountability, meaning that AI systems used must be explainable, supported with robust documentation and designed with adequate human-in-the-loop oversight.


Growing with evolving technology

Responsible AI starts with defining the company’s strategy, its North Star.

  • Vision: What are the organization’s long-term aspirations and ideal state it wants to reach with the help of AI?
  • Mission: What will the organization do or prioritize (specific impact to relevant parties)?
  • Values: What are the organization’s values and beliefs when it comes to using AI, and how will it guide the behaviors and decisions of its members? What will the organization do and not do?
  • Principles: How will the organization implement its values through responsible AI?
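As an illustration only, the four North Star elements above can be captured in a simple structure so they travel with governance tooling and documentation. The field contents below are hypothetical placeholders, not guidance from the authors:

```python
from dataclasses import dataclass, field

@dataclass
class AINorthStar:
    """Illustrative container for a responsible AI 'North Star'.

    Field names mirror the four elements above (vision, mission,
    values, principles); all example text is hypothetical.
    """
    vision: str                   # long-term aspiration and ideal state
    mission: str                  # what the organization will do or prioritize
    values: list[str] = field(default_factory=list)      # beliefs guiding behavior
    principles: list[str] = field(default_factory=list)  # how values are implemented

# Hypothetical example instance
north_star = AINorthStar(
    vision="Deploy AI that customers can verifiably trust",
    mission="Embed fairness and transparency checks in every AI release",
    values=["accountability", "transparency", "privacy"],
    principles=["human-in-the-loop review", "documentation for every AI use case"],
)
```

Writing the North Star down in one shared artifact, rather than leaving it implicit, is what allows the top-down and bottom-up questions that follow to be answered consistently across teams.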

Once the North Star is defined, a top-down and a bottom-up approach to answer the following questions will help companies rightsize their responsible AI governance to self-govern without slowing down the business.

  1. How does our AI strategy reflect our ambition to be industry leaders in responsible AI practices, and what steps are we taking to integrate this into our corporate narrative?
  2. What groundbreaking AI applications are we exploring to disrupt our industry, and how are we fostering a culture of continuous innovation to maintain our edge?
  3. How have we ensured the selection of AI providers aligns with our risk management and ethical standards, and what security protocols protect our AI systems?
  4. What initiatives are we launching to ensure our employees are empowered by AI rather than replaced, and how are we measuring the success of these initiatives?
  5. How are we balancing the capital allocation between AI development and other strategic investments, and what metrics are we using to track the success of our AI endeavors?
  6. Are our AI initiatives compliant with data protection and privacy laws, and do we have an incident response plan for AI-related security issues?
  7. In what ways are we investing in cybersecurity and data privacy to build trust in our AI systems, and how are we communicating this to our customers and partners?
  8. Does our AI framework provide the necessary flexibility, tools and resources for different project scales, and how do we maintain compliance with evolving regulations without hindering business progress?
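One way to make the eight questions above actionable is a periodic self-assessment. The sketch below assumes a simple 0–2 maturity rating per question area; the area names are illustrative shorthand for the questions, not a standard taxonomy:

```python
# Illustrative shorthand for the eight question areas above (Q1-Q8).
AREAS = [
    "strategy_and_narrative",      # Q1: responsible-AI ambition in strategy
    "innovation_culture",          # Q2: groundbreaking applications, culture
    "vendor_and_security",         # Q3: provider selection, security protocols
    "workforce_empowerment",       # Q4: employees empowered, not replaced
    "capital_allocation_metrics",  # Q5: investment balance, success metrics
    "privacy_and_incident_plan",   # Q6: data protection, incident response
    "cyber_trust_communication",   # Q7: cybersecurity, stakeholder comms
    "framework_flexibility",       # Q8: flexibility without hindering business
]

def governance_gaps(scores: dict[str, int], threshold: int = 2) -> list[str]:
    """Return the areas scored below `threshold` (0=absent, 1=partial, 2=mature).

    Unscored areas default to 0, so they always surface as gaps.
    """
    return [a for a in AREAS if scores.get(a, 0) < threshold]

scores = {a: 2 for a in AREAS}
scores["vendor_and_security"] = 1   # example: vendor due diligence only partial
print(governance_gaps(scores))      # ['vendor_and_security']
```

Repeating such an assessment quarterly gives leadership a concrete trend line for the "rightsizing" the questions are meant to support.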

Building a risk mitigation strategy for responsible AI is key to advancing effectively in this transformative age. A strategy grounded in responsible AI framework principles enables transparent, manageable and use case-focused development. It’s also important to identify risks and mitigation activities throughout the AI development lifecycle to responsibly develop and deploy AI. Establishing appropriate governance models to control the AI lifecycle and organize efforts enables effective program management. A strong organizational structure follows from identifying clear roles, whether by appointing a chief AI officer or by expanding the remits of the organization’s existing technology leaders, and from positioning key stakeholders and ownership of priorities to align with the overall strategy. This governance must support the growth and development of talent to manage risk and execute the strategy.

Implementing formal governance policies and procedures that align with the overall responsible AI strategy is important. The strategy should clearly outline the ethical application, accountability measures, risk oversight and safeguards over the organization’s assets. The policies should set out the organization’s requirements for risk identification and assessment, mitigation and control measures, incident response, and risk monitoring and reporting. Supporting procedures should cover use case selection, model development and validation, approval workflows, ongoing monitoring, model change management and decommissioning.

The accelerating technological growth and innovation fueled by AI demand a proactive approach to responsible AI governance. Companies cannot afford to wait for governments to catch up with the rapid pace of technological change. Instead, they must take the lead in self-regulation, not only because it is the right thing to do but also because it is in their best interest.

Organizations serving the public sector must design and implement measures that promote equity, fairness, transparency and accountability to maintain public trust and uphold public interests. Organizations in the private sector must integrate responsible AI governance to enhance customer relationships and sustain innovation while adhering to ethical standards, mitigating risk while contributing to a positive brand image and a competitive position in the market. By embracing responsible AI, companies can minimize risks, foster trust and contribute to a future where technology serves the greater good. The time to act is now, and the blueprint for self-regulation is clear. It is up to corporate leaders to rise to the occasion and chart a course toward a responsible and sustainable technological future.

Michael Tippett, a senior manager in the EY Risk Consulting practice, contributed to this article.

 



Kapish Vanvaria and Sarah Y. Liang

Kapish Vanvaria is risk markets leader at Ernst & Young. He brings cross-industry expertise across financial services, health & consumer products and technology, media & entertainment and telecommunications. Kapish has deep experience in internal audit, cybersecurity, compliance, third party risk, technology implementations and automation/analytics.
Sarah Y. Liang is the EY Americas responsible AI risk leader, where she drives the responsible and scalable integration of artificial intelligence to enhance business competitiveness and comply with regulatory and ethical standards. Drawing on her extensive cross-industry experience, particularly in the technology and gaming industries, she enables organizations to transform their businesses through innovation, technology and a growth mindset.


Founded in 2010, CCI is the web’s premier global independent news source for compliance, ethics, risk and information security. 

© 2025 Corporate Compliance Insights
