From recruitment to retention, technology has long been crucial to effective workforce management. And while companies may be flocking to tools powered by AI and machine learning, a New York City law set to go into effect in 2023 calls attention to the need to ensure that automated tools don’t multiply society’s ills.
The use of technology in workforce management is nothing new. But concern is growing over the adoption of advanced data analytics and AI-based tools, particularly the risk that these tools could widen employment disparities between demographic groups.
To that end, in early 2023, New York City will begin requiring employers to conduct annual bias audits of any AI-based employment decision tools, including those supported by machine learning, statistical modeling and data analytics solutions.
The city’s move comes on the heels of several high-profile incidents of “algorithms behaving badly,” including the revelation of an Amazon hiring tool that was biased against women. However, it’s important to note that while concern over bias in AI technology is understandable, such issues are surely not unique to algorithmic decision-making.
Multiple studies have demonstrated that unconscious biases deeply affect recruiters’ and business managers’ decisions as well. Ultimately, business leaders need to maximize the benefits of technology for their workforce decisions while mitigating the associated risks through robust governance processes. Simply abandoning new technology would be of little help and could be detrimental to their DEI commitments.
Law goes into effect in 2023, but questions remain
New York City’s new measure, passed Nov. 10, 2021, requires employers to publicly display the results of their annual AI hiring audits, to inform candidates that their applications have been reviewed by automated hiring software and to offer an alternative selection process if requested.
However, there remains uncertainty around various aspects of these bias audits. In the legislation, the term “bias audit” refers to “an impartial evaluation by an independent auditor” and the bill says that such an audit “shall include but not be limited to the testing of an automated employment decision tool to assess the tool’s disparate impact on persons.”
Such a broad definition, while helpful in providing flexibility to the providers and users of such technology, lacks detail on the complete set of requirements for such an audit. It is also unclear which parties can play the role of independent auditor, what skills such auditors might need and how their independence can be ensured.
How companies can prepare now
Companies should not simply react to the city’s legislation but should proactively build capabilities to ensure their automated decision-making tools are robust and trustworthy. This would not only reduce their regulatory exposure but also, more fundamentally, drive business value and build trust with their current and future workforce.
To achieve this, they should run all their models, both during development and after they have been deployed in real-life situations, through a comprehensive fairness workflow structured around four key steps:
- Bias identification. The purpose of any hiring process is to identify and recruit the best talent for a role; in doing so, it will necessarily be “biased” against those deemed less suited to the role. The first challenge is to determine whether that process produces significant differences in outcomes between groups (e.g., based on race, gender or other identified groups of interest); a simple illustration of such a check appears after this list.
- Root cause assessment and determination of whether bias can be justified. Correlation does not imply causation: simply observing disparate outcomes between groups does not establish whether they are justified, let alone how to address them. An essential next step is therefore to identify the root causes of the apparent differences between groups and reach a considered conclusion on whether those differences are justified.
- Bias mitigation. Assuming that the bias is not considered justified, targeted interventions will be needed to mitigate it. For machine learning models, this could include rebalancing the data set used to train the model to make it more representative or even to correct historic imbalances if appropriate; dropping input data variables that are proxies for protected groups (e.g., years of uninterrupted work experience as a proxy for gender); or enforcing parity between groups through changes in threshold rates where appropriate.
- Bias reporting. It is essential that companies document bias mitigation strategies and report results to external auditors upon request. Such documentation should include information about defined protected groups, fairness objectives, observed disadvantages between different demographic groups, identified root causes, targeted interventions and final fairness testing results.
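One way to make the identification and mitigation steps concrete is sketched below in Python. It computes per-group selection rates and a disparate impact ratio, then re-checks the ratio after adjusting the score threshold applied to the disadvantaged group. The group labels, applicant counts, scores and thresholds are all hypothetical, and the four-fifths (0.80) benchmark is a common U.S. employment-guidance heuristic, not a metric prescribed by the New York City law, which does not specify how disparate impact must be measured.

```python
# Minimal sketch with hypothetical data: the group names, counts, scores and
# thresholds below are made up, and the four-fifths (0.80) benchmark is a
# common heuristic, not a metric prescribed by the NYC law.

def selection_rate(selected: int, applicants: int) -> float:
    return selected / applicants

def disparate_impact_ratio(rate_a: float, rate_b: float) -> float:
    """Ratio of the lower selection rate to the higher one (1.0 = parity)."""
    low, high = sorted((rate_a, rate_b))
    return low / high

# Step 1: bias identification on hypothetical screening outcomes.
rate_men = selection_rate(selected=120, applicants=400)    # 0.30
rate_women = selection_rate(selected=80, applicants=400)   # 0.20
print(f"ratio: {disparate_impact_ratio(rate_men, rate_women):.2f}")
# -> 0.67, below the 0.80 heuristic, so the gap warrants a root cause review.

# Step 3: one possible mitigation, adjusting the score threshold applied to
# the disadvantaged group and re-checking the ratio on a toy sample of scores.
toy_scores_women = [0.52, 0.58, 0.61, 0.63, 0.66, 0.68, 0.71, 0.74, 0.77, 0.81]

for threshold in (0.76, 0.73):
    passed = sum(score >= threshold for score in toy_scores_women)
    adjusted_rate = selection_rate(passed, len(toy_scores_women))
    print(f"threshold={threshold:.2f} "
          f"ratio={disparate_impact_ratio(rate_men, adjusted_rate):.2f}")
# -> 0.67 at the original 0.76 threshold, 1.00 (parity) at 0.73 on this toy data.
```

In practice, an audit would run such checks for every protected group and intersection in scope, and any threshold change would be paired with the root cause analysis and documentation described in the other steps.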
Taking action
Companies likely to be affected should act promptly so they can keep maximizing the benefits of machine learning, statistical models and data analytics solutions for workforce management while complying with these new rules in New York City, where about 4.5 million people work.
Failure to deploy such diagnostic and monitoring capabilities may carry significant costs in the form of preventable harms, increased distrust between employers and employees, and reputational damage.