As artificial intelligence finds its way into aspects of everyday life and becomes increasingly advanced, some state legislators are feeling a new urgency to create regulations for its use in the hiring process.
Artificial intelligence, commonly referred to as AI, has been adopted by a quarter of businesses in the United States, according to the 2022 IBM Global AI Adoption Index, a jump of more than 13% from the previous year. And many are starting to use it in the hiring process.
State laws have not kept pace. Only Illinois, Maryland and New York City require employers to seek consent before using AI during certain parts of the hiring process. A handful of other jurisdictions are considering similar legislation.
“Legislators are critical, and as always, lawmakers are always late to the party,” says Maryland State Delegate Mark Fisher, a Republican. Fisher sponsored his state’s law, which took effect in 2020, regulating the use of facial recognition programs when hiring. It prohibits an employer from using certain facial recognition services, such as those that might match candidates’ faces with outside databases, during a candidate’s interview process unless the candidate consents.
“Technology innovates first, and then it always seems like a good idea . . . until it’s not,” Fisher says. “That’s when lawmakers step in and try to regulate things as best they can.”
While AI developers want to innovate as quickly as possible, with or without legislation, both developers and policymakers need to consider the implications of their decisions, says Hayley Tsukayama, senior legislative activist at the Electronic Frontier Foundation, which advocates for civil liberties on the internet.
For policymakers to write effective legislation, developers need to be transparent about the systems used and open to scrutinizing potential problems, Tsukayama says.
“It’s probably not exciting for people who want to go faster or people who want to put these systems in their workplace right now or already have them in the workplace right now,” she says. “But I think for policy makers it’s really important to talk to a lot of different people, especially the people who are going to be affected by this.”
AI in recruitment
According to an analysis by Skillroads, which provides professional resume writing services that integrate AI, AI can facilitate the hiring process by performing resume assessments, scheduling candidate interviews, and gathering data.
Some members of Congress are also trying to act. A US privacy and data protection bill aims to establish rules for artificial intelligence, including AI risk assessments and its overall use, and would cover data collected during the hiring process. It was introduced last year by Rep. Frank Pallone Jr., a Democrat from New Jersey who sits on the US House Energy and Commerce Committee.
The Biden administration released the Blueprint for an AI Bill of Rights last year, a set of principles to guide organizations and individuals on the design, use and deployment of automated systems, according to the document.
In the meantime, lawmakers in some states and localities have been working to create policies. Maryland, Illinois and New York City are the only places with explicit laws addressing AI in the hiring process, requiring companies to notify job seekers when the technology is used at certain stages and to seek consent before moving forward, according to data from Bryan Cave Leighton Paisner, a global law firm that advises clients on commercial litigation, finance, real estate and more. California, New Jersey, New York State and Vermont have also considered bills that would regulate AI in hiring systems, according to the New York Times.
Facial recognition technology is used by many federal agencies, including those in cybersecurity and law enforcement, according to the US Government Accountability Office. Some industries also use it. Artificial intelligence can link facial recognition programs to candidate databases in seconds, Fisher says, a capability he cites as the concern that prompted his bill.
His goal, he says, was to craft a narrow measure that could open the door to future AI legislation. The law, which took effect in 2020 without the signature of then-Governor Larry Hogan, a Republican, covers only the private sector, but Fisher says he would like to see it expanded to include public employers.
Policymakers’ understanding of artificial intelligence, particularly with respect to its implications for civil rights, is nearly non-existent, says Clarence Okoh, senior policy adviser at the nonprofit Center for Law and Social Policy (CLASP) based in Washington, D.C. and Just Tech Fellow of the Social Science Research Council.
As a result, he says, companies that use AI often regulate themselves.
“Unfortunately, I think what’s happened is that a lot of AI developers and sales have been very good at crowding out the conversation with policy makers about how to govern AI and mitigate the social consequences,” Okoh says. “And so, unfortunately, there is a lot of interest in developing self-regulatory systems.”
Some self-regulatory practices include audits and compliance reviews based on general guidance, such as the Blueprint for an AI Bill of Rights, Okoh explains.
The results have sometimes raised concerns. Some organizations operating under their own guidelines have used AI recruiting tools that have shown bias.
In 2014, a group of developers at Amazon began building an experimental, automated program to review candidates' resumes in search of top talent, according to a Reuters investigation. By 2015, the company found that the system had effectively taught itself that male applicants were preferable.
People close to the project told Reuters the experimental system was trained to screen applicants by observing trends in resumes submitted to the company over a 10-year period, most of which were from men. Amazon told Reuters the tool “has never been used by Amazon recruiters to assess candidates.”
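The mechanism Reuters describes can be seen in a toy sketch. The following is not Amazon's actual system, and all the data and the scoring rule are hypothetical; it only illustrates how a scorer trained on historical hiring outcomes inherits whatever bias is baked into that history.

```python
# Toy illustration (hypothetical data, not Amazon's system): a naive
# resume scorer "trained" on past hiring decisions. Because most past
# hires in this invented history were men, terms that appear mostly on
# women's resumes end up correlated with rejection.
from collections import Counter

# Invented history of (resume text, was_hired) pairs.
history = [
    ("captain of chess club", True),
    ("chess club member", True),
    ("software engineer", True),
    ("software engineer", True),
    ("women's chess club captain", False),
    ("women's coding society", False),
]

hired_terms = Counter()
rejected_terms = Counter()
for resume, hired in history:
    (hired_terms if hired else rejected_terms).update(resume.split())

def score(resume: str) -> int:
    """Naive score: count of hired-term hits minus rejected-term hits."""
    words = resume.split()
    return sum(hired_terms[w] for w in words) - sum(rejected_terms[w] for w in words)

# Two resumes differing only in one gendered word score differently:
print(score("chess club captain"))          # → 2
print(score("women's chess club captain"))  # → 0, penalized for "women's"
```

The model never sees gender as a field; it simply learns that a word like "women's" co-occurs with rejection in the training data, which is the kind of proxy bias the Reuters report describes.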
But some companies say AI is helpful and that strict ethics rules are already in place.
Helena Almeida, vice president and chief legal officer at ADP, a human resource management software company, says its approach to using artificial intelligence in its products follows the same ethical guidelines as before the technology emerged. Regardless of legal requirements, says Almeida, ADP sees it as an obligation to go beyond the basic framework to ensure its products don’t discriminate.
Artificial intelligence and machine learning are used in several ADP hiring services. And many current laws apply to the world of artificial intelligence, she says. ADP also offers its customers certain services using facial recognition technology, according to its website. As technology evolves, ADP has adopted a set of principles to govern its use of AI, machine learning, and more.
“You can’t discriminate against a particular demographic without AI, and neither can you with AI,” says Almeida. “So that’s a core part of our framework and how we look at biases in these tools.”
One way to avoid AI problems in the hiring process is to maintain human involvement, from product design to regular monitoring of automated decisions.
Samantha Gordon, program manager at TechEquity Collaborative, an organization that advocates for tech workers, says that in situations where machine learning or data collection is used without human intervention, the system is likely to show bias against certain groups.
In one example, HireVue, a platform that helps employers collect video interviews and job applicant ratings, announced in 2021 that it was removing its facial analysis component after an internal review found it had less correlation to job performance than other elements of its algorithmic assessment, according to a statement from the company.
“I think that’s the thing you don’t have to be a computer scientist to figure out,” Gordon says. Speeding up the hiring process, she says, leaves room for error. This is where Gordon says lawmakers are going to have to step in.
And on both sides of the aisle, Fisher says, lawmakers think companies need to show their work.
“I would like to think that, generally speaking, people would like to see a lot more transparency and disclosure in the use of this technology,” Fisher says. “Who uses this technology? And why?”