Articles Posted in Artificial Intelligence (AI)

Employment discrimination on the basis of factors like race, religion, sex, sexual orientation, gender identity, and disability is unlawful under New Jersey employment law. Broadly speaking, courts have identified two types of unlawful employment discrimination: discriminatory intent and disparate impact. Cases based on discriminatory intent often involve overt bias. In disparate impact cases, an employer may violate the law if its policy or practice has an outsized adverse impact on a protected group, even if it appears outwardly neutral. A group of federal agencies developed a guideline several decades ago, known as the Four-Fifths Rule, for determining when a policy or practice has too much of a disparate impact on a protected group. While this rule significantly predates the current use of artificial intelligence (AI) in employment, it provides a useful guide for assessing when an AI tool might violate employment laws.

Title VII of the Civil Rights Act of 1964 is the main federal statute dealing with employment discrimination. It mentions five protected categories by name: race, sex, religion, color, and national origin. Congress amended the statute in 1978 to add discrimination based on pregnancy and childbirth to the definition of sex discrimination. The U.S. Supreme Court has held that sex discrimination includes discrimination based on sexual orientation and gender identity.

The Four-Fifths Rule is based on uniform guidelines developed by four federal agencies in 1978: the Equal Employment Opportunity Commission (EEOC), the Civil Service Commission, the Department of Labor, and the Department of Justice. Although it includes the word “rule,” the Four-Fifths Rule works more like a guideline: if the selection rate for a protected group is less than four-fifths (80 percent) of the rate for the group with the highest selection rate, the guidelines generally treat that disparity as evidence that the practice may run afoul of the law.
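To illustrate the arithmetic behind the rule, the short Python sketch below uses invented applicant and hire counts to compute each group’s selection rate, compare it to the highest rate, and flag any ratio that falls below the four-fifths (0.8) threshold. It is a simplified, hypothetical calculation, not a complete audit methodology or legal advice.

```python
# Hypothetical illustration of the Four-Fifths Rule calculation.
# The applicant and hire counts below are invented for demonstration only.

hiring_data = {
    # group: (applicants, hires)
    "Group A": (100, 60),
    "Group B": (80, 30),
}


def selection_rates(data):
    """Return the selection rate (hires / applicants) for each group."""
    return {group: hires / applicants for group, (applicants, hires) in data.items()}


def four_fifths_check(data, threshold=0.8):
    """Compare each group's selection rate to the highest group's rate.

    Under the 1978 Uniform Guidelines, a ratio below four-fifths (0.8)
    is generally regarded as evidence of adverse impact.
    """
    rates = selection_rates(data)
    highest = max(rates.values())
    return {
        group: (rate, rate / highest, rate / highest < threshold)
        for group, rate in rates.items()
    }


for group, (rate, ratio, flagged) in four_fifths_check(hiring_data).items():
    status = "possible adverse impact" if flagged else "within the guideline"
    print(f"{group}: selection rate {rate:.1%}, impact ratio {ratio:.2f} ({status})")
```

In this hypothetical, the second group’s selection rate is only 62.5 percent of the first group’s, which falls below the four-fifths threshold and would warrant closer scrutiny of the practice.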
Continue reading

As artificial intelligence (AI) becomes more common in computer applications, lawmakers and policymakers have taken notice. This includes the use of AI by employers. For example, several bills introduced in the state legislature would add AI protections to New Jersey employment law. The White House issued an executive order in October 2023 calling for policies that “mitigate AI’s potential harms to employees’ well-being and maximize its potential benefits.” In April 2024, the Department of Labor’s Wage and Hour Division (WHD) issued Field Assistance Bulletin (FAB) 2024-1, which addresses the potential legal issues that AI may pose under several federal employment statutes.

AI and Federal Employment Laws

The WHD uses the definition of “artificial intelligence” that Congress adopted in the National Artificial Intelligence Initiative Act of 2020: a “machine-based system” that can “make predictions, recommendations or decisions influencing real or virtual environments” based on “human-defined objectives.” According to the statute, such a system operates in three steps:
1. The system perceives an environment.
2. It “abstract[s] such perceptions into models” using an automated system.
3. It “use[s] model inference” to produce new information, including recommendations or conclusions.
Employers may use automated AI tools for a wide range of analytical purposes related to hiring, management, HR, and employee and labor relations.

Potential AI Issues Identified by the WHD

FAB 2024-1 discusses potential problems under three employment statutes.
Continue reading

The use of artificial intelligence (AI) technology has raised concerns in numerous industries, ranging from copyright protection to employment discrimination. State legislatures, state and federal regulatory agencies, and the White House have weighed in on the potential misuse, whether intentional or not, of AI in the workplace. New Jersey employment law does not address this issue directly, but several pending bills would take it on. A bill in the U.S. Senate also deals with the use of AI in hiring decisions. The No Robot Bosses Act (NRBA) would limit the ways employers may use AI during the hiring process and give individuals the right to sue for damages. The bill is unlikely to pass during this session, but it will hopefully inspire future legislation.

Several bills introduced in the 2024-25 session of the New Jersey Legislature address AI in employment decisions. One would regulate “automated employment decision tools,” systems that use machine learning or other types of AI to screen job applicants or identify preferred candidates for a position. These types of tools rely on human-supplied data to “learn,” so they are prone to making biased decisions if they receive biased data. This can occur without any conscious intent to discriminate. The bill would require “bias audits” for all AI screening tools. The NRBA also addresses these concerns.
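As a hypothetical illustration of how that can happen, the Python sketch below “trains” a toy screener (a simple hire-rate lookup, not any actual vendor’s product) on invented historical decisions that happened to favor applicants from one school, which here stands in for a trait correlated with a protected group. The resulting tool reproduces that preference when scoring new applicants, even though no one wrote an explicitly biased rule.

```python
# Toy example: a screener "learns" bias from skewed historical hiring data.
# All records are invented; no real employer, dataset, or product is depicted.

from collections import defaultdict

# Historical records: (school attended, whether the applicant was hired).
# Suppose "school_x" correlates with membership in a protected group and
# past decisions favored its graduates regardless of qualifications.
history = [
    ("school_x", True), ("school_x", True), ("school_x", True), ("school_x", False),
    ("other", True), ("other", False), ("other", False), ("other", False),
]


def train_screener(records):
    """Learn the historical hire rate for each feature value."""
    counts = defaultdict(lambda: [0, 0])  # feature -> [hires, total applicants]
    for feature, hired in records:
        counts[feature][1] += 1
        if hired:
            counts[feature][0] += 1
    return {feature: hires / total for feature, (hires, total) in counts.items()}


def score_applicant(screener, feature):
    """Score a new applicant using only the learned historical hire rate."""
    return screener.get(feature, 0.0)


screener = train_screener(history)
print(score_applicant(screener, "school_x"))  # 0.75 -> ranked higher, mirroring past bias
print(score_applicant(screener, "other"))     # 0.25 -> ranked lower, regardless of merit
```

A bias audit is meant to surface exactly this kind of skew, by testing how a tool scores or selects members of different groups, before the tool is used on real applicants.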

The NRBA would regulate “automated decision systems” (ADSs), which it defines as “a system, software, or process that uses computation” to make decisions or assist in decision-making. This includes systems that use “machine learning, statistics, or other data processing or artificial intelligence techniques.”
Continue reading

Employers have begun relying on various artificial intelligence (AI) tools to streamline the hiring process and other aspects of the employment relationship. While the marketing for these tools sometimes claims that they can perform certain tasks better than humans, experience shows that they are not free of human biases. Humans programmed the AIs, after all, and may have built their own biases into the code. Two pending bills would amend New Jersey employment law to regulate the use of these tools in the hiring process. One would require “bias audits” of AI-based analytical tools. The other bill deals specifically with AI tools that analyze video interviews of job applicants.

The Use of AI in Hiring

The term “artificial intelligence” can refer to several types of software applications. Generative AI, for example, can create written or visual works based on user prompts. Employers use analytical AI tools to go through large amounts of data and make recommendations or decisions. For example, an AI system could screen job applicants based on whatever factors employers choose. This is the source of much of the concern about AI in employment decisions. Employers are ultimately liable for bias in an AI tool’s algorithm.

Automated Employment Decision Tools

A3854, introduced in the New Jersey Assembly on February 22, 2024, would regulate companies that produce and sell “automated employment decision tools” (AEDTs), as well as the employers that use them. The bill defines an AEDT as a system that uses statistical theory or a learning algorithm to filter job applicants or employees in a way that “establishes a preferred candidate or candidates.” Any AI system that screens job applicants would fit this definition.
Continue reading

Employers are increasingly relying on tools that use artificial intelligence (AI) for various employment-related purposes. AI tools can be useful for tasks that require sifting through large amounts of information, such as the hiring process. New Jersey employment laws set limits on employers when they are making hiring decisions. Employers may not, for example, screen job candidates based on protected categories like disability, genetic history, or pregnancy. Employers are liable for these types of hiring decisions even when they outsource them to someone else. This includes AI tools, but New Jersey law remains unclear on how prohibitions against employment discrimination apply to virtual decision-makers. The White House recently issued an executive order (EO) providing directives to various executive agencies regarding AI. These include instructions to agencies that enforce federal employment laws to review current AI practices with the goal of “ensur[ing] that AI deployed in the workplace advances employees’ well-being.”

The New Jersey Law Against Discrimination (NJLAD) prohibits employers from taking various adverse actions against job applicants and employees solely based on factors like race, religion, sex, sexual orientation, disability, and others. This includes refusing to consider someone for employment because of a protected category. Many of the categories identified by the NJLAD have historically served as the basis for countless adverse hiring decisions.

One concern about the use of AI in screening job applicants and assisting in hiring decisions is that human biases, whether consciously held or not, could become part of the software’s algorithms. Neither federal nor New Jersey employment laws currently address this concern. New York City enacted a law several years ago that requires periodic “bias audits” for AI-based tools that employers use in the hiring process. This process involves reviewing AI tools to see if they have any sort of disparate impact on members of protected categories. Lawmakers introduced a similar bill in the New Jersey Assembly in December 2022, but it has not advanced beyond its initial committee assignment.
Continue reading
