How generative AI can promote inclusive job descriptions

Proper context and data privacy should be top of mind for developers building applications on generative AI for B2B use cases.


An ever-increasing number of employers are experiencing the many benefits of artificial intelligence throughout their human resources practices—from candidate personalization and conversational experiences to matching and scoring algorithms and AI-generated insights.

With the emergence of generative AI, HR tech products are starting to build use cases to optimize communication among recruiters, managers, candidates, and employees, as well as assistants to boost HR productivity. These technologies are also helping HR teams build better employee retention and growth strategies and transform into skills-based organizations.

While all this innovation is underway, consistency and inclusiveness within job descriptions continue to be a challenge and are often overlooked.

Generative AI can help ensure that job postings consistently meet the criteria for a specific function, including the required skills and competencies, while using inclusive language and reducing bias. This is especially helpful as the labor market remains strong and businesses continue to need workers.

Thoughtfully crafted with the appropriate contextual considerations, generative AI has the capability to responsibly generate adaptive and inclusive job descriptions on a large scale. It produces highly personalized postings that preserve the organization’s tone and brand, accomplishing this in a fraction of the time it would take a human.

Offloading this task to generative AI allows HR to concentrate on content that shapes the culture and brand experience—areas where technology falls short in understanding the nuanced human elements.

LLMs need the right context

Commercial large language models (LLMs) used for generative AI are essentially an approximation of the extensive knowledge available on crafting job descriptions. While existing industry standards generally have well-phrased descriptions, they may lack the specific context of the organization or team, making them appear impersonal or generic to candidates. Furthermore, if these models are prompted to generate a job description using gendered titles (such as “fireman”), the outcome is likely to be non-neutral, highlighting the need for careful consideration of language for inclusivity.

Generative AI models require precise prompts to shape the writing of job descriptions and to specify which words and phrases to steer clear of. Rather than employing a job title like “weatherman,” the program should be directed to use the more inclusive term “meteorologist,” along with guidance on tone and well-crafted examples, as in the sketch below. And doing this at scale throughout the organization may not be easy.
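As a rough illustration, here is one way that guidance could be encoded in a prompt. This is a minimal sketch: the call_llm function, the term list, and the prompt template are placeholders for whatever model API and style rules an organization actually uses.

```python
# Minimal sketch: steering a job-description prompt toward inclusive language.
# call_llm() is a placeholder for whatever model API is in use; the term list
# and template are illustrative, not exhaustive.

INCLUSIVE_TERMS = {
    "fireman": "firefighter",
    "weatherman": "meteorologist",
    "salesman": "salesperson",
}

SYSTEM_PROMPT = (
    "You write job descriptions. Use gender-neutral titles and inclusive "
    "language. Avoid these terms and use the suggested replacements: "
    + "; ".join(f"'{k}' -> '{v}'" for k, v in INCLUSIVE_TERMS.items())
)

def draft_job_description(call_llm, title: str, team_context: str) -> str:
    # Normalize the title before it ever reaches the model.
    neutral_title = INCLUSIVE_TERMS.get(title.lower(), title)
    user_prompt = (
        f"Write a job description for a {neutral_title}.\n"
        f"Team and company context:\n{team_context}\n"
        "Match the organization's tone and keep the language inclusive."
    )
    return call_llm(system=SYSTEM_PROMPT, user=user_prompt)
```

The key point is that the inclusive-language rules live in one place and are applied to every posting, rather than being rediscovered by each recruiter.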

It might be tempting for HR teams to dig out an old job posting for a similar role to save time, but the effort that’s made on the front end will pay off on the back end in the form of a job description that piques the interest of the right talent. A posting that steers away great candidates could have an expensive and long-lasting negative impact on the business.

What defines a biased job description? Identifying bias is not always straightforward for HR; it’s a subjective task. While certain corrections may be apparent, discerning whether bias is truly eliminated or inadvertently introduced can be challenging. This is where technology proves invaluable, assisting humans in striking the right balance swiftly and accurately. AI models, which learn from past performance and adhere to fundamental guidelines, can play a crucial role in generating job descriptions that align with fairness and inclusivity.

The challenges for developers

During OpenAI’s first developer conference in early November, the company said its GPT-4 Turbo models have a 128K context window, which means they can digest the equivalent of more than 300 pages of text in a single prompt. ChatGPT almost certainly will learn how to provide the right responses from that much context, which is really a game changer. And ChatGPT has gotten a lot cheaper too. From that perspective, developers are thinking, “OK, how best do I add value to my users?”

With previous versions of ChatGPT, finding use cases was all about figuring out a scenario to generate content and building an app on top of ChatGPT. But now developers can simply pass in the relevant context and leave much of that scaffolding out. That’s a clear indicator of the huge promise of the technology.

But against that optimistic outlook, enterprises using generative AI must grapple with ethical and privacy concerns. Governance, monitoring, and fundamental documentation are the safeguards against deploying discriminatory AI. In the past, developers could rely on those safeguards alone. However, the landscape has evolved significantly, and that requires developers to consider a whole lot more in their designs. It’s a whole new ballgame.

Today there is much more scrutiny around several big issues, namely masking personally identifiable information, injecting context without leaking data, and keeping customer information within its own ecosystem while passing only the inferred aspects of a request to generative AI models. Those are some of the complexities that developers are running into at the moment.

Why generative AI needs guardrails

As with any new or emerging technology, industry and government are working to set proper ethical and legal guardrails around AI. For an engineer, building on generative AI requires a keen awareness of both the ethical and practical uses of data.

Data protection. Passing a job applicant’s resume through a large language model without the applicant’s consent, or using it to write a rejection letter to a candidate, could be problematic if personally identifiable information is inadvertently revealed to LLMs. Data privacy is paramount when sending personal details to a platform that sits outside the organization’s own, dedicated environment.

How is information masked? How are prompts reengineered? How does an engineer prompt for a specific example without passing personally identifiable information, and, on the way back, how is the masked data substituted with the right values before being shown to the user?

These are all questions developers should consider when writing applications on generative AI for B2B use cases.
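One common pattern, sketched below under the assumption of a generic call_llm function, is to substitute placeholders for personal details before the prompt leaves the system and to swap the real values back in when the response returns. The regexes here are illustrative; a production system would rely on a dedicated PII-detection service and cover far more entity types.

```python
import re

# Minimal sketch of PII masking around an LLM call. The patterns and the
# call_llm() placeholder are illustrative, not production-grade.

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d\s().-]{7,}\d"),
}

def mask_pii(text: str):
    """Replace PII with placeholders; return masked text and a lookup table."""
    replacements = {}
    for label, pattern in PII_PATTERNS.items():
        for i, match in enumerate(pattern.findall(text)):
            placeholder = f"[{label}_{i}]"
            replacements[placeholder] = match
            text = text.replace(match, placeholder)
    return text, replacements

def unmask(text: str, replacements: dict) -> str:
    """Restore the original values before showing the response to the user."""
    for placeholder, original in replacements.items():
        text = text.replace(placeholder, original)
    return text

def summarize_resume(call_llm, resume_text: str) -> str:
    masked, table = mask_pii(resume_text)
    response = call_llm(f"Summarize this resume for a recruiter:\n{masked}")
    return unmask(response, table)
```

The model only ever sees placeholders such as [EMAIL_0], while the mapping back to real values never leaves the application’s own environment.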

Segmented learning. Another critical factor for developers to consider is segmenting customer data from a model training or machine learning perspective, because the nuances of how an email is written vary from one organization to another, and even among different users within an organization.

AI learning cannot be combined and made generic. So it is critical to continue compartmentalizing learning by specific customer, location, or audience.
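A minimal sketch of that compartmentalization might look like the following, with an in-memory store standing in for a real per-tenant datastore. Examples learned from one customer are stored and retrieved only under that customer’s key, so they never leak into prompts built for another organization.

```python
from collections import defaultdict

# Minimal sketch of per-tenant segmentation. The in-memory dict stands in
# for a real per-customer datastore; names here are illustrative.

class TenantExampleStore:
    def __init__(self):
        self._examples = defaultdict(list)  # tenant_id -> list of examples

    def add_example(self, tenant_id: str, example: str) -> None:
        self._examples[tenant_id].append(example)

    def examples_for(self, tenant_id: str, limit: int = 3) -> list[str]:
        # Only this tenant's examples are ever returned.
        return self._examples[tenant_id][-limit:]

def build_prompt(store: TenantExampleStore, tenant_id: str, request: str) -> str:
    examples = store.examples_for(tenant_id)
    context = "\n---\n".join(examples) if examples else "No prior examples."
    return (
        f"Previous approved content from this organization:\n{context}\n\n"
        f"New request:\n{request}"
    )
```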

Cost optimization. Having the ability to cache and reuse data is important, because data input and output can get expensive for use cases that involve high-volume transactions.
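One simple way to reuse prior work, sketched below with an in-memory dict standing in for a real cache such as Redis and a placeholder call_llm function, is to key responses by a hash of the prompt and pay for a model call only on a cache miss.

```python
import hashlib

# Minimal sketch of response caching to control cost on repetitive,
# high-volume requests. The dict stands in for a real cache, and
# call_llm() is a placeholder for the actual model call.

_cache: dict[str, str] = {}

def cached_completion(call_llm, prompt: str) -> str:
    key = hashlib.sha256(prompt.encode("utf-8")).hexdigest()
    if key in _cache:
        return _cache[key]          # reuse a previous response, no model cost
    response = call_llm(prompt)     # pay for the call only on a cache miss
    _cache[key] = response
    return response
```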

A small document with a huge impact

Some may question the need for written job descriptions in the modern workforce, but job descriptions remain the most effective way to communicate an employer’s talent needs and the underpinning skills for specific roles.

When done well, vacancy notices attract candidates and employees who are aligned with a company’s values, mission, and culture. A paycheck and a corner office are no longer enough to get job seekers’ attention. They want to work for firms with a top-notch culture and impeccable values.

Using thoughtful and sensitive language signals to candidates that the employer has an inclusive workplace that considers all applicants. Similarly, by ensuring that generative AI has the proper context and that private data is kept private, developers play an important role in an exciting and promising technology that is ethical, inclusive, and free of bias.

Kumar Ananthanarayana is the vice president of product management at Phenom, a global HR technology company based in the greater Philadelphia area.

Generative AI Insights provides a venue for technology leaders—including vendors and other outside contributors—to explore and discuss the challenges and opportunities of generative artificial intelligence. The selection is wide-ranging, from technology deep dives to case studies to expert opinion, but also subjective, based on our judgment of which topics and treatments will best serve InfoWorld’s technically sophisticated audience. InfoWorld does not accept marketing collateral for publication and reserves the right to edit all contributed content. Contact doug_dineley@foundryco.com.

Copyright © 2024 IDG Communications, Inc.