RECOGNIZING AND EXPLAINING BIAS IN JOB DESCRIPTIONS: A ROBERTA-POWERED RECRUITMENT FRAMEWORK

Authors

  • Sharma, Rohan Prakash Department of Artificial Intelligence, GHRCEM, Nagpur, India

DOI:

https://doi.org/10.5281/zenodo.17242159

Keywords:

Bias Detection, Job Descriptions, RoBERTa, BERT, Multi-label Classification, Explainable AI (XAI), SHAP, Contextual Embeddings, Intersectional Bias, Trustworthy Recruitment Practices, NLP, Transformer Models

Abstract

Bias in job descriptions often goes unrecognized by the general public, yet it seriously affects both candidate diversity and the inclusivity of hiring procedures. This research develops Explainable AI for Trustworthy Recruitment, an Artificial Intelligence system based on modern Natural Language Processing techniques, to detect unintended biases in job listings. Unlike traditional tools based on keyword matching, the system uses the contextual understanding of the RoBERTa transformer model to discover subtle intersectional biases spanning gender, race, age, and disability. By identifying discriminatory wording, the system helps recruiters write job descriptions free of prejudice. Explainable AI for Trustworthy Recruitment thus improves recruitment effectiveness, broadens hiring reach, and works toward greater workplace diversity.
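The abstract describes a multi-label classifier, meaning each bias category (gender, race, age, disability) is scored independently rather than competing in a single softmax. A minimal sketch of that decision step, assuming a standard sigmoid-per-label classification head; the label names, threshold, and example logits are illustrative assumptions, not details from the paper:

```python
import math

# Hypothetical bias categories; illustrative, not taken from the paper.
LABELS = ["gender", "race", "age", "disability"]

def sigmoid(x):
    # Logistic function: maps a raw logit to a probability in (0, 1).
    return 1.0 / (1.0 + math.exp(-x))

def predict_bias(logits, threshold=0.5):
    """Multi-label decision: each category gets its own sigmoid score,
    so a job description can be flagged for several biases at once
    (unlike single-label softmax classification)."""
    return [label for label, z in zip(LABELS, logits)
            if sigmoid(z) >= threshold]

# Example logits as a RoBERTa classification head might emit them.
print(predict_bias([2.1, -1.3, 0.4, -2.0]))  # → ['gender', 'age']
```

In practice these logits would come from a fine-tuned RoBERTa sequence-classification head, and a per-token attribution method such as SHAP would then highlight which phrases drove each flagged category.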

Published

2025-10-01

Section

Articles