{"id":2052,"date":"2026-02-14T15:34:42","date_gmt":"2026-02-14T15:34:42","guid":{"rendered":"https:\/\/d.sheep-mine.ts.net\/?p=2052"},"modified":"2026-02-14T15:34:42","modified_gmt":"2026-02-14T15:34:42","slug":"algorithmic-inequalities-how-ai-hiring-tools-replicate-old-workplace-biases","status":"publish","type":"post","link":"https:\/\/d.sheep-mine.ts.net\/?p=2052","title":{"rendered":"Algorithmic Inequalities: How AI Hiring Tools Replicate Old Workplace Biases"},"content":{"rendered":"<p><br \/>\n<\/p>\n<div>\n<p>Organisations globally are increasingly incorporating artificial intelligence (AI) hiring tools to filter, evaluate and shortlist candidates for jobs \u2013 initially these systems were touted as a solution to reducing administrative burden, accelerating efficiency and obliterating human prejudice from recruitment. However, a growing body of research suggests that instead of neutralising bias, AI-powered hiring tools are deeply embroiled in exacerbating existing inequalities in the labour market. This trend is particularly detrimental for women whose professional trajectories include career breaks \u2013 pauses in formal paid employment, often taken for caregiving, elder care, childbirth or other familial responsibilities \u2013 as well as those with unconventional, non-linear resumes.\u00a0<\/p>\n<h3 class=\"wp-block-heading\" id=\"h-unprecedented-rise-of-ai-in-hiring\">Unprecedented rise of AI in hiring\u00a0<\/h3>\n<p>Today AI has become deeply embedded in the global recruitment workflows. AI-driven recruitment in its nascent stages in the early 2000s included Application Tracking Systems (ATS), i.e., keyword-based resume screening to filter applicants based on specific qualifications and job descriptions. The advancements in machine learning (ML) and natural language processing (NLP) transformed resume parsing; AI platforms developed capabilities in understanding contexts, skills and experience levels in resumes. 
AI-powered job-matching algorithms matched candidates with positions based on historical hiring trends. Subsequently, by the late 2010s, AI-driven chatbots automated <a rel=\"nofollow\" target=\"_blank\" target=\"_blank\" rel=\"noreferrer noopener\" href=\"https:\/\/explore.hireez.com\/blog\/history-of-ai-in-recruitment\/\"><strong>candidate engagement<\/strong><\/a> and pre-screening processes. Predictive analytics helped companies anticipate workforce demands using historical data, industry trends and attrition patterns; combined with AI sourcing to match the best candidates, this enabled more strategic, data-driven recruitment. It marked a crucial shift for AI from passive tool to active determinant in the recruitment process.\u00a0<\/p>\n<div class=\"wp-block-image\">\n<figure class=\"aligncenter size-large\"><img loading=\"lazy\" data-lazyloaded=\"1\" decoding=\"async\" width=\"1024\" height=\"576\" src=\"https:\/\/feminisminindia.com\/wp-content\/uploads\/2026\/02\/20250826_170621_0000.png-1024x576.webp\" alt=\"AI\" class=\"wp-image-194040\" srcset=\"https:\/\/feminisminindia.com\/wp-content\/uploads\/2026\/02\/20250826_170621_0000.png-1024x576.webp 1024w, https:\/\/feminisminindia.com\/wp-content\/uploads\/2026\/02\/20250826_170621_0000.png-300x169.webp 300w, https:\/\/feminisminindia.com\/wp-content\/uploads\/2026\/02\/20250826_170621_0000.png-768x432.webp 768w, https:\/\/feminisminindia.com\/wp-content\/uploads\/2026\/02\/20250826_170621_0000.png-180x100.webp 180w, https:\/\/feminisminindia.com\/wp-content\/uploads\/2026\/02\/20250826_170621_0000.png.webp 1200w\" data-sizes=\"(max-width: 1024px) 100vw, 1024px\"\/><figcaption class=\"wp-element-caption\">Source: Canva<\/figcaption><\/figure>\n<\/div>\n<p>A recent <a rel=\"nofollow\" target=\"_blank\" target=\"_blank\" rel=\"noreferrer noopener\" 
href=\"https:\/\/hr.economictimes.indiatimes.com\/news\/workplace-4-0\/recruitment\/over-70-indian-recruiters-turning-to-ai-to-find-hidden-talent-report\/127885466\"><strong>survey<\/strong><\/a> by LinkedIn revealed that 70% of Indian recruiters are using AI to tap into \u201chidden talent\u201d, assess candidates\u2019 skills and accelerate the hiring and onboarding processes. Around 80% of respondents echoed the opinion that AI made it easier to assess a candidate\u2019s skills, and 76% thought it helped streamline the tedious hiring processes overall. Companies are adopting the AI infrastructure to automate CV screening, match resumes to job descriptions and even hold initial AI-led video interviews. While the fundamental intent may be positive \u2013 attempting to eliminate the bias and subjectivity of human judgement \u2013 reports have revealed a <a rel=\"nofollow\" target=\"_blank\" target=\"_blank\" rel=\"noreferrer noopener\" href=\"https:\/\/cardozolawreview.com\/automating-discrimination-ai-hiring-practices-and-gender-inequality\/\"><strong>concerning trend<\/strong><\/a>: the core data and criteria used by AI systems mirror historical patterns of exclusion and discrimination.\u00a0<\/p>\n<h3 class=\"wp-block-heading\" id=\"h-how-ai-bias-affects-women-more\">How AI bias affects women more\u00a0<\/h3>\n<p>AI tools do not operate in a vacuum. MIT Sloan Professor Emilio J. Castilla refers to this as the <a rel=\"nofollow\" target=\"_blank\" target=\"_blank\" rel=\"noreferrer noopener\" href=\"https:\/\/mitsloan.mit.edu\/ideas-made-to-matter\/ai-reinventing-hiring-same-old-biases-heres-how-to-avoid-trap\"><strong>\u201cparadox of algorithmic meritocracy\u201d<\/strong><\/a>. The majority of AI hiring models use machine learning algorithms that are trained on historical (existing) human resources data such as past resumes, performance outcomes and hiring decisions. 
If those historical decisions were shaped by flawed human assumptions and contained discriminatory patterns (for instance, hiring fewer women, or a lack of diversity in senior positions), then the AI learns to associate \u201csuccessful\u201d candidates with features linked to those biased outcomes. This calls into question the ethics and credibility of supposedly \u2018neutral and unbiased\u2019 software.\u00a0\u00a0<\/p>\n<figure class=\"wp-block-pullquote\">\n<blockquote>\n<p>AI tools do not operate in a vacuum. The majority of AI hiring models use machine learning algorithms that are trained on historical (existing) human resources data such as past resumes, performance outcomes and hiring decisions. If those historical decisions were shaped by flawed human assumptions and contained discriminatory patterns, then the AI learns to associate \u201csuccessful\u201d candidates with features linked to those biased outcomes<\/p>\n<\/blockquote>\n<\/figure>\n<p>The most notorious example is the AI recruitment tool developed by Amazon: trained on the past 10 years of hiring data dominated by male candidates, it penalised resumes containing women-associated terms such as \u201cwomen\u2019s chess club captain\u201d or \u201cwomen\u2019s college\u201d. <a rel=\"nofollow\" target=\"_blank\" target=\"_blank\" rel=\"noreferrer noopener\" href=\"https:\/\/www.reuters.com\/article\/world\/insight-amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK0AG\/\"><strong>Amazon<\/strong><\/a> ultimately scrapped the tool. 
In another instance, several companies, including Goldman Sachs and Unilever, used HireVue\u2019s speech recognition algorithms to assess candidates\u2019 spoken English proficiency; however, <a rel=\"nofollow\" target=\"_blank\" target=\"_blank\" rel=\"noreferrer noopener\" href=\"https:\/\/www.businessinsider.com\/hirevue-uses-ai-for-job-interview-applicants-goldman-sachs-unilever-2017-8\"><strong>research<\/strong><\/a> uncovered that these algorithms disadvantaged non-white and deaf candidates. Cultural bias surfaces as well: some AI tools have downgraded resumes from candidates who studied at historically Black colleges and women\u2019s colleges, because data from those institutions was underrepresented in the predominantly white-collar pipelines the tools were trained on.<\/p>\n<h3 class=\"wp-block-heading\" id=\"h-career-breaks-misinterpreted-as-negative-features\">Career Breaks Misinterpreted as Negative Features\u00a0<\/h3>\n<p>Women are statistically more likely than men to take career breaks due to caregiving responsibilities. LinkedIn\u2019s <a rel=\"nofollow\" target=\"_blank\" target=\"_blank\" rel=\"noreferrer noopener\" href=\"https:\/\/economicgraph.linkedin.com\/content\/dam\/me\/economicgraph\/en-us\/PDF\/gender-gaps-in-career-breaks.pdf\"><strong>report<\/strong><\/a> reveals that women are 63.5% more likely to list career breaks on their profiles. Interestingly, women from countries with more inclusive policies, such as Sweden, Germany and France, were more transparent about the break (over 50%), while women from the Global South appear apprehensive about listing the gap (20%). Personal goal pursuit and professional development were the most common break types for men, typically lasting 6 to 12 months, whereas a pause of six months to several years is common amongst women, especially in regions where social support around parental leave and childcare is limited. The report also revealed that career breaks hindered women\u2019s return to the\u00a0workforce. 
Many AI systems treat consistent employment as evidence of commitment, reliability and competence, while gaps, regardless of context, are read as negatives.\u00a0<\/p>\n<p>While research and global data specifically demonstrating how AI penalises\u00a0career breaks are limited, industry research strongly suggests that women with career gaps are less likely to be shortlisted for roles for which they are as qualified as male counterparts without employment gaps. Labour scholars argue that AI resume ranking models can favour uninterrupted career progressions and penalise \u201cnon-linear\u201d resumes \u2013 a structural disadvantage for many women. On the surface, this may appear to be an objective evaluation, but it replicates the prejudices of historical recruitment traditions on an even larger scale. As these systems are promoted as \u201cdata-driven\u201d and shrouded in an air of neutrality, their decisions are harder to challenge.\u00a0<\/p>\n<h3 class=\"wp-block-heading\" id=\"h-favouring-linear-profiles-and-algorithmic-bias\">Favouring linear profiles and algorithmic bias<\/h3>\n<p>AI recruiting tools rely on skills-based matching \u2013 identifying keywords that align with the job description. This can potentially level the playing field for non-traditional applicants by emphasising skills rather than pedigree. LinkedIn, for instance, claims to shift hiring from \u201cpedigree and titles\u201d to demonstrable skills, but such claims are not backed by the careful calibration required to ensure that AI does not undervalue experiences that don\u2019t fit established, linear templates.\u00a0<\/p>\n<figure class=\"wp-block-pullquote\">\n<blockquote>\n<p>Industry research highlights the risks associated with AI hiring. 
Zhisheng Chen\u2019s <a rel=\"nofollow\" target=\"_blank\" href=\"https:\/\/www.nature.com\/articles\/s41599-023-02079-x\" target=\"_blank\" rel=\"noreferrer noopener\"><strong>study<\/strong><\/a> on algorithmic discrimination in hiring finds that while AI supports efficiency, it often reproduces biased outcomes based on race, gender and other characteristics found in training data.<\/p>\n<\/blockquote>\n<\/figure>\n<p>Industry research highlights the risks associated with AI hiring. Zhisheng Chen\u2019s <a rel=\"nofollow\" target=\"_blank\" target=\"_blank\" rel=\"noreferrer noopener\" href=\"https:\/\/www.nature.com\/articles\/s41599-023-02079-x\"><strong>study<\/strong><\/a> on algorithmic discrimination in hiring finds that while AI supports efficiency, it often reproduces biased outcomes based on race, gender and other characteristics found in training data. Another <a rel=\"nofollow\" target=\"_blank\" target=\"_blank\" rel=\"noreferrer noopener\" href=\"https:\/\/ojs.aaai.org\/index.php\/AIES\/article\/view\/36703\/38841\"><strong>study<\/strong><\/a> analysing large language models (LLMs) used in hiring evaluations revealed cultural and linguistic biases in ranking interview transcripts, with Indian applicants receiving markedly lower scores than their British counterparts, even when anonymised. This implies that AI systems inadvertently favour Western linguistic and communication norms (such as accent and tone), leading to systematic disadvantage for non-native candidates. 
These biases can lead to less diverse hiring outcomes and, in some cases, filter out qualified candidates at the initial hiring stages.\u00a0<\/p>\n<p>A 2026 UK <a rel=\"nofollow\" target=\"_blank\" target=\"_blank\" rel=\"noreferrer noopener\" href=\"https:\/\/www.theguardian.com\/business\/2026\/feb\/04\/women-tech-finance-higher-risk-ai-job-losses-report\"><strong>report<\/strong><\/a> by the City of London Corporation found that mid-career women, especially those with five to ten years of clerical experience, were being overlooked for positions in tech and financial services because of rigid automated screening processes that did not account for career breaks. The report also revealed that these women were at higher risk of losing their jobs to automation than their male counterparts.\u00a0<\/p>\n<p>Women are already disproportionately affected by unfair, stereotypical hiring practices, alongside facing invasive questions from potential employers about their plans to get married and \u201cstart a family\u201d. After finishing their paid work, women spend additional hours on unpaid domestic and care work that goes unaccounted for. The added layer of AI mirroring these existing barriers only complicates the situation further and necessitates significant changes.\u00a0<\/p>\n<p>Researchers from the University of South Australia suggest that \u201cAI alone cannot fix the biases\u201d; incorporating equality-orientated algorithms without structural context and oversight would do little for diversity. AI developers and employers must address this by building intersectional training datasets that include diverse geographical, demographic and professional trajectories. Career breaks and unconventional roles and experiences need to be acknowledged. 
AI should augment, rather than replace, human judgement: skilled HR professionals must interpret and verify AI recommendations against contextual information that algorithms cannot encode. Organisations must be held accountable for transparency about their algorithmic criteria, and candidates should have insight into how hiring decisions are made. Just as countries are increasingly developing policies against the misuse of AI and enforcing equal opportunity laws in recruitment, there must also be clear standards and regulations around fairness and anti-discrimination in AI hiring.\u00a0\u00a0<\/p>\n<p><strong>References:\u00a0<\/strong><\/p>\n<p>https:\/\/www.theguardian.com\/business\/2026\/feb\/04\/women-tech-finance-higher-risk-ai-job-losses-report<br \/>https:\/\/ojs.aaai.org\/index.php\/AIES\/article\/view\/36703\/38841<br \/>https:\/\/www.nature.com\/articles\/s41599-023-02079-x<br \/>https:\/\/economicgraph.linkedin.com\/content\/dam\/me\/economicgraph\/en-us\/PDF\/gender-gaps-in-career-breaks.pdf<br \/>https:\/\/www.businessinsider.com\/hirevue-uses-ai-for-job-interview-applicants-goldman-sachs-unilever-2017-8<br \/>https:\/\/www.reuters.com\/article\/world\/insight-amazon-scraps-secret-ai-recruiting-tool-that-showed-bias-against-women-idUSKCN1MK0AG\/<br \/>https:\/\/mitsloan.mit.edu\/ideas-made-to-matter\/ai-reinventing-hiring-same-old-biases-heres-how-to-avoid-trap<br \/>https:\/\/cardozolawreview.com\/automating-discrimination-ai-hiring-practices-and-gender-inequality\/<br \/>https:\/\/hr.economictimes.indiatimes.com\/news\/workplace-4-0\/recruitment\/over-70-indian-recruiters-turning-to-ai-to-find-hidden-talent-report\/127885466<br \/>https:\/\/explore.hireez.com\/blog\/history-of-ai-in-recruitment\/<\/p>\n<hr class=\"wp-block-separator has-alpha-channel-opacity\"\/>\n<div class=\"m-a-box \" data-box-layout=\"slim\" data-box-position=\"below\" data-multiauthor=\"false\" data-author-id=\"191692\" data-author-type=\"guest\" data-author-archived=\"\">\n<div 
class=\"m-a-box-container\">\n<div class=\"m-a-box-tab m-a-box-content m-a-box-profile\" data-profile-layout=\"layout-1\" data-author-ref=\"guest-191692\" itemscope=\"\" itemid=\"https:\/\/feminisminindia.com\/guest-author\/simran-dhingra\/\" itemtype=\"https:\/\/schema.org\/Person\">\n<div class=\"m-a-box-content-middle\">\n<div class=\"m-a-box-item m-a-box-avatar\" data-source=\"local\"><a rel=\"nofollow\" target=\"_blank\" class=\"m-a-box-avatar-url\" href=\"https:\/\/feminisminindia.com\/guest-author\/simran-dhingra\/\"><img loading=\"lazy\" decoding=\"async\" data-lazyloaded=\"1\" alt=\"\" src=\"https:\/\/secure.gravatar.com\/avatar\/230dc3075b1e99097d364a05b29c055fe74e15fdbaf0ee8b0e24af8c96730e4a?s=100&amp;d=mp&amp;r=g\" srcset=\"https:\/\/secure.gravatar.com\/avatar\/230dc3075b1e99097d364a05b29c055fe74e15fdbaf0ee8b0e24af8c96730e4a?s=200&amp;d=mp&amp;r=g 2x\" class=\"avatar avatar-100 photo\" height=\"100\" width=\"100\" itemprop=\"image\"\/><\/a><\/div>\n<div class=\"m-a-box-item m-a-box-data\">\n<div class=\"m-a-box-bio\" itemprop=\"description\">\n<p>Simran Dhingra is a recent graduate from Geneva Graduate Institute. Her research interests lie at the intersections of gender, peace, and migration. 
Her work examines how digital infrastructures reproduce power hierarchies, shape vulnerabilities, and influence policy responses at multilateral and institutional levels.<\/p>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n<\/div>\n\n<p><a href=\"https:\/\/feminisminindia.com\/2026\/02\/13\/algorithmic-inequalities-how-ai-hiring-tools-replicate-old-workplace-biases\/\">Source link <\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Organisations globally are increasingly incorporating artificial intelligence (AI) hiring tools to filter, evaluate and shortlist&#8230;<\/p>\n","protected":false},"author":1,"featured_media":0,"comment_status":"","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[5960,5961,32],"tags":[2205,5959],"class_list":["post-2052","post","type-post","status-publish","format-standard","hentry","category-ai-2","category-hiring-discrimination","category-work","tag-ai","tag-hiring-discrimination"],"_links":{"self":[{"href":"https:\/\/d.sheep-mine.ts.net\/index.php?rest_route=\/wp\/v2\/posts\/2052","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/d.sheep-mine.ts.net\/index.php?rest_route=\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/d.sheep-mine.ts.net\/index.php?rest_route=\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/d.sheep-mine.ts.net\/index.php?rest_route=\/wp\/v2\/users\/1"}],"replies":[{"embeddable":true,"href":"https:\/\/d.sheep-mine.ts.net\/index.php?rest_route=%2Fwp%2Fv2%2Fcomments&post=2052"}],"version-history":[{"count":0,"href":"https:\/\/d.sheep-mine.ts.net\/index.php?rest_route=\/wp\/v2\/posts\/2052\/revisions"}],"wp:attachment":[{"href":"https:\/\/d.sheep-mine.ts.net\/index.php?rest_route=%2Fwp%2Fv2%2Fmedia&parent=2052"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/d.sheep-mine.ts.net\/index.php?rest_route=%2Fwp%2Fv2%2Fcategories&post=2052"},{"taxonomy":"post_tag","embeddable":true,"href":"https:
\/\/d.sheep-mine.ts.net\/index.php?rest_route=%2Fwp%2Fv2%2Ftags&post=2052"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}