Data Bias on the Daily: Is AI hindering her job search?

The gender pay gap and women’s representation in leadership roles continue to dominate headlines, but what action is really being taken? Let’s take a journey through the application process with a young woman, call her Mira, who is looking to land an interview at a Science, Technology, Engineering and Mathematics (STEM) focused corporation…

A large part of Mira’s job search happens online, where she turns to various social platforms to seek out new opportunities. Depending on the channel she picks, the job advertisements she is shown are ultimately decided by pay-per-click (PPC) bidding and ad-delivery algorithms that carry their own bias.

Advertising a STEM job across various social networks reached 20% fewer women than men, even though gender discrimination in job advertising is illegal in both the US and Europe.

Management Science Study

Since advertising algorithms are designed to optimise spend and maximise reach, fewer potential female candidates were exposed to the ad and therefore never got the opportunity to apply. Put simply, the algorithms skew away from female candidates because they cost more to reach, even if a female candidate would ultimately be the better hire and require fewer resources to train. A rough sketch of how that happens is below.
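
To make the mechanism concrete, here is a minimal Python sketch under invented assumptions: a fixed ad budget, a made-up cost per impression for each audience segment (women assumed more expensive to reach, as the study describes), and no gender rule anywhere in the logic. The skew comes purely from cost.

```python
# A minimal sketch with made-up numbers: why a "gender-neutral" budget
# still buys a job ad fewer female impressions when women are the more
# expensive demographic to reach (other advertisers bid more for them).
# A spend- or reach-optimising algorithm will tilt even further toward
# the cheaper segment than this even split does.

BUDGET = 1_000.0  # total ad spend in dollars (illustrative)

# Assumed cost per impression by segment (illustrative, not the study's data).
cost_per_impression = {"male": 0.010, "female": 0.014}

def impressions_for_even_split(budget: float, costs: dict) -> dict:
    """Split the budget evenly across segments and count the impressions bought."""
    per_segment = budget / len(costs)
    return {segment: int(per_segment / cpi) for segment, cpi in costs.items()}

reach = impressions_for_even_split(BUDGET, cost_per_impression)
print(reach)
# {'male': 50000, 'female': 35714} -> roughly 29% fewer women see the ad,
# with no explicit gender rule anywhere in the code.
```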

Despite all of these challenges and biases, Mira presses on and finds a job posting in her field that captures her attention. As she reads further into the role and the company, she is both consciously and unconsciously influenced by the language used to describe the position, and that language will shape her next step. The STEM industry in particular remains largely male-dominated, partly because fewer women are trained into it, and that imbalance carries over into the language of its job listings, which in turn biases the data the job board algorithms are trained on.

A University of Waterloo and Duke University study showed that male-dominated industries, STEM among them, use masculine-themed words (like “leader”, “competitive” and “dominant”) that, when read by female applicants, lowered their perception of the job’s appeal and their sense of belongingness, even when they felt able to perform the job. On top of this, it is widely reported that a woman will typically apply for a job only if she meets 100% of the criteria, whereas a man will apply if he feels he meets around 60%. A toy version of this kind of language analysis follows.
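
For a feel of how gendered wording can be measured, here is a toy sketch in the spirit of that research: it simply counts masculine- and feminine-coded words in a listing. The word lists are tiny illustrative samples chosen for this example, not the study’s published lexicons.

```python
import re

# Toy gendered-language check: count masculine- and feminine-coded words
# in a job listing. The word lists are small illustrative samples only.
MASCULINE_CODED = {"leader", "competitive", "dominant", "aggressive", "decisive"}
FEMININE_CODED = {"collaborative", "supportive", "committed", "interpersonal"}

def code_counts(listing: str) -> dict:
    """Return how many masculine- and feminine-coded words appear in the text."""
    words = re.findall(r"[a-z]+", listing.lower())
    return {
        "masculine": sum(w in MASCULINE_CODED for w in words),
        "feminine": sum(w in FEMININE_CODED for w in words),
    }

ad = """We need a dominant, competitive leader to drive aggressive growth
targets alongside a supportive engineering team."""
print(code_counts(ad))  # {'masculine': 4, 'feminine': 1}
```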

Determined as ever, Mira eventually finds a job description and a company she feels confident about, and she submits an application. Her CV and cover letter are parsed and ranked alongside those of other applicants, male and female. Each success factor identified in Mira’s wording, such as her programming skills, is weighted according to what has historically led to successful hires at that particular company.

In the age of Artificial Intelligence, past hiring decisions are used to train algorithms to identify the candidates best suited to the company. The issue with biased data is that even if gender is excluded from the application itself, the training data may be strongly imbalanced because the industry has historically been male-dominated, and other details of the application can act as proxies for gender. This means that even when Mira’s resume is run through the hiring company’s or job board’s fit algorithm, she may still not receive an interview because of the bias baked into the model. The simulation below shows how that can happen.
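
Here is a small simulation, with entirely invented data and scikit-learn as the modelling tool, of how that plays out: the historical hiring labels carry a penalty against women, gender is dropped from the features, yet a standard classifier still scores women’s resumes lower because another feature acts as a proxy.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# A simulated sketch (entirely invented data): train a "fit" model on past
# hiring decisions that were themselves skewed, drop the gender column, and
# watch the bias survive through a correlated proxy feature.
rng = np.random.default_rng(0)
n = 5_000

gender = rng.integers(0, 2, n)             # 0 = male, 1 = female (never shown to the model)
skill = rng.normal(0, 1, n)                # genuinely job-relevant signal
proxy = gender + rng.normal(0, 0.5, n)     # e.g. wording or activities correlated with gender

# Historical hiring label: mostly skill, but past human decisions also
# penalised women -- that penalty is baked into the training data.
hired = (skill - 0.8 * gender + rng.normal(0, 0.5, n)) > 0

X = np.column_stack([skill, proxy])        # gender itself is excluded from the features
model = LogisticRegression().fit(X, hired)

scores = model.predict_proba(X)[:, 1]
print(f"mean score, male resumes:   {scores[gender == 0].mean():.3f}")
print(f"mean score, female resumes: {scores[gender == 1].mean():.3f}")
# Female resumes score lower on average even though gender was never a
# feature: the model recovers it from the proxy and reproduces the skew.
```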


If you’re not convinced by Mira’s journey, here’s a real-life example: Amazon’s experimental hiring tool screened and rated resumes based on historical hiring patterns. Because the majority of Amazon’s employees to date had been men, the system inevitably taught itself to penalise female resumes, and it did so far more efficiently than a human could.


Still unsure if data bias is perpetuating gender bias in STEM? Check out these articles from others in the industry:

How Unconscious Bias Holds Women Back

Mitigating Gender Bias

AI Is Demonstrating Gender Bias



Chatbots, Influencer Bots, Gaming Bots, Oh my!

As it becomes increasingly difficult to get the attention of executives and consumers through email and advertisements, many companies are turning to Artificial Intelligence. But the real question remains: what are the ethics behind corporations using bots to promote their goods and services?

Chatbots have shown real ROI for many businesses and have at times proven more effective than an under-trained salesforce. An Oracle survey has suggested that chatbots could save $174 billion across Insurance, Financial Services, Sales, and Customer Service. With vast abilities, including the ability to hold convincingly human dialogue, it’s no wonder companies are so eager and quick to turn to this advancement in AI. Should a company be ethically responsible for telling consumers that they are chatting with a bot? Is it on the consumer to “beware” of bots? Do we as a society care? Should we care?

Influencer bots like Lil Miquela and Shudu use CGI and are run by companies specializing in Artificial Intelligence to blur the line between reality and robot. These two influencer bots have amassed almost 2 million followers, and Lil Miquela is touted as “Musician, change-seeker, and robot with the drip💧💖” in her Instagram bio. These bots are influencing the purchases we make, the culture we enjoy and now even the music industry. Lil Miquela’s profile is at least honest that she is a bot, but Shudu’s bio vaguely states: “The World’s First Digital Supermodel.”


In a world where social media is influencing so many consumer purchasing decisions, especially in younger demographics, is it even ethical to create an influencer bot?


Gaming bots, when used as intended, can often enhance the gaming experience. For example, Fortnite has upped its bots’ abilities in order to maintain an enjoyable experience for newcomers in its vast online community. However, most gamers know that bots have been abused ever since the idea of using an algorithm to replace a human online became a reality. Many have used bot technology to their advantage, and there are companies out there doing something about it. Niantic, creator of one of the most popular mobile apps ever, Pokemon Go, has been extremely strict with bots and gamers trying to cheat the system, going so far as to take legal action.

With bots becoming one of the most common forms of Artificial Intelligence we interact with daily, we need to start questioning the ethical implications. How are AI-empowered bots improving our daily lives? What are the implications of influencer bots like Lil Miquela and Shudu for the future of our society? How aware are you that Artificial Intelligence is impacting your purchasing decisions?



Data Bias on the Daily: Criminal Sentencing - Not all algorithms are created equal

Imagine this… You’ve been convicted of a non-violent crime, say petty theft. Your legal team decides the best course of action is to take a plea deal. On the day of your sentencing, the judge rejects the plea deal and doubles your sentence. Why? An algorithm says you are at high risk of committing a violent crime in the future…

You may be reading this thinking: that can’t possibly be real. But it is an all-too-real scenario because of the COMPAS algorithm.


COMPAS, an acronym for Correctional Offender Management Profiling for Alternative Sanctions, is a case management and decision support tool used by U.S. courts to assess the likelihood of a defendant becoming a repeat offender.


The problem with COMPAS? As a ProPublica report states, “Only 20 percent of the people predicted to commit violent crimes actually went on to do so.” ProPublica also concluded that the algorithm was twice as likely to falsely flag black defendants as future criminals as it was to falsely flag white defendants. And therein lies the problem: the algorithm’s training data is inherently biased by years of human bias in the courtroom. A worked example of the statistic behind that finding follows.
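
To see what “twice as likely to falsely flag” means in practice, here is a short worked example. The counts are illustrative, chosen to land near the rates ProPublica published rather than copied from their tables; the point is that the false positive rate is computed per group among defendants who did not reoffend.

```python
# A worked example of the per-group false positive rate behind the
# "twice as likely to falsely flag" finding. The counts below are
# illustrative (chosen to land near the rates ProPublica reported,
# not copied from their tables).
counts = {
    # group: (labelled high-risk but did NOT reoffend, total who did NOT reoffend)
    "black defendants": (805, 1795),
    "white defendants": (349, 1488),
}

for group, (false_positives, total_non_reoffenders) in counts.items():
    fpr = false_positives / total_non_reoffenders
    print(f"{group}: false positive rate = {fpr:.0%}")

# Output: roughly 45% vs 23% -- about a two-to-one gap in who gets wrongly
# flagged, even if overall accuracy looks similar for both groups.
```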

COMPAS is not only racially biased; it also exhibits age and gender bias. An independent study by researchers at Cornell University and Microsoft found that because most of COMPAS’s training data came from male offenders, the model does not handle female defendants as well as it could. The researchers even built a separate model aimed specifically at recidivism risk prediction for women. A simulated sketch of why a pooled model can fail the smaller group follows.
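
Here is a simulated sketch, again with invented data and scikit-learn, of why a single model trained mostly on men can serve women poorly, and why a dedicated model fit on women’s records alone can do better.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

# A simulated sketch (invented data): when the training data is overwhelmingly
# male and the risk signal works differently for women, one pooled model
# serves women worse than a model fit on women's records alone.
rng = np.random.default_rng(1)

def make_group(n: int, slope: float):
    """Generate one group's feature and outcome with its own feature-outcome relationship."""
    x = rng.normal(0, 1, (n, 1))
    y = (slope * x[:, 0] + rng.normal(0, 0.7, n)) > 0
    return x, y

X_m, y_m = make_group(9_000, slope=1.5)    # men dominate the training data
X_f, y_f = make_group(1_000, slope=-0.8)   # the signal points the other way for women

pooled = LogisticRegression().fit(np.vstack([X_m, X_f]), np.concatenate([y_m, y_f]))
women_only = LogisticRegression().fit(X_f, y_f)

print(f"pooled model, accuracy on women:     {pooled.score(X_f, y_f):.3f}")
print(f"women-only model, accuracy on women: {women_only.score(X_f, y_f):.3f}")
# The pooled model largely learns the male pattern and misreads women;
# the dedicated model recovers the women-specific relationship.
```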

But why separate the data based solely on gender when COMPAS has also been shown to have racial bias? And why are judicial systems still turning to private, for-profit companies whose algorithms are known to encode racial, age and gender bias?

Turning to these types of algorithms has long-standing implications for human life and our judicial system. People sentenced in the early days of algorithmic adoption should not be test samples or guinea pigs for faulty, biased algorithms. As Artificial Intelligence becomes more mainstream, understanding the data sets and training methodologies is key to understanding the results. How is data bias affecting your daily life?


For more information on COMPAS and ProPublica’s report, please click here.

Up next: DNA Testing: Is knowing your heritage worth risking your privacy?
