Alexa, how are you using my data?

Smart Home technology is a well-saturated market, full of devices that just a few years ago many thought impossible. We have long talked about voice assistants and video-enabled devices, but technology once considered futuristic has arrived, seemingly omnipresent in households around the globe. Not only are Artificial Intelligence enhanced, video-enabled devices now available, but they come in many varieties, from home protection to two-way video chatting with your pet.


Video “chat” with a pet?

When did animals start chatting?


With these ever-present devices in so many households – cameras, digital assistants, smart TVs, smart thermostats and more – are we enhancing our physical privacy or actually putting it in jeopardy? Are the benefits, such as automation, smartphone remote capabilities and more, worth the risk to data privacy?

Are you taking advantage of Smart Tech, or is it taking advantage of you?

So what’s the big deal if your smart home has data that’s making your life easier? Amazon’s Alexa can make it easier for you to order more household items. Google Home can integrate with Google Nest to let you control your A/C by simply telling it to change the temperature. All great features that make our day-to-day a little more convenient, but what exactly are these companies doing with our data?

Amazon is pretty transparent about the voice data Alexa stores; a quick look at its website tells us that. But what about Google, its biggest competitor in the Smart Home space? Google is fairly transparent as well, though as mentioned in the intro post, changing your privacy settings may impact your service. Google’s privacy policy tells us the company is mostly using your saved data to improve searches and targeted ads, as the video below explains:

These are great examples of corporate transparency, and they describe relatively mundane uses for your data, but they are still important to understand. The bigger concern is the future consequences of these corporations holding your stored data. Google’s then-CEO Eric Schmidt said in 2010:

“One of the things that eventually happens … is that we don’t need you to type at all,” later adding: “Because we know where you are. We know where you’ve been. We can more or less guess what you’re thinking about.”

Eric Schmidt

Adapt that quote to the Smart Home: eventually you don’t need to speak to Google Home at all; the A/C simply changes to suit your predetermined preferences when you arrive home, because Artificial Intelligence has found the patterns in your stored data. Alexa doesn’t need you to tell her to order paper towels; they just show up, because your stored data says it’s time for another shipment.

While these specific examples of transparency around data storage are promising, consider how much you’re willing to give away and where the line sits for your data privacy. These devices are always listening, and while the corporations behind them may simply be using that data to “train” their AI, governments or third-party apps could be using it for other reasons.

In 2018, law enforcement subpoenaed Amazon for Echo data as evidence in a murder trial. The lines between privacy, technology and criminal justice are shifting daily. Amazon is not the only tech company this has happened to; Fitbit and Apple have faced similar situations. While most technology companies are quick to defend consumer privacy, the question still stands:


How much of your personal privacy are you willing to give away?


Letting AI into our daily lives is not something to fear, but maintaining control over data and privacy should be a top concern. There are many ways to protect your privacy, or at least limit your exposure, while still enjoying the benefits of Smart Home tech. Awareness is the first step toward enhanced privacy. Visit 101 Data Protection Tips for a comprehensive list of ways to protect your privacy.


Up next: Data Privacy Part Four



Data Bias on the Daily: Is AI hindering her job search?

The gender pay gap and women’s representation in leadership roles continue to dominate headlines, but what action is really being taken? It’s time to take a journey through the application process with a young woman, let’s call her Mira, looking to land an interview at a Science, Technology, Engineering or Math (STEM) focused corporation…

A large part of Mira’s job search happens online, where she turns to various social platforms to seek out new opportunities. Depending on the channel she picks, she will be shown job advertisements driven by biased pay-per-click (PPC) and purchasing algorithms.

Advertising a STEM job across various social networks reached 20% fewer women than men, even though gender discrimination in advertising is illegal in both the US and Europe.

Management Science Study

Because advertising algorithms are designed to optimise advertising spend and maximise reach, fewer potential female candidates were exposed to the ad and therefore never got the opportunity to apply. Put simply, the algorithms are biased against female candidates because women cost more to reach, even if a female candidate would ultimately be the better hire and require fewer resources to train.
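The mechanism is easy to see in miniature. The sketch below is a hypothetical toy model, not the study’s methodology: the per-impression costs are invented, and the only assumption is that reaching women costs more because consumer brands bid heavily for the same audience. Even with an equal budget per group, buying reach at each group’s market price produces a sizeable gap with no gender rule anywhere in the code.

```python
# Toy model of reach-optimised ad delivery. Costs are invented for
# illustration; the gap emerges from price alone, not from any
# explicit gender rule.

BUDGET_PER_GROUP = 500.0          # equal spend on each audience
COST_PER_IMPRESSION = {
    "men": 0.50,                  # cheaper to reach
    "women": 0.70,                # pricier: consumer brands bid for
}                                 # the same female audience

def impressions_bought(budget, cost_per_impression):
    """Reach delivered for a given spend at a given market price."""
    return budget / cost_per_impression

reach = {
    group: impressions_bought(BUDGET_PER_GROUP, cost)
    for group, cost in COST_PER_IMPRESSION.items()
}
gap = 1 - reach["women"] / reach["men"]
print(f"Women see {gap:.0%} fewer impressions for the same spend")
```

An optimiser that instead maximises total reach across the whole budget would skew even further, since every marginal dollar buys more impressions in the cheaper group.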

Despite all of these challenges and biases, Mira chugs along and finds a job posting in her field that captures her attention. Reading further into the role and the company, she is both consciously and unconsciously influenced by the language used to describe the role, and that language will shape her next step. The STEM industry is particularly imbalanced: too few women are trained for it, so it remains largely male-dominated. That imbalance carries into the biased language of the industry’s job listings, which in turn biases the data the job board algorithms are trained on.

A University of Waterloo and Duke University study showed that male-dominated industries (such as STEM) use masculine-themed words (like “leader”, “competitive” and “dominant”), which, when read by female applicants, lowered their perception of the “job’s appeal” and their sense of “belongingness”, even when they felt able to perform the job. On top of this, it is often reported that women tend to apply for a job only when they meet 100% of the criteria, whereas men apply when they feel they meet around 60%.

Determined as ever, Mira eventually finds a job description and company she feels confident about, and she submits an application. Her CV and cover letter are parsed and ranked alongside those of other applicants, male and female. Each success factor identified in the words Mira has used, like her programming skills, is weighted according to what has historically been successful at that particular company.

In the age of Artificial Intelligence, past hiring decisions are used to train algorithms to identify the candidates best suited to the company. The trouble with biased data is that even if gender is excluded from the application itself, the gender distribution of the training data may be strongly imbalanced, since the industry has been historically male-dominated. So even if Mira’s resume makes it into the hiring company’s or job board’s fit algorithm, she still may not get the interview, because of the bias baked into the program.
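To see how a “gender-blind” ranker can still discriminate, here is a minimal sketch with invented resumes and hiring outcomes (this is not Amazon’s or any real vendor’s system). Gender is never stored, but a keyword that correlates with it, “women’s robotics club” versus “robotics club”, inherits the historical outcome and drags the score down.

```python
from collections import defaultdict

# Invented history of (resume keywords, was hired). Gender is never
# recorded, but some keywords happen to correlate with it.
history = [
    ({"python", "robotics club"}, True),
    ({"java", "chess team"}, True),
    ({"c++", "rowing"}, True),
    ({"python", "women's robotics club"}, False),  # historically passed over
    ({"java", "women's chess team"}, False),
    ({"c++", "women's rowing"}, False),
]

# Weight each keyword by its historical hire rate.
tally = defaultdict(lambda: [0, 0])          # keyword -> [hires, times seen]
for keywords, hired in history:
    for kw in keywords:
        tally[kw][0] += hired
        tally[kw][1] += 1
weights = {kw: hires / seen for kw, (hires, seen) in tally.items()}

def score(resume):
    """Average historical hire rate of the resume's keywords."""
    return sum(weights.get(kw, 0.5) for kw in resume) / len(resume)

# Identical skills, different club name, very different rank.
print(score({"python", "robotics club"}))          # 0.75
print(score({"python", "women's robotics club"}))  # 0.25
```

Dropping the gender column does nothing here; the bias lives in the correlation between the proxy keyword and the historical outcome.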


If you’re not convinced by Mira’s journey, here’s a real-life example: Amazon’s experimental hiring tool screened and rated resumes based on historical hiring patterns. The majority of those past hires were men, and the system inevitably taught itself to penalise female resumes, with far greater efficiency than any human.


Still unsure whether data bias is perpetuating gender bias in STEM? Check out these articles from others in the industry:

How Unconscious Bias Holds Women Back

Mitigating Gender Bias

AI Is Demonstrating Gender Bias



Data Bias on the Daily: Criminal Sentencing - Not all algorithms are created equal

Imagine this: you’ve been convicted of a non-violent crime, say petty theft. Your legal team decides the best course of action is to take a plea deal. On the day of your sentencing, the judge rejects the plea deal and doubles your sentence. Why? An algorithm says you are at high risk of committing a violent crime in the future…

You may be reading this thinking: that can’t possibly be real. But it is an all-too-real scenario, because of the COMPAS algorithm.


COMPAS, an acronym for Correctional Offender Management Profiling for Alternative Sanctions, is a case management and decision support tool used by U.S. courts to assess the likelihood of a defendant becoming a repeat offender.


The problem with COMPAS, as a ProPublica report found, is its accuracy: “Only 20 percent of the people predicted to commit violent crimes actually went on to do so.” ProPublica also concluded that the algorithm was twice as likely to falsely flag black defendants as future criminals as it was to falsely flag white defendants. And therein lies the problem: the algorithm’s training data is inherently biased by years of human bias in the courtroom.
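ProPublica’s headline finding is a disparity in false positive rates: among people who did not go on to reoffend, how many were flagged high-risk anyway? The sketch below runs that check on a tiny invented dataset, not the real COMPAS records, just to make the metric concrete.

```python
# False positive rate by group: of the people who did NOT reoffend,
# what share were flagged high-risk anyway? Records are invented toy
# data, not the actual dataset ProPublica analysed.

records = [
    # (group, flagged_high_risk, actually_reoffended)
    ("A", True,  False), ("A", True,  False), ("A", True,  True),
    ("A", False, False), ("A", False, True),
    ("B", True,  False), ("B", True,  True),  ("B", False, False),
    ("B", False, False), ("B", False, True),
]

def false_positive_rate(records, group):
    """Share of non-reoffenders in `group` wrongly flagged high-risk."""
    non_reoffenders = [r for r in records
                       if r[0] == group and not r[2]]
    wrongly_flagged = [r for r in non_reoffenders if r[1]]
    return len(wrongly_flagged) / len(non_reoffenders)

fpr_a = false_positive_rate(records, "A")   # 2 of 3 non-reoffenders flagged
fpr_b = false_positive_rate(records, "B")   # 1 of 3 non-reoffenders flagged
print(f"Group A is {fpr_a / fpr_b:.0f}x as likely to be falsely flagged")
```

A model can have the same overall accuracy for both groups and still fail this check, which is why auditing error rates per group matters.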

COMPAS is not only racially biased; it also shows bias by age and gender. An independent study by researchers at Cornell University and Microsoft found that because most of COMPAS’s training data came from male offenders, the model does not predict recidivism for women as accurately as it could. The researchers even built a separate COMPAS-style model aimed specifically at recidivism risk prediction for women.

But why would COMPAS separate the data solely by gender when it has also been shown to have racial bias? Why are judicial systems still turning to private, for-profit companies whose algorithms are known to encode racial, age and gender bias?

Turning to these types of algorithms has long-standing implications for human life and our judicial system. Criminals sentenced in the early days of algorithmic adoption should not be test samples or guinea pigs for faulty, biased algorithms. As Artificial Intelligence becomes more mainstream, understanding the data sets and training methodologies is key to understanding the results. How is data bias affecting your daily life?


For more information on COMPAS and ProPublica’s report, please click here.

Up next: Data Privacy Part Four


Solved: Dismantling the Silo

We are living in a world obsessed with connectivity, yet many large corporations still work in silos. How can a corporation become connected and move towards Industry 4.0 if its data and systems are trapped in silos? This is a problem corporations big and small face today. To understand and address the problem, we must first understand the silo. There is a solution, there is a path forward, and Artificial Intelligence can lead the way.

Silos are created when information, goals, tools, priorities and processes are not shared with other departments, a pitfall reinforced by corporate culture. The drive for the lowest overall cost and best functionality in each department, or in the case of some manufacturers the same department at different plants, has created disparate data. Many systems are programmed in ways that do not work well together, or only function within one vendor’s stack, but there is technology out there dedicated to dismantling the silo.

Executives often fall into the trap of thinking the only way to advance into Industry 4.0 is to replace disparate systems and increase capital expenditures. What many executives, and people across corporate functions, do not yet understand is that Artificial Intelligence (AI) can be the systems connector. The entire premise of AI is built on connecting information previously thought to be entirely unrelated. With the right AI provider, a corporation may not need to increase CAPEX at all.

To get the most accurate picture of the underlying issues within a corporation, AI must be able to connect to vast amounts of data across many different silos. This doesn’t mean you have to dismantle the silos; you just need the right AI connector. Let Artificial Intelligence be your Silo Dismantling Agent.

Data Bias on the Daily

Our upcoming September blog series is Data Bias on the Daily: How Data Bias in Artificial Intelligence Is Impacting You. The series focuses on the data bias we encounter daily, in its various forms, and aims to educate consumers on how bias can enter algorithms, knowingly and unknowingly. First, it is important to define bias:

Bias: The systematic favoritism that is present in the data collection process, resulting in lopsided, misleading results.

How to Identify Statistical Bias

Bias in Artificial Intelligence and algorithms is sometimes intentional; it can be caused by a number of things, including but not limited to sample selection and data collection; and it can be avoided, if that is the desired outcome. (Many corporations deliberately write bias into their algorithms in order to increase their bottom line.)
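Sample-selection bias is the easiest of these to demonstrate. In the hypothetical sketch below (invented numbers), a survey that only reaches one easy-to-contact subgroup reports a badly skewed average, even though every individual answer is honest. The collection step, not the arithmetic, produces the misleading result.

```python
# Sample-selection bias in miniature. Values are invented for
# illustration: (subgroup, daily hours of smart-device use).
population = [("urban", v) for v in (4, 5, 6, 5)] + \
             [("rural", v) for v in (1, 2, 1, 2)]

# What an unbiased census would report.
true_mean = sum(v for _, v in population) / len(population)

# Biased collection: only urban households answered the survey.
survey = [v for group, v in population if group == "urban"]
survey_mean = sum(survey) / len(survey)

print(true_mean)    # 3.25
print(survey_mean)  # 5.0 -- favours the subgroup that was sampled
```

No amount of downstream modelling fixes this; if the training data over-represents one group, every result inherits the skew.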

Maybe you’re a woman searching for a STEM job, but the data is biased against your application. Perhaps your Amazon search is biased towards products and brands that increase Amazon’s bottom line. Data bias is even entering our judicial system: is it possible that algorithmic “advancement” is simply confirming long-standing racial bias?


Stay tuned to learn more about how Data Bias impacts our daily lives by checking in with us on Mondays in September (9/16, 9/23 and 9/30).


