Facial Recognition, Data Privacy and Your Identity

For centuries, the government-issued ID or passport has been the official link between our facial features and our identity. The two in combination have been the traditional way of both authenticating and identifying us, on an ad-hoc and transient basis, at bars, banks, airports, and so much more…

The ways in which our identity and facial features can be used, in both private affairs and civil procedures, are virtually endless, often without our knowledge or explicit consent. Regulating that usage is a topic on which unanimous agreement is difficult: our privacy and safety must be balanced in the same world where freedom of expression, movement and liberty contribute to our livelihoods.

Then we introduce technology to the equation.

Where technology adds a level of magic, comfort and efficiency to our lives never experienced before, relieving us of the boring, the mundane and the impossible, it also adds a level of risk to our data privacy and security. And we all know that once something has been introduced to the internet, it is near impossible to remove.

Take the Google Nest Hub Max. We entrust it to connect us to our homes when we are far away, letting us track and see key events for ourselves: saving energy, ensuring comfort and convenience, and maximising the surveillance and security of our babies, dogs and homes. But can we trust where Google is storing our most sensitive data and what it is doing with it? Nest uses Face Match, facial recognition software enabled by the always-on, front-facing camera, to understand which user is present and to power video calls. If the feature is on, detection is constant, and the security of our data, and how it is processed and stored, cannot be guaranteed. Detection can be turned off, but to the detriment of functionality.

Turn to Apple Photos, Google Photos and Facebook tagging: instances where facial recognition is applied to data you provide, namely your photos. The ease of photography, the popularity of the selfie and the constant desire to update your friends, family and followers on your day-to-day means bulky streams of hundreds if not thousands of photos. Manually organising these photos is administrative and time-consuming; in other words, something AI and automation can now do for us. So when Apple’s algorithms can identify the faces of your family and friends, even your pets, organise them and let you search by face, does this functionality make the security risk worth it?

Facebook’s facial recognition feature notifies you when others upload photos of you. Whilst recent developments enable an individual to opt out of this function, there is no guarantee that Facebook itself is not scanning and processing your image, harnessing your identity to build an online profile of you to sell to advertisers and who knows what else. Likewise with Google, which uses facial recognition and automation to auto-tag photos of you and your friends: you can choose to opt out, but you have no control over what your friends may decide to do with photos of you and where that data publicly ends up.

All of these technologies and instances of applied facial recognition also enable facial mapping, providing swift and secure entry into our smartphones and rendering them impenetrable in the wrong hands. Once unlocked, a further layer of facial recognition safeguards mobile payments like Apple Wallet which, unlike other contactless forms of payment, has no limit.

An enigma surrounds where our facial image data actually lives inside Google’s, Facebook’s and Apple’s servers, how secure it is, and how effective the encryption really is. One thing is certainly clear: given the multi-purpose nature of our devices and their built-in sensitivity to our changing environments, be it Nest, which knows in real time the occupancy of our homes and our preferred temperature, or our phones, which house our banking details alongside thousands of images, texts and our browser history, the future of our data privacy begs the question:

Is the cost of convenience and our need to publish and document our life’s moments worth more right now than the risk of jeopardizing our privacy, and possibly our identity?


Up next: DNA Testing: Is knowing your heritage worth risking your privacy?



Enterprise Artificial Intelligence – Academic Theory or Ready for Primetime?

Solvetheunsolvable has already explored various aspects of consumer AI and products purporting to leverage AI technologies, but is AI for Enterprise ready for primetime?

Investors aren’t the only ones betting big on Artificial Intelligence; it turns out Higher Education is also investing heavily in the space. Yet despite heavy investment in research and development, enterprise-level AI seems to be off to a rocky start. Earlier this month Northeastern University allocated $50 million to an Institute for Experiential Artificial Intelligence, dedicated to uniting leading experts to solve the world’s unsolvable problems.

“This new institute, the first of its kind, will focus on enabling artificial intelligence and humans to collaborate interactively around solving problems in health, security, and sustainability. We believe that the true promise of AI lies not in its ability to replace humans, but to optimize what humans do best.”


Northeastern President Joseph E. Aoun

This isn’t Northeastern’s first step into the world of Artificial Intelligence and Automation. It already has an Institute for Experiential Robotics that brings together engineers, sociologists and other experts, including economists, to design and build robots with the ability to learn and execute human behaviors. And Northeastern isn’t just building institutes for experts to conduct research; it is making it a priority to prepare students for success in the age of artificial intelligence, with an entire curriculum dedicated to what it calls humanics, a key part of its strategic plan, Northeastern 2025.

Northeastern 2025 Promo Video

“We are building on substantial strengths across all colleges in the university,” said Carla Brodley, dean of the Khoury College of Computer Sciences. “Experiential AI is highly relevant to our mission.”

Though Northeastern is one example of a university betting heavily on AI, it is not alone in the quest to equip students with a proper education for the AI-enabled future. In fact, government agencies are getting involved in funding AI in education. The UK has pledged to invest £400 million in math, digital and technical education through the government’s AI sector deal, to protect Britain’s technology sector amid Brexit, plus an additional £13 million for postgraduate education on AI. In the US, just a few days ago, the National Science Foundation announced a joint federal program to fund artificial intelligence research at colleges, universities and nonprofit or nonacademic organizations focused on educational or research activities.

The National Science Foundation is awarding $120 million to fund planning grants and support up to six institutes, but there’s a catch. Each institute must have a principal focus on at least one of six themes:

  • Trustworthy AI
  • Foundations of Machine Learning
  • AI-Driven Innovation in Agriculture and the Food System
  • AI-Augmented Learning
  • AI for Accelerating Molecular Synthesis and Manufacturing
  • AI for Discovery in Physics

As universities and governments bet big on the future of AI and education, it underscores the importance of AI on a global scale, but does it call into question whether AI solutions ready to take business to the next level exist today? Utilizing AI and automation will be imperative for corporations to remain competitive and for the advancement of business, but when the floodgates will be flung open, and by whom, remains a mystery.

Will you leap into the future and embrace AI now? How do you see the futuristic vision of enterprise AI transforming your business? Challenge Solvetheunsolvable with your business conundrum or leave your thoughts in the comments below, and let’s explore what AI can do for you.



Alexa, How are you using my data?

Smart Home technology is a well-saturated market, full of technology that just a few short years ago many thought impossible. We have long talked about voice assistants and video-enabled devices, but technology once thought futuristic has now arrived, seemingly omnipresent in households around the globe. Not only are Artificial Intelligence-enhanced video devices now available, they come in many varieties, from home protection to two-way video chatting with your pet.


Video “chat” with a pet?

When did animals start chatting?


With these ever present devices in so many households – cameras, digital assistants, smart TVs, smart thermostats and more – are we enhancing our physical privacy or actually putting it in jeopardy? Are the benefits, such as automation, smart phone remote capabilities and more, worth the risk of data privacy?

Are you taking advantage of Smart Tech, or is it taking advantage of you?

So what’s the big deal if your smart home has data that’s making your life easier? Amazon’s Alexa can make it easier for you to order more household items. Google Home can integrate with Google Nest to allow you to control your A/C by simply telling it to change the temperature. All great features that help make things a little bit more convenient in our day to day, but what exactly are these companies doing with our data?

Amazon is pretty transparent in regards to the voice data Alexa is storing, a quick look on their website tells us that. But what about Google, their biggest competitor in the Smart Home space? Google is fairly transparent as well, though as previously mentioned in the intro post, changing your privacy settings may impact your service. Google’s privacy policy website tells us that they are mostly using your saved data to improve searches and targeted ads, see this video below:

These are great examples of transparency from these corporations, and they outline relatively mundane uses for your data, but it’s still important to understand them. The future consequences of these corporations holding your stored data should be the bigger concern. Google’s CEO Eric Schmidt said in 2010:

“One of the things that eventually happens … is that we don’t need you to type at all,” later adding: “Because we know where you are. We know where you’ve been. We can more or less guess what you’re thinking about.”

Eric Schmidt

Adapt that quote to the Smart Home and eventually Google doesn’t need you to speak to Google Home, rather the A/C just changes to suit your pre-determined preferences when you arrive home because of patterns in your stored data combined with Artificial Intelligence. Alexa doesn’t need you to tell her to order paper towels, they just show up because all of your stored data has told them it’s time for another shipment.

While these specific examples of transparency regarding data storage are promising, consider how much you’re willing to give away and where the line is with your data privacy. Consider the fact that these devices are always listening and while the corporations behind them may be simply using this data to “train” their AI, the government or third party apps could be using this data for other reasons.

In 2018 law enforcement subpoenaed Amazon for Amazon Echo data as evidence in a murder trial. The lines between privacy, technology and criminal justice are shifting daily. Amazon is not the only tech company this has happened to; Fitbit and Apple have run into similar situations. While most technology companies are quick to defend consumer privacy, the question still stands:


How much of your personal privacy are you willing to give away?


Letting AI into our daily lives is not something to fear, but maintaining control over data and privacy should be a top concern. There are many ways to protect your privacy, or at least limit your exposure, while utilizing the benefits of Smart Home tech, and awareness is the first step in achieving enhanced privacy. Visit 101 Data Protection Tips for a comprehensive list of ways to protect your privacy.






Robotics: Automation or Artificial Intelligence?

The rise of robotics: a long-touted seismic shift in human existence, the day an inanimate creation is brought to life. A scary reality in the minds of many conspiracy theorists, and a reality many tech leaders would have us believe is already upon us. But how close are we to engineering a robotic race?

“Who controls the past controls the future. Who controls the present controls the past.”

George Orwell, 1984

It’s difficult not to think of physical robots tackling common human tasks when we see the word robotics, but robotics now refers to a much larger application of technology and a rising industry. Robotics is about creating efficiency and automating mundane tasks, a world that extends beyond purely physical robots and gives rise to automation bots.

Robotic Process Automation (RPA) is an example of an automation bot operating in a digital world. You may be thinking: obviously it’s automation, it’s even in the name, but what is RPA? Used to perform simple, repetitive tasks such as data entry, RPA is a programmable “bot” that automates a process in order to free up time for the humans who would otherwise be doing these mundane tasks. RPA cannot be considered Artificial Intelligence, as it has no ability to understand the implications of the tasks it performs or to predict future scenarios arising from them.
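To make the distinction concrete, here is a minimal, hypothetical sketch (the record fields are invented for illustration) of what an RPA-style bot amounts to: a fixed mapping applied to each record, with no understanding of the data it moves.

```python
# Hypothetical RPA-style "bot": it copies fields from one record format to
# another following fixed rules, with no comprehension of the data itself.
def rpa_copy_invoice(source_record):
    # The bot is just a hard-coded mapping: rename fields, normalize values.
    return {
        "customer_name": source_record["name"].strip().title(),
        "amount_due": round(float(source_record["total"]), 2),
        "currency": source_record.get("currency", "USD"),
    }

records = [
    {"name": "  acme corp ", "total": "199.504"},
    {"name": "globex", "total": "42"},
]
processed = [rpa_copy_invoice(r) for r in records]
```

However many records it processes, the bot never deviates from its mapping; an unexpected field or format simply breaks it, which is exactly why RPA falls short of intelligence.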

Amazon Scout

In contrast, Amazon’s Scout is out on the streets in California: a physical robot making package deliveries. Scout may be physical and operate in the real world, but like RPA it is another example of automation, lacking human intuition. Just like RPA, Scout is programmed to deliver a package straight to your door, removing a repetitive task and lessening the burden on man and machine, but the bot is not capable of moving the delivery to the back door under the overhang when rain is predicted, unless its delivery instructions are explicitly programmed to do so. Far from artificial intelligence, Scout is simply a machine programmed to automate a repetitive human function.

While people commonly mistake robotics for artificial intelligence, it’s important to understand why RPA and delivery robots are not examples of true artificial intelligence. Are they intelligent bots? Maybe. They certainly process and execute complicated instructions and factor in many variables, but they lack inherent cognitive function. Humans are constantly concerned about the demise of humanity as robots are brought to life, but as long as artificial intelligence lacks the ability to replicate common sense, the rise of the robotic race will remain in the halls of science fiction.




Data Privacy Series

When’s the last time you read the terms and conditions before clicking “accept” as you downloaded the hottest new app to your smartphone? Do you really know what companies are doing with your data from your search history, the pictures on your phone, or even your personal health records?

How our data, personal details and digital patterns, is being used by the scores of apps, programs, and devices we interact with on a daily basis remains a mystery. When we click “Accept” on the terms and conditions page, usually in a hurry, we are blindly “choosing” to opt in to whatever data collection and privacy infringements the developer has built into the technology. What’s more, most companies use vague statements on their websites regarding what they are doing with your data and even threaten to impact your service if you decide not to share your data.

For example, this snippet comes directly from Nest’s FAQs:

Nest FAQs

How important is that app or device?  Is it worth signing over your digital rights?


Join the SolvetheUnsolvable team this month to explore how private your data really is…

Facial Recognition, friend or foe? Family Heritage Mapping, a key to the past or losing control of your future?… Who else is checking in on grandma? The hidden dangers in Smart Home technology. Are you using your cell phone, or is it using you?

Check back in on Wednesdays this October to learn more about data privacy…

Up next: DNA Testing: Is knowing your heritage worth risking your privacy?




Data Bias on the Daily: Is AI hindering her job search?

The gender pay gap and women’s representation in leadership roles continue to captivate headlines, but what action is really being taken? It’s time to take a journey through the application process of a young adult female, let’s call her Mira, looking to land an interview at a Science, Technology, Engineering or Math (STEM) focused corporation…

A large part of Mira’s job search happens online, where she turns to various social platforms to seek out new opportunities. Depending on the channel she picks, she will be shown job advertisements ultimately shaped by biased pay-per-click (PPC) and ad-purchasing algorithms.

Advertising a STEM job across various social networks reached 20% fewer females than males, even though gender discrimination in job advertising is illegal in both the US and Europe.

Management Science Study

Since advertising algorithms are designed to optimise advertising spend and maximise reach, fewer potential female candidates were exposed to the ad and therefore never got the opportunity to apply. To put it simply, the algorithms are biased against female candidates because of the higher cost of reaching them, even if the female candidate would ultimately be the better hire and require fewer resources to train.
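A toy illustration of that dynamic, with made-up per-impression costs rather than real ad-platform figures: an allocator that only maximizes impressions per dollar pours the entire budget into the cheaper-to-reach group.

```python
# Hypothetical sketch (invented numbers): a spend-optimizing ad allocator
# that maximizes impressions per dollar under-serves the costlier group.
BUDGET_CENTS = 10_000                               # a $100 ad budget
cost_per_impression_cents = {"men": 5, "women": 8}  # assumed example costs

def allocate(budget, costs):
    # Greedy "optimizer": spend the whole budget where impressions are cheapest.
    cheapest = min(costs, key=costs.get)
    return {group: (budget // costs[group] if group == cheapest else 0)
            for group in costs}

impressions = allocate(BUDGET_CENTS, cost_per_impression_cents)
```

No one wrote “exclude women” into this code; the exclusion falls out of the cost structure combined with a single-minded optimization goal, which is the essence of the bias the study describes.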

Despite all of these challenges and biases facing her, Mira chugs along and finds a job posting in her field that captures her attention. Upon reading further into the job and the company, she is both consciously and unconsciously influenced by the language used to describe the role, which will determine her next step. The STEM industry in particular is imbalanced due to a lack of women being trained, and therefore remains largely male-dominated. This carries over into the biased language used in the industry’s job listings, which in turn biases the data the job board algorithms are trained on.

A University of Waterloo and Duke University study showed that male-dominated industries (STEM industries) use masculine-themed words (like “leader”, “competitive” and “dominant”) which, when interpreted by female applicants, lowered their perception of the “job’s appeal” and their sense of “belongingness”, even when they felt able to perform the job. On top of this, research suggests a female will only apply for a job if she fulfills 100% of the criteria, whereas males will apply if they feel they fulfill just 60%.
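Tools that audit job postings for such language do exist; a drastically simplified sketch, using a tiny hand-picked word list rather than the research-based lexicons real tools rely on, might look like this:

```python
# Illustrative gendered-language check for a job posting. The word list is a
# tiny invented sample, not the lexicon any real auditing tool uses.
MASCULINE_CODED = {"leader", "competitive", "dominant", "aggressive"}

def flag_masculine_terms(posting):
    # Normalize words (lowercase, strip trailing punctuation) and intersect
    # with the masculine-coded list.
    words = {w.strip(".,").lower() for w in posting.split()}
    return sorted(words & MASCULINE_CODED)

flags = flag_masculine_terms("We want a competitive leader to grow the market.")
```

Even this crude version surfaces the pattern the study measured: the flagged words are exactly the ones shown to lower female applicants’ sense of belonging.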

Determined as ever, Mira eventually finds a job description and company she feels confident about, and she submits an application. Her CV and cover letter are parsed and ranked alongside those of other applicants, male and female. Each success factor identified within the words Mira has used, like her programming skills, is weighted according to what has historically been successful for that particular company.

In the age of Artificial Intelligence, past hiring decisions are used to train algorithms to identify the best suited candidates for the company. The issue with biased data is that even if gender is excluded from the application itself, the gender distribution of the training data may be strongly imbalanced since the industry has been historically male-dominated. This means that even if Mira gets to the point where the hiring company or job board puts her resume through their fit algorithm, she still may not receive the interview based on the inherent bias in the program.


If you’re not convinced by Mira’s journey, here’s a real-life example: Amazon’s experimental hiring tool was used to screen and rate resumes based on historical hiring patterns. Because a majority of Amazon’s employees had been male, the system inevitably taught itself to penalise female resumes, with far greater efficiency than any human.


Still unsure if data bias is perpetuating gender bias in STEM? Check out these articles from others in the industry:

How Unconscious Bias Holds Women Back

Mitigating Gender Bias

AI Is Demonstrating Gender Bias



Chatbots, Influencer Bots, Gaming Bots, Oh my!

As it becomes increasingly difficult to get the attention of executives and consumers through email and advertisements, many companies are turning to Artificial Intelligence. But the real question remains: what are the ethics behind corporations using bots to promote their goods and services?

Chatbots have shown real ROI to many businesses and at times have proven more effective than an under-trained salesforce. An Oracle survey has shown that chatbots could save $174 billion across insurance, financial services, sales, and customer service. With vast abilities, including dialogue that approaches actual human conversation, it’s no wonder companies are so eager to turn to this advancement in AI. Should a company be ethically responsible for disclosing to consumers that they are chatting with a bot? Is it on the consumer to “beware” of bots? Do we as a society care? Should we?

Influencer bots, like Lil Miquela and Shudu, use CGI and are run by companies specializing in Artificial Intelligence to blur the lines between reality and robot. These two influencer bots have amassed almost 2 million followers, and Lil Miquela is touted as “Musician, change-seeker, and robot with the drip💧💖” in her Instagram bio. These bots are influencing the purchases we make, the culture we enjoy and now even the music industry. Lil Miquela’s profile is at least honest that she is a bot, but Shudu’s bio vaguely states: “The World’s First Digital Supermodel.”


In a world where social media is influencing so many consumer purchasing decisions, especially in younger demographics, is it even ethical to create an influencer bot?


Gaming bots, when used as intended, can often enhance the gaming experience. For example, Fortnite has upped its bots’ abilities in order to maintain an enjoyable experience for newcomers in its vast online community. However, most gamers know that bots have been abused ever since the idea of using an algorithm to replace a human online became a reality. Many have used bot technology to their advantage, and there are companies out there doing something about it. Niantic, creator of one of the most popular cell phone apps ever, Pokemon Go, has been extremely strict on bots and gamers trying to cheat the system, going so far as to take legal action.

With bots becoming one of the most common forms of Artificial Intelligence we interact with daily, we need to start questioning the ethical implications. How are AI-empowered bots improving our daily lives? What are the implications of influencer bots like Lil Miquela and Shudu for the future of our society? How aware are you that Artificial Intelligence is impacting your purchasing decisions?



Solved: Dismantling the Silo

We are living in a world that is obsessed with connectivity yet so many large corporations are still working in silos. How can a corporation become connected and move towards Industry 4.0 if their data and systems are trapped in silos? This is a problem many corporations, big and small, face today. In order to understand and address the problem, first we must understand the silo. There is a solution, there is a path forward and Artificial Intelligence can lead the way.

Silos are created when information, goals, tools, priorities and processes are not shared with other departments, a pitfall reinforced by corporate culture. Efforts to achieve the lowest overall cost and best functionality for each department, or in the case of some manufacturers the same department across different plants, have created disparate data. Many systems were never designed to work well together, or only function within a single stack, but there is technology out there dedicated to dismantling the silo.

Executives often fall into the trap of thinking the only way to advance into Industry 4.0 is to replace disparate systems and increase capital expenditures. What many execs, and people across all corporate functions, do not yet understand is that Artificial Intelligence (AI) can be the systems connector. The premise of AI is built on linking information previously thought to be entirely unconnected. It is unnecessary for a corporation to increase CAPEX when working with the right AI provider.

In order to get the most accurate picture of underlying issues within the corporation, AI must be able to connect to a vast amount of data from many different silos. This doesn’t mean that you have to dismantle the silos, you just need the right AI connector…. Let Artificial Intelligence be your Silo Dismantling Agent.

Data Bias on the Daily: What’s in your Amazon Cart?

With multiple billions of packages shipped per year, and even more billions of items purchased, it’s no wonder that Amazon is a household name. In the time it takes you to read this article, an estimated 100,000 items will be purchased on Amazon. But how many of those purchases will be impacted by data bias? Likely every single one.

Have you ever stopped to think about the algorithms behind those convenient home page suggestions, “people also purchased”, and “related to items you viewed”? Maybe in passing, but likely not in detail. In this post we will explore just how much one quick search on Amazon can be littered with bias.

Bias can end up in your shopping cart at many different stages along the way. It’s safe to assume that Amazon wants to maximize its bottom line, even though it claims it wants to provide the consumer with a great deal. When the engineers at Amazon corporate set to writing shopping algorithms, they have to add parameters that are computable and achieve a certain goal. With this in mind, they are likely adding the bias of increasing Amazon’s bottom line (i.e., the profitability of your purchase). Let’s apply this to an example search.

Perhaps you’ve been searching for storage solutions for your messy closet. The moment your results appear for “closet storage”, you are encountering data bias. Exhibit A, the screenshot below:

You’re looking at the first instance of bias, because the majority of the page is showing you “sponsored” results. “Funding bias” is written into many algorithms all over the internet and is the reason sponsored items always show up first. These brands, JYYG, TYPE A, OUMYJIA and JEROAL, are paying to bias your search. This is one of the most common search biases encountered, if not the most common, but we see so much “sponsored” content that, at this point, we may not even register its effects.
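Funding bias is mechanically trivial. This sketch (invented fields and an assumed boost value, not Amazon’s actual ranking logic) only has to add a paid boost to each listing’s organic relevance before sorting:

```python
# Hypothetical results ranker with "funding bias": a flat paid boost is added
# to organic relevance, so a sponsored item outranks a more relevant one.
SPONSOR_BOOST = 10.0  # assumed value for illustration

def rank(listings):
    return sorted(
        listings,
        key=lambda l: l["relevance"] + (SPONSOR_BOOST if l["sponsored"] else 0.0),
        reverse=True,
    )

results = rank([
    {"brand": "OrganicBest", "relevance": 9.1, "sponsored": False},
    {"brand": "PaidBrand", "relevance": 4.2, "sponsored": True},
])
```

One added term in the sort key is all it takes for a less relevant but paying brand to claim the top slot, which is why the “Sponsored” label matters so much to the consumer.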

Regulations have been placed on funding bias; note the “Sponsored” tag or, on social media platforms, influencers posting with “#ad.” These indicators are set up to clue the consumer in that their purchasing decisions are being biased by funded posts, a relatively new development of the last several years. Corporations spent years biasing search results towards paid posts without having to let the consumer know, so these disclosures are a huge stride against hidden data bias. There’s nothing wrong with “pay to play!” As a consumer, this can be a helpful way to find one of your new favorite brands, products or technology solutions, but it is now considered important for the consumer to know when funding is biasing their purchase.

Another example of bias in this search is the items marked with the “best seller” tag. Under “Help & Customer Service,” Amazon.com says: “The Amazon Best Sellers calculation is based on Amazon.com sales and is updated hourly to reflect recent and historical sales of every item sold on Amazon.com.” There is a litany of data bias that could be going on here. Was the “Simple Houseware” organizer at one point a “sponsored” post, which led to it becoming a “best seller”? A quick Google search shows that Simple Houseware only sells on Amazon, which could be another contributing factor to its “best seller” status. We simply do not know what parameters the calculation sets for “sales,” as that is quite a broad term.

To play devil’s advocate, the “sponsored” results could be amazing products that you will purchase, love and even re-order in the future. The data bias written into the “Best Sellers” calculation could absolutely be favoring great products you will know and love. However, it’s critical that we don’t turn a blind eye to the bias, and that we continue to scroll past the “sponsored” and “best sellers” entries to get a full picture before purchasing.

The objective of this article isn’t to get you to stop buying from Amazon, but rather to get you to consider the data bias right in front of you. If there is this much data bias in one simple Amazon search, how many more things in your life are impacted by data bias? How much is your business and its bottom line impacted by data bias? Algorithms, AI and their inherent bias are a part of daily life. How will you use them to create change for good?






Data Bias on the Daily

Our upcoming September blog series is Data Bias on the Daily: How Data Bias in Artificial Intelligence is Impacting You. This series will focus on the forms of data bias we encounter daily, with the goal of educating consumers on how bias can enter algorithms, knowingly and unknowingly. First, it is important to define bias:

Bias: The systematic favoritism that is present in the data collection process, resulting in lopsided, misleading results.

How to Identify Statistical Bias

Bias in Artificial Intelligence and algorithms is sometimes intentional; it can be caused by a number of things including, but not limited to, sample selection and data collection; and it can be avoided, if that is the desired outcome. (Many corporations deliberately write bias into their algorithms in order to increase their bottom line.)
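Sample selection, one of the causes named above, is easy to demonstrate with a toy example (the incomes below are invented): survey only one affluent block and the estimated average bears little resemblance to the population’s.

```python
import statistics

# Toy illustration of sample-selection bias: estimating average household
# income, but only surveying the affluent end of town.
population = [30, 35, 40, 45, 50, 120, 130, 140]       # incomes in $k (invented)
biased_sample = [x for x in population if x >= 100]    # only the affluent block

true_mean = statistics.mean(population)    # what honest sampling would approach
biased_mean = statistics.mean(biased_sample)
```

The data collection step, not any downstream math, produced the lopsided result, which is exactly the definition of bias quoted above.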

Maybe you’re a woman searching for a STEM job, but the data is biased against your application? Perhaps, your Amazon search is biased towards products and brands that will only increase Amazon’s bottom line? Data bias is even entering our judicial system, is it possible that algorithmic “advancement” is simply confirming long standing racial bias?


Stay tuned to learn more about how Data Bias impacts our daily lives by checking in with us on Mondays in September (9/16, 9/23 and 9/30).


