Why In-House Artificial Intelligence Projects Fail

Companies all over the world, from giant corporations to start-ups, are keen to cash in on the vast value of Artificial Intelligence. Intent on capturing as much of that value as possible, many spend millions on in-house AI solution development rather than outsourcing to address their most critical business challenges. In a world where there’s a feasible DIY solution for almost everything, Artificial Intelligence is most often the outlier: the complexity and cost of AI solution development demand experience to reduce financial risk and ensure speed to benefit, especially in highly competitive sectors.

Think of your business as a human body, and your business challenges as illnesses of varying severity. Some challenges, like some illnesses, are treatable with over-the-counter medications, while others require a visit to the doctor, prescriptions, long-term treatment or intensive care. If you had an illness that required extensive medical attention, you wouldn’t hesitate to seek out the best medical team for treatment. Should you treat your business any differently?

In deciding whether to go in-house or outsource, it is important to consider how a strategic AI implementation will impact your business. If it’s done right, it will reduce costs, increase revenue and enhance competitive advantage. If it’s done wrong, to what extent is your business at risk?

Financial opportunity from AI abounds across sectors (see the figure below), and there is both margin opportunity and market share on the table for the businesses that harness AI to tackle their strategic challenges first. In other words, getting your AI implementation done fast and right matters, and you must weigh your decision to go in-house or outsource accordingly.

[Figure: “Artificial intelligence (AI) has the potential to create value across sectors.” A scatter chart plotting AI impact ($ billion, up to roughly 700) against the share of AI impact in total impact derived from analytics (%, roughly 20 to 60) for sectors including travel, transport and logistics, retail, consumer packaged goods, automotive and assembly, banking, healthcare systems and services, public and social sectors, advanced electronics/semiconductors, basic materials, insurance, media and entertainment, high tech, oil and gas, telecommunications, pharmaceuticals and medical products, chemicals, agriculture, and aerospace and defense. Source: McKinsey Global Institute analysis.]

With competitive advantage on the line and the clock ticking, corporations place their bets on whether to navigate the AI journey alone or with partners. Instinctively, they are hesitant to collaborate, tantalized by the prospect of minimizing solution costs, while building their own innovative capacity and owning the resulting IP outright.

Logically, they then look to market outcomes and learn why in-house AI solution development efforts fail more often than not, even in the Fortune 100 and at tech companies.

Without experience developing and delivering AI solutions, many corporations fail to understand the costs, resources, processes, stakeholders, and even the objectives involved from the outset. As a result, in-house projects often lack a clear and viable design and delivery strategy, roadmap and KPIs, dramatically reducing the speed to benefit if not inhibiting benefit delivery altogether. Program costs and timelines become a driving force for failure. With little transparency into which aspects of the solution will drive the most value, there is no clear way to prioritize spending. Costs either spiral out of control, or corners are cut successively in design, development, testing and delivery, resulting in piecemeal solutions that impair data quality, promote bias and diminish solution accuracy, functionality, utility and outcomes.

More often than not, successful AI adopters partner with proven providers on a combination of off-the-shelf solution tailoring, ground-up solution design and solution delivery. How do they decide to partner rather than go it alone?

First, they recognize the competitive imperative for AI—the opportunity cost of following rather than leading in their market—along with the direct costs of failure, and their lack of in-house knowledge and experience with AI solution design and delivery. Second, they find a provider with a successful track record in similar or analogous environments. Third, they develop trust with that provider by laying the groundwork for a happy marriage in contracting. Then they see it through. From the leadership level down, they commit to the partnership and collaborate end-to-end to ensure project success.

Strategic AI implementations are broad in scope, capturing data and impacting activities across corporate ecosystems. This complexity is readily apparent in industry, where AI not only provides data-driven direction for decision-making at every step of the value chain and in every organizational department, but directly informs control and automation strategies in production, testing, packaging, distribution and even purchasing.

In recognition of the immense value and complexity of AI in industry, and the competitive need for speed in adoption, the World Economic Forum, in collaboration with McKinsey & Company, has published a toolkit of “practical recommendations” for industrials to accelerate their AI journey at scale. Appearing in The Next Economic Growth Engine: Scaling Fourth Industrial Revolution Technologies in Production, this toolkit advocates the adoption of proven AI solutions and related technologies through a partnership and acquisition approach rather than in-house development.

[Figure 6: Industry toolkit for accelerating adoption of technology. A “value delivery engine” of 39 high-impact digital applications ready for deployment, grouped under intelligence (predictive maintenance; machine learning-supported root-cause problem-solving for quality claims), connectivity (augmented reality-guided assembly operations; real-time IoT-based performance management) and flexible automation (robots to automate challenging tasks; real-time product release), paired with a “scale-up engine”: mobilize the organization; strategize (set the vision and the value to capture); innovate (spark innovation by demonstrating the value at stake); and scale up (capture full value). Source: World Economic Forum, in collaboration with McKinsey & Company.]

With more and more data available for exploitation across industries, the opportunities for its monetization through AI are greater and increasingly complex. So too is the risk of getting your implementation wrong.

Can your business afford the DIY approach?


Challenge us to solve your unsolvable business quandaries.



Enterprise Artificial Intelligence – Academic Theory or Ready for Primetime?

Solvetheunsolvable has already explored various aspects of consumer AI and products purporting to leverage AI technologies, but is AI for Enterprise ready for primetime?

Investors aren’t the only ones betting big on Artificial Intelligence; it turns out Higher Education is also investing heavily in the space. Even with heavy investment in research and development, enterprise-level AI seems to be off to a rocky start. Earlier this month, Northeastern University allocated $50 million to an Institute for Experiential Artificial Intelligence, dedicated to uniting leading experts to solve the world’s unsolvable problems.

“This new institute, the first of its kind, will focus on enabling artificial intelligence and humans to collaborate interactively around solving problems in health, security, and sustainability. We believe that the true promise of AI lies not in its ability to replace humans, but to optimize what humans do best.”


Northeastern President Joseph E. Aoun

This isn’t Northeastern’s first step into the world of Artificial Intelligence and Automation. They already have an Institute for Experiential Robotics that brings together engineers, sociologists and other experts, including economists, to design and build robots with the ability to learn and execute human behaviors. Northeastern isn’t just building institutes for experts to conduct research; they are making it a priority to prepare their students for success in the age of artificial intelligence. They have an entire curriculum dedicated to what they call humanics, which is a key part of their strategic plan, Northeastern 2025.

Northeastern 2025 Promo Video

“We are building on substantial strengths across all colleges in the university,” said Carla Brodley, dean of the Khoury College of Computer Sciences. “Experiential AI is highly relevant to our mission.”

Though Northeastern is one example of a university betting heavily on AI, it is not alone in its quest to equip students with a proper education for the AI-enabled future. In fact, government agencies are getting involved in funding AI in education. The UK has pledged to invest £400 million in math, digital and technical education through the government’s AI sector deal to protect Britain’s technology sector amid Brexit, plus an additional £13 million for postgraduate education on AI. In the US, just a few days ago, the National Science Foundation announced a joint federal program to fund artificial intelligence research at colleges, universities, and nonprofit or nonacademic organizations dedicated to educational or research activities.

The National Science Foundation is awarding $120 million to fund planning grants and support up to six institutes, but there’s a catch. Each institute must have a principal focus on at least one of six themes:

  • Trustworthy AI
  • Foundations of Machine Learning
  • AI-Driven Innovation in Agriculture and the Food System
  • AI-Augmented Learning
  • AI for Accelerating Molecular Synthesis and Manufacturing
  • AI for Discovery in Physics

As universities and governments bet big on the future of AI and education, it underscores the global importance of AI in the years ahead, but it also calls into question whether AI solutions ready to take business to the next level exist today. Utilizing AI and automation will be imperative for corporations to remain competitive and for the advancement of business, but when the floodgates will open, and who will open them, remains a mystery.

Will you leap into the future and embrace AI now? How do you see the futuristic vision of enterprise AI transforming your business? Challenge Solvetheunsolvable with your business conundrum or leave your thoughts in the comments below and let’s explore what AI can do for you.




Data Bias on the Daily: Criminal Sentencing - Not All Algorithms Are Created Equal

Imagine this… You’ve been convicted of a non-violent crime, say petty theft. Your legal team decides the best course of action is to take a plea deal. On the day of your sentencing, the judge rejects your plea deal and doubles your sentence. Why? An algorithm says you are at high risk of committing a violent crime in the future.

You may be reading this thinking, that can’t possibly be real. But it is an all too real scenario because of the COMPAS algorithm.


COMPAS, an acronym for Correctional Offender Management Profiling for Alternative Sanctions, is a case management and decision support tool used by U.S. courts to assess the likelihood of a defendant becoming a repeat offender.


The problem with COMPAS, as a ProPublica report found: “Only 20 percent of the people predicted to commit violent crimes actually went on to do so.” ProPublica also concluded that the algorithm was twice as likely to falsely flag black defendants as future criminals as it was to falsely flag white defendants. And therein lies the problem: the algorithm was trained on inherently biased data, the product of years of human bias in the courtroom.
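The disparity ProPublica measured is a difference in false positive rates between groups: among people who did not reoffend, how often was each group labeled high risk? A minimal sketch of that calculation, using entirely hypothetical records (not ProPublica’s data or methodology):

```python
# Illustrative only: the records, group names and numbers below are
# hypothetical, invented to show how a false-positive-rate gap is computed.

def false_positive_rate(records):
    """Share of people who did NOT reoffend but were labeled high risk."""
    non_reoffenders = [r for r in records if not r["reoffended"]]
    falsely_flagged = [r for r in non_reoffenders if r["high_risk"]]
    return len(falsely_flagged) / len(non_reoffenders)

# Hypothetical outcomes: every person below did not reoffend,
# yet some were flagged as high risk anyway.
group_a = [
    {"high_risk": True,  "reoffended": False},
    {"high_risk": True,  "reoffended": False},
    {"high_risk": False, "reoffended": False},
    {"high_risk": False, "reoffended": False},
]
group_b = [
    {"high_risk": True,  "reoffended": False},
    {"high_risk": False, "reoffended": False},
    {"high_risk": False, "reoffended": False},
    {"high_risk": False, "reoffended": False},
]

fpr_a = false_positive_rate(group_a)  # 0.5
fpr_b = false_positive_rate(group_b)  # 0.25
print(f"Group A is falsely flagged at {fpr_a / fpr_b:.0f}x the rate of Group B")
```

Note that a model can have similar overall accuracy for both groups while still producing a gap like this, which is why auditing per-group error rates, not just headline accuracy, matters.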

COMPAS is not only racially biased; it also shows bias by age and gender. An independent study by researchers at Cornell University and Microsoft found that because most of the training data for COMPAS was based on male offenders, the model does not predict recidivism for women as well as it could. They even built a separate COMPAS model aimed specifically at recidivism risk prediction for women.

But why would COMPAS separate the data solely by gender when it has also been shown to have racial bias? Why are judicial systems still turning to private, for-profit companies whose algorithms are known to encode racial, age and gender bias?

Turning to these types of algorithms has long-standing implications for human life and our judicial system. People receiving their sentences in the early days of algorithmic adoption should not be test samples or guinea pigs for faulty, biased algorithms. As Artificial Intelligence becomes more mainstream, understanding the data sets and training methodologies is key to understanding the results. How is data bias affecting your daily life?


For more information on COMPAS and ProPublica’s report, please click here.

Up next: Data Privacy Part Four
