
“Algorithms are not biased; data is biased” – MWC 2019

Last week at MWC in Barcelona, the session panels focused on the hottest topics in mobile, such as 5G, artificial intelligence and blockchain. The more controversial panels discussed the bias found in data, and how that data goes on to inform algorithms, which can produce unethical conclusions. Speakers and panelists pointed to racial bias in prison sentencing, gender bias in mortgage lending and financial services, age-related bias in job recruitment, and pre-existing-condition bias in health care coverage.

Danny Guillory, the head of global diversity and inclusion at Autodesk, told Fortune Magazine that when he ran a search on a professional social network for engineers, the results were primarily Caucasian men. Guillory pointed out that when you engage or ask for more results, the AI delivers candidates with similar attributes – more Caucasian men. Another example of AI bias is Microsoft’s notorious Tay AI: released on Twitter back in March of 2016, it became misogynistic and racist on social media within a staggering 24 hours.

AI may seem like an auxiliary technology to how we live our daily lives today; however, it will soon be the primary driver across the tech industry. PricewaterhouseCoopers estimates artificial intelligence will add $15.7 trillion in value to the world economy by 2030. To put this into perspective, the top five technology companies today – Apple, Amazon, Microsoft, Google and Facebook – have a combined value of about $4 trillion. The annual global technology spend is similar, at about $3 trillion. Over the next decade, AI will drive a market roughly five times the size of tech’s current global spend.

Although this growth is exciting on many levels, the panelists at MWC 2019 voiced concerns about the handling of inherent biases that come from data, as discrimination by age, race, gender, education or other factors within audience segmentation is clearly counterproductive to the advancement of society that AI promises.


My newsletter subscribers get this information first. Sign up here.


AI algorithms are responsible for making consequential decisions and are trained to find lookalikes or other markers in order to learn patterns. Some argue that bias occurs when the computer system reflects the humans who designed it. Proven downsides to artificial intelligence have surfaced in recent years – for instance, fake news allegedly influencing the 2016 Presidential election. These accusations are proof that we are running out of time to address these concerns, especially as we near the precipice of a much larger, multi-trillion-dollar AI market.
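The "lookalike" dynamic described above can be illustrated with a toy sketch (all data and the scoring rule here are invented for illustration – this is not any real system's method): a model that ranks candidates by resemblance to past outcomes simply reproduces whatever skew exists in its training data.

```python
# Hypothetical sketch: a "lookalike" ranker trained on skewed historical
# hiring data. Attributes and data are invented for illustration only.
historical_hires = [
    {"gender": "M", "school": "A"}, {"gender": "M", "school": "A"},
    {"gender": "M", "school": "B"}, {"gender": "M", "school": "A"},
    {"gender": "F", "school": "B"},
]

candidates = [
    {"gender": "M", "school": "A"}, {"gender": "F", "school": "A"},
    {"gender": "F", "school": "B"}, {"gender": "M", "school": "B"},
]

def similarity(candidate, hires):
    # Score = number of (attribute, value) matches against past hires.
    return sum(
        sum(1 for h in hires if h[k] == v) for k, v in candidate.items()
    )

# Rank candidates by resemblance to past hires. Because 4 of the 5
# historical hires are men, male candidates dominate the top of the list:
# the model has "learned" the skew, not merit.
ranked = sorted(
    candidates, key=lambda c: similarity(c, historical_hires), reverse=True
)
print([(c["gender"], c["school"]) for c in ranked])
```

Nothing in the scoring rule mentions gender explicitly; the skew emerges purely from the distribution of the training data, which is exactly why "the algorithm isn't biased, the data is" rings hollow to affected candidates.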

Beyond calling for more diversity within the field of artificial intelligence, many of the panelists asked who should regulate the infractions of algorithmic bias – governments or markets? Many felt there should be an international community to establish guidelines for AI. But even then, will the lower classes be invited, and what level of inclusivity will an international community realistically provide? The world’s most vulnerable and marginalized people are unlikely to be represented. In this way, AI could widen the gap between lower and upper classes along socioeconomic lines – if it hasn’t done so already, as AI is currently in use by the largest financial funds in capital markets.

The unanimous solution among the panelists and speakers was to broaden the conversation and not limit artificial intelligence jobs only to technical experts. “Requiring someone to know Python in order to work with AI is not democratizing AI,” one panelist pointed out. Along these lines, a more human-centric approach is necessary.

I consult for financial firms. Inquire here.


Sign Up to Receive Bi-monthly Insider Analysis:

I’m an industry insider who writes free in-depth analysis on public tech companies. This year, I predicted Facebook’s Q2 crash, Roku’s meteoric rise, Oracle’s slow decline, and more. Be industry-specific. Know more than the broader markets. Sign Up Now. I look forward to staying connected.



Published in: Artificial Intelligence, Financial Markets, Mobile, Tech Stocks

2 Comments

  1. Joe Haluska

    Wow! A lot to think about. In no particular order, some thoughts. Concerning the algorithms, “you get out of it what you put into it”, so if garbage in, then garbage out, and vice versa. So the construction is of paramount importance. Is AI simply subject to sampling error, hobbled by a Bayes Theorem type of restriction that makes decisions dependent on the “truths” to which it has access? And regarding regulation, IMO neither government nor markets, but rather a “United Nations” of academics, including technologists, ethicists, sociologists, psychologists……but does that lead to technological singularity? Does the AI eventually assume the responsibility of the managers?
    And then, is the bias itself in need of “self analysis”, attempting to correct its own errors? Does AI become the manager of humanity, for our own good, and hopefully not our detriment?
    Thank you Beth, for your as-usual leading edge report on the future…
    Regards, Joe

    • beth.technology

      These are great points, Joe! Garbage in, garbage out will def. apply to AI. I think the echo chambers with data are amplified with algorithms, and rather than AI competing with intelligent humans who have levels of complexity, it is simply spitting out patterns because it does not form intelligent thoughts based on emotional intelligence. I think AI will always be restricted to some level and will require humans to check the work that is being done.
