The ‘Machine Learning’ Threat

Artificial intelligence (AI) has the potential to improve many of the tools and resources we use today. In some ways it already has: chatbots provide more complete information, and financial firms detect fraud more accurately. AI can work marvelously as long as its capabilities are bounded and the underlying tenets programmed into it are indisputably and always true. Unfortunately, we have a history of elevating theories to indisputable truths, and of the “powers that be” mining data for convenient and desirable truths.

What is Truth?

Over the years it seems that “truth” has become more subjective and relative. Something may be true relative to known information, but not in relation to all information, known and unknown. As humankind continues to grow and progress, we learn new things, and what we accepted as truth yesterday may not stand tomorrow. For example, many years ago there was nothing wrong with smoking. The truth was that smoking did no harm. Then our information changed, which made the truth change. Today, the truth is that smoking is unhealthy and dangerous. And the fact that we didn’t acknowledge this truth earlier now seems somewhat laughable.

So what truths do we accept today that may change in the future? And what about truths that are subjective? Some people will adopt certain information as truth, and others will dispute it. For instance, where did COVID come from? The truth initially was that it came from a bat. Any ideas to the contrary were summarily labeled “conspiracy theory,” silenced, and canceled. Today we still lack definitive proof that COVID came from a bat, and the original “conspiracy theory” of a lab leak remains a real possibility. The truth is still unknown, but people on all sides have their own truth.

The ‘Machine Learning’ Threat

This leads me to one of the professed benefits of AI: the ability to learn, adapt, adjust, and improve its function. That sounds great on the surface, but it is what really concerns me. What indisputable truths, tenets, and facts are being programmed into the AI? Are some of those truths incorrect, whether we realize it today or not? An AI that learns, adapts, and adjusts based on faulty assumptions and “truths” could produce incorrect and misleading information. And what if we don’t even realize this because we believe AI is smarter than we are? Or what if those who question the outputs are summarily labeled “conspiracy theorists” and canceled?
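The garbage-in, garbage-out dynamic described above can be sketched with a deliberately simplified toy “model” (this is an illustration I am adding, not a real AI system): a learner that adopts whatever labels its training data contains will faithfully reproduce a faulty assumption baked into that data.

```python
from collections import Counter

def train(examples):
    """A toy 'learner': adopt the most common label seen for each input.
    If the training labels encode a faulty 'truth', the model repeats it."""
    votes = {}
    for x, label in examples:
        votes.setdefault(x, Counter())[label] += 1
    return {x: counts.most_common(1)[0][0] for x, counts in votes.items()}

# Hypothetical training data reflecting an earlier era's accepted "truth"
# that smoking was harmless. The learner has no way to question its inputs.
biased_data = [("smoking", "harmless")] * 10 + [("exercise", "healthy")] * 10

model = train(biased_data)
print(model["smoking"])  # prints "harmless" -- the faulty premise survives
```

Real machine-learning systems are vastly more sophisticated, but the underlying point holds: a model's outputs can only be as sound as the assumptions and data it was trained on.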

Who is creating a given AI tool? How do we know that the creator(s) are relying on indisputable and unchanging truth? What hidden motives may be programmed deep within the system that may cause it to skew results down the road in a biased manner that would benefit some group or individual? What kinds of checks and balances would allow us to safely trust AI? I know, I am probably sounding like a conspiracy theorist. That sometimes happens when people have honest questions and there is no easy or convenient answer.

The inability of our society to have open and honest debates about facts (those we hope are true and those we wish weren’t) concerns me with respect to AI and its machine-learning abilities. People within the AI field publicly recognize that we need to take this slowly. I agree. To prepare for this innovation, we need to return to a society where debating ideas and seeking truth (not just desired truths) are valued and practiced regularly.
