Machine Learning: What It Is and What It Isn't
As conversations about the capabilities of artificial intelligence and related technologies pick up speed, you will increasingly encounter unfamiliar terms and concepts. The fact that notions such as “AI” and “machine learning” are frequently (and incorrectly) used interchangeably only adds to the confusion around these emerging technologies.
One term you’ve likely heard and may be wondering about is machine learning. And if the phrase makes you envision super robots that can do everything a human does – well, you wouldn’t be that far off.
Machine Learning In the Simplest Possible Terms
Though artificial intelligence (AI) refers to the full gamut of functions that allow computers to perform tasks traditionally conducted by humans, machine learning is only a subset of AI. More specifically, machine learning is an application of AI in which algorithms and statistical models are used to teach computers to perform specific functions without being explicitly programmed to do so. In other words, machine learning is the science of getting computers to learn, so that each interaction informs and refines the next, improving the machine’s ability to deliver results over time.
In machine learning, computers are provided with large volumes of data, so that instead of acting on a fixed set of rules, they can draw more specific lessons that apply to the unique situations confronting them. This improves accuracy because the computer no longer needs to rely on generalizations. The results of each new interaction then become part of the data set used going forward.
What makes machine learning different from regular computer programming is the machine’s ability to learn and adjust its responses without the ongoing need for human intervention. Without machine learning, a programmer would have to adjust the code to change or refine the outcomes a computer is capable of delivering. With machine learning, on the other hand, the computer is freed from the need for rules-based programming and instead continuously uses the data it encounters and results it achieves to improve future responses.
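To make that contrast concrete, here is a minimal sketch in Python. The spam example, the single exclamation-count feature, and the midpoint rule for choosing a cutoff are all invented for illustration; real systems use far richer features and models.

```python
# Rules-based approach: a programmer hard-codes the cutoff.
def is_spam_rule(exclamation_count):
    return exclamation_count > 3  # fixed rule; changing it means editing code

# Learned approach: derive the cutoff from labeled examples.
def learn_cutoff(examples):
    """examples: list of (exclamation_count, is_spam) pairs."""
    spam = [count for count, label in examples if label]
    ham = [count for count, label in examples if not label]
    # Place the cutoff midway between the highest ham count
    # and the lowest spam count seen in the data.
    return (max(ham) + min(spam)) / 2

data = [(0, False), (1, False), (2, False), (5, True), (7, True), (9, True)]
cutoff = learn_cutoff(data)  # 3.5 -- derived from data, not hard-coded

def is_spam_learned(exclamation_count):
    return exclamation_count > cutoff
```

New labeled examples can simply be appended to `data` and the cutoff re-learned; no one has to edit the decision logic itself, which is the difference the paragraph above describes.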
The key benefit of writing algorithms that allow computers to teach themselves is that they resolve a limitation of technology that would otherwise be insurmountable. The challenge, known as Polanyi’s Paradox after the Hungarian-British philosopher Michael Polanyi, stems from the fact that without machines that can teach themselves, computers are limited to doing what they are told; yet, as Polanyi famously observed, “We can know more than we can tell.” Humans cannot make explicit all of the knowledge they possess implicitly, so if the functionality of machines relies on explicit directions, computers will always be limited in their capabilities.
What Machine Learning Is and Isn’t
Probably the best-known example of machine learning today is the self-driving car. Self-driving cars don’t need to have previously driven a route to navigate it successfully. Instead, they draw on data such as maps, along with an understanding of key signals such as traffic lights, speed limit signs and other cues. Trained in this way, self-driving cars can perform driving functions and adapt and adjust to new conditions based on an accumulation of experience.
But the potential of machines to learn to adapt to new data and provide different responses does not mean the capabilities are limitless. While computers’ ability to process and respond to new situations will undoubtedly continue to evolve and improve, machine learning is currently better suited to certain types of tasks and scenarios than others.
Currently, machine learning is most suitable for tasks that have well-defined inputs and outputs, with clearly definable goals and metrics. Data classification, such as the automatic tagging of contract metadata, is one example of a task where machine learning can be effective.
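As a toy illustration of that kind of classification task, the sketch below tags short contract snippets by comparing their words against profiles built from labeled examples. The labels, phrases, and simple word-overlap scoring are invented for brevity; production contract-tagging systems use far more sophisticated features and models.

```python
from collections import Counter

# Hypothetical labeled examples: (clause text, metadata tag).
TRAINING = [
    ("this agreement shall terminate upon thirty days notice", "termination"),
    ("either party may terminate this agreement for cause", "termination"),
    ("payment is due within thirty days of invoice", "payment"),
    ("fees shall be paid quarterly in arrears", "payment"),
]

def bag_of_words(text):
    return Counter(text.lower().split())

def train(examples):
    """Build one word-count profile per label from labeled examples."""
    profiles = {}
    for text, label in examples:
        profiles.setdefault(label, Counter()).update(bag_of_words(text))
    return profiles

def classify(text, profiles):
    """Pick the label whose profile shares the most words with the text."""
    words = bag_of_words(text)
    def overlap(profile):
        return sum(min(words[w], profile[w]) for w in words)
    return max(profiles, key=lambda label: overlap(profiles[label]))

profiles = train(TRAINING)
print(classify("invoices must be paid within sixty days", profiles))      # payment
print(classify("the agreement may be terminated at any time", profiles))  # termination
```

Notice that adding more labeled clauses to `TRAINING` improves the profiles without any change to the classification logic, which is exactly the well-defined-inputs-and-outputs setting the paragraph above describes.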
Machines remain relatively limited, however, in applying chains of logic, performing complex reasoning, or explaining the rationale for a specific action or decision. One example of this limitation is a smart home device that plays music for you: if you regularly select fast-tempo, upbeat music in the morning, your device may use cues such as beats per minute to make new musical recommendations to start your day. But machine learning would not allow the device to infer that you select that particular style of music to combat feeling tired, so this learning would not enable it to automatically recommend music when you feel tired at other times of the day.
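The beats-per-minute cue described above can be sketched in a few lines of Python. The song names and BPM values are invented, and a real recommender would weigh many more signals; the point is simply that the system learns a tempo preference from past picks without ever knowing why you have it.

```python
# BPM of tracks a hypothetical listener chose in the morning.
morning_history = [128, 140, 125, 135]

# A hypothetical music library: song name -> beats per minute.
library = {
    "Sunrise Sprint": 132,
    "Slow Embers": 70,
    "Daybreak Drive": 138,
    "Midnight Haze": 64,
}

# Learn the listener's typical morning tempo from the history.
avg_bpm = sum(morning_history) / len(morning_history)  # 132.0

# Rank the library by closeness to that learned tempo preference.
recommendations = sorted(library, key=lambda song: abs(library[song] - avg_bpm))
print(recommendations[:2])  # ['Sunrise Sprint', 'Daybreak Drive']
```

The model captures the correlation (morning listening, fast tempo) but has no representation of the underlying reason (fighting tiredness), so it cannot transfer the preference to a tired afternoon, which is precisely the limitation described above.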
Another thing machine learning is not is infallible. While machine learning systems generally improve as they process more data, for now humans must still check outputs and verify their results. In other words, the invasion of the robots is not something we need fear just yet.