If you’ve been reading this blog for a while, you’ll notice that we often end up talking about explainable AI. That’s because explainable AI is something that our customers care a lot about.
If you’re an investor relying on Bitvore, for example, you potentially have a lot of time and money invested in how our algorithm classifies information. Why is one article tagged the way it is? Why does our algorithm rank sentiment the way it does?
So, here’s a secret: not every job needs a hammer, and not every problem is a nail. Not every text and content analysis problem needs AI. Building AI models comes with operational and computational costs, and it’s important to know where the benefits overtake those costs on the return-on-investment curve. Sometimes small, well-defined problems are better served by small(ish), deterministic solutions. It’s important to pick the right tool for the task.
What’s the Difference Between AI and Brute Force?
A Rubik’s cube is an excellent analogy to start with. Solving a Rubik’s cube makes you look smart, but you don’t necessarily have to be an intelligent person to solve a Rubik’s cube. Instead, you memorize a series of what amount to IF/THEN statements—to solve a row that looks like X, move a column to position Y, and so on.
These are called heuristics. People use heuristics when there are far too many choices, options, or moves to memorize them all. If you solve a Rubik’s cube enough times, you learn which heuristics pay off (you solve the puzzle, or solve it faster) and which ones lead to dead ends or take longer. Most computer programs don’t need heuristics for small problems. A Rubik’s cube, like tic-tac-toe, has a vast yet finite number of legal moves from one configuration to the next. Even more importantly, if the full set of configurations is small enough to fit into a computer’s memory, the minimal number of moves needed to solve the cube can be computed outright.
When presented with a Rubik’s cube, the computer looks up its configuration in a database, then looks up the steps needed to solve it. With enough computing resources, you can create a software application that solves a Rubik’s cube in the blink of an eye simply by iterating through all the possible moves. A computer solving the cube this way may seem intelligent, but there’s nothing artificially intelligent about the approach. For problems whose moves can’t be enumerated, or that lack a well-defined set of configurations, brute force doesn’t cut it.
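The lookup-table idea above can be sketched in a few lines of Python. This toy uses a 2x2 sliding puzzle (three tiles and a blank) rather than a Rubik’s cube, since its entire state space fits trivially in memory; the puzzle layout and move table here are illustrative assumptions, not part of the original example. A breadth-first search outward from the solved state records the minimal number of moves for every reachable configuration, after which "solving" any position is just a dictionary lookup:

```python
from collections import deque

# Board positions for a 2x2 sliding puzzle:
#   0 1
#   2 3
# A state is a tuple of tiles read left-to-right, top-to-bottom; 0 is the blank.
SOLVED = (1, 2, 3, 0)

# For each position of the blank, the positions it can swap with.
NEIGHBORS = {0: (1, 2), 1: (0, 3), 2: (0, 3), 3: (1, 2)}

def build_distance_table():
    """Breadth-first search from the solved state: for every reachable
    configuration, record the minimal number of moves needed to solve it."""
    dist = {SOLVED: 0}
    queue = deque([SOLVED])
    while queue:
        state = queue.popleft()
        blank = state.index(0)
        for pos in NEIGHBORS[blank]:
            nxt = list(state)
            nxt[blank], nxt[pos] = nxt[pos], nxt[blank]
            nxt = tuple(nxt)
            if nxt not in dist:
                dist[nxt] = dist[state] + 1
                queue.append(nxt)
    return dist

table = build_distance_table()
print(len(table))           # 12 reachable configurations for this tiny puzzle
print(table[(1, 2, 0, 3)])  # this position is one move from solved
```

Precomputing the table is the brute-force step; answering any query afterward costs almost nothing. The same pattern extends to tic-tac-toe, and in principle to a Rubik’s cube, given enough memory and compute.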
An algorithm would work entirely differently. OpenAI, for example, tackled this problem by creating a 3D model of a Rubik’s cube and then training an AI to attempt to solve it in simulation, over and over. Once it had learned to solve the cube at all, it kept refining its solutions until it could solve a Rubik’s cube at relatively high speed.
Why Use Brute Force When Algorithms are Available?
Algorithms are great when you need an application that can solve complex problems quickly while using relatively few resources. The disadvantage is that creating an algorithm takes a lot of time and resources up front. Brute force, meanwhile, might take a relatively large amount of time and resources to solve a problem, but building the brute force solution can be nearly effortless by comparison.
Returning to our Rubik’s cube example, it took OpenAI from May to July of 2017 to create an algorithm that could successfully solve a 3D Rubik’s cube in simulation. (Granted, they made things harder for themselves by solving it with a simulated mechanical limb.) When they moved from a 3D Rubik’s cube to a real-life cube and robotic arm, their algorithm could still only solve the cube around 60% of the time.
Even for far less complicated tasks, training an AI takes a long time and a lot of resources. Research shows that the compute needed to train a best-in-class AI model has doubled roughly every three and a half months. Not only do these models need more servers and compute time to train, but they also consume more energy; there’s a real chance that over-reliance on AI could actually exacerbate global warming, as training a single high-end machine learning model can generate over 300 tons of carbon. That’s over 16 times as much carbon as the average American emits in a year.
By contrast, brute-forcing a problem might not be as elegant, but it’s far less resource-intensive to build. You may not solve the problem with perfect efficiency via brute force, but that disadvantage often doesn’t matter in practice.
What Brute Force Has to Do with Explicability
Didn’t we start by talking about explicability?
One last advantage of brute force is that it’s easier to explain. We often find that as soon as you mention AI, people either start reflexively distrusting an application’s output or, conversely, start trusting it too much. And if you’re talking about a black-box algorithm, there may be no way at all to convey why it makes the choices it does.
By using brute force judiciously, we’re able to arrive at a happy medium. We don’t use algorithms to reinvent the wheel, which keeps our resource use down. By using algorithms only where necessary, we also reduce the number of mechanisms in the application that we need to audit. In other words, we have enough bandwidth to make sure that our algorithms are getting things right.