Running Up Against the Limits of Common Sense in Artificial Intelligence

One recurring problem in artificial intelligence is running out of training data. In some cases, there simply isn't enough data to train an AI model fully. In manufacturing, for example, it's hard to train an AI to monitor a custom-built machine, especially one that has just been turned on, because the machine represents a training set of one. The AI has no good way to detect anomalies because no data exists showing what an anomaly would look like.

One solution under consideration is to ask the AI to generate its own training examples. For instance, you could ask an AI to simulate a year's worth of operational logs from this hypothetical custom machine, and then train a different AI to detect anomalies in those logs. The problem is that, lacking what we're calling "common sense," the AI generating the training data might create scenarios that can't logically exist. For example, the training data could state that the machine is both broken and functioning normally at the same time. It would be a fool's errand to try to train an AI on this data.
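One partial defense is a plausibility filter that rejects synthetic records before they reach the downstream model. Below is a minimal Python sketch of that idea; the field names (`states`, `output_rate`) and the list of contradictory state pairs are hypothetical, chosen only to illustrate the "broken and functioning at the same time" example above.

```python
# Sketch of a plausibility filter for synthetic training logs.
# Field names ("states", "output_rate") are hypothetical examples.

# Pairs of machine states that cannot co-occur in one log record.
CONTRADICTORY_PAIRS = {
    ("broken", "functioning"),
    ("offline", "producing"),
}

def is_plausible(record: dict) -> bool:
    """Reject records whose reported states contradict each other."""
    states = set(record.get("states", []))
    for a, b in CONTRADICTORY_PAIRS:
        if a in states and b in states:
            return False
    return True

synthetic_logs = [
    {"states": ["functioning"], "output_rate": 120},
    {"states": ["broken", "functioning"], "output_rate": 80},  # contradiction
]

# Keep only the records that pass the filter.
clean = [r for r in synthetic_logs if is_plausible(r)]
print(len(clean))  # 1
```

A rule list like this only catches contradictions someone thought to write down, which is precisely the gap that genuine common sense would fill.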


What's the State of the Art in Common Sense AI?

You may remember a blog of ours from late 2020 about the Chinese Room problem in artificial intelligence. In it, we discussed how, because an AI doesn't understand its own output, it may produce output that is simply nonsensical. Since then, we've decided the problem is worth a closer look. Who's doing the most interesting work when it comes to adding common sense to AI?

So far, the industry is still taking baby steps.

A recent test of off-the-shelf text generation models shows that artificial intelligence still doesn't have a great understanding of its own output. Although these models can imitate different writing styles learned from their training data, they don't understand what the words mean. Asked to create a sentence combining the words "dog," "frisbee," "throw," and "catch," one model came up with this: "Two dogs are throwing frisbees to each other."

It's a grammatically correct sentence (and an adorable mental image), but it's just not a thing that would happen in real life.
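To illustrate why shallow automatic checks can't catch this kind of error, here's a small Python sketch of a naive concept-coverage check (our own illustration, not the metric used in the study): the generated sentence matches most of the required concepts, yet nothing in the check notices that the scene it describes is impossible.

```python
# Naive concept-coverage check: does the sentence mention each concept?
# This is an illustration, not the metric from the cited study.

concepts = ["dog", "frisbee", "throw", "catch"]
sentence = "Two dogs are throwing frisbees to each other."

def covers(concept: str, text: str) -> bool:
    # Crude prefix match so "dogs" counts for "dog", "throwing" for "throw".
    words = text.lower().replace(".", "").split()
    return any(word.startswith(concept) for word in words)

covered = [c for c in concepts if covers(c, sentence)]
print(covered)  # ['dog', 'frisbee', 'throw'] -- "catch" never appears
```

Even if the model had worked "catch" in, a coverage check like this would happily pass a physically impossible sentence; judging plausibility requires knowing something about dogs, not just about words.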

Overall, the researchers in this study found that their best-performing model only achieved an accuracy rate (according to common sense metrics) of 31.6%. That's a reasonably hopeful figure—after all, the accuracy rate for common sense in AI was a lot closer to zero percent not too long ago—but still only about half of the accuracy rate displayed by the average person.

Life experience may be the next big hurdle for common sense in AI. A human being only has to see a dog once to understand that it would have a hard time throwing a frisbee. A text generation AI never "sees" a dog, however; it can only read about them. There are many texts about dogs playing with frisbees, but very few explicitly spell out the fact that dogs do not have opposable thumbs and therefore cannot handle a frisbee very well.


Developing Around the Problem of Common Sense in AI

Right now, common sense is a limiting factor in the development of artificial intelligence. The solution, for now, is to keep humans in the loop. If an AI generates mission-critical output, you need a live, attentive human to judge whether that output makes sense. One day we may not need that human, but we're not quite there yet.

Still, just because we're not there yet doesn't mean that AI with common sense isn't around the corner. An AI whose output passes common-sense checks roughly 30% of the time is frighteningly smart—especially when you consider that humans only score around 60% on the same tests. Given the rapid pace of AI development, it's not hard to imagine a product with more common sense than the average person. In other words, we might be writing a follow-up to this article sooner than you think.
