AI is known for its complexity, and when complexity comes up, it is usually a net negative for the AI product in question. Complexity obscures. Too much of it means that data scientists can't explain why an AI product makes the decisions it does, and it means that users without data science degrees can't use AI products at all.
Google recently demoed a new method of creating AI models, one that improves explainability while allowing citizen data scientists to achieve results confidently. In addition, the team used this AI to create two new tasty-sounding recipes! Before you start scratching your head (or looking for your measuring cups), we're going to explain what this all means.
Google Uses Recipes to Demonstrate Explainability in AI
The headline version of this story says that Google used AI to create a new kind of recipe, and that isn't exactly the truth. What really happened is that Google developed an AI model that classified different kinds of recipes and then probed that classifier until it spat out a recipe that was a hybrid of two categories. While not necessarily as cool as creating an AI that can generate new recipes from scratch, this is still a big step toward creating usable and explainable AI.
Here is the documented process, as explained by Google:
- First, find some data that you wish to explore. Sara Robinson, an AI researcher at Google, noticed a lot of patterns in her baking recipes, so she decided to use a machine learning model to break these patterns down.
- Sara decided to ask the following question: based on the distribution of ingredients in a recipe, could a machine learning model determine whether the recipe was a cookie, a cake, or bread?
- To answer the question, Sara began by assembling a dataset of 700 recipes. She standardized each recipe by converting cups and tablespoons into ounces and then entered each recipe into a spreadsheet.
- She then used a tool called AutoML to create a no-code machine learning model. This was as simple as importing the spreadsheet, selecting a training budget, and specifying which columns to use in the model itself. Training took just a few hours.
- At the end of the process, which involved only basic data preparation and zero code, Sara had a model that could predict whether a recipe was bread with 93% accuracy, cake with 83% accuracy, or cookies with 79% accuracy. (A rough code sketch of this workflow follows the list.)
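Sara's actual workflow used AutoML's no-code interface, so no programming was required. If you wanted to reproduce the same idea in code, a minimal scikit-learn sketch might look like the following. The file name recipes.csv, the ingredient columns, and the "type" label are assumptions for illustration, not her actual dataset.

```python
# A rough sketch of the recipe classifier in scikit-learn (the original used
# AutoML's no-code interface). Assumes a hypothetical "recipes.csv" with one
# column per ingredient (amounts in ounces) and a "type" column holding
# "bread", "cake", or "cookie".
import pandas as pd
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import classification_report

recipes = pd.read_csv("recipes.csv")
X = recipes.drop(columns=["type"])   # ingredient amounts, standardized to ounces
y = recipes["type"]                  # bread / cake / cookie labels

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=42
)

model = RandomForestClassifier(n_estimators=200, random_state=42)
model.fit(X_train, y_train)

# Per-class precision and recall, loosely analogous to AutoML's per-label accuracy view
print(classification_report(y_test, model.predict(X_test)))
```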
This is where it gets more interesting. AutoML has a feature importance tab, which tells users which factors matter most when the model categorizes a recipe. Yeast, for example, clearly signaled to the model that it was dealing with a bread recipe. Meanwhile, the butter and egg ratios seemed to separate cookies from cakes, although that boundary was fuzzier.
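With the hand-rolled model sketched above, you could get a similar view from the classifier's built-in importance scores. This snippet continues from that hypothetical model, so the column names are still assumptions.

```python
# Rough analogue of AutoML's feature importance tab, continuing from the
# hypothetical scikit-learn model sketched above.
import pandas as pd

importances = pd.Series(model.feature_importances_, index=X.columns)
print(importances.sort_values(ascending=False).head(10))
# If a column like "yeast" ranks near the top, it is a strong signal of a bread recipe.
```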
Using this information, Sara was able to play with the model a little. She plugged in an existing recipe and tweaked its ingredients until the model was confused, assigning a 50% probability that the recipe was bread and a 50% probability that it was a cookie. She baked the result, which she dubbed a "breakie," and it was indeed a passable and surprisingly delicious bread-cookie hybrid.
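Sara did this tweaking by hand in AutoML's interface. A hypothetical way to automate the same probing against the scikit-learn sketch above could look like this; the "butter" column, the class labels, and the starting recipe are all assumptions.

```python
# Hypothetical sketch of the "breakie" probe: start from an existing recipe
# and sweep one ingredient until the model is split roughly 50/50 between
# bread and cookie. Reuses the model, X, and column names assumed above.
import numpy as np

recipe = X.iloc[0].copy()                      # an existing recipe's ingredient amounts
classes = list(model.classes_)
bread_idx, cookie_idx = classes.index("bread"), classes.index("cookie")

for butter_oz in np.arange(0.0, 8.0, 0.25):    # sweep the (assumed) butter column
    recipe["butter"] = butter_oz
    probs = model.predict_proba(recipe.to_frame().T)[0]
    if abs(probs[bread_idx] - probs[cookie_idx]) < 0.05:
        print(f"Roughly 50/50 bread-cookie split at {butter_oz:.2f} oz butter: {probs}")
        break
```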
Exploring the Promise and Limitations of AI—with Bread
Many media outlets took headlines like "Google Used Artificial Intelligence to Create Two New Mashup Desserts Based on Baking Search Data" to mean that an artificial intelligence directly "invented" a new kind of recipe, which clearly did not happen. The researcher didn't tell the AI to create a new recipe, and the recipe that resulted wasn't even complete in a meaningful sense: there were no instructions, just a list of ingredients.
However, what did happen was probably more interesting than a new kind of cake (and I say this as a world-champion cake appreciator). Google demonstrated a system in which a relatively untrained user could confidently create a high-accuracy AI model in a short amount of time, and then use the explainability features built into that model to discover fresh insights in the form of new recipes.
As AI becomes simpler to use and more explainable, we're going to see this kind of thing more often, and here at Bitvore, we encourage it. The more open AI becomes, the more surprising and interesting the projects people will create. Bitvore users are constantly using our products to discover the unexpected, and if you'd like to learn more about how they do that, contact us today!