How Bitvore Builds AI One Step at a Time


There’s a common perception that to work well, AI must be trained on vast reams of data. There’s no denying that AI developed this way can get a lot done, but the approach carries real costs.

When you train AI with vast data sets, you run into three big problems: time, cost, and externalities. Research from 2019 found that 40% of companies need more than a month to train an AI model, while only 14% can do it in less than a week. A month is a long time to wait, especially for a product that’s supposed to deliver near real-time analytics.

Next, there’s cost. Training expenses vary, but they’re never trivial. Research shows that training a model with 110 million parameters costs between $2,500 and $50,000, while a 1.5-billion-parameter model can run between $80,000 and $1.6 million. Staying near the $2,500 end is obviously preferable, but even that reduced cost won’t scale well if you want to build many models.

Lastly, there are externalities to consider. Training an AI model can occupy a significant fraction of a data center for a significant stretch of time, with all the attendant electricity and cooling that entails. Building a single AI model is estimated to generate up to 78,000 pounds of CO2, twice as much carbon dioxide as a human being exhales over an entire lifetime.


Escaping the Time, Budget, and Carbon Costs of Artificial Intelligence


In short, building an average AI model can generate a lot of cost and carbon, but here at Bitvore, we’re not about building average AI models. Instead of (literally) boiling the ocean with massive models, we, along with a host of other companies, are embracing a trend known as “small data”: using smaller data sets to create limited-purpose AI models. Instead of one huge general-purpose AI with a lot of functionality, we build smaller, cheaper models faster, gradually expanding the number of tasks we can accomplish via AI.

Training smaller AI means training with smaller data sets, usually less than a terabyte in size. A data set that size can be trained on in hours or days rather than weeks or months, and on less specialized equipment: a desktop computer instead of a server rack full of specialized GPUs.
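To make that concrete, here’s a minimal sketch of what single-purpose, small-data training can look like in practice: a TF-IDF text classifier fit with scikit-learn. The file name and column names are hypothetical placeholders, not a Bitvore pipeline.

```python
# A minimal "small data" sketch: a single-purpose text classifier fit on a
# modest CSV, runnable on an ordinary desktop. File and column names are
# hypothetical placeholders.
import pandas as pd
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline

df = pd.read_csv("labeled_articles.csv")  # hypothetical small data set
X_train, X_test, y_train, y_test = train_test_split(
    df["text"], df["label"], test_size=0.2, random_state=42
)

# TF-IDF plus logistic regression trains in seconds to minutes on a laptop,
# no GPU rack required.
model = make_pipeline(
    TfidfVectorizer(max_features=20_000),
    LogisticRegression(max_iter=1000),
)
model.fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.3f}")
```

A classical pipeline like this is often good enough for a single, narrow task, which is exactly the point of the small-data approach.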

Broadly speaking, experimenting with small data may lead to some exciting breakthroughs in artificial intelligence. For example, we could create machine learning models that master new skill categories after learning just one skill, like an AI that masters a game of Tetris and then uses that skill to organize an entire warehouse.
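The closest thing available today is transfer learning: learn a representation on one task, then reuse it for a related task with very few labels. The sketch below is purely illustrative (pretraining on digits 0–4 and transferring to digits 5–9 with just 50 labeled examples), not a Tetris-to-warehouse system.

```python
# A hedged sketch of the transfer idea: pretrain on one task, then reuse the
# learned representation for a related task with very few labels. Uses
# scikit-learn's bundled digits data purely for illustration.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)
src = y < 5   # "first skill": digits 0-4
tgt = ~src    # "new skill":   digits 5-9

# Learn a representation on the source task only.
mlp = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500, random_state=0)
mlp.fit(X[src], y[src])

def hidden_features(X):
    # Reuse the trained hidden layer (ReLU) as a feature extractor.
    return np.maximum(0, X @ mlp.coefs_[0] + mlp.intercepts_[0])

X_tgt, y_tgt = hidden_features(X[tgt]), y[tgt]
few, rest = slice(0, 50), slice(50, None)  # only 50 labeled target examples
clf = LogisticRegression(max_iter=1000).fit(X_tgt[few], y_tgt[few])
print(f"target-task accuracy from 50 labels: {clf.score(X_tgt[rest], y_tgt[rest]):.3f}")
```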

We could also create AI that’s better at detecting anomalies in areas where there will never be a large sample size. Imagine a custom-built piece of equipment: there’s only one of its kind, so there’s no large baseline of training data for anomaly detection. With an advanced form of smaller AI, you could build a limited model that still recognizes anomalies from a small set of training data.
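Simpler versions of this technique already exist. A one-class model can be fit on whatever small baseline of normal readings a unique machine produces, then flag departures from it. The sensor values below are synthetic placeholders:

```python
# A minimal sketch of anomaly detection from a small baseline: fit a
# one-class model on the few normal readings that exist and flag outliers.
# The sensor readings here are synthetic placeholders.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# 200 normal readings of (temperature, vibration) from one machine.
normal = rng.normal(loc=[50.0, 1.2], scale=[2.0, 0.1], size=(200, 2))

detector = IsolationForest(contamination=0.01, random_state=0).fit(normal)

new_readings = np.array([
    [51.0, 1.25],   # typical
    [49.5, 1.15],   # typical
    [78.0, 3.40],   # clearly abnormal
])
print(detector.predict(new_readings))  # 1 = normal, -1 = anomaly
```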


Building Smaller AI


More advanced forms of smaller AI are still in the future, but you can begin building smaller AI in your organization today.

Here at Bitvore, we integrate data from many individual providers, and that data occasionally arrives misaligned: a field conflicts with or is missing from another feed, or the naming schema is confusing. These problems can affect the accuracy of our predictions.

To compensate, Greg Bolcer, Bitvore’s Chief Data Officer, wrote a program that automatically detects and mitigates conflicting and missing fields in the data Bitvore receives from vendors. It took about a day to create and train, used a relatively small data set, and was designed with a single purpose, yet it found up to 100,000 pieces of missing data.
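The sketch below is not that program, but the core idea is straightforward to illustrate. Assuming a pandas DataFrame per vendor feed and a hypothetical expected schema, a small audit might look like this:

```python
# Not Bitvore's actual program: a minimal sketch of the same idea. Scan an
# incoming vendor feed against an expected schema, count missing fields, and
# surface conflicting values. All field names here are hypothetical.
import pandas as pd

EXPECTED = ["cusip", "issuer", "coupon"]
ALIASES = {"issuer_name": "issuer", "cpn": "coupon"}  # confusing vendor naming

def audit_feed(df: pd.DataFrame) -> pd.DataFrame:
    df = df.rename(columns=ALIASES)
    # Add any expected field the vendor omitted entirely.
    for col in EXPECTED:
        if col not in df.columns:
            df[col] = pd.NA
    print("missing values per field:\n", df[EXPECTED].isna().sum())
    # Flag records where the same key maps to conflicting values.
    conflicts = df.groupby("cusip")["coupon"].nunique(dropna=True)
    print("keys with conflicting coupons:", conflicts[conflicts > 1].index.tolist())
    return df

feed = pd.DataFrame({
    "cusip": ["12345", "12345", "67890"],
    "issuer_name": ["Acme", "Acme", None],
    "cpn": [5.0, 5.25, None],
})
audit_feed(feed)
```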

We could conceivably have built this function into a much larger project, or reworked our primary machine learning model to compensate for discrepancies in vendor data automatically, but that would have cost far more time and effort without adding much in the way of convenience. Instead, we made a much more efficient use of time and resources for effectively the same outcome. That’s one of the major benefits of committing to smaller AI in practice.

For more about Bitvore and the ways that we use AI, download our case study below!

