I had an awesome time speaking at WIFIC recently in Prague. It was really nice to be able to attend an in-person event again in Europe.
Our panel session “Artificial Intelligence and Machine Learning, what does AI/ML really do for us right now?” was a lively discussion skilfully moderated by Tony Berkman from Two Sigma. AI thought leaders Harald Collet the CEO of Alkymi, Sylvain Forté the CEO of SESAMm, and Stathis Onasoglou from Google Cloud joined me on the panel. We discussed the various challenges around building teams and products to handle the quantitative and qualitative needs in translating potential to reality.
I focused specifically on the aspects of bias or blind spots in programmed outcomes.
The causes of bias can’t be generalized widely. We tend to think of bias as resulting from preferences or exclusions in training data, but bias can also be introduced by how data is gathered, by the design of the algorithms, and by how consumers interpret AI outputs.
Bias is rarely obvious. However, we have become accustomed to the preferences and inherent results presented to us by our favourite online shopping sites and search engines, and we are conditioned to expect bespoke results based on each individual's online behaviour.
One of the key causes of bias is the data sets used to train the models. A well-known example of AI bias occurred when a giant, global online retailer introduced AI to screen and recruit employees. This new process didn’t help diversity, equity and inclusion. The training data came from résumés previously submitted to the retailer, which came primarily from white males with 10+ years of experience. As a result, this new "smart" screening engine downgraded any application containing terms such as “Women in Technology” or “women’s colleges”.
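To make the mechanism concrete, here is a minimal, hypothetical sketch (not the retailer's actual system) of how a naive screening scorer trained on skewed historical hiring data ends up penalizing terms that merely correlate with past rejections. All résumé text and the scoring scheme are invented for illustration.

```python
from collections import Counter

# Toy historical résumé data (hypothetical): past hires skew one way,
# so terms associated with women appear mostly among rejections.
hired = [
    "software engineer 10 years java leadership",
    "backend developer java distributed systems",
    "software engineer cloud architecture java",
]
rejected = [
    "software engineer womens college java",
    "developer women in technology mentor java",
    "junior developer bootcamp",
]

def term_weights(hired_docs, rejected_docs):
    """Weight each term by how much more often it appears among hires
    than rejections (Laplace-smoothed frequency ratio)."""
    h = Counter(t for d in hired_docs for t in d.split())
    r = Counter(t for d in rejected_docs for t in d.split())
    vocab = set(h) | set(r)
    return {t: (h[t] + 1) / (r[t] + 1) for t in vocab}

def score(resume, weights):
    """Average the learned term weights; unseen terms score neutral (1.0)."""
    terms = resume.split()
    return sum(weights.get(t, 1.0) for t in terms) / len(terms)

weights = term_weights(hired, rejected)
biased = score("software engineer womens college java", weights)
neutral = score("software engineer state college java", weights)

# The résumé mentioning a women's college scores lower purely because
# the historical data associated that term with rejection.
print(biased < neutral)  # True
```

Nothing in the code mentions gender explicitly; the bias is inherited entirely from which résumés the historical process happened to favour.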
Deep learning is the most powerful approach available today, but it too can be flawed and needs more diverse and inclusive datasets.
Some best practices to avoid pitfalls of AI biases:
Know your audience: Identify the target application and end user upfront, and never underestimate the value of requirements. Data scientists must work closely with data engineers and data operations teams to tailor the training data to their target audience.
Multiple data sets should be used to train the models: the AI will learn to reinforce similarities across the training sets rather than overfit to any single source. There are multiple approaches to eliminating bias, and this space continues to evolve. Transparency is key to a successful AI platform.
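One simple, practical starting point for the practices above is auditing each training set for representation before it reaches the model. The sketch below is a hypothetical audit (the dataset names, group labels, and 30% threshold are illustrative assumptions, not a standard):

```python
from collections import Counter

# Hypothetical demographic labels attached to records in three training sets.
datasets = {
    "resumes_2019": ["male"] * 80 + ["female"] * 20,
    "resumes_2020": ["male"] * 55 + ["female"] * 45,
    "resumes_2021": ["male"] * 50 + ["female"] * 50,
}

def representation_report(datasets, threshold=0.30):
    """Flag any dataset where a group's share falls below the threshold."""
    report = {}
    for name, labels in datasets.items():
        counts = Counter(labels)
        total = sum(counts.values())
        shares = {g: n / total for g, n in counts.items()}
        flagged = [g for g, s in shares.items() if s < threshold]
        report[name] = {"shares": shares, "underrepresented": flagged}
    return report

report = representation_report(datasets)
print(report["resumes_2019"]["underrepresented"])  # ['female']
print(report["resumes_2021"]["underrepresented"])  # []
```

A report like this won't eliminate bias on its own, but it makes skewed inputs visible before training, which is exactly where the retailer example above went wrong.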
Trusted by more than 70 of the world’s top financial institutions, Bitvore provides the precision intelligence capabilities top firms need to counter risks and drive efficiencies with the power of data-driven decision making.
Uncover rich streams of risk and ESG insights from unstructured data that act as the perfect complement to the internal data and insights your firm is already generating. Our artificial intelligence and machine learning powered system provides the ability to see further, respond faster, and capitalize more effectively.
To learn how Bitvore's solutions can help your organization, visit www.bitvore.com.