If you own an auto dealership, you’ll likely offer a college student a used but affordable compact sedan. You’ll offer 30- and 40-somethings with growing families an SUV or minivan. And you might offer the early retiree his or her dream car, be that a sports car, motorcycle, or RV.
As human beings, we use these kinds of filters when making everyday decisions. When it comes to offering financial products using machine learning and artificial intelligence (AI), though, those same filters are rightly categorized as inappropriate bias.
Lots of Discussion on This Topic
“Machine-learning bias” is a topic that generates 46.9 million hits on Google (and many, many more for AI bias). For example, this story from Forbes talks about how the big cloud providers are developing and announcing tools to help address AI fairness. The MIT Sloan Business Review also looks at the topic in an article called “The Risk of Machine-Learning Bias (and How to Prevent It).”
Simply put, once a machine learning solution is given a target and a dataset, it will find and act on trends far faster than traditional, human-built predictive analytics. In other words, these algorithms will quickly internalize trends associated with attributes such as age, gender, and location.
It isn’t reasonable to blame the technology for this. After all, if we simply input a bunch of consumer data and ask it to output a decision, it’s just doing what we’ve asked.
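To make this concrete, here is a minimal sketch (hypothetical synthetic data and scikit-learn, not Katabat’s models or data) showing how a classifier trained on raw consumer data quietly picks up an age trend:

```python
# Minimal, illustrative sketch: a model trained on raw consumer data will
# happily internalize demographic signals such as age.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1_000

# Hypothetical synthetic data standing in for a real consumer dataset.
age = rng.integers(20, 70, size=n)
income_k = rng.normal(60, 15, size=n)  # income in $k, unrelated to the outcome

# By construction, younger customers are more likely to respond to a digital offer.
p_respond = 1 / (1 + np.exp(0.1 * (age - 45)))
responded = rng.random(n) < p_respond

X = np.column_stack([age, income_k])
model = LogisticRegression(max_iter=1000).fit(X, responded)

# The fitted coefficients show the model has picked up the age trend
# without ever being told to look for it.
print(dict(zip(["age", "income_k"], model.coef_[0].round(3))))
```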
Elimination of Bias?
Given these known facts, how can we stop these solutions from being biased?
One way is to curate the data fed into the model and remove fields that could potentially lead to bias. The issue with this approach is that removing data also removes predictive signal, reducing the benefit of the model itself.
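A minimal sketch of that first approach, with hypothetical column names rather than any real schema:

```python
# Illustrative only: curate the training data by dropping fields that could
# act as (or proxy for) protected characteristics before the model sees them.
import pandas as pd

SENSITIVE_COLUMNS = ["age", "gender", "zip_code"]  # assumed, illustrative list

def curate(df: pd.DataFrame) -> pd.DataFrame:
    """Return a copy of the dataset with potentially biasing fields removed."""
    return df.drop(columns=[c for c in SENSITIVE_COLUMNS if c in df.columns])

raw = pd.DataFrame({
    "age": [23, 41, 58],
    "gender": ["F", "M", "F"],
    "zip_code": ["19702", "60601", "94107"],
    "balance": [1200.0, 560.0, 3400.0],
    "days_past_due": [12, 45, 3],
})

print(curate(raw).columns.tolist())  # ['balance', 'days_past_due']
```

The trade-off is exactly the one noted above: every column dropped is signal the model can no longer use.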
Alternatively, we can limit the decision-making capability and usage of the model to situations where bias is not a concern. For example, imagine that a customer receives a friendly-toned message through his or her channel of choice, with a call to action to visit a webpage that displays on a cell phone without any need to scroll. This is an example of identifying a customer preference that could have been driven by age. But does that bother anyone?
The answer to this dilemma isn’t to limit the data inputs your machine learning models can use, but to control the decisions they can make.
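Here is a minimal sketch of that idea (purely illustrative, not Katabat’s architecture): the model’s inputs are left alone, but its outputs are confined to messaging choices.

```python
# Illustrative only: constrain the *outputs*, not the inputs. The model may
# decide how to communicate, never what financial decision to make.
from dataclasses import dataclass

@dataclass(frozen=True)
class MessagingDecision:
    channel: str           # e.g. "email", "sms", "letter"
    tone: str              # e.g. "friendly", "neutral"
    mobile_friendly: bool  # render the call-to-action page without scrolling

def decide_messaging(model_scores: dict) -> MessagingDecision:
    """Map model output onto the restricted messaging decision space.

    Whatever the model has learned (including age-driven preferences),
    the most it can do here is pick a channel and a tone; it has no way
    to alter pricing, credit limits, settlement terms, or any other
    financial decision.
    """
    channel = max(("email", "sms", "letter"), key=lambda c: model_scores.get(c, 0.0))
    tone = "friendly" if model_scores.get("prefers_informal", 0.0) > 0.5 else "neutral"
    return MessagingDecision(channel=channel, tone=tone, mobile_friendly=True)

print(decide_messaging({"sms": 0.9, "email": 0.4, "prefers_informal": 0.8}))
```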
How We’re Addressing the Issue
At Katabat, we’ve built a solution that limits the model’s decision-making in digital debt collections to messaging only. Katabat helps lenders collect more dollars through personalized digital communications tailored to customer preferences and powered by a proprietary machine learning platform.
Allowing the models to make both a financial decision and a recommendation carries too much risk of machine-learning bias. For further certainty that Engage will select communications that appeal to customers and meet all relevant regulations, our clients can review and approve every possible combination of messaging the algorithm could select.
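Because the messaging decision space is finite, that review step can be made exhaustive. A minimal sketch with hypothetical channel, tone, and template names (not Engage’s actual configuration):

```python
# Illustrative only: enumerate every combination the model could ever select,
# put the full list in front of the client for approval, and enforce it at runtime.
from itertools import product

CHANNELS = ["email", "sms"]
TONES = ["friendly", "neutral"]
TEMPLATES = ["payment_reminder_v1", "payment_plan_offer_v2"]

# The full, finite review set: 2 x 2 x 2 = 8 combinations in this toy example.
APPROVED_COMBINATIONS = set(product(CHANNELS, TONES, TEMPLATES))

def select_message(model_choice: tuple) -> tuple:
    """Accept the model's pick only if the client approved it; otherwise fall back."""
    if model_choice in APPROVED_COMBINATIONS:
        return model_choice
    return ("email", "neutral", "payment_reminder_v1")  # safe, pre-approved default

print(len(APPROVED_COMBINATIONS))                         # 8
print(select_message(("sms", "friendly", "payment_plan_offer_v2")))
```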
By constructing the solution this way, we let our clients leverage the power of machine learning in the most critical phase of engagement without wandering into the regulatory compliance minefields associated with bias.
Katabat’s VP for Product Innovation Kyle Christensen has been at the company for more than seven years. He previously worked for Sallie Mae for three years as Director of Dialer Operations and for Arrow Financial Services for nearly 12 years in a similar role.
Katabat is the leading provider of debt collections software to banks, agencies, and alternative lenders. Founded in 2006 and led by a diverse team of lending executives and leading software engineers, Katabat pioneered digital collections and has led the industry ever since. It is our mission to provide the best credit collections software in the market and solve debt resolution from the perspectives of both lenders and borrowers.