When do we learn to trust the machine?

In the realm of image processing, researchers work on the problem of image inpainting, filling in blank spaces cut out of a photo, and they do so with amazing accuracy. Machine learning algorithms can reliably predict whether a song will become a hit, or whether a video contains cats. Through “reinforcement learning”, a robot dog with a broken leg can teach itself to walk on the remaining three.

What may surprise you, though, is that the powerful techniques behind these feats are not far removed from the algorithms that could be applied to everyday business problems, such as next-best-offer or customer segmentation. The catch: we just don’t do it.
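To make that concrete, here is a minimal sketch, entirely my own and not from any production system, of what it takes to point one of these newer techniques (gradient boosting, via scikit-learn) at a next-best-offer style propensity question. The customer features and the "accepted the offer" label below are synthetic and purely illustrative.

```python
# A minimal, hypothetical sketch: pointing a post-1995 learner,
# gradient boosting, at a "next-best-offer" style propensity problem.
# Features, labels, and column meanings are made up for illustration.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
n = 5_000

# Hypothetical customer features: tenure, monthly spend, recent contacts.
X = np.column_stack([
    rng.integers(1, 120, n),       # tenure_months
    rng.gamma(2.0, 50.0, n),       # monthly_spend
    rng.poisson(1.5, n),           # contacts_last_quarter
])

# Synthetic "accepted the offer" label, loosely tied to the features.
logits = 0.01 * X[:, 0] + 0.005 * X[:, 1] - 0.3 * X[:, 2] - 1.0
y = (rng.random(n) < 1.0 / (1.0 + np.exp(-logits))).astype(int)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

model = GradientBoostingClassifier(random_state=0)
model.fit(X_train, y_train)

auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
print(f"Holdout AUC: {auc:.3f}")
```

A handful of lines, on the kind of tabular data every marketing or risk department already has.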

I have been working professionally on predictive modeling in credit risk for a few months now. As with most data mining tasks you run into in a large corporation’s marketing or risk department, any algorithm you can’t immediately crack open and look at under the hood is out of bounds.

There are good reasons for that. A transparent model is easier to explain to your manager. It’s easier to explain to the people you’re handing it over to. It’s easier to track what’s going on under the hood. It’s easier to implement on your IT platform. Finally, it’s easier to explain to the regulator.

I was looking through papers presented at a prestigious credit risk conference when it struck me: no one seemed to be using algorithms invented after about 1995! And believe it or not, those techniques have come a long way since then.

The question is: why?

1. They are not packaged well. The software packages most often used to develop and maintain predictive models, such as SAS EM or SPSS Modeler, just do not include the state of the art. This is only amplified by the fact that most organizations will not readily invest in upgrading their “mining tools”; it’s not unusual to find a bank still running a tool from the late 1990s.

2. They require more training to understand and control. As machine learning algorithms advance, they become more opaque and their parameters harder to tune. That’s why the few global organizations that do work with these techniques hire top-notch scientists and full-stack developers instead of business people with a few weeks of stats software training. Not surprisingly, the first breed is a bit harder to find than the rest.

3. They do not promise huge gains. Going from old algorithm A to new algorithm B will buy you some lift (a sketch of what such a comparison looks like follows this list), but a more fundamental change in business strategy, or an acquisition, can easily have a hundred times the impact. So this sort of data torture is for the companies at the frontier, not for everyone.

4. We are still reluctant to let go. This may be more of a philosophical point, but I sometimes think we are inherently reluctant to hand over control to machines. My earlier posts on this blog make it very clear that I’m a big fan of gut-feeling decisions from seasoned executives, but maybe we can cut the computers some slack. After all, they can see complex relationships that we don’t, calculate possibilities, and simulate outcomes that we never could. Maybe it is time to start trusting them a bit?
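As promised in point 3, here is a rough sketch of my own, on synthetic data, of the old-versus-new comparison: a pre-1995 workhorse (logistic regression, the backbone of most scorecards) against a newer learner (gradient boosting) on the same holdout. Any lift it prints is an artifact of the made-up data, not a claim about real portfolios.

```python
# A hypothetical A-versus-B comparison on synthetic, imbalanced,
# credit-style data: classic logistic regression vs. gradient boosting.
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic stand-in for an application scorecard dataset (10% "bad" rate).
X, y = make_classification(
    n_samples=10_000, n_features=20, n_informative=8,
    weights=[0.9, 0.1], random_state=42
)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, stratify=y, random_state=42
)

models = {
    "logistic regression": make_pipeline(
        StandardScaler(), LogisticRegression(max_iter=1000)
    ),
    "gradient boosting": GradientBoostingClassifier(random_state=42),
}

# Fit both on the same training set and score on the same holdout.
for name, model in models.items():
    model.fit(X_train, y_train)
    auc = roc_auc_score(y_test, model.predict_proba(X_test)[:, 1])
    print(f"{name}: holdout AUC = {auc:.3f}")
```

The difference between the two numbers is usually real, but modest; that is exactly the kind of incremental gain point 3 is about.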
