Machine Learning, Human Values: What Will FinTech Magnify?

Written by DavidMort | Published 2018/04/12
Tech Story Tags: fintech | ai | banking | startup | ethics

Sankaet Pathak is one of my favorite people to talk to about innovation in financial services. The company he founded, SynapseFI, plays an important role in enabling collaboration between banks and financial technology startups. I came away from this recent conversation with a deeper understanding of the ethical risks posed by AI, insights into behavioral data science, and cautious optimism that we can use these tools and technology to create the financial world we want to live in.

DM: What are you trying to do at Synapse — both today and aspirationally?

SP: We are trying to make sure that every banking product turns into an API. Innovation in finance has been slowed because banks are either unable to support fintech companies or have been unable to innovate themselves to accommodate what Americans really need.

Cost-per-customer needs to go down. If we can create a bank with a fully automated back office, then it does not matter if you keep $100 in your bank account or $100,000. Everybody would be a good customer, regardless of the deposits they hold.

DM: What do you identify as some of the ethical risks in financial services technology today? And can they be mitigated or how can they be mitigated?

SP: All of the ethical issues that we are dealing with in finance are easy to overcome. There are a couple of things we should be very focused on right now: lending practices that are questionable, with inherent biases in whatever a banker thinks a good loan candidate looks like; and the way banking today is heavily optimized for spending.

As you start automating lending practices, or even card issuance practices, it is very easy to bake in biases. Automation does not always mean liberation. We have to make sure that as we build these products, we make conscious choices to fight against some of these habits and not just accept the status quo.

Secondly, everything that we’re doing in banking converges on the credit card. Banking is hyper-optimized to be able to sell this spend-money-get-money product. We have to change some of those models in banking.

DM: Are ethics, in your mind, machine-readable? What roles should humans have today versus three to five years from now as AI becomes more prevalent in financial services?

SP: Humans are going to be involved in these processes and decision-making for as long as I can see, but it does require us to flex that ethics muscle more. I am worried that a lot of companies just say: Well, we gave a question to this black box and it gave us the answer, and whatever the answer is, is the ultimate answer, and we’re going to roll with it.

A famous example is Uber’s surge pricing (though perhaps that will change under new leadership). When some kind of disaster happened, Uber’s initial response had mostly been: Oh well, the system did not know any better, and for that reason it set surge pricing, so surge pricing it was.

Those are not acceptable answers.

You need people on a cross-validation team to look at the results coming out of any machine-learning model on a constant basis to figure out: are these the intended consequences? You also need to look for patterns around race, ethnicity, gender, location, and so on. Are we denying service to, or mistreating, some customers because they share common traits? If so, we need to go back and rethink how we’re automating some of this decision-making.
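As an illustration of the kind of check such a team might run, here is a minimal sketch of a disparate-impact audit over a model’s approval decisions. This is not Synapse’s actual process — the data, column names, and threshold are all hypothetical:

```python
# Minimal sketch of a disparate-impact audit on model decisions.
# Column names and the 80% threshold here are illustrative only.
import pandas as pd

def approval_rates(decisions: pd.DataFrame, group_col: str) -> pd.Series:
    """Approval rate per demographic group."""
    return decisions.groupby(group_col)["approved"].mean()

def four_fifths_check(rates: pd.Series) -> bool:
    """Flag if any group's approval rate falls below 80% of the
    highest group's rate (the EEOC 'four-fifths' heuristic)."""
    return (rates.min() / rates.max()) >= 0.8

# Toy decision log: one row per applicant.
decisions = pd.DataFrame({
    "approved":  [1, 1, 0, 1, 0, 1, 0, 0],
    "zip_group": ["A", "A", "A", "A", "B", "B", "B", "B"],
})

rates = approval_rates(decisions, "zip_group")
if not four_fifths_check(rates):
    print("Potential disparate impact detected:")
    print(rates)
```

In practice such an audit would run continuously over production decisions and across many attributes at once, but the shape of the check is the same.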

There is a newer class of techniques called evolutionary algorithms, which are similar in spirit to reinforcement learning. I think evolutionary algorithms could, over time, off-load some of the ethical decision-making, but they are not there yet.
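For readers unfamiliar with the term: an evolutionary algorithm searches by mutating candidate solutions and keeping the fittest, rather than by following gradients. A toy sketch, purely illustrative and nothing banking-specific:

```python
# Toy (1 + lambda) evolution strategy: mutate a parent, keep the best.
import random

def fitness(x: float) -> float:
    return -(x - 3.0) ** 2  # maximum at x = 3

def evolve(generations: int = 100, offspring: int = 10) -> float:
    parent = random.uniform(-10, 10)
    for _ in range(generations):
        # Generate mutated children around the current parent.
        children = [parent + random.gauss(0, 0.5) for _ in range(offspring)]
        # Survival of the fittest, including the parent itself.
        parent = max(children + [parent], key=fitness)
    return parent

print(f"Best solution found: {evolve():.3f}")  # converges near 3.0
```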

DM: When you say “we,” is this a company-by-company problem, or is it something the industry needs to work on together more broadly — through working groups or mentorship — to really make change?

SP: First and foremost, consumer awareness is critical. The more consumers realize where data science or cognitive science is being used, the more they can choose not to support a company whose practices they disagree with. That is a powerful tool, and it has worked recently. We need to encourage more and more awareness to build a feedback loop: the company has to fix itself, and if it does not, consumers will not use it.

Next would be regulatory oversight. The CFPB could do a good job of regulating how different data science models and practices are used across different aspects of banking.

To enable that, I think data science work needs to be done in open source — which OpenAI is doing a lot of. This would help people working at regulatory bodies understand how these technologies could be used, so they can start to build some regulatory infrastructure.

Then a lot of it comes down to the company itself. The company has to realize that ethics is not just a feel-good thing; it de-risks your business. It is important to care about how you are affecting your end customers, your employees, and your vendors. We have seen companies implode recently because they have not flexed their ethics muscle. I also believe a mission-driven company has better odds of hiring talented people who will work very hard for it.

DM: How can newer organizations compete with the incumbents that have decades and decades of data?

SP: Synthetic data generation has become very significant over the last year and a half — and, for some reason, not a lot of people are talking about it. In Synapse’s case, we can synthesize a lot of government ID data to train systems that validate government IDs. We have to worry about flash, lighting conditions, wear and tear, orientation, and the angle of the camera. With self-driving cars, you have to be able to synthesize different light conditions, fog, and all of these scenarios before they are ever encountered. This requires using a little bit of real data and then generating a lot of synthetic data.

It is not just smaller companies that can benefit from synthetic data generation; even the larger companies are using it to build technologies they do not have data for today. Most of the training for Google Clips, from my understanding, happened on synthetic data.
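A minimal sketch of one simple flavor of this idea — augmenting a real image with random rotation, brightness, and noise to mimic camera angle, lighting, and wear. This is illustrative only, not Synapse’s pipeline, and the input file name is hypothetical:

```python
# Take one real ID photo and generate many synthetic variants.
import random

import numpy as np
from PIL import Image, ImageEnhance

def synthesize_variant(img: Image.Image) -> Image.Image:
    # Random camera angle.
    img = img.rotate(random.uniform(-15, 15), expand=True, fillcolor="white")
    # Random lighting / flash conditions.
    img = ImageEnhance.Brightness(img).enhance(random.uniform(0.6, 1.5))
    # Sensor noise / wear and tear.
    arr = np.asarray(img).astype(np.int16)
    arr += np.random.randint(-20, 20, arr.shape, dtype=np.int16)
    return Image.fromarray(arr.clip(0, 255).astype(np.uint8))

real = Image.open("sample_id.png").convert("RGB")  # one real example
synthetic = [synthesize_variant(real) for _ in range(100)]
```

Production pipelines go much further (generative models, 3D rendering, physics-based lighting), but the principle — a little real data, a lot of generated variation — is the same.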

DM: What other issues around data should people in financial services be talking about more?

SP: Behavioral science — how products and services tweak people’s emotions and eventually end up tweaking their behavior — is significant data science work. I feel we have been very naive about this. We have pretty much said: Building out a product that people come back to use again and again is inherently a good thing. It is not that obvious, because a lot of these things trigger dopamine and make us addicted.

When people like our posts on Facebook, it makes us feel good. The way Facebook displays that, the way it sends you notifications, the infinite scroll — all of these things trigger dopamine in your brain. Banking does something similar without even realizing it. A credit card reinforces the belief that spending is good — not that spending is bad — because if you are not spending, you are not getting your cash-back rewards. I do not want to be too negative about credit cards, but I feel it is kind of a predatory product, and regulators have not really figured that out yet.

The credit card can be flipped around easily by asking: How can we change consumer behavior so that people are more incentivized by saving than by spending? Synapse will launch a credit card — hopefully this year — that keeps the cash-back reward but gives it to people at a savings event instead of a spending event. In truth, interchange is how you make the money. The difference here is that you give it to the customer when they save money, as opposed to when they swipe. Then you can slowly tweak their behavior around how they associate spending and saving.
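The flip is small in mechanical terms. Here is a hedged sketch of the incentive logic described above — the names, rates, and payout rule are hypothetical, not Synapse’s actual product:

```python
# Sketch: interchange still accrues on swipes, but the cash-back
# reward is released on a savings event rather than a spending event.
from dataclasses import dataclass

@dataclass
class Account:
    reward_pool: float = 0.0   # interchange-funded, not yet paid out
    cash_back: float = 0.0     # paid to the customer

INTERCHANGE_RATE = 0.015       # illustrative ~1.5% interchange

def on_swipe(acct: Account, amount: float) -> None:
    # Spending funds the pool but triggers no payout.
    acct.reward_pool += amount * INTERCHANGE_RATE

def on_savings_deposit(acct: Account, amount: float) -> None:
    # Saving is the event that releases the reward.
    payout = min(acct.reward_pool, amount * 0.01)  # illustrative cap
    acct.reward_pool -= payout
    acct.cash_back += payout

acct = Account()
on_swipe(acct, 500.0)            # builds the pool; no reward yet
on_savings_deposit(acct, 200.0)  # reward released on saving
print(acct)
```

The revenue source is unchanged; only the trigger that pays the customer moves from the swipe to the deposit.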

Companies need to think about the kinds of hooks they are building into their applications and how those can affect people and societies in the long run. We cannot just pretend that we do not know this area of research. We cannot say: Oh, we have no clue what you are talking about when you say “dopamine.” We know how it affects people. Now the question is: How can we do more research and figure out how to use dopamine to improve people’s lives in a positive way?

DM: How crucial is the juncture that we’re at right now in terms of what the machines are learning from us or what is being codified into algorithms?

SP: Anything that humans are doing today that is biased, or that adversely affects people’s financial lives, happens in such a distributed way that it is very hard to quantify at a high level. If you automate all of that, then all of these biases get magnified. For instance, when a loan officer is writing a loan, they are most likely thinking on some level about the borrower’s clothing, age, gender, and ethnicity. As they sit and talk to the borrower, they are computing through all of that, consciously or subconsciously. Now, if you go ahead and build a facial recognition system and use it as part of your underwriting process, you have not mitigated this problem — you have made it infinitely worse. It is no longer thousands and thousands of agents underwriting loans; rather, it is one big agent, and whatever bias that one big agent has is going to affect a lot of people.

This is not hypothetical; we have seen how Facebook was used. As we automate these things and operate digitally at a larger scale than we have ever operated before, we have to be very, very cautious about how we are using data and what kinds of vulnerabilities the system has. We did not realize how vulnerable Facebook was for a decade. We have to be very proactive early on.

DM: Thanks for talking with us, Sankaet!

Notes: Dialogue edited for clarity and length.

