Troubles with AI in insurtech
tags: [ src:article , insurance , artificial_intelligence , media ]
src: CNN
This is the Twitter post from Lemonade that caused a ruckus:
> For example, when a user files a claim, they record a video on their phone and explain what happened.
> Our AI carefully analyzes these videos for signs of fraud. It can pick up non-verbal cues [emphasis added] that traditional insurers can’t, since they don’t use a digital claims process. (4/7)
People were concerned by the very real possibility that Lemonade’s automated claims processing would use features such as a person’s skin color to gauge the likelihood of fraud. From the public’s perspective, this feels like yet another misapplication of AI: a computer reading signals and reaching conclusions that are socially abhorrent.
This is going to be a common refrain for tech companies that purport to use AI to solve a problem.1 One might want to qualify that by restricting to problems involving people, but that feels almost redundant. What’s somewhat ironic is that Lemonade has really oversold its AI capabilities, and what it claims to be AI is probably some straightforward model. This comes from having read their S1 document fairly closely and realizing that, for instance, their AI chatbot is nothing more than a prettified question flowchart.
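For a sense of what a “prettified question flowchart” amounts to, here is a minimal sketch of such a chatbot. Everything here is invented for illustration (the node names, questions, and transitions are assumptions, not anything from Lemonade’s actual product); the point is that the “AI” can be a plain lookup table over states.

```python
# Hypothetical sketch of a flowchart-style claims chatbot.
# Each node has a question, a mapping from recognized answers to the
# next node, or a fixed "next" transition for free-text replies.
FLOW = {
    "start": {
        "question": "What kind of claim are you filing?",
        "answers": {"theft": "theft_item", "damage": "damage_item"},
    },
    "theft_item": {
        "question": "What item was stolen?",
        "answers": None,  # free-text step: any reply advances
        "next": "done",
    },
    "damage_item": {
        "question": "What was damaged?",
        "answers": None,
        "next": "done",
    },
    "done": {"question": "Thanks, a claims agent will review.", "answers": {}},
}

def next_node(current: str, user_reply: str) -> str:
    """Advance through the flowchart based on the user's reply."""
    node = FLOW[current]
    if node["answers"] is None:  # free-text step: fixed transition
        return node["next"]
    # Unrecognized input keeps the user at the same node (re-ask).
    return node["answers"].get(user_reply.lower(), current)

# Walking the flow: "theft" -> describe the item -> done.
state = next_node("start", "theft")    # -> "theft_item"
state = next_node(state, "my laptop")  # -> "done"
```

No model, no learning: just a dictionary of states and transitions, which can nonetheless be marketed as a conversational AI.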
With the advent of AI-based solutions to everything under the sun, most of which are black boxes, I feel like there’s going to be a need for some sort of auditing body. You’re not going to be able to stop the flow of investor money to these companies during what I suppose could be called the “AI Spring.” The solution, then, is to be able to say that you’ve been audited and certified.2 In the short term, I think what we’ll probably see is companies learning to keep quiet on social media (mob mentality is more likely to converge to disgust). It’s tricky, because for a product like Lemonade’s, the target audience is exactly the people on social media.
Of course, what remains is the most difficult question: determining whether a model is unfit for public use, perhaps because it’s unfair. And so we’re back to [[project-fairness]].
Let’s linger on this particular problem for a little bit:
- I definitely think facial recognition is a much more visceral example than finding patterns in tabular data. Thus, the first rule should be to never use facial data as part of your AI engine (or at least to downplay or obfuscate it).
Backlinks
- [[202106011119]]
- Problems with Lemonade: [[troubles-with-ai-in-insurtech]]