#insurance

Troubles with AI in insurtech

src: CNN

This is the Twitter post from Lemonade that caused a ruckus:

> For example, when a user files a claim, they record a video on their phone and explain what happened.
>
> Our AI carefully analyzes these videos for signs of fraud. It can pick up non-verbal cues [emphasis added] that traditional insurers can’t, since they don’t use a digital claims process. (4/7)

People were concerned by the very real possibility that Lemonade’s automated claims processing would use features such as a person’s skin color to gauge the likelihood of fraud. From the public’s perspective, this feels like yet another instance of the misapplication of AI: a computer reading signals and coming to conclusions that are socially abhorrent.

This is going to be a common refrain for tech companies that purport to use AI to solve a problem. One might want to qualify that by restricting it to problems involving people, but that qualification feels almost redundant. What’s sort of ironic is that Lemonade has really oversold their AI capabilities; what they claim to be AI is probably some straightforward model. This comes from having read their S-1 filing fairly closely and realizing that things like their AI chatbots are nothing more than a prettified question flowchart.
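To be concrete about what a “prettified question flowchart” could look like under the hood, here’s a minimal sketch in Python. The flow, node names, and prompts are all invented for illustration; this is not Lemonade’s actual implementation:

```python
# A hypothetical question-flowchart "chatbot": each node asks a fixed
# question and routes to the next node based on the user's answer.
# There is no ML anywhere -- just a lookup table dressed up as conversation.

FLOW = {
    "start": {
        "prompt": "Hi! What can I help you with? (claim/policy)",
        "next": {"claim": "claim_type", "policy": "policy_info"},
    },
    "claim_type": {
        "prompt": "What kind of claim? (theft/damage)",
        "next": {"theft": "record_video", "damage": "record_video"},
    },
    "record_video": {
        "prompt": "Please record a short video describing what happened.",
        "next": {},  # terminal node: hand off to claims processing
    },
    "policy_info": {
        "prompt": "Your policy details are in the app under 'My Policy'.",
        "next": {},  # terminal node
    },
}

def run(flow, node="start"):
    """Walk the flowchart until a terminal node is reached."""
    while True:
        state = flow[node]
        print(state["prompt"])
        if not state["next"]:
            return node
        answer = input("> ").strip().lower()
        # Unrecognized answers simply re-ask the same question.
        node = state["next"].get(answer, node)

if __name__ == "__main__":
    run(FLOW)
```

The point is that there’s no model here at all: the “bot” is a static lookup table, and the conversational polish is purely cosmetic.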

With the advent of AI-based solutions to everything under the sun, most of which are black boxes, I feel like there’s going to be a need for some sort of auditing body. I mean, you’re not going to be able to stop the flow of investor money to these companies during what I guess can be called the “AI Spring”. The solution then is to be able to say that you’ve been audited and certified. In the short term, I think what we’ll probably see is companies learning to keep quiet on social media (mob mentality is more likely to converge on disgust). It’s tricky, because for a product like the one Lemonade is offering, the target audience is exactly the people on social media.

Of course, what remains is the most difficult question: determining whether a model is unfit to be used publicly, perhaps because it’s unfair. And so we’re back to [[project-fairness]].

Let’s linger on this particular problem for a little bit:

  1. I definitely think facial recognition is a much more visceral example than finding patterns in tabular data. Thus, the first rule should be to never use facial data as part of your AI engine (or, at the very least, to downplay and obfuscate its use).

Securities Law and Insurance

Relevant to my interests:

> Insurers who sell directors’ and officers’ liability policies particularly hate it, since they often pay out these settlements: They thought they were insuring companies against the risk of accounting misstatements, but it turned out they were also insuring them against the risk of climate change and data breaches and everything else that can go wrong. There is a feeling that this can’t all be securities fraud, that securities fraud cases should be about securities fraud, and that climate change or sexual harassment should be litigated somewhere else.

The argument for why everything that affects stock prices is securities fraud is pretty straightforward:

> The shareholders claim that they relied on Goldman’s statements—about managing conflicts, putting customers first, etc.—in buying Goldman’s stock; they also claim that every shareholder who bought Goldman stock between early 2007 and mid-2010 effectively relied on those statements, because those statements were incorporated into the price of Goldman’s stock. That is, if Goldman had instead said “we have lots of conflicts of interest but we don’t care, and we gouge our customers ruthlessly and illegally,” its stock would have been lower,[5] so anyone who bought during the class period was defrauded by paying too high a price. (This is called the “fraud-on-the-market” theory and comes from a 1988 Supreme Court case called Basic v. Levinson.)

How delightful. Basically, since stock prices reflect all available information, this gives lawyers an avenue to bring class-action lawsuits over almost anything that goes wrong.
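To make the fraud-on-the-market arithmetic concrete, here’s a toy damages calculation with invented numbers (real cases use event studies to estimate the price inflation, which is far messier):

```python
# Hypothetical back-of-the-envelope "fraud-on-the-market" damages:
# if a misstatement inflated the stock price, every buyer during the
# class period overpaid by roughly the per-share inflation.

price_with_statement = 100.0  # observed market price (invented)
price_without = 90.0          # estimated "true" price absent the statement (invented)
class_period_shares = 5_000_000  # shares bought during the class period (invented)

inflation_per_share = price_with_statement - price_without
estimated_damages = inflation_per_share * class_period_shares

print(f"Inflation per share: ${inflation_per_share:.2f}")
print(f"Estimated class damages: ${estimated_damages:,.0f}")
# Inflation per share: $10.00
# Estimated class damages: $50,000,000
```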

Matt Levine summarises:

> If it’s securities fraud to (1) have a code of ethics (or a policy on environmental, social and governance issues) and (2) also do something bad, then some companies will respond by not having codes of ethics. (Since that is easier and more reliable than not doing anything bad.) That is not an entirely good result! You want companies to promise to do good things! Ideally to do them too, but that is harder.