Updated An insurance biz has retracted boasts about how it uses AI algorithms to study videos of customers for “non-verbal cues” that their claims are fraudulent. The marketing U-turn came after the ethics of this approach were publicly and loudly called into question.
Using machine-learning software to automate decisions on whether to accept or deny customers credit or insurance payouts is particularly sensitive. Last month, America's consumer watchdog, the FTC, issued a strongly worded statement warning that it is illegal to deploy algorithms that end up discriminating against people based on their race, color, religion, national origin, sex, marital status, or age when making financial-related decisions.
Alarm bells went off when Lemonade, a company based in New York, said it had built software that scanned videos of customers explaining the situations they found themselves in, which were submitted as part of insurance claims, to decide whether those people were essentially lying or committing some other fraud.
Lemonade prides itself on providing an easier and simpler way for people to file pet, home, and life insurance claims. Customers speak to a chat bot, submit their claim with a video, and a decision on how much it should pay them can be made in a few minutes.
“When a user files a claim, they record a video on their phone and explain what happened,” Lemonade stated in a series of tweets that have since been deleted.
"Our AI carefully analyzes these videos for signs of fraud. It can pick up non-verbal cues that traditional insurers can't, since they don’t use a digital claims process."
Netizens criticized Lemonade's technology, accusing it of being potentially biased and overly reliant on flimsy sentiment and emotion analysis of the submitted videos. The backlash on Twitter prompted the company to delete its posts and issue a fresh statement in which it claimed it just used facial-recognition algorithms to make sure the same person wasn’t making multiple claims.
So, no AI lie detector as previously claimed – just identification algorithms, apparently.
“There was a sizable discussion on Twitter around a poorly worded tweet of ours (mostly the term ‘non-verbal cues’) which led to confusion as to how we use customer videos to process claims,” the upstart stated on its website. “There were also questions about whether we use approaches like emotion recognition (we don’t), and whether AI is used to automatically decline claims (never!)”
“We do not use, and we’re not trying to build, AI that uses physical or personal features to deny claims,” it reiterated.
Our systems don’t evaluate claims based on background, gender, appearance, skin tone, disability, or any physical characteristic (nor do we evaluate any of these by proxy) (3/4)— Lemonade (@Lemonade_Inc) May 26, 2021
And in an S-1 filing to America's financial regulator, the SEC, Lemonade said its system collects roughly 1,700 “data points” from customers.
“We use technology and artificial intelligence to reduce hassle, time, and cost associated with purchasing insurance and the claims submission and fulfillment process,” Lemonade stated in its SEC submission, dated June 2020.
"We built our entire company on a unified, proprietary, state-of-the-art technology platform. Our customers are able to purchase insurance on our website or through our app, generally in a matter of minutes. Our artificial intelligence system handles substantially all of our customer onboarding and a meaningful portion of our claims."
What those data points describe is unclear. The biz went on to admit its own technology could have unintended consequences, such as customers being paid too much or too little due to biased and discriminatory decisions. On the one hand, this is a boilerplate warning to investors and the financial markets that the biz could screw up and implode, and thus investments could be lost; on the other hand, it is pretty specific about how it could go wrong.
“Our proprietary artificial intelligence algorithms may not operate properly or as we expect them to, which could cause us to write policies we should not write, price those policies inappropriately or overpay claims that are made by our customers. Moreover, our proprietary artificial intelligence algorithms may lead to unintentional bias and discrimination.”
“A determination by federal or state regulators that the data points we collect and the process we use for collecting this data unfairly discriminates against some groups of people could also subject us to fines and other sanctions, including, but not limited to, disciplinary action, revocation and suspension of licenses, and withdrawal of product forms,” the filing continued.
“Any such event could, in turn, materially and adversely affect our business, financial condition, results of operations and prospects, and make it harder for us to be profitable over time. Although we have implemented policies and procedures into our business operations that we feel are appropriately calibrated to our artificial intelligence and automation-driven operations, these policies and procedures may prove inadequate to manage our use of this nascent technology, resulting in a greater likelihood of inadvertent legal or compliance failures.”
The company was launched in 2016, and operates across the US and parts of Europe, including France, Germany, and the Netherlands. It has yet to turn a profit, and spends most of its money on sales and marketing.
“Our future success depends on our ability to continue to develop and implement our proprietary artificial intelligence algorithms, and to maintain the confidentiality of this technology,” it stated in its filing.
The Register has asked Lemonade for further comment. ®
Updated to add
We've noticed that even though Lemonade, in its U-turn on Wednesday, said its AI software is "never" used to "automatically decline claims," in the S-1 filing from last year, it boasted that one of its chat bots, called AI Jim, could, as of March 2020, handle a customer's "entire claim through resolution in approximately a third of cases, paying the claimant or declining the claim without human intervention."
However, a spokesperson for the upstart told The Register today: "The term 'non-verbal cues' was a bad choice of words to describe the facial recognition technology we’re using to flag claims submitted by the same person under different identities. These flagged claims then get reviewed by our human investigators. We have never, and will never, let AI auto-reject claims.”
They also told us that despite being called AI Jim, the part of the chat bot that automatically denies or approves claims isn't artificially intelligent. "These checks are not done using AI, but they are still processed by the same product we call AI Jim," the spokesperson told us. "Approximately a third of our claims require zero human intervention in order to approve or deny – without using AI or machine learning."