#artificial_intelligence
AI for Health
src: WHO guidance
Summary
Six key (ethical) principles:
- Human autonomy: think of the humans! (also right to privacy)
- Safety/well-being/public interest: this is like the Asimov robot laws
- Transparency/explainability/intelligibility: self-explanatory
- Responsibility/accountability: I think this follows naturally from the previous point, but it also means there are “points of human supervision” (human-in-the-loop?)
- Inclusiveness/equity: fairness in a nutshell
- Responsive/sustainable: on-line, adaptive learning + sustainability w.r.t. the environment
To someone versed in the societal import of ML, I don’t think there’s too much in the way of surprises in this guidance document, though it does highlight a few things (worth repeating):
- the differentiation between high- and low-income countries, and the potentially widening gap in healthcare outcomes brought about by AI. While there’s nothing inherently problematic about the differentiation itself, it does raise the potential problem of a mismatch in focus (i.e. cardiovascular disease and other lifestyle-based, chronic illnesses for high-income countries versus the more straightforward, brutal problems faced by low-income ones).
- biased learning from data collected in the West is a key problem: we know very well that racial groups often have very different health outcomes for the same treatment.
- healthcare in other parts of the world is oftentimes much more holistic (e.g. traditional Chinese medicine). How do we reconcile such traditions?
- AI requires big data, which runs counter to fundamental privacy rights. This is where privacy-preserving measures will be key. On the other hand, the acquisition of such data in less scrupulous countries might be disastrous.
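To give the “privacy-preserving measures” point some concrete flavor, here’s a minimal differential-privacy sketch (my own toy illustration, not anything from the WHO document): a counting query over health records is answered with Laplace noise scaled to 1/ε, so no single person’s record moves the answer by much. The `dp_count` name and its parameters are hypothetical.

```python
import random

def dp_count(records, predicate, epsilon=1.0):
    """epsilon-differentially-private count.

    A counting query has sensitivity 1 (adding/removing one record
    changes the true count by at most 1), so adding Laplace(0, 1/eps)
    noise gives eps-DP. Laplace(0, 1/eps) is sampled here as the
    difference of two independent Exp(eps) draws.
    """
    true_count = sum(1 for r in records if predicate(r))
    noise = random.expovariate(epsilon) - random.expovariate(epsilon)
    return true_count + noise

# e.g. a private estimate of how many patients are over 65
# estimate = dp_count(patients, lambda p: p["age"] > 65, epsilon=0.5)
```

Individual answers are noisy, but averaged over many queries the estimate concentrates around the truth — which is exactly the big-data-vs-privacy trade being negotiated.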
Troubles with AI in insurtech
src: CNN
This is the Twitter post from Lemonade that caused a ruckus:
For example, when a user files a claim, they record a video on their phone and explain what happened.
Our AI carefully analyzes these videos for signs of fraud. It can pick up non-verbal cues[emphasis added] that traditional insurers can’t, since they don’t use a digital claims process. (4/7)
People were concerned by the very likely possibility that Lemonade’s automated claims processing would use features such as a person’s skin color to gauge the likelihood of fraud. From the public’s perspective, this feels like yet another instance of the misapplication of AI: a computer reading signals and coming to conclusions that are socially abhorrent.
This is going to be a common refrain for tech companies that purport to use AI to solve a problem.1 One might want to qualify that by restricting it to problems involving people, but that feels almost redundant. What’s sort of ironic is that Lemonade has really oversold its AI capabilities, and what it claims to be AI is probably some straightforward model. This comes from having read their S-1 document fairly closely, and realizing that things like their AI chatbots are nothing more than a prettified question flowchart.
With the advent of AI-based solutions to everything under the sun, most of which are black boxes, I feel like there’s going to be a need for some sort of auditing body. I mean, you’re not going to be able to stop the flow of investor money to these companies during what I guess can be called the “AI Spring”. The solution then is to be able to say that you’ve been audited and certified.2 In the short term, I think what we’ll probably see is companies learning to keep quiet on social media (a mob is more likely to converge on disgust). It’s tricky, because for a product like Lemonade’s, the target audience is exactly the people on social media.
Of course, what remains is the most difficult question: determining whether a model is unfit for public use, perhaps because it’s unfair. And so we’re back to [[project-fairness]].
Let’s linger on this particular problem for a little bit:
- I definitely think facial recognition is a much more visceral example than finding patterns in tabular data. Thus, the first rule should be to never use facial data as part of your AI engine (or at least downplay or obfuscate it).
Backlinks
- [[202106011119]]
- Problems with Lemonade: [[troubles-with-ai-in-insurtech]]
Openendedness
src: article
Comes from the same people as [[why-greatness-cannot-be-planned]].
System 1 and 2
I forget where this was mentioned (either in one of the AI podcast episodes or this Numenta video), but basically we can think of GPT-3 as the first almost-perfect copy of system-1 human thinking. This is how Kahneman chose to dichotomize how our brains work: system-1 is the fast, intuitive thinking, while system-2 is the deliberate, rational, logical thinking.
Pattern recognition is basically system-1, and it’s where all the problems of correlation ≠ causation occur, since it’s just focused on predicting things by association. And that’s basically what GPT-3 is capable of doing.
The question is then how we get to system-2 thinking, which is pretty much our competitive edge: deliberate thought. Here’s a random #idea I had on my run, one I suspect someone has already thought of: what if system-2 = system-1 + simulation? It seems to me that the crucial piece of the puzzle is being able to simulate the world, or at least some very crude model of it. Once you have the capacity to simulate the world, you can run your system-1 inferences and see how they compare to the truths of your simulation, while also making sure to update your model of the world against reality.
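A minimal sketch of the idea on a toy number-line world (all names and dynamics here are my own invention, just to make the shape of the loop concrete): a cheap, associative “system-1” proposes a step, and a “system-2” wrapper simulates each proposal against a world model, overriding proposals that the simulation says move away from the goal.

```python
def system1(state, goal):
    """Fast, associative guess: just step toward the goal."""
    return 3 if state < goal else -2

def simulate(state, action):
    """World model (here, trivially, the true dynamics)."""
    return state + action

def system2(state, goal, horizon=50):
    """Deliberate loop: propose with system-1, check the proposal in
    simulation, and override it when the simulated outcome is worse
    than staying put."""
    plan = []
    for _ in range(horizon):
        action = system1(state, goal)
        nxt = simulate(state, action)
        # System-2 veto: if the simulated result overshoots, try the
        # other available action instead.
        if abs(nxt - goal) > abs(state - goal) and nxt != goal:
            action = -2 if action == 3 else 3
            nxt = simulate(state, action)
        plan.append(action)
        state = nxt
        if state == goal:
            return plan
    return plan
```

So e.g. `system2(0, 7)` greedily proposes +3, +3, gets vetoed on the overshoot to 9, backs off with −2, and finishes with +3. The interesting (and hard) part in real systems is of course that `simulate` is itself learned and imperfect, hence the need to keep updating it against reality.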
Why Greatness Cannot be Planned
src: reddit
- book is overkill (haven’t read it), but essentially:
- examples in reinforcement learning where if all you do is optimize for an explicit objective, then the algorithm will oftentimes do silly things
- instead of moving towards the objective, why not bounce around looking for novelty, so you get serendipity (and creativity),1 which I assume is how he generalizes this principle from CS to the real world.
- feels like the whole bandit setting exploration vs exploitation problem
- this is usually discussed in the context of evolutionary algorithms, which optimize for some fitness criterion (at each generation).
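To make the contrast with fitness-based optimization concrete, here’s a minimal novelty-search sketch in that evolutionary setting (a toy 1-D “behavior” space; the function names and parameters are my own, not from the book): selection pressure comes from distance to previously seen behaviors, not from any explicit objective.

```python
import random

def novelty(candidate, archive, k=5):
    """Novelty = mean distance to the k nearest behaviors seen so far."""
    if not archive:
        return float("inf")
    dists = sorted(abs(candidate - b) for b in archive)
    return sum(dists[:k]) / min(k, len(dists))

def novelty_search(generations=30, pop_size=20):
    """Tiny 1-D 'evolution': rank the population by novelty, keep the
    most novel half as parents, archive the top few, and mutate the
    parents to form the next generation. No fitness function anywhere."""
    population = [random.gauss(0, 1) for _ in range(pop_size)]
    archive = []
    for _ in range(generations):
        ranked = sorted(population, key=lambda c: novelty(c, archive),
                        reverse=True)
        parents = ranked[: pop_size // 2]
        archive.extend(parents[:2])  # remember the most novel behaviors
        population = [p + random.gauss(0, 0.5)
                      for p in parents for _ in (0, 1)]
    return archive
```

The archive drifts outward over time, covering the behavior space instead of converging on a single optimum — which is exactly the serendipity-over-objectives argument, and why it rhymes with the exploration side of the bandit trade-off.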
On a more positive note:
- this relates to what I’ve been thinking about recently, which is, how do we move forward from the neural network training paradigm.
- does it make sense to have an objective function, and training data?
Backlinks
- [[openendedness]]
- Comes from the same people as [[why-greatness-cannot-be-planned]].