Envisioning the Future
tags: [brainstorm, _econtalk, _podcast]
Some advice Sergey Levine gave at the end of an AI #podcast: envision what you would want to see solved, then work backwards. It’s a very simple shift of perspective, but I think the exercise might be worth trying. For instance, all this work I’m doing on matrix completion and collaborative filtering doesn’t feel like it’s in service of any higher goal.
Though, now that I’ve thought about it for a second, I wonder if this exercise is more difficult in a field like statistics than in machine learning.
I think the things that drive me are sort of meta. There are things I want to see change.
- I want people to stop getting so enamored of neural methods, and to make sure that what they’re doing is actually useful. That being said, this is clearly small fry.
- I have a feeling that statistics is going to have a comeback, though not in any obvious manner. I think causal reasoning, which I guess is not really the purview of statisticians (at least not in my department), is going to be ever more important. Interpretability is basically statistics. Off-policy reinforcement learning, which is essentially out-of-sample learning, requires statistical ideas (modelling and robustness).
- I think feedback loops are the key to everything. The key to generalized intelligence, the key to fairness.
Now that I’m thinking about this more, I realize I think a lot like an academic: I’m drawn to descriptive research and sweeping generalizations about phenomena, but offer nothing prescriptive. Though the engineering side of my personality would disagree with that.
If I had to come up with concrete things that I want to accomplish:
- I think social networks have become the death of us, but they really needn’t be like that. Human connection is the most powerful thing that we have (which suggests that local/small-scale social networks are the solution).
- I don’t really think there exists an algorithmic solution to this problem, unless you want more oversight (from the AI) policing the kinds of content/sharing. But that sounds dystopian.
- signed social networks
- I really like the simple models in the social-network literature, which in some sense are just descriptive. Though you could argue there is something interesting in the emergent properties/complexity that arise from simple beginnings, which could potentially relate to how to generate consciousness.
- So, for instance, another way to think of communities is to start from some notion of balance; the rest follows.
- Though, as we already know, balance subsumes transitivity, which also creates communities.
- These ideas help us understand how we humans function. Christakis takes the next step and argues that these social attributes of ours are the drivers of the good in our lives.
- One genuinely interesting idea that has some substance is showing the value of enemies/negativity.
- A recent #econtalk #podcast episode talked about how primate brains have unfettered aggression responses. Imagine if humans had such responses: any small infraction would lead to a fight. So clearly there is a balance to be struck between passivity and unbounded belligerence.
- It feels like you definitely need the threat of violence to get coöperation.
- Animosity is simply a small-scale, constant version of belligerence.
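The balance-subsumes-transitivity point above can be made concrete. A minimal sketch, on a hypothetical signed graph: a triangle is balanced when the product of its edge signs is positive ("the enemy of my enemy is my friend"), and in a fully balanced network the nodes split into at most two factions, i.e. communities fall out of balance alone. All node names and edges below are made up for illustration.

```python
from itertools import combinations

# Hypothetical signed graph: +1 = friends, -1 = enemies.
signs = {
    ("a", "b"): +1, ("a", "c"): -1, ("a", "d"): +1,
    ("b", "c"): -1, ("b", "d"): +1, ("c", "d"): -1,
}

def sign(u, v):
    """Edge sign regardless of orientation (None if no edge)."""
    return signs.get((u, v), signs.get((v, u)))

def is_balanced(nodes):
    """Balanced iff every complete triangle's edge signs multiply to +1."""
    for tri in combinations(sorted(nodes), 3):
        s = [sign(u, v) for u, v in combinations(tri, 2)]
        if None not in s and s[0] * s[1] * s[2] < 0:
            return False
    return True

def factions(nodes):
    """Greedy two-coloring: friends share a side, enemies split sides."""
    side, frontier = {min(nodes): 0}, [min(nodes)]
    while frontier:
        u = frontier.pop()
        for v in nodes:
            if v not in side and sign(u, v) is not None:
                side[v] = side[u] if sign(u, v) > 0 else 1 - side[u]
                frontier.append(v)
    return side

nodes = {"a", "b", "c", "d"}
print(is_balanced(nodes))  # True for this graph
print(factions(nodes))     # {a, b, d} on one side, {c} on the other
```

The point of the sketch is that the faction split is never computed from any community-detection objective; it is implied entirely by the local balance constraint, which is the "the rest follows" claim.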
Big Questions
Backlinks
- [[implicit-regularization]]
- In the spirit of [[envisioning-the-future]], let’s think about the key questions in the area of implicit regularization.