202005182341
Why Greatness Cannot be Planned
tags: [ src:book , artificial_intelligence ]
source: reddit
- book is overkill (haven’t read it), but essentially:
- examples in reinforcement learning where if all you do is optimize for an explicit objective, then the algorithm will oftentimes do silly things
- instead of moving towards the objective, why not bounce around looking for novelty, so you get serendipity (and creativity), which I assume is how he is able to generalize this principle from CS to the real world.
- feels like the whole bandit setting exploration vs exploitation problem
- this is usually discussed in the context of evolutionary algorithms, which optimize for some fitness criterion at each generation.
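The novelty-search idea above can be sketched concretely. This is my own minimal illustration (not code from the book): individuals are 2D "behaviors", and selection uses only novelty (mean distance to the k nearest behaviors seen so far), with no objective at all. All parameter values are arbitrary assumptions.

```python
import math
import random

def novelty(candidate, others, k=5):
    """Mean distance to the k nearest neighbors among `others`
    (archive + current population) -- the standard novelty metric."""
    dists = sorted(math.dist(candidate, o) for o in others if o is not candidate)
    return sum(dists[:k]) / min(k, len(dists)) if dists else float("inf")

def novelty_search(generations=50, pop_size=20, seed=0):
    rng = random.Random(seed)
    # start everyone clustered near the origin
    population = [(rng.gauss(0, 0.1), rng.gauss(0, 0.1)) for _ in range(pop_size)]
    archive = []
    for _ in range(generations):
        # rank by novelty, not by any fitness objective
        scored = sorted(population,
                        key=lambda p: novelty(p, archive + population),
                        reverse=True)
        archive.extend(scored[:3])  # the most novel behaviors enter the archive
        parents = scored[:pop_size // 2]
        # mutate parents to form the next generation
        population = [(p[0] + rng.gauss(0, 0.3), p[1] + rng.gauss(0, 0.3))
                      for p in parents for _ in range(2)]
    return archive

archive = novelty_search()
```

Because being near previously seen behaviors scores poorly, the archive spreads outward over time; the population "bounces around" the space rather than converging on a target.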
on a more positive note:
- this relates to what I’ve been thinking about recently, which is: how do we move beyond the current neural network training paradigm?
- does it make sense to have an objective function, and training data?
Backlinks
- [[openendedness]]
- Comes from the same people as [[why-greatness-cannot-be-planned]].