#gpt
System 1 and 2
I forget where this was mentioned (either one of the AI podcast episodes, or this numenta video), but the idea is that we can think of GPT-3 as the first almost-perfect copy of System 1 human thinking. That's Kahneman's dichotomy of how our brains work: System 1 is fast, intuitive thinking, while System 2 is deliberate, rational, logical thinking.
Pattern recognition is basically System 1, and it's where all the correlation ≠ causation problems occur, since it's just focused on predicting things by association. And that's essentially what GPT-3 is capable of doing.
The question is then how we get to System 2 thinking, which is pretty much our competitive edge: deliberate thought. Here's a random #idea I had on my run, and that I suspect someone has already thought of: what if System 2 = System 1 + simulation? The crucial piece of the puzzle seems to be the ability to simulate the world, or at least some very crude model of it. Once you have the capacity to simulate the world, you can run your System 1 inferences, check them against the truths of your simulation, and keep updating your model of the world against reality.
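As a toy illustration of that idea (entirely my own sketch, not anything from Kahneman or GPT-3 itself), you can picture System 1 as a fast, noisy pattern-matcher and the simulation as a slow but trustworthy world model; "System 2" is then just the loop that proposes with the first and verifies with the second. Everything here (`system1_guess`, `simulate`, the addition task) is a made-up stand-in:

```python
import random

def system1_guess(a, b):
    """Fast, associative guess: mostly right, occasionally off by one."""
    return a + b + random.choice([-1, 0, 0, 0, 1])

def simulate(a, b):
    """Crude 'world model': slowly carry out the addition step by step."""
    total = a
    for _ in range(b):
        total += 1
    return total

def system2(a, b, max_tries=10):
    """Deliberate loop: propose with System 1, check against the simulation."""
    for _ in range(max_tries):
        guess = system1_guess(a, b)
        if guess == simulate(a, b):
            return guess
    # If intuition keeps failing, fall back to the slow model itself.
    return simulate(a, b)
```

The point of the sketch is only the shape of the loop: cheap intuitive proposals, filtered through an expensive-but-grounded check.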
Language Generation
- What if you started with GPT-2, fine-tuned it on erotica (the bdsm tag from an erotic story site), and then prompted it with Bible verses? comedy gold via reddit
- seems a little crude, to have to do the “conditioning” via these “prompts”
	- although, in some sense, it's sort of like an initial seed; i wonder if it's possible to expose the conditioning as an explicit parameter instead
	- or, at the very least, it seems really crude to have to keep re-prompting with the generated text (via some kind of sliding window over the context, which I need to understand better)
- this is part of something called NaNoGenMo, which is a really fun application of all these NLP techniques #teaching
- links:
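For my own reference, here's how I understand the sliding-window re-prompting to work, as a minimal sketch (the `next_token` function is a hypothetical stand-in for a real language model's sampler, and `window` stands in for the model's fixed context length):

```python
def next_token(context):
    # Hypothetical stand-in for a language model: in GPT-2 this would be
    # a forward pass plus sampling from the predicted distribution.
    return f"tok{len(context)}"

def generate(prompt_tokens, n_new, window=1024):
    """Generate n_new tokens, re-feeding only the last `window` tokens.

    The model never sees more than `window` tokens at once, so long
    generations are conditioned on a truncated, sliding view of the text.
    """
    tokens = list(prompt_tokens)
    for _ in range(n_new):
        context = tokens[-window:]          # slide: keep only the most recent tokens
        tokens.append(next_token(context))  # condition on that truncated context
    return tokens
```

If this is right, it also explains why the "prompt" feels like an initial seed: it's just the first contents of the window, and it eventually slides out of view entirely.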