Drop off the truck today!
Agents and expensive information
- Antonio sent a note asking if I’d be interested in contributing to a chapter. Sent him this response:
-
There is something that I’d like to explore that might fit. It’s the idea that in most environments, agents (animal, human, machine, etc.) are incentivized to cheat. I think this is because information is expensive to produce, but essentially free to copy. The problem is that if all the agents cheat, then the system will collapse because the agents become decoupled from reality (what I call a stampede). So the system as a whole is incentivized to somehow restrict cheating.
-
I think this could be very interesting to work through, but I don’t have a model (or even an approach really) developed that would describe it. I think that this might be related to game theory, though I haven’t found much in the literature.
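- The intuition can be poked at with a toy simulation, even without a real model. This is purely a hypothetical sketch of my own framing: honest agents pay to sample a drifting ground truth, cheaters copy another agent’s belief for free, and the population’s error from reality is measured at the end. None of the parameters or names come from any actual project code.

```python
import random

def simulate(n_agents=100, cheat_frac=0.5, steps=200, seed=0):
    """Toy model: honest agents sample a drifting ground truth (the
    expensive option); cheaters copy another agent's belief (the free
    option). Returns the mean absolute error of beliefs vs. reality."""
    rng = random.Random(seed)
    truth = 0.0
    beliefs = [0.0] * n_agents
    n_cheaters = int(n_agents * cheat_frac)
    for _ in range(steps):
        truth += rng.gauss(0, 1)  # reality drifts each step
        for i in range(n_agents):
            if i < n_cheaters:
                # cheat: copy a random agent's belief at no cost
                beliefs[i] = beliefs[rng.randrange(n_agents)]
            else:
                # honest: noisy but direct sample of reality
                beliefs[i] = truth + rng.gauss(0, 0.1)
    return sum(abs(b - truth) for b in beliefs) / n_agents
```

With mostly honest agents the population tracks the drifting truth closely; as the cheater fraction approaches one, nobody is sampling reality anymore and the beliefs decouple from it entirely – the stampede case.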
-
GPT-2 Agents
- Working on building a text corpus. Going to add a search for “Opening” and “Variation”, which I’ll try before using the DB version – done
- Having a problem that starts after a few games. Found the culprit game. Will work on it tomorrow. It might be tied to a linefeed?
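- If the culprit really is a linefeed, a quick normalization pass over the game text would confirm it. A hypothetical helper, not the actual corpus-building code – it assumes PGN-style input where bracketed tag lines stay separate and wrapped move text should be joined into one line:

```python
def normalize_game_text(raw: str) -> str:
    """Collapse Windows/Mac line endings and join wrapped move-text
    lines so each game's moves form one continuous line. Hypothetical
    helper; the real corpus code may differ."""
    text = raw.replace("\r\n", "\n").replace("\r", "\n")
    out, buffer = [], []
    for ln in (s.strip() for s in text.split("\n")):
        if ln == "":  # blank line ends a movetext block
            if buffer:
                out.append(" ".join(buffer))
                buffer = []
        elif ln.startswith("["):  # tag/header lines stay on their own
            if buffer:
                out.append(" ".join(buffer))
                buffer = []
            out.append(ln)
        else:
            buffer.append(ln)  # accumulate wrapped move text
    if buffer:
        out.append(" ".join(buffer))
    return "\n".join(out)
```

Running the raw game that breaks the parser through something like this before tokenizing would show whether the failure is really a stray `\r` or mid-game linefeed.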
GOES
- Working on the GVSETS paper and slide deck