…clearly Biden was a net drag on McAuliffe. Overall, Virginians disapproved of Biden’s handling of the presidency by a 10-point margin, with nearly half saying they “strongly disapprove” — double the percentage who strongly approved. Nearly 3 in 10 Virginia voters said their vote was meant to express opposition to Biden, network exit polls found, compared to the 2 in 10 who said their vote was to express support for Biden. The economy was by far the most important issue driving Virginia voters, and people who put the economy at the top of their list favored Youngkin by a dozen percentage points. (Washington Post)
I just found this: https://github.com/google-research/tiny-differentiable-simulator
It appears to be a NN-enhanced physics sim: “TDS can run thousands of simulations in parallel on a single RTX 2080 CUDA GPU at 50 frames per second.”
Here are the relevant papers:
- “NeuralSim: Augmenting Differentiable Simulators with Neural Networks”, Eric Heiden, David Millard, Erwin Coumans, Yizhou Sheng, Gaurav S. Sukhatme. PDF on arXiv
- “Augmenting Differentiable Simulators with Neural Networks to Close the Sim2Real Gap”, RSS 2020 sim-to-real workshop, Eric Heiden, David Millard, Erwin Coumans, Gaurav Sukhatme. PDF on arXiv and video
- “Interactive Differentiable Simulation”, 2020, Eric Heiden, David Millard, Hejia Zhang, Gaurav S. Sukhatme. PDF on arXiv
I also found this MIT thesis from 2019: Augmenting physics simulators with neural networks for model learning and control
GPT Agents
- Finished training the balanced model and am re-running the original prompts
- A strongly negative prompt produces a distribution of low-rated reviews. Here’s an example of GPT generating reviews in response to a slightly negative set of prompts ([there are absolutely no vegetarian options], [there is not a single vegetarian option on the menu], [the menu has no vegetarian options]), compared with the ground truth of the Yelp database returning reviews and ratings that match the string ‘%no vegetarian options%’:
- The distribution of star ratings is obviously different too:
- As you can see on the right, the ground truth is distinctly different. The correlation coefficient between the two distributions on the right is -0.4, while it’s well above 0.9 when comparing any of the three distributions to the left.
- So it’s clear that the model is biased toward positive reviews. In fact, looking at the baseline distribution from the first 1,000 reviews of restaurants in the ‘American’ category, we can see the underlying distribution the model was trained on:
- The new question to answer: what happens to the responses when the training data is balanced across star ratings? Also, I realize that I need to run a pass through the models with just a ‘review:’ prompt.
- Dammit, the ‘balanced’ training corpus isn’t actually balanced. Need to fix that and re-train.
- 4:15 Meeting
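The ground-truth comparison above can be sketched roughly as follows. This assumes the Yelp dump lives in a SQLite `review` table with `stars` and `text` columns (a hypothetical schema; a tiny in-memory stand-in keeps the sketch runnable), and the model-side counts are placeholders, not real results:

```python
import sqlite3
from collections import Counter

import numpy as np

# Hypothetical schema: Yelp reviews in a 'review' table with
# 'stars' (1-5) and 'text'. In-memory sample data for illustration.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE review (stars INTEGER, text TEXT)")
conn.executemany(
    "INSERT INTO review VALUES (?, ?)",
    [
        (1, "there are absolutely no vegetarian options here"),
        (2, "the menu has no vegetarian options at all"),
        (4, "great burgers, no vegetarian options though"),
        (5, "loved everything about this place"),
    ],
)

# Ground truth: ratings of reviews matching the substring pattern.
rows = conn.execute(
    "SELECT stars FROM review WHERE text LIKE '%no vegetarian options%'"
).fetchall()
ground_truth = Counter(stars for (stars,) in rows)

def as_vector(counts):
    """Counts over 1-5 stars -> normalized probability vector."""
    v = np.array([counts.get(s, 0) for s in range(1, 6)], dtype=float)
    return v / v.sum()

# Placeholder for the star distribution of the GPT-generated reviews.
model = Counter({1: 10, 2: 6, 3: 4, 4: 3, 5: 2})

# Pearson correlation between the two star-rating distributions.
r = np.corrcoef(as_vector(ground_truth), as_vector(model))[0, 1]
print(f"Pearson r between the two star distributions: {r:.2f}")
```

With real data, a strongly negative r (like the -0.4 noted above) indicates the model's ratings skew the opposite way from the ground truth.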
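Re-balancing the training corpus could be sketched as downsampling each star class to the size of the rarest class. This is an illustrative approach, not the actual training pipeline; the function and data names are made up:

```python
import random
from collections import defaultdict

def balance_by_stars(reviews, seed=0):
    """Downsample so every star rating has equally many reviews.

    'reviews' is a list of (stars, text) pairs; returns a shuffled,
    class-balanced subset. Illustrative only.
    """
    by_stars = defaultdict(list)
    for stars, text in reviews:
        by_stars[stars].append((stars, text))

    n = min(len(group) for group in by_stars.values())  # rarest class size
    rng = random.Random(seed)
    balanced = []
    for group in by_stars.values():
        balanced.extend(rng.sample(group, n))  # downsample without replacement
    rng.shuffle(balanced)
    return balanced

# Example: 1-star is rarest (2 reviews), so every class is cut to 2.
corpus = (
    [(1, f"bad {i}") for i in range(2)]
    + [(3, f"ok {i}") for i in range(5)]
    + [(5, f"great {i}") for i in range(9)]
)
balanced = balance_by_stars(corpus)
print(len(balanced))  # 3 classes x 2 each = 6
```

Downsampling is the simplest fix; oversampling the rare classes instead would keep more of the positive reviews at the cost of duplicates in the corpus.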
SBIRs
- MDA costing meeting
- Work on building first pass map. It’s actually working pretty well! Need to write an example script for tomorrow
- Need to create some views