How do people learn the large, complex web of social relations around them? We test how people use information about social features (such as being part of the same club or sharing hobbies) to fill in gaps in their knowledge of friendships and to make inferences about unobserved friendships in the social network. We find the ability to infer friendships depends on a simple but inflexible heuristic that infers friendship when two people share the same features, and a more complex but flexible cognitive map that encodes relationships between features rather than between people. Our results reveal that cognitive maps play a powerful role in shaping how people represent and reason about relationships in a social network.
Hmm. Can’t download the PDF or read the full article
The spread of misinformation is a global phenomenon, with implications for elections, state-sanctioned violence, and health outcomes. Yet, even though scholars have investigated the capacity of fact-checking to reduce belief in misinformation, little evidence exists on the global effectiveness of this approach. We describe fact-checking experiments conducted simultaneously in Argentina, Nigeria, South Africa, and the United Kingdom, in which we studied whether fact-checking can durably reduce belief in misinformation. In total, we evaluated 22 fact-checks, including two that were tested in all four countries. Fact-checking reduced belief in misinformation, with most effects still apparent more than 2 weeks later. A meta-analytic procedure indicates that fact-checks reduced belief in misinformation by at least 0.59 points on a 5-point scale. Exposure to misinformation, however, only increased false beliefs by less than 0.07 points on the same scale. Across continents, fact-checks reduce belief in misinformation, often durably so.
GPT Agents
Finished reading in Andreea’s data. I’m going to add a column called ‘test’ that holds some text I can use to judge the quality of training. I’ll start with the values ‘ten’, ‘twenty’, ‘thirty’, and ‘forty’, distributed at roughly those percentages. That way we’ll be able to compare the percentages in the generated text against the original. Done with the original
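A sketch of that column add (the ‘test’ column and values are real; the file names and DataFrame handling here are stand-ins):

import numpy as np
import pandas as pd

# Stand-in file names -- the real corpus is Andreea's NZ Twitter data.
df = pd.read_csv("nz_tweets.csv")

# Add a 'test' column whose values occur at known rates (10/20/30/40%),
# so the rates in generated text can be checked against the original.
rng = np.random.default_rng(seed=1)
df["test"] = rng.choice(["ten", "twenty", "thirty", "forty"],
                        size=len(df), p=[0.1, 0.2, 0.3, 0.4])
df.to_csv("nz_tweets_tagged.csv", index=False)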
Create corpora and start training model.
Built corpora
Training!
Done! Need to verify the test percentages
[[[month:August location:Auckland text:@rnz_news @NZStuff @minhealthnz @NewshubNZ @jacindaardern @simonjbridges @nzlabour I have a few theories but they are completely illogical. My theory is that many in government and opposition are too trusting, while many in the media are too partisan. #covid19nz #covid19_nz #nzpol, test:twenty]]][[[month:August location:New Zealand text:Dr Liz Gordon: NZ’s Covid-19 response a failure
[[[month:April location:New Zealand text:“As they travel around the world, as we go back to the U.S., it is critical that they be able to meet with health officials and other trusted advisers to update their status”. https://t.co/7kZk6HtRcV #covid19nz #Healthandsafety, test:forty]]][[[month:April location:Wellington City, New Zealand text:It's getting harder and harder to resist the temptation to throw shade at the PM's leadership. She's deliberately and deliberately slipping up
[[[month:April location:Wellington, New Zealand text:New Zealand will now have a COVID-19 emergency alert system. A system based on scientific certainty, based on best informed research. The sooner we use science the sooner we’ll all get back to normal life. This is a global challenge. https://t.co/YFpI6zWgv7 #coronavirus #COVID19nz, test:forty]]][[[month:April location:Wellington text:New Zealand is now in #COVID19nz mode. The system works
[[[month:April location:New Zealand text:A month of #Covid19nz has taught me to trust #SocialDistancing and not to accept #selfishness. So much of #NZtourism comes from poor, vulnerable, and elderly people. If you or someone you know has #Covid19NZ symptoms, please report them to contact tracing at 0800 451 9453 https://t.co/Jw2i0U8Jqn, test:forty]]][[[month:April location:New Zealand text:@MatthewHootonNZ @TheAMShowNZ
[[[month:May location:Auckland, NZ text:#coronavirusnz #COVID19nz One of the new covid-19 cases reported in Queenstown this week is a case in the community. https://t.co/XmqEZd2aDw, test:forty]]][[[month:May location:Muriwai, Aotearoa text:Māori Health Minister Māori Party @RikkiRakaka @nzlabour https://t.co/gwqEZqNrA7 #COVID19 #CO
[[[month:May location:Wellington, New Zealand text:This is a welcome relief to many. Here's an idea: don't just sell as much as you can. Instead, take out the cash and start collecting. #COVID19nz, test:forty]]][[[month:May location:0 text:Can't say my children are very good at math - and in math classes I find they get lots of confused - but when I read someone ask them "how many years of age do they still live with?", they instantly burst into laughter. #nzpol #covid19nz
[[[month:April location:Auckland, New Zealand text:Great article by @Kiwi_Country to explain the importance of #COVID19nz and how to use your personal details to protect your community. Great info in the article https://t.co/v9L0GtX8z3 https://t.co/s3Nm3z3qCZ, test:ten]]][[[month:April location:Auckland, New Zealand text:My thoughts: #covid19NZ #NewZealandLockdown https://t.co
[[[month:June location:Aotearoa, New Zealand text:?♂️ #Covid_19 #COVID19nz https://t.co/Nb8Cq8xFZJ, test:forty]]][[[month:June location:Te Upoko o Te Ika a Maui text:Māori #COVID19nz #lockdownnz https://t.co/tS9QjkFZhM, test:forty]]][[[month:June location:Christchurch City, New Zealand text:What
[[[month:June location:Auckland, New Zealand text:It's important to be clear about the amount of work we can do to safeguard the community and the health and wellbeing of NZers. Read this: https://t.co/vh0MxwKlG7 #COVID19nz, test:forty]]][[[month:June location:New Zealand text:“All the good work that the Govt's emergency plans have done” @SiouxsieW #covid19nz https://t.co/fDZFdZJ0
[[[month:March location:Wellington City, New Zealand text:RT TheDailyBlogNZ "Life in Lock Down: Day 2 | Frank Macskasy - The Daily Blog https://t.co/4tP8dZZWmA #nzpol #covid19nz https://t.co/HmDw5BcE5j", test:forty]]][[[month:March location:0 text:Life in Lock Down: Day 2 | Frank Macskasy - The Daily Blog https://t.co/xWZFyLk3
[[[month:April location:New Zealand text:MEDIA WATCH: Jacinda destroys Duncan Garner | The Daily Blog https://t.co/Hk5VZ9oTkC #nzpol #covid19nz https://t.co/p7tMkLH9Rk", test:ten]]][[[month:April location:New Zealand text:GUEST BLOG: Geoff Simmons – The Price of Citizenship | The Daily Blog https://t.co/JKc0NqEqH #nzpol #covid19nz https://t.
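Verifying the test percentages in samples like those above should just be a matter of counting tags in each corpus, something like this (file names are placeholders):

import re
from collections import Counter

def tag_rates(path: str) -> dict:
    # Count each test:<value> tag and return its share of all tags.
    text = open(path, encoding="utf-8").read()
    counts = Counter(re.findall(r"test:(ten|twenty|thirty|forty)", text))
    total = sum(counts.values())
    return {k: round(v / total, 3) for k, v in counts.items()}

print("original :", tag_rates("nz_tweets_tagged.txt"))  # expect ~0.1/0.2/0.3/0.4
print("generated:", tag_rates("generated_tweets.txt"))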
4:15 UMBC Meeting. We’ll try French, Chinese, (and Mexican) to see if the ratings change
We present a method of generating a collection of neural cellular automata (NCA) to design video game levels. While NCAs have so far only been trained via supervised learning, we present a quality diversity (QD) approach to generating a collection of NCA level generators. By framing the problem as a QD problem, our approach can train diverse level generators, whose output levels vary based on aesthetic or functional criteria. To efficiently generate NCAs, we train generators via Covariance Matrix Adaptation MAP-Elites (CMA-ME), a quality diversity algorithm which specializes in continuous search spaces. We apply our new method to generate level generators for several 2D tile-based games: a maze game, Sokoban, and Zelda. Our results show that CMA-ME can generate small NCAs that are diverse yet capable, often satisfying complex solvability criteria for deterministic agents. We compare against a Compositional Pattern-Producing Network (CPPN) baseline trained to produce diverse collections of generators and show that the NCA representation yields a better exploration of level-space.
This could be an interesting scenario generator
GPT Agents
Started on importer
Book
Send out emails to agents!
SBIRs
Got all the stories done. Need to assign points, etc.
1:00 Sprint planning meeting
Decided to try to put everything into a Tkinter app. I already know the framework pretty well; I just need to brush up. This way I’ll be able to reuse a lot of the GraphNavigator code
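For my own reference, the kind of skeleton I’d be starting from (a minimal sketch; the class name and layout are placeholders, not actual GraphNavigator code):

import tkinter as tk
from tkinter import ttk

class AppShell(tk.Tk):
    # Placeholder shell that reused GraphNavigator widgets could plug into.
    def __init__(self):
        super().__init__()
        self.title("workbench (placeholder)")
        self.geometry("800x600")
        frame = ttk.Frame(self, padding=10)
        frame.pack(fill=tk.BOTH, expand=True)
        ttk.Label(frame, text="widgets go here").pack()

if __name__ == "__main__":
    AppShell().mainloop()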
Meeting with Andreea and her student, ___. We’re going to train up a model on their NZ Twitter corpora
SBIRs
Updated last sprint’s stories and put together slides for demos
Work on stories for next sprint
Work on getting more content into GML files. Got it working:
node [
  id 1
  label "Canada"
  weight 150222.0
  long_text "A random number: 0.13436424411240122"
]
And after going through Gephi and getting positions, colors, and sizes:
node
[
  id 0
  label "Bahamas"
  graphics
  [
    x 78.24309
    y 161.46931
    z 0.0
    w 20.0
    h 20.0
    d 20.0
    fill "#edf8fb"
  ]
  weight "4179.0"
  long_text "A random number: 0.763774618976614"
]
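For reference, the pre-Gephi file can be produced with networkx, which round-trips arbitrary node attributes to GML (a minimal sketch; the graph contents mirror the examples above):

import random
import networkx as nx

# Extra node attributes (weight, long_text) survive the round trip to GML.
# Gephi then adds the graphics [...] block when it exports.
g = nx.Graph()
g.add_node("Canada", weight=150222.0,
           long_text=f"A random number: {random.random()}")
g.add_node("Bahamas", weight=4179.0,
           long_text=f"A random number: {random.random()}")
g.add_edge("Canada", "Bahamas")

nx.write_gml(g, "countries.gml")
g2 = nx.read_gml("countries.gml")
print(g2.nodes["Canada"]["long_text"])  # attributes come back intact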
Welcome to the 16th issue of the Papers with Code newsletter. In this edition, we cover:
some of the latest developments in language modeling,
efficient Transformer models for long text modeling,
advancements in code understanding and generation,
top trending ML papers of August 2021,
GPT-Agents
Created a table of filtered results (%coronavirus%, %chinavirus%, and %sars-cov-2%) with 1,000 of each and ran sentiment to compare
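The filter-and-score step is roughly this (a sketch; the table and column names are guesses, with sentiment via NLTK’s VADER):

# Pull 1,000 rows per LIKE pattern and score sentiment for comparison.
# Requires nltk.download('vader_lexicon').
import sqlite3
from nltk.sentiment import SentimentIntensityAnalyzer

sia = SentimentIntensityAnalyzer()
conn = sqlite3.connect("gpt_experiments.db")

for pattern in ("%coronavirus%", "%chinavirus%", "%sars-cov-2%"):
    rows = conn.execute(
        "SELECT text FROM tweets WHERE text LIKE ? LIMIT 1000", (pattern,)
    ).fetchall()
    scores = [sia.polarity_scores(text)["compound"] for (text,) in rows]
    print(pattern, sum(scores) / len(scores))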
Well crap, the carriage returns in the ground truth are messing everything up. Need to write some code to pull the rows, fix them, and put them back into the table. Not today!
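When I do get to it, the fix should just be a pull/clean/update loop (same caveats about names):

# Strip stray carriage returns from ground-truth rows and write back.
import sqlite3

conn = sqlite3.connect("gpt_experiments.db")
for row_id, text in conn.execute("SELECT id, text FROM ground_truth").fetchall():
    fixed = text.replace("\r\n", " ").replace("\r", " ").replace("\n", " ")
    if fixed != text:
        conn.execute("UPDATE ground_truth SET text = ? WHERE id = ?",
                     (fixed, row_id))
conn.commit()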
SBIRs
Write new stories
Continue working on storing additional information in networkx nodes
Book
2:00 Meeting with Michelle. Finish cover letters! Done! Maybe? Tweaked a bit more
9:15 Standup. Not sure what to talk about here given the new schedule craziness
It also occurs to me that since I’ll be adapting my academic research code to produce the demo, no new IP is being developed for anyone in this effort.
More poking at Svelte with Zach? Some progress. Still can’t get it to switch pages
11:00 Kickoff meeting – looks like we have a bit more time
2:00 Adversarial reinforcement tagup
GPT Agents
Need to generate new tweets from the chinavirus, covid, and sars-cov-2 models using the prompt ‘[[[‘ as a baseline to compare with the ground truth – done!
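The generation pass looks roughly like this with the HuggingFace API (a sketch; the model directory names are placeholders for the finetuned checkpoints):

from transformers import GPT2LMHeadModel, GPT2Tokenizer

# Placeholder paths for the finetuned chinavirus/covid/sars-cov-2 checkpoints.
for name in ("./chinavirus_model", "./covid_model", "./sars_cov_2_model"):
    tokenizer = GPT2Tokenizer.from_pretrained(name)
    model = GPT2LMHeadModel.from_pretrained(name)
    ids = tokenizer("[[[", return_tensors="pt").input_ids
    out = model.generate(ids, do_sample=True, max_length=200, top_k=50,
                         top_p=0.95, pad_token_id=tokenizer.eos_token_id)
    print(tokenizer.decode(out[0], skip_special_tokens=True))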
Need to sample ground truth and put it in the gpt_experiments tables
We present a new dataset of Wikipedia articles each paired with a knowledge graph, to facilitate the research in conditional text generation, graph generation and graph representation learning. Existing graph-text paired datasets typically contain small graphs and short text (1 or few sentences), thus limiting the capabilities of the models that can be learned on the data. Our new dataset WikiGraphs is collected by pairing each Wikipedia article from the established WikiText-103 benchmark (Merity et al., 2016) with a subgraph from the Freebase knowledge graph (Bollacker et al., 2008). This makes it easy to benchmark against other state-of-the-art text generative models that are capable of generating long paragraphs of coherent text. Both the graphs and the text data are of significantly larger scale compared to prior graph-text paired datasets. We present baseline graph neural network and transformer model results on our dataset for 3 tasks: graph -> text generation, graph -> text retrieval and text -> graph retrieval. We show that better conditioning on the graph provides gains in generation and retrieval quality but there is still large room for improvement.
Truck stuff – need to verify that they know it’s a 2016
Reviewing papers
SBIRs
Continuing to work on Svelte. Trying to get previous useful lessons to show up as pages, but they are Svelte files, not HTML, so I’m not sure how to point to them
Pre-meeting
Scheduling. Orest wants to finish Oct 29, but we’re already a week into September, so I’m going to counter with Nov 5
Get slides done for Thurs meeting. Tried to get MARCOM to help with formatting, but the fuse is too short
Orest set up a meeting that conflicts with the GPT meeting. Trying to get him to move it, otherwise send a note that I will be about 15 min late
GPT Agents
Go over untrained model results
See if we can make the chess models talk about having tea with the Queen
Neural networks have been adapted to leverage the structure and properties of graphs. We explore the components needed for building a graph neural network – and motivate the design choices behind them.
Book
Working on tweaks for today’s meeting
2:00 Meeting
SBIRs
Continue with Svelte
I seem to have been able to get TypeScript set up and running:
Work on finding a venue for the automating imagination paper
OED Definition of imagination:
The power or capacity to form internal images or ideas of objects and situations not actually present to the senses, including remembered objects and situations, and those constructed by mentally combining or projecting images of previously experienced qualities, objects, and situations. Also (esp. in modern philosophy): the power or capacity by which the mind integrates sensory data in the process of perception.
Also, using GNNs as a way of storing the relationships between the texts generated by the GPT
No public health authority should rely on an AI system to make recommendations, of course. But as they grow in power and reach, AI systems could become another tool in leaders’ belts, allowing them to quickly parse existing scientific knowledge for insights that could help to guide in-the-moment decision-making. As the systems become better at citing their sources and explaining their output, their value as tools for guiding decision-making will only grow, because the validity of their predictions can be checked and vetted.
SBIRs
7:30 Meeting with Zach. I’m going to see if he agrees with the “front-end-first” approach I’d like to try. He agrees, so I’m working my way through the tutorial
To install a template project as per here, you have to use the git command line app
Installing the template project from the GIT command line
That creates the following structure:
Project structure in IntelliJ
Then to run the app, I use the terminal and press <ctrl>-Enter:
Getting things running
This handles hot deployment in the browser, so I think I’m doing it right?
Looking more deeply at Svelte and thinking about building a standalone frontend that doesn’t interact with websockets but fakes that functionality, so that when the Python connections are added in, it just works?
So we’re officially done in Afghanistan now? One of these years, I’m going to try to figure out what the response to 9/11 cost, what the expectations were, and what actually happened
SBIRs
Working with Zach on the webapp. We may be able to do all this with websockets and no server
Sprint planning – done
Starting on websockets. Installed the websockets package. I also installed asyncio before realizing it’s part of the Python standard library. That’s nice! Uninstalled it and everything still works
The hello world works!
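“Hello world” here being essentially the websockets library’s own example, which looks like this (paraphrased from the docs):

import asyncio
import websockets

# Echo a greeting back to whoever connects on port 8765.
async def hello(websocket, path):
    name = await websocket.recv()
    await websocket.send(f"Hello, {name}!")

start_server = websockets.serve(hello, "localhost", 8765)
asyncio.get_event_loop().run_until_complete(start_server)
asyncio.get_event_loop().run_forever()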
Took a detour down SSL and got stuck on cert format issues? Look at that later
Sending data to the browser:
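On the server side that’s roughly a push loop like this (a sketch; the JSON payload is invented):

import asyncio
import json
import websockets

# Push a counter to the browser once a second over the open socket.
async def produce(websocket, path):
    count = 0
    while True:
        await websocket.send(json.dumps({"count": count, "msg": "hello, browser"}))
        count += 1
        await asyncio.sleep(1)

start_server = websockets.serve(produce, "localhost", 8765)
asyncio.get_event_loop().run_until_complete(start_server)
asyncio.get_event_loop().run_forever()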
That works too!
GPT-Agents
Still cranking on generating reviews with the untrained model
3:00 Meeting. Made a bet with Shimei that the 800k chess model has forgotten that the Queen could drink tea. We’ll see if we can prompt the model to talk about something other than chess next week
If you want to summarize your research in a sentence… have an AI do it. SciTLDR sums up papers given an abstract, intro & conclusion. And it works impressively well: https://scitldr.apps.allenai.org (Via Twitter)
Recently, many datasets have been proposed to test the systematic generalization ability of neural networks. The companion baseline Transformers, typically trained with default hyper-parameters from standard tasks, are shown to fail dramatically. Here we demonstrate that by revisiting model configurations as basic as scaling of embeddings, early stopping, relative positional embedding, and Universal Transformer variants, we can drastically improve the performance of Transformers on systematic generalization. We report improvements on five popular datasets: SCAN, CFQ, PCFG, COGS, and Mathematics dataset. Our models improve accuracy from 50% to 85% on the PCFG productivity split, and from 35% to 81% on COGS. On SCAN, relative positional embedding largely mitigates the EOS decision problem (Newman et al., 2020), yielding 100% accuracy on the length split with a cutoff at 26. Importantly, performance differences between these models are typically invisible on the IID data split. This calls for proper generalization validation sets for developing neural networks that generalize systematically. We publicly release the code to reproduce our results.
SBIRs
Got the client communicating with the server using Websockets and the server relaying those messages to RabbitMQ!
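The relay itself is conceptually simple (a sketch using pika; queue and host names are placeholders):

import asyncio
import pika
import websockets

# Republish anything the websocket client sends onto a RabbitMQ queue.
connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.queue_declare(queue="client_messages")

async def relay(websocket, path):
    async for message in websocket:
        channel.basic_publish(exchange="", routing_key="client_messages",
                              body=message)

start_server = websockets.serve(relay, "localhost", 8765)
asyncio.get_event_loop().run_until_complete(start_server)
asyncio.get_event_loop().run_forever()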
Travel-time prediction constitutes a task of high importance in transportation networks, with web mapping services like Google Maps regularly serving vast quantities of travel time queries from users and enterprises alike. Further, such a task requires accounting for complex spatiotemporal interactions (modelling both the topological properties of the road network and anticipating events — such as rush hours — that may occur in the future). Hence, it is an ideal target for graph representation learning at scale. Here we present a graph neural network estimator for estimated time of arrival (ETA) which we have deployed in production at Google Maps. While our main architecture consists of standard GNN building blocks, we further detail the usage of training schedule methods such as MetaGradients in order to make our model robust and production-ready. We also provide prescriptive studies: ablating on various architectural decisions and training regimes, and qualitative analyses on real-world situations where our model provides a competitive edge. Our GNN proved powerful when deployed, significantly reducing negative ETA outcomes in several regions compared to the previous production baseline (40+% in cities like Sydney).
I think that the GNNs should be usable to produce the maps themselves. Need to try this with simulation
Created a folder for Graph Neural Network research
Ride down to DC today for this and hopefully not get wet!
Isaac Gym offers a high performance learning platform to train policies for wide variety of robotics tasks directly on GPU. Both physics simulation and the neural network policy training reside on GPU and communicate by directly passing data from physics buffers to PyTorch tensors without ever going through any CPU bottlenecks. This leads to blazing fast training times for complex robotics tasks on a single GPU with 2-3 orders of magnitude improvements compared to conventional RL training that uses a CPU based simulator and GPU for neural networks. We host the results and videos at this https URL and isaac gym can be downloaded at this https URL.
Update repo and switch to dev. Verify that everything still works – it does! And receives messages as well. Oddly, it seems to be splitting the messages between the Python and TypeScript listeners:
SvelteKit console logs are black and Python is blue
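If both listeners are consuming from the same RabbitMQ queue, that splitting is actually expected behavior: RabbitMQ round-robins a queue’s messages across its consumers. If we want every listener to see every message, the standard fix is a fanout exchange with one queue per consumer, roughly (a sketch; the exchange name is made up):

import pika

# Fanout exchange so every bound consumer gets every message.
connection = pika.BlockingConnection(pika.ConnectionParameters("localhost"))
channel = connection.channel()
channel.exchange_declare(exchange="broadcast", exchange_type="fanout")

# Each consumer binds its own throwaway queue to the exchange.
result = channel.queue_declare(queue="", exclusive=True)
channel.queue_bind(exchange="broadcast", queue=result.method.queue)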
GPT Agents
Make some spreadsheets that compare the stars/sentiment properties of the respective models. Done. The models are remarkably stable, even down to 3k. They make more mistakes with the specific meta training, but that seems to be about it?
Trying to generate reviews from the untrained gpt2 models. The 117M model was (probably?) too small, so I’m trying the 774M model without finetuning. It requires two passes – the first creates the review (using a bigger prompt), and then I use the result and tack on “{}. I give it a star rating of“. Then I need to parse the ratings, which can be numbers or strings. I’ve kind of run out of energy so I’ll finish later.
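For later, the two passes plus rating parse look roughly like this (a sketch; the prompts are abbreviated stand-ins for what I’m actually using):

import re
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2-large")  # the 774M model
model = GPT2LMHeadModel.from_pretrained("gpt2-large")

def generate(prompt: str, max_length: int = 200) -> str:
    ids = tokenizer(prompt, return_tensors="pt").input_ids
    out = model.generate(ids, do_sample=True, max_length=max_length,
                         pad_token_id=tokenizer.eos_token_id)
    return tokenizer.decode(out[0], skip_special_tokens=True)

# Pass 1: create the review (the real prompt is bigger than this).
review = generate("Here is my review of the restaurant: ")

# Pass 2: tack the rating prompt onto the result.
rated = generate("{}. I give it a star rating of ".format(review))

# Ratings come back as digits or words, so parse for both.
words = {"one": 1, "two": 2, "three": 3, "four": 4, "five": 5}
m = re.search(r"star rating of\s*(\d|one|two|three|four|five)", rated, re.I)
if m:
    token = m.group(1).lower()
    rating = int(token) if token.isdigit() else words[token]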