- We present a new dataset of Wikipedia articles, each paired with a knowledge graph, to facilitate research in conditional text generation, graph generation, and graph representation learning. Existing graph-text paired datasets typically contain small graphs and short text (one or a few sentences), limiting the capabilities of models that can be learned from the data. Our new dataset, WikiGraphs, is collected by pairing each Wikipedia article from the established WikiText-103 benchmark (Merity et al., 2016) with a subgraph from the Freebase knowledge graph (Bollacker et al., 2008). This makes it easy to benchmark against other state-of-the-art text generative models that are capable of generating long paragraphs of coherent text. Both the graphs and the text are of significantly larger scale than prior graph-text paired datasets. We present baseline graph neural network and transformer model results on our dataset for three tasks: graph -> text generation, graph -> text retrieval, and text -> graph retrieval. We show that better conditioning on the graph provides gains in generation and retrieval quality, but there is still large room for improvement.
- Truck stuff – need to verify that they know it’s a 2016
- Continuing to work on Svelte. Trying to get previously useful lessons to show up as pages, but they’re Svelte files, not HTML, so I’m not sure how to point to them
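A possible angle, assuming the project uses (or could use) SvelteKit’s file-based router: Svelte files don’t need to be pointed to like static HTML — any `+page.svelte` placed under `src/routes/` is compiled and served at the matching URL path. The paths and lesson names below are hypothetical placeholders, just a sketch of the convention:

```shell
# Sketch: each +page.svelte under src/routes/ becomes a page at that URL.
# Here, src/routes/lessons/lesson-1/+page.svelte would be served at /lessons/lesson-1.
mkdir -p src/routes/lessons/lesson-1
cat > src/routes/lessons/lesson-1/+page.svelte <<'EOF'
<script>
  // Lesson content can be imported or pasted here.
</script>
<h1>Lesson 1</h1>
EOF
```

With that layout, the dev server (e.g. `npm run dev`) handles compilation and routing, so there’s no need to link to the `.svelte` files directly.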
- Scheduling. Orest wants to finish Oct 29, but we’re already a week into September, so I’m going to counter with Nov 5
- Get slides done for Thurs meeting. Tried to get MARCOM to help with formatting, but the fuse is too short
- Orest set up a meeting that conflicts with the GPT meeting. Trying to get him to move it; otherwise, send a note that I’ll be about 15 min late
- Go over untrained model results
- See if we can make the chess models talk about having tea with the Queen