Phil 10.16.18

7:00 – 4:00 ASRC DARPA

  • Steve had some good questions about quantitative measures:
    • I think there are some good answers we can provide here on determining the quality of maps. The number of users is an educated guess, though. In my simulations, I can generate enough information to create maps using about 100 samples per agent. I’m working on a set of experiments that will produce “noisier” data and a better estimate, but that won’t be ready until December. For now we can say that “simulations indicate that approximately 100 users will have to interact through a total of 100 threaded posts to produce meaningful maps.”
    • With respect to the maps themselves, we can determine quality in three ways. The mechanism for making these comparisons will be bootstrap sampling (https://en.wikipedia.org/wiki/Bootstrapping_(statistics)), which is an extremely effective way of comparing two unknown distributions. In our case, the distribution will be the coordinates of the topics in the embedding space (a sketch of the comparison appears after this list).
      1. Repeatability: Can multiple maps generated from the same data set be made to align? Embedding algorithms often start with random values, so embeddings that are similar may appear different because they have different orientations. To determine similarity, we would apply a least-squares transformation of one map with respect to the other (see the alignment sketch below). Once aligned, a greater than 90% match between the two maps would count as success.
      2. Resolution: What is the smallest level of detail that can be rendered accurately? We will be converting words into topics and then placing the topics in an embedding space. As described in the document, we expect to do this with Non-Negative Matrix Factorization (NMF). If we factor all the discussions down to a single topic (i.e., “words”), then we have a single-point map that can always be rendered with 100% repeatability, but it has 0% precision. If, on the other hand, we place every word in every discussion on the map but the relationships are different every time, then we have 100% precision but 0% repeatability. As we cluster terms together, we need to compare repeated runs to verify that we get similar clusters each time (see the NMF repeatability sketch below). We need to find the level of abstraction that gives us a high level of repeatability; a 90% match is our expectation.
      3. Responsiveness: Maps change over time. A common example is a weather map, though political maps shift borders and physical maps reflect geographic activity like shoreline erosion. The timescale of change may reflect the accuracy of the map, with slow change happening across large scales while rapid changes are visible at higher resolutions. A change at the limit of resolution should ideally be reflected immediately in the map without adjusting the surrounding areas.
  • More frantic flailing to meet the deadline. DONE!!!
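A minimal sketch of the bootstrap comparison mentioned above, assuming each map is a NumPy array of topic coordinates with rows matched across the two maps; the function name and the choice of mean displacement as the test statistic are illustrative, not settled design:

```python
import numpy as np

def bootstrap_displacement_ci(coords_a, coords_b, n_boot=1000, seed=0):
    """Bootstrap the mean displacement between matched topic coordinates
    from two maps (rows are topics, columns are embedding dimensions).
    Returns a 95% confidence interval on the mean displacement; an
    interval sitting near zero means the two maps nearly coincide."""
    rng = np.random.default_rng(seed)
    n = coords_a.shape[0]
    stats = np.empty(n_boot)
    for i in range(n_boot):
        idx = rng.integers(0, n, size=n)  # resample topics with replacement
        stats[i] = np.linalg.norm(coords_a[idx] - coords_b[idx], axis=1).mean()
    return np.percentile(stats, [2.5, 97.5])
```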
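The least-squares alignment in item 1 can be implemented as a Procrustes transform (translation, uniform scaling, and rotation/reflection). A sketch assuming both maps cover the same topics in the same row order; the match threshold is a placeholder, and scipy's procrustes is one standard implementation rather than a committed choice:

```python
import numpy as np
from scipy.spatial import procrustes

def alignment_match_rate(map_a, map_b, threshold=0.1):
    """Align map_b to map_a with a least-squares (Procrustes) transform,
    then count topics whose aligned positions land within `threshold` of
    each other. Both maps are standardized by procrustes, so the
    threshold is in normalized units. Returns (match_rate, disparity);
    a match_rate above 0.9 would satisfy the 90% criterion above."""
    std_a, std_b, disparity = procrustes(map_a, map_b)
    dists = np.linalg.norm(std_a - std_b, axis=1)
    return (dists < threshold).mean(), disparity
```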
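For item 2, one way to measure NMF repeatability is to fit the factorization twice from different random starts, match the resulting topics, and score the matched pairs. A sketch using a synthetic stand-in for the posts-by-terms matrix; the matrix, the topic counts swept, and cosine similarity as the scoring metric are all illustrative assumptions:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment
from sklearn.decomposition import NMF
from sklearn.metrics.pairwise import cosine_similarity

rng = np.random.default_rng(0)
X = rng.random((100, 500))  # hypothetical posts-x-terms matrix (non-negative)

def topic_repeatability(X, n_topics, seed_a=0, seed_b=1):
    """Fit NMF twice from different random initializations, match the
    two topic sets with the Hungarian algorithm on cosine similarity,
    and return the mean similarity of matched topics. Values near 1.0
    mean the same clusters emerge on every run."""
    def fit(seed):
        return NMF(n_components=n_topics, init="random",
                   random_state=seed, max_iter=500).fit(X).components_
    sim = cosine_similarity(fit(seed_a), fit(seed_b))
    rows, cols = linear_sum_assignment(-sim)  # maximize total similarity
    return sim[rows, cols].mean()

# Sweep topic counts to find the coarsest abstraction that stays repeatable.
for k in (2, 5, 10, 25):
    print(k, topic_repeatability(X, k))
```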

4:00 – 5:30 Antonio Workshop
