Category Archives: Mapping

Phil 6.22.20

Cornell University was having a sale, so I got a book:

Mental Territories

  • Rarely recognized outside its boundaries today, the Pacific Northwest region known at the turn of the century as the Inland Empire included portions of the states of Washington and Idaho, as well as British Columbia. Katherine G. Morrissey traces the history of this self-proclaimed region from its origins through its heyday. In doing so, she challenges the characterization of regions as fixed places defined by their geography, economy, and demographics. Regions, she argues, are best understood as mental constructs, internally defined through conflicts and debates among different groups of people seeking to control a particular area’s identity and direction. She tells the story of the Inland Empire as a complex narrative of competing perceptions and interests.

DtZ:

  • Change the code so that there is a 30-day prediction based on the current rates, regardless of trend. I think it tells the story of second waves better (a sketch follows the figure):

30_days
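A minimal sketch of that constant-rate projection, assuming the daily-rate series is already available as a numpy array (the function name and the 7-day window are my own choices, not DtZ's):

    import numpy as np

    def project_constant_rate(daily_counts: np.ndarray, days: int = 30, window: int = 7) -> np.ndarray:
        # "Current rate" is taken as the mean of the most recent window of daily counts,
        # then held flat for the projection period, regardless of any fitted trend
        current_rate = daily_counts[-window:].mean()
        return np.full(days, current_rate)

    # Example: append the projection to the observed series for plotting
    observed = np.array([120, 135, 150, 160, 170, 180, 190], dtype=float)
    combined = np.concatenate([observed, project_constant_rate(observed)])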

GPT-2 Agents

  • The ACSOS paper was rejected, so this is now the only path going forward for mapmaking research.
  • Used the known_nearest to produce a graph:
  • The graph on the left is the full graph, and the right is culled. First, note that node c is not in the second graph. There is no confirming link, so we don’t know if it’s an accident. Node e is also not on the chart, because it has no confirming link back through any 2-edge path.
  • Ok, I tried it for the first time on the chess data. There is a bug where [a-h] and [1-8] are showing up as nodes that I have to figure out. But they show up in the right way! Orthogonal and in order!

chess_nearest_bug

  • The bug seems to be in the way that List.extend() works. It seems to be splitting the string (which is iterable, like a List, duh) and adding those characters as elements? Nope, just doing one nesting too many (see the sketch after this list)
  • Ok, here are the first results. The first image is of all neighbors. The second is of only verified nearest neighbors (at least one edge chain of 2 that leads back to the original node)
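For reference, a quick illustration of the extend() behavior suspected in the bug above (toy data, not the real node lists):

    nodes = []
    nodes.extend("e4")            # a string is iterable, so each character is added
    print(nodes)                  # ['e', '4']

    nodes = []
    nodes.extend([["e4", "e5"]])  # one nesting too many: the inner list is added whole
    print(nodes)                  # [['e4', 'e5']]

    nodes = []
    nodes.extend(["e4", "e5"])    # intended behavior
    print(nodes)                  # ['e4', 'e5']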

chess_all_neighbors

chess_nearest_neighbors

  • In both cases, the large-scale features of the chessboard are visible. There is a progression from 1 to 8, and a to h. It seems clearer to me in the lower image, and the grid-like nature is more visible. I think I need to get the interactive manipulation working, because some of this could be drawing artifacts
  • Trying out the networkx_viewer. A little worried about this though:

networkxviewer

  • And rightly so:

kablooee

  • Going to try cloning and fixing. Nope. It is waaaaaaayyyyyy broken, and depends on an earlier version of networkx
  • Networkx suggests Gephi, and there is a way to export graphs from networkx. Trying that (see the sketch below)
  • Seems usable?

Gephi
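A minimal sketch of the export path, assuming the move graph is a networkx object (the graph and filename here are placeholders): networkx writes GEXF, which Gephi opens directly.

    import networkx as nx

    # Placeholder graph standing in for the real move graph
    G = nx.Graph()
    G.add_edge("e2", "e4", weight=3)

    # Write GEXF for Gephi
    nx.write_gexf(G, "chess_moves.gexf")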

GOES

  • Kind of stuck. Waiting on Vadim
  • Probably will be working on a couple of SBIRs for the next few weeks

Phil 6.19.20

stampede

12:00 – Sy’s defense at noon!

GPT-2 Agents

  • Fixed the regex in ChessMovesToDb
  • More work on finding closest neighbors.
    • Maybe keep a record of the number and type of pieces that are used?
    • Looks like the basics are working. Here’s the test graph:

known_nearest

    • And here are the results. I made the code so that it only shows each neighbor once, but it may be useful to keep track of the number of times a neighbor shows up in a list. This might not be important in chess, but in less structured text environments (RPGs to Reddit threads), it may be valuable:
      find_closest_neighbors(): nodes = ['a', 'b', 'c', 'd', 'e', 'f', 'g']
      {'node': 'a', 'known_nearest': ['f', 'd']}
      {'node': 'b', 'known_nearest': ['f', 'd']}
      {'node': 'c', 'known_nearest': []}
      {'node': 'd', 'known_nearest': ['f', 'a', 'b', 'g']}
      {'node': 'e', 'known_nearest': []}
      {'node': 'f', 'known_nearest': ['a', 'g', 'd', 'b']}
      {'node': 'g', 'known_nearest': ['f', 'd']}


    • At this point it’s not recursive, but it could be. I’m worried about combinatorial explosion though
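For reference, a minimal sketch of the "known nearest" check as I understand it from the description above, using networkx; the edge list is illustrative, not necessarily the actual test graph:

    import networkx as nx

    def find_closest_neighbors(G: nx.Graph):
        results = []
        for node in G.nodes:
            known = []
            for nbr in G.neighbors(node):
                # nbr is confirmed if some 2-edge chain (nbr -> other -> node)
                # leads back to the original node
                if any(other != node and G.has_edge(other, node) for other in G.neighbors(nbr)):
                    known.append(nbr)
            results.append({'node': node, 'known_nearest': known})
        return results

    G = nx.Graph()
    G.add_edges_from([('a', 'f'), ('a', 'd'), ('a', 'c'), ('b', 'f'), ('b', 'd'),
                      ('b', 'e'), ('d', 'f'), ('d', 'g'), ('f', 'g')])
    for r in find_closest_neighbors(G):
        print(r)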

GOES

  • Submit GVSETS paper – done!
  • Meeting with Vadim and Issac at 11:00
    • Goal is to move all the RW code out of the sim class into its own class and call its methods from the sim class

Phil 6.18.20

Hotel reservations!

Sent a ping to Don about a paper to review

GPT-2 Agents

  • Started on common neighbor algorithm. Definitely a good place for recursion
  • Generating larger file

adjacency

moves

  • If you look at the center of the plot and squint a bit, you can see a bit of the grid:

networkx

  • There is an error: the string ‘, White moves pawn from h3 to g4. White takes black pawn. LCZero v0.24-sv-t60-3010 moves black knight from h5 to g7. White moves pawn from g4 to h5. LCZero v0.24’ is parsing incorrectly due to the truly bizarre name (the little-known Grand Master LCZero v0.24-sv-t60-3010). Need to fix the regex. I think I just need to make it so that there has to be a space in front of a square and a space/period after it.
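A hedged sketch of that fix (the real regex lives in ChessMovesToDb, so the pattern here is illustrative): a square only counts if it has whitespace in front and whitespace or a period after it, so square-like fragments inside odd player names are ignored.

    import re

    square_pattern = re.compile(r"(?<=\s)([a-h][1-8])(?=[\s.,])")

    text = ("White moves pawn from h3 to g4. White takes black pawn. "
            "LCZero v0.24-sv-t60-3010 moves black knight from h5 to g7.")
    print(square_pattern.findall(text))  # ['h3', 'g4', 'h5', 'g7']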

GOES

  • Readthrough of GVSETS paper
  • 2:00 Meeting

Waikato

  • Alex had a really good insight: groups that are working toward consensus use terms to discuss their level of agreement that are independent of the points being argued. That could really be important in text analysis.

Phil 6.17.20

Listened to a fantastic interview with Nell Irvin Painter (White Supremacy at Home and Abroad):

GPT-2 Agents

  • Working on finding the connections between nodes
  • Now that I know how to add weights to edges, I think I want to add the piece that made the move. It needs to be a list, since multiple types of pieces can connect two squares. Added a dict_array per edge:
    if target not in nlist:
        # create the edge with a weight counter and an empty dict_array
        self.G.add_edge(source, target, weight=0, dict_array=[])
    # edge attributes live on G[source][target], not on the target node
    self.G[source][target]['weight'] += 1
    a: List = self.G[source][target]['dict_array']
    for key, val in data_dict.items():
        a.append({key: val})
  • I also realize that moves that repeatedly connect squares are more likely to be close, simply because the number of available squares for more distant moves increases geometrically. I added a method that writes out moves to Excel where I can play with them (see the sketch after this list). Here are some moves:

moves

  • In looking at these moves, it does seem that the majority of the moves are short (e.g. b6-b7, b6-a7, b6-b5). The only exception is the knight (b6-d7). So I think there is a confidence value that I can calculate for the ‘physical’ adjacency of nodes in a network. This could apply to belief spaces as well. Most consensus requires coordination and common orientation (pos, heading, speed), so commonly connected topics can be said to be ‘closer’
  • Good chat with Aaron about CVPR and algorithms
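A minimal sketch of the Excel export mentioned a couple of bullets up, assuming pandas (with openpyxl installed) and a made-up record format:

    import pandas as pd

    def write_moves_to_excel(moves, filename="moves.xlsx"):
        # moves: list of dicts like {'from': 'b6', 'to': 'b7', 'piece': 'pawn', 'count': 12}
        df = pd.DataFrame(moves)
        df.to_excel(filename, index=False)

    write_moves_to_excel([
        {'from': 'b6', 'to': 'b7', 'piece': 'pawn', 'count': 12},
        {'from': 'b6', 'to': 'd7', 'piece': 'knight', 'count': 3},
    ])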

GOES

  • Finish revisions and send to T and Aaron for review. Last thing is to tie back to ground vehicles in the discussion. Done! I think… Need to read the whole thing and see if it still hangs together
  • 2:00 – Meeting

Phil 6.12.20

Hey! My dissertation is online now!

Optimizing Multiple Loss Functions with Loss-Conditional Training

  • The idea behind our approach is to train a single model that covers all choices of coefficients of the loss terms, instead of training a model for each set of coefficients. We achieve this by (i) training the model on a distribution of losses instead of a single loss function, and (ii) conditioning the model outputs on the vector of coefficients of the loss terms. This way, at inference time the conditioning vector can be varied, allowing us to traverse the space of models corresponding to loss functions with different coefficients
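A rough sketch of the training step the abstract describes, under my own assumptions about the interfaces (the model takes the coefficient vector as an extra input, and loss_fns is a list of the individual loss terms):

    import torch

    def loss_conditional_step(model, batch, loss_fns, optimizer):
        # Sample a random coefficient vector for this step instead of fixing one
        coeffs = torch.rand(len(loss_fns))
        coeffs = coeffs / coeffs.sum()

        # Condition the model on the sampled coefficients
        output = model(batch['input'], coeffs)

        # Weight the individual loss terms with the same coefficients
        loss = sum(c * fn(output, batch['target']) for c, fn in zip(coeffs, loss_fns))

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()

At inference time, the same conditioned model can then be handed whatever coefficient vector you want to explore.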

GPT-2 Agents

  • Applied to get on the OpenAI API waitlist
  • Started figuring out igraph. Welp, it doesn’t plot because it cannot load library ‘libcairo-2.dll’ (error 0x7e). There doesn’t seem to be a good fix. It’s a shame, because igraph seems to be great for analyzing graphs mathematically. Removing everything
  • Looks like I can use networkx combined with networkx_viewer (pypi)(github). Look into that next. Upgraded networkx from 2.1 to 2.4
  • Pulled my NetworkxGraphing.py class over from Antibubbles and verified that it still works!

networkx

GOES

  • Send Jason my download code
  • Work on GVSETS paper
    • Added formatting changes and moved footnotes to citations
    • Adding a figure for the pipeline. Hmmm. It’s um… big

pipeline

Phil 6.11.20

Call Simon

GPT-2 Agents

  • Embeddings and plots
  • Got the sequences generated. They look pretty cool too, like codes:
    e2 e4 c7 c5 g1 f3 b8 c6 d2 d4 c5 d4 f3 d4 g7 g6 b1 c3 f8 g7 f1 e2 d7 d6 c1 g5 a7 a6 d1 e2 f6 e8 f2 f3 e8 c7 g5 f4 f7 f5 e4 f5 g6 f5 e2 f3 c7 d5 f3 g4 c8 d7 e2 d2 d8 c7 b2 b3 f5 f4 d4 b3 d7 f5 f1 e1 e7 f5 e1 e6 f5 g4 b3 d4 a8 c8 d4 f5 c6 f5 e6 f6 f5 d4 a1 e1 g4 h5 f3 f4 f8 f6 d2 f6 c8 f8 f6 h4 d4 e6 c2 c3 e6 d4 c3 d4 h5 f3 e1 e7 f8 f7 e7 f7 g7 f7 g1 f2 b7 b5 c4 b5 a6 b5 g2 g4 f7 g6 h2 h3 g8
    e2 e4 c7 c6 d2 d4 d7 d5 b1 d2 g8 f6 f1 d3 d5 e4 d2 e4 b8 d7 g1 f3 e7 e6 d1 e2 f6 e4 d3 e4 d8 c7 e4 b1 d7 f6 c1 g5 f6 g4 h7 h6 g5 h4 c7 d7 e2 e3 g4 e5 h1 g1 e5 c6 f2 f3 f8 e7 e3 e2 g2 g4 f8 d8 f3 e5 d7 d3 e2 d3 d8 d3 g1 d1 a7 a6 c1 b1 d3 d6 b1 a1 e7 f6 a2 a3 c8 e6 f3 e4 b7 b5 b2 b4 f6 g7 b4 a5 b5 a4 e5 c6 a8 b8 d1 f1 a4 a3 c6 e5 a3 a2 h4 e1 a2 a1 f1 a1 b8 a1 d4 d5 e6 c8 d1 b1 g8 f8 a1 b1 f8 e7 b1 c2 e7 d6 e5 d7 g7 d4
    e2 e4 e7 e6 d2 d4 d7 d5 b1 c3 f8 b4 e4 e5 b4 c3 b2 c3 g8 e7 d1 b3 c7 c5 a2 a3 b8 c6 f2 f4 b7 b5 a3 a4 b5 b4 b3 b2 a4 a5 c5 d4 c3 d4 e7 g6 g1 f3 c6 e7 c8 a6 c1 g5 e7 g8 a1 b1 a8 c8 e5 d6 g8 f6 g5 f6 d8 f6 f1 f2 h7 h6 b2 b5 f6 d6 f3 h4 d6 e7 h4 g6 a6 g2 g6 e7 f8 e8 e7 f5 g2 f3 g1 g2 c8 c2 b1 c1 c2 c8 a5 a6 c8 a8 h2 h3 f3 e4 b5 b3 f7 f6 b3 b2 f6 f5 f5 d6 e6 e5 b2 a1 e8 a8 f2 f5 f5 e4 f5 f7 a8 b8 c1 f1 b8 b5 a1 a2 a8 a7
    e3 d2 d4 g8 f6 c2 c4 e7 e6 b1 c3 f8 b4 e2 e3 c1 d2 d7 d5 c4 d5 f6 d5 f2 f3 b8 c6 g1 f3 f7 f5 g2 g4 f5 g4 d1 e2 d5 f4 d2 f4 e6 f5 e2 e5 b4 c3 e5 c3 d8 d3 e1 e2 d3 e2 e2 e2 f5 f4 e2 e1 f4 f3 e1 f2 a8 d8 f2 f1 c8 e6 f4 e3 f8 f7 f3 h4 e6 d5 f1 g2 f7 f3 g2 f1 f3 h3 f1 e1 d5 e4 e1 f2 h3 h4 g4 h5 h4 h5 f2 f3 h5 f5 f3 g2 f5 h5 e3 g5 h5 h3 g5 e3 h3 h4 e3 g5 e4 g2 g2 g3 g2 d5 g3 h3 d5 e6 g5 f6 g8 h7
    e2 e4 e7 e5 g1 f3 b8 c6 b1 c3 g8 f6 f1 b5 d7 d6 d2 d3 a7 a6 f8 e7 b5 a4 b7 b5 a4 b3 h2 h3 c8 b7 a2 a4 b5 b4 a4 b5 c6 b8 c1 g5 f6 e8 f3 e5 e8 d6 g5 f4 d8 e7 f4 d6 e7 d6 e5 f3 d6 e7 c3 a4 c7 c6 a4 c5 c6 c5 d3 d4 e7 e5 d4 c5 b7 c8 a1 c1 b8 d7 f3 e5 f7 f6 e4 f5 d7 f6 c1 c6 e5 e4 g1 h1 f6 h5 f1 e1 h5 g3 e5 d7 e4 h4 c6 c4 h4 g3 d1 g4 g3 h3 h1 g1 h3 h1 g1 h2 h1 h5 g4 f5 c8 b7 c4 c5 h5 e2 e1 d1 e7 f6 b2 b3 f6 e5 c5 c8 a8 c8


  • Drawing the embeddings. Fun, but not really useful. And this is kind of my point about embeddings like W2V that don’t take into account the trajectory of the sentence the word is part of. We know that the structure of the board is represented in the text. We need a more sophisticated embedding to extract it

square_embeddings

  • Something that might make sense is to see how these points cluster as well
  • I think I might try plotting individual columns later, but first I’m going to try building some from/to networks by piece

GOES

  • Try downloading yaw flip
  • Was able to connect to the server, though now I don’t need a port number?
  • Specifying the queries. Fixed a few mnemonics.
  • Had to try a few times, but I got it!

influx_copy

  • Not sure what to do next. Update GVSETS paper?
  • 2:00 CASSIE meeting – learned a lot of things
  • I got promoted!
  • Implemented a perplexity measure (a sketch follows this list). Looking at this as a way of understanding mode collapse, and maybe conspiracy theories?
  • Done for the day
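A minimal sketch of a perplexity measure along those lines, assuming the HuggingFace GPT-2 model; the function is mine, not necessarily what got implemented:

    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    def perplexity(text: str) -> float:
        # Score the text against itself; the returned loss is the mean
        # per-token cross-entropy, so exp(loss) is the perplexity
        input_ids = tokenizer.encode(text, return_tensors="pt")
        with torch.no_grad():
            loss = model(input_ids, labels=input_ids)[0]
        return torch.exp(loss).item()

    print(perplexity("White moves pawn from e2 to e4."))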

Phil 6.4.20

GPT-2 Agents

  • Thinking about how to parse the data to build maps.
    • Clearly, there are black and white agents (the players). Option 1 would be to simply collect the from-to points at the player level
    • One step down would be the piece family, pawns, rooks, bishops. The king and queen would be single instances
    • The most granular would be to track the individual pieces.
    • The issue I’m struggling with is when pieces pass over squares rather than through them. There is no explicit d3 when white moves the pawn from d2 to d4. Trying to think of the best way to uncover the latent information.
    • I think a good way to procrastinate about this problem is to parse the games into a database of moves. The format is always “<player/color> moves <(color)piece> from <start> to <end>”. There is additional information as well (game, players, move number), but that could be added later.
    • Done
    • table_moves
    • Sent a note to Thomas Wolf at Huggingface
    • I know what I’m going to do!
      • Create probes for each piece, like:
        • white moves pawn from e2 to
        • black moves pawn from e7 to
        • A slightly more sophisticated parser will need to work with “The game begins
      • I can take the results of multiple probes and store them in the table_moves, then run statistics by color, piece, etc. (see the sketch after this list)
      • Then see if it’s possible to connect one piece to another piece using a “from/to chain” across multiple pieces. There will probably be some sort of distribution where the median(?) value should be a set of adjacent squares.
      • The connections can be tested by building adjacency matrices by piece and by move number range
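A sketch of how the probes might be run, assuming the HuggingFace generate() API and a regex over the continuation; the model name, probe text, and return format are all illustrative:

    import re
    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")   # the fine-tuned chess model would go here
    model = GPT2LMHeadModel.from_pretrained("gpt2")
    model.eval()

    # "<player/color> moves <(color)piece> from <start> to <end>"
    move_pattern = re.compile(
        r"(white|black) moves (\w+(?: \w+)?) from ([a-h][1-8]) to ([a-h][1-8])",
        re.IGNORECASE)

    def run_probe(probe: str, num_samples: int = 5, max_length: int = 60):
        input_ids = tokenizer.encode(probe, return_tensors="pt")
        outputs = model.generate(input_ids, do_sample=True, top_k=50,
                                 max_length=max_length,
                                 num_return_sequences=num_samples,
                                 pad_token_id=tokenizer.eos_token_id)
        moves = []
        for seq in outputs:
            text = tokenizer.decode(seq, skip_special_tokens=True)
            moves.extend(move_pattern.findall(text))
        # each entry is a (color, piece, from, to) tuple ready for table_moves
        return moves

    print(run_probe("white moves pawn from e2 to"))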

GOES

  • Hey! The VPN is much more responsive today! Logged into Influx
  • Getting the right time for the query from here
    • start: 2020-04-06 15:30:00.000
    • end: 2020-04-06 18:00:00.000
    • Need to get the right mnemonics. Pinged Bruce
  • Starting some timing tests on my local influx copy
  • The VPN has stopped again
  • 2:00 NSOF Meeting – Nice demo by Jason
  • 3:30 AIMS IRAD
    • Status. John is going part time
    • Railed against the poor VPN access to the DevLab

ML Brownbag – Aaron did a nice job

Phil 2.12.20

7:00 – 8:00pm ASRC PhD, GOES

  • Create figures that show an agent version of the dungeon
  • Replicate the methods and detailed methods of the cartography slides
  • Text for each group by room can be compared by the rank difference between them and the overall. Put that in a spreadsheet, plot and maybe determine the DTW value?
    • Add the sim version of the dungeon and the rank comparison to the dissertation
  • Put all ethics on one slide – done
  • Swapped out power supply, but now the box won’t start. Dropped off to get repaired
  • Corporate happy hour

Phil 1.17.20

An ant colony has memories that its individual members don’t have

  • Like a brain, an ant colony operates without central control. Each is a set of interacting individuals, either neurons or ants, using simple chemical interactions that in the aggregate generate their behaviour. People use their brains to remember. Can ant colonies do that? 

7:00 – ASRC

  •  Dissertation
    • More edits
    • Changed all the overviews so that they also reference the section by name. It reads better now, I think
    • Meeting with Thom
  • GPT-2 Agents
  • GSAW Slide deck

Phil 12.26.19

ASRC PhD 7:00 – 4:00

  • Dissertation
    • Limitations
  • GPT-2 agents setup – set up the project, but in the process of getting the huggingface transformers, I wound up setting up that project as well
    • Following directions for
      • pip install transformers
      • git clone https://github.com/huggingface/transformers
        • cd transformers
        • pip install .
      • pip install -e .[testing]
        • make test – oops. My GNU Make wasn’t on the path – fixed it
        • running tests
          • Some passed, some failed. Errors like: tests/test_modeling_tf_t5.py::TFT5ModelTest::test_compile_tf_model Fatal Python error: Aborted
          • Sure is keeping the processor busy… Like bringing the machine to its knees busy….
          • Finished – 14 failed, 10 passed, 196 skipped, 20 warnings in 1925.12s (0:32:05)
  • Fixed the coffee maker
  • Dealt with stupid credit card nonsense

Phil 12.7.19

You can now have an AI DM. AI Dungeon 2. Here’s an article about it: You can do nearly anything you want in this incredible AI-powered game. It looks like a GPT-2 model trained with chooseyouradventure. Here’s the “how we did it”. Wow

The Toxins We Carry (Whitney Phillips)

  • My proposal is that we begin thinking ecologically, an approach I explore with Ryan Milner, a communication scholar, in our forthcoming book You Are Here: A Field Guide for Navigating Polluted Information. From an ecological perspective, Wardle’s term “information pollution” makes perfect sense. Building on Wardle’s definition, we use the inverted form “polluted information” to emphasize the state of being polluted and to underscore connections between online and offline toxicity. One of the most important of these connections is just how little motives matter to outcomes. Online and off, pollution still spreads, and still has consequences downstream, whether it’s introduced to the environment willfully, carelessly, or as the result of sincere efforts to help. The impact of industrial-scale polluters online—the bigots, abusers, and chaos agents, along with the social platforms that enable them—should not be minimized. But less obvious suspects can do just as much damage. The truth is one of them.
  • Taking an ecological approach to misinformation

Phil 10.22.19

7:00 – 4:00 ASRC

  • Dissertation – starting the maps section
  • Need to finish the financial OODA loop section
  • Spending the day at a Navy-sponsored miniconference on AI, ethics and the military (no wifi at Annapolis, so I’ll put up notes later). This was an odd mix of higher-level execs in suits, retirees, and midshipmen, with a few technical folks sprinkled in. It is clear that for these people, the technology(?) is viewed as AI/ml. The idea that AI is a thing that we don’t do yet does not emerge at this level. Rather, AI is being implemented using machine learning, and in particular deep learning.

Phil 9.1.19

Still tired from yesterday’s adventure:

Strava

Thinking about @scottbot’s thread on TalkToTransformer.com. In a way, the GPT-2 and other models are an arrangement of all the text that the system has incorporated. I think I’d like to go through the dictionary, one word at a time, and use that word as a seed for building a low-dimensional threaded term model that we can visually examine, in the same way that I used text from Tymora to show the “belief space” around each room. For example, here are some responses to silence:

  • Lecture, Thursday 17 June 2014, 10:30am
    In the course of our study we will observe the behaviour of animals at two distinct locations – one to an animal’s immediate (neonatal) environment, the other to two distinct environments in its home territory – the home environment in which the animal lives. We will also observe the behaviour of animals in an adjacent environment to each animal we observe in one of these new environments; one for the social or ‘prey’ species and the other for the ‘host’ species.
  • In addition to a few other notable games, like the two I mentioned earlier, the team was able to pick up several top players to form one of the premier teams in the region of Korea. At this time, the team was ranked 11th in the league of legends scene.
  • Sensing my heart beating at a frantic pace, I raised my head to look at the place where the girl’s mouth had been. A large, blue, round hole had appeared there.
  •  キャラクシー 土方の親愛神・レイラン Aamir AB Black Bird Bomb AB Bluebird Blues AB Bomb AB Chuck AB Dashing Wingman AB Green Bird Hal AB Hal AB Matilda AB Red AB Red Bird Red AB The Blues AB White Bird Matilda AB Wingman AB Yellow Bird Chuck ABE Archer Blues ABE Capt’n Bomb ABE Cleric Matilda ABE Fencer Red ABE Guard Captain Pig ABE Knight Red ABE Mage Chuck ABE Pirate Bomb ABE Priest Matilda ABE Royal Guard Pig ABE Trickster Blues ABE Wizard Chuck Abiding Sniper Dragonbound, Myr Absolute Zero Hrungnir Abyssal CyberDragon, Sadalmelik Abyssal 
  • is about what you do, not what you say. And the answer is: it’s not something you should say. In fact, I want you to make an effort to talk to others.
  • I was so excited for my wedding day, and I had to be there. My mom had an idea to hold my wedding in a different room and I made it happen. I had my brother, sister, bridesmaid, and a couple of guys wait in the front. It was so beautiful and relaxing. Thank you, thank you, THANK YOU!

My intuition is that these word sequences can be treated like sequential threads, since they are sentences/paragraphs and have an implicit sense of time. And, like the room conversations in the temple, there will be denser and sparser areas on the network/landscape that we might be able to map. If we can build these small pieces, we can then experiment with ways to assemble the larger quilt.

Just to reinforce this idea, I thought about using more specific terms or phrases. Here’s what we get with bird sanctuary. It’s a much more constrained landscape:

  • is open only 24 hours a day and is open on the following holidays:
  • Tower of the Winds – Cave of Wonders – Rune Isle
  • The idea of an animal sanctuary for a big-cat sanctuary is one of the most amazing things that a lot of people will ever come up with that they can’t see in the current environment of wildlife protection. 
  • an annual four-day event that promotes conservation efforts.
  • (2) Pescado Bay Nature Preserve (2) Pacific Coast Aquarium (11) Pacific Grove (1) Pacifica Harbor (1) Philadelphia Zoo (1) Philadelphia Museum of Art (1) Philadelphia World’s Fair (2) Piebald Beach (1) Pinnacle Beach (1) Placid Bay (1) Point Park and Wildlife Management area

Based on David Massad’s tweet, I think the phrases to use are news headlines, which can be compared to some sort of ground truth contained in the story.


Phil 12.20.18

7:00 – 4:00 ASRC NASA/PhD

  • Goal-directed navigation based on path integration and decoding of grid cells in an artificial neural network
    • As neuroscience gradually uncovers how the brain represents and computes with high-level spatial information, the endeavor of constructing biologically-inspired robot controllers using these spatial representations has become viable. Grid cells are particularly interesting in this regard, as they are thought to provide a general coordinate system of space. Artificial neural network models of grid cells show the ability to perform path integration, but important for a robot is also the ability to calculate the direction from the current location, as indicated by the path integrator, to a remembered goal. This paper presents a neural system that integrates networks of path integrating grid cells with a grid cell decoding mechanism. The decoding mechanism detects differences between multi-scale grid cell representations of the present location and the goal, in order to calculate a goal-direction signal for the robot. The model successfully guides a simulated agent to its goal, showing promise for implementing the system on a real robot in the future.
  • Path integration and the neural basis of the ‘cognitive map’
    • Accumulating evidence indicates that the foundation of mammalian spatial orientation and learning is based on an internal network that can keep track of relative position and orientation (from an arbitrary starting point) on the basis of integration of self-motion cues derived from locomotion, vestibular activation and optic flow (path integration).
    • Place cells in the hippocampal formation exhibit elevated activity at discrete spots in a given environment, and this spatial representation is determined primarily on the basis of which cells were active at the starting point and how far and in what direction the animal has moved since then. Environmental features become associatively bound to this intrinsic spatial framework and can serve to correct for cumulative error in the path integration process.
    • Theoretical studies suggested that a path integration system could involve cooperative interactions (attractor dynamics) among a population of place coding neurons, the synaptic coupling of which defines a two-dimensional attractor map. These cells would communicate with an additional group of neurons, the activity of which depends on the conjunction of movement speed, location and orientation (head direction) information, allowing position on the attractor map to be updated by self-motion information.
    • The attractor map hypothesis contains an inherent boundary problem: what happens when the animal’s movements carry it beyond the boundary of the map? One solution to this problem is to make the boundaries of the map periodic by coupling neurons at each edge to those on the opposite edge, resulting in a toroidal synaptic matrix. This solution predicts that, in a sufficiently large space, place cells would exhibit a regularly spaced grid of place fields, something that has never been observed in the hippocampus proper.
    • Recent discoveries in layer II of the medial entorhinal cortex (MEC), the main source of hippocampal afferents, indicate that these cells do have regularly spaced place fields (grid cells). In addition, cells in the deeper layers of this structure exhibit grid fields that are conjunctive for head orientation and movement speed. Pure head direction neurons are also found there. Therefore, all of the components of previous theoretical models for path integration appear in the MEC, suggesting that this network is the core of the path integration system.
    • The scale of MEC spatial firing grids increases systematically from the dorsal to the ventral poles of this structure, in much the same way as is observed for hippocampal place cells, and we show how non-periodic hippocampal place fields could arise from the combination of inputs from entorhinal grid cells, if the inputs cover a range of spatial scales rather than a single scale. This phenomenon, in the spatial domain, is analogous to the low frequency ‘beats’ heard when two pure tones of slightly different frequencies are combined.
    • The problem of how a two-dimensional synaptic matrix with periodic boundary conditions, postulated to underlie grid cell behaviour, could be self-organized in early development is addressed. Based on principles derived from Alan Turing’s theory of spontaneous symmetry breaking in chemical systems, we suggest that topographically organized, grid-like patterns of neural activity might be present in the immature cortex, and that these activity patterns guide the development of the proposed periodic synaptic matrix through a mechanism involving competitive synaptic plasticity.
  • Wormholes in virtual space: From cognitive maps to cognitive graphs
    • Cognitive maps are thought to have a metric Euclidean geometry.
    • Participants learned a non-Euclidean virtual environment with two ‘wormholes’.
    • Shortcuts reveal that spatial knowledge violates metric geometry.
    • Participants were completely unaware of the wormholes and geometric inconsistencies.
    • Results contradict a metric Euclidean map, but support a labelled ‘cognitive graph’.
  • Back to TimeSeriesML
    • Encryption class – done (a sketch of this and the Postgres class follows this list)
      • Create a key and save it to file
      • Read a key in from a file into global variable
      • Encrypt a string if there is a key
      • Decrypt a string if there is a key
    • Postgres class – reading part is done
      • Open a global connection and cursor based on a config string
      • Run queries and return success
      • Fetch results of queries as lists of JSON objects
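A minimal sketch of what those two classes might look like, assuming Fernet from the cryptography package and psycopg2 for Postgres; the class and method names are mine, not necessarily what's in TimeSeriesML:

    from typing import Dict, List, Optional

    import psycopg2
    import psycopg2.extras
    from cryptography.fernet import Fernet

    class EncryptionHelper:
        def __init__(self):
            self.key: Optional[bytes] = None

        def create_key(self, filename: str):
            # Create a key and save it to file
            self.key = Fernet.generate_key()
            with open(filename, "wb") as f:
                f.write(self.key)

        def load_key(self, filename: str):
            # Read a key in from a file
            with open(filename, "rb") as f:
                self.key = f.read()

        def encrypt(self, s: str) -> Optional[str]:
            # Encrypt a string if there is a key
            return Fernet(self.key).encrypt(s.encode()).decode() if self.key else None

        def decrypt(self, s: str) -> Optional[str]:
            # Decrypt a string if there is a key
            return Fernet(self.key).decrypt(s.encode()).decode() if self.key else None

    class PostgresHelper:
        def __init__(self, config_str: str):
            # Open a connection and cursor based on a config (DSN) string
            self.conn = psycopg2.connect(config_str)
            self.cursor = self.conn.cursor(cursor_factory=psycopg2.extras.RealDictCursor)

        def query(self, sql: str) -> bool:
            # Run a query and return success
            try:
                self.cursor.execute(sql)
                return True
            except psycopg2.Error:
                return False

        def fetch_json(self) -> List[Dict]:
            # Fetch the results of the last query as a list of JSON-style dicts
            return [dict(row) for row in self.cursor.fetchall()]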