Need to add gridlines (Add method to ShapePrimitives):
prim = GeomLines(Geom.UHStatic)
prim.addVertex(0)
prim.addVertex(1)
# that's the first line segment -- GeomLines uses two vertices per line
# you can also add a few at once
prim.addVertices(1, 2)
prim.addVertices(2, 3)
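Since GeomLines consumes vertices two at a time, each gridline contributes one index pair. A minimal sketch (plain Python, no Panda3D required; the function name and row-major numbering are my assumptions) of computing the pairs for a rows × cols vertex grid, which could then be fed to prim.addVertices():

```python
def grid_line_indices(rows: int, cols: int):
    """Index pairs for the horizontal and vertical segments of a rows x cols
    vertex grid, assuming row-major vertex numbering (vertex = r * cols + c)."""
    pairs = []
    for r in range(rows):
        for c in range(cols - 1):
            pairs.append((r * cols + c, r * cols + c + 1))  # horizontal segment
    for c in range(cols):
        for r in range(rows - 1):
            pairs.append((r * cols + c, (r + 1) * cols + c))  # vertical segment
    return pairs

# A 2x2 grid has four segments: two horizontal, two vertical
print(grid_line_indices(2, 2))  # [(0, 1), (2, 3), (0, 2), (1, 3)]
```

Each pair would become one `prim.addVertices(a, b)` call.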
This may not work for sociolinguistics? It’s still a self-supervised learner. And now we can point to the original data for each word. Hmm. I take it back. This could be cool
Realized this morning that there is another way of doing prompts now that I have the tool running. Something like “Once you accept the idea that {}, the next likely step is”. After playing around a bit, I got this prompt working: “once we accept the ‘conspiracy theory’ that ‘the moon landing was faked’, then next theory we deed to verify is that ‘the moon’ is actually just a big-ass TV, and the moon-landing was just an elaborate hoax.” This looks like a promising direction!
SBIRs
Finish paper and get off to MARCOM for checking – done
Got the template paper running in its own folder – done
JuryRoom
Read Tamahau’s paper before meeting – no meeting, but done anyway
GPT Agents
There was a meeting! Next task is to put together the outline for the two papers and distribute the link
Work on paper! Based on the reviewer’s comments, I’m thinking of changing the name of the maps from Belief Maps to “Jukurrpa Charts”, which references the Australian Aboriginal Dreamtime. I think that’s a nice analogy to what language models really are: a system of fixed relationships between latent concepts and language that can be traversed using narrative forms, or “dreamtracks”. The other option would be to use the term Neural Language Maps. The only problem is that NLM also means ‘Neural Language Models’. Maybe Neural Semantic Maps (NSMs)?
The paper got accepted with R&R! Worked with Aaron on roughing out the changes
Made an update to the codebase to just draw the groups since the computational orders data doesn’t have the topic connections. It will be nice to generate quick overviews too.
Used that to create a new GML for our splash figure:
Finish mods to ForceNode and CanvasFrame. Yay! Got everything working the way I want. Need to integrate in the main codebase now.
# Repulsion function. `self` and `n2` should be nodes. This will adjust the
# dx and dy values of `self` relative to `n2`. If you want objects with more
# mass to be given 'more room', set common_mass to zero
def linRepulsion(self, n2: 'ForceNode', common_mass: float = 1.0):
    xDist = self.x - n2.x
    yDist = self.y - n2.y
    distance2 = xDist * xDist + yDist * yDist  # Distance squared
    if distance2 > 0:
        if common_mass == 0:
            common_mass = self.mass
        factor = self.pp.repulsion_scalar * common_mass / distance2
        self.dx += xDist * factor
        self.dy += yDist * factor
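To sanity-check the repulsion math, here is a stand-alone harness (the `FN` and `PP` stubs and the `repulsion_scalar` value are placeholders, not the real ForceNode classes):

```python
class PP:
    repulsion_scalar = 1.0  # placeholder for the real parameter object

class FN:
    """Minimal stand-in for ForceNode, just enough to run linRepulsion()."""
    def __init__(self, x, y, mass=1.0):
        self.x, self.y, self.mass = x, y, mass
        self.dx = self.dy = 0.0
        self.pp = PP()

    def linRepulsion(self, n2, common_mass=1.0):
        xDist = self.x - n2.x
        yDist = self.y - n2.y
        distance2 = xDist * xDist + yDist * yDist  # distance squared
        if distance2 > 0:
            if common_mass == 0:
                common_mass = self.mass
            factor = self.pp.repulsion_scalar * common_mass / distance2
            self.dx += xDist * factor
            self.dy += yDist * factor

n1, n2 = FN(1.0, 0.0), FN(0.0, 0.0)
n1.linRepulsion(n2)
print(n1.dx, n1.dy)  # 1.0 0.0 -- n1 is pushed away from n2 along +x
```

Note that only the caller’s dx/dy are touched; the symmetric push on `n2` comes from calling `n2.linRepulsion(n1)` in the other direction.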
Moved adjust_node to ForceNode and renamed it:
def adjust_mass_size(self, mass: float = 0, size: float = 0, scalar: float = 0):
    # set up the scalar regardless
    if mass == 0 and scalar == 0:
        print("WARNING! CanvasFrame.adjust_node(): mass == 0 and scalar == 0")
        return

    if scalar == 0 and size != 0:
        scalar = size / self.size

    if mass != 0:
        self.mass = mass
    else:
        self.mass *= scalar

    # now just use the scalar. The reason we do this is that the call to
    # self.canvas.coords() returns a SCALED value of coordinates. If we set them
    # directly from size, they will be too big if zoomed out and too small if zoomed in
    self.size *= scalar
    x0, y0, x1, y1 = self.cd.canvas.coords(self.id)
    old_x_size = abs(x1 - x0)
    old_y_size = abs(y1 - y0)
    x_size = old_x_size * scalar
    y_size = old_y_size * scalar
    x_offset = (x_size - old_x_size) / 2.0
    y_offset = (y_size - old_y_size) / 2.0
    self.cd.canvas.coords(self.id, x0 - x_offset, y0 - y_offset, x1 + x_offset, y1 + y_offset)
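The coordinate arithmetic at the end can be checked in isolation: given a bounding box and a scalar, the box should grow or shrink symmetrically about its center. A pure-Python sketch (the function name and sample values are mine, mirroring the offset math above):

```python
def rescale_bbox(x0, y0, x1, y1, scalar):
    """Grow/shrink a canvas bounding box about its center by `scalar`,
    using the same offset math as adjust_mass_size()."""
    old_x_size = abs(x1 - x0)
    old_y_size = abs(y1 - y0)
    x_offset = (old_x_size * scalar - old_x_size) / 2.0
    y_offset = (old_y_size * scalar - old_y_size) / 2.0
    return (x0 - x_offset, y0 - y_offset, x1 + x_offset, y1 + y_offset)

# Doubling a 10x10 box keeps its center at (5, 5)
print(rescale_bbox(0, 0, 10, 10, 2.0))  # (-5.0, -5.0, 15.0, 15.0)
```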
Update the CanvasFrame testbed so that ForceNodes have their weight and size adjusted in real time. Then migrate to the MapBuilder app
Poked at the Atlas algorithm for a while, and some of the ways that mass is used don’t make much sense to me. Making some tweaks. I do have an update working, but I’m not sure if I want to pass in a scalar or an explicit value. Maybe both? If the value is zero, don’t use it?
GPT Agents
4:15 UMBC meeting. Didn’t discuss that much. I need to get started on the Yelp template as soon as the IEEE paper final is submitted
JuryRoom
Went over changes with Jarod. He’ll add an appendix of definitions
Went over progress with Tamahau. I need to review his SLR pdf as well
Finish fixing the code so that there is an option for weights to be the default or calculated. I can use the calculated now for the GML, and then work out how to use the calculated for the live editing.
I still need to adjust the sizes and weights in the live edit
Put the talk together once the code is fixed, and schedule it
GPT Agents
Read Jarod’s SLR – About halfway through
Good meeting with Tony and Panos. I went over the concept and tools, and we had a remarkably wide-ranging discussion. I wound up sending Tony and Panos PDFs of the book draft
Each of us may know much, yet the collective knowledge in our communities is far richer and more powerful when harnessed towards a common end. Moreover, when spread over time and generations, collective contributions to knowledge can generate the forms of cumulative cultural achievement that underlie the extraordinary global success of our species. This theme issue reveals the surprising diversity of contexts over which the emergence and evolution of forms of collective knowledge and cumulative culture have begun to be identified in the natural world and engineered into new technologies. Manifestations range across the cultural evolution of vocal repertoires in birds and whales, primate conventions and technologies, language evolution, social networking, swarm robotics and humanoid robots.
SBIRs
Managed to break something in the conspiracy data that is causing the graph to explode. Chasing down possible causes. It seems to be related to saving the model to the DB
Fixed it. The weight was being set differently on load than while the graph is built. The thing is, the load is probably right, but I need to make that behavior work as the graph is built first
GPT Agents
Nice chat with Antonio about using the tool to explore gamification strategies