I think I want to put together a small command-line app that allows a discussion with the language model. All text from the ongoing conversation is saved and used as the input for the next exchange. A nice touch would be to have some small number of responses to choose from, so the conversation follows that branch.
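A minimal sketch of that loop (generate() here is a hypothetical stand-in for whatever model call this ends up using):

def converse(generate, n_choices: int = 3):
    """Branching chat loop: the full history is the prompt for every call."""
    history = ""
    while True:
        user_text = input("> ")
        if user_text in ("quit", "exit"):
            break
        history += user_text + "\n"
        # offer a few candidate responses and follow the chosen branch
        options = [generate(history) for _ in range(n_choices)]
        for i, opt in enumerate(options):
            print(f"[{i}] {opt}")
        choice = int(input("choose: "))
        history += options[choice] + "\n"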
Come to think of it, that could be a cool artificial JuryRoom/Eliza
Generate compact text for Sim to try training
Look into W2V 3D embedding of outputs, and mapping to adjacent outputs (The wo/man walked into the room). We know that there should be some level of alignment
Graphs are one of the fundamental data structures in machine learning applications. Specifically, graph-embedding methods are a form of unsupervised learning, in that they learn representations of nodes using the native graph structure. Training data in mainstream scenarios such as social media predictions, Internet of Things (IoT) pattern detection, or drug-sequence modeling are naturally represented using graph structures. Any one of those scenarios can easily produce graphs with billions of interconnected nodes. While the richness and intrinsic navigation capabilities of graph structures are a great playground for machine learning models, their complexity poses massive scalability challenges. Not surprisingly, the support for large-scale graph data structures in modern deep learning frameworks is still quite limited. Recently, Facebook unveiled PyTorch BigGraph, a new framework that makes it much faster and easier to produce graph embeddings for extremely large graphs in PyTorch models.
GOES
Add composite rotation vector to ddict output. It’s kind of doing what it’s supposed to
Think about an NN to find optimal contributions? Or a simultaneous solution of the scalars to produce the best approximation of the line? I think this is the way to go. I found pymoo: Multi-objective Optimization in Python
Our framework offers state-of-the-art single- and multi-objective optimization algorithms and many more features related to multi-objective optimization, such as visualization and decision making. Going to ask Vadim to see if it can be used for our needs.
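A minimal sketch of what using it might look like (import paths per recent pymoo docs; the ZDT1 test problem is just a placeholder for our actual objectives):

from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.problems import get_problem
from pymoo.optimize import minimize

# ZDT1 is a standard two-objective benchmark; we'd swap in a Problem
# subclass that scores our scalar contributions against the target line
problem = get_problem("zdt1")
algorithm = NSGA2(pop_size=100)
res = minimize(problem, algorithm, ("n_gen", 200), seed=1, verbose=False)
print(res.X.shape, res.F.shape)  # Pareto-set decision variables and objectives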
Incorporating context into word embeddings – as exemplified by BERT, ELMo, and GPT-2 – has proven to be a watershed idea in NLP. Replacing static vectors (e.g., word2vec) with contextualized word representations has led to significant improvements on virtually every NLP task. But just how contextual are these contextualized representations?
With the increasing ubiquity of natural language processing (NLP) algorithms, interacting with “conversational artificial agents” such as speaking robots, chatbots, and personal assistants will be an everyday occurrence for most people. In a rather innocuous sense, we can perform a variety of speech acts with them, from asking a question to telling a joke, as they respond to our input just as any other agent would.
Book
Write some of the “Attention + Dominance” paper/chapter outline for Antonio. It’s important to mention that these are monolithic models. It could be a nice place for the Sanhedrin 17a discussion too.
GOES
Rework primary_axis_rotations.py to use least-squares. It’s looking pretty good!
It’s still not right, dammit! I’m beginning to wonder if the rwheels are correct? Wheels 1 and 4 are behaving oddly, and maybe 3. It’s like they may be spinning the wrong way?
Nope, it looks like it is the way the reaction wheel contributions are being calculated?
With these challenges in mind, we built and open-sourced the Language Interpretability Tool (LIT), an interactive platform for NLP model understanding. LIT builds upon the lessons learned from the What-If Tool with greatly expanded capabilities, which cover a wide range of NLP tasks including sequence generation, span labeling, classification and regression, along with customizable and extensible visualizations and model analysis. (GitHub)
Read and annotate Michelle’s outline, and add something about attention. That’s also the core of my response to Antonio
More cults
2:00 Meeting
Thinking about how design must address American Gnosticism, the dangers and opportunities of online “research”, and how things like maps and diversity injection can potentially make profound impacts
GOES
Update test code to use least squares/quaternion technique
Looks like we are getting close to ingesting all the new data
Had a meeting with Ashwag last night (Note – we need to move the time), and the lack of ‘story-ness’ in the training set is really coming out in the model. The meta information works perfectly, but it’s wrapped around stochastic tweets, since there is no threading. I think there needs to be some topic structure in the meta information that allows similar topics to be grouped sequentially in the training set.
3:30 Meeting
GOES
9:30 meeting
Update code with new limits on how small a step can be. Done, but I’m still having problems with the normals. It could be because I’m normalizing the contributions?
Need to fix the angle rollover in vehicle (and reference?) frames. I don’t think that it will fix anything though. I just don’t get why the satellite drifts after 70-ish degrees:
There is something not right in the normal calculation?
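For the rollover part, at least, the fix should be a simple wrap (hypothetical helper name):

def wrap_deg(angle: float) -> float:
    # map any angle in degrees into [-180, 180) so frame-to-frame
    # deltas stay small instead of jumping by ~360 at the rollover
    return (angle + 180.0) % 360.0 - 180.0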
Got my panda3D text positioning working. I create a node and then attach the text to it. I think that’s not actually needed; all you have to do is attach to render rather than the a2* options. Here’s how it works:
from panda3d.core import LineSegs, NodePath, TextNode

for n in self.tuple_list:
    ls: LineSegs = self.lp.get_LineSeg(n)
    node = TextNode(n)
    node.setText(n)
    # attach to render (world space) rather than the aspect2d options
    tnp: NodePath = self.render.attachNewNode(node)
    # tnp.set_pos(-1, ypos, 1)
    tnp.setScale(0.1)
    self.text_node_dict[n] = tnp
I then access the node paths through the dict
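For example (hypothetical key and position):

tnp: NodePath = self.text_node_dict["test_line"]
tnp.setPos(x, y, z)  # reposition the label in world coordinates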
Book
Spent a good chunk of the morning discussing the concept of dominance hierarchies and how they affect networks with Antonio
Need to write some, dammit!
GOES
My abstract has been accepted at the Military Operations Research Symposium’s (MORS) four-day workshop in November!
More Replayer. Working on text nodes. Done! It looks good, and is pointing out some REALLY ODD THINGS. I mean, the reaction wheel axes are not staying with the vehicle frame…
10:00 meeting with Vadim
The (well, a) problem was that the reaction wheel vectors weren’t being reset each time, so the multiplies accumulated. Fixed! Now we have some additional problems, but these may be more manageable:
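A sketch of the shape of the fix, in numpy terms (names are hypothetical):

import numpy as np

def rotate_wheel_axes(base_axes: np.ndarray, attitude: np.ndarray) -> np.ndarray:
    """Start from the stored body-frame axes every step and apply the
    current attitude once, rather than re-rotating last step's vectors
    (which is what let the multiplies accumulate)."""
    # base_axes: (n_wheels, 3), attitude: (3, 3) rotation matrix
    return base_axes @ attitude.T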
Got the points moving based on the spreadsheet. I need to label them. It looks pretty straightforward to use 3D positions? I may have to use Billboards to keep things pointing at the camera
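Panda3D has a built-in call for that, so it should just be something like (reusing the text_node_dict from the label code):

# make every label node rotate to face the camera each frame
for tnp in self.text_node_dict.values():
    tnp.setBillboardPointEye()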
Look into Prof. Kristin Du Mez (Calvin College – @kkdumez)’s book (Jesus and John Wayne)?
More writing
Meeting with Michelle. Came up with an interesting tangent about programming/deprogramming wrt software development and cults
GPT-2 Agents
Adding an optional split regex to parse_substrings. Here’s how I wound up doing it. This turns out to be slightly tricky, because the matches don’t include the text we’re interested in, so we have a step that splits out all the individual texts. We also throw away the text before the first match and the text after the last one, since both can be assumed to be incomplete.
import re

# the lookahead keeps the "On <Month> of <year>," delimiter with its tweet
split_regex = re.compile(r"(?=On [a-zA-Z]+ of [0-9]+,)")
split_iter = split_regex.finditer(tweet_str)
start = 0
tweet_list = []
for s in split_iter:
    # the fragment before the first match lands at tweet_list[0];
    # the fragment after the last match is never appended
    t = tweet_str[start: s.start()]
    tweet_list.append(t)
    start = s.start()
Shimei’s presentation went well!
Work on translation
GOES
Start on playback of the vehicle and reference frames
A historian believes he has discovered iron laws that predict the rise and fall of societies. He has bad news.
GPT-2 Agents
Tried Sim’s model, it’s very nice!
Created a base class for creating and parsing tweets
Found a regex that will find any text between two tokens. Thanks, stackoverflow!
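The pattern is basically a lazy capture bracketed by lookarounds; something like this (the tokens here are just for illustration):

import re

# non-greedy match between two literal tokens, excluding the tokens themselves
text = 'Location = Saudi Arabia\nSentiment = Neutral'
m = re.search(r"(?<=Location = )(.*?)(?=\n)", text)
if m:
    print(m.group(0))  # -> Saudi Arabia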
Here’s an example. I need to look into how large the meta information should be before it starts affecting the trajectory
On July of 2020, @MikenzieCromwell (screen name "Mikenzie Cromwell", 838 followers) posted a tweet from Boston, MA. They were using Twitter Web App. The post had 0 replies, 0 quotes, 1 retweets, and 3 favorites. "An example of the importance of the #mentalhealth community's response to #COVID19 is being featured in the @WorldBank survey. Check out the latest #MentalHealthResponse survey data on the state of mental health services in the wake of the pandemic. https://t.co/9qrq4G4XJi"
GOES
More Replayer
Got the vertex manipulation! It’s hard to get at it through the geometry, but if you just save the LineSegs object,
ls = LineSegs(name)
self.prim_dict[name] = ls
you can manipulate that directly
ls: LineSegs = self.lp.get_LineSeg("test_line")
ls.setVertex(1, x, y, z)
Meeting with Vadim at 10:00. We found some pretty bad code that sets the torques on the reaction wheels
Splitting the results on the probes. It looks like the second tweet in a series is better formed. That kind of makes sense, because the second tweet is based on the first. That leads to an interesting idea. Maybe we should try building chains of text, using the result from the previous generation as the prompt for the next.
Generating text with 1000 chars and parsing it, throwing away the first and last element in the list. I can also parse out the tweet, location, and sentiment (a parsing sketch follows the examples):
[1]: In December of 2019, @svsvzz (21046 followers, 21784 following) posted a retweet, mentioning @ArticleSpot. It was sent from Saudi Arabia. @svsvzz wrote: "RT @ArticleSpot New update # Comment_study is coming..". The retweet was categorized as "Neutral".
Location = Saudi Arabia
Sentiment = Neutral
Tweet = RT @ArticleSpot New update # Comment_study is coming..
[2]: In December of 2019, @HussainALhamad (2340 followers, 29 following) posted a retweet, mentioning @ejazah_ksa. It was sent from Riyadh, Saudi Arabia. @HussainALhamad wrote: "RT @ejazah_ksa Poll: Do you support #Suspension of studying in #Riyadh tomorrow, Monday? If you support (Retweet) If you do not support (Like)". The retweet was categorized as "Positive".
Location = Riyadh, Saudi Arabia
Sentiment = Positive
Tweet = RT @ejazah_ksa Poll: Do you support #Suspension of studying in #Riyadh tomorrow, Monday? If you support (Retweet) If you do not support (Like)
[3]: In December of 2019, @mahfouz_nour (11 followers, 57 following) posted a tweet. She wrote: "♪ And the rest of the news about a news that the study was suspended in the study ♪ ♪ And God bless you ♪ ♪ Now ♪". The tweet was categorized as "Negative".
Location = None
Sentiment = Negative
Tweet = ♪ And the rest of the news about a news that the study was suspended in the study ♪ ♪ And God bless you ♪ ♪ Now ♪
[4]: In December of 2019, @tansh99huda99 (1211 followers, 519 following) posted a retweet, mentioning @HashKSA. @tansh99huda99 wrote: "RT @HashKSA # comments on Monday at all schools for students in #Dahan, and the decision does not include teachers' and teachers' levels.". The retweet was categorized as "Neutral".
Location = None
Sentiment = Neutral
Tweet = RT @HashKSA # comments on Monday at all schools for students in #Dahan, and the decision does not include teachers' and teachers' levels.
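The field extraction mentioned above can be a few regexes over each generated passage; a hypothetical sketch matching the probe format:

import re

def parse_fields(text: str) -> dict:
    """Pull location, sentiment, and tweet body out of one generated probe."""
    def grab(pattern: str):
        m = re.search(pattern, text)
        return m.group(1) if m else None
    return {
        "location": grab(r"sent from ([^.]+)\."),
        "sentiment": grab(r'categorized as "([^"]+)"'),
        "tweet": grab(r'wrote: "(.+?)"\.'),
    }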
Created some slides. I think they look pretty good:
Nearly half of the voters have seen Trump in all of his splendor—his infantile tirades, his disastrous and lethal policies, his contempt for democracy in all its forms—and they decided that they wanted more of it.
Added some code that makes it easier to compare countries and states and produced an animated GIF. I’m more concerned about Maryland now!
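The animation part is straightforward matplotlib (region_dict and num_days are assumed to come from the comparison code):

import matplotlib.pyplot as plt
from matplotlib.animation import FuncAnimation, PillowWriter

fig, ax = plt.subplots()

def draw_frame(day: int):
    # redraw every region's curve up through the current day
    ax.clear()
    for name, series in region_dict.items():
        ax.plot(series[: day + 1], label=name)
    ax.legend(loc="upper left")

anim = FuncAnimation(fig, draw_frame, frames=num_days)
anim.save("regions.gif", writer=PillowWriter(fps=5))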
Went down to DC yesterday. So weird to see the White House behind multiple sets of walls, like a US base in Afghanistan
Dentist at 3:00
GPT-2 Agents
Working on generating a new normalized data set. It needs to be much smaller to get results by the end of the week. Done. It takes a couple of passes through the data to get the totals needed for percentages, but it seems to be working well.
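The two-pass shape, roughly (records is an assumed list of (key, count) rows):

from collections import defaultdict

totals = defaultdict(int)
for key, count in records:      # pass 1: accumulate group totals
    totals[key] += count

percentages = [(key, 100.0 * count / totals[key])
               for key, count in records]  # pass 2: normalize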
Restarted training
Topic extraction from Tweet content
GOES
Started working on 3D view of what’s going on with the two frames. I think I’m just going to have to start over with a smaller codebase though, if Vadim can’t find what’s going on in his code.