In the past decade, we have witnessed the rise of deep learning to dominate the field of artificial intelligence. Advances in artificial neural networks alongside corresponding advances in hardware accelerators with large memory capacity, together with the availability of large datasets enabled practitioners to train and deploy sophisticated neural network models that achieve state-of-the-art performance on tasks across several fields spanning computer vision, natural language processing, and reinforcement learning. However, as these neural networks become bigger, more complex, and more widely used, fundamental problems with current deep learning models become more apparent. State-of-the-art deep learning models are known to suffer from issues that range from poor robustness, inability to adapt to novel task settings, to requiring rigid and inflexible configuration assumptions. Collective behavior, commonly observed in nature, tends to produce systems that are robust, adaptable, and have less rigid assumptions about the environment configuration. Collective intelligence, as a field, studies the group intelligence that emerges from the interactions of many individuals. Within this field, ideas such as self-organization, emergent behavior, swarm optimization, and cellular automata were developed to model and explain complex systems. It is therefore natural to see these ideas incorporated into newer deep learning methods. In this review, we will provide a historical context of neural network research’s involvement with complex systems, and highlight several active areas in modern deep learning research that incorporate the principles of collective intelligence to advance its capabilities. We hope this review can serve as a bridge between the complex systems and deep learning communities.
Chat with Dave tonight? – Need to send links to papers, OpenAI, Stable Diffusion thread, Paul Scharre’s books, etc
Fill out reimbursement forms – done
Travel to Chirp – tried in Concur. Hopeless mess
Experiment logger meeting – done
Reached out to Dr. Giddings on format of white paper
Change table_user to have user ID as unique, primary key and see if update into works right – done
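The upsert behavior above can be sketched with stdlib sqlite3. This is a minimal sketch, assuming a `name` column alongside `user_id` (the real table's other columns aren't in the journal):

```python
import sqlite3

# Sketch of the table_user change: user_id as the unique primary key,
# so re-inserting an existing user updates in place instead of
# duplicating rows. The "name" column is hypothetical.
con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE table_user (user_id INTEGER PRIMARY KEY, name TEXT)")

def upsert_user(user_id, name):
    con.execute(
        "INSERT INTO table_user (user_id, name) VALUES (?, ?) "
        "ON CONFLICT(user_id) DO UPDATE SET name = excluded.name",
        (user_id, name),
    )

upsert_user(1, "alice")
upsert_user(1, "alicia")  # same key: updates the row, no duplicate
rows = con.execute("SELECT user_id, name FROM table_user").fetchall()
print(rows)  # [(1, 'alicia')]
```

`ON CONFLICT ... DO UPDATE` needs SQLite 3.24+; `INSERT OR REPLACE` is the older fallback.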
Add checkboxes for optional user attributes (requires downloading users for tweets)
No meeting is scheduled, so write up a status report
We offer comprehensive evidence of preferences for ideological congruity when people engage with politicians, pundits, and news organizations on social media. Using 4 years of data (2016–2019) from a random sample of 1.5 million Twitter users, we examine three behaviors studied separately to date: (i) following of in-group versus out-group elites, (ii) sharing in-group versus out-group information (retweeting), and (iii) commenting on the shared information (quote tweeting). We find that the majority of users (60%) do not follow any political elites. Those who do follow in-group elite accounts at much higher rates than out-group accounts (90 versus 10%), share information from in-group elites 13 times more frequently than from out-group elites, and often add negative comments to the shared out-group information. Conservatives are twice as likely as liberals to share in-group versus out-group content. These patterns are robust, emerge across issues and political elites, and exist regardless of users’ ideological extremity.
Jim Donnies (winterize and generator) – done
9:30 RCSNN design discussion – done
2:00 Meeting with Loren – done
Write up some sort of trip report
Reach out to folks from conference – done
Start on distributed data dictionary? Kind of?
Roll in Brenda’s Changes – continuing
Ping Ryan for chapter/paper/article on authoritarians and sociotechnical systems
Add cluster ID to console text when a node is clicked and a button to “exclude topic” that adds an entry to “table_exclude” that has experiment_id, keyword (or “all_keywords”), and cluster_id. These clusters are excluded when a corpus is generated.
Re-clustering will cause these rows to be deleted from the table
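The exclude-table logic above can be sketched in a few lines of sqlite3. Schema follows the journal (experiment_id, keyword or "all_keywords", cluster_id); the function names are hypothetical:

```python
import sqlite3

# Sketch of table_exclude: topics excluded from corpus generation,
# invalidated when the experiment is re-clustered.
con = sqlite3.connect(":memory:")
con.execute(
    "CREATE TABLE table_exclude ("
    "experiment_id INTEGER, keyword TEXT, cluster_id INTEGER)"
)

def exclude_topic(experiment_id, keyword, cluster_id):
    con.execute(
        "INSERT INTO table_exclude VALUES (?, ?, ?)",
        (experiment_id, keyword, cluster_id),
    )

def on_recluster(experiment_id):
    # Re-clustering assigns new cluster ids, so the old rows are stale.
    con.execute(
        "DELETE FROM table_exclude WHERE experiment_id = ?", (experiment_id,)
    )

exclude_topic(1, "all_keywords", 7)
on_recluster(1)
count = con.execute("SELECT COUNT(*) FROM table_exclude").fetchone()[0]
print(count)  # 0
```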
Add training corpora generation with checkboxes for meta-wrappers and dropdown for “before” or “after”
I have gotten to the point where I am proud of my regex-fu: r”[^\d^\w^\s^[:punct:]]”
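One wrinkle: Python's `re` module doesn't support POSIX classes like `[:punct:]` inside character classes, so that pattern won't behave as written. A sketch of what I read the intent to be (find characters that aren't digits, word characters, whitespace, or punctuation, e.g. emoji), reconstructed with `string.punctuation`:

```python
import re
import string

# Hypothetical reconstruction of the journal regex's intent: match
# any character that is NOT a digit, word char, whitespace, or
# ASCII punctuation. Python re has no [:punct:], so spell it out.
non_standard = re.compile(r"[^\d\w\s" + re.escape(string.punctuation) + r"]")

text = "hello, world 123 😀"
print(non_standard.findall(text))  # ['😀']
```

Note `\w` already covers `\d`, so the class is slightly redundant but reads closer to the original.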
I’m at the MORS conference and did my first presentation to people in three years. Very pleasant. Based on how ML is going over in the other areas, I think MOR is about 2 years behind where I am, which is about a year behind SOTA. This includes large companies like Raytheon. Transfer learning is magic, people are still working with RNNs and LSTMs, and no one knows about Transformers and Attention.
Dataiku is pretty neat, but doesn’t learn from its users what works best for a dataset. So odd
Had a good chat with Dr. Ryan Barrett about how authoritarian leaders in general and Putin in particular trap themselves in an ever more extreme echo chamber by controlling the media in such a way that the egalitarian voices that use mockery are silenced, while the more extreme hierarchicalists are allowed to continue.
Got the Explore option, where subsampled, clickable nodes are drawn to the canvas, working. I was able to find out that several clusters in my pull were in German.
Need to save all this to the DB next. Started on the buttons. I will also need to load up these other fields, so the retreive_tweet_data_callback() will have to be changed
We show that combining human prior knowledge with end-to-end learning can improve the robustness of deep neural networks by introducing a part-based model for object classification. We believe that the richer form of annotation helps guide neural networks to learn more robust features without requiring more samples or larger models. Our model combines a part segmentation model with a tiny classifier and is trained end-to-end to simultaneously segment objects into parts and then classify the segmented object. Empirically, our part-based models achieve both higher accuracy and higher adversarial robustness than a ResNet-50 baseline on all three datasets. For instance, the clean accuracy of our part models is up to 15 percentage points higher than the baseline’s, given the same level of robustness. Our experiments indicate that these models also reduce texture bias and yield better robustness against common corruptions and spurious correlations. The code is publicly available at this https URL.
Named Tensors allow users to give explicit names to tensor dimensions. In most cases, operations that take dimension parameters will accept dimension names, avoiding the need to track dimensions by position. In addition, named tensors use names to automatically check that APIs are being used correctly at runtime, providing extra safety. Names can also be used to rearrange dimensions, for example, to support “broadcasting by name” rather than “broadcasting by position”.
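The name-checking idea can be illustrated without the library. This toy class is NOT the real PyTorch API, just a minimal sketch of matching and rearranging dimensions by name rather than by position:

```python
# Toy illustration of named dimensions -- not the PyTorch API.
class NamedDims:
    def __init__(self, **dims):  # e.g. NamedDims(N=2, C=3)
        self.dims = dims

    def align_to(self, *names):
        # Rearrange dimensions by name instead of by position.
        missing = [n for n in names if n not in self.dims]
        if missing:
            raise ValueError(f"unknown dims: {missing}")
        return NamedDims(**{n: self.dims[n] for n in names})

    def add(self, other):
        # "Broadcasting by name": dims pair up by name, and any
        # shared name must agree in size -- a runtime safety check.
        for name, size in other.dims.items():
            if name in self.dims and self.dims[name] != size:
                raise ValueError(f"size mismatch on {name!r}")
        return NamedDims(**{**self.dims, **other.dims})

x = NamedDims(N=2, C=3)
y = NamedDims(C=3, H=4)
print(x.add(y).dims)  # {'N': 2, 'C': 3, 'H': 4}
```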
9:00 Sprint review – done
Added stories for next sprint
2:00 MDA Meeting – done. Loren’s sick
Set up overleaf for Q3 – done
Go to DC for forum
Load up laptop
Get some more done on embedding? Yup, split out each step so that changing clustering (very fast) doesn’t have to wait for loading and manifold reduction
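The split can be sketched as three functions with a shared cache, so only the cheap clustering step reruns when its parameters change. All names and the stand-in math are hypothetical (the real steps would be embedding load, UMAP/t-SNE, and a proper clusterer):

```python
# Sketch: split the pipeline so re-clustering (fast) doesn't redo
# loading or manifold reduction (slow). Names are hypothetical.
_cache = {}

def load_embeddings():
    if "raw" not in _cache:  # expensive: do once
        _cache["raw"] = [[float(i + j) for j in range(4)] for i in range(10)]
    return _cache["raw"]

def reduce_manifold(raw):
    if "reduced" not in _cache:  # expensive: do once
        # stand-in for UMAP/t-SNE: keep the first two coordinates
        _cache["reduced"] = [row[:2] for row in raw]
    return _cache["reduced"]

def cluster(reduced, threshold):
    # trivial threshold "clustering" -- cheap, safe to rerun freely
    return [0 if x < threshold else 1 for x, _ in reduced]

raw = load_embeddings()
red = reduce_manifold(raw)
labels_a = cluster(red, 3.0)  # changing clustering params...
labels_b = cluster(red, 6.0)  # ...reuses cached load + reduction
```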
Save everything back to the DB and make sure the reduced embeddings and clusters are loaded if available
Rising partisan animosity is associated with a reduction in support for democracy and an increase in support for political violence. Here we provide a multi-level review of interventions designed to reduce partisan animosity, which we define as negative thoughts, feelings and behaviours towards a political outgroup. We introduce the TRI framework to capture three levels of intervention—thoughts (correcting misconceptions and highlighting commonalities), relationships (building dialogue skills and fostering positive contact) and institutions (changing public discourse and transforming political structures)—and connect these levels by highlighting the importance of motivation and mobilization. Our review encompasses both interventions conducted as part of academic research projects and real-world interventions led by practitioners in non-profit organizations. We also explore the challenges of durability and scalability, examine self-fulfilling polarization and interventions that backfire, and discuss future directions for reducing partisan animosity.
Finish MORS slides. Need a fortification map bridge, and then back to the main deck. Done!
Started to add the 3d visualization. Panda3d is in and loading the hierarchy. Need to put in the trivial top level that moves an object with the data dictionary
Pinged Shimei and got a response. Restart meetings next Wednesday after the conference?
We propose and explore the possibility that language models can be studied as effective proxies for specific human sub-populations in social science research. Practical and research applications of artificial intelligence tools have sometimes been limited by problematic biases (such as racism or sexism), which are often treated as uniform properties of the models. We show that the “algorithmic bias” within one such tool — the GPT-3 language model — is instead both fine-grained and demographically correlated, meaning that proper conditioning will cause it to accurately emulate response distributions from a wide variety of human subgroups. We term this property “algorithmic fidelity” and explore its extent in GPT-3. We create “silicon samples” by conditioning the model on thousands of socio-demographic backstories from real human participants in multiple large surveys conducted in the United States. We then compare the silicon and human samples to demonstrate that the information contained in GPT-3 goes far beyond surface similarity. It is nuanced, multifaceted, and reflects the complex interplay between ideas, attitudes, and socio-cultural context that characterize human attitudes. We suggest that language models with sufficient algorithmic fidelity thus constitute a novel and powerful tool to advance understanding of humans and society across a variety of disciplines.
1:00 MDA presentation – done!
Finish MORS deck – nope, but closer. Need to have a slide that shows we are building small maps – almost like fortification scale