Political bots are social media algorithms that impersonate political actors and interact with other users, aiming to influence public opinion. This study investigates the ability to differentiate bots with partisan personas from humans on Twitter. Our online experiment (N = 656) explores how various characteristics of the participants and of the stimulus profiles bias recognition accuracy. The analysis reveals asymmetrical partisan-motivated reasoning, in that conservative profiles appear to be more confusing and Republican participants perform less well in the recognition task. Moreover, Republican users are more likely to confuse conservative bots with humans, whereas Democratic users are more likely to confuse conservative human users with bots. We discuss implications for how partisan identities affect motivated reasoning and how political bots exacerbate political polarization.
While many theoretical studies have revealed the strategies that could lead to and maintain cooperation in the Iterated Prisoner's Dilemma, less is known about what human participants actually do in this game and how their strategies change when they are confronted with anonymous partners in each round. Previous attempts used short experiments, made different assumptions about possible strategies, and led to very different conclusions. We present here two long treatments that differ in the partner-matching scheme used, i.e. fixed or shuffled partners. We use unsupervised methods to cluster the players based on their actions and then a Hidden Markov Model to infer the memory-one strategies in each cluster. Analysis of the inferred strategies reveals that fixed-partner interaction leads to behavioral self-organization. Shuffled partners generate subgroups of memory-one strategies that remain entangled, apparently blocking the self-selection process that leads to fully cooperating participants in the fixed-partner treatment. Analyzing the latter in more detail shows that AllC, AllD, TFT-like, and WSLS-like behavior can be observed. This study also reveals that long treatments are needed, as experiments with fewer than 25 rounds capture mostly the learning phase participants go through in these kinds of experiments.
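The strategy classes named in the abstract (AllC, AllD, TFT, WSLS) are all memory-one strategies: each maps the previous round's outcome to a probability of cooperating next. A minimal sketch of that idea (the deterministic textbook versions, not the paper's inferred probabilities; `play` is a hypothetical helper):

```python
# A memory-one strategy maps the previous outcome (my_move, their_move)
# to the probability of cooperating next round ('C' = cooperate, 'D' = defect).
STRATEGIES = {
    "AllC": {("C","C"):1, ("C","D"):1, ("D","C"):1, ("D","D"):1},
    "AllD": {("C","C"):0, ("C","D"):0, ("D","C"):0, ("D","D"):0},
    "TFT":  {("C","C"):1, ("C","D"):0, ("D","C"):1, ("D","D"):0},  # copy partner's last move
    "WSLS": {("C","C"):1, ("C","D"):0, ("D","C"):0, ("D","D"):1},  # win-stay, lose-shift
}

def play(strat_a, strat_b, rounds, start=("C", "C")):
    """Deterministically iterate two memory-one strategies from a starting outcome."""
    a, b = start
    history = [start]
    for _ in range(rounds - 1):
        a, b = ("C" if strat_a[(a, b)] else "D",
                "C" if strat_b[(b, a)] else "D")
        history.append((a, b))
    return history
```

For example, TFT against AllD collapses to mutual defection within two rounds, while two WSLS players recover cooperation after a single mismatched round.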
Finished up alignment in belief space, started on lists, stories, games and maps. I also downloaded the whole project and stuck it in subversion. Don’t want to lose it
Found this today: transdiffusion.org: Founded in 1964, Transdiffusion’s huge archive of television and radio material is provided free to people wishing to learn more about the history of broadcasting in the UK
Working on the Humans and Information chapter. Needs a LOT of work
Yet more timesheet crap
Got the graphics loading from the config file. Need to arrange in a hierarchy and draw a module that gets its information from the data dictionary
Antigenic cartography has its roots in a mathematical technique called “multidimensional scaling,” which has been around since the 1960s. The algorithm uses data about the distances between pairs of objects to reconstruct a map of the objects’ relative locations. For example, if you had a table that lists the distances between a bunch of U.S. cities—like you might find in a road atlas—you could use a multidimensional scaling algorithm to reconstruct a map of those cities based solely on the distances between them. (IEEE Spectrum – The Algorithm that Mapped Omicron Shows a Path Forward)
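The city-distance example above can be reproduced in a few lines with classical (Torgerson) multidimensional scaling — double-center the squared distance matrix and take the top eigenvectors. A minimal sketch with NumPy, using a synthetic 3-4-5 triangle in place of a real road-atlas table:

```python
import numpy as np

def classical_mds(D, k=2):
    """Recover k-dimensional coordinates (up to rotation/reflection)
    from a symmetric matrix D of pairwise distances."""
    n = D.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n       # centering matrix
    B = -0.5 * J @ (D ** 2) @ J               # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)            # eigenvalues in ascending order
    idx = np.argsort(vals)[::-1][:k]          # keep the k largest
    scale = np.sqrt(np.clip(vals[idx], 0, None))
    return vecs[:, idx] * scale

# Toy "distance table": three points forming a 3-4-5 right triangle.
D = np.array([[0., 3., 5.],
              [3., 0., 4.],
              [5., 4., 0.]])
coords = classical_mds(D)
```

The recovered coordinates reproduce the input distances exactly; with real city distances (which are noisy and non-Euclidean), iterative MDS variants minimize the mismatch instead.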
Read the epilogue to Aaron last night and made some tweaks. I need to work on the suggestions
Got a firm “no” and no leads from Kendall Hunt. Sigh
Need to do informed consent, recruiting flyers and emails
Hyperparameter (HP) tuning in deep learning is an expensive process, prohibitively so for neural networks (NNs) with billions of parameters. We show that, in the recently discovered Maximal Update Parametrization (muP), many optimal HPs remain stable even as model size changes. This leads to a new HP tuning paradigm we call muTransfer: parametrize the target model in muP, tune the HPs indirectly on a smaller model, and zero-shot transfer them to the full-sized model, i.e., without directly tuning the latter at all. We verify muTransfer on Transformer and ResNet. For example, 1) by transferring pretraining HPs from a model of 13M parameters, we outperform published numbers of BERT-large (350M parameters), with a total tuning cost equivalent to pretraining BERT-large once; 2) by transferring from 40M parameters, we outperform published numbers of the 6.7B GPT-3 model, with tuning cost only 7% of total pretraining cost. A PyTorch implementation of our technique can be found at this http URL and is installable via `pip install mup`.
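The workflow the abstract describes — sweep HPs on a cheap proxy model, then reuse the winner unchanged on the full-sized model — can be sketched abstractly. This is a conceptual toy, not the real `mup` API: `toy_loss` is a synthetic stand-in constructed so that the optimal learning rate is width-independent, mimicking the stability that muP provides for real networks:

```python
def toy_loss(width, lr):
    # Synthetic loss surface, minimized at lr = 0.01 for every width.
    # In muP, optimal HPs are (approximately) stable across widths like this;
    # in standard parametrization they would shift with width.
    return (lr - 0.01) ** 2 + 1.0 / width

def tune(width, grid):
    """Grid-search the learning rate on a model of the given width."""
    return min(grid, key=lambda lr: toy_loss(width, lr))

grid = [0.001, 0.003, 0.01, 0.03, 0.1]
best_lr = tune(width=128, grid=grid)            # cheap sweep on the small proxy
full_loss = toy_loss(width=8192, lr=best_lr)    # zero-shot transfer: no tuning at full size
```

The point of the paradigm is that the expensive sweep at `width=8192` is never run; under muP the HP found at `width=128` is (approximately) already optimal there.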
“It was certainly not a job I’d tell my friends and family about with pride. When they asked what I did at ByteDance, I usually told them I deleted posts (删帖). Some of my friends would say, “Now I know who gutted my account.” The tools I helped create can also help fight dangers like fake news. But in China, one primary function of these technologies is to censor speech and erase collective memories of major events, however infrequently this function gets used.”