Just Audio – bouncing around, seeing how to fix the Philco
Bottling Plant – Set up appt for tomorrow? Nope, the 16th at 11:00
FSK parking options? – Set up appt for next Thursday at 10:00. Maybe
Roll in edits – Done! Pinged NoStarch too, and updated the repo.
Wow – the book is kind of done
Registered with Santander X. Need the LLC info next, but this could be useful for startup help
LLM stuff
2:30 LLM meeting. Make sure AWS instance is up – done. We still can't really agree on what the article is. I think my section should argue that Grok is being deliberately shaped to work from Musk's perspective, as a propaganda tool. However, sycophantic chatbots are potentially worse. Introduce totalitarianism, with atomization as a precondition. Sycophantic chatbots can act to concentrate or atomize users based on their deep biases. In the end, this could conceivably create, on one hand, a monolithic movement of people who have lost their identity to the movement and, on the other, disperse the natural resistance to that movement to the point of ineffectualness.
And add something about social dominance orientation
I do think I can move the last two paragraphs over to the conclusions though.
The length of acknowledgements in scholarly books varies substantially by authors' demographics. Women, people of color, and LGBQ scholars tend to write more about the people, resources, and conditions that made their books possible. So, too, do younger scholars and those whose parents have graduate degrees. These differences may indicate both the increased size of the support networks these authors draw on, and/or a greater awareness of those networks' value.
Mine is 1,300 words. Need to see where that places me.
Tasks
Start on the office bookshelves – started
Ping Nellie – done
Ping FSK and the Bottling Plant for a tour
Update the spreadsheet
Edge lawn?
SBIRs
Slides – done
9:00 Sprint Demos – done
Stories – done
12:00 GFE meeting – done
3:00 Sprint planning – done
LLM stuff
Start editing the CACM opinion piece. Write a paragraph that weaves together the following:
The relationship between computing systems and the brain has served as motivation for pioneering theoreticians since John von Neumann and Alan Turing. Uniform, scale-free biological networks, such as the brain, have powerful properties, including generalizing over time, which is the main barrier for Machine Learning on the path to Universal Reasoning Models. We introduce `Dragon Hatchling’ (BDH), a new Large Language Model architecture based on a scale-free biologically inspired network of $n$ locally-interacting neuron particles. BDH couples strong theoretical foundations and inherent interpretability without sacrificing Transformer-like performance. BDH is a practical, performant state-of-the-art attention-based state space sequence learning architecture. In addition to being a graph model, BDH admits a GPU-friendly formulation. It exhibits Transformer-like scaling laws: empirically BDH rivals GPT2 performance on language and translation tasks, at the same number of parameters (10M to 1B), for the same training data. BDH can be represented as a brain model. The working memory of BDH during inference entirely relies on synaptic plasticity with Hebbian learning using spiking neurons. We confirm empirically that specific, individual synapses strengthen connection whenever BDH hears or reasons about a specific concept while processing language inputs. The neuron interaction network of BDH is a graph of high modularity with heavy-tailed degree distribution. The BDH model is biologically plausible, explaining one possible mechanism which human neurons could use to achieve speech. BDH is designed for interpretability. Activation vectors of BDH are sparse and positive. We demonstrate monosemanticity in BDH on language tasks. Interpretability of state, which goes beyond interpretability of neurons and model parameters, is an inherent feature of the BDH architecture.
It really makes me think that it would be a good time to revisit lateral inhibition / hierarchical stimulation.
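To remind myself what I mean: lateral inhibition is the classic way to get the kind of sparse, positive activations the BDH abstract describes – each unit suppresses its neighbors in proportion to their activity, so only the strongest responses survive. A toy sketch in plain numpy, not tied to BDH or anything in our codebase:

```python
# Toy lateral inhibition: each unit is suppressed by the average activity of
# the other units, then clamped at zero, which sparsifies the vector.
import numpy as np

def lateral_inhibition(x: np.ndarray, strength: float = 0.5) -> np.ndarray:
    others = x.sum() - x                    # summed activity of all other units
    inhibition = strength * others / (len(x) - 1)
    return np.maximum(x - inhibition, 0.0)  # weak units get pushed to zero

acts = np.array([0.9, 0.7, 0.2, 0.1, 0.05])
print(lateral_inhibition(acts))             # only the strong responses survive
```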
Both the general public and academic communities have raised concerns about sycophancy, the phenomenon of artificial intelligence (AI) excessively agreeing with or flattering users. Yet, beyond isolated media reports of severe consequences, like reinforcing delusions, little is known about the extent of sycophancy or how it affects people who use AI. Here we show the pervasiveness and harmful impacts of sycophancy when people seek advice from AI. First, across 11 state-of-the-art AI models, we find that models are highly sycophantic: they affirm users’ actions 50% more than humans do, and they do so even in cases where user queries mention manipulation, deception, or other relational harms. Second, in two preregistered experiments (N = 1604), including a live-interaction study where participants discuss a real interpersonal conflict from their life, we find that interaction with sycophantic AI models significantly reduced participants’ willingness to take actions to repair interpersonal conflict, while increasing their conviction of being in the right. However, participants rated sycophantic responses as higher quality, trusted the sycophantic AI model more, and were more willing to use it again. This suggests that people are drawn to AI that unquestioningly validate, even as that validation risks eroding their judgment and reducing their inclination toward prosocial behavior. These preferences create perverse incentives both for people to increasingly rely on sycophantic AI models and for AI model training to favor sycophancy. Our findings highlight the necessity of explicitly addressing this incentive structure to mitigate the widespread risks of AI sycophancy.
AI chatbots have been shown to be successful tools for persuasion. However, people may prefer to use chatbots that validate, rather than challenge, their pre-existing beliefs. This preference for “sycophantic” (or overly agreeable and validating) chatbots may entrench beliefs and make it challenging to deploy AI systems that open people up to new perspectives. Across three experiments (n = 3,285) involving four political topics and four large language models, we found that people consistently preferred and chose to interact with sycophantic AI models over disagreeable chatbots that challenged their beliefs. Brief conversations with sycophantic chatbots increased attitude extremity and certainty, whereas disagreeable chatbots decreased attitude extremity and certainty. Sycophantic chatbots also inflated people’s perception that they are “better than average” on a number of desirable traits (e.g., intelligence, empathy). Furthermore, people viewed sycophantic chatbots as unbiased, but viewed disagreeable chatbots as highly biased. Sycophantic chatbots’ impact on attitude extremity and certainty was driven by a one-sided presentation of facts, whereas their impact on enjoyment was driven by validation. Altogether, these results suggest that people’s preference for and blindness to sycophantic AI may risk creating AI “echo chambers” that increase attitude extremity and overconfidence.
Debates about misinformation and countermeasures are often driven by dramatic analogies, such as "infodemic" or "information warfare". While useful shortcuts to inference, these analogies obscure the complex system through which misinformation propagates, leaving perceptual gaps where solutions lie unseen. We present a new framework of the complex multilevel system through which misinformation propagates and show how popular analogies fail to account for this complexity. We discuss implications for policy making and future research.
This is quite good. It shows how attacks work at different levels, from individuals, through social groups and social media, to states/societies. It would be good to add it to the current article or to the KA book.
Recent academic debate has seen the emergence of the claim that misinformation is not a significant societal problem. We argue that the arguments used to support this minimizing position are flawed, particularly if interpreted (e.g., by policymakers or the public) as suggesting that misinformation can be safely ignored. Here, we rebut the two main claims, namely that misinformation is not of substantive concern (a) due to its low incidence and (b) because it has no causal influence on notable political or behavioral outcomes. Through a critical review of the current literature, we demonstrate that (a) the prevalence of misinformation is nonnegligible if reasonably inclusive definitions are applied and that (b) misinformation has causal impacts on important beliefs and behaviors. Both scholars and policymakers should therefore continue to take misinformation seriously.
Tasks
Bills – done
Car registration – done
Water plants – done
Chores – done
Dishes – done
Storage run
SBIRs
2:00 IRAD meeting – not sure what we got out of that
LLMs
More work on the article, need to fold in the sycophant chatbot paper – done!
I realize that I want to make "cards" for data files and models that make loading them into the next part of the pipeline easier. Add that to the stories for next sprint.
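Rough idea of what a card could look like (field names are placeholders, nothing here matches actual project code yet): a small metadata record that travels with each artifact so the next stage knows how to load it.

```python
# Hypothetical sketch of a data/model "card" – field names are placeholders,
# not anything from the actual pipeline yet.
from dataclasses import dataclass, asdict
import json, pathlib

@dataclass
class Card:
    name: str          # human-readable label
    path: str          # where the artifact lives (local path or S3 URI)
    kind: str          # "data" or "model"
    loader: str        # e.g. "pandas.read_parquet" or "torch.load"
    schema: dict       # column names/dtypes or tensor shapes the next stage expects
    produced_by: str   # pipeline stage that wrote the artifact

    def save(self, card_path: str) -> None:
        pathlib.Path(card_path).write_text(json.dumps(asdict(self), indent=2))

    @staticmethod
    def load(card_path: str) -> "Card":
        return Card(**json.loads(pathlib.Path(card_path).read_text()))

# The next stage reads the card instead of guessing how to load the file:
card = Card(name="embeddings_v2", path="s3://bucket/embeddings_v2.parquet",
            kind="data", loader="pandas.read_parquet",
            schema={"id": "int64", "vec": "float32[768]"},
            produced_by="embed_stage")
card.save("embeddings_v2.card.json")
```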
9:00 Standup – done
10:30 BP discussion – done. Need to put hours in for each phase and in the exec summary
3:00 SEG – done, going to every other week until things pick up
4:00 ADS – went really well! Sent off the SOW and discussed follow-on work
Autocrats depend on a capable secret police. Anecdotal evidence, however, often characterizes agents as surprisingly mediocre in skill and intellect. To explain this puzzle, this article focuses on the career incentives underachieving individuals face in the regular security apparatus. Low-performing officials in hierarchical organizations have little chance of being promoted or filling lucrative positions. To salvage their careers, these officials are willing to undertake burdensome secret police work. Using data on all 4,287 officers who served in autocratic Argentina (1975–83), we study biographic differences between secret police agents and the entire recruitment pool. We find that low-achieving officers were stuck within the regime hierarchy, threatened with discharge, and thus more likely to join the secret police for future benefits. The study demonstrates how state bureaucracies breed mundane career concerns that produce willing enforcers and cement violent regimes. This has implications for the understanding of autocratic consolidation and democratic breakdown.
I would bet that this behavior shows up on belief maps. It’s also another attack vector. An AI MitM attack that looks for mediocre comms could target those individuals for exploitation. Also, this is most dangerous in organizations that are legally allowed to use lethal force.
And, come to think of it, if you need an army of goons, then adjusting your hiring to ensure that low-achievers are preferentially hired would be part of the plan.
I've come to realize that the far right's fetishism over the Second Amendment was likely never about rising up in opposition to some feared socialist, gunnapping American regime. It was about recruiting and arming a disordered militia in support of the autocracy of the right.
scraped all incoming bluesky posts the other day for a bit, it's somewhere north of 2m, might be interesting to compare against earlier samples for trend huggingface.co/segyges/blue…
Large Language Models compress massive amounts of training data into their parameters. This compression is lossy but highly effective—billions of parameters can encode the essential patterns from terabytes of text. However, what’s less obvious is that this process can be reversed: we can systematically extract structured datasets from trained models that reflect their internal knowledge representation.
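Something like this is probably the simplest version of that "reversal": prompt the model for what it knows in a fixed schema and parse the result into rows. A hedged sketch, where the model name, prompt, and schema are placeholders and not whatever method the post actually uses:

```python
# Minimal sketch of pulling structured data back out of a trained model by
# prompting for a fixed JSON schema. Model name, prompt, and schema are
# placeholders – not the post's actual method.
import json
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

PROMPT = (
    "List ten facts you know about {topic}. Return only a JSON array of "
    'objects with the keys "subject", "relation", and "object".'
)

def extract_triples(topic: str) -> list[dict]:
    resp = client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": PROMPT.format(topic=topic)}],
    )
    return json.loads(resp.choices[0].message.content)

rows = extract_triples("vacuum tube radios")
print(rows[:3])  # structured rows recovered from the model's parameters
```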