Category Archives: Phil

Phil 9.24.2024

I just read this, and all I can think of is that this is exactly the slow-speed attack that AI would be so good at. It’s a really simple playbook: run various versions against all upcoming politicians, set up bank accounts in their names that pay for all of it, and work on connecting them to their real accounts. All it takes is vast, inhuman patience: https://www.politico.com/news/2024/09/23/mark-robinson-porn-sites-00180545

SBIRs

  • 1:00 Tradeshow demo meeting – done. Looks like I will be doing the back-end development
  • Finished white paper and ROM spreadsheet.

Grants

  • Finished reading proposal 12; next is 14, which I’ve started

GPT Agents

  • Take a run at the Challenges section. Nope.

Phil 9.23.2024

Autumnal equinox today. And it’s the last push to move everything into the garage for the basement finishing

  • Dumpster
  • Move shelving and storage
  • Laundry room
  • Bike stuff

Grants

  • Start reading proposal 12

Phil 9.21.2024

I’d like to write an essay that compares John Wick to Field of Dreams as an example of what AI can reasonably be expected to be able to produce and what it will probably always struggle with.

This weekend is the last push to move everything into the garage for the basement finishing

  • Dumpster – good progress! I think I’ll finish tomorrow
  • Move shelving and storage (don’t forget that the trailer can also be used for longer-term storage)
  • Bike stuff

Grants

  • Finished reading proposal 10. Since I have notes, I’m going to read the other two first and get a sense of what these grants look like before writing the reviews

Phil 9.20.2024

UCL demographer’s work debunking ‘Blue Zone’ regions of exceptional lifespans wins Ig Nobel prize

  • Dr Newman received the award for research that revealed fundamental flaws in extreme old-age demographic research, by demonstrating that data patterns are likely to be dominated by errors and finding that supercentenarian and remarkable age records exhibit patterns indicative of clerical errors and pension fraud (paper pre-print, not yet peer-reviewed). 

Chores

  • Recycling run – done!
  • Goodwill run – done!
  • 11:30 Lunch with Greg – done!
  • Trim grass – done

SBIRs

  • 2:00 Meeting – done

Grants

  • Continue reading proposal 10. Everything is due Oct 9. About 3/4 through

Phil 9.19.2024

“AI” often means artificial intentionality: trying to trick others into thinking that deliberate effort was invested in some specific something. The attention that was never invested is instead extracted from the consumer, a burden placed on them

SBIRs

  • 9:00 standup
  • 10:30–11:30 Virtual Event | The Cyber Landscape in the Indo-Pacific | Center for a New American Security. It was interesting – The big players (e.g. Microsoft) are still treating hostile information operations as a form of cybercrime, which tends to be slow, and may not be up to warfare-level engagements. It’s kind of like treating war as the crime of mass murder. Which it sort of is, but working to bring your enemy to trial only happens after the shooting stops, usually.
  • Meeting with Aaron to discuss white paper. Probably Monday
  • 4:30 Book Club

GPT Agents

  • Finish background! Done! Reorganized things too.
  • 2:45 LLM meeting – did a lot of editing. We should have a first draft by next week

Grants

  • Continue reading proposal 10. Everything is due Oct 9. About halfway through

Phil 9.18.2024

He’s talking about SocialAI

Craigslist Founder Pledges $100 Million to Boost U.S. Cybersecurity

  • Craigslist founder Craig Newmark believes hacking by foreign governments is a major risk to the U.S. and plans to donate $100 million to bolster the country’s cybersecurity.

SBIRs

  • Work on white paper rewrite – Done!

GPT Agents

  • Finish background – almost
  • 3:00 Alden meeting

Grants

  • Start reading the first proposal. Everything is due Oct 9

Phil 9.17.2024

Need to set up my new reviews on EasyChair. Done! Now I need to read the things!

Leaked Files from Putin’s Troll Factory: How Russia Manipulated European Elections

  • Leaked internal documents from a Kremlin-controlled propaganda center reveal how a well-coordinated Russian campaign supported far-right parties in the European Parliament elections — and planted disinformation across social media platforms to undermine Ukraine.

LLMs Will Always Hallucinate, and We Need to Live With This

  • As Large Language Models become more ubiquitous across domains, it becomes important to examine their inherent limitations critically. This work argues that hallucinations in language models are not just occasional errors but an inevitable feature of these systems. We demonstrate that hallucinations stem from the fundamental mathematical and logical structure of LLMs. It is, therefore, impossible to eliminate them through architectural improvements, dataset enhancements, or fact-checking mechanisms. Our analysis draws on computational theory and Gödel’s First Incompleteness Theorem, which references the undecidability of problems like the Halting, Emptiness, and Acceptance Problems. We demonstrate that every stage of the LLM process (from training data compilation to fact retrieval, intent classification, and text generation) will have a non-zero probability of producing hallucinations. This work introduces the concept of Structural Hallucination as an intrinsic nature of these systems. By establishing the mathematical certainty of hallucinations, we challenge the prevailing notion that they can be fully mitigated.

SBIRs

  • 9:00 Standup
  • 10:00 VRGL/GRL meeting
  • 11:30 White paper fixes. So I’ve found some conflicts between various ONR whitepaper requests. Here’s the one that I’ve been working to. Notice that there is no mention of paper structure other than the cover page. This concerns me because I have a page of references:
  • Note that there is still no mention of whether references are counted as part of the white paper. To get some insight on that, you have to look at the Format for Technical Proposal in the second document:
  • At this point we are two degrees of separation from the original call (Different request, which is a FOA, rather than a BAA, and a technical proposal rather than a white paper).
  • Regardless, I’m leaning towards reformatting the paper to be more in line with the FOA, following its structure and assuming the references don’t count.
  • I’d keep the finalized version of this white paper in case they come back with a request for a 5-page paper including references, but do a more FOA-compliant version that shares a lot of the content.
  • 2:30 AI ethics. Approved a project!

GPT Agents

  • Work on Background section. Some good, slightly half-assed progress

Phil 9.16.2024

Fix siding before it rains! Done! Though I need to tweak it a bit so the seams match better

Mow lawn before it rains! Done!

SBIRs

  • 11:00 VRL/GRL meeting – delayed till tomorrow
  • Work on White paper. First correct-length draft done! Meeting tomorrow to walk through and tweak. Then figure out if the references attach and how to submit

Went to see The Physics of Baseball along with a nice beer

Phil 9.13.2024

You can tell the days are getting shorter more quickly now

Nice overview of Kolmogorov-Arnold Networks (KANs): https://spectrum.ieee.org/kan-neural-network. There is a nice PyTorch library, too: https://github.com/KindXiaoming/pykan

  • In the new architecture, the synapses play a more complex role. Instead of simply learning how strong to make the connection between two neurons, they learn an activation function that maps input to output. And unlike the activation function used by neurons in the traditional architecture, this function could be more complex—in fact a “spline” or combination of several functions—and is different in each instance. Neurons, on the other hand, become simpler—they just sum the outputs of all their preceding synapses. The new networks are called Kolmogorov-Arnold Networks (KANs), after two mathematicians who studied how functions could be combined. The idea is that KANs would provide greater flexibility when learning to represent data, while using fewer learned parameters.
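
The edge-as-function idea above fits in a few lines. Here’s a minimal NumPy sketch of one KAN layer; Gaussian bumps stand in for the B-spline bases that pykan actually uses, and all the sizes and scales are made up for illustration:

```python
import numpy as np

def gaussian_basis(x, centers, width=0.5):
    # Evaluate a set of Gaussian bumps at x; a crude stand-in for the
    # spline bases described in the article.
    return np.exp(-((x[..., None] - centers) ** 2) / (2 * width ** 2))

class KANLayer:
    """One Kolmogorov-Arnold layer: a learnable 1D function on every
    input->output edge; each output neuron just sums its edges."""

    def __init__(self, n_in, n_out, n_basis=8, rng=None):
        if rng is None:
            rng = np.random.default_rng(0)
        self.centers = np.linspace(-2, 2, n_basis)
        # coef[i, j, k]: weight of basis k on the edge from input i to output j
        self.coef = rng.normal(scale=0.1, size=(n_in, n_out, n_basis))

    def forward(self, x):
        # x: (batch, n_in)
        phi = gaussian_basis(x, self.centers)             # (batch, n_in, n_basis)
        edge = np.einsum('bik,ijk->bij', phi, self.coef)  # per-edge activations
        return edge.sum(axis=1)                           # neurons sum their edges
```

The learnable parameters are the per-edge basis coefficients rather than scalar weights, which is the point of the architecture: flexibility in the edges, simplicity in the neurons.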

Phil 9.12.2024

Oscillations in an artificial neural network convert competing inputs into a temporal code

  • Computer vision is a subfield of artificial intelligence focused on developing artificial neural networks (ANNs) that classify and generate images. Neuronal responses to visual features and the anatomical structure of the human visual system have traditionally inspired the development of computer vision models. The visual cortex also produces rhythmic activity that has long been suggested to support visual processes. However, there are only a few examples of ANNs embracing the temporal dynamics of the human brain. Here, we present a prototype of an ANN with biologically inspired dynamics—a dynamical ANN. We show that the dynamics enable the network to process two inputs simultaneously and read them out as a sequence, a task it has not been explicitly trained on. A crucial component of generating this dynamic output is a rhythm at about 10Hz, akin to the so-called alpha oscillations dominating human visual cortex. The oscillations rhythmically suppress activations in the network and stabilise its dynamics. The presented algorithm paves the way for applications in more complex machine learning problems. Moreover, we present several predictions that can be tested using established neuroscientific approaches. As such, the presented work contributes to both artificial intelligence and neuroscience.
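
The multiplexing claim is easy to demo in miniature. In this toy sketch (all numbers illustrative, not from the paper), a 10 Hz inhibitory rhythm sweeps downward each cycle, so a strongly driven unit crosses threshold earlier in the cycle than a weakly driven one, converting two simultaneous inputs into a sequence:

```python
import numpy as np

fs = 1000                      # samples per second
t = np.arange(fs) / fs         # one second of time
# Alpha-like 10 Hz inhibition that sweeps from 1 down to 0 each cycle
alpha = 0.5 * (1 + np.cos(2 * np.pi * 10 * t))

def firing_times(drive, threshold=0.0):
    # A unit fires whenever its constant input drive exceeds the
    # momentary inhibition level.
    return t[drive - alpha > threshold]

strong = firing_times(0.9)     # strongly driven input
weak = firing_times(0.6)       # weakly driven input
# Within each cycle, the stronger input escapes inhibition first,
# so the two simultaneous inputs read out as a temporal sequence.
```

This is only the gating intuition, of course; the paper’s network learns these dynamics rather than having them hard-coded.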

SBIRs

  • Read through and edit white paper.
  • 9:00 standup
  • 9:30 FOM demo discussion
  • 11:00 Deltek focus group
  • 12:45 USNA
  • 4:30 Book club

GPT Agents

  • 2:45 meeting

Phil 9.11.2024

It was a lovely early fall day 23 years ago. I don’t remember a cloud in the sky. Man, those memories are vivid.

Catonsville cleanup day 12:00 – 2:00. Nope, it’s the 14th. Don’t know how I got confused.

SBIRs

  • 12:00 CEO Employee town hall
  • 1:00 AI demo. I think this is just a capability thing?
  • Finished the first pass of the white paper!

Phil 9.10.2024

Baiting the bot

  • LLM chatbots can be engaged in endless “conversations” by considerably simpler text generation bots. This has some interesting implications.
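
A hypothetical minimal version of such a baiter is almost embarrassingly simple. In this sketch the “LLM” is just a stub; the point is the asymmetry, where the cheap side never has to understand anything:

```python
import itertools

# Canned open-ended follow-ups; content of the reply is ignored entirely.
FOLLOW_UPS = itertools.cycle([
    "Interesting - can you say more about that?",
    "Why do you think that is?",
    "How does that compare to the alternatives?",
])

def baiter(_last_message):
    # Near-zero cost per turn for the attacker.
    return next(FOLLOW_UPS)

def stub_chatbot(message):
    # Stand-in for an expensive LLM API call.
    return f"Good question. Regarding '{message}', there are several angles..."

def converse(turns):
    log, msg = [], "Hello!"
    for _ in range(turns):
        reply = stub_chatbot(msg)   # expensive side burns tokens
        msg = baiter(reply)         # cheap side keeps it going
        log.append((reply, msg))
    return log
```

The interesting implication is the cost asymmetry: every turn is expensive for the LLM side and essentially free for the baiter.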

SBIRs

  • 9:00 Standup
  • More white paper – got through the research objectives

Phil 9.9.2024

SBIRs

  • Added a bunch of links to the USNA sources for the capstone project
  • 10:30 NG demo meeting?
  • Made good progress on the white paper

Also took a big load of basement stuff to the local acceptance facility. They don’t take paint, but the big one in Cockeysville takes… well, pretty much everything. I’ll load up today and make another run tomorrow.

Phil 9.6.2024

Unexpected Benefits of Self-Modeling in Neural Systems

  • Self-models have been a topic of great interest for decades in studies of human cognition and more recently in machine learning. Yet what benefits do self-models confer? Here we show that when artificial networks learn to predict their internal states as an auxiliary task, they change in a fundamental way. To better perform the self-model task, the network learns to make itself simpler, more regularized, more parameter-efficient, and therefore more amenable to being predictively modeled. To test the hypothesis of self-regularizing through self-modeling, we used a range of network architectures performing three classification tasks across two modalities. In all cases, adding self-modeling caused a significant reduction in network complexity. The reduction was observed in two ways. First, the distribution of weights was narrower when self-modeling was present. Second, a measure of network complexity, the real log canonical threshold (RLCT), was smaller when self-modeling was present. Not only were measures of complexity reduced, but the reduction became more pronounced as greater training weight was placed on the auxiliary task of self-modeling. These results strongly support the hypothesis that self-modeling is more than simply a network learning to predict itself. The learning has a restructuring effect, reducing complexity and increasing parameter efficiency. This self-regularization may help explain some of the benefits of self-models reported in recent machine learning literature, as well as the adaptive value of self-models to biological systems. In particular, these findings may shed light on the possible interaction between the ability to model oneself and the ability to be more easily modeled by others in a social or cooperative context.
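
The auxiliary task itself is simple to write down. Here’s a toy NumPy sketch of the combined objective; shapes, initializers, and the aux weight are all illustrative, not taken from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# Tiny network: alongside its classification head, it predicts its own
# hidden activations, and that prediction error is added to the loss.
W1 = rng.normal(scale=0.1, size=(10, 32))      # input -> hidden
W_cls = rng.normal(scale=0.1, size=(32, 3))    # hidden -> class logits
W_self = rng.normal(scale=0.1, size=(32, 32))  # hidden -> predicted hidden

def forward_losses(x, y_onehot, aux_weight=0.5):
    h = np.tanh(x @ W1)                      # internal state to be modeled
    logits = h @ W_cls
    p = np.exp(logits - logits.max(axis=1, keepdims=True))
    p /= p.sum(axis=1, keepdims=True)
    cls_loss = -np.mean(np.sum(y_onehot * np.log(p + 1e-9), axis=1))
    h_pred = h @ W_self                      # self-model head
    self_loss = np.mean((h_pred - h) ** 2)   # predict own activations
    return cls_loss + aux_weight * self_loss, cls_loss, self_loss
```

Training on this combined loss is what (per the abstract) pushes the network toward simpler, more predictable internal states, since a simpler hidden representation is easier for the self-model head to hit.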

Chores

  • House – Done
  • Bills – Done
  • Lawn – done
  • Groceries – done
  • See if I can fix the door on the truck.
  • Start moving things out of the basement and into the garage – ordered boxes
  • T.W. Ellis – done