Phil 9.15.2023

SBIRs

  • More scale paper – poked at it a little
  • Dahlgren white paper? – Got a good start with Aaron

GPT Agents

  • Write the email inviting people to participate and the email to the chairs as well. – done
  • Add emails and captions to the Word doc – done
  • Submit! – done

Phil 9.14.2023

Meet Greg at 6:00!

SBIRs

  • I guess we’ll see what is going on with the server today?
  • 9:00 Standup
  • GPT IRAD decision?
  • 11:30 CSC
  • More scale paper. Need to start looking for some pix. Finished the disruption section. I think counterattack is an extension of disruption, and should be written that way. Of course, there’s a lot of groundwork that would have to be done in advance to put all the actors in place. That’s a tricky issue that’s worth discussing.
  • Tweaked the template for the Dahlgren paper and added some links to examples of prompt engineering to produce JSON files
  • Add a 0.5 point story for AI ethics
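
The prompt-engineering-to-JSON pattern mentioned above boils down to: tell the model to answer with only a JSON object, then parse and validate the reply. A minimal sketch, assuming any chat-style model fills the middle step; the prompt wording, field names, and sample reply are all hypothetical:

```python
import json

# Hypothetical system prompt that constrains the model to JSON output
SYSTEM_PROMPT = (
    "You are a data extractor. Respond with ONLY a JSON object of the "
    'form {"name": <string>, "topics": [<string>, ...]}. No prose.'
)

def parse_model_reply(reply: str) -> dict:
    """Parse the model's reply, tolerating stray text around the JSON."""
    start = reply.find("{")
    end = reply.rfind("}")
    if start == -1 or end == -1:
        raise ValueError("no JSON object found in reply")
    return json.loads(reply[start:end + 1])

# A stand-in for what a model might return
sample_reply = 'Sure! {"name": "demo", "topics": ["prompting", "JSON"]}'
record = parse_model_reply(sample_reply)
```

The bracket-scan in `parse_model_reply` is a common guard, since models often wrap JSON in prose or markdown fences even when told not to.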

GPT Agents

  • 2:00 UMBC Meeting. Test the new ContextTest and walk through the IRB – done with the latter. Need to tweak the former – done
  • Add education history to work history prompt – done
  • Add “I assert that I am at least 18 years old” – done
  • Add recruitment email and screenshots to attachments – done
  • Change REI to Amazon – done
  • Draft email for all department chairs that includes an introduction of what the study is and who we are.

Phil 9.13.2023

Listening to Alban Claudin’s “Room of Reflection” and quite liking it

SBIRs

  • Working on venues for the scale paper/book. Need to start filling out the “defense” section. Started. Finished “Detection.” Next is “Disruption.”
  • Wrote up a short Python script that runs the loops that we think would generate the trajectories that we (think?) we need. I just realized that there needs to be a “trim” function that removes the beginning and end so we only have computable data
  • 10:00 meeting with Rukan. The machine is hanging on file access because read permissions have been changed
  • 3:00 AI Ethics meeting. Do homework! Done. Shiny, yet bad videos
  • Registered for the Digital Platforms and Societal Harms event
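
The “trim” idea above can be sketched as a plain slice. This is a minimal, hypothetical version that assumes trajectories are lists of samples and that fixed head/tail margins should be dropped; the function name and default margins are illustrative, not the actual script:

```python
def trim(trajectory, head=10, tail=10):
    """Drop the first `head` and last `tail` samples so that only
    steady-state, computable data remains."""
    if head + tail >= len(trajectory):
        return []  # nothing usable survives the trim
    return trajectory[head:len(trajectory) - tail]

# Example: a 30-sample trajectory trimmed to its middle 20 samples
samples = list(range(30))
trimmed = trim(samples, head=5, tail=5)
```

The guard clause matters for short runs: without it, a trajectory shorter than the combined margins would silently return a nonsensical slice.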

GPT Agents

  • Looks like we meet at 2:00 on Thursdays
  • Got a good start on the IRB! Need some guidance to finish

Phil 9.12.2023

What if Generative AI turned out to be a Dud?

SBIRs

  • Our security people have decided that collaborative writing using Overleaf is too much of a threat, so they will not allow it. On top of all their other policies, I am very close to quitting.
  • Need to register for Digital Platforms and Societal Harms
  • Wrote up some code to show the loops for trajectories and sent to SEG
  • Work on Scale paper
    • Defense section
    • Venues – Moved to Overleaf. Still need to finish descriptions

GPT Agents

  • IRB form! Progress!
  • We are now meeting at 2:00 on Thursdays

Phil 9.11.2023

Twenty-two years ago, I remember this day starting as a crisp autumn morning with infinite, clear blue skies.

Sam Bankman-Fried’s jail conditions offer a glimpse at systemic failure

Everything I’ll forget about prompting LLMs

SBIRs

  • Submit expenses
  • We need another story. In this case, it’s another war room vignette, but this time from the defense’s side. Maybe with M again? Of course, part of this is figuring out what defenses might actually look like. One thing I’d like to re-use is the idea of diverse operator teams looking for misbehaving models. In this case though, maybe the models are trained to be honeypots for attacks? They go along in their day-to-day, sending emails, running dummy companies, having dates, etc. When they start acting too aligned, then it’s time to start looking for trouble. Maybe digital twins of important people?
  • 9:00 Sprint demos. Make Slides
  • 2:00 Weekly MDA meeting
  • 3:00 Sprint planning

GPT Agents

  • Start filling out IRB form

Phil 9.8.2023

SBIRs

  • Had a good chat with Rukan yesterday. What worked with the hdfproc data didn’t work with the new offsets? He’s going to run some tests
  • I really want to add a new project to the LLM IRAD. Something like NNMap-enabled group support. Need a better name, some slides (mentioning “killer app” and all the possible uses), and a schedule.
  • Tweaked the Jan6 AI subsection to integrate better into the rest of the section
  • Need to add a “Detect and Defend” section
  • Need to add an “AI Arms Control for Societal AI Weapons” section. Show that this is in everyone’s best interests. Authoritarian regimes are potentially at greater risk, particularly for Spanner and Lobotomy attacks.

Phil 09.07.2023

SBIRs

  • 9:00 standup
  • LLM schedule planning with Aaron. Done
  • 2:00 Dahlgren follow-up meeting
  • More scale paper. Add QAnon as the other main component of Jan6

GPT Agents

  • Tests with Roger and/or Aaron?

Phil 09.06.2023

SBIRs

  • Submitted my technical fellows stuff
  • Steve’s presentation – added comments
  • Installing sw on the laptop – done
  • MDA next steps (intersection of TI and current time allows for sync. We’d need several points, but not too many)
  • LLM planning. Need to create schedules?

GPT Agents

  • 3:00 meeting? Yup. Alden seems to be finding traction

Phil 9.5.2023

Nice three-day weekend, but boy did it end hot!

SBIRs

  • Q6 Report:
    • Add an overview of SEG’s white paper to commercialization section. Done
    • Submit – done!
    • Check with Aaron about the white paper to see if it’s good to go in – done
  • Spending a lot of time with Rukan on seeing if the propagation is the same for the two setups
  • Work through the Solid getting-started documentation. Once I have a framework up and running, I can load the StampedeTheory chapter summaries to supabase, then access them using LangChain
  • MCWL meeting

GPT Agents

  • Ping Roger for some testing?

Phil 9.1.2023

Wow. September. And the Halloween candy displays have been up for a while already

SBIRs

GPT Agents

  • Sent out an email to the team to schedule some user testing

And I’ve run out of gas. Going to clean house

Phil 8.31.2023

Yesterday must have been pretty busy. I never made any notes.

I wrote a pitch to RadioLab about doing a story on the “living in a simulation” thing. I also turned that into a blog post

SBIRs

  • Had a good discussion on SEG’s white paper. They are reviewing their changes and will get back to me with their final today. Hopefully.
  • Got some good stuff done on the Scale paper. More today

GPT Agents

  • Made a lot of progress here! All the new variables are in, and I added some instructions. Still need to add the prompt titles and randomize – DONE! I think it’s ready to try out again, though I need to flip the switch back to GPT-4.
  • Added my times for potential meetings

Phil 8.29.2023

SBIRs

  • Got the preliminary schedule and task list done yesterday, so waiting for a response to discuss on Wednesday.
  • Work more on the historical examples section.
  • Speaking of simple sabotage: Poland investigates train mishaps for possible Russian connection
    • Saboteurs exploited the vulnerability in the Polish “radio stop” command system, which automatically brings trains to a stop when three tonal signals are broadcast through the railway’s radio network.

GPT Agents

Phil 8.28.2023

SBIRs

  • Sprint demos
  • 2:00 MDA Meeting – done
  • Finished the LOE part for the white paper. SEG needs to add some clarification
  • 3:00 Sprint Planning – done

GPT Agents

  • Start rolling in changes to test app. Changed the db and a little of experimentService

Phil 8.25.2023

Do Aaron’s letter

SBIRs

  • 10:00 Meeting with Bob
  • Wrote up some comments on the LLM tool. I think the scope should be expanded
  • Put more guidance in the War Room story – done

GPT Agents

  • Add a textarea that displays the read-in text or PDF as a validation tool – done
  • Need to read in a few of the new files and get back to work on the new section

Phil 8.24.2023

Consciousness in Artificial Intelligence: Insights from the Science of Consciousness

  • Whether current or near-term AI systems could be conscious is a topic of scientific interest and increasing public concern. This report argues for, and exemplifies, a rigorous and empirically grounded approach to AI consciousness: assessing existing AI systems in detail, in light of our best-supported neuroscientific theories of consciousness. We survey several prominent scientific theories of consciousness, including recurrent processing theory, global workspace theory, higher-order theories, predictive processing, and attention schema theory. From these theories we derive “indicator properties” of consciousness, elucidated in computational terms that allow us to assess AI systems for these properties. We use these indicator properties to assess several recent AI systems, and we discuss how future systems might implement them. Our analysis suggests that no current AI systems are conscious, but also suggests that there are no obvious technical barriers to building AI systems which satisfy these indicators.

SBIRs

  • Now that I have a better way to organize groups, I re-labeled and split up some of the groups that I was using
  • Working on email to Bob S. Done. Meeting set for tomorrow at 10:00