Monthly Archives: April 2023

Phil 5.1.2023

Call Jim Donnies – done

SBIRs

  • Hotel for MORS – done
  • Ping Zach to set up a demo – done. Long chat. We’re moving forward
  • Working on Slides
  • MDA Meeting – I think everything has been worked out?

Phil 4.28.2023

“Source: ChatGPT”

This is a good thread, but it misses some important context. ArXiv isn’t all that easy to publish to. It really helps to have an .edu email address, and you need to know how to use LaTeX. The author is a professor at a New Zealand university with a long publishing history and a solid h-index. When you’re in a hurry and just skimming the abstract looking to bolster your reference section, this could easily pass the test.

And there’s another thing. As someone in the AI/ML space, I can say that getting published in a high-profile conference or journal is much harder these days. Getting accepted often means having a result that improves on some benchmark. Poking around in new directions means not getting accepted and publishing on ArXiv instead. For example, Deep residual learning for image recognition has currently been cited over 150,000 times.

This is almost my avatar from the new paper

SBIRs

  • Went to the Microsoft/OpenAI thing yesterday. Mostly advertising, but it’s interesting to note that the Azure account has access to the 32k-token input buffer model. Also, there are exactly two instances of the running inference model – it’s too big to be easily replicated. One really good thing to see was how you can use the GPT to turn unstructured text into a JSON string that can be consumed by traditional programs. And the reverse is true too – anything can be used to generate a contextual prompt. Things are moving fast.
  • Great chat with Zach. We’re going to try to ingest the NOAA financial regs to throw the chatbot against. Also, some good discussion on how to use big models for assistive interfaces for the vision-impaired. We’ll try to set up something for Monday
  • 9:00 Meeting with Lauren
  • 10:00 Meeting with Aaron and Eric
  • Maybe something in the afternoon with Steve?

GPT Agents

  • Clean out NarrativeExplorer and start ListExplorer and SequenceExplorer. Will probably need some new tables?
  • Make a thread tonight!

Phil 4.27.2023

Calibrated Chaos: Variance Between Runs of Neural Network Training is Harmless and Inevitable

  • Typical neural network trainings have substantial variance in test-set performance between repeated runs, impeding hyperparameter comparison and training reproducibility. We present the following results towards understanding this variation. (1) Despite having significant variance on their test-sets, we demonstrate that standard CIFAR-10 and ImageNet trainings have very little variance in their performance on the test-distributions from which those test-sets are sampled, suggesting that variance is less of a practical issue than previously thought. (2) We present a simplifying statistical assumption which closely approximates the structure of the test-set accuracy distribution. (3) We argue that test-set variance is inevitable in the following two senses. First, we show that variance is largely caused by high sensitivity of the training process to initial conditions, rather than by specific sources of randomness like the data order and augmentations. Second, we prove that variance is unavoidable given the observation that ensembles of trained networks are well-calibrated. (4) We conduct preliminary studies of distribution-shift, fine-tuning, data augmentation and learning rate through the lens of variance between runs.

SBIRs

  • Spending the day at the “Explore Azure OpenAI & ChatGPT for Federal Agencies” event
  • Need to get back to slides

GPT Agents

  • After getting lists to work in the TopicNode class yesterday, I realize that I need a ListExplorer and SequenceExplorer app. It will be too confusing to stuff everything into NarrativeExplorer.

Phil 4.26.2023

U.S. is concerned about rivals’ space threats, leaked documents show

  • “Russian companies attempted to create space-rated components for select satellites,” the document asserts. “But the low quality of the components led to on-orbit malfunctions.” It did not identify specific failings.
  • This makes me think that Russia will focus on the weapons that it has more trust in, like misinformation. Very low cost, and how bad can the blowback be?

I changed my password and am currently locked out of all my work accounts as the change ripples through. Sigh. “Technology company” Again with the sigh.

SBIRs

  • 3:00 AI Ethics. Good discussion. I think we are leaning towards an “Ethics Review Board” as part of the gate review for proposals
  • Looking at using Metro tomorrow rather than driving to/from Arlington. I can park at Glenmont

GPT Agents

  • Continue with TopicNode
    • Get the inbound and outbound linkages working – done?
    • Write a lot of stack operations to put the network together. Going to take a break before I try it
    • 4:00 Meeting with Alden
      • Good discussion. We started looking at virality as a related work, but in the end got into a discussion about what it means to do a PhD, and that while methods&results is fine for an MS, a PhD is about proving that you have done original research, which means motivation, background, methods&results, discussion, conclusions, and often a discussion of ethics. Without the surrounding parts, you can’t show that the work is original and advances knowledge, and why that matters. I really do need to write this up, because a lot of this is unsaid at the time PhD students need to hear it.

Book

  • Got the final PDF today!

Phil 4.25.2023

Based at Salve Regina University’s Pell Center for International Relations and Public Policy, the Nationhood Lab is an interdisciplinary research, writing, testing and dissemination project focused on counteracting the authoritarian threat to American democracy and the centrifugal forces threatening the federation’s stability. The project delivers more effective tools with which to describe and defend the American liberal democratic tradition and better understand the forces undermining it.

Seventy years ago today: The 25 April 1953 issue of the journal Nature published a series of five articles giving the Watson and Crick double-helix structure of DNA and the evidence supporting it. The structure was reported in a letter titled “Molecular Structure of Nucleic Acids: A Structure for Deoxyribose Nucleic Acid”, in which they said, “It has not escaped our notice that the specific pairing we have postulated immediately suggests a possible copying mechanism for the genetic material.” This letter was followed by a letter from Franklin and Gosling, which was the first publication of their own X-ray diffraction data and of their original analysis method. Then followed a letter by Wilkins and two of his colleagues, which contained an analysis of in vivo B-DNA X-ray patterns, and which supported the presence in vivo of the Watson and Crick structure. (From Wikipedia)

SBIRs

  • Figuring out how to get data to our server. Ron maybe? Need to check
  • Looks like I’m going to the USNA Capstone day again
  • Need to put together my stories
  • Finish getting Eric set up

GPT Agents

  • Start adjusting NarrativeExplorer
    • Read in additional info
    • Run sequences for a number of iterations
    • Run lists to a depth of recursions. The code is in GPT-2_Agents: InteractiveNode.py, and InteractiveGraphBuilder.py. I’ll need to move to an embedding model. That will need some testing and development.
    • Support making new contexts in the NarrativeExplorer.

Progress on the embedding model!

'vaccines cause autism' is 0.0000 away from 'vaccines cause autism'
'vaccines cause autism' is 0.0412 away from 'autism is caused by vaccines'
'vaccines cause autism' is 0.0659 away from 'autism is caused by the vax'
'vaccines cause autism' is 0.1111 away from 'the cause for autism is unknown'
'vaccines cause autism' is 0.2772 away from 'the earth is flat'
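A minimal sketch of the distance metric behind numbers like these, assuming cosine distance over sentence embeddings. The vectors here are toy stand-ins for illustration, not output from the real embedding model:

```python
import math

def cosine_distance(a, b):
    """1 - cosine similarity; 0.0 means the vectors point the same way."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return 1.0 - dot / (norm_a * norm_b)

# Toy vectors standing in for real sentence embeddings (illustration only):
v_claim      = [0.90, 0.10, 0.05]   # "vaccines cause autism"
v_paraphrase = [0.85, 0.15, 0.10]   # "autism is caused by vaccines"
v_unrelated  = [0.05, 0.20, 0.95]   # "the earth is flat"

d_same = cosine_distance(v_claim, v_claim)       # identical -> 0.0
d_near = cosine_distance(v_claim, v_paraphrase)  # paraphrase -> small
d_far  = cosine_distance(v_claim, v_unrelated)   # unrelated -> large
```

The ordering (identical < paraphrase < unrelated) is the property the results above demonstrate.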

Done for the day. This is a fantastic result, though:

TopicNode.__init__()
TopicNode.add_known_good_list()
	reject threshold = 0.0655 dists = [0.02887398 0.01906836 0.03049576 0.03277439 0.02816651 0.03090093]
'vaccines cause autism' is 0.1030 away from 'the cause for autism is unknown' REJECT
'vaccines cause autism' is 0.2552 away from 'the earth is flat' REJECT
Topic 'vaccines cause autism' includes:
	'vaccines cause autism'
	'Vaccinations lead to autism'
	'Immunizations are linked to autism'
	'Autism is a result of vaccines'
	'Autism is triggered by vaccinations'
	'There's a connection between vaccines and autism'
	reject_threshold = 0.06555

Process finished with exit code 0
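The reject threshold in the log appears to be about twice the largest distance among the known-good variants; here is a sketch under that assumption (the scheme is inferred from the numbers in the log, not confirmed against the TopicNode code):

```python
def reject_threshold(known_good_dists, factor=2.0):
    # Assumed scheme: the threshold is `factor` times the largest distance
    # observed among the known-good paraphrases of the topic.
    return factor * max(known_good_dists)

# Distances from the log above:
dists = [0.02887398, 0.01906836, 0.03049576, 0.03277439, 0.02816651, 0.03090093]
thresh = reject_threshold(dists)  # ~0.0655, matching the logged threshold
```

With that threshold, 'the cause for autism is unknown' (0.1030) and 'the earth is flat' (0.2552) are both correctly rejected.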

Phil 4.24.2023

Saw this on Twitter: Can We Build An AI Chatbot For Journalism?

  • Early Lessons In Accuracy, Sourcing, and Delight From A (Draft) Chatbot Based on NPR’s Planet Money Archives

Cancel hotel

SBIRs

  • 9:00 Sprint demos
  • 11:00 BMD tagup
  • 12:00 Customer meeting
  • 2:00 Weekly MDA meeting

GPT Agents

  • Name the regexes and make them global – done
  • Export the regexes and type along with the experiment – done
  • I realize that because the context is exported, making new ones in the NarrativeExplorer will have to be an option.

Book

  • Tweet thread

Phil 4.22.2023

Finished all my tasks and my legs are still tired. I need to take the fixie out more.

Anyway, this is going to be one of those things that historians are going to have to explain:

Evaluating Verifiability in Generative Search Engines

  • Generative search engines directly generate responses to user queries, along with in-line citations. A prerequisite trait of a trustworthy generative search engine is verifiability, i.e., systems should cite comprehensively (high citation recall; all statements are fully supported by citations) and accurately (high citation precision; every cite supports its associated statement). We conduct human evaluation to audit four popular generative search engines — Bing Chat, NeevaAI, this http URL, and YouChat — across a diverse set of queries from a variety of sources (e.g., historical Google user queries, dynamically-collected open-ended questions on Reddit, etc.). We find that responses from existing generative search engines are fluent and appear informative, but frequently contain unsupported statements and inaccurate citations: on average, a mere 51.5% of generated sentences are fully supported by citations and only 74.5% of citations support their associated sentence. We believe that these results are concerningly low for systems that may serve as a primary tool for information-seeking users, especially given their facade of trustworthiness. We hope that our results further motivate the development of trustworthy generative search engines and help researchers and users better understand the shortcomings of existing commercial systems.

Phil 4.20.2023

We are a month into Spring already!

Inside the secret list of websites that make AI like ChatGPT sound smart

  • we analyzed Google’s C4 data set, a massive snapshot of the contents of 15 million websites that have been used to instruct some high-profile English-language AIs, called large language models, including Google’s T5 and Facebook’s LLaMA. (OpenAI does not disclose what datasets it uses to train the models backing its popular chatbot, ChatGPT)

Automatic Gradient Descent: Deep Learning without Hyperparameters

  • The architecture of a deep neural network is defined explicitly in terms of the number of layers, the width of each layer and the general network topology. Existing optimisation frameworks neglect this information in favour of implicit architectural information (e.g. second-order methods) or architecture-agnostic distance functions (e.g. mirror descent). Meanwhile, the most popular optimiser in practice, Adam, is based on heuristics. This paper builds a new framework for deriving optimisation algorithms that explicitly leverage neural architecture. The theory extends mirror descent to non-convex composite objective functions: the idea is to transform a Bregman divergence to account for the non-linear structure of neural architecture. Working through the details for deep fully-connected networks yields automatic gradient descent: a first-order optimiser without any hyperparameters. Automatic gradient descent trains both fully-connected and convolutional networks out-of-the-box and at ImageNet scale. A PyTorch implementation is available at this https URL and also in Appendix B. Overall, the paper supplies a rigorous theoretical foundation for a next-generation of architecture-dependent optimisers that work automatically and without hyperparameters.

One Small Step for Generative AI, One Giant Leap for AGI: A Complete Survey on ChatGPT in AIGC Era

  • OpenAI has recently released GPT-4 (a.k.a. ChatGPT plus), which is demonstrated to be one small step for generative AI (GAI), but one giant leap for artificial general intelligence (AGI). Since its official release in November 2022, ChatGPT has quickly attracted numerous users with extensive media coverage. Such unprecedented attention has also motivated numerous researchers to investigate ChatGPT from various aspects. According to Google scholar, there are more than 500 articles with ChatGPT in their titles or mentioning it in their abstracts. Considering this, a review is urgently needed, and our work fills this gap. Overall, this work is the first to survey ChatGPT with a comprehensive review of its underlying technology, applications, and challenges. Moreover, we present an outlook on how ChatGPT might evolve to realize general-purpose AIGC (a.k.a. AI-generated content), which will be a significant milestone for the development of AGI

JPEG Compressed Images Can Bypass Protections Against AI Editing

  • Recently developed text-to-image diffusion models make it easy to edit or create high-quality images. Their ease of use has raised concerns about the potential for malicious editing or deepfake creation. Imperceptible perturbations have been proposed as a means of protecting images from malicious editing by preventing diffusion models from generating realistic images. However, we find that the aforementioned perturbations are not robust to JPEG compression, which poses a major weakness because of the common usage and availability of JPEG. We discuss the importance of robustness for additive imperceptible perturbations and encourage alternative approaches to protect images against editing.

Book

  • Review updates and approve – DONE!!!!

SBIRs

  • Finish training – done
  • Moar slides and paper review – progress, but not done. more on Saturday

GPT agents

  • Work on getting context for lists
  • Export prompt and regex to the NarrativeExplorer input file
  • Fix regex to avoid parsing on “GPT-3” – done
  • Fixed (well, worked around) the bug that had the callback for a ListField being called from other TextComboExts. Can’t figure out what’s going on. The result is not horrible, though:

Phil 4.19.2023

Went to get my physical this morning. It appears I am still alive

SBIRs

  • Training – finished two of the three courses. Sooooo painful!

Book

  • Carefully review the chapters. On first pass it looks good, and the hand-rolled tweets look very credible

GPT Agents

  • 2:30 Alden meeting – done. We’re going to set up something for next Wednesday
  • 4:00 LLM meeting – done. Need to formalize the idea of using aggregation for finding hallucinations vs. well-grounded generation
  • Need to fix the regex so it doesn’t split on “GPT-3”
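One way to keep a numbered-list splitter from firing inside “GPT-3” is a negative lookbehind. This is a hypothetical sketch; the actual regex in NarrativeExplorer may differ:

```python
import re

# Split numbered-list output like "1. ... 2. ..." into items, without
# treating the "3." in "GPT-3." as a list marker. The (?<!GPT-) lookbehind
# is an assumed fix for illustration.
pattern = re.compile(r'(?<!GPT-)\b\d+\.\s+')

text = "1. Ask GPT-3. 2. Parse the reply."
items = [s.strip() for s in pattern.split(text) if s.strip()]
# items -> ['Ask GPT-3.', 'Parse the reply.']
```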

Phil 4.18.2023

GPT Agents

  • Added threads to ContextExplorer, which worked really well
    • 🧬🦍 Alpha Male Chimpanzee Behavior: A Thread 🦍🧬
    • 1/5: Alpha males can use a range of behaviors to achieve their position, from dominance through violence to alliance building through activities such as grooming. The first pattern involves a single male using aggression to achieve alpha status, while the second pattern involves males building coalitions that are more powerful than any individual. #AlphaMales #Chimpanzees
    • 2/5: In Jane Goodall’s book “The Chimpanzees of Gombe,” she describes the aggressive behavior of male chimpanzees, with an average of one fight every 62 hours, compared to females’ average of one fight every 106 hours. Alpha males like Humphrey have an even higher attack rate, with one fight every 9 hours. #JaneGoodall #ChimpanzeeBehavior
    • 3/5: Becoming an alpha male allows access to the best food and females in estrus, which helps them pass on their genes. However, this does not emphasize alliance building, leaving the male responsible for his own defense and more likely to be displaced by a stronger, younger male. #AlphaMaleAdvantages #ChimpanzeeHierarchy
    • 4/5: Alpha males who gain their status through alliance building take longer to reach that status but typically stay in the position longer because they are defended by their allies. Alliances among chimpanzees are complex networks, and stable alliances can help a male reach alpha status. #AllianceBuilding #ChimpanzeeAlliances
    • 5/5: In Goodall’s study of the Gombe group, the male Figan was able to become alpha due to his stable alliance with his older brother Faben and less stable alliances with other dominant males. Once the hierarchy is established, there is less violent behavior in the group, but violence can emerge when two individuals close in rank engage in a test of dominance. #GombeChimpanzees #FiganAndFaben
    • 🔬 Understanding the complex social dynamics of chimpanzees can provide valuable insights into the evolution of human social behavior and the importance of cooperation and alliances in achieving and maintaining power. #Evolution #SocialBehavior
  • Need to finish wiring up the parsing for lists and sequences then add that to NarrativeExplorer

Book

  • Reply to ToC changes and mention that xxx needs credit for his picture

SBIRs

  • 9:15 standup – done
  • 1:00 BMD meeting -done
  • Spent some time with Rukan looking at pricing for Lambda Cloud. It looks much nicer than Azure – twice the GPUs and each GPU has twice the VRAM
  • Working on slide deck. Add something on foundational attacks – in process
  • Worked on ContextExplorer, getting Lists and Sequences to work. Still need to clean up context for Lists
  • Work on story

Phil 4.17.2023

So I had a lot of fun with the ContextExplorer tool over the weekend. I added a prompt that generates tweets based on the stored content. In this case, it chooses a section of text at random and then generates something in Twitter format:

Beliefs that change at the right pace allow for group cohesion, like birds in a flock or fish in a school. Our brains synchronize when we share stories, but too slow or too fast can lead to boredom or overwhelm. Communication media influences human behavior and group structures. #science #groupdynamics

Stampede Theory, 2023

That’s a tweet generated from a random chunk of chapter one of my book

That was neat, so I tried a few other formats, like “factoid” and my current favorite, “press release:”

FOR IMMEDIATE RELEASE

How Technology Has Influenced the Way Humans Move and Behave in Groups

New York, NY – The way humans move and behave in groups has been influenced by technology in various ways. From the development of language to the use of maps and GPS, technology has changed the way we navigate the world and interact with each other.

According to recent research, beliefs are shared and can be located in a terrain, allowing individuals to move and form groups. However, this behavior can also lead to dangerous situations, such as stampedes, as seen in the 1913 Italian Hall disaster. Technology, such as maps and GPS, has changed the way we navigate the world and has also affected our behavior in social spaces.

“Technology has changed the way we behave in physical and virtual spaces, blurring the lines between being lost or offline,” said a spokesperson for the research team. “Our instincts developed for the physical world are applied to virtual group coordination. Changes in belief have an optimal velocity, not too slow to avoid boredom and not too fast to prevent exhaustion. Novelty is preferred, but being overwhelmed is not.”

Beliefs that change at the right pace allow people to move together like birds in a flock or fish in a school. When a group of people share similar beliefs and goals, they can move together through belief space and find it easier to do things as a group. Surprisingly, our brains synchronize when we share a story, firing at the same times and places. If beliefs change too slowly, people get bored, but if they change too quickly, people become overwhelmed. The right pace of change is important for group cohesion.

In conclusion, technology has had a significant impact on the way humans move and behave in groups. From the development of language to the use of maps and GPS, technology has changed the way we navigate the world and interact with each other. As we continue to evolve, it is important to understand the role that technology plays in shaping our behavior and beliefs.

Contact:
John Smith
Email: johnsmith@email.com
Phone: 555-555-5555

Since I’ve read in other texts, you can do press releases for them too! Here’s one for Moby-Dick:

FOR IMMEDIATE RELEASE

New York, NY – Herman Melville’s classic novel, Moby-Dick, continues to captivate readers with its thrilling tale of a vengeful whale and the men who pursue him. Even today, whalemen still hold a great respect for the Sperm Whale, which is seen as a fearsome creature that has been known to attack humans and is feared by other creatures in the sea.

Despite the dangers associated with hunting the Sperm Whale, some fishermen were still willing to pursue Moby Dick, the infamous white whale. Superstitious beliefs in the dangers of hunting the Sperm Whale caused some to be reluctant to embark on this new and daring warfare, believing that to attempt it would be to be torn into a quick eternity.

Whalers have found evidence of whales being captured in both the Pacific and Greenland seas, leading some to believe that the Nor’ West Passage was never a problem for whales, making their experiences with Moby Dick seem almost as mythical as old stories of ships floating up to the surface of a lake and waters coming from the Holy Land by an underground passage.

The White Whale was distinguished by its uncommon size, snow-white wrinkled forehead, and high, pyramidical white hump, which allowed it to be identified from a long distance in the uncharted seas.

Moby-Dick is a timeless classic that continues to inspire and intrigue readers with its thrilling tale of adventure and revenge. It is a must-read for anyone who loves a good story and wants to experience the thrill of the hunt for the infamous white whale.

For more information on Moby-Dick, visit your local bookstore or online retailer today.

Contact:
John Smith
Public Relations Manager
123-456-7890
johnsmith@email.com

Fun stuff. It will also generate text for the King James Bible and Sun Tzu’s “Art of War,” which I’ve also read in. Note that in the Moby-Dick press release, the model adds some meta-information about the book – it’s a “classic novel,” available at “your local bookstore or online retailer.”

GPT Agents

  • Add context and source info to the auto-generation – done
  • Add regex for list processing and code that puts the appropriate output in the “response” text area

SBIRs

  • Work on story
  • Work on slides – started
  • Help Eric get his code running
  • 2:00 MDA meeting – done

Phil 4.15.2023

This interactive visualization displays 21 million scientific papers collected in the PubMed database, maintained by the United States National Library of Medicine and encompassing all biomedical and life science fields of research.

You can scroll the narration in the left part of the screen, and interact with the visualization in the right part of the screen. Zooming in loads additional papers. Information about each individual paper appears on mouse-over, and clicking on a paper opens its PubMed page in a separate window. Search over titles is available in the upper-right corner.

Explanatory overview thread here

Why transformers are obviously good models of language

  • Nobody knows how language works, but many theories abound. Transformers are a class of neural network that process language automatically with more success than alternatives, both those based on neural computations and those that rely on other (e.g. more symbolic) mechanisms. Here, I highlight direct connections between the transformer architecture and certain theoretical perspectives on language. The empirical success of transformers relative to alternative models provides circumstantial evidence that the linguistic approaches that transformers embody should be, at least, evaluated with greater scrutiny by the linguistics community and, at best, considered to be the currently best available theories.

Phil 4.14.2023

Book

  • Finish tweets and provide 2 versions for the “mocking” and send off with links to copyright content – done!

SBIRs

  • Pinged Jarod with a link to the paper and changed the title to something bland, but accurate

GPT Agents

  • Set up list string to be in the form “Here’s a list of items/concepts/phrases that are similar to {}: || first concept seed || second concept seed || … || last concept seed”
  • The regex will split on the “||” and feed the first string with the following strings. In the case of the explorer app, all the lists will be shown in the output at once.
  • There will need to be a regex field for splitting lists and sequences that will be saved to the input file for NarrativeExplorer
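The splitting step described above can be sketched like this. The variable names are assumptions; the regex field mentioned in the last bullet would hold the “||” delimiter:

```python
# List string in the form described above, using the placeholder seeds
# from the format specification:
raw = ("Here's a list of items/concepts/phrases that are similar to 'X': "
       "|| first concept seed || second concept seed || last concept seed")

# Split on the "||" delimiter; the first piece is the header, the rest
# are the concept seeds to feed into the explorer app.
parts = [p.strip() for p in raw.split("||")]
header, seeds = parts[0], parts[1:]
```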

Phil 4.13.2023

SBIRs

  • 9:15 standup
  • 11:30 Touchpoint
  • 3:00 SEG meeting – Loren is leaving!
  • Sent requests for access and training

GPT Agents

  • Work on adding lists and sequences. I should have some of the recursive code from GPT agents

Book

  • Finish tweet screenshots – done! Need to send them off