Phil 4.28.2023

“Source: ChatGPT”

This is a good thread, but it misses some important context. ArXiv isn’t all that easy to publish to. It really helps to have an .edu email address, and you need to know how to use LaTeX. The author is a professor at a New Zealand university, with a long publishing history and a solid h-index. When you’re in a hurry and just skimming the abstract, looking to bolster your reference section, a citation like this could easily pass the test.

And there’s another thing. As someone in the AI/ML space, I can say that getting published in a high-profile conference or journal is much harder these days. Getting accepted often means having a result that improves on some benchmark. Poking around in new directions means not getting accepted, and publishing on ArXiv instead. For example, “Deep residual learning for image recognition” (the ResNet paper, the archetypal benchmark-beating result) has currently been cited over 150,000 times.

This is almost my avatar from the new paper.

SBIRs

  • Went to the Microsoft/OpenAI thing yesterday. Mostly advertising, but it’s interesting to note that the Azure account has access to the 32k-token input buffer model. Also, there are exactly two instances of the running inference model; it’s too big to be easily replicated. One really good thing to see was how you can use the GPT to turn unstructured text into a JSON string that can be consumed by traditional programs. The reverse is true too: anything structured can be used to generate a contextual prompt (see the sketch after this list). Things are moving fast.
  • Great chat with Zach. We’re going to try to ingest the NOAA financial regs and throw the chatbot against them. Also, some good discussion on how to use big models as assistive interfaces for the vision-impaired. We’ll try to set up something for Monday.
  • 9:00 Meeting with Lauren
  • 10:00 Meeting with Aaron and Eric
  • Maybe something in the afternoon with Steve?
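
Here is a minimal sketch of the text-to-JSON pattern mentioned in the Microsoft/OpenAI note above, written against the pre-1.0 openai Python library that was current at the time. The prompt, field names, and model choice are all my own assumptions for illustration, not anything shown at the event:

```python
import json
import openai  # pre-1.0 openai library; assumes openai.api_key is set

# Hypothetical extraction task: the fields (name, date, amount) are
# illustrative assumptions, not from the session.
SYSTEM_PROMPT = (
    "Extract the person's name, the date, and the dollar amount from the "
    "user's text. Respond with ONLY a JSON object with keys "
    '"name", "date", and "amount".'
)

def text_to_json(unstructured_text: str) -> dict:
    """Ask the model to turn free text into a JSON string, then parse it
    so it can be consumed by traditional programs."""
    response = openai.ChatCompletion.create(
        model="gpt-4",  # assumed model name; the big buffer is "gpt-4-32k"
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT},
            {"role": "user", "content": unstructured_text},
        ],
        temperature=0,  # low variance makes the JSON easier to parse
    )
    reply = response["choices"][0]["message"]["content"]
    return json.loads(reply)

def json_to_prompt(record: dict) -> str:
    """The reverse direction: use structured data to build a contextual prompt."""
    return (
        f"Write a one-sentence confirmation that {record['name']} "
        f"paid {record['amount']} on {record['date']}."
    )

if __name__ == "__main__":
    fields = text_to_json("Bob sent $120 to the club treasurer on April 3rd.")
    print(fields)                  # structured output for traditional code
    print(json_to_prompt(fields))  # structured data back into a prompt
```

Note that json.loads will throw if the model wraps the object in prose, so anything beyond a sketch would validate the reply or retry; this just shows the round trip in both directions.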

GPT Agents

  • Clean out NarrativeExplorer and start ListExplorer and SequenceExplorer. Will probably need some new tables?
  • Make a thread tonight!