I have a thought about an easier way to build NNMs. What if I took a topic model and created embeddings for, say, every sentence in the Gutenberg collection and the English Wikipedia (as a start), then ran the word2vec algorithm over those embeddings in sequence? I think I should get a new embedding space that captures the sequential relationships between topics, which should be able to accommodate trajectories. This could be validated by drawing trajectories on a UMAP-reduced representation of the data. I think.
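One detail to pin down: word2vec trains on discrete tokens, not raw vectors, so the simplest realization I can see is to quantize the sentence embeddings into cluster IDs first and treat each document as an in-order sequence of those "topic" tokens. Below is a minimal sketch of that pipeline, assuming sentence-transformers, scikit-learn, gensim, and umap-learn; the model name, cluster count, and two-document toy corpus are all placeholders.

```python
# Minimal sketch: sentence embeddings -> topic tokens -> word2vec -> UMAP.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans
from gensim.models import Word2Vec
import umap
import matplotlib.pyplot as plt

# Toy stand-in for Gutenberg/Wikipedia: documents as ordered sentence lists.
docs = [
    ["The sun rose over the hills.",
     "The farmer walked to the field.",
     "By noon the work was done."],
    ["The treaty was signed after the war.",
     "Borders shifted across the region.",
     "Trade resumed within a decade."],
]

# 1) Embed every sentence (model choice is a placeholder).
encoder = SentenceTransformer("all-MiniLM-L6-v2")
flat = [s for doc in docs for s in doc]
emb = encoder.encode(flat)

# 2) Discretize: cluster the embeddings so each sentence gets a topic ID.
k = 4  # placeholder; this would be far larger on real corpora
topic_ids = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(emb)

# 3) Rebuild each document as its in-order sequence of topic tokens.
seqs, offset = [], 0
for doc in docs:
    seqs.append([f"t{topic_ids[offset + j]}" for j in range(len(doc))])
    offset += len(doc)

# 4) word2vec over the topic sequences; the resulting vectors should
#    encode which topics tend to precede/follow one another.
w2v = Word2Vec(seqs, vector_size=16, window=2, min_count=1, sg=1)

# 5) UMAP-reduce the topic vectors and draw one document's trajectory.
vecs = w2v.wv[[f"t{c}" for c in range(k)]]
xy = umap.UMAP(n_components=2, n_neighbors=2, init="random",
               random_state=0).fit_transform(vecs)
path = [xy[int(tok[1:])] for tok in seqs[0]]
plt.plot(*zip(*path), marker="o")
plt.title("Topic trajectory of document 0")
plt.show()
```

On the actual corpora the clustering step is standing in for the topic model, and k would need to be in the hundreds or thousands; the toy numbers here are only so the script runs end to end.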
And boy, are there a lot of embedding models.
Tasks
- BSO – done
- LLC – done
- Send new text to V – done
- CA24154 – Networking European Security Knowledge
- SBIRS
- Change AWS password – done
- 9:00 standup – done
- 3:30 ODIN? For some meeting tomorrow?
- 4:00 SEG – done. Good progress.
