Phil 11.19.20

GPT-2 Agents

  • Looks like we are getting close to ingesting all the new data
  • Had a meeting with Ashwag last night (note – we need to move the time). The lack of ‘story-ness’ in the training set is really coming out in the model: the meta information works perfectly, but it’s wrapped around effectively random tweets, since there is no threading. I think there needs to be some topic structure in the meta information so that similar topics are grouped sequentially in the training set (see the sketch after this list).
  • 3:30 Meeting
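
As a sketch of the topic-grouping idea (all names, tag formats, and sample records below are assumptions for illustration, not the actual ingest code), something like this would put similar topics next to each other in the training file:

from collections import defaultdict

# Hypothetical tweet records; the real ones come from the ingest pipeline
tweets = [
    {"topic": "topic_a", "date": "2020-11-01", "text": "first tweet about topic_a"},
    {"topic": "topic_b", "date": "2020-11-02", "text": "a tweet about topic_b"},
    {"topic": "topic_a", "date": "2020-11-03", "text": "another topic_a tweet"},
]

def wrap_with_meta(tweet: dict) -> str:
    """Wrap a tweet in meta information; the tag format here is illustrative only."""
    return "[[topic: {} date: {}]] {}".format(tweet["topic"], tweet["date"], tweet["text"])

# Group by topic so related tweets land sequentially in the training set
by_topic = defaultdict(list)
for t in tweets:
    by_topic[t["topic"]].append(t)

with open("gpt2_training.txt", "w", encoding="utf-8") as f:
    for topic in sorted(by_topic):
        for t in sorted(by_topic[topic], key=lambda x: x["date"]):
            f.write(wrap_with_meta(t) + "\n")

Even without real threading, sorting within a topic by date gives the model something closer to a narrative sequence.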

GOES

  • 9:30 meeting
  • Update code with new limits on how small a step can be. Done, but I’m still having problems with the normals. It could be because I’m normalizing the contributions?
https://viztales.files.wordpress.com/2020/11/replayer_11_19_20.gif
  • Switching to a least-squares approach. Done?!
import numpy as np
from pyquaternion import Quaternion  # assumed: pyquaternion provides Quaternion(matrix=...), .get_axis(), .degrees

# Apply the fitted affine matrix A to a set of points. pad, unpad, primary,
# secondary, and A are set up earlier -- see the sketch after this block.
transform = lambda x: unpad(np.dot(pad(x), A))

print("\nTarget:")
print(secondary)
print("\nResult:")
print(transform(primary))
print("\nMax error: \n{}".format(np.abs(secondary - transform(primary)).max()))
print("\nA = \n{}".format(A))

# The upper-left 3x3 block of the padded affine matrix is the linear (rotation) part
Ap = A[:3, :3]
print("\nrotation matrix = \n{}".format(Ap))

# Convert the rotation matrix to a quaternion and report its axis and angle
print("getting quaternion")
q = Quaternion(matrix=Ap)
print("got quaternion")
print("Axis = {}".format(q.get_axis()))
print("Degrees = {}".format(q.degrees))

Book

  • More cults. Tying together Jonestown and Moby-Dick seems to be working better than what I was doing before