Phil 11.1.19

7:00 – 3:00 ASRC GOES

KerasTuner

  • Hugging Face: State-of-the-Art Natural Language Processing in ten lines of TensorFlow 2.0
    • Hugging Face is an NLP-focused startup with a large open-source community, in particular around the Transformers library. 🤗/Transformers is a Python-based library that exposes an API for many well-known transformer architectures, such as BERT, RoBERTa, GPT-2, or DistilBERT, which obtain state-of-the-art results on a variety of NLP tasks like text classification, information extraction, question answering, and text generation. Those architectures come pre-trained with several sets of weights. (A minimal usage sketch is at the end of this list.)
  • Dissertation
    • Starting on Human Study section!
    • For once there was something there that I could work with pretty directly. Fleshing out the opening.
  • OODA paper:
    • Maximin (Cass Sunstein)
      • For regulation, some people argue in favor of the maximin rule, by which public officials seek to eliminate the worst worst-cases. The maximin rule has not played a formal role in regulatory policy in the United States, but in the context of climate change or new and emerging technologies, regulators who are unable to conduct standard cost-benefit analysis might be drawn to it. In general, the maximin rule is a terrible idea for regulatory policy, because it is likely to reduce rather than to increase well-being. But under four imaginable conditions, that rule is attractive. (The rule itself is written out after this list.)
        1. The worst-cases are very bad, and not improbable, so that it may make sense to eliminate them under conventional cost-benefit analysis.
        2. The worst-case outcomes are highly improbable, but they are so bad that even in terms of expected value, it may make sense to eliminate them under conventional cost-benefit analysis.
        3. The probability distributions may include “fat tails,” in which very bad outcomes are more probable than merely bad outcomes; it may make sense to eliminate those outcomes for that reason.
        4. In circumstances of Knightian uncertainty, where observers (including regulators) cannot assign probabilities to imaginable outcomes, the maximin rule may make sense. (It may be possible to combine (3) and (4).)
      • With respect to (3) and (4), the challenges arise when eliminating dangers also threatens to impose very high costs or to eliminate very large gains. There are also reasons to be cautious about imposing regulation when technology offers the promise of “moonshots,” or “miracles,” offering a low probability or an uncertain probability of extraordinarily high payoffs. Miracles may present a mirror-image of worst-case scenarios.
  • Reaction wheel efficiency inference
    • Since I have this spiffy accurate model, I think I’m going to try using it before spending a lot of time evolving an ensemble
    • Realized that I only trained it with a voltage of +1, so I’ll need to abs(delta) before running inference (quick sketch after this list)
    • It’s working!
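
A minimal sketch of the kind of usage the Hugging Face post describes: loading a pre-trained BERT through 🤗/Transformers and running it under TensorFlow 2.0. The model name, example sentence, and API details (the tokenizer call and the .logits attribute assume a recent transformers release) are placeholders, not the post’s actual code.

    import tensorflow as tf
    from transformers import BertTokenizer, TFBertForSequenceClassification

    # Pre-trained weights and tokenizer are pulled from the model hub
    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
    model = TFBertForSequenceClassification.from_pretrained("bert-base-uncased")

    # Tokenize one sentence and run a forward pass. The classification head is
    # randomly initialized, so the probabilities only mean something after fine-tuning
    inputs = tokenizer("The reaction wheel model finally converged.", return_tensors="tf")
    logits = model(inputs).logits
    print(tf.nn.softmax(logits, axis=-1).numpy())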
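
For reference, the maximin rule in its usual decision-theoretic form next to plain expected-value maximization (my notation, not Sunstein’s):

    \[
    a^{*}_{\text{maximin}} = \arg\max_{a \in A} \; \min_{s \in S} u(a, s)
    \qquad
    a^{*}_{\text{EV}} = \arg\max_{a \in A} \; \mathbb{E}_{s}\left[ u(a, s) \right]
    \]

Here A is the set of candidate policies, S the possible states of the world, and u(a, s) the payoff of policy a in state s. Under Knightian uncertainty (condition 4) the probabilities needed for the expectation on the right are unavailable, which is what makes the rule on the left attractive.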
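
Quick sketch of the abs(delta) fix, assuming the trained reaction wheel network is a saved Keras model; the names, file path, and input shape here are mine, not the actual project code.

    import numpy as np
    from tensorflow import keras

    rw_model = keras.models.load_model("reaction_wheel_model.h5")  # placeholder path

    def infer_efficiency(delta: float) -> float:
        # The model was only trained on +1 volt commands, so fold negative
        # deltas onto the positive side before running inference
        x = np.array([[abs(delta)]], dtype=np.float32)
        return float(rw_model.predict(x, verbose=0)[0, 0])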

WorkingInference

  • Next steps:
    • I can’t do ensemble realtime inference because I’d need a GPU for each model, so I need to pick the best of the saved “best” models and use that one
    • Run the evolver to see if something better can be found
    • Add “flywheel mass” and “vehicle mass” to the dictionary and get rid of the 0.05 value
    • Set up a second model that uses the inferred efficiency to move in accordance with the actual commands. Have them sit on either side of the origin (rough sketch at the end of these notes)
  • Committed everything. I think I’m done for the day
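
Rough sketch of those last two steps, assuming the simulation reads its constants from a parameter dictionary and the vehicles are simple kinematic stand-ins. The key names, masses, and the 0.85 efficiency are placeholder values, not numbers from the actual code.

    # Parameter dictionary replaces the loose 0.05 constant
    params = {
        "flywheel_mass": 0.05,  # kg, assuming this is where the loose 0.05 value lands
        "vehicle_mass": 10.0,   # kg, placeholder
    }

    class Vehicle:
        def __init__(self, params, efficiency, x0):
            self.params = params
            self.efficiency = efficiency  # 1.0 = ideal, or the inferred value
            self.x = x0

        def step(self, command, dt=0.1):
            # Scale the commanded motion by the (inferred) wheel efficiency
            self.x += self.efficiency * command * dt

    # Ideal and inferred-efficiency vehicles, one on each side of the origin
    ideal = Vehicle(params, efficiency=1.0, x0=-1.0)
    inferred = Vehicle(params, efficiency=0.85, x0=1.0)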