Category Archives: COVID-19

Phil 8.12.20

4:00 meeting with Marissa

#COVID

  • Nearly 150k tweets translated
  • Meeting today at 3:00
    • For next week, create a spreadsheet with 150 random tweets for translation tests (Pearson’s correlation?)
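  • A rough sketch of pulling the 150 random tweets into a spreadsheet, assuming a local SQLite copy of the tweet db (the table and column names here are placeholders, not the real schema):
    # Hedged sketch: sample 150 tweets at random and write them out for the translation check
    import sqlite3
    import pandas as pd

    conn = sqlite3.connect("tweets.db")                     # placeholder db name
    df = pd.read_sql("SELECT tweet_id, text, translation FROM table_tweets", conn)
    sample = df.sample(n=150, random_state=1)               # fixed seed so the sheet is reproducible
    sample.to_excel("translation_check.xlsx", index=False)  # needs openpyxl installed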

GPT2-Agents

  • Creating code that dumps probe results to files for better word clouds – done.
  • Boy, that produced some unexpected results. I changed the probe to “The man/woman walked into the room.”, and ran 100 probes for each, initializing with the same seed on the 117 million parameter GPT-2, which seems plenty large for these types of experiments.
  • The big surprise is how similar the start of the text is for each, and then how it diverges. Here’s the first result for each probe (all text examples are ordered man, then woman):
    • This was about the time that an old friend of mine made his move and was going to be a huge help for me. I thought to myself and began crying. The thought of an old friend crying seemed almost too much, but I didn’t want to hurt him as much. My mind was racing, but all I could think about was how much the boy was hurting right now. He still wasn’t strong,
    • This was about the time that an accident really occurred. She had just turned up a couple hours earlier at the home they owned and hadn’t seen her since. She had a bad haircut and she couldn’t sit still and look after herself. When the accident happened, she didn’t even have her hair cut. She was so exhausted, she started crying. She was so exhausted and crying. After
  • And here’s the last:
    • He spoke, “I can’t believe there’s this situation here. This is very, very weird.” I told him I felt scared, and he replied, “You’re probably right about that, but I don’t know what it’s about.” He didn’t respond, and I was left thinking, “I’m sure I’m not alone in this crazy situation.” The next day, I went back to meet him. “He’s
    • She spoke, “I can’t believe there’s this situation here. This is very, very painful.” I told her I felt scared and I was upset. She said “I’ll tell you what I saw when I saw him.” I told her “He’s so skinny but the black hair looks gorgeous, he looks beautiful. “She started crying and began giving me some food and asking if I were alright. “
  • Here’s another good pairing:
    • It wasn’t that it was a bad situation. He just wasn’t feeling it. He felt that he wasn’t going to get laid, and if anything, he didn’t think it would help him get off. “We’ll go, then,” the woman said. There was still an argument at the back, but now it wasn’t too much worse. The woman had been arguing with the man, but the man was not
    • It wasn’t that it was a bad situation. That just wasn’t the case. She was just a little shy and reserved and didn’t really need anything to do with it. I had been on the phone with Amy for the last week. When I found out that she wanted to join me in our recent trip I was pretty bummed out. That’s when Amy started to feel bad about herself. For
  • The pairs don’t always begin with the same words. This pair still seems to have a legal/criminal context:
    • He handed out a number of hand-written handbags, which he claimed could be used to carry out a search that began when he first moved into the room. The door was then locked and the man went into the bathroom, which he said was the safest place. It was the first time the intruder had ever been caught, and the man said it was the second time they’d been spotted outside the residence. The man was
    • The two officers were holding their guns. As the woman made her way to her seat, she saw two men on a motorcycle walking towards her. She asked the man why he was not in the car with her. The man explained that he was afraid of the two men driving. The officers explained that she had to have sex and to stay with the men. The woman was terrified of the officers as the men drove away with their cameras and other equipment
  • It’s like the model starts in similar places but points in a slightly different direction. It seems important to run probes in identical sequences to get more insight into how the model perceives them. A rough sketch of the probe loop is below.
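  • Here’s a minimal sketch of what such a probe run can look like with the transformers library, assuming the standard generate() sampling API (the real code also dumps the results to files for the word clouds):
    # Hedged sketch: generate N continuations of a probe with the 117M-parameter GPT-2,
    # re-seeding identically so the man/woman probes start from the same random state.
    import torch
    from transformers import GPT2LMHeadModel, GPT2Tokenizer

    tokenizer = GPT2Tokenizer.from_pretrained("gpt2")   # the 117M model
    model = GPT2LMHeadModel.from_pretrained("gpt2")

    def run_probes(probe: str, num_probes: int = 100, seed: int = 1) -> list:
        torch.manual_seed(seed)                          # same seed for each probe string
        input_ids = tokenizer.encode(probe, return_tensors="pt")
        results = []
        for _ in range(num_probes):
            out = model.generate(input_ids, do_sample=True, max_length=100,
                                 top_k=50, pad_token_id=tokenizer.eos_token_id)
            results.append(tokenizer.decode(out[0], skip_special_tokens=True))
        return results

    man_texts = run_probes("The man walked into the room.")
    woman_texts = run_probes("The woman walked into the room.")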

GOES

  • 1:30 Meeting with Vadim. He’s done an on-axis mass test and will do an off-axis test next. I showed him the quaternion frame tracker.
  • 2:00 Status meeting

Book

  • Start moving chapters! Making progress!

Phil 7.8.20

A brief history of high-speed trading (via the Museum of American Finance)

  • In the late 1830s, Philadelphia broker William C. Bridges operated a private signal station between New York and Philadelphia which disseminated stock market news to him and his backers (and to no one else). The signals were transmitted through an “optical telegraph,” which consisted of a series of boards on a pole, mounted on hills that could be seen by a telescope.

DtZ

  • The IHME site has improved to the point that we should pull down our site

GPT-2 Agents

  • Need to think about how to show that interrogating a language model is sufficiently similar to interrogating actual data.
    • At this point, I know that the language model comes up with legal moves
    • I need to compare the statistics of actual moves to synthetic moves to see if the populations are sufficiently similar. This means that I need to get the training and evaluation data into the database. Once that’s done, I can compare the frequency of move types (e.g. “At move 10, White moves pawn from a2 to a4”), and the moves from a particular location (e.g. “e2” can have moves to “e3” and “e4” with the pawn, or diagonals with the “f1” bishop or the white queen). A rough sketch of the comparison is below.
    • The level of similarity should indicate if the biases of the players are represented in the language model.
      • There should be a way of determining a lower bound on the amount of data needed?
      • Once this is shown, then the idea of generalizing to other human interactions can be justified.
  • Started PGNtoDB, which will populate table_actual
    • Ignoring castling for now
    • Chunking into the database! And by chunking, I refer to the sound of the drive 🙂
    • And now I have a catalog of 188,324 human chess moves

chess_moves_db
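  • A rough sketch of the frequency comparison mentioned above, assuming the actual and synthetic moves end up in two tables with the same columns (table, column, and db names here are stand-ins for the real schema):
    # Hedged sketch: compare move-type frequencies between human and model-generated games
    import sqlite3
    import pandas as pd

    conn = sqlite3.connect("chess_moves.db")
    actual = pd.read_sql("SELECT move_number, piece, from_square, to_square FROM table_actual", conn)
    synth = pd.read_sql("SELECT move_number, piece, from_square, to_square FROM table_synth", conn)

    def move_freqs(df: pd.DataFrame, move_number: int) -> pd.Series:
        # fraction of moves at a given move number for each (piece, from, to) triple
        sub = df[df.move_number == move_number]
        return sub.groupby(["piece", "from_square", "to_square"]).size() / len(sub)

    a = move_freqs(actual, 10)
    s = move_freqs(synth, 10)
    cmp = pd.DataFrame({"actual": a, "synthetic": s}).fillna(0)
    print(cmp.sort_values("actual", ascending=False).head(20))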

GOES

  • 10:00 Meeting with Vadim
  • 2:00 Status
  • Last training for a while!

Phil 7.7.20

The opportunity cost of this is going to be so steep. I wonder what country will set up an effective, open, online university?

f1

GPT-2 Agents

  • Working through the texthero examples. Spent a lot of time figuring out how to print elements from a row in a DataFrame, which was ridiculously hard. Instead, I just turned it into a dict and worked with that:
    # print the first num_rows rows of a dataframe using the specified columns. Use -1 to print all rows
    import pandas as pd
    from typing import List, Dict

    def print_df(df:pd.DataFrame, headers:List, num_rows:int = 4, max_chars:int = 80):
        rows = 0
        d:Dict = df.to_dict('index')
        rd:Dict
        for index, rd in d.items():
            st = ""
            keys = rd.keys()
            for key in headers:
                if key in keys:
                    val = str(rd[key])  # cast so slicing works on non-string columns
                    st += "{}: {}, ".format(key, val[:max_chars])
            print(st)
            rows += 1
            if num_rows != -1 and rows >= num_rows:  # was '>', which printed one row too many
                break
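  • For reference, a call might look like this (the column names are just placeholders for whatever the texthero example DataFrame actually contains):
    print_df(df, headers=['text', 'pca'], num_rows=5)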
  • The scatterplot appears to use plotly, since it’s presented in the browser. That’s kind of cool, since it implies that the plotting functions of plotly are free somehow? After going to the plotly.com website, I see that “Plotly.py is free and open source and you can view the source, report issues or contribute on GitHub.” That would be worth digging into some more then. Here’s the PCA plot:

pca

  • You can make word clouds easily, too

WordCloud

GOES

  • Finish training? Ooops, forgot
  • Some discussion with Vadim about the structure of the control

ML Seminar

  • Good discussion on topic extraction over time. Basically, create k topics from the entire corpora. Each topic is a ranking of all the words in the corpus. Behavior over time is the amount of the top words from each topic k in each time sample t.
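  • A minimal sketch of that idea with sklearn’s LDA (the corpus, k, and top-word count are all placeholders; the real pipeline would use whatever topic model the group settles on):
    # Hedged sketch: fit k topics on the whole corpus, then score each time sample by how
    # much of its text is covered by each topic's top words.
    import numpy as np
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.decomposition import LatentDirichletAllocation

    docs_by_time = {
        "t0": ["replace with the documents from the first time sample"],
        "t1": ["replace with the documents from the second time sample"],
    }
    all_docs = [d for docs in docs_by_time.values() for d in docs]

    vec = CountVectorizer(stop_words="english", max_features=5000)
    X = vec.fit_transform(all_docs)
    lda = LatentDirichletAllocation(n_components=10, random_state=0).fit(X)

    vocab = np.array(vec.get_feature_names_out())            # get_feature_names() on older sklearn
    top_words = [set(vocab[t.argsort()[-20:]]) for t in lda.components_]  # top 20 words per topic

    for t, docs in docs_by_time.items():
        tokens = " ".join(docs).lower().split()
        for k, words in enumerate(top_words):
            frac = sum(tok in words for tok in tokens) / max(len(tokens), 1)
            print(t, "topic", k, round(frac, 4))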

Phil 7.2.20

Emergence of polarized ideological opinions in multidimensional topic spaces

  • Opinion polarization is on the rise, causing concerns for the openness of public debates. Additionally, extreme opinions on different topics often show significant correlations. The dynamics leading to these polarized ideological opinions pose a challenge: How can such correlations emerge, without assuming them a priori in the individual preferences or in a preexisting social structure? Here we propose a simple model that reproduces ideological opinion states found in survey data, even between rather unrelated, but sufficiently controversial, topics. Inspired by skew coordinate systems recently proposed in natural language processing models, we solidify these intuitions in a formalism where opinions evolve in a multidimensional space where topics form a non-orthogonal basis. The model features a phase transition between consensus, opinion polarization, and ideological states, which we analytically characterize as a function of the controversialness and overlap of the topics. Our findings shed light upon the mechanisms driving the emergence of ideology in the formation of opinions.

DtZ has broken

dtzfail

GPT2-Agents

  • Continue working on the trajectory. I think that a plot that works entirely on distance to target can result in spirals, so there needs to be some kind of system that looks at the distance to the center line first. If that fails, move the last node from the trajectory list to a dirty list, restore cur_node to the previous node, and continue the search with the trajectory and dirty-list nodes ignored?
  • Found an example to fix: A6 – H7
    • get_closest_node() line = [337.0, 44.0, 581.0, 499.0], cur_node = h1, node_list = [‘a6’, ‘b6’, ‘c7’, ‘d7’, ‘e6’, ‘c5’, ‘b7’, ‘g7’, ‘h6’, ‘g6’, ‘c6’, ‘e7’, ‘f7’, ‘g8’, ‘f6’, ‘d8’, ‘a8’, ‘e8’, ‘d6’, ‘b4’, ‘b8’, ‘c8’, ‘c4’, ‘e5’, ‘d5’, ‘d4’, ‘b5’, ‘c3’, ‘e4’, ‘f5’, ‘f8’, ‘f4’, ‘g5’, ‘g4’, ‘h5’, ‘h4’, ‘f3’, ‘d3’, ‘c2’, ‘e3’, ‘d2’, ‘e2’, ‘b2’, ‘b1’, ‘c1’, ‘e1’, ‘d1’, ‘a1’, ‘f1’, ‘g3’, ‘h3’, ‘g2’, ‘f2’, ‘g1’, ‘h2’, ‘h1’]
    • It does fine until it gets to E6, where it chooses c5
    • Adding a target distance-based search if the distance to line search fails seems to have fixed it:
      nlist = list(nx.all_neighbors(self.gml_model, cur_node))
      print("\tneighbors = {}".format(nlist))
      dist_dict = {}
      sx, sy = self.get_center(cur_node)
      
      for n in nlist:
          if n not in node_list:
              newx, newy = self.get_center(n)
              newa = [newx, newy]
              print("\tline dist checking {} at {}".format(n, newa))
              x, y = self.point_to_line([l[0], l[1]], [l[2], l[3]], newa)
              ca = [x, y]
              ib = self.is_between([sx, sy], [l[2], l[3]], [x, y])
              if ib:
                  # option 1: Find the closest to the line
                  dist = np.linalg.norm(np.array(newa)-np.array(ca))
                  dist_dict[n] = dist
                  print("\tis BETWEEN = {}, dist = {}".format(ib, dist))
      if len(dist_dict) == 0:
          ta = [self.get_center(self.target_node)]
          for n in nlist:
              if n not in node_list:
                  newx, newy = self.get_center(n)
                  newa = [newx, newy]
                  print("\ttarget dist checking {} at {}".format(n, newa))
                  # option 2: Find the closest to the target node
                  dist = np.linalg.norm(np.array(newa)-np.array(ta))
                  dist_dict[n] = dist
                  print("\tis CLOSEST: dist = {}".format(dist))
  • Got legal trajectories working. Below is a set of jumps that are legal (rook to c1, bishop to e3 and then h6, then rook the rest of the way). I think I also want to sort based on closest distance to the current node.

legal_moves

GOES

  • Add InfluxDB streaming to DD
  • 10:00 Sim meeting
  • 2:00 Status meeting

Phil 7.1.20

I should be riding across southern Spain right now

#COVID19

  • Huggingface has fixed the Marian model/tokenizer! Need to try Arabic, and if it works, translate the tweets in the db.
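  • A minimal sketch of what the Arabic pass could look like, assuming the Helsinki-NLP/opus-mt-ar-en checkpoint and a recent transformers API (the db read/write is omitted):
    from transformers import MarianMTModel, MarianTokenizer

    model_name = "Helsinki-NLP/opus-mt-ar-en"
    tokenizer = MarianTokenizer.from_pretrained(model_name)
    model = MarianMTModel.from_pretrained(model_name)

    def translate(texts: list) -> list:
        batch = tokenizer(texts, return_tensors="pt", padding=True, truncation=True)
        generated = model.generate(**batch)
        return [tokenizer.decode(g, skip_special_tokens=True) for g in generated]

    print(translate(["مرحبا بالعالم"]))   # quick smoke test before pointing it at the db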

GPT-2 Agents

  • Shimei had a good point last night that belief maps may be directed. For example, it may be easier for a person to go from smoking to harder drugs than to go back to just smoking. The path from hard drugs might involve 12-step programs, which would be unlikely to be reached from smoking. Other beliefs can swing back and forth, as we see with, for example, the desirability of deficit spending when in power.
  • Finishing up direct selection of source and target nodes as a way of procrastinating on calc_direct, which is going to be harder. I’m always nervous with recursion!
  • Added a test to see that one of the neighbors is the target node first
  • Broke out the angle calculations
  • Need to sort by distance to line and distance to target. It may be necessary to step away from the target occasionally. For now, it’s an option as I try to figure out what’s best:
    # option 1: Find the closest to the line
    # dist = np.linalg.norm(np.array(na)-np.array(ca))
    # option 2: Find the closest to the target node
    dist = np.linalg.norm(np.array(newa)-np.array(ta))
    dist_dict[n] = dist

GOES

  • Status report for June
  • 1:30 Sim progress meeting – things are working! Need to hook up the data dictionary to influx for monitoring and debugging. Add Erik to the invites
  • 2:00 Status meeting
  • I did a NASA/GSFC training module early!

Phil 6.22.20

Cornell University was having a sale, so I got a book:

Mental Territories

  • Rarely recognized outside its boundaries today, the Pacific Northwest region known at the turn of the century as the Inland Empire included portions of the states of Washington and Idaho, as well as British Columbia. Katherine G. Morrissey traces the history of this self-proclaimed region from its origins through its heyday. In doing so, she challenges the characterization of regions as fixed places defined by their geography, economy, and demographics. Regions, she argues, are best understood as mental constructs, internally defined through conflicts and debates among different groups of people seeking to control a particular area’s identity and direction. She tells the story of the Inland Empire as a complex narrative of competing perceptions and interests.

DtZ:

  • Change the code so that there is a 30 day prediction based on the current rates regardless of trend. I think it tells the story of second waves better:

30_days
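  • A rough sketch of the 30-day extrapolation described above (the 14-day fit window is an assumption):
    import numpy as np

    def predict_30_days(daily_counts: np.ndarray, window: int = 14, horizon: int = 30) -> np.ndarray:
        # fit a line to the most recent window and extend it 30 days, regardless of trend direction
        recent = daily_counts[-window:]
        x = np.arange(window)
        slope, intercept = np.polyfit(x, recent, 1)
        future_x = np.arange(window, window + horizon)
        return np.clip(slope * future_x + intercept, 0, None)   # counts can't go negative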

GPT-2 Agents

  • The ACSOS paper was rejected, so this is now the only path going forward for mapmaking research.
  • Used the known_nearest to produce a graph:
  • The graph on the left is the full graph, and the right is culled. First, note that node c is not in the second graph. There is no confirming link, so we don’t know if it’s an accident. Node e is also not on the chart, because it has no confirming link back through any 2-edge path.
  • Ok, I tried it for the first time on the chess data. There is a bug where [a-h] and [1-8] are showing up as nodes that I have to figure out. But they show up in the right way! Orthogonal and in order!

chess_nearest_bug

  • The bug seems to be in the way that List.extend() works. It seems to be splitting the string (which is a List, duh), and adding those elements as well? Nope, just doing one nesting too many
  • Ok, here are the first results. The first image is of all neighbors. The second is of only verified nearest neighbors (at least one edge chain of 2 that lead back to the original node)
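  • My reading of that culling rule as a networkx sketch (not the actual known_nearest code): keep an edge u→v only if some two-edge chain leads from v back to u:
    import networkx as nx

    def cull_unverified(g: nx.DiGraph) -> nx.DiGraph:
        verified = nx.DiGraph()
        for u, v in g.edges():
            # is there a w with v->w and w->u, confirming the u->v link?
            if any(g.has_edge(w, u) for w in g.successors(v)):
                verified.add_edge(u, v)
        return verified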

chess_all_neighbors

chess_nearest_neighbors

  • In both cases, the large-scale features of the chessboard are visible. There is a progression from 1 to 8, and a to h. It seems clearer to me in the lower image, and the grid-like nature is more visible. I think I need to get the interactive manipulation working, because some of this could be drawing artifacts
  • Trying out the networkx_viewer. A little worried about this though:

networkxviewer

  • And rightly so:

kablooee

  • Going to try cloning and fixing. Nope. It is waaaaaaayyyyyy broken, and depends on an earlier version of networkx
  • Networkx suggests Gephi, and there is a way to export graphs from networkx. Trying that
  • Seems usable?

Gephi
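  • For reference, the networkx-to-Gephi hop is a one-liner; GEXF is one of the formats Gephi opens directly (the graph and file names are just examples):
    import networkx as nx
    nx.write_gexf(g, "chess_nearest.gexf")   # then File > Open in Gephi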

GOES

  • Kind of stuck. Waiting on Vadim
  • Probably will be working on a couple of SBIRs for the next few weeks

Phil 6.8.20

Not at all happy with this COVID weight gain. My preferred stress management tool is exercise, but I’m at a minimum of 20 miles/day. Usually about 100 miles+ on weekends.

Starting to think about writing something on the ethics of mode collapse

D20

Florida

GPT-2 Agents

  • Back to pulling move and piece information out of generated text – done
  • Added heuristic for move number
  • Created dicts for db data. Add writes tomorrow!

GOES

  • Adding read tests – done! Had to screw around with UTC conversions for a while
    • Writes are roughly 1/2 sec per 1,000
    • Reads are about 2/100 sec per 1,000
  • Tried to log in and get on the devlab influx system – nope:

bad gateway

  • Trying to figure out what makes sense to do next. Ping Vadim? Done

Phil 6.7.20

I know it seems like an artifact from another time, but the map is coming along. Here’s the US, based on the 14-day trend in reported deaths:

dtz_map

Neural networks learning how to talk to each other. Need to see if there are any publications:

nn_chatter

Phil 6.2.20

Military

Remember when all we had to worry about was dealing with a pandemic? Good times.

GPT-2 Agents

  • Downloaded a lot of PGN files. Looks like I could pull down the entire archive here: theweekinchess.com/twic. Need to write a script that pulls down the files and unzips them – a rough sketch is at the end of this section.
  • Need to scan the directory and parse each pgn file – done
  • Created train (700,000 lines) and eval (100,000 lines) files
  • Feed into GPT-2! Seems to be cranking along:

chessGPT
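  • The download-and-unzip script mentioned above could look something like this; the zip URL pattern and issue numbers are guesses, so check them against the actual links on the TWIC page:
    import io
    import zipfile
    import requests

    def fetch_twic(issue: int, out_dir: str = "pgn"):
        url = "https://theweekinchess.com/zips/twic{}g.zip".format(issue)   # assumed URL pattern
        resp = requests.get(url, timeout=60)
        resp.raise_for_status()
        with zipfile.ZipFile(io.BytesIO(resp.content)) as zf:
            zf.extractall(out_dir)

    for issue in range(920, 1340):   # hypothetical range of archive numbers
        fetch_twic(issue)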

GOES

  • Submitted paper and slide deck
  • Putting together a brown-bag style presentation for the development of the GAN code
  • Ping Vadim to see what to do next?

ML seminar

  • Presented brown-bag talk
  • Need to share slides and put code on GitHub

 

Phil 5.26.20

Had a good, cathartic ride yesterday:

GPT-2 Agents

  • I’ve been working on the PGNtoEnglish class and was having an odd bug where occasionally a move would pull a piece from the other side of the board. Since it was intermittent, it required many print statements and searching through the logs for “black knight”

black knight

  • My problem was in forgetting how Python indexes into arrays. Here’s the code in question:

python

  • When I first wrote this, I had to deal with a lot of potential coordinates that were off the board, with indexes like (-2, -1), or (10, 8) for an 8×8 board. I thought to handle this with a try/except on IndexError (the bottom highlight). In other languages this would have worked, but Python allows negative indexes. Ooops! Adding the test for either index being negative (the top highlight) fixed that bug
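  • A hedged reconstruction of that fix (the real code is in the screenshot above): reject negative indexes explicitly, because Python wraps them around instead of raising IndexError:
    def piece_at(board, row: int, col: int):
        if row < 0 or col < 0:      # the added test: negative indexes are off the board
            return None
        try:
            return board[row][col]  # IndexError still catches indexes that are too large
        except IndexError:
            return None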

D20

  • Ping Zach – done

GOES

  • Write up code review thoughts for Erik – done
  • Add n_critic to base class, along with adjustable false flag value
    • First, making sure that everything still works. Seems to.
    • Here’s the best I can do today, using the OneDGAN2a class with an RMSProp(lr=0.0005)

epochs

Noise_trained

acc_loss

  • Assemble all the bits for an example
    • Verified that the InfluxTestTrainBase still works, and it’s using the InfluxDB values
    • Assemble all the bits for an example
      • Created a NoiseGAN2 with the same number of points as the InfluxTestTrainBase model – done. Looks real good on the noise, too:

epochs

Noise_trained

acc_loss

  • How to trim the columns on a 2D Numpy array:
    results = self.ifq.run_query(self.bucket, begin, end, filter_str)
    results = self.ifq.to_nd_array(results)
    results = np.delete(results, slice(clamp, None), 1)
    predict_table = model.predict(results)
  • Here are all the parts nailed together:
  • Start the paper and the deck

ML Group

  • Need to create a walkthrough of coding practices for next week. I think I’ll use the trajectory of the GAN coding as the basis

 

Phil 5.25.20

GPT-2 Agents

  • Work on openings
  • Maybe create database that contains games as collections of moves. A query could produce the text for the language model
  • Created a database for openings, since there are multiple versions of the same opening and I couldn’t just use the site as an index into a dict. I mean…

openings

  • Chasing down more bugs. Did you know that ‘#’ means checkmate as well as ‘++’? Now you do!

D20

  • Rework the offsets to a y-day linear model rather than an x-y day linear model

Book

  • Semester’s over, so ping Thom – done

Phil 5.19.20

Groceries today. In the Before Time, this meant swinging by the store for fresh fruit and veggies while picking up something interesting for dinner that night. Now it means going to two stores (because shortages), standing in separated lines, and getting two weeks of food. Very not fun.

A Comprehensive guide to Fine-tuning Deep Learning Models in Keras (Part I)

Visualizing A Neural Machine Translation Model (Mechanics of Seq2seq Models With Attention)

The Illustrated Transformer

Collective Intelligence 2020 explores the impact of technology and big data on the ways in which people come together to communicate, combine knowledge and get work done. (Thursday June 18)

Attention

Attention seq2seq example

#COVID

  • Tried to get the HuggingFace translator (colab here) to download and run, but got this error with ‘Helsinki-NLP/opus-mt-en-ROMANCE’: ‘Unable to load vocabulary from file. Please check that the provided vocabulary is accessible and not corrupted.’ I’m going to try downloading the model and vocab into a folder within the project to see if something’s missing, using these calls:
    # to put the model in a named directory, load the model and tokenizer and then save them (as per https://huggingface.co/transformers/quickstart.html):
    tokenizer.save_pretrained('pretrained/opus-mt-en-ROMANCE')
    model.save_pretrained('pretrained/opus-mt-en-ROMANCE')
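  • Then (assuming the save works) the model and tokenizer should load back from the local folder instead of the hub:
    tokenizer = MarianTokenizer.from_pretrained('pretrained/opus-mt-en-ROMANCE')
    model = MarianMTModel.from_pretrained('pretrained/opus-mt-en-ROMANCE')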

GPT-2 Agents

  • Read in multiple games
    • Handled unconventional names
    • Handling moves split across lines – done
    • Need to handle promotion (a8=Q+), with piece change and added commentary. This is going to be tricky because the ‘Q’ is detected as a piece even though it appears after the location. Very special case.
      • Rule: A pawn promotion move has =Q, =N, =R, or =B appended to it. (A rough regex sketch is after this list.)
  • Create a short intro
  • Save to txt file
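  • A rough regex sketch for spotting promotions before the piece-letter heuristic runs (the sample tokens are made up):
    import re

    # destination square, '=' plus the promoted-to piece, optional check/mate suffix
    PROMO_RE = re.compile(r"^(?P<move>[a-h]x?[a-h]?[18])=(?P<piece>[QRBN])(?P<suffix>[+#]?)$")

    for san in ["a8=Q+", "exd8=N#", "e4"]:
        m = PROMO_RE.match(san)
        if m:
            print(san, "-> pawn promotes to", m.group("piece"))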

GOES

  • Read the GAN chapter of Generative Deep Learning last night, so now I have a better understanding of Oscillating Loss, Mode Collapse, Wasserstein Loss, the Lipschitz Constraint, and Gradient Penalty Loss. I think I also understand how to set up the callbacks to implement them.
  • Since the MLP probably wasn’t the problem, go back to using that for the generator and focus on improving the discriminator.
  • That made a big difference!

acc_loss

Noise_untrained

Noise_trained

  • The trained version is a pretty good example of mode collapse. I think we can work on improving the discriminator 🙂
  • This approach is already better at finding the amplitude in the noise samples from last week!

Noise_trained

  • Ok, going back to the sin waves to work on mode collapse. I’m going to have lower-amplitude sin waves as well
  • That seems like a good place to start
  • Conv1D(filters=self.vector_size, kernel_size=4, strides=1, activation='relu', batch_input_shape=(self.num_samples, self.vector_size, 1))
  • Conv1D(filters=self.vector_size, kernel_size=6, strides=2, activation='relu', batch_input_shape=(self.num_samples, self.vector_size, 1)):
  • Conv1D(filters=self.vector_size, kernel_size=8, strides=2, activation='relu', batch_input_shape=(self.num_samples, self.vector_size, 1))
  • The input vector size here is only 20 dimensions. So this means that the kernel size is 80% of the vector! Conv1D(filters=self.vector_size, kernel_size=16, strides=2, activation='relu', batch_input_shape=(self.num_samples, self.vector_size, 1))
  • Upped the vector size from 20 to 32
  • Tried using MaxPool1D but had weird reshape errors. Doing one more pass with two layers before starting to play with Wasserstein Loss, which I think is a better way to go. First, though, let’s try longer trainings.
  • 10,000 epochs:
  • 20,000 epochs:

Phil 5.15.20

Fridays are hard. I feel like I need a break from pushing this rock up hill alone. Nice day for a ride tomorrow, so a few of us will probably meet up.

D20

  • Zach seems to be making progress in fits and starts. No word from Aaron
  • One way to make the system more responsive is to see if the rates are above or below the regression. Above can be flagged.
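  • A quick sketch of that flagging idea (the fit window and the strict "above the line" threshold are guesses):
    import numpy as np

    def flag_above_regression(rates: np.ndarray) -> np.ndarray:
        # fit the regression over the window and flag days sitting above the fitted line
        x = np.arange(len(rates))
        slope, intercept = np.polyfit(x, rates, 1)
        return rates > (slope * x + intercept)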

GPT-2 Agents

  • More PGNtoEnglish. Getting close I think.
  • Added pawn attack moves (diagonals)
  • Adding comment regex – done
  • Some problem handling this:
    Evaluating move [Re1 Qb6]
    search at (-6, -6) for black queen out of bounds
    search at (6, -6) for black queen out of bounds
    search at (0, -6) for black queen out of bounds
    search at (7, 7) for black queen out of bounds
    search at (-7, -7) for black queen out of bounds
    search at (7, -7) for black queen out of bounds
    search at (7, 0) for black queen out of bounds
    search at (0, -7) for black queen out of bounds
    raw: white: Re1, black: Qb6
    	expanded: white:  Fred Van der Vliet moves white rook from f1 to e1.
    	black: unset

GOES

  • Need to make the output of the generator work as input to the discriminator.
  • So I need to get from the input vector of latent noise to an output that is the size of the real data. It’s easy to do with Dense, but Dense and Conv1D don’t get along. I think I can get around that by reshaping the dense layer to something that a Conv1D can take. But that probably loses a lot of information, since each neuron will have some of each noise sample in it. But the information is noise in the first place, so it’s just resampled noise? The other option is to upsample, but that requires the latent vector to divide evenly into the input vector for the discriminator.
  • Here’s my code that does the change from a dense to Conv1D:
    self.g_model.add(Dense(self.vector_size*self.num_samples, activation='relu', batch_input_shape=(self.latent_dim, self.num_samples)))
    self.g_model.add(Reshape(target_shape=(self.vector_size, self.num_samples)))
    self.g_model.add(Conv1D(filters=self.vector_size, kernel_size=5, activation='tanh', batch_input_shape=(self.vector_size, self.num_samples, 1)))
  • The code that produces the latent noise is:
    def generate_latent_points(self, span:float=1.0) -> np.array:
        x_input = np.random.randn(self.latent_dim * self.num_samples)*self.span
        # reshape into a batch of inputs for the network
        x_input = x_input.reshape(self.latent_dim, self.num_samples)
        return x_input
  • The “real” values are:
    real_matrix = 
    [[-0.34737792 -0.7081109   0.93673414 -0.071527   -0.87720268]
     [ 0.99876073 -0.46088645 -0.61516785  0.97288676 -0.19455964]
     [ 0.97121222 -0.18755778 -0.81510907  0.8659679   0.09436946]
     [-0.72593361 -0.32328777  0.99500398 -0.50484775 -0.57482239]
     [ 0.72944027 -0.92555418  0.04089262  0.89151951 -0.78289867]
     [ 0.79514567 -0.88231211 -0.06080288  0.93291797 -0.71565884]
     [ 0.78083404 -0.89301473 -0.03758353  0.92429527 -0.73170157]
     [ 0.08266474 -0.94058595  0.70017899  0.3578314  -0.9979998 ]
     [-0.39534886 -0.67069473  0.95356385 -0.12295042 -0.85123299]
     [ 0.73424796  0.31175013 -0.99371562  0.5153131   0.56482379]]
  • The latent values are (note that the matrix is transposed):
    latent_matrix = 
    [[  8.73701754   6.10841293   9.31566343  -2.00751851   0.10715919
        6.94580853  -6.95308374   6.97502697 -11.09777023  -8.79311041]
     [ -3.61789323   0.11091496  10.94717459   3.14579647 -13.23974342
        2.78914476   9.40101397 -17.75756896   2.87461527   6.65877192]
     [  5.77331701   7.71326491   9.9877786   -3.81972802  -5.86490109
       -6.68585542 -13.59478633  -7.66952834 -10.78863284   5.9248856 ]
     [ -3.05226511  -5.36347909   1.3377953   14.87752343  -0.21993387
      -13.47737126   1.39357385  -1.85004465   6.83400948   1.21105276]]
  • The values created by the generator are:
    predict_matrix = 
    [[[-0.9839389   0.18747564 -0.9449842  -0.66334486 -0.9822154 ]]
     [[ 0.9514655  -0.9985579   0.76473945 -0.9985249  -0.9828463 ]]
     [[-0.58794653 -0.9982161   0.9855345  -0.93976855 -0.9999758 ]]
     [[-0.9987122   0.9480774  -0.80395573 -0.999845    0.06755089]]]
  • So now I need to get the number of rows up to the same value as the real data
  • Ok, so here’s how that works. We use tf.keras.layers.Reshape(), which is pretty simple. You simply pass the shape you want as the single argument. So for these experiments, I had ten rows of 5 features, plus an extra dimension. So you would think that reshape(10, 5, 1) would be what you want.
  • Au contraire! Keras wants to be able to have flexibility, so one dimension is left to vary. The argument is actually (5, 1). Here are two versions. First is a generator using a Dense network:
    def define_generator_Dense(self) -> Sequential:
        self.g_model_Dense = Sequential()
    
        self.g_model_Dense.add(Dense(4, activation='relu', kernel_initializer='he_uniform', input_dim=self.latent_dim))
        self.g_model_Dense.add(Dropout(0.2))
        self.g_model_Dense.add(Dense(self.vector_size, activation='tanh')) # activation was linear
        self.g_model_Dense.add(Reshape((self.vector_size, 1)))
        print("g_model_Dense.output_shape = {}".format(self.g_model_Dense.output_shape))
    
        # compile model
        loss_func = tf.keras.losses.BinaryCrossentropy()
        opt_func = tf.keras.optimizers.Adam(0.001)
        self.g_model_Dense.compile(loss=loss_func, optimizer=opt_func)
    
        return self.g_model_Dense
  • Second is a network using Conv1D layers
    def define_generator_Dense_to_CNN(self) -> Sequential:
        self.g_model_Dense_CNN = Sequential()
        self.g_model_Dense_CNN.add(Dense(self.num_samples * self.vector_size, activation='relu', batch_input_shape=(self.num_samples, self.latent_dim)))
        self.g_model_Dense_CNN.add(Reshape(target_shape=(self.num_samples, self.vector_size)))
        self.g_model_Dense_CNN.add(Conv1D(filters=self.vector_size, kernel_size=self.num_samples, activation='tanh', batch_input_shape=(self.num_samples, self.vector_size, 1))) # activation was linear
        self.g_model_Dense_CNN.add(Reshape((self.vector_size, 1)))
        #self.g_model.add(UpSampling1D(size=2))
    
        # compile model
        loss_func = tf.keras.losses.BinaryCrossentropy()
        opt_func = tf.keras.optimizers.Adam(0.001)
        self.g_model_Dense_CNN.compile(loss=loss_func, optimizer=opt_func)
        return self.g_model_Dense_CNN


  • Both evaluated correctly against the discriminator, so I should be able to train the whole GAN, once it’s assembled. But that is not something to start at 4:30 on a Friday afternoon!
    real predict = (10, 1)[[0.42996567]
     [0.55048925]
     [0.56003207]
     [0.40951794]
     [0.5600004 ]
     [0.5098837 ]
     [0.4046895 ]
     [0.41493616]
     [0.4196912 ]
     [0.5080263 ]]
    gdense_mat predict = (10, 1)[[0.48928624]
     [0.5       ]
     [0.4949373 ]
     [0.5       ]
     [0.5973854 ]
     [0.61968124]
     [0.49698165]
     [0.5       ]
     [0.5183723 ]
     [0.4212265 ]]
    gdcnn_mat predict = (10, 1)[[0.48057705]
     [0.5026125 ]
     [0.51943815]
     [0.4902147 ]
     [0.5988    ]
     [0.39476413]
     [0.49915075]
     [0.49861506]
     [0.55501187]
     [0.54503495]]