Phil 3.23.17

7:00 – 8:00, 4:00 – 5:00 Research

8:30 – 10:30, 12:30 – 3:30 BRC

  • I don’t think my plots are right. Going to add some points to verify…
  • First, build a matrix of all the values. Then we can visualize it as a surface and look for the best values after the calculation (see the sketch at the end of this entry).
  • Okay… so there is a very weird bug that Aaron stumbled across when running Python scripts from the command line. There are many, many, many thoughts on this, and it apparently comes from a legacy issue between py2 and py3. So, much flailing:
    python -i -m OptimizedClustererPackage.DBSCAN_clusterer.py
    python -m OptimizedClustererPackage\DBSCAN_clusterer.py
    C:\Development\Sandboxes\TensorflowPlayground\OptimizedClustererPackage>C:\Users\philip.feldman\AppData\Local\Programs\Python\Python35\python.exe -m C:\Development\Sandboxes\TensorflowPlayground\OptimizedClustererPackage\DBSCAN_clusterer.py
    

    …etc…etc…etc…

  • After I’d had enough of this, I realized that the IDE is running all of this just fine, so something works. So, following this link, I set the run config to “Show command line afterwards” [image: PyRunConfig]. The outputs are very helpful:
    C:\Users\philip.feldman\AppData\Local\Programs\Python\Python35\python.exe C:\Users\philip.feldman\.IntelliJIdea2017.1\config\plugins\python\helpers\pydev\pydev_run_in_console.py 60741 60742 C:/Development/Sandboxes/TensorflowPlayground/OptimizedClustererPackage/cluster_optimizer.py
    
  • Editing out the middle part, we get
    C:\Users\philip.feldman\AppData\Local\Programs\Python\Python35\python.exe C:/Development/Sandboxes/TensorflowPlayground/OptimizedClustererPackage/cluster_optimizer.py

    And that worked! Note the backslashes on the executable and the forward slashes on the argument path.

  • Update #1. Aaron’s machine was not able to run a previous version of the code, so we poked at the issues, and I discovered that I had left some code in my imports that was not in his code. It’s the “Solution #4: Use absolute imports and some boilerplate code” section from this StackOverflow post. Specifically, before importing the local files, the following four lines of code need to be added:
    import sys # if you haven't already done so
    from pathlib import Path # if you haven't already done so
    root = str(Path(__file__).resolve().parents[1])
    sys.path.append(root)
  • After which, you can add your absolute imports as I do in the next two lines:
    from OptimizedClustererPackage.protobuf_reader import ProtobufReader
    from OptimizedClustererPackage.DBSCAN_clusterer import DBSCANClusterer
  • And that seems to really, really, really work (so far).
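  • A minimal sketch of the matrix-of-values idea from above, assuming stand-in blob data and a placeholder fitness score (the real points and scoring would come from the clusterer package):
    import numpy as np
    import matplotlib.pyplot as plt
    from mpl_toolkits.mplot3d import Axes3D  # needed for projection='3d'
    from sklearn.cluster import DBSCAN
    from sklearn.datasets import make_blobs

    # stand-in data; the real points come from the protobuf reader
    points, _ = make_blobs(n_samples=500, centers=4, random_state=0)
    total = len(points)

    eps_vals = np.linspace(0.1, 1.0, 10)
    min_sizes = np.arange(3, 13)

    # build a matrix of fitness values over the (eps, min cluster size) grid
    fitness = np.zeros((len(eps_vals), len(min_sizes)))
    for i, eps in enumerate(eps_vals):
        for j, msize in enumerate(min_sizes):
            labels = DBSCAN(eps=eps, min_samples=int(msize)).fit_predict(points)
            n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
            n_unclustered = int(np.sum(labels == -1))
            # placeholder score: reward clustering more of the points into more clusters
            fitness[i, j] = (1.0 - n_unclustered / total) + n_clusters / (total / msize)

    # visualize as a surface and eyeball the best region
    M, E = np.meshgrid(min_sizes, eps_vals)
    fig = plt.figure()
    ax = fig.add_subplot(111, projection='3d')
    ax.plot_surface(M, E, fitness)
    ax.set_xlabel('min cluster size')
    ax.set_ylabel('eps')
    ax.set_zlabel('fitness')
    plt.show()

    # then look for the best values after the calculation
    best = np.unravel_index(np.argmax(fitness), fitness.shape)
    print('best eps = {0}, best min size = {1}'.format(eps_vals[best[0]], min_sizes[best[1]]))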

Phil 3.22.17

8:30 – 6:00 BRC

  • Working on the GA optimizer. I have the fitness function running and it seems reasonable. First, here’s the data with one clustering run: [image: Cluster_128]
  • And here’s the PDF of fitness by min cluster size [image: clusterOptimizer]. Note that there are at least three PDFs, though the overall best value doesn’t change.
  • Aaron is importing now. For some output, I now write the cluster iterations to a text file.

Aaron 3.21.17

Missed my blog yesterday as I got overwhelmed with a bunch of tasks. I’ll include some elements here:

  • KeyGeneratorLibrary
    • I got totally derailed for multiple hours as one of the core libraries we use throughout the system to generate 128-bit non-crypto hashes for things like rowIds had gotten thoroughly dorked up. Someone had accidentally dumped 70 MB of binary unstructured content into the library and checked it in.
    • While I was clearing out all the binary content, I was asked to remove all of the unused dependencies from our library template. All of our other libraries include SpringBoot and a bunch of other random crap, but I took the time to rip it all out, build a new version, and update our Hadoop jobs to use the latest one. The combined changes dropped the JAR from ~75 MB to 3 KB. XD
  • Hadoop Development
    • More flailing wildly trying to get our Hadoop testing and development process fixed. We’re on a new environment, and essentially it broke everything, so we have no way to develop, update, or test any of our Hadoop code.
    • Apparently this has been fixed (again).
  • TensorFlow / Sci-Py Clustering
    • Sat in with Phil for a bit looking at his latest fancy code and the output of the clusters. Very impressive, and the code is nice and clean. I’m really looking forward to moving over to predominantly Python code. I’m super burned out on Java right now, and would far rather be working on pure machine learning content rather than infrastructure and pre-processing. Maybe next sprint?
  • TFRecord Output
    • Got a chance to write a playground for TFRecord output and Python integration, before realizing that the TF ecosystem code only supports InputFormat/OutputFormat for Hadoop, and due to our current issues I cannot run those tests locally at all. *sad trombone*
  • Python Integration
    • My day is rapidly winding to a close, but I’m slapping out the test code for the Python process launching so I can at least feel like I accomplished something today.
  • Cycling / Health
    • Didn’t get to cycle today because I spent 2 hours trying to get a blood test so my doctor can verify my triglycerides have gone down.

Phil 3.21.17

7:00 – 8:00 Research

8:30 – 3:00 BRC

  • Switching gears from LaTeX to Python takes effort. Neither is natural or comfortable yet.
  • Sent Jeremy a note on conferences and vacation. Using the hours on my paycheck stub, which *could* be correct…
  • More clustering. Adding output that will be used for the optimizer clusters
    clusters = 4
    Total  = 512
    clustered = 437
    unclustered = 75
  • Built out the optimizer and filled it with a placeholder function. Will fill in after lunch. [image: minima]
  • Had to leave to take care of dad, who fainted. But here are my thoughts on the GA construction. The issue with the fitness test is that we have two variables to optimize, the EPS and the minimum cluster size, based on the number of clusters and the number of unclustered points. I want to unitize the outputs so that 2.0 is best and 0.0 is worst. The unclustered term should be 1.0 – unclustered/total. The cluster-count term should be clusters/(total/min_cluster_size).
  • The way the GA should work is that we start with a set of initial EPSs (0 – 1) and a set of cluster sizes (3 – total/3). We try each, throw the bottom half away, keep the top result, and breed a new set by interpolating (random distances?) between the remaining. We also randomly generate a new allele or two in case we get trapped on a local maximum. When we are no longer getting any improvement (some epsilon) we stop. All the points can be plotted and we can try to fit a polyline as well (one for eps and one for minimum cluster size? Could plot as a surface…). A quick sketch of the fitness and the GA loop follows this list.
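  • A quick sketch of the unitized fitness and the GA loop described above, using stand-in blob data and made-up population sizes rather than the real clusterer wiring:
    import random

    import numpy as np
    from sklearn.cluster import DBSCAN
    from sklearn.datasets import make_blobs

    points, _ = make_blobs(n_samples=512, centers=4, random_state=0)
    total = len(points)

    def fitness(eps: float, min_size: int) -> float:
        """Unitized score: 2.0 is best, 0.0 is worst."""
        labels = DBSCAN(eps=eps, min_samples=min_size).fit_predict(points)
        n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
        n_unclustered = int(np.sum(labels == -1))
        clustered_term = 1.0 - n_unclustered / total     # fewer unclustered points is better
        cluster_term = n_clusters / (total / min_size)   # more clusters (for this size) is better
        return clustered_term + cluster_term

    def random_allele():
        return random.uniform(0.01, 1.0), random.randint(3, total // 3)

    pop = [random_allele() for _ in range(20)]
    best_score = -1.0
    for generation in range(20):
        scored = sorted(pop, key=lambda a: fitness(*a), reverse=True)
        top = scored[:len(scored) // 2]            # throw the bottom half away
        new_score = fitness(*top[0])
        if abs(new_score - best_score) < 0.001:    # no more improvement: stop
            break
        best_score = new_score

        children = []
        while len(top) + len(children) < len(pop) - 2:
            a, b = random.sample(top, 2)
            t = random.random()                    # interpolate between two survivors
            children.append((a[0] + t * (b[0] - a[0]),
                             int(round(a[1] + t * (b[1] - a[1])))))
        # a couple of fresh random alleles in case we're stuck on a local maximum
        pop = top + children + [random_allele(), random_allele()]

    print('best (eps, min_size) =', max(pop, key=lambda a: fitness(*a)), 'score =', best_score)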

Phil 3.20.17

Spring!

7:00 – 8:30 Research

  • Morning thought. If the perimeter is set to ‘lethal’, what is the SHR that required the lowest replenishment in an “All Exploit” scenario? Also, how many explorers are needed to keep a runaway echo chamber in range?
  • Need to factor in some of what Arendt talks about in her Bose-Einstein Condensate model of end-stage totalitarianism. Also ordered this
  • MASON is a fast discrete-event multiagent simulation library core in Java, designed to be the foundation for large custom-purpose Java simulations, and also to provide more than enough functionality for many lightweight simulation needs. MASON contains both a model library and an optional suite of visualization tools in 2D and 3D. Documentation here.
  • Working on poster. Going to try LaTeX, mostly to get better at it. Need to pull up my TEI poster to see the format we use. Using the beamerposter format. So far, pretty painless.

9:00 – 5:00 BRC

  • Create the framework
    • Reader – built the generator part
    • Clusterer – have simple DBSCAN working. It’s pickier than I would have thought. [image: clusters]
    • Optimizer
    • Writer
  • Request time “off” for collective intelligence (June 15-16)  and  HCIC (June 25- 29), and vacation (June 4 – 11)

3.19.17

Monday task!!!

Call OPM at 1-888-767-6738 after scrum

And this looks pretty interesting: https://github.com/unitedstates. Found it looking for bill full text to feed into the LMN system. Here’s an example of the tagged XML (a small parsing sketch follows the example):

<?xml version="1.0"?>
<bill bill-stage="Introduced-in-House" dms-id="H7B2411C180AA4EF7AE87C3F9B3844016" public-private="public" bill-type="olc"> 
<metadata xmlns:dc="http://purl.org/dc/elements/1.1/">
<dublinCore>
<dc:title>113 HR 1237 IH: To authorize and request the President to award the Medal of Honor posthumously to Major Dominic S. Gentile of the United States Army Air Forces for acts of valor during World War II.</dc:title>
<dc:publisher>U.S. House of Representatives</dc:publisher>
<dc:date>2013-03-18</dc:date>
<dc:format>text/xml</dc:format>
<dc:language>EN</dc:language>
<dc:rights>Pursuant to Title 17 Section 105 of the United States Code, this file is not subject to copyright protection and is in the public domain.</dc:rights>
</dublinCore>
</metadata>
<form>
<distribution-code display="yes">I</distribution-code> 
<congress>113th CONGRESS</congress>
<session>1st Session</session>
<legis-num>H. R. 1237</legis-num> 
<current-chamber>IN THE HOUSE OF REPRESENTATIVES</current-chamber> 
<action> 
<action-date date="20130318">March 18, 2013</action-date> 
<action-desc><sponsor name-id="B001281">Mrs. Beatty</sponsor> introduced the following bill; which was referred to the <committee-name committee-id="HAS00">Committee on Armed Services</committee-name></action-desc>
</action> 
<legis-type>A BILL</legis-type> 
<official-title>To authorize and request the President to award the Medal of Honor posthumously to Major Dominic S. Gentile of the United States Army Air Forces for acts of valor during World War II.</official-title> 
</form> 
<legis-body id="HC4FC3A2EC9CD480F8E7100E4CF3C2F3C" style="OLC"> 
<section id="HED5DF0B8F90849ECB7D6A49028BC38E1" section-type="section-one"><enum>1.</enum><header>Authorization and request for award of Medal of Honor to Dominic S. Gentile for acts of valor during World War II</header> 
<subsection id="H21C369FA21D644EB9F38767C54E49A0B"><enum>(a)</enum><header>Findings</header><text display-inline="yes-display-inline">Congress makes the following findings:</text> 
<paragraph id="H6A5FBF181F68426CB457AB237F565723"><enum>(1)</enum><text display-inline="yes-display-inline">Major Dominic S. Gentile of the United States Army Air Forces destroyed at least 30 enemy aircraft during World War II, making him one of the highest scoring fighter pilots in American history and earning him the title of <quote>Ace of Aces</quote>.</text></paragraph> 
<paragraph id="HF3355B912FA6432295B768BE5B58842A"><enum>(2)</enum><text>Major Gentile was the first American fighter pilot to surpass Captain Eddie Rickenbacker’s WWI record of 26 enemy aircraft destroyed.</text></paragraph> 
<paragraph id="H18EE27FA1F7A48CB8AE62301BF0B31ED"><enum>(3)</enum><text>Major Gentile was awarded several medals in recognition of his acts of valor during World War II, including two Distinguished Service Crosses, seven Distinguished Flying Crosses, the Silver Star, the Air Medal, and received similar honors from Great Britain, Italy, Belgium, and Canada.</text></paragraph> 
<paragraph id="H2F7E271C44E84E5DBC95C9F58127B93E"><enum>(4)</enum><text display-inline="yes-display-inline">Major Gentile was born in Piqua, Ohio, and died January 23, 1951, after which he was posthumously appointed to the rank of major.</text></paragraph> 
<paragraph id="HA6F4601200454270939A016AC9B9F96D"><enum>(5)</enum><text>Major Gentile is buried in Columbus, Ohio. Gentile Air Force Station in Kettering, Ohio, is named in his honor and he was inducted into the National Aviation Hall of Fame in 1995.</text></paragraph></subsection> 
<subsection display-inline="no-display-inline" id="H5B70830D32B64B4B858A89AAC16A8A4D"><enum>(b)</enum><header>Authorization</header><text display-inline="yes-display-inline">Notwithstanding the time limitations specified in <external-xref legal-doc="usc" parsable-cite="usc/10/3744">section 3744</external-xref> of title 10, United States Code, or any other time limitation with respect to the awarding of certain medals to persons who served in the Armed Forces, the President is authorized and requested to award the Medal of Honor posthumously under section 3741 of such title to former Major Dominic S. Gentile of the United States Army Air Forces for the acts of valor during World War II described in subsection (c).</text></subsection> 
<subsection commented="no" display-inline="no-display-inline" id="H0003A92F45354335B4497C04FE62D068"><enum>(c)</enum><header>Acts of valor described</header><text display-inline="yes-display-inline">The acts of valor referred to in subsection (b) are the actions of then Major Dominic S. Gentile who, as a pilot of a P–51 Mustang in the Army’s 336th Fighter Squadron, Fourth Fighter Group, of the Eighth Air Force in Europe during World War II, distinguished himself conspicuously by gallantry and intrepidity at the risk of his life above and beyond the call of duty by destroying at least 30 enemy aircraft during his service in the United State Army Air Forces.</text></subsection></section> 
</legis-body> 
</bill>
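
A minimal sketch of pulling the Dublin Core metadata and the bill text out of one of these files with the standard library; the file name is hypothetical, and a real ingest for the LMN system would need more of the body structure:

import xml.etree.ElementTree as ET

NS = {'dc': 'http://purl.org/dc/elements/1.1/'}

# hypothetical local copy of one of the bill files from the repo above
bill = ET.parse('hr1237.xml').getroot()

# Dublin Core metadata lives under metadata/dublinCore
dublin = bill.find('metadata/dublinCore')
print(dublin.find('dc:title', NS).text)
print(dublin.find('dc:date', NS).text)

# crude full-text pull: every <text> element in the legislative body
paragraphs = [''.join(t.itertext()) for t in bill.iter('text')]
print('\n\n'.join(paragraphs))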

Aaron 3.17.17

  • Hadoop Environment
    • More fun discussions on our changes to Hadoop development today. Essentially we have a DevOps box with a baby Hadoop cluster we can use for development.
  • ClusteringService scaffold / deploy
    • I spent a bit of time today building out the scaffold MicroService that will manage clustering requests, dispatch the MapReduce to populate the comparison tensor, and interact with the TensorFlow Python.
    • I ran into a few fits and starts with syntax issues where the service name was causing problems because of an errant “-“. I resolved those and updated the Dockerfile with the new TensorFlow Docker image. Once I have a finished list of the packages I need installed for Python integration, I’ll have to have them added to that image.
    • Bob said he would look at moving over the scaffold of our MapReduce job launching code from a previous service, and I suggested he not blow away all the work I had just done but instead copy in the pieces as needed.
  • TFRecord output
    • Trying to complete the code for outputting MapReduce results as TFRecord protobuf objects for TensorFlow.
    • I created a PythonIntegrationPlayground project with an OutputTFRecord.java class responsible for building a populated test matrix in a format that TensorFlow can view.
    • Google supports this with their ecosystem libraries here. The library includes instructions with versions and a working sample for MapReduce as well as Spark.
    • The frustrating thing is that presumably to avoid issues with version mismatches, they require you to compile your own .proto files with the protoc compiler, then build your own JAR for the ecosystem.hadoop library. Enough changes have happened with protoc and how it handles the locations of multiple inter-connected proto files that you absolutely HAVE to use the locations they specify for your TensorFlow installation or it will not work. In the old days you could copy the .proto files local to where you wanted to output them to avoid path issues, but that is now a Bad Thing(tm).
    • The correct commands to use are:
      • protoc --proto_path=%TF_SRC_ROOT% --java_out=src\main\java\ %TF_SRC_ROOT%\tensorflow\core\example\example.proto
      • protoc --proto_path=%TF_SRC_ROOT% --java_out=src\main\java\ %TF_SRC_ROOT%\tensorflow\core\example\feature.proto
    • After this you will need Apache Maven to build the ecosystem JAR and install it so it can be used. I pulled down the latest (v3.3.9) from maven.apache.org.
    • Because I’m a sad, sad man developing on a Windows box I had to disable the Maven tests to build the JAR, but it’s finally built and in my repo.
  • Java/Python interaction
    • I looked at a bunch of options for Java/Python interaction that would be performant enough, and allow two-way communication between Java/Python if necessary. This would allow the service to provide the location in HDFS to the TensorFlow/Sci-Kit Python clustering code and receive success/fail messages at the very least.
    • Digging on StackOverflow led me to a few options.
    • Digging a little further I found JPServe, a small library based on PyServe that uses JSON to send complex messages back to Java.
    • I think for our immediate needs it’s most straightforward to use the ProcessBuilder approach (a sketch of the Python side is at the end of this entry):
      • ProcessBuilder pb = new ProcessBuilder("python", "test1.py", "" + number1, "" + number2);
      • Process p = pb.start();
    • This does allow return codes, although not complex return data, but it avoids having to manage a PyServe instance inside a Java MicroService.
  • Cycling
    • I’ve been looking forward to a good ride for several days now, as the weather has been awful (snow/ice). Got up to high 30s today, and no visible ice on the roads so Phil and I went out for our ride together.
    • It was the first time I’ve been out with Phil on a bike with gears, and it’s clear how much I’ve been able to abuse him being on a fixie. If he’s hard to keep up with on a fixed gear, it’s painful on gears. That being said, I think I surprised him a bit when I kept a 9+ mph pace up the first hill next to him and didn’t die.
    • My average MPH dropped a bit because I burned out early, but I managed to rally and still clock a ~15 mph average with some hard pedaling towards the end.
    • I’m really enjoying cycling. It’s not a hobby I would have expected would click with me, but it’s a really fun combination of self-improvement, tenacity, min-maxing geekery, and meditation.
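  • A minimal sketch of what the Python side of the ProcessBuilder launch above could look like; the script name and arguments follow the test1.py example, and the real clustering entry point would replace the toy addition:
    # test1.py - reads two numbers from the command line, prints a result on
    # stdout, and signals success/failure through the exit code that the Java
    # side can check via Process.waitFor().
    import sys


    def main(argv):
        if len(argv) != 3:
            print('usage: test1.py <number1> <number2>', file=sys.stderr)
            return 1
        try:
            number1, number2 = float(argv[1]), float(argv[2])
        except ValueError:
            return 2  # bad arguments
        print(number1 + number2)
        return 0


    if __name__ == '__main__':
        sys.exit(main(sys.argv))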

Phil 3.17.17

7:00 – 8:00 Research

8:30 – 6:00 BRC

Phil 3.16.17

7:00 – 8:00, 4:00 – 5:30 Research

8:30 – 3:30 BRC

  • Added subtasks for the clustering optimizer
  • Meeting with Aaron and Heath about scalability
  • Converting a panda data frame to a numpy ndarray – done
    df = createDictFrame(rna, cna)
    df = df.sort_values(by='sum', ascending=True)
    mat = df.as_matrix()
  • Working on polynomials – done. [image: polyLine]
  • Played with the Mandelbrot set as well. Speedy!
  • This came across my feed: Scikit-Learn Tutorial Series

Phil 3.15.17

7:00 – 8:00 Research

8:30 – 5:00 BRC

  • Heath was able to upgrade to Python 3.5.2
    • Ran array_thoughts. Numbers are better than my laptop
    • Attempting just_dbscan: some hiccups due to compiling from sources (No module named _bz2). Stalled? Sent many links.
    • Success! Heath installed a binary Python rather than compiling from sources. A little faster than my laptop. No GPUs; CPU-bound, not memory-bound.
  • Continuing my tour of the SciPy Lecture Notes
  • Figuring out what a matplotlib backend is (a short example is at the end of this entry)
  • Looks like there are multiple ways to serve graphics: http://matplotlib.org/faq/howto_faq.html#howto-webapp
  • More on typing Python
  • Class creation, inheritance and superclass overloading, with type hints:
    class Student(object):
        name = 'noName'
        age = -1
        major = 'unset'
    
        def __init__(self, name: str):
            self.name = name
    
        def set_age(self, age: int):
            self.age = age
    
        def set_major(self, major: str):
            self.major = major
    
        def to_string(self) -> str:
            return "name = {0}\nage = {1}\nmajor = {2}"\
                .format(self.name, self.age, self.major)
    
    
    class MasterStudent(Student):
        internship = 'mandatory, from March to June'
    
        def to_string(self) -> str:
            return "{0}\ninternship = {1}"\
                .format(Student.to_string(self), self.internship)
    
    
    anna = MasterStudent('anna')
    print(anna.to_string())
  • Finished the Python part, Numpy next
  • Figured out how to to get a matrix shape, (again, with type hints):
    import numpy as np
    
    
    def set_array_sequence(mat: np.ndarray):
        for i in range(mat.shape[0]):
            for j in range(mat.shape[1]):
                mat[i, j] = i * 10 + j
    
    
    a = np.zeros([10, 3])
    set_array_sequence(a)
    print(a.shape)
    print(a)
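  • A short example of poking at the backend question above; choosing a non-interactive backend has to happen before pyplot is imported (the 'Agg' choice here is just one option):
    import matplotlib
    matplotlib.use('Agg')               # select a non-interactive (file-only) backend
    import matplotlib.pyplot as plt     # pyplot now uses that backend

    print(matplotlib.get_backend())     # confirm which backend is active

    plt.plot([0, 1, 2], [0, 1, 4])
    plt.savefig('backend_test.png')     # no window pops up with Agg; output goes to a file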

Phil Pi Day!

Research

  • I got accepted into the Collective Intelligence conference!
  • Working on LaTeX formatting. Slow but steady.
  • OK, the whole doc is in, but the 2-column charts are not locating well. I need to re-rig them so that they are single column. Fixed! Not sure about the gray background. Maybe an outline instead?

Aaron 3.13.17

  • Sprint Review
    • Covered issues with having customers present at Sprint Reviews; i.e., don’t do it, as it makes them take 3x as long and cover less.
    • Alternative facts presented about design tasks.
  • ClusteringService
    • Sent design content to the other MapReduce developer.
    • Sent entity model queries out regarding claim data.
  • Cycling
    • I went out for the 12.5-mile loop today. It was 30 degrees with a 10-12 mph wind, but it was… easy? I didn’t even lose my breath going up “Death Hill”. I guess it’s about time to move on to the 15-mile loop for lunchtime rides.
  • Sprint Grooming / Sprint Planning
    • It was decided to roll directly from grooming to planning activities.

Phil 3.13.17

7:00 – 8:00, 5:00 – 7:00 Research

  • Back to learning LaTeX. Read the docs, which look reasonable, if a little clunky.
  • Working out how to integrate RevEx
  • Spent a while looking at Overleaf and ShareLaTeX, but decided that I like TeXstudio better. Used the MiKTeX package manager to download RevTeX 4.1.
  • Looked for “aiptemplate.tex” and “aipsamp.tex” and found them with all associated files here: ftp://ftp.tug.org/tex/texlive/Contents/live/texmf-dist/doc/latex/revtex/sample/aip. And it pretty much just worked. Now I need to start stuffing text into the correct places.

8:30 – 2:30 BRC

  • Got a response from the datapipeline folks about their demo code. Asked them to update the kmeans_single_iteration.py and functions.py files.
  • The scikit-learn DBSCAN is very fast (a small timing sketch is at the end of this entry):
    setup duration for 10000 points = 0.003002166748046875
    DBSCAN duration for 10000 points = 1.161818265914917
  • Drilling down into the documentation. Starting with the SciPy Lecture Notes
    • Python has native support for imaginary numbers. Huh.
    • Static typing is also coming. This is allowed, but doesn’t seem to do anything yet:
      from typing import List

      def calcL2Dist(t1: List[float], t2: List[float]) -> float:
    • This is really nice:
      In [35]: def variable_args(*args, **kwargs):
         ....:     print 'args is', args
         ....:     print 'kwargs is', kwargs
         ....:
      
      In [36]: variable_args('one', 'two', x=1, y=2, z=3)
      args is ('one', 'two')
      kwargs is {'y': 2, 'x': 1, 'z': 3}
  • In my ongoing urge to have interactive applications, I found Bokeh, which seems to generate JavaScript??? More traditionally, wxPython appears to be a set of bindings to the wxWidgets library. Installed, but I had to grab the compiled wheel from here (as per S.O.). I think I’m going to look closely at Bokeh though; if it can talk to the running Python, then we could have some nice diagnostics. And the research browser could possibly work through this interface as well.
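  • For reference, a small sketch of the kind of timing harness behind the DBSCAN numbers above, using synthetic blob data rather than our real points, so the durations will differ:
    import time

    from sklearn.cluster import DBSCAN
    from sklearn.datasets import make_blobs

    num_points = 10000

    start = time.time()
    points, _ = make_blobs(n_samples=num_points, centers=10, random_state=0)
    print('setup duration for {0} points = {1}'.format(num_points, time.time() - start))

    start = time.time()
    labels = DBSCAN(eps=0.5, min_samples=5).fit_predict(points)
    print('DBSCAN duration for {0} points = {1}'.format(num_points, time.time() - start))

    n_clusters = len(set(labels)) - (1 if -1 in labels else 0)
    print('clusters found =', n_clusters)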

Phil 3.10.17

Elbow Tickets!

7:00 – 8:00 Research

  • artisopensource.net
  • Accurat is a global, data-driven research, design, and innovation firm with offices in Milan and New York.
  • Formatting paper for Phys Rev E. Looks like it’s gotta be LaTeX, or more specifically, RevTeX. My entry about formats
    • Downloaded RevTeX
    • How to get Google Docs to LaTeX
    • Introduction to LaTeX
    • Installing TeX Live – slooooow… [image: TexLive]
    • That literally took hours. Don’t install the normal ‘big!’ default install?
    • Installing pandoc
    • Tried to just export a PDF, but that choked. Reading the manual at C:\texlive\2016\tlpkg\texworks\texworks-help\TeXworks-manual\en
    • Compiled the converted doc! Not that I actually know what all this stuff does yet… [image: LatexGP]
    • And then I thought, ‘gee, this is more like coding – I wonder if there is a plugin for IntelliJ?’ Yes, but this page -> BEST DEVELOPMENT SETUP FOR LATEX – says to use TeXstudio. Downloading to try. This seems to be very nice. Not sure if it will work without a LaTeX install, but I’ll try that on my home box. It would be a much faster install if it did. And it’s been updated very recently – Jan 2017
      • Aaaand the answer is no, it needs an install. Trying MiKTeX this time. Well that’s a LOT faster!

8:30 – 10:30, 11:00 – 2:00 BRC

Phil 3.9.17

7:00 – 7:30, 4:00-5:30  Research

9:30 – 3:30 BRC

  • Neat thing from Flickr on finding similar images.
  • How to install pyLint as an external tool in IntelliJ.
  •  How to find out where your python modules are installed:
    C:\Windows\system32>pip3 show pylint
    Name: pylint
    Version: 1.6.5
    Summary: python code static checker
    Home-page: https://github.com/PyCQA/pylint
    Author: Python Code Quality Authority
    Author-email: code-quality@python.org
    License: GPL
    Location: c:\users\philip.feldman\appdata\local\programs\python\python35\lib\site-packages
    Requires: colorama, mccabe, astroid, isort, six
  • Looking at building a scikit DBSCAN clusterer. I think the plan will be to initially use TF for IO: read in the protobuf and eval() out the matrix to scikit, do the clustering in scikit, and then use TF to write out the results. Since TF and scikit are very similar, that should aid in the transfer from Python to TF, while allowing for debugging and testing in the beginning. And we can then benchmark. (A rough sketch of this split is at the end of this entry.)
  • Working on running the scikit.learn plot_dbscan example, and broke the scipy install. Maybe use the Windows installers? Not sure what that might break. Will try again and follow error messages first.
  • This looks like the fix: http://stackoverflow.com/questions/28190534/windows-scipy-install-no-lapack-blas-resources-found
    • Sorry to necro, but this is the first google search result. This is the solution that worked for me:
      1. Download numpy+mkl wheel from http://www.lfd.uci.edu/~gohlke/pythonlibs/#numpy. Use the version that is the same as your python version (check using python -V). Eg. if your python is 3.5.2, download the wheel which shows cp35
      2. Open command prompt and navigate to the folder where you downloaded the wheel. Run the command: pip install [file name of wheel]
      3. Download the SciPy wheel from: http://www.lfd.uci.edu/~gohlke/pythonlibs/#scipy (similar to the step above).
      4. As above, pip install [file name of wheel]
  • Got a new error:
    TypeError: unorderable types: str() < int()
    • After some searching, here’s the SO answer
    • Changed line 406 of fixes.py from:
      if np_version < (1, 12, 0):

      into

      if np_version < (1, 12):
    • Success!!! [image: DBSCAN_cluster_test]
  • Sprint Review
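  • A rough sketch of the TF-for-IO / scikit-for-clustering split described above; the tensor construction, the TFRecord write, and the file name are simplified stand-ins for the real protobuf reader and writer:
    import numpy as np
    import tensorflow as tf
    from sklearn.cluster import DBSCAN

    # stand-in for the matrix that would come out of the protobuf reader
    raw = np.random.rand(512, 3).astype(np.float32)
    mat_tensor = tf.constant(raw)

    # eval() the matrix out of TensorFlow and hand it to scikit-learn
    with tf.Session() as sess:
        mat = mat_tensor.eval(session=sess)

    labels = DBSCAN(eps=0.3, min_samples=5).fit_predict(mat)

    # write the cluster labels back out as a TFRecord
    example = tf.train.Example(features=tf.train.Features(feature={
        'labels': tf.train.Feature(
            int64_list=tf.train.Int64List(value=[int(l) for l in labels]))
    }))
    writer = tf.python_io.TFRecordWriter('cluster_labels.tfrecord')
    writer.write(example.SerializeToString())
    writer.close()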