6:30 – 7:00, 4:00 – 6:00 Research
- Meeting with Don
- Something that’s built from agent-based models? Toward a Formal, Visual Framework of Emergent Cognitive Development of Scholars
- Detecting and visualizing emerging trends and transient patterns in scientific literature
- Agent-based computing from multi-agent systems to agent-based models: a visual survey
- Muaz A. Niazi – Complex Adaptive Systems, Agent-based Modeling, Complex Networks, Communication Networks, Cognitive Agent-based Computing
7:30 – 3:30, BRC
- From LearningTensorflow.com: the KMeans tutorial. Looks pretty good.
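- For my own notes (a toy sketch of mine, not the tutorial's code): the assignment step at the heart of k-means just picks the nearest centroid for each point by squared L2 distance, which ties directly into the distance calculations below.
import numpy as np
import tensorflow as tf

# Toy k-means assignment step: 10 random 2-D points, 3 random centroids
points = tf.constant(np.random.rand(10, 2), tf.float32)
centroids = tf.constant(np.random.rand(3, 2), tf.float32)
# Broadcast to a (10, 3, 2) difference tensor, then reduce to (10, 3) squared distances
diffs = tf.expand_dims(points, 1) - tf.expand_dims(centroids, 0)
sqDists = tf.reduce_sum(tf.square(diffs), 2)
assignments = tf.argmin(sqDists, 1)  # index of the nearest centroid for each point

sess = tf.InteractiveSession()
print(sess.run(assignments))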
- This looks interesting: "Large-Scale Evolution of Image Classifiers". From the abstract: "Neural networks have proven effective at solving difficult problems but designing their architectures can be challenging, even for image classification problems alone. Evolutionary algorithms provide a technique to discover such networks automatically. Despite significant computational requirements, we show that evolving models that rival large, hand-designed architectures is possible today. We employ simple evolutionary techniques at unprecedented scales to discover models for the CIFAR-10 and CIFAR-100 datasets, starting from trivial initial conditions. To do this, we use novel and intuitive mutation operators that navigate large search spaces. We stress that no human participation is required once evolution starts and that the output is a fully-trained model."
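- To keep the core idea in mind (a toy sketch of my own, nowhere near the paper's actual system): the simplest evolutionary loop mutates a candidate, keeps the mutation only when fitness improves, and runs with no human participation once started.
import random

def fitness(genome):
    # Stand-in objective; the paper evolves full image classifiers instead
    return -sum((g - 0.5) ** 2 for g in genome)

genome = [random.random() for _ in range(8)]  # trivial initial condition
for step in range(1000):
    child = list(genome)
    child[random.randrange(len(child))] += random.gauss(0, 0.1)  # simple mutation operator
    if fitness(child) > fitness(genome):  # selection: keep the better candidate
        genome = child
print(fitness(genome))  # approaches 0.0 as the genome converges toward all 0.5s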
- Working on calculating the distance between two vectors. Oddly, there don't seem to be library functions for this. This seems to be the way to do it:
def calcL2Dist(t1, t2):
    # Note: this is the *squared* L2 distance; take tf.sqrt() of the result for the true Euclidean distance
    sub = tf.subtract(t1, t2)
    squares = tf.square(sub)
    dist = tf.reduce_sum(squares)
    return dist
- Now I’m trying to build a matrix of distances. Got it working after some confusion. Here’s the full code. Note that the ‘source’ matrix is declared as a constant, since it’s immutable(?)
import numpy as np
import tensorflow as tf

def calcL2Dist(t1, t2):
    # Squared L2 distance between two tensors (no tf.sqrt(), so strictly the squared distance)
    sub = tf.subtract(t1, t2)
    squares = tf.square(sub)
    dist = tf.reduce_sum(squares)
    return dist

def initDictRandom(rows=3, cols=5, prefix="doc_"):
    # Build a dict of named random row vectors
    docDict = {}
    for i in range(rows):
        name = prefix + '{0}'.format(i)
        docDict[name] = tf.Variable(np.random.rand(cols), dtype=tf.float32)
    return docDict

def initDictSeries(rows=3, cols=5, offset=1, prefix="doc_"):
    # Build a dict of named constant vectors: row i is [(i+offset)*10, (i+offset)*10 + 1, ...]
    docDict = {}
    for i in range(rows):
        name = prefix + '{0}'.format(i)
        array = []
        for j in range(cols):
            array.append((i + offset) * 10 + j)
        docDict[name] = tf.constant(array, tf.float32)
    return docDict

def createCompareDict(sourceDict):
    # Pairwise (squared) distances between every pair of distinct vectors
    distCompareDict = {}
    keys = sourceDict.keys()
    for n1 in keys:
        for n2 in keys:
            if n1 != n2:
                name = "{0}_{1}".format(n1, n2)
                dist = calcL2Dist(sourceDict[n1], sourceDict[n2])
                distCompareDict[name] = tf.Variable(dist, dtype=tf.float32)
    return distCompareDict

sess = tf.InteractiveSession()
docDict = initDictSeries(cols=3)
distDict = createCompareDict(docDict)
init = tf.global_variables_initializer()
sess.run(init)
print("{0}".format(sess.run(docDict)).replace("),", ")\n"))   # crude one-entry-per-line formatting
print("{0}".format(sess.run(distDict)).replace(",", "])\n"))  # ditto (leaves stray '])' artifacts)
- Results:
{'doc_0': array([ 10., 11., 12.], dtype=float32)
 'doc_2': array([ 30., 31., 32.], dtype=float32)
 'doc_1': array([ 20., 21., 22.], dtype=float32)}
{'doc_1_doc_2': 300.0])
 'doc_0_doc_2': 1200.0])
 'doc_1_doc_0': 300.0])
 'doc_0_doc_1': 300.0])
 'doc_2_doc_1': 300.0])
 'doc_2_doc_0': 1200.0}
- Looks like the data structures used in the tutorials all use pandas.
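- A follow-up thought (my sketch, not code from the log above): the double loop in createCompareDict can be replaced by computing all pairwise squared distances at once, using the identity ||a - b||^2 = ||a||^2 - 2*a.b + ||b||^2; it reproduces the 300/1200 values above. As an aside, the .replace() hacks in the prints could be swapped for the standard pprint module.
import tensorflow as tf

# The same three 'doc' vectors stacked into one (3 x 3) matrix
mat = tf.constant([[10., 11., 12.],
                   [20., 21., 22.],
                   [30., 31., 32.]], tf.float32)
# ||a - b||^2 = ||a||^2 - 2*a.b + ||b||^2, evaluated for all pairs at once
sqNorms = tf.reduce_sum(tf.square(mat), 1, keep_dims=True)  # (3, 1) column of squared norms
sqDists = sqNorms - 2.0 * tf.matmul(mat, mat, transpose_b=True) + tf.transpose(sqNorms)

sess = tf.InteractiveSession()
print(sess.run(sqDists))
# [[    0.   300.  1200.]
#  [  300.     0.   300.]
#  [ 1200.   300.     0.]]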
- Successfully installed pandas-0.19.2
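- While I’m at it, a hypothetical snippet (mine, not from the tutorials) showing the same doc vectors as a pandas DataFrame, one row per document:
import pandas as pd

# Each dict key becomes a row label after the transpose; columns are vector components
docs = pd.DataFrame({"doc_0": [10., 11., 12.],
                     "doc_1": [20., 21., 22.],
                     "doc_2": [30., 31., 32.]}).T
print(docs)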
