Deep convolutional networks have become a popular tool for image generation and restoration. Generally, their excellent performance is imputed to their ability to learn realistic image priors from a large number of example images. In this paper, we show that, on the contrary, the structure of a generator network is sufficient to capture a great deal of low-level image statistics prior to any learning. In order to do so, we show that a randomly-initialized neural network can be used as a handcrafted prior with excellent results in standard inverse problems such as denoising, super-resolution, and inpainting. Furthermore, the same prior can be used to invert deep neural representations to diagnose them, and to restore images based on flash-no flash input pairs.
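The core claim – that untrained network structure alone acts as a prior – can be illustrated in a much-reduced form: fix a random feature map (standing in for the untrained generator) and fit only a small linear readout to the corrupted signal; the fixed randomness plus limited capacity smooths out noise. A minimal 1-D numpy sketch, a random-features ridge-regression stand-in rather than the paper's actual ConvNet setup:

```python
import numpy as np

rng = np.random.default_rng(0)

# Clean 1-D "image" and its noisy observation
x = np.linspace(0, 1, 400)
clean = np.sin(2 * np.pi * 2 * x)
noisy = clean + 0.3 * rng.standard_normal(x.size)

# Fixed random first layer (random Fourier features): never trained,
# playing the role of the randomly-initialized network structure.
n_feat = 50
w = rng.standard_normal(n_feat) * 15.0
b = rng.uniform(0, 2 * np.pi, n_feat)
phi = np.cos(np.outer(x, w) + b)          # (400, 50) feature matrix

# Fit only the linear readout to the NOISY target (small ridge for stability).
lam = 1e-2
theta = np.linalg.solve(phi.T @ phi + lam * np.eye(n_feat), phi.T @ noisy)
recon = phi @ theta

mse_noisy = np.mean((noisy - clean) ** 2)
mse_recon = np.mean((recon - clean) ** 2)
print(mse_noisy, mse_recon)   # reconstruction is closer to the clean signal
```

Because the 50 fixed features cannot represent per-sample noise, fitting them to the noisy signal recovers something closer to the clean one – the structure, not learned weights, does the regularizing.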
Was a rough weekend. Dad passed away after a decade or so with severe dementia. My feelings are… complex. He donated his body to the anatomy board, so expect remains in 2 months to 2 years. Also, I should be getting a letter shortly that has the information wrt getting death certificates. Sigh. At least I can stop freaking out every time the phone rings from the facility.
Wrote my review for Panos
Back to geometric primitives
Pulled out the tmesh so that it could be added as a set of Geoms to a single node
Sphere! Still need to fix the texture coords – done
Added the beginnings of satellite creation and control. I can manipulate all 6DOF independently, but only with respect to the global origin
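The global-origin behavior above usually comes down to which side the delta transform is composed on: post-multiplying moves in the object's own frame, pre-multiplying moves in the world frame. A small numpy sketch (hypothetical 4×4 column-vector convention, not the actual scene-graph API):

```python
import numpy as np

def translation(dx, dy, dz):
    T = np.eye(4)
    T[:3, 3] = [dx, dy, dz]
    return T

def rot_z(theta):
    R = np.eye(4)
    c, s = np.cos(theta), np.sin(theta)
    R[:2, :2] = [[c, -s], [s, c]]
    return R

# Satellite already rotated 90 degrees about z.
M = rot_z(np.pi / 2)
step = translation(1, 0, 0)     # "move one unit along x"

world = step @ M   # pre-multiply: steps along the WORLD x axis
local = M @ step   # post-multiply: steps along the satellite's OWN x axis

print(world[:3, 3])   # stays on world x
print(local[:3, 3])   # ends up on world y, since local x was rotated there
```

So getting motion relative to the satellite rather than the global origin is just a matter of composing the 6DOF delta on the local side.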
Scan hotel receipt (done) and fill out expense report – started, but couldn’t make it work. I need to get a new charge number. Spent hours on this trying to submit a travel expense report that didn’t have exceptions. SAP Concur is as bad an application as I have ever used.
Write down thoughts on inhibition and excitation in groups. Basically, when a group is engaged in discussion, some links are excitatory – a small group will engage in discussion, while others participate less or not at all – they are inhibited. These kinds of discussions are almost always mediated by an explicit or implicit leader. The consensus that develops is greatly influenced by who is excited and who is inhibited. Also discuss typicality, or the clustering of belief around central items (for furniture, chairs and tables are high-typicality examples)
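The dynamic described above can be sketched as a small mutual-inhibition network: participants excite themselves (and allies) and inhibit everyone else, and iterating the dynamics drives a few units high while the rest are suppressed. A toy numpy illustration, with all weights invented for the example:

```python
import numpy as np

n = 6                               # six participants
W = -0.4 * np.ones((n, n))          # everyone mildly inhibits everyone else
np.fill_diagonal(W, 0.9)            # self-excitation (persistence in the discussion)
W[0, 1] = W[1, 0] = 0.5             # participants 0 and 1 excite each other (allies)

a = np.full(n, 0.1)
a[0] = 0.3                          # participant 0, the implicit leader, starts slightly engaged

for _ in range(30):
    a = np.clip(a + 0.2 * (W @ a), 0.0, 1.0)   # clipped linear dynamics

print(np.round(a, 2))   # the allied pair saturates; the rest are driven to silence
```

The leader and ally dominate while the inhibited members drop out entirely – which is exactly why the emerging consensus depends so heavily on who is excited and who is inhibited.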
Work on flowchart(s)
Generalize cube, size in 3 dimensions and normals from cross products
Change cylinder so that normals are from cross products – done, after considerable flailing.
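The cross-product recipe in the two entries above is the same for both primitives: take two edge vectors of a face in consistent winding order, cross them, and normalize. A numpy sketch with a hypothetical face layout:

```python
import numpy as np

def face_normal(p0, p1, p2):
    """Unit normal from two edge vectors, counter-clockwise winding seen from outside."""
    n = np.cross(np.subtract(p1, p0), np.subtract(p2, p0))
    return n / np.linalg.norm(n)

# +z face of a unit cube: normal comes out along +z
n_top = face_normal([0, 0, 1], [1, 0, 1], [1, 1, 1])

# Side facet of a unit cylinder between angles 0 and 0.3:
# the cross product yields the flat-shaded facet normal, radial at the mid-angle
t0, t1 = 0.0, 0.3
a = [np.cos(t0), np.sin(t0), 0.0]
b = [np.cos(t1), np.sin(t1), 0.0]
c = [np.cos(t1), np.sin(t1), 1.0]
n_side = face_normal(a, b, c)

print(n_top, n_side)
```

Same code path for cube and cylinder; the winding order is what decides whether the normals point outward.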
School reimbursement and approval for 899 – forms filled out, waiting for signatures
We describe a distributed model of information processing and memory and apply it to the representation of general and specific information. The model consists of a large number of simple processing elements which send excitatory and inhibitory signals to each other via modifiable connections. Information processing is thought of as the process whereby patterns of activation are formed over the units in the model through their excitatory and inhibitory interactions. The memory trace of a processing event is the change or increment to the strengths of the interconnections that results from the processing event. The traces of separate events are superimposed on each other in the values of the connection strengths that result from the entire set of traces stored in the memory. The model is applied to a number of findings related to the question of whether we store abstract representations or an enumeration of specific experiences in memory. The model simulates the results of a number of important experiments which have been taken as evidence for the enumeration of specific experiences. At the same time, it shows how the functional equivalent of abstract representations – prototypes, logogens, and even rules – can emerge from the superposition of traces of specific experiences, when the conditions are right for this to happen. In essence, the model captures the structure present in a set of input patterns; thus, it behaves as though it had learned prototypes or rules, to the extent that the structure of the environment it has learned about can be captured by describing it in terms of these abstractions.
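The superposition idea in the abstract above can be demonstrated with a bare-bones Hebbian autoassociator: store only the summed outer products of distorted exemplars, and the never-presented prototype nonetheless dominates retrieval. A minimal numpy sketch – a stand-in for, not a reimplementation of, the paper's model:

```python
import numpy as np

rng = np.random.default_rng(1)
dim = 100

# The prototype itself is never presented; only distorted exemplars are.
prototype = rng.choice([-1.0, 1.0], size=dim)

W = np.zeros((dim, dim))
for _ in range(40):
    exemplar = prototype.copy()
    flip = rng.random(dim) < 0.15          # each feature distorted 15% of the time
    exemplar[flip] *= -1
    W += np.outer(exemplar, exemplar)      # Hebbian increment; traces superimpose
np.fill_diagonal(W, 0)

# Cue with a heavily distorted pattern and let the network settle.
cue = prototype.copy()
flip = rng.random(dim) < 0.3
cue[flip] *= -1
state = cue
for _ in range(5):
    state = np.sign(W @ state)

match = np.mean(state == prototype)
print(match)   # retrieval lands on (or very near) the unseen prototype
```

No unit ever stores "the prototype"; the abstraction exists only as structure shared across the superimposed traces, which is the paper's central point.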
Analysing topics in short texts (e.g., tweets and news headlines) is a challenging task because short texts often contain insufficient word co-occurrence information, which is important for learning good topics in conventional topic models. To deal with the insufficiency, we propose a generative model that aggregates short texts into clusters by leveraging the associated meta information. Our model can generate more interpretable topics as well as document clusters. We develop an effective Gibbs sampling algorithm enabled by the fully local conjugacy in the model. Extensive experiments demonstrate that our model achieves better performance in terms of document clustering and topic coherence.
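Not the paper's model, but the flavor of collapsed Gibbs sampling for short-text clustering can be sketched with a Dirichlet–multinomial mixture (one cluster per short document), along the lines of GSDMM; all documents and hyperparameters below are invented for the example:

```python
import numpy as np

rng = np.random.default_rng(2)

docs = [["ball", "goal", "team"], ["team", "match", "goal"],
        ["stock", "market", "shares"], ["market", "trade", "stock"],
        ["goal", "match", "ball"], ["shares", "trade", "market"]]
vocab = sorted({w for d in docs for w in d})
wid = {w: i for i, w in enumerate(vocab)}
V, K, D = len(vocab), 2, len(docs)
alpha, beta = 1.0, 0.1

z = rng.integers(K, size=D)                       # random initial cluster per doc
nk = np.bincount(z, minlength=K).astype(float)    # docs per cluster
nkw = np.zeros((K, V))                            # word counts per cluster
for d, doc in enumerate(docs):
    for w in doc:
        nkw[z[d], wid[w]] += 1

n_sweeps = 200
for sweep in range(n_sweeps):
    for d, doc in enumerate(docs):
        k = z[d]
        nk[k] -= 1                                # remove doc d from its cluster
        for w in doc:
            nkw[k, wid[w]] -= 1
        logp = np.log(nk + alpha)                 # cluster-popularity prior term
        for j in range(K):
            nw = nkw[j].copy()
            tot = nw.sum()
            for w in doc:                         # sequential word likelihood
                logp[j] += np.log((nw[wid[w]] + beta) / (tot + V * beta))
                nw[wid[w]] += 1
                tot += 1
        p = np.exp(logp - logp.max())
        if sweep < n_sweeps - 1:
            k = int(rng.choice(K, p=p / p.sum()))  # sample during burn-in
        else:
            k = int(np.argmax(p))                  # final sweep: hard assignment
        z[d] = k
        nk[k] += 1
        for w in doc:
            nkw[k, wid[w]] += 1

print(z)   # the sports docs cluster together, the finance docs together
```

Because each short document commits wholly to one cluster, the word counts of same-topic documents pool together – a crude version of the aggregation the paper achieves with meta information.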