
Publications


Efficient and accurate extraction of in vivo calcium signals from microendoscopic video data

Zhou P, Resendez SL, Rodriguez-Romaguera J, Jimenez JC, Neufeld SQ, Stuber GD, Hen R, Kheirbek MA, Sabatini BL, Kass RE, Paninski L | J Neurosci | 2017 | PDF

In vivo calcium imaging through microendoscopes has enabled deep brain imaging of previously inaccessible neuronal populations in freely moving subjects. However, microendoscopic data suffer from high levels of background fluorescence as well as an increased potential for overlapping neuronal signals. Previous methods fail to identify neurons and demix their temporal activity because the cellular signals are often submerged in the large fluctuating background. Here we develop an efficient method to extract cellular signals with minimal influence from the background. We model the background with two realistic components: (1) one models the constant baseline and slow trends of each pixel, and (2) the other models the fast fluctuations from out-of-focus signals and is therefore constrained to have low spatial-frequency structure. This decomposition prevents cellular signals from being absorbed into the background term. After subtracting the background approximated with this model, we use constrained nonnegative matrix factorization (CNMF; Pnevmatikakis et al., 2016) to better demix neural signals and obtain their denoised and deconvolved temporal activity. We validate our method on simulated and experimental data, where it shows fast, reliable, and high-quality signal extraction under a wide variety of imaging parameters.
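
For intuition, the following Python sketch implements the two-component background idea in its simplest possible form: a per-pixel temporal median stands in for the baseline/slow-trend term, a wide Gaussian blur enforces the low spatial-frequency constraint on the fast fluctuations, and plain NMF stands in for the constrained NMF used in the paper. The array shapes, sigma_bg, and n_neurons are illustrative assumptions, not the authors' implementation.

    import numpy as np
    from scipy.ndimage import gaussian_filter
    from sklearn.decomposition import NMF

    def extract_signals(Y, n_neurons=20, sigma_bg=15.0):
        # Y: calcium movie as a nonnegative (T, H, W) array.
        T, H, W = Y.shape
        # Component (1): constant baseline and slow trend of each pixel,
        # crudely approximated here by the per-pixel temporal median.
        baseline = np.median(Y, axis=0, keepdims=True)
        # Component (2): fast out-of-focus fluctuations, constrained to low
        # spatial frequency via a wide Gaussian blur of each frame.
        fluctuations = Y - baseline
        low_freq = np.stack([gaussian_filter(f, sigma_bg) for f in fluctuations])
        # Background-subtracted movie; clipped so NMF sees nonnegative input.
        residual = np.clip(fluctuations - low_freq, 0.0, None)
        # Demix with plain NMF; the paper instead applies CNMF with temporal
        # deconvolution to this background-subtracted data.
        nmf = NMF(n_components=n_neurons, init="nndsvd", max_iter=200)
        traces = nmf.fit_transform(residual.reshape(T, H * W))  # (T, K) temporal activity
        footprints = nmf.components_.reshape(n_neurons, H, W)   # (K, H, W) spatial footprints
        return footprints, traces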


Learning to Associate Words and Images Using a Large-scale Graph

Ya H, Sun H, Helt J, Lee TS | Conference on Computer Vision and Robotics 2017 | 2017 | PDF

We develop an approach for unsupervised learning of associations between co-occurring perceptual events using a large graph. We applied this approach to successfully solve the image captcha of China's railroad system. The approach is based on the principle of suspicious coincidence. In this particular problem, a user is presented with a deformed picture of a Chinese phrase and eight low-resolution images, and must quickly select the relevant images in order to purchase a train ticket. This problem presents several challenges: (1) the teaching labels for both the Chinese phrases and the images were not available for supervised learning, (2) no pre-trained deep convolutional neural networks were available for recognizing these Chinese phrases or the presented images, and (3) each captcha must be solved within a few seconds. We collected 2.6 million captchas, with 2.6 million deformed Chinese phrases and over 21 million images. From these data, we constructed an association graph, composed of over 6 million vertices, and linked these vertices based on co-occurrence information and feature similarity between pairs of images. We then trained a deep convolutional neural network to learn a projection of the Chinese phrases onto a 230-dimensional latent space. Using label propagation, we computed the likelihood of each of the eight images conditioned on the latent-space projection of the deformed phrase for each captcha. The resulting system solved captchas with 77% accuracy in 2 seconds on average. Our work, in answering this practical challenge, illustrates the power of this class of unsupervised association-learning techniques, which may be related to the brain's general strategy for associating language stimuli with visual objects on the principle of suspicious coincidence.
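
As a hedged illustration of the label-propagation step, the sketch below diffuses seed scores through a sparse association graph; the graph construction, the seeding from the 230-dimensional latent projection, and names such as adj and seed are assumptions for exposition, not the authors' code.

    import numpy as np
    import scipy.sparse as sp

    def propagate(adj: sp.csr_matrix, seed: np.ndarray, n_iters=10, alpha=0.85):
        # adj: row-normalized association graph over phrase and image vertices,
        #      built from co-occurrence counts and image feature similarity.
        # seed: initial scores concentrated on the vertices closest to the
        #       deformed phrase's latent-space projection.
        scores = seed.copy()
        for _ in range(n_iters):
            # Diffuse scores along graph edges while retaining seed evidence.
            scores = alpha * adj.dot(scores) + (1.0 - alpha) * seed
        return scores

    # For one captcha: rank the eight candidate image vertices by their
    # propagated scores and select the top-scoring ones as the answer.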


Learning Robust Object Recognition Using Composed Scenes from Generative Models

Wang H, Lin XY, Zhang Y, Lee TS | Conference on Computer Vision and Robotics 2017 | 2017 | PDF

Recurrent feedback connections in the mammalian visual system have been hypothesized to play a role in synthesizing input within the theoretical framework of analysis by synthesis. Comparing the internally synthesized representation with the input provides a validation mechanism during perceptual inference and learning. Inspired by these ideas, we propose that the synthesis machinery can compose new, unobserved images by imagination to train the network itself, increasing the robustness of the system in novel scenarios. As a proof of concept, we investigated whether images composed by imagination could help an object recognition system deal with occlusion, which is challenging for current state-of-the-art deep convolutional neural networks. We fine-tuned a network on images containing objects in various occlusion scenarios that were imagined, or self-generated, through a deep generator network. Trained on imagined occlusion scenarios under the object persistence constraint, our network discovered more subtle and localized image features that were neglected by the original network for object classification, obtaining better separability of different object classes in the feature space. This leads to a significant improvement in object recognition under occlusion for our network relative to the original network trained only on un-occluded images. Beyond its practical benefits for object recognition under occlusion, this work demonstrates that self-generated composition of visual scenes through the synthesis loop, combined with the object persistence constraint, can provide opportunities for neural networks to discover new relevant patterns in the data and become more flexible in dealing with novel situations.
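
The fine-tuning loop this describes might look like the following PyTorch sketch, where generator stands in for the deep generator network that composes occlusion scenarios; generator, classifier, and the single-step structure are hypothetical placeholders rather than the authors' implementation.

    import torch
    import torch.nn.functional as F

    def finetune_step(classifier, generator, images, labels, optimizer):
        # Compose a new, unobserved occlusion scenario for each training image.
        # Labels are kept unchanged: under the object persistence constraint,
        # an occluded object is still the same object.
        with torch.no_grad():
            occluded = generator(images)
        logits = classifier(occluded)
        loss = F.cross_entropy(logits, labels)
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        return loss.item()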


Transfer of View-manifold Learning to Similarity Perception of Novel Objects

Lin XY, Wang H, Li ZH, Zhang Y, Yuille A, Lee TS | ICLR 2017 | 2017 | PDF

We develop a model of perceptual similarity judgment based on re-training a deep convolutional neural network (DCNN) that learns to associate different views of each 3D object, capturing the notion of object persistence and continuity in our visual experience. The re-training process effectively performs distance metric learning under the object persistence constraint, modifying the view-manifold of object representations. It reduces the effective distance between the representations of different views of the same object without compromising the distance between views of different objects, resulting in the untangling of the view-manifolds of individual objects within the same category and across categories. This untangling enables the model to discriminate and recognize objects within the same category, independent of viewpoint. We found that this ability is not limited to the trained objects, but transfers to novel objects in both trained and untrained categories, as well as to a variety of completely novel artificial synthetic objects. This transfer in learning suggests that the modification of distance metrics in view-manifolds is more general and abstract, likely operating at the level of parts, and independent of the specific objects or categories experienced during training. Interestingly, the resulting transformation of feature representations in the deep network is found to match human perceptual similarity judgment significantly better than AlexNet, suggesting that object persistence could be an important constraint in the development of perceptual similarity judgment in our brains.
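
One common way to realize distance metric learning under an object persistence constraint is a triplet-style hinge loss over views, as in this hedged PyTorch sketch; the embedding function, the margin, and the L2 normalization are illustrative assumptions, not the paper's exact training recipe.

    import torch
    import torch.nn.functional as F

    def persistence_loss(embed, anchor, same_object, other_object, margin=0.2):
        # L2-normalized embeddings of three rendered views.
        a = F.normalize(embed(anchor), dim=1)
        p = F.normalize(embed(same_object), dim=1)   # different view, same object
        n = F.normalize(embed(other_object), dim=1)  # view of a different object
        d_pos = (a - p).pow(2).sum(dim=1)  # shrink distances within a view-manifold
        d_neg = (a - n).pow(2).sum(dim=1)  # keep distances across objects
        # Hinge: same-object views should be closer than different-object
        # views by at least the margin.
        return F.relu(d_pos - d_neg + margin).mean()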


A-Fast-RCNN: Hard Positive Generation via Adversary for Object Detection

Wang X, Shrivastava A, Gupta A | Proc. of IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2017 | 2017 | PDF

How do we learn an object detector that is invariant to occlusions and deformations? Our current solution is to use a data-driven strategy: collect large-scale datasets that contain object instances under different conditions. The hope is that the final classifier can use these examples to learn invariances. But is it really possible to see all the occlusions in a dataset? We argue that, like categories, occlusions and object deformations follow a long-tail distribution. Some occlusions and deformations are so rare that they hardly ever occur; yet we want to learn a model invariant to such occurrences. In this paper, we propose an alternative solution: learning an adversarial network that generates examples with occlusions and deformations. The goal of the adversary is to generate examples that are difficult for the object detector to classify. In our framework, the original detector and the adversary are learned jointly. Our experimental results indicate a 2.3% mAP boost on VOC07 and a 2.6% mAP boost on the VOC2012 object detection challenge compared to the Fast-RCNN pipeline. We also release the code for this paper.
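
The joint game can be sketched in PyTorch as below. The paper's adversary (its spatial dropout network) produces hard binary masks and is trained in stages, so the soft-mask relaxation here, and names such as detector_head and feats, are simplifying assumptions for a self-contained illustration.

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class OcclusionAdversary(nn.Module):
        # Predicts, per spatial location of an ROI feature map, how strongly
        # the detector's features there should be occluded.
        def __init__(self, channels):
            super().__init__()
            self.mask_head = nn.Conv2d(channels, 1, kernel_size=3, padding=1)

        def forward(self, feats):
            return torch.sigmoid(self.mask_head(feats))  # (N, 1, H, W) in [0, 1]

    def joint_step(detector_head, adversary, feats, labels):
        mask = adversary(feats)                       # soft occlusion map
        logits = detector_head(feats * (1.0 - mask))  # detector sees occluded features
        det_loss = F.cross_entropy(logits, labels)    # detector: classify anyway
        adv_loss = -det_loss                          # adversary: make examples hard
        return det_loss, adv_loss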


Fast online deconvolution of calcium imaging data

Friedrich J, Zhou P, Paninski L | PLoS Computational Biology | 2017 | doi:10.1371/journal.pcbi.1005423 | PDF


UnrealCV: Connecting computer vision to unreal engine

Qiu W, Yuille A | Computer Vision-ECCV 2016 Workshops | 2016 | PDF

Pose from action: Unsupervised learning of pose features based on motion

Purushwalkam S, Gupta A | ECCV workshop, 2016 | 2016 | PDF

Geometric Neural Phrase Pooling: Modeling the Spatial Co-occurrence of Neurons

Xie L, Tian Q, Flynn J, Wang J, Yuille A | ECCV 2016 | 2016 | PDF