Learn how DroneData is powering Cornell research in Artificial Intelligence.
DroneData and its founder Martin Woodall proudly support Artificial Intelligence research at Cornell University with the power of NVIDIA GPU-accelerated high-performance computing servers. DroneData believes that science depends on “the educated imagination” to discover new truths and groundbreaking technologies.
The Sabuncu Lab is a new laboratory in Cornell's School of Electrical and Computer Engineering that conducts cutting-edge research at the intersection of artificial intelligence and healthcare. A core component of the research program is developing machine learning algorithms to process and analyze biomedical imaging data, such as brain MRI scans or chest CT scans. A common goal across their projects is to build machine learning systems that automatically detect abnormal lesions in a patient's scan and assist in clinical decisions, including diagnosis, prognosis, and treatment planning. Researchers are exploring the use of advanced deep learning algorithms coupled with GPU-based hardware to implement such systems. In one pilot project, the objective is to detect and classify lung nodules in chest CT scans obtained during lung cancer screening.
Geoff Pleiss, PhD student, computer science
Kilian Weinberger, associate professor, computer science
The goal of Deep Feature Interpolation is to produce identity-preserving changes to images using neural networks. Examples of identity-preserving changes include adding or removing facial hair, aging, and changing a frown to a smile. It is important that these changes do not alter identifying features of the original photo: adding facial hair to a photo of a man, for instance, should not produce a picture of a different man.
We propose to produce these identity-preserving changes in a data-driven manner using deep feature representations. The hypothesis is that in a deep-feature space, classes (e.g. young/old, facial hair/clean-shaven) are linearly separable. Thus, changes to photos can be made with linear interpolations in this deep-feature space.
One major issue is reconstructing an image from these modified deep representations. I am investigating if it is possible to learn invertible neural networks, such that it is easy to move between pixel-space and deep-feature space.
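The core interpolation step described above can be illustrated with a small NumPy sketch. This is not the authors' implementation; the random arrays stand in for deep features that would come from a real network (e.g. VGG activations), and `attribute_vector` and `interpolate` are hypothetical helper names.

```python
import numpy as np

def attribute_vector(source_feats, target_feats):
    # Attribute direction: difference of class means in deep-feature space,
    # e.g. mean(bearded features) - mean(clean-shaven features).
    return target_feats.mean(axis=0) - source_feats.mean(axis=0)

def interpolate(feat, w, alpha=1.0):
    # Linear interpolation: move a single feature vector along the
    # attribute direction by a step size alpha.
    return feat + alpha * w

# Toy stand-ins for deep features of two attribute classes.
rng = np.random.default_rng(0)
clean_shaven = rng.normal(0.0, 1.0, size=(100, 64))
bearded = rng.normal(0.5, 1.0, size=(100, 64))  # shifted cluster

w = attribute_vector(clean_shaven, bearded)
x = clean_shaven[0]
x_new = interpolate(x, w, alpha=1.0)  # feature vector pushed toward "bearded"
```

In the actual method, `x_new` would then have to be mapped back to pixels, which is exactly the reconstruction problem described above.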
Paul Upchurch, PhD student, computer science
Kavita Bala, professor, computer science
We are investigating deep convolutional models which support changing the content of images and videos. For example, given a high-resolution image of a person, we can automatically age them or add facial hair while preserving the person's identity. Investigating convolutional models can improve the quality of the result and increase our understanding of how computers learn visual semantic filters.
Deep learning achieves great predictive performance by automatically learning rich statistical representations that would be difficult to match with hand-crafted feature engineering. If we can interpret this learned statistical structure, then we can gain new fundamental insights into our data and predictions, leading to more effective predictive models and new scientific discoveries. We are working on developing interpretable deep models, with probabilistic representations of uncertainty, interpretable similarity metrics, graphical representations, and fully automatic structure discovery.
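One common way to attach a probabilistic representation of uncertainty to a deep model is Monte Carlo dropout: keep dropout active at prediction time and treat the spread of repeated stochastic forward passes as predictive uncertainty. The sketch below, which is illustrative and not this group's specific method, shows the idea on a tiny random two-layer network in NumPy.

```python
import numpy as np

rng = np.random.default_rng(0)
W1 = rng.normal(size=(10, 32))  # toy weights for a 2-layer network
W2 = rng.normal(size=(32, 1))

def forward(x, rng, p_drop=0.5):
    h = np.maximum(x @ W1, 0.0)          # ReLU hidden layer
    mask = rng.random(h.shape) > p_drop  # dropout stays ON at test time
    h = h * mask / (1.0 - p_drop)        # rescale kept units
    return h @ W2

x = rng.normal(size=(1, 10))
# Repeated stochastic passes give a distribution over predictions.
samples = np.array([forward(x, rng) for _ in range(200)])
mean, std = samples.mean(), samples.std()  # predictive mean and uncertainty
```

A large `std` flags inputs where the model's prediction should not be trusted, which is one practical payoff of uncertainty-aware deep models.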
My project explores more efficient deep learning models, making them more applicable to devices with limited computational power. Possible approaches include introducing sparsity, pruning redundant filters, and/or quantizing the weights. It is also interesting to design more compact neural network architectures, such as Densely Connected Convolutional Networks (DenseNets), which save computational cost by design.
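Two of the techniques mentioned above, magnitude pruning and weight quantization, can be sketched in a few lines of NumPy. This is a minimal illustration of the general ideas, not the project's actual code; the function names are hypothetical.

```python
import numpy as np

def prune_by_magnitude(w, sparsity=0.5):
    # Zero out the given fraction of weights with the smallest magnitude.
    k = int(w.size * sparsity)
    thresh = np.sort(np.abs(w).ravel())[k]
    return np.where(np.abs(w) < thresh, 0.0, w)

def quantize_uniform(w, bits=8):
    # Uniform affine quantization to 2**bits levels over [min, max].
    lo, hi = w.min(), w.max()
    scale = (hi - lo) / (2**bits - 1)
    q = np.round((w - lo) / scale)
    return q * scale + lo

rng = np.random.default_rng(0)
w = rng.normal(size=(64, 64))
w_pruned = prune_by_magnitude(w, sparsity=0.5)  # ~50% zeros, cheap to store/skip
w_q = quantize_uniform(w, bits=8)               # at most 256 distinct values
```

Sparse weights can be stored compactly and their multiplications skipped, while 8-bit weights cut memory traffic roughly fourfold versus 32-bit floats, which is where the savings on low-power devices come from.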
Binarized neural networks (BNNs) have recently shown a great deal of promise for FPGA and ASIC accelerators. However, network architectures for BNNs are still primitive compared to full-precision neural networks. This project plans to explore more advanced architectures for BNNs, which have the potential to reduce the parameter size of BNNs.
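To make the parameter-size argument concrete, here is a sketch of the basic binarization step used in BNN-style methods: each weight is replaced by its sign, scaled by the mean absolute value (the scaling used in XNOR-Net-style binarization). This illustrates the general technique, not this project's architecture work.

```python
import numpy as np

def binarize(w):
    # Binarize weights to {-alpha, +alpha}, with alpha = mean |w|
    # (the per-tensor scaling used in XNOR-Net-style binarization).
    alpha = np.abs(w).mean()
    return alpha * np.sign(w), alpha

rng = np.random.default_rng(0)
w = rng.normal(size=(3, 3, 16))   # toy convolutional filter bank
w_bin, alpha = binarize(w)
```

Since each binarized weight needs only one bit plus a shared scale, storage drops by roughly 32x versus 32-bit floats, which is why the architecture of a BNN so directly determines how small an accelerator's on-chip weight memory can be.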
Natural language provides an expressive and accessible interface to increasingly common automated agents and computing interfaces, including robotic agents and automated personal assistants. However, natural language understanding is a complex challenge and requires reasoning about linguistic meaning and its use in context, for example to resolve references to objects in the environment when instructing a robotic system. The goal of this project is to develop algorithms for learning to understand natural language through interaction with users and experimenting in the world.
We are redesigning camera systems with computer vision in mind. The idea is that traditional camera systems spend time, energy, and complexity on producing high-quality photographs for human viewing, but computer vision algorithms can work equally well on noisy, low-resolution image data. Our experiments show that re-training convolutional neural networks (CNNs) on the raw data produced by our low-power camera lets them perform about as well as they do on full photographs, but the images are much cheaper to capture.
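The experimental setup described above can be approximated in software: degrade clean photographs the way a low-power sensor might (lower resolution, more noise) and train or evaluate the CNN on the degraded data. The sketch below is a hypothetical NumPy stand-in for that degradation step, not the group's actual camera pipeline.

```python
import numpy as np

def simulate_low_power_capture(img, factor=4, noise_std=0.05, rng=None):
    # Degrade a clean image the way a cheap, low-power sensor might:
    # downsample by block-averaging, then add Gaussian read noise.
    rng = rng if rng is not None else np.random.default_rng()
    h = img.shape[0] // factor * factor
    w = img.shape[1] // factor * factor
    small = img[:h, :w].reshape(h // factor, factor,
                                w // factor, factor).mean(axis=(1, 3))
    noisy = small + rng.normal(0.0, noise_std, size=small.shape)
    return np.clip(noisy, 0.0, 1.0)

rng = np.random.default_rng(0)
img = rng.random((64, 64))                            # stand-in for a photo
low = simulate_low_power_capture(img, factor=4, rng=rng)  # 16x16 noisy image
```

Re-training a CNN on such degraded inputs, rather than on finished photographs, is what lets the classifier tolerate the cheap capture path.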