14 matching results for "ai":
Submitted Mar 08, 2017 to Science Blogs The joint many-task model tackles multiple NLP tasks with a single architecture. Tasks are layered so that each task benefits from the training of the closely related tasks above and below it. Though applied to specific NLP objectives, the proposed model introduces a powerful concept for future research.
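A toy sketch of the layering idea (hypothetical, pure Python; the stand-in rules and task names are illustrative, not the paper's model): a higher-level task consumes the prediction of the task below it, so related tasks share signal.

```python
# Hypothetical sketch of layered multi-task prediction: a chunking task
# reuses the output of a lower-level part-of-speech task as extra features.

def pos_tagger(tokens):
    # lowest-level task: crude part-of-speech guesses (stand-in rule)
    return ["VERB" if t.endswith("s") else "NOUN" for t in tokens]

def chunker(tokens, pos_tags):
    # higher-level task consumes the lower task's predictions
    return [f"{tag}-PHRASE" for tag in pos_tags]

def joint_model(tokens):
    pos = pos_tagger(tokens)
    chunks = chunker(tokens, pos)
    return pos, chunks

pos, chunks = joint_model(["dog", "runs"])
print(pos)     # ['NOUN', 'VERB']
print(chunks)  # ['NOUN-PHRASE', 'VERB-PHRASE']
```

In the actual model the layers are trained jointly end to end, so gradients from the higher tasks also improve the lower ones.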
|
Submitted Mar 04, 2017 (Edited Mar 04, 2017) to Science Courses and Tutorials If you are in the position I was in, on the edge of building your own deep learning machine but a little unsure of the time you would need to invest in the setup, this post is for you. It was inspired by my fellow students, mainly Yad Faeq and Brendan Fortuner, and our professor Jeremy Howard.
|
Submitted Mar 03, 2017 to Science Research Groups » Computer Science In the HIPS group, we are interested in building intelligent algorithms. What makes a system intelligent? Our philosophy is that "intelligence" means making decisions under uncertainty, adapting to experience, and discovering structure in high-dimensional noisy data. The unifying theme for research in these areas is developing new approaches to statistical inference: uncovering the coherent structure that we cannot directly observe and using it for exploration and to make decisions or predictions. We develop new models for data, new tools for performing inference, and new computational structures for representing knowledge and uncertainty.
|
Submitted Mar 02, 2017 to Scientific Software Mathematical optimization is a well-studied language for expressing solutions to many real-life problems that come up in machine learning and in many other fields, such as mechanics, economics, electrical engineering, operations research, control engineering, geophysics, and molecular modeling. As we build machine learning systems to interact with real data from these fields, we often cannot (though sometimes can) simply "learn away" the optimization sub-problems by adding more layers to our network. Well-defined optimization problems can be added by hand if you have a thorough understanding of your feature space, but often we lack this understanding and resort to automatic feature learning for our tasks.
Until this repository, no modern deep learning library provided a way of adding a learnable optimization layer (other than simply unrolling an optimization procedure, which is inefficient and inexact) to a model formulation, so that we can quickly try it and see whether it is a good way of expressing our data. See our paper OptNet: Differentiable Optimization as a Layer in Neural Networks and the code at locuslab/optnet if you are interested in our initial exploration of automatically learning quadratic program layers for signal denoising and sudoku.
|
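A minimal sketch of the "unrolling" baseline the entry mentions (pure Python, illustrative objective chosen by us, not the OptNet method itself): we solve a tiny inner optimization problem by unrolled gradient steps, then check that the solution is differentiable with respect to its parameter by finite differences. An OptNet-style layer would instead differentiate the optimality (KKT) conditions of a quadratic program directly.

```python
# Inner problem: x*(theta) = argmin_x (x - theta)^2 + 0.1 * x^2,
# solved by K unrolled gradient-descent steps. Closed form: x* = theta / 1.1.

def unrolled_argmin(theta, steps=100, lr=0.1):
    """Approximate the argmin by unrolled gradient descent on x."""
    x = 0.0
    for _ in range(steps):
        grad = 2.0 * (x - theta) + 0.2 * x  # d/dx of the objective
        x -= lr * grad
    return x

theta = 3.0
x_star = unrolled_argmin(theta)
print(x_star)  # ~2.727 (= 3 / 1.1)

# Sensitivity of the solution to theta, differentiating *through* the unroll
# via central finite differences (an autodiff framework would do this exactly):
eps = 1e-5
dx_dtheta = (unrolled_argmin(theta + eps) - unrolled_argmin(theta - eps)) / (2 * eps)
print(dx_dtheta)  # ~0.909 (= 1 / 1.1)
```

The cost of the unroll is that every inner step stays in the computation graph; differentiating the optimality conditions avoids storing and replaying those steps.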
Submitted Feb 23, 2017 to Science Community Organizations The performance of machine learning methods is heavily dependent on the choice of data representation (or features) on which they are applied. The rapidly developing field of representation learning is concerned with questions surrounding how we can best learn meaningful and useful representations of data. We take a broad view of the field and include topics such as deep learning and feature learning, metric learning, compositional modeling, structured prediction, reinforcement learning, and issues regarding large-scale learning and non-convex optimization. The range of domains to which these techniques apply is also very broad, from vision to speech recognition, text understanding, gaming, music, etc.
|
Submitted Feb 20, 2017 to Science Courses and Tutorials A readable explanation of generative adversarial networks (GANs) with example code using PyTorch.
|
Submitted Feb 16, 2017 to Science Blogs One aspect that all recent machine learning frameworks (TensorFlow, MXNet, Caffe, Theano, Torch, and others) have in common is that they use the computational graph as a powerful abstraction. A graph is simply the best way to describe the models you create in a machine learning system. These computational graphs are made up of vertices (think neurons) for the compute elements, connected by edges (think synapses) that describe the communication paths between vertices.
Unlike a scalar CPU or a vector GPU, the Graphcore Intelligent Processing Unit (IPU) is a graph processor. A computer designed to manipulate graphs is the ideal target for the computational graph models created by machine learning frameworks. We've found that one of the easiest ways to describe this is to visualize it. Our software team has developed an amazing set of images of computational graphs mapped to our IPU. These images are striking because, once the complexity of the connections is revealed, they look so much like a human brain scan, and they are incredibly beautiful too.
|
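The vertices-and-edges picture above can be made concrete with a toy computational graph (pure Python, illustrative only, not Graphcore's or any framework's actual representation): each vertex holds an operation, edges carry values from producing vertices to consuming ones, and evaluating the graph means visiting vertices in dependency order.

```python
# A toy computational graph: vertices are compute elements, edges are the
# input links between them; evaluation walks the graph in dependency order.

class Node:
    def __init__(self, op, *inputs):
        self.op, self.inputs = op, inputs
    def evaluate(self):
        # evaluate upstream vertices first, then apply this vertex's op
        return self.op(*(n.evaluate() for n in self.inputs))

const = lambda v: Node(lambda: v)  # leaf vertex holding a value

# Graph for f(x, w, b) = x * w + b, a single "neuron" without activation.
x, w, b = const(2.0), const(3.0), const(1.0)
mul = Node(lambda a, c: a * c, x, w)
add = Node(lambda a, c: a + c, mul, b)
print(add.evaluate())  # 7.0
```

Real frameworks evaluate such graphs once per batch and cache shared sub-results; the recursion here is just the simplest way to show the dependency structure.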
Submitted Feb 11, 2017 (Edited Feb 11, 2017) to Scientific Data It has never been easier to build AI or machine learning-based systems than it is today. The ubiquity of cutting edge open-source tools such as TensorFlow, Torch, and Spark, coupled with the availability of massive amounts of computation power through AWS, Google Cloud, or other cloud providers, means that you can train cutting-edge models from your laptop over an afternoon coffee.
This week, a few machine learning experts and I were talking about all this. To make your life easier, we’ve collected an (opinionated) list of some open datasets that you can’t afford not to know about in the AI world.
|
Submitted Jan 30, 2017 to Science Blogs We recently introduced our report on probabilistic programming. The accompanying prototype allows you to explore the past and future of the New York residential real estate market. This post gives a feel for the content in our report by introducing the algorithms and technology that make probabilistic programming possible.
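The core move in probabilistic programming is to write a generative model as ordinary code and let an inference algorithm condition it on observations. A toy sketch (pure Python, enumeration inference over a tiny discrete model we made up; real systems use MCMC or variational inference):

```python
# Toy probabilistic program: a die is either fair or loaded; we observe a
# six and ask the posterior probability that the die was loaded.
# Inference here is exhaustive enumeration, exact for small discrete models.

def joint(loaded, roll):
    """Joint probability of the latent choice and the observed roll."""
    p_loaded = 0.5  # prior: loaded or fair, equally likely
    if loaded:
        p_roll = 1 / 2 if roll == 6 else 1 / 10  # loaded die favors six
    else:
        p_roll = 1 / 6                            # fair die
    return (p_loaded if loaded else 1 - p_loaded) * p_roll

# Condition on the observation roll == 6 and normalize over the latent:
evidence = sum(joint(loaded, 6) for loaded in (True, False))
posterior_loaded = joint(True, 6) / evidence
print(posterior_loaded)  # 0.75
```

The appeal is the separation of concerns: the model is the `joint` function, and the inference strategy can be swapped out without rewriting it.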
|
Submitted Jan 26, 2017 to Science Videos and Lectures The Thirtieth Annual Conference on Neural Information Processing Systems (NIPS) is a multi-track machine learning and computational neuroscience conference that includes invited talks, demonstrations, symposia and oral and poster presentations of refereed papers. Following the conference, there are workshops which provide a less formal setting.
|
Submitted Jan 18, 2017 to Scientific Software PyTorch is a Python package that provides tensor computation (like NumPy) with strong GPU acceleration and deep neural networks built on a tape-based autograd system. PyTorch also interoperates with NumPy (ndarray), SciPy, and Cython.
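The "tape-based autograd" idea can be sketched in a few lines (pure Python, illustrative only, not PyTorch's actual API): every operation records a backward step on a tape as it runs, and calling backward replays the tape in reverse to accumulate gradients.

```python
# Minimal tape-based reverse-mode autodiff on scalars.

TAPE = []  # ordered record of backward steps, appended during the forward pass

class Var:
    def __init__(self, value):
        self.value, self.grad = value, 0.0

    def __mul__(self, other):
        out = Var(self.value * other.value)
        def backward():  # product rule, pulled from out.grad
            self.grad += out.grad * other.value
            other.grad += out.grad * self.value
        TAPE.append(backward)
        return out

    def __add__(self, other):
        out = Var(self.value + other.value)
        def backward():  # sum rule
            self.grad += out.grad
            other.grad += out.grad
        TAPE.append(backward)
        return out

def backward(loss):
    loss.grad = 1.0
    for step in reversed(TAPE):  # replay the tape in reverse
        step()

# y = x*x + 3*x  =>  dy/dx = 2x + 3 = 7 at x = 2
x = Var(2.0)
y = x * x + Var(3.0) * x
backward(y)
print(y.value, x.grad)  # 10.0 7.0
```

PyTorch does the same thing over tensors, with the tape built dynamically on each forward pass, which is what makes control flow in models so natural there.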
|
Submitted Jan 11, 2017 to Science Videos and Lectures In this Web of Stories video, scientist Marvin Minsky shows off his neural network machine.
|
Submitted Jan 03, 2017 to Science Courses and Tutorials This is an arXiv report summarizing the tutorial presented by Ian Goodfellow at NIPS 2016 on generative adversarial networks (GANs). The tutorial describes: (1) why generative modeling is a topic worth studying, (2) how generative models work, and how GANs compare to other generative models, (3) the details of how GANs work, (4) research frontiers in GANs, and (5) state-of-the-art image models that combine GANs with other methods. Finally, the tutorial contains three exercises for readers to complete, along with their solutions.
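For reference, the formulation at the center of the tutorial is the two-player minimax game between the generator G and the discriminator D, from Goodfellow et al.'s original GAN paper:

```latex
\min_G \max_D \; V(D, G) =
  \mathbb{E}_{x \sim p_{\mathrm{data}}(x)}\big[\log D(x)\big]
  + \mathbb{E}_{z \sim p_z(z)}\big[\log\big(1 - D(G(z))\big)\big]
```

D is trained to score real samples high and generated samples low, while G is trained to make D misclassify its samples; at the game's equilibrium the generator's distribution matches the data distribution.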
|
Submitted Dec 19, 2016 to Science Research Groups » Computer Science The Stanford Artificial Intelligence Laboratory (SAIL) has been a center of excellence for Artificial Intelligence research, teaching, theory, and practice since its founding in 1962.
|