A Framework for Analysing Non-Convex Optimization

Previous posts by Rong and Ben showed that (noisy) gradient descent can converge to a local minimum of a non-convex function, and can do so in (large) polynomial time (Ge et al. '15). This post describes a simple framework that... Continue
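
The phenomenon is easy to see numerically. Below is a minimal sketch of gradient descent with added Gaussian noise escaping an exact saddle point; it is only an illustration, not the algorithm analyzed by Ge et al., and the objective and parameter values are made up.

```python
import numpy as np

def noisy_gradient_descent(grad, x0, lr=0.01, noise_std=0.1, steps=5000, seed=0):
    """Gradient descent where each step adds isotropic Gaussian noise."""
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    for _ in range(steps):
        x = x - lr * (grad(x) + noise_std * rng.standard_normal(x.shape))
    return x

# f(x, y) = (x^2 - 1)^2 + y^2 has a saddle at (0, 0) and minima at (+-1, 0).
# Plain gradient descent started exactly at the saddle never moves;
# the noise pushes the iterate off it, and it converges to a minimum.
grad = lambda z: np.array([4 * z[0] * (z[0] ** 2 - 1), 2 * z[1]])
print(noisy_gradient_descent(grad, [0.0, 0.0]))  # close to (1, 0) or (-1, 0)
```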

Markov Chains Through the Lens of Dynamical Systems: The Case of Evolution

In this post, we will see the main technical ideas in the analysis of the mixing time of evolutionary Markov chains introduced in a previous post. We start by introducing the notion of the expected... Continue
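
For reference, the quantity at the center of such analyses is the mixing time: the number of steps after which the chain's distribution is within total-variation distance $\varepsilon$ of its stationary distribution $\pi$, from every starting state,

$$t_{\mathrm{mix}}(\varepsilon) = \min\Big\{\, t : \max_{x} \big\| P^t(x, \cdot) - \pi \big\|_{TV} \le \varepsilon \,\Big\}.$$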

Saddles Again

Thanks to Rong for the very nice blog post describing critical points of nonconvex functions and how to avoid them. I’d like to follow up on his post to highlight a fact that is not... Continue

Escaping from Saddle Points

Convex functions are simple: they usually have only one local minimum. Non-convex functions can be much more complicated. In this post we will discuss various types of critical points that you might encounter when... Continue
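
As a quick companion to the post: a critical point (a point where the gradient vanishes) can be classified by the signs of the eigenvalues of the Hessian there. Here is a minimal sketch; the example function and values are illustrative.

```python
import numpy as np

def classify_critical_point(hessian):
    """Classify a critical point (zero gradient) by the signs of the
    Hessian's eigenvalues at that point."""
    eig = np.linalg.eigvalsh(hessian)
    if np.all(eig > 0):
        return "local minimum"
    if np.all(eig < 0):
        return "local maximum"
    if np.any(eig > 0) and np.any(eig < 0):
        return "saddle point"
    return "degenerate: second-order test is inconclusive"

# f(x, y) = x^2 - y^2 has a critical point at the origin with
# Hessian diag(2, -2), so the origin is a saddle point.
print(classify_critical_point(np.diag([2.0, -2.0])))  # "saddle point"
```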

Stability as a Foundation of Machine Learning

Central to machine learning is our ability to relate how a learning algorithm fares on a sample to its performance on unseen instances. This is called generalization. In this post, I will describe a purely... Continue
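
For context, a standard way to formalize this link is through uniform stability: if an algorithm $A$ is $\beta$-uniformly stable, meaning that replacing any single training example changes its loss on any point by at most $\beta$, then its expected generalization gap is at most $\beta$,

$$\Big| \mathbb{E}_{S}\big[ F(A(S)) - F_S(A(S)) \big] \Big| \le \beta,$$

where $F_S$ is the empirical risk on the sample $S$ and $F$ is the population risk.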

Evolution, Dynamical Systems and Markov Chains

In this post we present a high level introduction to evolution and to how we can use mathematical tools such as dynamical systems and Markov chains to model it. Questions about evolution then translate to... Continue

Word Embeddings: Explaining their properties

This is a followup to an earlier post about word embeddings, which capture the meaning of a word using a low-dimensional vector, and are ubiquitous in natural language processing. I will talk about my joint... Continue
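
As a reminder of the basic setup, similarity between words is typically measured by the cosine of the angle between their vectors. A minimal sketch, with made-up toy vectors rather than trained embeddings:

```python
import numpy as np

def cosine_similarity(u, v):
    """Cosine of the angle between two word vectors."""
    return np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v))

# Toy 3-dimensional "embeddings" with made-up values (real embeddings
# are learned from text and typically have a few hundred dimensions).
king = np.array([0.8, 0.6, 0.1])
queen = np.array([0.7, 0.7, 0.1])
apple = np.array([0.1, 0.0, 0.9])
print(cosine_similarity(king, queen))  # high: semantically related words
print(cosine_similarity(king, apple))  # low: unrelated words
```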

NIPS 2015 Workshop on Non-Convex Optimization

While convex analysis has received much attention from the machine learning community, theoretical analysis of non-convex optimization is still nascent. This blog, as well as the recent NIPS 2015 workshop on non-convex optimization, aims to... Continue

Nature, Dynamical Systems and Optimization

The language of dynamical systems is the preferred choice of scientists to model a wide variety of phenomena in nature. The reason is that, often, it is easy to locally observe or understand what happens... Continue

Tensor Methods in Machine Learning

Tensors are high-dimensional generalizations of matrices. In recent years, tensor decompositions have been used to design learning algorithms for estimating the parameters of latent variable models such as Hidden Markov Models, Mixtures of Gaussians, and Latent Dirichlet... Continue
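
For readers new to the topic, the basic object is easy to write down: a rank-1 third-order tensor is the outer product of three vectors, and decomposition methods aim to recover such components from their sum. A minimal numpy sketch with made-up values:

```python
import numpy as np

# A rank-1 third-order tensor is the outer product of three vectors:
# T[i, j, k] = a[i] * b[j] * c[k]. Tensor decomposition recovers such
# components (here just a single one, for illustration) from T.
a, b, c = np.array([1.0, 2.0]), np.array([3.0, 4.0]), np.array([5.0, 6.0])
T = np.einsum('i,j,k->ijk', a, b, c)
print(T.shape)     # (2, 2, 2)
print(T[1, 0, 1])  # 2 * 3 * 6 = 36
```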