This course provides theoretical foundations and practical experience in statistical learning methods, also known as machine learning. Both supervised and unsupervised learning methods will be covered. Specific topics include linear models for regression and classification, decision trees, neural networks, support vector machines, kernel methods, Gaussian processes, dimensionality reduction, density estimation, and clustering. Prerequisites: probability and statistics or equivalent, calculus, linear algebra, and proficiency in at least one high-level programming language.
Prerequisite: MATH 530/630 Probability & Statistical Inference for Scientists and Engineers
Many types of data occur sequentially. A conversation between two people can be thought of as a sequence of speech turns; a written sentence can be thought of as a sequence of words and punctuation symbols; a spoken sentence can be thought of as a sequence of phonemes. Time-series data (weather observations, stock tickers, etc.) are inherently sequential, and, of course, molecular biology is replete with examples of sequential data (sequences of nucleotide bases in DNA and RNA, amino acids in protein strands, etc.).
This course, Analyzing Sequences, will concern itself with methods for analyzing these kinds of data, with a particular emphasis on sequences derived from linguistic data. Topics will include:
- Applications of finite state automata, particularly to language modeling;
- Discrete and continuous Hidden Markov Models, in the context of tagging;
- Gaussian Mixture Models & the Expectation-Maximization algorithm;
- Conditional Random Fields;
- Suffix Arrays & Trees/Tries;
- Sequence alignment & matching.
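To give a flavor of the last two topics, a suffix array for a short string can be built in a few lines. This is a minimal, naive sketch for illustration; the course will cover more efficient constructions and the related tree/trie structures:

```python
def suffix_array(s):
    # Naive O(n^2 log n) construction: sort the starting indices of all
    # suffixes of s by the lexicographic order of the suffixes themselves.
    # Practical algorithms (e.g., SA-IS) do this in linear time, but the
    # resulting data structure is identical.
    return sorted(range(len(s)), key=lambda i: s[i:])

print(suffix_array("banana"))  # → [5, 3, 1, 0, 4, 2]
# i.e., suffixes in sorted order: "a", "ana", "anana", "banana", "na", "nana"
```

Once built, a suffix array supports fast substring search by binary search over the sorted suffixes, which is one reason these structures appear alongside sequence alignment and matching.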
While we will be primarily focused on written language, we will also occasionally explore problems from other areas (such as speech recognition).
This course covers a number of topics in machine learning. Topics include neural networks and multi-layer perceptrons, sampling techniques such as Gibbs sampling and Metropolis-Hastings, learning energy-based models such as restricted Boltzmann machines (RBMs), an overview of optimization techniques, and sparse autoencoders. The final topic is deep neural networks (DNNs), which have recently been demonstrated to outperform other machine learning techniques in a variety of tasks ranging from speech recognition and natural language processing to computer vision. In fact, the earlier topics were purposely chosen to cover all of the background material needed for DNNs. Students will learn how to effectively train DNNs through unsupervised and supervised techniques, enabling them to employ DNNs in their own research problems. The course will also draw on applications in speech and language processing. Recommended background for this course includes programming proficiency in Python or MATLAB and working knowledge of calculus, linear algebra, and probability theory.
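As a small taste of the sampling material, the random-walk Metropolis-Hastings algorithm can be sketched in a few lines of plain Python. This is a hedged illustration, not course material: the target density (a standard normal, given by its unnormalized log-density) and the proposal scale are arbitrary choices for the example.

```python
import math
import random

def metropolis_hastings(log_p, x0, n_samples, step=1.0):
    """Random-walk Metropolis-Hastings over a 1-D target.

    Proposes x' ~ Normal(x, step^2) (symmetric, so the proposal terms
    cancel) and accepts with probability min(1, p(x')/p(x)), computed
    in log space for numerical stability.
    """
    x = x0
    samples = []
    for _ in range(n_samples):
        proposal = x + random.gauss(0.0, step)
        # Accept if log(u) < log p(x') - log p(x) for u ~ Uniform(0, 1).
        if math.log(random.random()) < log_p(proposal) - log_p(x):
            x = proposal
        samples.append(x)
    return samples

random.seed(0)  # fixed seed so the run is reproducible
# Target: standard normal, log p(x) = -x^2/2 up to an additive constant.
samples = metropolis_hastings(lambda x: -0.5 * x * x, 0.0, 20000)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

After enough iterations the sample mean and variance approach the target's 0 and 1; the same accept/reject skeleton underlies the more elaborate samplers covered in the course.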
Prerequisites: MATH 530/630 Probability & Statistical Inference for Scientists and Engineers, and CS/EE 559/659 Machine Learning.