CS 584
Machine Learning (Spring 2016)

Syllabus

Computers are increasingly powerful, enabling us to solve increasingly difficult problems and opening a path to a multitude of new applications. Machine learning provides us with tools to address such complex problems. For example, we can use computers to make predictions about future behavior (e.g. predict the values of stocks or predict dangerous traffic conditions while driving), diagnose conditions (e.g. diagnose a disease in humans, detect abnormal conditions in computers, or detect signs of credit fraud), estimate unknown and constantly changing model parameters (in a vast range of applications in virtually all areas of computer science), explore large amounts of data in search of critical information (e.g. explore legal data, or analyze patterns in social networks), recognize objects in images (e.g. identify faces and fingerprints, or recognize handwritten or machine-printed text), understand video sequences (e.g. the rigid motion of cars, or articulated human motion), understand audio signals (e.g. voice recognition), interact with humans through more natural means (e.g. identify human emotion and understand implicit/explicit human intention by observation), and control autonomous robots (e.g. avoid obstacles or control articulated motion). The possibilities are endless.

The examples above illustrate difficult problems for which we often have no simple models or direct solutions. Moreover, partial models often result in observations and behavior that may appear random. Machine learning is concerned with the solution of such difficult problems. Instead of employing the usual approach of specifying a model explicitly (e.g. by programming a known sequence of operations), machine learning employs algorithms that learn the models directly from the data. Learning models from data allows us to address complex problems such as the ones above. As such, machine learning algorithms are important in virtually all areas of computer science. Topics covered by CS 584 this semester include: an overview of machine learning techniques, parametric regression, supervised learning, neural networks, support vector machines, computational learning theory, unsupervised learning, dimensionality reduction, and graphical and sequential models. The course assumes some programming experience and a basic knowledge of calculus, statistics, and linear algebra. For further details please refer to the course website or contact the course instructor.

Instructor

Gady Agam  
SB 237e, x7-583
Office hours:
Tuesday, Thursday
6:30-7:30pm



TAs

Xi Zhang and Di Ma 
SB-115, x7-5705
Office hours:
Tuesday, Thursday
2:00-3:30pm
Monday, Wednesday
2:00-3:30pm

Sections

CS-584-01:  (WH-113)

CS-584-02:  (Internet)

CS-584-03:  (Internet)


Class hours:
Tuesday, Thursday
5:00-6:15pm


Syllabus



Course outline


What to expect from this course


Machine learning can be covered at different levels. The focus of this course is understanding the algorithms and techniques used in machine learning. Students in the course are expected to write computer programs in Python implementing the different techniques taught in the course. The course requires mathematical background and some programming experience. It is not intended to teach the use of any specific application software.



Objectives


  1. Introduce the fundamental problems of machine learning.
  2. Provide understanding of techniques, mathematical concepts, and algorithms used in machine learning to facilitate further study in this area.
  3. Provide understanding of the limitations of various machine learning algorithms and the way to evaluate performance of machine learning algorithms.
  4. Provide pointers into the literature, exercised through a project based on a literature search and one or more research papers.
  5. Practice software implementation of different concepts and algorithms covered in the course.



Overview


  1. Introduction: overview of machine learning, related areas, applications, software tools, course objectives.
  2. Parametric regression: linear regression, polynomial regression, locally weighted regression, numerical optimization, gradient descent, kernel methods.
  3. Generative learning: Gaussian parameter estimation, maximum likelihood estimation, MAP estimation, Bayesian estimation, bias and variance of estimators, missing and noisy features, nonparametric density estimation, Gaussian discriminant analysis, naive Bayes.
  4. Discriminative learning: linear discrimination, logistic regression, logit and logistic functions, generalized linear models, softmax regression.
  5. Neural networks: the perceptron algorithm, multilayer perceptrons, backpropagation, nonlinear regression, multiclass discrimination, training procedures, localized network structure, dimensionality reduction interpretation.
  6. Support vector machines: functional and geometric margins, optimum margin classifier, constrained optimization, Lagrange multipliers, primal/dual problems, KKT conditions, dual of the optimum margin classifier, soft margins, kernels, quadratic programming, SMO algorithm.
  7. Graphical and sequential models: Bayesian networks, conditional independence, Markov random fields, inference in graphical models, belief propagation, Markov models, hidden Markov models, decoding states from observations, learning HMM parameters.
  8. Unsupervised learning: K-means clustering, expectation maximization, Gaussian mixture density estimation, mixture of naive Bayes, model selection.
  9. Dimensionality reduction: feature selection, principal component analysis, linear discriminant analysis, factor analysis, independent component analysis, multidimensional scaling, manifold learning.
  10. Final project: students present selected topics and develop software implementation of related techniques based on the review of relevant literature. The work should be summarized in a concluding report which should include simulation results. A list of possible topics will be available prior to the project selection due date.
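To give a concrete sense of the Python programming the course involves, the sketch below fits a line by batch gradient descent, the optimization method listed under parametric regression above. This is a minimal illustration, not course-provided code; the function name, learning rate, iteration count, and synthetic data are all hypothetical choices.

```python
# Minimal sketch: linear regression fit by batch gradient descent.
# All names and parameter values here are illustrative, not course code.

def fit_line(xs, ys, lr=0.1, iters=2000):
    """Minimize the mean squared error of the model y ~ w0 + w1*x."""
    w0, w1 = 0.0, 0.0
    n = len(xs)
    for _ in range(iters):
        # Gradient of J(w) = (1/2n) * sum((w0 + w1*x - y)^2)
        g0 = sum(w0 + w1 * x - y for x, y in zip(xs, ys)) / n
        g1 = sum((w0 + w1 * x - y) * x for x, y in zip(xs, ys)) / n
        w0 -= lr * g0  # step opposite the gradient
        w1 -= lr * g1
    return w0, w1

# Noiseless synthetic data on the line y = 2 + 3x
xs = [i / 50 - 1 for i in range(100)]
ys = [2 + 3 * x for x in xs]
w0, w1 = fit_line(xs, ys)  # converges toward w0 = 2, w1 = 3
```

Assignments typically build on this kind of from-scratch implementation rather than calling a library fitting routine.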



Grading





component      description                                          weight

participation  up to 4 unjustified missed classes for full credit   5%
assignment 1   parametric regression                                5%
assignment 2   generative learning                                  5%
assignment 3   discriminative learning                              5%
assignment 4   support vector machines                              5%
assignment 5   unsupervised learning                                5%
project        final project                                        20%
midterm exam   open notes (4 paper pages)                           10%
final exam     open notes (8 paper pages)                           40%
total                                                               100%





  1. There is an additional mandatory assignment (assignment 0) which does not carry any credit. There is a 5% penalty for not submitting this assignment.
  2. A certain percentage of the students may be invited to discuss their assignments.
  3. Late days: there is a total of 4 "late days" for all the assignments. After that, each late day costs 10%. Late days do not include weekends and university holidays. The final project cannot be late. Assignments cannot be submitted after classes end.
  4. Each member of this course bears responsibility for maintaining the highest standards of academic integrity. All breaches of academic integrity must be reported immediately. Copying programs from any source (e.g. other students or the web) is considered a serious breach of academic integrity.



Books


  1. The Elements of Statistical Learning, T. Hastie, R. Tibshirani, and J. Friedman, Springer, 2001.
  2. Introduction to Machine Learning, E. Alpaydin, MIT Press, 2010.
  3. Pattern Recognition and Machine Learning, C. Bishop, Springer, 2006.
  4. Machine Learning: A Probabilistic Perspective, K. Murphy, MIT Press, 2012.
  5. Pattern Classification, R. Duda, P. Hart, and D. Stork, Wiley-Interscience, 2000.
  6. Machine Learning, T. Mitchell, McGraw-Hill, 1997.



Tentative schedule


class  date   topic                               assignment

1      01/12  Introduction                        AS0
2      01/14  Regression
3      01/19
4      01/21  Kernel methods
5      01/26                                      AS1
6      01/28  Generative learning
7      02/02
8      02/04  Discriminative learning
9      02/09
10     02/11  Neural networks                     AS2
11     02/16  No class
12     02/18  No class
13     02/23
14     02/25                                      AS3
15     03/01  Midterm exam
16     03/03  Support vector machines             PROJ
17     03/08
18     03/10
19     03/15  No class (Spring break)
20     03/17  No class (Spring break)
21     03/22  Graphical models                    AS4
22     03/24
23     03/29
24     03/31  Unsupervised learning
25     04/05
26     04/07                                      AS5
27     04/12  Dimensionality reduction
28     04/14
29     04/19  Project presentations
30     04/21
31     04/26
32     04/28
33     05/03  Final exam: 5:00pm-7:00pm (WH-113)


Lectures


Videos of lectures

Videos of lectures are available through Blackboard.




Reading materials

Topic                             Reading
Introduction to machine learning  Ch. 1-2
Parametric regression             Ch. 3
Kernel methods                    Ch. 6
Generative learning               Ch. 7-8
Discriminative learning           Ch. 4
Neural networks                   Ch. 11
Support vector machines           Ch. 12
Graphical models                  Ch. 17
Unsupervised learning             Ch. 14
Dimensionality reduction          Ch. 14

Assignments


For additional instructions and hints, check the FAQ page.
Please submit assignments via Blackboard.


Assignment    Description                          Data        Weight  Due date
assignment 1  Parametric regression                data files  5%
assignment 2  Generative learning                  data files  5%
assignment 3  Discriminative learning              data files  5%
assignment 4  Support vector machines              data files  5%
assignment 5  Unsupervised learning                data files  5%
project       Presentation (10%) + Project (20%)   N/A         30%     (proposal) (final submission)



Additional assignments:



Additional information:


FAQ

