ST443 Half Unit
Machine Learning and Data Mining

This information is for the 2017/18 session.

Teacher responsible

Dr Xinghao Qiao

Availability

This course is compulsory on the MSc in Data Science. This course is available on the MSc in Marketing, MSc in Quantitative Methods for Risk Management, MSc in Statistics, MSc in Statistics (Financial Statistics), MSc in Statistics (Financial Statistics) (Research), MSc in Statistics (Research), MSc in Statistics (Social Statistics) and MSc in Statistics (Social Statistics) (Research). This course is available with permission as an outside option to students on other programmes where regulations permit.

Pre-requisites

The course will be taught from a statistical perspective and students must have a basic knowledge of statistical methods for linear regression models.

Students are not permitted to take this course alongside Algorithmic Techniques for Data Mining (MG4E1).

Course content

Machine learning and data mining are emerging fields between statistics and computer science that focus on the statistical objectives of prediction, classification and clustering, and are particularly orientated to contexts where datasets are large, the so-called world of 'big data'. This course will start from the classical statistical methodology of linear regression and then build on this framework to provide an introduction to machine learning and data mining methods from a statistical perspective. Thus, machine learning will be conceived of as 'statistical learning', following the titles of the books in the essential reading list. The course will aim to cover modern non-linear methods such as spline methods, generalized additive models, decision trees, bagging, boosting, neural networks and support vector machines, as well as linear approaches such as the LASSO and linear discriminant analysis, and methods such as k-means clustering and nearest neighbours.

Teaching

20 hours of lectures and 10 hours of computer workshops in the MT.

The first part of the course reviews regression methods and covers linear and quadratic discriminant analysis, variable selection, nearest neighbours, shrinkage, dimension reduction methods and neural networks. The second part of the course introduces non-linear models and covers splines, generalized additive models, tree methods, bagging, random forests, support vector machines, principal components analysis, k-means and hierarchical clustering.

Week 6 will be used as a reading week.

Formative coursework

Students will be expected to produce 8 problem sets and 1 project in the MT.

The problem sets will consist of theory questions and data problems that require implementing the different methods covered in class using a computer package.

Indicative reading

James, G., Witten, D., Hastie, T. and Tibshirani, R. An Introduction to Statistical Learning. Springer, 2014. Available online at http://www-bcf.usc.edu/~gareth/ISL/  

Hastie, T., Tibshirani, R. and Friedman, J. The Elements of Statistical Learning: Data Mining, Inference and Prediction. 2nd Edition, Springer,  2009. Available online at http://statweb.stanford.edu/~tibs/ElemStatLearn/index.html 

Bishop, C.M. Pattern Recognition and Machine Learning. Springer, 2006.

Assessment

Exam (70%, duration: 2 hours) in the main exam period.
Project (25%) in Week 11.
Coursework (5%) in the MT.

Key facts

Department: Statistics

Total students 2016/17: 28

Average class size 2016/17: 29

Controlled access 2016/17: No

Lecture capture used 2016/17: Yes (LT)

Value: Half Unit

Personal development skills

  • Self-management
  • Team working
  • Problem solving
  • Application of information skills
  • Communication
  • Application of numeracy skills
  • Specialist skills