ELEN0062 - Introduction to machine learning
The goal is to turn data into information, and information into insight.
| Event | Date | Note |
|---|---|---|
| Project 1 | 29 Sep. 2016 | |
| Deadline | 25 Oct. 2016 | Don't forget to submit your project 1 on the submission platform |
| Project 2 | 27 Oct. 2016 | |
| Deadline | 23 Nov. 2016 | Don't forget to submit your project 2 on the submission platform |
| Project 3 | 24 Nov. 2016 | |
| Deadline | 01 Dec. 2016 | The toy example must have run on the Kaggle platform |
| Deadline | 17 Dec. 2016 | End of the challenge |
| Deadline | 19 Dec. 2016 | Don't forget to submit your report and code for the third project on the submission platform |
Here is a short list of supplementary material related to the field of machine learning. I tend to update this section when I come across interesting material, but if you feel you need more on some topic, do not hesitate to ask!
Machine learning in general
There is a lot of accessible online material in the domain of machine learning:
- Andrew Ng's online course (Stanford): the most popular online course on ML. Archived from Coursera.
- Pedro Domingos' online course (Washington).
- Reza Shadmehr (Baltimore) and his slides.
- Jeffrey Ullman's course on mining massive datasets (Stanford), based on his reference book. Not everything is related to the course, though.
Classification and regression trees
Artificial neural networks
There have been three waves of hype about ANNs. The first concerned perceptrons in the 1960s, until it was discovered that they could not solve the XOR problem. The second started with the discovery of backpropagation, but it soon became clear that large and/or deep neural nets were very hard to train. We are in the midst of the third wave right now with `deep learning`: neural nets with several (many) hidden layers.
- Graham Taylor: An Overview of Deep Learning and Its Challenges for Technical Computing (2014)
- Geoffrey Hinton: Introduction to Deep Learning and Deep Belief Nets (2012)
- Geoffrey Hinton: The Next Generation of Neural Networks (2007)
- Leon Bottou: Multilayer Networks series
- A simplified version of Backprop illustrated.
- An illustrated taxonomy of learning networks.
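The XOR limitation mentioned above is easy to reproduce. Here is a minimal, dependency-free sketch (toy data and training settings chosen for illustration): the classic perceptron learning rule converges on AND, but no single linear threshold unit can classify all four XOR cases, so training can never reach 100% accuracy there.

```python
# A single linear threshold unit trained with the perceptron rule.
# It solves AND (linearly separable) but is provably stuck on XOR.

def train_perceptron(samples, epochs=100, lr=0.1):
    """Train y = 1 if w.x + b > 0 else 0 with the perceptron rule."""
    w, b = [0.0, 0.0], 0.0
    for _ in range(epochs):
        for x, target in samples:
            y = 1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0
            err = target - y
            w[0] += lr * err * x[0]
            w[1] += lr * err * x[1]
            b += lr * err
    return w, b

def accuracy(w, b, samples):
    return sum(
        (1 if w[0] * x[0] + w[1] * x[1] + b > 0 else 0) == t
        for x, t in samples
    ) / len(samples)

inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
AND = [(x, int(x[0] and x[1])) for x in inputs]
XOR = [(x, x[0] ^ x[1]) for x in inputs]

w, b = train_perceptron(AND)
print("AND accuracy:", accuracy(w, b, AND))  # converges to 1.0
w, b = train_perceptron(XOR)
print("XOR accuracy:", accuracy(w, b, XOR))  # at most 0.75, forever
```

Adding a hidden layer (what backpropagation made trainable) removes this limitation, since the hidden units can carve the plane into non-linearly-separable regions.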
Learning theory (Bias/Variance...)
Model assessment and selection
Support Vector Machines
- Visualizing the kernel trick
- A couple of videos about constraint optimization (by Khan Academy):
Misc.
There are many YouTube channels about ML. Here are a few:
- Sentdex: A bit of everything
- Derek Kane: A bit of everything
- Welch Labs: A few videos about neural nets
- Two Minute Papers: Many of the covered papers relate to (applications of) ML
Third project: the challenge
The third project is organized in the form of a challenge, where you will compete against each other. This year, the challenge is about predicting the rating that a given user would give to a particular film. All the relevant information can be found on the Kaggle platform, which will host the challenge.
The project is divided into four parts. All the deadlines can be found in the schedule section above.
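Before moving to more sophisticated models, a classic baseline for this kind of rating-prediction task is to predict the global mean rating corrected by per-user and per-film offsets. The sketch below is purely illustrative and assumes the data comes as (user, film, rating) triples; the actual data format is the one defined on the Kaggle platform.

```python
# Hypothetical bias baseline for rating prediction:
#   prediction(u, f) = global mean + user offset + film offset
from collections import defaultdict

def fit_bias_model(ratings):
    """ratings: list of (user, film, rating) triples."""
    mu = sum(r for _, _, r in ratings) / len(ratings)
    user_dev = defaultdict(list)
    film_dev = defaultdict(list)
    for user, film, r in ratings:
        user_dev[user].append(r - mu)
        film_dev[film].append(r - mu)
    user_bias = {u: sum(d) / len(d) for u, d in user_dev.items()}
    film_bias = {f: sum(d) / len(d) for f, d in film_dev.items()}

    def predict(user, film):
        # Unseen users or films fall back to the global mean.
        return mu + user_bias.get(user, 0.0) + film_bias.get(film, 0.0)

    return predict

# Toy data, made up for illustration.
toy = [("u1", "m1", 5), ("u1", "m2", 3), ("u2", "m1", 4)]
predict = fit_bias_model(toy)
print(predict("u2", "m2"))  # 3.0 = mean 4.0 + user offset 0.0 + film offset -1.0
```

Such a baseline is cheap to compute and gives a reference score that any real model should beat.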
Setup for the project
- Create an account on the Kaggle platform
- Form groups of two. Concatenate your sXXXXXX IDs as group names.
- Test the toy example
- Propose the best model you can
- Submit an archive on the submission platform in `tar.gz` format, containing a report that describes the different steps of your approach and your main results, along with your source code. Use the same IDs as for the Kaggle platform. The report must contain the following information:
- A detailed description of all the approaches that you have used to win the challenge. The Kaggle winning model guideline should be followed for each approach.
- A detailed description of your hyper-parameter optimization approach and your model validation technique.
- A table summarizing the performance of your different approaches, containing for each approach at least its name, the validation score, and the scores on the public and private leaderboards.
- Any complementary information or figures that you want to mention.
- Present your approach succinctly to the rest of the class. (More information coming soon.)
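For the hyper-parameter optimization and model validation the report asks you to document, a common scheme is grid search with k-fold cross-validation. Here is a dependency-free sketch; the `score_fn` that trains and scores your real model is hypothetical and must be supplied by you.

```python
# Grid search over hyper-parameter values with k-fold cross-validation.
# score_fn(param, train_data, valid_data) -> validation score (higher is better).

def kfold_indices(n, k):
    """Yield (train_idx, valid_idx) pairs for k contiguous folds of n items."""
    fold_sizes = [n // k + (1 if i < n % k else 0) for i in range(k)]
    start = 0
    for size in fold_sizes:
        valid = list(range(start, start + size))
        train = list(range(0, start)) + list(range(start + size, n))
        yield train, valid
        start += size

def grid_search(data, param_grid, score_fn, k=5):
    """Return (best_param, best_mean_score) over the grid."""
    best_param, best_score = None, float("-inf")
    for param in param_grid:
        scores = [
            score_fn(param,
                     [data[i] for i in train],
                     [data[i] for i in valid])
            for train, valid in kfold_indices(len(data), k)
        ]
        mean = sum(scores) / len(scores)
        if mean > best_score:
            best_param, best_score = param, mean
    return best_param, best_score

# Toy usage with a made-up score function that peaks at param = 2.
best, score = grid_search(list(range(10)), [0, 1, 2, 3],
                          lambda p, train, valid: -(p - 2) ** 2)
print(best)  # 2
```

Reporting the mean validation score from such a loop next to your public and private leaderboard scores, as the table above requires, also makes it easy to spot overfitting to the public leaderboard.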