
Thursday, March 12, 2015

Ilya Sutskever reveals net training secrets on Yisong Yue's blog!

A big part of the net is linking - pointing people at something interesting. And so I need to tell you that machine-learning professor Yisong Yue of Caltech has done something clever: he got Ilya Sutskever to write a guest post about Deep Learning on his blog.

One of Ilya's many claims to fame is his use of training recurrent networks to generate text fragments that resemble Wikipedia:
The meaning of life is the tradition of the ancient human reproduction: it is less favorable to the good boy for when to remove her bigger. In the show’s agreement unanimously resurfaced. The wild pasteured with consistent street forests were incorporated by the 15th century BE. In 1996 the primary rapford undergoes an effort that the reserve conditioning, written into Jewish cities, sleepers to incorporate the .St Eurasia that activates the population. María Nationale, Kelli, Zedlat-Dukastoe, Florendon, Ptu’s thought is. To adapt in most parts of North America, the dynamic fairy Dan please believes, the free speech are much related to the 
or machine learning papers, which I guess is the geek equivalent of drinking your own urine to survive. 
Recurrent network with the Stiefel information for logistic regression methods Along with either of the algorithms previously (two or more skewprecision) is more similar to the model with the same average mismatched graph. Though this task is to be studied under the reward transform, such as (c) and (C) from the training set, based on target activities for articles a ? 2(6) and (4.3).
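To give a flavour of how such a net spits out text, here is a bare-bones numpy sketch of character-level generation: sample a character, feed it back in, repeat. The weights here are random and untrained (so the output is pure gibberish), and the vocabulary and sizes are made up for illustration - Ilya's actual models were trained on huge corpora.

```python
import numpy as np

# Sketch of character-level generation with a vanilla recurrent net:
# at each step, feed the previous character in, update the hidden state,
# and sample the next character from the softmax output distribution.
# Weights are random (untrained), so the output is gibberish; a trained
# net produces the almost-English samples quoted above.

rng = np.random.default_rng(0)
vocab = list("abcdefghijklmnopqrstuvwxyz .")
V, H = len(vocab), 32

Wxh = rng.normal(0, 0.1, (H, V))   # input -> hidden
Whh = rng.normal(0, 0.1, (H, H))   # hidden -> hidden (the recurrence)
Why = rng.normal(0, 0.1, (V, H))   # hidden -> output logits

def sample(n_chars, seed_ix=0):
    h = np.zeros(H)
    x = np.zeros(V); x[seed_ix] = 1.0
    out = []
    for _ in range(n_chars):
        h = np.tanh(Wxh @ x + Whh @ h)       # recurrent state update
        logits = Why @ h
        p = np.exp(logits - logits.max())
        p /= p.sum()                          # softmax over characters
        ix = rng.choice(V, p=p)
        out.append(vocab[ix])
        x = np.zeros(V); x[ix] = 1.0          # feed the sample back in
    return "".join(out)

print(sample(60))
```

The key point is the feedback loop: each sampled character becomes the next input, so the hidden state carries context forward.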

The method relies on Martens and Sutskever's innovative mathematical technique of Hessian-free optimisation, which makes recurrent net training tractable.
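The core trick, roughly, is that you never form the Hessian explicitly: you only need Hessian-vector products, which can be approximated by a finite difference of gradients, and conjugate gradient then finds the update direction using only those products. Here is a toy numpy sketch of that idea on a small quadratic - the objective, dimensions, and finite-difference scheme are my illustrative choices, not Martens and Sutskever's actual implementation (which works on real nets and uses more refined machinery).

```python
import numpy as np

# "Hessian-free" idea in miniature: solve H d = -g with conjugate gradient,
# where H @ v is computed by differencing gradients instead of ever
# building H. Demonstrated on a toy quadratic with a known minimum.

def grad(f, w, eps=1e-5):
    # Central-difference gradient (illustrative; real code uses backprop).
    g = np.zeros_like(w)
    for i in range(len(w)):
        e = np.zeros_like(w); e[i] = eps
        g[i] = (f(w + e) - f(w - e)) / (2 * eps)
    return g

def hess_vec(f, w, v, eps=1e-4):
    # H v ~= (grad(w + eps v) - grad(w - eps v)) / (2 eps): no Hessian matrix.
    return (grad(f, w + eps * v) - grad(f, w - eps * v)) / (2 * eps)

def cg_solve(Av, b, iters=50, tol=1e-10):
    # Plain conjugate gradient: solve A x = b given only the map x -> A x.
    x = np.zeros_like(b)
    r = b - Av(x); p = r.copy()
    for _ in range(iters):
        rr = r @ r
        if rr < tol:
            break
        Ap = Av(p)
        alpha = rr / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        p = r + (r @ r / rr) * p
    return x

# Toy objective f(w) = 0.5 w^T A w - b^T w, whose minimum solves A w = b.
A = np.array([[3.0, 1.0], [1.0, 2.0]])
b = np.array([1.0, 1.0])
f = lambda w: 0.5 * w @ A @ w - b @ w

w = np.array([5.0, -3.0])
g = grad(f, w)
d = cg_solve(lambda v: hess_vec(f, w, v), -g)   # Newton step via CG
w = w + d
print(w)  # close to the analytic minimum np.linalg.solve(A, b)
```

On a quadratic a single Newton step lands at the minimum; on a real net you take damped steps like this repeatedly.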

Now, many of us would like to know what the difference is between the old neural net tech and the new deep-neural net tech, what can reasonably be expected, and so on. Well, you will find all of this in Ilya's guest post, together with some interesting comments about SVMs yadda yadda by Yoshua Bengio. But above all you will find invaluable heuristic advice about practical stuff like setting learning rates, initialization, minibatches, dropout, and other details of training methods. In other words, a discussion of cooking methods rather than just a dry recipe.
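To show where those knobs actually sit, here is a bare-bones numpy training loop for a one-hidden-layer net on a toy regression task, with the learning rate, minibatch size, small random initialization, and dropout all visible. Every number in it (the 0.01 init scale, lr of 0.1, batch of 32, 0.5 dropout) is a common default I picked for illustration - for the real recommendations, read Ilya's post.

```python
import numpy as np

# Toy training loop making the usual knobs explicit: learning rate,
# minibatch sampling, small random init, and (inverted) dropout.

rng = np.random.default_rng(0)
X = rng.normal(size=(256, 4))
y = (X @ np.array([1.0, -2.0, 0.5, 0.0]))[:, None]   # toy linear target

H = 16
W1 = rng.normal(0, 0.01, (4, H))     # small random initialization
b1 = np.zeros(H)
W2 = rng.normal(0, 0.01, (H, 1))
b2 = np.zeros(1)

lr, batch, p_drop = 0.1, 32, 0.5     # learning rate, minibatch size, dropout

for step in range(2000):
    idx = rng.choice(len(X), batch, replace=False)   # sample a minibatch
    xb, yb = X[idx], y[idx]

    h = np.tanh(xb @ W1 + b1)
    mask = (rng.random(h.shape) > p_drop) / (1 - p_drop)  # inverted dropout
    hd = h * mask
    pred = hd @ W2 + b2

    err = pred - yb                   # mean-squared-error loss

    # Backprop through the two layers and the dropout mask.
    dpred = 2 * err / batch
    dW2 = hd.T @ dpred; db2 = dpred.sum(0)
    dhd = dpred @ W2.T
    dh = dhd * mask * (1 - h ** 2)    # tanh derivative
    dW1 = xb.T @ dh; db1 = dh.sum(0)

    for P, dP in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        P -= lr * dP                  # plain SGD update

# At test time dropout is off: use the full hidden layer.
test_pred = np.tanh(X @ W1 + b1) @ W2 + b2
print(((test_pred - y) ** 2).mean())
```

The inverted-dropout scaling (dividing by 1 - p_drop during training) is what lets you use the network unchanged at test time.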

Stop reading my dumb post and just go there already!
