Machine Learning Blueprint Newsletter, Edition 3, 10/1/17

This newsletter is written and curated by Mike Tamir and Mike Mansour. 

October 1, 2017

Hi all,
It’s Mike Tamir - I’ve teamed up with Blueprint News to launch the Machine Learning Blueprint, an ML newsletter. We’ll be publishing weekly (third beta edition below), covering general news, tutorials, research, and more, and how they apply to the realm of Machine Learning and Data Science.
The stories are meant to be quick (around 400 characters), with commentary when necessary. I’m sending this week’s edition to you because I’m looking for feedback on this issue: whether a story is too short, not relevant, something you already saw, etc.
Please reply to the email address sending the newsletter (ml@theblprint.com) with any thoughts you have. If you’d like to unsubscribe from further emails, please reply “unsubscribe” to this email. Thanks to everyone in advance!
Links to stories are in the titles.
Best,
Mike

Machine Learning News
New Theory Cracks Open the Black Box of Deep Learning?
One of the most common criticisms of deep learning is that what’s going on in the hidden layers can seem opaque compared to more prosaic algorithms like decision trees. New work from Naftali Tishby has been touted as an answer to this criticism.
COMMENTS
This work has seen a lot of attention this week in several articles. To be clear, headlines notwithstanding, Tishby’s results do not solve the “black box” opacity criticism. At first read, the thesis that neural nets succeed by constructing an “information bottleneck” as the data passes through the network seems almost intuitive. Deep learning, in a very direct way, is just representation learning. Instead of feature engineering by hand, the hidden layers let the data tell them how to transform and manipulate the raw inputs for optimal performance. Of course, the devil is in the details. Tishby’s theorem demands a solid information-theoretic background, but the case he makes is as fascinating as it is complex, and deep learning research could benefit from further investigation.
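Tishby’s analysis tracks two mutual-information quantities per hidden layer, I(X;T) and I(T;Y), typically approximated by discretizing the layer’s activations. The sketch below illustrates the binning-style estimator behind such plots; the function name, bin count, and discretization scheme are our illustrative choices, not Tishby’s actual code.

```python
import numpy as np

def mutual_information(x_labels, t_activations, n_bins=30):
    """Estimate I(X; T) by discretizing a layer's activations into bins.

    x_labels: 1-D integer array of input identities (or class labels).
    t_activations: (n_samples, n_units) array of one hidden layer's outputs.
    """
    # Discretize each unit's activation into equal-width bins, then treat
    # the whole binned vector as one discrete symbol per sample.
    edges = np.linspace(t_activations.min(), t_activations.max(), n_bins + 1)
    binned = np.digitize(t_activations, edges[1:-1])
    _, t_symbols = np.unique(binned, axis=0, return_inverse=True)

    # Empirical joint distribution p(x, t) from counts.
    joint = np.zeros((x_labels.max() + 1, t_symbols.max() + 1))
    for x, t in zip(x_labels, t_symbols):
        joint[x, t] += 1
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)
    pt = joint.sum(axis=0, keepdims=True)

    # I(X;T) = sum p(x,t) * log2( p(x,t) / (p(x) p(t)) ) over nonzero cells.
    nz = joint > 0
    return float((joint[nz] * np.log2(joint[nz] / (px @ pt)[nz])).sum())
```

If a layer’s activations perfectly separate two equally likely inputs, this estimator returns 1 bit; as the layer “compresses” away input detail, the estimate shrinks, which is exactly the quantity the information-bottleneck plots visualize.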
Will Matterport3D be the New ImageNet in the Future of “AI” Deep Learning Research?
ImageNet has been the benchmark for testing image recognition for the better part of a decade. The Matterport data set has the potential to play this role for the decade to come.
COMMENTS
With attention to self-driving continuing to accelerate, interest in deep learning models that can recognize depth is growing. Benchmark data sets are a useful asset when working on machine learning models because they help us know whether our work represents a bona fide advance in the field. As with ImageNet or the Netflix Prize dataset, creating a public challenge with a sizable bounty could solidify Matterport’s dataset as one of these benchmark sets.

Learning Machine Learning
Transitioning from Academic Machine Learning to Industry
A quick rundown of some of the essential skills and preparation that recent Data Science or Machine Learning grads will want to have as they enter the industry.
Comprehensive Collection of Cheat Sheets for Machine Learning and Data Science Engineers
DataCamp has put together a comprehensive set of Machine Learning tools cheat sheets in a single place. This article is an essential bookmark for any practitioner to keep handy.
Interesting Research
The Loss Surface of Deep and Wide Neural Networks
This paper reports new findings supporting the hypothesis that local minima in the error surface of a deep neural net tend to cluster together. It further presents results showing that, in specialized cases, such minima also tend to be equivalent in minimizing the error rate (meaning convergence at one local minimum may be as good as the next).
Automating Design of NeuralNet Architecture with Evolutionary Algorithms
A trend has been building in the research community toward finding ways to automate not just feature representation (primarily via deep layers in neural nets), but also how to construct and apply machine learning algorithms themselves. This paper leverages evolutionary strategies specifically for convolutional network architectures.
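The evolutionary loop behind this kind of architecture search is simple at heart: mutate candidate architectures, keep the fittest, repeat. The toy sketch below illustrates that loop; the genome encoding (a list of layer widths), the mutation operators, and especially the stand-in fitness function are our illustrative assumptions, not the paper’s method, where fitness would come from actually training each candidate network and measuring validation accuracy.

```python
import random

def fitness(genome):
    # Stand-in fitness: prefers ~3 layers of width ~64 (purely illustrative).
    # In real architecture search this would train the network and return
    # its validation accuracy.
    return -abs(len(genome) - 3) - sum(abs(w - 64) for w in genome) / 100.0

def mutate(genome):
    # Randomly resize a layer, insert a new layer, or delete a layer.
    g = list(genome)
    op = random.choice(["resize", "add", "remove"])
    if op == "resize" or (len(g) == 1 and op == "remove"):
        i = random.randrange(len(g))
        g[i] = max(1, g[i] + random.choice([-16, -8, 8, 16]))
    elif op == "add":
        g.insert(random.randrange(len(g) + 1), random.choice([16, 32, 64]))
    else:
        g.pop(random.randrange(len(g)))
    return g

def evolve(pop_size=20, generations=30, seed=0):
    random.seed(seed)
    # Start from minimal one-layer genomes.
    population = [[random.choice([16, 32, 64])] for _ in range(pop_size)]
    for _ in range(generations):
        # Truncation selection: keep the fitter half, refill with mutants.
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]
        population = survivors + [mutate(random.choice(survivors))
                                  for _ in range(pop_size - len(survivors))]
    return max(population, key=fitness)
```

Because survivors carry over unchanged each generation, the best fitness never decreases; the expensive part in practice is the fitness evaluation, which is why real systems parallelize candidate training aggressively.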
Machine Learning News Links
How To Use the New Stack Overflow Bot
Multi-Task Learning Objectives for Natural Language Processing
NVIDIA Open Sources Deep Learning Accelerator NVDLA
Google Compute Engine Introduces Faster NVIDIA GPUs
Intel’s New Self-Learning Chip Promises to Accelerate Deep Learning/Machine Learning
New Deep Learning/Machine Learning Tools Launched by Microsoft
Machine Learning Books
Deep Learning Techniques for Music Generation - A Survey
A review article of the various techniques used for music generation with deep learning (primarily recurrent neural network) algorithms. A good entry point to acquaint oneself with the field.