Machine Learning Blueprint Newsletter, Edition 7, 11/12/17


This newsletter is written and curated by Mike Tamir and Mike Mansour

November 12, 2017

Hi all,
We’ve got some exciting news. We’ve launched a Facebook group for our subscribers where everyone can talk about machine learning during the week.
Feel free to invite friends and join here: Machine Learning Blueprint Facebook Group

Spotlight Machine Learning Articles
Last week, Geoffrey Hinton and his team published two papers introducing a completely new type of neural network built from so-called capsules, which stands to overturn the state of the art in deep learning image recognition.
Machine Learning Blueprint's Take
The current state of the art in deep learning image recognition, ConvNets, has a critical flaw: they rely heavily on a technique called "max-pooling" to detect the presence of specific patterns. Hinton claims that this "pooling operation used in convolutional neural networks is a big mistake and the fact that it works so well is a disaster," because it doesn't take into account relative orientation. In contrast, capsule neural nets are inspired by 3D representations in computer graphics. These "CapsNets" leverage potential translations and rotations of the detected object to capture hierarchical 3D relationships. See here for an elegant summary of the intuitions behind ConvNets.
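To see the flaw Hinton is pointing at, here is a minimal sketch (ours, not from the papers) of why max-pooling discards positional information: two activation maps with the same feature in different positions pool to identical outputs.

```python
import numpy as np

# Illustrative sketch: max-pooling keeps only the strongest activation in
# each window, discarding *where* inside the window the feature appeared.
def max_pool_2x2(x):
    """2x2 max pooling with stride 2 on a 2D activation map."""
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

# Two maps with the same feature in opposite corners of the window...
a = np.array([[9.0, 0.0],
              [0.0, 0.0]])
b = np.array([[0.0, 0.0],
              [0.0, 9.0]])

# ...pool to identical outputs: the relative position is lost.
print(max_pool_2x2(a))  # [[9.]]
print(max_pool_2x2(b))  # [[9.]]
```

A downstream layer seeing only the pooled values cannot distinguish the two inputs, which is the spatial-relationship information capsules aim to preserve.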

LabSix has developed techniques for creating 2D and 3D-printed objects that fool object recognition algorithms like Google’s InceptionV3.
Machine Learning Blueprint's Take
Making changes to an image so that it still looks like, e.g., a cat to human eyes but fools a deep learning classifier, as in the image below, has led to fascinating results in recent years (see How Adversarial Networks Work). Such adversarial techniques can be fragile, however, failing to trick the algorithm under simple rotations, for example. The LabSix results are remarkable because they remain stable under rotations, flips, and translations of the “adversarial objects,” meaning there is more work to be done. Perhaps the introduction of Hinton’s new CapsNets (above) will be part of an eventual solution.
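For readers curious how such perturbations are built, here is a hedged sketch of the fast gradient sign method (FGSM), one common way to construct adversarial examples; the toy logistic model and its weights are our invention, not LabSix's method.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    """Perturb input x to *increase* the loss of a logistic classifier.

    The gradient of the cross-entropy loss w.r.t. x is (p - y) * w, so
    stepping eps * sign(gradient) nudges x toward misclassification
    while changing each input coordinate by at most eps.
    """
    p = sigmoid(x @ w + b)
    grad_x = (p - y) * w
    return x + eps * np.sign(grad_x)

rng = np.random.default_rng(0)
w = rng.normal(size=8)
b = 0.0
x = w * 0.5               # an input the model classifies confidently as 1
adv = fgsm_perturb(x, w, b, y=1.0, eps=0.6)

print(sigmoid(x @ w + b))    # high probability of class 1
print(sigmoid(adv @ w + b))  # lower probability after the perturbation
```

The fragility mentioned above comes from the fact that such perturbations are tuned to one exact input; rotating or translating the image changes the pixels the gradient was computed against, which is what makes LabSix's transformation-robust objects notable.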
TensorFlow releases a new “Eager Execution” mode to break out of the static graph runtime paradigm.
Machine Learning Blueprint's Take
The release of PyTorch early this year has given TensorFlow serious competition in the space of open source deep learning Python development options. One of PyTorch’s major advantages is that it does not operate under a “static graph” paradigm, meaning that executions can happen “on the fly.” Eager Execution now brings TensorFlow out of that paradigm as well. With this advance, and Keras continuing to standardize as the de facto front end for TensorFlow, Google’s darling open source project is quickly bridging the gap with PyTorch.
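The difference between the two paradigms can be illustrated without either framework. The toy graph class below is our own sketch: a "static graph" library makes you define symbolic operations first and run them later in a separate step, while "eager" execution is just ordinary code whose results are available immediately for inspection and debugging.

```python
class Node:
    """A deferred operation in a toy static computation graph."""
    def __init__(self, fn=None, *parents):
        self.fn, self.parents = fn, parents

    def run(self, feed):
        # Leaves pull their value from the feed dict; interior nodes
        # recursively evaluate parents, then apply their operation.
        if not self.parents:
            return feed[self]
        return self.fn(*(p.run(feed) for p in self.parents))

# Static style: build the graph now, execute later with fed-in values.
x = Node()
y = Node()
z = Node(lambda a, b: a * b + a, x, y)   # z = x * y + x; nothing computed yet
print(z.run({x: 3, y: 4}))               # 15 — values flow only at run time

# Eager style: plain Python, each line computes immediately.
a, b = 3, 4
print(a * b + a)                         # 15
```

The eager style is easier to debug and interleave with Python control flow, which is the workflow advantage PyTorch popularized and Eager Execution brings to TensorFlow.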
Learning Machine Learning
A walkthrough of the reinforcement learning and MCTS-based modifications made to the latest iteration of AlphaGo, AlphaGo Zero, that enabled it to train itself in 72 hours and beat its “world champion” predecessor, the original AlphaGo algorithm.
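At the heart of AlphaGo Zero's search is the PUCT selection rule: each simulation descends the tree by picking the child that maximizes Q + U, where U balances the policy network's prior against how often a move has already been visited. A minimal sketch (our simplification of the rule described in the paper):

```python
import math

def puct_select(children, c_puct=1.0):
    """Pick a child index by the PUCT rule: argmax over Q(s,a) + U(s,a).

    Each child is a dict with prior probability P (from the policy net),
    visit count N, and total accumulated value W from past simulations.
    """
    total_n = sum(c["N"] for c in children)

    def score(c):
        q = c["W"] / c["N"] if c["N"] else 0.0            # mean value Q(s,a)
        u = c_puct * c["P"] * math.sqrt(total_n) / (1 + c["N"])
        return q + u

    return max(range(len(children)), key=lambda i: score(children[i]))

# A rarely visited move with a strong prior can beat a well-explored one:
children = [
    {"P": 0.1, "N": 10, "W": 5.0},   # explored, Q = 0.5
    {"P": 0.8, "N": 0,  "W": 0.0},   # unexplored, but the network likes it
]
print(puct_select(children))  # 1 — the high-prior move gets expanded
```

As visit counts grow, the U term shrinks and the search increasingly trusts the empirical Q values, which is how self-play statistics gradually override the network's initial guesses.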
Interesting Research
Deep Learning AIs can now locate the referents of phrases in images, answer questions about visual scenes, and even execute symbolic instructions. To achieve this, models must overcome well-studied learning challenges fundamental to infants learning their first words. Applying experimental paradigms from developmental psychology to AIs, this research explores the conditions under which established human biases and learning effects emerge, to better understand this phenomenon.
Kindermans et al. show that a transformation with no effect on a deep learning model’s predictions can cause numerous methods to incorrectly attribute saliency, calling explanation methods for deep learning model results into question.
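The core observation can be reproduced on a toy linear model (our hedged reconstruction, not the paper's code): shift every input by a constant and absorb the shift into the bias, and the model's predictions are unchanged, yet a "gradient × input" attribution gives different answers for the same decision.

```python
import numpy as np

w = np.array([1.0, -2.0, 3.0])
b = 0.5

def model(x, w, b):
    """A linear model; its gradient w.r.t. x is simply w."""
    return x @ w + b

def grad_times_input(x, w):
    """The 'gradient x input' saliency attribution for the linear model."""
    return w * x

x = np.array([0.2, 0.4, 0.6])
shift = np.array([1.0, 1.0, 1.0])

# Equivalent model: shifted inputs with a compensated bias term, so that
# (x + shift) @ w + b2 == x @ w + b for every input.
x2 = x + shift
b2 = b - shift @ w

print(model(x, w, b), model(x2, w, b2))  # identical predictions
print(grad_times_input(x, w))            # one attribution...
print(grad_times_input(x2, w))           # ...a different attribution
```

Since nothing about the model's behavior changed, at most one of the two attributions can be "right," which is the paper's argument that such saliency methods are unreliable as explanations.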
Machine Learning News Links
AutoML for Large Scale Image Classification and Object Detection. Google has expanded their framework for automatically designing deep learning models to work more efficiently on massive datasets.
Why Universities Are Losing Their Best AI Scientists. The diffusion of machine learning knowledge and innovation through academia is in peril as industry lures top researchers away with bigger budgets and more immediately impactful work.

