
Future of CV and NLP, Maths (Fields Medal, Maths for ML), AutoML Natural Language, Google Assistant, Cutting through the hype, NALUs

August 6 · Issue #29
NLP News
Hi all,
Greetings from Ireland ☘️! I actually didn’t expect to be thankful for the Irish weather ☔️. Depending on your location ☀️, you might want to enjoy this week’s newsletter with a cold beverage 🍹.
I really appreciate your feedback, so let me know what you love ❤️ and hate 💔 about this edition. Simply hit reply on the issue.
If you were referred by a friend, click here to subscribe. If you enjoyed this issue, give it a tweet 🐦.

Looking into the crystal ball 🔮
Wondering what the future of computer vision may look like? Jitendra Malik, a professor at Berkeley, looks into the future in his acceptance talk for the John McCarthy Research Excellence Award at IJCAI. He emphasizes the move from predicting boxes and masks to predicting 3D shapes of objects. He also stresses the importance of creating models that can understand the social and causal aspects of situations and incorporate common sense knowledge to understand different levels of human behaviour. This echoes similar discussions in the NLP community, where researchers also see the need to incorporate world knowledge (see my summaries of NAACL-HLT 2018 and ACL 2018).
The main trends at ACL 2018 this year were to gain a better understanding of what the representations of our models capture and to expose them to more challenging settings. For more details, you can read my full review of the conference. If you were also at ACL 2018, I’d personally love to read your take. The conference is by now so big (with 6 parallel sessions) that I was only able to see a small part of it.
On the future of NLP, Richard Socher writes that the next challenge for AI is to understand the nuances of language. For instance, models need to be able to model different layers of context in order to properly understand a question.
In his ICML invited talk, Josh Tenenbaum reflects on what it takes to build machines that learn and think like people. In particular, he also highlights the importance of modeling the world, e.g. explaining and understanding what we see and planning and problem solving.
Maths 🔢
The Fields Medal (the “Nobel Prize of mathematics”) was awarded in the past week to four researchers. You can read here about what they have done. I’d particularly recommend watching the inspiring videos. For ML researchers, Alessio Figalli’s work on optimal transport is particularly relevant due to its application to current generative modeling approaches. Optimal transport aims to minimize the cost of moving materials from their source to their target location (for example books on a bookshelf, bread from bakeries) and has been applied to modeling the dynamics of clouds, crystals, and soap bubbles. If we are dealing with probability distributions and the cost is the distance between the points, then the optimal cost is identical to the Wasserstein distance, which has been used in GANs and for learning cross-lingual embeddings.
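As a toy illustration of that last point (my own example using SciPy, not from any of the linked articles): for two one-dimensional sample sets with cost equal to the distance moved, the optimal transport cost is exactly the Wasserstein distance.

```python
import numpy as np
from scipy.stats import wasserstein_distance

# Two empirical 1-D distributions: moving the mass of `source` onto
# `target` at cost |x - y| per unit of mass is an optimal transport problem.
source = np.array([0.0, 1.0, 2.0])
target = np.array([5.0, 6.0, 7.0])

# Every point shifts by exactly 5, so the optimal cost is 5.0.
print(wasserstein_distance(source, target))  # 5.0
```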
In our day-to-day work, we certainly don’t need Fields Medal-level maths to make a difference. But how much maths does it actually take to get started in ML? Vincent Chen asks this question in this blog post. The TL;DR is that different problems require different levels of intuition. For building products, peers and study groups can motivate your learning. In the research world, broad mathematical foundations can give you the tools to push the field forward. In general, math (especially in research paper form) can be intimidating, but getting stuck is a huge part of the learning process.
Google NLP news 📰
There have been a couple of interesting announcements about Google’s NLP services. In particular, Google has announced AutoML Natural Language to predict custom text categories and AutoML Translation to train custom translation models. It is unclear what part of this is actually AutoML; you generally don’t need a completely new architecture for these problems and typically would rather use a pretrained model than train from scratch. Quick shout-out to Aylien, where we’re training custom NLP models without the hype (disclaimer: I’m an employee at Aylien).
Lastly, Google also announced Contact Center AI, which effectively tries to automate customer support. Depending on the type of support, this seems like a domain that is narrow enough that training a dedicated assistant might be fruitful.
Another interesting new market for the Google Assistant might be factory floors. Together with Google Glass Enterprise edition, it could provide assistance to factory workers, for instance by telling them what materials would be needed for a particular task and where they could be found. 
In such enterprise settings, we expect an assistant to be informative and efficient. In day-to-day interactions, however, humor, sass, and tact are often more important. But how does Google Assistant get these very human qualities? It turns out that all you need to create a smart, funny, empathetic assistant is a roomful of smart, funny, empathetic people.
In other news, Google Translate apparently translates 143 billion words per day. In other words, each day we run 132,000x the entire Harry Potter saga or 177,000x the Bible through Google Translate. That’s a lot of words.
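A quick back-of-the-envelope check of those multiples (the word counts below are my own rough assumptions: about 1.08 million words for the seven Harry Potter books and about 0.8 million for the Bible; counts vary by edition):

```python
words_per_day = 143e9          # Google Translate's reported daily volume

harry_potter_saga = 1.08e6     # approximate total word count, all seven books
bible = 0.8e6                  # approximate word count, varies by translation

# Both ratios land in the low hundreds of thousands, matching the
# ~132,000x and ~177,000x figures quoted above.
print(round(words_per_day / harry_potter_saga))
print(round(words_per_day / bible))
```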
Cutting through the hype ✂️
The previous section touched on AutoML, Google’s much-hyped approach to “take an ability that a few PhDs have today and […] make it possible in three to five years for hundreds of thousands of developers to design new neural nets for their particular needs”. fast.ai’s Rachel Thomas does a great job of deconstructing AutoML in this blog post and describing why it may (or may not) be useful.
Other people such as Stephen Merity and online news sources such as Skynet Today and TheGradient do an equally great job at calling out overpromising or unrealistic expectations. In this context, here’s a cool list of ‘cold showers’ on overhyped topics, i.e. rigorous explanations that cut through the hype. Having something similar for ML would be cool. However, most cold showers might be short-lived as hype is often generated by news articles or press releases.
Bias, fairness, and interpretability 🏅
In this blog post, Yoel Zeldes gives an overview of why uncertainty estimates are important in DL. He outlines which types of uncertainty exist (data, model, measurement, label) and provides examples of how we can use uncertainty to interpret our model.
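One simple way to get a handle on model uncertainty, one of the types the post discusses, is to look at the disagreement between members of an ensemble (a minimal sketch of that idea in NumPy, not code from the post):

```python
import numpy as np

# Class-probability predictions from 5 ensemble members for one input.
preds = np.array([
    [0.90, 0.10],
    [0.80, 0.20],
    [0.95, 0.05],
    [0.85, 0.15],
    [0.90, 0.10],
])

mean_pred = preds.mean(axis=0)         # the ensemble's averaged prediction
model_uncertainty = preds.var(axis=0)  # disagreement between members

# High variance on an input signals that the models are uncertain there,
# even when the averaged prediction looks confident.
print(mean_pred)
print(model_uncertainty)
```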
Recent news articles and studies highlight the inherent biases in our algorithms. However, in many cases the status quo, i.e. human judgment, is subject to biases that are likely even worse.
OpenAI used a standardized list of criteria to review over 700 applicants for maximal fairness. The stories and progress of the final 8 OpenAI Scholars are inspiring and encouraging for many who are getting started in ML and NLP.
Resources and tutorials 📋
Start-up document for grad students 🚀 Here’s a great list of resources for when you’re starting a PhD.
In-depth tutorial on transfer learning for NLP 🖥 This tutorial gives you an overview of transfer learning for NLP in general and our ULMFiT method in particular.
Comprehensive text classification guide 📖 This in-depth guide by Google walks you through everything you need to know to do text classification with TensorFlow.
Implementations
Neural Turing Machine 🤖 Mark Collier provides the first stable and reliable implementation of NTM. Best practices are described in this accompanying paper.
Pythia for VQA 📺 This modular framework for Visual Question Answering formed the basis for FAIR’s winning entry to the VQA Challenge 2018.
Paper picks
Neural Arithmetic Logic Units (arXiv) This recent paper by DeepMind is a great example of how we can introduce inductive bias in our models. The Neural Arithmetic Logic Unit (NALU) combines activations that encourage different ranges of values in their corresponding weight matrices with gates that switch on relevant operations. The resulting cell is shown to be able to learn arithmetic functions on both synthetic and real-world tasks. In particular, the model generalizes better than existing models to numbers outside the range it was trained on. I hope we will see more papers like this that propose modules focused on a particular application or type of reasoning.
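To make the cell concrete, here is a rough NumPy rendering of the paper’s forward pass (my own sketch with hypothetical, hand-saturated weights; not the authors’ code). The weight matrix is pushed towards {-1, 0, 1} by a tanh-sigmoid product, and a gate interpolates between an additive path and a multiplicative path computed in log space:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def nalu_forward(x, W_hat, M_hat, G, eps=1e-7):
    """One forward pass through a single NALU cell."""
    W = np.tanh(W_hat) * sigmoid(M_hat)       # weights biased towards {-1, 0, 1}
    a = W @ x                                  # addition / subtraction path
    m = np.exp(W @ np.log(np.abs(x) + eps))    # multiplication / division path
    g = sigmoid(G @ x)                         # gate: ~1 -> additive, ~0 -> multiplicative
    return g * a + (1.0 - g) * m

# With all weights saturated at +1 and the gate open, the cell computes a sum.
big = 20.0
x = np.array([2.0, 3.0])
W_hat = np.full((1, 2), big)   # tanh(big) ~ 1
M_hat = np.full((1, 2), big)   # sigmoid(big) ~ 1, so W ~ 1
G = np.full((1, 2), big)       # gate ~ 1, selecting the additive path
print(nalu_forward(x, W_hat, M_hat, G))  # ~[5.0], i.e. 2 + 3
```

The point of the extrapolation results is that because the paths are built from actual arithmetic primitives rather than learned nonlinearities, the same weights keep working for inputs far outside the training range.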
For more papers to read, have a look at my summary of ACL 2018 where I’ve referenced more than 30 papers from the conference.