ACL 2018, Videos from NAACL 2018 and ICML 2018, Fanatic NMT, MojiTalk, CompareGAN, NLP is fun, Program synthesis, AutoML, DeepStack, Gumbel Softmax
Hi all,
It's been a year since the first edition of this newsletter 🎉! Thank you so much for sticking with it!
Thanks also to everyone who provided feedback on the last newsletter! A lot of you liked the narrative style. At the same time, the categorization was confusing, and some useful categories were missing. I'll try to strike a balance and combine the best of both worlds in the future.
This newsletter is a bit shorter as I've been traveling for ACL for the past two weeks ✈️. I really appreciate your feedback, so let me know what you love ❤️ and hate 💔 about this edition. Simply hit reply on the issue.
If you were referred by a friend, click here to subscribe. If you enjoyed this issue, give it a tweet 🐦.
ACL 2018 🐨
ACL took place last week in Oz, the land of koalas and kangaroos. I'll post a longer review soon. In the meantime, here are a few of my highlights:
AACL, the Asia-Pacific Chapter of the ACL, was launched. That's great news for anyone interested in NLP in Asia and Oceania! Given the rate at which new chapters are launched (EACL in 1982, NAACL in 2000, AACL in 2018), it seems people in Africa will have to wait a while for their dedicated NLP conference. On increasing the diversity of our conferences, Hal Daumé III has also written "yet another list of things we can do to have more diverse sets of invited speakers".
"LSTMs work in practice, but can they work in theory?"
- Mark Steedman, ACL Lifetime Achievement Award winner 2018 for his work on CCG
If you're curious what some other Lifetime Achievement Award recipients had to say, here's the full list along with links to their speeches. As one example, Martin Kay, who received the award in 2005, opined that "psycholinguistics (to him) [...] has always been [...] the thing that computational linguistics stood (the) greatest chance of providing to humanity". These days, it seems that most people disagree. However, there are still things we can learn from psycholinguistics, as evidenced by the panel discussion in the Representation Learning Workshop.
Are you looking for must-read papers from ACL 2018? Definitely check out the best papers and the honorable mentions.
Other conferences and events 🤖
NAACL 2018 Talks from the conference are now available here. That should give you enough viewing material for the next two weeks.
ICML 2018 I wasn't able to find videos of all the talks at ICML, but here's a selection: if you're interested in imitation learning, definitely check out this tutorial. For anyone wanting to stay up-to-date with neural network architectures, check out this Deep Learning session featuring DiCE, MCTSnets, and TACO, among others. For a more comprehensive overview, check out David Abel's extensive notes. Ronan Collobert and Jason Weston won the Test of Time Award at ICML for their paper A Unified Architecture for Natural Language Processing: Deep Neural Networks with Multitask Learning, which had a huge impact on the NLP community. A standard NN architecture for NLP tasks? ✓ Multi-task learning? ✓ Pre-word2vec unsupervised pre-training? ✓ If you haven't read the paper, now's the time to check it out. Here's the video of the presentation.
IJCAI 2018 I don't have much news from this conference. Liza Shulyayeva summarized Oren Etzioni's talk, which discussed some of the work the Allen Institute is doing to drive the creation of common sense in AI, focusing especially on the need for a concrete benchmark to measure progress on implementing common sense.
Machine Learning Summer School, Buenos Aires Many of the slides from MLSS Buenos Aires, by illustrious speakers such as Sergey Levine, Ryan Adams, David Blei, and many more, are available here.
ICLR 2019 The deadline for ICLR 2019 is September 27, one month earlier than last year, to allow submission to ICML 2019, which is also earlier. The conference no longer has a separate workshop track.
Fanatic NMT 😈
Maori input: "dog dog dog dog dog dog dog dog dog dog dog dog dog dog dog dog dog dog dog"
Google Translate output: "Doomsday Clock is three minutes at twelve We are experiencing characters and a dramatic developments in the world, which indicate that we are increasingly approaching the end times and Jesus' return"
Another week, another over-the-top, ridiculous news headline! This time Google Translate is apparently Spitting Out Sinister Religious Prophecies when fed nonsensical input in a low-resource language. The output reads like paragraphs from the Bible.
If the input is nonsensical, the decoder may act like a language model and essentially sample fluent output that echoes its training data. For low-resource languages, the Bible is often one of the only parallel texts available (apparently even for Google). For instance, the Bible has also been used to learn cross-lingual word embeddings. What can we take away from this? We should work more on low-resource languages. In addition, Google might want to implement some way to detect issues in translations. WeChat, for instance, uses different detection methods. Delip Rao has posted more things that are wrong with current NMT.
Code and implementations 🖥
MojiTalk (ACL 2018) 😂 Generate emotional responses at scale with the data and code from this ACL paper.
Compare GAN 🤺 Here you'll find the code and pre-trained TensorFlow Hub models for Are GANs Created Equal? A Large-Scale Study and for the latest practical "cookbook" for GAN research: everything you need to train your GAN.
Cool overviews and intros💡
NLP is fun 😊 A clear and beginner-friendly introduction to NLP.
Program synthesis in 2017-18 🎨 A comprehensive overview of the current state-of-the-art in generating programs.
AutoML and Neural Architecture Search 🏛 An opinionated introduction to two of the hottest topics by fast.ai's Rachel Thomas.
DeepStack, AlphaZero, TRPO 🃏 A collection of in-depth reviews and self-contained lessons on influential research papers.
Gumbel softmax 🤖 A clear walk-through of how to train neural networks to sample from discrete distributions.
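Since the trick itself fits in a few lines, here is a minimal numpy sketch of Gumbel-softmax sampling (my own illustration under simple assumptions, not code from the linked walk-through): add Gumbel(0, 1) noise to the logits and apply a temperature-controlled softmax, so that low temperatures approximate one-hot samples from the underlying categorical distribution while keeping everything differentiable.

```python
import numpy as np

def sample_gumbel(shape, eps=1e-20):
    """Draw Gumbel(0, 1) noise via the inverse-CDF trick."""
    u = np.random.uniform(size=shape)
    return -np.log(-np.log(u + eps) + eps)

def gumbel_softmax(logits, temperature=0.5):
    """Relaxed, differentiable sample from Categorical(softmax(logits)).

    As temperature -> 0 the output approaches a one-hot sample;
    higher temperatures yield smoother, more uniform vectors.
    """
    y = (logits + sample_gumbel(logits.shape)) / temperature
    y = y - y.max()                 # for numerical stability
    e = np.exp(y)
    return e / e.sum()

# Toy usage: relaxed samples mostly concentrate on the highest-probability class.
logits = np.log(np.array([0.1, 0.2, 0.7]))
print(gumbel_softmax(logits, temperature=0.1))
```

In practice you would compute this inside an autograd framework so gradients can flow through the relaxed sample; the straight-through variant additionally takes the argmax in the forward pass while keeping the softmax gradient.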
Other things
Facebook 👱📖 Facebook is expanding its academic collaboration teams in Pittsburgh, Seattle, London, and Menlo Park. Yann LeCun and Mark Zuckerberg have also given high-profile interviews to Forbes and Recode, respectively.
Reinforcement learning 🎮 OpenAI has continued to remove restrictions for their highly anticipated Dota 2 match against pros in two weeks. Arthur Juliani looks beyond the hype of recent Deep RL successes that “solved” Montezuma’s Revenge.
Automatic evaluation 📋 Automatic evaluation measures (BLEU for MT, ROUGE for summarization) are super common nowadays, but are they always a good idea? This post, which accompanies an ACL 2018 paper, shows that automatic measures are biased and that debiasing them is barely cheaper than doing a full human evaluation. For evaluating abstract reasoning, DeepMind introduces a dataset similar to some of the tasks found in human IQ tests. Human intelligence has not evolved by hillclimbing on IQ tests, so such datasets are mainly useful for evaluating and probing trained models.
Africa 🌍 Moustapha Cisse introduces an African Master's in Machine Intelligence at the African Institute for Mathematical Sciences that is supported by both Google and Facebook, an important step towards building a more diverse AI community.
Paper picks 📃
Talk the Walk: Navigating New York City through Grounded Dialogue
De Vries et al. propose the first large-scale dialogue dataset grounded in action and perception. The task involves two agents, a "guide" and a "tourist", collaborating to get the tourist to navigate to a given location. The guide has access to a map and knows the target location but does not know where the tourist is, while the tourist has a 360-degree view of the world but knows neither the target location on the map nor the way to it. The dataset was created by manually capturing several neighborhoods of New York City with a 360° camera; dialogue data was collected via Mechanical Turk.
LSTMs Can Learn Syntax-Sensitive Dependencies Well, But Modeling Structure Makes Them Better (ACL 2018)
It's nice when a paper's title is already so descriptive. The authors find that LSTM language models can in fact learn subject-verb agreement (contrary to earlier findings). However, more syntax-sensitive models such as recurrent neural network grammars perform even better. A minimal sketch of the underlying agreement test follows below.
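For readers unfamiliar with how such agreement abilities are measured, here is a minimal sketch of the standard evaluation protocol in the style of Linzen et al. (2016), which this line of work builds on. The helper and the toy scorer below are my own illustrative assumptions, not the paper's released code: the model passes an item if it assigns a higher score to the sentence with the grammatical verb form than to the one with the ungrammatical form.

```python
def agreement_accuracy(score, items):
    """Subject-verb agreement probe: a model is correct on an item if it
    scores the sentence with the grammatical verb higher than the one
    with the ungrammatical verb. `score(sentence)` can be any sentence
    scorer, e.g. an LSTM language model's total log-probability."""
    hits = sum(
        score(f"{prefix} {good}.") > score(f"{prefix} {bad}.")
        for prefix, good, bad in items
    )
    return hits / len(items)

# Toy items with an intervening "attractor" noun that often misleads models.
items = [
    ("The keys to the cabinet", "are", "is"),
    ("The author of the books", "writes", "write"),
]

# Stand-in scorer for illustration only; a real evaluation plugs in a trained LM.
toy_score = lambda s: 1.0 if (" are." in s or " writes." in s) else 0.0
print(agreement_accuracy(toy_score, items))  # -> 1.0
```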
Semantically Equivalent Adversarial Rules for Debugging NLP Models (ACL 2018)
The authors propose semantically equivalent adversaries (SEAs), meaning-preserving perturbations that nevertheless change a model's predictions, and generalize them into rules that affect many instances. They apply these to machine comprehension, VQA, and sentiment analysis and show that retraining the models with the rules as data augmentation reduces the corresponding errors.