2020 NLP wish lists, HuggingFace + fastai, NeurIPS 2019, GPT-2 things, Machine Learning Interviews
Happy holidays everyone! 🕯🎄🕎
I hope you all had a fantastic year. The last newsletter of 2019 covers wish lists for NLP in 2020, news about popular NLP and Deep Learning libraries, highlights of NeurIPS 2019, and some fun things done with GPT-2. Among the resources and posts, my highlights are resources for preparing for Machine Learning Interviews as well as posts on the nature of machine intelligence and on highlights from EMNLP and CoNLL 2019.
Enjoy the time off with your loved ones and see you again in 2020!
Contributions 💪 If you have written or have come across something that would be relevant to the community, hit reply on the issue so that it can be shared more widely.
Newsletter shout-out 📰 If you are looking for news about NLP focusing on industry, then check out This Week in NLP where Robert Dale shares five NLP highlights each week.
I really appreciate your feedback, so let me know what you love ❤️ and hate 💔 about this edition. Simply hit reply on the issue.
If you were referred by a friend, click here to subscribe. If you enjoyed this issue, give it a tweet 🐦.
2020 NLP wish lists 🧚‍♀️
My wish list for NLP research in 2020:
Learning from few samples rather than from large datasets
Compact and efficient rather than huge models
Evaluation on at least one other language (from a different language family)
New datasets that contain at least one other language
NLP helps to unlock scientific knowledge (see e.g. this Nature paper)
Yoav Goldberg, Rada Mihalcea, and Ted Pedersen also posted wish lists on Twitter. Here are my highlights:
Deep Learning libraries: HuggingFace + fastai 🤗⏩
In the last month, there have been some exciting improvements to popular Deep Learning libraries. 🤗Transformers now allows you to combine different pretrained models in an encoder-decoder framework.
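Below is a minimal sketch of what this can look like, assuming a recent release of 🤗Transformers; the checkpoint names, input sentence, and use of the EncoderDecoderModel class are illustrative choices of mine rather than taken from the library's announcement.

```python
# Minimal sketch (assumes a recent 🤗Transformers release); checkpoints are illustrative.
from transformers import BertTokenizer, EncoderDecoderModel

tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")
# Warm-start both the encoder and the decoder from pretrained BERT weights.
model = EncoderDecoderModel.from_encoder_decoder_pretrained(
    "bert-base-uncased", "bert-base-uncased"
)

inputs = tokenizer("NLP News is a monthly newsletter.", return_tensors="pt")
# A single forward pass; fine-tuning would additionally pass target ids as labels.
outputs = model(
    input_ids=inputs["input_ids"], decoder_input_ids=inputs["input_ids"]
)
print(outputs.logits.shape)  # (batch size, sequence length, vocabulary size)
```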
In addition, the popular fastai library has also seen some changes. In this blog post, Maximilien describes how Transformers can be combined with fastai to get the best of both worlds.
The fast.ai team also developed a new Python programming environment called nbdev that is entirely based on Jupyter Notebooks. The environment brings the benefits of an IDE/editor such as reusable code, documentation, tests, code navigation, and version control to notebooks. If you love Jupyter notebooks, then this is for you!
NeurIPS 2019
With around 13,000 attendees, NeurIPS 2019 was the largest ML conference of the year. The NeurIPS 2019 Program Chairs did a fantastic analysis of the reviewing process. The main takeaways:
NeurIPS has no free-loader problem: Most of the authors of submitted papers participate in reviewing.
It is still unclear how to filter papers before the full review.
Review quality (as measured by length) is not lower compared to smaller conferences.
Two highlights of the conference were Yoshua Bengio's keynote and the Outstanding New Directions Paper Award.
From System 1 Deep Learning to System 2 Deep Learning In his keynote, Yoshua asks how far we are from human-level AI and whether it is enough to keep growing our datasets, model sizes, and compute. The keynote, inspired by Thinking, Fast and Slow, provides a roadmap for getting from System 1 (fast, unconscious, non-linguistic, habitual decision making/inference), which our models are good at, to System 2 (slow, logical, sequential, conscious, linguistic, algorithmic decision making).
Uniform convergence may be unable to explain generalisation in deep learning The Outstanding New Directions paper shows that many existing generalisation bounds based on uniform convergence fail to explain the generalisation behaviour of deep learning; in particular, these bounds can increase with the training set size. The authors then prove that such bounds cannot explain generalisation even for a simple problem. Overall, while the paper does not offer a solution to generalisation in deep learning, it encourages the community to look in a different direction.
Summaries The excellent notes from David Abel include the above two highlights as well as other presentations such as those at the meta-learning workshop. If you prefer a more visual summary, then have a look at Robert Lange's hand-written notes. Definitely read Chip Huyen's awesome summary of NeurIPS, which discusses important highlights related to:
Deconstructing the deep learning black box
New approaches to deep learning (Bayesian, graph neural networks, convex optimisation)
Neuroscience x Machine Learning
All talks from NeurIPS 2019 are available here.
Diversity and inclusion NeurIPS 2019 emphasized diversity and inclusion. Social meetups brought together people with common interests. Celeste Kidd highlighted issues around sexual harassment in her keynote including debunking myths around the climate for men in the #MeToo era. Ian Stewart presented a poster about the importance of serving LGBTQ people with NLP. Kelechi Ogueji and Orevaoghene Ahia were unable to present their work on Pidgin-to-English translation at the NeurIPS 2019 Workshop on Machine Learning for the Developing World due to visa issues.
Another important event in terms of diversity and inclusion in AI was Khipu 2019, which took place in Montevideo in November. For an excellent summary, read the 8 main takeaways from tryo labs. In general, there are opportunities for every one of us to strengthen the AI community and help others, as described by Lila Ibrahim:
Sometimes that extra support is the difference between saying yes or saying no—between following a path in STEM, or doing something completely different. Whether it’s inspiring self-confidence, offering reassurance or providing a financial safety net, showing support and removing the barriers that prevent individuals achieving their full potential can have a powerful impact.
GPT-2 things ✍️
Your monthly tracker of interesting applications of language models. During the last month, GPT-2 has been...
used to create Cards Against Humanity cards;
used to co-write horror stories;
interviewed by The Economist (note that answers have been cherry-picked);
used as the dungeon master in a text-based adventure game.
Talks 🗣
Workshop on Theory of Deep Learning: Where Next? Playlist 🎓 If you want to learn more about the current state of the art in the theory of deep learning, then these presentations are a great starting point. They feature talks from luminaries such as Sanjeev Arora, Chris Manning, and Anima Anandkumar, among many others.
Resources 📚
NLP Best Practices ⚙️ This repository by Microsoft contains examples and best practices for building NLP systems focusing on state-of-the-art methods and common scenarios, provided as Jupyter notebooks and utility functions.
Machine Learning Interviews 👩‍💻 These slides by Chip Huyen contain a lot of important information about ML interviews, from the differences between ML jobs to understanding the interviewing process, the recruiting pipeline, and the interviewers' mindset. If you want to prepare for a particularly common interview, the Machine Learning Systems Design interview, then read Chip's extensive write-up including case studies and example questions.
All The Ways You Can Compress BERT 🗜 Compression methods are becoming more popular for large NLP models, as discussed in past newsletters. Mitchell Gordon gives an overview of the different compression methods, categorises them by their characteristics, and compares their results.
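As a taste of one of the methods covered there, here is a minimal sketch of magnitude pruning applied to BERT's linear layers; it assumes a PyTorch release that includes torch.nn.utils.prune, and the pruning fraction is an arbitrary choice of mine, not a recommendation from the post.

```python
# Sketch of magnitude pruning (assumes torch.nn.utils.prune is available).
import torch.nn as nn
import torch.nn.utils.prune as prune
from transformers import BertModel

model = BertModel.from_pretrained("bert-base-uncased")

# Zero out the 30% smallest-magnitude weights in every linear layer.
for module in model.modules():
    if isinstance(module, nn.Linear):
        prune.l1_unstructured(module, name="weight", amount=0.3)
        prune.remove(module, "weight")  # bake the pruning mask into the weights
```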
Articles and blog posts 📰
Unsupervised NLU task learning via GPT-2 🐒 This is a nice post by Rakesh Chada that explores the zero-shot capabilities of GPT-2 based on different input prompts.
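As a rough illustration of the idea (the prompt and decoding settings here are my own, not the post's), a task can be phrased as a text prompt and GPT-2's continuation read off as the prediction:

```python
# Hedged sketch: zero-shot prompting of GPT-2 via 🤗Transformers.
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")

prompt = "Review: The acting was wooden and the plot made no sense.\nSentiment:"
input_ids = tokenizer.encode(prompt, return_tensors="pt")
# Greedily decode a few tokens; the continuation hints at the predicted label.
output_ids = model.generate(input_ids, max_length=input_ids.shape[1] + 3)
print(tokenizer.decode(output_ids[0][input_ids.shape[1]:]))
```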
Building a custom OCR using YOLO and Tesseract 🖼 Optical character recognition (OCR) is an important step to extract text from images to further process it with NLP. This tutorial provides clear instructions on how to build an OCR system yourself.
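For the recognition step, a minimal hedged sketch could look like the following; the YOLO detection step is omitted, and the file name and Tesseract configuration are illustrative assumptions rather than the tutorial's own code.

```python
# Run Tesseract OCR on a cropped text region produced by a detector.
from PIL import Image
import pytesseract

region = Image.open("detected_text_region.png")  # hypothetical crop from YOLO
text = pytesseract.image_to_string(region, config="--psm 7")  # single text line
print(text)
```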
Illustrated: Self-Attention 🌈 This step-by-step guide clearly demonstrates how self-attention works with animated illustrations and code.
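If you want to play with the mechanism directly, here is a small NumPy sketch of single-head scaled dot-product self-attention; the scaling and toy dimensions are my own choices and not necessarily identical to the guide's walkthrough.

```python
# Toy single-head self-attention in NumPy.
import numpy as np

def self_attention(X, W_q, W_k, W_v):
    """Scaled dot-product self-attention over a sequence of token vectors X."""
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    scores = Q @ K.T / np.sqrt(K.shape[-1])         # pairwise attention scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # softmax over the keys
    return weights @ V                              # weighted sum of the values

rng = np.random.default_rng(0)
X = rng.normal(size=(3, 4))                         # 3 tokens, 4-dim embeddings
W_q, W_k, W_v = [rng.normal(size=(4, 4)) for _ in range(3)]
print(self_attention(X, W_q, W_k, W_v).shape)       # (3, 4)
```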
A Visual Guide to Using BERT for the First Time 🦍 On the topic of self-attention, Jay Alammar provides another fantastic guide on using BERT in practice, such as for sentiment analysis on movie reviews.
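The recipe boils down to extracting sentence vectors with a pretrained model and training a simple classifier on top; here is a compressed sketch under that assumption (the model name, toy sentences, and scikit-learn classifier are illustrative, not the post's exact code).

```python
# Hedged sketch: DistilBERT features + a simple scikit-learn classifier.
import torch
from sklearn.linear_model import LogisticRegression
from transformers import DistilBertModel, DistilBertTokenizer

tokenizer = DistilBertTokenizer.from_pretrained("distilbert-base-uncased")
model = DistilBertModel.from_pretrained("distilbert-base-uncased")

sentences = ["a masterpiece of a film", "a dull, lifeless mess"]  # toy data
labels = [1, 0]

with torch.no_grad():
    encoded = tokenizer(sentences, padding=True, return_tensors="pt")
    # Take the hidden state of the first ([CLS]-style) token as a sentence vector.
    features = model(**encoded)[0][:, 0, :].numpy()

clf = LogisticRegression().fit(features, labels)
print(clf.predict(features))
```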
What does it mean for a machine to “understand”? 🤖 Thomas Dietterich argues that "understanding" exists along a continuous spectrum of capabilities and that we should not discuss it in terms of "fake" and "genuine" understanding. He furthermore argues that we should define these capabilities in terms of tests that we can perform on an AI system.
Rasa Community Showcase 💬 This is a nice showcase of interesting dialogue agent applications, from finding the perfect stock photo to providing women's health services and AI-assisted medical evaluation.
An Epidemic of AI Misinformation 😷 Gary Marcus comments on the steady stream of results that are first over-hyped, then quietly forgotten. As one example of misinformation, he mentions the GPT-2 interview with The Economist (see above).
What is Emergent Communication and Why You Should Care 🗣 Michael Noukhovitch gives a nice overview of emergent communication, a topic that combines multi-agent RL with language and is attracting increasing attention.
Highlights from CoNLL and EMNLP 2019 🏛 In this insightful post, Arun Tejasvi Chaganty summarises some of the most exciting ideas presented at the conferences. He focuses on methods that are broadly applicable, exciting datasets, and important results.
Papers + blog posts 📑
Selective Brain Damage: Measuring the Disparate Impact of Model Compression (Blog post, paper) This paper shows that pruning disproportionately impacts a model's performance on a fraction of classes and examples (hence "selective brain damage"). The affected examples are also those that unpruned models find challenging to classify.
Hamiltonian Neural Networks (Blog post, paper) Inspired by Hamiltonian mechanics, this paper proposes Hamiltonian Neural Networks. These models learn conservation laws (namely conservation of the total energy) directly from data and are thus well suited to solving physics problems.
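The core trick is compact enough to sketch: a network outputs a scalar H(q, p), and the time derivatives of the state are read off from its gradients via Hamilton's equations, so the learned dynamics conserve the learned energy by construction. The PyTorch toy below is my own simplification, not the authors' code.

```python
# Toy Hamiltonian Neural Network: dq/dt = dH/dp, dp/dt = -dH/dq.
import torch
import torch.nn as nn

class HNN(nn.Module):
    def __init__(self, dim=1, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2 * dim, hidden), nn.Tanh(), nn.Linear(hidden, 1)
        )

    def time_derivatives(self, x):
        # x holds (q, p); differentiate the scalar H with respect to the state.
        x = x.requires_grad_(True)
        H = self.net(x).sum()
        dH = torch.autograd.grad(H, x, create_graph=True)[0]
        dH_dq, dH_dp = dH.chunk(2, dim=-1)
        return torch.cat([dH_dp, -dH_dq], dim=-1)

# Training would regress these derivatives onto observed (dq/dt, dp/dt) data.
model = HNN()
state = torch.randn(8, 2)
print(model.time_derivatives(state).shape)  # torch.Size([8, 2])
```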
Paper picks 📄
Single Headed Attention RNN: Stop Thinking With Your Head Have a look at this paper by Stephen Merity for an entertaining, satirical read. It proposes a new LSTM variant that obtains strong results on WikiText-103 and is competitive with Transformers. Overall, it makes the important point that research should not just follow the mainstream and that it can pay to be contrarian. It also highlights the importance of resource efficiency. In both respects, the paper does not stand alone: both MultiFiT and the Mogrifier are based on LSTMs, while the recent BERT variants ALBERT and ELECTRA are also cheaper to train. Still, it is a useful reminder that these are important directions to pursue.
On the Measure of Intelligence Francois Chollet argues that solely measuring skill at any given task falls short in measuring intelligence because skill heavily depends on prior knowledge and experience; with enough inductive bias or training, a model can reach an arbitrary level of skill. Instead, he proposes a new formal definition of intelligence that highlights the importance of scope, generalisation difficulty, priors, and experience. He also presents the Abstraction and Reasoning Corpus (ARC), a new benchmark for measuring intelligence.