NLP News - Pytorch, awesome posters, NLP competitions & more!
By NLP News • Issue #3
This month, we will take a close look at Pytorch, some highlights of the recent or ongoing conferences, and a collection of impactful papers.

Implementations
Facebook's Pytorch implementation of the ACL 2017 paper.
github.com
Implementation of a model for sentiment analysis and a demonstration of the biases underlying the data.
All about Pytorch
Pytorch is a Deep Learning library designed specifically for implementing dynamic neural networks, which are particularly suited to NLP tasks with variable-length sequences. Other libraries that natively handle dynamic computation graphs are Chainer and DyNet. Pytorch 0.2.0 is now out with many long-awaited features such as broadcasting and advanced indexing. Soumith Chintala speaks here with O'Reilly about why many ML researchers are beginning to embrace Pytorch.
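As a rough illustration of what these two features mean: Pytorch 0.2.0 adopts NumPy-style broadcasting and advanced indexing semantics, so the following minimal NumPy sketch shows the same behavior you now get on Pytorch tensors (the array values here are made up for the example):

```python
import numpy as np

# Broadcasting lets arrays of different but compatible shapes combine
# without explicit tiling: size-1 or missing dimensions are stretched.
scores = np.array([[1.0, 2.0, 3.0],
                   [4.0, 5.0, 6.0]])   # shape (2, 3)
bias = np.array([0.1, 0.2, 0.3])       # shape (3,) -> broadcast to (2, 3)

shifted = scores + bias                # elementwise add, bias is not copied

# Advanced indexing: pick arbitrary elements with integer index arrays.
rows = np.array([0, 1])
cols = np.array([2, 0])
picked = scores[rows, cols]            # elements (0, 2) and (1, 0)
```

Before 0.2.0, the addition above required manually expanding `bias` to the full `(2, 3)` shape; broadcasting removes that boilerplate.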
Conference countdown
ACL 2017 is over and ICML 2017 has just gotten started. Both conferences are host to an array of awesome papers. ACL 2017 Proceedings can be found here and ICML 2017 Proceedings can be found here. Videos of CVPR 2017 can be found here.
aclweb.org
There is an art to the creation of a compelling conference poster that balances entertainment and informativeness. Here are two highlights from CVPR 2017 and one from ACL 2017.
Interpretability is becoming more and more important. The ICML 2017 committee has acknowledged this by awarding the best paper award to Understanding Black-box Predictions via Influence Functions by Koh & Liang. The paper develops tools that scale up influence functions, a classic technique from statistics, to modern ML settings in order to understand black-box predictions. For anyone who wants to read more, here is a great overview of ideas on interpreting ML by O'Reilly.
arxiv.org
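For reference, the central quantity in Koh & Liang's paper is the influence of upweighting a training point $z$ on the loss at a test point $z_{\text{test}}$; sketched from the paper's standard formulation (with $\hat\theta$ the fitted parameters and $L$ the loss), it is:

```latex
\mathcal{I}_{\text{up,loss}}(z, z_{\text{test}})
  = -\nabla_\theta L(z_{\text{test}}, \hat\theta)^{\top}
    H_{\hat\theta}^{-1}\,
    \nabla_\theta L(z, \hat\theta),
\qquad
H_{\hat\theta} = \frac{1}{n} \sum_{i=1}^{n} \nabla_\theta^{2} L(z_i, \hat\theta)
```

The scaling contribution of the paper lies in approximating the inverse Hessian–vector product efficiently for large models.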
Sequence-to-sequence (seq2seq) is one of the most popular frameworks in Deep Learning. It has been used to achieve state-of-the-art performance on machine translation, image captioning, speech generation, and summarization. Oriol Vinyals and Navdeep Jaitly give an overview of seq2seq in this tutorial and outline its future directions.
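The seq2seq pattern itself is simple: an encoder folds the input sequence into a state, and a decoder unrolls outputs from that state until it emits an end-of-sequence marker. Here is a toy, dependency-free sketch of that control flow only; the "networks" below are stand-in functions (real models use RNNs or attention and a softmax over a vocabulary):

```python
EOS = "<eos>"

def encode(tokens):
    # Fold the input into a single summary state.
    # Stand-in: just reverse the tokens (a real encoder produces vectors).
    return list(reversed(tokens))

def decode(state, max_len=10):
    # Unroll one output token at a time until EOS or a length cap.
    output = []
    for _ in range(max_len):
        token = state.pop(0) if state else EOS  # stand-in for the softmax step
        if token == EOS:
            break
        output.append(token)
    return output

def translate(tokens):
    return decode(encode(tokens))
```

The key property the sketch preserves is that the output length is decided by the decoder at run time, not fixed in advance, which is exactly what makes the framework fit tasks like translation and summarization.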
The Allen Institute for Artificial Intelligence has sponsored three challenges focusing on visual understanding at CVPR 2017. The best-performing teams can be found here. If you’re not interested in the visual aspect, check out this competition on human-computer QA and this competition on conversational intelligence (with a prize of $10,000) as part of NIPS 2017. Have a look here for more NIPS 2017 competitions.
You can find inspiration for activities during your stay in Copenhagen here. EMNLP 2017 also overlaps with Copenhagen World Music Festival and Copenhagen Tech Festival.
Industry insights
Uber ditches HipChat and Slack to create its own workplace chat app, uChat
What do you do if you’re a multi-billion dollar company and none of the existing chat apps fulfil your eclectic needs? Simple. You create your own.
Node raises $10.8 million to find you better sales leads
Node builds a platform that uses AI to find sales leads.
Prodigy: A new tool for radically efficient machine teaching
While open-source frameworks are increasingly lowering the barrier to building ML models, it is still a hassle to get data annotated. The creators of open-source NLP library spacy.io now present Prodigy, a tool that uses active learning to make data annotation more efficient.
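The core idea behind active-learning annotation tools is uncertainty sampling: repeatedly ask the human to label the example the current model is least sure about, then update and continue. The sketch below shows that generic loop only; it is not Prodigy's API, and all names in it are invented for illustration:

```python
def uncertainty(prob):
    # Distance from a confident binary prediction; prob=0.5 is maximally
    # uncertain (score 1.0), prob=0.0 or 1.0 is fully confident (score 0.0).
    return 1.0 - abs(prob - 0.5) * 2

def most_uncertain(unlabeled, predict):
    # Pick the example whose predicted probability is closest to 0.5.
    return max(unlabeled, key=lambda x: uncertainty(predict(x)))

def active_learning_round(unlabeled, labeled, predict, ask_human):
    # One round: query the human on the most uncertain example,
    # move it into the labeled pool, and return the updated pools.
    x = most_uncertain(unlabeled, predict)
    labeled.append((x, ask_human(x)))
    unlabeled.remove(x)
    return labeled, unlabeled
```

In a real tool the model would be retrained (or updated online) between rounds so that each new query reflects everything annotated so far; that retraining step is what makes the annotation budget go further than labeling examples in a fixed order.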
Paper picks
In order to improve upon our models, we have to understand what kinds of errors they make and how much they have overfit to the particular biases inherent in the data. QA is one task where models have achieved startlingly close-to-human-level performance, as can be seen on the SQuAD leaderboard here. Jia & Liang craft adversarial examples that probe certain parts of these QA models: average F1 drops from 75% to 36% across sixteen models! Also have a look here for slides from Yoav Goldberg on the problems with SQuAD and here for some brief thoughts from Paul Mineiro.
arxiv.org
One of the recent game changers in computer vision has been the combination of ImageNet and a large pre-trained CNN. In NLP, we have so far not found a model that is as useful for transfer learning (see also my post here). The closest we have in terms of data size and model capacity is Machine Translation. Researchers from Salesforce now show that MT can be used to pre-train not only the word vectors but an entire LSTM encoder, which can then be transferred successfully to a wide range of tasks. The paper and an MIT Tech Review post can be found here and here.
einstein.ai
Researchers from Google show that we can achieve results that are competitive with the state-of-the-art across multiple tasks using small shallow feed-forward neural networks. This is a great result that enables training and deploying smaller and more accurate models in resource-constrained environments such as mobile phones.
arxiv.org
This ACL 2017 outstanding paper shows that we can use multi-task learning, in particular unsupervised video prediction and language entailment generation as auxiliary tasks, to improve the challenging task of video captioning.
arxiv.org
Distant supervision using tweets containing positive and negative emoticons has been a common method to achieve state-of-the-art performance on sentiment analysis. The authors of this EMNLP 2017 paper take this one step further and show that a model pre-trained on 1.2B tweets containing 64 different emojis obtains state-of-the-art performance on 8 sentiment, emotion and sarcasm detection datasets. The paper can be found here.
Dataset spotlight
NLP is already used to automate the writing of particularly formulaic articles such as sports news or financial announcements. Researchers from Harvard present a new dataset for the task of generating text conditioned on a small number of database records. The dataset contains data records paired with descriptive documents.
Twitter highlight
The role of arXiv preprints was a hot topic during the meeting at ACL 2017. Hal Daumé III has transcribed the audience comments via Twitter.
twitter.com
NLP News
NLP News is a biweekly Natural Language Processing & Machine Learning newsletter from academia & industry curated by @seb_ruder.
Carefully curated by NLP News with Revue. If you were forwarded this newsletter and you like it, you can subscribe here. If you don't want these updates anymore, please unsubscribe here.