NLP News - Flies smell word vectors, MarianNMT, Distributed Learning with keras, NLP workshops, NLP for net neutrality, Fairness measures, Opinions on Reproducibility, How to conduct ML research, FigureQA
This edition of the NLP Newsletter answers the following burning questions: Can flies smell word vectors? ✅ What is a fast NMT library? ✅ How can I train on multiple GPUs with keras? ✅ Which NLP workshops should I attend next year? ✅ How many anti-net neutrality comments have been faked? ✅ How can I measure fairness? ✅ What does reproducibility in ML mean? ✅ How should I conduct ML research? ✅ What is a novel way to deal with large output spaces with the softmax? ✅ What is a cool new dataset for visual reasoning that I can use? ✅
Empirical thought experiments
The authors of this Science paper show that a fly categorizes smells by k-hot encoding odours in a (normalized) 50-dimensional smell space, expanding (!) this into a 2,000-dimensional latent space, and then finding nearest neighbours. They also simulate a fly smelling word vectors and MNIST pixels. Credit: Tom White (@dribnet)
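The procedure can be sketched in a few lines of NumPy: a sparse random binary projection expands the normalized input into the larger latent space, and a winner-take-all step keeps only the strongest responses as the "tag". This is a toy illustration of the idea, not the paper's implementation; the 10% connection density and top-16 cutoff are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def fly_hash(x, proj, k=16):
    """Expand x into a high-dimensional space, keep the k strongest responses."""
    x = x - x.mean()              # normalization step (mean-centering, an assumption)
    y = proj @ x                  # sparse random projection into the latent space
    tag = np.zeros_like(y)
    top = np.argsort(y)[-k:]      # winner-take-all: indices of the k largest responses
    tag[top] = 1.0
    return tag

d_in, d_out = 50, 2000            # 50-dim smell space expanded into a 2k-dim latent space
# each latent unit samples roughly 10% of the input dimensions
proj = (rng.random((d_out, d_in)) < 0.1).astype(float)

a = rng.random(d_in)
b = a + 0.01 * rng.random(d_in)   # a very similar "odour"
c = rng.random(d_in)              # an unrelated one

ha, hb, hc = (fly_hash(v, proj) for v in (a, b, c))
print(ha @ hb, ha @ hc)           # similar odours share more active units
```

The dot product of two tags counts shared active units, so similar inputs hash to overlapping tags, which is what makes the nearest-neighbour lookup work.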
Implementations and tools
Fast Neural Machine Translation in C++ with minimal dependencies, by the creators of Moses, with support for multi-GPU training and translation and different model types, and compatibility with Nematus.
Information extraction in Scala with Odin — blog.goodcover.com
A tutorial on information extraction in Scala using Odin, a tool out of the CLU lab at the University of Arizona.
Keras + Horovod = Distributed Deep Learning on Steroids — medium.com
A short tutorial on distributed GPU computing with keras using Horovod, a library developed by Uber.
LMChallenge, a library & tools to evaluate predictive language models — github.com
A standardized set of metrics developed by SwiftKey (acquired by Microsoft) for evaluating predictive language models that can be used with any dataset.
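To give a flavour of what evaluating a predictive language model looks like, here is a toy next-word hit@k metric in plain Python. This is my own illustrative sketch, not LMChallenge's actual API or metric definitions.

```python
def hit_at_k(predictions, targets, k=3):
    """Fraction of positions where the true next word is in the model's top-k list."""
    hits = sum(t in p[:k] for p, t in zip(predictions, targets))
    return hits / len(targets)

# each inner list is a model's ranked next-word candidates at one position
preds = [["the", "a", "an"], ["cat", "dog", "car"], ["sat", "ran", "is"]]
gold = ["the", "dog", "jumped"]
print(hit_at_k(preds, gold))  # fraction of targets found in the top-3 predictions
```

A library like LMChallenge standardizes exactly this kind of comparison, so that models trained on different toolkits can be ranked on the same text.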
Linguistics focused
The schedule for the inaugural meeting of the Society for Computation in Linguistics (SCIL), a conference that seeks to emphasize the Computational Linguistics part of Natural Language Processing. Have a look at the abstracts to get a sense of some more linguistics-focused ideas.
The Rise and Fall of the English Sentence
This article explores the forces that influence the complexity of the language we speak and write and tries to answer why English often has complex sentences, while many other languages have little use for them.
Just Google It: A Short History of a Newfound Verb — www.wired.com
A brief piece about how the world's most popular search engine became a verb, at the cost of the romance of the early internet.
NLP Workshops 2018
The 2018 conference season is getting started; while not all conference websites for 2018 are available yet, some workshops have already been announced. If you're interested in which conference your favourite workshop will take place at, or want to check out a new one, I've linked those that I found below:
The 2nd Workshop on Neural Machine Translation and Generation (ACL)
First workshop on Fact Extraction and VERification (EMNLP) (source)
2nd Workshop on Abusive Language Online (source)
4th Workshop on Noisy User-Generated Text (EMNLP) (source)
2nd Workshop on Ethics in NLP (NAACL) (source)
NLP Techniques for Internet Freedom (COLING) (source)
SIGMORPHON Workshop (EMNLP) (source)
Fake News
More than a Million Pro-Repeal Net Neutrality Comments were Likely Faked
Bots and spam campaigns are used to influence online discourse, and NLP can provide us with the means to identify them. In a remarkably insightful case study, Jeff Kao uses NLP techniques to analyze net neutrality comments submitted to the FCC between April and October 2017. He finds that more than 1M pro-repeal net neutrality comments were likely faked and that, very probably, more than 99% of the truly unique comments were in favor of keeping net neutrality.
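A core ingredient of this kind of analysis is spotting near-duplicate, mad-libs-style comments. Here is a toy sketch of one standard technique for that (character n-gram "shingles" compared with Jaccard similarity); this is not Kao's actual pipeline, and the example comments are invented.

```python
def shingles(text, n=4):
    """Set of character n-grams ("shingles") for a document."""
    text = " ".join(text.lower().split())  # normalize case and whitespace
    return {text[i:i + n] for i in range(len(text) - n + 1)}

def jaccard(a, b):
    """Overlap of two shingle sets: |A ∩ B| / |A ∪ B|."""
    return len(a & b) / len(a | b)

# invented examples: a template comment, a word-swapped variant, an unrelated one
template = "The FCC should repeal the net neutrality rules imposed in 2015."
variant = "The FCC must repeal the net neutrality regulations imposed in 2015."
unrelated = "I strongly support keeping the existing open internet protections."

s_t, s_v, s_u = (shingles(t) for t in (template, variant, unrelated))
print(jaccard(s_t, s_v), jaccard(s_t, s_u))  # variant scores far higher
```

Clustering comments whose pairwise similarity exceeds a threshold surfaces the template families that a duplicated astroturfing campaign leaves behind.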
The College Kids Doing What Twitter Won't — www.wired.com
When Twitter does not rid its platform of bots, others have to do the work. This article describes how two Berkeley students created an app that identifies bots on Twitter.
Twitter Has Suspended Another 45 Suspected Propaganda Accounts After They Were Flagged By BuzzFeed News — www.buzzfeed.com
For identifying bots, often only basic analysis is necessary. This article details how BuzzFeed News identified 45 suspect accounts by analyzing their interactions with, and retweets of, other suspect accounts in Brexit-related tweets. It shows that we need better ways to automatically identify such accounts.
Talking about fake news... What else can't we trust?
Bias, interpretability, and reproducibility
MSR's Eric Horvitz reflects on gaining a meaningful understanding of automated decision-making at the Berkeley Center for Law & Technology Privacy Law Forum.
Fairness Measures - Detecting Algorithmic Discrimination — fairness-measures.org
A website that provides fairness benchmarking tools for ML. It has pointers to a series of datasets corresponding to various fields and applications (e.g., finance, law, and human resources) and code implementing a series of measures introduced in the literature to analyze and quantify discrimination.
Hooray for Hollywood? New tool reveals gender bias in movie scripts — news.cs.washington.edu
Researchers at the University of Washington develop methods to measure the sometimes subtle biases in how men and women are portrayed on the big screen -- to increase our understanding of how language shapes our perception of gender roles. A demo is available here.
Some Opinions on Reproducibility in ML — drive.google.com
Hugo Larochelle's talk at the ICML 2017 Reproducibility Workshop that provides thoughts on what reproducibility in ML should mean and what we should aim for as a community.
The Doctor Just Won’t Accept That!
An opinion piece by Zachary Lipton, to be presented at the Interpretable ML Symposium at NIPS 2017, that calls for giving real-world problems and their respective stakeholders greater consideration.
More blog posts and articles
Helpful advice on how to conduct research from reddit — www.reddit.com
A very helpful reddit comment that provides a systematic breakdown of how to go about conducting ML research.
Expressivity, Trainability, and Generalization in Machine Learning — blog.evjang.com
Eric Jang on the three main types of areas to which ML papers contribute and what contributions in each area signify.
Understanding the Mixture of Softmaxes — smerity.com
Stephen Merity introduces one of the most recent innovations in language modeling and describes what major flaw in the traditional softmax it addresses, both theoretically and experimentally.
Understanding Medical Conversations — research.googleblog.com
A short blog post that shows that it is possible to build accurate automatic speech recognition models for transcribing medical conversations.
Alexa Prize finalists — developer.amazon.com
The finalists for the 2017 Alexa Prize are from the Czech Technical University in Prague, the University of Washington, and Heriot-Watt University in Edinburgh. The teams outperformed their competitors in real-life conversations with Alexa users from July 1 to August 15.
AI’s sharing economy: Why Microsoft creates publicly available datasets and metrics — blogs.microsoft.com
A blog post by Microsoft about why it creates publicly available datasets and metrics.
Industry insights
The Mummy Effect: Bridging the gap between academia and industry — rare-technologies.com
Radim Rehurek talks about the "Mummy Effect", a source of frustration that should feel familiar to every ML/NLP practitioner: a research paper looks amazing, outperforms the state-of-the-art, and might even have a big lab behind it. You implement it and discover all the corners the authors cut and all the carefully omitted assumptions. The whole elaborate thing crumbles to dust upon touch -- like an Egyptian mummy.
A Year After Pledging Openness, Apple Still Falls Behind On AI — www.buzzfeed.com
A year ago, Apple pledged it would engage more with the academic research community. But today, some AI experts say it’s still not enough.
Creative AI landscape — medium.com
An overview of the Creative AI landscape by Luba Elliott.
Paper picks
Question Asking as Program Generation (NIPS 2017)
Rothe et al. propose an interesting, cognitive-science-inspired take on question asking. Their approach treats questions as formal programs that output an answer depending on the state of the world. In particular, they find that producing both informative and complex questions is important in learning to ask human-like questions.
Chen proposes a novel RNN variant that, instead of using the complicated gated interactions of the classic LSTM, employs a small network to embed the input into a latent space. It then requires only one gate to achieve performance comparable to other RNN variants, with improved interpretability and trainability.
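A single-gate cell of the kind described can be sketched as h_t = u_t ⊙ h_{t−1} + (1 − u_t) ⊙ Φ(x_t), where Φ embeds the input and u_t is the lone update gate. The exact parametrization below (a one-layer tanh embedding, gate weights U_h and U_z) is my assumption for illustration, not the paper's specification.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

d_in, d_h = 4, 6
W_phi = rng.standard_normal((d_in, d_h))  # small input-embedding network (one layer here)
U_h = rng.standard_normal((d_h, d_h))     # gate weights on the previous hidden state
U_z = rng.standard_normal((d_h, d_h))     # gate weights on the embedded input
b = np.zeros(d_h)

def step(h_prev, x):
    z = np.tanh(x @ W_phi)                    # embed the input into the latent space
    u = sigmoid(h_prev @ U_h + z @ U_z + b)   # the single update gate
    return u * h_prev + (1.0 - u) * z         # convex blend of old state and new input

h = np.zeros(d_h)
for x in rng.standard_normal((5, d_in)):      # run the cell over a short sequence
    h = step(h, x)
print(h)
```

Because each update is a convex combination of the previous state and a bounded input embedding, the hidden state stays in [−1, 1], which is part of what makes the dynamics easier to interpret.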
Style Transfer in Text: Exploration and Evaluation (AAAI 2018)
Fu et al. propose two methods for style transfer in text. The models are inspired by their counterparts in computer vision and try to learn separate representations for content and style using (you might have guessed) adversarial networks.
Dataset spotlight
FigureQA: an annotated figure dataset for visual reasoning — www.maluuba.com
Maluuba introduces FigureQA, a new dataset composed of figure images -- like bar graphs, line plots, and pie charts -- and question and answer pairs about them.
The Self-dialogue Corpus — github.com
The Self-dialogue Corpus, a collection of 24,165 self-dialogue conversations across 23 topics -- including music, movies, and sports -- collected by the Edina dialogue team as part of the Alexa Prize competition.
Fun and games
When your eyes look like ölő...