TensorFlow Dev Summit 2018; Annotated Transformer; TF-Hub; CS224n reports; Awesome-NLP; Meta-learning; World Models; Neural Baby Talk; Semantic Plausibility
Welcome to the 20th edition of NLP News! Thanks for reading! 🎉
As it's often the visualizations and figures that best describe ideas and stick in our minds, the Paper Picks this time are more visual. Let me know what you think.
Highlights in this edition are: a YouTube playlist of the TensorFlow Dev Summit 2018; tutorials on the Transformer, Pandas DataFrames, text preprocessing, and TF-Hub; CS224n project reports and a curated list of NLP resources; interactive blog posts about meta-learning and World Models; the latest in AI news; and papers about skin-colored emoji, Neural Baby Talk, and semantic plausibility.
Tools and implementations
Code accompanying a NAACL 2018 paper that probes whether RNNs can learn hierarchical syntactic relations (tl;dr: they can).
The second edition of the Conversational Intelligence (ConvAI) Challenge at NIPS 2018 provides a training dataset with more engaging conversations and a simpler evaluation process.
OpenAI announces a contest that tests algorithms on previously unseen video game levels. The contest uses Gym Retro, a new platform integrating classic games into Gym, starting with 30 SEGA Genesis games.
A line-by-line annotated implementation of the Transformer by Alexander Rush (Harvard) using PyTorch.
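The building block at the heart of the Transformer is scaled dot-product attention. As a minimal NumPy sketch (my own illustrative code, not taken from the annotated implementation):

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V):
    """Attention(Q, K, V) = softmax(Q K^T / sqrt(d_k)) V."""
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)  # (n_queries, n_keys) similarity scores
    # Row-wise softmax (subtract the max for numerical stability)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

# Toy example: two queries, two keys/values in a 2-d space
Q = np.array([[1.0, 0.0], [0.0, 1.0]])
K = np.array([[1.0, 0.0], [0.0, 1.0]])
V = np.array([[1.0, 2.0], [3.0, 4.0]])
out, w = scaled_dot_product_attention(Q, K, V)
```

Each query attends most strongly to the key it is most similar to; the annotated post builds multi-head attention and the full encoder-decoder on top of exactly this operation.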
An overview of optimization algorithms such as gradient descent, Newton's method, and quasi-Newton methods, with some nice interactive visualizations.
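To make the contrast between the two families concrete, here is a tiny sketch minimizing f(x) = (x - 3)² (an illustrative example of mine, not from the post): gradient descent takes many small steps against the gradient, while Newton's method rescales each step by the curvature and solves a quadratic in one step.

```python
def f_prime(x):          # derivative of f(x) = (x - 3)**2
    return 2 * (x - 3)

def f_double_prime(x):   # second derivative (constant for a quadratic)
    return 2.0

def gradient_descent(x, lr=0.1, steps=100):
    for _ in range(steps):
        x -= lr * f_prime(x)                 # step against the gradient
    return x

def newton(x, steps=1):
    for _ in range(steps):
        x -= f_prime(x) / f_double_prime(x)  # rescale step by curvature
    return x
```

Starting from x = 10, `newton` lands on the minimum x = 3 in a single step, while `gradient_descent` approaches it geometrically over many iterations.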
A Jupyter notebook collecting the most important DataFrame functionality in one place.
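For a taste of the core operations such a notebook covers, here is a short sketch of selecting, filtering, and grouping (the data is made up for illustration):

```python
import pandas as pd

df = pd.DataFrame({
    "token": ["the", "cat", "sat", "the"],
    "count": [2, 1, 1, 2],
})

frequent = df[df["count"] > 1]               # boolean filtering
totals = df.groupby("token")["count"].sum()  # aggregation by key
```

Filtering keeps the two rows for "the", and the groupby sums counts per token.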
Data preprocessing is unsexy but can make a big difference. This post is a useful walkthrough of common ways to preprocess text data in Python.
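The usual first steps (lowercasing, stripping punctuation, tokenizing, removing stopwords) fit in a few lines of standard-library Python; the stopword list here is a tiny stand-in for a real one:

```python
import re
import string

STOPWORDS = {"the", "a", "an", "is", "and", "of"}  # tiny illustrative list

def preprocess(text):
    text = text.lower()  # normalize case
    # Replace every punctuation character with a space
    text = re.sub(f"[{re.escape(string.punctuation)}]", " ", text)
    tokens = text.split()  # whitespace tokenization
    return [t for t in tokens if t not in STOPWORDS]

tokens = preprocess("The quick, brown fox is jumping!")
```

Real pipelines swap in a proper tokenizer and stopword list (e.g. from NLTK or spaCy), but the shape of the code stays the same.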
A short tutorial on how to build a simple text classifier with the new TF-Hub, a platform for sharing reusable machine learning resources as pre-trained modules.
The project reports of this year's Stanford CS224n Natural Language Processing with Deep Learning course. A great resource to get some inspiration about interesting research topics. Most of the reports are on Question Answering with the SQuAD dataset, but applications range from speech synthesis to semantic parsing to visual question answering.
This GitHub page aggregates many useful resources for NLP. I particularly like the resources for NLP in non-English languages, e.g. Korean, Arabic, or Thai.
Blog posts and articles
Do you need some inspiration for next April 1? Here's a list of April Fool's pranks written by a neural network.
A visual introduction to meta-learning and how it can be applied to Natural Language Processing.
The interactive version of World Models, with demos of agents hallucinating dreams of reinforcement learning environments.
Yandex’s head of machine intelligence says Microsoft’s Tay showed how important it is to fix AI problems fast.
Even though earbud translators are far from perfect, they are still fairly reliable for travelers looking for very short answers to very short questions.
In a Q&A with WIRED, French President Emmanuel Macron describes his plans to enhance the country's AI efforts and differentiate them from those in the US.
Facebook has hired Luke Zettlemoyer, a research scientist who led AI2's NLP efforts, to expand its AI research team in Seattle, adding to an already-fierce competition for talent.
Google DeepMind announces a new AI research presence in Paris as part of Google's effort to expand its research presence in Europe.
Jeff Dean, seasoned engineer and head of Google Brain, takes over as head of Google's AI.
Microsoft announces the Microsoft Professional Program in AI, a new publicly available learning track teaching artificial intelligence skills.
Textio, maker of AI-powered tools to augment business writing, announces a new product to help recruiters reach out to job candidates by crafting better recruiting messages.
Speech is the interface of the future; in many situations, such as on loud factory floors, speech is infeasible, however. MIT's AlterEgo interfaces with devices through 'silent speech' and sheds light on what interaction might look like in situations where we would prefer not to speak aloud.