NLP News
Tensorflow DevSummit 2018; Annotated Transformer; TF-Hub; CS224n reports; Awesome-NLP; Meta-learning; World Models; Neural Baby Talk; Semantic Plausibility
By Sebastian Ruder • Issue #20
Welcome to the 20th edition of NLP News! Thanks for reading! 🎉
As it’s often the visualizations and figures that best describe ideas and stick in our minds, the Paper Picks this time are more visual. Let me know what you think.
Highlights in this edition are: a Youtube Playlist of the Tensorflow DevSummit 2018; tutorials on the Transformer, Pandas Dataframes, text preprocessing, and TF-Hub; CS224n project reports and a curated list of NLP resources; interactive blog posts about meta-learning and World Models; the latest in AI news; and papers about skin-colored emoji, Neural Baby Talk, and semantic plausibility.

Tensorflow DevSummit 2018 Playlist: 20 videos from Eager Execution to how Coca Cola uses Tensorflow
Tools and implementations
Efficient Neural Architecture Search (ENAS) via Parameter Sharing
Colorless green recurrent networks dream hierarchically
Competitions
The Conversational Intelligence Challenge 2 (ConvAI2)
Retro Contest
Tutorials
The Annotated Transformer
How to build a simple text classifier with TF-Hub
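For a taste of what the TF-Hub tutorial covers, here is a minimal sketch of a text classifier built on a pre-trained TF-Hub sentence embedding (my own toy example using TF 1.x estimator APIs, not the tutorial's exact code; the module URL and data are placeholders):

import numpy as np
import tensorflow as tf
import tensorflow_hub as hub

# A pre-trained sentence embedding module becomes an ordinary feature column.
embedded_text = hub.text_embedding_column(
    key="sentence",
    module_spec="https://tfhub.dev/google/nnlm-en-dim128/1")

# Any canned estimator can then consume the embedded text.
estimator = tf.estimator.DNNClassifier(
    hidden_units=[64],
    feature_columns=[embedded_text],
    n_classes=2)

# Tiny toy dataset purely for illustration.
train_input_fn = tf.estimator.inputs.numpy_input_fn(
    {"sentence": np.array(["great movie", "terrible movie"])},
    np.array([1, 0]),
    batch_size=2, num_epochs=None, shuffle=True)

estimator.train(input_fn=train_input_fn, steps=100)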
Resources
Blog posts and articles
World Models
Microsoft’s neo-Nazi sexbot was a great lesson for makers of AI assistants
Industry insights
Earbud Translators: Not Perfect, Still Handy
Emmanuel Macron Q&A: France's President Discusses Artificial Intelligence Strategy
Facebook is beefing up AI research team in Seattle
Investing in France’s AI Ecosystem
Google veteran Jeff Dean takes over as company's AI chief
Aiming to fill skill gaps in AI, Microsoft makes training courses available to the public
Textio expands its AI to help humans craft better recruiting messages
MIT's AlterEgo headset reads your face to see words you're only saying in your mind
Paper picks
Self-Representation on Twitter Using Emoji Skin Color Modifiers (ICWSM 2018): Robertson et al. show that Twitter users use skin tone emoji modifiers mainly for self-representation, i.e. people with darker skin tones in their profile photos tend to use darker-skinned emojis.
Neural Baby Talk (CVPR 2018): Caption generation is one of the most compelling combinations of computer vision and NLP, but existing methods generally don't ground captions in elements of the image. The proposed approach first generates a sentence template with slots tied to specific image regions and then fills in those slots using object detectors. This also allows swapping in domain-specific object detectors, which can produce more appropriate captions, as in the figure from the paper.
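To make the template-and-slots idea concrete, here is a toy sketch of the slot-filling step (my own illustration, not the authors' code; the template, region names, and detector labels are invented):

# The captioning model emits a template whose slots point at image regions;
# an object detector then supplies the "visual words" that fill the slots.
def fill_template(template_tokens, region_to_label):
    caption = []
    for token in template_tokens:
        if token.startswith("<region-") and token.endswith(">"):
            caption.append(region_to_label[token])  # visual word from the detector
        else:
            caption.append(token)                   # ordinary textual word
    return " ".join(caption)

template = ["A", "<region-1>", "is", "sitting", "at", "a", "<region-2>"]

# Swapping in a domain-specific detector changes the caption
# without retraining the language model.
generic_detector = {"<region-1>": "teddy bear", "<region-2>": "table"}
print(fill_template(template, generic_detector))  # A teddy bear is sitting at a table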
Modeling Semantic Plausibility by Injecting World Knowledge (NAACL-HLT 2018): Research has studied selectional preference, i.e. which events typically occur together (e.g. man-swallow-candy is typical, while man-swallow-paintball is not). What has received less attention is semantic plausibility, i.e. which events are plausible given the properties of the involved entities: man-swallow-candy and man-swallow-paintball are both plausible, while man-swallow-desk is not. Wang et al. create a dataset for this task and show how world knowledge about the relative properties of objects can be incorporated into a model to improve performance.
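As a rough illustration of what injecting world knowledge can look like (again my own toy example, not the paper's model), one can attach coarse physical attributes to entities and let a plausibility check compare them; the size values below are invented:

# Ordinal size bins standing in for world knowledge about relative object size.
SIZE = {"man": 5, "candy": 1, "paintball": 1, "desk": 6}

def swallow_plausible(agent, patient):
    """Swallowing is implausible unless the patient is smaller than the agent."""
    return SIZE[patient] < SIZE[agent]

for obj in ["candy", "paintball", "desk"]:
    print(f"man-swallow-{obj}:", swallow_plausible("man", obj))
# candy -> True, paintball -> True, desk -> False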