June 11 · Issue #25
Hi all, we cracked 3,000 subscribers! Woohoo! 🎉 Thanks to all of you for reading! This edition contains: slides of the Conversational AI tutorial and on using RNNs for particle physics; an in-depth walk-through of the InfoGAN paper, an NLP Coursera course, and an NLP book (by Jacob Eisenstein); articles about “killer robots”; posts on Code2Pix and on using text descriptions for interpretability; Google’s AI principles and how Alexa chooses which skill to use; and new research papers (one on why a map between two monolingual embedding spaces is non-linear and two on relational reasoning).
|
|
Alexa: remind me to feed the baby
|
|
Deep Learning for Conversational AI
|
RNNs and Beyond
NYU particle physicist Kyle Cranmer talks about RNNs, attention, and sequential generative models with an application to high energy physics.
|
|
InfoGAN · Depth First Learning
A detailed guide to the InfoGAN paper, with required reading, practical and theoretical exercises, and Colab notebooks for reproducing the experiments.
|
Natural Language Processing | Coursera
A Natural Language Processing course on Coursera covering a wide range of NLP tasks, from basic to advanced, including sentiment analysis, summarization, and dialogue state tracking.
|
Natural Language Processing book by Jacob Eisenstein
Jacob Eisenstein, professor at Georgia Tech, shares a draft of his NLP book. The book covers both foundations and more recent directions in 500+ pages.
|
Flying Car Nanodegree Program
If you’re already tired of self-driving cars and looking for the next big thing, why not apply to Udacity’s just-announced Flying Car Nanodegree program?
|
|
MIT fed an AI data from Reddit, and now it only thinks about murder
|
Mark Zuckerberg, Elon Musk and the Feud Over Killer Robots
As the tech moguls disagree over the risks presented by something that doesn’t exist yet, all of Silicon Valley is learning about unintended consequences of A.I.
|
|
Notes on EAMT 2018
Noe Casas summarizes the Conference of the European Association for Machine Translation, EAMT 2018, which took place over three days focusing on research, industry, and translators.
|
Code2Pix - Deep Learning Compiler for Graphical User Interfaces
A blog post about creating a model that generates graphical user interfaces from textual descriptions.
|
Explain yourself, machine. Producing simple text descriptions for AI interpretability.
Humans explain their decisions with words. In this article, radiologist Luke Oakden-Rayner suggests AI systems should do the same.
|
|
git pro tip: stick to a consistent style for git commit messages. I suggest starting them with “In which our hero”. E.g., “In which our hero implements batch processing”
|
|
AI at Google: our principles
Google outlines the seven principles it will use to guide its work in AI going forward. It also enumerates AI applications it will not pursue, such as weapons or technology that will likely cause harm.
|
The Scalable Neural Architecture behind Alexa’s Ability to Arbitrate Skills
This post discusses the neural architecture that Alexa uses to find the most relevant skill to handle a natural-language utterance (presented in this ACL 2018 paper).
|
|
Characterizing Departures from Linearity in Word Translation (ACL 2018)
Nakashole and Flauger show that mapping two monolingual word embedding spaces in different languages onto each other requires a non-linear transformation (current approaches all use a linear one). This is similar to the finding in our ACL 2018 paper that monolingual embedding spaces are not approximately isomorphic. The authors illustrate this by locally approximating the map with a separate linear map for each neighbourhood and find that these local maps vary across the embedding space. In addition, they find that the amount of variation is tightly correlated with the distance between the neighbourhoods.
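To make the local-approximation idea concrete, here is a minimal sketch (not the authors’ code, with random matrices standing in for dictionary-aligned monolingual embeddings): it fits a separate least-squares linear map per neighbourhood and then compares how much the maps differ as the neighbourhoods move apart.

```python
import numpy as np

def local_linear_maps(X, Y, pivots, k=100):
    """For each pivot word, fit a least-squares linear map from the source
    space X to the target space Y using only the pivot's k nearest
    neighbours in X. Returns one map matrix and one centroid per pivot."""
    maps, centroids = [], []
    for p in pivots:
        dists = np.linalg.norm(X - X[p], axis=1)
        nbrs = np.argsort(dists)[:k]
        # W minimises ||X[nbrs] @ W - Y[nbrs]||_F
        W, *_ = np.linalg.lstsq(X[nbrs], Y[nbrs], rcond=None)
        maps.append(W)
        centroids.append(X[nbrs].mean(axis=0))
    return maps, centroids

# Toy data; in practice X and Y would be monolingual embeddings whose rows
# are aligned via a bilingual dictionary.
rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 50))
Y = rng.normal(size=(1000, 50))
maps, cents = local_linear_maps(X, Y, pivots=range(0, 1000, 100))

# The paper reports that the difference between local maps grows with the
# distance between their neighbourhoods; this prints both quantities.
for i in range(1, len(maps)):
    print(np.linalg.norm(cents[i] - cents[0]), np.linalg.norm(maps[i] - maps[0]))
```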
|
Relational inductive biases, deep learning, and graph networks (arXiv)
An interesting review of graph-based neural networks from a who’s who of Deep Learning researchers. The review could have been a bit deeper IMO; instead, it chooses to cast many existing approaches as instances of a “new” framework. While it gives a nice summary of existing relational inductive biases in neural networks, I was missing a deeper insight with regard to how to perform relational reasoning, e.g. for text. It broaches important questions such as how to generate a graph from an input, but doesn’t answer them. An interesting—if not novel—observation is that self-attention (as in the Transformer) performs some form of relational reasoning.
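For readers who haven’t seen it framed this way, here is a minimal NumPy sketch (a toy illustration, not the paper’s graph-network code) of why self-attention can be read as relational reasoning: every entity scores a pairwise interaction with every other entity and aggregates their values accordingly.

```python
import numpy as np

def self_attention(entities, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a set of entity vectors.
    The score matrix holds one value per (entity, entity) pair, i.e. a fully
    connected relational structure, and the output mixes values accordingly."""
    Q, K, V = entities @ Wq, entities @ Wk, entities @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[-1])           # pairwise relation scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over entities
    return weights @ V

# Toy example: 5 entities (objects, tokens, graph nodes) with 16-dim features.
rng = np.random.default_rng(0)
E = rng.normal(size=(5, 16))
Wq, Wk, Wv = (rng.normal(size=(16, 16)) for _ in range(3))
print(self_attention(E, Wq, Wk, Wv).shape)  # (5, 16)
```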
|
Relational recurrent neural networks (arXiv)
Not to be confused with Recurrent relational networks (thank God for descriptive paper titles). Santoro et al. propose to model the interaction between different memory cells in a memory-based neural network (in contrast to previous work, which only models the interaction between the current state and each memory). They do this using multi-head dot-product attention (as used in the Transformer paper) and evaluate on different tasks, including language modelling, where they show improvements.
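Below is a highly simplified sketch of that core idea (memory slots attending over one another and the new input with multi-head dot-product attention); the actual model also uses gating and per-slot MLPs, which are omitted here, and all shapes are assumptions for illustration.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def relational_memory_step(memory, x, head_params):
    """One simplified update: each memory slot attends over all slots plus the
    current input, so slots interact with each other rather than only with the
    input. `head_params` is a list of (Wq, Wk, Wv) projections, one per head."""
    slots = np.vstack([memory, x[None, :]])        # memory slots + new input
    heads = []
    for Wq, Wk, Wv in head_params:                 # multi-head attention
        Q = memory @ Wq                            # queries come from memory
        K, V = slots @ Wk, slots @ Wv              # keys/values include input
        A = softmax(Q @ K.T / np.sqrt(K.shape[-1]))
        heads.append(A @ V)
    return memory + np.concatenate(heads, axis=-1) # residual memory update

# Toy example: 4 memory slots of size 32, one 32-dim input, 4 heads of size 8.
rng = np.random.default_rng(0)
mem = rng.normal(size=(4, 32))
x = rng.normal(size=32)
head_params = [tuple(rng.normal(size=(32, 8)) for _ in range(3)) for _ in range(4)]
print(relational_memory_step(mem, x, head_params).shape)  # (4, 32)
```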
|