Deep Learning Indaba 2018 edition 🌍
Hey all,
I spent the past week at the Deep Learning Indaba 2018, hosted in Stellenbosch in beautiful South Africa 🇿🇦. This newsletter features some highlights and useful resources from the event, as well as the regular selection of datasets, articles, and research papers.
With travels and ongoing research, I've recently been struggling to find enough time to write the newsletter. Rather than send you a more frequent but watered-down version, I've decided to switch to a monthly format that will enable me to provide more compelling content. I hope you'll stay with me 🤗. If not, you can unsubscribe at the bottom of the page. I'll be sad to see you go 😢.
I really appreciate your feedback, so let me know what you love ❤️ and hate 💔 about this edition. Simply hit reply on the issue.
If you were referred by a friend, click here to subscribe. If you enjoyed this issue, give it a tweet 🐦.
Deep Learning Indaba 2018 🌍
Indaba means 'gathering' or 'meeting' in Zulu. The Deep Learning Indaba 2018 was the second such gathering. It aimed to strengthen African participation and contribution to AI and ML, and brought together more than 500 students and researchers from all over Africa and beyond.
While the lack of diversity in ML has many symptoms, local initiatives like the Indaba help foster a more inclusive community in concrete ways: more than 200 attendees who would otherwise have been unable to attend were provided with funding, and research in Africa was highlighted with many awards, such as a NIPS travel grant. Read this open letter to the Indaba team for an overview of the other ways in which the Indaba succeeded.
I strongly believe that a community that is as inclusive and diverse as possible is necessary for ML and NLP to have the largest positive impact, so I was fortunate to have the opportunity to attend the event as a mentor this year. The event was superbly organized by a team of amazing people and featured many leading ML researchers as speakers, such as Nando de Freitas, Moustapha Cisse, Kyunghyun Cho, Shakir Mohamed, David Silver, and Jeff Dean. I was thoroughly amazed by the quality of the posters displayed each day, which showcased the remarkable diversity of ML research in Africa, and I enjoyed meeting so many smart and inspiring minds from across the continent.
Jeff Dean and David Silver talks 👨‍🔬
A highlight was seeing pioneers such as Jeff Dean and David Silver present. One thing that particularly struck me during Jeff Dean's talk was his emphasis on finding inspiration by exposing oneself to diverse research. He credited being at the forefront of so many different fields to having worked with, and having been inspired by, experts across many different teams and subject areas. In addition, Jeff said that he would rather read ten papers superficially than one paper in depth, so as to gather as much inspiration as possible; one can always go back and read a paper more deeply (he conceded that reading 100 abstracts might be even better).
David Silver gave the first-ever presentation of a new talk on 10 Principles for Deep Reinforcement Learning. I tweeted the slides and a summary. Many of the principles also apply to ML in general. A recurring theme of his talk was that we should be interested in how our agent scales, that is, how it performs in the limit as the number of samples/experiences goes to infinity. While I agree that measuring scaling behaviour is important, I'd argue that in most cases we should be more interested in how our agent performs given a small number of samples. As humans, we don't have access to an infinite number of samples; and while we can obtain an unlimited number of samples in simulation, generalization from few samples is what matters in the real world. David Silver suggested that agents might instead learn from imagined experience sampled from a learned world model; however, this requires a very accurate model, which is a challenge in its own right (see the sketch below).
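To make the "imagined experience" idea concrete, here is a minimal tabular Dyna-Q-style sketch in Python. Dyna-Q is the classic algorithm that interleaves learning from real transitions with planning updates replayed from a learned model; the `env.step` interface and all hyperparameters below are illustrative assumptions, not details from the talk.

```python
import random
from collections import defaultdict

# Minimal Dyna-Q sketch: mix real experience with "imagined"
# transitions replayed from a learned (deterministic, tabular) model.
ALPHA, GAMMA, EPSILON, N_PLANNING = 0.1, 0.95, 0.1, 10

Q = defaultdict(float)   # Q[(state, action)] -> estimated value
model = {}               # model[(state, action)] -> (reward, next_state)

def epsilon_greedy(state, actions):
    if random.random() < EPSILON:
        return random.choice(actions)
    return max(actions, key=lambda a: Q[(state, a)])

def q_update(s, a, r, s_next, actions):
    best_next = max(Q[(s_next, a2)] for a2 in actions)
    Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])

def dyna_q_step(env, state, actions):
    # 1. Act in the real environment and learn from the real transition.
    action = epsilon_greedy(state, actions)
    reward, next_state = env.step(state, action)  # hypothetical env API
    q_update(state, action, reward, next_state, actions)
    # 2. Remember the transition in the learned model.
    model[(state, action)] = (reward, next_state)
    # 3. Plan: replay imagined transitions sampled from the model.
    for _ in range(N_PLANNING):
        s, a = random.choice(list(model))
        r, s2 = model[(s, a)]
        q_update(s, a, r, s2, actions)
    return next_state
```

The planning loop is exactly where model accuracy matters: if `model` disagrees with the real environment, the agent happily "improves" against its own hallucinations.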
Frontiers of Natural Language Processing 🚩
Herman Kamper and I organized a parallel session on the Frontiers of Natural Language Processing, which was a lot of fun. I gave a 45-minute review of some milestones in the recent (~10-year) history of NLP, and Herman then MC'd a panel discussion with four amazing panellists (and me) on four big open problems in NLP. What I found particularly thought-provoking were the responses we received from more than 30 NLP experts, whom we had asked some big questions in the weeks leading up to the event. We summarized the responses and presented them during the session.
The slides of the session and a video recording will be put online soon. I'll also write a blog post summarizing the contents of the session. Videos of the other sessions of the Indaba should also become available later.
Practicals 🏋️
Another highlight of the Indaba was the practicals (covering everything from feed-forward neural networks to reinforcement learning), which enabled attendees to apply the methods they had learned during the lectures. The practicals were created by the awesome Stephan Gouws and Avishkar Bhoopchand, and all of them are available as Jupyter notebooks. You can also run them directly in Google Colaboratory (which is super useful) by following these instructions (taken from the Indaba website):
Ensure you are logged in to your Google account.
Click on the "Github" tab.
Type deep-learning-indaba into the "Enter a GitHub URL or search by organization or user" box and press ENTER.
Go through the practicals at your leisure.
How to write a great research paper 📝
Nando de Freitas, Ulrich Paquet, Stephan Gouws, Martin Arjovsky, and Kyunghyun Cho gave a session on how to write a great research paper. They discussed Simon Peyton Jones' seven simple suggestions:
Don’t wait: write.
Identify your key idea.
Tell one story.
Nail your contributions to the mast.
Related work: later.
Put your readers first.
Listen to your readers.
Each of the speakers also shared their own lessons learned from their research experience, which are worth a read for both beginners and experts.
Datasets 💻
Google Dataset Search 🔎 Finding datasets for evaluating your research and testing your algorithms just got a whole lot easier. There is no excuse anymore to evaluate on only one dataset.
Conceptual Captions 📺 Conceptual Captions is a new dataset for image captioning consisting of ~3.3 million image/caption pairs, created by automatically extracting and filtering image caption annotations from billions of web pages. See this blog post for more details.
RecipeQA 🥘 RecipeQA is the first QA dataset containing step-by-step information from how-to tutorials. It consists of over 36k question-answer pairs automatically generated from 20k cooking recipes.
More blog posts and articles
An Engineer’s Guide to Meaningful Work 👩‍💻 François Chollet, creator of Keras, shares useful software development practices. One highlight:
Productivity boils down to high-velocity decision-making and a bias for action. This requires a) good intuition, which comes from experience, so as to make generally correct decisions given partial information, b) a keen awareness of when to move more carefully and wait for more information, because the cost of an incorrect decision would be greater than the cost of the delay.
Can Mark Zuckerberg Fix Facebook Before It Breaks Democracy? 🏛 A nuanced New Yorker profile of Mark Zuckerberg and his approach to tackling the fake news problem.
AI and the news, an open challenge 📰 This open challenge supported by the John S. and James L. Knight Foundation, the William and Flora Hewlett Foundation, and Reid Hoffman, among others, seeks proposals to address four problems at the intersection of AI and the news:
Governing the Platforms.
Stopping Bad Actors.
Empowering Journalism.
Reimagining AI and News.
Anyone can submit a proposal, and successful proposals will receive between $75k and $200k. The deadline is October 12.
Paper picks
Rational Recurrences (EMNLP 2018) This paper shows that some recurrent neural networks can be expressed as weighted finite-state automata (WFSAs). In particular, the authors define rational recurrences as recurrent hidden-state update functions that can be written as the forward calculation of a finite set of WFSAs. They show that new RNNs can be derived by combining WFSAs, providing more evidence that inspiration from classic models is useful for designing neural networks. The sketch below gives the flavor of the correspondence.
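To illustrate the RNN/WFSA correspondence, here is a minimal NumPy sketch: the forward score of a toy two-state WFSA coincides with a one-dimensional gated recurrence. The sigmoid/tanh parameterization of the transition weights is an illustrative assumption, not the paper's exact construction.

```python
import numpy as np

rng = np.random.default_rng(0)
d, T = 4, 6                        # toy embedding dim, sequence length
x = rng.normal(size=(T, d))        # a toy input sequence
W_f, W_u = rng.normal(size=(d,)), rng.normal(size=(d,))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# RNN view: a one-dimensional gated recurrence, c_t = f_t * c_{t-1} + u_t.
c = 0.0
for t in range(T):
    f_t = sigmoid(x[t] @ W_f)      # "forget"/self-loop gate
    u_t = np.tanh(x[t] @ W_u)      # input value
    c = f_t * c + u_t

# WFSA view: two states, a start state q0 (self-loop weight 1) and an
# accepting state q1. Reading token t either moves q0 -> q1 with weight
# u_t or stays in q1 with weight f_t. The forward weight of q1 follows
# exactly the same recurrence as c above.
alpha = np.array([1.0, 0.0])       # forward weights over states (q0, q1)
for t in range(T):
    f_t = sigmoid(x[t] @ W_f)
    u_t = np.tanh(x[t] @ W_u)
    alpha = np.array([alpha[0],                          # stay in q0
                      alpha[0] * u_t + alpha[1] * f_t])  # enter/stay in q1

print(np.isclose(c, alpha[1]))     # True: the two computations coincide
```

The paper's point is the general version of this observation: characterizing which gated RNN updates decompose into such WFSA forward passes, and building new ones by combining automata.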
Interpretation of Natural Language Rules in Conversational Machine Reading (EMNLP 2018) The authors propose a new QA task that requires both the interpretation of natural language rules and the application of background knowledge to answer questions such as "Can I work abroad?" after reading a website with government regulations. They collect 32k instances based on real-world rules and crowd-generated questions and scenarios, and evaluate both rule-based and ML models in this challenging setting.