Happy 2020 everyone 🎉! May your models be sample-efficient, your generalisation error be low, and your inter-annotator agreement be high!
This newsletter is packed: an update on new datasets in NLP progress, retrospectives of the past decade and year, prognoses for this year, new NLP courses, information about independent research initiatives, and ML-related interviews.
What I’m particularly excited about is the extensive list of resources, ranging from useful references on interpretability and NER to databases with a huge number of NLP datasets, as well as repositories and tools that help you organise and learn about important concepts in NLP. Lastly, as usual you’ll find a selection of exciting tools, articles, and papers.
Contributions 💪 If you have written or come across something that would be relevant to the community, hit reply on the issue so that it can be shared more widely.
I really appreciate your feedback, so let me know what you love ❤️ and hate 💔 about this edition. Simply hit reply on the issue.
If you were referred by a friend, click here to subscribe. If you enjoyed this issue, give it a tweet.