Happy 2019, everyone 🎉! May your losses decrease consistently, your models generalize to out-of-domain data, and your annotators have high agreement!

This newsletter covers slides on challenges in few-shot learning; predictions for 2019; JAX: autograd + XLA for numpy (see the sketch below); lots of resources, such as a guide to making models explainable, an MT reading list, a curriculum on the foundations of ML, and key takeaways from the AI Index 2018; interviews and articles about pioneers in ML and NLP, such as Karen Sparck Jones and Demis Hassabis; lots of interesting articles; papers on analysis methods and satire; and exciting papers rejected from ICLR 2019.

I really appreciate your feedback, so let me know what you love ❤️ and hate 💔 about this edition. Simply hit reply to this issue.

If you were referred by a friend, click here to subscribe. If you enjoyed this issue, give it a tweet 🐦.
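Since JAX's "autograd + XLA for numpy" tagline is compact, here is a minimal sketch of what it means in practice. The loss function, shapes, and names below are illustrative examples of mine, not from any of the linked posts:

```python
import jax.numpy as jnp
from jax import grad, jit

# Write ordinary numpy-style code...
def loss(w, x, y):
    pred = jnp.dot(x, w)              # linear model
    return jnp.mean((pred - y) ** 2)  # mean squared error

# ...then differentiate it with autograd and compile it with XLA.
loss_grad = jit(grad(loss))  # gradient w.r.t. the first argument, w

w = jnp.zeros(3)
x = jnp.ones((5, 3))
y = jnp.ones(5)
w = w - 0.1 * loss_grad(w, x, y)  # one gradient-descent step
```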