Hi all,

I hope you've had a good start to the new year. This newsletter is a bit delayed due to a confluence of conference deadlines. Whether you're struggling with conference deadlines or cabin fever, I hope this newsletter offers some respite.

On another note, Revue, the platform that I've been using for this newsletter, has been acquired by Twitter. In case this newsletter will also be restricted to 280 characters in the future, I will provide you with a language model pre-trained on past editions that will auto-complete the rest. Of course, such a language model—if it existed—has not been used to write this or previous editions or previous editions or previous editions or previous editions </SEP>

This newsletter also starts with a new segment, "How did we get here?", where I try to piece together as best as I can the progress from the inception of an NLP task to today, mostly by following the guidance of people much more knowledgeable in the respective area than me. Let me know if this is something you'd like to see for other NLP tasks in the future and—if so—what tasks you'd be particularly interested in.

In other news, the NLP chapter of ELLIS (the European Laboratory for Learning and Intelligent Systems) will host a workshop on Open Challenges and Future Directions of NLP on 24–25 February. The five keynotes will be live-streamed. As far as I'm aware, the remaining sessions are restricted to a smaller audience. I will aim to share some highlights later on.

I really appreciate your feedback, so let me know what you love ❤️ and hate 💔 about this edition. Simply hit reply on the issue.

If you were referred by a friend, click here to subscribe. If you enjoyed this issue, give it a tweet 🐦.
IE—how did we get here?, Large LMs, The human…