A highlight was seeing pioneers such as Jeff Dean and David Silver present. One thing that particularly struck me during Jeff Dean's talk was that he emphasized getting inspiration by exposing oneself to diverse research. He credited being at the forefront of so many different fields to having worked with, and having been inspired by, experts across many different teams and subject areas. In addition, Jeff expressed that he'd rather read ten papers superficially than one paper in depth in order to get as much inspiration as possible, as one can always go back and read a paper more deeply (he conceded that reading 100 abstracts might be even better).
David Silver gave the first presentation of his new talk on 10 Principles for Deep Reinforcement Learning. I tweeted the slides and a summary. Many of the principles are also applicable to ML in general. A recurrent theme of his talk was that we should be interested in how our agent scales, that is, how it performs in the limit as the number of samples/experiences goes to infinity. While I agree with the importance of measuring scaling behaviour, I'd argue that in most cases we should be more interested in measuring how our agent scales with a small number of examples. As humans, we don't have access to an infinite number of samples; while we can obtain an infinite number of samples in simulation, generalization from few samples is key in the real world.
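To make this concrete, here is a minimal sketch of what evaluating an agent's scaling behaviour across small sample budgets could look like. The `make_agent`, `train_agent`, and `evaluate` functions are hypothetical placeholders rather than any specific library's API:

```python
# Sketch: compare agents by their learning curves at small sample budgets
# rather than only by asymptotic (infinite-sample) performance.

SAMPLE_BUDGETS = [100, 1_000, 10_000, 100_000]

def learning_curve(make_agent, train_agent, evaluate, seeds=range(5)):
    """Mean evaluation score per sample budget, averaged over seeds."""
    curve = {}
    for budget in SAMPLE_BUDGETS:
        scores = []
        for seed in seeds:
            agent = make_agent(seed=seed)           # fresh agent per run
            train_agent(agent, num_samples=budget)  # train on a fixed budget
            scores.append(evaluate(agent))          # held-out performance
        curve[budget] = sum(scores) / len(scores)
    return curve
```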
David Silver suggests that agents might learn from imagined experience, which could be sampled from a world model; however, this requires a very accurate world model, which is its own challenge.
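For illustration, the classic tabular Dyna-Q algorithm captures this idea: real transitions update both the value function and a world model, and additional "imagined" updates replay transitions sampled from that model. This is a generic sketch of the technique, not David Silver's specific proposal; with a learned, approximate model rather than a tabular one, model errors compound, which is exactly the challenge noted above:

```python
import random
from collections import defaultdict

ALPHA, GAMMA, N_IMAGINED = 0.1, 0.99, 10  # step size, discount, planning steps

Q = defaultdict(float)  # (state, action) -> estimated value
model = {}              # (state, action) -> (reward, next_state)

def q_update(s, a, r, s2, actions):
    """One Q-learning backup from a single transition."""
    best_next = max(Q[(s2, a2)] for a2 in actions)
    Q[(s, a)] += ALPHA * (r + GAMMA * best_next - Q[(s, a)])

def observe(s, a, r, s2, actions):
    q_update(s, a, r, s2, actions)  # learn from the real transition
    model[(s, a)] = (r, s2)         # record it in the world model
    for _ in range(N_IMAGINED):     # learn from imagined experience
        (si, ai), (ri, s2i) = random.choice(list(model.items()))
        q_update(si, ai, ri, s2i, actions)
```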