John D. Martin
I am a research scientist at Intel AI.
I recently completed a post-doc with Michael Bowling at the University of Alberta. I obtained my PhD from Stevens Institute of Technology, where I was advised by Brendan Englot. As an undergraduate, I studied physics at the University of Maryland. Earlier in my career, I spent time at DeepMind, Google Brain, Columbia University, and Sikorsky Aircraft. More credentials can be found in my CV.
I want to figure out how computational aspects of intelligence can be understood with reinforcement learning.
For instance: Can reward signals express all that we mean by “goals and purposes”? (The Reward Hypothesis.) What principles might guide the construction of internal environment models? What are useful ways of learning from the experiences generated by an internal model? What limits an RL system’s agency? How should agents prioritize learning updates with a bounded computation budget? What principles support an agent’s ability to learn when its environment persistently changes? How can agents efficiently acquire knowledge from uninterpreted sensory experiences? And how can the findings from such questions be formalized and encoded into programs that solve meaningful problems?
I believe progress in reinforcement learning will lead to a more principled design of intelligent systems and clarify some of our oldest and most fundamental questions about the nature of intelligence.
Much of my research draws inspiration from the work of my peers in statistical machine learning, cognitive science, neuroscience, economics, and philosophy. I often approach problems with a mixture of theory and empirical practice: I like to mathematically formalize philosophical concepts, then form hypotheses that can be rigorously tested with computational experiments.