Human-Centered Autonomy Lab.
My research focuses on uncovering structure in complex human-robot systems to create more intelligent, interactive autonomy. I develop rigorous human models and control frameworks that mimic the positive properties of human agents while compensating for their shortcomings with safety guarantees. Most of my work centers on autonomous vehicles, considering how they can best integrate and operate in mixed environments, with both humans and autonomous vehicles on the road.
My research agenda addresses these challenges by combining ideas from robotics, artificial intelligence, and control, applied to human-robot systems and the transportation domain. The current major focuses are:
- Validating autonomous systems with novel tools that find likely failures, as well as rigorous experiments in high-fidelity simulation, immersive testbeds, and fully outfitted autonomous test vehicles;
- Developing robust models of human-robot systems that capture the highly stochastic behaviors of humans for use in semi- and fully autonomous control;
- Designing interactive control policies for intelligent systems in multi-agent settings, which can be applied to shared control schemes or fully autonomous systems that interact and collaborate with humans; and
- Learning from human behaviors for improved intelligent systems by formalizing methods to integrate people as sensors in perception modules (a minimal sketch follows this list) and learning control policies based on expert human actions.
- Afolabi, Oladapo, Katherine Driggs-Campbell, Roy Dong, Mykel J Kochenderfer, and S Shankar Sastry. “People as Sensors: Imputing Maps from Human Actions.” In IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2018.
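To make the people-as-sensors idea concrete, the following is a minimal sketch, assuming a single occluded region with a binary occupancy state and made-up action likelihoods. The IROS paper above imputes full maps; this only illustrates the underlying Bayesian update.

```python
# A minimal Bayesian "people as sensors" update for a single occluded region.
# The likelihood values below are illustrative assumptions, not learned parameters:
# P(observed driver action | region occupied) vs. P(action | region free).
LIKELIHOOD = {
    "brake":    {"occupied": 0.80, "free": 0.10},
    "no_brake": {"occupied": 0.20, "free": 0.90},
}

def update_occupancy_belief(prior_occupied: float, action: str) -> float:
    """Bayes' rule: posterior P(occupied) after observing one driver's action."""
    p_occ = LIKELIHOOD[action]["occupied"] * prior_occupied
    p_free = LIKELIHOOD[action]["free"] * (1.0 - prior_occupied)
    return p_occ / (p_occ + p_free)

# Example: two nearby drivers brake in front of an occluded crosswalk, which
# raises the autonomous vehicle's belief that the hidden region is occupied.
belief = 0.5  # uninformative prior
for observed_action in ["brake", "brake"]:
    belief = update_occupancy_belief(belief, observed_action)
    print(f"P(occupied) = {belief:.2f}")
```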
On the learning side specifically, we have recently been considering sampling and training paradigms that improve deep reinforcement and imitation learning policies. Through this work, we have made the following contributions: (1) we have taken a robust control view of deep RL, using adversarial reinforcement learning to mitigate model mismatch and enable safer transfer; and (2) we have improved interactive imitation learning with a Bayesian approach that accounts for model uncertainty during learning and improves the quality of demonstrations collected from human experts (a sketch of this uncertainty-gated querying follows the references below).
- Kelly, Michael, Chelsea Sidrane, Katherine Driggs-Campbell, and Mykel J Kochenderfer. “HG-DAgger: Interactive Imitation Learning with Human Experts.” In NeurIPS Workshop on Imitation Learning and Its Challenges in Robotics, arXiv:1810.02890, 2018.
- Ma, Xiaobai, Katherine Driggs-Campbell, and Mykel J Kochenderfer. “Improved Robustness and Safety for Autonomous Vehicle Control with Adversarial Reinforcement Learning.” In IEEE Intelligent Vehicles Symposium (IV), 2018.
- Menda, Kunal, Katherine Driggs-Campbell, and Mykel J Kochenderfer. “EnsembleDAgger: A Bayesian Approach to Safe Imitation Learning.” Technical Report, arXiv:1807.08364, 2018.
- Ma, Xiaobai, Mark Koren, Ritchie Lee, Katherine Driggs-Campbell, and Mykel J Kochenderfer. “Adaptive Stress Testing Toolbox.” To be released in early 2019!
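As a rough illustration of the second contribution, the sketch below shows the ensemble-based gating idea behind EnsembleDAgger-style interactive imitation learning: fit several policies on bootstrapped demonstrations, use their disagreement as a proxy for epistemic uncertainty, and defer to the expert when disagreement is high. The linear policy class, stand-in expert, noise levels, and threshold are illustrative assumptions, not the published implementation.

```python
import numpy as np

# Ensemble-based "doubt" for interactive imitation learning: fit several policies
# on bootstrapped demonstrations and treat their disagreement as epistemic
# uncertainty. High disagreement triggers a query to the (stand-in) expert.

rng = np.random.default_rng(0)

def expert_policy(state):
    """Stand-in for the human expert: a simple proportional steering law."""
    return -0.5 * state

def fit_linear(states, actions):
    """Least-squares fit of a scalar linear policy a = w * s."""
    w, *_ = np.linalg.lstsq(states.reshape(-1, 1), actions, rcond=None)
    return w[0]

# Bootstrap an ensemble of novice policies from a small demonstration set.
demo_states = rng.uniform(-1.0, 1.0, size=50)
demo_actions = expert_policy(demo_states) + rng.normal(0.0, 0.05, size=50)
ensemble = []
for _ in range(5):
    idx = rng.integers(0, len(demo_states), size=len(demo_states))
    ensemble.append(fit_linear(demo_states[idx], demo_actions[idx]))

DOUBT_THRESHOLD = 0.01  # illustrative; tuned per task in practice

def act(state):
    """Use the novice when the ensemble agrees; otherwise defer to the expert."""
    predictions = np.array([w * state for w in ensemble])
    doubt = predictions.std()
    if doubt > DOUBT_THRESHOLD:
        # A DAgger-style loop would aggregate this expert label into the dataset.
        return expert_policy(state), True
    return predictions.mean(), False

for s in [0.1, 5.0]:  # a state near the demonstrations vs. one far outside them
    action, queried = act(s)
    print(f"state={s:+.1f}  action={action:+.3f}  queried_expert={queried}")
```

Because the ensemble members only disagree far from the demonstration data, the gate naturally queries the expert in novel states and acts autonomously in familiar ones.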
- Driggs-Campbell, Katherine, Roy Dong, and Ruzena Bajcsy. “Robust, Informative Human-in-the-Loop Predictions via Empirical Reachable Sets.” IEEE Transactions on Intelligent Vehicles, 2018.
- Driggs-Campbell, Katherine, Vijay Govindarajan, and Ruzena Bajcsy. “Integrating Intuitive Driver Models in Autonomous Planning for Interactive Maneuvers.” IEEE Transactions on Intelligent Transportation Systems 18, no. 12 (2017): 3461–3472.
- Govindarajan, Vijay, Katherine Driggs-Campbell, and Ruzena Bajcsy. “Data-Driven Reachability Analysis for Human-in-the-Loop Systems.” In IEEE Conference on Decision and Control (CDC), 2017.
- Driggs-Campbell, Katherine, Victor Shia, and Ruzena Bajcsy. “Improved Driver Modeling for Human-in-the-Loop Vehicular Control.” In IEEE International Conference on Robotics and Automation (ICRA), 2015.
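The empirical reachable sets in the work above can be illustrated with a minimal sketch: collect observed short-horizon displacements of human drivers, keep a high-probability region of that data, and treat it as the set of positions the human may reach when checking a candidate plan. The synthetic data, box-shaped set, and coverage level below are illustrative assumptions, not the published method.

```python
import numpy as np

# A data-driven ("empirical") reachable-set sketch: keep a high-probability region
# of observed human-driver displacements and treat it as the set of positions the
# human may reach over a short horizon.

rng = np.random.default_rng(1)

# Synthetic stand-in for logged 1-second displacements of human drivers (meters).
observed_displacements = rng.normal(loc=[15.0, 0.0], scale=[2.0, 0.6], size=(500, 2))

def empirical_reachable_box(samples, coverage=0.95):
    """Axis-aligned box containing the central `coverage` fraction of samples per axis."""
    lo = np.quantile(samples, (1.0 - coverage) / 2.0, axis=0)
    hi = np.quantile(samples, 1.0 - (1.0 - coverage) / 2.0, axis=0)
    return lo, hi

def plan_conflicts(plan_points, human_position, reachable_box):
    """True if any waypoint of the candidate AV plan enters the human's predicted set."""
    lo, hi = reachable_box
    shifted = plan_points - human_position  # express waypoints relative to the human
    inside = np.all((shifted >= lo) & (shifted <= hi), axis=1)
    return bool(inside.any())

box = empirical_reachable_box(observed_displacements)
human_position = np.array([30.0, 3.5])  # human-driven car in the adjacent lane
candidate_plan = np.array([[35.0, 3.5], [45.0, 3.5], [55.0, 3.5]])  # AV merge waypoints

print("empirical reachable box (relative to the human):", box)
print("plan conflicts with the human's reachable set:",
      plan_conflicts(candidate_plan, human_position, box))
```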
By designing autonomous systems that keep the user in mind, we can improve driving performance when control is handed off between the automation and the human driver, increase overall trust in the automation, and improve the predictability of autonomous actions and overall situational awareness (a simple handoff sketch follows the references below).
- Govindarajan, Vijay, Katherine Driggs-Campbell, and Ruzena Bajcsy. “Affective Driver State Monitoring for Personalized, Adaptive ADAS.” In IEEE International Conference on Intelligent Transportation Systems (ITSC), 2018.
- Rezvani, Tara, Katherine Driggs-Campbell, and Ruzena Bajcsy. “Optimizing Interaction between Humans and Autonomy via Information Constraints on Interface Design.” In IEEE International Conference on Intelligent Transportation Systems (ITSC), 2017.
- Driggs-Campbell, Katherine, and Ruzena Bajcsy. “Identifying Modes of Intent from Driver Behaviors in Dynamic Environments.” In IEEE International Conference on Intelligent Transportation Systems (ITSC), 2015.
- Sadigh, Dorsa, Katherine Driggs-Campbell, Alberto Puggelli, Wenchao Li, Victor Shia, Ruzena Bajcsy, Alberto L Sangiovanni-Vincentelli, S Shankar Sastry, and Sanjit A Seshia. “Data-Driven Probabilistic Modeling and Verification of Human Driver Behavior,” 2014.
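To make the handoff idea above concrete, here is a minimal sketch of a shared-control arbitration law that blends human and automation steering commands according to an estimated driver-readiness score. The readiness signal, thresholds, and linear blending are illustrative assumptions rather than a published controller.

```python
import numpy as np

# Shared-control arbitration sketch: blend human and automation steering commands
# with a weight driven by an estimated driver-readiness score in [0, 1].

def blend_weight(readiness, low=0.3, high=0.8):
    """Map readiness to the human's control authority in [0, 1].

    Below `low` the automation has full authority; above `high` the human does;
    in between, authority ramps linearly.
    """
    return float(np.clip((readiness - low) / (high - low), 0.0, 1.0))

def arbitrate(human_cmd, auto_cmd, readiness):
    """Convex combination of the two steering commands (radians)."""
    alpha = blend_weight(readiness)
    return alpha * human_cmd + (1.0 - alpha) * auto_cmd

# Example handoff: the driver regains attention over a few time steps while the
# automation holds the lane; control authority shifts smoothly to the human.
auto_cmd = 0.00                     # automation: track the lane center
human_cmds = [0.10, 0.08, 0.05]     # driver: begin a gentle lane change
readiness_trace = [0.2, 0.6, 0.9]   # e.g., from a driver-monitoring module

for human_cmd, readiness in zip(human_cmds, readiness_trace):
    cmd = arbitrate(human_cmd, auto_cmd, readiness)
    print(f"readiness={readiness:.1f}  blended steering={cmd:+.3f} rad")
```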