University of Illinois at Urbana-Champaign
Katie Driggs-Campbell is currently an assistant professor at the University of Illinois at Urbana-Champaign in the Department of Electrical and Computer Engineering. Prior to that, she was a Postdoctoral Research Scholar at the Stanford Intelligent Systems Laboratory in the Aeronautics and Astronautics Department. She received a B.S.E. with honors from Arizona State University in 2012 and an M.S. from UC Berkeley in 2015. She earned her Ph.D. in Electrical Engineering and Computer Sciences from the University of California, Berkeley, in 2017. Her lab works on human-centered autonomy, focusing on the integration of autonomy into human-dominated fields and merging ideas from robotics, learning, human factors, and control.
Fantastic Failures and Where to Find Them: Designing Trustworthy Autonomy
Autonomous robots are becoming tangible technologies that will soon impact the human experience. However, the desirable impacts of autonomy are only achievable if the underlying algorithms are robust to real-world conditions and are effective in (near) failure modes. This is often challenging in practice, as the scenarios in which general robots fail are often difficult to identify and characterize. In this talk, we’ll discuss how to learn from failures to design robust interactive systems and how we can exploit structure in different applications to efficiently find and classify failures. We’ll showcase both our failures and successes on autonomous vehicles and agricultural robots in real-world settings.
Thomas A. Henzinger
Tom Henzinger is president of IST Austria (Institute of Science and Technology Austria). He holds a Dipl.-Ing. degree from Kepler University in Linz, a Ph.D. degree from Stanford University (1991), and Dr.h.c. degrees from Fourier University in Grenoble and from Masaryk University in Brno. He was Assistant Professor at Cornell University, Professor at the University of California, Berkeley, Director at the Max-Planck Institute for Computer Science in Saarbrücken, and Professor at EPFL. His research focuses on modern systems theory, especially models, algorithms, and tools for the design and verification of reliable software and embedded systems. His HyTech tool was the first model checker for mixed discrete-continuous systems. He is an ISI highly cited researcher, a member of Academia Europaea, a member of the German and Austrian Academies of Sciences, and a Fellow of the AAAS, the ACM, and the IEEE. He received the Robin Milner Award of the Royal Society, the EATCS Award of the European Association for Theoretical Computer Science, the Wittgenstein Award of the Austrian Science Fund, and an ERC Advanced Investigator Grant.
Monitorability under Assumptions
We introduce the monitoring of trace properties under assumptions. An assumption limits the space of possible traces that the monitor may encounter. An assumption may result from knowledge about the system that is being monitored, about the environment, or about another, connected monitor. We define monitorability under assumptions and study its theoretical properties. In particular, we show that for every assumption A, the Boolean combinations of properties that are safe or co-safe relative to A are monitorable under A. We give several examples and constructions showing how an assumption can make a non-monitorable property monitorable, and how an assumption can make a monitorable property monitorable with fewer resources, such as integer registers.
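To make the idea concrete, here is a small hypothetical sketch (not drawn from the talk): the response property "every request is eventually granted" is not monitorable on its own, since no finite trace prefix can witness a violation. Under an assumed bound A = "any grant arrives within K steps of its request", a violation becomes detectable from a finite prefix, so the property behaves as a safety property relative to A. The monitor below, including the event names and the bound K, is illustrative only.

```python
K = 3  # assumed response bound; this constant is part of the assumption A


def monitor(trace, k=K):
    """Monitor the response property under the bounded-response assumption.

    Returns "violation" as soon as some request has gone ungranted for k
    steps, and "?" (inconclusive) if the finite prefix shows no violation.
    """
    pending = None  # step index of the oldest ungranted request, if any
    for i, event in enumerate(trace):
        if event == "grant":
            pending = None          # the pending request is answered
        elif event == "request" and pending is None:
            pending = i             # start the clock on this request
        if pending is not None and i - pending >= k:
            return "violation"      # assumption A rules this trace out
    return "?"


print(monitor(["request", "tick", "tick", "tick"]))  # -> violation
print(monitor(["request", "tick", "grant", "tick"]))  # -> ?
```

Without the assumption, the monitor could never output "violation" on any finite prefix; the bound is what turns an unbounded liveness obligation into a finitely checkable one.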
Lane Desborough
Lane Desborough is currently in stealth mode, working on autonomous systems to reduce the burden of living with insulin-requiring diabetes. He was initially exposed to safety-critical industrial automation as a co-op chemical engineering student at the University of Waterloo. In grad school at Queen’s University he focused on controller performance assessment. He then spent two decades at Nova Chemicals, Honeywell, and General Electric, implementing and remotely monitoring safety-critical automation at chemical plants, oil refineries, pulp mills, and power plants around the world. When his son was diagnosed with type 1 diabetes in 2009, Lane soon found himself at Medtronic Diabetes, leading the engineering team responsible for the next step toward the “artificial pancreas”: Automated Insulin Delivery (AID). He also co-created Nightscout, the first remote monitoring system for families living with type 1 diabetes (recently the #8 most forked repo on GitHub). In 2014 he co-founded Bigfoot Biomedical, and as Bigfoot’s Chief Engineer he developed safe, effective, and scalable systems to reduce the burden of insulin-requiring diabetes. He recently left Bigfoot to start another AID company.
The Physical Side of Cyber-Physical Systems
When our commercial reach exceeds our technical grasp, it is imperative that we advance our knowledge and embrace approaches to manage complexity, lest that complexity introduce undesired emergent properties. These complexity management approaches may seem new or novel, yet they rarely are. As science fiction author William Gibson is wont to say, “The future is already here, it just hasn’t been evenly distributed yet.”
Chemical engineering process control has afforded me a career spanning five continents and five industries. Although my current focus is the “artificial pancreas” – automated insulin delivery for people living with insulin-requiring diabetes – I have been privileged to be exposed to some of the most complex and challenging cyber-physical systems in the world; systems upon which society depends.
Most industries exist within their own bubble; exclusionary languages and pedagogy successfully defend their domains from new ideas. As one who has traversed many industries and worked on scores of industrial systems, a variety of personal, visceral experiences have allowed me to identify patterns and lessons applicable more broadly, perhaps even to your domain. Using examples drawn from petrochemical production, oil refining, power generation, industrial automation, and chronic disease management, I hope to demonstrate the need for, and value of, real-time verification.