Dieter Fox
University of Washington & NVIDIA
Where is RobotGPT?
9:00 - 10:00 AM, April 29, 2024
Recent years have seen astonishing progress in the capabilities of generative AI techniques, particularly in language and visual understanding and generation. Key to the success of these models is the use of image and text datasets of unprecedented scale, along with models able to digest such large datasets. We are now seeing the first examples of leveraging such models to equip robots with open-world visual understanding and reasoning capabilities. However, we have not yet reached the RobotGPT moment: these models still struggle to reason about geometry and physical interactions in the real world, resulting in brittle performance on seemingly simple tasks such as manipulating objects in the open world. A crucial reason for this problem is the lack of data suitable for training powerful, general models for robot decision making and control.
In this talk, I will discuss approaches to generating large datasets for training robot manipulation capabilities, with a focus on the role simulation can play in this context. I will show some of our prior work demonstrating robust sim-to-real transfer of manipulation skills trained in simulation, and then present a path toward generating large-scale demonstration sets that could help train robust, open-world robot manipulation models.
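One widely used ingredient in sim-to-real pipelines of this kind, in the field at large rather than necessarily in the work presented here, is domain randomization: the simulator's physical and visual parameters are resampled every training episode so that a policy cannot overfit to any single simulated world. A minimal sketch, in which the parameter names, ranges, and the make_env/update_policy hooks are illustrative assumptions:

    import random

    def randomize_sim_params():
        # Resample physics and rendering parameters for each training episode;
        # the ranges below are illustrative, not tuned values.
        return {
            "object_mass_kg": random.uniform(0.05, 0.5),
            "friction_coeff": random.uniform(0.4, 1.2),
            "camera_jitter_m": [random.gauss(0.0, 0.01) for _ in range(3)],
            "light_intensity": random.uniform(0.5, 1.5),
        }

    def train(num_episodes, make_env, update_policy):
        # make_env builds a simulator from a parameter dict; update_policy runs
        # one learning step in it. Both are hypothetical hooks.
        for _ in range(num_episodes):
            env = make_env(randomize_sim_params())
            update_policy(env)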
Bio: Dieter Fox is Senior Director of Robotics Research at NVIDIA and Professor in the Allen School of Computer Science & Engineering at the University of Washington, where he heads the UW Robotics and State Estimation Lab. Dieter's research is in robotics and artificial intelligence, with a focus on state estimation and perception applied to problems such as robot manipulation, mapping, and object detection and tracking. He has published more than 200 technical papers and is the co-author of the textbook “Probabilistic Robotics”. He is a Fellow of the IEEE, AAAI, and ACM, and recipient of the 2020 IEEE Pioneer in Robotics and Automation Award and the 2023 John McCarthy Award. He was an editor of the IEEE Transactions on Robotics, program co-chair of the 2008 AAAI Conference on Artificial Intelligence, and program chair of the 2013 Robotics: Science and Systems conference.
Xuanhe Zhao
MIT
Soft Medical Robots
2:25 - 3:25 PM, April 29, 2024
This talk will discuss the design, fabrication, control, and clinical translation of soft medical robots, using a magnetically steerable soft guidewire robot capable of teleoperated robotic neurointervention as an example. We will first discuss the theory of magnetically responsive soft materials, which enables predictive models for the large deformation and actuation of the soft robot. Then we will discuss how to use the massive simulation data from these models to guide the design, 3D printing, and control of the soft robot. Thereafter, we will demonstrate the soft guidewire robot's applications in remotely treating hemorrhagic and ischemic strokes by robotic aneurysm coiling and clot retrieval, respectively. We will validate the safety and efficacy of the soft guidewire robot in both phantom models and live pig models, and compare its performance with that of experienced neurointerventionalists carrying out manual operations. I will conclude the talk with a vision for the future development and impact of soft medical robots, aided by and synergized with technologies such as AI, VR, and precision medicine.
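For readers outside the area, the underlying actuation principle is compact: a magnetized guidewire tip with magnetic moment m experiences a torque tau = m x B in an applied field B, bending the soft body until the elastic restoring torque balances it. A minimal numerical illustration, with arbitrary illustrative values rather than parameters from the systems described above:

    import numpy as np

    # Magnetic moment of the guidewire tip (A*m^2) and applied field (T);
    # both vectors are arbitrary illustrative values.
    m = np.array([0.01, 0.0, 0.0])
    B = np.array([0.0, 0.05, 0.0])

    # Torque on the tip: tau = m x B (N*m). The tip bends toward alignment
    # with the field until elastic restoring torque balances this torque.
    tau = np.cross(m, B)
    print(tau)  # [0.     0.     0.0005]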
Bio: Xuanhe Zhao is a Professor of Mechanical Engineering at MIT. The mission of the Zhao Lab is to advance science and technology at the interface between humans and machines to address grand societal challenges in health and sustainability. A major current focus is the study and development of soft materials and systems. Dr. Zhao has won early career awards from NSF, ONR, ASME, SES, AVS, the Adhesion Society, JAM, EML, and Materials Today. He has been a Clarivate Highly Cited Researcher since 2018. Bioadhesive ultrasound, based on Zhao Lab's work published in Science, was named one of TIME Magazine's Best Inventions of 2022. SanaHeal Inc., based on Zhao Lab's work published in Nature, was awarded the 2023 Nature Spinoff Prize. Over ten patents from the Zhao Lab have been licensed by companies and have contributed to FDA-approved and widely used medical devices.
Robin R. Murphy
Texas A&M University
Robots (and Research) to the Rescue
8:45 - 9:45 AM, April 30, 2024
Ground, aerial, and marine robots are increasingly used by responders to save lives, mitigate cascading threats, and accelerate economic recovery after a disaster. The recent Surfside condo collapse and Hurricane Ian are two examples of the extreme environments in which robots, and their operators, must function. Clearly, rescue robots have great societal benefit; however, our work at these two disasters illustrates why disaster robotics is important to robotics research in general. One reason is that field research at a disaster informs the virtuous research cycle, guiding both fundamental and convergent research. The use of robots at a disaster provides a “canary in the coal mine” indication of gaps in hardware, software, autonomy, and human-robot interaction that might take years to discover through hypothesis-driven laboratory testing. A second reason disaster robotics is valuable is that it, by necessity, is pioneering domain-inspired, interdisciplinary synthesis, which in turn calls for new pedagogical approaches for educating the next generation of scientists and practitioners.
Bio: Dr. Robin R. Murphy, Ph.D. (’92) and M.S. (’89) in computer science and B.M.E. (’80) from the Georgia Institute of Technology, is the Raytheon Professor of Computer Science and Engineering at Texas A&M University and a director of the Center for Robot-Assisted Search and Rescue. Her research focuses on artificial intelligence, robotics, and human-robot interaction for emergency management. She is an AAAS, ACM, and IEEE Fellow, a TED speaker, and author of over 400 papers and four books including the award-winning Disaster Robotics, which captures much of her research deploying ground, aerial, and marine robots to over 30 disasters in five countries including the 9/11 World Trade Center, Fukushima, Hurricane Harvey, and the Surfside collapse. Her contributions to robotics have been recognized with the ACM Eugene L. Lawler Award for Humanitarian Contributions and a US Air Force Exemplary Civilian Service Award medal. Dr. Murphy has served on numerous professional and government boards, including the Defense Science Board and National Science Foundation, as well as the AI for the Benefit of Humanity prize committee.
Elaine Short
Tufts University
Human-Centered AI for Accessible and Assistive Robotics
11:30 AM - 12:30 PM, April 29, 2024
Powered by advances in AI, especially machine learning, robots are becoming smarter and more widely used. Robots can provide critical assistance to people in a variety of contexts, from improving the efficiency of workers to helping people with disabilities in their day-to-day lives. However, inadequate attention to the needs of users in developing these intelligent robots results in systems that are both less effective at their core tasks and more likely to do unintended harm. This talk will explore how disability community ethics can be used to inspire new directions for intelligent interactive robotics, with examples from recent work from the Assistive Agent Behavior and Learning (AABL) Lab at Tufts University.
Bio: Elaine Schaertl Short is the Clare Boothe Luce Assistant Professor of Computer Science at Tufts University. She holds a PhD and MS in Computer Science from the University of Southern California (USC) and a BS in Computer Science from Yale University. Her research applies human-centered design and disability community ethics to the development, deployment, and evaluation of AI and ML for robotics. As a disabled faculty member, Elaine is an advocate for accessibility and disability inclusion. In her roles as co-PI of AccessComputing and co-Chair of AccessSIGCHI, she has developed accessibility chair trainings, led new projects to support disabled students in accessing research careers, and advocated for accessibility improvements within the human-computer interaction and robotics communities.
Melisa Martinez
Carnegie Mellon University
From Haptic Devices to Interactive Machines: Using Robotics to Help Students Ground Abstract Mathematical Concepts
11:30 AM - 12:30 PM, April 29, 2024
Math is often presented as an abstract, procedural discipline in which both teacher and student firmly believe that proficiency stems from fixed innate ability. Studies have shown, however, that most students are capable of excelling in and enjoying math when given the opportunity to do so through creative, varied, and interactive learning experiences and a growth mindset. In this talk, we present novel robotic systems designed for educational applications that, inspired by a constructionist framework, help students ground abstract mathematical concepts. The first is a haptic-supported learning environment that uses force feedback to help users make connections between different representations of trigonometric functions. The second is an interactive haptic interface that grounds mathematical ideas through body syntonicity. Lastly, we present a robotic loom that takes advantage of the interdisciplinary nature of weaving to ground abstract mathematical concepts in a concrete and embodied art, view this art through an engineering lens, and integrate hands-on interdisciplinary learning into mathematics curricula.
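To make the force-feedback idea concrete, a generic impedance-style rendering loop can let a user feel the graph of a trigonometric function by pulling the device handle toward the curve with a virtual spring. This sketch is a common textbook pattern, not the lab's actual implementation, and the stiffness value is illustrative:

    import math

    K = 200.0  # virtual spring stiffness (N/m); illustrative value

    def render_force(x, y):
        # Pull the device handle toward the curve y = sin(x) with a virtual
        # spring, so the user can trace and "feel" the function's shape.
        y_target = math.sin(x)
        return -K * (y - y_target)

    # Handle at x = 1.0, y = 0.5: the force is positive, pushing the handle
    # up toward sin(1.0) ~ 0.84.
    print(render_force(1.0, 0.5))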
Bio: Melisa Orta Martinez is an Assistant Professor in the Robotics Institute at Carnegie Mellon University, where she leads the Social Haptics Robotics and Education (SHRED) Laboratory. She received her PhD in Mechanical Engineering and her master's in Electrical Engineering from Stanford University. Prior to that she worked at Apple Inc. in the Human Interface Devices group developing haptic technology. Her research combines the areas of robotics, haptics, human-computer interaction, and education. In her free time she practices Muay Thai.
Naomi Fitter
Oregon State University
Harnessing the Potential of Playful Social Robots across Life Phases
10:30 - 10:45 AM, April 29, 2024
As robots appear in more everyday environments, they will have new opportunities to enhance the lives of the people around them. One reason this potential is exciting is that robots, compared to "non-embodied" technology solutions (such as a phone, smart watch, computer, or AI assistant), have been shown to be more motivating and peer-like. In challenging interaction scenarios, such as encouraging physical activity or other healthy habits, this type of clout can make or break the success of a technology-based intervention. In a current NSF NRI project, Dr. Geoff Hollinger, Dr. Sam Logan, and I are designing improved modular hardware, creative interaction behaviors, and behavior-tree-inspired algorithms for effectively leveraging assistive robots in early motor interventions for young children. To date, my research related to this project has included the design and investigation of assistive robots for playfully supporting physical activity in early motor interventions. We observed that these robots can encourage more physical activity during play and therapy sessions than other intervention approaches. Another current NIH NRI project within my research group seeks to use similar design principles to support and encourage functional maintenance exercises for older adults, another population that stands to benefit from creative new healthcare system support. In this effort, Dr. Bill Smart, Dr. Carolyn Aldwin, and I have completed design cycles for an assistive robotic system that will soon be deployed in long-term, real-world evaluations of how a robot might promote exercises that support prolonged physical function in skilled nursing facilities. My ongoing and future research aims to create playful robotic systems that help people live healthier and happier lives in a range of intervention scenarios, across populations and over the lifespan.
Nikolaos Papanikolopoulos
University of Minnesota
Cooperative Robotic Systems for Plant Health Management
10:45 - 11:00 AM, April 29, 2024
Financial and social elements of modern societies are closely connected to the cultivation of plants such as corn and soybean. Given the massive importance of these crops, nitrogen or potassium deficiencies during cultivation translate directly into major financial losses. The early detection and treatment of these nutrient deficiencies is therefore a task of great significance and value. However, current standard field surveillance practices are carried out either manually or with satellite imaging, offering farmers only infrequent, costly data with insufficient spatial resolution.
This project promotes the use of autonomous teams of small aerial and ground co-robots armed with efficient plant-centric information-gathering algorithms and multi-modal perception abilities that fuse information from the visible spectrum (RGB) and the multi- or hyperspectral domains, using corn as the target crop. This work aims to introduce an automated strategy for robotic field mapping, monitoring, detection of nitrogen and potassium deficiency, and estimation of crop biomass at fine spatiotemporal resolutions, to better estimate nutrient fertilizer requirements. Because the aerial and ground robotic team can autonomously select and follow the viewpoints that enable comprehensive multi-modal 3D reconstruction of the corn canopy structure (biomass) at arbitrary resolutions, it offers a superior alternative to high-altitude aerial imaging.
This is joint work with A. Bacharis, H. Nelson, D. Mulla, D. Kaiser, C. Yang, and G. Bebis. The work was supported by USDA/NIFA through grant 2020-67021-30755.
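As a concrete example of the kind of building block multispectral plant-health pipelines often start from, a band-ratio vegetation index such as NDVI maps red and near-infrared reflectance to a per-pixel vigor score, with stressed or nutrient-deficient canopy tending toward lower values. The sketch below is a generic illustration, not this project's detection algorithm:

    import numpy as np

    def ndvi(nir, red, eps=1e-6):
        # Normalized Difference Vegetation Index per pixel:
        # NDVI = (NIR - Red) / (NIR + Red), bounded in [-1, 1].
        nir = nir.astype(np.float64)
        red = red.astype(np.float64)
        return (nir - red) / (nir + red + eps)

    # Illustrative reflectance values: vigorous canopy reflects strongly in
    # the near infrared, so its NDVI is higher.
    nir = np.array([[0.60, 0.55], [0.30, 0.25]])
    red = np.array([[0.08, 0.10], [0.15, 0.18]])
    print(ndvi(nir, red))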
Ye Zhao
Georgia Institute of Technology
Enhancing Agility, Safety, and Social Intelligence in Legged Navigation
11:00 - 11:15 AM, April 29, 2024
Bipedal humanoid robots have demonstrated significant advances in locomotion and manipulation. However, there is a pressing need to enhance their safe navigation and decision-making capabilities. A key challenge lies in developing planning and decision-making frameworks that are both safe and resilient for these complex legged machines operating in unstructured environments, particularly those with adversarial dynamic obstacles and human crowds. In this talk, I will present the current progress of our NRI and FRR projects, which explore two approaches to improving safety and social intelligence in task and motion planning (TAMP) for agile legged navigation. I will discuss several hierarchically integrated TAMP frameworks designed to enable safe and socially acceptable navigation in environments that are partially observable and densely populated.
Joyce Chai
University of Michigan
Cognitive Robots in the Human World - Grounding Matters
11:15 - 11:30 AM, April 29, 2024
The rise of large foundation models and generative AI has revolutionized many aspects of cognitive robots. In this talk, I will give a brief introduction to our recent work on language communication with embodied agents. I will highlight the importance of grounding, whether language grounding or communication grounding, in the context of human-robot collaboration in everyday environments.
Eleonora Botta
University at Buffalo
How to catch large debris on orbit: A self-maneuvering robotic tether-net system
9:45 - 10:00 AM, April 30, 2024
To contain the growth of space debris, which jeopardizes spacecraft operations, the active removal of large, massive derelict satellites and launcher upper stages is needed. A promising technology for this endeavor is the use of tether-based robotic systems. In this concept, a tether-net is launched from a chaser spacecraft in the proximity of a target debris object; the net entangles the target or closes around it, and the tether connecting the net to the chaser provides a link for tugging the debris to its disposal orbit. Usually, the system is designed to be mostly passive; however, autonomy is key to operations in space. This talk will present the design, modeling, and control of tether-nets for active debris removal. A simulation tool representative of a complete tether-net system will be presented, together with results of recent developments that rely on increased autonomy to ensure successful capture under the various sensing and actuation uncertainties expected in orbit.
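Tether-net systems like this are commonly simulated with lumped-parameter models: the net and tether are discretized into point masses connected by spring-damper links that resist stretching but, like a real cable, exert no force when slack. A minimal sketch of one such link force, with illustrative stiffness and damping values:

    import numpy as np

    def cable_force(p_i, p_j, v_i, v_j, rest_len, k=1e4, c=10.0):
        # Spring-damper force on node i from the link (i, j).
        # A cable only pulls: zero force when the link is slack.
        d = p_j - p_i
        length = np.linalg.norm(d)
        if length <= rest_len:
            return np.zeros(3)
        u = d / length
        stretch = length - rest_len
        rel_speed = np.dot(v_j - v_i, u)
        return (k * stretch + c * rel_speed) * u

    # A link stretched beyond its 1.0 m rest length pulls node i toward node j.
    print(cable_force(np.zeros(3), np.array([1.2, 0.0, 0.0]),
                      np.zeros(3), np.zeros(3), rest_len=1.0))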
Nikolay A. Atanasov
UCSD
Active Bayesian Inference for Collaborative Robot Mapping
10:00 - 10:15 AM, April 30, 2024
This talk presents recent results from an NSF FRR CAREER project on joint perception and control for multi-robot active mapping. First, we discuss an approach for surface modeling that allows differentiable view synthesis. Next, we formalize multi-robot mapping and planning as optimization problems on a graph and derive a distributed algorithm that enables mobile robots to explore and map unknown environments autonomously. Finally, we highlight the education component of the project, which provides open-source materials for robot autonomy education.
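To give a flavor of what optimization on a graph means in a multi-robot setting, one common generic pattern, not necessarily this project's own algorithm, is decentralized gradient descent with consensus: each robot averages its estimate with those of its graph neighbors and then takes a local gradient step on its own objective. A minimal sketch:

    def distributed_step(x, neighbors, grad, alpha=0.1):
        # One round of decentralized gradient descent with consensus:
        # each robot averages its estimate with its neighbors' estimates,
        # then takes a local gradient step on its own objective.
        x_new = {}
        for i, xi in x.items():
            nbrs = neighbors[i]
            avg = (xi + sum(x[j] for j in nbrs)) / (1 + len(nbrs))
            x_new[i] = avg - alpha * grad(i, xi)
        return x_new

    # Three robots on a line graph, each minimizing (x_i - target_i)^2
    # locally while staying close to consensus with its neighbors.
    targets = {0: 0.0, 1: 1.0, 2: 2.0}
    grad = lambda i, xi: 2.0 * (xi - targets[i])
    x = {0: 5.0, 1: -3.0, 2: 0.0}
    neighbors = {0: [1], 1: [0, 2], 2: [1]}
    for _ in range(100):
        x = distributed_step(x, neighbors, grad)
    print(x)  # estimates settle near a consensus-regularized solution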
Alberto Quattrini Li
Dartmouth
Resilient Low-Cost Underwater Robots through Co-Design of Algorithms and Sensors
10:15 - 10:30 AM, April 30, 2024
Why do underwater robots still struggle to reliably perform exploration and manipulation tasks despite the great progress in autonomous robotics? This talk delves into some of the main challenges that limit robots in such tasks and some solutions we have designed toward resilient low-cost underwater robots. First, I will talk about algorithms for 3D reconstruction with an inexpensive sensor configuration of camera and lights. Second, I will cover enabling low-cost manipulation through hardware/algorithm co-design for underwater assembly. Lastly, I will touch on a laser-based sensing system that enables aerial drones to locate underwater robots. Each part will include a discussion of field experiments and lessons learned. The talk will conclude with a brief overview of some of the open problems and current work toward the long-term goal of a ubiquitous collaborative multi-agent/multi-robot system that can support large-scale aquatic applications, such as environmental monitoring or archaeological exploration.
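As one example of the flavor of camera-and-lights underwater sensing, such configurations contend with strong wavelength-dependent attenuation; a common simplified image-formation model, ignoring backscatter, is Beer-Lambert decay of each color channel with range. The sketch below inverts that model and is a generic illustration, not the speaker's method:

    import numpy as np

    # Per-channel attenuation coefficients (1/m); illustrative values
    # (red attenuates fastest in water).
    beta = np.array([0.6, 0.2, 0.1])  # R, G, B

    def restore_color(observed_rgb, range_m):
        # Invert simple Beer-Lambert attenuation:
        # observed = true * exp(-beta * range)  =>  true = observed * exp(beta * range)
        return observed_rgb * np.exp(beta * range_m)

    # A distant, red-depleted observation is partially rebalanced.
    print(restore_color(np.array([0.05, 0.20, 0.30]), range_m=2.0))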
Sanjiban Choudhury
Cornell University
Demo2Code: From Summarizing Demonstrations to Synthesizing Code
10:30 - 10:45 AM, April 30, 2024
Language instructions and demonstrations are two natural ways for users to teach robots personalized tasks. Recent progress in Large Language Models (LLMs) has shown impressive performance in translating language instructions into code for robotic tasks. However, translating demonstrations into task code remains a challenge: the length and complexity of both demonstrations and code make learning a direct mapping intractable. I'll talk about Demo2Code, a framework that generates robot task code from vision and language demonstrations. Demo2Code has two key pieces: (1) a recursive summarization technique that condenses demonstrations into concise specifications, and (2) a recursive code synthesis approach that expands each function recursively from the generated specifications. I'll present evaluations on various robot task benchmarks, including Robotouille, a novel game benchmark designed to simulate diverse cooking tasks in a kitchen environment.
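The two recursions can be pictured as a pair of LLM-driven loops: summarization compresses demonstrations until a single concise specification remains, and synthesis expands that specification until every referenced helper function is defined. A structural sketch, in which llm and the find_undefined hook are hypothetical stand-ins rather than the actual Demo2Code interfaces:

    def llm(prompt):
        # Hypothetical stand-in for a language-model call; a real system
        # would query an LLM here.
        raise NotImplementedError

    MAX_SPEC_CHARS = 500  # illustrative length budget for a specification

    def summarize(demos):
        # Stage 1: recursive summarization. Condense each demonstration,
        # then merge summaries pairwise until one short spec remains.
        texts = [llm("Summarize this demonstration:\n" + d) for d in demos]
        while len(texts) > 1 or len(texts[0]) > MAX_SPEC_CHARS:
            pairs = [texts[i:i + 2] for i in range(0, len(texts), 2)]
            texts = [llm("Merge and condense:\n" + "\n".join(p)) for p in pairs]
        return texts[0]

    def synthesize(spec, find_undefined):
        # Stage 2: recursive code synthesis. Generate top-level task code,
        # then repeatedly ask the model to define any helper it referenced
        # but did not define. find_undefined is a hypothetical code-analysis
        # hook returning a list of undefined function names.
        code = llm("Write robot task code for this spec:\n" + spec)
        while find_undefined(code):
            name = find_undefined(code)[0]
            code += "\n" + llm(f"Define helper '{name}' for spec:\n{spec}")
        return code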