Room Location: Key Ballroom 9
Organizers:
Description:
The goal is to help people with research interests in robotics who have not previously been funded as PIs through the NSF FRR Program understand what NSF (and, importantly, the panels convened by NSF) will expect to see in a successful FRR proposal.
Agenda:
3:30 – 3:35pm: Robotics PD Introduction
3:35 – 3:43pm: Overview of FRR Program
3:43 – 3:51pm: Overview of the Proposal Review and Funding Processes
3:51 – 4:16pm: Panel on Developing a Successful NSF Proposal
4:16 – 5:00pm: Huddles with PDs
Room Location: Key Ballroom 10
Organizers:
Description:
A rising population, along with changing climate and social and environmental concerns, is creating the need for intelligent machine systems that increase productivity and profitability from existing croplands.
This workshop aims to explore how cutting-edge robotics and AI-enabled technologies are revolutionizing traditional farming methods by providing new knowledge that is paving the way for smarter digital agriculture solutions. Throughout the session, esteemed speakers will share their expertise and insights into the integration of AI and robotics across various facets of agricultural machine systems. From autonomous drones for crop monitoring to machine learning algorithms for predictive analytics and robotic systems for precision crop management, this workshop provides a unique opportunity to engage with leading experts, exchange ideas, and explore the potential of AI and robotics in shaping the future of agriculture.
Agenda:
3:30 – 3:35pm: Opening remarks (Zhaojian Li)
3:35 – 4:35pm: Invited talks (Speakers introduced by Yunjun Xu)
3:35 – 3:45pm: “Speeding-up Multi-armed Robotic Fruit Harvesting”, Stavros Vougioukas, UC Davis
3:45 – 3:55pm: “Configurable, Adaptive, and Scalable Swarm (CASS) for Smart and Collaborative Agriculture”, Kiju Lee, Texas A&M
3:55 – 4:05pm: “Application of Soft Robotics in Agriculture”, Robert Shepherd, Cornell University
4:05 – 4:15pm: “Challenges and Research Opportunities in Developing Contact-Based Precision Robotic Pollinators”, Yu Gu, West Virginia University
4:15 – 4:25pm: “HAUCS: Robotic Systems for Scalable, Adaptable Maintenance of Pond Aquaculture Farms”, Bing Ouyang, Florida Atlantic University
4:25 – 4:35pm: “From Precision Weed Management to Spatial Mapping: A fleet of Autonomous Robots for Precision Agriculture”, Abhisesh Silwal, Carnegie Mellon University
4:35 – 5:00pm: Q&A and Panel Discussion (Moderated by Ajay Sharda)
Room Location: Key Ballroom 11
Organizers:
Description:
Surgical robots have changed from large manipulators performing open surgery to miniature, dexterous, and flexible robotic systems. These robots are enabling increasingly less invasive procedures with single or even natural orifice access, reducing blood loss, scarring, and recovery times for patients. This workshop will explore new trends and challenges in miniature and flexible robotics for the next generation of surgical robotic systems. Surgical robots for the operating room have also started to evolve from a simple extension of the surgeon’s hands to autonomous systems that augment procedures. We explore the role artificial intelligence could play in enabling complex levels of surgical autonomy, emphasizing the potential to improve surgical outcomes independent of the surgeon’s training or skill. As robotic systems and artificial intelligence evolve, so too will robotic autonomy, leading to the operating room of the future where autonomous robots perform complex surgeries with minimal damage, democratize surgical outcomes, and transform the healthcare system.
Agenda:
3:30 – 3:50pm: “The Future of Surgical Robotics: Vision, Challenges, and the Science to Meet Them”, Axel Krieger and Alan Kuntz
3:50 – 4:05pm: “Controlling Microrobot Swarms Using Rotating Magnetic Fields”, Jake Abbott, University of Utah
4:05 – 4:20pm: “Translating NSF Robotics Discoveries to the Real World”, Robert J. Webster III, Vanderbilt University
4:20 – 4:35pm: “Towards More Human-Aware Robotic Systems for Surgical Training and Intervention”, Ann Majewicz Fey, University of Texas
4:35 – 5:00pm: Panel Discussion
Room Location: Key Ballroom 12
Organizers:
Description:
Robotics and system autonomy provide tremendous opportunities for assembly, inspection, transport, maintenance, and debris-removal applications in space (particularly in near-Earth orbits), with unique performance and economic gains over the current state of the art. This workshop will provide an interactive setting to discuss such opportunities and converge on the key technical barriers to realizing them. We also aim to identify critical computational tools that need to be coherently developed and shared by communities such as robotics, machine learning, spacecraft dynamics, and remote sensing in order to help break down these barriers. To these ends, the workshop will follow the structure below:
1. We will start with a 10-15 minute opening statement on the purpose and process of the workshop; this could (tentatively) include a summary perspective on the needs for autonomy in space, delivered remotely by domain experts from NASA JPL.
2. This will be followed by the primary activity of the workshop: a 60-minute round-table discussion of the challenges, opportunities, and tools within 3-4 separate working groups. We will provide a questionnaire to guide the discussion items. Working groups will be divided along broad fundamental areas of orbital robotics and autonomy, namely planning & controls, perception, human-autonomy teaming, and simulation environments.
3. In the last 20 minutes, each working group will provide a 5-minute summary of its discussions and key take-away points.
After the workshop, we would like to request a short presentation of the discussion summary and take-away points from each working group (a 3-slide template will be provided for this purpose). This information will be curated by the hosts into a single slide-deck PDF summarizing the outcomes of this workshop and made available to all participants via email.
Agenda:
3:30 – 3:40pm: Opening Remarks, Federico Rossi, NASA JPL (Zoom)
3:40 – 4:40pm: Round-table Discussions (3-4 working groups)
4:40 – 5:00pm: Summary of Discussions from Each Group
Room Location: Key Ballroom 6
Organizers:
Description:
Dexterous manipulation is a fundamental robotics problem that draws on several disciplines, including computer science, mechanical engineering, electrical engineering, and human factors. Great advances have been made in addressing the learning, planning, control, sensing, design, and simulation challenges of dexterous manipulation. The workshop aims to showcase recent innovative research in these areas and provide a platform for discussion of potential convergent research in dexterous manipulation. The workshop will bring together researchers working on different aspects of dexterous manipulation and foster collaboration for impactful and transformative research.
Agenda:
3:30 – 3:45pm: "Soft Robotic Grasping", Roger Quinn, Case Western Reserve University
3:45 – 4:00pm: "Learning Robot Manipulation Skills with Physics-based Models in the Human-centered Environments", Ahmed H. Qureshi, Purdue University
4:00 – 4:15pm: "Mixed-transducer micro-origami for small-scale manipulation", Kenn Oldham, University of Michigan
4:15 – 4:30pm: "Towards Using Grasping Compliance of Underactuated Hands in Model-mediated Telemanipulation", Long Wang, Stevens Institute of Technology
4:30 – 4:45pm: "Context-Aware Task-Oriented Grasping by a Dexterous Robotic Hand", Hongsheng He, University of Alabama
4:45 – 5:00pm: Q&A and Panel discussions
Room Location: Key Ballroom 9
Organizers:
Description:
The goal is to help people with research interests in robotics who have not previously been funded as PIs through the NSF FRR Program understand what NSF (and, importantly, the panels convened by NSF) will expect to see in a successful FRR proposal.
Agenda:
2:00 – 2:05pm: Robotics PD Introduction
2:05 – 2:13pm: Overview of FRR Program
2:13 – 2:21pm: Overview of the Proposal Review and Funding Processes
2:21 – 2:46pm: Panel on Developing a Successful NSF Proposal
2:46 – 3:30pm: Huddles with PDs
Room Location: Key Ballroom 10
Organizers:
Description:
Over the past two decades, advances in actuation, sensing, and computation have enabled diverse applications of aerial robots, ranging from aerial photography to environmental sampling to assisted agriculture. Compared to terrestrial robots, aerial vehicles can traverse much longer ranges at higher speeds, yet they carry smaller payloads and consume more power. This workshop aims to identify technical and conceptual challenges faced by aerial robotics research and to outline emerging opportunities in the realm of human-robot interaction and bio-inspired design.
This workshop will focus on algorithmic, interaction, capability design, and physical intelligence challenges of aerial systems. From an algorithmic perspective, we will discuss recent advances in decentralized control of swarm flight, perception, robot collaboration, and new applications. From an interaction perspective, we will consider how aerial vehicles can be integrated into complex workflows, support diverse user interactions, and even function as social platforms. From a capability design perspective, we will explore what sensing, actuation, and intelligence capabilities are necessary for aerial robots to be useful across scales, as well as the tradespace for how each of these can be embedded within the others. From a physical intelligence perspective, we will examine bio-inspired designs such as flapping-wing propulsion and the use of soft actuators, structures, and end effectors. We will also look at challenges and opportunities as the scale of aerial robots shrinks. Can we create insect-scale robotic pollinators that are efficient, durable, and safe for the public? What advances in aerial robots at specific scales can be generalized across scales? We will bring together four to six researchers who will share their latest progress in related areas, and invite them to participate in an interactive panel discussion.
Agenda:
2:00 – 2:10pm: "MP2: One motor Micro Aerial Vehicle for Swarm Applications", Michael Rubenstein, Northwestern University
2:10 – 2:20pm: "Atmospheric Ion Thrusters for Micro Air Vehicles", Daniel Drew, University of Utah
2:20 – 2:30pm: "Insect-scale Aerial Robots Powered by Soft Artificial Muscles", Kevin Chen, MIT
2:30 – 2:40pm: "Dispersed Autonomy for Aerial Robots: Cloud Robotics in the Clouds", Eric Frew, University of Colorado, Boulder
2:40 – 2:50pm: "Air/Ground Coordination for Deployments over Large Spatial and Temporal Scales", Pratap Tokekar, University of Maryland.
2:50 – 3:00pm: "Improving Drones to Interact with Everyone", Brittany Duncan, University of Nebraska-Lincoln
3:00 – 3:30pm: Q&A and Panel discussions
Room Location: Key Ballroom 11
Organizers:
Description:
Small-scale robots, spanning from the nanoscale to several centimeters, hold the key to unprecedented research breakthroughs across diverse fields. Of note is medicine, where robots could potentially be used in remarkable new applications, ranging from drug delivery to surgery to even micro-scale dentistry. Yet moving microrobots into medical applications presents a broad array of complex, interwoven challenges. How do we design and manufacture small-scale robots? What should be used for actuation and control in the body? And how do we meet the biocompatibility and safety requirements inherent to medical technology? The aim of this workshop is to convene NSF-funded PIs who have expertise in robotics, materials science, mechanical engineering, computer science, biomedical engineering, and medicine. Our goal is to explore these pivotal areas, creating a vibrant forum for knowledge exchange, collaborative exploration, and innovation.
The workshop will start with a series of keynote presentations, laying the foundational knowledge, identifying key challenges, and discussing good “first-indicator” applications of microrobots in medicine. Following the keynotes, panel discussions will provide a platform for deeper exploration of the themes introduced, encouraging interaction and discussion among the participants.
Keynote Topics:
Agenda:
2:00 – 2:20pm: "Taking the Fantastic Voyage: Small-Scale Robots as a Biomedical Technology", Xiaolong Liu and Marc Miskin
2:20 – 2:35pm: “The Incredible Shrinking Robot: Fact and Fiction in Microscale Implantables”, Pamela Abshire, University of Maryland, College Park
2:35 – 2:50pm: “Nanoscale Bacteria-Enabled Autonomous Delivery Systems (NanoBEADS) for Cancer Therapy”, Bahareh Behkam, Virginia Tech
2:50 – 3:10pm: "Combating Oral Biofilm Infections Using Microrobots", Hyun (Michel) Koo and Ed Steager, University of Pennsylvania
3:10 – 3:30pm: Q&A and Panel discussions
Room Location: Key Ballroom 12
Organizers:
Description:
Connected and automated vehicles (CAVs) are robotic systems that exhibit significant levels of computational capability and physical complexity. They have the capacity to make contextual decisions independently, without human intervention, while interacting with a complex environment. CAVs are expected to penetrate the market gradually, interact with human-driven vehicles (HDVs), and contend with vehicle-to-everything communication limitations. However, different penetration rates of CAVs can significantly alter transportation efficiency and safety. The workshop aims to highlight recent approaches that synergistically integrate human-driving behavior, control theory, and learning in an effort to address a fundamental gap in current methods for the safe coexistence of CAVs with HDVs. The primary objective of this workshop is to offer a comprehensive understanding of how we can improve the safe coexistence of CAVs and HDVs in any traffic scenario, e.g., crossing signal-free intersections, merging at roadways and roundabouts, cruising in congested traffic, passing through speed reduction zones, and lane-merging or passing maneuvers.
Agenda:
2:00 – 2:20pm: “Time-Optimal Trajectory Planning for Connected and Automated Vehicles in Mixed-Traffic Scenarios”, Andreas A. Malikopoulos, Cornell University
2:20 – 2:40pm: “Improving Urban Traffic with Autonomous and Human-Driven Vehicle Integration”, Murat Arcak, UC Berkeley
2:40 – 3:00pm: “Reactive Control for Connected and Automated Vehicles in Mixed Environments”, Logan Beaver, Old Dominion University
3:00 – 3:20pm: “Waves, Barriers, and the Safety of Mixed Traffic Systems”, Hosam Fathy, University of Maryland
3:20 – 3:30pm: Panel Discussion
Room Location: Key Ballroom 6
Organizers:
Description:
LfD-enabled robots are meant to be deployed in the wild, where robotics experts are not around to save the LfD algorithm and the robot from failure. This workshop intends to trigger a discussion only around the algorithmic aspect of this problem.
Here are some of the cool things happening in contemporary LfD research (and our speakers will discuss some of them):
1. We now have large-scale datasets that facilitate policy learning, transfer learning, learning generalist robot policies, etc. Work on Sim2Real is promising, and many physics-based simulators are significantly advancing RL-based policy learning research.
2. We have high-performing policy learning algorithms, e.g., diffusion policies, that can learn policies from raw vision data, few-shot or even zero-shot.
3. Recent vision-language models show an exciting way to include lay users in the robot teaching process.
Do these advancements make LfD algorithms ready to interact with lay users in the wild? The goal of this workshop is to trigger a discussion around this question. Imagine we handed our best-performing LfD-algorithm-powered robot to a techy grandma living in a rural town in America, along with a manual on how to use the LfD algorithm. Grandma shows the robot a new task, e.g., by teleoperation, as many times as humanly possible for her. It is time for action for our LfD algorithm! Do we have answers to the following?
1. Grandma will inevitably make mistakes as she teleoperates the robot. How do we deal with those mistakes? How do we learn a policy from incorrect demonstrations?
2. Our algorithm will learn ‘a policy’ from the data, but is there any mechanism to guarantee that the policy will do what it is supposed to do (other than reporting that it showed high accuracy on held-out data)? In other words, do data-driven policy learning algorithms offer stability guarantees, especially when run-time scenes look different and we don’t have a simulator for every task that Grandma will teach her robot?
3. How do we guarantee the safety of a learned policy? How can we extract safety constraints from demonstrations and enforce them in the policy learning algorithm?
The goal of this workshop is to hear from PIs who are already designing advanced policy learning algorithms for robots about their work and their thoughts on the challenges involved in answering these questions related to LfD safety.
Agenda:
2:00 – 2:05pm: Opening remarks (Momotaz Begum)
2:05 – 2:25pm: "Learning Shared Safety Constraints from Multi-task Demonstrations", Sanjiban Chowdhury, Cornell University
2:25 – 2:45pm: "Beyond Imitation Learning: Robust Policies via Offline RL and Active Feedback", Sergey Levine, University of California, Berkeley (Zoom)
2:45 – 3:05pm: "Exploiting Safety Guidance for Enhancing Data Efficiency in Imitation Learning", Somil Bansal, University of Southern California
3:05 – 3:30pm: Q&A and Panel Discussion
Zoom link for virtual attendance: https://unh.zoom.us/j/96007521988?pwd=OTYzekp2MDlmV3lMYXp3TmFIZTJhUT09