Touch is a crucial sensing modality for both humans and robots, as it allows us to directly sense object properties and interactions with the environment. Recently, touch sensing has become more prevalent in robotic systems, thanks to the increased accessibility of inexpensive, reliable, and high-resolution tactile sensors and skins. Just as the widespread availability of digital cameras accelerated the development of computer vision, we believe we are rapidly approaching a new era of computational science dedicated to touch processing.
However, a key question is now becoming critically important as the field gradually transitions from hardware development to real-world applications: How do we make sense of touch?
While the output of modern high-resolution tactile sensors and skins shares similarities with computer vision, touch presents challenges unique to its sensing modality.
Unlike images, touch information is shaped by temporal dynamics, by the intrinsically active nature of tactile sensing, and by its locality: only a small subset of a 3D space is sensed at any time, on a 2D embedding.
We believe that AI/ML will play a critical role in successfully processing touch as a sensing modality. However, this raises important questions regarding which computational models are best suited to leverage the unique structure of touch, similar to how convolutional neural networks leverage spatial structure in images.
The development and advancement of touch processing will greatly benefit a wide range of fields, including tactile and haptic applications. For instance, advances in tactile processing (from the environment to the system) will enable robotic applications in unstructured environments, such as agricultural robotics and telemedicine. Understanding touch will also facilitate providing sensory feedback to amputees through sensorized prostheses and will enhance future AR/VR systems.
The goal of this second workshop on touch processing is to continue to develop the foundations of this new computational science dedicated to the processing and understanding of touch sensing. By bringing together experts with diverse backgrounds, we hope to continue discussing and nurturing this new field of touch processing and pinpoint its scientific challenges in the years to come. In addition, through this workshop, we hope to build awareness and lower the entry barrier for all AI researchers interested in exploring this new field. We believe this workshop can help build a community where researchers collaborate at the intersection of touch sensing and AI/ML.
Important Dates
- Submission deadline: 22 September 2024 (AoE, Anywhere on Earth), extended from 09 September 2024
- Notification: 09 October 2024 (AoE)
- Camera ready: 01 November 2024 (AoE), extended from 23 October 2024
Schedule
Time (Local Time, CST) | Title | Speaker |
---|---|---|
08:55 - 09:00 | Opening Remarks | Organizers |
09:00 - 09:30 | Skin Physiology and the Human Perception of Sliding Touch | Roland Bennewitz |
09:30 - 10:00 | Aligning Touch, Vision, and Language for Multimodal Perception | Raven Huang |
10:00 - 10:20 | Coffee Break | |
10:20 - 11:30 | Oral Talks | Paper Authors |
11:30 - 12:00 | Generating High-Resolution Perception with High-Resolution Tactile Sensing | Wenzhen Yuan |
14:00 - 14:30 | Tactile-Augmented Radiance Fields | Andrew Owens |
14:30 - 15:00 | Enhancing Physical Interactions via Touch | Yiyue Luo |
15:00 - 16:00 | Coffee Break + Poster Session | |
16:00 - 16:30 | Vision-based Multimodal Sensing at Large Area: Embodiment, Learning, and Control | Van Anh Ho |
16:30 - 17:30 | Panel Discussion | |
Oral Talk Order
- AnySkin: Plug-and-play Skin Sensing for Robotic Touch.
- Learning Precise, Contact-Rich Manipulation through Uncalibrated Tactile Skins.
- Aligning Touch, Vision, and Language for Multimodal Perception.
- An initial exploration of using Persistent Homology for Noise-Resilient Tactile Object Recognition.
- Smart Insole: Predicting 3D human pose from foot pressure.
- 3D-ViTac: Learning Fine-Grained Manipulation with Visuo-Tactile Sensing.
- Enhance Vision-based Tactile Sensors via Dynamic Illumination and Image Fusion.
- (Remote) Learning Force Distribution Estimation for the GelSight Mini Optical Tactile Sensor based on Finite Element Analysis.
- (Remote) A Machine Learning Approach to Contact Localization in Variable Density Three-Dimensional Tactile Artificial Skin.
- (Remote) Touch Processing for Terrain Classification with Feature Selection.
Speakers
- Roland Bennewitz (Leibniz Institute for New Materials, Germany)
- Raven Huang (University of California, Berkeley, United States)
- Van Anh Ho (Japan Advanced Institute of Science and Technology, Japan)
- Yiyue Luo (University of Washington, United States)
- Andrew Owens (University of Michigan, United States)
- Wenzhen Yuan (University of Illinois Urbana-Champaign, United States)
Accepted Papers
- Smart Insole: Predicting 3D human pose from foot pressure.
  Isaac Han, Seoyoung Lee, Sangyeon Park, Ecehan Akan, Yiyue Luo, Kyung-Joong Kim.
- Learning Force Distribution Estimation for the GelSight Mini Optical Tactile Sensor based on Finite Element Analysis.
  Erik Helmut, Luca Dziarski, Niklas Funk, Boris Belousov, Jan Peters.
- A Machine Learning Approach to Contact Localization in Variable Density Three-Dimensional Tactile Artificial Skin.
  Mitchell Murray, Yutong Zhang, Carson Kohlbrenner, Caleb Escobedo, Thomas Dunnington, Nolan Stevenson, Nikolaus Correll, Alessandro Roncone.
- 3D-ViTac: Learning Fine-Grained Manipulation with Visuo-Tactile Sensing.
  Binghao Huang, Yixuan Wang, Xinyi Yang, Yiyue Luo, Yunzhu Li.
- Enhance Vision-based Tactile Sensors via Dynamic Illumination and Image Fusion.
  Artemii Redkin, Zdravko Dugonjic, Mike Lambeta, Roberto Calandra.
- Aligning Touch, Vision, and Language for Multimodal Perception.
  Letian Fu, Gaurav Datta, Huang Huang, William Chung-Ho Panitch, Jaimyn Drake, Joseph Ortiz, Mustafa Mukadam, Mike Lambeta, Roberto Calandra, Ken Goldberg.
- AnySkin: Plug-and-play Skin Sensing for Robotic Touch.
  Raunaq Bhirangi, Venkatesh Pattabiraman, Mehmet Enes Erciyes, Yifeng Cao, Tess Hellebrekers, Lerrel Pinto.
- Learning Precise, Contact-Rich Manipulation through Uncalibrated Tactile Skins.
  Venkatesh Pattabiraman, Yifeng Cao, Siddhant Haldar, Lerrel Pinto, Raunaq Bhirangi.
- Touch Processing for Terrain Classification with Feature Selection.
  Stephen D. Liang.
- An initial exploration of using Persistent Homology for Noise-Resilient Tactile Object Recognition.
  Tarun Raheja, Nilay Pochhi.
Organizers
- Roberto Calandra (TU Dresden, Germany)
- Haozhi Qi (UC Berkeley, United States)
- Perla Maiolino (University of Oxford, United Kingdom)
- Mike Lambeta (Meta AI, United States)
- Jitendra Malik (UC Berkeley, United States)
- Yasemin Bekiroglu (Chalmers University of Technology, Sweden)
Call for Papers
We welcome submissions focused on all aspects of touch processing, including but not limited to the following topics:
- Computational approaches to process touch data.
- Learning representations from touch and/or multimodal data.
- Tools and libraries that can lower the barrier of touch sensing research.
- Collection of large-scale tactile datasets.
- Applications of touch processing.
We encourage relevant works at all stages of maturity, ranging from initial exploratory results to polished full papers. Accepted papers will be presented in the form of posters, with outstanding papers being selected for spotlight talks.
Submission Instructions
Submissions should use the NeurIPS Workshop template available here and be at most 4 pages long (plus unlimited pages for references and acknowledgments). The reviewing process is double-blind, so please keep your submission anonymous by using '\usepackage{neurips_2024}' (i.e., without the 'final' option) in your main tex file.
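For reference, a minimal preamble following these instructions might look like the sketch below. It assumes the standard NeurIPS 2024 style file (neurips_2024.sty) shipped with the template; consult the template itself for the authoritative options and layout.

```latex
\documentclass{article}

% Default (no options) produces the anonymized submission format required
% for double-blind review; [final] is typically reserved for the camera-ready.
\usepackage{neurips_2024}

\title{Your Workshop Submission Title}

\begin{document}

\maketitle

\begin{abstract}
  A short abstract of the submission.
\end{abstract}

% Main content: at most 4 pages, followed by references and acknowledgments.

\end{document}
```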
Accepted papers and any supplementary material will be made available on the workshop website. However, this does not constitute an archival publication, and no formal workshop proceedings will be produced; contributors are therefore free to publish their work in archival journals or conferences.
Submissions can be made at OpenReview.
Contacts
For questions related to the workshop, please email workshop@touchprocessing.org.