Touch is a crucial sensor modality for both humans and robots, as it allows us to directly sense object properties and interactions with the environment. Recently, touch sensing has become more prevalent in robotic systems, thanks to the increased accessibility of inexpensive, reliable, and high-resolution tactile sensors and skins. Just as the widespread availability of digital cameras accelerated the development of computer vision, we believe that we are rapidly approaching a new era of computational science dedicated to touch processing.

However, a key question is now becoming critically important as the field gradually transitions from hardware development to real-world applications: How do we make sense of touch?
While the output of modern high-resolution tactile sensors and skins resembles camera imagery, touch presents challenges unique to its sensing modality. Unlike images, touch information is shaped by its temporal dynamics, by its intrinsically active nature, and by its highly local character, in which only a small region of 3D space is sensed through a 2D embedding. We believe that AI/ML will play a critical role in successfully processing touch as a sensing modality. However, this raises important questions about which computational models are best suited to leverage the unique structure of touch, much as convolutional neural networks leverage the spatial structure of images.

The development and advancement of touch processing will greatly benefit a wide range of fields, spanning both tactile sensing and haptics. For instance, advances in tactile processing, from raw sensing of the environment to system-level understanding, will enable robotic applications in unstructured settings such as agricultural robotics and telemedicine. Understanding touch will also help provide sensory feedback to amputees through sensorized prostheses and enhance future AR/VR systems.

The goal of this second workshop on touch processing is to continue developing the foundations of this new computational science dedicated to the processing and understanding of touch sensing. By bringing together experts with diverse backgrounds, we hope to keep nurturing this young field and to pinpoint its scientific challenges in the years to come. In addition, through this workshop we hope to raise awareness and lower the entry barrier for all AI researchers interested in exploring this new field. We believe this workshop can help build a community where researchers collaborate at the intersection of touch sensing and AI/ML.

Important dates

Schedule

Time (Local Time, CST) | Title | Speaker
08:55 - 09:00 | Opening Remarks | Organizers
09:00 - 09:30 | Skin Physiology and the Human Perception of Sliding Touch | Roland Bennewitz
09:30 - 10:00 | Aligning Touch, Vision, and Language for Multi-modal Perception | Raven Huang
10:00 - 10:20 | Coffee Break
10:20 - 11:30 | Oral Talks | Paper Authors
11:30 - 12:00 | Generating High-Resolution Perception with High-Resolution Tactile Sensing | Wenzhen Yuan
14:00 - 14:30 | Tactile-Augmented Radiance Fields | Andrew Owens
14:30 - 15:00 | Enhancing Physical Interactions via Touch | Yiyue Luo
15:00 - 16:00 | Coffee Break + Poster Session
16:00 - 16:30 | Vision-based Multimodal Sensing at Large Area: Embodiment, Learning, and Control | Van Anh Ho
16:30 - 17:30 | Panel Discussion

Oral Talk Order

  1. AnySkin: Plug-and-play Skin Sensing for Robotic Touch.
  2. Learning Precise, Contact-Rich Manipulation through Uncalibrated Tactile Skins.
  3. Aligning Touch, Vision, and Language for Multimodal Perception.
  4. An initial exploration of using Persistent Homology for Noise-Resilient Tactile Object Recognition.
  5. Smart Insole: Predicting 3D human pose from foot pressure.
  6. 3D-ViTac: Learning Fine-Grained Manipulation with Visuo-Tactile Sensing.
  7. Enhance Vision-based Tactile Sensors via Dynamic Illumination and Image Fusion.
  8. (Remote) Learning Force Distribution Estimation for the GelSight Mini Optical Tactile Sensor based on Finite Element Analysis.
  9. (Remote) A Machine Learning Approach to Contact Localization in Variable Density Three-Dimensional Tactile Artificial Skin.
  10. (Remote) Touch Processing for Terrain Classification with Feature Selection.

Speakers

Accepted Papers

Organizers

Call for Papers

We welcome submissions focused on all aspects of touch processing, including but not limited to the following topics:

We encourage relevant works at all stages of maturity, ranging from initial exploratory results to polished full papers. Accepted papers will be presented in the form of posters, with outstanding papers being selected for spotlight talks.

Submission Instructions

Submissions should use the NeurIPS Workshop template available here and be at most 4 pages long (plus unlimited pages for references and acknowledgments). The reviewing process will be double-blind, so please keep your submission anonymous by using ‘\usepackage{neurips_2024}’ in your main tex file.
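For reference, a minimal main .tex file might look like the sketch below. This is only an illustration: the title and abstract are placeholders, and it assumes the standard behavior of the NeurIPS 2024 style file, which anonymizes the author block when the ‘final’ option is omitted.

  \documentclass{article}

  % Sketch of a minimal submission, assuming the standard
  % NeurIPS 2024 style file from the workshop template.
  % Omitting the [final] option keeps the author block
  % anonymized, as required for double-blind review.
  \usepackage{neurips_2024}

  \title{Your Paper Title}

  \begin{document}

  \maketitle

  \begin{abstract}
    Abstract text goes here.
  \end{abstract}

  % Main content: at most 4 pages, with references and
  % acknowledgments on additional, uncounted pages.

  \end{document}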

Accepted papers and any supplementary material will be made available on the workshop website. However, this does not constitute an archival publication, and no formal workshop proceedings will be produced, meaning contributors remain free to publish their work in archival journals or conferences.

Submissions can be made at OpenReview.

Contacts

For questions related to the workshop, please email workshop@touchprocessing.org.

Sponsors

CeTI and SECAI