Perceptual Crossing Experiment in 3D

Perceptual Crossing Experiments (PCEs) are an attempt to investigate online situations where two people engage in real-time interaction, but how might adding extra dimensions lead to advancements in building more human-like AI?
By Pranav Pudu | November 27, 2023

My name is Pranav Pudu, and I am a rising sophomore at the University of Texas at Dallas studying computer science. This summer, I am working on conducting a Perceptual Crossing Experiment (PCE) in 3D using a haptic device. PCEs are an attempt to investigate online situations where two people engage in real-time interaction, which is fundamental to social dynamics. Auvray et al.'s (2009) original perceptual crossing paradigm is a minimalist setup for studying online social interaction: two users interact virtually along a one-dimensional line, each moving a cursor and clicking when they believe they have found the other person among the other agents in the environment (a static object and a moving shadow). Through PCE experiments, we can come to understand more about the intricacies of human interaction, leading to advancements in building more human-like AI.
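
To make the original paradigm concrete, here is a minimal sketch of one participant's view of the 1D environment, assuming a circular track, a shadow that trails the partner at a fixed offset, and an all-or-nothing tactile stimulus. The track length, offsets, and the `stimulate` hook are illustrative assumptions, not values from Auvray et al. (2009).

```cpp
// One participant's view of the 1D perceptual crossing environment:
// a circular track containing the partner's avatar, the partner's shadow
// (trailing at a fixed offset), and a static lure. All constants here are
// illustrative, not taken from the original study.
#include <algorithm>
#include <cmath>

const double TRACK_LENGTH  = 600.0;  // length of the circular 1D track (arbitrary units)
const double SHADOW_OFFSET = 150.0;  // shadow trails the partner by a fixed distance
const double OBJECT_WIDTH  = 4.0;    // every object has the same width on the line

// Wrap a coordinate onto the circular track.
double wrap(double x) {
    return x - TRACK_LENGTH * std::floor(x / TRACK_LENGTH);
}

// True if two object centres overlap on the circular track.
bool overlaps(double a, double b) {
    double d = std::fabs(wrap(a) - wrap(b));
    d = std::min(d, TRACK_LENGTH - d);
    return d < OBJECT_WIDTH;
}

// Called once per simulation step with both cursor positions and the static
// lure's position; delivers an identical tactile tick whenever my cursor
// crosses the partner, the partner's shadow, or the lure.
void step(double myPos, double partnerPos, double staticPos, void (*stimulate)()) {
    double shadowPos = wrap(partnerPos + SHADOW_OFFSET);
    if (overlaps(myPos, partnerPos) ||
        overlaps(myPos, shadowPos)  ||
        overlaps(myPos, staticPos)) {
        stimulate();  // the stimulation is identical for all three object types
    }
}
```

Because every object produces the same stimulation, participants can only tell the partner apart through the dynamics of the interaction itself, which is what makes the paradigm useful for studying online social interaction.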

As dimensions are added to the experiment, it becomes increasingly difficult for users to find each other's cursors. Therefore, in my 3D PCE, I chose a design where two users pull on haptic pens that are connected virtually with live force feedback, eliminating the need to locate the other agent. During the experiment, users pull on the pen for short intervals and then press a button to indicate whether they believe the agent on the other end is another human or a control agent. As in the original PCE, shadows and random movements can serve as the control agents. Adding dimensions to the PCE offers new perspectives that may inform research on building better AI. It also raises questions such as: Could AI learn to be indistinguishable from humans in an online social interaction?
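
One way to realize that virtual link is a spring-damper coupling: each device is continuously pulled toward the other device's pen tip, with the output clamped to a safe level. The sketch below illustrates the idea; the gains and the force clamp are assumptions that would still have to be tuned on the actual hardware.

```cpp
// Virtual spring-damper coupling between two haptic pens: each device is
// pulled toward the other device's tip so that both users feel as if they
// are holding the same pen. The gains and force clamp are assumed values.
#include <array>
#include <cmath>

using Vec3 = std::array<double, 3>;

const double STIFFNESS = 0.2;    // N/mm, spring gain (assumed)
const double DAMPING   = 0.002;  // N*s/mm, damping gain (assumed)
const double MAX_FORCE = 3.0;    // N, kept below the device's force limit

// Force to command on the local device, given both pen-tip positions (mm)
// and velocities (mm/s).
Vec3 couplingForce(const Vec3& localPos, const Vec3& localVel,
                   const Vec3& remotePos, const Vec3& remoteVel) {
    Vec3 f{};
    double mag2 = 0.0;
    for (int i = 0; i < 3; ++i) {
        f[i] = STIFFNESS * (remotePos[i] - localPos[i])
             + DAMPING   * (remoteVel[i] - localVel[i]);
        mag2 += f[i] * f[i];
    }
    // Clamp the magnitude so the commanded force never exceeds the safe output.
    double mag = std::sqrt(mag2);
    if (mag > MAX_FORCE) {
        for (int i = 0; i < 3; ++i) {
            f[i] *= MAX_FORCE / mag;
        }
    }
    return f;
}
```

If both devices run this same rule against each other's reported positions, pulling on one pen is felt as a tug on the other, which approximates the two-people-one-pen sensation the experiment calls for.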

Progress

The Touch haptic device from 3D Systems is a tool that allows users to feel virtual objects by providing force feedback. It was designed for tactile research, offering position sensing with 6 degrees of freedom and force feedback with 3 degrees of freedom. I have designed my 3D PCE around this device: I am trying to establish a virtual link between two devices so that two users, each pulling on their own pen while receiving live haptic feedback, experience something like two people pulling on the same pen. While static objects may not serve as useful control agents in this format, shadows of the user or random movements can be used as other agents. Upon receiving the device at the office, I began experimenting with it, gaining insight into its technical and physical limitations. I was able to manipulate demo environments in Unity using the Unity Editor and the OpenHaptics developer tools with C++.
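
As a concrete reference point, here is a minimal HDAPI servo-loop skeleton of the kind such a two-device link might use, following the structure described in the OpenHaptics programmer's guide: initialize the device, enable force output, and register a scheduler callback that reads the pen position and commands a force on every tick. The getRemotePosition stub and the spring gain are placeholders for the networking and tuning work that still remains.

```cpp
// Minimal OpenHaptics HDAPI skeleton: each servo tick, read the local pen
// position and command a force toward the remote pen. getRemotePosition()
// is a placeholder for the eventual network link between the two devices.
#include <HD/hd.h>
#include <HDU/hduVector.h>

// Placeholder: latest pen position received from the peer device.
hduVector3Dd getRemotePosition() {
    return hduVector3Dd(0.0, 0.0, 0.0);  // e.g. rest at the workspace centre for now
}

HDCallbackCode HDCALLBACK servoLoop(void* /*userData*/) {
    hdBeginFrame(hdGetCurrentDevice());

    hduVector3Dd localPos;
    hdGetDoublev(HD_CURRENT_POSITION, localPos);  // pen-tip position in mm

    const double k = 0.2;  // assumed spring gain (N/mm)
    hduVector3Dd remotePos = getRemotePosition();
    hduVector3Dd force;
    for (int i = 0; i < 3; ++i) {
        force[i] = k * (remotePos[i] - localPos[i]);  // pull toward the remote pen
    }
    hdSetDoublev(HD_CURRENT_FORCE, force);  // command a 3-DoF force in N

    hdEndFrame(hdGetCurrentDevice());
    return HD_CALLBACK_CONTINUE;  // keep running on every servo tick
}

int main() {
    HHD device = hdInitDevice(HD_DEFAULT_DEVICE);
    hdEnable(HD_FORCE_OUTPUT);  // allow the device to render forces
    hdStartScheduler();         // starts the high-rate servo loop
    hdScheduleAsynchronous(servoLoop, nullptr, HD_DEFAULT_SCHEDULER_PRIORITY);

    // ... run the experiment, then shut down cleanly ...
    hdStopScheduler();
    hdDisableDevice(device);
    return 0;
}
```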

Left: 3D Systems Touch; Right: PCE prototype in Unity

Initially, I worked on creating a prototype environment in Unity for the experiment, simulating a user pulling a pen against a mirrored image of themselves. However, the Haptic Direct plug-in for Unity only offered low-level control of the device, making it inadequate for the specific needs of the experiment, such as high-level control of force feedback. Consequently, I began exploring the OpenHaptics tools, which should allow me to generate forces directly with a specific direction and magnitude. In addition to mirroring, I have been attempting to create other control agents, such as a mirrored shadow, by recording the user's movement and replaying it, as sketched below. Other possible control agents include a delayed shadow, randomized movements, or an AI agent. Using the OpenHaptics programmer's guide, I have been experimenting with the API and gaining a better understanding of how to use the device. Because I have little prior experience with hardware, the most significant challenge so far has been working with the haptic technology itself. For the remainder of the summer, my goal is to complete and test a prototype environment in which a virtual link between two haptic devices is established and control agents (shadows, random movements, etc.) can be seamlessly implemented.
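
For the record-and-replay control agent, the idea is to buffer the user's recent pen positions and play them back, optionally mirrored and delayed, as the "partner's" position. Here is a rough sketch; the delay, the mirror axis, and the rest-pose fallback are illustrative assumptions.

```cpp
// Record-and-replay "shadow" control agent: store the user's recent pen
// positions and return them mirrored and delayed as the partner's position.
// The delay, mirror axis, and fallback pose are illustrative choices.
#include <array>
#include <cstddef>
#include <deque>

using Vec3 = std::array<double, 3>;

class ShadowAgent {
public:
    // delaySamples: how far in the past the replay lags, in servo-loop ticks
    // (e.g. 1000 ticks is roughly one second at a 1 kHz servo rate).
    explicit ShadowAgent(std::size_t delaySamples) : delay_(delaySamples) {}

    // Called every tick with the user's current pen position; returns the
    // position the control agent should present on the other end.
    Vec3 update(const Vec3& userPos) {
        history_.push_back(userPos);
        if (history_.size() > delay_) {
            Vec3 past = history_.front();  // position from delay_ ticks ago
            history_.pop_front();
            past[0] = -past[0];            // mirror across the x axis
            return past;
        }
        return {0.0, 0.0, 0.0};            // not enough history yet: rest pose
    }

private:
    std::size_t delay_;
    std::deque<Vec3> history_;
};
```

The same interface could later be driven by randomized movements or an AI agent, so swapping control agents would not require changes to the rest of the pipeline.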