Developing Research Paradigms in Virtual Reality

With the acquisition of an HTC Vive virtual reality system, the SPOCC lab has begun to plan new ways to study human behavior. Building research paradigms that involve virtual reality requires skills that psychologists are rarely taught; computer programming and troubleshooting have become the most important parts of preparing our research projects. The current learning goals for students who wish to do VR research in the SPOCC lab are:

To code applications that access data collected from the VR set-up.

To program simple stimuli to be presented through the VR head-mounted display.

To sync presented stimuli with data recorded and calculated from a user’s own motion.
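As a small, purely illustrative example of the first goal, the sketch below polls the headset for a pose each frame and logs timestamped samples to a CSV file for later analysis. It is written in Python and assumes a hypothetical read_hmd_pose() function standing in for whatever tracking call a given VR toolkit provides; the sampling rate and column names are likewise assumptions, not part of any actual SDK.

    import csv
    import time

    def read_hmd_pose():
        """Hypothetical placeholder: return (x, y, z, yaw, pitch, roll) from the headset.
        Replace this with the tracking call provided by the VR toolkit in use."""
        return (0.0, 0.0, 0.0, 0.0, 0.0, 0.0)

    def log_hmd_data(filename="hmd_log.csv", duration_s=60.0, rate_hz=90.0):
        """Sample the headset pose at a fixed rate and write timestamped rows to a CSV file."""
        period = 1.0 / rate_hz
        start = time.time()
        with open(filename, "w", newline="") as f:
            writer = csv.writer(f)
            writer.writerow(["t", "x", "y", "z", "yaw", "pitch", "roll"])
            while time.time() - start < duration_s:
                t = time.time() - start
                writer.writerow([t, *read_hmd_pose()])  # one row per sample
                time.sleep(period)

The same loop, with a drawing call added after each read, is the skeleton that the second and third goals build on.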

As the research questions become more complex here in the SPOCC lab, the skill sets of students who do that research become more multi-faceted. We are truly training resourceful researchers.

Peace.

Anthony Drew, M.S.

Putting Heads Together: Collaboration at work

One of the great things about studying perception and action is that it allows you to engage in interesting research and address issues from many different fields.

Recently we were lucky enough to work on a collaborative project with Dr. Harvey Thurmer (Dept. of Music) and Dr. Bill Berg (Kinesiology and Health), along with former graduate students Rachelle Wolfe, M.M. and Henry Cook, Ph.D.

The project involved a very timely question – how can we reduce repetitive strain injuries in violinists and violists (and really everyone)?

The study was unique in that it used assessment techniques from all three disciplines: performance training and evaluation using the Alexander Technique (body mindfulness), muscle activity evaluation using electromyography (EMG), and kinematic (movement) evaluation using motion capture (magnetic motion tracking).

recording both the performance and kinematics of performance

screenshot of partial motion capture and data

It also gave our students a chance to see how collaboration can work and how different types of researchers approach questions.

M. Howard ’18 and S. Laane ’17 outfit I. Held ’17 with sensors

The results of the initial study were published in June in Medical Problems of Performing Artists.

Here is a Miami newsletter article about the project.

The best part of this project, however, was being able to both learn and teach with colleagues and students!

-JS

Spring Update: PSY375

This year we had a small but dedicated group of students in our lab course (PSY375).

The students this year were interested in discovering how different types of constraints influence performance on a task. For some in the group, this was their first crack at the research process!

To figure out how to do this, we again did some brainstorming and organizing of ideas…

and designed a study to examine this question using a perceptual adaptation procedure involving motion capture and video games…

Set up showing prisms, motion capture and game play

Finally, presenting our results at this year's Undergraduate Research Forum!

A. Taylor, L. Bryne, J. Noble, & N. Schwabe

As always, proud of the work that our students do!

-JS

Studying Partial Deafferentation

An upcoming project will examine the viability of blood pressure cuffs as an experimental tool for altering sensory information. If found to be effective, this will allow us to experimentally induce symptoms of different disorders (such as anorexia nervosa) in healthy participants, which will help us better understand the mechanisms by which these disorders work.

-Max Teaford

Studying workflow in a virtual task

What is the best way to approach a complex task?

How might this approach be modified based on changing constraints?

These are questions that Alex Feltz, B.A., Sarah Laane, B.A., and Valencia Brown, B.A. sought to answer using a virtual task last semester.

They noticed that employees are often asked to follow a certain set of work instructions (procedures) that specify how they are to do their job. In many cases they are also given time constraints to ensure that the necessary amount of a given product is produced. Working under these constraints can be stressful, particularly when the constraints exceed what can be done safely and accurately. Their research was conducted to determine how time constraints affect the performance of complex tasks.

They tested this by looking at how participants performed in a virtual building task when given a specific set of instructions and varying time constraints.

Setup for the study. Participants were tasked with building virtual houses

They found that placing unrealistic time constraints on the task resulted in participants deviating from the instructions, either by skipping (combining) steps or omitting (forgetting) them. However, performance did improve over trials, and analysis of participants' physical movement suggests that they were getting better even if they could not complete the task in the manner specified.
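As a purely illustrative sketch of how that kind of deviation scoring might be done (the study's actual coding scheme is not described here, and the step names below are made up), a performed step sequence can be compared against the instructed sequence to count omitted and out-of-order steps:

    INSTRUCTED = ["lay foundation", "raise walls", "add roof", "install door", "paint"]

    def score_deviations(performed, instructed=INSTRUCTED):
        """Count steps that were omitted or performed out of the instructed order."""
        omitted = [step for step in instructed if step not in performed]
        order = {step: i for i, step in enumerate(instructed)}
        out_of_order = 0
        last_rank = -1
        for step in performed:
            rank = order.get(step)
            if rank is None:
                continue  # ignore steps that are not in the instructions
            if rank < last_rank:
                out_of_order += 1
            last_rank = max(last_rank, rank)
        return {"omitted": omitted, "out_of_order": out_of_order}

    # Example: a trial in which the roof went on before the walls and the door was forgotten.
    print(score_deviations(["lay foundation", "add roof", "raise walls", "paint"]))
    # {'omitted': ['install door'], 'out_of_order': 1}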

So time constraints can alter the way in which a task is completed, but may not always be detrimental.

-AF, SL, VB

Welcome Dr. Shen!

Dr. Xunbing Shen will be working with us this year. He is currently affiliated with Jiangxi University of Traditional Chinese Medicine in Nanchang, China. His recent research focuses on the recognition of facial micro-expressions of emotions. Dr. Shen mentions that in his master's thesis he explored the effects of perception conflict on pointing and grasping, and that he is currently doing research on patients with traumatic brain injury, including rehabilitation of their gait. This overlaps with our research interests in perception and action, and we look forward to a mutually beneficial collaboration.

Dr. Shen and Dr. Smart celebrating Chinese New Year

Using Postural Motion to Predict Visually Induced Motion Sickness in Real Time


New: Thanks to your support of our Hawksnest fundraiser and the Department of Psychology, we were able to purchase an HTC Vive system and upgrade our data collection machine!

Virtual reality technology is becoming a large market force. Facebook recently purchased the Oculus Rift VR project for 2 billion dollars, and the HTC Vive sold 100,000 units in the first three months following its release. With the reemergence of VR technologies has also come an awareness of certain compatibility issues between the tech and the humans who use it. A substantial number of VR users may experience headache, nausea, or a host of other symptoms that have been collectively dubbed simulator sickness or visually induced motion sickness (VIMS). Different techniques for combating the effects of VIMS have been explored, such as restricting visual information or using electrophysiological interventions to simulate vestibular motion, but these techniques have drawbacks that may reduce the quality of the VR experience for users. The SPoCC lab has explored this issue by attempting to predict the onset of sickness symptoms using data collected in real time from a human user.

In the past semester, we in the SPoCC lab conducted an experiment to test an algorithm designed to predict the incidence of motion sickness while using a virtual reality headset. We brought participants into the Visualization Lab located in Armstrong Interactive Media Studies and asked them to view stimuli through an Oculus Rift VR headset. During the experiment we measured participants' postural motion using the position-tracking capabilities on board the Oculus Development Kit 2. By measuring this movement data in real time, we hoped to predict whether a participant was going to become motion sick before the participant themselves reported symptoms.


The experiment used a simulated star field as a stimulus. Participants completed one 10-minute baseline trial during which the star field remained stationary. After the baseline, participants were exposed to one of two conditions. In the first condition, the star field swayed at a constant rate of 0.2 Hz and was not in any way coupled to the participant's own motion. In the second condition, the star field swayed at a rate that was directly matched to the participant's own motion but was reversed in direction: if a participant swayed forward, the star field swayed backward, thereby reversing the natural coupling between visual stimuli and postural sway.
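For readers curious how such conditions can be driven, here is a minimal Python sketch of the per-frame update. It assumes hypothetical tracking read-out and rendering functions, and the gain and sway amplitude are illustrative choices, not our actual parameters.

    import math

    SWAY_FREQ_HZ = 0.2     # constant-sway condition, as described above
    COUPLING_GAIN = 1.0    # assumed gain for the reversed-coupling condition
    SWAY_AMPLITUDE = 0.05  # assumed amplitude (meters) for the constant-sway condition

    def starfield_offset(condition, t, head_z, baseline_z):
        """Return the star field's fore-aft offset for the current frame.

        t          -- time since the trial started (seconds)
        head_z     -- current fore-aft head position from the headset's tracker
        baseline_z -- the participant's mean head position from the baseline trial
        """
        if condition == "constant":
            # Sinusoidal sway at 0.2 Hz, independent of the participant's motion.
            return SWAY_AMPLITUDE * math.sin(2.0 * math.pi * SWAY_FREQ_HZ * t)
        if condition == "reversed":
            # Sway matched to the participant's own fore-aft motion but reversed in sign:
            # leaning forward makes the star field move backward.
            return -COUPLING_GAIN * (head_z - baseline_z)
        raise ValueError("unknown condition: " + condition)

Each frame, the renderer would simply shift the star field by the returned offset.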

Our prediction algorithm correctly predicted sickness in 90% of the original participants (whose data the algorithm was based on) and 57% of new participants. Predicting sickness in individuals at a rate above chance is a major step toward developing a dynamic intervention for users of VR tech. We in the lab seek to improve this accuracy rate by weighting outcome measures and norming parameters to larger samples.
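The post does not spell out the algorithm itself, but predictors of this kind commonly track postural sway variability in a sliding window and flag a participant when it rises well above that participant's own baseline. The Python sketch below shows only that general idea; the window length, feature, and threshold are assumptions, not the parameters we actually used.

    from collections import deque
    import statistics

    class SwayMonitor:
        """Flag likely motion sickness when sway variability exceeds a baseline-referenced threshold."""

        def __init__(self, baseline_sd, window_size=120, threshold=3.0):
            self.baseline_sd = baseline_sd      # sway SD measured during the stationary baseline trial
            self.window = deque(maxlen=window_size)
            self.threshold = threshold          # flag when windowed SD > threshold * baseline SD

        def update(self, head_position):
            """Add one head-position sample; return True if sickness risk is flagged."""
            self.window.append(head_position)
            if len(self.window) < self.window.maxlen:
                return False                    # wait until the window is full
            return statistics.stdev(self.window) > self.threshold * self.baseline_sd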

-Anthony Drew, M.S.

Extending Lab work to class work

This semester we “rebooted” a laboratory class in Perception, Action, and Cognition (PSY 375). It is meant to serve as a way for students to get “experiential learning” in our major without having to join an actual lab. The course will rotate among faculty so students can learn a number of techniques and methodologies.

Since I facilitated the course this semester, we focused on the behavioral techniques (motion capture) that we use in our lab and, because many in the class were new to the research process, on how to generate ideas. We started by jotting ideas down on a giant "post-it" sheet on the wall, each class adding a little more detail and a little more structure, and watched as two very solid research questions emerged (and very different questions from what I imagined, bridging perception/action and social perception research).

Once we had questions, we began to work on methodology and procedure, still using the "post-it" method. Our wall was becoming external memory, helping us remember not only where we were but where we started.

Ideas to questions to methods

We then started creating our materials and piloting, followed by a bit of data collection (not as much as we would have liked, but that is also part of the research process)…

equipment, set-up, and data

And finally presenting our initial results…

presenting at Miami URF and Hinkle (psychology) sessions

While there were challenges, this was a great experience for all, and to me it cemented the idea that active collaboration and design can yield novel and exciting ideas and knowledge! Looking forward to doing this again next spring!

-JS

An interesting dissociation of perception and action in visual crowding

I have mainly been preparing for comps (forming the committee, finalizing the reading list, and reading articles) this semester. I am interested in visual crowding, the deterioration of object identification in cluttered scenes.

A demonstration of the crowding effect. The letter 'r' can be recognized when it is presented alone (left), but becomes unrecognizable when it is surrounded by two letters (right)
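For anyone who wants to recreate a demonstration like the one above, here is a short matplotlib sketch that draws an isolated target and a flanked target at the same distance from a fixation cross; fixate the '+' and compare how easily the peripheral 'r' can be identified in each panel. The letter sizes and spacings are arbitrary choices, not taken from any particular study.

    import matplotlib.pyplot as plt

    fig, axes = plt.subplots(1, 2, figsize=(8, 3))
    for ax, flanked in zip(axes, [False, True]):
        ax.set_xlim(0, 10)
        ax.set_ylim(0, 3)
        ax.axis("off")
        ax.text(1, 1.5, "+", fontsize=24, ha="center", va="center")   # fixation cross
        ax.text(8, 1.5, "r", fontsize=24, ha="center", va="center")   # peripheral target
        if flanked:
            # Flanking letters placed close to the target induce crowding.
            ax.text(7.3, 1.5, "a", fontsize=24, ha="center", va="center")
            ax.text(8.7, 1.5, "b", fontsize=24, ha="center", va="center")
        ax.set_title("target alone" if not flanked else "target with flankers")
    plt.show()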

While reading these comps articles, I am also trying to come up with research ideas. I have paid particular attention to combining crowding with other fields, including motion and action, and in doing so I found some interesting studies.

In one study by Chen, Sperandio and Goodale (2014), participants were seated in front of a table. On the table there were several disks, with the central one being the target. Participants were asked to complete two tasks: the perceptual task required them to report the perceived size of the target disk by opening their thumb and index finger to match it, while the visually guided task required them to reach out and grasp the target disk (see figure below). They found that participants could not accurately perceive the size of the crowded target disk, but even so, they could still scale their grasp to the real size of the disk. In a later study, Chen and colleagues showed the same dissociation between perception and visually guided grasping in a shape-crowding task. That is, participants could not perceive the shape of the target block, but could scale their grasp to its width. Furthermore, the authors found that when these right-handed participants were asked to do the same task with their left hand, grasping could not escape the crowding effect, just like perception.

 

Experimental Setup

Chen, J., Jayawardena, S., & Goodale, M. A. (2015). The effects of shape crowding on grasping. Journal of Vision, 15(3).

Chen, J., Sperandio, I., & Goodale, M. A. (2014). Differences in the Effects of Crowding on Size Perception and Grip Scaling in Densely Cluttered 3-D Scenes. Psychological Science, 26(1), 58-69.

-MG