Immersive Media in Medicine Symposium

On October 24th and 25th, the Cornell campuses came together for a cross-campus Immersive Media in Medicine Symposium. Held at the Belfer Research Building in New York City, the symposium drew approximately 175 in-person registrants, with another 25 joining remotely from the Ithaca campus.

The symposium, co-chaired by Dr. Andrea Stevenson Won and Dr. JoAnn Difede, focused on translational research in immersive media (augmented and virtual reality) for use in medicine and healthcare education, and offered talks, panel discussions, workshops, and a poster session.

The symposium included talks on Embodiment in Immersive Media, Accessibility in Immersive Media, Immersive Media and Entrepreneurship, and more. There were also panel discussions highlighting topics such as Immersive Media in Medical Education and Immersive Media in Psychiatry, as well as hands-on workshops.


Dr. Andrea Stevenson Won giving her talk on Embodiment in Immersive Media

PhD candidate Swati Pandita led a panel discussion highlighting VR for beginners, and VEL research assistants Hal Rives, Jessie Yee, and Josh Zhu also participated.


Graduate student Swati Pandita leading a panel discussion on VR for beginners. Panel members L-R: Mariel Emrich, Joshua Zhu, Hal Rives, Harrison Resnick, Jessie Yee.

 

VEL featured in Chronicle Article

The results of our first study in collaboration with the Cornell Physics Education Research Lab (C-PERL) were recently featured in the Cornell Chronicle:

Swati Pandita and Jack Madden

Thanks to Linda Glaser for her great article, which points out the importance of distinguishing between enthusiasm for VR and actual learning gains. VEL is currently preparing to launch the second study in this series this semester.

Embodiment’s Effect on Behavior

Avatar creation is at the forefront of VR technology. Allowing individuals to embody an avatar they created themselves makes for a more immersive and engaging experience. But what happens when that avatar doesn't quite look like them? Senior research assistant Aishwariyah Dhyan Vimal aimed to answer this very question.


The Grocery Store

In her recent study, participants were asked to shop in a virtual grocery store, set in a "food desert," for one week's worth of groceries. Items varied in relative health benefits as well as price, and each participant had a budget of $60. The experimental variable was the assigned avatar, which was either slender or obese.

Aishy set out to find out whether the embodied avatar would affect participants' shopping habits.

In an effort to make the experience more authentic, each participant created a unique head for their avatar using facial generation technology. Some participants noticed their avatar's relative obesity immediately; one even said, "Whoa, I'm fat."

When asked what motivated this project, Aishy pointed to public policy. "I looked at the public policy to reduce obesity and food deserts. The obesity epidemic in the USA continues to worsen and the implementation of the public policy will help reduce this problem." The research also looked specifically at food deserts, asking whether participants would be more supportive of public policy to reduce obesity.


Aishy noted the challenges of working on a senior honors thesis and having to learn many new skills to make everything work: "From creating a virtual reality environment in Unity, creating customizable avatar heads, Qualtrics survey, data analysis, and conducting an actual lab experiment." She ended by saying how happy she is that she pushed through and finished.

Undergrad research assistants create new “Pit Demo” for VEL

The VEL research team recently created a "Pit Demo" to observe how the sense of "presence" affects us in a virtual world. Participants in this demo are able to move freely around a world that is an exact replica of the lab. Senior research assistant Sydney Smith modeled the rooms in 3DS Max and imported them into Unity 3D. Functionality was implemented by Jason Wu and Daniel Tagle, who wrote scripts to collapse the floor on a keypress, allowing participants to see a "pit" appear below their feet leading down to the floor of Mann Library below, and to pick up and throw objects from the room into the pit. Below, research assistant Claudia Morris tests out the pit demo. The plank she is standing on matches the digital model of the plank in the virtual scene, providing passive haptic feedback.
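For the curious, here is a minimal, hypothetical sketch of how a floor-collapse script like this might look in Unity. The class and field names are our own illustration; Jason and Daniel's actual scripts may be organized quite differently.

    using UnityEngine;

    // Illustrative sketch only: hide the replica floor on a keypress,
    // revealing the "pit" below and letting thrown objects fall through.
    public class FloorCollapse : MonoBehaviour
    {
        [SerializeField] private GameObject floor;                    // the replica lab floor (assumed setup)
        [SerializeField] private KeyCode triggerKey = KeyCode.Space;  // experimenter-controlled keypress

        void Update()
        {
            if (Input.GetKeyDown(triggerKey))
            {
                floor.GetComponent<MeshRenderer>().enabled = false;  // floor disappears visually
                floor.GetComponent<Collider>().enabled = false;      // objects can now fall into the pit
            }
        }
    }

Disabling the collider as well as the renderer matters: without it, dropped objects would still land on an invisible floor instead of falling into the pit.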

Get to know the undergraduate researchers working on this project


Daniel Tagle (on left) is a senior studying Communication. He has worked on Perspective Taking in Virtual Reality and is now leading the Pit Demo team. A research assistant in the lab since September 2016, Daniel originally became interested in working in VR through being an avid gamer. He is fascinated by how people can interact with others in virtual worlds, and is looking forward to working with virtual reality for many years to come.

Jason Wu (on right) is a junior majoring in Information Science, with a minor in Architecture. He is interested in the spatial qualities of virtual reality, as well as its potential in facilitating social experiences. He recently competed in HackReality NYC, where his project “Wanderlust” was awarded first place.

Sydney Smith is a senior majoring in Communication with a focus in media studies. Her main role in this project has been modeling the lab in 3DS Max, as well as the "Pit" in the demo. She hopes to continue modeling, creating more realistic worlds, as well as sharpening her skills as a 360 videographer.

Tracking nonverbal behavior in High Fidelity

Tracking the movements of participants in virtual environments is key to our research. The screenshot above shows the summed movements of two participants' heads and hands as they converse in High Fidelity, a shared virtual environment that allows users in different locations to meet in virtual worlds. Omar Shaikh created the tracking visualizer, and Yilu Sun is conducting experiments using this platform.
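As a rough illustration of what "summed movement" means here, the short sketch below totals the distance a single tracked point (a head or a hand) travels between consecutive position samples. This is our own simplified example, not the code behind Omar's visualizer.

    using System;
    using System.Collections.Generic;
    using System.Numerics;

    // Simplified, hypothetical example: sum the sample-to-sample distance
    // travelled by one tracked point over a recording.
    static class MovementSum
    {
        public static float PathLength(IReadOnlyList<Vector3> samples)
        {
            float total = 0f;
            for (int i = 1; i < samples.Count; i++)
                total += Vector3.Distance(samples[i - 1], samples[i]);
            return total;
        }

        static void Main()
        {
            // Three made-up head positions (in meters) from a tracked session.
            var head = new List<Vector3>
            {
                new Vector3(0.00f, 1.60f, 0.00f),
                new Vector3(0.02f, 1.61f, 0.01f),
                new Vector3(0.05f, 1.60f, 0.03f),
            };
            Console.WriteLine($"Summed head movement: {PathLength(head):F3} m");
        }
    }

Summing head and both hands for each participant, as in the screenshot, gives one rough index of how much nonverbal motion each person produced during the conversation.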