Below we outline the BIMLab’s current project areas, highlighting the objectives, outcomes to date, and the researchers working on each project.
Telestration in Surgery
Gestures, deictic referencing (pointing at objects and regions of interest), and manipulation of digital images are an integral aspect of decision making in collaborative scientific and medical work. In modern minimally invasive surgical interventions, medical imaging has come to play an increasingly important role, but due to concerns of asepsis, image manipulation during surgical decision making is typically constrained. In this project, we are exploring the use of new technologies, such as the Kinect, to address this issue by developing techniques for “touchless” interaction that coordinate and enhance communication among team members in the operating room. The results of this research will provide a deeper understanding of collaborative practices around image use and of the benefit of technological tools for annotating and referencing those images, which could ultimately improve patient outcomes. The research aims are: to identify coordinated practices and their relationship to imaging use through a detailed field study of laparoscopic surgery; to iteratively design and implement a gestural image annotation prototype for laparoscopic surgeons to reference and annotate endoscopic video; and to determine the effects of image manipulation on coordinated surgical practice through an experimental study.
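To make the interaction concrete, here is a minimal sketch of the core telestration loop such a prototype might run, assuming a Kinect-style skeletal tracker that reports the surgeon’s hand position (in metres, relative to the shoulder) and whether the hand is closed; the class name, interaction-box dimensions, and gesture mapping are illustrative assumptions, not the deployed system.

```python
# Minimal sketch of touchless telestration: a tracked hand drives an on-screen
# cursor over the endoscopic video, and closing the hand inks an annotation
# stroke. Tracking input is abstracted into the (hand_x, hand_y, hand_closed)
# sample fed to update(); all names and dimensions here are hypothetical.

from dataclasses import dataclass, field

# Interaction box in sensor space (metres, relative to the shoulder): hand
# movement inside this volume is mapped onto the full video frame.
BOX_X = (-0.3, 0.3)
BOX_Y = (-0.2, 0.4)
FRAME_W, FRAME_H = 1920, 1080

@dataclass
class Telestrator:
    strokes: list = field(default_factory=list)   # committed annotation strokes
    current: list = field(default_factory=list)   # stroke being drawn

    def to_screen(self, hand_x, hand_y):
        """Map a hand position in the interaction box to pixel coordinates."""
        u = (hand_x - BOX_X[0]) / (BOX_X[1] - BOX_X[0])
        v = 1.0 - (hand_y - BOX_Y[0]) / (BOX_Y[1] - BOX_Y[0])  # screen y grows downward
        u = min(max(u, 0.0), 1.0)
        v = min(max(v, 0.0), 1.0)
        return int(u * FRAME_W), int(v * FRAME_H)

    def update(self, hand_x, hand_y, hand_closed):
        """Feed one tracking sample; a closed hand inks, an open hand just points."""
        cursor = self.to_screen(hand_x, hand_y)
        if hand_closed:
            self.current.append(cursor)            # extend the stroke
        elif self.current:
            self.strokes.append(self.current)      # hand opened: commit the stroke
            self.current = []
        return cursor                              # cursor position for rendering
```

Because input arrives without any touch, the open/closed hand state does the work a mouse button would, which keeps the surgeon’s hands free of non-sterile surfaces.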
Current Researchers: Yuanyuan Feng, Jordan Ramsey, Jatin Chhikara, Bhushan Sontakke, Helena Mentis
Former Researchers: Christopher Wong, Veeha Khanna, Meredith Evans
Surgical Telementoring
The objective of this project is to investigate the benefit of collaborative image interaction in conveying expert knowledge during distributed work. Although there have been great advances in the compression and transfer of audio and video signals for telecommunication, there is still a significant challenge in providing appropriate tools for conveying expert knowledge in distributed work settings. Surgical telemedicine is one domain where collaborative interaction with highly specialized images is vital for the efficient and effective conveyance of expert information, yet the collaborating individuals may not have the same level of expertise. The research aims are: to determine the verbal and non-verbal mechanisms for conveying expert knowledge in co-located and distributed collaboration with images; to develop a prototype for distributed collaborative image interaction; to determine the effects of collaborative image interaction on expert communication processes and performance outcomes; and to determine the effects of, and reactions to, collaborative image interaction on distributed work practices.
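As one way to picture what distributed collaborative image interaction involves at the systems level, the sketch below shows a minimal event layer: annotation strokes are serialized as timestamped JSON messages and relayed to every other connected site so they can be rendered over the shared video. The message fields, port, and relay design are illustrative assumptions, not the lab’s prototype or a published protocol.

```python
# Minimal sketch of a telestration event relay using only the standard
# library: each site sends newline-delimited JSON strokes, and the relay
# echoes them to every other site. All fields and names are hypothetical.

import asyncio
import json
import time

PEERS = set()  # open writer streams, one per connected site

def make_stroke_msg(points, author):
    """Encode one annotation stroke; video_ts lets receivers align the stroke
    with the frame it was drawn over despite network delay."""
    return (json.dumps({
        "type": "stroke",
        "video_ts": time.time(),
        "author": author,
        "points": points,          # [[x, y], ...] in normalized image coordinates
    }) + "\n").encode()

async def handle_site(reader, writer):
    PEERS.add(writer)
    try:
        while line := await reader.readline():   # b"" at EOF ends the loop
            try:
                json.loads(line)                 # sanity-check before relaying
            except json.JSONDecodeError:
                continue                         # drop malformed lines
            for peer in PEERS:
                if peer is not writer:           # echo to every other site
                    peer.write(line)
                    await peer.drain()
    finally:
        PEERS.discard(writer)

async def main():
    server = await asyncio.start_server(handle_site, "0.0.0.0", 8765)
    async with server:
        await server.serve_forever()

if __name__ == "__main__":
    asyncio.run(main())
```

Timestamping each stroke against the shared video matters because network delay can otherwise leave a remote expert’s annotation pointing at anatomy that has since moved out of frame.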
Current Researchers: Yuanyuan Feng, Helena Mentis
Seeing Movement
Movement sensors have been touted as providing the next generation of health care through objective measurements that will replace “subjective” health assessments and remove the fallible memory of patients from health decision making. However, assessing the level of disability and the efficacy of treatment are iterative and constructive acts that rely on an alignment of shared perceptions between clinician and patient. The challenge, then, is to utilize movement sensors as a resource for this co-interpretation rather than as a replacement for it. The research aims of this project are: to engage in fieldwork of movement assessment for Parkinson’s and stroke patients; to employ design explorations of sensors for evaluating body movements and visualizing the sensor data; and to investigate the uptake of the system in the assessment and co-interpretation practices of clinicians and patients. Our findings have highlighted how sensors can provide much-needed, co-interpreted assessment of movement, but also how they can intrude on this process when interpretive authority shifts to the clinician or to the sensor itself.
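To illustrate the kind of sensor-derived view that could serve as a resource for co-interpretation rather than a verdict, here is a minimal sketch that computes power in the 4–6 Hz band of a wrist accelerometer signal, a range classically associated with parkinsonian rest tremor; the sampling rate, band, and processing are illustrative assumptions, not the lab’s assessment protocol.

```python
# Minimal sketch of one sensor-derived metric a co-interpretation display
# might plot over time: spectral power in the 4-6 Hz band of the
# gravity-removed accelerometer magnitude. All parameters are illustrative.

import numpy as np

FS = 50.0  # assumed accelerometer sampling rate (Hz)

def tremor_band_power(accel, fs=FS, band=(4.0, 6.0)):
    """accel: (n, 3) array of x/y/z acceleration samples. Returns the mean
    spectral power of the magnitude signal within the given frequency band."""
    mag = np.linalg.norm(accel, axis=1)
    mag = mag - mag.mean()                      # crude gravity/DC removal
    spectrum = np.abs(np.fft.rfft(mag)) ** 2 / len(mag)
    freqs = np.fft.rfftfreq(len(mag), d=1.0 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return float(spectrum[in_band].mean())
```

Plotted over a session, such a trace acts as a shared talking point: clinician and patient can see where the signal spikes and relate it to what the patient was doing, rather than letting the number stand in for either party’s judgment.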
Current Researchers: Adegboyega Akinsiku, Ezana Dawit, Ganesh Pradham
Former Researchers: Rita Shewbridge, Courtney Pharr, Kyle Althoff