Projects

Below we outline the BIMLab’s current project areas, highlighting the objectives, outcomes to date, and the researchers working on each project.

Telestration in Surgery

Gestures, deictic referencing (pointing at objects and regions of interest), and manipulation of digital images are an integral aspect of decision making in collaborative scientific and medical work. In modern minimally invasive surgical interventions, medical imaging has come to play an increasingly important role, but due to asepsis concerns, image manipulation during surgical decision making is typically constrained. In this project, we are exploring the use of new technologies, such as the Kinect, to address this issue by developing techniques for “touchless” interaction that coordinate and enhance communication among team members in the operating room. The results of this research will provide a deeper understanding of collaborative practices around image use and the benefit of technological tools for annotating and referencing those images, which will significantly benefit patient outcomes. The research aims are: to identify coordinated practices and their relationship to image use through a detailed field study of laparoscopic surgery; to iteratively design and implement a gestural image annotation prototype for laparoscopic surgeons to reference and annotate endoscopic video; and to determine the effects of image manipulation on coordinated surgical practice through an experimental study.
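To give a concrete flavor of what touchless annotation involves, the sketch below (a minimal illustration, not the lab's prototype) maps a tracked hand position from a depth-sensor skeleton stream onto an endoscopic video frame as a telestration mark. The normalized-coordinate input and the annotate_frame function are assumptions for illustration; a real system would read hand positions from the Kinect SDK and draw over the live endoscopic feed.

# Minimal sketch (not the lab's implementation): mapping a tracked hand
# position onto a video frame as a telestration mark. The sensor input is
# stubbed with normalized coordinates rather than a live Kinect stream.
import numpy as np
import cv2

def annotate_frame(frame, hand_norm_xy, radius=12):
    """Draw a pointer mark where the tracked hand maps onto the frame.

    hand_norm_xy: (x, y) in [0, 1], hand position relative to the sensor's
    interaction zone (a hypothetical normalization).
    """
    h, w = frame.shape[:2]
    x = int(hand_norm_xy[0] * w)
    y = int(hand_norm_xy[1] * h)
    annotated = frame.copy()
    cv2.circle(annotated, (x, y), radius, (0, 255, 0), thickness=2)
    return annotated

if __name__ == "__main__":
    # Stand-in for an endoscopic video frame and a tracked hand position.
    frame = np.zeros((480, 640, 3), dtype=np.uint8)
    marked = annotate_frame(frame, (0.62, 0.40))
    print("mark drawn on frame of shape", marked.shape)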

Current Researchers: Yuanyuan Feng, Jordan Ramsey, Jatin Chhikara, Bhushan Sontakke, Helena Mentis
Former Researchers: Christopher Wong, Veeha Khanna, Meredith Evans
Output: 
Feng Y, Wong C, Park A, Mentis H. Taxonomy of instructions given to residents in laparoscopic cholecystectomy. Surgical endoscopy. 2016 Mar 1;30(3):1073-7.
Feng Y, Mentis HM. Supporting Common Ground Development in the Operation Room through Information Display Systems. In AMIA Annual Symposium Proceedings 2016 (Vol. 2016, p. 1774). American Medical Informatics Association.
Mentis H, inventor; University of Maryland Baltimore County, assignee. Annotation of endoscopic video using gesture and voice commands. United States patent application US 15/001,218. 2016 Jan 19.

Surgical Telementoring

The objective of this project is to investigate the benefit of collaborative image interaction in conveying expert knowledge during distributed work. Although there have been great advances in the compression and transfer of audio and video signals for telecommunication, there is still a significant challenge in providing the appropriate tools for conveying expert knowledge in distributed work settings. Surgical telemedicine is one domain where collaborative interaction with highly specialized images is vital for the efficient and effective conveyance of expert information, and the collaborating individuals may not have the same level of expertise. The research aims are: to determine the verbal and non-verbal mechanisms for conveying expert knowledge in co-located and distributed collaboration with images; to develop a prototype for distributed collaborative image interaction; to determine the effects of collaborative image interaction on expert communication processes and performance outcomes; and to determine the effects of and the reactions to collaborative image interaction on distributed work practices.
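As a rough illustration of distributed collaborative image interaction, the sketch below packages a mentor's annotation event so a remote site can redraw it over its own view of the shared video. This is a minimal sketch assuming a simple JSON message format; the field names, normalized-coordinate convention, and function names are illustrative assumptions, not the lab's actual prototype or protocol.

# Minimal sketch (assumed message format): sending a telestration event
# from a mentor site and converting it to local pixel coordinates at the
# receiving site, so both parties see the same reference mark.
import json
import time

def make_annotation_event(site_id, points_norm, label="pointer"):
    """Build a transmittable annotation event.

    points_norm: list of (x, y) pairs in [0, 1], resolution-independent so
    the receiving site can scale them to its own display.
    """
    return json.dumps({
        "site": site_id,
        "label": label,
        "points": [list(p) for p in points_norm],
        "timestamp": time.time(),
    })

def apply_annotation_event(message, width, height):
    """Decode an event and convert its points to local pixel coordinates."""
    event = json.loads(message)
    return [(int(x * width), int(y * height)) for x, y in event["points"]]

if __name__ == "__main__":
    msg = make_annotation_event("mentor-site", [(0.30, 0.55), (0.34, 0.58)])
    print(apply_annotation_event(msg, width=1280, height=720))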

Current Researchers: Yuanyuan Feng, Helena Mentis

Seeing Movement

Movement sensors have been touted as providing the next generation of health care through objective measurements that will replace “subjective” health assessments and remove the fallible memory of patients from health decision making. However, assessing the level of disability and the efficacy of treatment are iterative and constructive acts reliant on an alignment of shared perceptions between clinician and patient. The challenge, then, is to utilize movement sensors as a resource for this co-interpretation rather than as a replacement for it. The research aims of this project are: to engage in fieldwork of movement assessment for Parkinson’s and stroke patients; to employ design explorations of sensors for evaluating body movements and visualizing the sensor data; and to investigate the uptake of the system in the assessment and co-interpretation practices of clinicians and patients. Our findings have highlighted how sensors can provide a much-needed resource for co-interpreted assessment of movement, but also how they can intrude on this process through clinician or sensor authority.
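To illustrate the kind of sensor data that clinicians and patients might interpret together, the sketch below turns raw tri-axial accelerometer samples into a simple movement-intensity trace. This is a minimal sketch, not the study's pipeline: the synthetic data, window length, and gravity constant are illustrative assumptions.

# Minimal sketch: per-window movement intensity from tri-axial accelerometer
# samples, the kind of summary trace that could be visualized for
# co-interpretation. Sample data is synthetic.
import numpy as np

def movement_intensity(acc_xyz, fs=50, window_s=1.0, gravity=9.81):
    """Per-window mean deviation of acceleration magnitude from gravity.

    acc_xyz: array of shape (n_samples, 3) in m/s^2.
    Returns one intensity value per non-overlapping window.
    """
    magnitude = np.linalg.norm(acc_xyz, axis=1)
    deviation = np.abs(magnitude - gravity)
    win = int(fs * window_s)
    n_windows = len(deviation) // win
    trimmed = deviation[: n_windows * win].reshape(n_windows, win)
    return trimmed.mean(axis=1)

if __name__ == "__main__":
    # Synthetic 10 s recording: rest, then a burst of simulated movement.
    fs = 50
    t = np.arange(0, 10, 1 / fs)
    acc = np.random.normal(0, 0.05, (len(t), 3)) + np.array([0.0, 0.0, 9.81])
    acc[250:400, 0] += 2.0 * np.sin(2 * np.pi * 3 * t[250:400])  # tremor-like burst
    print(np.round(movement_intensity(acc, fs=fs), 2))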

Current Researchers: Adegboyega Akinsiku, Ezana Dawit, Ganesh Pradham
Former Researchers: Rita Shewbridge, Courtney Pharr, Kyle Althoff
Output: 
Mentis HM, Shewbridge R, Powell S, Armstrong M, Fishman P, Shulman L. Co-interpreting movement with sensors: assessing Parkinson’s patients’ deep brain stimulation programming. Human–Computer Interaction. 2016 Jul 3;31(3-4):227-60.
Feng Y, Wong CK, Janeja V, Kuber R, Mentis HM. Comparison of tri-axial accelerometers step-count accuracy in slow walking conditions. Gait & Posture. 2017 Mar 31;53:11-6.
Mentis HM, Shewbridge R, Powell S, Fishman P, Shulman L. Being Seen: Co-Interpreting Parkinson’s Patients’ Movement Ability in Deep Brain Stimulation Programming. In Proceedings of the 33rd Annual ACM Conference on Human Factors in Computing Systems 2015 Apr 18 (pp. 511-520). ACM.