Emerging technologies could allow first responders to call up all sorts of information when responding to an emergency, but there is some uncertainty about what information is useful, how it should be displayed, and how emergency personnel could control which information to access and when. NC State is working with first responders to address these questions.
“We are working with first responders and the Washington Metropolitan Area Transit Authority (DC Metro), and have already developed three virtual reality (VR) scenarios that allow researchers to test new user interfaces for use by emergency responders,” says James Lester, the principal investigator (PI) on the project. Lester is also the director of NC State’s Center for Educational Informatics (CEI) and a Distinguished University Professor of Computer Science.
The work is made possible by a two-year, $1.1 million grant from the National Institute of Standards and Technology (NIST). The project, called IntelliVisor, is focused on developing VR software that can help law enforcement, firefighters and emergency medical technicians respond to crises more rapidly and efficiently. RTI International is collaborating with NC State on the project.
“We’re currently working with first responders to validate the three scenarios we’ve developed, making sure they are sufficiently realistic to be useful,” says Randall Spain, co-PI on the project and a research psychologist in NC State’s Center for Educational Informatics.
That authenticity is important, because the VR scenario software will be used to test two things. First, it will help determine what sorts of information would be useful to emergency responders in a visual display. For example, which forms of navigation guidance are helpful? And at what point does the amount of visual information become too much, distracting an emergency responder?
“Second, the software will enable researchers to test various interfaces responders can use to call up or dismiss visual information,” Spain says. “For example, we’re planning to explore the utility of a spoken natural language interface, which would allow users to control visual displays using spoken commands. This may be important, given that first responders often have their hands full, which could make gesture-based controls problematic.”
The research team is working closely with emergency responders and DC Metro personnel both to develop the scenario software, based on real-world situations, and to test different visual display interface prototypes, in order to ensure the software is user-friendly.
“We will likely also be working with them to collect physiological responses to the system as part of its formal evaluation,” Spain says. “This can help us establish which combination of visual display formats and control interfaces is most intuitive and least demanding for responders.”
But the scenario software is likely to have utility beyond simply testing new display technologies.
“This project is also valuable because the VR-based scenario could be used to supplement training by emergency responders both in responding to real-world crises and in familiarizing themselves with emerging technologies before they are deployed in the field,” Lester says.