The San Francisco Chronicle has a story on some new adaptations of virtual reality being developed at Stanford, with one possible application being the improvement of police lineups. This is clearly a ways off, if it ever makes its way to the real world, but if nothing else it’s interesting to see that the movement for lineup reform has achieved such critical mass that it’s on the short list of projects on the agenda of the Virtual Human Interaction Lab (VHIL) at Stanford.
VHIL director Jeremy Bailenson talks about the well-known failures of traditional lineups, and cites possible reasons for their limited accuracy:
Photographs are two-dimensional, and memory encodes people in stereo images. The suspects are dressed differently and have different hairstyles and weights. There’s no context for the crime.
His virtual reality application offers a new take on the police lineup, which Bailenson thinks might minimize some of the problems that he sees in traditional lineups:
[Bailenson’s] high-tech helmet can transport victims to a real-seeming police station where five virtual suspects walk into a white room. As in real police lineups, they resemble each other but not enough to be indistinguishable. With a tap on Bailenson’s keyboard, the suspects are suddenly the same weight, dressed in khakis and sporting identical buzz cuts. Now the victim can’t pick the one person — perhaps the wrong person — who has, say, the long hair she remembers from the crime scene.
If a victim recalls looking up at her mugger in a brick alley, Bailenson can make the suspects taller and suddenly turn the virtual world into that alley.
“In virtual reality, you get unlimited information — you can see someone’s face from any distance and any angle,” he said. “When you give them unlimited information they can use, they’re more likely to be accurate.”
I’m not sure what the social scientists would have to say about shaving the heads of all the suspects, normalizing their height and weight, and dropping them into a computer-generated crime scene for a virtual field lineup, but at least there are some interesting implications.
Another thought crossed my mind while reading the discussion of the technology’s head- and body-tracking capabilities, which seem to be pretty advanced. The same equipment could be adapted for “virtual field testing” of phenomena like the weapon focus effect, which has historically been difficult to test directly because of ethical constraints (psychologists can’t subject people to mock hold-ups with real weapons). A virtual hold-up might be more likely to win research approval, and since the technology already tracks head movements, researchers could see when a witness is watching the weapon versus the face of the virtual attacker, compare that against an otherwise identical scenario with no weapon, and then test both groups for virtual lineup accuracy. And really, the same formula could be adapted to a theoretically unlimited range of virtual scenarios. It’ll be interesting to see where this leads.
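Just to make the idea concrete, here’s a minimal sketch of how that comparison might be summarized once the rig logged where a witness was looking. Everything here is hypothetical: the frame labels, the logging format, and the sample data are invented for illustration, not anything from the VHIL system.

```python
# Hypothetical sketch: summarizing head-tracking logs from a virtual
# hold-up to quantify "weapon focus." Assumes the rig records, per
# frame, which object the witness's gaze was directed at.

from collections import Counter

def gaze_proportions(frames):
    """frames: list of per-frame gaze targets, e.g. 'weapon', 'face', 'other'.
    Returns the fraction of total frames spent on each target."""
    counts = Counter(frames)
    total = len(frames)
    return {target: n / total for target, n in counts.items()}

# Invented example logs for the two conditions:
weapon_condition = ["weapon"] * 60 + ["face"] * 25 + ["other"] * 15
control_condition = ["face"] * 70 + ["other"] * 30

w = gaze_proportions(weapon_condition)
c = gaze_proportions(control_condition)
print(f"weapon condition, time on face:  {w.get('face', 0):.0%}")
print(f"control condition, time on face: {c.get('face', 0):.0%}")
```

A drop in face-viewing time in the weapon condition, paired with lower virtual-lineup accuracy for that group, would be the pattern the weapon-focus hypothesis predicts.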
Also, here’s a video (13MB, Windows Media) that shows the technology at work. I’m not sure exactly how it plays out if the witness is able to adjust height and hairstyles at her whim and then fingers the guy in spot #3 as the version of himself that’s 5′4″ with a shaved head, when the real guy with that face is 6′2″ with a four-inch afro, but hopefully those questions will be addressed before this goes prime time.