An article in this month’s issue of Forensic Magazine covers some interesting new software-based approaches to the development of facial composites being developed in the UK. Two research groups have been independently developing facial composite software that attempts to apply existing knowledge about how human memory of faces works.
According to Dr. Charlie Frowd at the University of Stirling in Scotland, co-developer of EvoFIT with Dr. Peter Hancock:
Peter knew that the normal method used to make composites does not work very well. We are not good at describing and selecting individual facial features, but are very good at selecting whole faces which look like someone we’ve seen.
EvoFIT is a software application that uses a genetic algorithm to produce successive “generations” of facial composites, based on the theory that humans remember faces holistically rather than piecemeal. From each new “generation” of faces, the witness selects the one that most closely resembles his or her memory of the perpetrator.
A typical EvoFIT session looks something like this:
From a randomly generated selection of eighteen faces, a witness is asked to choose the six faces that most resemble the suspect. These six faces become the parents of eighteen offspring faces generated by combining the features of the parents. The witness then chooses another six from the offspring population to become parents, and so on. The “features” being selected and mutated are values of around fifty “principal components” that describe the structure of the face. The process can proceed indefinitely, but usually produces a likeness acceptable to the witness in about four generations, as long as the initial selection (or “search space”) was adequate. This likeness can be saved and embellished with haircuts and clothing using normal photo editing software.
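The loop described above can be sketched in a few lines of Python. This is a minimal illustration, not the actual EvoFIT implementation: the component count, population sizes, and mutation scheme follow the description in this article, and the witness is simulated by a similarity function that scores faces against a hidden target face (all names here are hypothetical).

```python
import random

NUM_COMPONENTS = 50   # "principal components" describing face structure
POPULATION = 18       # faces shown to the witness per generation
PARENTS = 6           # faces the witness selects as parents

def random_face():
    """A face is just a vector of component values (illustrative)."""
    return [random.gauss(0.0, 1.0) for _ in range(NUM_COMPONENTS)]

def breed(parents, mutation=0.1):
    """Make offspring by mixing parents' component values, with small mutations."""
    children = []
    for _ in range(POPULATION):
        a, b = random.sample(parents, 2)
        child = [random.choice(pair) for pair in zip(a, b)]
        children.append([c + random.gauss(0.0, mutation) for c in child])
    return children

def similarity(face, target):
    """Stand-in for the witness: higher when the face is closer to the target."""
    return -sum((f - t) ** 2 for f, t in zip(face, target))

def evolve(target, generations=4):
    """Run the select-and-breed cycle for a few generations."""
    population = [random_face() for _ in range(POPULATION)]
    for _ in range(generations):
        population.sort(key=lambda f: similarity(f, target), reverse=True)
        population = breed(population[:PARENTS])
    return max(population, key=lambda f: similarity(f, target))

target = random_face()          # the "perpetrator" the simulated witness remembers
best = evolve(target)           # the composite after four generations
```

In the real system the "fitness function" is the witness's judgment rather than a distance in component space, which is exactly why a poor initial population can steer the search astray, as discussed below.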
Dr. Chris Solomon at the University of Kent in England has been developing EigenFIT, an application similar in concept but with a different algorithm for generating each new crop of faces.
The results of both applications appear to be fairly impressive, compared to the often “bizarre and inaccurate” composites that can result from the traditional method of having a witness select individual facial features and then stitching them all together to create a single face that often bears little resemblance to the perpetrator.
Here’s one example from EigenFIT, involving a test run in which a witness was shown the top row of images, and the EigenFIT algorithm was able to generate the composites in the bottom row:
Given what we know about wrongful convictions resulting from bad composite images, this seems like reason for hope on that front, not to mention reason to insist that more reliable methods be adopted by police departments in the U.S.
Popular Science also ran a piece on EvoFIT last month, and noted that a large part of the problem with traditional approaches to composite sketches spawns from the inability of witnesses to provide detailed or accurate descriptions:
As often happens during a crime, a victim gets only a brief glance at the assailant. Later, when police ask him for a description of the perpetrator, he has trouble recalling details. But now, with new identification software developed by two researchers in Scotland, victims no longer have to worry about describing their assailants. A computer does it for them.
Dr. Frowd admits, however, that the genetic algorithm is also capable of producing serious errors.
One drawback of genetic algorithms generally is that they can converge on the wrong solution if the initial population is poorly chosen. “This may happen,” Frowd concedes, “but if it does, the system is rolled back and started again.”
The question then becomes how to know when such an error has occurred, which brings us back to the reliability of the witness’s memory, something we know to be highly fallible.