Early last year, I taught an audio forensics class that featured Adobe's Audition. I like Audition for teaching audio forensics in general, and for my day-to-day audio work.
One of the features that I use all the time is the Spectral View. In it, I can see the audio's problems - like noise - much more clearly. I can also make edits to the file in that view. In a recent case, I worked almost exclusively in that view to remove an unwanted noise - which gave me the perfect platform in my notes to visually explain my work.
A question came in from a reader about doing this type of work in Sony's Sound Forge Pro 10. Unfortunately, according to Sony - you can't.
"How can I make correction on the spectral graphic?
Spectrum Analysis is used only to view a specific section of audio. You will not be able to use this tool to make corrections to your audio file, but it gives you important information on what corrections may need to be made using other tools inside Sound Forge Pro 10."
Now I find out that Sony's got a separate product for that, SpectraLayers Pro. But, as good as it might be, and as interoperable as it might be with other Sony products, it's still another purchase.
If you're doing audio enhancement, audio clarification, noise reduction, noise isolation, or identification work with audio, and you're not utilizing some sort of spectral view, I'm not sure how you're seeing the whole picture (intensity, frequency, etc.).
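To see why a spectral view matters, here's a minimal sketch of the math behind one: a plain DFT of a pure tone, showing how a single frequency stands out as a peak in the magnitude spectrum. The function name and test signal are my own illustration, not anything from Audition or Sound Forge.

```python
import math

def dft_magnitudes(samples):
    """Magnitude spectrum via a plain DFT (fine for short illustrative signals)."""
    n = len(samples)
    mags = []
    for k in range(n // 2 + 1):  # only the non-redundant half of the spectrum
        re = sum(s * math.cos(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        im = -sum(s * math.sin(2 * math.pi * k * i / n) for i, s in enumerate(samples))
        mags.append(math.hypot(re, im))
    return mags

# a pure tone that completes exactly 8 cycles over 64 samples
tone = [math.sin(2 * math.pi * 8 * i / 64) for i in range(64)]
mags = dft_magnitudes(tone)
peak_bin = max(range(len(mags)), key=lambda k: mags[k])
print(peak_bin)  # the tone shows up as a spike at bin 8
```

A spectral view is essentially this computed over short overlapping windows, with the magnitudes mapped to brightness: time across, frequency up, intensity as color.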
For now, I'm still using Audition CS6 (Creative Cloud style).
Enjoy.
Thursday, January 24, 2013
Wednesday, January 23, 2013
Grabbing a still image from AVC video
Here's a series of predictive coding sequences from an MPEG-4 (AVC) video file. Those of you familiar with video coding will recognize the format of the structures. Here's your question for today: that image that you want to grab ... is it an I frame, a P frame or a B frame?
IPPPPPPPPPPPIPP..., i.e. one I-slice followed by 11 P-slices, with one reference frame
IBBPBBPBBPBBIBBP..., i.e. 12-frame GOP with one I-slice, three P-slices and eight B-slices in each GOP.
IBBBBBBBBBBBIBB..., i.e. 12-frame GOP, hierarchical prediction.
Earlier, we talked about the structure of MPEG video. Here we ask the question, what happens when the frame that you want is a B or P frame? What do you do? How do you respond to the inevitable questions:
- Is this an accurate depiction of the scene?
- Is this an original image?
If you're going to court with digital video - it pays to do your homework. Know the codec and how it works. Know which frame you selected for your exemplar ... and how it was generated. If you do use a B or P frame, know how the pieces were assembled or how object-oriented compression works.
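A quick way to reason about the GOP patterns above is to classify each picture by what it depends on. This is my own simplified sketch (real AVC reference handling is more flexible than one-frame-back prediction), with hypothetical helper names:

```python
# Classify each picture in a GOP pattern string and note what it depends on.
# A simplification for illustration - not a decoder.
def describe_frame(gop, index):
    t = gop[index]
    if t == "I":
        return "I-frame: decodable on its own"
    if t == "P":
        return "P-frame: predicted from an earlier I- or P-frame"
    if t == "B":
        return "B-frame: predicted from frames before and after it"
    raise ValueError("unknown picture type: " + t)

gop = "IBBPBBPBBPBB"  # the 12-frame GOP from the second option above
counts = {t: gop.count(t) for t in "IPB"}
print(describe_frame(gop, 0))
print(describe_frame(gop, 3))
print(counts)  # one I-slice, three P-slices, eight B-slices
```

If your exemplar still is anything but an I-frame, its pixels were assembled from one or more reference frames plus motion and residual data - which is exactly what opposing counsel will ask you about.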
Essentially, be prepared for the worst and you should do fine.
Tuesday, January 22, 2013
Why won't my DVD play in Totem movie player?
Totem is the official movie player of the GNOME desktop environment; it's based on GStreamer.
Ubuntu, a popular form of Linux, launches Totem by default when you insert a DVD into your workstation's CD/DVD tray. However, by default, Totem doesn't contain the proper codecs to play DVDs. When you first play a DVD, Ubuntu will ask whether you want to search for the proper codec. If you choose to search, Ubuntu will offer the available codec programs in the repositories for playing DVDs.
However, you may not be able to view your DVD even after installing the proper codec. Watching a DVD in Linux is a bit of a legal quagmire if you live in the United States. The Digital Millennium Copyright Act (DMCA) and other issues make it tricky for any open-source program to navigate the licensing maze when it comes to movies that are encrypted or otherwise protected against piracy.
Not all DVDs have these features enabled, so you might be able to watch some DVDs on your Ubuntu workstation. If you burn your own DVDs, they should work just fine.
Monday, January 21, 2013
Video on Linux?
A few more adventurous officers got a grant to buy a Linux-based workstation. They figured that since they were downloading from Linux-based DVRs, they should have a Linux-based computer. But, much to their dismay, the video that they downloaded from the DVR didn't work on their Ubuntu box.
As a general rule, viewing video files in Ubuntu requires two separate pieces of software:
- A video-playing software package
- A video-decoding software package
The video-decoding software package tells the video-playing package how to read and interpret the video file. Hopefully, by now you know that the decoding software is called a codec. Usually, each codec decodes a single type of video file. Ubuntu contains only the codec package for decoding OGG video files. Viewing any other type of video file requires downloading a different codec package. This is where things get messy.
The MPG video file format is by far the most common video format in use, both on the Internet and in many digital movie cameras. It provides good-quality video with relatively small file sizes by using patented compression algorithms. Unfortunately, because of the patented algorithms, MPG is a licensed product, which is a bad thing in the open-source world. Reverse-engineered codec packages are available that can decode MPG video files in Linux systems. However, these packages are not legal for use in some countries.
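Before you can pick the right codec, you have to know what you're looking at. Here's a rough sketch of how a tool might identify a container by its "magic bytes". The signatures below are the standard ones, but the helper name is my own and a real tool (ffprobe, for instance) does far more than this:

```python
# Identify a video container from the first few bytes of the file.
MAGIC = [
    (b"OggS", "Ogg"),
    (b"\x00\x00\x01\xba", "MPEG program stream"),
    (b"\x1a\x45\xdf\xa3", "Matroska/EBML"),
    (b"RIFF", "RIFF (AVI/WAV)"),
]

def sniff_container(header_bytes):
    for magic, name in MAGIC:
        if header_bytes.startswith(magic):
            return name
    if header_bytes[:1] == b"\x47":  # TS sync byte - weak evidence on its own
        return "possible MPEG transport stream"
    return "unknown"

print(sniff_container(b"OggS\x00\x02"))
```

Knowing the container still only tells you half the story - the codec inside it is what actually needs a decoder.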
Add to this the problem of using unlicensed or quasi-legally obtained software in your casework. Do you want to answer questions about the legality of how you obtained your codecs from an Estonian hacker's web site?
For the power-user, having a Linux-based workstation can be an advantage. But most DVRs output a file that is meant to be used in the Windows operating environment.
One of the things that Amped FIVE users know is that FIVE automatically chooses the appropriate decoder - taking out the guesswork. And, to top it off, the choice is documented in your report. How cool is that?!
Friday, January 18, 2013
Where's the DVR?
Have you ever walked onto a crime scene to retrieve video evidence, only to find a server room full of gear? Where's the DVR? To which one of those components do you connect to retrieve your data? What do you document?
According to Anthony Caputo, the standardized rack-mounted form factor can be open – simply a few rails and a floor panel; it can be partially enclosed; or it can be fully enclosed within a cabinet under lock and key for additional security. A fully enclosed lockable rack, securely fastened to the floor and/or walls, severely restricts physical access by unauthorized personnel.
Standard rack-mounted computing implementations can be custom designed and built using various types and configurations and/or multiples of 1-U components. The standard 1-U unit is 19″ (482.6 mm) wide and 1.75″ (44.45 mm) tall. The most common rack-mounted computing form factor platform is based around a 42-U configuration, which means that a 42-U rack is capable of housing a maximum of 42 individual 1-U units.
There are many types of 1-U devices including network-attached storage (NAS), switches and routers, firewalls, power strips, uninterruptible power supplies (UPS), or even an LCD monitor and keyboard that neatly fold away. The multiple 1-U computers (servers and archivers) can be accessed using a keyboard, video, and mouse (KVM) switch, which allows access to each computer with the click of a button or a couple of keystrokes.
Additional rack-mounted options include lockable sliding rails, which allow a variety of equipment to slide in and out of sight without disconnecting it from the system. Rack-mounted equipment also provides easy access to the rear of the units, which may include inputs and outputs, power, and hardwired connectivity.
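The rack arithmetic above is worth having at your fingertips when you're documenting a scene. A quick sketch, using the standard figures quoted (1 U = 1.75 in = 44.45 mm; helper name is mine):

```python
# Convert rack units to physical height, using the standard 1-U dimensions.
U_INCHES = 1.75
U_MM = 44.45

def rack_height(units, metric=False):
    return units * (U_MM if metric else U_INCHES)

print(rack_height(42))        # a full 42-U rack: 73.5 inches of equipment
print(rack_height(42, True))  # the same in millimetres
```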
You're often better off asking for assistance from the tech who maintains the equipment. They might not know how to retrieve the data, but they should know which computer does what.
Thursday, January 17, 2013
Computer or appliance?
Is a DVR a computer or an appliance? If the DVR software and cards reside in a Windows PC, it seems like a simple answer. But what about those little black boxes we're finding more often now?
The typical digital video recorder (DVR; or networked video recorder, NVR) includes a host application (usually a license-free Linux OS), video conversion software, storage and disc management application, and a VMS interface to manage the integrated video. The embedded video surveillance application and OS provide an appliance rather than a computer, streamlining the process of integration, maintenance, and support. As such, the DVR is the appliance that replaced the analog videocassette recorder (VCR).
So, when your computer crimes investigators don't know what to do with your DVR, fret not - after all, it's an appliance and not a computer.
Wednesday, January 16, 2013
Privacy concerns and authentication of video and images
I've seen a lot of these types of memes floating around the internet. There's a dedicated group of activists who regularly record police activities and police contacts with the public. If you're LE, and you think what you're doing is private, think again. You're likely about to be the star of your own YouTube channel - unbeknownst to you.
When your internal investigators receive video from the public as part of a complaint, do your investigators automatically take the video as authentic? When YouTube video is involved, does your agency accept what it sees on the screen, or do they question the chain of custody, authenticity, source, etc.?
Chances are, they believe what they see. In my work on People v. Abdullah (BA353334, Los Angeles County Superior Court, December 2009), I was able to determine that the defendant had introduced completely fraudulent video - a complete fabrication. My work in authenticating the video led to a large mea culpa on the part of the defense counsel, but also helped preserve the careers of two of LA's finest.
Just because it's on film, or a thumb drive, doesn't make it authentic. While I am not saying that the activist groups are up to no good, I am saying that efforts should be made by the relevant authorities to determine the authenticity of the video footage submitted. If it's your career, you'll want to know that the video's been authenticated by an independent third party. That's where we come in.
I'll be speaking more about this in the future, and presenting an authentication workshop at the Spring 2013 NaTIA Pacific Chapter meeting. Stay tuned.
Tuesday, January 15, 2013
Video Standards for Law Enforcement Applications
The Criminal Justice Interview Room Recording System (IRRS) Standard, Supplier's Declaration of Conformity Requirements, and Selection and Application Guide is now available for public comment.
Click here.
The National Institute of Justice (NIJ) is in the process of developing a new Interview Room Video System Standard and corresponding supplier’s declaration and user’s guide documents. Through funding to the International Association of Chiefs of Police (IACP) under award 2009-IJ-CX-K009, this work is being performed by a Special Technical Committee (STC), comprised of law enforcement practitioners, researchers, testing experts, certification experts, and representatives from other stakeholder organizations. In late 2009 the STC completed draft versions of the three documents, and in January 2013 the documents were made available for public comment and review.
1. Draft Criminal Justice IRRS Standard
2. Draft Criminal Justice IRRS Supplier's Declaration of Conformity Requirements
3. Draft Criminal Justice IRRS Selection and Application Guide
The opportunity to provide comments on these documents is open to industry technical representatives; criminal justice agencies and organizations; research, development, and scientific communities; and all other stakeholders and interested parties. Please send comments by email to Mark Greene at mark.greene2@usdoj.gov by 5 P.M. Eastern Time on February 25, 2013.
Monday, January 14, 2013
What is PRNU?
PRNU = photo-response non-uniformity (a camera's noise fingerprint).
"Identification of the source that has generated a digital image is considered one of the many open issues in the multimedia forensics community. The extraction of photo-response non-uniformity (PRNU) noise has been so far indicated as a means to identify a sensor's fingerprint. Such a fingerprint can be estimated from multiple images taken by the same camera by means of a de-noising filtering operation." - from Crime Prevention Technologies and Applications for Advancing Criminal Investigation by Chang-Tsun Li and Anthony Ho.
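Here's an illustrative (not forensic-grade) version of the idea in that quote: average the noise residuals of several images from one camera to estimate a fingerprint, then correlate a candidate pattern against that estimate. Real PRNU work uses wavelet denoising and peak-to-correlation-energy measures; in this sketch the "denoised" image is just the frame mean, which only works for simulated flat-field shots, and all names and numbers are my own.

```python
import random

random.seed(0)
PIXELS = 256
SHOTS = 50

def correlate(a, b):
    """Normalized correlation between two equal-length signals."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    da = sum((x - ma) ** 2 for x in a) ** 0.5
    db = sum((y - mb) ** 2 for y in b) ** 0.5
    return num / (da * db)

# simulate a camera: a fixed per-pixel pattern plus per-shot random noise
pattern = [random.gauss(0, 2) for _ in range(PIXELS)]
def flat_field_shot():
    return [128 + p + random.gauss(0, 1) for p in pattern]

# estimate the fingerprint by averaging residuals (image minus its mean)
estimate = [0.0] * PIXELS
for _ in range(SHOTS):
    img = flat_field_shot()
    mean = sum(img) / PIXELS
    for i, v in enumerate(img):
        estimate[i] += (v - mean) / SHOTS

other_camera = [random.gauss(0, 2) for _ in range(PIXELS)]
match = correlate(estimate, pattern)        # same camera: strong correlation
nonmatch = correlate(estimate, other_camera)  # different camera: near zero
```

The shot noise averages out while the fixed pattern reinforces itself - that's the whole trick, and why more reference images give a cleaner fingerprint.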
Friday, January 11, 2013
An outline of JPEG compression
"There are many options for JPEG compression and many parameters that may be specified. [The] goal [here] is not a “bit-by-bit” account of the algorithm but rather to illustrate the role of the mathematics [involved], specifically the DCT. Here is the basic procedure:
- Separate color, if applicable: A color image is decomposed into its color components, usually using the YCbCr color scheme. This color representation scheme, like RGB, dictates the color of each pixel with three numbers. The entire image can thus be represented by three arrays of appropriate size. The details here need not concern us, since we will be working with monochrome images.
- Transform: Perform a block DCT on the image using 8 × 8 pixel blocks. If the image pixel count in either dimension is not a multiple of 8 the image is padded in some way to a higher multiple of 8.
- Quantize: Each 8 × 8 pixel block has an 8 × 8 DCT consisting of real numbers. Each of the 64 components or frequencies in this DCT is quantized. This is the main lossy step in the compression.
- Compress: The image is compressed by using run-length encoding on each block, and then Huffman coding the result. The dc coefficient is often treated separately. This is the step where actual file compression occurs.
Decoding reverses the steps:
4. Decompress: Recover the quantized DCT blocks.
3. Dequantize.
2. Inverse transform: Apply the inverse DCT to each block.
1. Mix colors, if applicable, and display the block."
From Discrete Fourier Analysis and Wavelets: Applications to Signal and Image Processing by S. Allen Broughton and Kurt M. Bryan.
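The transform and quantize steps above can be sketched in a few lines. This is a toy version for one 8 × 8 block: a textbook DCT-II, uniform quantization, and the inverse. Real JPEG uses per-frequency quantization tables and fast DCT algorithms; the function names and the step size q=16 are my own choices for illustration.

```python
import math

N = 8
def _c(k):
    return math.sqrt(0.5) if k == 0 else 1.0

def dct2(block):
    """8x8 forward DCT-II (the JPEG normalization)."""
    return [[0.25 * _c(u) * _c(v) * sum(
        block[x][y]
        * math.cos((2 * x + 1) * u * math.pi / (2 * N))
        * math.cos((2 * y + 1) * v * math.pi / (2 * N))
        for x in range(N) for y in range(N))
        for v in range(N)] for u in range(N)]

def idct2(coef):
    """8x8 inverse DCT."""
    return [[0.25 * sum(
        _c(u) * _c(v) * coef[u][v]
        * math.cos((2 * x + 1) * u * math.pi / (2 * N))
        * math.cos((2 * y + 1) * v * math.pi / (2 * N))
        for u in range(N) for v in range(N))
        for y in range(N)] for x in range(N)]

def quantize(coef, q=16):
    # the lossy step: each frequency is rounded to a multiple of q
    return [[round(v / q) for v in row] for row in coef]

def dequantize(coef, q=16):
    return [[v * q for v in row] for row in coef]

block = [[8 * x + y for y in range(N)] for x in range(N)]  # a smooth ramp
roundtrip = idct2(dct2(block))                   # lossless up to float error
lossy = idct2(dequantize(quantize(dct2(block)))) # quantization loses detail
```

Note where the loss happens: the DCT itself is perfectly invertible; it's the rounding in `quantize` that throws information away, which is why re-saving a JPEG repeatedly degrades it.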
Thursday, January 10, 2013
Purkinje effect
"As the light dims, we literally perceive less color, prompting the famous painter Monet to exclaim, “Terrible how the light runs out, taking color with it.” In low light, our color-blind rods become more dominant in our vision. Additionally, of the three cone types, the one most sensitive in dim light is the cone that responds to blue. Thus in low light levels, we are less sensitive to reds and more sensitive to blues. This is partly why we perceive night as monochromatic and bluish, a phenomenon known as a Purkinje shift, or Purkinje effect (above).
Although this information might seem like a review of a high school science class, understanding how we perceive color explains the appearance of colored materials under colored light. Understanding the eye and how the images you see are translated to the brain also can help you to create images that capture your perception of what you saw ... " - from Illuminated Pixels by Virginia Wisslar
Wednesday, January 9, 2013
Pixel neighborhoods
The pixels surrounding a given pixel constitute its neighborhood, which can be interpreted as a smaller matrix containing (and usually centered around) the reference pixel. Most neighborhoods used in image processing algorithms are small square arrays with an odd number of pixels, for example, the 3 × 3 neighborhood shown below.
In the context of image topology, neighborhood takes a slightly different meaning. It is common to refer to the 4-neighborhood of a pixel as the set of pixels situated above, below, to the right, and to the left of the reference pixel (p), whereas the set of all of p's immediate neighbors is referred to as its 8-neighborhood. The pixels that belong to the 8-neighborhood, but not to the 4-neighborhood, make up the diagonal neighborhood of p (below).
Concept of neighborhood of pixel p (from an image topology perspective): (a) 4-neighborhood; (b) diagonal neighborhood; (c) 8-neighborhood.
From Practical Image and Video Processing Using MATLAB® by Oge Marques.
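The three neighbourhood types from the passage above are easy to express as coordinate sets. A minimal sketch (function names are mine, and image-border clipping is ignored):

```python
# Neighbourhoods of a pixel p at (x, y), as sets of coordinates.
def n4(x, y):
    # above, below, left, right
    return {(x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)}

def n_diag(x, y):
    # the four diagonal neighbours
    return {(x - 1, y - 1), (x - 1, y + 1), (x + 1, y - 1), (x + 1, y + 1)}

def n8(x, y):
    # all immediate neighbours = 4-neighbourhood plus the diagonals
    return n4(x, y) | n_diag(x, y)

print(sorted(n8(5, 5)))
```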
In the context of image topology, neighborhood takes a slightly different meaning. It is common to refer to the 4-neighborhood of a pixel as the set of pixels situated above, below, to the right, and to the left of the reference pixel (p), whereas the set of all of p's immediate neighbors is referred to as its 8-neighborhood. The pixels that belong to the 8-neighborhood, but not to the 4-neighborhood, make up the diagonal neighborhood of p (below).
Concept of neighborhood of pixel p (from an image topology perspective): (a) 4-neighborhood; (b) diagonal neighborhood; (c) 8-neighborhood.
From Practical Image and Video Processing Using MATLAB® by Oge Marques.
Tuesday, January 8, 2013
3 level MPEG picture hierarchy
"This sketch shows a regular GoP structure with an I-picture interval of n= 9, and a reference picture interval of m= 3. This example represents a simple encoder that emits a fixed schedule of I-, B-, and P-pictures; this structure can be described as IBBPBBPBB. The example shows an open GoP: B-pictures following the GoP’s last P-picture are permitted to use backward prediction from the I-picture of the following GoP. Such prediction precludes editing of the bitstream between GoPs. A closed GoP permits no such prediction, so the bitstream can be edited between GoPs. Closed GoPs lose some efficiency." - from Digital Video and HD, 2nd Edition, by Charles Poynton.
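Poynton's pattern falls straight out of the two intervals he gives. A small sketch (my own helper, and only for the simple fixed-schedule encoder he describes, not hierarchical or adaptive GOPs):

```python
# Generate a fixed GOP schedule from the I-picture interval n and the
# reference picture interval m.
def gop_pattern(n, m):
    return "".join(
        "I" if i == 0 else ("P" if i % m == 0 else "B") for i in range(n)
    )

print(gop_pattern(9, 3))  # Poynton's example: IBBPBBPBB
```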
Monday, January 7, 2013
Square sampling
"In modern digital imaging, including computing, digital photography, graphics arts, and HD, samples (pixels) are ordinarily equally spaced vertically and horizontally – they have equal horizontal and vertical sample density. These systems have square sampling (‘square pixels’); they have sample aspect ratio (SAR) of unity. With square sampling, the count of image columns is simply the aspect ratio times the count of image rows.
The term square refers to the sample arrangement: Square does not mean that image information is uniformly distributed throughout a square area associated with each pixel! Some 1080i HD compression systems resample to 4/3 or ¾ pixel aspect ratio.
Legacy imaging and video systems including digital SD systems had unequal horizontal and vertical sample pitch (nonsquare sampling). (If you use the term pitch, it isn’t clear whether you refer to the dimension of an element or to the number of elements per unit distance. I prefer the term density.) That situation was sometimes misleadingly referred to as ‘rectangular sampling,’ but a square is also a rectangle! A heated debate in the early 1990s led to the adoption of square sampling for HD. In 1995 the New York Times wrote,
HDTV signals can be sent in a variety of formats that rely not only on progressive or interlaced signals, but on features like square versus round pixels …
A technical person finds humour in that statement; surprisingly, though – and unintentionally – it contains a technical grain of truth: In sampled continuous-tone imagery, the image information associated with each sample is spread out over a neighbourhood which, ideally, has circular symmetry." - from Digital Video and HD, 2nd Edition, by Charles Poynton.
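Putting the square-versus-nonsquare distinction in numbers: with square pixels (SAR = 1), columns = rows × display aspect ratio; a nonsquare SAR scales that count. The function below is my own sketch; the 1440 × 1080 case matches the 4/3-SAR resampling mentioned above.

```python
# Column count from row count, display aspect ratio, and sample aspect ratio.
def columns(rows, display_aspect, sar=1.0):
    return round(rows * display_aspect / sar)

print(columns(1080, 16 / 9))             # square pixels: 1920
print(columns(1080, 16 / 9, sar=4 / 3))  # nonsquare pixels (4/3 SAR): 1440
```

This is exactly the arithmetic you need when a DVR hands you anamorphic frames and you have to decide how to display them without distorting the scene.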
Friday, January 4, 2013
When is a composite image OK?
Twitter is again buzzing about a doctored image released by Rep. Pelosi's office.
This image is a fake but accurate accounting of the female members of the US Congress. It's a composite ... but not a very good one according to the flurry of Tweets that it generated.
So, yes ... it's "photoshopped." This kind of thing happens all the time in politics. The content is a composite of more than one image (forgery). The context, a picture of the female members of Congress, is accurate - those are the female members.
It's what you do with the image that makes all the difference in the world.
Thursday, January 3, 2013
More expert analysis
Someone sent this to me via Twitter (@jimhoerricks) and wanted my opinion. Essentially, they're asking for a photographic comparison of the woman in the photos. But, look at all the problems with the images - colour mismatch, aspect ratio mismatch, and so forth. The folks that make these memes care not a bit about the integrity of the image, they just cram together images in an attempt to make their point.
So, in my professional opinion ... the person spends too much time on Facebook.
Wednesday, January 2, 2013
Image Capture
"In human vision, the three-dimensional world is imaged by the lens of the eye onto the retina, which is populated with photoreceptor cells that respond to light having wavelengths ranging from about 400 nm to 700 nm. In video and in film, we build a camera having a lens and a photosensitive device, to mimic how the world is perceived by vision. Although the shape of the retina is roughly a section of a sphere, it is topologically two dimensional. In a camera, for practical reasons, we employ a flat image plane instead of a section of a sphere. Image science involves analyzing the continuous distribution of optical power that is incident on the image plane." - from Digital Video and HD, 2nd Edition, by Charles Poynton.
Tuesday, January 1, 2013
From grainy CCTV to a positive ID: Recognising the benefits of surveillance
This from the UK Independent (via @demuxdispatches): " ... Your identity is manifest in many different ways.” Ears, eyes, footsteps – all can be used to identify people. Even your heartbeat can betray who you are, and it can be detected from a distance without requiring contact with the body.
For those wearing masks or scarves over their faces, there are still plenty of ways computers can identify them. Much of the research has been carried out in the “biometrics tunnel” built in Prof Nixon’s department.
It’s a facility that requires a lot of technical expertise and patience – as Dr Carter tells us: “I’ve spent the last three months tracking down a fault in a cable.”
As I wander down it, eight cameras film my strides from a variety of angles against multicolour backgrounds, allowing electronic silhouettes and a 3D virtual model of my body to be constructed by a computer. The distance between my feet, knees, hips, shoulders and head are measured and the pattern of their motion analysed. Were I suspected of a crime, police would then be able to compare my gait profile to information gathered from CCTV footage of the incident – either eliminating me from their enquiries or encouraging them to delve deeper.
“We helped in a conviction of a bagsnatcher who robbed somebody,” says Prof Nixon. “He’d covered up his face with a motorbike helmet, that withheld his DNA, as there was no spit or breath. He wore gloves, so there were no fingerprints left – everything was covered up. But he still ran. We used images of him and presented images to the judge.”
Some of the work in the new centre will go into online identification. Keystroke analysis, looking at the minute differences in timings and patterns between different computer users’ typing mannerisms, is under development ..."
Continue reading the article by clicking here.