
Friday, February 25, 2011

Crime Scene Investigation: Using the right metadata to catch criminals

From the British Journal of Photography: "Police forces risk letting the guilty go free if they don't address their procedures for using digital images as evidence in court. Mark Wood explains why a raw workflow is good for us all.

For some, debates about the veracity of photographic images rage; others have simply moved on from hackneyed arguments about truth and testimony. However, in police work and forensics, questions about digital photography have not been fully addressed. In the UK, there is no common practice on the capture and storage of digital photographs. The guidelines, such as they are, are open to interpretation by each of the UKʼs police forces, and a key issue centres on whether to shoot raw or JPEG.

It is essential that evidence is admissible in court, so the challenge is to foresee problems with a form of evidence such as digital image data. In time, a form of evidence may be found to be unreliable, and therefore discredited. Science drove the use of DNA profiling and, though science is a broad term, similar rigour has to be applied to the use of digital imaging in police work. There is still the notion that anyone who picks up a DSLR is a photography expert, or that being able to tick a box on a staff development form is a panacea for the complexities of image processing.

The leviathan of the legal system moves at a different pace from the fast-changing landscape of digital imaging; official guidelines can be out of step with operational needs. So the diverse and ambiguous implementation of the ACPO (Association of Chief Police Officers) digital imaging handling guidelines in 2002 could well lead to convictions being quashed on the technicalities of photographic veracity. Though having some flexibility when interpreting the guidelines can make workflows fit for purpose, each force still has its own Standard Operating Procedures and, as long as those protocols are followed, then all is deemed to be in order. There is a disturbing reliance on JPEG files in several forces, for example, and no matter how secure a system might be, it is extremely difficult to prove that JPEG evidence has not been tampered with. In court, an imaging officer might have to testify that a photograph is a true representation of the captured data and has not been manipulated, yet the court must take their word for it. Immutable evidence is required.

Continue reading this article by clicking here.
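As a practical aside, one concrete step toward the "immutable evidence" the article calls for is to compute a cryptographic hash of each image file at acquisition and log it; any later change to the file, even a single bit, changes the hash. Here's a minimal sketch in Python (the function name and chunk size are my own illustration, not part of any published SOP):

```python
import hashlib

def acquisition_hash(path, algorithm="sha256", chunk_size=65536):
    """Hash an evidence file in chunks so large files never load fully into RAM."""
    h = hashlib.new(algorithm)
    with open(path, "rb") as f:
        # read the file in fixed-size chunks until EOF
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()
```

Recording the hexdigest in the case notes at capture time lets an examiner later demonstrate that the file offered in court is bit-for-bit identical to the file acquired, regardless of whether the camera shot raw or JPEG.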


Wednesday, February 23, 2011

Forensics-reform legislation introduced to Congress

On January 25, Senator Patrick Leahy (D-Vermont) introduced to the Senate the Criminal Justice and Forensic Science Reform Act. The bill aims to strengthen the criminal-justice system by promoting “standards and best practices and ensuring consistency, scientific validity, and accuracy” in forensic testing, analysis, identification, and comparisons.

The bill was designed to address the concerns regarding the field of forensic science that were raised in the February 2009 report released by the National Academy of Sciences.

“Everyone recognizes the need for forensic evidence that is accurate and reliable,” said Leahy in a January 25 press release. “With a new structure in place that draws on both criminal-justice expertise and scientific independence, I believe we will further ensure that only the most reliable forensic evidence is used in our criminal courts. We must provide law enforcement with reliable forensics capabilities, and we cannot allow innocent people to be wrongfully convicted based on faulty forensic evidence.”

The bill that was introduced had few changes from the draft legislation that was initially released on December 22, 2010.

Some of the Criminal Justice and Forensic Science Reform Act’s goals include:
• Establish an Office of Forensic Science within the Department of Justice that would oversee the standards and structure of the forensic-science system as established by the Act;
• Establish a Forensic Science Board that would consist of scientists, practitioners, prosecutors, defense attorneys, and other stakeholders to make recommendations on research priorities, standards, and best practices;
• Establish committees of scientists, overseen by the National Institute of Standards and Technology, that will examine each individual forensic-science discipline to determine research needs and help set uniform standards;
• Require that all forensic-science laboratories receiving federal funding be accredited according to rigorous standards set by the Forensic Science Board and the Office of Forensic Science, and that forensic scientists meet basic proficiency, education, and training requirements for certification;
• Promote foundational and innovative peer-reviewed scientific research that will strengthen the forensic sciences.

“The bill aims to carefully balance the competing considerations that are so important to getting a review of forensics right,” said Leahy in an official statement. “It also capitalizes on existing expertise and structures, rather than calling for the creation of a costly new agency. It seeks to proceed modestly and cost effectively, with ample oversight, checks, and controls.”

You can download a PDF copy of the legislation here.

Tuesday, February 22, 2011

Ethics in forensics

From the editor at Forensic Magazine: "In the past year, we have had an increased interest from our readers in the question of ethics in forensics, especially in light of the frequent news reports of questionable laboratory practices, corrupt analysts, and cover-ups.

The question I would raise in answer to this growing concern for the quality of forensic work is: has there been a fundamental change in the way that forensic laboratories are functioning to which we can attribute a decrease in quality? Or, is it in fact a greater awareness of quality that is making it easier to highlight the troubled labs? In other words, are we really seeing a drop in the quality of forensic labs around the country, or are we just more aware of quality concerns due to a focus on forensic techniques in part prompted by the NAS report and other essays on the topic?

Is it possible that the current CSI fad in popular culture—as witnessed by the many primetime TV shows, novels, and magazine articles on the topic—has made the media more aware of forensic work and therefore more likely to investigate laboratory practices, especially if there is even a rumor of malpractice or corruption?

If this is the case, the answer is not to blame TV, the media, or the “uninformed masses” for meddling in what is rightfully within our purview, but rather to raise the quality, accountability, and transparency of our work processes to combat the CSI effect. We must operate knowing that juries are filled with forensic aficionados, our success rates and backlogs are being audited by politicians and citizens alike, and our every misstep will be documented by every paper, radio station, and TV network.

Perhaps this is a tall order given tight budgets, increased caseloads, and greater pressure from local, state, and national governments; however, only success will ease these pressures. By demonstrating greater efficiency and improved results, we will shift the spotlight away from our work and back to the miscarriages of justice we strive to correct ..."


Monday, February 21, 2011

Premiere Pro: Red, yellow, and green render bars and what they mean

Each Premiere Pro class that I teach finds me answering questions about the "render bars" and what they mean. Here's a good answer direct from Adobe: "If you’ve worked with Adobe Premiere Pro even a little bit, you’ve noticed that colored bars—red, yellow, and green—appear at the bottom of the time ruler at the top of the Timeline panel, above clips in a sequence. These colored bars are often referred to as render bars. But what do they mean, and what does this mean to your work?

First, we need to understand what it means to render a preview.

In the context of computer graphics, rendering is the creation of an image from a set of inputs. For Premiere Pro, this essentially refers to the creation of the frames in a sequence from the decoded source media for the clips, any transformations or interpretations done to fit the source media into a sequence, and the effects applied to the clips.

For clips based on simple source media that match the sequence settings and have only simple effects applied, Premiere Pro can render the frames that make up the sequence in real time. In this case, each frame is rendered for display just before the CTI (current time indicator) reaches it. Premiere Pro caches these results so that it doesn’t unnecessarily redo work when you revisit a frame.

For more complex sets of effects and more difficult source media, Premiere Pro can’t always render the frames of the sequence as fast as needed to play them back in real time. To play these frames in real time, they need to be processed and saved ahead of time, so that they can be read back and played instead of being recalculated on the fly. The creation of these frames to be saved for rapid playback is what is meant by rendering a preview.

By the way, it’s common but confusing and misleading jargon to refer to rendering of previews as rendering all by itself. Rendering for display, rendering for final output, rendering for previews—these are all valid uses of the word rendering. Don’t fall into the trap of using this general term to refer only to the specific case of rendering for the purpose of creating preview files for real-time playback.

Note: Rendering of previews is only for preview purposes. Preview files will not be used for final output unless you have the Use Previews option checked on output—which you should not use except in the case of rough previews. Using preview files for final output will in almost all cases cause a decrease in quality. It can speed things up in some cases, so it may be useful for creating a rough preview in less time.

With that preparatory definition out of the way, what do the colored bars mean?

Green: This segment of the sequence has a rendered preview file associated with it. Playback will play using the rendered preview file. Playback at full quality is certain to be in real time.
Yellow: This segment of the sequence does not have a rendered preview file associated with it. Playback will play by rendering each frame just before the CTI reaches it. Playback at full quality will probably be in real time (but it might not be).
Red: This segment of the sequence does not have a rendered preview file associated with it. Playback will play by rendering each frame just before the CTI reaches it. Playback at full quality will probably not be in real time (but it might be).
None: This segment of the sequence does not have a rendered preview file associated with it, but the codec of the source media is simple enough that it can essentially be treated as its own preview file. Playback will play directly from the original source media file. Playback at full quality is certain to be in real time. This only occurs for a few codecs (including DV and DVCPRO).

Note the uses of the word probably above. The colors aren’t a promise. They’re a guess based on some rather simple criteria. If you have a fast computer, then a lot of things marked with red may play back in real time; if you have a slow computer, then some things marked with yellow may need to be rendered to preview files before the segment can be played in real time ..."

Click here to continue reading - and find out what causes a segment to get render bars of a certain color as well as how the Mercury Playback Engine changes things.
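For those who think in code, the color logic quoted above boils down to a small decision table. The sketch below is my own paraphrase of Adobe's description, not actual Premiere Pro logic, and the function and parameter names are invented for illustration:

```python
def render_bar(has_preview_file, source_is_simple, realtime_likely):
    """Guess the render-bar color for a sequence segment, per the
    description above. Returns None for the no-bar case."""
    if has_preview_file:
        return "green"   # rendered preview exists: real-time playback assured
    if source_is_simple:
        return None      # e.g. DV/DVCPRO: the source doubles as its own preview
    # No preview file: the color is only a prediction about real-time playback
    return "yellow" if realtime_likely else "red"
```

As the article stresses, yellow and red are only predictions; a fast machine may well play "red" segments in real time without rendering.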


Thursday, February 17, 2011

Forensic Photoshop tour stop in D.C.

Sitting here in the airport, heading into the queue for boarding ...

I'll be the guest of the IACP as part of their working group on interview room recording system technology. The guest list is pretty impressive, so I'm confident that productive work will take place. This kind of thinking helps me over my dread of cramming my large body into a small plane for hours on end.

See you in D.C.


Tuesday, February 15, 2011

You can't get there from here

For those moving from older CS versions to the most current, you may have noticed an issue with Version Cue and Adobe Drive. Adobe's James Lockman explains, "To migrate, you’ll need to go from VC CS2 to VC CS3 and finally to VC CS4. There isn’t a direct migration path from CS2 to CS4. You can migrate from CS2 to CS3 from the server administration panel’s Advanced section, and this is best done on the machine where the new server is running."

Click here to find out how to work around this issue.


Monday, February 14, 2011

Caught on Camera

Caught on Camera: the clear capture of officer murders is a grim reality of this powerful technology - from Evidence Technology Magazine.

"The ubiquity of mobile video recording systems in police vehicles illustrates a grim statistic: on-duty deaths were up last year by a staggering 26 percent, highlighted by an equal rise in the number of officers murdered by gunfire: 49 in 2009 and 61 in 2010. Early 2011 statistics are even more frightening, with 11 officers shot in a single 24-hour period in January. The same number of officers were killed by gunfire last month alone, doubling the national trend of each of the previous years of the last decade…and an increasing number of officer deaths have been caught on dashboard cameras.

Videos depicting the last moments of an officer’s life are always shocking, explicit, and immensely disturbing, yet they are often the only voices that officers have when they can no longer speak for themselves. Partially for that reason, mobile video recording is a “technology that is here to stay,” according to 94 percent of law-enforcement professionals who responded to a recent national survey on in-car video systems.

Underscoring the value of in-car video technology, Georgia State Patrol (GSP) Major Mark McDonough announced at a press conference that “…a picture is worth a thousand words,” referring to images that he believed irrefutably identified the man who shot and killed GSP Corporal Chad LeCroy on December 27. The images were recorded to LeCroy’s mobile video recorder and included pictures of the killer actually leaving the scene in the officer’s car.

The significance of mobile video as evidence during police murder investigations played out tragically multiple times across the United States in 2010. In Tampa, Florida, Officers David Curtis and Jeffery Kocab were murdered by a man during a traffic stop on June 29. Kocab’s video system recorded the events leading up to the killings, which included audio of the killer, and of a woman in his company, providing identification information during the stop. The video proved to be the key to the suspects’ later arrest. In another double homicide of police last year, West Memphis (Arkansas) Police Department Sergeant Brandon Paudert and Officer Bill Evans were shot and killed on May 20. Evans’ in-car video showed a 16-year-old exiting the passenger side of a vehicle while shooting at the officers with an AK-47. “Since the officers are no longer available, I have to let the dash-cam video speak for itself,” stated Prosecutor Mike Walden.

Despite the growing volume of in-car video images produced during police homicide cases, the images themselves might not always be good enough to act as the “silent witness”, a description often used to suggest that the video quality is adequate for identification and reliability.

“Too often, we’re receiving video evidence in these kinds of cases where the quality of the video is so poor that identification is impossible. Then, who speaks for the officer?” asked Alan Salmon of the Oklahoma State Bureau of Investigation’s Forensic Video Unit. Salmon is also the President of the Law Enforcement & Emergency Services Video Association (LEVA), a professional organization that trains police video analysts from around the country. He said his organization’s members are frustrated with the quality of much of the in-car video they are asked to process, analyze, and eventually take to court ..."

Click here to continue reading this story.


Sunday, February 13, 2011

Forevid - a new tool for video review that's free

I received the following announcement: "I would like to introduce to you a new open-source software for the analysis of surveillance videos, called Forevid. Forevid was developed here at the forensic laboratory of the National Bureau of Investigation, Finland by me and a former intern of mine (Sami Hautamäki). Our goal was to develop a free and easy-to-use tool for the law enforcement and forensic community, containing similar features as the corresponding commercial software.

You can find more information about the software and the download link at http://www.forevid.org/. Please give it a try, and tell me your opinion!"

Needless to say, it grabbed my attention. So, like anyone who is engaged in the work, I went and downloaded the program and gave it a spin. Here's what I think ...

The Pros:
• It's free
• It's really simple and easy to use
• It combines some of the more popular features of other programs in one program
• It offers deinterlacing features
• It features case management functions

The Cons:
• It's free
• It's open source
• Its feature set is limited
• It's Windows only
• It only handles "standard Windows type videos"

Why would being free be listed in both areas? Freeware is great in a time of tight budgets. For agencies struggling with getting gear, freeware is both a blessing and a curse. My fear is always that the agency won't get the appropriate tool once the budget mess turns around ... since they have the free (albeit limited) tool.

Open source programs can be problematic. The same problem that I have with this tool, I also have with GIMP. Open source means that I can change the program to suit my needs. I don't necessarily have to share the new code with anyone, though I should. There are plenty of so-called experts out there marketing their FVA services with proprietary processes. Repeatability is sacrificed when I can't duplicate their work, leaving the trier of fact to make sense of this battle of the experts. Open source also means that there is a potential that no improvements will ever be made to the program. Agencies that adopt open source programs risk their future for the sake of saving a few dollars.

The limited feature set seems to be an attempt to say, "here's what most people use, and nothing else." Which is fine, until you need the other stuff.

It's Windows only. At this point, I'm sure that you're thinking ... here he goes on his Mac rant again. Hold on a minute ... My point here, just like my take on Ocean Systems' dCoder-to-go product (code name Omnivore), is that it runs on/in Windows. More DVRs are being built around Linux today, and the percentage is growing. What we need is something that can decode proprietary Linux video. I'd like to see more time being spent on that particular elephant in our room - that's all I'm saying here. The company that solves that particular problem isn't going to have issues with profitability.

Like the issue above, if you don't have the codec issue sorted out on your system, this won't help you with the video. It has to be DirectShow, Video for Windows, FFmpeg, or AviSynth script compatible content. You need to already have the codec installed in order to make the video work in Forevid. Sure, it comes with some codecs - but most of us have the included codecs already, so there's no big bonus here. If you don't have it, chances are that you'll find it on Larry Compton's Media-Geek. Here's what they say about the issue, "if for some reason, Forevid is not able to import the given video, an error dialog is displayed. By selecting Media info from the dialog, detailed information of the video file can be explored, and e.g. the fourcc code of the required codec can be identified ..." just like GSpot.
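If you're curious how a tool like GSpot pulls that FourCC out of a file, here's a rough sketch of the idea for a standard AVI container. This is my own simplification for illustration; real tools walk the full RIFF chunk tree, and many proprietary DVR files aren't valid AVI at all:

```python
def video_fourcc(data):
    """Return the FourCC handler code of the first video stream in raw
    AVI file bytes, or None. Simplified: scans for the 'strh' stream
    header chunk whose stream type is 'vids' (video)."""
    if data[:4] != b"RIFF" or data[8:12] != b"AVI ":
        return None  # not an AVI container
    pos = data.find(b"strh")
    while pos != -1:
        # strh chunk layout: 4-byte id, 4-byte size, then fccType, fccHandler
        if data[pos + 8:pos + 12] == b"vids":
            return data[pos + 12:pos + 16].decode("ascii", "replace")
        pos = data.find(b"strh", pos + 4)
    return None
```

Something like `video_fourcc(open("clip.avi", "rb").read())` then tells you which codec to hunt down before the file will play in Forevid or anything else.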

All in all, it's a handy little program that some will find useful. Validate it for yourself and you'll see what I mean. As for me, I'll be sticking with what I already have for now.


Friday, February 11, 2011

Facial Comparisons and identification

I get numerous requests to make "identifications" of individuals depicted in CCTV footage. In terms of managing expectations, I try to explain just what it takes to "identify" someone - getting into the difference between "recognize" and "identify."

I've got an excellent reference source on my shelf to help with explanations and I thought that I'd pass it along. Here's a quote from the introduction:

"... law enforcement and intelligence agencies have many more opportunities to acquire and analyze images that depict persons of interest, whether they may be suspects of a crime, witnesses, or victims. In most cases, such images are used for investigative or recognition purposes, wherein an investigator or witness will look at a photograph and, because of a prior association or familiarity with the subject, "recognize" the individual and thus be able to "identify" them. In some cases, however, the identity of the individual depicted in an image is subjected to debate. In these cases, analysis by an expert may be necessary to either confirm or exclude a specific individual as being the subject depicted in an image."

What does this mean? If you are capturing images, cleaning them up, putting them on a BOLO poster, then printing/distributing them ... you aren't involved in forming an opinion as to who's in the picture. The investigators take the product of your work and use it to work towards identification of the person in question by others. In this case, your testimony wouldn't include opinions as to the identity of the person in question - just about your process with the footage and subsequent images.

Facial Comparison is quite a different matter altogether. If you are interested in getting into the world of opinion-based work, I would suggest the book quoted above, Computer-Aided Forensic Facial Comparison, Editors: Martin Paul Evison from the University of Toronto, and Richard W. Vorder Bruegge from the FBI. I would also recommend getting in touch with the SWG that covers this area, FISWG.

With more people looking to get into this line of work, it's important to realize that a large part of what we do is not done with an Adobe product. Just because one is a photographer or an artist does not automatically make one capable of forming and supporting an opinion on facial identification. FISWG saw this recently and said the following in a letter to the IAI, "It is an unfortunate fact that some individuals who testify as experts may occasionally cite a given certification as proof of their expertise in a different, but associated, discipline. The IAI Forensic Art and Forensic Photography certifications relate to disciplines that are associated with, but differ from, the discipline of Facial Comparison. FISWG is concerned that the potential exists for the courts to incorrectly interpret a certification in one of these disciplines to confer certification in the discipline of Facial Comparison. To offer oneself as certified in the discipline of Facial Comparison based upon any current IAI certification would be a misrepresentation."

In the same way, having a certification from a group like ASIS as a security professional or an alarm installer doesn't qualify one to work in this area. Having training and experience in this area qualifies one to work in this area. Most professional organizations warn against the ethical violation of working outside of one's expertise.

So ... if you want to get into facial comparison work, or you just want to help your ability to explain your work during testimony, Computer-Aided Forensic Facial Comparison is well worth the price.


Thursday, February 10, 2011

Baby monitors transmit video of unknowing families

From KOMONews.com: "The saying goes, "never wake a sleeping baby." But what if that baby is broadcast for all the neighbors to see?

From Ballard to Queen Anne and Greenlake to Phinney Ridge, KOMO News found unsuspecting families transmitting what's inside their homes without even knowing it.

And they're broadcasting through video baby monitors -- devices designed to give parents peace of mind. But a Problem Solvers investigation found these security devices can be anything but secure.

Monitors can be as cheap as $99. We purchased a model that retails for about $140, and transmits in the 900 MHz band. This frequency is left open by the Federal Communications Commission for all sorts of household uses, including radios, telephones and video cameras.

"It's interesting," said Seattle-based security consultant Eric Rachner. "Baby monitors, for the most part, don't really have security. Technologically, they're just little television stations. There's nothing to prevent you from being able to tune these devices to the channels they're transmitting on."

Rachner, who works for a South Lake Union security firm, is hired by companies to dig out holes in their software and respond when someone breaks into their computer systems. He says intercepting the signal on a baby monitor is simpler than you think.

"How easy is it to intercept? As easy as it is to just go and purchase the receiver for one of these baby monitors," he said. "I would say, it's not just easy; it's trivial."

The Problem Solvers decided to put it to the test. We connected our monitor, which acts as a receiver, in our car, and then drove around the city. Within moments, we started seeing nurseries, bedrooms, and hearing people's conversations. One baby's image we picked up from almost half a mile away.

Using the monitor in West Seattle, we spotted a baby boy sleeping quietly in his crib. Turns out he belongs to Dino Annest, who invested in two baby monitors, one for each of his kids.

"The main thing we were looking for is you want to keep an eye on your kid," Annest said. "I hate the fact that somebody could drive by and watch our baby on their monitor."

Click here to continue reading this story.


Wednesday, February 9, 2011

Adobe FormsCentral

A few readers have asked about Adobe's new service, FormsCentral. With FormsCentral, you can create forms and surveys, plan events, and more. Questions have focused on whether FormsCentral would be a good fit for evidence processing forms and the like. On the surface, FormsCentral looks great. You can create a form for just about anything that we do in LE. That form can be used on a wide variety of devices. The problem comes when you look at the Terms and Conditions - where no one likes to look. So, I've looked for you ...

"2. b. Unless expressly agreed to by Adobe in writing elsewhere, Adobe has no obligation to store any Materials that you upload, post, email, transmit or otherwise make available through your use of the Services (“Your Content”)."

"6. a. You agree that you, not Adobe, are entirely responsible for all of Your Content that you distribute, perform, display, upload, post, email, transmit or otherwise make available on or through the Services (“Make Available”), whether publicly posted or privately transmitted. You assume all risks associated with use of Your Content, including any reliance on its accuracy, completeness or usefulness."

"7. a. Adobe, in its sole discretion, may (but has no obligation to) monitor or review the Services and Materials at any time. Without limiting the foregoing, Adobe shall have the right, in its sole discretion, to remove any of Your Content for any reason (or no reason), including if it violates the Terms or any Law."

... and so on ...

So, as you can see, Adobe FormsCentral is probably not the best service for your sensitive LE data needs.


Tuesday, February 8, 2011

Reasonable expectation of privacy in a vehicle's "black box"

This just in: Search of a vehicle’s “black box” for data a year after an accident was without a warrant and without probable cause. The motorist retained a reasonable expectation of privacy in the data in the recorder even after a year. People v. Xinos, 2011 Cal. App. LEXIS 153 (Cal. App. 6th Dist. February 8, 2011):

In California v. Acevedo (1991) 500 U.S. 565 [111 S.Ct. 1982], the U.S. Supreme Court eliminated the warrant requirement for searching a closed container located in a vehicle where probable cause supports a search of the container but not a search of the entire vehicle. (Id. at pp. 573, 576, 579.) But the court emphasized that its holding did not expand the scope of searches permissible under the automobile exception. (Id. at p. 580.) Thus, in Acevedo, “the police had probable cause to believe that the paper bag in the automobile’s trunk contained marijuana,” which justified a warrantless search of the paper bag. (Ibid.) But “the police did not have probable cause to believe that contraband was hidden in any other part of the automobile and a search of the entire vehicle would have been without probable cause and unreasonable under the Fourth Amendment.” (Ibid.) Thus, a warrantless search of a vehicle, or the containers within it, under the automobile exception continues to be circumscribed by probable cause. (Ibid.) Its holding indirectly confirms that vehicles continue to be protected by the Fourth Amendment.

We do not accept the Attorney General’s argument that defendant had no reasonable expectation of privacy in the data contained in his vehicle’s SDM. The precision data recorded by the SDM was generated by his own vehicle for its systems operations. While a person’s driving on public roads is observable, that highly precise, digital data is not being exposed to public view or being conveyed to anyone else. But we do not agree with defendant that a manufacturer-installed SDM is a “closed container” separate from the vehicle itself. It is clearly an internal component of the vehicle itself, which is protected by the Fourth Amendment. We conclude that a motorist’s subjective and reasonable expectation of privacy with regard to her or his own vehicle encompasses the digital data held in the vehicle’s SDM.

. . .

The evidence at the suppression hearing established that the vehicle was still being held as evidence of a crime on May 11, 2007 but there had already been a disposition of the case based on “all of the [accident] reconstruction and eyewitness testimony.” The investigating officers had not accessed the data recorder prior to May 11, 2007 because they did not believe it held any relevant data since the airbags had not deployed during the collision. Officer Checke explained, “Prior to going in [on May 11, 2007], we did not believe there would be anything based on the fact that there were no air bags deployed.” Nevertheless, on May 11, 2007, more than a year after the fatal collision, they downloaded the data from the SDM at the request of the District Attorney’s Office. It was only some months later that Officer Checke learned that “a non-deployment event” may register even if air bags do not deploy.

As stated, the scope of a legitimate warrantless search of a vehicle under the automobile exception “is defined by the object of the search and the places in which there is probable cause to believe that it may be found.” (U.S. v. Ross, supra, 456 U.S. at p. 824; cf. Michigan v. Clifford (1984) 464 U.S. 287, 294 [104 S.Ct. 641] [“If the primary object of the search is to gather evidence of criminal activity, a criminal search warrant may be obtained only on a showing of probable cause to believe that relevant evidence will be found in the place to be searched”]; Steagald v. U.S. (1981) 451 U.S. 204, 213 [101 S.Ct. 1642] [“A search warrant ... is issued upon a showing of probable cause to believe that the legitimate object of a search is located in a particular place”].) The scope of a warrantless search authorized by the automobile exception is “no broader and no narrower than a magistrate could legitimately authorize by warrant.” (U.S. v. Ross, supra, 456 U.S. at p. 825.) Moreover, probable cause to conduct a warrantless search must exist at the time the warrantless search is executed. (See Dyke v. Taylor Implement Mfg. Co. (1968) 391 U.S. 216, 221 [88 S.Ct. 1472] [officers conducting warrantless search of automobile must have “‘reasonable or probable cause’ to believe that they will find the instrumentality of a crime or evidence pertaining to a crime before they begin their warrantless search”]; cf. Sgro v. U.S. (1932) 287 U.S. 206, 210 [53 S.Ct. 138] [Proof of probable cause to support issuance of a warrant “must be of facts so closely related to the time of the issue of the warrant as to justify a finding of probable cause at that time”].)

In cases of fatal collisions between a vehicle and a pedestrian, the particular facts and circumstances may give rise to probable cause to believe the SDM contains evidence of a crime. But in this case, the prosecution failed to show that the objective facts known to the police officers at the time of the download constituted probable cause to search the SDM for evidence of crime. The download occurred long after the collision and criminal investigation. The officers who conducted the download were merely complying with an unexplained request of the D.A.’s Office and believed no relevant data would be found. The download of the data was not supported by probable cause."


Thursday, February 3, 2011

Forensic Photoshop turns 3 this month

It's with surprise and gratitude that I say ... Forensic Photoshop turns 3 this month. On behalf of my publisher and myself, thanks for your continued support.

Wednesday, February 2, 2011

Changes to the Federal Rules of Evidence – Rule 26

This handy bit of info comes from Fred Cohen & Associates: "As of December 1, 2010, the rules have changed. The Federal Rules of Evidence (FRE) provide the basis for expert testimony and the requirements for expert reports and qualifications for all Federal cases, and are reflected in many State and local jurisdictions, typically with some delay. After an extensive process, supported by the legislative and judicial branches of government, including the Supreme Court, the rules have changed. While these changes may seem relatively simple, for the digital forensic evidence examiner and other expert witnesses, there is quite a substantial difference that will reduce costs, ease burdens, and allow examiners and lawyers to focus more clearly on the things they should be doing with regard to legal matters.

Rule 26(a)(2)(B) includes, in pertinent parts:
an expert witness must provide an expert report and “...The report must contain: (i) a complete statement of all opinions the witness will express and the basis and reasons for them; (ii) the facts or data considered by the witness in forming them; (iii) any exhibits that will be used to summarize or support them; (iv) the witness's qualifications, including a list of all publications authored in the previous 10 years; ...”

This rule properly shifts the burden for providing the basis for opinions from the side challenging the witness to the side putting forth the witness. Under the old rules, it was up to the other side to ask for the basis and the facts, and given the time frames for different phases of discovery, this was often problematic.

Perhaps more importantly, this puts the scientific burden for experts where it belongs - on the experts. The courts have long insisted that expert testimony be the result of reliable methods reliably applied, but most expert reports I have reviewed in digital forensics to date failed to provide the vast majority of the key information required in order to evaluate the opinions stated. For example, and without limit, I have seen digital forensics reports stating things like “[strings were] randomly generated” and “[there is] no such person”, but the authors provided no basis at all for these rather startling conclusions ..."

Check out the entire report by clicking here.


Tuesday, February 1, 2011

A first: biometrics used to sentence criminal

This just in from the Homeland Security Newswire: "A judge ruled that biometric facial recognition could be submitted as evidence, marking the first time such evidence has been used in a criminal trial; this move surprised many legal and scientific experts, as facial recognition technology does not meet basic legal standards required for evidence; the decision may or may not become a legal precedent, as it was not made by a California appellate or supreme court.

In early January, Charles Heard was sentenced to twenty-five years to life in a California prison for murder.

The case was unique because it was the first time that biometric facial recognition technology had been permitted to be used as evidence in the court room.

In the years following 9/11, DHS and cities around the world began experimenting with closed circuit security cameras and facial recognition software.

After billions of dollars were spent on research and development, the results were disappointing.

According to SF Weekly, police departments in Florida, Virginia, and even Germany abandoned the use of such technology as it did not lead to any arrests while it was deployed.

In Germany officials found that the technology only had a 60 percent success rate in identifying people during the day, while at night it dropped to as low as 10 percent due to poor lighting conditions that made accurate identification difficult.

Given this track record, legal and scientific experts were surprised by a San Francisco Superior Court judge’s decision last July to allow biometric facial identification technology to be submitted as evidence to help exonerate a suspected murderer.

Surveillance cameras from a nearby business caught footage of a man believed to have shot and killed another person in an armed robbery.

The suspect’s defense team submitted still frames from the video footage along with testimony from a biometrics expert, who argued that comparisons between current photos and the still frames clearly demonstrated that Charles Heard, the suspect, was not the shooter.

According to David Faigman, an expert on scientific evidence at Hastings College of Law, the judge’s decision came as a bit of a surprise because the technology does not meet a few basic legal standards that apply to other forms of scientific evidence, such as DNA or fingerprint analysis.

“I think it is precedent-setting,” he said, “But I also think that the appellate courts might take a dim view of the admission of this evidence…Without the systematic and rigorous evaluation of the evidence, it’s hard to know how much weight to give it ...”

Click here to continue reading this story. Do you think the unnamed author got it right or wrong? Post your comments below.