CERT

CERT is a system for automated FACS coding (Facial Action Coding System; see http://mplab.ucsd.edu/grants/project1/research/Fully-Auto-FACS-Coding.html for details). Each frame of video, or each individual image, is first passed to an automated face and eye detector. The face is cropped and rotated, then passed to the action coding system. The result is a set of values for eight facial action units (AUs):

1 Inner brow raise
2 Outer brow raise
4 Brow corrugator
5 Upper lid raise
10 Nasolabial furrow, “disgust”
12 Lip corner pull, “smile”
14 Dimpler
20 Lip stretch, (pull down corners of the mouth) “Eeek!”

Each value gives a relative indication of how much a particular AU is present.

CERT Application
The CERT application is an implementation of CERT that runs on Mac OS X. It can process individual images, folders of images, and video files, as well as many live QuickTime video sources. For each image or frame of video, it attempts to find a face and then calculates facial action unit values for that face. Values are displayed alongside the image, along with a short graph of previous AU values. A separate window can be used to record AU values along with eye location information, which can be saved to a text file.

Basic Use. When you first start CERT, the main window should be visible. Along the top of the main window are two buttons that let you choose a source (an image, a folder, or a video file) to process, or start processing live video (e.g. from a webcam). A wide range of image types can be loaded (JPEG, PNG, etc.). A third button lets you cancel processing.

To start processing video, make sure your camera is attached and operating, then click the ‘Live Video’ button. You can adjust the width and height of the video image before starting by using the fields at the bottom of the window. Larger image sizes require more processing power and/or time.

Detector value graphs. Along the right-hand side of the main window are small graphs that display the most recent detector values, along with a graph showing up to one hundred previous values. Only graphs for detectors that have been enabled in the Plugins pane of the Preferences window will be visible. The scale of each graph is adjusted independently, based on the largest and smallest values seen so far; the scales are reset whenever the cancel button is clicked. Because of this simplistic scaling algorithm, some changes in value over time may be difficult to see, so recording the data and saving it for later analysis is a more accurate way to detect trends and changes in values. Brief descriptions of some detectors are displayed as you hover the cursor over a graph.

The preferences window lets you turn on or off several features:
Display Gray: Display the converted image that’s actually used for processing.
Capture: Capture up to one image per second if a face has been found.
Multiple Rotations: Will rotate an image in several increments if no face is found. Very slow.
Filter AU Values: Not currently working.
Linear Combo: Not currently working.
In addition, any plugins that were found when the application was launched can be enabled or disabled. Different types of plugins provide different functionality: ‘CERT SVM’ plugins calculate one or more AUs, while ‘After Faces’ plugins currently calculate smile and blink values.

The “Face/AU Data” window can be used to capture values for later use. Clicking the “Start Recording” button starts recording all data (whether from images or video), and the button title changes to “Stop Recording”. Data will not appear in the window until the “Stop Recording” button is clicked. Recorded data includes eye locations, values for any detectors that are enabled, and the file name if images are being processed (as opposed to live video). If you click the ‘File…’ button and choose a filename before you start recording, all data will be written to that file as it is collected; in that case, the data will not appear in the data window when you stop recording.

Source Code

The CERT2_0 directory contains all of the platform independent source code needed to build a command line tool that can be used to generate AU values from an image, or a library that can be used to include CERT in another application.

There are four third-party libraries required to run CERT: FFTW, libXML, cBlas, and Boost. See the ThirdPartyLibraries.rtf file for information and URLs.

The CERT2_VS2005 directory contains a Microsoft Visual Studio project that can be used to build a static library and a command line application that demonstrates how to build and use CERT. After building the demo, CERTDemo.exe will be created in the debug subdirectory. Note that the demo looks for the third-party libraries' DLLs and will fail to load if they are not in the Windows path or in the same directory as the exe; copy them to the debug subdirectory if necessary. There is a directory with three test images inside CERT2_VS2005. The approximate AU values for these three images are:
1005.bmp
5.6333114, -0.71475668, 1.9317574, -7.7926074, -2.3050803, -1.1298686, -14.421441, -7.4386707

1010.bmp
-3.8970356, -8.6795982, 0.64116601, -5.6882162, -7.4389054, -2.905899, -10.705396, -9.4013099

1014.bmp
1.7217556, -5.3578102, 4.6767152, -7.5143453, -2.8906815, -4.2021838, -16.549772, -11.502446

Starting at the top, the CERTWrapper directory contains a main function (in main.cpp) that reads an image from a path given on the command line (or entered when prompted) and returns the eight AU values. This file also includes sample code showing how to convert an image into the internal representation needed by CERT (see the RImage class), using either a third-party library called ImageMagick (http://www.imagemagick.org/Magick++/) or native Windows code. The main function then uses a class called MP_CERT to generate the AU values. MP_CERT can be used as-is, but it is also meant to be a demo showing how the CERT algorithm works and how all of the other pieces fit together.

MP_CERT requires two pieces of information: a path to the directory that contains the SVM weight data files (which defaults to “SVMWeights” in the current working directory), and the image to process, in the form of an RImage. The MP_CERT constructor first sets the directory for the SVMWeights files, then pre-loads all of them. It then adjusts some settings on the eyefinder and the face rotator.

The calcAUs method of MP_CERT performs three main steps.
1) Find face and eyes in the image, returning a std::vector of FaceObjects.
2) Use the best estimate for the eye locations (both_eyes.xLeft, both_eyes.yLeft, etc.) to crop and rotate a face.
3) Use one of the Gabor classes (multi-threaded or single-threaded) to calculate the AU values on the cropped and rotated face.

CERT returns values for AUs 1, 2, 4, 5, 10, 12, 14, and 20. They are returned in that order in a vector.

In addition to the values for the eight AUs currently being calculated, it can also be useful during testing to return the face and eye data. Note that the eye and face finder may return more than one face; if you are processing video, it may make sense to skip frames where that happens.

SVMWeights: The XML-formatted data files in this directory need to be accessible when calculating AUs. This is done either by having all of the data files available in the application’s current working directory, or by passing a directory path (as a string) to the mp_SVMWeights class; the constructor of the mp_CERT class shows an example of this. Directories can be given either as relative paths from the current working directory, or as absolute paths. The paths should be given as system-native paths, e.g. “/Users/Shared/data/SVMWeights” on a Mac OS X system, or “C:\CERT\mp_auCoder\SVMWeights” on a Windows system. Loading all of the SVMWeights can add a significant amount of time to processing the first image, but it can also be done in a preliminary step (see the loadAllWeights() method of the mp_SVMWeights class).

MP_Gabor: This class calculates facial action units given an aligned and cropped face image. There are two versions, one single-threaded and the other multi-threaded. Both have the same interface, which consists of setting the image to be worked on and then calling the main entry method, CERT_Gabor, which returns a vector with the AU values. The multi-threaded version runs a portion of the code in three separate threads using the Boost library’s thread classes.

The Misc directory has code that can be used to get release version info, either offline or at runtime. Any questions or bug reports should include both version numbers (mpt and cert). In the source files the version is recorded in the line: #define SVN_REV "". At runtime, the getCertRevisionString() and getMPTRevisionString() functions can be used to get version strings.

Code-level documentation generated by Doxygen is available in the CERT2_0 directory at Documentation/html/index.html.