United States Patent Application 20070288135
Kind Code: A1
Kidd; Scott D.; et al.
December 13, 2007

Method and apparatus for obtaining photogrammetric data to estimate impact
severity
Abstract
In one embodiment, the present invention includes a computer-implemented
method to collect information to determine damage to a vehicle involved
in a collision using photogrammetric techniques. Once determined, this
vehicle damage information, which may take the form of crush measurement
information such as a crush damage profile, can be displayed in a
computer-generated view of the subject vehicle and used to estimate the
impact severity. In some embodiments,
based on the photogrammetric information derived, a direction of any
shifting of the vehicle's components may be obtained and used along with
other information to estimate a principal direction of force (PDOF) for
one or more vehicles involved in the collision. Still further embodiments
may be used to generate and/or audit repair estimates based at least in
part on the photogrammetric information.
Inventors: 
Kidd; Scott D.; (San Antonio, TX)
; Smith; Darrin A.; (San Antonio, TX)

Correspondence Address:

TROP PRUNER & HU, PC
1616 S. VOSS ROAD, SUITE 750
HOUSTON, TX 77057-2631
US

Serial No.:

519560 
Series Code:

11

Filed:

September 12, 2006 
Current U.S. Class: 
701/29; 701/301; 701/35 
Class at Publication: 
701/29; 701/301; 701/35 
International Class: 
G01M 17/00 20060101 G01M017/00; G06F 19/00 20060101 G06F019/00 
Claims
1. A computer-implemented method comprising: receiving a plurality of
images of a vehicle involved in an accident in a computer; determining,
in the computer, the location of damage points for the vehicle via
triangulation using the plurality of images; and generating a crush
profile for the vehicle based on a difference between the location of
damage points for the vehicle and a benchmark vehicle, and storing the
crush profile in the computer for later use.
2. The method of claim 1, further comprising displaying the crush profile
in a graphical user interface including a reference vehicle.
3. The method of claim 1, further comprising evaluating the plurality of
images for pose estimate information.
4. The method of claim 3, further comprising receiving an input to select
a reference pose to correspond to at least one of the plurality of
images.
5. The method of claim 4, further comprising selecting control points and
crush points for at least one of the plurality of images.
6. The method of claim 1, further comprising accessing the crush profile
and selecting a plurality of stored crush profiles having corresponding
crush profiles within a predetermined threshold of the crush profile.
7. The method of claim 6, further comprising auditing a claim associated
with the accident based on a comparison between the crush profile and the
plurality of stored crush profiles.
8. The method of claim 1, further comprising determining, in the computer,
if a camera profile received with the plurality of images corresponds to
a camera profile present in a profile database accessed by the computer.
9. The method of claim 8, further comprising calibrating the plurality of
images based on the camera profile if present, otherwise requesting
additional information from a user of the camera.
10. A computer-implemented method comprising: determining if at least one
component of a first vehicle involved in an accident was shifted, and if
so determining a direction of shift; optimizing overlap of damage
patterns between the first vehicle and a second vehicle involved in the
accident; and developing a principal direction of force for the first
vehicle and the second vehicle based on the optimized overlap of damage
patterns and the direction of shift for the first vehicle.
11. The method of claim 10, further comprising obtaining accident
information regarding the accident and using the accident information to
develop the principal direction of force.
12. The method of claim 11, further comprising determining if the
principal direction of force is consistent with damage information
regarding the first and second vehicles and the accident information.
13. The method of claim 10, further comprising determining a change in
velocity for the first vehicle using the optimized overlap of damage
patterns.
14. The method of claim 13, further comprising determining a change in
velocity for the second vehicle using the optimized overlap damage
patterns.
15. The method of claim 14, further comprising determining the change in
velocity for the first vehicle using photogrammetric information obtained
regarding the first vehicle and determining the change in velocity for
the second vehicle without using photogrammetric information for the
second vehicle.
16. An article comprising a machine-readable medium including
instructions that enable a system to: compare a crush damage pattern of a
vehicle involved in an accident to entries in a database corresponding to
crush damage patterns of vehicles from unrelated accidents; select one or
more of the entries based on similarity to the crush damage pattern and a
direction of impact; generate a report of expected vehicle component
damage for the vehicle based on actual damage information for the
selected one or more entries; and store the report for later use.
17. The article of claim 16, wherein the instructions further enable the
system to determine at least one component of the vehicle to be repaired
or replaced.
18. The article of claim 16, wherein the instructions further enable the
system to compare a repair estimate associated with the vehicle to the
report.
19. The article of claim 18, wherein the instructions further enable the
system to flag a component listed for repair or replacement in the repair
estimate if the component is not in the report of expected vehicle
damage.
20. The article of claim 18, wherein the instructions further enable the
system to flag a component not listed for repair or replacement in the
repair estimate if the component is in the report of expected vehicle
component damage.
21. The article of claim 18, wherein the instructions further enable the
system to report to a user that the repair estimate is within a threshold
level of the expected vehicle component damage based on the comparison.
22. The article of claim 16, wherein the crush damage pattern is based on
a photogrammetric analysis of the vehicle.
23. The article of claim 16, wherein the instructions further enable the
system to generate the crush damage pattern of the vehicle based on image
data of the vehicle received by the system.
24. The article of claim 16, wherein the instructions further enable the
system to insert an entry in the database, the entry including
information regarding the vehicle and the accident.
Description
[0001]This application claims priority to U.S. Provisional Patent
Application No. 60/811,964 filed on Jun. 8, 2006 in the name of Scott D.
Kidd and Darrin A. Smith entitled METHOD AND APPARATUS FOR OBTAINING
PHOTOGRAMMETRIC DATA TO ESTIMATE IMPACT SEVERITY.
BACKGROUND
[0002]Embodiments of the present invention relate to vehicular accident
analysis and specifically to the collection and analysis of information
for using the principles of photogrammetry to estimate vehicle damage and
impact severity.
[0003]Organizations such as insurance companies and others are tasked with
investigating accidents to resolve property and injury claims. Part of
the investigation is to determine the severity and direction of the
impact. A review of repair estimates and photos is usually done to
develop a qualitative assessment of impact severity. If there is damage
beyond the bumper of the vehicle(s) (i.e., there is crush damage to the
vehicle), the vehicle is often given the qualitative assessment of a
significant impact based on a subjective review of the damage
information. As many as 40% of low-severity accidents are subjectively
analyzed as high-severity impacts, primarily because of the damage to the
vehicle(s).
[0004]One solution to determining crush damage is to measure the crush
when the vehicle is examined for damage and use that information to
determine impact severity. The measurement of crush requires some
training to understand the concepts and ensure consistency, and is also
time-consuming. With high turnover in the insurance industry and a desire
to improve operational efficiency, this creates an ongoing and
potentially expensive effort.
[0005]Alternatively, organizations can retain forensic experts such as
engineers and accident reconstructionists to evaluate accident
information and reconstruct the accident scenario and characteristics,
including the magnitude and direction of the impact. This is an expensive
and slow solution to apply on a wide-scale basis.
[0006]The current solution for determining the components and operations
required to fix a damaged vehicle is to visually inspect the vehicle. A
list of components is created identifying which should be repaired or
replaced. This visual inspection process frequently requires one or more
follow-up inspections to correct errors in the first. It is a
labor-intensive and inefficient process, and it does not leverage
information from similar impacts to similar vehicles.
[0007]The Principal Direction of Force (PDOF) is the main axis along which
the force of the impact acts on the vehicle. The PDOF is a factor in both
the determining of injury potential of the accident and determining which
components of the vehicle are damaged. The current method of determining
PDOF is to examine photos, descriptions of the accident and possibly
scene information to determine the final resting locations of the
vehicles. In low to moderate severity impacts, the final resting
locations of the vehicles are either not particularly relevant or not
available. The current evaluation of PDOF is done by forensic experts,
which is an expensive and time-consuming process.
SUMMARY
[0008]In one aspect, the present invention includes a computer-implemented
method to collect information to determine vehicle damage by
photogrammetry. A computer system is used to send pictures and camera
settings from one computer to another computer to determine vehicle crush
damage via photogrammetric techniques. Cameras used to take pictures for
use in the program may be identified and their characteristics stored in
a central location for use when photos taken by the camera are sent to
the central location. The camera and pictures can be selected for
transfer from one computer to another computer via computer network,
wireless transmission, and the like.
[0009]In another aspect, a computer-generated selection of vehicle poses
is presented via computer. The selection of the pose determines the angle
at which the photo was taken and sets the frame of reference. The pose
can also be determined by placing optical markers on the vehicle at the
time the photos are taken. The optical markers can help detect points on
the vehicle such as the tires, and can thus aid in determining the pose
automatically via the computer analysis of the photos.
[0010]In still another aspect, a computer-generated selection of control
points is presented via computer. Standard points on the vehicle that
were not damaged in the subject accident are marked in each of the photos
of the subject vehicle. These standard points may allow for comparison
between different poses of the vehicle to allow triangulation of the
points between the different photos. The control points can also be
determined by placing optical markers on the vehicle at the time the
photos are taken. The optical markers can help detect points on the
vehicle such as the windshield, and refine the pose automatically via
computer analysis of the photos.
[0011]In another aspect, a computer-generated grid of vehicle damage
points is presented and selected for the subject vehicle. The damage
points allow for comparison between different poses of the vehicle to
allow the triangulation of the points between the different photos. The
damage points can also be determined by placing optical markers on the
vehicle at the time the photos are taken. The optical markers can help to
detect points on the vehicle such as the head lights or tail lights, and
can thus aid in determining the damage automatically via the computer
analysis of the photos.
[0012]In another aspect, a computer-generated crush damage profile may be
calculated for the subject vehicle using photogrammetric techniques and
comparing the points of the damaged vehicle generated by photogrammetry
to an undamaged vehicle. The results may be displayed in a
computer-generated view of the subject vehicle with a crush damage
profile and used to estimate the impact severity.
[0013]In another aspect, a computer-implemented method of determining the
direction of shifting of the vehicle's components may be provided. This
information can be combined with information about pre-accident actions
and questions regarding the pre-accident vehicle speeds via computer to
estimate the Principal Direction of Force (PDOF) for the vehicles
involved in the collision.
[0014]In another aspect, the depth of damage and direction of impact to
the vehicle can be compared with similar vehicles of similar depth of
damage and direction of impact with respect to the components required to
repair the vehicle. The comparison can be used to generate a list of
components that may need to be repaired or replaced to restore the
vehicle. The comparison can alternatively be used to audit repair
estimates that were developed independently, in some implementations.
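The comparison described above can be sketched as a nearest-profile query. This is a minimal illustration, assuming crush profiles are stored as fixed-length vectors of crush depths and that an RMS difference is an acceptable similarity metric; the function name, profile values, and threshold here are all hypothetical:

```python
import numpy as np

def nearest_profiles(subject, database, threshold):
    """Return indices of stored crush profiles whose RMS difference
    from the subject profile falls within a threshold (hypothetical metric)."""
    subject = np.asarray(subject, dtype=float)
    hits = []
    for idx, stored in enumerate(database):
        rms = np.sqrt(np.mean((subject - np.asarray(stored, dtype=float)) ** 2))
        if rms <= threshold:
            hits.append(idx)
    return hits

# Hypothetical 6-point crush-depth profiles (inches)
subject = [2.0, 4.5, 6.0, 6.2, 4.8, 2.1]
database = [
    [2.1, 4.4, 5.9, 6.0, 4.9, 2.0],   # similar accident
    [0.5, 0.8, 1.0, 1.1, 0.7, 0.4],   # minor impact, dissimilar
    [2.3, 4.7, 6.3, 6.5, 5.0, 2.4],   # similar accident
]
print(nearest_profiles(subject, database, threshold=0.5))  # -> [0, 2]
```

A production system would also filter candidates by vehicle class and direction of impact before comparing profiles, as the text suggests.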
BRIEF DESCRIPTION OF THE DRAWINGS
[0015]FIG. 1 is a flow diagram of a method to capture and store camera
calibration characteristics in a central location in accordance with one
embodiment of the present invention.
[0016]FIG. 2 is a flow diagram of a method of sending picture and camera
information to a central location to conduct a photogrammetric analysis
of vehicle damage in accordance with one embodiment of the present
invention.
[0017]FIG. 3 is a diagram defining object, camera and image reference
frames.
[0018]FIG. 4 is a diagram defining the relationship between the camera
reference frame and the image plane.
[0019]FIG. 5 shows the principal point offset.
[0020]FIG. 6 shows an example of a planar calibration pattern.
[0021]FIGS. 7A-7C show examples of an undistorted image and
lens-distorted images.
[0022]FIG. 8 is a conceptual diagram of the transformation from the object
to the camera space.
[0023]FIG. 9 is a diagram defining the error in image space and object
space.
[0024]FIG. 10 is a flow diagram of a method of evaluating photos using
photogrammetry techniques in accordance with one embodiment of the
present invention.
[0025]FIG. 11 is a diagram showing triangulation from two views.
[0026]FIG. 12 is a conceptual diagram depicting rays back projected from
imperfectly measured image points.
[0027]FIG. 13 is a diagram showing the concept of bundle adjustment.
[0028]FIG. 14 is a diagram showing the concept of comparing a damaged
vehicle with an exemplar vehicle.
[0029]FIG. 15 is a flow diagram of a method of creating a crush damage
profile using photogrammetry techniques in accordance with one embodiment
of the present invention.
[0030]FIG. 16 is a flow diagram of a method of determining PDOF from a
crush damage profile generated via photogrammetry in accordance with one
embodiment of the present invention.
[0031]FIG. 17 is a flow diagram of a method of estimating component damage
for repairing a vehicle from a crush damage profile generated via
photogrammetry in accordance with one embodiment of the present
invention.
[0032]FIG. 18 is a block diagram of a system in accordance with one
embodiment of the present invention.
DETAILED DESCRIPTION
[0033]Photos of vehicles damaged in a collision can be taken to determine
the extent of damage. These photos then can be used to estimate the
amount of deformation to the vehicle and determine the impact severity
for the vehicle. Impact severity can be used to determine the possible
injury potential for the accident or to determine what components should
be repaired or replaced to repair the vehicle.
[0034]Photogrammetry is a measurement technology in which the
three-dimensional coordinates of points on an object are determined by
measurements made in two or more photographic images taken from different
positions. Embodiments of the present invention use a photogrammetric
system for measuring vehicular crush from photographs. This system can be
broken down into four steps: (1) camera calibration; (2) camera pose
estimation; (3) triangulation; and (4) bundle adjustment.
[0035]Camera calibration is the process for identifying an individual
camera's geometric and optical characteristics, so that metric
information can be obtained from its images. A camera's characteristics
can be divided into two categories: extrinsic parameters and intrinsic
parameters. Extrinsic parameters refer to the spatial relationship
between the camera and the object of interest. Intrinsic parameters refer
to the camera's optical characteristics.
[0036]The process of determining a camera's position and aiming direction
(or orientation) from known XYZ coordinates of the object is called
resection. In computer vision literature, this is also known as the
exterior orientation problem or camera pose estimation problem. The known
XYZ coordinates of the object are called control points.
[0037]Photogrammetry uses a principle called triangulation to determine an
object's three-dimensional coordinates from multiple photographs. Points
are triangulated by finding the intersection of converging lines of
sight, or rays. By taking photographs of the object from at least two
different locations and measuring the same target in each photograph,
lines of sight are defined from each camera position to the target. If
the camera positions and orientations are known, then these rays can be
intersected to find the 3D coordinate in object space.
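The ray-intersection step can be sketched with a standard linear (DLT) triangulation from two views. The camera matrices and point below are synthetic stand-ins, and the SVD-based homogeneous solve is one common way to find the point nearest both rays:

```python
import numpy as np

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one point from two views.
    P1, P2 are 3x4 projection matrices; x1, x2 are (u, v) image points."""
    A = np.array([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # null vector of A (homogeneous 3D point)
    return X[:3] / X[3]        # de-homogenize

# Two hypothetical calibrated cameras one unit apart along X
K = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
P1 = K @ np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = K @ np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

X_true = np.array([0.2, -0.1, 4.0])
X_est = triangulate(P1, P2, project(P1, X_true), project(P2, X_true))
print(np.allclose(X_est, X_true))  # -> True
```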
[0038]Bundle adjustment is the process of refining a visual reconstruction
to produce jointly optimal structure (3D feature points of an object) and
motion (camera pose).
[0039]Different techniques for obtaining data and analyzing the obtained
data can be used to employ photogrammetric algorithms to determine the
extent of damage to a vehicle. Referring now to FIG. 1, shown is one
embodiment of a process 100 of collecting camera calibration information.
In one embodiment, the information may be provided from the camera to a
central location for determining characteristic information and storing
the information for later use. As shown in FIG. 1, a known calibration
object is photographed to determine the intrinsic errors in the camera
itself, as shown in block 105. Control is passed to decision diamond 110
to determine if the calibration points are to be selected manually from
the calibration photo. If the calibration points are to be selected
manually from the calibration photo, then control is passed to block 115
to allow the calibration points to be selected manually. Once the
calibration points are identified, control is passed to block 120 to send
the calibration photos for evaluation, e.g., at a central location. Then
control passes to block 125 to determine the camera's internal
characteristics and correction factors. If automatic calibration is to be used,
control is passed from decision diamond 110 to block 120 to send the
photos for evaluation. Then control is passed to block 125 to determine
the camera's internal characteristics and correction factors. Once the
optical characteristics for a camera are known, the camera can be
identified and its parameters stored for future reference as shown in
block 130.
[0040]Referring now to FIG. 2, a flow diagram shows a process 200 of
taking photos and sending a camera selection to a central location as one
embodiment of the present invention. As shown in block 205, photos can be
taken with markers to indicate points on the damaged vehicle or without
markers. The photos can be sent via electronic means, such as a
computer-to-computer communication, to a central location as shown in
block 210. As shown in block 215, information regarding the camera used
to take the photos can be selected and sent via computer to a central
location. Once the photos are received in the central location, control
is passed to decision diamond 220 to evaluate the camera settings file
that accompanies the photos. If the settings with the photos match the
settings when the camera was calibrated (e.g., in accordance with the
method of FIG. 1), then control is passed to block 230 and the photos are
accepted for further evaluation. If the settings are different, then the
person who took the photos is notified via electronic means (e.g., email)
or via phone to recalibrate their camera to reflect the new settings
(block 225) or send new photos with a previously calibrated and unaltered
camera. Control is passed to decision diamond 240 to determine whether
new calibration settings have been sent. If new calibration settings are
sent, then the control is passed to block 235 and the new camera
characteristics are stored. Control is subsequently passed to block 230.
If new recalibration settings are not sent, then control passes to
decision diamond 245 to evaluate whether new pictures have been sent. If
new pictures have been sent, then control passes to decision diamond 220
to evaluate the camera settings file that accompanies the photos and the
process of camera settings comparison repeats as discussed previously. If
new photos are not sent, then control passes to block 250 and photos are
not accepted for use in further evaluation.
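The settings check at decision diamond 220 might look like a simple profile comparison. The sketch below is illustrative only; the camera identifier and profile fields are hypothetical:

```python
# Hypothetical camera-profile check mirroring decision diamond 220:
# accept the photos only if the submitted settings match the settings
# stored when the camera was calibrated.
profile_db = {
    "cam-1234": {"focal_mm": 35.0, "resolution": (3008, 2000), "zoom": 1.0},
}

def settings_match(camera_id, submitted, db):
    calibrated = db.get(camera_id)
    return calibrated is not None and calibrated == submitted

accepted = settings_match(
    "cam-1234",
    {"focal_mm": 35.0, "resolution": (3008, 2000), "zoom": 1.0},
    profile_db)
rejected = settings_match(
    "cam-1234",
    {"focal_mm": 50.0, "resolution": (3008, 2000), "zoom": 1.0},
    profile_db)
print(accepted, rejected)  # -> True False (accept photos / request recalibration)
```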
[0041]FIG. 3 shows the coordinate reference frames of the object, O, the
camera, C, and the image, I. The extrinsic parameters map 3D coordinates
in the object reference frame to 3D coordinates in the camera reference
frame. Likewise, the intrinsic parameters map the 3D coordinates in the
camera reference frame to the projected 2D coordinates in the image
plane.
[0042]Given a 3D point, $P_O$, in the object's reference frame, its
coordinates in the camera reference frame are given by:

$$P_C = R P_O + T_O$$

where $R$ is a $3 \times 3$ rotation matrix and $T_O$ is the position of
the object reference frame with respect to the camera. This can also be
written as:

$$P_C = R P_O - R T_C = \begin{bmatrix} R & -R T_C \\ 0 & 1 \end{bmatrix}
\begin{Bmatrix} P_O \\ 1 \end{Bmatrix}$$

where $T_C$ is the position of the camera reference frame with respect
to the object.
[0043]FIG. 4 shows the relationship between the camera reference frame and
the image plane. By way of similar triangles, it can be shown that the
point, $P_C$, with coordinates $(X, Y, Z)$ in the camera reference frame,
projects to $(fX/Z, fY/Z)$ on the image plane. Using homogeneous
coordinates, this can be expressed as the matrix multiplication:

$$P_{IC} = \begin{Bmatrix} fX \\ fY \\ Z \end{Bmatrix} =
\begin{bmatrix} f & 0 & 0 & 0 \\ 0 & f & 0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}
\begin{Bmatrix} X \\ Y \\ Z \\ 1 \end{Bmatrix}$$

where $P_{IC}$ indicates that the projected point is expressed with
respect to the principal point of the image. Usually, the image
coordinate system is offset from the principal point as shown in FIG. 5.
[0044]In non-homogeneous coordinates, the image point is
$(fX/Z + U_0,\; fY/Z + V_0)$, which can be written as a matrix
multiplication using homogeneous coordinates, i.e.,

$$P_I = \begin{Bmatrix} fX + Z U_0 \\ fY + Z V_0 \\ Z \end{Bmatrix} =
\begin{bmatrix} f & 0 & U_0 & 0 \\ 0 & f & V_0 & 0 \\ 0 & 0 & 1 & 0 \end{bmatrix}
\begin{Bmatrix} X \\ Y \\ Z \\ 1 \end{Bmatrix}$$

Now, writing

[0045]$$K = \begin{bmatrix} f & 0 & U_0 \\ 0 & f & V_0 \\ 0 & 0 & 1 \end{bmatrix}$$

Here, the matrix $K$ is called the camera calibration matrix.
[0046]The entire mapping from the object reference frame to the image
plane can be written as:

$$P_I = \begin{bmatrix} K & \vec{0} \end{bmatrix}
\begin{bmatrix} R & -R T_C \\ 0 & 1 \end{bmatrix}
\begin{Bmatrix} P_O \\ 1 \end{Bmatrix}$$

Or, more succinctly as:

$$P_I = K \begin{bmatrix} R & t \end{bmatrix}
\begin{Bmatrix} P_O \\ 1 \end{Bmatrix}
\quad \text{where} \quad t = -R T_C$$

It should be noted that the intrinsic parameters are entirely contained
in the camera calibration matrix, $K$, while the extrinsic parameters are
described by the $[R \; t]$ matrix.
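As a small numeric sketch of this mapping, the following projects a synthetic 3D point through P_I = K[R t] with t = -R T_C; all values are illustrative:

```python
import numpy as np

def project_point(K, R, T_C, P_O):
    """Project an object-frame point through the pinhole model,
    separating intrinsics (K) from extrinsics ([R | t], t = -R T_C)."""
    t = -R @ T_C                       # translation from camera center T_C
    p = K @ (R @ P_O + t)              # homogeneous image point
    return p[:2] / p[2]                # (fX/Z + U0, fY/Z + V0)

K = np.array([[500.0, 0, 320],         # focal length f, principal point (U0, V0)
              [0, 500.0, 240],
              [0, 0, 1]])
R = np.eye(3)                          # camera axes aligned with object frame
T_C = np.array([0.0, 0.0, -2.0])       # camera center 2 units behind the origin
P_O = np.array([0.5, 0.25, 0.0])

print(project_point(K, R, T_C, P_O))   # -> (445.0, 302.5)
```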
[0047]The pinhole camera model assumes image coordinates are Euclidean
with equal scales in both directions. In the case of digital cameras, it
is possible to have non-square pixels. It is also possible, albeit
unlikely, that the pixel axes are not perpendicular, i.e., the pixels are
skew. In this case, the camera calibration matrix, $K$, can have the more
general form:

$$K = \begin{bmatrix} f_X & s & U_0 \\ 0 & f_Y & V_0 \\ 0 & 0 & 1 \end{bmatrix}$$

The terms $f_X$ and $f_Y$ account for non-square pixels and $s$ is the
skew parameter. In most cases, the skew term will be zero.
[0048]The calibration procedure may begin by taking several images of a
planar calibration pattern with precisely known geometry. For example, in
one embodiment a calibration pattern may be a checkerboard pattern with
regularly spaced rectangles or squares such as shown in FIG. 6.
[0049]If the object reference frame is chosen such that the XY plane is
the calibration plane ($Z = 0$), then the relationship between image
coordinates and object coordinates can be expressed as:

$$P_I = K \begin{bmatrix} r_1 & r_2 & r_3 & t \end{bmatrix}
\begin{Bmatrix} P_{Ox} \\ P_{Oy} \\ 0 \\ 1 \end{Bmatrix} =
K \begin{bmatrix} r_1 & r_2 & t \end{bmatrix}
\begin{Bmatrix} P_{Ox} \\ P_{Oy} \\ 1 \end{Bmatrix}
\quad \text{or} \quad
P_I = H \begin{Bmatrix} P_{Ox} \\ P_{Oy} \\ 1 \end{Bmatrix}$$

Here, the $3 \times 3$ matrix, $H$, is called a planar homography and it
maps points from the calibration plane to their corresponding image
coordinates. Given an image of the calibration pattern, this homography
can be estimated.
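One common way to estimate the homography from point correspondences is the direct linear transformation (DLT). The sketch below uses synthetic correspondences generated from a known H; it is an illustration, not necessarily the estimation method used in the patent:

```python
import numpy as np

def estimate_homography(obj_pts, img_pts):
    """DLT estimate of the planar homography H mapping calibration-plane
    points (X, Y) to image points (u, v); needs at least 4 correspondences."""
    A = []
    for (X, Y), (u, v) in zip(obj_pts, img_pts):
        A.append([X, Y, 1, 0, 0, 0, -u * X, -u * Y, -u])
        A.append([0, 0, 0, X, Y, 1, -v * X, -v * Y, -v])
    _, _, Vt = np.linalg.svd(np.array(A, dtype=float))
    H = Vt[-1].reshape(3, 3)           # null vector of A, reshaped
    return H / H[2, 2]                 # fix the arbitrary scale

# Synthetic check: map plane points through a known H, then re-estimate it
H_true = np.array([[1.2, 0.1, 5.0], [-0.05, 0.9, 3.0], [0.001, 0.002, 1.0]])
obj = [(0, 0), (1, 0), (0, 1), (1, 1), (2, 1)]
img = []
for X, Y in obj:
    w = H_true @ np.array([X, Y, 1.0])
    img.append((w[0] / w[2], w[1] / w[2]))

print(np.allclose(estimate_homography(obj, img), H_true))  # -> True
```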
[0050]Denoting the homography $H = [h_1 \; h_2 \; h_3]$ gives

$$[h_1 \; h_2 \; h_3] = K [r_1 \; r_2 \; t]$$

Since $r_1$ and $r_2$ are orthonormal, it can be shown that

[0051]$$h_1^T K^{-T} K^{-1} h_2 = 0$$
$$h_1^T K^{-T} K^{-1} h_1 - h_2^T K^{-T} K^{-1} h_2 = 0$$

Defining $\omega$ as $\omega = K^{-T} K^{-1}$ gives

[0052]$$h_1^T \omega h_2 = 0$$
$$h_1^T \omega h_1 - h_2^T \omega h_2 = 0$$

Here, $\omega$ is known as the image of the absolute conic.
[0053]In terms of the calibration matrix, $\omega$ can be expressed as:

$$\omega = K^{-T} K^{-1} =
\begin{bmatrix} \omega_{11} & \omega_{12} & \omega_{13} \\
\omega_{21} & \omega_{22} & \omega_{23} \\
\omega_{31} & \omega_{32} & \omega_{33} \end{bmatrix} =
\begin{bmatrix}
\frac{1}{f_x^2} & 0 & -\frac{U_0}{f_x^2} \\
0 & \frac{1}{f_y^2} & -\frac{V_0}{f_y^2} \\
-\frac{U_0}{f_x^2} & -\frac{V_0}{f_y^2} &
\left( \frac{U_0^2}{f_x^2} + \frac{V_0^2}{f_y^2} + 1 \right)
\end{bmatrix}$$

Since $\omega$ is symmetric, it can be represented by the 6-vector

[0054]$$\vec{\omega} = \{ \omega_{11} \;\; \omega_{12} \;\; \omega_{22}
\;\; \omega_{13} \;\; \omega_{23} \;\; \omega_{33} \}^T$$
[0055]Using the estimated homography of each calibration image and the
constraints on the calibration matrix, a set of linear equations can be
written in $\vec{\omega}$, i.e.,

$$h_i^T \omega h_j = v_{ij}^T \vec{\omega}$$

where

$$v_{ij} = [\, h_{i1} h_{j1},\;\; h_{i1} h_{j2} + h_{i2} h_{j1},\;\;
h_{i2} h_{j2},\;\; h_{i3} h_{j1} + h_{i1} h_{j3},\;\;
h_{i3} h_{j2} + h_{i2} h_{j3},\;\; h_{i3} h_{j3} \,]^T$$

Therefore, the constraints on each homography result in the following
linear system

[0056]$$\begin{bmatrix} v_{12}^T \\ (v_{11} - v_{22})^T \end{bmatrix}
\vec{\omega} = V \vec{\omega} = \vec{0}$$

For $n$ images, the matrix $V$ is a $2n \times 6$ matrix. This system can
be solved if there are at least 3 images of the calibration plane.
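Once the linear system is solved (e.g., by taking the null space of V via SVD), the intrinsics can be read off the closed-form entries of omega given above. The sketch below simply round-trips a synthetic zero-skew calibration matrix to show the recovery formulas; the numbers are illustrative:

```python
import numpy as np

def intrinsics_from_omega(w):
    """Recover (fx, fy, U0, V0) from omega = K^{-T} K^{-1}, assuming zero
    skew, using omega11 = 1/fx^2, omega22 = 1/fy^2,
    omega13 = -U0/fx^2, omega23 = -V0/fy^2."""
    fx = 1.0 / np.sqrt(w[0, 0])
    fy = 1.0 / np.sqrt(w[1, 1])
    U0 = -w[0, 2] / w[0, 0]
    V0 = -w[1, 2] / w[1, 1]
    return fx, fy, U0, V0

# Round-trip check with a known calibration matrix
K = np.array([[800.0, 0, 320], [0, 750.0, 240], [0, 0, 1]])
Kinv = np.linalg.inv(K)
omega = Kinv.T @ Kinv
print(intrinsics_from_omega(omega))  # recovers approximately (800, 750, 320, 240)
```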
[0057]The pinhole camera model is an idealized camera model. Real cameras
have imperfect lenses that produce nonlinear effects. For example, when
the magnification of a lens differs at its edges and at its center, the
image of a square object will be distorted. If the magnification is lower
at the edges than at the center, a square object will appear to have
rounded edges. This type of distortion is called "barrel" distortion. If
the magnification is greater at the edges than at the center, the image
will exhibit "pincushion" distortion. FIGS. 7A-7C illustrate both of
these radial distortion effects, along with an undistorted image.
[0058]Radial distortion can be corrected using a nonlinear distortion
factor. Studies have shown that the distortion can be modeled as a
polynomial with respect to the squared radial distance, i.e.,

$$\begin{bmatrix} \delta u \\ \delta v \end{bmatrix} =
\begin{bmatrix} \tilde{u} ( k_1 r^2 + k_2 r^4 + \cdots ) \\
\tilde{v} ( k_1 r^2 + k_2 r^4 + \cdots ) \end{bmatrix}$$

where $(\delta u, \delta v)$ is the amount of distortion in the x- and
y-directions, respectively, $(\tilde{u}, \tilde{v})$ is an image point
projected via the pinhole camera model, $k_1, k_2, \ldots$ are the
distortion coefficients, and $r = \sqrt{\tilde{u}^2 + \tilde{v}^2}$ is
the radius.
[0059]Thus, the pinhole camera model can be extended to include nonlinear
distortion, i.e.,

$$\begin{bmatrix} u \\ v \end{bmatrix} =
\begin{bmatrix} f_X ( \tilde{u} + \delta u ) \\
f_Y ( \tilde{v} + \delta v ) \end{bmatrix} +
\begin{bmatrix} U_0 \\ V_0 \end{bmatrix} =
\begin{bmatrix} f_X \tilde{u} ( 1 + k_1 r^2 + k_2 r^4 ) \\
f_Y \tilde{v} ( 1 + k_1 r^2 + k_2 r^4 ) \end{bmatrix} +
\begin{bmatrix} U_0 \\ V_0 \end{bmatrix}$$

It has been shown that radial distortion is adequately modeled by the
first two polynomial terms; thus, the higher-order terms have been
dropped.
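The two-term model can be applied directly as a sketch; the focal lengths, principal point, and distortion coefficients below are illustrative values, not calibration results:

```python
# Apply the two-term radial distortion model: a pinhole-projected point
# (ut, vt) is scaled by (1 + k1*r^2 + k2*r^4) before the focal scaling
# and principal-point offset are applied.
def distort(ut, vt, fx, fy, U0, V0, k1, k2):
    r2 = ut * ut + vt * vt
    scale = 1.0 + k1 * r2 + k2 * r2 * r2
    return fx * ut * scale + U0, fy * vt * scale + V0

# Mild barrel distortion (negative k1) pulls points toward the center
u, v = distort(0.3, 0.2, fx=500, fy=500, U0=320, V0=240, k1=-0.1, k2=0.01)
print(u, v)  # slightly inside the undistorted point (470.0, 340.0)
```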
[0060]To calculate the optimal camera calibration parameters, the problem
is posed as a nonlinear minimization problem. Given $n$ images of a
calibration target with $m$ points, the objective is to minimize the
reprojection error, i.e.,

$$f = \sum_{i=1}^{n} \sum_{j=1}^{m}
\left\| m_{ij} - \vec{m}(K, k_1, k_2, R_i, t_i, P_j) \right\|^2$$

where $\vec{m}(K, k_1, k_2, R_i, t_i, P_j)$ is the projection of point
$P_j$ in image $i$. This nonlinear minimization problem is solved via the
Levenberg-Marquardt algorithm.
[0061]Of the nonlinear least squares algorithms, the Levenberg-Marquardt
(LM) algorithm has been the most popular because of its tolerance to
missing data and its convergence properties. The basis for the LM
algorithm is a linear approximation of the objective function in the
neighborhood of the control variables. For a small perturbation of the
control variables, $\delta_p$, a Taylor series expansion yields a linear
approximation for the residual, i.e.,

$$f(p + \delta_p) \approx f(p) + J \delta_p$$
The gradient of this form of the objective function is given by

[0062]$$g = J^T f(p + \delta_p) = J^T \left[ f(p) + J \delta_p \right]$$

Setting the gradient to zero yields the so-called normal equations, i.e.,

[0063]$$J^T J \, \delta_p = -J^T f(p)$$

Solving this linear system gives the $\delta_p$ that is a local
minimizer of the objective function.
[0064]The LM algorithm uses a slightly different version of these
equations called the augmented normal equations, i.e.,

$$N \delta_p = -J^T f(p)$$

The off-diagonal elements of the matrix $N$ are the same as the
corresponding elements of the $J^T J$ matrix, but the diagonal elements
of $N$ are given by

[0065]$$N_{ii} = \mu + \left[ J^T J \right]_{ii}
\quad \text{for some } \mu > 0.$$
[0066]This strategy of altering the diagonal elements is called damping
and the factor, .mu., is called the damping term. If the solution of the
normal equations yields a new parameter estimate that reduces the value
of the objective function, the update is accepted and the process is
repeated with a decreased damping term. Otherwise, the damping term is
increased and the normal equations are solved again until the objective
function is decreased. In a single iteration of the LM algorithm, the
normal equations are solved until an acceptable parameter estimate is
found.
[0067]The augmented normal equations can also be written as the following
linear system:

$$( H + \mu I ) \, \delta_p = -g$$

where $H$ is the Gauss-Newton approximation of the Hessian, $J^T J$, and
$I$ is the identity matrix with the same size as $H$.
[0068]The LM algorithm terminates when one of three stopping criteria is
met: (1) the norm of the gradient of the residual, i.e., $J^T f(p)$,
falls below a threshold, $\epsilon_1$; (2) the relative change in the
step size falls below a threshold, $\epsilon_2$; and (3) the number of
LM iterations exceeds some maximum value, $k_{max}$.
[0069]Table 1 shows pseudocode for an LM algorithm in accordance with one
embodiment of the present invention.
TABLE 1
iterations = 0;
p = p0;
residual = x - f(p);
error = ||residual||;
H = J^T J;                       // Hessian (Gauss-Newton approximation)
g = J^T residual;                // gradient
converged = ( ||g|| < e1 );
mu = maximum diagonal element of H;
while (not converged) and (iterations < max_iterations)
    accept_step = false;
    while (not accept_step) and (not converged)
        // Solve augmented normal equations
        step = solution of ( H + mu*I ) delta_p = g;
        // Evaluate error, gradient, and Hessian at new step
        p_new = p + step;
        residual_new = x - f(p_new);
        error_new = ||residual_new||;
        H_new = J_new^T J_new;
        g_new = J_new^T residual_new;
        // Check step against relative threshold
        if ( ||step|| < e2 * ||p|| )
            converged = true;
        else
            // Check acceptability of step
            accept_step = ( error_new < error );
            if (accept_step)
                // Accept step: update control variables, error,
                // Hessian, gradient, and damping parameter.
                p = p_new;
                error = error_new;
                H = H_new;
                g = g_new;
                mu = mu / 10;
            else
                // Reject step and increase damping parameter.
                mu = mu * 10;
            end;
        end;
    endwhile
    // Convergence test based on gradient norm
    converged = ( ||g|| < e1 );
    // Update iteration count
    iterations = iterations + 1;
endwhile
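For illustration, the logic of Table 1 maps fairly directly onto Python with NumPy. The sketch below is not the patent's code: the generic residual function, the forward-difference Jacobian, and the exponential-decay fitting example are assumptions added for demonstration.

```python
import numpy as np

def numerical_jacobian(f, p, h=1e-7):
    """Forward-difference Jacobian of the model f at parameters p."""
    f0 = f(p)
    J = np.zeros((f0.size, p.size))
    for j in range(p.size):
        dp = np.zeros_like(p)
        dp[j] = h
        J[:, j] = (f(p + dp) - f0) / h
    return J

def levenberg_marquardt(f, x, p0, e1=1e-8, e2=1e-8, max_iterations=100):
    """Minimize ||x - f(p)||^2 following the damping strategy of Table 1."""
    p = p0.astype(float)
    residual = x - f(p)
    error = np.linalg.norm(residual)
    J = numerical_jacobian(f, p)
    H = J.T @ J                       # Gauss-Newton approximation of the Hessian
    g = J.T @ residual                # gradient
    converged = np.linalg.norm(g) < e1
    mu = H.diagonal().max()           # initial damping term
    iterations = 0
    while not converged and iterations < max_iterations:
        accept_step = False
        while not accept_step and not converged:
            # Solve the augmented normal equations (H + mu*I) step = g
            step = np.linalg.solve(H + mu * np.eye(p.size), g)
            p_new = p + step
            residual_new = x - f(p_new)
            error_new = np.linalg.norm(residual_new)
            if np.linalg.norm(step) < e2 * np.linalg.norm(p):
                converged = True
            else:
                accept_step = error_new < error
                if accept_step:
                    # Accept step: update state and decrease damping
                    p, residual, error = p_new, residual_new, error_new
                    J = numerical_jacobian(f, p)
                    H = J.T @ J
                    g = J.T @ residual
                    mu /= 10.0
                else:
                    # Reject step and increase damping
                    mu *= 10.0
        converged = converged or np.linalg.norm(g) < e1
        iterations += 1
    return p

# Hypothetical test problem: recover the decay rate k = 1.5 from x = exp(-k*t)
t = np.linspace(0.0, 2.0, 20)
truth = np.exp(-1.5 * t)
k = levenberg_marquardt(lambda p: np.exp(-p[0] * t), truth, np.array([0.5]))
```

With noiseless data the iteration drives the residual to zero, so the recovered rate matches the true value to the step tolerance.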
[0070]This section describes a technique whereby the structure of the
Jacobian matrix can be exploited to reduce the overall complexity of a
system and greatly improve the computational performance.
[0071]For illustrative purposes, the following will be applied to bundle
adjustment; however, the technique can be applied to camera calibration,
the pose estimation problem, and many other computer vision problems.
[0072]Assume that $n$ 3D points are visible in $m$ images. Let $\hat{x}_{ij}$
represent the projection of point $i$ on image $j$. Let $a_j$ represent the
control variables of each camera $j$ and let $b_i$ represent the control
variables for each 3D point $i$.

[0073]For $n$ points in $m$ images, the observed image points are

$$x = (x_{11}^T, \ldots, x_{1m}^T, x_{21}^T, \ldots, x_{2m}^T, \ldots, x_{n1}^T, \ldots, x_{nm}^T)^T$$

Similarly, the estimated (projected) image points of the $n$ 3D points are
given by

[0074]$$\hat{x} = (\hat{x}_{11}^T, \ldots, \hat{x}_{1m}^T, \hat{x}_{21}^T, \ldots, \hat{x}_{2m}^T, \ldots, \hat{x}_{n1}^T, \ldots, \hat{x}_{nm}^T)^T$$

where each $\hat{x}_{ij} = Q(a_j, b_i)$ is a predicted image point from a
mathematical camera model, e.g., the pinhole projection model. The error or
residual vector is defined as

$$\epsilon = x - \hat{x}$$

[0075]The control variables are partitioned by the vector

$$P = (a_1^T, \ldots, a_m^T, b_1^T, \ldots, b_n^T)^T$$
For simplicity, the Jacobian for $n = 4$ points in $m = 3$ views is given by

[0076]$$J = \frac{\partial \hat{x}(P)}{\partial P} = \begin{pmatrix}
A_{11} & 0 & 0 & B_{11} & 0 & 0 & 0 \\
0 & A_{12} & 0 & B_{12} & 0 & 0 & 0 \\
0 & 0 & A_{13} & B_{13} & 0 & 0 & 0 \\
A_{21} & 0 & 0 & 0 & B_{21} & 0 & 0 \\
0 & A_{22} & 0 & 0 & B_{22} & 0 & 0 \\
0 & 0 & A_{23} & 0 & B_{23} & 0 & 0 \\
A_{31} & 0 & 0 & 0 & 0 & B_{31} & 0 \\
0 & A_{32} & 0 & 0 & 0 & B_{32} & 0 \\
0 & 0 & A_{33} & 0 & 0 & B_{33} & 0 \\
A_{41} & 0 & 0 & 0 & 0 & 0 & B_{41} \\
0 & A_{42} & 0 & 0 & 0 & 0 & B_{42} \\
0 & 0 & A_{43} & 0 & 0 & 0 & B_{43}
\end{pmatrix}$$

where

$$A_{ij} = \frac{\partial \hat{x}_{ij}}{\partial a_j}
\qquad\text{and}\qquad
B_{ij} = \frac{\partial \hat{x}_{ij}}{\partial b_i}$$
[0077]The first m columns of the Jacobian are the partial derivatives of
the image residuals with respect to the parameters of camera j. Since the
camera parameters for one image do not affect the projected image points
of other images, there are numerous zeros in these columns. Similarly,
the last n columns of the Jacobian are the partial derivatives of the
image residuals with respect to the 3D structure parameters. These
columns also have numerous zeros because of the lack of interaction
between parameters. Reconsider the normal equations, i.e.,
$$J^T J\,\delta = J^T \epsilon$$
The left hand side of the normal equations is given by

[0078]$$J^T J = \begin{pmatrix}
\sum_i A_{i1}^T A_{i1} & 0 & 0 & A_{11}^T B_{11} & A_{21}^T B_{21} & A_{31}^T B_{31} & A_{41}^T B_{41} \\
0 & \sum_i A_{i2}^T A_{i2} & 0 & A_{12}^T B_{12} & A_{22}^T B_{22} & A_{32}^T B_{32} & A_{42}^T B_{42} \\
0 & 0 & \sum_i A_{i3}^T A_{i3} & A_{13}^T B_{13} & A_{23}^T B_{23} & A_{33}^T B_{33} & A_{43}^T B_{43} \\
B_{11}^T A_{11} & B_{12}^T A_{12} & B_{13}^T A_{13} & \sum_j B_{1j}^T B_{1j} & 0 & 0 & 0 \\
B_{21}^T A_{21} & B_{22}^T A_{22} & B_{23}^T A_{23} & 0 & \sum_j B_{2j}^T B_{2j} & 0 & 0 \\
B_{31}^T A_{31} & B_{32}^T A_{32} & B_{33}^T A_{33} & 0 & 0 & \sum_j B_{3j}^T B_{3j} & 0 \\
B_{41}^T A_{41} & B_{42}^T A_{42} & B_{43}^T A_{43} & 0 & 0 & 0 & \sum_j B_{4j}^T B_{4j}
\end{pmatrix}$$

And the right hand side is given by

[0079]$$J^T \epsilon = \begin{pmatrix}
\sum_i A_{i1}^T \epsilon_{i1} \\
\sum_i A_{i2}^T \epsilon_{i2} \\
\sum_i A_{i3}^T \epsilon_{i3} \\
\sum_j B_{1j}^T \epsilon_{1j} \\
\sum_j B_{2j}^T \epsilon_{2j} \\
\sum_j B_{3j}^T \epsilon_{3j} \\
\sum_j B_{4j}^T \epsilon_{4j}
\end{pmatrix}$$
[0080]Substituting $U_j$, $V_i$, $W_{ij}$, $\epsilon_{a_j}$ and
$\epsilon_{b_i}$ for
$\sum_i A_{ij}^T A_{ij}$, $\sum_j B_{ij}^T B_{ij}$, $A_{ij}^T B_{ij}$,
$\sum_i A_{ij}^T \epsilon_{ij}$ and $\sum_j B_{ij}^T \epsilon_{ij}$,
respectively, the normal equations can be written in a more compact form, i.e.,

$$\begin{pmatrix}
U_1 & 0 & 0 & W_{11} & W_{21} & W_{31} & W_{41} \\
0 & U_2 & 0 & W_{12} & W_{22} & W_{32} & W_{42} \\
0 & 0 & U_3 & W_{13} & W_{23} & W_{33} & W_{43} \\
W_{11}^T & W_{12}^T & W_{13}^T & V_1 & 0 & 0 & 0 \\
W_{21}^T & W_{22}^T & W_{23}^T & 0 & V_2 & 0 & 0 \\
W_{31}^T & W_{32}^T & W_{33}^T & 0 & 0 & V_3 & 0 \\
W_{41}^T & W_{42}^T & W_{43}^T & 0 & 0 & 0 & V_4
\end{pmatrix}
\begin{pmatrix}
\delta_{a_1} \\ \delta_{a_2} \\ \delta_{a_3} \\
\delta_{b_1} \\ \delta_{b_2} \\ \delta_{b_3} \\ \delta_{b_4}
\end{pmatrix} =
\begin{pmatrix}
\epsilon_{a_1} \\ \epsilon_{a_2} \\ \epsilon_{a_3} \\
\epsilon_{b_1} \\ \epsilon_{b_2} \\ \epsilon_{b_3} \\ \epsilon_{b_4}
\end{pmatrix}$$
The normal equations can be written in even more compact form, i.e.,

[0081]$$\begin{pmatrix} U^* & W \\ W^T & V^* \end{pmatrix}
\begin{pmatrix} \delta_a \\ \delta_b \end{pmatrix} =
\begin{pmatrix} \epsilon_a \\ \epsilon_b \end{pmatrix}$$

where

$$U^* = \begin{pmatrix} U_1^* & 0 & 0 \\ 0 & U_2^* & 0 \\ 0 & 0 & U_3^* \end{pmatrix},
\quad
V^* = \begin{pmatrix} V_1^* & 0 & 0 & 0 \\ 0 & V_2^* & 0 & 0 \\ 0 & 0 & V_3^* & 0 \\ 0 & 0 & 0 & V_4^* \end{pmatrix},
\quad\text{and}\quad
W = \begin{pmatrix}
W_{11} & W_{21} & W_{31} & W_{41} \\
W_{12} & W_{22} & W_{32} & W_{42} \\
W_{13} & W_{23} & W_{33} & W_{43}
\end{pmatrix}$$

the asterisk denoting the diagonal blocks augmented by the damping term
(see Table 2, step 3).
[0082]Multiplying the compact form of the augmented normal equations on the
left by

$$\begin{pmatrix} I & -W V^{*-1} \\ 0 & I \end{pmatrix}$$

results in

$$\begin{pmatrix} U^* - W V^{*-1} W^T & 0 \\ W^T & V^* \end{pmatrix}
\begin{pmatrix} \delta_a \\ \delta_b \end{pmatrix} =
\begin{pmatrix} \epsilon_a - W V^{*-1} \epsilon_b \\ \epsilon_b \end{pmatrix}$$

[0083]Since the upper right block of the left-hand matrix is zero, the
$\delta_a$ vector can be determined by solving the upper (camera) set of
equations, i.e.,

$$(U^* - W V^{*-1} W^T)\,\delta_a = \epsilon_a - W V^{*-1} \epsilon_b$$
After solving for $\delta_a$, $\delta_b$ can be obtained by
back-substitution into the bottom (point) set of equations. Denoting
$Y_{ij} = W_{ij} V_i^{*-1}$, the upper set of equations becomes

$$\begin{pmatrix}
U_1^* - \sum_i Y_{i1} W_{i1}^T & -\sum_i Y_{i1} W_{i2}^T & -\sum_i Y_{i1} W_{i3}^T \\
-\sum_i Y_{i2} W_{i1}^T & U_2^* - \sum_i Y_{i2} W_{i2}^T & -\sum_i Y_{i2} W_{i3}^T \\
-\sum_i Y_{i3} W_{i1}^T & -\sum_i Y_{i3} W_{i2}^T & U_3^* - \sum_i Y_{i3} W_{i3}^T
\end{pmatrix}
\begin{pmatrix} \delta_{a_1} \\ \delta_{a_2} \\ \delta_{a_3} \end{pmatrix} =
\begin{pmatrix}
\epsilon_{a_1} - \sum_i Y_{i1}\,\epsilon_{b_i} \\
\epsilon_{a_2} - \sum_i Y_{i2}\,\epsilon_{b_i} \\
\epsilon_{a_3} - \sum_i Y_{i3}\,\epsilon_{b_i}
\end{pmatrix}$$

which can be solved for $\delta_a$. Each $\delta_{b_i}$ is then given by

$$\delta_{b_i} = V_i^{*-1}\Big(\epsilon_{b_i} - \sum_j W_{ij}^T\,\delta_{a_j}\Big)$$
Table 2 summarizes the technique, which can be generalized to n points in
m views.
[0084]TABLE 2
1. Compute the derivative matrices
   $A_{ij} = \partial\hat{x}_{ij}/\partial a_j$ and
   $B_{ij} = \partial\hat{x}_{ij}/\partial b_i$,
   and the error vectors $\epsilon_{ij} = x_{ij} - \hat{x}_{ij}$.
2. Compute the intermediate expressions
   $U_j = \sum_i A_{ij}^T A_{ij}$, $V_i = \sum_j B_{ij}^T B_{ij}$,
   $W_{ij} = A_{ij}^T B_{ij}$,
   $\epsilon_{a_j} = \sum_i A_{ij}^T \epsilon_{ij}$, and
   $\epsilon_{b_i} = \sum_j B_{ij}^T \epsilon_{ij}$.
3. Multiply the diagonals of $U$ and $V$ by $1 + \mu$ to form $U^*$ and $V^*$.
4. Compute the inverse of $V^*$. Since $V^*$ is not used again, it can
   be replaced by the inverse.
5. Find $\delta_a$ by solving
   $(U^* - W V^{*-1} W^T)\,\delta_a = \epsilon_a - W V^{*-1}\epsilon_b$.
6. Find $\delta_b$ by back-substitution:
   $\delta_{b_i} = V_i^{*-1}(\epsilon_{b_i} - \sum_j W_{ij}^T \delta_{a_j})$.
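The elimination in Table 2 can be exercised on small random blocks to check that the Schur-complement solution matches a direct dense solve of the same augmented normal equations. Everything below (block sizes, random data, the damping value) is hypothetical scaffolding; a production implementation would keep the matrices sparse throughout.

```python
import numpy as np

rng = np.random.default_rng(0)
n, m = 4, 3          # points, views
ca, cb = 6, 3        # camera and point parameter sizes

# Per-observation derivative blocks A_ij (2 x ca), B_ij (2 x cb), residuals e_ij
A = rng.standard_normal((n, m, 2, ca))
B = rng.standard_normal((n, m, 2, cb))
e = rng.standard_normal((n, m, 2))
mu = 0.1

# Step 2: intermediate expressions
U = np.array([sum(A[i, j].T @ A[i, j] for i in range(n)) for j in range(m)])
V = np.array([sum(B[i, j].T @ B[i, j] for j in range(m)) for i in range(n)])
W = np.array([[A[i, j].T @ B[i, j] for i in range(n)] for j in range(m)])  # (m, n, ca, cb)
ea = np.array([sum(A[i, j].T @ e[i, j] for i in range(n)) for j in range(m)])
eb = np.array([sum(B[i, j].T @ e[i, j] for j in range(m)) for i in range(n)])

# Step 3: multiply the diagonals of U and V by (1 + mu)
Ustar = U + np.array([mu * np.diag(np.diag(Uj)) for Uj in U])
Vstar = V + np.array([mu * np.diag(np.diag(Vi)) for Vi in V])

# Step 4: invert each V*_i block
Vinv = np.array([np.linalg.inv(Vi) for Vi in Vstar])

# Step 5: reduced camera system (U* - W V*^-1 W^T) delta_a = ea - W V*^-1 eb
S = np.zeros((m * ca, m * ca))
rhs = np.zeros(m * ca)
for j in range(m):
    rhs[j*ca:(j+1)*ca] = ea[j] - sum(W[j, i] @ Vinv[i] @ eb[i] for i in range(n))
    for k in range(m):
        Sjk = sum(W[j, i] @ Vinv[i] @ W[k, i].T for i in range(n))
        S[j*ca:(j+1)*ca, k*ca:(k+1)*ca] = (Ustar[j] - Sjk) if j == k else -Sjk
delta_a = np.linalg.solve(S, rhs).reshape(m, ca)

# Step 6: back-substitute for each point
delta_b = np.array([Vinv[i] @ (eb[i] - sum(W[j, i].T @ delta_a[j] for j in range(m)))
                    for i in range(n)])
```

Because the reduced system is only `m*ca` square rather than `m*ca + n*cb`, the cost of the solve grows with the number of cameras instead of the (much larger) number of points.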
[0085]Estimating the spatial relationship between the object and the
camera, or the pose estimation problem, is a central issue in close-range
photogrammetry and computer vision applications. The goal is to determine
the rigid transformation that relates the object and camera reference
frames. Typically, this rigid transformation is parameterized by a
rotation matrix, R, and a translation, t. FIG. 8 illustrates this
transformation.
[0086]The data used to solve this problem are a set of point
correspondences: 3D coordinates of the object, or "control points," and
their 2D projections onto the image plane. Typically, the control points
are expressed with respect to the object reference frame and their
projections are expressed with respect to the camera reference frame. The
algorithm described herein is based on the work by Hager, et al.
[0087]Given a set of at least 3 control points, $\{p_i\}$, the
corresponding camera space coordinates, $\{q_i\}$, are given by:

$$q_i = R p_i + t$$
The camera reference frame is chosen such that the origin is at the center
of projection and the optical axis is in the positive zdirection. The
control points are projected onto the plane in the camera reference frame
where z=1, the socalled normalized image plane. In the camera reference
frame, the control points are given by:
$$\begin{pmatrix} q_{ix} \\ q_{iy} \\ q_{iz} \end{pmatrix} =
\begin{pmatrix} r_1^T p_i + t_x \\ r_2^T p_i + t_y \\ r_3^T p_i + t_z \end{pmatrix}$$

where $r_1^T$, $r_2^T$, and $r_3^T$ are the rows of the rotation matrix,
$R$. If a ray is drawn from the camera reference frame origin to the
control point, it intersects the normalized image plane at the point
$v_i = (u_i, v_i, 1)$. This can be expressed as:

$$u_i = \frac{r_1^T p_i + t_x}{r_3^T p_i + t_z}, \qquad
v_i = \frac{r_2^T p_i + t_y}{r_3^T p_i + t_z},
\qquad\text{or}\qquad
v_i = \frac{1}{r_3^T p_i + t_z}\,(R p_i + t)$$
[0088]This is known as the collinearity equation. In classical
photogrammetry, the collinearity equation is often used as the basis for
solving the pose estimation problem. The pose is iteratively refined such
that the image residual is minimized, i.e.,
$$\min_{R,t} \sum_{i=1}^n \left[
\left(\hat{u}_i - \frac{r_1^T p_i + t_x}{r_3^T p_i + t_z}\right)^2 +
\left(\hat{v}_i - \frac{r_2^T p_i + t_y}{r_3^T p_i + t_z}\right)^2
\right]$$

Here, $(\hat{u}_i, \hat{v}_i)$ are the coordinates of the control point
observed in the normalized image plane.
[0089]This problem can also be expressed in terms of minimizing the
overall residual in object space as shown in FIG. 9. The lineofsight
projection matrix is defined as:
$$\hat{V}_i = \frac{\hat{v}_i \hat{v}_i^T}{\hat{v}_i^T \hat{v}_i}$$
[0090]When a scene point is multiplied by this matrix, it projects the
point orthogonally to the line of sight defined by the image point
{circumflex over (v)}.sub.i. In the presence of error, there will be a
residual vector between the scene point, q.sub.i and its orthogonal
projection, i.e.,
$$e_i = \hat{V}_i (R p_i + t) - (R p_i + t)$$
[0091]Therefore, the optimal camera pose is that which minimizes the
overall residual in object space, i.e.,
$$\min_{R,t} \sum_{i=1}^n \|e_i\|^2
= \sum_{i=1}^n \big\|(I - \hat{V}_i)(R p_i + t)\big\|^2$$
[0092]If the camera space coordinates of the control points could be
obtained by other means, e.g. digitized with a FARO arm, then each
control point is related by the rigid transformation:
$$q_i = R p_i + t$$

Given 3 or more non-collinear control points, $R$ and $t$ can be
obtained by solving the least-squares problem:

$$\min_{R,t} \sum_{i=1}^n \|R p_i + t - q_i\|^2,
\quad\text{subject to } R^T R = I$$
[0093]This type of constrained leastsquares problem can be solved
analytically using singular value decomposition (SVD). Defining the
centroids of the camera and scene points as:
$$\bar{q} = \frac{1}{n}\sum_{i=1}^n q_i, \qquad
\bar{p} = \frac{1}{n}\sum_{i=1}^n p_i$$

and defining the positions of the camera and scene points, relative to their
centroids, as:

$$q'_i = q_i - \bar{q}, \qquad p'_i = p_i - \bar{p}$$

the sample cross-covariance is

[0094]$$\frac{1}{n} M, \quad\text{where}\quad M = \sum_{i=1}^n q'_i\, p'^T_i$$
[0095]If $R^*$ and $t^*$ are the optimal rotation and translation, then they
must satisfy

$$R^* = \arg\max_R \operatorname{trace}(R^T M), \qquad
t^* = \bar{q} - R^* \bar{p}$$

Let $(U, \Sigma, V)$ be a SVD of $M$, i.e., $M = U \Sigma V^T$; then the
solution for $R^*$ is given by

[0096]$$R^* = U V^T$$
[0097]Thus, the only data required to calculate the optimal rotation
matrix are the 3D coordinates (in camera and object space) relative to
their centroids. The optimal translation is then a simple function of the
optimal rotation and the centroids.
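The SVD solution to the absolute orientation problem can be sketched in a few lines of NumPy. With the cross-covariance taken as $M = \sum_i q'_i p'^T_i$, the maximizing rotation works out to $UV^T$, plus a standard determinant guard against reflections that the text does not discuss; the example transformation and point set are assumptions for illustration.

```python
import numpy as np

def absolute_orientation(p, q):
    """Find R, t minimizing sum ||R p_i + t - q_i||^2 subject to R^T R = I.

    p, q: (n, 3) arrays of corresponding object- and camera-space points.
    """
    p_bar = p.mean(axis=0)
    q_bar = q.mean(axis=0)
    # Cross-covariance M = sum_i q'_i p'_i^T
    M = (q - q_bar).T @ (p - p_bar)
    U, _, Vt = np.linalg.svd(M)
    # Determinant guard keeps R a proper rotation (det = +1), not a reflection
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
    R = U @ D @ Vt
    t = q_bar - R @ p_bar
    return R, t

# Recover a known rigid transformation from noiseless points
rng = np.random.default_rng(1)
angle = 0.4
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0, 0.0, 1.0]])
t_true = np.array([0.5, -1.0, 2.0])
p = rng.standard_normal((10, 3))
q = p @ R_true.T + t_true
R, t = absolute_orientation(p, q)
```

With noiseless correspondences the recovered pose matches the true transformation to machine precision.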
[0098]In one embodiment, an algorithm may be referred to as the orthogonal
iteration (OI) algorithm. This approach is to use the object-space
collinearity error and restructure the problem so that it resembles the
absolute orientation problem. The first step is to define the objective
function based on object-space error, i.e.,

$$E(R, t) = \sum_{i=1}^n \|e_i\|^2
= \sum_{i=1}^n \big\|(I - \hat{V}_i)(R p_i + t)\big\|^2$$

Since the objective function is quadratic in $t$, the optimal translation
can be calculated in closed form as:

$$t(R) = \frac{1}{n}\Big(I - \frac{1}{n}\sum_j \hat{V}_j\Big)^{-1}
\sum_j (\hat{V}_j - I)\, R p_j$$
Defining

[0099]$$q_i(R) = \hat{V}_i\,[R p_i + t(R)]$$

the objective function can be rewritten in the following form:

$$E(R) = \sum_{i=1}^n \|R p_i + t(R) - q_i(R)\|^2$$

This equation has the same form as the absolute orientation problem;
however, SVD cannot be used to solve for $R$ because the sample
cross-covariance is also a function of $R$, i.e.,

[0100]$$M(R) = \sum_{i=1}^n q'_i(R)\, p'^T_i$$
where $p'_i = p_i - \bar{p}$ and $q'_i(R) = q_i(R) - \bar{q}(R)$. Instead,
the following iterative approach is used. Given the $k$-th estimates
$R^{(k)}$, $t^{(k)} = t(R^{(k)})$, and $q_i^{(k)} = R^{(k)} p_i + t^{(k)}$,
the $(k+1)$-th estimate of $R$, $R^{(k+1)}$, is obtained by solving the
following absolute orientation problem:

$$R^{(k+1)} = \arg\min_R \sum_{i=1}^n \big\|R p_i + t - \hat{V}_i q_i^{(k)}\big\|^2
= \arg\max_R \operatorname{trace}\big[R^T M(R^{(k)})\big]$$

Then, the $(k+1)$-th estimate of $t$ is given by

[0101]$$t^{(k+1)} = t(R^{(k+1)})$$

This process is repeated until the estimate of $R$ satisfies:

$$R^* = \arg\min_R \sum_{i=1}^n \big\|R p_i + t - \hat{V}_i (R^* p_i + t(R^*))\big\|^2$$

within some specified tolerance.
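The OI iteration can be sketched as follows. The synthetic pose, point set, and fixed iteration count are assumptions for illustration; the inner absolute-orientation step reuses the SVD solution with a reflection guard.

```python
import numpy as np

def orthogonal_iteration(p, v, R0, iters=100):
    """Object-space pose estimation (OI) sketch.

    p:  (n, 3) control points in the object frame.
    v:  (n, 3) normalized image points (u_i, v_i, 1).
    R0: initial rotation estimate.
    """
    n = p.shape[0]
    # Line-of-sight projection matrices V_i = v v^T / (v^T v)
    V = np.array([np.outer(vi, vi) / (vi @ vi) for vi in v])
    p_bar = p.mean(axis=0)
    # Constant factor in the closed-form translation t(R)
    F = np.linalg.inv(np.eye(3) - V.mean(axis=0)) / n

    R = R0
    for _ in range(iters):
        t = F @ sum((V[i] - np.eye(3)) @ R @ p[i] for i in range(n))
        # q_i(R): current camera-space points projected onto their rays
        q = np.array([V[i] @ (R @ p[i] + t) for i in range(n)])
        # Absolute-orientation step: R = arg max trace(R^T M(R))
        M = (q - q.mean(axis=0)).T @ (p - p_bar)
        U, _, Vt = np.linalg.svd(M)
        D = np.diag([1.0, 1.0, np.sign(np.linalg.det(U @ Vt))])
        R = U @ D @ Vt
    t = F @ sum((V[i] - np.eye(3)) @ R @ p[i] for i in range(n))
    return R, t

# Synthetic check: project known points, then recover the pose from identity
rng = np.random.default_rng(2)
p = rng.standard_normal((8, 3))
a = 0.3
R_true = np.array([[1.0, 0.0, 0.0],
                   [0.0, np.cos(a), -np.sin(a)],
                   [0.0, np.sin(a),  np.cos(a)]])
t_true = np.array([0.2, -0.1, 6.0])
q = p @ R_true.T + t_true          # camera-space points, all in front of camera
v = q / q[:, 2:3]                  # normalized image points (u, v, 1)
R, t = orthogonal_iteration(p, v, np.eye(3))
```

A fixed iteration count stands in for the tolerance-based stopping rule of the text; with noiseless observations the iteration settles on the true pose.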
[0102]Referring now to FIG. 10, photos of a vehicle involved in an
accident may be analyzed. As shown in FIG. 10, method 300 may begin by
receiving photos in a computer system at a central location, where they can
be evaluated to determine whether they were taken with or without markers
in the photos, as shown in block 305. More specifically, in decision diamond
310, the computer evaluates whether markers including, for example,
control points and damage points, were included in the photos. Various
markers may be used.
[0103]Referring still to FIG. 10, if the photos were taken with markers,
control is passed to block 370, and the location of pose, control points
and damage points can be determined automatically by finding the points
in the photos via an algorithm. Automatic target recognition may be used
in photogrammetric systems to reduce the amount of user interaction and
to increase measurement accuracy. Targets can be automatically detected
by examining variations in image intensity levels. Targets made from
retroreflective material are often used because they reflect
significantly more light than a normal white textured surface. In some
implementations, a circular retroreflective target can be used. The
first step in identifying the location of the target is to find areas
within the image where the image intensity is rapidly changing. In some
embodiments, the change in intensity can be defined by the steepness or
the slope of the image intensity, which is also known as the image
gradient. Images of retroreflective targets are characterized by large
image gradients.
[0104]After candidate regions (i.e., areas with large image gradients)
have been identified, the next step is to check the geometry of the
region. For circular targets, candidate regions can be verified by
comparing them with an ellipse. Points on the perimeter of candidate
regions are used to calculate the leastsquares fit of an ellipse.
Spurious regions are eliminated if the region perimeter does not fit the
ellipse within some statistical measure.
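One common way to realize this least-squares ellipse check is an algebraic conic fit: stack each perimeter point into a row of a design matrix and take the direction of its smallest singular value. The patent does not specify the fitting method, so the sketch below, including the sampled perimeter points, is only one plausible approach.

```python
import numpy as np

def fit_conic(x, y):
    """Least-squares fit of a conic a x^2 + b xy + c y^2 + d x + e y + f = 0.

    Returns the conic coefficients (up to scale) as the right singular
    vector of the design matrix with the smallest singular value.
    """
    D = np.column_stack([x*x, x*y, y*y, x, y, np.ones_like(x)])
    _, _, Vt = np.linalg.svd(D)
    return Vt[-1]

def is_ellipse(coeffs):
    """A conic is an ellipse when the discriminant b^2 - 4ac is negative."""
    a, b, c = coeffs[:3]
    return b*b - 4*a*c < 0

# Perimeter points of a candidate region: here, samples of an exact ellipse
theta = np.linspace(0.0, 2*np.pi, 40, endpoint=False)
x = 3.0 + 2.0*np.cos(theta)
y = 1.0 + 1.0*np.sin(theta)
coeffs = fit_conic(x, y)
# Algebraic residual of each perimeter point against the fitted conic
residuals = np.abs(
    np.column_stack([x*x, x*y, y*y, x, y, np.ones_like(x)]) @ coeffs)
```

A spurious region would show large residuals against the fitted conic, or a discriminant that fails the ellipse test.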
[0105]Once candidate regions have been identified, the final step is to
locate the center of the target. One commonly used estimate for this
center is the intensity-weighted centroid, which is defined by

$$\begin{pmatrix} x_c \\ y_c \end{pmatrix} =
\frac{\displaystyle\sum_{i=1}^n \sum_{j=1}^m g_{ij}
\begin{pmatrix} x_i \\ y_j \end{pmatrix}}
{\displaystyle\sum_{i=1}^n \sum_{j=1}^m g_{ij}}$$

where $x_i$, $y_j$ are the pixel coordinates and $g_{ij}$ are the
intensity levels within an $(n \times m)$ window covering the target region.
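The intensity-weighted centroid is only a few lines of NumPy; the 3x3 window and intensity values below are hypothetical.

```python
import numpy as np

def intensity_weighted_centroid(g, x, y):
    """Sub-pixel target center from intensity levels over an (n x m) window.

    g: (n, m) array of intensity levels
    x: (n,) pixel x-coordinates; y: (m,) pixel y-coordinates
    """
    total = g.sum()
    xc = (g * x[:, None]).sum() / total
    yc = (g * y[None, :]).sum() / total
    return xc, yc

# A symmetric 3x3 intensity blob centered on pixel (11, 21)
g = np.array([[1.0, 2.0, 1.0],
              [2.0, 5.0, 2.0],
              [1.0, 2.0, 1.0]])
x = np.array([10.0, 11.0, 12.0])
y = np.array([20.0, 21.0, 22.0])
xc, yc = intensity_weighted_centroid(g, x, y)   # -> (11.0, 21.0)
```

Because the window intensities here are symmetric, the weighted centroid coincides with the central pixel; an asymmetric blob would pull the estimate off-center at sub-pixel resolution.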
[0106]Note that the pose helps set the frame of reference regarding which
direction the pictures were taken relative to the vehicle. For example,
the pose determines if the photo was taken from the driver's or
passenger's side of the vehicle. Once the frame of reference is
determined, then points of known location are identified to help
determine the scale of the photo from known vehicle dimensions. These
control points could include areas of the vehicle that are unlikely to be
damaged in an accident such as the four corners of the windshield, the
center of the wheel axle, etc. Finally, the points of potential damage
can be located on each picture that is taken. For example, a standard set
of points on the hood, grille and bumper may be identified on each of the
different views of the front of a vehicle. Control is then passed to
block 375 to store the estimate of the pose for each of the photos using
the pose and control point data. Such stored pose estimates may be later
used in determining a crush profile for the vehicle associated with the
estimates.
[0107]Still referring to FIG. 10, if the photos were taken without
markers, control is passed to block 315, to determine the appropriate
vehicle example photos to display to allow manual input of the pose for
each photo. Control is passed to decision diamond 320 to determine if the
damage was to the rear of the vehicle. If the damage was to the rear of
the vehicle, control is passed to block 325 to display the rear of an
appropriate vehicle to allow pose to be selected. If the damage was not
to the rear of the vehicle, then control is passed to decision diamond
330 to determine if the damage was to the front of the vehicle. If the
damage was to the front of the vehicle, control is passed to block 335 to
display the front of an appropriate vehicle to allow pose to be selected.
If the damage was not to the front of the vehicle, control is passed to
block 350 to display the side of an appropriate vehicle to allow pose to
be selected.
[0108]Still referring to FIG. 10, control is passed to block 340 for user
manual selection of the pose for each photo. Once pose has been selected,
control is passed to block 345 to display control points for each of the
photos. This may be performed based on the location of damage to the
vehicle as well as the type of vehicle. For example, control points may
be placed on the four corners of the back windshield for a vehicle
involved in a rear collision. Once control points are displayed, control
is passed to block 355 where the control points are selected by the user
and manually placed on each vehicle picture. Control is passed to block
360 to display the crush points for the vehicle, which may also be based
on damage location and vehicle type. Control is then passed to block 365
where the crush points are manually placed on each of the photos. Control
is passed to block 370 to evaluate the photos for pose, control points
and damage points. Control is then passed to block 375, discussed above.
[0109]Once the camera has been calibrated to determine its intrinsic
parameters and the camera pose problem has been solved to determine the
extrinsic parameters, the next step in the reconstruction process is to
estimate the object's 3D structure. Photogrammetry uses a principle
called triangulation to determine an object's threedimensional
coordinates from multiple photographs. Points are triangulated by finding
the intersection of converging lines of sight, or rays as shown in FIG.
11.
[0110]Since the camera positions and orientations are only known
approximately and there are errors in the measured image points, rays
will often not back-project to a common intersection, as shown in FIG. 12.
The techniques in this section describe how to estimate the 3D structure
of the object. This estimate of structure, as well as the camera poses,
are later refined in a process called bundle adjustment.
[0111]In each image, the 3D point, $X$, is projected to a measured image
point, i.e., $x = PX$ and $x' = P'X$. Here, $P$ and $P'$ are $3 \times 4$
projection matrices given by

$$P = K\,[R \mid -RC]$$

where $K$ is the $3 \times 3$ camera calibration matrix, $R$ is the
$3 \times 3$ rotation matrix from object space to camera space, and $C$ is
the position of the camera with respect to the object.
[0112]The definition of vector cross product can be used to form a linear
system. By definition, a cross product of two identical vectors is a
vector of all zeros. Therefore, for each point, the cross product of the
measured image point and the 3D point projected to that image is zero,
i.e.,
$$x \times (PX) = 0$$
This results in the following linear system in $X$ for each image:

[0113]$$x\,(p^{3T} X) - p^{1T} X = 0$$
$$y\,(p^{3T} X) - p^{2T} X = 0$$
$$x\,(p^{2T} X) - y\,(p^{1T} X) = 0$$

where $p^{iT}$ is the $i$-th row of the projection matrix.
[0114]Using both images, a system of the form $AX = 0$ can be composed, where

$$A = \begin{bmatrix}
x\,p^{3T} - p^{1T} \\
y\,p^{3T} - p^{2T} \\
x'\,p'^{3T} - p'^{1T} \\
y'\,p'^{3T} - p'^{2T}
\end{bmatrix}$$

Since only two of the three equations from each image are linearly
independent, only two equations from each image are included. The
solution of this system is calculated via singular value decomposition
(SVD). The 3D point is then given by the singular vector corresponding to
the smallest singular value of $A$. Specifically, if $U D V^T$ is the
singular value decomposition of $A$, then the solution $X$ is the last
column of $V$.
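The two-view DLT triangulation can be sketched as follows. The calibration matrix, camera placement, and test point are assumed values for demonstration.

```python
import numpy as np

def triangulate_dlt(x1, x2, P1, P2):
    """Triangulate a 3D point from two measured image points via SVD.

    x1, x2: (2,) measured image points; P1, P2: (3, 4) projection matrices.
    Uses the two linearly independent cross-product rows from each image.
    """
    A = np.array([x1[0]*P1[2] - P1[0],
                  x1[1]*P1[2] - P1[1],
                  x2[0]*P2[2] - P2[0],
                  x2[1]*P2[2] - P2[1]])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]                 # singular vector of the smallest singular value
    return X[:3] / X[3]        # dehomogenize

# Two hypothetical cameras built as P = K [R | -R C]
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
R1, C1 = np.eye(3), np.array([0.0, 0.0, 0.0])
R2, C2 = np.eye(3), np.array([1.0, 0.0, 0.0])   # second camera shifted 1 m
P1 = K @ np.hstack([R1, (-R1 @ C1)[:, None]])
P2 = K @ np.hstack([R2, (-R2 @ C2)[:, None]])

# Project a known point into both images, then triangulate it back
X_true = np.array([0.3, -0.2, 5.0])
x1 = P1 @ np.append(X_true, 1.0); x1 = x1[:2] / x1[2]
x2 = P2 @ np.append(X_true, 1.0); x2 = x2[:2] / x2[2]
X = triangulate_dlt(x1, x2, P1, P2)
```

With noiseless image points the null space of A is exact and the original point is recovered; with measurement noise the smallest-singular-value solution is the algebraic least-squares estimate that bundle adjustment later refines.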
[0115]An alternative to the DLT method is to calculate the optimal depth.
In this method, a 3D point is first calculated by back-projecting a ray
through one of the measured image points for some distance, $d$, i.e.,

$$\hat{X} = C + d\,R^T
\begin{pmatrix} (x - u_0)/f_x \\ (y - v_0)/f_y \\ 1 \end{pmatrix}$$

where the measured point is $(x, y)$, the principal point is $(u_0, v_0)$,
and the focal lengths in the $x$ and $y$ directions are $(f_x, f_y)$.

[0116]This 3D point is then projected into the other image, i.e.,

$$\hat{x}' = P' \hat{X}$$

The optimal depth is then the depth which minimizes the reprojection error
in the other image.
[0117]The final step of the visual reconstruction is known as bundle
adjustment. In this step, the visual reconstruction is refined to produce
a jointly optimal structure (3D feature points of an object) and motion
(camera pose). The name refers to the "bundles" of light which are
reflected by the object's features into the camera lens, and are
projected by the camera onto the 2D image surface. The bundles are
optimally "adjusted" by varying both the 3D feature coordinates and the
camera pose parameters, such that the total reprojection error between
observed and predicted image points is minimized as shown in FIG. 13.
Bundle adjustment is robust in that it is tolerant to missing data and it
provides a true maximum likelihood estimate. Given initial estimates of
the camera poses and 3D structure of the object, bundle adjustment is
carried out using a sparse implementation of the LevenbergMarquardt
algorithm. Numerous problems in photogrammetry and computer vision are
posed as nonlinear least squares problems. Nonlinear least squares
problems have an objective function of the form
$$F(p) = \frac{1}{2}\sum_{i}^{M} [f_i(p)]^2
= \frac{1}{2}\|f(p)\|^2 = \frac{1}{2}\,f(p)^T f(p)$$

[0118]The vector $f(p)$ is called the residual and the vector
$p = [p_1, \ldots, p_N]$ is the set of control variables. Each element of the
residual vector is the difference between an observation, x.sub.i, and a
prediction, {circumflex over (x)}.sub.i. For example, in the case of
bundle adjustment, the control variables are the 3D feature coordinates
and camera pose parameters and the residual is the difference between
observed image points and predicted image points. Nonlinear least
squares problems can be solved if they are overdetermined, i.e. if the
number of observations, M, is greater than the number of control
variables, N.
[0119]Like many other optimization problems, the necessary conditions for
optimality are based on the first and secondorder partial derivatives of
the objective function with respect to the control variables. At a local
minimizer, the gradient of the objective function should tend toward zero
and the Hessian should be positive semidefinite.
[0120]The gradient of the objective function is a vector whose elements
are the firstorder partial derivatives with respect to the control
variables, i.e.,
$$g = F'(p) = \begin{bmatrix}
\dfrac{\partial F}{\partial p_1} & \cdots & \dfrac{\partial F}{\partial p_N}
\end{bmatrix}^T$$
Similarly, the Hessian of the objective function is a matrix whose
elements are the secondorder partial derivatives with respect to the
control variables, i.e.,
[0121]$$H = F''(p) = \begin{bmatrix}
\dfrac{\partial^2 F}{\partial p_1^2} & \dfrac{\partial^2 F}{\partial p_1 \partial p_2} & \cdots & \dfrac{\partial^2 F}{\partial p_1 \partial p_N} \\
\dfrac{\partial^2 F}{\partial p_2 \partial p_1} & \dfrac{\partial^2 F}{\partial p_2^2} & \cdots & \dfrac{\partial^2 F}{\partial p_2 \partial p_N} \\
\vdots & \vdots & \ddots & \vdots \\
\dfrac{\partial^2 F}{\partial p_N \partial p_1} & \dfrac{\partial^2 F}{\partial p_N \partial p_2} & \cdots & \dfrac{\partial^2 F}{\partial p_N^2}
\end{bmatrix}$$
The Jacobian matrix is a matrix of all firstorder partial derivatives of
a vectorvalued function.
The Jacobian of the residual is defined as
[0122]$$J_{i,j} = \frac{\partial f_i(p)}{\partial p_j}$$
Based on the form of the objective function, the gradient can be expressed
in terms of the Jacobian of the residual, i.e.,
[0123]$$g = \frac{\partial}{\partial p_j}
\Big\{\frac{1}{2}\sum_{i}^{M} [f_i(p)]^2\Big\}
= \sum_{i}^{M} \Big[f_i(p)\,\frac{\partial f_i(p)}{\partial p_j}\Big]
= J^T f(p)$$
Similarly, the Hessian can be expressed in terms of the Jacobian of the
residual, i.e.,
[0124]$$H = F''_{j,k}
= \frac{\partial}{\partial p_k}\sum_{i}^{M}
\Big[f_i(p)\,\frac{\partial f_i(p)}{\partial p_j}\Big]
= \sum_{i}^{M}\Big[f_i(p)\,\frac{\partial^2 f_i(p)}{\partial p_j\,\partial p_k}
+ \frac{\partial f_i(p)}{\partial p_j}\,\frac{\partial f_i(p)}{\partial p_k}\Big]$$

$$H = J^T J + \sum_{i}^{M} f_i(p)\,f''_i(p)$$

When the residual is small, the higher-order terms are negligible.
Neglecting the higher-order terms results in the Gauss-Newton
approximation of the Hessian, i.e., $H \approx J^T J$.
[0125]After completing the bundle adjustment, the reconstructed 3D points
of the subject vehicle are compared with the geometry of an undamaged
vehicle. For vehicles with front and rear damage, the difference between
these points in the fore and aft direction yields the residual crush
shown in FIG. 14. Likewise, for vehicles with side damage, the difference
in the lateral direction yields the residual crush. Referring now to FIG.
15, shown is a flow diagram of the determination of a vehicle damage
profile from a comparison of the damage measurements with an undamaged
vehicle's measurements in accordance with one embodiment of the present
invention. After the pose is estimated as described in method 300,
control is passed to method 400 to refine pose and determine the crush
points via triangulation. Method 400 may be performed in a computer
system, e.g., a central system in which image data regarding a vehicle is
received and processed. The initial pose estimate is refined using a
bundle adjustment, as described above (block 405). Control is passed to
block 410 to determine the location of the damage points using
triangulation of the points on the different photos. Control is passed to
block 415 to compare the damage point locations to the same points on a
nondamaged vehicle. This nondamaged vehicle may be a reference or
benchmark vehicle that is the same as the vehicle involved in the
accident, or may be a substantially similar vehicle, such as having a
common body type. Control is passed to block 420 to create a crush damage
profile based on the comparison of the damaged versus the nondamaged
vehicle. Control is passed to block 425 to display a crush profile on a
representative vehicle, e.g., on a display of the computer system.
Furthermore, the crush profile may be stored in the system for later use.
For example, using the crush profile information various measures of
accident severity, including potential for injury, likely damage to the
vehicle and so forth may be determined. Furthermore, by obtaining the
crush profile information using photographs of the vehicle via a
photogrammetric analysis, embodiments may be efficiently used and may
streamline an evaluation process, as photographs may be more easily
obtained than further detailed information regarding damage to a vehicle.
This information also may be used, e.g., to audit claim estimates to
determine whether the claim repair estimate is appropriate in light of
the crush profile and other information obtained via a photogrammetric
analysis.
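The comparison of blocks 415 and 420 reduces to differencing the reconstructed damage-point coordinates against the benchmark geometry along the relevant axis. The point sets, units, and axis convention below are hypothetical values chosen for illustration.

```python
import numpy as np

def residual_crush(damaged, benchmark, direction="fore-aft"):
    """Residual crush at each measured point.

    Difference between the benchmark (undamaged) geometry and the
    reconstructed damaged-vehicle points, taken along the fore-aft (x)
    axis for front/rear damage or the lateral (y) axis for side damage.
    damaged, benchmark: (n, 3) point arrays in the same vehicle frame.
    """
    axis = 0 if direction == "fore-aft" else 1
    return benchmark[:, axis] - damaged[:, axis]

# Hypothetical front-bumper points (x = fore-aft, y = lateral, z = vertical), meters
benchmark = np.array([[3.80, -0.60, 0.45],
                      [3.85,  0.00, 0.45],
                      [3.80,  0.60, 0.45]])
damaged = np.array([[3.55, -0.60, 0.45],
                    [3.45,  0.00, 0.46],
                    [3.60,  0.60, 0.44]])
crush = residual_crush(damaged, benchmark)   # crush depths, roughly [0.25, 0.40, 0.20]
```

The resulting per-point depths form the crush damage profile that is displayed on the representative vehicle and later fed into the impact-severity estimate.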
[0126]Once the crush damage profile has been determined, several data
points of interest can be developed. One piece of information is to
estimate the impact severity of the collision by calculating the change
in velocity of the vehicle from the energy required to deform the
vehicle. In one embodiment, the impact severity may be estimated in
accordance with the methods described in U.S. Pat. No. 6,885,981 (herein
the '981 patent), which is commonly assigned with the present
application, the contents of which are hereby incorporated by reference.
Another piece of information is the Principal Direction of Force or PDOF
of the collision.
[0127]FIG. 16 shows a flow diagram of the determination of PDOF from a
crush profile generated by photogrammetry analysis in accordance with one
embodiment of the current invention. As shown in FIG. 16, method 500,
which may be performed in the same system used to receive photographic
information and process the same to obtain crush data, begins at block
505, where individual components of the vehicle (e.g., hood, fenders,
etc.) are evaluated for the direction in which they were moved in the
accident by comparing the damaged location of the component versus the
nondamaged location of the component, as present on a reference vehicle.
As an example, the driver's side headlight and hood may have been shifted
toward the passenger's side as a result of the impact. This result would
be stored as a shift to the left. Control is passed to decision diamond
510 to determine if there were any components shifted. If components were
shifted, control is passed to block 515 and the direction of each
component's shift is stored. Control is passed to decision diamond 520 to
evaluate if all components that are damaged are shifted in the same
direction. If all components are shifted in the same direction as
evaluated by decision diamond 520, control is subsequently passed to
block 527 to store the resultant direction of shift. Control is
subsequently passed to block 535 to evaluate input about the accident and
vehicle motion before the accident, such as to determine which vehicle
had a greater velocity at impact. Such input may correspond to
usersupplied input regarding information associated with the accident,
such as obtained by police reports, accident reports, event data recorder
(EDR) data or other such means.
[0128]If decision diamond 520 results in components shifted in different
directions, control is passed to evaluate the component shift (decision
diamond 522). If the components are equally dispersed (e.g., a component
point on the right side is shifted 2 inches to the right and the
corresponding component point is shifted 2.5 inches to the left), then
the PDOF would be estimated without additional input and control is
passed to block 527 to store the resultant direction of shift and then to
block 535 as discussed above. If the components are not equally dispersed
(e.g., a component point on the right side is shifted 2 inches to the
right and the corresponding component point is shifted 5 inches to the
left), control may be passed to block 525 to gather more information
about the component shift via requesting input from a user. Such
information may include, for example, the nature of the impact (e.g., the
impact was to a pole) or the nature of the component shift (e.g., are the
components on the front of the vehicle shifted more to (a) the driver's
side, (b) the passenger's side, or (c) neither side). Control is passed
to block 540 to finalize estimate of the resultant component shift by
collecting the results of the component analysis and determining the
overall component shift pattern (e.g., driver's side shift, passenger
side shift, or neutral shift) and control is subsequently passed to block
527 to store the resultant direction of shift. Control is subsequently
passed to block 535, discussed above. Control then passes to block 530.
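The aggregation performed in blocks 505 through 527 and decision diamond 522 can be sketched as a simple classifier over per-component lateral shifts. The sign convention (positive toward the passenger side, in inches) and the "equally dispersed" threshold are assumptions; the text does not specify a numeric rule.

```python
def overall_shift_pattern(shifts):
    """Classify the resultant component shift from per-component lateral
    shifts. Positive = toward the passenger side, negative = toward the
    driver side, in inches. The 50% balance threshold is hypothetical.
    """
    moved = [s for s in shifts if s != 0.0]
    if not moved:
        return "none"
    if all(s > 0 for s in moved):
        return "passenger-side shift"
    if all(s < 0 for s in moved):
        return "driver-side shift"
    # Mixed directions: treat roughly balanced magnitudes as neutral,
    # otherwise request more input from the user (block 525)
    right = sum(s for s in moved if s > 0)
    left = -sum(s for s in moved if s < 0)
    if abs(right - left) <= 0.5 * max(right, left):
        return "neutral shift"
    return "needs user input"

# Examples from the text: 2" right vs 2.5" left is roughly balanced,
# while 2" right vs 5" left is not
overall_shift_pattern([2.0, -2.5])   # -> "neutral shift"
overall_shift_pattern([2.0, -5.0])   # -> "needs user input"
```

The returned pattern corresponds to the resultant direction of shift stored at block 527, or to the request for additional user input at block 525.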
[0129]If no components were shifted, control is also passed to block 530.
At block 530, the overlap of the damage patterns on the two vehicles may
be optimized by comparing and aligning the damage profiles on each of the
vehicles. Control is passed to block 545 to develop a preliminary
estimate of PDOF for the first vehicle. In one embodiment, such a
determination may be performed by examining the damage pattern, the
overall component shift pattern and the input about vehicle motion before
impact and determining the direction of the impact that is consistent
with these data (e.g., a vehicle with significant component shift toward
the passenger's side and crush to the headlight and fender on the
driver's side, and that was moving slower than the other vehicle, might
be consistent with a 10 to 11 o'clock PDOF).
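One plausible way to encode the preliminary PDOF heuristic of block 545 is a coarse lookup from damage cues to a clock-position range. Only the driver's-side-crush/passenger's-side-shift entry reflects the example above; the remaining entries, and the table itself, are illustrative assumptions:

```python
# Hypothetical mapping from (crush side, component shift direction) to a
# coarse clock-position PDOF range for frontal damage. Only the
# ("driver", "passenger") -> (10, 11) entry reflects the example in the
# text; the others are assumed for illustration.
PDOF_TABLE = {
    ("driver", "passenger"): (10, 11),
    ("passenger", "driver"): (1, 2),
    ("driver", "neutral"): (11, 12),
    ("passenger", "neutral"): (12, 1),
}

def preliminary_pdof(crush_side, shift_direction):
    """Return a (low, high) clock-position range, or None if unmapped."""
    return PDOF_TABLE.get((crush_side, shift_direction))
```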
[0130]Control then passes to block 550 to develop a PDOF for the second
vehicle which may be performed as previously described for the first
vehicle. Control is passed to decision diamond 555 to evaluate
consistency of the PDOF between the two vehicles. This consistency may be
measured, e.g., based on damage information regarding the vehicles,
accident characteristics and so forth. If PDOF is consistent between the
vehicles, control is passed to block 565 to assign a final PDOF estimate
for each vehicle, as well as generate a change in velocity (DV) for each
vehicle by using the initial estimates for crush damage and PDOF to
estimate the impact severity, e.g., in accordance with the methods
described in the '981 patent. Otherwise, if the PDOF is not consistent as
evaluated in decision diamond 555, the PDOF estimates are revised (as
shown in block 560) by adjusting the PDOF estimate and/or the vehicle
damage overlap optimization within reasonable parameters in an iterative
process. Control is passed back to decision diamond 555 for reevaluation.
When consistent PDOFs are determined, control passes to block 565, where
PDOFs and change in velocity values may be finalized for the vehicles by
using the initial estimates for crush damage and PDOF to estimate the
impact severity, such as in accordance with the methods described in the
'981 patent. These finalized estimates of both PDOF and change in
velocity may be stored in the system for later use. Furthermore, this
information may be reported to a user, e.g., via display or transmission
to a remote location. Based on this estimated information, a user may
consider whether claim information, such as property damage, personal
injury and so forth, is consistent with the PDOF and change in velocity.
This estimated information can also be used in determining liability for
the accident, e.g., by establishing the point and angle of impact,
pre-impact vehicle speeds, and so forth.
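The consistency loop spanning diamond 555 and blocks 560 and 565 amounts to iterating revision until the two PDOF estimates agree. A minimal sketch, with caller-supplied `consistent` and `revise` functions standing in for the damage-based checks described above (the names and the iteration cap are assumptions):

```python
def finalize_pdof(pdof1, pdof2, consistent, revise, max_iter=20):
    """Iteratively revise two PDOF estimates until mutually consistent.

    consistent(a, b) -> bool models diamond 555; revise(a, b) -> (a, b)
    models block 560. Both are placeholders for the damage-overlap and
    accident-characteristic evaluations described in the text.
    """
    for _ in range(max_iter):
        if consistent(pdof1, pdof2):
            return pdof1, pdof2  # block 565: finalize the estimates
        pdof1, pdof2 = revise(pdof1, pdof2)
    raise RuntimeError("no consistent PDOF pair within the iteration limit")
```

For example, with a toy consistency rule of agreement within one clock position, `finalize_pdof(8, 11, lambda a, b: abs(a - b) <= 1, lambda a, b: (a + 1, b))` converges to `(10, 11)`.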
[0131]Referring now to FIG. 17, shown is a flow diagram of a method of
estimating damaged components from a crush profile in accordance with one
embodiment of the present invention. Method 600 may begin at block 605,
where a comparison occurs between collisions of similar severity (i.e.,
change in velocity or DV) and direction (i.e., principal direction of
force or PDOF) for the same or similar vehicles in a database and the
vehicle under analysis. In some implementations, the database may be
arranged in groups of files that group such similar vehicles, severity
and direction. As one example, the database may be of a central system,
such as an accident analysis firm that receives data regarding accidents
from multiple client sources, such as insurance companies and others, and
compiles the database continually as additional accident data is
received, such that the database may be of a dynamic nature, which may
aid in obtaining accurate estimate information. Accordingly, method 600
may be used in some embodiments to act as an audit source for a claim
associated with a vehicle accident to determine whether a claim level
asserted is appropriate, e.g., if it falls within a range for similar
accidents involving similar vehicles. Control is passed to block 610 to
select and return all similar impacts, along with the components and the
operations (i.e., repair or replace) performed on those components in
the similar collisions involving the same or a similar vehicle(s).
Control is passed to block
615 to create a list of expected components and component operations for
the current accident. This list may be created by identifying
repair/replace components for the similarly damaged vehicles of the
database. In one embodiment, the list may be created by combining or
analyzing the repair/replace components of these other vehicles in
accordance with a statistical analysis, a mathematical analysis or
another such means. While not shown in FIG. 17, this list may be used to
determine a list of repair/replace components for the vehicle, and in
some embodiments, an expected cost for such repair/replacement.
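The statistical combination at block 615 might look like a frequency tally over the similar claims returned at block 610. A minimal sketch; the data shape and the 50% inclusion threshold are assumptions:

```python
from collections import Counter

def expected_operations(similar_claims, min_frequency=0.5):
    """Build the expected (component, operation) list for the accident.

    similar_claims: one set of (component, operation) tuples per similar
    collision, e.g. {("hood", "replace"), ("bumper", "repair")}. Entries
    appearing in at least min_frequency of the claims are kept (a simple
    statistical rule; the threshold value is an assumption).
    """
    counts = Counter()
    for claim in similar_claims:
        counts.update(claim)
    n = len(similar_claims)
    return {op for op, c in counts.items() if c / n >= min_frequency}
```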
[0132]Control is passed to decision diamond 620 to determine whether an
independent estimate has been created. If an independent assessment of
the components that need to be repaired or replaced has been developed,
e.g., via a claims adjuster, control is passed to block 625 so this new
assessment can be compared with the components that are predicted to need
repair or replacement. Control is passed to decision diamond 630 to
identify components on the estimate that are not on the expected list. If
there are components that are not on the expected list, control is passed
to block 635 to flag an exception with respect to the comparison. For
example, any outliers may be indicated for further review, and an
adjuster or other remote entity may be notified electronically, e.g.,
via an email or a message to the insurance company computer system.
Control is then passed to decision diamond 645. If no components on the
independent estimate are different than the expected list as determined
at diamond 630, control is also passed to decision diamond 645 to
determine if there are components on the expected list but not on the
independent estimate. If there are components on the expected list that
are not on the independent estimate, control is passed to block 640 with
any outliers indicated for further review. If no components on the
expected list are missing on the independent estimate, then control is
passed to block 650 and the estimate is determined to have passed the
audit. The audit may also be considered to pass so long as the report and
the estimate are within a threshold amount (e.g., by number of
components, matching score or other measures) of each other. Note also
that the audit results can be stored along with an entry in the database
to include information regarding the vehicle and accident, to further aid
in the analysis of future accidents. While shown with this particular
implementation in the embodiment of FIG. 17, it is to be understood that
the scope of the present invention is not limited in this regard. For
example, in other implementations in addition to or instead of analyzing
crush pattern information, accident information from such similarly
situated accidents may be compared to injury information, e.g., from a
claim for the present accident to determine whether the contended injury
level is substantially consistent with injuries occurring in these
similarly situated accidents.
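The comparisons at diamonds 630 and 645 reduce to set differences between the independent estimate and the expected list. A sketch, assuming components are named consistently across both lists; the zero-outlier pass threshold is one possible choice:

```python
def audit_estimate(expected, estimate, threshold=0):
    """Audit an independent estimate against the expected component list.

    extra: on the estimate but not expected (diamond 630 -> block 635).
    missing: expected but absent from the estimate (diamond 645 -> 640).
    The audit passes (block 650) when the total outlier count is within
    threshold (the default of zero is an assumption).
    """
    extra = estimate - expected
    missing = expected - estimate
    passed = len(extra) + len(missing) <= threshold
    return passed, extra, missing
```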
[0133]Referring now to FIG. 18, shown is a block diagram of a system in
accordance with one embodiment of the present invention. As shown in FIG.
18, system 700 may be a computer system, such as a personal computer,
server computer or other such system. System 700 may include a processor
710, which may be a microprocessor such as a central processing unit.
Processor 710 is coupled to a memory controller hub (MCH) 720 that in
turn is coupled to a memory 730 and a display 740, which may be a flat
panel display, for example. During operation, memory 730 may store
software in accordance with an embodiment of the present invention that
includes instructions to perform the various techniques described herein.
[0134]As further shown in FIG. 18, MCH 720 is coupled to an input/output
controller hub (ICH) 750. In turn, ICH 750 may be coupled to various
peripherals 760 and a network adapter 770. Network adapter 770 may be
used to communicate between system 700 and one or more other computers
via a computer network, such as a local area network (LAN), a wide area
network (WAN), or a wireless network, such as a wireless LAN (WLAN).
Furthermore, network adapter 770 may communicate with remote systems,
such as computers of an insurance company or other third party that
desires to send vehicle and accident information (e.g., including
photographic information) to system 700 for analysis in accordance with
an embodiment of the present invention. Such communication may be via the
Internet or another such computer network. In some implementations, these
communications may be made secure, e.g., via encryption or in another
secure format.
[0135]While the present invention has been described with respect to a
limited number of embodiments, those skilled in the art will appreciate
numerous modifications and variations therefrom. It is intended that the
appended claims cover all such modifications and variations as fall
within the true spirit and scope of this present invention.
* * * * *