The key problem of the tight groupings
The flaw in Milam's reasoning
The criticism of Guinn's heterogeneity is wrong because it is incomplete. It takes one step in logic when it needs to
take two or three. After the critics of the heterogeneity saw that it apparently
blends the groups together and destroys Guinn’s conclusions, they stopped
there. It was as if they had done what they wanted to (destroy the dangerous
NAA) or had gotten the answer that they wanted. In essence, they assumed (but
never stated) that the two groups were just artifacts of chance from some
undetermined number of bullets. The statistical analysis below shows this to be impossible, however. The critics did what critics of
the official explanation have done for the past 37 years—they just criticized and
stopped there. Had they taken that one extra step and looked for a real
explanation, they would have seen that their view could not hold up. They would
have realized that there was much more to the story, which they (and everybody
else) had missed. What a pity that they defaulted and left the explanation for
others to find.
Here is the logical conundrum as it stood at the end of Milam's criticism in 1994:
Something has to give. These three points cannot
all be true. I can see only two possible ways out of this dilemma: (1)
heterogeneities of quarters of bullets do not apply to the five evidence
fragments; or (2) the groupings did, improbable as it might seem, actually arise
by chance.
The rest of this section considers the second of these
possibilities. The next section deals with heterogeneities on various scales. We
begin with the groupings because the statistical arguments used in dealing
with them are understandable to more readers than are the arcane details of how
jacketed bullets shatter.
Deriving the probabilities of the groupings in two different ways
Anyone who works
with probabilistic explanations faces the dilemma of presenting a simplified
approach that everyone can understand but may not give the best possible answer
versus presenting a rigorous approach that almost no one can understand even
though it gives the best available answer. This section resolves this dilemma by
presenting both approaches: a simplified version that I developed (in response
to my habit of starting simple and only becoming complex as needed) and a much
more rigorous approach that Larry Sturdivan has developed. We are gratified to
report that the two approaches give essentially the same answers. We begin with
my simplified version; Larry's approach is presented in his own words wherever
possible.
A SIMPLIFIED APPROACH TO PROBABILITIES OF THE GROUPINGS (K. RAHN)
A simple formula for calculating probabilities of groupings
The probability of the two groups of
fragments arising by chance contains three components, one qualitative (the
membership of each group) and two quantitative (having to do with the tightness
of each group). Since these three properties are independent, the overall
probability of the observed groups arising by chance is the product of the
probabilities of its three properties arising separately by chance.
P_{overall} = P_{grouping} x P_{tightness1} x P_{tightness2}
Calculating the membership probability P_{grouping}
We can define P_{grouping} as the probability that
the two groups contained solely by chance the only combination of fragments that
made physical sense: one group with the two fragments from the body shot and the
other group with the three fragments from the head shot. To get the value of P_{grouping}, we have to
find how many possible ways the five fragments can be grouped. This is the sum
of the number of ways that the fragments can make one group, the number of ways
that they can make two groups, three groups, four groups, and five groups. P_{grouping}
will then be the ratio of the number of ways to make the observed grouping (1) to
the total number of ways to group the fragments:
P_{grouping} = 1/[total number of groups]
Other than its value, the most
important property of P_{grouping} is that its value is the same for all
the scenarios considered below. No matter whether one considers all the
fragments to be genuine or some of them to be false matches (from other bullets
or from planting), the fact remains that five fragments were found, in only five
places. It is these places that go into the calculation of P_{grouping},
not the validity of the fragments found in them.
The following table shows all possible groups of five
fragments taken in one to five groups. Assuming that each fragment has a
distinct identity, there are 52 such groups.
Table 18. The 52 ways that five fragments can be grouped (slashes separate the groups).

No. groups   Partition     N    Groupings
1            5             1    12345
2            1,4           5    1/2345, 2/1345, 3/1245, 4/1235, 5/1234
2            2,3          10    12/345, 13/245, 14/235, 15/234, 23/145, 24/135, 25/134, 34/125, 35/124, 45/123
3            1,1,3        10    1/2/345, 1/3/245, 1/4/235, 1/5/234, 2/3/145, 2/4/135, 2/5/134, 3/4/125, 3/5/124, 4/5/123
3            1,2,2        15    1/23/45, 1/24/35, 1/25/34, 2/13/45, 2/14/35, 2/15/34, 3/12/45, 3/14/25, 3/15/24, 4/12/35, 4/13/25, 4/15/23, 5/12/34, 5/13/24, 5/14/23
4            2,1,1,1      10    12/3/4/5, 13/2/4/5, 14/2/3/5, 15/2/3/4, 23/1/4/5, 24/1/3/5, 25/1/3/4, 34/1/2/5, 35/1/2/4, 45/1/2/3
5            1,1,1,1,1     1    1/2/3/4/5

Total: 52
Since the five fragments can be grouped in 52 different ways, the probability of getting the only group that makes physical sense, the right one of the 2,3 cases, is 1/52, or 0.0192 = 1.92%.
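The count of 52 (the fifth Bell number) can be checked with a short enumeration. The following sketch (the helper `partitions` is mine, not part of the original analysis) generates every way to split five distinct fragments into groups:

```python
from collections import Counter
from itertools import combinations

def partitions(items):
    """Generate all ways to split a list of distinct items into groups."""
    if not items:
        yield []
        return
    first, rest = items[0], items[1:]
    # Put `first` into a group with every possible subset of the remaining
    # items, then partition whatever is left over.
    for k in range(len(rest) + 1):
        for others in combinations(rest, k):
            remaining = [x for x in rest if x not in others]
            for tail in partitions(remaining):
                yield [[first, *others]] + tail

groupings = list(partitions([1, 2, 3, 4, 5]))
print(len(groupings))                      # 52, so P_grouping = 1/52
print(Counter(len(g) for g in groupings))  # groupings of each size, as in Table 18
```

The size counts (1, 15, 25, 10, 1 for one through five groups) reproduce the row totals of Table 18.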
Calculating the tightness probabilities P_{tightness1} and P_{tightness2}
The second and third numbers to be
calculated are the probability that the observed degree of tightness of each
group originated by chance. Here the heterogeneity of antimony plays no real
role because the scatter that it imposes on each number will artificially bring
two fragments closer together as often as it will move them apart. Thus, its
effect will ultimately cancel out, and so may be neglected. To approach this
problem in the simplest way, imagine that the concentration of antimony in
fragments of WCC/MC bullets is distributed evenly over its range of about
0–1200 ppm (which it very nearly is; see Fig. 10). This means that all values between 0 and 1200 ppm will
appear with about the same frequency. (Of course, real-world values would probably tend to cluster
somewhat toward the middle part of the range, but the actual values measured by
Guinn do not show much of that. Anyhow, it would not change the basic sense of
the answer, which appears very clearly.)
Consider first a group that contains one
big fragment and one little fragment, and consider the big fragment to be the
reference point. If the concentration of Sb in the big and little fragments are
C_{big} and C_{little}, respectively, the probability that the
little particle's concentration fell by chance as close to the big one's
concentration as it did is just twice the difference between the two
concentrations divided by the total range of possible concentrations (1200 ppm),
because the little particle could fall above or below the big particle:
P_{tightness1} = |2(C_{little} – C_{big})/1200|
This situation is shown in Figure 19. (Since probabilities must always be positive, the sign of the difference term is reversed if it is negative.)
Figure 19. The scheme for calculating the probability that a little particle would fall randomly within its actual distance from a larger particle.
If the group contains two little particles, the probability is the product of the two individual probabilities:
P_{tightness2} = |[2(C_{little1} – C_{big})/1200] x [2(C_{little2} – C_{big})/1200]|
The actual case found by both Guinn and the FBI is one group with one little particle and the other group with two. The probability of both groups independently arising by chance is the product of the two probabilities given above. If we call the two groups 1 and 2 and the little particles in group 2 also 1 and 2, the product becomes:
P_{tightness1} x P_{tightness2} = |[2(C_{little} – C_{big1})/1200] x [2(C_{little1} – C_{big2})/1200] x [2(C_{little2} – C_{big2})/1200]|
This overall probability is extremely small. To get a sense of just how small, we can use the values from the FBI’s run 3 as an example:
P_{tightness1} x P_{tightness2} = |[2(C_{little} – C_{big1})/1200] x [2(C_{little1} – C_{big2})/1200] x [2(C_{little2} – C_{big2})/1200]|
= |[2(773 – 813)/1200] x [2(614 – 626)/1200] x [2(629 – 626)/1200]|
= 6.67 x 10^{-6}
Thus there is only about one chance in 10^{5},
or 100,000, that the tightness of the two groups of fragments as measured in the
FBI's run 3 arose solely by chance.
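These numbers can be reproduced directly. A minimal sketch, assuming the uniform 0–1200 ppm spread used above (the function name `tightness` is mine):

```python
SB_RANGE = 1200.0  # assumed uniform spread of Sb concentrations, in ppm

def tightness(c_frag, c_ref):
    """Chance a random value lands as close to c_ref as c_frag actually did."""
    return abs(2 * (c_frag - c_ref) / SB_RANGE)

# Sb concentrations (ppm) from the FBI's run 3, as quoted in the text
p_tight = tightness(773, 813) * tightness(614, 626) * tightness(629, 626)
p_overall = p_tight / 52  # fold in the 1/52 membership probability
print(p_tight, p_overall)  # ≈ 6.67e-6 and ≈ 1.28e-7
```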
When this very low probability is
multiplied by the probability of getting the right fragments in the two groups
by chance, the overall probability decreases by another two orders of magnitude:
P_{overall} = P_{grouping} x [P_{tightness1} x P_{tightness2}]
= (1.92 x 10^{-2}) x (6.67 x 10^{-6}) = 1.28 x 10^{-7}
Thus the overall probability of getting these two
groups at their observed tightness solely by chance is 1.28 x 10^{-7}, or about
one in 10 million. This is equivalent to there being no practical possibility that
these groupings and their tightness arose by chance. Even if we changed somewhat
the way we calculated the probabilities, the sense of the result would not
change: these two tight groups are not the work of chance, i.e., are not from
some ill-defined number of bullets of highly heterogeneous composition that just
happened to fall into physically meaningful tight groups. Some much more
organized and meaningful process must have been at work.
This first calculation is on the low side, however. The
corresponding values for the FBI's runs 1, 2, and 4 and Guinn's data are,
respectively, 5.2 x 10^{-6}, 1.15 x 10^{-4}, 1.95 x 10^{-6}, and
2.4 x 10^{-6}, equivalent to 5, 115, 2, and 2 chances in a million. The
best value for the five results as a whole would be their geometric
mean, because each value was determined multiplicatively and because the
geometric mean handles the large and small extreme values equivalently. The
geometric mean for the five sets of data is 3.2 x 10^{-6}, or about 3
chances in a million, of arising by chance. This number is still vanishingly
small. The groups are not a result of chance.
A more general suite of probabilities
The above calculations are illustrative only, and for the
specific case where all five fragments fell into place "randomly,"
whatever that means. Other less-restrictive scenarios are possible, such as that
only one or two of the fragments were random, and the others were genuine. Let
us now examine the full suite of possibilities in an orderly sequence.
Six basic scenarios could explain the origin of the five
fragments. In each scenario, the two large fragments Q1 and Q2 (stretcher and
tip from front seat) are assumed to be genuine. That they represent different
bullets is a given because Q1 is virtually a complete one, forcing Q2 to be a
separate one. The three tiny fragments can be all genuine, all false positives,
or some of each. Here are the scenarios, in order of decreasing probability:
Scenario 5-0: All five fragments are genuine and just what they seem.
Scenario 4-1: Four fragments are genuine; one of the
three tiny ones (not known which) is a false positive from a third bullet or a
plant (equivalent probabilistically).
Scenario 4-1a: Four fragments are genuine; one
specific one of the tiny ones is a false positive from a third bullet or a plant
(equivalent probabilistically). The difference between scenarios 4-1 and 4-1a
is that in 4-1 it is not known which of the tiny ones is the false positive,
whereas in 4-1a it is known.
Scenario 3-2: Three fragments are genuine; two of the
three tiny ones are false positives from another bullet or plants (again
equivalent probabilistically).
Scenario 3-2a: Three fragments are genuine; two
specific ones of the three tiny ones are false positives. (Scenarios 3-2 and
3-2a are analogous to 4-1 and 4-1a.)
Scenario 2-3: Two fragments are genuine; all three
tiny ones are false positives.
The probability for each of these scenarios contains the three terms described above, although each is somewhat different in form. Recall that the membership probability P_{grouping} is the same 1/52 for all scenarios. To calculate the tightness probabilities, the fragments considered genuine are assigned probabilities of unity, and only the false positives are calculated out. Here are the probabilities for each scenario, using Guinn's NAA data as the best set. Recall that these data (for concentrations of Sb) are Q1: 833 ppm, Q2: 602 ppm, Q4,5: 621 ppm, Q9: 797 ppm, Q14: 642 ppm. These values lead to the three probabilities of tightness for the tiny fragments as:
P_{tightness4,5} = |2(C_{4,5} – C_{2})/1200| = |2(621 – 602)/1200| = 38/1200 = 0.032 (3.2%)
P_{tightness9} = |2(C_{9} – C_{1})/1200| = |2(797 – 833)/1200| = 72/1200 = 0.060 (6.0%)
P_{tightness14} = |2(C_{14} – C_{2})/1200| = |2(642 – 602)/1200| = 80/1200 = 0.067 (6.7%)
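A quick check of these three values, using the same tightness convention as above (the 0–1200 ppm range is the simplifying assumption from the text):

```python
SB_RANGE = 1200.0  # assumed uniform 0-1200 ppm spread of Sb

def tightness(c_frag, c_ref):
    """Chance a random value lands as close to c_ref as c_frag actually did."""
    return abs(2 * (c_frag - c_ref) / SB_RANGE)

# Guinn's Sb values (ppm): Q1 = 833, Q2 = 602, Q4,5 = 621, Q9 = 797, Q14 = 642
p45 = tightness(621, 602)  # Q4,5 against Q2: 38/1200
p9  = tightness(797, 833)  # Q9 against Q1:   72/1200
p14 = tightness(642, 602)  # Q14 against Q2:  80/1200
print(round(p45, 3), round(p9, 3), round(p14, 3))  # 0.032 0.06 0.067
```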
The full calculations for each of the scenarios are:
Scenario 5-0
P_{grouping5-0} = 1/52.
P_{tightness5-0} = 1.
P_{overall5-0} = 1/52 = 0.0192 = 1.9%. This is well below the 5% level that is often used to state
scientifically that something is "true."
Scenario 4-1
P_{grouping4-1} = 1/52.
P_{tightness4-1} = P_{4,5} + P_{9} + P_{14} = 0.032 + 0.060 + 0.067 = 0.159 (15.9%)
P_{overall4-1} = (0.0192)(0.159) = 0.0031 (0.3%)
Scenario 4-1a
P_{grouping4-1a} = 1/52.
P_{tightness4-1a} = P_{4,5} or P_{9} or P_{14} = 0.032 or 0.060 or 0.067 (3.2% or 6.0% or 6.7%)
P_{overall4-1a} = (0.0192)(0.032 or 0.060 or 0.067) = 0.0006 or 0.0012 or 0.0013 (0.06% or 0.12% or 0.13%). Average = 0.1%.
Scenario 3-2
P_{grouping3-2} = 1/52.
P_{tightness3-2} = P_{4,5}P_{9} + P_{4,5}P_{14} + P_{9}P_{14} = 0.00192 + 0.00214 + 0.00402 = 0.0081 (0.8%).
P_{overall3-2} = (0.0192)(0.0081) = 1.56 x 10^{-4} (0.016%)
Scenario 3-2a
P_{grouping3-2a} = 1/52.
P_{tightness3-2a} = P_{4,5}P_{9} or P_{4,5}P_{14} or P_{9}P_{14} = 0.00192 or 0.00214 or 0.00402 (0.19% or 0.21% or 0.40%)
P_{overall3-2a} = (0.0192)(0.00192 or 0.00214 or 0.00402) = 3.69 x 10^{-5} or 4.11 x 10^{-5} or 7.72 x 10^{-5} (0.0037% or 0.0041% or 0.0077%). Average = 0.0052%.
Scenario 2-3
P_{grouping2-3} = 1/52.
P_{tightness2-3} = P_{4,5}P_{9}P_{14} = 1.29 x 10^{-4} (0.013%).
P_{overall2-3} = (0.0192)(1.29 x 10^{-4}) = 2.47 x 10^{-6} (0.0002%)
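The scenario arithmetic can be reproduced in a few lines (a sketch using the rounded tightness values from Guinn's data; the variable names are mine):

```python
p_grouping = 1 / 52                 # membership probability, same in every scenario
p45, p9, p14 = 0.032, 0.060, 0.067  # tightness values from Guinn's data

p_50 = p_grouping                                # all five fragments genuine
p_41 = p_grouping * (p45 + p9 + p14)             # one unspecified false match
p_32 = p_grouping * (p45*p9 + p45*p14 + p9*p14)  # two unspecified false matches
p_23 = p_grouping * (p45 * p9 * p14)             # all three tiny ones false
print(p_50, p_41, p_32, p_23)
```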
These results can be summarized in a short table:

Scenario   Meaning                                            Probability
5-0        All 5 fragments genuine                            1.9%
4-1        4 genuine; 1 of 3 tiny ones a false match          0.3%
4-1a       4 genuine; one specific tiny one a false match     0.1% avg.
3-2        3 genuine; 2 of 3 tiny ones false matches          0.016%
3-2a       3 genuine; two specific tiny ones false matches    0.0052%
2-3        2 genuine; all 3 tiny ones false matches           0.0002%
Now if these six scenarios are the only possible ones, their overall probabilities (P_{overall}) must sum to 1. The situation can be expressed as the Venn diagram for the union of P_{overall4,5}, P_{overall9}, and P_{overall14}, as shown in Figure 19a:
The mathematical expression of this diagram is:
P_{overall4,5} + P_{overall9} + P_{overall14} – P_{overall4,5}P_{overall9} – P_{overall4,5}P_{overall14} – P_{overall9}P_{overall14} + P_{overall4,5}P_{overall9}P_{overall14} + P_{overall5-0} = 1
This is equivalent to:
Scenario 4-1 – Scenario 3-2 + Scenario 2-3 + Scenario 5-0 = 1,
or
P_{overall4-1} – P_{overall3-2} + P_{overall2-3} + P_{overall5-0} = 1
Solving for P_{overall5-0} gives:
P_{overall5-0} = 1 – (P_{overall4,5} + P_{overall9} + P_{overall14} – P_{overall4,5}P_{overall9} – P_{overall4,5}P_{overall14} – P_{overall9}P_{overall14} + P_{overall4,5}P_{overall9}P_{overall14}),
which is equivalent to
P_{overall5-0} = 1 – P_{overall4-1} + P_{overall3-2} – P_{overall2-3}
= 1 – 0.0031 + 0.00016 – 0.0000025 = 0.9971 (99.7%)
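The inclusion-exclusion step can be checked numerically (a sketch using the rounded tightness values from Guinn's data; rounding in the intermediate figures can shift the last digit, but the 99.7% conclusion is unchanged):

```python
p_grouping = 1 / 52
p45, p9, p14 = 0.032, 0.060, 0.067  # tightness values from Guinn's data

# Probability of at least one false match, by inclusion-exclusion,
# scaled by the common 1/52 membership probability
p_union = p_grouping * (p45 + p9 + p14
                        - (p45*p9 + p45*p14 + p9*p14)
                        + p45*p9*p14)
p_all_genuine = 1 - p_union
print(round(p_all_genuine, 3))  # 0.997
```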
This result means that the total probability of all ways to incorporate one to
three false positives (from MC bullets) into the basic grouping of the five
fragments, whether by stray fragments from additional bullets or by deliberate
plants (falsification of evidence by a conspirator who didn't know about the
distinctive properties of antimony in WCC/MC ammunition), amounts to no more
than 0.3%. Thus there is a 99.7% chance that all the fragments are genuine and
the groupings are real.
But we should not take these probabilities seriously, for
they represent (all except the base case 5-0) hypothetical situations with
absolutely no supporting evidence. They are answering the question "Suppose
I have solid evidence that one or more of the tiny fragments come from
additional WCC/MC bullets or are tampered evidence. What is the chance that they
could then randomly fall into the 'proper' groups?" Since this
question presupposes evidence that does not exist, we may not legitimately ask
the question. Thus all the above probabilities are moot.
A RIGOROUS APPROACH TO PROBABILITIES OF THE GROUPINGS (LARRY STURDIVAN)
The
data were taken from Vincent Guinn’s Neutron Activation Analysis (NAA)
published in the House Select Committee on Assassinations (HSCA) Hearings Volume
I, Appendix B, Table I, p. 538 and Appendix D, Table IIA, p. 547.
Following Guinn’s and Rahn’s lead, I concentrated on the antimony
level as the main discriminator among individual rounds from the four lots of
ammunition manufactured by Western Cartridge Company for use in the 6.5-mm
Mannlicher-Carcano rifle, referred to by Rahn as WCC/MC.
Other metals are ignored (though relevant to Rahn’s more comprehensive
argument).
Table
IIA from the referenced hearings contains Guinn’s NAA analysis of samples
from the open base of two bullets from Lot 6000 and four bullets each from Lots
6001–6003. From the wide variety of levels of
antimony content within the bullet lots, it appears that there are no lot-to-lot
differences. Assuming, for the moment,
that this is true, we can look at the means, standard deviations, etc., of the
antimony levels of all 14 samples in the first entry of the Appendix.
Notice the large gap between the mean of 406.9 ppm and the median level
of 250.5 ppm. This is one indication of
skewness in the data. A normal plot of
these data is shown in Figure 20. The
tight curvature at the left end and the very low Anderson-Darling P-statistic of
0.03 indicate that the data are far from Gaussian in distribution. For the natural logs of the same antimony concentrations, the mean
and median are 5.567 and 5.523 (which are equivalent to a geometric mean of
261.6 and median of 250.4). A normal plot
of the log values is given in Figure 20a. Here
the increased linearity is evident and the Anderson-Darling P-value increases to
a respectable, but not great, 0.77. The reason it isn't better is the point above the line on
the right, indicating that the upper tail is truncated, and the point above the
line on the left, indicating that the lower tail is heavier than “normal.”
Both of these are understandable when one considers the way that
different concentrations of antimony in the original constituents move toward
the mean with increased mixing. The
larger quantity of low-antimony lead in the mix can leave relatively larger
pockets of low-concentration alloy, while the smaller pockets of high-antimony
alloy form a secondary peak that shifts downward and flattens with mixing, but
retains something of its shape as it merges into a unimodal, skewed
distribution. This leaves the upper tail
truncated. (A pencil sketch would
help here, but is hard to do in a word processor.)
The
bottom line is that we want to stay away from the tails of this distribution
when using a Gaussian approximation to do statistics. Though the recovered bullets and fragments have cores that are
above the geometric mean of antimony concentration, they are on the flat part of
this curve, well below the outlying point on the right in Figure 20a.
To check the assumption mentioned above that there was no lot-to-lot
difference in the antimony content, look at the General Linear Model analysis of
the data from Guinn's Table IIA in the Appendix.
Not only is the lot number variable (LotNo) not significant, the Weight
covariate is far from being significant also. Thus,
the systematic error observed in the FBI data is not present (or at least not
found) in Guinn's NAA results.
The
data from Guinn’s Table I in the above reference are his measurements of the
metal content of metals other than lead in the same recovered bullet and bullet
fragments previously analyzed by the FBI. We
see that these are a bit different from the FBI data in run 4 (analyzed in the
previous paper) and even further from the FBI’s other three runs.
This resulted from the systematic error in the FBI data found by Guinn
and further explained by Rahn. Thus,
let us work with Guinn’s measurements, as they are more accurate and more
compatible with his similar measurements on the population as a whole.
Guinn’s data are reproduced in Table 20 below.
Data on the concentration of metals other than antimony (Sb) are left
out. We use the natural log of the Sb
concentration, LnSb, to calculate the standard normal variate, x.
This x is calculated simply by subtracting the estimated population mean
of 5.567 and dividing the difference by the estimated standard deviation of the
population, 1.074 (both also measured on the log scale; see
Appendix). These standard normal
variates are used to calculate the cumulative probability distribution, P, up to
the point x.
Table 20. Guinn's Sb measurements with standard normal variates and cumulative probabilities.

CE #     FBI #    Sb (ppm)   LnSb     x        P
399      Q1       833        6.7250   1.0782   0.8595
842      Q9       797        6.6809   1.0371   0.8501
567      Q2       602        6.4003   0.7758   0.7811
843      Q4,5     621        6.4313   0.8048   0.7894
840(1)   Q14(1)   638        6.4583   0.8299   0.7967
840(2)   Q14(2)   647        6.4723   0.8430   0.8003
840(m)   Q14(m)   642.5      6.4653   0.8364   0.7985
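The x and P columns can be regenerated from the Sb column. A sketch using the population estimates quoted above (5.567 and 1.074 on the log scale); the function name is mine:

```python
import math

MEAN_LN, SD_LN = 5.567, 1.074  # estimated population mean and SD of LnSb

def variate_and_prob(sb_ppm):
    """Standard normal variate x and cumulative probability P for an Sb level."""
    x = (math.log(sb_ppm) - MEAN_LN) / SD_LN
    p = 0.5 * (1 + math.erf(x / math.sqrt(2)))  # standard normal CDF
    return x, p

x1, p1 = variate_and_prob(833)  # CE 399 (Q1)
x2, p2 = variate_and_prob(602)  # CE 567 (Q2)
print(round(x1, 4), round(p1, 4))  # ≈ 1.0782 0.8595
print(round(x2, 4), round(p2, 4))  # ≈ 0.7758 0.7811
```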
Figure 20. N-plot of Guinn's 14 Measurements of Antimony Levels in WCC Bullets Made for the MC Rifle
Figure 20a. N-plot of Guinn's 14 Antimony Measurements on the Natural Log Scale
Differences in the Pvalues in Table 20 may be used to define how
“close” two samples are to each other, as the difference is a measure of
what proportion of the population lie between the two levels.
For instance, the difference between Q1 and Q9 is 0.0084, indicating that
less than 1% of the population lies between these two concentrations.
Two different determinations were made of Commission Exhibit (CE) 840, so
the difference between determinations 1 and 2, 0.0036, gives us a measure of the
lowest detectable difference in antimony concentration, about 1/3 of 1%.
The mean of the two determinations is indicated by (m).
The point of
these calculations is that not all conspiracy scenarios justify assuming that
the samples are drawn from five independent WCC/MC bullets.
Some conspiracy scenarios frankly admit that Oswald was a probable
shooter, but hold that even one of the five samples deriving from a different
source would prove a conspiracy. The
other four could be the very nonrandom samples the Warren Commission (and the
HSCA, and many others, including us) claimed they were.
How likely is it, then, that the one random sample would match one group
or the other?
Consider the
scenario that someone “planted” CE 399 in Parkland to frame Oswald for the
crime. This would only work, of course,
if they somehow gained possession of a bullet that had been fired from
Oswald’s rifle (and ignores the fact that CE 567 also has engraving from the
Oswald rifle on it). Then we need to estimate the probability that a randomly selected
bullet would have an antimony concentration at its base that was as close as the
antimony concentration of CE 399 is to that of CE 842.
But the difference in the P-values of the two, in Table 20, only gives us
about half the desired probability estimate. The
concentration could actually be lower than that of CE 842 and still be as close
to CE 842 as CE 399 is. So we need to
consider the standard normal variate in the interval as far below as CE 399 is
above. Subtracting the difference from
that of CE 842 gives us a standard normal variate, x = 0.9960 with a
corresponding cumulative probability of P of 0.8403.
Thus, the total interval width is the difference 0.8595 – 0.8403 = 0.0192, or about 1.9%. A sample
from the base of a random bullet from any box of WCC/MC ammunition would have
less than a 2% probability of being as close to the recovered fragments from
John Connally’s wrist as CE 399 is.
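This two-sided interval can be verified directly from the x-values in Table 20 (a sketch; `phi` is the standard normal CDF):

```python
import math

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

x_399, x_842 = 1.0782, 1.0371       # standard normal variates from Table 20
x_mirror = 2 * x_842 - x_399        # reflect CE 399 to the far side of CE 842
width = phi(x_399) - phi(x_mirror)  # probability mass inside the interval
print(round(x_mirror, 4), round(width, 3))  # ≈ 0.996 0.019
```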
Random matches
to members of Group 2 are a bit more involved as well as more difficult to use
in constructing a conspiracy scenario. The
unconvincing argument that somebody planted fragments in the upholstery that
would match the head fragments has little probability (about 1.8%), and even
less credibility. Table 21 has the
relevant calculations. As in the last
paragraph, we need to calculate a standard normal variate as far above CE 843 as
CE 840 is below. The row labeled “Other
side” of CE 840 has the relevant xvalue and the corresponding P.
The difference is 0.8073 – 0.7894 = 0.0179. Planting a bullet
fragment with engraving from the Oswald rifle (i.e., CE 567) that would match
the other two samples makes more sense, in spite of the difficulty of obtaining
such a fragment to plant. If we calculate
a standard normal variate that is as far above the mean of CE 840 and CE 843 as
CE 567 (the engraved fragment) is below, we get the x and P values listed on the
fourth line of Table 21. Subtracting the
P-value for CE 567 listed in Table 20 (0.8066 – 0.7811 = 0.0255) leaves about
2.5% probability of this close a match by chance
alone.
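Both planting intervals can be recomputed the same way (a sketch built on the x-values from Table 20; with both endpoints taken as P-values, the engraved-fragment interval comes out near 2.6%):

```python
import math

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

# x-values from Table 20: CE 843, mean of CE 840, and CE 567
x_843, x_840, x_567 = 0.8048, 0.8364, 0.7758

# Fragment planted to match CE 840: reflect CE 843 about CE 840
p_fragment = phi(2 * x_840 - x_843) - phi(x_843)
# Engraved fragment (CE 567): reflect about the mean of CE 840 and CE 843
mid = (x_840 + x_843) / 2
p_engraved = phi(2 * mid - x_567) - phi(x_567)
print(round(p_fragment, 3), round(p_engraved, 3))  # ≈ 0.018 0.026
```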
Table 21. Reflected x-values and cumulative probabilities for the matching scenarios.

Antimony concentration from:                x        P
CE 843                                      0.8048   0.7894
"Other side" of CE 840 (from CE 843)        0.8680   0.8073
Mean of CE 840 and CE 843                   0.8206   0.7941
"Other side" of the above (from CE 567)     0.8654   0.8066
Mean of Group 1 (CE 842 and CE 399)         1.0576   0.8548
Mean of Group 2 (head & car fragments)      0.8057   0.7897
"Other side" of Group 2                     0.5538   0.7101
We
have noted that all the fragments and bullets have an antimony concentration
above the geometric mean for the population. So
how probable is the random selection of two bullets that the Warren Commission
concluded caused all the injuries to the two men?
For this we need to select one group as the “reference” and calculate
how probable it is that a random sample would be as close as the other is to it.
The two different calculations, with two different reference groups, give
slightly different results. Let us use
the fragments from JFK’s head and car as the reference (call it Group 2), as
almost everybody would agree that this shot is the fatal shot that was shown so
dramatically in the Zapruder film. It is
also very likely that some of the remnants of this bullet would be found in the
car.
The x-value of
the last entry in Table 21 is the same distance from the x-value of Group 2 as
the x-value of Group 1 is, but on the lower side.
The difference spanned in the P-variable is (0.8548 – 0.7101 = 0.1447), about 14.5% probability of selecting two bullets that would
fall this near to one another. Thus,
about one in seven pairs of bullets from a box of WCC/MC ammunition would have
antimony concentrations this close. There
is nothing particularly relevant about this calculation except to note that
events with probabilities of 14%, while unusual, are not considered particularly
rare; whereas those with probabilities of 2% to 3% are unusual enough to be
considered statistically significant.
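The 14.5% figure for two random bullets landing this close together follows the same reflection recipe, using the group means from Table 20 (a sketch):

```python
import math

def phi(x):
    """Standard normal cumulative distribution function."""
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

x_g1 = (1.0782 + 1.0371) / 2           # mean x of Group 1 (CE 399, CE 842)
x_g2 = (0.7758 + 0.8048 + 0.8364) / 3  # mean x of Group 2 (head and car fragments)
x_mirror = 2 * x_g2 - x_g1             # reflect Group 1 below Group 2
p_pair = phi(x_g1) - phi(x_mirror)
print(round(p_pair, 3))  # ≈ 0.145, about one pair in seven
```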
We note that the
fragments deposited in Connally’s wrist are thought to have come from the base
of CE 399, as were the samples taken, first by the FBI and later by Guinn, for
analysis. Thus, they both came from nearby sources and, indeed, the
difference in these two samples is about 4% of their average concentration.
The fragments deposited in JFK’s head and the small lead fragments
found in the car were thought to have come from the bullet represented by the
large fragment containing engraving (Group 2).
Not surprisingly, the range of concentrations in samples in Group 2 is
about 6.7%, a bit higher than the range in Group 1, but still much narrower than
the range Guinn found throughout the cores of WCC/MC bullets he analyzed at
widely separated locations. This is
because the small fragments all come from the lead surface exposed at the crack
in the bullet jacket. As this surface was
stretched when the bullet was torn in two, the sample taken from the surface of
the recovered large fragment was still relatively near the origin of the small
fragments, not representative of the wholebullet variance found by Guinn.
Some General Statistical Philosophy
These probability measurements represent the least
possible restriction on the initial assumptions. They are the best that a co-conspirator could do if he had complete
knowledge of the type of ammunition Oswald was using and attempted to duplicate it
as best he could. Thus, these
probabilities are the starting point from which one can calculate other
probabilities under other reasonable scenarios. For instance, my first thought regarding the high F-value for
Group in the General Linear Model analysis of FBI NAA, run 4, was that the 1/52
consideration was lost in the noise. That
is, even on the log scale, the normal curve of Figure 20a is not so perfect a
fit that we can truly distinguish between one chance in ten thousand and one
chance in a hundred thousand. (The actual
probability of very low probability events can never be accurately measured or
predicted, unless we maintain absolute control, like the state does in their
lotteries.) On second thought,
however, I realize that Rahn was quite correct. The
fact that the observed grouping uniquely complements the other physical evidence
is a very real effect that cannot be “lost in the noise.”
However low the probability is of getting any 2,3 grouping of five
random samples, the fact that there are ten such possible groups reduces the
probability we started with by an order of magnitude.
Nine of those not only fail to lend support to the other physical data;
most would tend to refute it. So, if the
original probability is between one in a thousand and one in a hundred thousand,
this additional order of magnitude of reduction means that the true probability
of getting this particular grouping at random is between one in ten thousand and
one in a million. Nor does this reduction
in probability assume lack of knowledge on the part of the alleged conspirator.
The orderofmagnitude reduction in probability is completely
unavoidable. If we further assume that the conspirator did not know or did not
care that Oswald was using a Mannlicher-Carcano rifle, and picked another rifle
along with the physical characteristics that go with its ammunition, the
probability of that core matching the cores randomly selected from the WCC/MC
bullets is another one or two orders of magnitude lower.
We are now down to one in a hundred thousand to one in a hundred million
chance of a match being this close (i.e., 1 in 10^{5} to 1 in 10^{8}).
At this point,
the other, more probable scenarios come to the fore; for instance, the
"planted" CE 399 bullet,
or another shooter contributing one hit to the mix while all other injuries were
caused by WCC/MC bullets from the Oswald rifle.
As seen above, however, even the most likely of these scenarios also has
a low probability of being possible, even if we assume that the “other
shooter” had complete knowledge of Oswald’s intent and matched it as well as
possible. Like the other set of
statistics, these are only a starting point. Of
course, planting a bullet fired from Oswald’s rifle in Parkland Memorial would
require knowledge of the Oswald weapons, as well as access to them, but this
would not be a requirement for a second shooter.
In any case, lack of complete knowledge would lead to ordersofmagnitude
reduction of even the small probabilities calculated above.
The conspiracy
"theories" run the gamut from a co-conspirator with Oswald to a complete set
of alternate assassins who managed to frame Oswald for the crime.
Therefore, it makes sense to present levels of probability that diminish
to the vanishing point as one tries to separate Oswald from the crime.
Most of the conspiracy "theories" are of the latter type, featuring
"Oswald as patsy" and multiple shooters all over Dealey Plaza.
It is this scenario that approaches the one-in-a-million chance of being
true, on the basis of the NAA data alone. The
more credible scenarios involving a knowledgeable co-conspirator, with 2% to 3%
probability, are proposed less often. In
my view, this is because they are less spectacular, and so less profitable.
Appendix: Minitab Output for Guinn Samples from Four Lots of Western Cartridge Company Rounds for the Mannlicher-Carcano Rifle
Descriptive Statistics
For raw antimony levels in parts per million, independent of Lot Number.

Variable   N    Mean    Median   TrMean   StDev   SE Mean
Sb         14   406.9   250.5    371.2    364.5   97.4

Variable   Minimum   Maximum   Q1      Q3
Sb         24.0      1218.0    148.8   730.5
Descriptive Statistics
For Natural Logs of antimony levels.

Variable   N    Mean    Median   TrMean   StDev   SE Mean
LnSb       14   5.567   5.523    5.638    1.074   0.287

Variable   Minimum   Maximum   Q1      Q3
LnSb       3.178     7.105     4.996   6.594
General Linear Model
Analysis of Variance on differences in antimony content between lots of WCC/MC bullets.
Neither
Lot Number nor Weight was significant. Thus,
the levels of antimony and the variance in those levels are consistent within the
cores of all lots. Nothing unusual
was found except that the low value (24 ppm) was identified as an outlier.
Factor   Type    Levels   Values
LotNo    Fixed   4        1,2,3,4

Analysis of Variance for LnSb, using Adjusted SS for Tests

Source   DF   Seq SS   Adj SS   Adj MS   F      P
Wt       1    1.542    2.296    2.296    1.81   0.211
LotNo    3    2.041    2.041    0.680    0.54   0.669
Error    9    11.406   11.406   1.267
Total    13   14.989
Term       Coef      StDev     T      P
Constant   0.330     3.885     0.08   0.934
Wt         0.10374   0.07708   1.35   0.211
LotNo
1          0.0084    0.6602    0.01   0.990
2          0.1490    0.5459    0.27   0.791
3          0.6114    0.5445    1.12   0.291
Unusual Observations for LnSb

Obs   LnSb      Fit       StDev Fit   Residual   St Residual
9     3.17805   5.45497   0.59495    -2.27692   -2.38R

(R denotes an observation with a large standardized residual.)
Summary
The simplified approach and the rigorous approach to
calculating the probabilities of the two groups arising by chance both assumed
that all the fragments in question were from WCC/MC ammunition. The simplified
approach derived probabilities of 2% to 0.0002% (1 in 52 to 1 in 500,000) for
the various scenarios, with the 2% representing the probability of 5 genuine
particles grouping properly by chance, and the 0.0002% representing the
probability that the two large particles were genuine and the three tiny ones
were false matches. The rigorous approach got values between 1 in 10,000 and 1
in 1,000,000 for the second scenario above, i.e., it bracketed the result from
the simplified approach. Thus both approaches found vanishingly small
probabilities for the observed pattern having originated by chance.
But even these probabilities cannot be taken seriously
because no reliable evidence exists for any false matches, whether stray
fragments from some additional bullet or fragments planted by a conspirator.
Thus the calculated probabilities are strictly hypothetical and may not
legitimately be introduced into any discussion unless or until some physical
evidence of additional bullets or tampering is produced, which it has not in the
last 37 years.