Reply to Art Snyder’s Note “On Ken Rahn’s Statistical Analysis of the Neutron Activation Data in the JFK Assassination” of 27 May 2001
Kenneth A. Rahn
(With help from Larry Sturdivan)
28 October 2001
At the outset, I would like to thank Art Snyder for
producing his note, for only through questioning ideas can we refine and
strengthen them. The fact that his criticisms work out oppositely from what he
intended (most of them turning out to be wrong or irrelevant) does not diminish
the inherent worth of the process he started. This is how we have to work with
scientific ideas.
This document first considers Art’s
major points, and then turns to his other remarks in roughly the order they
appear in his note. At the end, it will be clear that Art’s note illustrates
the dangers of focusing too heavily on statistics for their own sake, at the
expense of basic reasoning and familiarity with the literature on the topic. The
“comprehensive rebuttal” to my monograph on NAA (neutron activation
analysis) and the JFK assassination, as Stuart Wexler has described it to one of
the newsgroups, is neither comprehensive nor a rebuttal. It is flawed at every
turn. It is, to say the least, a nonstarter. Stuart and the others who so
reflexively endorsed Art’s note as destroying my monograph now need to
seriously reconsider their standards for accepting and rejecting arguments, for
they have erroneously accepted arguments they were predisposed toward and
erroneously rejected arguments they were predisposed against.
Misdirected criticisms
One of the biggest errors in Art’s note
is that it was misdirected. The NAA monograph presented two independent
statistical analyses of the same NAA results, a simplified one by me and a more
sophisticated one by Larry Sturdivan. The introduction to the methods stated
clearly that mine sacrificed accuracy for understandability, and that Larry’s
was sounder. It also noted that both approaches gave essentially the same
answers. Art limited himself to mine, claiming that it gave the wrong answer
because of a number of statistical errors. He didn’t touch Larry’s. But if
mine was wrong and Larry’s was right, then their results could not agree. Art
avoided dealing with this inconvenient truism. More importantly, if I said that
we should rely on Larry’s (“it gives the best available answer”), then Art
should have dealt with Larry’s rather than mine. Art avoided this problem,
too. These omissions alone make Art’s note irrelevant.
NAA much broader than its statistics
The narrowness of Art’s note is another
of its biggest errors. The NAA monograph constructs a multipronged argument for
the correctness of the NAA and Dr. Vincent P. Guinn’s interpretation of it,
but Art ignores everything except the statistics. This is no more valid than
cutting off one head of a hydra and claiming that you’ve killed it—you
haven’t even begun. Statistics is no substitute for gray cells.
Art’s basic conclusion
Art’s basic conclusion is that my
statistical analysis was flawed and led to the false conclusion that the NAA
results were definitive (that the fragments came from two and only two bullets)
when in fact they were inconclusive. Anyone (he says) can see the
inconclusiveness just by examining Guinn’s results from 14 different bullets,
which Art reproduces in his Appendix A. I (he says) used “elaborate
statistical arguments” that obscured his obvious conclusion. In other words,
both Guinn and I, each of whom has made a career in NAA, somehow missed this
“fact” that was so obvious to physicist Art. At best, this conclusion is
highly questionable. Art erred by not checking himself. He should have taken the
scientific approach and considered other possible reasons why his conclusions
differed so much from Guinn’s and mine.
Wrong appeal to “straw man”
Art identifies four major “problems”
in the monograph. The first is that it allegedly introduced and refuted a
“straw man” of five separate bullets being responsible for the five
fragments recovered. Art states that this only confuses the issue because no one
is seriously proposing that scenario.
The problem with this criticism is that it
ignores context. That particular scenario was offered solely for illustrative
purposes: “The above calculations are illustrative only, and for the specific
case where all five fragments fell into place ‘randomly,’ whatever that
means.” My text then went on to consider in more detail a series of six graded
scenarios, beginning with all five fragments being genuine and ending with two
genuine and three false positives. Art doesn’t tell you that. He makes it seem
as though I made a big deal out of five false positives, which I obviously did
not. It’s easy to score points if you attack a nonexistent target.
Creating statements out of whole cloth
Art’s second “major problem” is that
I purportedly assumed an errant probability of one for genuine matches between
fragments and thereby failed to properly compare competing hypotheses. In other
words, by arbitrarily setting P = 1 for true matches even though the actual
probability was much lower than that, I preordained the outcome to the
explanation that I liked. Or, as Art puts it, I failed to consider the
possibility of my “preferred hypothesis” failing.
The problem with this criticism is that I
did no such thing. I did not assume any probability for a “preferred”
hypothesis. I restricted myself to calculating probabilities for hypotheses with
false positives (accidental matches) and showed that they were too small to
accept. I added that none of these alternative scenarios could be taken
seriously because no solid evidence for any of them has been produced, even
after 38 years of trying. I coupled these considerations with overwhelming
reasoning in favor of the genuineness of the fragments and their matches. By
creating things that I didn’t do and leaving out things that I did do, Art
seriously trivialized my efforts.
Misguided criticism of Ptightness
Art’s third “major problem” is that
my probability for the tightness of a group of fragments, Ptightness,
is a poor way to discriminate genuine and accidental matches because it
allegedly is insensitive to separations that are too big for genuine matches.
While there may be some truth to this, the monograph showed that it can serve as
a reasonable introduction to the subject. It also gave the same basic answer as
Larry’s better approach. Ironically, Art’s approach shows the definitiveness
of the NAA better than mine did. He missed this because he used the wrong data.
Our right data coupled with his approach confirm our earlier results and rebut
his.
Confusing treatment of a priori probabilities
Art’s fourth “major problem” is my
alleged use of a priori probabilities, or the probabilities for getting
the actual situation that we started with (two hits, five fragments, etc.). I
found this section to be particularly confusing. By definition, an a priori
probability is one that exists before one gets new information that allows one
to refine the calculations. The problem I see here is that one could in
principle have more than one stage of new information and refined calculations.
The a posteriori probability from the first round would then become the a
priori probability for the next round, and so on. Art seemed to be using a
priori to refer to the situation before the shots were fired. If Art is
interested in continuing this discussion, it would be very helpful for him to
state exactly what he means by a priori probabilities so that we can all
be talking about the same thing.
That said, it appears that Art misread the
monograph, for I don’t believe I used a priori probabilities in any
sense that he may define them. I used his conditional probabilities instead, as
he recommended, by starting with the actual situation of five fragments and
examining competing explanations for their origins. A priori
probabilities would have considered the chances of having gotten five fragments,
four fragments, etc., which I definitely did not do. It does not seem germane to
determine how improbable the actual result was relative to other ways the
assassination might have played out, because it didn’t play out those other
ways. Starting with the five actual fragments removes a priori
probabilities from consideration.
But there is something much more important
here that Art doesn’t tell you. After calculating all those probabilities that
Art complains about, I stated explicitly that we should not take any of them
seriously, for all the cases with planted fragments or fragments from additional
rifles represent hypothetical situations with absolutely no documentation.
Statistics applied to these ungrounded ideas will be just as hypothetical as the
ideas themselves are. The chance that a particular fragment might have come from
another rifle means nothing until we have reason to believe that another rifle
was involved, and we have no such reason. Thus Art’s whole note is criticizing
an approach that I had already dismissed as moot. Major false target!
Art’s only two issues
Art concludes his opening section by
stating that only two issues are relevant to the case for or against conspiracy:
whether the group of three fragments could have contained any accidental matches
from additional bullets, and whether the group of two fragments could have
contained an accidental match. I believe he is trying to say that only a certain
number of false matches are of interest to him, maybe only one, and specifically
that the case of five false fragments is irrelevant to these issues. He thus
seems to be interested only in low-level conspiracies even though that
restriction does not follow from his two issues. I consider it arbitrary,
dogmatic, and unjustified to so limit the questions beforehand. Art is saying
“My issues are the only issues,” and this is patently untrue.
Misquoted assumption
In referring to my simplified picture that
antimony in WCC/MC bullets ranges smoothly between 0 ppm and 1200 ppm, Art
misquotes the monograph. He said that I asserted that the simplification
“doesn’t matter,” whereas I actually stated that “it would not change
the basic sense of the answer, which appears very clearly.” Big difference!
Art is shooting at another false target.
Blind statistics
In a misguided attempt to demonstrate the
uselessness of my Ptightness, Art at the bottom of his page 5 tries
to claim that its small value for the “group” composed of the Walker
fragment and the unfired round would somehow lead to the “absurd” conclusion
that “the bullet found in General Walker’s wall and the unfired bullet found
in the Carcano rifle left on the 6th floor are the same bullet.”
The only absurd thing here is this blind use of statistics. Statistics enters
only after reasonable and competing hypotheses are formulated and justified, not
before. Like fine wine, statistics must not be served “before its time.” But Art tries it here, even if only to cast aspersions on Ptightness. Art surely knows that he will find no such use of Ptightness
in the monograph, and the effort to imply otherwise does not become him.
On a more fundamental level, Art erred
here by applying his statistical test inappropriately. His test required samples
to be chosen randomly, but he specifically selected the Walker bullet and the
unfired round for their similarity. Statistical tests used by any of us are
invalidated when we use nonrandom samples.
Art’s basic calculations
The problems with Art’s basic
calculations can be discussed in four steps. First, he uses a technique for
calculating accidental matches that is unjustified because it does not take
account of the underlying log-normal distribution of antimony in the fragments.
Second, he inserts wrong data into his wrong equations and so gets seriously
wrong answers. Third, the right data used in his equations provide an answer
that confirms our answers. Fourth, the right data in the right equations show
more properly and more clearly that the two groups are distinct and composed of
genuine fragments—our conclusion reached in our way. Let us walk through
Art’s reasoning and calculations and see where he goes off the track.
We begin with an overview of the
probability calculations. Given two or more fragments with concentrations of
antimony that are close together (that form a group), we need to be able to
estimate (a) the probability that any two of these fragments actually represent
the same starting material (are a genuine match) and (b) the probability that
the two fragments represent different starting materials such as different
bullets (are an accidental match). Dr. Guinn claimed, as do Larry and I, that
the groups are what they seem—genuine matches. Critics such as Art Snyder and
Stuart Wexler think otherwise, and feel that it is likely or even probable that
one or more of the five fragments came from outside sources (another gunman or a
plant). At the minimum, the critics claim that the NAA cannot demonstrate
definitively that the fragments are genuine. If probabilities (a) and (b) can be
calculated reliably, they may resolve the question.
The traditional statistical view is that
probability (a) cannot be estimated reliably because it amounts to trying to
prove a negative to a certainty, a phrase familiar from the Warren Commission
Report. The negative here would be that no more than two bullets are represented
in the five fragments. I did not attempt this in the monograph, but Art claims
that I did (see above), and tries to show in his note that even when that
probability is calculated correctly, it leads to flawed conclusions. His
improved method, which I think is a reasonable extension of what I was trying to
do, is to reduce the negative to a positive at a lower level. He starts with the
two fragments having the same true concentration of antimony, and then allows
each concentration to be degraded (made uncertain) by the vagaries of the
analytical procedure. He then calculates a joint analytical uncertainty for the
fragments and uses the normal distribution of uncertainty (valid for NAA) to
find the probability that their actual separation can be explained solely in
this way. (In other words, they are really the same concentration, but the
analysis made them look different.) It’s like a Gaussian tightness approach.
His new positive is their difference from the same concentration. Credit to Art
for coming up with a good way to test for a genuine match.
But then he goes off the track. First, and
most importantly, he uses a seriously wrong value for the analytical
uncertainty. He went to Guinn’s report to the HSCA, found his table with
concentrations and uncertainties, which Art reproduces as his Appendix B, and
takes them at face value. This is a very serious error, for Guinn stated
explicitly in his report and his testimony to the HSCA, and I reiterated in the
monograph, that those values are for counting statistics only, and so represent
only part of the analytical uncertainties. Guinn emphasized that the actual
uncertainties of measurement are about two to three times the uncertainties from
counting, which in turn are about 1% of the concentration. Thus we have to use
measurement uncertainties of 2%–3%, not 1%. Art’s error showed that he
hadn’t read Guinn’s report, his testimony, or my monograph carefully enough.
(Art also claimed to be using the values of Appendix B only for the sake of
argument, but he actually went on to draw conclusions from them.)
To the nonspecialist the difference
between 2%–3% and 1% may not seem like a big deal, but it made the difference
between getting the right answer and one that was completely wrong. Art
calculates that the joint uncertainty for the two fragments is 11.4 ppm of
antimony. Since the two fragments differ by 35 ppm, that makes them appear to be
3.1 standard deviations apart (35 ppm divided by 11.4 ppm). That is a very big separation, yet Art’s calculation nominally assigns a high probability to a genuine match. To
get that value, Art skips over the Gaussian formula he has just presented as his
equation 3. Instead, he takes the 3.1 standard deviations of separation, goes to
a table of statistics for the normal distribution, and finds that the area within 3.1 standard deviations of the mean is about 99.7% (0.997) of the total area under the normal curve (using a simplified (standardized) version of the formula that
he didn’t show). That allows him to state that the probability of getting a
separation of 35 ppm or less (with that standard deviation) is about 0.997. In
other words, the probability that two fragments could differ by up to 35 ppm and
still represent a genuine match was 0.997. This answer is wrong because the
standard deviation of 11.4 ppm is wrong. In any event, Art’s probability (a),
for a genuine match between the stretcher bullet and the wrist fragment with the
tightness approach, is 0.997.
Art then calculates his probability (b),
of an accidental match between the stretcher bullet and the wrist fragment. For
this he uses my tightness approach directly, whose resulting probability is [2 x
35 ppm]/1200 ppm = 0.058. This value, he notes, is some 17 times lower than the
probability for the genuine match (0.997) calculated above. That is nonsensical,
he says. Why? I think because he realized that the 3.1 standard deviations of
separation for the fragments must mean that it is highly unlikely they
represented the same source. In that he is both right and wrong, right with
respect to the 3.1 standard deviations but wrong because it is really 1.25
standard deviations (35 ppm divided by the 28 ppm derived below). He is also
wrong because my “linear tightness” approach was noted in the monograph not
to be the best way to calculate. Thus, Art is wrong to use it here. In effect,
the numerator of his ratio is wrong because of incorrect data, and the
denominator because of improper theory. The ratio of two improper numbers can
never be meaningful.
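The arithmetic in this paragraph and the one before it can be checked directly. The sketch below is illustrative only (it is not Art’s or the monograph’s actual code): it recomputes the separation in standard deviations under Art’s 11.4 ppm joint uncertainty and under the corrected 28 ppm value, and the linear-tightness probability [2 x 35]/1200.

```python
from math import erf, sqrt

SEPARATION = 35.0   # ppm of antimony between the stretcher bullet and wrist fragment
SB_RANGE = 1200.0   # assumed full range of antimony in WCC/MC lead, ppm

# Art's joint uncertainty (counting statistics only) vs. the corrected value
# (Guinn's 2-3% overall measurement uncertainty on ~800 ppm, combined for
# two measurements: sqrt(2) x 20 ppm, about 28 ppm).
z_art = SEPARATION / 11.4        # ~3.1 standard deviations
z_corrected = SEPARATION / 28.0  # ~1.25 standard deviations

# Probability of a separation this small or smaller for a genuine match:
# two-sided area within +/- z of the mean of a normal distribution.
def p_within(z):
    return erf(z / sqrt(2.0))

# Linear "tightness" probability of an accidental match: [2 x 35] / 1200.
p_tightness = 2.0 * SEPARATION / SB_RANGE

print(round(z_art, 2), round(z_corrected, 2))  # 3.07 1.25
print(round(p_within(z_art), 3))               # ~0.998 ("about 0.997")
print(round(p_tightness, 3))                   # 0.058
```

The 17:1 ratio criticized above is just 0.997 divided by 0.058; as the text notes, both numbers rest on questionable inputs.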
Art then starts down the right road (I
think) by reasoning that we don’t care so much about probabilities of
separations up to a particular value (whether from a genuine match or an
accidental one) as about separations of that value. He correctly notes
that it is rather improbable (about 0.003) to have a separation of 35 ppm.
Art stops this line of reasoning here,
leaving us with a sort of apples-and-oranges comparison, or a reasonable
comparison of areas followed by an implied comparison of an ordinate with an
area. The result is confusion.
Then he returns to “likelihood
ratios,” the approach that he had just dropped, but does not connect the new
material to the old. To simplify a bit, a likelihood is, roughly speaking, the height of the probability curve at the observed separation (35 ppm of antimony in this case), where separation is usually expressed in standard deviations from zero. (He
calls it the “probability density function,” or PDF.) The likelihood ratio
is just the ratio of the PDFs for two competing scenarios, the two here being a
genuine match and an accidental match.
Art now calculates the PDFs and their
ratio in order to drive home his point that either the stretcher bullet or the
wrist fragment probably came from an external source. For the numerator, the
probability of a genuine match, he returns to his Gaussian with the same wrong
standard deviation, but this time uses a different formula that gives an answer
an order of magnitude lower—0.0003 vs the earlier 0.003. The PDF for the
accidental match he calculates (incorrectly) as 1/1200 (0.00083)—incorrect for the reason noted above, which will be described in detail below. Now the ratio
for genuine/accidental is about 1/3, which means that the accidental match (from
the external source) is about three times as probable as the genuine match. He
notes with evident satisfaction that his likelihood ratio (of PDFs) has changed
the odds of an accidental match from 17:1 against to 3:1 for.
The problem with this great turnaround is
that it is wrong for two reasons, either of which would disqualify it. The first
error lies in the likelihood of a genuine match. To get the true value, we must
use the proper standard deviation. We can assume that the starting numbers are
close to Guinn’s overall estimate of 1% of the concentration (which they are).
We can then keep things general by using an average antimony concentration of
800 ppm for the fragments. Lastly, we can assume that the typical analytical
uncertainty will be the average of Guinn’s range of 2% to 3%, or 2.5%. For a
concentration of 800 ppm, that works out to be 20 ppm, or 28 ppm for the joint
uncertainty of the two measurements (√2, or about 1.4, times either uncertainty). That makes the
true value of the PDF for the genuine match either 0.0065 or 0.1827, depending
on which of Art’s formulas for the normal distribution you use.
The second error in the likelihood ratio
is the value of its denominator, which is the PDF for an accidental match. Art
again used 1/1200, which is correct for a uniform distribution of antimony in
WCC/MC bullets, but is incorrect because the antimony is not distributed
uniformly between 0 and 1200 ppm. We explained that in detail in the monograph,
but Art ignores it and thereby shoots at a false target. But even if we accept
Art’s way of calculating, the proper value of his likelihood ratio would
change to 0.0065/0.000833 or 0.1827/0.000833, which amounts to 8/1 or 220/1 in
favor of a genuine match. Thus with either of Art’s formulas, the two
fragments are much more probably a genuine match than an accidental match, the
reverse of what Art had calculated. I repeat that he got this wrong answer and
Stu Wexler agreed with it because neither one had sufficiently read Dr.
Guinn’s testimony to the HSCA, his report to the HSCA, or my monograph, all of
which presented the proper standard deviations.
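The corrected likelihood ratio can be verified in a few lines. This is a sketch only, using the working assumptions stated above: an 800 ppm average concentration, a 2.5% per-measurement uncertainty, and the text’s rounded joint uncertainty of 28 ppm.

```python
from math import exp, pi, sqrt

SEPARATION = 35.0    # ppm between the stretcher bullet and wrist fragment
sigma_joint = 28.0   # joint uncertainty, ppm (sqrt(2) x 20 ppm, rounded as in the text)

z = SEPARATION / sigma_joint  # 1.25 standard deviations

# Art's two forms of the Gaussian: the standardized PDF phi(z), and the
# actual density phi(z)/sigma, both evaluated at the observed separation.
phi_standardized = exp(-0.5 * z * z) / sqrt(2.0 * pi)  # ~0.1827
phi_density = phi_standardized / sigma_joint           # ~0.0065

# PDF of an accidental match under the simplified uniform model (1/1200).
pdf_accidental = 1.0 / 1200.0                          # ~0.000833

# Likelihood ratios of genuine to accidental match.
print(round(phi_density / pdf_accidental, 1))   # ~7.8, i.e. about 8 to 1
print(round(phi_standardized / pdf_accidental)) # ~219, i.e. about 220 to 1
```

Either way the ratio runs strongly in favor of a genuine match, reversing Art’s 3:1 figure.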
But all these calculations are flawed
because they involve the simplified assumption that antimony in WCC/MC lead is
distributed uniformly between 0 and 1200 ppm, which was shown in the monograph
to be incorrect. Let us now briefly review what the true distribution is and how
to properly calculate the probability of getting an accidental match. The best
available estimate of the distribution comes from Dr. Guinn’s 14
“background” bullets that he reported to the HSCA. A plot of these 14
concentrations of antimony shows that they are not really distributed uniformly,
although that simplification forms a good starting point. The next-best guess
would be that they are distributed normally (in Gaussian fashion). This can be
checked by plotting the data on a “normal probability plot,” as Larry
Sturdivan did. That plot shows that they are also not distributed normally, for
they have a sharp downturn at low concentrations as well as one or more outliers
at the upper end. The next guess is that they are distributed log-normally,
a pattern that would also be expected from the statistics of mixing virgin lead
and its low antimony with recycled hardened lead and its much higher antimony. A
normal probability plot with logs of concentrations confirms that the
distribution is nearly log-normal. (At the time of this writing, Larry and I are
evaluating the possibility that the distribution is actually bimodal, with two
separate normal or log-normal zones representing the incompletely mixed starting
materials. I hope to be able to report on this at the Lancer conference.)
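The procedure just described can be sketched in code. Guinn’s actual 14 background-bullet values are not reproduced here, so the sketch below uses hypothetical concentrations spanning roughly 20 to 1200 ppm, chosen only to show the mechanics: right-skewed raw data whose logarithms look far more normal on a probability plot.

```python
import numpy as np
from scipy import stats

# Hypothetical antimony concentrations (ppm) standing in for Guinn's 14
# background bullets: a geometric spread from ~20 to ~1200 ppm, chosen
# only to illustrate the method, not to reproduce his data.
sb_ppm = 20.0 * (1200.0 / 20.0) ** (np.arange(14) / 13.0)
logs = np.log(sb_ppm)

# Raw concentrations are strongly right-skewed; their logs are symmetric.
print(round(stats.skew(sb_ppm), 2))  # positive (right-skewed)
print(round(stats.skew(logs), 2))    # ~0.0 (symmetric)

# A normal probability plot of the logs: the correlation coefficient r
# measures how close the points lie to a straight line (log-normality).
(osm, osr), (slope, intercept, r) = stats.probplot(logs)
print(round(r, 3))                   # close to 1 => nearly log-normal
```

On Guinn’s real data, a high r for the logged values but not for the raw values is what supports the log-normal description.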
From the normal probability plot with logs
can be read the mean and standard deviation, which then can be used to construct
a regular normal (Gaussian) distribution. The stretcher bullet and the wrist
fragment can then be “positioned” on this distribution, with the
“distance” between them representing one-half the probability that one of
them is an accidental match with the other. This probability works out to be
just under 2%. But this is the most optimistic value for this probability, for it assumes maximum knowledge on the part of whoever was firing
another rifle or planting a fragment. Specifically, it assumes that the person
knew of the special properties of WCC/MC lead and took pains to duplicate it in
his weapon or fragment. To put it mildly, it is problematic whether any
conspirator in 1963 or 1964 could have known this information or been able to
put it into practice in such a short time after the assassination. For example,
had the person selected a weapon that used hardened lead, or planted a fragment
of hardened lead, the probability of an accidental match would have declined to
zero because antimony in hardened lead does not overlap antimony in WCC/MC lead.
Similarly, the probability of an accidental match within the fragments of the
head-shot group works out to be about 3%, again under the most optimistic
scenario. Although these most-optimistic probabilities are not vanishingly
small, they are low enough to generally be regarded as statistically
insignificant. The others are far smaller, if not zero. Probabilities of
multiple external fragments are the products of the separate probabilities. Thus
it is easy to see how the probabilities of various conspiracy scenarios can
easily reach the one-in-a-million point, with probabilities of one in a hundred
million not being impossible.
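The multiplication argument is simple arithmetic. A minimal sketch, using the 2% and 3% most-optimistic single-match probabilities quoted above (the third factor of 1% is a hypothetical stand-in for the “far smaller” probabilities mentioned in the text):

```python
# Most-optimistic probabilities of a single accidental match, from the text.
p_match_two = 0.02   # accidental match in the two-fragment group
p_match_head = 0.03  # accidental match within the head-shot group

# Under independence, a scenario requiring several external fragments has a
# probability equal to the product of the separate probabilities.
p_both_groups = p_match_two * p_match_head
print(round(p_both_groups, 6))  # 0.0006, i.e. 6 in 10,000

# Adding a third match at a hypothetical 1% puts the joint probability
# in the one-in-a-million range.
print(round(p_both_groups * 0.01, 8))  # 6e-06
```

Each additional required match multiplies the scenario’s improbability, which is why elaborate conspiracy scenarios fare so badly here.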
Thus, there is no statistical support for
the idea of accidental matches from external bullets or fragments. Since the
only other possibility is that all the fragments are genuine, this is the
hypothesis that we must accept.
“Sampling errors”
Art then claims that “sampling
errors,” or heterogeneities within WCC/MC lead, have to be considered, and
that they are large enough and unpredictable enough to invalidate all attempts
to find a definitive answer from the NAA. This is totally false. Again he did
not read other parts of the monograph, where it is explained, with reasons and
mechanisms, why the scale of these heterogeneities is too large to affect the
suite of particles produced when WCC/MC bullets shatter. Art’s error can serve
as a lesson for all of us—read the whole document before getting into detailed
statistics that may be irrelevant. One should also know something about how
bullets break before assuming some specific behavior on their part.
“Ringers” in the group of three
Here Art considers accidental matches in
the group with three fragments. He uses an irrelevant “sampling error” of 30
ppm, irrelevant because the physical scale of the heterogeneities in WCC/MC lead
is too large (as noted above). As a result, the inconclusive answers he gets
have to be disregarded. Because the fragments in this group have the same kind
of spread seen in the group of two, the answers here will be similar to that
one, namely that the probabilities are 100 to 1 or so in favor of genuine
matches over accidental matches. No contest!
Other types of lead
Art considers only WCC/MC lead. That
narrow approach eliminates scenarios in which a conspirator planted or fired a
bullet with a different type of lead. As shown in the monograph, this lowers the
probabilities of a false match by a factor of something like five. (There are
other reasons why it may lower the probability to zero.) That changes the
roughly 1:100 likelihood ratio of accidental match to genuine match to something
like 1:500, which is a significant reduction.
False prospects
Art closes his note with a discussion of
the future. He says that the probabilities can be calculated better with two
types of additional measurements, “more and more accurate measurements of
repeated samples from the same bullet,” especially for CE 399, and “more
measurements of a larger number of [WCC/MC] bullets.” But these projections
are based on the flawed calculations and improper understanding that
characterize his entire note. This invalidates them. The proper probabilities
are so clear-cut that no more measurements are needed. It’s the old story that
appears so often in JFK “research,” when results are wrongly interpreted and
additional research is unnecessarily called for. We have everything we need to
understand the strong message that the NAA data are trying to tell us.
Summary
The major characteristics of Art’s note
include:
1. It is narrowly focused, not broad in any sense.
2. It wrongly states that only two questions are of interest to conspiracy.
3. It displays ignorance of Guinn’s work and my monograph.
4. It is misdirected enough to be meaningless.
5. It attacks at least one major argument that I didn’t make.
6. It fails to deal with several important things that I did.
7. It uses statistics blindly and wrongly on the Walker fragment and the unfired round.
8. It overemphasizes the role of the five-false-positive argument in the monograph.
9. It misstates the role of a priori probabilities.
10. It misquotes my statement on the assumption of smoothly varying antimony between 0 and 1200 ppm in WCC/MC bullets.
11. It seriously miscalculates the probabilities of genuine matches because it uses wrong analytical uncertainties, even though the proper values were available from both Guinn and me.
12. It considers only WCC/MC lead.
13. It misunderstands the irrelevance of large-scale heterogeneities.
14. It properly proposes the use of likelihood ratios but calculates them wrongly.
All in all, the note manages to get nearly everything it touches wrong or irrelevant, primarily because it focuses too narrowly on pure statistics at the cost of ignoring the data and reasoning in the rest of the monograph and in Guinn’s testimony and report to the HSCA. In the process, an important opportunity to engage in critical dialog is lost.
How did my predictions fare?
My initial post to the newsgroups noted
that Art’s critique would not be understood by most JFK “researchers.” The
responses so far have abundantly confirmed that prediction. It is clear that
none of the responders has understood his statistics well enough to see where he
went wrong. More tellingly, none of them spotted that he had used wrong data and
thereby gotten wrong answers. Moreover, none of them spotted his basic
statistical errors, either. That’s what can happen when you use fancy
statistics, omit major steps, and don’t explain your procedures in terms that
others can grasp.
My initial post to the newsgroups also
predicted that the final discussion could come down to narrow statistics versus
broader logical thinking, and that is the first part of what has happened. On
one level, Art has been narrow by not reading the available literature enough to
know the correct values to insert into his formulas. I find it ironic, and
extremely illustrative, that the right values totally reverse his answer and
turn it into our earlier one. On a deeper level, Art has neglected the reasoning
in the NAA monograph that shows that most of these statistics are worthless
because they apply to completely hypothetical situations. Art’s note became
irrelevant the moment he began to critique something that didn’t matter. It
also didn’t help when he obscured things by failing to note this extremely
important aspect of the monograph. One must spend a lot of time with the
monograph in order to grasp its full reasoning, and I do not apologize for this.
But the final discussion must also deal
with Art’s statistical errors. I was surprised by this. I had expected that
Art’s long experience in physics would prevent him from making the errors that
his note revealed. I was wrong here, however.
How to respond to my response
Since I have noted here that one of
Art’s big errors was to not check himself by considering alternative
explanations for his major points, I will show good faith by offering a
self-critique of this response to him.
First, I admit that neither Larry
Sturdivan nor I was able to understand everything in Art’s note. I would be
grateful if Art would clarify some of these points, which include his seeming
use of two forms of the Gaussian formula where only one is needed, and his
meaning of a priori probabilities. Neither response will affect the basic
soundness of my original conclusions, and particularly so for Larry’s,
however, the first for mathematical reasons and the second for conceptual
reasons.
Second, Art may wish to comment further on
his contention that I arbitrarily set P = 1 for genuine matches. I am convinced
that I did not, but if he can show convincingly that I did and was not aware of
it, I will listen.
I may add to this list of weak points of
my response in the future.
What’s to come
Art’s note and my response will form
part of my presentation at JFK Lancer. They won’t be a big part, however,
because his note doesn’t warrant it. Preparing this response has offered an
opportunity for me to review once again the NAA and its place in the physical
evidence from the JFK assassination, and this has proven to be a valuable
exercise. In the spirit of getting as many ideas as possible onto the table well
before the conference, I offer a summary of the broad train of thought that I
plan on giving in Dallas.
In the broadest sense, JFK conspiracists
should be terrified of the NAA and its strong links to other physical evidence
and basic logic, for it knocks the legs out from under most, if not all,
contemporary JFK “research.” I know that is an extremely strong statement,
but I intend to make it and to justify it.
I will begin by addressing the two groups of fragments found in both the FBI’s and Dr. Guinn’s NAA analyses. I will first
demonstrate that once the large-scale heterogeneities are removed from the
picture, as they must be because they do not affect the properties of the tiny
fragments generated as jacketed bullets break when encountering bone, the groups
are revealed to be extremely robust statistically (odds of something like 400:1
that they are distinct groups). Second, I will use logic and statistics to
demonstrate that all the fragments are genuine (incorporating pieces of Art’s
note and our proper calculations). Third, I will show the importance of the
physical meaning of the two groups, by noting all the other ways the fragments
might have arranged themselves but didn’t. Fourth, I will show how this means
that every fragment recovered came from Oswald’s rifle to the virtual
exclusion of other rifles. (The best scenarios of a plant are something like 2%
to 3%; the worst, and probably the most realistic, are orders of magnitude
lower.)
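Where the argument above holds that the fragments fall into two statistically distinct groups, the underlying idea is an ordinary two-sample comparison. The sketch below uses made-up antimony concentrations, not Guinn’s actual measurements, and a standard Welch’s t-test rather than the monograph’s own calculation; it is offered only to show the flavor of testing whether two small groups of fragments could plausibly come from a single population.

```python
# Illustrative sketch only: a Welch's two-sample t-test on hypothetical
# antimony concentrations (ppm). The numbers are invented for the
# example; they are NOT Guinn's published data, and this is not the
# source of the 400:1 figure quoted in the text.
import math
from statistics import mean, variance

def welch_t(a, b):
    """Return Welch's t statistic and approximate degrees of freedom
    (Welch-Satterthwaite) for two independent samples."""
    na, nb = len(a), len(b)
    va, vb = variance(a), variance(b)          # sample variances
    se2 = va / na + vb / nb                    # squared standard error of the difference
    t = (mean(a) - mean(b)) / math.sqrt(se2)
    df = se2 ** 2 / ((va / na) ** 2 / (na - 1) + (vb / nb) ** 2 / (nb - 1))
    return t, df

group1 = [833, 797]        # hypothetical "high-antimony" fragments (ppm)
group2 = [602, 621, 642]   # hypothetical "low-antimony" fragments (ppm)
t_stat, dof = welch_t(group1, group2)
# A large |t| at these degrees of freedom means the two groups are very
# unlikely to be random draws from one population of fragments.
```

With these invented values the statistic comes out large, which is the qualitative point: even tiny samples can separate cleanly into two groups when the within-group scatter is small compared with the gap between them.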
Then I will broaden the discussion and consider
the implications of these results. I will first review the extremely important
logical dictum that without specific physical evidence for an idea, we may not
consider it with any legitimacy. (Speculation is easy but meaningless without
evidence.) I will combine this with the NAA data to show that we may not
consider anything beyond a single shooter.
Then I will take an aggressive next step
and show how the NAA knits together all the rest of the physical evidence into
an extremely strong wall that no one has been able to break in 38 years of
trying. I will also show that the NAA at the same time puts most or all of the
other physical evidence into the category of “doesn’t matter.” This
includes but is not limited to details of most of the wounds, the photos and
X-rays from the autopsy, the Zapruder film, and the chains of custody. For
example, since we now know that the head shot came from Oswald’s rifle, left
telltale fragments in the brain, and deposited two large fragments in the front
seat, it no longer matters just where it entered the head and just where it
exited. It makes no sense to continue to fight over these immaterial details.
That same approach can be combined with one or two other pieces of physical
evidence to demonstrate the double-body hit, or DBH (formerly the single-bullet
theory, or SBT).
I will conclude by moving to the most
general level, of considering all other scenarios that lack physical evidence.
This of course includes everybody’s favorite conspiracy theories as well as
all those smaller and smaller details that are now being discussed on the
newsgroups. Since they are purely speculative in nearly all cases, they waste everybody’s time and, worse, give their proponents a false sense of doing something meaningful about the assassination. Images of Nero fiddling while Rome
burned come to mind.
I will then summarize as follows. (1) The
NAA settles the question of the fragments—two groups representing two bullets
from Oswald’s rifle. (2) The NAA knits together the rest of the physical
evidence and simultaneously renders most of it moot. (3) All scenarios other
than the lone gunman must be summarily rejected because they lack physical
evidence.
For the record, I am continually
astonished by the emerging power of the NAA to affect our interpretation of the
JFK assassination. I had no such idea when I began to study the NAA data a few
years ago, and am coming to terms with it just as everyone else is (or should be).
In other words, it is as much of a learning experience for me as for anyone
else.
Responsibilities of Stu Wexler and Stewart Galanor
I have now put the thrust of my
presentation on the table for all to see, at least to the extent that I can
determine it three weeks ahead of the Lancer conference. Since this is supposed
to be an open discussion whose only goal is to ascertain the truth (Debra
Conway), it is incumbent on the other two panelists to similarly lay their cards
on the table as early as possible so that I and others can evaluate them
thoroughly beforehand. Only in that way can we have an open, honest dialog at
the high level that Debra is expecting and that every attendee deserves. If
someone springs something during the panel, I will not hesitate to call it dirty pool and to declare that the person was more interested in winning the debate than in
understanding the assassination. So I expect to see any new information from Stu
and Stewart well before the conference, and preferably in these newsgroups. In
this category, for example, would fall results from the “work” of Albert
Frasca that we keep getting hints about. Is he really doing anything? If so,
let’s see it early.
Work in progress
As the reader can see, this response to
Art Snyder and the whole explanation of the NAA and its implications for the
assassination are very much works in progress. That does not mean that their
basic result is in doubt, but rather that they are constantly being refined.
Minor updates will be posted on my web site (http://karws.gso.uri.edu),
major updates to the newsgroups.
I also want it to be clear that I am
posting this response earlier than I would prefer, so that others attending the
Lancer conference will have three weeks to evaluate it and respond from
positions of greatest strength. I judge that the greater good is served in this
way.