Do you throw out highest and lowest?

model14
Member · Joined Dec 4, 2008 · Messages: 60 · Location: Grass Lake, MI (75 miles west of Detroit)
When averaging your Chrony data, do you throw out your highest and lowest values (like when we tend to ignore flyers)? ;) Whenever I do a 10-shot batch of Chrony data, there always seem to be one or two velocities way off. I believe that is from the Chrony and not my revolver, so I average just the 8 remaining.
 
Well, thanks for the heads-up: I'll ignore your posted data from now on! :D


No, I don't throw out the "flyers." :) The spread is the spread, the deviation is the deviation, and the mean is the mean. If they're wild, I know I've got some work to do.
 
Duh, no, you can't throw out the high and low like the score from the Bulgarian judge at the Olympics.

The loads aren't prejudiced, and you can't just blame the chronograph. The load is probably off.
 
Originally posted by Erich:


No, I don't throw out the "flyers." :) The spread is the spread, the deviation is the deviation, and the mean is the mean. If they're wild, I know I've got some work to do.

Well, blow me down!! That is specious reasoning, Bro. E. Nothing but specious reasoning. I don't guess you would think too much of me shooting holes in plywood and then coloring in the bullseyes with my little crayons either, would you? :D
 
If I were sure it was a misread from the chronograph, I probably would delete it. But for the most part, no: the data is the data.

When I'm working up new loads or just piddling with different loads, I tend to see some large fluctuations because I'm using unsorted brass and charging the cases with a powder measure. The data is still useful to me because, even with the large variations, I can compare it to other loads prepared with the same variables. When I want clean chrono data, I switch gears a little: I use either virgin brass or brass from the same lot, trimmed and measured, and I weigh each powder charge individually. This typically makes a big difference in the extreme spread and standard deviation, but the average velocity is usually very close to the original.
 
If I run 30 or more rounds (I generally do)...I look at all the data, then pull the high and low and look at that...then determine if I have a lot of highs and lows grouped in the same areas...then settle on an average. I'm looking for consistency.

If you are dropping a high and a low out of 10 shots, I'd think that's a bit tight...run 30 rounds down the same tube, drop the high and low, and I'll have no issues with your data.

Bob
 
I just talked to Chrony tech support (a very nice and knowledgeable Japanese man who spent a lot of time talking to me) about the problem of random fluctuations in readings. I explained that out of a 10-shot group I would sometimes get a reading that was significantly different from the others. He asked me if I was shooting magnum loads and how far away I was placing the Chrony. When I said I was shooting medium to heavy .357 and .44 loads with the Chrony at 10 feet, he said the random fluctuations are probably caused by muzzle blast and that I should keep the Chrony a minimum of 15 feet away. He also said that if the sun is getting past the diffusers (at a low angle), that can also cause fluctuations. I am going to the range tomorrow and will give his suggestions a try.
 
As careful as I can be, I find it difficult to get consistent data - the same group twice from the same components, load, and gun, repeatedly. So I test three times: a trial, a test, and a proof. For the trial I shoot several grain weights, noting which group is best. In the test I do it again, checking which ones are comparable bests. For the proof I shoot the best ones again. I may have one that was best all three times, more likely two. That's the load I settle on.
 
When I was still using a Chrony, I would occasionally get a reading that was so bizarre that I knew it was an artifact. I recall seeing a 5,000+ fps reading when chronographing some .45 ACP, and also one around 200 fps. I'm pretty sure I would have noticed some differences in recoil, muzzle blast, and firearm integrity... ;) Readings of exactly half or exactly double what they should have been were more common.

Now, if some of those loads had run 900 fps and others 775, well, I'd count them all and seek out the source of the inconsistency in my loading process.

So, the answer is, it depends. Sounds to me like you have it figured out.
 
Originally posted by model14:
When averaging your Chrony data, do you throw out your highest and lowest values (like when we tend to ignore flyers)? ;) Whenever I do a 10-shot batch of Chrony data, there always seem to be one or two velocities way off. I believe that is from the Chrony and not my revolver, so I average just the 8 remaining.

Sir, FWIW, I don't "throw out" any highs, lows, or unexplained flyers. If you're looking for accuracy (largely a function of consistency), those highs and lows tell you a lot more about a given load than the "good" numbers do.

Also, in a match or in the game fields, you don't get to pretend a bad shot didn't happen. You're stuck with it. "Cheating" on your chrony numbers increases the chances of failure when it actually matters.

All that said, chronographs do sometimes lie. Mine once told me a powder-puff .44 Special load was going 1,400+ fps. In a case like that, where the numbers are obviously wrong, I prefer to discount all the data from that session, fix the chrony problem, and try again.

Hope this helps, and Semper Fi.

Ron H.
 
Statistics like average and standard deviation only have real meaning when all of the data come from the same population. If you mix two different populations together, the calculated mean and standard deviation don't tell you much. It doesn't matter what the reason for the second population is - chrono error, bad components, improper powder weights, etc. Unless you can rationally remove the second population, any numbers you calculate are just numbers, not information.

The best way to find out quantitatively what fits and what doesn't is to make a Normal probability chart of all the data. This is easy if you have the right software available; if not, Normal probability chart paper can be had at engineering supply houses. If all the data lie along a straight line on the chart, they're all part of the same Normal distribution. A point that falls away from the line is not part of the distribution. If you can't easily decide, leave it in - it won't change anything appreciably.

Buck
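
For anyone who'd rather let a computer draw the chart Buck mentions, here's a quick Python sketch (assuming the free SciPy and matplotlib packages are installed; the velocities are made-up numbers with one obvious flyer thrown in):

# Rough sketch of the Normal probability check described above, assuming
# SciPy and matplotlib are installed. The velocities are hypothetical,
# with one obvious flyer (1320) mixed in.
import matplotlib.pyplot as plt
from scipy import stats

velocities = [1182, 1175, 1190, 1178, 1185, 1169, 1181, 1177, 1320, 1174]

# Plot ordered velocities against Normal quantiles; points from a single
# Normal population fall roughly on a straight line.
stats.probplot(velocities, dist="norm", plot=plt)
plt.ylabel("Velocity (fps)")
plt.title("Normal probability plot of a 10-shot string")
plt.show()

If the string really is one population, the points hug the line; the flyer should stand well off it.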
 
Mr. Buck,
Would you consider a population:

10, 50, 100, or 500 rounds all loaded alike (same "recipe"---powder, primers, brass, press)

If so, if I ran 3-4 strings of 10 rounds (the maximum my Chrony will take at a time is ten) of the same "loaded alike" batch would you consider these to be all of the same population?

Bob
 
Light on the diffusers is often the source of random highs and lows. The most foolproof way around this is to build a box for the chronograph that shields it all the way around, then add lighting to it as well as a consistent background for the top to replace the skyscreens (don't go too light or you get reflections). I haven't done this yet, but I'm going to when I get around to it. For that reason I try to chronograph on slightly overcast days, or when I can get some partial shade to set up the chronograph (easier here in the desert). That usually prevents the problems, but it's certainly not convenient. R,
 
Originally posted by VonFatman:
Mr. Buck,
Would you consider a population:

10, 50, 100, or 500 rounds all loaded alike (same "recipe"---powder, primers, brass, press)

If so, if I ran 3-4 strings of 10 rounds (the maximum my Chrony will take at a time is ten) of the same "loaded alike" batch would you consider these to be all of the same population?

Bob

Bob,

Yep. Any number of things can be a population, if each of its members is supposedly "the same". In your example above, any or all of those groups might be considered a population. For practical purposes, you need about 10-15 measurements to get good statistics from a well behaved population. Your 3-4 groups of 10 shots each would give you pretty good results, and a Normal probability plot of all of them would likely identify any "flyers" that were not part of the population.

The main reason for using statistics is to find small differences that would not be apparent to your naked eye. If a given measurement is way away from the rest of the data, statistics are mostly redundant, from a practical point of view.


Buck
 
Although I'm not a statistician, in that my degree is not in statistics, I've been using statistics to analyze experimental data for a few decades, so I have some familiarity with the subject. Buck is definitely on the right track here, and I'd like to add a couple of other things that may be useful for those who really want to get into this topic - or may simply be confusing, in which case you can ignore them.

First, there's no basis whatsoever for arbitrarily discarding any data point, even if seemingly very high or very low. You're interested in the properties of your entire statistical population - which in this case means not only that all the loads were constructed to be identical, but also that there was no equipment malfunction or other additional variable during testing (freak gust of wind, stray shadow on the skyscreen, flinch [if analyzing accuracy], etc.) that applies to a single datum but not to the population of interest. That said, it is possible using very simple statistics to determine if it's likely that a velocity reading does in fact come from a different population and therefore can be discarded. Note that, as in statistics generally, the answer is in the form of probability rather than an absolute yes/no.

Standard deviation (sd) is a statistic that can be easily calculated on many cheap hand calculators or via free software available on the 'net - I know it's also provided by most chronys, but that won't help in this case for reasons to be seen in a moment. It's a property of normally distributed (i.e., the so-called "bell-shaped curve") populations that about 68% of the data will fall within plus or minus one standard deviation of the mean (average). About 95% of the population will be within +/- 2 sd, about 99.7% within +/- 3 sd, and better than 99.99% within +/- 4 sd.

We can use that property of the normal distribution to determine whether the probability of a single reading being part of our population is so low that we can safely discard it. To start, calculate the mean (average) velocity and the standard deviation of the string without the suspect reading. If you then calculate the absolute deviation of the suspect reading from that average (i.e., subtract the average from it and drop the minus sign if necessary) and divide that number by the standard deviation, you've calculated the number of standard deviations that measurement is away from the average - this is known in statistics as the z-score. If your z-score is 3 (or higher), there's only about a 0.3% (or smaller) chance that the aberrant reading actually came from the population. If it's greater than 4, that probability drops below 0.01%. Only you can decide at what point you're prepared to discard the suspect measurement, but certainly a z-score of 4 is pretty good justification to do so - again, it's not telling you that it's impossible that it's a valid reading, only that it's highly unlikely.
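
For anyone who'd rather not push the calculator buttons, here's a small Python sketch of that arithmetic (the velocity string is made up purely for illustration):

# Rough sketch of the z-score check just described. The velocity string is
# hypothetical - substitute your own chrono numbers.
from statistics import mean, stdev

string = [1182, 1175, 1190, 1178, 1185, 1169, 1181, 1177, 1174]  # string WITHOUT the suspect shot
suspect = 1320                                                   # the reading you doubt

avg = mean(string)            # average of the string without the suspect reading
sd = stdev(string)            # sample standard deviation of the same string
z = abs(suspect - avg) / sd   # standard deviations the suspect reading sits from the mean

print(f"mean = {avg:.1f} fps, sd = {sd:.1f} fps, z-score = {z:.1f}")
# A z-score of 3 or more means the reading is very unlikely to belong to the same population.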

Two caveats. First, this applies only to normally distributed populations. There are both mathematical and graphical ways to test a population for normality, and also ways to convert non-normal populations to normal via mathematical transformation, the most common of which is working with the logarithms of the measurements rather than the measurements themselves. That's all a bit beyond this discussion, I'm afraid. Second, these probabilities only apply to populations of about 15 measurements or more. For smaller data sets, you need to refer to a Student's t-table, also generally available on-line, which provides equivalent probabilities that reflect the greater uncertainty inherent in smaller data sets.
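
And if you don't have a t-table handy, the same sort of free software will look it up for you. A small sketch, again assuming SciPy is installed and using made-up numbers:

# Sketch of the small-sample (Student's t) version of the same check,
# assuming SciPy is installed. Numbers are hypothetical.
from scipy import stats

z = 4.0        # z-score computed for the suspect reading
n = 9          # shots in the string used for the mean and standard deviation
df = n - 1     # degrees of freedom for the sample standard deviation

p = 2 * stats.t.sf(z, df)   # two-tailed probability of a deviation this large by chance
print(f"chance of a deviation this large from the same population: {p:.3%}")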

This is chewy stuff and it's difficult to explain in this type of forum, but I hope it was of interest.
 
haggis,
Thank you for the follow-up...I'm no rocket scientist, but I enjoy learning.

Thanks FlyFish...I appreciate the information.

Bob
 