Although I'm not a statistician - my degree isn't in statistics - I've been using statistics to analyze experimental data for a few decades, so I have some familiarity with the subject. Buck is definitely on the right track here, and I'd like to add a couple of other things that may be useful for those who really want to get into this topic - or may simply be confusing, in which case you can ignore them.
First, there's no basis whatsoever for arbitrarily discarding any data point, even if it seems very high or very low. You're interested in the properties of your entire statistical population - which in this case means not only that all the loads were constructed to be identical, but also that there was no equipment malfunction or other confounding variable during testing (freak gust of wind, stray shadow on the skyscreen, flinch [if analyzing accuracy], etc.) that applies to a single datum but not to the population of interest. That said, it is possible, using very simple statistics, to determine whether it's likely that a velocity reading does in fact come from a different population and can therefore be discarded. Note that, as in statistics generally, the answer is in the form of a probability rather than an absolute yes/no.
Standard deviation (sd) is a statistic that can be easily calculated on many cheap hand calculators or via free software available on the 'net - I know it's also provided by most chronys, but that won't help in this case for reasons to be seen in a moment. It's a property of normally distributed (i.e., the so-called "bell-shaped curve") populations that about 68% of the data will fall within plus or minus one standard deviation of the mean (average). About 95% of the population will be within +/- 2sd, 99.7% within +/- 3sd, and 99.99% within +/- 4sd.
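If you'd rather let software do the arithmetic, here's a quick Python sketch using the standard library's statistics module - the velocity numbers are made up purely for illustration:

```python
import statistics

# Hypothetical 10-shot string in fps - made-up numbers purely for illustration
velocities = [2843, 2851, 2838, 2847, 2855, 2840, 2849, 2844, 2852, 2846]

mean = statistics.mean(velocities)
sd = statistics.stdev(velocities)  # sample standard deviation (n - 1 in the denominator)

print(f"mean = {mean:.1f} fps, sd = {sd:.1f} fps")
print(f"~68% of readings expected between {mean - sd:.1f} and {mean + sd:.1f}")
print(f"~95% of readings expected between {mean - 2 * sd:.1f} and {mean + 2 * sd:.1f}")
```

Note that stdev() uses the n-1 (sample) formula, which is what you want here, since your shot string is a sample from the larger population of all shots that load could produce.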
We can use that property of the normal distribution to determine if the probability of a single reading being part of our population is so low that we can safely discard it. To start, calculate the mean (average) velocity of the string without the suspect reading, and the standard deviation of those same readings. If you then calculate the absolute deviation of the suspect reading from that average (i.e., subtract the average from it and drop the minus sign if necessary) and divide that number by the standard deviation, you've calculated the number of standard deviations that measurement is away from the average - this is known in statistics as the z-score. If your z-score is 3 (or higher), there's only about a 0.3% (or smaller) chance that the aberrant reading actually came from the population. If it's 4 or greater, that probability drops below 0.01%. Only you can decide at what point you're prepared to discard the suspect measurement, but certainly a z-score of 4 is pretty good justification to do so - again, it's not telling you that it's impossible that it's a valid reading, only that it's highly unlikely.
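Here's the same procedure in code - again the numbers are invented, and the suspect reading is deliberately placed about four sd's out so you can see what a discard-worthy z-score looks like:

```python
import statistics

# Hypothetical string plus one suspect reading (fps) - made-up numbers
velocities = [2843, 2851, 2838, 2847, 2855, 2840, 2849, 2844, 2852, 2846]
suspect = 2868

# Mean and sd are computed WITHOUT the suspect reading, per the procedure above
mean = statistics.mean(velocities)
sd = statistics.stdev(velocities)

# Absolute deviation of the suspect reading, in units of standard deviations
z = abs(suspect - mean) / sd
print(f"z-score = {z:.2f}")  # ~4.0 here, so discarding it is defensible
```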
Two caveats: First, this applies only to normally distributed populations. There are both mathematical and graphical ways to test a population for normality, and also ways to convert non-normal populations to normal via mathematical transformation, the most common of which is working with the logarithms of the measurements rather than the measurements themselves - but that's all a bit beyond this discussion, I'm afraid. Second, these probabilities only apply to data sets of about 15 measurements or more. For smaller data sets, you need to refer to a Student's t-table, also generally available on-line, which provides equivalent probabilities that reflect the greater uncertainty inherent in smaller samples.
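For those who want to chase both caveats further, here's a rough sketch using scipy (if you have it installed) - the Shapiro-Wilk test is just one common normality check among several, and the velocities and z-score carry over from the made-up example above:

```python
from scipy import stats

velocities = [2843, 2851, 2838, 2847, 2855, 2840, 2849, 2844, 2852, 2846]

# One common normality check (Shapiro-Wilk); a high p-value means no evidence
# against normality, not proof of it
w, p_normal = stats.shapiro(velocities)
print(f"Shapiro-Wilk p = {p_normal:.2f}")

# For a small string, treat the z-score as a t statistic with n - 1 degrees of
# freedom (n = shots used to compute the mean and sd, without the suspect)
z = 3.98          # carried over from the example above
df = len(velocities) - 1

# Two-sided probability of a deviation at least this large arising by chance
p = 2 * stats.t.sf(z, df)
print(f"p = {p:.4f}")  # roughly 0.3% - noticeably larger than the normal-table figure
```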
This is chewy stuff and it's difficult to explain in this type of forum, but I hope it was of interest.