XmR limit calculation in practice

I’m looking for some perspective on the practical side of XmR limit calculation.

I understand Wheeler’s claim that the 2.66 scaling factor is better than three standard deviations because the moving range doesn’t make assumptions about the underlying distribution. What I don’t know is how significant the difference is in practice.

I’m preparing a talk on using process behavior charts to detect if a performance improvement intervention (like training) ended up impacting the business process. My audience will be very statistically naive, and I am looking to give them perspective on how important this particular calculation is. If using a typical standard deviation calculation increases the false positive rate from, e.g., 1 out of 100 to 2 out of 100, that’s a different conversation than if it increases from 1 out of 100 to 10 out of 100.

Does anyone have practical examples of datasets where the difference between the two calculations made a big difference in detecting signals vs noise? Or any resources that can help quantify the degree to which this choice matters?

I think I’ve finally settled on an explanation of the scaling constants calculation that’s palatable to a non-math-inclined audience: here.

If you’d like a good example, use the one in this Quality Digest column by Donald Wheeler (but beware that the explanation is very stats heavy). He takes an example dataset and estimates the standard deviation two ways: with the global standard deviation calculation and with the average moving range method. You’ll see that the moving range method is superior to the global std dev calculation (it is less sensitive to the presence of non-homogeneous data), but that both methods eventually converge to the same values under large N.
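A quick way to see the effect Wheeler describes, as a minimal sketch (the numbers below are made up for illustration, not taken from his column): compute both estimates on a series that contains a shift, and note how the global standard deviation gets inflated by the shift while the moving-range-based estimate is barely affected.

```python
import numpy as np

# Illustrative data only (not Wheeler's): a process that shifts upward
# halfway through the series.
x = np.array([10.2, 9.8, 10.1, 10.4, 9.9, 10.0, 10.3,    # before the shift
              13.1, 12.8, 13.3, 12.9, 13.2, 13.0, 13.4])  # after the shift

# Global (descriptive) standard deviation: treats all values as one
# homogeneous pool, so the shift inflates it.
global_sd = x.std(ddof=1)

# Average moving range estimate: based on point-to-point differences,
# so only the single large range at the shift adds extra spread.
moving_ranges = np.abs(np.diff(x))
amr_sd = moving_ranges.mean() / 1.128   # 1.128 = d2 for ranges of two points

print(f"global std dev estimate of sigma:  {global_sd:.2f}")
print(f"average moving range estimate:     {amr_sd:.2f}")
```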


One way of looking at the “scaling factor”:

The XmR chart methodology uses plus/minus 3 sigma limits. The question is how do we estimate sigma?

One way of estimating sigma is the calculated standard deviation.

The XmR chart method is better since it doesn’t care what the underlying distribution of the data is.

So the estimate of 3 sigma = 3 * Average Moving Range / 1.128

Or think of it as 3 * (AMR / 1.128)

The estimate of sigma is AMR / 1.128.

Where 1.128 is a statistical constant (the d2 bias-correction value for moving ranges of two consecutive points). As a practitioner, I’m fine with Wheeler and other statisticians saying the constant is the constant.

3 / 1.128 is 2.66, so the factor sometimes gets misread as “2.66 sigma limits,” which is not the case.
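Put as code, the two ways of writing the limits are the same calculation. A minimal sketch (the data array at the bottom is just a placeholder for illustration):

```python
import numpy as np

def xmr_limits(x):
    """Natural process limits for the X chart of an XmR chart."""
    x = np.asarray(x, dtype=float)
    center = x.mean()
    amr = np.abs(np.diff(x)).mean()   # average moving range
    sigma_hat = amr / 1.128           # estimate of sigma
    # 3 * (AMR / 1.128) is the same as 2.66 * AMR (2.66 is 3 / 1.128, rounded)
    upper = center + 3 * sigma_hat
    lower = center - 3 * sigma_hat
    return lower, center, upper

# placeholder data purely for illustration
print(xmr_limits([10.2, 9.8, 10.1, 10.4, 9.9, 10.0, 10.3, 10.5, 9.7, 10.2]))
```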


To the other question about “how much it matters”: using the “wrong” control chart or the “wrong” limit calculation method is qualitatively FAR better than not using control charts at all.

I’d suggest playing around with some of your own data sets to see how many signals each calculation method generates.
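If you don’t have a convenient data set handy, a rough Monte Carlo sketch like the one below (the sample sizes and process parameters are arbitrary choices of mine) can give a feel for the false-signal rates the original question asks about: it generates in-control normal data, computes limits both ways, and counts how often points fall outside them.

```python
import numpy as np

rng = np.random.default_rng(0)

n_series, n_points = 10_000, 25   # arbitrary: many short in-control charts
false_signals_sd = 0
false_signals_amr = 0

for _ in range(n_series):
    x = rng.normal(loc=50.0, scale=2.0, size=n_points)  # in-control data
    center = x.mean()

    sigma_sd = x.std(ddof=1)                        # global standard deviation
    sigma_amr = np.abs(np.diff(x)).mean() / 1.128   # average moving range / d2

    # Count points beyond the 3-sigma limits under each estimate of sigma.
    false_signals_sd += np.sum(np.abs(x - center) > 3 * sigma_sd)
    false_signals_amr += np.sum(np.abs(x - center) > 3 * sigma_amr)

total = n_series * n_points
print(f"false-signal rate, global std dev:        {false_signals_sd / total:.4%}")
print(f"false-signal rate, average moving range:  {false_signals_amr / total:.4%}")
```

With purely homogeneous data like this, the two rates come out similar; the interesting comparison is to rerun it after adding a shift or trend to part of each series, which is where the moving range approach earns its keep.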
