Locking your limits is one of the most essential practices people should follow when using XmR charts, but it is also one of the most routinely underused. I regularly receive Xmrit links from subscribers who need help understanding what their chart is telling them because they have not locked their limits.
In this article, I will explain why not locking your limits is akin to chasing a moving target, and show this using a case study from Commoncog.
Why Lock At All? - Fix Your Target
Every new point you add to a standard XmR chart changes your process limits. This is because each new data point changes the process average, and generates a new moving range value. Since your process limits are based solely on the process average, the average of the moving range, and fixed scaling constants, each new point mechanically changes your process limits.
In case you forgot, here are the formulas for the process limits, with only those three items involved.
UNPL = Average X + 2.66 × Average mR
LNPL = Average X − 2.66 × Average mR
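The limit calculation is simple enough to sketch in a few lines of Python. This is an illustrative implementation, not code from Xmrit; the function name `xmr_limits` is my own.

```python
def xmr_limits(data):
    """Centre line and natural process limits for an XmR chart.

    UNPL/LNPL = average ± 2.66 × average moving range.
    """
    avg_x = sum(data) / len(data)
    # Moving ranges: absolute differences between consecutive points.
    moving_ranges = [abs(b - a) for a, b in zip(data, data[1:])]
    avg_mr = sum(moving_ranges) / len(moving_ranges)
    unpl = avg_x + 2.66 * avg_mr  # upper natural process limit
    lnpl = avg_x - 2.66 * avg_mr  # lower natural process limit
    return avg_x, lnpl, unpl

avg, lower, upper = xmr_limits([10, 12, 11, 13, 12, 14, 11, 12])
```

Note that the limits depend on nothing but the data itself: add one more point and `avg_x`, the moving ranges, and therefore both limits all shift.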
Your process limits changing is beneficial when you have only a small number of data points on your XmR chart. With a small number of data points, your process limits are unreliable and may not reflect your process’s “true” behaviour. However, the benefit of adding new data points to your XmR chart has diminishing returns after around 20 points.
You can see the diminishing returns of new data points in Dr. Donald Wheeler’s chart below. The graph shows the coefficient of variation, a statistical measure of the uncertainty of your process limits in this context, declining as you add more data points to your XmR chart. As you can see, most of the benefits of adding new data points come before 20, with each further data point only providing a marginal improvement.
But you may say:
Sam, if my limits get better as I add new data points, why don’t I just continue to add? Even if it is only a marginal benefit, what is the harm?
The problem is that this assumes your process is stable (only routine variation), which is often not the case in the real world. Over long periods of time, your processes will almost certainly experience some level of process shift (exceptional variation). If you have not locked your limits, you will include the data from the process shift in your limit calculations, potentially causing you to mistake exceptional variation for routine variation.
Instead of detecting changes to your process, your limits will simply change along with your process. This problem is the origin of this article’s “moving target” title.
The solution is to lock your limits once you have enough data points to give you confidence in them. As we saw before, this is usually around the 20-point mark. By locking your limits, you prevent them from updating when exceptional variation comes along, allowing you to identify process shifts.
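Locking is easy to express in code: compute the limits from a baseline window once, then never let later points feed back into them. The sketch below assumes a plain list of measurements and a 20-point baseline; the function name `locked_limit_signals` is my own invention.

```python
def locked_limit_signals(data, baseline=20):
    """Indices of points outside limits locked on the first `baseline` points."""
    base = data[:baseline]
    avg_x = sum(base) / len(base)
    moving_ranges = [abs(b - a) for a, b in zip(base, base[1:])]
    avg_mr = sum(moving_ranges) / len(moving_ranges)
    unpl = avg_x + 2.66 * avg_mr
    lnpl = avg_x - 2.66 * avg_mr
    # The limits are now frozen: every point, however recent,
    # is judged against the same baseline.
    return [i for i, x in enumerate(data) if x > unpl or x < lnpl]
```

With limits locked, a later shift in the process shows up as points outside the frozen limits, rather than being quietly absorbed into a recalculated average.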
Let us look at an example of this challenge using a real-life situation we faced at Commoncog.
Commoncog “In Depth Readers” - Slow Trend Case Study
Below is an XmR chart of a metric we monitor called “in depth readers”. In depth readers measures the number of people who visit more than one page in a single session. The team had been focused on improving this metric, and had seen it increase over 2024.
However, despite an upward shift, the XmR chart did not signal any exceptional variation. A few points in July and August had come close to breaking the upper process limit, but no rule had yet been triggered.
What gives?
The problem was that we had not locked our limits, and the upward trend was slow: so slow, in fact, that although the trend shifted the process average upwards, no individual point was extreme enough to trigger the upper or lower process limit. Eventually, this trend would have been picked up by the XmR chart, but this graph is already 10 months long.
Waiting for a signal of exceptional variation for over a year is unacceptable in a fast-paced business.
But what if we had locked our limits, even with only 15 data points, which gives quite soft limits? On the XmR chart below, I have locked the limits based on the first 15 data points. You can see that the run-of-eight process shift can be identified as starting in early April (and would have been caught sometime in late May).
Even if you were more conservative and locked with 25 points, which includes some of the post-April data, the exceptional variation would still have been caught, albeit a month later, in late June, when a point broke the upper process limit.
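The run-of-eight rule referred to above (eight consecutive points on the same side of the locked centre line signalling a shift) can also be sketched in a few lines. This is an illustrative check under that assumption, not Xmrit's implementation; `run_of_eight` is my own name.

```python
def run_of_eight(data, centre, run=8):
    """Index where a run of `run` same-side points first completes, or None."""
    streak = 0
    side = 0  # +1 above centre, -1 below, 0 on the line
    for i, x in enumerate(data):
        s = 1 if x > centre else (-1 if x < centre else 0)
        if s != 0 and s == side:
            streak += 1
        else:
            side, streak = s, (1 if s != 0 else 0)
        if streak >= run:
            return i
    return None
```

This rule is what makes locked limits sensitive to slow trends: each point in a gradual shift may sit inside the limits, but a sustained run on one side of a frozen centre line still fires a signal.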
The critical thing to remember is that you need a stable baseline to assess whether a process is changing. If you don’t lock your process limits, they will change with your process, causing you to chase a moving target.