While the sales teams love to spruik the latest “clean sheet design” of your favourite mining asset, have you considered the reliability penalty that comes with it?
When you start with a clean sheet, you also eliminate all the learning and development that went into the old platform, and there is increased risk to your operating costs and reliability performance.
I have been fortunate to work for three Original Equipment Manufacturers. One was an automotive manufacturer (when Australia still built cars), one was a global earthmoving equipment supplier, and one was an industrial engine distributor.
These three OEMs - and any other reputable OEM, for that matter - each had a well-established reliability improvement program. I will explain the general process, as they were all very similar.
Reliability improvement starts with performance feedback. Before the asset is ever released to production and to end-user customers, field testing is undertaken to “stress test” the asset. The whole objective is to induce and fix the problems that appear. However, there is always a limit to the quantity of assets and time available to conduct the testing. Ideally, you’d test 1000 assets for 40,000 hours, but that simply isn’t economically viable.
The asset experiences some reliability growth during the testing period as the early issues are identified and designed out. One of these OEMs used “learning curves” or “Duane plots” to track the reliability growth during the development period.
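As a rough illustration of how a Duane plot works (all figures below are hypothetical, not from any of these OEMs), the cumulative MTBF is plotted against cumulative operating hours on log-log axes, and the slope of the fitted line is the reliability growth rate. A minimal sketch in Python:

```python
import math

# Hypothetical field-test log: cumulative operating hours at each failure.
failure_times = [35, 110, 240, 450, 780, 1300, 2100, 3400]

# Duane plot data: cumulative MTBF vs cumulative hours, both log-scaled.
log_t = [math.log(t) for t in failure_times]
log_mtbf = [math.log(t / (i + 1)) for i, t in enumerate(failure_times)]

# Least-squares slope of log(cumulative MTBF) against log(cumulative hours).
# This slope is the Duane growth rate; a value around 0.3-0.5 is often
# quoted as a healthy development program, 0 means no growth at all.
n = len(log_t)
mean_x = sum(log_t) / n
mean_y = sum(log_mtbf) / n
alpha = (sum((x - mean_x) * (y - mean_y) for x, y in zip(log_t, log_mtbf))
         / sum((x - mean_x) ** 2 for x in log_t))
print(f"Estimated Duane growth rate: {alpha:.2f}")
```

The widening gaps between successive failures in the hypothetical data are what produce a positive slope: each design fix stretches the time to the next failure.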
Once the asset is released to the field, the OEM continues the reliability growth program, but the source of the data is now different. Rather than being from their own proving grounds or customer test machines, it now comes via their dealers or distributors in the field.
In two of the OEMs I worked with, the field data came from the warranty system. The dealers or distributors had to use the OEM’s software platform to submit warranty claims, so the OEMs could mandate that the appropriate field data be supplied at the required level of detail and in the desired format. This data was then made available to the OEM’s internal reliability engineers to continue the reliability growth program.
The third OEM required all its dealers to use its global business system. In this case, work order data in the business system was summarised into a single-line description, damage code and part information, which was then keyed into the field failure reporting system. That system was the data source for the corporate reliability engineers located in the various engineering design groups.
The OEM reliability engineers performed analysis just like engineers in the field. They drilled down into the data looking for failure patterns, identified potential issues, quantified them via Weibull analysis and inspection of failed parts, then built a case for an engineering change to be made.
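To give a flavour of the Weibull step (again with made-up hours-to-failure, and using median-rank regression rather than whatever method any particular OEM preferred), a sketch of fitting the shape and scale parameters from complete failure data:

```python
import math

# Hypothetical hours-to-failure for one failing component
# (complete data: every unit failed, no suspensions).
hours = sorted([410, 820, 990, 1300, 1550, 1900, 2400, 3100])
n = len(hours)

# Bernard's approximation gives the median rank (plotting position) of
# each ordered failure; linearising the Weibull CDF then gives:
#   ln(-ln(1 - F)) = beta * ln(t) - beta * ln(eta)
x = [math.log(t) for t in hours]
y = [math.log(-math.log(1 - (i - 0.3) / (n + 0.4))) for i in range(1, n + 1)]

# Least-squares fit: slope is the shape parameter beta, and the
# intercept recovers the scale parameter eta (characteristic life).
mean_x = sum(x) / n
mean_y = sum(y) / n
beta = (sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
        / sum((xi - mean_x) ** 2 for xi in x))
eta = math.exp(mean_x - mean_y / beta)

print(f"shape beta ~ {beta:.2f} (beta > 1 suggests a wear-out pattern)")
print(f"scale eta  ~ {eta:.0f} hours (63.2% of units failed by then)")
```

A beta well above 1 points to wear-out, near 1 to random failures, and below 1 to infant mortality; that distinction is what turns a pile of work orders into a case for a design change.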
Again, and no different to mining operations, the OEMs are all constrained by time and money, so the field problems representing the biggest impact were prioritised for action first. The engineers would assess the problem and aim to redesign a part or assembly to eliminate the failure mode identified by the reliability engineers.
Once this was complete, some prototype parts would be manufactured and installed onto engineering test machines (or cars) to validate the improvement performed as intended. The reliability improvement would be confirmed prior to the part or assembly being implemented in production. Once the production line implemented the change, documentation and parts were released to the field to retrofit earlier machines with the improved parts.
Over many years, an effective reliability improvement program greatly improves the field performance of physical assets: later units off the production line are inherently more reliable than the earliest ones. In fact, the running joke at the automotive company was to “buy the last one off the line, not the first one”. This is just a simplistic reflection of the reliability growth program in action.
If I’m going to fly on a commercial airliner, I’d prefer to be on a Boeing 737. According to Boeing fact sheets, they have built 10,000 of the type, and amassed 264 million flight hours! There is a great chance you will be flying on a highly developed and reliable asset when you board a 737.
When you’re offered a “clean sheet design”, your asset might have passed its field-testing phase, but you don’t get the benefit of years of continuous reliability improvement. Make no mistake, you will be part of the improvement program by feeding data back to the OEM during the early days.
Additionally, how confident can you be in the predicted operating costs and reliability performance of a “clean sheet” design? There is no field data from which to generate cost and performance scenarios. All models are wrong, but models for “clean sheet” designs will be very wrong.
This same philosophy applies to a major change to an established asset - say, re-powering an older machine with a newer engine, or replacing an engine platform. While the remainder of the machine might be inherently reliable, the new engine can be a source of significant unreliability until it has progressed through its own reliability growth program.
Of course, these risks must be weighed against the potential production benefits a “clean sheet” design might offer, the maturity of your own reliability engineering organisation in managing such a design, the relationship you have with the OEM, and the supply arrangements you can negotiate.
Be aware of the possible impacts on your business of selecting a “clean sheet design” asset. Carefully assess the cost and performance claims, and ensure your organisation is prepared to manage the impacts of unreliability while the improvement program runs its course.