Asset Criticality Ranking (ACR) is a structured process for evaluating and scoring every asset in an operation based on the consequences and likelihood of failure, then using those scores to prioritize maintenance resources, inspection intervals, spare parts investment, and condition monitoring programs. The output is a tiered ranking — typically Tier 1 (critical), Tier 2 (essential), and Tier 3 (non-critical) — that tells the maintenance organization where failure risk is highest and where reliability investment delivers the greatest return.
ACR is the foundation of a risk-based maintenance strategy. Without it, maintenance resources are allocated based on habit, squeaky-wheel pressure, or uniform PM schedules applied regardless of asset consequence. With it, every major maintenance decision — which assets get condition monitoring, which get stocked spare parts, which get run-to-failure — is anchored to a documented, defensible assessment of failure risk.
ACR is not a one-time exercise. It is a living input to the maintenance management system that should be updated when assets are modified, when failure history changes the risk profile, or when operational context shifts. An ACR score entered into a CMMS and never reviewed is a snapshot of past thinking, not a reliable guide to current decisions.
Why ACR Matters
Maintenance capacity is finite. Every operation has more assets than it has technician hours, more potential PM tasks than it can execute, and more condition monitoring candidates than it can afford to instrument. ACR addresses that constraint by making explicit what was previously implicit: which assets matter most when they fail.
The cost of misallocation runs in both directions. Over-maintaining low-criticality assets consumes time and budget that should be directed at high-consequence equipment. Under-maintaining high-criticality assets produces the unplanned failures, safety incidents, and production losses that make reactive maintenance so expensive. ACR prevents both errors by creating a shared, documented understanding of asset risk that drives consistent decision-making across the maintenance organization.
ACR also provides the input that other reliability tools require to function effectively. FMEA should be performed first on Tier 1 critical assets. RCM analysis is most valuable where failure consequence is highest. Condition monitoring investment should be concentrated on assets where failure is hard to detect and consequences are severe. Without ACR, these tools are applied without a principled basis for prioritization.
How ACR Works in Practice
The Three ACR Factors
ACR scores are built from three independent assessments:
Consequence of Failure evaluates what happens when the asset fails — across multiple dimensions that each stakeholder group assesses from their own perspective. Production impact measures lost throughput and downtime cost. Safety impact measures the risk to personnel. Environmental impact measures regulatory and remediation exposure. Quality impact measures defect generation and customer consequence. Maintenance cost measures repair, parts, and labor expense. Each dimension is rated on a defined scale — typically 1 to 5 — and the scores are combined into a consequence rating. Critically, production technicians should not rate safety consequences and maintenance technicians should not rate production impact. Each stakeholder rates only what they can assess accurately.
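The combination step above can be sketched in code. This is an illustrative sketch only: the dimension names mirror the text, but the worst-case (maximum) combination rule is one common convention, not a prescribed ACR formula; some programs use weighted averages instead.

```python
# Assumed convention: five consequence dimensions, each rated 1-5, combined
# by taking the worst-case dimension so a severe safety score cannot be
# diluted by low scores elsewhere. Adapt to your program's scoring rules.

CONSEQUENCE_DIMENSIONS = (
    "production", "safety", "environmental", "quality", "maintenance_cost",
)

def consequence_rating(scores: dict) -> int:
    """Combine per-dimension 1-5 scores into a single consequence rating."""
    for dim in CONSEQUENCE_DIMENSIONS:
        if not 1 <= scores[dim] <= 5:
            raise ValueError(f"{dim} score must be 1-5, got {scores[dim]}")
    # Worst-case rule: the most severe dimension sets the rating.
    return max(scores[dim] for dim in CONSEQUENCE_DIMENSIONS)

rating = consequence_rating({
    "production": 4, "safety": 2, "environmental": 1,
    "quality": 3, "maintenance_cost": 2,
})
print(rating)  # 4 -- driven by the production dimension
```

A max rule is deliberately conservative; a weighted sum trades that conservatism for finer discrimination between assets.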
Likelihood of Failure evaluates how probable failure is within a defined time horizon — typically one year. This rating draws on failure history, OEM data, operating condition severity, and maintenance program maturity. An asset with a history of frequent failures in demanding conditions rates higher than a similar asset operating within design parameters with a strong PM program in place.
Failure Detectability evaluates how difficult it is to identify the onset of failure before it produces the full failure effect. An asset with no condition monitoring and a failure mode that provides no warning signs rates at the high end of the detectability scale. An asset with continuous vibration monitoring and a failure mode that develops over weeks with clear indicator trends rates at the low end. High detectability scores (failure modes that are hard to detect) flag the strongest candidates for condition monitoring investment.
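The three factors above can be combined into a single criticality number. This is a sketch under stated assumptions: each factor sits on a 1-to-5 scale as described in the text, and the factors are multiplied together (RPN-style, as in FMEA); real programs may use additive or weighted schemes instead.

```python
# Assumption: multiplicative combination of three 1-5 factors, giving a
# criticality score in the range 1-125. The class name and scale anchors
# are illustrative, not a standard.

from dataclasses import dataclass

@dataclass
class AcrScore:
    consequence: int    # 1 (negligible) .. 5 (severe)
    likelihood: int     # 1 (rare within horizon) .. 5 (frequent)
    detectability: int  # 1 (clear early warning) .. 5 (no warning)

    @property
    def criticality(self) -> int:
        # Higher score = higher failure risk and maintenance priority.
        return self.consequence * self.likelihood * self.detectability

feed_pump = AcrScore(consequence=5, likelihood=3, detectability=4)
print(feed_pump.criticality)  # 60
```

The multiplicative form means a single low factor pulls the score down sharply, which is why documenting the rationale behind each factor matters as much as the arithmetic.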
The ACR Process
Effective ACR requires cross-functional participation. Reliability engineers, maintenance planners, production supervisors, safety officers, and operations managers each bring perspectives that no single discipline can replicate. An ACR performed exclusively by maintenance misses production and safety consequence data. One performed exclusively by engineering misses the operational reality that experienced technicians carry.
The process follows a defined sequence. First, establish the scoring criteria before evaluating any assets, so that the same scale is applied consistently across all equipment. Then evaluate assets systematically, starting with the production-critical equipment where the stakes of getting the ranking wrong are highest. Finally, document the rationale behind each score, not just the number, so that rankings can be reviewed and updated as conditions change.
Once scores are assigned, tier the asset population — Tier 1 assets receive the highest maintenance attention, most rigorous PM programs, stocked spare parts, and condition monitoring. Tier 3 assets may be managed on a run-to-failure basis with corrective maintenance only. Tier 2 assets receive intermediate treatment based on their specific score profile.
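The tiering step can be expressed as simple thresholds. The cut-offs below are hypothetical values on a 1-to-125 multiplicative score; as the text notes later, real programs calibrate thresholds so that Tier 1 stays a meaningful minority of the asset population.

```python
# Hypothetical tier thresholds -- calibrate against your own score
# distribution so Tier 1 does not swallow the whole asset list.

def assign_tier(criticality: int) -> int:
    """Map a 1-125 criticality score to a maintenance tier."""
    if criticality >= 60:
        return 1  # critical: rigorous PM, stocked spares, condition monitoring
    if criticality >= 20:
        return 2  # essential: intermediate treatment by score profile
    return 3      # non-critical: run-to-failure / corrective only

for score in (110, 35, 8):
    print(score, "-> Tier", assign_tier(score))
```

After tiering, it is worth printing the tier counts: if more than a small fraction of assets land in Tier 1, the thresholds (or the scoring discipline) need recalibration.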
ACR in the CMMS
ACR scores only deliver value when they are embedded in the CMMS and used to drive operational decisions. A criticality score stored in a spreadsheet is consulted occasionally. A criticality score attached to every asset record in the CMMS is visible on every work order, every PM schedule, and every parts request — making criticality a constant input to day-to-day maintenance decisions rather than an annual reference exercise.
In a CMMS, ACR drives work order prioritization (Tier 1 assets get expedited response times), spare parts stocking decisions (Tier 1 failure components are held in inventory), PM interval setting (Tier 1 assets receive more frequent and more thorough inspection), and condition monitoring investment (Tier 1 assets with high detectability scores get instrumented first).
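The tier-to-policy mapping described above can be made explicit as data. This is an illustrative mapping only: the priority labels, PM intervals, and field names are assumptions about a generic CMMS, not any specific product's API or recommended values.

```python
# Assumed policy table: values are placeholders a real program would set
# from its own response-time targets and PM engineering.

TIER_POLICY = {
    1: {"wo_priority": "expedite", "pm_interval_days": 30,   "stock_spares": True},
    2: {"wo_priority": "normal",   "pm_interval_days": 90,   "stock_spares": False},
    3: {"wo_priority": "low",      "pm_interval_days": None, "stock_spares": False},
}

def maintenance_policy(tier: int, detectability: int) -> dict:
    """Derive operational settings for an asset from its tier and detectability."""
    policy = dict(TIER_POLICY[tier])
    # Per the text: Tier 1 assets with high detectability scores (hard-to-
    # detect failures) are instrumented for condition monitoring first.
    policy["condition_monitoring_candidate"] = (tier == 1 and detectability >= 4)
    return policy

print(maintenance_policy(1, detectability=5))
```

Encoding the policy as a table rather than scattered rules keeps tier treatment consistent and makes the review cycle a matter of editing one structure.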
ACR by Industry
Manufacturing: In manufacturing, ACR identifies the production-critical assets on the line where failure stops throughput — typically the constrained resource or the asset with no redundancy. These assets receive the most rigorous PM programs, the fastest work order response times, and the most detailed failure mode analysis. ACR in manufacturing also informs TPM programs by identifying which assets operators should monitor through basic care routines and which require specialist technician attention.
Mining: Mining operations use ACR to manage the risk profile of large, expensive asset fleets operating in severe conditions. Haul trucks, primary crushers, and conveyor drive systems typically rank Tier 1 because their failure shuts down the production circuit. ACR in mining also informs spare parts strategy — Tier 1 critical components with long lead times and single-source suppliers are stocked on-site because the cost of a stockout far exceeds the carrying cost of the inventory.
Oil and Gas: Safety consequence weighting makes ACR in oil and gas particularly rigorous. Pressure-containing equipment, safety instrumented systems, and rotating machinery in hazardous service carry high safety consequence scores that drive mandatory inspection intervals and mechanical integrity program requirements regardless of failure likelihood. ACR provides the documented risk basis that regulators expect to see when auditing maintenance programs for covered process equipment.
Crane and Rigging: Every load-bearing component on a crane carries an inherently high safety consequence score — failure under load can be catastrophic. ACR in crane operations confirms what regulation already requires: that structural and load-bearing components receive the most rigorous inspection and maintenance regimes. ACR also identifies secondary systems — hydraulics, controls, braking — where failure consequence varies by application and where maintenance intensity should be calibrated accordingly.
Common ACR Program Failures
Performing ACR without cross-functional input: Maintenance-only ACR produces consequence scores that reflect maintenance cost but miss production impact, safety risk, and quality consequences. The result is a ranking that does not represent the full business risk of failure and does not command credibility with operations or safety leadership.
Scoring without defined criteria: ACR exercises that ask participants to rate assets on a 1-to-5 scale without defining what each score means produce inconsistent results. A rating of 4 for production impact means different things to different people without a definition that specifies what production loss volume or duration corresponds to that score.
Treating ACR as a one-time exercise: An ACR performed during initial CMMS implementation reflects the operation as it existed at that moment. When assets are modified, redundancy is added or removed, production criticality changes, or failure history accumulates, the rankings change. ACR without a defined review cycle becomes outdated and misleading.
Not connecting ACR to maintenance decisions: An ACR that produces a ranked list but does not drive changes to PM intervals, spare parts stocking, condition monitoring investment, or work order prioritization has consumed resources without delivering value. Every Tier 1 ranking should trigger a review of the maintenance strategy for that asset to confirm it matches the criticality score.
Ranking too many assets as Tier 1: When every asset gets ranked critical, the ranking provides no guidance. If everything is a priority, nothing is. ACR criteria should be calibrated so that Tier 1 represents a meaningful minority of the asset population — the assets where failure truly has severe and immediate business consequences.
ACR vs. Related Concepts
- Asset Criticality Ranking (ACR): Scores and tiers assets based on failure consequence, likelihood, and detectability. The input that drives maintenance strategy prioritization across the entire asset population.
- FMEA (Failure Mode and Effects Analysis): Analyzes the specific failure modes of individual assets and ranks them by risk. FMEA goes deeper than ACR on a single asset — ACR determines which assets warrant FMEA investment. See: Failure Mode and Effects Analysis (FMEA).
- RCM (Reliability-Centered Maintenance): A maintenance strategy development methodology that uses ACR to identify which assets to analyze and FMEA to analyze their failure modes. ACR is the entry point to RCM. See: Reliability-Centered Maintenance (RCM).
- Risk-Based Maintenance (RBM): A maintenance strategy that explicitly uses risk scoring — probability times consequence — to set maintenance intervals and resource allocation. ACR provides the consequence and probability inputs that RBM requires. See: Risk-Based Maintenance (RBM).
- Asset Hierarchy: The organizational structure that defines how assets relate to each other within a facility. ACR scores are assigned at the asset level within the hierarchy. See: Asset Hierarchy.
Frequently Asked Questions
What is Asset Criticality Ranking?
Asset Criticality Ranking (ACR) is a structured process for scoring every asset in an operation based on the consequences of failure, the likelihood of failure, and the detectability of failure onset. The scores are used to tier the asset population — typically into critical, essential, and non-critical categories — and to prioritize maintenance resources, PM intensity, spare parts stocking, and condition monitoring investment based on where failure risk is highest.
What factors are used in Asset Criticality Ranking?
The three primary ACR factors are consequence of failure (what happens when the asset fails, assessed across production, safety, environmental, quality, and maintenance cost dimensions), likelihood of failure (how probable failure is within a defined time horizon, based on failure history and operating conditions), and detectability (how difficult it is to identify failure onset before it produces the full failure effect). Each factor is rated on a defined scale and combined into a criticality score that drives the asset’s tier assignment.
How often should Asset Criticality Rankings be updated?
ACR should be reviewed when significant changes occur — asset modifications, changes in production criticality, addition or removal of redundancy, or accumulation of failure history that changes the reliability assessment. At minimum, a formal ACR review should occur every two to three years. Organizations that embed ACR scores in their CMMS and use them daily are more likely to identify when a ranking needs updating than those that store ACR results in a separate document.
How does ACR integrate with a CMMS?
ACR scores should be stored at the asset record level in the CMMS so that criticality is visible on every work order, PM schedule, and parts request associated with that asset. In practice this means Tier 1 assets automatically receive higher work order priority, trigger stocked spare parts requirements, and appear on condition monitoring inspection routes. The CMMS makes ACR an operational input to daily decisions rather than a periodic reference document consulted during planning sessions.
Related Terms
- Failure Mode and Effects Analysis (FMEA)
- Reliability-Centered Maintenance (RCM)
- Risk-Based Maintenance (RBM)
- Asset Hierarchy
- Preventive Maintenance (PM)
- Condition-Based Maintenance (CBM)
- Mean Time Between Failures (MTBF)
Put Asset Criticality to Work in Redlist
Redlist stores ACR scores at the asset level and surfaces them on every work order and PM schedule — so criticality drives daily maintenance decisions, not just annual planning sessions.