TEA II - Building Credible Models

February 18, 2026

II - Building Credible Models – How to Implement OpEx and Maintenance Analysis


Introduction: From Theory to Practice

Recognising the importance of operational expenditure modelling is the first step; implementing credible analysis is the practical challenge. A poorly constructed OpEx (operational expenditure) model built on weak assumptions or misguided granularity can be worse than no model at all, creating false confidence in numbers that lack empirical foundation.

This article explores how to build OpEx and maintenance models that stakeholders trust because they rest on evidence, transparent logic, and realistic assessment of project-specific conditions. The journey from concept to credible analysis follows a logical sequence: defining scope, gathering benchmarks, constructing baseline models, testing sensitivities, comparing scenarios, validating against actual costs, and maintaining the model throughout the project lifecycle.

Gathering the Right Data: Foundation of Credibility

Credible OpEx modelling begins with evidence. The best sources of OpEx data are historical records from comparable systems: what did maintenance cost on similar installations operating in comparable conditions? Equipment manufacturers provide guidance on service intervals and expected maintenance costs, though these often underestimate real-world complexity. Equipment specification documents reveal design life, intended maintenance schedules, and spare parts availability. Operational profiles (how intensively will the equipment be used?) are critical because maintenance frequency often correlates directly with utilisation hours.

Start with literature review. Industry associations publish maintenance cost benchmarks for common technologies. Academic research documents failure rates and maintenance requirements for equipment operating in field conditions. Published case studies of comparable projects reveal actual cost patterns, often showing where assumptions diverged from reality. This background work establishes realistic cost ranges against which your project-specific estimates can be calibrated.

Engage equipment manufacturers, but with healthy scepticism. Manufacturer data often reflects ideal conditions: well-maintained equipment, optimal operating environments, authorised service providers, and best-practice operations. Real-world systems operate in dusty, corrosive, temperature-fluctuating environments with supply-chain delays and imperfect maintenance discipline. A realistic model applies contingency factors of 20 to 30 percent above manufacturer estimates to account for the gap between specification and operational reality.
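
In model terms, this is a simple multiplier on the quoted figure. A minimal sketch, assuming a 25 percent factor within the stated range (all figures are placeholders):

```python
manufacturer_estimate = 12_000   # quoted annual maintenance cost (placeholder)
field_contingency = 0.25         # assumed 20-30% gap between spec and reality
modelled_cost = manufacturer_estimate * (1 + field_contingency)
print(f"Modelled annual maintenance: {modelled_cost:,.0f}")
```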

Survey comparable installations if possible. Contacting operators of similar systems reveals actual maintenance costs, frequency of failures, spare parts availability challenges, and labour requirements. These conversations often surface hidden costs: environmental remediation for waste disposal, permitting delays for equipment replacement, site access limitations during maintenance windows. This qualitative intelligence is invaluable for building realistic models.

Beyond equipment-specific data, understand cost escalation drivers. Labour costs typically inflate faster than general inflation, particularly for specialised technicians. Spare parts and materials follow their own inflation trajectories, influenced by supply chains and commodity prices. Establishing realistic cost escalation assumptions (ideally based on 10+ years of historical data) prevents models from becoming systematically wrong as they age. Many OpEx models fail not because initial estimates were inaccurate but because escalation assumptions underestimated real-world cost growth.
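
To illustrate why this matters, the sketch below compounds a single labour line under an assumed general-inflation rate versus an assumed labour-specific rate; all figures are placeholders, not benchmarks:

```python
base_labour_cost = 100_000   # year-1 labour cost (placeholder)
general_inflation = 0.02     # assumed general inflation
labour_escalation = 0.035    # assumed labour-specific escalation

for year in (5, 10, 20):
    modelled = base_labour_cost * (1 + general_inflation) ** year
    likely = base_labour_cost * (1 + labour_escalation) ** year
    gap = (likely / modelled - 1) * 100
    print(f"Year {year}: modelled {modelled:,.0f}, escalated {likely:,.0f} ({gap:.0f}% understated)")
```

Under these assumptions, a model that applies general inflation to labour understates that line by roughly a third after two decades.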

Structuring Costs Logically: Categories That Clarify

OpEx models thrive on clear categorisation that reveals cost drivers and enables scenario analysis. Structure costs in the following hierarchy:

Fixed costs recur regardless of utilisation: insurance, licensing, routine inspections, allocated base staff. These costs provide predictability but reduce flexibility when operations change.

Variable costs scale with usage: fuel consumption, filter replacement, wear-part renewal. These costs create incentives for operational efficiency but introduce uncertainty about future costs if utilisation patterns change.

Scheduled maintenance occurs at defined intervals measured in hours, years, or production cycles. A heat pump, for example, might require servicing every two years or every 5,000 operating hours, whichever comes first. The model must capture when these intervals are reached and cost the associated labour and parts.
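
A minimal sketch of the "whichever comes first" trigger, using the heat pump intervals above (the figures are illustrative):

```python
def service_due(hours_since_service: float, years_since_service: float,
                hours_interval: float = 5_000, years_interval: float = 2) -> bool:
    """Trigger scheduled maintenance on whichever interval is reached first."""
    return (hours_since_service >= hours_interval
            or years_since_service >= years_interval)

# A heavily utilised unit reaches the hours limit before the calendar limit:
print(service_due(hours_since_service=5_200, years_since_service=1.3))  # True
```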

Contingency allowances, typically 10 to 20 percent of planned maintenance, account for unscheduled repairs and anomalies. This category is often omitted by optimistic modellers but is essential to realistic planning. Even well-maintained equipment fails unexpectedly; realistic models provision for it.

Overhead allocation covers administrative staff, management systems, training, and documentation: costs easily overlooked but material, particularly for complex systems. A simple rule of thumb allocates 15 to 25 percent overhead on direct maintenance costs, though specific projects may call for different figures. Large, sophisticated installations with dedicated maintenance teams may justify higher allocations; simple systems might operate with lower overhead.
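
Putting the five categories together, a hedged sketch of an annual roll-up; the percentages follow the rules of thumb above and every line item is a placeholder:

```python
# Placeholder annual figures; substitute project-specific estimates.
fixed = 40_000        # insurance, licensing, inspections, allocated base staff
variable = 25_000     # fuel, filters, wear parts (scales with utilisation)
scheduled = 30_000    # interval-based servicing labour and parts

contingency = 0.15 * scheduled            # 10-20% of planned maintenance
direct_maintenance = variable + scheduled + contingency
overhead = 0.20 * direct_maintenance      # 15-25% of direct maintenance costs
annual_opex = fixed + direct_maintenance + overhead
print(f"Annual OpEx: {annual_opex:,.0f}")
```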

This logical structure enables scenario analysis. Want to explore the impact of improved predictive maintenance reducing emergency repairs? Modify the contingency factor and rerun the model. Want to assess the financial impact of automating routine inspections? Reduce the labour component of fixed costs. The structured approach transforms the model from a static forecast into an analysis tool.

Matching Simulation Granularity to Purpose

A critical decision involves simulation timestep: whether to model OpEx at daily, monthly, yearly, or some other frequency. The answer depends on the system's characteristics and analysis aims.

Many analyses are adequately served by annual or seasonal granularity, where maintenance costs are estimated year by year based on expected utilisation in that year. This approach is straightforward to implement and communicate but misses temporal details that matter for some systems.

However, systems with strong operational coupling between multiple components, or where specific maintenance windows conflict with operational seasons, benefit from higher-resolution modelling. Linking OpEx forecasting to granular operational simulation, where equipment usage is tracked at hourly or half-hourly resolution and maintenance requirements emerge from that detailed simulation, provides several advantages:

  • It captures how operational strategy (equipment dispatch priority, storage usage patterns, etc.) influences maintenance demand. A system that cycles a battery intensively daily approaches replacement sooner than one cycling it moderately (see the sketch after this list).
  • It shows seasonal clustering of maintenance events. A wind turbine requiring seasonal gearbox servicing during winter months might conflict with other maintenance activities, creating operational bottlenecks.
  • It surfaces unexpected interactions where multiple systems simultaneously require maintenance. A district heating network where the heat pump requires summer servicing while the boiler requires winter inspections might have minimal temporal conflict. But discovering this through detailed simulation prevents operational surprises.
  • It enables assessment of operational flexibility to defer or advance maintenance activities based on operational demand, enabling better cash flow management.
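
As a minimal sketch of the battery point above, assuming a simple equivalent-full-cycle count and an illustrative 5,000-cycle life (real degradation models are considerably more involved):

```python
# Synthetic hourly dispatch profile standing in for operational-simulation output:
# +kW = discharge, -kW = charge; four hours of each per day.
hourly_power = [250 if 17 <= h % 24 <= 20 else (-250 if 1 <= h % 24 <= 4 else 0)
                for h in range(8_760)]

capacity_kwh = 1_000
cycle_life = 5_000    # assumed equivalent full cycles to replacement

discharged_kwh = sum(p for p in hourly_power if p > 0)
cycles_per_year = discharged_kwh / capacity_kwh
years_to_replacement = cycle_life / cycles_per_year
print(f"{cycles_per_year:.0f} cycles/year -> replacement in ~{years_to_replacement:.1f} years")
```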

For projects where operational decisions drive financial performance (particularly hybrid systems combining multiple technologies), this integration of operations and OpEx is invaluable. The additional modelling complexity is justified by the strategic insights it enables.

Planning for Scenario Flexibility

The best OpEx models stay flexible rather than embedding single-point estimates. Parameterise key assumptions so scenarios can assess different maintenance strategies, equipment choices, or cost escalation paths:

  • Compare preventive maintenance versus reactive maintenance strategies for specific equipment, showing the financial impact of each approach
  • Explore equipment replacement timing: what if batteries require replacement at year 12 instead of year 15? (sketched after this list)
  • Test cost escalation sensitivity: how does the model change if labour costs escalate 1 percent faster than general inflation?
  • Assess technological alternatives: how does maintaining two smaller heat pumps compare to one large unit?
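
A hedged sketch of the replacement-timing question from the list above, comparing the present value of a battery replacement at year 12 versus year 15; the cost and discount rate are placeholders:

```python
replacement_cost = 400_000   # assumed battery replacement cost
discount_rate = 0.06         # assumed discount rate

def present_value(cost: float, year: int) -> float:
    return cost / (1 + discount_rate) ** year

for year in (12, 15):
    print(f"Replacement at year {year}: present value {present_value(replacement_cost, year):,.0f}")
```

Parameterising the replacement year makes the trade-off explicit rather than burying it in a hard-coded cell.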

This flexibility transforms the model from a forecast into an analysis tool, enabling sensitivity studies and scenario comparison that reveal the fiscal impact of different operational choices. Stakeholders gain confidence when they can see not just the base case but the range of plausible outcomes and the drivers of those outcomes.

Building the Baseline Model: Practical Implementation

With data gathered and structure defined, construct the baseline model using straightforward spreadsheet tools or dedicated software depending on complexity. Build year-by-year OpEx forecasts incorporating the assumptions developed from benchmarking. Use the categorical structure to organise costs clearly.

Document assumptions rigorously. For each cost estimate, record the source of the data, its reliability, and what contingency factors were applied and why. This documentation serves multiple purposes: it builds credibility with investors and lenders; it enables systematic updating as actual costs appear during operations; it preserves institutional knowledge when project teams change.

Run the model to generate initial cost profiles. Then sanity-check results against total lifecycle costs. Does OpEx represent a reasonable fraction? For a solar array, OpEx might be 5 to 10 percent of total lifecycle cost; for a gas engine, 30 to 40 percent. If your OpEx estimates fall far outside historical ranges, investigate. Either your estimates are wrong, or your project has unusual characteristics that justify deviation from historical norms.
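
A minimal version of that sanity check, using gas-engine-like placeholder figures; the point is the ratio, not the numbers:

```python
# Placeholder figures over an assumed 25-year life.
capex = 2_000_000
annual_opex = [35_000 * 1.025 ** y for y in range(25)]   # escalated OpEx forecast

lifecycle_cost = capex + sum(annual_opex)
opex_share = sum(annual_opex) / lifecycle_cost
print(f"OpEx share of lifecycle cost: {opex_share:.0%}")  # ~37%, inside the 30-40% band
```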

Conducting Sensitivity Testing: Revealing What Matters

Sensitivity testing is not optional; it is essential to understanding model reliability. Vary key parameters (maintenance labour rates, spare parts inflation, equipment life, failure rates) by ±10 to 20 percent and see how results change. Create sensitivity tornado diagrams showing which parameters most strongly influence outcomes.
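
A minimal one-at-a-time sweep in that spirit; the NPV function is a toy stand-in for a full model, and every parameter value is a placeholder:

```python
base = {"labour_rate": 60.0, "parts_inflation": 0.03,
        "equipment_life": 15.0, "failure_rate": 0.05}

def opex_npv(p: dict) -> float:
    # Toy stand-in for a full model: 20 years of OpEx discounted at 6%.
    annual = 2_000 * p["labour_rate"] + 50_000 * p["failure_rate"] + 400_000 / p["equipment_life"]
    return sum(annual * (1 + p["parts_inflation"]) ** y / 1.06 ** y for y in range(20))

swings = {}
for name in base:
    lo, hi = dict(base), dict(base)
    lo[name] *= 0.8   # -20%
    hi[name] *= 1.2   # +20%
    swings[name] = abs(opex_npv(hi) - opex_npv(lo))

# Rank by swing: this ordering is exactly what a tornado diagram plots.
for name, swing in sorted(swings.items(), key=lambda kv: -kv[1]):
    print(f"{name}: NPV swing {swing:,.0f}")
```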

These results serve multiple purposes. Parameters that barely move project outcomes can be estimated roughly; those that substantially influence results warrant additional data gathering and scrutiny. If labour cost escalation varies project NPV by ±30 percent, labour cost forecasting deserves deep attention. If spare parts inflation varies outcomes by 2 percent, it can be estimated more casually.

Sensitivity testing also reveals model robustness. A model where results vary dramatically with small parameter changes is fragile and relies heavily on accurate forecasting. A model where results vary modestly is more robust to estimation errors. Fragile models demand more rigorous data gathering; robust models tolerate greater uncertainty.

Developing Scenario Comparisons: Strategic Analysis

Build alternative scenarios comparing different approaches:

  • Maintenance philosophy: Preventive versus reactive strategies for specific components
  • Redundancy: Single versus backup equipment, showing the maintenance cost penalty for resilience
  • Technology choices: Comparing equipment from different manufacturers or different technological approaches
  • Replacement timing: Early replacement versus extended operation

Compare lifecycle costs across scenarios to reveal which choices deliver greatest value. Often these analyses surface non-obvious insights: investing in predictive monitoring might reduce emergency repairs enough to justify the monitoring cost; operating equipment past manufacturer-recommended service life might cost more in increased failures than planned replacement; investing in backup capacity might reduce risk more cost-effectively than maintenance planning.

Validation and Refinement: Learning from Reality

As the project advances and actual cost data appears during procurement and early operations, compare against model predictions. Investigate deviations. Did labour costs run higher than estimated? Was spare parts availability better or worse than assumed? Did equipment fail more or less frequently than predicted?

Use this feedback to refine estimates and escalation factors. A model developed during the planning phase should be revisited annually, when significant operational data becomes available, or when project scope changes. This iterative refinement transforms the model from a one-time planning exercise into an active management tool, continuously improving through exposure to operational reality.
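
A hedged sketch of that comparison step, flagging line items whose actuals deviate from forecast by more than a tolerance; the categories and figures are placeholders:

```python
forecast = {"labour": 80_000, "parts": 45_000, "contingency": 18_000}
actual = {"labour": 96_000, "parts": 43_000, "contingency": 31_000}

for item, expected in forecast.items():
    deviation = (actual[item] - expected) / expected
    flag = "  <- investigate" if abs(deviation) > 0.10 else ""
    print(f"{item}: forecast {expected:,}, actual {actual[item]:,} ({deviation:+.0%}){flag}")
```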

From Model to Tool

Building credible OpEx and maintenance models requires disciplined methodology: gathering evidence-based data, structuring costs logically, matching analytical granularity to purpose, testing sensitivities relentlessly, and validating against reality. The investment in this analytical rigour pays substantial returns through improved decision-making and greater confidence in financial projections throughout the project lifecycle.

Ready to build models in real-time?

Join the community of project owners, analysts and energy consultants who are already using Encast to deliver faster, more accurate energy resilience planning for themselves, their investments and their customers.