December 22, 2025
Building a Model That Lives with You: Digital Twinning, Optimisation, and the Benefits of a Manual Process

The energy transition represents one of the most profound transformations of our industrial age. Yet the tools we use to model, design, and optimise energy systems have not kept pace with the complexity of the challenge. This article explores the evolution of energy modelling from static spreadsheets to living digital twins and argues that the path forward lies not in mathematical perfection, but in engineering wisdom applied iteratively to problems that are constantly changing.
For decades, energy system modelling has been synonymous with Excel. The typical workflow was straightforward: gather a snapshot of data during a feasibility study, construct a deterministic model around fixed assumptions, produce a single report, and move on. The model, once created, typically saw no further evolution. It was a deliverable, not a tool.
This approach carried profound limitations that were tolerated only because no practical alternative existed. Spreadsheet models suffered from opaque complexity: as nested formulas and cross-sheet references multiplied, even their creators struggled to understand their own logic months later. The time cost of updating such models meant that only a handful of scenarios were ever explored. Critical "what-if" questions went unasked because posing them to the model was impractical.
Most damaging of all, these models bore no continuing relationship to the real assets they represented. Once a plant was built or a system installed, the model was archived. There was no feedback loop between measured performance and predicted performance, no mechanism to incorporate operational experience back into the design thinking. The model froze in time, even as the world it was meant to represent changed continuously.
The proliferation of smart meters, IoT sensors, and cloud-based data platforms has fundamentally altered what is possible in energy modelling. Where once an energy analyst relied on a handful of monthly utility bills and manufacturer datasheets, today's systems can ingest half-hourly (or finer) metered data from every significant load and generation source on site.
This shift transforms modelling from assumption-driven to evidence-driven. Actual half-hourly electrical and thermal demand profiles reveal the true shape of energy consumption, including peak events, seasonal variation, and anomalous patterns such as meter disconnections or equipment faults. Understanding these patterns is not optional; it is foundational. When a dataset shows a sudden spike in demand or a period of zero readings where a metering device was disconnected, these are not minor data quality issues; they are signals that your model's foundation may be unstable.
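The kind of screening described above can be sketched in a few lines. This is a minimal illustration, not a production data-quality pipeline: the spike threshold and the zero-run length that counts as a "disconnection" are assumed placeholders that a real project would tune to its own meters.

```python
# Minimal sketch: flag suspicious half-hourly meter readings before they
# feed a model. The spike factor and zero-run threshold are illustrative
# assumptions, not recommended values.

def flag_anomalies(readings_kwh, spike_factor=3.0, max_zero_run=4):
    """Return indices of spike readings and (start, length) of long zero runs."""
    nonzero = [r for r in readings_kwh if r > 0]
    mean = sum(nonzero) / len(nonzero) if nonzero else 0.0

    # A reading far above the typical level suggests a peak event or a fault.
    spikes = [i for i, r in enumerate(readings_kwh) if mean and r > spike_factor * mean]

    # A long run of exact zeros suggests a disconnected or failed meter.
    zero_runs, run_start = [], None
    for i, r in enumerate(readings_kwh + [1.0]):  # sentinel closes a trailing run
        if r == 0 and run_start is None:
            run_start = i
        elif r != 0 and run_start is not None:
            length = i - run_start
            if length >= max_zero_run:
                zero_runs.append((run_start, length))
            run_start = None
    return spikes, zero_runs
```

For example, `flag_anomalies([5, 5, 5, 50, 5, 0, 0, 0, 0, 5])` flags the reading of 50 as a spike and the four-interval zero run as a likely disconnection.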
Similarly, real weather data (ambient temperature, solar irradiance, wind speed) replaces theoretical averages. This matters because energy systems are exquisitely sensitive to environmental conditions. Gas turbines derate significantly in warm weather, losing approximately 0.5–0.7% of output for each degree Celsius above their 15°C design temperature. Solar arrays' output varies directly with irradiance and temperature. Anaerobic digesters' biogas production depends on maintaining reactor temperatures within narrow bands, with mesophilic systems operating at 30–40°C and thermophilic systems at 50–60°C. These sensitivities are not abstract; they determine whether a project pencils out financially.
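The turbine derating figure above translates directly into a simple correction. The sketch below uses the mid-range 0.6%/°C coefficient from the text; the exact coefficient is machine-specific and would come from the manufacturer's correction curves, not from this formula.

```python
# Illustrative ambient-temperature derate using the ~0.5-0.7 %/degC range
# quoted in the text (0.6 %/degC assumed here). Real projects should use the
# manufacturer's correction curves.

def turbine_output_mw(rated_mw, ambient_c, derate_per_c=0.006, design_c=15.0):
    """Approximate available output at a given ambient temperature."""
    excess = max(0.0, ambient_c - design_c)  # no uprating below design temp modelled
    return rated_mw * max(0.0, 1.0 - derate_per_c * excess)
```

So a nominally 10 MW machine on a 35°C afternoon would be expected to deliver roughly 8.8 MW, a 12% shortfall that a model built on rated capacity alone would miss.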
Continuous validation creates a feedback loop that was simply impossible with legacy tools. The ability to compare modelled predictions against measured performance in near-real-time flags discrepancies that signal either data quality issues or model assumptions in need of refinement. Calibration transforms from a one-off exercise at project inception into an ongoing process of continuous improvement.
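The core of that feedback loop is a modelled-versus-measured comparison run on every interval. A minimal sketch, assuming a flat relative-error tolerance (real projects would likely use interval-dependent or statistical thresholds):

```python
# Sketch of the continuous-validation loop: compare modelled against
# measured interval values and flag intervals whose relative error exceeds
# a tolerance. The 10% tolerance is an assumed placeholder.

def validation_report(modelled, measured, tolerance=0.10):
    """Return (index, relative_error) for intervals exceeding the tolerance."""
    flagged = []
    for i, (pred, act) in enumerate(zip(modelled, measured)):
        if act == 0:
            continue  # zero readings are a data quality issue, handled separately
        rel_err = abs(pred - act) / act
        if rel_err > tolerance:
            flagged.append((i, round(rel_err, 3)))
    return flagged
```

Flagged intervals then prompt either a data investigation or a model recalibration, which is exactly the ongoing process the paragraph describes.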
A digital twin is more than a sophisticated simulation. It is a computational model that maintains a living, bidirectional relationship with the physical asset it represents. It evolves with the plant, incorporating new data, adapting to changed conditions, and providing continuous operational insight.
A static model captures a frozen moment of the physical system. A digital twin captures its behaviour over time, adapting as feedstock quality changes, equipment degrades, demand patterns shift, or new assets are installed. The modular approach, where technologies are prioritised and interact through defined interfaces, provides the architectural foundation for this kind of living model.
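One way to picture those defined interfaces is a shared contract that every technology module implements, so new assets plug in without reworking the rest of the model. This is a hypothetical sketch of that idea; the class names, the `step()` signature, and the simplistic solar output formula are all illustrative assumptions.

```python
from abc import ABC, abstractmethod

# Sketch of the modular-interface idea: every technology exposes the same
# step() contract, so adding an asset does not disturb existing modules.
# Names and signatures here are illustrative, not a real framework's API.

class TechModule(ABC):
    @abstractmethod
    def step(self, t, conditions):
        """Return (electrical_mw, thermal_mw) produced at timestep t."""

class SolarArray(TechModule):
    def __init__(self, peak_mw):
        self.peak_mw = peak_mw

    def step(self, t, conditions):
        # Output scales with an irradiance fraction (0..1); deliberately crude.
        return (self.peak_mw * conditions.get("irradiance", 0.0), 0.0)
```

A dispatcher can then iterate over a list of `TechModule` instances each timestep, which is what lets the twin absorb a newly installed asset as one more list entry.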
The operational value is tangible: scenario planning without disruption, continuous commissioning that tracks whether installed systems meet specification, sensitivity and headroom analyses that assess how much margin the system has under various conditions, and regulatory reporting built on an evidence-based, auditable foundation.
Mixed Integer Linear Programming (MILP) represents the gold standard of mathematical optimisation for energy systems. It formulates the design and dispatch problem as a set of linear equations with continuous variables (how much power a turbine generates at each timestep) and integer variables (whether a turbine is on or off, how many units to install). A solver finds the combination of decisions that minimises total system cost or maximises another objective subject to all defined constraints.
The appeal is obvious: MILP guarantees a globally optimal solution within the defined problem space. It excels at coordinating complex trade-offs across multiple interacting technologies simultaneously, balancing the startup costs and minimum runtime constraints of a gas turbine against the intermittency of solar generation and the state-of-charge dynamics of battery storage. It can handle large combinatorial problems that would be intractable by manual analysis.
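The structure described above (binary on/off decisions plus continuous output levels, minimised against cost) can be made concrete with a toy two-period unit-commitment problem. For self-containment this sketch enumerates the binary variables by brute force rather than calling a real MILP solver such as CBC or Gurobi, and every cost and limit is an illustrative assumption.

```python
from itertools import product

# Toy unit-commitment in the MILP shape described above: a binary on/off
# decision per period, a continuous generation level, a startup cost, and
# grid import to cover any shortfall. Solved by enumerating the binaries
# (a real solver would branch and bound); all numbers are illustrative.

def commit(demand, p_min=2.0, p_max=10.0, fuel_cost=50.0,
           startup_cost=200.0, grid_price=120.0):
    """Choose on/off per period minimising fuel + startup + grid import cost."""
    best = None
    for pattern in product([0, 1], repeat=len(demand)):
        cost, prev_on = 0.0, 0
        for on, d in zip(pattern, demand):
            if on:
                gen = min(max(d, p_min), p_max)  # continuous part is trivial here
                cost += fuel_cost * gen
                cost += startup_cost if not prev_on else 0.0
                cost += grid_price * max(0.0, d - gen)  # import any shortfall
            else:
                cost += grid_price * d  # unit off: import everything
            prev_on = on
        if best is None or cost < best[0]:
            best = (cost, pattern)
    return best
```

Even this toy shows the combinatorial flavour: the pattern count doubles with every period, which is why real problems need a solver rather than enumeration.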
Yet MILP solutions rarely survive contact with reality. The quality of the optimisation is entirely bounded by the quality of the formulation. Every simplification, linearisation, or omitted constraint introduces a gap between the model's "optimal" solution and what is actually achievable in practice.
Real-world systems are full of non-linear, stochastic, and hard-to-quantify effects: equipment degradation that follows complex curves, operator preferences about equipment cycling patterns, permitting constraints and community objections, supply chain delays, and maintenance realities. These are either impossible or extremely difficult to encode as linear constraints. A MILP solution may recommend, for example, cycling a gas turbine in a pattern that is technically feasible according to the model but would be rejected by any experienced plant operator as impractical and damaging to equipment longevity.
Moreover, formulating a MILP model is time-intensive. Every new technology, constraint, or operating mode must be mathematically described and validated. Changing the problem structure (adding a new technology option, altering the network topology, incorporating a new constraint) often requires significant reformulation rather than simple parameter adjustment. Debugging infeasible or unbounded problems can be extremely difficult, particularly for large models with hundreds of constraints. The cycle time from "new question" to "trusted answer" is measured in weeks or months, not days or hours.
An alternative approach empowers the engineer to build, test, and refine the energy system model iteratively using their domain knowledge, operational experience, and engineering judgment as the primary optimisation engine.
The process follows a pragmatic, iterative cycle: build a baseline model from measured data and known equipment parameters; validate against actual performance using headroom studies and data quality metrics; explore alternatives by adjusting scenarios; evaluate using sensitivity studies to test tolerance for changes in base assumptions; refine based on new information; repeat continuously, throughout the project lifecycle.
The strengths of this approach flow directly from its embracing of human judgment rather than mathematical formulation:
Engineering experience becomes the optimiser. An experienced engineer brings knowledge that no mathematical formulation can capture: understanding of how equipment actually behaves under real operating conditions, awareness of failure modes and maintenance realities, intuition about which design choices will create operational headaches versus which will prove robust. The engineer can incorporate soft constraints that are difficult to formalise: "the client prefers not to have more than two gas engine starts per day," "the local planning authority is unlikely to approve a turbine above 5 MW," "the feedstock supply is reliable in summer but uncertain in winter."
Permitting and regulatory insight are integrated from the outset. Energy projects exist within complex regulatory environments: environmental permits, planning consent, waste classification, grid connection standards, emission limits, noise regulations, and safety requirements. These constraints are often qualitative, negotiable, or evolving, making them poorly suited to fixed mathematical formulation. An engineer-led process naturally incorporates permitting realities: sizing decisions consider what is likely to be approved, not just what is theoretically optimal. When regulatory conditions change (a new emission standard, an updated grid code requirement, a revised planning condition), the model adapts in the next iteration rather than requiring complete reformulation.
The approach is collaborative and transparent. Multiple stakeholders can review, question, and contribute to scenarios using accessible interfaces, with no mathematical programming expertise required. Model assumptions are visible and adjustable. When a sensitivity study shows that the business case is highly sensitive to a particular fuel price assumption, that finding is immediately actionable.
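A fuel-price sensitivity study of the kind just mentioned can be as simple as sweeping one assumption through a simplified business case and watching the margin flip sign. All figures below (generation, prices, efficiency, fixed costs) are illustrative placeholders, not project data.

```python
# Sketch of a one-variable sensitivity study: sweep the fuel price through a
# simplified annual-margin calculation. Every figure is an assumed placeholder.

def annual_margin(fuel_price_per_mwh, gen_mwh=8000.0, power_price=90.0,
                  efficiency=0.40, fixed_costs=100_000.0):
    """Annual revenue minus fuel and fixed costs for a simple generator."""
    fuel_cost = gen_mwh / efficiency * fuel_price_per_mwh  # fuel bought per MWh generated
    return gen_mwh * power_price - fuel_cost - fixed_costs

sensitivity = {p: round(annual_margin(p)) for p in (20, 25, 30, 35)}
```

Under these placeholder numbers the margin collapses from a healthy surplus at 20/MWh to a loss at 35/MWh, which is exactly the kind of immediately actionable finding the text describes.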
Yet there are trade-offs. A human-in-the-loop process cannot explore the solution space as exhaustively as a mathematical solver. There will always be combinations of design choices that the engineer does not think to test. For very large, highly interconnected systems with many degrees of freedom, manual exploration may be impractical.
But here is the paradox: this apparent weakness is the approach's greatest strength. Because the model is designed for rapid iteration rather than mathematical perfection, it can absorb new information quickly: updated demand data, revised equipment specifications, changed economic assumptions, or operational lessons learned. The cycle time from "new question" to "useful insight" is measured in minutes or hours, not weeks. The model stays current, relevant, and trusted.
The choice between mathematical optimisation and manual, engineer-focused optimisation is not binary; it is contextual. MILP has its place, particularly in greenfield system design where the problem is well-defined, the data is complete, and the constraints are stable. But in the messy, evolving reality of the industrial energy transition, where feedstock quality varies, equipment degrades, regulations change, and operational experience reveals surprises, the model that delivers the most value is the one that lives with the project.
A digital twin built on quality measured data, structured around modular technology representations, and refined through agile iteration provides something that a one-off mathematical optimisation cannot: continuous, trusted, actionable insight. The engineer's experience becomes the optimisation algorithm. The sensitivity study becomes the discovery mechanism. The living model becomes the asset.
Consider a waste-to-energy facility integrating anaerobic digestion and gas engines. The MILP approach freezes the optimal solution at project inception, based on assumptions about feedstock composition, methane yield, maintenance costs, and electricity prices. But in year two, the facility discovers that incoming organic waste contains more sulphur than anticipated, requiring additional gas cleaning equipment and increasing maintenance costs. The mathematical solution is now obsolete. Meanwhile, the digital twin absorbs this new information, updates its economic assumptions, and immediately flags the need for operational adjustments and potential equipment upgrades.
Or consider a manufacturing facility with multiple CHP technologies, cooling systems, and thermal storage. The MILP solver optimises the dispatch schedule once, at planning time. But real-world demand fluctuates based on production schedules that are themselves volatile. The digital twin, updated continuously with actual demand data, recalculates the optimal operating strategy week by week, responding to actual conditions rather than theoretical forecasts.
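The week-by-week recalculation in that scenario reduces to a rolling loop: replace forecast demand with the latest measured profile, then re-derive setpoints. The `dispatch()` rule below is a stand-in heuristic (run CHP up to capacity, import the rest) used only to make the loop concrete; a real scheduler would be far richer.

```python
# Sketch of the rolling weekly recalculation: each week the twin feeds the
# latest measured demand into the dispatcher. dispatch() is a deliberately
# crude stand-in heuristic, not a production scheduling algorithm.

def dispatch(weekly_demand_mwh, chp_capacity_mwh):
    """Run CHP up to capacity, import the remainder from the grid."""
    chp = min(weekly_demand_mwh, chp_capacity_mwh)
    return {"chp_mwh": chp, "grid_mwh": weekly_demand_mwh - chp}

def rolling_plan(measured_weeks, chp_capacity_mwh=500.0):
    """Recompute the operating strategy from each week's measured demand."""
    return [dispatch(d, chp_capacity_mwh) for d in measured_weeks]
```

The point is not the dispatch rule itself but the loop around it: the plan is regenerated from actual conditions every cycle rather than fixed once at planning time.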
The hidden cost of slow modelling, whether from over-reliance on mathematical optimisation or from the rigidity of legacy spreadsheet tools, is not just delayed decisions or missed opportunities. It is the absence of a model that can respond when reality diverges from the plan. And in the energy transition, reality always diverges from the plan.
The organisations that thrive will be those whose models are ready for that divergence: not because they found the mathematically perfect answer once, but because they built a process that finds good answers continuously. A digital twin that lives with the project, that evolves with actual performance, and that harnesses engineering wisdom in rapid iterative cycles is not a compromise. It is the future of energy system design.
The energy transition is not a one-time optimisation problem. It is a continuous process of adaptation to a world that is changing in real time. The modelling tools that will matter most are those that can keep pace with that change: not by solving the problem perfectly once, but by helping engineers solve it well, repeatedly, in the face of inevitable surprises.

