What’s important about “what-if?”
Power systems studies are vital to grid reliability, economic planning, operations, and generation development. In reliability planning, engineers model and evaluate the physical response of the grid under future “what-if” scenarios, including changes to generation, load levels, topology, and the effects of network upgrades. Planning processes also entail evaluating the existing system subject to disturbances, such as asset failures and the impacts of both probable and rare events. In the industry, these disturbances are often referred to as “contingencies.”
Planning studies that include simulations of these scenarios play a role in almost all analyses performed by utility and system operator planning teams, including interconnection studies. Understanding how the grid behaves under contingencies is necessary for ensuring system reliability and identifying potential weaknesses, along with the upgrades needed to address them. In this article we’ll explore contingency analysis in more detail, including definitions and applications, considerations in power flow analysis, and, more specifically, its role in interconnection studies.
Contingencies: Definitions and overview
A contingency is a defined event whose simulation is used to verify that the grid remains reliable under plausible disturbances to nominal conditions. In our earlier article on power flow analysis, we discussed how the general purpose of a power flow simulation is to determine grid behavior based on the balance of power between generation and load. Steady-state simulations of contingencies have essentially the same goal, but with changes applied to the initial system to model some deviation from normal operating conditions. This deviation is typically captured by some combination of changes to generation, load, or topology.
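To make this concrete, here is a minimal sketch of the idea in Python: a toy three-bus DC power flow (all susceptances, injections, and the thermal rating are hypothetical per-unit values, not from any real model) is solved for the base case, then re-solved with one line outaged to see how the contingency redistributes flow onto the remaining lines.

```python
import numpy as np

def dc_power_flow(lines, injections):
    """Solve a DC power flow and return per-line flows (pu).

    lines: list of (from_bus, to_bus, susceptance); bus 0 is the slack.
    injections: net injection at each non-slack bus (index 1..n-1).
    """
    n = len(injections) + 1          # total bus count, slack included
    B = np.zeros((n - 1, n - 1))     # reduced susceptance matrix (slack removed)
    for f, t, b in lines:
        for i in (f, t):
            if i > 0:
                B[i - 1, i - 1] += b
        if f > 0 and t > 0:
            B[f - 1, t - 1] -= b
            B[t - 1, f - 1] -= b
    theta = np.concatenate(([0.0], np.linalg.solve(B, injections)))
    return {(f, t): b * (theta[f] - theta[t]) for f, t, b in lines}

# Base case: 3 buses, 3 identical lines; load at bus 1, generator at bus 2.
lines = [(0, 1, 10.0), (0, 2, 10.0), (1, 2, 10.0)]
injections = np.array([-1.0, 0.5])   # pu; the slack (bus 0) absorbs the mismatch
base = dc_power_flow(lines, injections)

# Contingency: outage of line 1-2; re-solve on the modified topology.
outaged = [ln for ln in lines if (ln[0], ln[1]) != (1, 2)]
post = dc_power_flow(outaged, injections)

rating = 0.8  # hypothetical thermal rating, pu
print("base flow on line 0-1:", round(base[(0, 1)], 3))
print("post-contingency flow on line 0-1:", round(post[(0, 1)], 3))
print("violation:", abs(post[(0, 1)]) > rating)
```

Here the flow on line 0-1 doubles from 0.5 pu to 1.0 pu when line 1-2 is outaged, exceeding its assumed 0.8 pu rating — exactly the kind of post-contingency criteria violation these studies are designed to surface.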
In practical terms, contingencies are often defined in text-based, human-readable files that explicitly describe the event(s). These files serve as inputs to power flow software tools. Creation of contingency definition files can be automated to some extent, but planning teams often must develop many events to simulate based on their unique knowledge of the portion of the power system in their jurisdiction. For example, contingency definitions are often created to reflect individual breaker actions within substations. While defining these contingencies is becoming more straightforward and explicit with the advent of models that represent those elements (i.e., “node-breaker” models), most power flow modeling is performed using models without this level of granularity at the substation level (i.e., “bus-branch” models).
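As a rough illustration, such a file might contain entries like the following. This is a hedged pseudo-format loosely modeled on common commercial-tool conventions (exact syntax varies by tool), and the bus numbers and names are hypothetical:

```
CONTINGENCY 'OUT_LINE_101_102'
DISCONNECT BRANCH FROM BUS 101 TO BUS 102 CIRCUIT 1
END

CONTINGENCY 'OUT_XFMR_AND_GEN'
DISCONNECT BRANCH FROM BUS 205 TO BUS 206 CIRCUIT 1
REMOVE UNIT 1 FROM BUS 301
END
```

The second entry shows a multi-element event: a single contingency definition can bundle several equipment actions that are applied together before the power flow is re-solved.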
NERC TPL standards, such as TPL-001-4, define many of the types of contingencies necessary to satisfy various reliability criteria. These criteria include maintaining steady-state voltages and thermal loadings of lines and transformers within acceptable bounds (i.e., approved, off-nominal bounds considering the extent and temporary nature of a particular event), as well as ensuring the system does not experience voltage collapse. It’s important to note that while our focus here is on power flow, contingencies are not limited to power flow simulations. It’s also necessary to study these events in the time domain, such as via transient stability or EMTP simulations.
Some contingency events are relatively minor or simple from a simulation standpoint, such as the outage of a single transmission line or a breaker failure in a strong area of the grid, while others can be very complex, capturing rare or extreme events such as a multi-event outage intended to evaluate sequences or collections of asset failures. A study in which contingency analysis is performed often involves tens to hundreds of thousands, or even millions, of event simulations. Even so, in a large power system it is not feasible or practical to exhaustively simulate every possible combination of every possible outage or deviation for every study; doing so would entail billions of unlikely or unimportant scenarios. For example, a utility in Pennsylvania would be neither much affected by nor responsible for a line outage in Nebraska. The events that are simulated are intended to capture only those with some reasonable likelihood of occurring, often as defined in NERC standards.
Contingency analysis and interconnection
When considering the influences of a new generation facility on the grid, it is important to flag any potential risks the facility poses to system reliability, as well as to appropriately allocate the costs of any necessary upgrades should the project proceed to construction. To do both, most utility and system operator interconnection study processes leverage contingency analysis.
Regardless of modeling preferences for studying generation in an interconnection queue (e.g., serial vs. cluster), an interconnection study typically consists of two “rounds” of contingency analysis: one to establish a baseline of reliability violations with the new generation modeled as off-line (pre-project), and another with the generation modeled as on-line (post-project) to determine its impact. These contingency studies must consider AC power flow for accuracy, but they often leverage the speed of DC analysis to expedite the process: DC power flow narrows the scope of simulations to the set of contingencies most likely to present actual criteria violations, and those are then validated using AC power flow. It is understood that violations can (and often do) exist in pre-project study models. These may stem from updates contributed by model stakeholders, or they may simply be violations ignored under study-specific criteria (e.g., a voltage violation on a sub-transmission network). The purpose of this dual analysis is to ensure fairness to project developers, so that they are not allocated costs to fix issues for which they are not responsible.
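The DC-screen-then-AC-validate funnel can be sketched as follows. This is a toy illustration in which hypothetical stand-in functions replace real DC and AC solvers, and the loadings and thresholds are made up; the structure of the funnel, not the numbers, is the point:

```python
def dc_loading_estimate(ctg):
    # Stand-in for a fast linear (DC) post-contingency loading estimate, in %.
    return {"CTG_1": 96.0, "CTG_2": 55.0, "CTG_3": 104.0, "CTG_4": 72.0}[ctg]

def ac_loading(ctg):
    # Stand-in for a full (slower) AC power flow solve of the same contingency.
    return {"CTG_1": 99.0, "CTG_3": 107.5}[ctg]

contingencies = ["CTG_1", "CTG_2", "CTG_3", "CTG_4"]
SCREEN_PCT = 95.0   # pass anything near or over its rating to AC validation
LIMIT_PCT = 100.0   # the actual criteria limit

# Step 1: cheap DC screen over the full contingency list.
candidates = [c for c in contingencies if dc_loading_estimate(c) >= SCREEN_PCT]

# Step 2: AC validation only on the screened candidates.
violations = [c for c in candidates if ac_loading(c) > LIMIT_PCT]

print("AC-validated:", candidates)   # the DC screen cuts the AC workload in half
print("violations:", violations)
```

Note that the screen threshold sits below the criteria limit on purpose: a DC estimate is approximate, so near-limit cases (like CTG_1 here) are passed to AC even though they may ultimately prove clean.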
Both rounds of analysis consider a subset of all contingencies deemed relevant given the studied project’s geographic location, and results are compared to look for new (or, in some cases, excessively exacerbated) reliability criteria violations. These new violations are flagged as requiring mitigation, and network upgrades to mitigate them are proposed. Note that a violation need not occur in all contingency scenarios to require mitigation: if a transmission line is overloaded in just one contingency event out of tens of thousands simulated, it still requires mitigation. The number of unique contingency events in which a facility is violated does not specifically dictate the type or cost of mitigation, although whatever mitigation is selected must fix the worst-offending violation. Common mitigations include construction of new transmission lines, upgrades to existing transmission lines (e.g., rebuilding or reconductoring to upsize their ratings), and placement of reactive power compensation devices. These upgrades are initially proposed based on the extent and cause of the violation; a more detailed evaluation is then performed in conjunction with transmission owners, often refined over the phases of an interconnection study, and ultimately contingent on detailed facilities studies.
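At its core, the pre-/post-project comparison amounts to a set difference over (monitored element, contingency) pairs. A minimal sketch with made-up violation records:

```python
# Hypothetical violation records: (monitored element, contingency, loading %).
pre_project = {
    ("Line A-B", "CTG_1", 104.0),   # pre-existing violation: not the project's fault
    ("Line C-D", "CTG_7", 101.5),
}
post_project = {
    ("Line A-B", "CTG_1", 106.0),   # pre-existing, slightly worsened
    ("Line C-D", "CTG_7", 101.5),
    ("Line E-F", "CTG_3", 112.0),   # appears only with the project on-line
}

pre_keys = {(elem, ctg) for elem, ctg, _ in pre_project}

# New violations: present post-project but absent pre-project.
new_violations = sorted(
    (elem, ctg, pct) for elem, ctg, pct in post_project
    if (elem, ctg) not in pre_keys
)
print(new_violations)
```

A plain set difference deliberately ignores the Line A-B case, where a pre-existing violation worsened from 104% to 106%; in practice, catching “excessively exacerbated” violations requires an additional study-specific threshold check on the loading delta.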
To appropriately allocate the costs of newly identified network upgrades that mitigate thermal overloads, a DC-based analysis is used to determine linear distribution factors. These factors quantify a studied project’s relative influence in causing a flow violation in the system. For voltage and voltage-collapse mitigations, a similar relative-impact-based allocation process is followed, often defined by the utility or system operator. A project may or may not be the sole cause of a violation, so these factors help determine how to allocate the costs of proposed upgrades across two or more projects. Following cost allocation, developers may deem a generation project viable or non-viable due to the expense to interconnect: network upgrades can range from tens of millions to billions of dollars, and the additional assigned costs may or may not make the project worthwhile to build. It should be noted that selecting network upgrades is primarily a manual, subjective process.
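Proportional allocation from distribution factors can be sketched as below. The factors, project outputs, and upgrade cost are all hypothetical, and real processes layer tariff-specific rules (minimum-impact thresholds, de minimis cutoffs, etc.) on top of this basic proportional split:

```python
# Hypothetical distribution factors: each project's MW impact on the
# overloaded facility per MW of project output (from a DC/linear analysis).
projects = {
    "Project A": {"dfax": 0.12, "mw": 200.0},
    "Project B": {"dfax": 0.05, "mw": 100.0},
}
upgrade_cost = 30e6  # hypothetical $30M network upgrade

# Each project's MW contribution to the constrained flow.
impact_mw = {name: p["dfax"] * p["mw"] for name, p in projects.items()}
total_mw = sum(impact_mw.values())

# Allocate the upgrade cost in proportion to each project's contribution.
allocation = {name: upgrade_cost * mw / total_mw for name, mw in impact_mw.items()}
for name, cost in allocation.items():
    print(f"{name}: ${cost:,.0f}")
```

Here Project A drives 24 MW of the constrained flow versus Project B’s 5 MW, so it absorbs roughly 83% of the upgrade cost, which is the essence of why a project with a small distribution factor on a violated facility can still proceed economically.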
Takeaways
Contingency analysis forms the core analytical component of an interconnection study. It:
- Models the reliability of the grid under numerous “what-if?” scenarios;
- Establishes a generation project’s relative impacts on the grid;
- Provides an initial economic assessment of its cost to interconnect.

Mitigations (proposed grid upgrades) are selected to alleviate all relevant violations across all contingencies.
- If a transmission line overload occurs in even a single contingency event, it will still require mitigation.

Mitigation selection, short of verification in software, is often largely subjective.
- Automation and optimization of mitigation selection would create an explicit and repeatable process;
- Mitigation costs could be validated or explored in advance via an automated process.

Robust simulation tools can help:
- Ensure generation projects are appropriately assigned network upgrade costs;
- Avoid unnecessary non-convergence and verify true voltage collapse;
- Help with determining appropriate mitigations.
Questions?
If you enjoyed this article and want to learn more about some of the things we’re working on at Pearl Street, reach out to us at hello@pearlstreettechnologies.com. Thanks for reading!