

Winter Storm Fern: A Stress Test for Data Centers as Flexible Load

Winter Storm Fern was not simply another cold-weather reliability event. It was one of the broadest winter stress tests the U.S. grid has faced since Uri and Elliott, but it arrived in a market environment that looks materially different from what it did during either of those storms. Large computing load has become a more important part of the demand stack, grid operators have more operational playbooks for winter preparation, and market structures in some regions have continued to evolve. In ERCOT, Fern also arrived shortly after the go-live of RTC+B and Virtual Ancillary Services ᵃ, both of which matter for how flexibility and reserve value are expressed in market outcomes.¹

That is why Fern matters for data centers. The central question is no longer whether data centers are “good” or “bad” for the grid. The more useful question is whether large, price-sensitive load can behave as a commercially intelligent grid participant when the system is under stress. During Fern, the answer was increasingly yes, but only where economics, operating capability, and market readiness were aligned. Ahead of the storm, DOE explicitly asked grid operators to be prepared to make backup generation at data centers and other major facilities available if needed, noting the scale of unused backup generation nationally.² At the same time, grid operators were clearly treating flexible demand as part of the reliability toolkit rather than an afterthought.

Based on what we saw across customer portfolios, data center load in ERCOT curtailed to varying degrees during the event. Those outcomes were not driven by a single template. They were shaped by site-specific breakeven economics, operating instructions, and the market products each facility was positioned to use. In ERCOT, some clients were already engaged in ancillary services such as Non-Spinᵇ and ECRSᶜ, which created a second layer of decision-making beyond simply reacting to energy prices. Across the portfolio, Fern reinforced a basic truth that is easy to overlook in abstract policy debates: flexible load is only valuable when the site can translate market signals into real operating decisions.

Figure 1. Representative Real-Time pricing during Winter Storm Fern compared against a market participant’s data center breakeven threshold. At many sites, the relevant question was not whether prices were elevated in absolute terms, but whether hourly prices crossed a level that justified curtailment or a change in operating posture.
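The hourly question described in Figure 1's caption can be sketched as a simple threshold check. The breakeven value and the prices below are hypothetical illustrations, not numbers from any site discussed in this article.

```python
# Illustrative sketch only: the breakeven and all prices are assumed,
# not taken from any actual site or from Winter Storm Fern data.

BREAKEVEN_USD_PER_MWH = 180.0  # hypothetical site-specific curtailment breakeven

def should_curtail(rt_price: float, breakeven: float = BREAKEVEN_USD_PER_MWH) -> bool:
    """Curtail only when the Real-Time price crosses the site's breakeven."""
    return rt_price > breakeven

# Hypothetical hourly Real-Time prices ($/MWh) during a storm window.
rt_prices = [95.0, 140.0, 310.0, 2200.0, 160.0, 450.0]

decisions = [should_curtail(p) for p in rt_prices]
print(decisions)  # → [False, False, True, True, False, True]
```

In practice the threshold itself moves with the tenant's business model and operating constraints, which is why the article frames breakeven as site-specific rather than a single market-wide number.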

Fern also highlighted that ERCOT’s operating posture was notably conservative. ERCOT’s post-event report emphasized that the grid performed well, that no conservation call or Energy Emergency Alert was required, and that there were no systemwide outages. ERCOT also noted it coordinated with large customers, including data centers and crypto facilities, for awareness of grid conditions ahead of and during the storm.³ In this case, conservatism was a feature, not a bug. From the perspective of large load, conservative operations can reduce the probability of the worst outcomes, but they do not eliminate commercial volatility. They shift the question toward how well load is positioned to manage price, congestion, and product opportunities when the system tightens.

From a commercial perspective, that distinction matters because data centers do not experience “the market” as a single number. Even when the headline story is about energy prices, realized cost and opportunity often hinge on spreads, basis, and congestion. During Fern, many virtual positions were effectively capped by customer breakevens going into the event. Some sites cycled repeatedly as Real-Time prices dipped and spiked. In several cases, congestion contributed meaningfully to DART (the spread between Day-Ahead and Real-Time nodal prices) deviations, alongside shifts in underlying fundamentals such as gas and power costs. That does not mean the strategy failed. It means the event exposed a gap that many data center owners still have: they may have flexibility, but they do not always have a robust framework for turning that flexibility into a storm-ready DART strategy.

Figure 2. DART during Winter Storm Fern at a representative ERCOT node. The DA versus RT divergence illustrates why storm readiness for large load is not only about “high prices,” but about managing intra-day risk when congestion and fast-moving fundamentals drive spreads.
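The DART spread shown in Figure 2 is simply the Day-Ahead price minus the Real-Time price at the same node and hour. A minimal sketch, using made-up prices rather than actual Fern settlement data:

```python
# Hypothetical DA and RT nodal prices ($/MWh); every number here is illustrative.
da_prices = [120.0, 150.0, 200.0, 180.0]
rt_prices = [90.0, 400.0, 110.0, 950.0]

# DART = Day-Ahead minus Real-Time at the same node and hour.
# Positive DART: DA cleared above RT. Negative DART: RT spiked above DA,
# the pattern that punishes load left unhedged into a storm.
dart = [da - rt for da, rt in zip(da_prices, rt_prices)]
print(dart)  # → [30.0, -250.0, 90.0, -770.0]
```

The sign flips from hour to hour in this toy series are the point: intra-day risk management, not a single directional view on “high prices,” is what storm readiness requires.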


While Figure 2 shows the risk side of the story, ERCOT’s evolving market design also highlights the opportunity side. In particular, Virtual Ancillary Services and the broader RTC+B transition increase the need for market participants to understand not just energy price volatility, but how value can shift across products in stressed conditions.¹ In events like Fern, the “right” commercial posture for a data center is not always binary curtailment. It can also involve preserving optionality for reserve products when the economics support it, or at minimum, understanding when ancillary value is materially changing the revenue stack.

This is where Virtual Ancillary Services becomes more than a policy headline. For large load and flexible resources, it increases the importance of benchmarking whether a site can realistically participate in ancillary products, what the operational requirements are, and whether the economics justify participation versus a pure energy-and-curtailment posture. During Fern, ancillary pricing dynamics offered a clear illustration that value can shift meaningfully between day-ahead and real-time, particularly when the system is managing uncertainty and operators are maintaining conservative reliability margins.

Figure 3. Ancillary service price dynamics during Winter Storm Fern. 

Figure 3.1. Non-Spin DA versus RT clearing prices during Winter Storm Fern. 


Figure 3.2. ECRS DA versus RT clearing prices during Winter Storm Fern.  
These charts illustrate how reliability value can express differently across DA and RT, which matters for load and flexible portfolios evaluating whether and how to participate in new and evolving ERCOT ancillary constructs.¹ 

The practical takeaway is that data centers should be managed as a portfolio of economic choices, not a single lever. Some hours call for curtailment. Some call for staying online while hedging basis and congestion risk more carefully. Some call for preserving optionality for ancillary participation where feasible. Some call for doing nothing because the economics do not justify intervention. The value is not in having one capability, whether that is backup generation or the ability to curtail. The value is in knowing which lever to pull, in which market construct, and at what price, while maintaining operational integrity and uptime constraints.
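One way to picture the “portfolio of levers” framing above is as a simple per-hour policy. Everything in this sketch, including thresholds, lever names, and the ordering of checks, is an assumed illustration of the idea, not a recommended or actual strategy.

```python
# Illustrative only: thresholds and decision order are assumptions,
# not a real operating policy for any site.

def choose_lever(rt_price: float,
                 breakeven: float,
                 ancillary_value: float,
                 can_offer_ancillary: bool,
                 ancillary_threshold: float = 50.0) -> str:
    """Pick one lever for the hour under simplified, hypothetical rules."""
    if rt_price > breakeven:
        return "curtail"  # energy price alone justifies coming offline
    if can_offer_ancillary and ancillary_value > ancillary_threshold:
        return "preserve ancillary optionality"  # reserves beat pure energy posture
    return "stay online"  # economics do not justify intervention

print(choose_lever(2200.0, 180.0, 30.0, False))   # → curtail
print(choose_lever(95.0, 180.0, 120.0, True))     # → preserve ancillary optionality
print(choose_lever(95.0, 180.0, 10.0, True))      # → stay online
```

A real policy would also weigh uptime constraints, program obligations, and congestion exposure, which is why the article stresses knowing which lever to pull in which market construct rather than automating any single rule.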


Is your site actually market-ready? Many data centers have flexibility on paper that doesn’t translate into real operating decisions when prices move. We benchmark exactly that. Schedule a call: drew.hamilton@cwpenergy.com

This is also why the right benchmark for a data center is broader than a simple power price forecast. The first benchmark is economic. Can the site actually monetize flexibility across multiple products, or is it effectively confined to buying energy at whatever the market gives it? In practice, that means asking whether the facility can curtail into demand response structures, whether it can participate in ancillary services where available, whether virtuals and basis hedges can be used intelligently against local congestion exposure, and where its true breakeven sits when the underlying business model is volatile. A crypto-linked tenant, for example, may have a very different curtailment value than a hyperscale AI training load, and that difference matters when storm pricing drives real operating decisions.

The second benchmark is operational. A surprising number of otherwise attractive sites are not truly market-ready because their telemetry, controls, or metering are too coarse to support high-quality optimization. The question is not simply whether a facility can curtail. It is whether it can curtail predictably within required time windows, with sufficient granularity to settle accurately, support repeatable strategy, and meet program requirements when participating in market products. In a storm, optionality that exists on paper but not in operations does not protect margin and does not help the grid.

The stakes are significant: the EIA projects data centers will drive the strongest four-year growth in U.S. electricity demand since 2000,⁴ and EPRI estimates AI infrastructure could represent up to 9% of U.S. electricity consumption by 2030.⁵ For data centers with a multi-ISO footprint, this variation is precisely why site-level benchmarking and market-readiness cannot be one-size-fits-all.

Although this article focuses on ERCOT, we also manage large-load and asset portfolios in other ISOs, and Fern underscored that regional outcomes are not uniform. In MISO, public reporting described tighter reserve conditions and emergency actions in certain areas, which reinforces how quickly conditions can shift from normal to stressed in winter operations.⁶ In NYISO, public commentary framed Fern as a period where weather, supply constraints, and fuel pressures combined to challenge the grid and raise prices, with implications for both reliability posture and customer cost exposure.⁷

If your data center portfolio weathered Fern without a clear view of your breakeven, your DA/RT exposure, or which ancillary products your sites could realistically access, that gap will cost you in the next event. We work with large-load customers across North American wholesale power markets to build that readiness before the storm, not during it.

If you want site-level benchmarking, please reach out to:
Danny Lambert at danny.lambert@cwpenergy.com or
Drew Hamilton at drew.hamilton@cwpenergy.com to schedule a call.