Unreasonably Short Construction Schedule

Welcome to the Real Estate Espresso podcast, your morning shot at what’s new in the world of real estate investing. I’m your host, Victor Menasce.

On today’s show, we’re looking at the compressibility of construction schedules. We often hear that projects will take longer and there’s all kinds of reasons why they legitimately need to take longer. It’s a large site and the site work is going to take a long time. The utilities are not available. The city infrastructure isn’t ready. We can’t find enough labor. The excuses are abundant.

As developers, we’re no strangers to delays, and we have experienced many of them. A lot of the delays have to do with the entitlement process. Sometimes there are delays at the hands of lenders, or delays resulting from capital taking longer to raise. But then there is the xAI Colossus data center.

This supercomputer is the largest AI supercomputer in the world, and it has leapfrogged the competition. xAI went from being behind to using this hardware to propel itself into the lead.

The first phase of the data center was built in 122 days. I want to let that number sink in. 122 days, that’s just over four months. I don’t know how many days or weeks of planning went into it, but I can say that there are many examples of decisions that were designed to save time in the most significant ways.

The Colossus data center started out as the former Electrolux factory in Memphis, Tennessee. They turned it into one of the largest AI-training supercomputers. To put that in perspective, similar data centers now take years to plan, permit, and bring online.

Here’s a breakdown of how xAI moved so quickly. There are five major components to this project: the building, power, cooling, networking, and computing, with a number of key milestones along the way.

The data center is designed to bring one gigawatt worth of computing online. But Memphis only had about 15 megawatts of spare capacity, nowhere near enough to run this massive facility. xAI needed 20x more electricity than Memphis had available to spare, and the regulators denied the request.
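Running the quoted figures through a quick back-of-envelope calculation shows just how wide the gap was (the ~300 MW initial demand is implied by the 20x multiple, not stated directly in the episode):

```python
# Power-gap arithmetic using figures quoted in the episode.
spare_capacity_mw = 15        # Memphis's spare grid capacity
shortfall_multiple = 20       # "20x more electricity than Memphis had to spare"
full_buildout_mw = 1000       # the one-gigawatt design target

# The 20x figure implies an initial demand of roughly 300 MW.
implied_initial_demand_mw = spare_capacity_mw * shortfall_multiple
print(implied_initial_demand_mw)             # 300

# The full gigawatt build-out is an even larger multiple of the spare capacity.
print(full_buildout_mw / spare_capacity_mw)  # ~67x
```

Either way you slice it, the local grid was never going to carry this load, which explains the generator strategy that follows.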

So, for the first phase, they used dozens of containerized temporary natural gas generators. That only got the first phase going. Then there’s Colossus II, which uses the 100 acres next door to the site. This involved new construction, which started in March of this year.

Here too, there was not enough power. So they went across the border into the state of Mississippi and commandeered the abandoned Duke Energy power plant. That plant had natural gas pipelines and connections to the electrical grid already in place, but there were no turbines. It was a mothballed power plant.

They couldn’t even get the turbines they needed, so they bought used ones on the open market in Europe. They had the turbines decommissioned and disassembled, got them onto a ship, and brought them over to the site, where they could get going fairly quickly.

Some reports put the total power generating capacity at 460 megawatts of natural gas generators, either operating or under construction, at this site.

At the same time, xAI also made more permanent infrastructure investments. They paid about $24 million for a substation to handle the required grid load and help stabilize power for the site.

That wasn’t enough. AI data centers need extremely stable power, and the data center is capable of surging its power demand much faster than the electrical network can respond. So, they needed another innovation. They added 168 Tesla Megapack batteries to smooth out the power supply. When demand drops, the excess energy goes into the battery bank, and when demand surges, the batteries supply the extra power. All of that is designed to protect the grid.
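The smoothing principle can be sketched with a toy simulation (the load numbers here are made up for illustration, not xAI’s actual telemetry or control logic):

```python
# Toy model: the grid supplies a steady average; batteries absorb the difference.
# Load figures below are invented for illustration -- not real data.
load_mw = [250, 250, 400, 100, 250, 450, 50, 250]   # spiky GPU demand per interval
grid_mw = sum(load_mw) / len(load_mw)               # grid delivers the flat average

battery_flow = [grid_mw - load for load in load_mw]  # positive = charging
state_of_charge = []                                 # relative to starting charge
soc = 0.0
for flow in battery_flow:
    soc += flow
    state_of_charge.append(soc)

# The grid never sees the spikes; the battery bank rides through them
# and ends each cycle back where it started.
print(f"grid supplies a constant {grid_mw:.0f} MW")
print("battery charge trajectory (MW-intervals):", state_of_charge)
```

The grid sees a flat 250 MW draw while the batteries soak up every swing, which is exactly the protection the episode describes.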

The second problem is heat. A gigawatt data center needs to get rid of a gigawatt worth of heat. High-performance GPUs generate a tremendous amount of heat. And the cooling system is a critical component.

xAI is using a liquid cooling system with a metal cold plate mounted directly on the back of each chip, feeding a closed-loop system. Then there’s a secondary system based on chilled water. The racks were custom designed to house about 100,000 of Nvidia’s H100 GPUs, complete with integrated quick-disconnect liquid plumbing.

For the expanded Colossus II data center, which was announced March of this year, they installed 119 air-cooled chillers by August of this year. And that produces another 200 megawatts worth of cooling capacity.

The problem is that getting rid of this much heat requires a lot of water. Let me put this amount of heat in perspective. We’re talking about getting rid of a gigawatt worth of heat spread over a couple of million square feet. That comes to roughly 500 watts per square foot.

Each equipment rack needs to get rid of 15,000 watts worth of heat. That’s the same energy that it takes to heat my entire house in the wintertime, except instead of being spread over 4,000 square feet like in my home, that heat is concentrated into about six square feet. And then there’s another rack right next to it, and then another 15,000 watts, and then another, and another. You get the idea.
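A quick check of the heat-density arithmetic, using the figures above (the six-square-foot rack footprint is approximate):

```python
# Heat-density arithmetic from the episode's figures.
total_heat_w = 1e9             # one gigawatt of heat to reject
floor_area_sqft = 2e6          # "a couple of million square feet"

# Averaged over the whole floor plate:
building_density = total_heat_w / floor_area_sqft
print(building_density)        # 500 W per square foot

# Concentrated at each rack:
rack_heat_w = 15_000           # per-rack heat load
rack_footprint_sqft = 6        # approximate rack footprint
rack_density = rack_heat_w / rack_footprint_sqft
print(rack_density)            # 2500 W per square foot at the rack
```

The rack-level density is five times the building average, which is why the cooling has to be delivered right at the chip rather than just conditioning the room air.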

Getting rid of that much heat requires a lot of water, and Memphis didn’t have that kind of water available. So xAI built a wastewater recovery system consisting of ceramic reverse osmosis filters that clean the water coming out of the city’s main sewage treatment plant. That amounts to about 50 million gallons of water per day that xAI has been able to recover from the backside of the sewage treatment plant.

On the computing side, the original Colossus build had 100,000 of Nvidia’s H100 GPUs, and again that was launched in 122 days. The second phase, another 100,000 GPUs, was done in a little under six months from March to October of this year. That includes building the data center, bringing it online with all the power, cooling, networking, and computing in less than six months.
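In rough terms, that timeline implies the following deployment tempo (averages only; the actual installation pace was certainly not uniform):

```python
# Average deployment tempo implied by the episode's timeline.
phase1_gpus = 100_000
phase1_days = 122

rate_per_day = phase1_gpus / phase1_days
print(round(rate_per_day))   # roughly 820 GPUs brought online per day
```

Sustaining that kind of daily throughput across building, power, cooling, and networking is what makes the 122-day figure so remarkable.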

This is a project that anywhere else would have taken several years under normal conditions. It demonstrates precisely how compressible construction schedules can be when you take the constraints off the problem.

This proof point is changing the way I’m thinking about our own construction schedules, and how we look at construction schedules that we receive from our contractors.

As you think about that, have an awesome rest of your day. Go make some great things happen. We’ll talk again tomorrow.

Stay connected and discover more about my work in real estate by visiting and following me on various platforms:

Real Estate Espresso Podcast:

Y Street Capital: