Inside Amazon’s Indiana AI Megaproject: A Million Trainium Chips, Anthropic’s Bet, And The Power Problem

From Cornfields To Compute In Record Time

It was dirt a year ago. Today, seven buildings hum with hundreds of thousands of Amazon Trainium 2 chips in New Carlisle, Indiana—phase one of “Project Rainier,” Amazon’s largest non‑Nvidia AI cluster and one of the fastest mega‑builds ever brought online. The site is already training Anthropic’s latest Claude models at scale, and AWS executives say capacity will surpass 1,000,000 Trainium 2 chips by year‑end, with Trainium 3 to follow.

The pace is jaw‑dropping: site selection in late 2022, visits in spring 2023, ground broken September 2024, production October 2025. Amazon marshaled four general contractors, industrial‑grade liquid cooling, and a revised building type optimized for AI density to compress timelines normally measured in years.

An Anthropic‑First Campus—Without Nvidia

Unlike many AI builds anchored on Nvidia GPUs, the Indiana campus is designed around AWS’s in‑house silicon. Two Trainium 2 racks form an “ultra server” of 64 AI chips, and double‑density layouts raise compute per square foot while keeping power delivery and cooling manageable.
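The rack math above implies a striking footprint at the stated year‑end target. A quick back‑of‑envelope sketch (using only the figures in this article; the 64‑chip "ultra server" count is as reported, and the rack total is an implied value, not a published one):

```python
# Back-of-envelope check of the "ultra server" figures cited in the article.
# All inputs come from the text; the derived totals are implied, not official.

CHIPS_PER_ULTRA_SERVER = 64   # two Trainium 2 racks per "ultra server"
RACKS_PER_ULTRA_SERVER = 2
TARGET_CHIPS = 1_000_000      # AWS's stated year-end chip target

ultra_servers = TARGET_CHIPS // CHIPS_PER_ULTRA_SERVER
racks = ultra_servers * RACKS_PER_ULTRA_SERVER

print(f"{ultra_servers:,} ultra servers (~{racks:,} racks)")
# → 15,625 ultra servers (~31,250 racks)
```

In other words, a million chips is on the order of thirty thousand racks spread across the campus, which is why building design and density, not just chip supply, drive the timeline.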

Why buck the GPU tide? Cost, availability, and control. AWS claims Trainium 2 offers 30–40% better price‑performance than comparable GPU instances, with near‑term supply less constrained than Nvidia’s H‑class parts. Owning the full stack—chips, fabric, data center design—lets AWS tune exactly for Anthropic’s training profiles and skip features they don’t need. Trainium 1 was a learning platform; Trainium 2 and the forthcoming Trainium 3 are the production bets.

Anthropic isn’t monogamous—it will also access up to 1 million Google TPUs under a separate deal. But the Indiana footprint makes one thing clear: multi‑chip strategies are real, and AWS is the primary cloud home for Anthropic’s model training.

Scale, Subsidies, And A New Industrial Geography

Project Rainier will ultimately span 30 buildings across 1,200 acres over multiple phases, backed by one of Indiana’s largest capital commitments on record—an $11 billion anchor deal—plus billions more for power and municipal upgrades.

  • Local property/technology tax exemptions: >$4B over 35 years
  • State‑level incentives (2019 legislation): ~another $4B over 50 years
  • Amazon‑funded infrastructure: $7M for highways (with up to $15M more in negotiation), $114M for water/sewer and related utilities
  • Jobs: ~9,000 peak construction positions; ~1,000 long‑term roles (≥600 above county average wage)

Indiana’s draw isn’t just incentives: it’s fiber rights‑of‑way, legacy substations, and extra‑high‑voltage transmission. The state is rapidly becoming a data‑center corridor, with Microsoft, Meta, Google, and now Amazon committing multi‑gigawatt footprints.

The Hard Constraints: Power, Water, Community

When complete, the campus will consume about 2.2 GW—roughly the electricity of 1.5M homes in Indiana Michigan Power’s service area. The utility expects peak demand to more than double from ~2.8 GW (2024) to >7 GW by 2030, fueled heavily by data‑center load. Nearby, GM/Samsung’s $3.5B EV battery plant adds another draw.
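The comparisons above can be sanity‑checked with simple arithmetic (figures from the article; the per‑home average is an implied value, not a published utility statistic):

```python
# Sanity-check the power comparisons cited in the article.
# Inputs are the article's figures; derived values are implied, not official.

campus_gw = 2.2               # projected full-campus load
homes = 1_500_000             # homes cited in the comparison

avg_kw_per_home = campus_gw * 1e6 / homes   # GW -> kW
print(f"~{avg_kw_per_home:.2f} kW average draw per home")  # ~1.47 kW

peak_2024_gw, peak_2030_gw = 2.8, 7.0       # I&M peak demand, per the article
print(f"peak demand grows ~{peak_2030_gw / peak_2024_gw:.1f}x by 2030")  # ~2.5x
```

An implied ~1.5 kW average per household is a plausible year‑round figure, so the "1.5M homes" comparison holds up, and a 2.5x jump in utility peak demand in six years shows why data‑center load dominates the planning conversation.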

Residents worry about farmland loss, grid stability, and water. One analysis found monthly electricity bills near new data centers as much as 267% higher than five years ago. I&M says 53% of its current supply is nuclear and 35% coal (a coal retirement slated for 2028 may slip, given federal signals to keep plants running longer to meet AI demand). The utility is also acquiring a gas plant in Ohio that would provide ~15% of its supply by 2026. Amazon, for its part, points to >635 MW of Indiana wind and solar PPAs, exploration of small modular reactors, and cooling strategies that use outside air for ~98% of operating hours, with on‑site water treatment cutting consumption ~23%.

Local governments are demanding—and getting—road, water, sewer, and safety upgrades, but the bargain is stark: vast land and resource intensity for relatively modest permanent headcount. Expect more scrutiny of wetlands, aquifers, and air permits in subsequent phases.

Is This Overbuild—or Table Stakes?

Skeptics see a capex race that could exceed near‑term demand. Deals “sound great on paper” but only count once racked, burned‑in, and producing measurable customer outcomes. AWS counters that Rainier isn’t a press release—it’s live training today, with booked demand from a top model provider and a global roadmap to replicate the template.

The truth likely sits between: compute demand is surging, but it’s uneven. Some models and workloads deliver clear ROI; others won’t. Builders with vertical integration (chips to grid), speed, and actual take‑or‑pay customers have the best shot at avoiding stranded assets. Everyone else is racing against power, permitting, and depreciation clocks.

Key Data

  • 7 buildings live in ~12 months; 30 planned across 1,200 acres
  • ~500,000 Trainium 2 chips installed; >1,000,000 targeted by year‑end
  • 2.2 GW projected campus load; utility peak demand to >7 GW by 2030
  • ~$8B in combined state/local tax relief over multiple decades
  • $114M utility upgrades; $7–$22M road funding under negotiation
  • ~1,000 long‑term jobs; ~9,000 peak construction jobs

What It Means For The AI Stack

  • Non‑Nvidia pathways are real at hyperscale—on cost, availability, and integration speed.
  • Multi‑chip hedging (Trainium + TPU) reduces single‑vendor risk for frontier labs.
  • Power is the master bottleneck. Sites with transmission, nuclear baseload, flexible gas, and community buy‑in will outcompete pure‑cash bids.
  • The capex clock is ticking. Chips depreciate fast; workloads must show ROI within 12–24 months.
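The "capex clock" point can be made concrete with a toy payback calculation. The numbers below are hypothetical (the article gives only the 12–24 month ROI window; the per‑chip cost and accounting life are illustrative assumptions):

```python
# Illustrative payback math for the capex-clock argument.
# capex_per_chip and useful_life_months are ASSUMED values for illustration;
# only the 24-month payback window comes from the article.

capex_per_chip = 10_000          # assumed all-in hardware + build-out cost, USD
useful_life_months = 36          # assumed accounting life for AI silicon
payback_window_months = 24       # upper end of the article's ROI window

monthly_depreciation = capex_per_chip / useful_life_months
required_monthly_revenue = capex_per_chip / payback_window_months

print(f"depreciation: ~${monthly_depreciation:,.0f}/chip/month")
print(f"revenue needed to pay back in 24 months: ~${required_monthly_revenue:,.0f}/chip/month")
```

The shape of the result, not the exact dollars, is the point: when silicon depreciates over roughly three years, each chip must earn back a substantial slice of its cost every month, which is why "booked demand" matters more than announced capacity.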

Key Takeaway

Amazon’s Indiana build is proof that hyperscalers can stand up non‑Nvidia AI campuses at warp speed—if power, incentives, and an anchor customer align. The upside is control and cost; the risk is that grid, water, and ROI constraints bite before the depreciation curve runs its course.

FAQ

Why no Nvidia?

Cost and control. Trainium 2/3 let AWS tune the stack for Anthropic, ease supply constraints, and hit lower $/compute targets—even if peak specs trail top GPUs.

Will this strain local grids and water?

Load growth is steep. Utilities are adding gas, stretching coal timelines, and layering nuclear/renewables. Expect rate and infrastructure debates to intensify.

Is this an AI bubble?

Some builds may overrun demand, but Rainier is live with a named anchor tenant. Overbuild risk is highest where power is speculative and customers are not locked in.
