Data Centers & AI Compute Campuses
We build and invest in hyperscale data center campuses engineered for the most demanding AI workloads, where power density, cooling architecture, and network connectivity define success.
The Thesis
AI training and inference are fundamentally power-intensive operations. As models scale and GPU densities increase, traditional data center architectures reach their limits.
The next generation of compute infrastructure requires liquid cooling, robust power delivery (often exceeding 100W per square foot), dual-feed redundancy, and sites with expansion optionality.
We focus on campuses that can scale from 100MW to 500MW+, in locations where power access, permitting pathways, and network density align.
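As a rough illustration of how these figures interact, the sketch below converts a campus power envelope into approximate IT load, rack count, and white-space area. All inputs (the 200 MW envelope, 1.2 PUE, 80 kW racks, 150 W/sq ft density) are illustrative assumptions, not project specifications.

```python
# Illustrative capacity arithmetic for an AI campus (all inputs are assumptions).

def campus_capacity(facility_mw: float, pue: float, rack_kw: float, watts_per_sqft: float):
    """Translate a facility power envelope into rough IT load, rack count, and floor area."""
    it_mw = facility_mw / pue                                # IT load after cooling/electrical overhead
    racks = it_mw * 1000 / rack_kw                           # number of racks that IT load supports
    white_space_sqft = it_mw * 1_000_000 / watts_per_sqft    # white-space area at the given density
    return it_mw, racks, white_space_sqft

# Hypothetical 200 MW campus, 1.2 design PUE, 80 kW liquid-cooled racks, 150 W/sq ft.
it_mw, racks, sqft = campus_capacity(facility_mw=200, pue=1.2, rack_kw=80, watts_per_sqft=150)
print(f"IT load: {it_mw:.0f} MW, racks: {racks:.0f}, white space: {sqft:,.0f} sq ft")
# -> IT load: 167 MW, racks: 2083, white space: 1,111,111 sq ft
```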
What we look for.
Power Infrastructure
- Dual-feed substation proximity
- 100+ MW capacity with expansion path
- Grid stability and utility partnership
- Renewable energy integration optionality
Cooling Architecture
- Direct liquid cooling readiness
- High-density power train (30-100 kW/rack)
- Efficient heat rejection systems
- Water availability and rights
Network Connectivity
- Diverse fiber routes and carriers
- Low-latency cloud on-ramp access
- Peering and interconnection density
- Network redundancy and resiliency
Constructability
- Clear permitting pathway
- Zoning and land use compatibility
- EPC and OEM partner availability
- Timeline confidence and risk mitigation
Site Characteristics
- 100+ acre site with expansion land
- Low seismic and flood risk
- Favorable climate for cooling efficiency
- Community support and workforce access
Customer Demand
- Hyperscaler interest or LOI
- Long-term lease structure (10+ years)
- Build-to-suit or powered-shell optionality
- Credit quality and covenant strength
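A minimal sketch of how a checklist like this can be captured as a structured screening record during initial triage; the field names and the simple pass/flag logic are illustrative, not our underwriting model.

```python
# Hypothetical site-screening record mirroring the checklist above (field names are illustrative).
from dataclasses import dataclass, fields

@dataclass
class SiteScreen:
    dual_feed_substation: bool
    capacity_100mw_plus: bool
    dlc_ready_cooling: bool
    diverse_fiber_routes: bool
    clear_permitting_path: bool
    expansion_land_100_acres: bool
    hyperscaler_loi: bool

    def flags(self) -> list[str]:
        """Return the criteria a candidate site fails, for diligence follow-up."""
        return [f.name for f in fields(self) if not getattr(self, f.name)]

site = SiteScreen(True, True, True, True, False, True, False)
print(site.flags())  # -> ['clear_permitting_path', 'hyperscaler_loi']
```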
Engineering-first approach.
Our team conducts in-house technical diligence on every aspect of data center design and operation.
Power Train Design
We model power flow from substation to rack, ensuring redundancy (N+1, 2N), efficiency (PUE targets), and scalability. Every design is reviewed by our in-house electrical engineering team.
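For context on the two metrics named above, a minimal sketch of the underlying arithmetic: PUE is total facility power divided by IT power, and N+1 sizing adds one module beyond the count needed to carry the load. The module sizes and targets shown are assumptions for illustration only.

```python
import math

def facility_power_mw(it_load_mw: float, pue: float) -> float:
    """Total facility draw implied by an IT load and a design PUE (PUE = total / IT)."""
    return it_load_mw * pue

def modules_required(it_load_mw: float, module_mw: float, scheme: str = "N+1") -> int:
    """Number of power-train modules (e.g., UPS/generator blocks) under a redundancy scheme."""
    n = math.ceil(it_load_mw / module_mw)   # N: modules needed to carry the load
    if scheme == "N+1":
        return n + 1                        # one spare module
    if scheme == "2N":
        return 2 * n                        # fully duplicated power path
    return n

# Hypothetical 150 MW IT load, 1.25 design PUE, 25 MW power-train modules.
print(facility_power_mw(150, 1.25))         # -> 187.5 MW at the utility interface
print(modules_required(150, 25, "N+1"))     # -> 7 modules
print(modules_required(150, 25, "2N"))      # -> 12 modules
```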
Cooling System Architecture
We evaluate cooling solutions (air, hybrid, and direct liquid) based on workload density, climate, and efficiency targets. DLC readiness is now a baseline requirement for all new campuses.
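As a back-of-the-envelope illustration of why rack density drives the cooling decision, the sketch below estimates the coolant flow a direct-liquid-cooled rack needs from Q = m·cp·ΔT; the 100 kW rack and 10 °C loop delta-T are assumed values, not design figures.

```python
# Rough DLC coolant-flow estimate from Q = m_dot * cp * delta_T (inputs are assumptions).

WATER_CP_KJ_PER_KG_K = 4.186   # specific heat of water
WATER_KG_PER_L = 1.0           # approximate density of water

def coolant_flow_lpm(rack_kw: float, delta_t_c: float) -> float:
    """Litres per minute of water needed to absorb a rack's heat at a given loop delta-T."""
    kg_per_s = rack_kw / (WATER_CP_KJ_PER_KG_K * delta_t_c)   # kW = kJ/s
    return kg_per_s / WATER_KG_PER_L * 60

# Hypothetical 100 kW rack with a 10 C supply/return delta-T.
print(f"{coolant_flow_lpm(100, 10):.0f} L/min per rack")  # -> ~143 L/min
```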
Network Architecture
We design network infrastructure for low latency, high bandwidth, and carrier diversity. Every campus includes dark fiber, carrier-neutral meet-me rooms, and interconnection optionality.
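A minimal sketch of the propagation-delay arithmetic behind the low-latency requirement: light in single-mode fiber travels at roughly 200,000 km/s, so round-trip time scales with route distance. The 500 km route is an example value, and equipment and queuing delay are excluded.

```python
# Propagation-only latency estimate for a fiber route (route length is an example value).

FIBER_KM_PER_MS = 200.0   # ~200,000 km/s for light in single-mode fiber

def round_trip_ms(route_km: float) -> float:
    """Round-trip propagation delay over a fiber route, ignoring equipment and queuing."""
    return 2 * route_km / FIBER_KM_PER_MS

print(f"{round_trip_ms(500):.1f} ms RTT over a 500 km route")  # -> 5.0 ms
```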
Operational Readiness
We build for uptime from day one: monitoring systems, security protocols, maintenance access, and staff training are integrated into the design process, not added afterward.
Campus Phoenix
A 200MW hyperscale campus designed for liquid-cooled AI workloads.
Located in a high-growth corridor with dual-feed substation access, Campus Phoenix is designed for next-generation GPU clusters with direct liquid cooling infrastructure.
The site benefits from fiber route diversity, a favorable climate for cooling efficiency, and a strong utility partnership supporting phased expansion to 500MW+.
Let's build together.
We're building the infrastructure that powers emerging markets' digital future. Join us.