Google has announced a new research initiative, Project Suncatcher, which aims to explore the feasibility of scaling artificial intelligence (AI) compute in space using solar-powered satellite constellations equipped with Tensor Processing Units (TPUs).
“Inspired by our history of moonshots, from quantum computing to autonomous driving, Project Suncatcher is exploring how we could one day build scalable ML compute systems in space, harnessing more of the sun’s power (which emits more power than 100 trillion times humanity’s total electricity production),” said Google CEO Sundar Pichai, in a post on X.
The project, led by Travis Beals, senior director of Paradigms of Intelligence at Google, proposes a system where solar-powered satellites in low Earth orbit (LEO) perform machine learning (ML) workloads while communicating through free-space optical links.
“Space may be the best place to scale AI compute,” Beals said in a statement. “In the right orbit, a solar panel can be up to 8 times more productive than on Earth, and produce power nearly continuously, reducing the need for batteries.”
According to a preprint paper released alongside the announcement, “Towards a future space-based, highly scalable AI infrastructure system design”, the initiative focuses on building modular, interconnected satellite networks that can function like data centres in orbit.
Technical design and challenges
The proposed system would operate in a dawn–dusk sun-synchronous orbit, ensuring near-constant exposure to sunlight and reducing reliance on heavy batteries.
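The dawn–dusk sun-synchronous geometry follows from standard orbital mechanics rather than anything Google-specific: Earth's equatorial bulge (the J2 term) slowly precesses an orbit's plane, and at the right inclination that precession tracks the Sun's apparent motion, keeping the orbit's edge-on face permanently sunlit. A minimal sketch, using textbook Earth constants and an assumed 650 km altitude (the altitude is illustrative, not from the paper):

```python
# Sun-synchronous orbit condition (standard orbital mechanics, not from
# Google's paper): pick the inclination at which J2-driven nodal
# precession matches the Sun's apparent motion of ~0.9856 deg/day.
import math

MU = 398600.4418   # Earth's gravitational parameter, km^3/s^2
R_E = 6378.137     # Earth's equatorial radius, km
J2 = 1.08263e-3    # Earth oblateness coefficient
SSO_RATE = 2 * math.pi / (365.2422 * 86400)  # required nodal rate, rad/s

def sso_inclination_deg(altitude_km: float) -> float:
    """Inclination of a circular sun-synchronous orbit at a given altitude."""
    a = R_E + altitude_km                 # semi-major axis, km
    n = math.sqrt(MU / a**3)              # mean motion, rad/s
    cos_i = -SSO_RATE / (1.5 * J2 * (R_E / a) ** 2 * n)
    return math.degrees(math.acos(cos_i))

# An assumed 650 km LEO altitude gives the familiar ~98 deg,
# slightly retrograde inclination of sun-synchronous satellites.
print(f"{sso_inclination_deg(650):.1f} deg")
```

The slightly retrograde inclination (just under 99 degrees for low Earth orbits) is why sun-synchronous constellations all cluster near polar orbits.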
To achieve data centre-level performance, the satellites would need to support inter-satellite links capable of tens of terabits per second. Google’s researchers said they have already achieved 1.6 terabits per second transmission in lab conditions using a single optical transceiver pair.
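The gap between the 1.6 Tbps demonstrated per transceiver pair and the tens of terabits per second the system would need can be sanity-checked with simple arithmetic; the aggregate targets below are illustrative, not figures from the paper:

```python
# Back-of-the-envelope check (illustrative targets, not from the paper):
# how many 1.6 Tbps optical transceiver pairs would one satellite need
# to reach a given aggregate inter-satellite bandwidth?
import math

def transceiver_pairs_needed(target_tbps: float, per_pair_tbps: float = 1.6) -> int:
    """Round up to the next whole transceiver pair."""
    return math.ceil(target_tbps / per_pair_tbps)

for target in (10, 40, 100):  # "tens of terabits per second"
    print(f"{target} Tbps -> {transceiver_pairs_needed(target)} pairs")
```

Even the low end of "tens of terabits" implies several transceiver pairs per link, which is part of why the paper leans on close formation flying to keep per-link optical power budgets manageable.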
Achieving such bandwidth in orbit requires satellites to fly in close formation, just a few hundred metres apart. “With satellites positioned this closely, only modest station-keeping manoeuvres would be needed to maintain stability,” the paper noted.
Radiation tolerance was another major focus. Tests conducted on Google’s Trillium v6e Cloud TPU chips under a 67 MeV proton beam showed that the chips could withstand nearly three times the expected five-year mission radiation dose without failure.
Historically, launch costs have made large-scale space infrastructure unviable. However, Google’s analysis suggests that if launch costs fall below $200 per kilogram, a level projected for the mid-2030s, operating a space-based AI system could be cost-competitive with terrestrial data centres.
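The shape of that economic argument can be sketched with a rough amortisation: spread the cost of launching a kilowatt's worth of satellite over the mission lifetime and compare it with buying the same kilowatt of electricity on the ground. The specific mass (kilograms per kilowatt), electricity price, and mission life below are assumptions for illustration, not figures from Google's analysis:

```python
# Rough amortisation sketch (assumed figures, not Google's actual model):
# compare launch cost per kW of delivered power, spread over the mission,
# against buying the same power terrestrially around the clock.

def launch_cost_per_kw_year(price_per_kg: float,
                            kg_per_kw: float = 10.0,   # assumed specific mass
                            mission_years: float = 5.0) -> float:
    """Launch cost amortised per kW of delivered power per year."""
    return price_per_kg * kg_per_kw / mission_years

# Terrestrial comparison point: assumed ~$0.08/kWh, running 24/7.
ground_cost_per_kw_year = 0.08 * 24 * 365  # roughly $700 per kW-year

for price in (1500, 500, 200):
    space = launch_cost_per_kw_year(price)
    print(f"${price}/kg -> ${space:.0f}/kW-year vs ~${ground_cost_per_kw_year:.0f} on the ground")
```

Under these assumed numbers, launch at today's roughly $1,500/kg is several times the terrestrial figure, while $200/kg drops below it, which is the intuition behind the mid-2030s crossover the article cites.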
Next steps
The company’s next milestone is a partnership with Planet Labs to launch two prototype satellites by early 2027. The mission will test TPU performance in orbit and evaluate optical inter-satellite links for distributed ML tasks.
“Our initial analysis shows that the core concepts of space-based ML compute are not precluded by fundamental physics or insurmountable economic barriers,” Beals said. “Significant engineering challenges remain, such as thermal management, high-bandwidth ground communications, and on-orbit system reliability.”

