Planetary Scale AI: Watching, Visualizing And Forecasting The Entire World In Realtime

It was a tremendous honor to speak to my fellow GDEs at the Americas Google Developer Experts (GDE) Summit last week, following Google I/O 2025, about the GDELT Project and what applying planetary-scale AI looks like in our mission to watch, understand, visualize and even forecast the entire world each day. The video of the talk is coming soon!

What does truly planetary-scale AI look like in practice? What does it mean to monitor the world’s information in realtime across text, imagery, audio and video in more than 400 languages and attempt to make sense of the entire planet each day? Gemini serves as a planetary-scale persona-based news recommendation service: it accepts a personality like “US manufacturing supply chain analyst”, deeply reasons over global events, teases out the underlying trends and macro stories, and constructs personalized research reports that offer the ultimate annotated bibliography. We combine multi-trillion-edge knowledge graph reasoning in BigQuery, visual and textual embeddings, realtime Spanner search, advanced Timeseries and Inference API detection, and new hybrid large + classical model innovations to scale beyond current context window limitations towards global reasoning. We have OCR’d 19 billion seconds of global television news through Vision AI, yielding 2PB of annotations from 6 quadrillion pixels spanning a quarter century, and transcribed 3 million hours of highly multilingual speech in 150 languages spanning 50 countries through Chirp. We translate and analyze through Cloud Translation, Cloud Video and the NLP API, with Bigtable serving as a digital twin over it all. Here’s what it looks like to apply the world’s most advanced analytics and AI at truly planetary scale across nearly the entire GCP analytics stack, and the incredible new insights we gain about the heartbeat of the planet we call home.
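To make the persona-based recommendation idea concrete, here is a minimal sketch, in Python, of how such a research prompt might be assembled before being handed to a model like Gemini. This is not GDELT’s actual implementation: the `build_persona_prompt` helper, the article fields, and the prompt wording are all illustrative assumptions.

```python
# Illustrative sketch only: assemble a persona-based deep-reasoning prompt
# over a day's worth of global news coverage. Field names and prompt text
# are hypothetical, not GDELT's production pipeline.

def build_persona_prompt(persona: str, articles: list[dict]) -> str:
    """Build a research-report prompt for a given analyst persona."""
    # Render each source as an annotated-bibliography entry.
    bibliography = "\n".join(
        f"- {a['title']} ({a['language']}, {a['date']}) — {a['url']}"
        for a in articles
    )
    return (
        f"You are a {persona}. Review the following coverage of today's "
        "global events, identify the underlying trends and macro stories "
        "relevant to your role, and write a personalized research report "
        "that cites each source, forming an annotated bibliography.\n\n"
        f"Sources:\n{bibliography}"
    )

# Example usage with a single hypothetical article record.
articles = [
    {"title": "Port congestion eases at major Pacific hubs",
     "url": "https://example.com/article",
     "language": "es", "date": "2025-05-20"},
]
prompt = build_persona_prompt(
    "US manufacturing supply chain analyst", articles
)
# The resulting prompt would then be sent to Gemini for generation.
```

The interesting design question is upstream of this step: which of the day’s millions of articles make it into the bibliography at all, which is where the knowledge graph reasoning, embeddings and realtime search described above come in.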