Kalev spoke today as part of the Strategic Multilayer Assessment (SMA) Speaker series on "The Promise & Perils Of AI: What We’ve Learned From Some Of The Largest Real World Deployments Of Generative & Classical AI To Understand Global News":
From the large language models at the vanguard of the “generative AI” revolution to more humble technologies like speech recognition, image and video analysis, translation and textual analysis, AI is everywhere today. How can we make use of these advanced tools? Where can they be most effective, and what are their hidden dangers, from hallucination and plagiarism to bias and drift?
The potential of AI for understanding our world is unprecedented. From automated alerts of the earliest glimmers of tomorrow’s biggest stories like Covid to mapping the spread and evolution of global stories like Ukraine, AI can look across billions of articles in hundreds of languages to distill the chaotic cacophony of the world into the insights and leads analysts need. Video and speech tools can transform video archives of hundreds of news channels across dozens of countries, spanning millions of hours over decades, into rich searchable archives that allow analysts to look beyond the printed word at scale for the first time, documenting visual narratives and “seeing” global stories from the ground. Machine translation and text analysis make it possible to search global sources in hundreds of languages. Automated fact checking systems can identify inconsistencies and conflicting sources, while narrative analysis can help track which perspectives and information are spreading where. Given a set of narrative goals, “Autonomous Diplomacy” systems can monitor global media about or from a given country or context and autonomously write point-by-point amplifying or counter narratives in realtime, creating multiple versions for different audiences, drawing imagery and assets from across an organization, and transforming that material into summaries, podcasts and video takes across all of the modalities and platforms available today. Rich multimodal models and embeddings make it possible to assess the visual landscape of an entire nation’s media and compare visual storytelling across all participants in a rapidly evolving narrative. Autonomous agents powered by LLMs, LSMs, LMMs and classical AI and statistical systems can scan the global news landscape in realtime across languages and modalities, summarize developments for myriad customized audiences and react to them completely autonomously.
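To make the embedding-based narrative mapping described above a bit more concrete, here is a minimal illustrative sketch: a multilingual sentence-embedding model groups headlines written in different languages into shared candidate narratives. The model name, headlines and cluster count are placeholder assumptions for illustration only, not a description of GDELT's actual pipeline.

```python
# Illustrative sketch: clustering multilingual headlines into shared "narratives"
# with sentence embeddings. Model, headlines and cluster count are hypothetical.
from sentence_transformers import SentenceTransformer
from sklearn.cluster import KMeans

headlines = [
    "Grain shipments resume from Black Sea ports",            # English
    "Les exportations de céréales reprennent en mer Noire",   # French
    "Central bank raises interest rates to curb inflation",   # English
    "La banque centrale relève ses taux pour freiner l'inflation",  # French
]

# A multilingual model maps semantically similar text in different languages
# to nearby points in the same embedding space.
model = SentenceTransformer("paraphrase-multilingual-MiniLM-L12-v2")
embeddings = model.encode(headlines, normalize_embeddings=True)

# Group headlines into candidate narratives; in practice the number of
# clusters would be estimated rather than fixed in advance.
labels = KMeans(n_clusters=2, n_init="auto", random_state=0).fit_predict(embeddings)

for label, headline in sorted(zip(labels, headlines)):
    print(label, headline)
```

In this toy setup the French and English versions of the same story land in the same cluster even though they share no vocabulary, which is the basic property that cross-lingual narrative tracking relies on.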
At the same time, AI has enormous limitations, and the large language model revolution brings with it new and uniquely existential risks. From the most obvious pitfalls like hallucinated details, plagiarized summaries and fabricated sources to far less discussed topics like the existential biases encoded in AI-powered “semantic search engines” and “generative search” tools, we’ll explore the dangers, both visible and hidden, confronting organizations as they increasingly adopt these systems, including forthcoming emergent risks that few organizations are likely even aware of and existential failures of current architectures that render them inapplicable to key use cases.
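As one hedged illustration of the hallucination problem above, the sketch below flags generated summary sentences that have little lexical overlap with the source article using a simple TF-IDF similarity check. The example texts and the 0.3 threshold are invented for illustration; real grounding and fact-verification systems are substantially more sophisticated.

```python
# Illustrative sketch: a crude "grounding" check that flags summary sentences
# with little lexical overlap against the source article. Texts and threshold
# are hypothetical examples, not a production hallucination detector.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

source_sentences = [
    "The storm made landfall on Tuesday near the coastal city.",
    "Officials ordered evacuations of low-lying neighborhoods.",
    "Power outages affected roughly 200,000 households.",
]
summary_sentences = [
    "The storm hit the coast on Tuesday, prompting evacuations.",
    "Authorities confirmed 12 fatalities.",  # not supported by the source
]

# Fit a shared vocabulary over both texts, then compare each summary sentence
# against every source sentence.
vectorizer = TfidfVectorizer().fit(source_sentences + summary_sentences)
src_vecs = vectorizer.transform(source_sentences)
sum_vecs = vectorizer.transform(summary_sentences)

for sentence, sims in zip(summary_sentences, cosine_similarity(sum_vecs, src_vecs)):
    support = sims.max()  # best match against any source sentence
    flag = "POSSIBLY UNSUPPORTED" if support < 0.3 else "supported"
    print(f"{support:.2f}  {flag}  {sentence}")
```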
Here we’ll draw from the GDELT Project’s collaborations across the world to showcase real-world examples across all of these topics, from classical AI and mass analytics to Large Language, Speech and Multimodal Models, crystallizing both the promise and pitfalls of AI for the organizations of today and tomorrow.