We're tremendously excited to announce that Kalev will be speaking at the 35th MINDS Conference in Toronto next month!
From the large language models at the vanguard of the “generative AI” revolution to more humble technologies like speech recognition, image and video analysis, translation and textual analysis, AI is everywhere today. How can journalists and newsrooms make use of these advanced tools, where can they be most effective, and what are their hidden dangers, from hallucination and plagiarism to bias and drift?
The potential of AI for journalism is unprecedented. From automated alerts at the earliest glimmers of tomorrow’s biggest stories like Covid to mapping the spread and evolution of global stories like Ukraine, AI can look across billions of articles in hundreds of languages to distill the chaotic cacophony of the world into the insights and leads reporters need. Video and speech tools can transform the video archives of hundreds of news channels across dozens of countries, spanning millions of hours over decades, into rich searchable collections that let reporters look beyond the printed word at scale for the first time, documenting visual narratives and “seeing” global stories from the ground. Machine translation and text analysis make it possible to search global sources in hundreds of languages. Automated fact-checking systems can identify inconsistencies and conflicting sources, while narrative analysis can help trace which perspectives and information are spreading where. Automated rewriters let newsrooms generate multiple versions of a story for different audiences and even draw imagery and assets from across the newsroom to transform a single textual article into summaries, podcasts and video takes, taking advantage of all of the different modalities and platforms available today.
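To make the idea of automated story alerts a bit more concrete, here is a minimal sketch in Python against GDELT’s public DOC 2.0 API of how a newsroom might watch global coverage volume for a topic and flag a sudden spike. The query term, spike threshold and response handling are illustrative assumptions for this sketch, not production alerting code.

```python
# Illustrative sketch: polling the GDELT DOC 2.0 API for a simple
# coverage-volume alert. The query string and threshold below are
# hypothetical placeholders chosen for this example.
import json
import urllib.parse
import urllib.request

DOC_API = "https://api.gdeltproject.org/api/v2/doc/doc"


def coverage_timeline(query: str, timespan: str = "7d") -> list[dict]:
    """Fetch a volume-over-time series for a query across GDELT's global coverage."""
    params = urllib.parse.urlencode({
        "query": query,
        "mode": "timelinevol",   # coverage volume over time
        "format": "json",
        "timespan": timespan,
    })
    with urllib.request.urlopen(f"{DOC_API}?{params}") as resp:
        data = json.load(resp)
    # Response shape assumed from the API's JSON timeline output:
    # {"timeline": [{"data": [{"date": ..., "value": ...}, ...]}]}
    return data["timeline"][0]["data"]


def spike_alert(query: str, threshold: float = 2.0) -> bool:
    """Flag a possible emerging story if the latest volume is well above the recent average."""
    values = [point["value"] for point in coverage_timeline(query)]
    if not values:
        return False
    avg = sum(values) / len(values)
    return avg > 0 and values[-1] > threshold * avg


if __name__ == "__main__":
    if spike_alert('"avian influenza"'):
        print("Coverage spike detected: worth a closer look.")
```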
At the same time, AI has enormous limitations, and the large language model revolution brings with it newly existential and unique risks to the journalism profession. From the most obvious pitfalls like hallucinated details, plagiarized summaries and fabricated sources to far less discussed topics like the cultural, gender and racial biases encoded in AI-powered “semantic search engines” and “generative search” tools, we’ll explore the dangers, both visible and hidden, confronting newsrooms as they increasingly adopt these systems, including emerging reputational and legal risks that few newsrooms are likely even aware of.
For more than a quarter-century we’ve been working with journalists and researchers across the world to apply advanced AI to understand the heartbeat of Planet Earth. With a special focus on the past year and the rise of generative AI, we’ll draw on the GDELT Project’s collaborations around the world to showcase real-world examples of all of these topics, crystallizing both the promise and the pitfalls of AI for the newsroom of today and tomorrow.