As nations try to navigate the public health complexities of encouraging mask wearing during the COVID-19 pandemic, a key question is what the public sees when they turn on the news. Do viewers of television news see an endless stream of mask wearing? Are elected and health officials, first responders and reporters themselves shown wearing masks at all times, removing them only for brief moments to speak, or are they unmasked most of the time? Does the news depict mass mask-wearing compliance across the nation or are viewers confronted with endless hours of maskless crowds enjoying pre-pandemic freedoms?
What kinds of masks are seen the most? Are lawmakers wearing unobtainable N95 masks while chiding the public to leave them for healthcare workers, or are they following their own advice by wearing primarily surgical masks and cloth face coverings? How common is each mask type, from cloth coverings and neck gaiters to surgical masks and full-face respirators? Are cloth coverings more common than surgical masks in news imagery?
What if we could use Google's Cloud Video and Vision APIs to scan television news coverage and worldwide online news imagery over the course of this year, compiling a list of every second of airtime and every online image URL in which a mask appeared? While the built-in Video and Vision API models don't distinguish between different kinds of masks, such as surgical vs. N95 vs. cloth, they allow us to rapidly triage enormous volumes of footage and imagery and identify the subset that contains masks.
This would make it possible to perform rich secondary analyses of this imagery: creating new kinds of filters designed to distinguish specific kinds of masks (cloth, scarves, neck gaiters, surgical, N95, KN95, PAPR, etc.), differentiating a firefighter wearing a hazmat respirator from a civilian wearing a surgical mask, and helping catalog just how prevalent masks are in this year's imagery.
Ideally, we might even be able to count how many human faces are present in each image or second of airtime and what percentage of them are wearing a mask: to identify, for example, what percentage of a crowd is masked or whether a mask-wearing politician at a podium is surrounded by unmasked aides. Simply having a density measure of what percentage of faces were masked during each second of airtime would be a huge step towards understanding the messaging seen by the public.
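As a rough illustration of how such a density measure might be computed, the sketch below uses the Cloud Vision API's face detection to count the faces in a single frame thumbnail and crops each face for a downstream mask classifier. The classifier itself is hypothetical (the built-in Vision API models do not decide mask vs. no-mask per face), so the classify_mask_crop function is a placeholder you would have to supply.

```python
# Hedged sketch: count faces in one frame thumbnail with the Cloud Vision API
# and hand each face crop to a (hypothetical) mask classifier to estimate the
# fraction of faces that are masked. classify_mask_crop() is a placeholder.
import io

import requests
from PIL import Image
from google.cloud import vision


def masked_face_density(thumbnail_url, classify_mask_crop):
    content = requests.get(thumbnail_url, timeout=30).content
    client = vision.ImageAnnotatorClient()
    response = client.face_detection(image=vision.Image(content=content))
    faces = response.face_annotations
    if not faces:
        return None  # no faces detected in this frame

    frame = Image.open(io.BytesIO(content))
    masked = 0
    for face in faces:
        xs = [v.x for v in face.bounding_poly.vertices]
        ys = [v.y for v in face.bounding_poly.vertices]
        crop = frame.crop((min(xs), min(ys), max(xs), max(ys)))
        if classify_mask_crop(crop):  # user-supplied mask/no-mask model
            masked += 1
    return masked / len(faces)
```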
To help explore these questions, today we are releasing several COVID-19-related extracts of the Google Cloud Video API annotations in the Visual Global Entity Graph V2, cataloging television news programming on BBC, CNN, MSNBC and Fox News since January 1, 2020. We are also releasing an extract of the Google Cloud Vision API annotations in the Visual Global Knowledge Graph, which catalogs more than half a billion worldwide online news images since 2016. For each of the television news extracts below, we scanned all of the airtime-aggregated Video API annotations for seconds of airtime assigned one of the relevant labels and compiled those annotation records into the extract. Each record contains a condensed summary of the Video API's annotations, along with the URL of the reduced thumbnail image showing the starting frame of that second of airtime, which can be non-consumptively filtered by additional computer vision tools for more in-depth mask analysis. Similarly, for online news imagery, we scanned all of the Cloud Vision API annotations for mask-related labels and provide the API's complete output for each matching image, enriched with any EXIF metadata found in the image, along with the source URL of the image and the URL of the first article it was seen in.
Mask Imagery On Television News
This extract contains Cloud Video API annotations for 43,249 seconds of airtime on BBC, CNN, MSNBC and Fox News from Jan. 1, 2020 through Aug. 21, 2020 in which the Video API assigned a label of "mask" to at least one frame during that second of airtime. This spans a wide range of masks, from football helmets and hockey masks to surgical masks. Each row is a JSON record containing a summarized version of the Cloud Video API annotations for that second of airtime, along with the URL of a thumbnail image that can be used for further visual analysis (a short processing sketch follows the query below).
- See Examples Of Matching Imagery Using The AI Television Explorer.
- Download Dataset. (22MB compressed / 162MB uncompressed).
This is the actual query used to compile the dataset:
SELECT * FROM `gdelt-bq.gdeltv2.vgegv2_iatv`, UNNEST(entities) entity WHERE DATE(date) >= "2020-01-01" and (entity.name='mask')
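As one way to work with the downloaded extract, the sketch below streams the newline-delimited JSON records and collects the thumbnail URL from each one for secondary analysis. The filename and the "previewUrl" key are assumptions for illustration; inspect the keys of a real record (printed by the first iteration) and substitute the actual thumbnail field.

```python
# Hedged sketch: stream the (assumed gzip-compressed) newline-delimited JSON
# extract and collect the per-second thumbnail URLs for further analysis.
# "previewUrl" is an assumed field name; check the printed keys and adjust.
import gzip
import json

thumbnail_urls = []
with gzip.open("tv_mask_extract.json.gz", "rt", encoding="utf-8") as f:
    for i, line in enumerate(f):
        record = json.loads(line)
        if i == 0:
            print(sorted(record.keys()))  # discover the actual field names
        url = record.get("previewUrl")    # assumed thumbnail field name
        if url:
            thumbnail_urls.append(url)

print(f"Collected {len(thumbnail_urls)} thumbnail URLs")
```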
Expanded Mask Imagery On Television News
This extract contains all of the annotations above, plus matches for "scarf" and "headgear", totaling 1.3M seconds of airtime. These broader labels capture a wider range of mask-related imagery, especially scarves, neck gaiters and other informal cloth face coverings, but at the cost of a much higher density of unrelated imagery, such as people simply wearing hats and scarves. This dataset is provided to cover these less traditional face coverings. Each row is a JSON record containing a summarized version of the Cloud Video API annotations for that second of airtime, along with the URL of a thumbnail image that can be used for further visual analysis; a sketch for tallying the three labels follows the query below.
- See Examples Of Matching Imagery Using The AI Television Explorer.
- Download Dataset. (763MB compressed / 5.2GB uncompressed).
This is the actual query used to compile the dataset:
SELECT * FROM `gdelt-bq.gdeltv2.vgegv2_iatv`, UNNEST(entities) entity WHERE DATE(date) >= "2020-01-01" and (entity.name='mask' OR entity.name='scarf' OR entity.name='headgear')
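Since this expanded extract mixes the tighter "mask" matches with the looser "scarf" and "headgear" matches, a useful first step is to tally how many seconds each label contributes. The sketch below assumes each record carries the matched label under an "entity" field with a "name" key, mirroring the UNNEST in the query above; adjust to the actual layout of the downloaded records.

```python
# Hedged sketch: count seconds of airtime per matched label in the expanded
# extract. Assumes the unnested label is stored as record["entity"]["name"],
# mirroring the BigQuery UNNEST above; adjust to the real field layout.
import gzip
import json
from collections import Counter

label_seconds = Counter()
with gzip.open("tv_mask_expanded_extract.json.gz", "rt", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        entity = record.get("entity", {})            # assumed field name
        label_seconds[entity.get("name", "?")] += 1  # mask / scarf / headgear

for label, seconds in label_seconds.most_common():
    print(f"{label}: {seconds:,} seconds")
```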
Medical Imagery On Television News
This extract contains Cloud Video API annotations for 1.9M seconds of airtime on BBC, CNN, MSNBC and Fox News from Jan. 1, 2020 through Aug. 21, 2020 depicting medical-related imagery. It spans a wide range of depictions and is included because it may capture other kinds of medically protective imagery relevant to mask cataloging. Each row is a JSON record containing a summarized version of the Cloud Video API annotations for that second of airtime, along with the URL of a thumbnail image that can be used for further visual analysis (a per-day tally sketch follows the query below).
- See Examples Of Matching Imagery Using The AI Television Explorer.
- Download Dataset. (1.1GB compressed / 7.9GB uncompressed).
This is the actual query used to compile the dataset:
SELECT * FROM `gdelt-bq.gdeltv2.vgegv2_iatv`, UNNEST(entities) entity WHERE DATE(date) >= "2020-01-01" and (entity.name='medical' OR entity.name='physician' OR entity.name='surgeon' OR entity.name='hospital' OR entity.name='medical equipment')
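To get a sense of how medical imagery rises and falls over the year, a simple per-day aggregation of the extract works well. The "date" field is referenced in the query above, but its exact timestamp format in the JSON download is an assumption here, so adjust the parsing if needed.

```python
# Hedged sketch: tally seconds of medical-related airtime per day. The "date"
# field appears in the BigQuery query above; the timestamp format assumed here
# (ISO-style, starting with YYYY-MM-DD) may differ, so adjust the slicing.
import gzip
import json
from collections import Counter

seconds_per_day = Counter()
with gzip.open("tv_medical_extract.json.gz", "rt", encoding="utf-8") as f:
    for line in f:
        record = json.loads(line)
        day = str(record.get("date", ""))[:10]   # keep just YYYY-MM-DD
        seconds_per_day[day] += 1

for day in sorted(seconds_per_day):
    print(day, seconds_per_day[day])
```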
Online News Imagery Of Masks
This extract contains Cloud Vision API annotations for 91,000 worldwide online news images from Jan. 1, 2020 through Aug. 21, 2020 in which "mask" appears anywhere in any of the labels assigned by the Vision API. Because it matches any label containing "mask," it includes a wide range of depictions beyond medical face coverings. Each row is a JSON record containing extracted and summarized versions of the Vision API's output, along with the API's full JSON output in the last field, enriched with a set of additional fields including any embedded EXIF metadata. (NOTE: the raw JSON is escaped JSON embedded within a JSON field, so you will have to parse the JSON record to extract that field and then parse a second time to access the full record; a parsing sketch follows the query below.)
- See Examples Of Matching Imagery Using The AI Television Explorer.
- Download Dataset. (822MB compressed / 4.4GB uncompressed).
This is the actual query used to compile the dataset:
SELECT * FROM `gdelt-bq.gdeltv2.cloudvision_partitioned` WHERE DATE(_PARTITIONTIME) >= "2020-01-01" and Labels like '%mask%'
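Because the full Vision API output is escaped JSON stored inside a JSON field, reading it takes two parsing passes, as noted above. The sketch below assumes that field is named "RawJSON" (matching the column name used in the BigQuery table); verify the actual key name in the downloaded records before relying on it.

```python
# Hedged sketch: the full Cloud Vision API output is escaped JSON embedded in a
# JSON field, so each line must be parsed twice: once for the outer record and
# once for the embedded field. "RawJSON" is an assumed field name; check an
# actual record and adjust.
import gzip
import json

with gzip.open("online_news_mask_extract.json.gz", "rt", encoding="utf-8") as f:
    for line in f:
        outer = json.loads(line)            # first parse: the row itself
        raw = outer.get("RawJSON")          # assumed name of the embedded field
        if not raw:
            continue
        vision_output = json.loads(raw)     # second parse: full API output
        print(sorted(vision_output.keys())) # inspect the full response
        break                               # demo: stop after one record
```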
We hope these datasets help you explore mask-wearing in the visual landscape of news in 2020!
This work was made possible through the Media-Data Research Consortium (M-DRC)'s Google Cloud COVID-19 Research Grant to support “Quantifying the COVID-19 Public Health Media Narrative Through TV & Radio News Analysis.”