When not codifying the world's news, GDELT creator Kalev Leetaru collaborated earlier this year with the Internet Archive and Flickr to extract the images from 600 million digitized book pages, dating back 500 years and drawn from over 1,000 libraries worldwide, and to make them all browsable and searchable (via both the metadata of the original book and the text surrounding each image), “reimagining” the world’s books. When the project debuted this past August it attracted substantial global media attention.
In the three and a half months since, the collection has sparked widespread interest, supplying everything from illustrations for Atlantic articles, to medical visuals for Mashable and Medical Daily, to old diagrams for TechCrunch, to historical scenes of London's water transportation networks for Engadget. Indeed, a Google search for "internet archive book images" today yields more than 124,000 web pages that use one of the collection's 2.7 million images and include a formal textual citation back to the collection.
Most recently, as part of National Novel Generation Month (NaNoGenMo), Liza Daly created an algorithmically generated book based on the Voynich Manuscript that used the collection as the source for all of its imagery. In particular, her approach used the keyword search feature of the collection to retrieve topically specific images for each section. The results are quite haunting and a powerful showcase of the potential of "reseeing" the images of 500 years of the world's books. She even released all of her code on GitHub for others to use!
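Since the collection lives in a public Flickr account, this kind of keyword-driven image retrieval is easy to experiment with through the standard Flickr REST API. The sketch below is not Daly's implementation, just a minimal illustration of the idea, assuming you have your own Flickr API key and that the account's public username is "internetarchivebookimages": it looks up the account's NSID, runs a keyword search against it (the photo descriptions include the text surrounding each image, which is what makes topical search possible), and prints direct image URLs.

```python
import requests

FLICKR_API = "https://api.flickr.com/services/rest/"
API_KEY = "YOUR_FLICKR_API_KEY"  # assumption: your own registered Flickr API key

def flickr_call(method, **params):
    """Call a Flickr REST API method and return the parsed JSON response."""
    params.update({
        "method": method,
        "api_key": API_KEY,
        "format": "json",
        "nojsoncallback": 1,
    })
    resp = requests.get(FLICKR_API, params=params, timeout=30)
    resp.raise_for_status()
    return resp.json()

def search_book_images(keyword, per_page=5):
    """Search the Internet Archive Book Images account for a keyword
    and return direct image URLs."""
    # Resolve the account's NSID from its public username (assumed here).
    user = flickr_call("flickr.people.findByUsername",
                       username="internetarchivebookimages")
    nsid = user["user"]["nsid"]

    # Keyword search restricted to that single account.
    result = flickr_call("flickr.photos.search",
                         user_id=nsid, text=keyword,
                         per_page=per_page, sort="relevance")

    # Build standard Flickr photo-source URLs ("_b" requests the large size).
    return ["https://live.staticflickr.com/{server}/{id}_{secret}_b.jpg"
            .format(**photo)
            for photo in result["photos"]["photo"]]

if __name__ == "__main__":
    for url in search_book_images("alchemy"):
        print(url)
```

From there, a generated book along these lines only needs to choose a keyword for each section and weave the returned images into the text.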