Director, Digital Humanities Lab
Although commonly used by commercial companies to process the millions of images generated by smartphones, artificial neural networks have not yet seen wide adoption in the cultural heritage space. This talk examines how such “machine vision” techniques can be used to analyze and organize large visual collections containing tens of thousands of images. The focus is on two practical use cases: visual similarity search and collection-level visualization. In addition, the talk explores the use of Generative Adversarial Networks to “forge” hitherto-unseen images from large digital collections, and this technique’s possible role for artistic and interpretive purposes. All code shown is open source, and all examples are drawn from datasets held by Yale’s libraries, museums, and galleries.
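As a rough illustration of the visual-similarity use case, the sketch below performs nearest-neighbor search over image embedding vectors with cosine similarity. In practice each vector would come from a pretrained convolutional network (e.g. the penultimate layer of a model such as VGG or ResNet); here random stand-in vectors keep the example self-contained, and the function name and dimensions are illustrative assumptions, not the talk's actual code.

```python
import numpy as np

# Stand-in embeddings: in a real pipeline, each row would be a feature
# vector extracted from one collection image by a pretrained CNN.
rng = np.random.default_rng(0)
embeddings = rng.normal(size=(1000, 512))  # 1000 images, 512-d vectors

def most_similar(query_idx, embeddings, k=5):
    """Return indices of the k images most visually similar to the
    query image, ranked by cosine similarity of embedding vectors."""
    norms = np.linalg.norm(embeddings, axis=1)
    query = embeddings[query_idx]
    sims = embeddings @ query / (norms * np.linalg.norm(query))
    sims[query_idx] = -np.inf  # exclude the query image itself
    return np.argsort(sims)[::-1][:k]

print(most_similar(0, embeddings))
```

The same embedding matrix, projected to two dimensions (e.g. with UMAP or t-SNE), also underlies collection-level visualization: nearby points correspond to visually similar images.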