80 Million Tiny Images
80 Million Tiny Images is a dataset intended for training machine learning systems. It was constructed by Antonio Torralba, Rob Fergus, and William T. Freeman in a collaboration between MIT and New York University, and was published in 2008.
The dataset is 760 GB in size. It contains 79,302,017 color images of 32×32 pixels, scaled down from images scraped from the World Wide Web over eight months. The images are classified into 75,062 classes, each corresponding to a non-abstract noun in WordNet; an image may appear in more than one class. The dataset was motivated by non-parametric models of neural activations in the visual cortex upon seeing images.[1][2]
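Because every image has the same 32×32×3 shape, such a collection can be stored as fixed-size records and read by seeking. The following is an illustrative sketch only: the raw-uint8 record layout and row-major byte order are assumptions, not the dataset's documented on-disk format.

```python
import numpy as np

IMG_BYTES = 32 * 32 * 3  # one 32×32 RGB image as raw uint8 bytes (assumed layout)

def read_tiny_image(path, index):
    """Read the image at `index` from a flat binary file of fixed-size 32x32x3 records."""
    with open(path, "rb") as f:
        f.seek(index * IMG_BYTES)   # fixed-size records make random access a single seek
        buf = f.read(IMG_BYTES)
    if len(buf) < IMG_BYTES:
        raise IndexError(f"no image at index {index}")
    # Assumed row-major (height, width, channel) order; the real file's layout may differ.
    return np.frombuffer(buf, dtype=np.uint8).reshape(32, 32, 3)
```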
The CIFAR-10 and CIFAR-100 datasets use subsets of the images in this dataset, but with independently generated labels, as the original labels were not reliable. CIFAR-10 has 6,000 examples of each of 10 classes, and CIFAR-100 has 600 examples of each of 100 non-overlapping classes.[3]
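As a sketch of how the CIFAR subset is typically accessed, the snippet below loads one batch of the publicly documented CIFAR-10 "python version" files and counts the labels; the file name is an example, and individual training batches are only roughly class-balanced (the 6,000-per-class split holds across the full training and test sets).

```python
import pickle
from collections import Counter

def load_cifar10_batch(path):
    """Load one CIFAR-10 "python version" batch: 10,000 rows of 3,072 uint8 pixel values."""
    with open(path, "rb") as f:
        batch = pickle.load(f, encoding="bytes")
    data = batch[b"data"]      # shape (10000, 3072): 1024 red, 1024 green, 1024 blue values per row
    labels = batch[b"labels"]  # 10,000 integers in the range 0..9
    images = data.reshape(-1, 3, 32, 32).transpose(0, 2, 3, 1)  # to (N, 32, 32, 3) for display
    return images, labels

images, labels = load_cifar10_batch("data_batch_1")  # example file name from the CIFAR-10 archive
print(Counter(labels))  # roughly 1,000 images per class in each training batch
```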
Construction
The dataset was first reported in a technical report in April 2007, partway through the construction process, when the collection contained only 73 million images.[4] The full dataset was published in 2008.[1]
The authors began with all 75,846 non-abstract nouns in WordNet and, for each noun, scraped seven image search engines: AltaVista, Ask.com, Flickr, Cydral, Google, Picsearch, and Webshots. After eight months of scraping, they had obtained 97,245,098 images. Because they did not have enough storage, the images were downsized to 32×32 pixels as they were scraped.
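The downsizing step can be pictured with a short sketch: each downloaded image is decoded and shrunk to a 32×32 RGB thumbnail before being stored. Pillow is used here purely for illustration; it is an assumption, not tooling described by the authors.

```python
from io import BytesIO
from PIL import Image

def to_tiny(image_bytes):
    """Decode a downloaded image and shrink it to a 32x32 RGB thumbnail (~3 KB of pixels)."""
    img = Image.open(BytesIO(image_bytes)).convert("RGB")  # normalize palette/grayscale inputs to RGB
    return img.resize((32, 32))  # only the thumbnail is kept; the full-resolution original is discarded
```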
After gathering, they removed images with zero variance and duplicate images within each noun, resulting in the final dataset.
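A minimal sketch of such a cleanup pass, assuming each noun's images are held as NumPy arrays; byte-identical hashing stands in for duplicate detection, since the authors' exact matching criterion is not reproduced here.

```python
import hashlib
import numpy as np

def clean_noun_images(images):
    """Drop zero-variance (flat-color) images and exact duplicates within one noun's list."""
    kept, seen = [], set()
    for img in images:                          # img: uint8 array of shape (32, 32, 3)
        if img.var() == 0:                      # every pixel identical, so the image is blank
            continue
        digest = hashlib.md5(img.tobytes()).hexdigest()
        if digest in seen:                      # byte-identical duplicate of an earlier image
            continue
        seen.add(digest)
        kept.append(img)
    return kept
```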
Out of the 75,846 nouns, only 75,062 returned any results, so the remaining nouns do not appear in the final dataset.
The number of images per noun follows a Zipf-like distribution, with an average of 1,056 images per noun. To prevent a few nouns from taking up too many images, an upper bound of at most 3,000 images per noun was imposed.[1]
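A sketch of that cap, assuming a mapping from each noun to its list of images; which 3,000 images are kept is not specified here, so simple truncation is shown as a placeholder.

```python
MAX_PER_NOUN = 3000  # upper bound reported for the dataset

def cap_per_noun(images_by_noun):
    """Keep at most 3,000 images per noun so that heavy-tailed nouns do not dominate."""
    return {noun: imgs[:MAX_PER_NOUN] for noun, imgs in images_by_noun.items()}
```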
Retirement
The 80 Million Tiny Images dataset was retired by its creators in 2020,[5] after a paper by researchers Abeba Birhane and Vinay Prabhu found that the labels of several publicly available image datasets, including 80 Million Tiny Images, contained racist and misogynistic slurs, causing models trained on them to exhibit racial and sexual bias. The dataset also contained offensive images.[6][7] Following the release of the paper, the dataset's creators removed it from distribution and asked that other researchers not use it for further research and delete their copies of the dataset.[5]
References
1. Torralba, Antonio; Fergus, Rob; Freeman, William T. (November 2008). "80 million tiny images: a large data set for nonparametric object and scene recognition". IEEE Transactions on Pattern Analysis and Machine Intelligence. 30 (11): 1958–1970. doi:10.1109/TPAMI.2008.128. ISSN 1939-3539. PMID 18787244. S2CID 7487588.
2. "80 Million Tiny Images", IPAM Workshop on Numerical Tools and Fast Algorithms for Massive Data Mining, Search Engines and Applications, October 23, 2007.
3. Krizhevsky, A. (2009). Learning Multiple Layers of Features from Tiny Images. Technical report, University of Toronto.
4. Torralba, A.; Fergus, R.; Freeman, W. T. (2007). "Tiny images". Technical Report MIT-CSAIL-TR-2007-024.
5. "80 Million Tiny Images". groups.csail.mit.edu. Retrieved 2020-07-02.
6. Prabhu, Vinay Uday; Birhane, Abeba (2020-06-24). "Large image datasets: A pyrrhic win for computer vision?". arXiv:2006.16923 [cs.CY].
7. Quach, Katyanna (1 July 2020). "MIT apologizes, permanently pulls offline huge dataset that taught AI systems to use racist, misogynistic slurs". The Register. Retrieved 2020-07-02.