
THINGS: A database of 1,854 object concepts and more than 26,000 naturalistic object images


Authors: Martin N. Hebart; Adam H. Dickter; Alexis Kidder; Wan Y. Kwok; Anna Corriveau; Caitlin Van Wicklin; Chris I. Baker
Affiliation: Laboratory of Brain and Cognition, National Institute of Mental Health, National Institutes of Health, Bethesda, MD, United States of America
Published in: PLoS ONE 14(10), 2019
Category: Research Article
DOI: https://doi.org/10.1371/journal.pone.0223792

Abstract

In recent years, the use of large numbers of object concepts and naturalistic object images has grown considerably in cognitive neuroscience research. Classical databases of object concepts are mostly based on manually curated sets of concepts. Further, databases of naturalistic object images typically consist either of single images of objects cropped from their background or of large numbers of naturalistic images of varying quality that require elaborate manual curation. Here we provide a set of 1,854 diverse object concepts sampled systematically from concrete, picturable, and nameable nouns in the American English language. Using these concepts, we conducted a large-scale web image search to compile a database of 26,107 high-quality naturalistic images of those objects, with 12 or more images per concept, all cropped to square size. Using crowdsourcing, we provide higher-level category membership for the 27 most common categories and validate it by relating it to representations in a semantic embedding derived from large text corpora. Finally, by feeding the images through a deep convolutional neural network, we demonstrate that their representations exhibit high selectivity for different object concepts while preserving the variability of individual images within each concept. Together, the THINGS database provides a rich resource of object concepts and object images and offers a tool for both systematic and large-scale naturalistic research in psychology, neuroscience, and computer science.
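
The abstract describes validating crowdsourced category membership against a semantic embedding derived from large text corpora. Below is a minimal sketch of that style of check in Python: it tests whether concepts assigned to a category by crowd workers sit closer to the category label in a pretrained word-embedding space than unassigned concepts do. The specific embedding (gensim's glove-wiki-gigaword-100) and the tiny word lists are illustrative assumptions, not the paper's actual materials or method.

```python
import numpy as np
import gensim.downloader as api

# Pretrained GloVe vectors; any large-corpus embedding would serve the
# same illustrative purpose (assumption, not the paper's exact model).
glove = api.load("glove-wiki-gigaword-100")

category = "animal"
members = ["aardvark", "beaver", "crocodile"]      # crowd: in the category
nonmembers = ["anvil", "backpack", "calculator"]   # crowd: not in the category

# Cosine similarity between each concept word and the category label.
member_sim = np.mean([glove.similarity(category, w) for w in members])
nonmember_sim = np.mean([glove.similarity(category, w) for w in nonmembers])

# If the crowd assignments agree with corpus-derived semantics, members
# should be reliably closer to the category label than non-members.
print(f"members: {member_sim:.3f}  non-members: {nonmember_sim:.3f}")
```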
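
The abstract's final analysis feeds images through a deep convolutional network and compares concept selectivity against within-concept image variability. The sketch below illustrates one way to run such a comparison, under stated assumptions: a pretrained VGG-16 penultimate layer as the feature space and an images/<concept>/*.jpg folder layout, neither of which is specified by the source. Selectivity shows up as within-concept feature correlations exceeding between-concept correlations, while within-concept correlations well below 1 indicate preserved image variability.

```python
from itertools import combinations
from pathlib import Path

import numpy as np
import torch
from PIL import Image
from torchvision import models, transforms

# Pretrained VGG-16 (an assumption; the abstract only says "deep
# convolutional neural network"). We read out the 4,096-dim penultimate layer.
vgg = models.vgg16(weights="IMAGENET1K_V1").eval()
extractor = torch.nn.Sequential(
    vgg.features,
    vgg.avgpool,
    torch.nn.Flatten(),
    *list(vgg.classifier.children())[:-1],  # drop the final 1000-way layer
)

preprocess = transforms.Compose([
    transforms.Resize(224),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

def features(path: Path) -> np.ndarray:
    """Penultimate-layer activations for one image."""
    img = preprocess(Image.open(path).convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        return extractor(img).squeeze(0).numpy()

def corr(a: np.ndarray, b: np.ndarray) -> np.ndarray:
    """Pearson correlation between every row of a and every row of b."""
    az = (a - a.mean(1, keepdims=True)) / a.std(1, keepdims=True)
    bz = (b - b.mean(1, keepdims=True)) / b.std(1, keepdims=True)
    return az @ bz.T / a.shape[1]

# Hypothetical layout: images/<concept>/*.jpg, one folder per concept.
feats = {d.name: np.stack([features(p) for p in sorted(d.glob("*.jpg"))])
         for d in sorted(Path("images").iterdir()) if d.is_dir()}

# Within-concept similarity, excluding each image's correlation with itself
# (the diagonal of the correlation matrix is exactly 1).
within = np.mean([
    (corr(f, f).sum() - len(f)) / (len(f) * (len(f) - 1))
    for f in feats.values()
])
# Between-concept similarity, averaged over all pairs of concepts.
between = np.mean([corr(feats[a], feats[b]).mean()
                   for a, b in combinations(feats, 2)])
print(f"within-concept r = {within:.3f}  between-concept r = {between:.3f}")
```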

Keywords:

Computational neuroscience – Computer imaging – Neural networks – Neuroscience – Psychology – Semantics – Vision – Graphical user interfaces



