In the second week of May 2021, Special:MediaSearch will become the default search landing page for all users. This image-focused interface makes it easier to find what you’re looking for on Wikimedia Commons, and it will make the millions of images contributed by libraries and cultural institutions much more accessible to a broad global audience.
‘Add an Image’ task on Android
If you have the Wikipedia Android app, you can help train the Image Suggestion API we introduced last month. A new ‘train algorithm’ task will ask logged-in users to judge whether a suggested image is a good illustration of the article displayed. Unlike other suggested edits, this task won’t save edits to any Wikimedia projects. It’s a temporary task to gather data to improve the image matching algorithm and inform the design of future releases. You can see a GIF preview of the feature on the project page.
The GLAM & Culture team also held two days of office hours in April to talk about Structured Data on Commons. On the first day, there were presentations from Carly Bogen, the Foundation's Program Manager for Structured Data, and Jennie Choi, General Manager of Collection Information for the Metropolitan Museum of Art. The second day’s presentations were by John Cummings, Wikimedian in Residence at UNESCO, and Alicia Fagerving, Developer at Wikimedia Sverige (WMSE). Both shared Wikimedia Sweden’s work with Structured Data, GLAM content, and Wiki Loves Monuments.
We had 39 participants across the two meetings, with several institutions represented, including the Metropolitan Museum of Art, the Digital Public Library of America, the Yale Center for British Art, the Minneapolis Institute of Art, Smartify, meemoo (the Flemish Institute for Archives), and Flickr.
Carly Bogen presented Media Search, which uses Structured Data on Commons metadata to enhance search results. It also powers image search in Wikipedia's visual editor, surfacing more image results, in more languages, to illustrate Wikipedia articles. To take advantage of these discovery features, Carly recommended adding multiple depicts statements, both general and specific, to media files on Commons.
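For readers curious what a depicts statement looks like under the hood: it is stored as a Wikibase claim on the file's MediaInfo entity. The Python sketch below shows a simplified version of that shape (real claims also carry statement IDs, hashes, and ranks), with a placeholder item ID:

```python
# Simplified sketch of a "depicts" (P180) claim as stored on a Commons
# MediaInfo entity. Real claims include additional fields (id, hash, rank);
# the item ID used below is a placeholder.

def depicts_claim(item_qid: str) -> dict:
    """Build the (simplified) JSON structure of a depicts statement."""
    return {
        "mainsnak": {
            "snaktype": "value",
            "property": "P180",  # "depicts"
            "datavalue": {
                "value": {"entity-type": "item", "id": item_qid},
                "type": "wikibase-entityid",
            },
        },
        "type": "statement",
    }

# Example: a claim saying a file depicts the item Q146 ("house cat" on Wikidata)
claim = depicts_claim("Q146")
```

Adding both a general depicts value (say, "cat") and a specific one (a particular breed or individual) gives Media Search more metadata to match against queries at different levels of detail.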
Work in progress includes an assessment filter for Media Search to surface files that have been assessed as quality or featured images. Bot writers in the Cebuano and Arabic communities are also experimenting with the Image Suggestion API to test the feasibility of adding images to Wikipedia articles automatically.
Carly shared early concepts for how Upload Wizard could prompt contributors to add both general and specific depicts; how image suggestions could be applied within page editing or VisualEditor flows; and how notifications could be used to suggest articles for images that have just been uploaded, or suggest images for articles on watchlists.
There was an important discussion about balancing automated suggestions with respect for on-wiki “lived consensus”, and some participants suggested an “opt-out” flag that would exclude certain pages.
Jennie shared how she has added more than 10,000 Structured Data statements to the Metropolitan Museum of Art files on Commons. Her hope is to increase the use of the institution’s images on Wikipedia, after learning that only 6% of them have been used on Wikimedia projects. Wider usage would build on the already high number of views they receive: between January and April 2021, for example, the Met’s media files were viewed 94 million times across all Wikimedia languages and sites.
Lecture about the Structured Data on Commons uploads by the Metropolitan Museum of Art using M-IDs, PetScan, and QuickStatements
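Tools like QuickStatements make this kind of bulk work possible by accepting batches of simple commands against Commons M-IDs. The sketch below generates such a batch in Python; the M-IDs and target item are placeholders, and the tab-separated V1 command format is an assumption about how the batch would be fed in:

```python
# Hedged sketch: generating a QuickStatements-style batch that adds a
# "depicts" (P180) statement to several Commons files by M-ID.
# M-IDs and the target Q-ID below are placeholders; the tab-separated
# V1 command format (<M-ID> TAB P180 TAB <Q-ID>) is assumed.

def quickstatements_batch(m_ids, depicts_qid):
    """One command line per file, tab-separated."""
    return "\n".join(f"{m_id}\tP180\t{depicts_qid}" for m_id in m_ids)

batch = quickstatements_batch(["M1234", "M5678"], "Q42")
print(batch)
```

In practice, a tool like PetScan would supply the list of M-IDs (for example, all files in a given Met category), and the generated batch would then be pasted into QuickStatements for review and execution.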
Jennie also raised important questions about Structured Data on Commons modeling and the differences between metadata on Commons and Wikidata.
John is working with Wikimedia Sweden to share guidelines related to Structured Data on Commons with GLAMs. His presentation was an introduction to how both Wikidata and Commons can make content from institutions more searchable, especially with Structured Data on Commons properties such as depicts, creator, source of file, rights statements, location, and multilingual captions. There was a good discussion about the need to find agreement on data modeling and to share example queries.
Alicia’s presentation brought use cases to the discussion, covering Structured Data on Commons uploads for Wiki Loves Monuments in Sweden, Israel, and Poland.
Bots have already added statements such as creator, inception, and coordinates of the point of view. Now Alicia has added depicts statements to make images more searchable, and participant in statements to enable new kinds of analysis. Other attendees were interested in applying this approach in other contexts, and there was a discussion about using participant in versus on focus list of Wikimedia project.
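One way the participant in statements enable new analysis is through the Commons Query Service. The sketch below builds a SPARQL query counting files tagged as participating in a given contest edition; the use of "participant in" (P1344) for this purpose follows the talk, and the contest item ID is a placeholder:

```python
# Hedged sketch: a SPARQL query for the Wikimedia Commons Query Service that
# counts files carrying a "participant in" (P1344) statement for a given
# contest edition. The Q-ID passed in is a placeholder, not a real contest item.

def participant_count_query(contest_qid: str) -> str:
    """Build a SPARQL query counting files that participated in a contest."""
    return f"""
SELECT (COUNT(?file) AS ?files) WHERE {{
  ?file wdt:P1344 wd:{contest_qid} .  # file participated in this contest
}}
""".strip()

query = participant_count_query("Q123")  # Q123: placeholder contest item
```

A query like this would let organisers compare participation across the Swedish, Israeli, and Polish Wiki Loves Monuments editions directly from the structured data, without parsing categories.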