The Structured Data on Wikimedia Commons project adds functionality for structured, machine-readable data to files on Wikimedia Commons, so that they become easier to view, search, edit, organize, and re-use. To achieve this, the Commons backend is being migrated to Wikibase, the same technology used for Wikidata.
Already live: multilingual captions and Depicts statements
As mentioned in the previous Structured Data on Commons update in This Month in GLAM, it is now possible to add multilingual captions and Depicts statements to files on Wikimedia Commons. This is possible both on the file page and via the UploadWizard. A few demonstrations are available as animated GIFs.
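Under the hood, a caption is stored as a label on the file's MediaInfo entity. As a rough sketch of what a programmatic edit looks like, the snippet below builds the parameters for the `wbsetlabel` API module; the entity ID (`M12345`), the caption text, and the token value are placeholders for illustration, and a real edit additionally requires an authenticated session and a fresh CSRF token.

```python
# Sketch: parameters for setting a multilingual caption (a MediaInfo
# label) on a Commons file via the MediaWiki Action API.
COMMONS_API = "https://commons.wikimedia.org/w/api.php"

def caption_payload(media_id: str, language: str, caption: str, token: str) -> dict:
    """Build the POST parameters for action=wbsetlabel."""
    return {
        "action": "wbsetlabel",
        "id": media_id,        # MediaInfo entity ID, e.g. "M12345" (placeholder)
        "language": language,  # caption language code, e.g. "en" or "pa"
        "value": caption,      # the caption text itself
        "token": token,        # CSRF token from an authenticated session
        "format": "json",
    }

payload = caption_payload("M12345", "en", "A brown cat sitting on a wall", "TOKEN")
```

Because captions are ordinary Wikibase labels, the same call with a different `language` value adds the caption in another language, which is how the multilingual aspect works.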
In the next iterations, the following features will be released on Wikimedia Commons:
Search depicts statements
Statements other than depicts
Filter search results
Depicts of depicts
Depicts and annotations
UploadWizard using structured data for Wiki Loves... style campaigns
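The planned depicts search builds on CirrusSearch's `haswbstatement` keyword, which matches files whose depicts (P180) statement points at a given Wikidata item. The sketch below constructs such a search query against the Commons API; the item ID used (Q146, house cat) is only an example, and the exact shape of the upcoming search interface may differ.

```python
# Sketch: a full-text search for files with a given depicts (P180)
# statement, using the haswbstatement search keyword.
from urllib.parse import urlencode

COMMONS_API = "https://commons.wikimedia.org/w/api.php"

def depicts_search_url(item_id: str, limit: int = 10) -> str:
    """Build a search API URL for files depicting the given Wikidata item."""
    params = {
        "action": "query",
        "list": "search",
        "srsearch": f"haswbstatement:P180={item_id}",
        "srnamespace": 6,   # restrict results to the File: namespace
        "srlimit": limit,
        "format": "json",
    }
    return COMMONS_API + "?" + urlencode(params)

url = depicts_search_url("Q146")  # Q146 = house cat, purely illustrative
```

The same keyword also works directly in the Commons search box, e.g. `haswbstatement:P180=Q146`.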
Satdeep Gill (WMF) is working together with a group of Punjabi volunteers on a Structured Data on Commons pilot project to practice the full workflow of digitizing publications, uploading them to Wikimedia Commons, transcribing them on Wikisource, and re-using the data on Wikidata across Wikimedia projects.
The pilot project consists of:
Digitization of a small set of out-of-copyright Punjabi books (in the Qisse genre) ✓ Done
Upload of the digitized files to Wikimedia Commons, in structured data format
Upload of the books' metadata (and author data) to Wikidata ✓ Done
Indexing and transcribing the books on Wikisource
Inclusion of the metadata of the books on Wikisource
This pilot project will inspire new thinking and ideas about efficient workflows and re-use of data across Wikidata, Wikimedia Commons, and Wikisource. It will also lead to improved documentation about this workflow.
If you are interested in the process of uploading digitized books in structured data to Wikimedia Commons, please mention this on the project's talk page, or e-mail Satdeep (sgill@wikimedia.org). Your help is very welcome!
If you read and write Punjabi, your help will be welcome later in 2019, when the books are transcribed on Wikisource.
We are looking for people who are interested in correct data modelling of books on Wikidata and Wikimedia Commons, and people interested in (tools around) improving the workflows across Wikidata, Wikimedia Commons, and Wikisource.
Wikimedia developers and developers working for cultural institutions are very welcome to join this focus area. Feel free to indicate your interest by signing the dedicated page! Areas to work on include (but are not limited to) Structured Data on Commons, IIIF, upgrades of GLAM-Wiki tools, batch uploads, and more. Feel free to add any topics you want to work on.
We especially welcome developers who work at a cultural institution and who want to experiment with Structured Data on Commons. Do you know (or are you) someone who may be interested and would like some guidance? Feel free to get in touch with Sandra Fauconnier (sfauconnier@wikimedia.org).