Workflow for ingesting digitized books into a digital archive

[[File:workflow.png|Upload file (Toolbox on left) and add a workflow image here or remove]]
[[Category:COW Workflows]]

{{Infobox_cheese
|name=Ingest of digitized books
|category=Ingest
|status=Testphase
|tools=
* [[7-Zip]]
* [http://www.docuteam.ch/en/products/it-for-archives/software/ docuteam feeder]
* [https://en.wikipedia.org/wiki/CURL cURL]
* [https://en.wikipedia.org/wiki/Saxon_XSLT Saxon]
* [[DROID]]
* [[FITS_(File_Information_Tool_Set)]]
* [https://en.wikipedia.org/wiki/Clam_AntiVirus Clam AV]
* [[Fedora_Commons]]
|input=Digitized content
|output=Packages of digitized content ready for ingest
|organisation=[http://www.ub.unibe.ch/ Universitätsbibliothek Bern]
}}
==Workflow Description==

<div class="toccolours mw-collapsible mw-collapsed" data-expandtext="Show Diagram" data-collapsetext="Hide Diagram">
</div>
  
# The data provider provides their content as an input for the transfer tool (currently in development).
# The transfer tool creates a zip container with the content and calculates a checksum of the container.
# The zip container and the checksum are bundled (in another zip container or a plain folder) and together form the SIP.
# The transfer tool moves the SIP to a registered, data-provider-specific hotfolder, which is connected to the ingest server.
# As soon as the complete SIP has been transferred to the ingest server, a trigger is raised and the ingest workflow starts.
# The SIP is unpacked.
# The zip container that holds the content is validated against the provided checksum. If this fixity check fails, the data provider is asked to re-ingest their data.
# The content and its structure are validated against the submission agreement that was signed with the data provider (this step is currently in development).
# Based on the OPAC system number (encoded in the content filename), descriptive metadata is fetched from the library's OPAC over its OAI-PMH interface.
# The OPAC returns a MARC.XML file.
# The MARC.XML file is mapped to an EAD.XML file by an XSLT transformation.
# The EAD.XML is exported to a designated folder for pickup by the archival information system.
# Every content file is analysed by DROID for format identification, and basic technical metadata (e.g. file size) is extracted.
# The output of this analysis is written to a PREMIS.XML file (one PREMIS.XML per content object).
# Every content file is validated and analysed by FITS, and content-specific technical metadata is extracted.
# The output of this analysis (FITS.XML) is integrated into the existing PREMIS.XML files.
# Each content file is scanned for viruses and malware by Clam AV.
# For each content object and for the whole information entity (the book), a PID is fetched from the repository.
# For each content object and for the whole information entity, an AIP is generated. This process includes the generation of RDF triples that describe the relationships between the objects.
# The AIPs are ingested into the repository.
# The data producer is informed that the ingest finished successfully.
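Steps 2–4 (packing the content, checksumming the container, and bundling both into the SIP) can be sketched as follows. This is a minimal illustration, not the transfer tool itself; the filenames (<code>content.zip</code>, <code>content.zip.sha256</code>) and the choice of SHA-256 are assumptions for the example.

```python
import hashlib
import zipfile
from pathlib import Path

def build_sip(content_dir: str, sip_dir: str) -> Path:
    """Pack the content into a zip container, record the container's
    SHA-256 checksum, and bundle both into a plain SIP folder.
    Filenames and hash algorithm are illustrative assumptions."""
    sip = Path(sip_dir)
    sip.mkdir(parents=True, exist_ok=True)

    # Step 2: create the zip container with the provider's content.
    container = sip / "content.zip"
    with zipfile.ZipFile(container, "w", zipfile.ZIP_DEFLATED) as zf:
        for f in sorted(Path(content_dir).rglob("*")):
            if f.is_file():
                zf.write(f, f.relative_to(content_dir))

    # Steps 2-3: calculate the checksum and bundle it next to the
    # container; container + checksum together form the SIP.
    digest = hashlib.sha256(container.read_bytes()).hexdigest()
    (sip / "content.zip.sha256").write_text(f"{digest}  content.zip\n")
    return sip
```

Step 4 would then move the resulting folder into the data provider's registered hotfolder.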
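The fixity check in step 7 can be sketched like this, assuming the SIP layout from the packaging step above is a plain folder holding <code>content.zip</code> and a sidecar checksum file (both names are assumptions, not the workflow's actual conventions):

```python
import hashlib
import zipfile
from pathlib import Path

def verify_and_unpack(sip_dir: str, work_dir: str) -> bool:
    """Validate the content container against its recorded checksum
    and unpack it only if the fixity check passes. Returns False on
    mismatch, in which case the data provider would be asked to
    re-ingest their data."""
    sip = Path(sip_dir)
    container = sip / "content.zip"
    recorded = (sip / "content.zip.sha256").read_text().split()[0]
    actual = hashlib.sha256(container.read_bytes()).hexdigest()
    if actual != recorded:
        return False  # fixity check failed
    with zipfile.ZipFile(container) as zf:
        zf.extractall(work_dir)
    return True
```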
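The metadata fetch in step 9 is a standard OAI-PMH <code>GetRecord</code> request. A sketch of how such a request URL is built, where the base URL, the <code>oai:opac.example.org</code> identifier scheme, and the <code>marcxml</code> metadata prefix are illustrative assumptions (the library's actual OPAC endpoint defines all three):

```python
from urllib.parse import urlencode

def oai_getrecord_url(base_url: str, system_number: str) -> str:
    """Build an OAI-PMH GetRecord request for the MARCXML record of
    one OPAC system number (taken from the content filename).
    Identifier scheme and metadataPrefix are assumed for illustration."""
    params = {
        "verb": "GetRecord",
        "metadataPrefix": "marcxml",
        "identifier": f"oai:opac.example.org:{system_number}",
    }
    return f"{base_url}?{urlencode(params)}"
```

The OPAC's response to this request is the MARC.XML file consumed by the XSLT transformation in step 11.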
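The relationship triples generated in step 18 can be illustrated as plain N-Triples linking each content object to the whole information entity (the book). The <code>isPartOf</code> predicate and the <code>info:fedora/</code> URI scheme are assumptions for the example, not necessarily the repository's actual schema:

```python
def relationship_triples(book_pid: str, object_pids: list) -> list:
    """Emit N-Triples relating each content object to the book it
    belongs to. Predicate and URI scheme are illustrative assumptions."""
    book = f"<info:fedora/{book_pid}>"
    return [
        f"<info:fedora/{pid}> <http://purl.org/dc/terms/isPartOf> {book} ."
        for pid in object_pids
    ]
```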
  
==List of Tools==
<!-- List the tools in your workflow in a bulleted list (begin each line with an asterisk). Link to tool entries in COPTR where possible -->
* [[7-Zip]] - Pack and unpack content / SIPs
* [http://www.docuteam.ch/en/products/it-for-archives/software/ docuteam feeder] - Workflow and ingest framework
* [https://en.wikipedia.org/wiki/Clam_AntiVirus Clam AV] - Virus check
* [[Fedora_Commons]] - Digital repository

==Organisation==
<!-- Add the name of your organisation here -->
[http://www.ub.unibe.ch/ Universitätsbibliothek Bern]

==Purpose, Context and Content==