Workflow:PDF/A validation and metadata extraction

{{Infobox_COW
[[File:workflow.png|Upload file (Toolbox on left) and add a workflow image here or remove]]
|name=PDF/A validation and metadata extraction
|status=Experimental
|tools=UKWA search interface<br />[[GNU_Wget]]<br />[[veraPDF]]<br />XMLstarlet<br />Excel
|input=Corpus of PDF/A files
|output=CSV with validation result and metadata
|organisation=[http://dpconline.org/ Digital Preservation Coalition]
}}

[[Category:COW Workflows]]

==Workflow description==

<!-- Describe your workflow here. If necessary add a diagram -->
 
*Excel (view and analyse results in spreadsheet form)
 
The workflow begins with the creation of a corpus of test files, constructed using the UK Web Archive search interface (for example, see [https://www.webarchive.org.uk/shine/search?query=content_type:%22application/pdf%22%20content_type_version:%221b%22 this example search]). The result is a list of URLs, which are fetched with Wget to create a large test corpus of predominantly PDF/A files. veraPDF is then used to validate each file in the corpus and extract its metadata. XMLstarlet is then applied to extract the fields of interest from the resulting XML, producing a CSV, which is imported into Excel for analysis.
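The steps above can be sketched as a short shell pipeline. The file and directory names (urls.txt, corpus/, report.xml, results.csv) are illustrative assumptions, not part of the workflow page, and the XML element and attribute names queried by XMLstarlet are assumptions about the veraPDF report layout, which varies between versions. Each step is wrapped in a function so they can be run in order: fetch, validate, extract.

```shell
# Sketch of the pipeline, assuming the UKWA search results were saved
# as a plain list of URLs in urls.txt.

# 1. Fetch every URL in urls.txt into corpus/ (skip files already present)
fetch() {
    wget --input-file=urls.txt --directory-prefix=corpus --no-clobber
}

# 2. Validate all fetched files with veraPDF, writing one XML report
#    (--recurse walks the directory tree)
validate() {
    verapdf --recurse corpus > report.xml
}

# 3. Use XMLstarlet to pull fields of interest into a CSV; the element
#    and attribute names here are assumed, so adjust to the actual report
extract() {
    xmlstarlet sel -T -t -m "//validationReport" \
        -v "@profileName" -o "," -v "@isCompliant" -n \
        report.xml > results.csv
}

echo "defined steps: fetch validate extract"
```

The CSV produced by the last step can then be opened directly in Excel for analysis.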
  
 
==Purpose, context and content==
 