<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en-GB">
	<id>https://coptr.digipres.org/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=172.104.134.96</id>
	<title>COPTR - User contributions [en-gb]</title>
	<link rel="self" type="application/atom+xml" href="https://coptr.digipres.org/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=172.104.134.96"/>
	<link rel="alternate" type="text/html" href="https://coptr.digipres.org/Special:Contributions/172.104.134.96"/>
	<updated>2026-04-10T01:16:15Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.35.14</generator>
	<entry>
		<id>https://coptr.digipres.org/index.php?title=ArchiFiltre&amp;diff=6196</id>
		<title>ArchiFiltre</title>
		<link rel="alternate" type="text/html" href="https://coptr.digipres.org/index.php?title=ArchiFiltre&amp;diff=6196"/>
		<updated>2024-09-26T21:16:03Z</updated>

		<summary type="html">&lt;p&gt;172.104.134.96: Fixed the license used to Apache 2.0&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Infobox tool&lt;br /&gt;
|purpose=Overview of folder trees with fine diagrams&lt;br /&gt;
|homepage=https://archifiltre.fabrique.social.gouv.fr/&lt;br /&gt;
|license=Apache 2.0&lt;br /&gt;
|platforms=?&lt;br /&gt;
|function=File Management, Appraisal&lt;br /&gt;
|content=Database, Container, Binary Data, 3D, Audio, Document, Email, Ebook, Geospatial, Image, Research Data, Video, Web, Image, Software&lt;br /&gt;
}}&lt;br /&gt;
{{Infobox tool details}}&lt;br /&gt;
== Description ==&lt;br /&gt;
Archifiltre gives you orientation in deeply nested folder trees. It visualises folder contents in diagrams. It helps with cleaning up storage gone out of control. Further development is aimed at supporting archival processes like appraisal and transfer.&lt;br /&gt;
&lt;br /&gt;
== User Experiences ==&lt;br /&gt;
* https://github.com/SocialGouv/archifiltre/wiki/Wiki-Archifiltre (French only)&lt;br /&gt;
&lt;br /&gt;
== Development Activity ==&lt;br /&gt;
&amp;lt;!-- Provide *evidence* of development activity of the tool. For example, RSS feeds for code issues or commits. --&amp;gt;&lt;br /&gt;
&amp;lt;!-- Add the OpenHub.com ID for the tool, if known. --&amp;gt;&lt;/div&gt;</summary>
		<author><name>172.104.134.96</name></author>
	</entry>
	<entry>
		<id>https://coptr.digipres.org/index.php?title=Archifiltre-Mails&amp;diff=6198</id>
		<title>Archifiltre-Mails</title>
		<link rel="alternate" type="text/html" href="https://coptr.digipres.org/index.php?title=Archifiltre-Mails&amp;diff=6198"/>
		<updated>2024-07-23T13:18:52Z</updated>

		<summary type="html">&lt;p&gt;172.104.134.96: Created page with '{{Infobox tool |purpose=Archifiltre-Mails connects to email containers and visualizes their content, helping you in exploring and adding metadata. |homepage=https://github.com...'&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Infobox tool&lt;br /&gt;
|purpose=Archifiltre-Mails connects to email containers and visualizes their content, helping you in exploring and adding metadata.&lt;br /&gt;
|homepage=https://github.com/SocialGouv/archifiltre-mails/wiki/Wiki-Mails-par-Archifiltre&lt;br /&gt;
|sourcecode=https://github.com/SocialGouv/archifiltre-docs/wiki/Wiki-Archifiltre&lt;br /&gt;
|license=Version 2.0&lt;br /&gt;
|formats_in=PST&lt;br /&gt;
|function=Appraisal, Metadata Processing, Annotation, Data capture and Deposit, Transfer&lt;br /&gt;
|content=Email&lt;br /&gt;
}}&lt;br /&gt;
{{Infobox tool details}}&lt;br /&gt;
== Description ==&lt;br /&gt;
Mails is a product of the Archifiltre solution. Its purpose is to allow any holder of an Outlook messaging to be able to analyze it via the archive format Outlook PST. Archifiltre-Mails was born out of the need for apprehension of a messaging. Currently, there is no tool for understanding a messaging system. Until then, it is impossible to know the oldest e-mail, the most important recipient, the volume of attachments contained in the messaging system, etc.&lt;/div&gt;</summary>
		<author><name>172.104.134.96</name></author>
	</entry>
	<entry>
		<id>https://coptr.digipres.org/index.php?title=ArchiFiltre&amp;diff=6195</id>
		<title>ArchiFiltre</title>
		<link rel="alternate" type="text/html" href="https://coptr.digipres.org/index.php?title=ArchiFiltre&amp;diff=6195"/>
		<updated>2024-07-23T12:57:25Z</updated>

		<summary type="html">&lt;p&gt;172.104.134.96: Added support for various content types&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Infobox tool&lt;br /&gt;
|purpose=Overview of folder trees with fine diagrams&lt;br /&gt;
|homepage=https://archifiltre.fabrique.social.gouv.fr/&lt;br /&gt;
|license=MIT License&lt;br /&gt;
|platforms=?&lt;br /&gt;
|function=File Management, Appraisal&lt;br /&gt;
|content=Database, Container, Binary Data, 3D, Audio, Document, Email, Ebook, Geospatial, Image, Research Data, Video, Web, Image, Software&lt;br /&gt;
}}&lt;br /&gt;
{{Infobox tool details}}&lt;br /&gt;
== Description ==&lt;br /&gt;
Archifiltre gives you orientation in deeply nested folder trees. It visualises folder contents in diagrams. It helps with cleaning up storage gone out of control. Further development is aimed at supporting archival processes like appraisal and transfer.&lt;br /&gt;
&lt;br /&gt;
== User Experiences ==&lt;br /&gt;
* https://github.com/SocialGouv/archifiltre/wiki/Wiki-Archifiltre (French only)&lt;br /&gt;
&lt;br /&gt;
== Development Activity ==&lt;br /&gt;
&amp;lt;!-- Provide *evidence* of development activity of the tool. For example, RSS feeds for code issues or commits. --&amp;gt;&lt;br /&gt;
&amp;lt;!-- Add the OpenHub.com ID for the tool, if known. --&amp;gt;&lt;/div&gt;</summary>
		<author><name>172.104.134.96</name></author>
	</entry>
	<entry>
		<id>https://coptr.digipres.org/index.php?title=Puremagic&amp;diff=6253</id>
		<title>Puremagic</title>
		<link rel="alternate" type="text/html" href="https://coptr.digipres.org/index.php?title=Puremagic&amp;diff=6253"/>
		<updated>2024-04-30T10:26:54Z</updated>

		<summary type="html">&lt;p&gt;172.104.134.96: add puremagic&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Infobox tool&lt;br /&gt;
|purpose=Puremagic is a cross-platform pure python module that will identify a file based off it's magic numbers&lt;br /&gt;
|sourcecode=https://github.com/cdgriffith/puremagic&lt;br /&gt;
|license=MIT&lt;br /&gt;
|cost=Free&lt;br /&gt;
|function=File Format Identification&lt;br /&gt;
}}&lt;br /&gt;
{{Infobox tool details}}&lt;br /&gt;
== Description ==&lt;br /&gt;
&lt;br /&gt;
It is designed to be minimalistic and inherently cross platform compatible. It is also designed to be a stand in for python-magic, it incorporates the functions from_file(filename[, mime]) and from_string(string[, mime]) however the magic_file() and magic_string() are more powerful and will also display confidence and duplicate matches.&lt;br /&gt;
&lt;br /&gt;
It does NOT try to match files off non-magic string. In other words it will not search for a string within a certain window of bytes like others might.&lt;br /&gt;
&lt;br /&gt;
Advantages over using a wrapper for 'file' or 'libmagic':&lt;br /&gt;
&lt;br /&gt;
* Faster   &lt;br /&gt;
* Lightweight&lt;br /&gt;
* Cross platform compatible&lt;br /&gt;
* No dependencies&lt;br /&gt;
&lt;br /&gt;
Disadvantages:&lt;br /&gt;
&lt;br /&gt;
* Does not have [https://github.com/cdgriffith/puremagic/blob/master/puremagic/magic_data.json as many] file types. (&amp;quot;Only&amp;quot; 1600 at the time of posting)&lt;br /&gt;
* No multilingual comments&lt;br /&gt;
* Duplications due to small or reused magic numbers&lt;br /&gt;
&lt;br /&gt;
(Help fix the first two disadvantages by contributing!)&lt;/div&gt;</summary>
		<author><name>172.104.134.96</name></author>
	</entry>
	<entry>
		<id>https://coptr.digipres.org/index.php?title=Library_(xklb)&amp;diff=6206</id>
		<title>Library (xklb)</title>
		<link rel="alternate" type="text/html" href="https://coptr.digipres.org/index.php?title=Library_(xklb)&amp;diff=6206"/>
		<updated>2024-04-27T04:56:08Z</updated>

		<summary type="html">&lt;p&gt;172.104.134.96: remove input formats&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Infobox tool&lt;br /&gt;
|purpose=Media indexing multi-tool&lt;br /&gt;
|sourcecode=https://github.com/chapmanjacobd/library/&lt;br /&gt;
|license=BSD 3-Clause&lt;br /&gt;
|formats_out=DB&lt;br /&gt;
|function=File Management, Quality Assurance, Web Capture&lt;br /&gt;
}}&lt;br /&gt;
{{Infobox tool details}}&lt;br /&gt;
== Description ==&lt;br /&gt;
&amp;lt;!-- Describe the what the tool does, focusing on it's digital preservation value. Keep it factual. --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Web Capture subcommands:&lt;br /&gt;
&lt;br /&gt;
* web-add: index open web directories using ffprobe and exifTool to fetch additional metadata from remote file headers (without downloading the full file) for later automated selective downloading.&lt;br /&gt;
* tube-add: index video site metadata via yt-dlp&lt;br /&gt;
* gallery-add: index image gallery site metadata via gallery-dl&lt;br /&gt;
* extract-links: extract links from within a webpage, even if the page uses ShadowDOM, postMessage, and nested frames&lt;br /&gt;
* links-add: build updatable link-scraping databases for paginated content&lt;br /&gt;
&lt;br /&gt;
Local file management subcommands:&lt;br /&gt;
&lt;br /&gt;
* fs-add: index local files with ffprobe, exifTool, and textract&lt;br /&gt;
* cluster-sort: sort lines of text by similarity (a common use for this is to identify similar file paths)&lt;br /&gt;
* merge-folders: merge file trees (similar to [https://github.com/chapmanjacobd/journal/blob/main/programming/linux/misconceptions.md#mv-src-vs-mv-src rclone move] but it will print detailed information about overwrites and trumps (future overwrites from multiple source folders) before moving anything)&lt;br /&gt;
* relmv: move but preserve parent folder information&lt;br /&gt;
* process-image: convert large images as scaled AVIF files as an alternative to file deletion&lt;br /&gt;
* process-ffmpeg: convert large video/audio files to AV1/Opus as an alternative to file deletion&lt;br /&gt;
&lt;br /&gt;
Quality Assurance subcommands:&lt;br /&gt;
&lt;br /&gt;
* media-check: check video and audio files for corruption by decoding small sections or the whole file&lt;br /&gt;
&lt;br /&gt;
== User Experiences ==&lt;br /&gt;
&amp;lt;!-- Add hotlinks to user experiences with the tool (eg. blog posts). These should illustrate the effectiveness (or otherwise) of the tool. Use a bullet list. --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
- [https://old.reddit.com/r/opendirectories/comments/1adbv4b/i_made_a_little_cli_opendirectory_scanner_tool/ Introducing webadd to the /r/opendirectories community]&lt;/div&gt;</summary>
		<author><name>172.104.134.96</name></author>
	</entry>
	<entry>
		<id>https://coptr.digipres.org/index.php?title=Library_(xklb)&amp;diff=6205</id>
		<title>Library (xklb)</title>
		<link rel="alternate" type="text/html" href="https://coptr.digipres.org/index.php?title=Library_(xklb)&amp;diff=6205"/>
		<updated>2024-04-27T04:48:22Z</updated>

		<summary type="html">&lt;p&gt;172.104.134.96: /* make sentence simpler */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Infobox tool&lt;br /&gt;
|purpose=Media indexing multi-tool with more than 70 CLI subcommands&lt;br /&gt;
|sourcecode=https://github.com/chapmanjacobd/library/&lt;br /&gt;
|license=BSD 3-Clause&lt;br /&gt;
|formats_in=DB, Text&lt;br /&gt;
|formats_out=CSV (Comma Separated Values), DB, SQL&lt;br /&gt;
|function=File Management, Quality Assurance, Web Capture&lt;br /&gt;
}}&lt;br /&gt;
{{Infobox tool details}}&lt;br /&gt;
== Description ==&lt;br /&gt;
&amp;lt;!-- Describe the what the tool does, focusing on it's digital preservation value. Keep it factual. --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Web Capture subcommands:&lt;br /&gt;
&lt;br /&gt;
* web-add: index open web directories using ffprobe and exifTool to fetch additional metadata from remote file headers (without downloading the full file) for later automated selective downloading.&lt;br /&gt;
* tube-add: index video site metadata via yt-dlp&lt;br /&gt;
* gallery-add: index image gallery site metadata via gallery-dl&lt;br /&gt;
* extract-links: extract links from within a webpage, even if the page uses ShadowDOM, postMessage, and nested frames&lt;br /&gt;
* links-add: build updatable link-scraping databases for paginated content&lt;br /&gt;
&lt;br /&gt;
Local file management subcommands:&lt;br /&gt;
&lt;br /&gt;
* fs-add: index local files with ffprobe, exifTool, and textract&lt;br /&gt;
* cluster-sort: sort lines of text by similarity (a common use for this is to identify similar file paths)&lt;br /&gt;
* merge-folders: merge file trees (similar to [https://github.com/chapmanjacobd/journal/blob/main/programming/linux/misconceptions.md#mv-src-vs-mv-src rclone move] but it will print detailed information about overwrites and trumps (future overwrites from multiple source folders) before moving anything)&lt;br /&gt;
* relmv: move but preserve parent folder information&lt;br /&gt;
* process-image: convert large images as scaled AVIF files as an alternative to file deletion&lt;br /&gt;
* process-ffmpeg: convert large video/audio files to AV1/Opus as an alternative to file deletion&lt;br /&gt;
&lt;br /&gt;
Quality Assurance subcommands:&lt;br /&gt;
&lt;br /&gt;
* media-check: check video and audio files for corruption by decoding small sections or the whole file&lt;br /&gt;
&lt;br /&gt;
== User Experiences ==&lt;br /&gt;
&amp;lt;!-- Add hotlinks to user experiences with the tool (eg. blog posts). These should illustrate the effectiveness (or otherwise) of the tool. Use a bullet list. --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
- [https://old.reddit.com/r/opendirectories/comments/1adbv4b/i_made_a_little_cli_opendirectory_scanner_tool/ Introducing webadd to the /r/opendirectories community]&lt;/div&gt;</summary>
		<author><name>172.104.134.96</name></author>
	</entry>
	<entry>
		<id>https://coptr.digipres.org/index.php?title=Library_(xklb)&amp;diff=6204</id>
		<title>Library (xklb)</title>
		<link rel="alternate" type="text/html" href="https://coptr.digipres.org/index.php?title=Library_(xklb)&amp;diff=6204"/>
		<updated>2024-04-27T04:36:55Z</updated>

		<summary type="html">&lt;p&gt;172.104.134.96: fix bullet-points&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Infobox tool&lt;br /&gt;
|purpose=Media indexing multi-tool with more than 70 CLI subcommands&lt;br /&gt;
|sourcecode=https://github.com/chapmanjacobd/library/&lt;br /&gt;
|license=BSD 3-Clause&lt;br /&gt;
|formats_in=DB, Text&lt;br /&gt;
|formats_out=CSV (Comma Separated Values), DB, SQL&lt;br /&gt;
|function=File Management, Quality Assurance, Web Capture&lt;br /&gt;
}}&lt;br /&gt;
{{Infobox tool details}}&lt;br /&gt;
== Description ==&lt;br /&gt;
&amp;lt;!-- Describe the what the tool does, focusing on it's digital preservation value. Keep it factual. --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Web Capture subcommands:&lt;br /&gt;
&lt;br /&gt;
* web-add: index open web directories using ffprobe and exifTool to fetch additional metadata from remote file headers without downloading the full file into SQLite for later automated selective downloading.&lt;br /&gt;
* tube-add: index video site metadata via yt-dlp&lt;br /&gt;
* gallery-add: index image gallery site metadata via gallery-dl&lt;br /&gt;
* extract-links: extract links from within a webpage, even if the page uses ShadowDOM, postMessage, and nested frames&lt;br /&gt;
* links-add: build updatable link-scraping databases for paginated content&lt;br /&gt;
&lt;br /&gt;
Local file management subcommands:&lt;br /&gt;
&lt;br /&gt;
* fs-add: index local files with ffprobe, exifTool, and textract&lt;br /&gt;
* cluster-sort: sort lines of text by similarity (a common use for this is to identify similar file paths)&lt;br /&gt;
* merge-folders: merge file trees (similar to [https://github.com/chapmanjacobd/journal/blob/main/programming/linux/misconceptions.md#mv-src-vs-mv-src rclone move] but it will print detailed information about overwrites and trumps (future overwrites from multiple source folders) before moving anything)&lt;br /&gt;
* relmv: move but preserve parent folder information&lt;br /&gt;
* process-image: convert large images as scaled AVIF files as an alternative to file deletion&lt;br /&gt;
* process-ffmpeg: convert large video/audio files to AV1/Opus as an alternative to file deletion&lt;br /&gt;
&lt;br /&gt;
Quality Assurance subcommands:&lt;br /&gt;
&lt;br /&gt;
* media-check: check video and audio files for corruption by decoding small sections or the whole file&lt;br /&gt;
&lt;br /&gt;
== User Experiences ==&lt;br /&gt;
&amp;lt;!-- Add hotlinks to user experiences with the tool (eg. blog posts). These should illustrate the effectiveness (or otherwise) of the tool. Use a bullet list. --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
- [https://old.reddit.com/r/opendirectories/comments/1adbv4b/i_made_a_little_cli_opendirectory_scanner_tool/ Introducing webadd to the /r/opendirectories community]&lt;/div&gt;</summary>
		<author><name>172.104.134.96</name></author>
	</entry>
	<entry>
		<id>https://coptr.digipres.org/index.php?title=Library_(xklb)&amp;diff=6203</id>
		<title>Library (xklb)</title>
		<link rel="alternate" type="text/html" href="https://coptr.digipres.org/index.php?title=Library_(xklb)&amp;diff=6203"/>
		<updated>2024-04-27T04:35:22Z</updated>

		<summary type="html">&lt;p&gt;172.104.134.96: add library (xklb)&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Infobox tool&lt;br /&gt;
|purpose=Media indexing multi-tool with more than 70 CLI subcommands&lt;br /&gt;
|sourcecode=https://github.com/chapmanjacobd/library/&lt;br /&gt;
|license=BSD 3-Clause&lt;br /&gt;
|formats_in=DB, Text&lt;br /&gt;
|formats_out=CSV (Comma Separated Values), DB, SQL&lt;br /&gt;
|function=File Management, Quality Assurance, Web Capture&lt;br /&gt;
}}&lt;br /&gt;
{{Infobox tool details}}&lt;br /&gt;
== Description ==&lt;br /&gt;
&amp;lt;!-- Describe the what the tool does, focusing on it's digital preservation value. Keep it factual. --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Web Capture subcommands:&lt;br /&gt;
&lt;br /&gt;
- web-add: index open web directories using ffprobe and exifTool to fetch additional metadata from remote file headers without downloading the full file into SQLite for later automated selective downloading.&lt;br /&gt;
- tube-add: index video site metadata via yt-dlp&lt;br /&gt;
- gallery-add: index image gallery site metadata via gallery-dl&lt;br /&gt;
- extract-links: extract links from within a webpage, even if the page uses ShadowDOM, postMessage, and nested frames&lt;br /&gt;
- links-add: build updatable link-scraping databases for paginated content&lt;br /&gt;
&lt;br /&gt;
Local file management subcommands:&lt;br /&gt;
&lt;br /&gt;
- fs-add: index local files with ffprobe, exifTool, and textract&lt;br /&gt;
- cluster-sort: sort lines of text by similarity (a common use for this is to identify similar file paths)&lt;br /&gt;
- merge-folders: merge file trees (similar to [https://github.com/chapmanjacobd/journal/blob/main/programming/linux/misconceptions.md#mv-src-vs-mv-src rclone move] but it will print detailed information about overwrites and trumps (future overwrites from multiple source folders) before moving anything)&lt;br /&gt;
- relmv: move but preserve parent folder information&lt;br /&gt;
- process-image: convert large images as scaled AVIF files as an alternative to file deletion&lt;br /&gt;
- process-ffmpeg: convert large video/audio files to AV1/Opus as an alternative to file deletion&lt;br /&gt;
&lt;br /&gt;
Quality Assurance subcommands:&lt;br /&gt;
&lt;br /&gt;
- media-check: check video and audio files for corruption by decoding small sections or the whole file&lt;br /&gt;
&lt;br /&gt;
== User Experiences ==&lt;br /&gt;
&amp;lt;!-- Add hotlinks to user experiences with the tool (eg. blog posts). These should illustrate the effectiveness (or otherwise) of the tool. Use a bullet list. --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
- [https://old.reddit.com/r/opendirectories/comments/1adbv4b/i_made_a_little_cli_opendirectory_scanner_tool/ Introducing webadd to the /r/opendirectories community]&lt;/div&gt;</summary>
		<author><name>172.104.134.96</name></author>
	</entry>
	<entry>
		<id>https://coptr.digipres.org/index.php?title=FileTrove&amp;diff=6183</id>
		<title>FileTrove</title>
		<link rel="alternate" type="text/html" href="https://coptr.digipres.org/index.php?title=FileTrove&amp;diff=6183"/>
		<updated>2024-02-16T14:34:27Z</updated>

		<summary type="html">&lt;p&gt;172.104.134.96: Created page with &amp;quot;{{Infobox tool |purpose=FileTrove indexes files and creates metadata from them. The single binary application walks a directory tree and identifies all regular files by type w...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Infobox tool&lt;br /&gt;
|purpose=FileTrove indexes files and creates metadata from them. The single binary application walks a directory tree and identifies all regular files by type with Siegfried.&lt;br /&gt;
|homepage=https://github.com/steffenfritz/FileTrove&lt;br /&gt;
|function=Metadata Extraction&lt;br /&gt;
}}&lt;br /&gt;
{{Infobox tool details}}&lt;br /&gt;
== Description ==&lt;br /&gt;
FileTrove indexes files and creates metadata from them. The single binary application walks a directory tree and identifies all regular files by type with siegfried, giving you the MIME type, PRONOM identifier, Format version, Identification proof and note.&lt;br /&gt;
&lt;br /&gt;
== User Experiences ==&lt;br /&gt;
&amp;lt;!-- Add hotlinks to user experiences with the tool (eg. blog posts). These should illustrate the effectiveness (or otherwise) of the tool. Use a bullet list. --&amp;gt;&lt;/div&gt;</summary>
		<author><name>172.104.134.96</name></author>
	</entry>
	<entry>
		<id>https://coptr.digipres.org/index.php?title=PREMIS_Utility&amp;diff=6164</id>
		<title>PREMIS Utility</title>
		<link rel="alternate" type="text/html" href="https://coptr.digipres.org/index.php?title=PREMIS_Utility&amp;diff=6164"/>
		<updated>2024-01-29T23:02:33Z</updated>

		<summary type="html">&lt;p&gt;172.104.134.96: Added new PREMIS Utility page to COPTR.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Infobox tool&lt;br /&gt;
|purpose=The PREMIS Utility is a graphical program used to generate PREMIS metadata records for use in digital preservation systems and digital asset management systems in JSON and XML format, and attempts to cover gaps not programmatically generated by system logs.&lt;br /&gt;
|homepage=https://github.com/rochester-rcl/premis-generator&lt;br /&gt;
|sourcecode=https://github.com/rochester-rcl/premis-generator/archive/refs/tags/v1.1.0.zip&lt;br /&gt;
|license=Apache License, version 2.0&lt;br /&gt;
|cost=None&lt;br /&gt;
|platforms=Windows&lt;br /&gt;
|language=Python&lt;br /&gt;
|formats_in=CSV (Comma Separated Values)&lt;br /&gt;
|formats_out=XML, PREMIS (Preservation Metadata Implementation Strategies), JSON&lt;br /&gt;
|function=Metadata Processing&lt;br /&gt;
|content=Metadata&lt;br /&gt;
}}&lt;br /&gt;
{{Infobox tool details}}&lt;br /&gt;
== Description ==&lt;br /&gt;
The PREMIS Utility is a graphical utility used to generate PREMIS metadata records for use in digital preservation systems and digital asset management systems. Records can be exported in both XML and JSON formats. This utility is specifically created to address gaps in administrative metadata that might not be automatically created by a software platform, and specifically seeks to address the following:&lt;br /&gt;
&lt;br /&gt;
- Unambiguous assertion of whether the resource is born digital or digitized&lt;br /&gt;
- Rights related information about the resource, such as if the intellectual content is in the public domain or protected by copyright&lt;br /&gt;
- Information about any digital preservation activities happening outside the software platform, such as manual migrations&lt;br /&gt;
&lt;br /&gt;
The graphical utility was created to make the creation of these records easier and more approachable for librarians, archivists, and other cultural heritage workers that may not be comfortable in the command line or know Python. The overall workflow is that a list of identifiers are provided to the utility, selections are made in the interface and information is provided, and ultimately it spits out XML or JSON files that each have that identifier as the filename. The idea here is that while I don't have any notion what system you might be working within, you should hopefully have a means of using that identifier in the filename to link up or import the metadata into your system.&lt;br /&gt;
&lt;br /&gt;
Currently this utility is only packaged as an .exe for Windows computers. If you are comfortable with some Python you should be able use the raw Python files (after installing the nonstandard Python libraries listed below) to run the graphical utility in a Linux or MacOS environment. If there are Linux/MacOS users out there that would like to help me create packages for those operating systems, I'm super game.&lt;br /&gt;
&lt;br /&gt;
Additionally this utility is designed and meant for a United States locality. A big part of this tool relates to copyright, and while I am not a lawyer and the utility in graphical format or code and the metadata records that it exports absolutely do not constitute legal advice, I do have training in US copyright law and am leveraging that for the tool. I do not however have any non-US copyright training, so this will be of very limited use outside the United States.&lt;br /&gt;
&lt;br /&gt;
== User Experiences ==&lt;br /&gt;
Full and current documentation on how the utility functions and what each field and button does can be found at the GitHub repository readme.&lt;/div&gt;</summary>
		<author><name>172.104.134.96</name></author>
	</entry>
	<entry>
		<id>https://coptr.digipres.org/index.php?title=Rescarta&amp;diff=6163</id>
		<title>Rescarta</title>
		<link rel="alternate" type="text/html" href="https://coptr.digipres.org/index.php?title=Rescarta&amp;diff=6163"/>
		<updated>2024-01-26T15:38:42Z</updated>

		<summary type="html">&lt;p&gt;172.104.134.96: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Infobox tool&lt;br /&gt;
|purpose=The ResCarta Tools software empowers users to create non-proprietary digital objects with LOC standard METS, MODS, MIX and AudioMD metadata from existing TIFF, JPEG, PDF and WAV data through user-friendly interfaces.&lt;br /&gt;
|homepage=http://www.ResCarta.org&lt;br /&gt;
|license=Apache License v2.0&lt;br /&gt;
|platforms=Linux, Windows, and OSX operating systems&lt;br /&gt;
|formats_out=METS (Metadata Encoding and Transmission Standard)&lt;br /&gt;
|function=Access, Personal Archiving, Preservation System&lt;br /&gt;
|content=Audio, Document, Research Data, Metadata&lt;br /&gt;
}}&lt;br /&gt;
{{Infobox tool details&lt;br /&gt;
|ohloh_id=rescarta&lt;br /&gt;
}}&lt;br /&gt;
== Description ==&lt;br /&gt;
&amp;lt;!-- Describe the what the tool does, focusing on its digital preservation value. Keep it factual. --&amp;gt;&lt;br /&gt;
ResCarta Toolkit allows users to create digital archives from scans, digital photographs or recordings of analog objects.  Metadata is added using simple forms and is written into each digital object to Library of Congress Standards of METS, MODS, MIX, AudioMD, and reVTMD.  Audio/Video files containing spoken words can be automatically transcribed.  Textural content of documents, audio or video can be edited to create highly accurate transcriptions using graphical tools.  Standard directory and file naming is produced during use with autogenerated checksums. A complete Lucene tm index of metadata and textural content can be created for use in a fully functional web application for discovery and display of digital objects.  A checksum validation tool is included to assure long term stability of the archive.  &lt;br /&gt;
&lt;br /&gt;
====Platform====&lt;br /&gt;
The ResCarta Toolkit runs on Windows, Mac and Linux operating systems in single user or coordinated multiuser mode. &lt;br /&gt;
&lt;br /&gt;
== User Experiences ==&lt;br /&gt;
&amp;lt;!-- Add hotlinks to user experiences with the tool (eg. blog posts). These should illustrate the &lt;br /&gt;
&lt;br /&gt;
effectiveness (or otherwise) of the tool. --&amp;gt;&lt;br /&gt;
* https://sourceforge.net/projects/rescarta/reviews?source=navbar&lt;br /&gt;
&lt;br /&gt;
== Development Activity ==&lt;br /&gt;
&amp;lt;!-- Provide *evidence* of development activity of the tool. For example, RSS feeds for code issues or &lt;br /&gt;
&lt;br /&gt;
commits. --&amp;gt;&lt;br /&gt;
Years of archived releases can be found at...&lt;br /&gt;
http://sourceforge.net/projects/rescarta/files/ResCarta%20Tools/&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Add the Ohloh.com ID for the tool, if known. --&amp;gt;&lt;/div&gt;</summary>
		<author><name>172.104.134.96</name></author>
	</entry>
	<entry>
		<id>https://coptr.digipres.org/index.php?title=Workflow:Validation_Error_Analysis_and_Treatment_for_PDF-hul_122_Invalid_destination_-_Destination_NULL&amp;diff=6161</id>
		<title>Workflow:Validation Error Analysis and Treatment for PDF-hul 122 Invalid destination - Destination NULL</title>
		<link rel="alternate" type="text/html" href="https://coptr.digipres.org/index.php?title=Workflow:Validation_Error_Analysis_and_Treatment_for_PDF-hul_122_Invalid_destination_-_Destination_NULL&amp;diff=6161"/>
		<updated>2024-01-21T21:34:02Z</updated>

		<summary type="html">&lt;p&gt;172.104.134.96: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Infobox COW&lt;br /&gt;
|status=Production&lt;br /&gt;
|tools=Pdfcpu, Qpdf, JHOVE, Adobe Acrobat Pro, PDF Checker&lt;br /&gt;
|input=File with JHOVE validation error PDF-HUL-122 “Invalid Destination” / Well-Formed, but not valid. &lt;br /&gt;
This specific workflow documents the sample treatment for a file currently found here: https://www.schleswig-holstein.de/DE/landesregierung/ministerien-behoerden/VIII/Service/Broschueren/Broschueren_VIII/Kita/Mein_Kind_kommt_in_die_Kita_Rechte.pdf?__blob=publicationFile&amp;amp;v=2&lt;br /&gt;
|output=Fixed file&lt;br /&gt;
|organisation=TIB&lt;br /&gt;
|organisationurl=https://wiki.tib.eu/confluence/display/lza/Digital+preservation+at+TIB&lt;br /&gt;
}}&lt;br /&gt;
==Workflow Description==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- To add an image of your workflow, open the &amp;quot;Upload File&amp;quot; link on the left in a new browser tab and follow on screen instructions, then return to this page and add the name of your uploaded image to the line below - replacing &amp;quot;workflow.png&amp;quot; with the name of your file. Replace the text &amp;quot;Textual description&amp;quot; with a short description of your image. Filenames are case sensitive! If you don't want to add a workflow diagram or other image, delete the line below  --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
The workflow describes the analysis and the fix of a specific instance of a PDF-HUL-122 error. &lt;br /&gt;
&amp;lt;b&amp;gt; CAUTION! PDF-HUL-122 errors can be very different in nature and impact. &amp;lt;/b&amp;gt;&lt;br /&gt;
The process described here can be used for internal document links that are not working. It is a manual workflow. More information about this specific case has been discussed in a blog (see link under &amp;quot;Further information&amp;quot;). The methodology used here is that introduced in https://hdl.handle.net/2142/121092.&lt;br /&gt;
&lt;br /&gt;
'''Step 1: Validation Error'''&lt;br /&gt;
&lt;br /&gt;
JHOVE v1.28 PDF-hul v1.12.4 PDF-HUL-122 Invalid Destination with offset given. Well-formed, but not valid.&lt;br /&gt;
&lt;br /&gt;
'''Step 2: Cross-Check with other Tools'''&lt;br /&gt;
&lt;br /&gt;
Cross-checked with: &lt;br /&gt;
pdfcpu v0.6.0dev relaxed mode - no error &lt;br /&gt;
pdfcpu v0.6.0dev strict mode - unrelated error (Font error) &lt;br /&gt;
qpdf v9.1.1 - no error&lt;br /&gt;
PDF Checker 2.1.0 - no error&lt;br /&gt;
'''&lt;br /&gt;
Step 3: Matching Results?'''&lt;br /&gt;
&lt;br /&gt;
No. Error not reported by other tools, most likely due to low priority of error (only impacts validity, not well-formedness). &lt;br /&gt;
One additional un-related error picked up.&lt;br /&gt;
&lt;br /&gt;
'''Step 4: Choose Error to Treat'''&lt;br /&gt;
&lt;br /&gt;
Original PDF-HUL-122. Ignore Font error.&lt;br /&gt;
&lt;br /&gt;
'''Step 5A: Locate Error in Spec'''&lt;br /&gt;
&lt;br /&gt;
ISO 32000-2:2017 12.3.2.4 Named destinations must contain name as well as target destination. In case of internal destinations this is typically an object reference.&lt;br /&gt;
&lt;br /&gt;
'''Step 5B: Locate Error in File&lt;br /&gt;
'''&lt;br /&gt;
Offset given by JHOVE only points to place where reference was used in GoTo destination. The reader tries to resolved the named destination used in the GoTo action via the name tree. Here the object is missing and replaced by &amp;quot;Null&amp;quot;.&lt;br /&gt;
(Rechte_von_Eltern_in_der_Kita_2018_V7_bf.indd:.45593:62)[null/Fit ]&lt;br /&gt;
&lt;br /&gt;
'''Step 6: Match?'''&lt;br /&gt;
&lt;br /&gt;
Yes.&lt;br /&gt;
&lt;br /&gt;
'''Step 7: Fixable?'''&lt;br /&gt;
&lt;br /&gt;
Yes.&lt;br /&gt;
&lt;br /&gt;
'''Step 8: Fix'''&lt;br /&gt;
&lt;br /&gt;
Find location by checking page object the wrong destination is used on. With Adobe Acrobat Pro's &amp;quot;Edit Link&amp;quot; option, the erronous link can be removed and replaced by a correct one, if known.&lt;br /&gt;
&lt;br /&gt;
'''Step 9: Check'''&lt;br /&gt;
&lt;br /&gt;
Re-validated file with JHOVE: now well-formed and valid. Link is now actionable.&lt;br /&gt;
&lt;br /&gt;
'''Step 10: Success?'''&lt;br /&gt;
&lt;br /&gt;
Yes.&lt;br /&gt;
&lt;br /&gt;
[[File:PDF-hul-122 1.jpg|Textual description]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Describe your workflow here with an overview of the different steps or processes involved--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Purpose, Context and Content==&lt;br /&gt;
&amp;lt;!-- Describe what your workflow is for - i.e. what it is designed to achieve, what the organisational context of the workflow is, and what content it is designed to work with --&amp;gt;&lt;br /&gt;
This workflow describes the analysis and treatment of JHOVE PDF-Hul error message PDF-HUL-122 Invalid destination. It describes the process and results of a manual validation error analysis. &lt;br /&gt;
&lt;br /&gt;
==Evaluation/Review==&lt;br /&gt;
&amp;lt;!-- How effective was the workflow? Was it replaced with a better workflow? Did it work well with some content but not others? What is the current status of the workflow? Does it relate to another workflow already described on the wiki? Link, explain and elaborate --&amp;gt;&lt;br /&gt;
The workflow is effective for this specific instance of the error. It should be replicable for pdfs with similar invalid destination problems, i.e., internal links have been replaced with NULL. However, the fix needs to be considered carefully as it impacts the internal structure of the PDF file. &lt;br /&gt;
&lt;br /&gt;
==Further Information==&lt;br /&gt;
&amp;lt;!-- Provide any further information or links to additional documentation here --&amp;gt;&lt;br /&gt;
See a further discussion of this in the OPF blog: https://openpreservation.org/blogs/destination-null-one-of-the-many-cases-of-pdf-hul-122/?q=1&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Add four tildes below (&amp;quot;~~~~&amp;quot;) to create an automatic signature, including your wiki username. Ensure your user page (click on your username to create it) includes an up to date contact email address so that people can contact you if they want to discuss your workflow --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Note that your workflow will be marked with a CC3.0 licence --&amp;gt;&lt;/div&gt;</summary>
		<author><name>172.104.134.96</name></author>
	</entry>
	<entry>
		<id>https://coptr.digipres.org/index.php?title=Warc_Analyzer&amp;diff=6156</id>
		<title>Warc Analyzer</title>
		<link rel="alternate" type="text/html" href="https://coptr.digipres.org/index.php?title=Warc_Analyzer&amp;diff=6156"/>
		<updated>2024-01-02T15:32:30Z</updated>

		<summary type="html">&lt;p&gt;172.104.134.96: Created page with &amp;quot;{{Infobox tool |purpose=A proof-of-concept client side webapp for analyzing WARC data using Webrecorder's warcio.js. No WARC data is uploaded anywhere it runs on your machine....&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Infobox tool&lt;br /&gt;
|purpose=A proof-of-concept client side webapp for analyzing WARC data using Webrecorder's warcio.js. No WARC data is uploaded anywhere it runs on your machine. The idea is that it would be useful for archivists who have been given a pile of WARC data and they would like to quickly know what it contains.&lt;br /&gt;
|homepage=https://github.com/edsu/warc-analyzer&lt;br /&gt;
|formats_in=WARC&lt;br /&gt;
|function=Discovery&lt;br /&gt;
|content=Web&lt;br /&gt;
}}&lt;br /&gt;
{{Infobox tool details}}&lt;br /&gt;
== Description ==&lt;br /&gt;
&amp;lt;!-- Describe the what the tool does, focusing on it's digital preservation value. Keep it factual. --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== User Experiences ==&lt;br /&gt;
&amp;lt;!-- Add hotlinks to user experiences with the tool (eg. blog posts). These should illustrate the effectiveness (or otherwise) of the tool. Use a bullet list. --&amp;gt;&lt;/div&gt;</summary>
		<author><name>172.104.134.96</name></author>
	</entry>
	<entry>
		<id>https://coptr.digipres.org/index.php?title=Workflow:OVP_migration_flow&amp;diff=6150</id>
		<title>Workflow:OVP migration flow</title>
		<link rel="alternate" type="text/html" href="https://coptr.digipres.org/index.php?title=Workflow:OVP_migration_flow&amp;diff=6150"/>
		<updated>2023-10-16T21:18:29Z</updated>

		<summary type="html">&lt;p&gt;172.104.134.96: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Infobox COW&lt;br /&gt;
|status=Production&lt;br /&gt;
|tools=python, OVP, API&lt;br /&gt;
|input=Media assets and descriptive metadata in source OVP.&lt;br /&gt;
|output=Media assets and descriptive metadata in destination OVP.&lt;br /&gt;
|organisation=La Digitalizadora de la Memoria Colectiva&lt;br /&gt;
|organisationurl=https://ladigitalizadora.org/&lt;br /&gt;
}}&lt;br /&gt;
OVP migration flow - an agnostic blueprint&lt;br /&gt;
&lt;br /&gt;
To initiate the transition from any given Source [https://en.wikipedia.org/wiki/Online_video_platform On-line Video Platform] to a Destination OVP, the first step involves retrieving video information from the OVP Source. You should be able to achieve this by making a GET request to the OVP Source's API authenticating against it, typically using an access token and video ID. This request should typically be directed to a generic API endpoint.&lt;br /&gt;
&lt;br /&gt;
Upon receiving a response, you should have the opportunity to extract valuable video metadata, including details like the title, description, and tags. Additionally, you can identify the download link for the highest quality version available.&lt;br /&gt;
&lt;br /&gt;
This process should result in acquiring a video asset along with its associated metadata, often structured in XML format. Once these initial steps are executed, you may offer the user the choice to remove the video entry from OVP Source.&lt;br /&gt;
&lt;br /&gt;
Following this, the natural progression leads into the OVP Destination ingest process, aligning with their API documentation and specifications.&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- To add an image of your workflow, open the &amp;quot;Upload File&amp;quot; link on the left in a new browser tab and follow on screen instructions, then return to this page and add the name of your uploaded image to the line below - replacing &amp;quot;workflow.png&amp;quot; with the name of your file. Replace the text &amp;quot;Textual description&amp;quot; with a short description of your image. Filenames are case sensitive! If you don't want to add a workflow diagram or other image, delete the line below  --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:AVmigrationflow.png|OVP migration flow UML]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Describe your workflow here with an overview of the different steps or processes involved--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Purpose: The purpose of the document is to outline a generic blueprint for the migration of video content from any Source On-line Video Platform (OVP) to a Destination OVP. This blueprint provides a high-level overview of the key steps and considerations involved in such a migration process.&lt;br /&gt;
&lt;br /&gt;
Context: In the context of digital content migration, this blueprint serves as a guide for initiating the transition from one OVP to another. It emphasizes the importance of retrieving video information from the source platform, including metadata extraction and potential deletion of content. The document sets the stage for a seamless migration process, aligning with the API documentation and specifications of the destination OVP.&lt;br /&gt;
&amp;lt;!-- Describe what your workflow is for - i.e. what it is designed to achieve, what the organisational context of the workflow is, and what content it is designed to work with --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- How effective was the workflow? Was it replaced with a better workflow? Did it work well with some content but not others? What is the current status of the workflow? Does it relate to another workflow already described on the wiki? Link, explain and elaborate --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Some On-line Video Platforms (OVPs) and the sources of their API documentation:&lt;br /&gt;
&lt;br /&gt;
YouTube:&lt;br /&gt;
Source: [https://developers.google.com/youtube/registering_an_application]&lt;br /&gt;
Documentation URL: YouTube Data API Documentation&lt;br /&gt;
Description: YouTube provides a Data API that allows developers to access YouTube's features programmatically. It covers functionalities related to video uploads, playlists, video information retrieval, and more.&lt;br /&gt;
&lt;br /&gt;
Vimeo:&lt;br /&gt;
Source: [https://developer.vimeo.com/api/reference]&lt;br /&gt;
Documentation URL: Vimeo API Documentation&lt;br /&gt;
Description: Vimeo's API documentation offers resources for integrating Vimeo's video hosting and sharing capabilities into applications. It provides access to features like video upload, video information retrieval, and advanced privacy controls.&lt;br /&gt;
&lt;br /&gt;
Kaltura:&lt;br /&gt;
Source: [https://developer.kaltura.com/]&lt;br /&gt;
Documentation URL: Kaltura Developer Documentation&lt;br /&gt;
Description: Kaltura's Developer Documentation provides extensive resources for developers to leverage Kaltura's video platform capabilities. It covers video management, publishing, and monetization, as well as features like video uploading, player customization, and analytics.&lt;br /&gt;
&lt;br /&gt;
Brightcove:&lt;br /&gt;
Source: [https://apis.support.brightcove.com/]&lt;br /&gt;
Documentation URL: Brightcove API Documentation&lt;br /&gt;
Description: Brightcove's API documentation provides comprehensive resources for developers to interact with the Brightcove platform programmatically. It includes features related to video publishing, content management, player customization, analytics, and monetization.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Provide any further information or links to additional documentation here --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Add four tildes below (&amp;quot;~~~~&amp;quot;) to create an automatic signature, including your wiki username. Ensure your user page (click on your username to create it) includes an up to date contact email address so that people can contact you if they want to discuss your workflow --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Note that your workflow will be marked with a CC3.0 licence --&amp;gt;&lt;/div&gt;</summary>
		<author><name>172.104.134.96</name></author>
	</entry>
	<entry>
		<id>https://coptr.digipres.org/index.php?title=DLCM&amp;diff=6133</id>
		<title>DLCM</title>
		<link rel="alternate" type="text/html" href="https://coptr.digipres.org/index.php?title=DLCM&amp;diff=6133"/>
		<updated>2023-10-09T09:04:36Z</updated>

		<summary type="html">&lt;p&gt;172.104.134.96: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Infobox tool&lt;br /&gt;
|purpose=DLCM is an archiving solution based on the OAIS model developed in Java by Swiss universities. It is also the technological stack of the OLOS service.&lt;br /&gt;
|homepage=https://gitlab.unige.ch/dlcm&lt;br /&gt;
|sourcecode=https://gitlab.unige.ch/dlcm&lt;br /&gt;
|license=GNU GPL v2.0&lt;br /&gt;
|formats_out=METS (Metadata Encoding and Transmission Standard), PREMIS (Preservation Metadata Implementation Strategies)&lt;br /&gt;
|function=Preservation System&lt;br /&gt;
}}&lt;br /&gt;
{{Infobox tool details}}&lt;br /&gt;
== Description ==&lt;br /&gt;
&amp;lt;!-- Describe the what the tool does, focusing on it's digital preservation value. Keep it factual. --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== User Experiences ==&lt;br /&gt;
&amp;lt;!-- Add hotlinks to user experiences with the tool (eg. blog posts). These should illustrate the effectiveness (or otherwise) of the tool. Use a bullet list. --&amp;gt;&lt;/div&gt;</summary>
		<author><name>172.104.134.96</name></author>
	</entry>
	<entry>
		<id>https://coptr.digipres.org/index.php?title=DLCM&amp;diff=6132</id>
		<title>DLCM</title>
		<link rel="alternate" type="text/html" href="https://coptr.digipres.org/index.php?title=DLCM&amp;diff=6132"/>
		<updated>2023-10-09T09:04:15Z</updated>

		<summary type="html">&lt;p&gt;172.104.134.96: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Infobox tool&lt;br /&gt;
|purpose=DLCM is an archiving solution based on the OAIS model (ISO 14721) developed in Java by Swiss universities. It is also the technological stack of the OLOS service.&lt;br /&gt;
|homepage=https://gitlab.unige.ch/dlcm&lt;br /&gt;
|sourcecode=https://gitlab.unige.ch/dlcm&lt;br /&gt;
|license=GNU GPL v2.0&lt;br /&gt;
|formats_out=METS (Metadata Encoding and Transmission Standard), PREMIS (Preservation Metadata Implementation Strategies)&lt;br /&gt;
|function=Preservation System&lt;br /&gt;
}}&lt;br /&gt;
{{Infobox tool details}}&lt;br /&gt;
== Description ==&lt;br /&gt;
&amp;lt;!-- Describe the what the tool does, focusing on it's digital preservation value. Keep it factual. --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== User Experiences ==&lt;br /&gt;
&amp;lt;!-- Add hotlinks to user experiences with the tool (eg. blog posts). These should illustrate the effectiveness (or otherwise) of the tool. Use a bullet list. --&amp;gt;&lt;/div&gt;</summary>
		<author><name>172.104.134.96</name></author>
	</entry>
	<entry>
		<id>https://coptr.digipres.org/index.php?title=DLCM&amp;diff=6131</id>
		<title>DLCM</title>
		<link rel="alternate" type="text/html" href="https://coptr.digipres.org/index.php?title=DLCM&amp;diff=6131"/>
		<updated>2023-10-09T09:02:39Z</updated>

		<summary type="html">&lt;p&gt;172.104.134.96: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Infobox tool&lt;br /&gt;
|purpose=DLCM is an archiving solution based on the OAIS model (ISO 14721) developed by Swiss universities. It is also the technological stack of the OLOS service.&lt;br /&gt;
|homepage=https://gitlab.unige.ch/dlcm&lt;br /&gt;
|sourcecode=https://gitlab.unige.ch/dlcm&lt;br /&gt;
|license=GNU GPL v2.0&lt;br /&gt;
|language=Java&lt;br /&gt;
|formats_out=METS (Metadata Encoding and Transmission Standard), PREMIS (Preservation Metadata Implementation Strategies)&lt;br /&gt;
|function=Preservation System&lt;br /&gt;
}}&lt;br /&gt;
{{Infobox tool details}}&lt;br /&gt;
== Description ==&lt;br /&gt;
&amp;lt;!-- Describe the what the tool does, focusing on it's digital preservation value. Keep it factual. --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== User Experiences ==&lt;br /&gt;
&amp;lt;!-- Add hotlinks to user experiences with the tool (eg. blog posts). These should illustrate the effectiveness (or otherwise) of the tool. Use a bullet list. --&amp;gt;&lt;/div&gt;</summary>
		<author><name>172.104.134.96</name></author>
	</entry>
	<entry>
		<id>https://coptr.digipres.org/index.php?title=DLCM&amp;diff=6130</id>
		<title>DLCM</title>
		<link rel="alternate" type="text/html" href="https://coptr.digipres.org/index.php?title=DLCM&amp;diff=6130"/>
		<updated>2023-10-09T08:16:36Z</updated>

		<summary type="html">&lt;p&gt;172.104.134.96: Created page with &amp;quot;{{Infobox tool |purpose=DLCM is an archiving solution based on the OAIS model (ISO 14721) developed by Swiss universities. It is also the technological stack of the OLOS servi...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Infobox tool&lt;br /&gt;
|purpose=DLCM is an archiving solution based on the OAIS model (ISO 14721) developed by Swiss universities. It is also the technological stack of the OLOS service.&lt;br /&gt;
|homepage=https://gitlab.unige.ch/dlcm&lt;br /&gt;
|sourcecode=https://gitlab.unige.ch/dlcm&lt;br /&gt;
|license=GNU GPL v2.0&lt;br /&gt;
|formats_out=METS (Metadata Encoding and Transmission Standard), PREMIS (Preservation Metadata Implementation Strategies)&lt;br /&gt;
|function=Preservation System&lt;br /&gt;
}}&lt;br /&gt;
{{Infobox tool details}}&lt;br /&gt;
== Description ==&lt;br /&gt;
&amp;lt;!-- Describe the what the tool does, focusing on it's digital preservation value. Keep it factual. --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== User Experiences ==&lt;br /&gt;
&amp;lt;!-- Add hotlinks to user experiences with the tool (eg. blog posts). These should illustrate the effectiveness (or otherwise) of the tool. Use a bullet list. --&amp;gt;&lt;/div&gt;</summary>
		<author><name>172.104.134.96</name></author>
	</entry>
	<entry>
		<id>https://coptr.digipres.org/index.php?title=Workflow:Deaccessioning_data&amp;diff=6124</id>
		<title>Workflow:Deaccessioning data</title>
		<link rel="alternate" type="text/html" href="https://coptr.digipres.org/index.php?title=Workflow:Deaccessioning_data&amp;diff=6124"/>
		<updated>2023-10-05T11:39:44Z</updated>

		<summary type="html">&lt;p&gt;172.104.134.96: Procedure to remove datasets but retaining the PID&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Infobox COW&lt;br /&gt;
|status=Production&lt;br /&gt;
|tools=DataCite&lt;br /&gt;
|input=Datasets for which there is reason to remove them from the collection.&lt;br /&gt;
|output=A tombstone page (or similar) to ensure the resolving of the PID of the dataset, leading to information that the dataset was removed.&lt;br /&gt;
|organisation=SEADDA community of archaeologists and digital specialists&lt;br /&gt;
|organisationurl=https://www.seadda.eu/&lt;br /&gt;
}}&lt;br /&gt;
==Workflow Description==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- To add an image of your workflow, open the &amp;quot;Upload File&amp;quot; link on the left in a new browser tab and follow on screen instructions, then return to this page and add the name of your uploaded image to the line below - replacing &amp;quot;workflow.png&amp;quot; with the name of your file. Replace the text &amp;quot;Textual description&amp;quot; with a short description of your image. Filenames are case sensitive! If you don't want to add a workflow diagram or other image, delete the line below  --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:workflow.png|Textual description]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Describe your workflow here with an overview of the different steps or processes involved--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Datasets may need to be deaccessioned or deleted, depending on institutional policy, when:&lt;br /&gt;
* Errors are discovered in a published dataset that renders it unusable for research.&lt;br /&gt;
* If the content is unlawful or the dataset is fraudulent from a scientific point of view.&lt;br /&gt;
* If one or more of the authors or rights holders did not give permission for publication.&lt;br /&gt;
* If personal data appears to have been deposited without a legal ground such as permission of research subjects.&lt;br /&gt;
* If the content consists of personal data, and a research subject rightfully objects to the preservation of a digital object with an appeal to the GDPR (e.g. right to be forgotten; or revocation of informed consent).&lt;br /&gt;
* If there is a legally binding maximum preservation period for the content.&lt;br /&gt;
&lt;br /&gt;
If there is sufficient ground to decide to deaccession a dataset, the deaccessioning is done by a Data Manager who will conduct an appraisal of the request.&lt;br /&gt;
&lt;br /&gt;
In all cases where access to a published dataset is terminated, a notice will be added to the landing page of the Persistent Identifier (PID) associated with the dataset to indicate that the dataset is no longer available: this is called a 'tombstone page'. This is done to ensure that anyone that has citations to resources that have subsequently been removed are retained, and furthermore that the provenance of the deaccession or removal is recorded within all associated metadata.&lt;br /&gt;
&lt;br /&gt;
==Purpose, Context and Content==&lt;br /&gt;
&amp;lt;!-- Describe what your workflow is for - i.e. what it is designed to achieve, what the organisational context of the workflow is, and what content it is designed to work with --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Within the SEADDA consortium, workflows for dealing with deaccessioning of datasets were examined from three long standing digital repositories for archaeological data: the Archaeology Data Station of Data Archiving and Networked Services (DANS), the Swedish National Data Service (SND), and the Archaeology Data Service (ADS).&lt;br /&gt;
&lt;br /&gt;
==Evaluation/Review==&lt;br /&gt;
&amp;lt;!-- How effective was the workflow? Was it replaced with a better workflow? Did it work well with some content but not others? What is the current status of the workflow? Does it relate to another workflow already described on the wiki? Link, explain and elaborate --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Further Information==&lt;br /&gt;
&amp;lt;!-- Provide any further information or links to additional documentation here --&amp;gt;&lt;br /&gt;
https://docs.google.com/document/d/1mAL20vvhZnYkZuc_60awac5wc-kiaBzYZ_AJahS_nkM/edit?usp=sharing&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Add four tildes below (&amp;quot;~~~~&amp;quot;) to create an automatic signature, including your wiki username. Ensure your user page (click on your username to create it) includes an up to date contact email address so that people can contact you if they want to discuss your workflow --&amp;gt;&lt;br /&gt;
[[Special:Contributions/172.104.134.96|172.104.134.96]] 11:39, 5 October 2023 (UTC)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Note that your workflow will be marked with a CC3.0 licence --&amp;gt;&lt;/div&gt;</summary>
		<author><name>172.104.134.96</name></author>
	</entry>
	<entry>
		<id>https://coptr.digipres.org/index.php?title=Workflow:Assessing_information&amp;diff=6123</id>
		<title>Workflow:Assessing information</title>
		<link rel="alternate" type="text/html" href="https://coptr.digipres.org/index.php?title=Workflow:Assessing_information&amp;diff=6123"/>
		<updated>2023-10-05T11:20:26Z</updated>

		<summary type="html">&lt;p&gt;172.104.134.96: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Infobox COW&lt;br /&gt;
|status=Production&lt;br /&gt;
|input=Submitted datasets.&lt;br /&gt;
|output=Assessed datasets. This workflow is part of the curation to ensure FAIR datasets within a Trusted Digital Repository&lt;br /&gt;
|organisation=SEADDA community of archaeologists and digital specialists&lt;br /&gt;
|organisationurl=https://www.seadda.eu/&lt;br /&gt;
}}&lt;br /&gt;
==Workflow Description==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- To add an image of your workflow, open the &amp;quot;Upload File&amp;quot; link on the left in a new browser tab and follow on screen instructions, then return to this page and add the name of your uploaded image to the line below - replacing &amp;quot;workflow.png&amp;quot; with the name of your file. Replace the text &amp;quot;Textual description&amp;quot; with a short description of your image. Filenames are case sensitive! If you don't want to add a workflow diagram or other image, delete the line below  --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:workflow.png|Textual description]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Describe your workflow here with an overview of the different steps or processes involved--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A member of staff runs through a checklist to ensure that:&lt;br /&gt;
* The dataset confirms to the collection policy of the repository&lt;br /&gt;
* Data deposit contains no malware (all files).&lt;br /&gt;
* Digital objects are in correct formats (all files).&lt;br /&gt;
* Data deposit has collection-level metadata.&lt;br /&gt;
* All digital objects have core descriptive metadata (all files).&lt;br /&gt;
* Digital objects have additional technical metadata (all files).&lt;br /&gt;
* Digital objects can be opened, are valid, and can be reused (all files, representative sample for large datasets) .&lt;br /&gt;
* The data deposit has no sensitive data concerns (all files, representative sample for large datasets).&lt;br /&gt;
* Content is appropriate and complete (all files, representative sample for large datasets).&lt;br /&gt;
* The dataset is structured in a manner which is clear for the purposes of reusing data.&lt;br /&gt;
&lt;br /&gt;
==Purpose, Context and Content==&lt;br /&gt;
&amp;lt;!-- Describe what your workflow is for - i.e. what it is designed to achieve, what the organisational context of the workflow is, and what content it is designed to work with --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Within the SEADDA consortium, workflows for dealing with assessing information were examined from three long standing digital repositories for archaeological data: the Archaeology Data Station of Data Archiving and Networked Services (DANS), the Swedish National Data Service (SND), and the Archaeology Data Service (ADS). &lt;br /&gt;
&lt;br /&gt;
==Evaluation/Review==&lt;br /&gt;
&amp;lt;!-- How effective was the workflow? Was it replaced with a better workflow? Did it work well with some content but not others? What is the current status of the workflow? Does it relate to another workflow already described on the wiki? Link, explain and elaborate --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Further Information==&lt;br /&gt;
&amp;lt;!-- Provide any further information or links to additional documentation here --&amp;gt;&lt;br /&gt;
https://docs.google.com/document/d/1mAL20vvhZnYkZuc_60awac5wc-kiaBzYZ_AJahS_nkM/edit?usp=sharing&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Add four tildes below (&amp;quot;~~~~&amp;quot;) to create an automatic signature, including your wiki username. Ensure your user page (click on your username to create it) includes an up to date contact email address so that people can contact you if they want to discuss your workflow --&amp;gt;&lt;br /&gt;
[[Special:Contributions/172.104.134.96|172.104.134.96]] 11:19, 5 October 2023 (UTC)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Note that your workflow will be marked with a CC3.0 licence --&amp;gt;&lt;/div&gt;</summary>
		<author><name>172.104.134.96</name></author>
	</entry>
	<entry>
		<id>https://coptr.digipres.org/index.php?title=Workflow:Assessing_information&amp;diff=6122</id>
		<title>Workflow:Assessing information</title>
		<link rel="alternate" type="text/html" href="https://coptr.digipres.org/index.php?title=Workflow:Assessing_information&amp;diff=6122"/>
		<updated>2023-10-05T11:19:17Z</updated>

		<summary type="html">&lt;p&gt;172.104.134.96: A workflow that is undertaken by repositories for all datasets to ensure relevance and understandability of data and metadata based on defined criteria.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Infobox COW&lt;br /&gt;
|status=Production&lt;br /&gt;
|input=Submitted datasets.&lt;br /&gt;
|output=Assessed datasets. This workflow is part of the curation to ensure FAIR datasets within a Trusted Digital Repository&lt;br /&gt;
|organisation=SEADDA community of archaeologists and digital specialists&lt;br /&gt;
|organisationurl=https://www.seadda.eu/&lt;br /&gt;
}}&lt;br /&gt;
==Workflow Description==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- To add an image of your workflow, open the &amp;quot;Upload File&amp;quot; link on the left in a new browser tab and follow on screen instructions, then return to this page and add the name of your uploaded image to the line below - replacing &amp;quot;workflow.png&amp;quot; with the name of your file. Replace the text &amp;quot;Textual description&amp;quot; with a short description of your image. Filenames are case sensitive! If you don't want to add a workflow diagram or other image, delete the line below  --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:workflow.png|Textual description]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Describe your workflow here with an overview of the different steps or processes involved--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
A member of staff runs through a checklist to ensure that:&lt;br /&gt;
- The dataset confirms to the collection policy of the repository&lt;br /&gt;
- Data deposit contains no malware (all files).&lt;br /&gt;
- Digital objects are in correct formats (all files).&lt;br /&gt;
- Data deposit has collection-level metadata.&lt;br /&gt;
- All digital objects have core descriptive metadata (all files).&lt;br /&gt;
- Digital objects have additional technical metadata (all files).&lt;br /&gt;
- Digital objects can be opened, are valid, and can be reused (all files, representative sample for large datasets) .&lt;br /&gt;
- The data deposit has no sensitive data concerns (all files, representative sample for large datasets).&lt;br /&gt;
- Content is appropriate and complete (all files, representative sample for large datasets).&lt;br /&gt;
- The dataset is structured in a manner which is clear for the purposes of reusing data.&lt;br /&gt;
&lt;br /&gt;
==Purpose, Context and Content==&lt;br /&gt;
&amp;lt;!-- Describe what your workflow is for - i.e. what it is designed to achieve, what the organisational context of the workflow is, and what content it is designed to work with --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
Within the SEADDA consortium, workflows for dealing with assessing information were examined from three long standing digital repositories for archaeological data: the Archaeology Data Station of Data Archiving and Networked Services (DANS), the Swedish National Data Service (SND), and the Archaeology Data Service (ADS). &lt;br /&gt;
&lt;br /&gt;
==Evaluation/Review==&lt;br /&gt;
&amp;lt;!-- How effective was the workflow? Was it replaced with a better workflow? Did it work well with some content but not others? What is the current status of the workflow? Does it relate to another workflow already described on the wiki? Link, explain and elaborate --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
==Further Information==&lt;br /&gt;
&amp;lt;!-- Provide any further information or links to additional documentation here --&amp;gt;&lt;br /&gt;
https://docs.google.com/document/d/1mAL20vvhZnYkZuc_60awac5wc-kiaBzYZ_AJahS_nkM/edit?usp=sharing&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Add four tildes below (&amp;quot;~~~~&amp;quot;) to create an automatic signature, including your wiki username. Ensure your user page (click on your username to create it) includes an up to date contact email address so that people can contact you if they want to discuss your workflow --&amp;gt;&lt;br /&gt;
[[Special:Contributions/172.104.134.96|172.104.134.96]] 11:19, 5 October 2023 (UTC)&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Note that your workflow will be marked with a CC3.0 licence --&amp;gt;&lt;/div&gt;</summary>
		<author><name>172.104.134.96</name></author>
	</entry>
	<entry>
		<id>https://coptr.digipres.org/index.php?title=CSV_export_form_for_Microsoft_Access&amp;diff=6115</id>
		<title>CSV export form for Microsoft Access</title>
		<link rel="alternate" type="text/html" href="https://coptr.digipres.org/index.php?title=CSV_export_form_for_Microsoft_Access&amp;diff=6115"/>
		<updated>2023-09-28T11:53:03Z</updated>

		<summary type="html">&lt;p&gt;172.104.134.96: An Access form to export database tables to CSV&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Infobox tool&lt;br /&gt;
|purpose=A Microsoft Access form to export all database tables to interoperable CSV files.&lt;br /&gt;
|homepage=https://dans.knaw.nl/en/file-formats/spreadsheets/csv/&lt;br /&gt;
|platforms=Microsoft Windows&lt;br /&gt;
|language=English; Dutch&lt;br /&gt;
|formats_in=MDB, ACCDB, XLS&lt;br /&gt;
|formats_out=CSV (Comma Separated Values)&lt;br /&gt;
|function=File Format Migration&lt;br /&gt;
|content=Database, Spreadsheet&lt;br /&gt;
}}&lt;br /&gt;
{{Infobox tool details}}&lt;br /&gt;
== Description ==&lt;br /&gt;
&lt;br /&gt;
Directly exporting a data table from Microsoft Access to CSV often results in incorrect handling of certain values such as dates, integers with separators, memo fields and such. This tool consists of an Access database in MDB format which contains a single Form, which can be used to easily select and export all tables within the database to standardized CSV files, keeping all of the values intact.&lt;br /&gt;
&lt;br /&gt;
The form can be copied from this MDB into another Access database, to use for exporting the tables from that database.&lt;br /&gt;
&lt;br /&gt;
It is also possible to import an External data table or spreadsheet (such as an Excel sheet or a DBF file) into this database, then export that table to CSV.&lt;br /&gt;
&lt;br /&gt;
Please take note of the fact that only the data tables are exported. If information on table attributes or table relations is present and needs to be retained, this information should be documented separately. Access provides the Database Tool 'Database Documenter' which can write this information into a PDF file.&lt;br /&gt;
&lt;br /&gt;
When importing tables from spreadsheets, take note that the CSV exports will only keep the values from rows and columns and will not store characteristics such as coloring of cells, cell annotations or formulas.&lt;br /&gt;
&lt;br /&gt;
Known issues and instructions how to resolve them are detailed in a readme file, provided with the tool.&lt;br /&gt;
&lt;br /&gt;
This script was developed internally at DANS (Data Archiving and Networked Services), the Dutch national centre of expertise and repository for research data.&lt;/div&gt;</summary>
		<author><name>172.104.134.96</name></author>
	</entry>
	<entry>
		<id>https://coptr.digipres.org/index.php?title=Workflow:Archival_Forensics_workflow_(storage_media_deposit)&amp;diff=6078</id>
		<title>Workflow:Archival Forensics workflow (storage media deposit)</title>
		<link rel="alternate" type="text/html" href="https://coptr.digipres.org/index.php?title=Workflow:Archival_Forensics_workflow_(storage_media_deposit)&amp;diff=6078"/>
		<updated>2023-05-26T11:59:10Z</updated>

		<summary type="html">&lt;p&gt;172.104.134.96: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Infobox COW&lt;br /&gt;
|status=Experimental&lt;br /&gt;
|tools=Archivists' Toolkit, Audacity, BitCurator, Duke Data Accessioner, FTK (Forensic Toolkit), Karen's Directory Printer, TeraCopy, TreeSize, VLC Media Player, VirtualBox, WinMerge&lt;br /&gt;
|input=Request to forensically process a digital deposit (storage media) to the University of Glasgow Archives &amp;amp; Special Collections, as part of of the Digital Archiving workflow (see Further Information).&lt;br /&gt;
|output=A verified, authentic copy of storage media content exported as a logical or physical image file, with or without forensic processing.&lt;br /&gt;
|organisation=Archives and Special Collections (ASC), University of Glasgow&lt;br /&gt;
|organisationurl=https://www.gla.ac.uk/myglasgow/archivespecialcollections/&lt;br /&gt;
}}&lt;br /&gt;
==Workflow Description==&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- To add an image of your workflow, open the &amp;quot;Upload File&amp;quot; link on the left in a new browser tab and follow on screen instructions, then return to this page and add the name of your uploaded image to the line below - replacing &amp;quot;workflow.png&amp;quot; with the name of your file. Replace the text &amp;quot;Textual description&amp;quot; with a short description of your image. Filenames are case sensitive! If you don't want to add a workflow diagram or other image, delete the line below  --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
[[File:Archival-forensics-workglow_v0-1_web_sm.png|1000px|Archival forensics workflow produced by Archives and Special Collections at the University of Glasgow]]&amp;lt;br&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Describe your workflow here with an overview of the different steps or processes involved--&amp;gt;&lt;br /&gt;
&lt;br /&gt;
; START&lt;br /&gt;
: A request to forensically process a digital deposit (storage media) to the University of Glasgow Archives &amp;amp; Special Collections, as part of of the [https://coptr.digipres.org/index.php/Workflow:Digital_archiving_workflow_(high-level) Digital Archiving workflow.] &amp;lt;br/&amp;gt;&lt;br /&gt;
&lt;br /&gt;
; PREPARATION&lt;br /&gt;
: Obtain supporting resources and materials to forensically process digital storage media.&lt;br /&gt;
# Consult the physical conservation and preservation report, documenting all actions on the acquired media and produced during Acquisition in the [https://coptr.digipres.org/index.php/Workflow:Digital_archiving_workflow_(high-level) Digital Archiving workflow].&lt;br /&gt;
# Retrieve the unique accession number generated for the media to be processed from the Collections Management System. Use the accession number as reference in all forensic processing actions.&lt;br /&gt;
# Update the conservation and preservation logs on the Collections Management System relating to the storage media, including:&lt;br /&gt;
#* photographic records of the storage media before processing, clearly showing state, serial number(s) and any other relevant information recorded on the media (e.g. labels).&lt;br /&gt;
#* Documentation of media characteristics, such as technology, type, brand, model, serial number.&lt;br /&gt;
#* Documentation of any hardware setup or configuration necessary to process the storage medium.&lt;br /&gt;
# Proceed to Imaging.&lt;br /&gt;
&lt;br /&gt;
; IMAGING&lt;br /&gt;
: Create an exact copy of storage media, encapsulating contents and structures in a single file (a disk image). &lt;br /&gt;
# Use write-blocking tools (software or hardware) to only permit read-only access to storage media, so as to avoid compromising the integrity of the data; and protect the data chain of custody.&lt;br /&gt;
# Use disk imaging software to generate a forensic image file, which can either be: &lt;br /&gt;
#* a physical image, which is a bit-by-bit (exact) copy of the storage medium and includes active (used) and free space. Any deleted data or file fragments will be copied into the image file.&lt;br /&gt;
#* A logical image, which captures active data on the device but not any deleted space, deleted files or fragments.&lt;br /&gt;
#* A selection of specific files and directories, also known as a targeted collection.&lt;br /&gt;
# Instruct the disk imaging software to create a complete file and directory listing; and verify the integrity of the generated image file by comparing hashes:&lt;br /&gt;
#* If verification fails and attempts at re-imaging are unsuccessful, create a &amp;quot;failed imaging&amp;quot; report in the Collections Management System logs.&lt;br /&gt;
#* If verification is successful, store the image in process store.&lt;br /&gt;
# Is further forensic processing and analysis required?&lt;br /&gt;
#* If no, submit the verified disk image to the [https://coptr.digipres.org/index.php/Workflow:Digital_archiving_workflow_(high-level) Digital Archiving workflow]. OR&lt;br /&gt;
#* If yes, proceed to Processing.&lt;br /&gt;
&lt;br /&gt;
; PROCESSING&lt;br /&gt;
: Extract and manage information from the data in storage media, and make it available for analysis.&lt;br /&gt;
# Collate sources for processing, by selecting specific folders/files to review and - where appropriate - aggregating data from multiple storage media.&lt;br /&gt;
# Perform virus and malware detection checks on the collated sources.&lt;br /&gt;
# Use forensic software to identify and, if possible, remove irrelevant or redundant files from processing. Examples may include operating systems, system files, or user-defined files that have been deemed as irrelevant.&lt;br /&gt;
# Use forensic software to process the data, including hash generation for files; expanding compound files (e.g. zip archives); format identification and validation; creating search text indices; and preparing audiovisual, web and email data for analysis.&lt;br /&gt;
# Proceed to Analysis.&lt;br /&gt;
		&lt;br /&gt;
; ANALYSIS&lt;br /&gt;
: Use digital forensics methods to search, categorise, review, interpret and curate data in storage media, so as to aid selection and appraisal processes.&lt;br /&gt;
# Review the agreement(s) under which the records were donated, in order to identify permissible actions (e.g. whether restoring deleted files is allowed).&lt;br /&gt;
# Depending on the nature of the data and on archival needs, use forensic software to identify records of interest, and make them available for appraisal. Analysis methods may include:&lt;br /&gt;
#* Data carving, for restoring data that was deleted or lost from the file system.&lt;br /&gt;
#* Decrypting encrypted files and recovering passwords for password-protected files.&lt;br /&gt;
#* Viewing and exporting geolocation data from files that have geolocation information associated with them.&lt;br /&gt;
#* Analysing document content to explore terms/words of interest; and automate the identification of personal information, such as names, phone numbers, credit card and social security numbers.&lt;br /&gt;
#* Identifying the language in which documents are written.&lt;br /&gt;
#* Generating thumbnails from images and videos; and extracting metadata from multimedia files.&lt;br /&gt;
#* Flagging duplicate files.&lt;br /&gt;
#* Discovering information (including documents and email communications) relating to pre-defined lists of persons of interest.&lt;br /&gt;
# Once all analyses have been completed, consolidate the resulting data into an appropriate file/folder structure. &lt;br /&gt;
# Proceed to Exporting.&lt;br /&gt;
&lt;br /&gt;
; EXPORTING&lt;br /&gt;
: Export the forensically analysed contents of storage media as logical disk images, alongside relevant processing reports, filters and labels.&lt;br /&gt;
# Export any custom filters and labels created to manage the data, which can be useful for other digital archiving processes. Filters help locate items of interest quickly; and labels allow for grouping files in customised ways (e.g. flagging content that requires archivist attention; or records associated with a specific individual).&lt;br /&gt;
# Export any reports generated during processing and analysis, such as file hashes, virus and malware detection reports, search index terms and geolocation data.&lt;br /&gt;
# Export the forensically curated contents of processed storage media into a logical disk image.&lt;br /&gt;
# Submit the logical disk image to the [https://coptr.digipres.org/index.php/Workflow:Digital_archiving_workflow_(high-level) Digital Archiving workflow].&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
==Purpose, Context and Content==&lt;br /&gt;
&amp;lt;!-- Describe what your workflow is for - i.e. what it is designed to achieve, what the organisational context of the workflow is, and what content it is designed to work with --&amp;gt;&lt;br /&gt;
The workflow is meant to describe the steps and processes involved in an archival forensics examination of digital records submitted in storage media to University Archives at the University of Glasgow. Although the workflow can operate as stand-alone, it has been designed to align with and extend the [https://coptr.digipres.org/index.php/Workflow:Digital_archiving_workflow_(high-level) Digital Archiving workflow].&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Add four tildes below (&amp;quot;~~~~&amp;quot;) to create an automatic signature, including your wiki username. Ensure your user page (click on your username to create it) includes an up to date contact email address so that people can contact you if they want to discuss your workflow --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Note that your workflow will be marked with a CC3.0 licence --&amp;gt;&lt;/div&gt;</summary>
		<author><name>172.104.134.96</name></author>
	</entry>
	<entry>
		<id>https://coptr.digipres.org/index.php?title=DiPS_(Digital_Preservation_Solution)&amp;diff=6064</id>
		<title>DiPS (Digital Preservation Solution)</title>
		<link rel="alternate" type="text/html" href="https://coptr.digipres.org/index.php?title=DiPS_(Digital_Preservation_Solution)&amp;diff=6064"/>
		<updated>2023-04-24T09:57:27Z</updated>

		<summary type="html">&lt;p&gt;172.104.134.96: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Infobox tool&lt;br /&gt;
|purpose=DiPS (OAIS compliant '''Di'''gital '''P'''reservation '''S'''olution)&lt;br /&gt;
|homepage=https://go.sergroup.com/dips-kommunal&lt;br /&gt;
|platforms=Linux and Windows as well as the common database systems&lt;br /&gt;
|language=Java 8 / 11&lt;br /&gt;
|formats_in=e.g. XML, XDOMEA, XPSR&lt;br /&gt;
|formats_out=e.g. XML, XDOMEA, PREMIS&lt;br /&gt;
|function=Access, Active Data Storage, File Format Identification, File Format Migration, File Management, Metadata Extraction, Preservation System, Secure Deletion, Service, Storage, Transfer, Validation, Workflow&lt;br /&gt;
|content=Audio, Binary Data, Container, Document, Image, Metadata, Video&lt;br /&gt;
}}&lt;br /&gt;
{{Infobox tool details}}&lt;br /&gt;
== Description ==&lt;br /&gt;
DiPS is an OAIS compliant long-term archive solution based on the ECM systems from SER Group ([https://www.sergroup.com/en/ sergroup.com]).&lt;br /&gt;
The long-term data is stored as XMLs within the AIPs and managed in the ECM index management. It uses PREMIS to store technical object information (format information, hash values, file sizes, etc.), events (pre-ingest, ingest, conversions, etc.), agents (natural persons and systems) and relationships between objects. The preservation metadata is acquired both from internal mechanisms and from external sources.&lt;br /&gt;
Due to the extensive use of Java and standardized APIs it can be extended and customized in many ways.&lt;br /&gt;
&lt;br /&gt;
Furthermore the system is based on:&lt;br /&gt;
* DROID and custom developments for format recognition&lt;br /&gt;
* JHOVE and a number of custom modules for format validation&lt;br /&gt;
* Custom development and free tools (including ffmpeg/ffprobe) for metadata extraction from content objects&lt;br /&gt;
* Custom development and free tools for format conversion&lt;br /&gt;
&lt;br /&gt;
== Stakeholder/Audience ==&lt;br /&gt;
The primary focus are (long-term) archives but can also be used in other environments with appropriate extensions (e.g. Geographic Information Systems).&lt;br /&gt;
For example, it is offered as a service in the digital archive network &amp;quot;DiPS.kommunal&amp;quot; in North Rhine-Westphalia, Germany ([http://www.danrw.de danrw.de] / [https://www.danrw.de/ueber-das-da-nrw/da-nrw-ein-loesungsverbund details])&lt;/div&gt;</summary>
		<author><name>172.104.134.96</name></author>
	</entry>
	<entry>
		<id>https://coptr.digipres.org/index.php?title=DiPS_(Digital_Preservation_Solution)&amp;diff=6063</id>
		<title>DiPS (Digital Preservation Solution)</title>
		<link rel="alternate" type="text/html" href="https://coptr.digipres.org/index.php?title=DiPS_(Digital_Preservation_Solution)&amp;diff=6063"/>
		<updated>2023-04-24T09:54:12Z</updated>

		<summary type="html">&lt;p&gt;172.104.134.96: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Infobox tool&lt;br /&gt;
|purpose=DiPS (OAIS compliant '''Di'''gital '''P'''reservation '''S'''olution)&lt;br /&gt;
|homepage=https://go.sergroup.com/dips-kommunal&lt;br /&gt;
|platforms=Linux and Windows as well as the common database systems&lt;br /&gt;
|language=Java 8 / 11&lt;br /&gt;
|formats_in=e.g. XML, XDOMEA, XPSR&lt;br /&gt;
|formats_out=e.g. XML, XDOMEA, PREMIS&lt;br /&gt;
|function=Access, Active Data Storage, File Format Identification, File Format Migration, File Management, Metadata Extraction, Preservation System, Secure Deletion, Service, Storage, Transfer, Validation, Workflow&lt;br /&gt;
|content=Audio, Binary Data, Container, Document, Image, Metadata, Video&lt;br /&gt;
}}&lt;br /&gt;
{{Infobox tool details}}&lt;br /&gt;
== Description ==&lt;br /&gt;
DiPS is an OAIS compliant long-term archive solution based on the ECM systems from SER Group ([https://www.sergroup.com/en/ sergroup.com]).&lt;br /&gt;
The long-term data is stored as XMLs within the AIPs and managed in the ECM index management. It uses PREMIS to store technical object information (format information, hash values, file sizes, etc.), events (pre-ingest, ingest, conversions, etc.), agents (natural persons and systems) and relationships between objects. The preservation metadata is acquired both from internal mechanisms and from external sources.&lt;br /&gt;
Due to the extensive use of Java and standardized APIs it can be extended and customized in many ways.&lt;br /&gt;
&lt;br /&gt;
Furthermore the system is based on:&lt;br /&gt;
* DROID and custom developments for format recognition&lt;br /&gt;
* JHOVE and a number of custom modules for format validation&lt;br /&gt;
* Custom development and free tools (including ffmpeg/ffprobe) for metadata extraction from content objects&lt;br /&gt;
* Custom development and free tools for format conversion&lt;br /&gt;
&lt;br /&gt;
== Stakeholder/Audience ==&lt;br /&gt;
The primary focus are (long-term) archives but can also be used in other environments with appropriate extensions (e.g. Geographic Information Systems).&lt;br /&gt;
For example, it is offered as a service in the digital archive network &amp;quot;DiPS.kommunal&amp;quot; in North Rhine-Westphalia, Germany ([http://www.danrw.de danrw.de])&lt;/div&gt;</summary>
		<author><name>172.104.134.96</name></author>
	</entry>
	<entry>
		<id>https://coptr.digipres.org/index.php?title=DiPS_(Digital_Preservation_Solution)&amp;diff=6062</id>
		<title>DiPS (Digital Preservation Solution)</title>
		<link rel="alternate" type="text/html" href="https://coptr.digipres.org/index.php?title=DiPS_(Digital_Preservation_Solution)&amp;diff=6062"/>
		<updated>2023-04-24T09:51:17Z</updated>

		<summary type="html">&lt;p&gt;172.104.134.96: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Infobox tool&lt;br /&gt;
|purpose=DiPS (OAIS compliant '''Di'''gital '''P'''reservation '''S'''olution)&lt;br /&gt;
|homepage=https://go.sergroup.com/dips-kommunal&lt;br /&gt;
|platforms=Linux and Windows as well as the common database systems&lt;br /&gt;
|language=Java 8 / 11&lt;br /&gt;
|formats_in=e.g. XML, XDOMEA, XPSR&lt;br /&gt;
|formats_out=e.g. XML, XDOMEA, PREMIS&lt;br /&gt;
|function=Access, Active Data Storage, File Format Identification, File Format Migration, File Management, Metadata Extraction, Preservation System, Secure Deletion, Service, Storage, Transfer, Validation, Workflow&lt;br /&gt;
|content=Audio, Binary Data, Container, Document, Image, Metadata, Video&lt;br /&gt;
}}&lt;br /&gt;
{{Infobox tool details}}&lt;br /&gt;
== Description ==&lt;br /&gt;
DiPS is an OAIS compliant long-term archive solution based on the ECM systems from SER Group ([https://www.sergroup.com/en/ sergroup.com]).&lt;br /&gt;
The long-term data is stored as XMLs within the AIPs and managed in the ECM index management. It uses PREMIS to store technical object information (format information, hash values, file sizes, etc.), events (pre-ingest, ingest, conversions, etc.), agents (natural persons and systems) and relationships between objects.&lt;br /&gt;
Due to the extensive use of Java and standardized APIs it can be extended and customized in many ways.&lt;br /&gt;
&lt;br /&gt;
Furthermore the system is based on:&lt;br /&gt;
* DROID and custom developments for format recognition&lt;br /&gt;
* JHOVE and a number of custom modules for format validation, &lt;br /&gt;
* Custom development and free tools (including ffmpeg/ffprobe) for metadata extraction from content objects&lt;br /&gt;
* Custom development and free tools for format conversion&lt;br /&gt;
&lt;br /&gt;
== Stakeholder/Audience ==&lt;br /&gt;
The primary focus are (long-term) archives but can also be used in other environments with appropriate extensions (e.g. Geographic Information Systems)&lt;br /&gt;
For example, it is offered as a service in the digital archive network &amp;quot;DiPS.kommunal&amp;quot; in North Rhine-Westphalia, Germany ([http://www.danrw.de danrw.de])&lt;/div&gt;</summary>
		<author><name>172.104.134.96</name></author>
	</entry>
	<entry>
		<id>https://coptr.digipres.org/index.php?title=DiPS_(Digital_Preservation_Solution)&amp;diff=6061</id>
		<title>DiPS (Digital Preservation Solution)</title>
		<link rel="alternate" type="text/html" href="https://coptr.digipres.org/index.php?title=DiPS_(Digital_Preservation_Solution)&amp;diff=6061"/>
		<updated>2023-04-24T09:49:06Z</updated>

		<summary type="html">&lt;p&gt;172.104.134.96: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Infobox tool&lt;br /&gt;
|purpose=DiPS (OAIS compliant '''Di'''gital '''P'''reservation '''S'''olution)&lt;br /&gt;
|homepage=https://go.sergroup.com/dips-kommunal&lt;br /&gt;
|platforms=Linux and Windows as well as the common database systems&lt;br /&gt;
|language=Java 8 / 11&lt;br /&gt;
|formats_in=e.g. XML, XDOMEA, XPSR&lt;br /&gt;
|formats_out=e.g. XML, XDOMEA, PREMIS&lt;br /&gt;
|function=Access, Active Data Storage, File Format Identification, File Format Migration, File Management, Metadata Extraction, Preservation System, Secure Deletion, Service, Storage, Transfer, Validation, Workflow&lt;br /&gt;
|content=Audio, Binary Data, Container, Document, Image, Metadata, Video&lt;br /&gt;
}}&lt;br /&gt;
{{Infobox tool details}}&lt;br /&gt;
== Description ==&lt;br /&gt;
DiPS is an OAIS compliant long-term archive solution based on the ECM systems from SER Group ([https://www.sergroup.com/en/ sergroup.com]).&lt;br /&gt;
The long-term data is stored as XMLs within the AIPs and managed in the ECM index management. It uses PREMIS to store technical object information (format information, hash values, file sizes, etc.), events (pre-ingest, ingest, conversions, etc.), agents (natural persons and systems) and relationships between objects.&lt;br /&gt;
Due to the extensive use of Java and standardized APIs it can be extended and customized in many ways.&lt;br /&gt;
&lt;br /&gt;
== Stakeholder/Audience ==&lt;br /&gt;
The primary focus are (long-term) archives but can also be used in other environments with appropriate extensions (e.g. Geographic Information Systems)&lt;br /&gt;
For example, it is offered as a service in the digital archive network &amp;quot;DiPS.kommunal&amp;quot; in North Rhine-Westphalia, Germany ([http://www.danrw.de danrw.de])&lt;/div&gt;</summary>
		<author><name>172.104.134.96</name></author>
	</entry>
	<entry>
		<id>https://coptr.digipres.org/index.php?title=DiPS_(Digital_Preservation_Solution)&amp;diff=6060</id>
		<title>DiPS (Digital Preservation Solution)</title>
		<link rel="alternate" type="text/html" href="https://coptr.digipres.org/index.php?title=DiPS_(Digital_Preservation_Solution)&amp;diff=6060"/>
		<updated>2023-04-24T09:45:30Z</updated>

		<summary type="html">&lt;p&gt;172.104.134.96: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Infobox tool&lt;br /&gt;
|purpose=DiPS (OAIS compliant '''Di'''gital '''P'''reservation '''S'''olution)&lt;br /&gt;
|homepage=https://go.sergroup.com/dips-kommunal&lt;br /&gt;
|platforms=Linux and Windows as well as the common database systems&lt;br /&gt;
|language=Java 8 / 11&lt;br /&gt;
|formats_in=e.g. XML, XDOMEA, XPSR&lt;br /&gt;
|formats_out=e.g. XML, XDOMEA, PREMIS&lt;br /&gt;
|function=Access, Active Data Storage, File Format Identification, File Format Migration, File Management, Metadata Extraction, Preservation System, Secure Deletion, Service, Storage, Transfer, Validation, Workflow&lt;br /&gt;
|content=Audio, Binary Data, Container, Document, Image, Metadata, Video&lt;br /&gt;
}}&lt;br /&gt;
{{Infobox tool details}}&lt;br /&gt;
== Description ==&lt;br /&gt;
DiPS is an OAIS compliant long-term archive solution based on the ECM systems from SER Group ([https://www.sergroup.com/en/ sergroup.com]).&lt;br /&gt;
The long-term data is stored as XMLs within the AIPs and managed in the ECM index management. It uses PREMIS to store technical object information (format information, hash values, file sizes, etc.), events (pre-ingest, ingest, conversions, etc.), agents (natural persons and systems) and relationships between objects.&lt;br /&gt;
Due to the extensive use of Java and standardized APIs it can be extended and customized in many ways.&lt;br /&gt;
&lt;br /&gt;
== Stakeholder/Audience ==&lt;br /&gt;
The primary focus are (long-term) archives but can also be used in other environments with appropriate extensions (e.g. Geographic Information Systems)&lt;/div&gt;</summary>
		<author><name>172.104.134.96</name></author>
	</entry>
	<entry>
		<id>https://coptr.digipres.org/index.php?title=DiPS_(Digital_Preservation_Solution)&amp;diff=6059</id>
		<title>DiPS (Digital Preservation Solution)</title>
		<link rel="alternate" type="text/html" href="https://coptr.digipres.org/index.php?title=DiPS_(Digital_Preservation_Solution)&amp;diff=6059"/>
		<updated>2023-04-24T09:36:55Z</updated>

		<summary type="html">&lt;p&gt;172.104.134.96: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Infobox tool&lt;br /&gt;
|purpose=DiPS (OAIS compliant '''Di'''gital '''P'''reservation '''S'''olution)&lt;br /&gt;
|homepage=https://go.sergroup.com/dips-kommunal&lt;br /&gt;
|platforms=Linux and Windows as well as the common database systems&lt;br /&gt;
|language=java 8 or higher&lt;br /&gt;
|formats_in=XML, XDOMEA, XPSR&lt;br /&gt;
|formats_out=XML, XDOMEA, PREMIS&lt;br /&gt;
|function=Access, Active Data Storage, File Format Identification, File Format Migration, File Management, Metadata Extraction, Preservation System, Secure Deletion, Service, Storage, Transfer, Validation, Workflow&lt;br /&gt;
|content=Audio, Binary Data, Container, Document, Image, Metadata, Video&lt;br /&gt;
}}&lt;br /&gt;
{{Infobox tool details}}&lt;br /&gt;
== Description ==&lt;br /&gt;
DiPS is an OAIS compliant long-term archive solution based on the ECM system from SER Group [https://www.sergroup.com/en/ sergroup.com].&lt;br /&gt;
It uses PREMIS to store technical object information (format information, hash values, file sizes, etc.), events (pre-ingest, ingest, conversions, etc.), agents (natural persons and systems) and relationships between objects.&lt;/div&gt;</summary>
		<author><name>172.104.134.96</name></author>
	</entry>
	<entry>
		<id>https://coptr.digipres.org/index.php?title=DiPS_(Digital_Preservation_Solution)&amp;diff=6058</id>
		<title>DiPS (Digital Preservation Solution)</title>
		<link rel="alternate" type="text/html" href="https://coptr.digipres.org/index.php?title=DiPS_(Digital_Preservation_Solution)&amp;diff=6058"/>
		<updated>2023-04-24T09:36:09Z</updated>

		<summary type="html">&lt;p&gt;172.104.134.96: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Infobox tool&lt;br /&gt;
|purpose=DiPS (OAIS compliant '''Di'''gital '''P'''reservation '''S'''olution)&lt;br /&gt;
|homepage=https://go.sergroup.com/dips-kommunal&lt;br /&gt;
|platforms=Linux and Windows as well as the common database systems&lt;br /&gt;
|language=java 8 or higher&lt;br /&gt;
|formats_in=XML, XDOMEA, XPSR&lt;br /&gt;
|formats_out=XML, XDOMEA, PREMIS&lt;br /&gt;
|function=Access, Active Data Storage, File Format Identification, File Format Migration, File Management, Metadata Extraction, Preservation System, Secure Deletion, Service, Storage, Transfer, Validation, Workflow&lt;br /&gt;
|content=Audio, Binary Data, Container, Document, Image, Metadata, Video&lt;br /&gt;
}}&lt;br /&gt;
{{Infobox tool details}}&lt;br /&gt;
== Description ==&lt;br /&gt;
DiPS is an OAIS compliant long-term archive solution based on the ECM system from SER Group [SER Group https://www.sergroup.com/en/].&lt;br /&gt;
It uses PREMIS to store technical object information (format information, hash values, file sizes, etc.), events (pre-ingest, ingest, conversions, etc.), agents (natural persons and systems) and relationships between objects.&lt;/div&gt;</summary>
		<author><name>172.104.134.96</name></author>
	</entry>
	<entry>
		<id>https://coptr.digipres.org/index.php?title=DiPS_(Digital_Preservation_Solution)&amp;diff=6057</id>
		<title>DiPS (Digital Preservation Solution)</title>
		<link rel="alternate" type="text/html" href="https://coptr.digipres.org/index.php?title=DiPS_(Digital_Preservation_Solution)&amp;diff=6057"/>
		<updated>2023-04-24T09:35:16Z</updated>

		<summary type="html">&lt;p&gt;172.104.134.96: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Infobox tool&lt;br /&gt;
|purpose=DiPS (OAIS compliant '''Di'''gital '''P'''reservation '''S'''olution)&lt;br /&gt;
|homepage=https://go.sergroup.com/dips-kommunal&lt;br /&gt;
|platforms=Linux and Windows as well as the common database systems&lt;br /&gt;
|language=java 8 or higher&lt;br /&gt;
|formats_in=XML, XDOMEA, XPSR&lt;br /&gt;
|formats_out=XML, XDOMEA, PREMIS&lt;br /&gt;
|function=Access, Active Data Storage, File Format Identification, File Format Migration, File Management, Metadata Extraction, Preservation System, Secure Deletion, Service, Storage, Transfer, Validation, Workflow&lt;br /&gt;
|content=Audio, Binary Data, Container, Document, Image, Metadata, Video&lt;br /&gt;
}}&lt;br /&gt;
{{Infobox tool details}}&lt;br /&gt;
== Description ==&lt;br /&gt;
DiPS is an OAIS compliant long-term archive solution based on the ECM system from SER Group [https://www.sergroup.com/en/].&lt;br /&gt;
It uses PREMIS to store technical object information (format information, hash values, file sizes, etc.), events (pre-ingest, ingest, conversions, etc.), agents (natural persons and systems) and relationships between objects.&lt;/div&gt;</summary>
		<author><name>172.104.134.96</name></author>
	</entry>
	<entry>
		<id>https://coptr.digipres.org/index.php?title=DiPS_(Digital_Preservation_Solution)&amp;diff=6056</id>
		<title>DiPS (Digital Preservation Solution)</title>
		<link rel="alternate" type="text/html" href="https://coptr.digipres.org/index.php?title=DiPS_(Digital_Preservation_Solution)&amp;diff=6056"/>
		<updated>2023-04-24T09:33:08Z</updated>

		<summary type="html">&lt;p&gt;172.104.134.96: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Infobox tool&lt;br /&gt;
|purpose=DiPS (OAIS compliant '''Di'''gital '''P'''reservation '''S'''olution)&lt;br /&gt;
|homepage=https://go.sergroup.com/dips-kommunal&lt;br /&gt;
|platforms=Linux and Windows as well as the common database systems&lt;br /&gt;
|language=java 8 or higher&lt;br /&gt;
|formats_in=XML, XDOMEA, XPSR&lt;br /&gt;
|formats_out=XML, XDOMEA, PREMIS&lt;br /&gt;
|function=Access, Active Data Storage, File Format Identification, File Format Migration, File Management, Metadata Extraction, Preservation System, Secure Deletion, Service, Storage, Transfer, Validation, Workflow&lt;br /&gt;
|content=Audio, Binary Data, Container, Document, Image, Metadata, Video&lt;br /&gt;
}}&lt;br /&gt;
{{Infobox tool details}}&lt;br /&gt;
== Description ==&lt;br /&gt;
DiPS is an OAIS compliant long-term archive solution based on the ECM system from SER Group ([https://www.sergroup.com/en/])&lt;br /&gt;
It uses PREMIS to store technical object information (format information, hash values, file sizes, etc.), events (pre-ingest, ingest, conversions, etc.), agents (natural persons and systems) and relationships between objects.&lt;/div&gt;</summary>
		<author><name>172.104.134.96</name></author>
	</entry>
	<entry>
		<id>https://coptr.digipres.org/index.php?title=DiPS_(Digital_Preservation_Solution)&amp;diff=6055</id>
		<title>DiPS (Digital Preservation Solution)</title>
		<link rel="alternate" type="text/html" href="https://coptr.digipres.org/index.php?title=DiPS_(Digital_Preservation_Solution)&amp;diff=6055"/>
		<updated>2023-04-24T08:54:01Z</updated>

		<summary type="html">&lt;p&gt;172.104.134.96: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Infobox tool&lt;br /&gt;
|purpose=DiPS (OAIS compliant '''Di'''gital '''P'''reservation '''S'''olution)&lt;br /&gt;
|homepage=https://go.sergroup.com/dips-kommunal&lt;br /&gt;
|formats_in=XML, XDOMEA, XPSR&lt;br /&gt;
|formats_out=XML, XDOMEA, PREMIS&lt;br /&gt;
|function=Access, Active Data Storage, File Format Identification, File Format Migration, File Management, Metadata Extraction, Preservation System, Secure Deletion, Service, Storage, Transfer, Validation, Workflow&lt;br /&gt;
|content=Audio, Binary Data, Container, Document, Image, Metadata, Video&lt;br /&gt;
}}&lt;br /&gt;
{{Infobox tool details}}&lt;br /&gt;
== Description ==&lt;br /&gt;
DiPS is an OAIS compliant long-term archive solution based on the ECM system from SER Group.&lt;br /&gt;
It uses PREMIS to store technical object information (format information, hash values, file sizes, etc.), events (pre-ingest, ingest, conversions, etc.), agents (natural persons and systems) and relationships between objects.&lt;/div&gt;</summary>
		<author><name>172.104.134.96</name></author>
	</entry>
	<entry>
		<id>https://coptr.digipres.org/index.php?title=DiPS_(Digital_Preservation_Solution)&amp;diff=6054</id>
		<title>DiPS (Digital Preservation Solution)</title>
		<link rel="alternate" type="text/html" href="https://coptr.digipres.org/index.php?title=DiPS_(Digital_Preservation_Solution)&amp;diff=6054"/>
		<updated>2023-04-24T08:53:00Z</updated>

		<summary type="html">&lt;p&gt;172.104.134.96: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Infobox tool&lt;br /&gt;
|purpose=DiPS (OAIS compliant '''Di'''gital '''P'''reservation '''S'''olution)&lt;br /&gt;
|homepage=https://go.sergroup.com/dips-kommunal&lt;br /&gt;
|formats_in=XML, XDOMEA&lt;br /&gt;
|formats_out=XML&lt;br /&gt;
|function=Access, Active Data Storage, File Format Identification, File Format Migration, File Management, Metadata Extraction, Preservation System, Secure Deletion, Service, Storage, Transfer, Validation, Workflow&lt;br /&gt;
|content=Audio, Binary Data, Container, Document, Image, Metadata, Video&lt;br /&gt;
}}&lt;br /&gt;
{{Infobox tool details}}&lt;br /&gt;
== Description ==&lt;br /&gt;
DiPS is an OAIS compliant long-term archive solution based on the ECM system from SER Group.&lt;br /&gt;
It uses PREMIS to store technical object information (format information, hash values, file sizes, etc.), events (pre-ingest, ingest, conversions, etc.), agents (natural persons and systems) and relationships between objects.&lt;/div&gt;</summary>
		<author><name>172.104.134.96</name></author>
	</entry>
	<entry>
		<id>https://coptr.digipres.org/index.php?title=DiPS_(Digital_Preservation_Solution)&amp;diff=6053</id>
		<title>DiPS (Digital Preservation Solution)</title>
		<link rel="alternate" type="text/html" href="https://coptr.digipres.org/index.php?title=DiPS_(Digital_Preservation_Solution)&amp;diff=6053"/>
		<updated>2023-04-24T08:52:06Z</updated>

		<summary type="html">&lt;p&gt;172.104.134.96: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Infobox tool&lt;br /&gt;
|purpose=DiPS (OAIS compliant digital reservation solution)&lt;br /&gt;
|homepage=https://go.sergroup.com/dips-kommunal&lt;br /&gt;
|formats_in=XML, XDOMEA&lt;br /&gt;
|formats_out=XML&lt;br /&gt;
|function=Access, Active Data Storage, File Format Identification, File Format Migration, File Management, Metadata Extraction, Preservation System, Secure Deletion, Service, Storage, Transfer, Validation, Workflow&lt;br /&gt;
|content=Audio, Binary Data, Container, Document, Image, Metadata, Video&lt;br /&gt;
}}&lt;br /&gt;
{{Infobox tool details}}&lt;br /&gt;
== Description ==&lt;br /&gt;
DiPS is an OAIS compliant long-term archive solution based on the ECM system from SER Group.&lt;br /&gt;
It uses PREMIS to store technical object information (format information, hash values, file sizes, etc.), events (pre-ingest, ingest, conversions, etc.), agents (natural persons and systems) and relationships between objects.&lt;/div&gt;</summary>
		<author><name>172.104.134.96</name></author>
	</entry>
	<entry>
		<id>https://coptr.digipres.org/index.php?title=DiPS_(Digital_Preservation_Solution)&amp;diff=6052</id>
		<title>DiPS (Digital Preservation Solution)</title>
		<link rel="alternate" type="text/html" href="https://coptr.digipres.org/index.php?title=DiPS_(Digital_Preservation_Solution)&amp;diff=6052"/>
		<updated>2023-04-24T08:51:04Z</updated>

		<summary type="html">&lt;p&gt;172.104.134.96: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Infobox tool&lt;br /&gt;
|purpose=DiPS is an OAIS compliant long-term archive solution based on the ECM system from SER Group.&lt;br /&gt;
|homepage=https://go.sergroup.com/dips-kommunal&lt;br /&gt;
|formats_in=XML, XDOMEA&lt;br /&gt;
|formats_out=XML&lt;br /&gt;
|function=Access, Active Data Storage, File Format Identification, File Format Migration, File Management, Metadata Extraction, Preservation System, Secure Deletion, Service, Storage, Transfer, Validation, Workflow&lt;br /&gt;
|content=Audio, Binary Data, Container, Document, Image, Metadata, Video&lt;br /&gt;
}}&lt;br /&gt;
{{Infobox tool details}}&lt;br /&gt;
== Description ==&lt;br /&gt;
DiPS is an OAIS compliant long-term archive solution based on the ECM system from SER Group.&lt;br /&gt;
It uses PREMIS to store technical object information (format information, hash values, file sizes, etc.), events (pre-ingest, ingest, conversions, etc.), agents (natural persons and systems) and relationships between objects.&lt;/div&gt;</summary>
		<author><name>172.104.134.96</name></author>
	</entry>
	<entry>
		<id>https://coptr.digipres.org/index.php?title=DiPS_(Digital_Preservation_Solution)&amp;diff=6051</id>
		<title>DiPS (Digital Preservation Solution)</title>
		<link rel="alternate" type="text/html" href="https://coptr.digipres.org/index.php?title=DiPS_(Digital_Preservation_Solution)&amp;diff=6051"/>
		<updated>2023-04-24T08:49:20Z</updated>

		<summary type="html">&lt;p&gt;172.104.134.96: Created page with &amp;quot;{{Infobox tool |purpose=DiPS is an OAIS compliant long-term archive solution based on the ECM system from SER Group. |homepage=https://go.sergroup.com/dips-kommunal |formats_i...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Infobox tool&lt;br /&gt;
|purpose=DiPS is an OAIS compliant long-term archive solution based on the ECM system from SER Group.&lt;br /&gt;
|homepage=https://go.sergroup.com/dips-kommunal&lt;br /&gt;
|formats_in=XML, XDOMEA&lt;br /&gt;
|formats_out=XML&lt;br /&gt;
|function=Access, Active Data Storage, File Format Identification, File Format Migration, File Management, Metadata Extraction, Preservation System, Secure Deletion, Service, Storage, Transfer, Validation, Workflow&lt;br /&gt;
|content=Audio, Binary Data, Container, Document, Image, Metadata, Video&lt;br /&gt;
}}&lt;br /&gt;
{{Infobox tool details}}&lt;br /&gt;
== Description ==&lt;br /&gt;
&amp;lt;!-- Describe the what the tool does, focusing on it's digital preservation value. Keep it factual. --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== User Experiences ==&lt;br /&gt;
&amp;lt;!-- Add hotlinks to user experiences with the tool (eg. blog posts). These should illustrate the effectiveness (or otherwise) of the tool. Use a bullet list. --&amp;gt;&lt;/div&gt;</summary>
		<author><name>172.104.134.96</name></author>
	</entry>
	<entry>
		<id>https://coptr.digipres.org/index.php?title=Fq&amp;diff=6023</id>
		<title>Fq</title>
		<link rel="alternate" type="text/html" href="https://coptr.digipres.org/index.php?title=Fq&amp;diff=6023"/>
		<updated>2023-01-02T09:36:55Z</updated>

		<summary type="html">&lt;p&gt;172.104.134.96: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Infobox tool&lt;br /&gt;
|purpose=Tool, language and decoders for working with binary data.&lt;br /&gt;
|homepage=https://github.com/wader/fq&lt;br /&gt;
|sourcecode=https://github.com/wader/fq&lt;br /&gt;
|license=MIT and BSD&lt;br /&gt;
|cost=Free and open source&lt;br /&gt;
|platforms=macOS, Linux and Winodows&lt;br /&gt;
|language=golang&lt;br /&gt;
|formats_in=CSV (Comma Separated Values), XML, PNG, TIFF, SVG, JPEG, GIF, JSON, YAML, TOML, MP4, MP3, Matroka&lt;br /&gt;
|formats_out=XML, JSON, YAML, TOML, CSV&lt;br /&gt;
|function=Access, Validation, Binary &amp;amp; Hexidecimal Editing, Discovery, Repair, Quality Assurance, Policy, File Format Identification, File Recovery, Forensic, Metadata Extraction&lt;br /&gt;
|content=Binary Data, Container, Metadata&lt;br /&gt;
}}&lt;br /&gt;
{{Infobox tool details}}&lt;br /&gt;
== Description ==&lt;br /&gt;
&amp;lt;!-- Describe the what the tool does, focusing on it's digital preservation value. Keep it factual. --&amp;gt;&lt;br /&gt;
fq is inspired by the well known jq tool and language and allows you to work with binary formats the same way you would using jq. In addition it can present data like a hex viewer, transform, slice and concatenate binary data. It also supports nested formats and has an interactive REPL with auto-completion.&lt;br /&gt;
&lt;br /&gt;
It was originally designed to query, inspect and debug media codecs and containers like mp4, flac, mp3, jpeg. But has since then been extended to support a variety of formats like executables, packet captures (with TCP reassembly) and serialization formats like JSON, YAML, XML, ASN1 BER, Avro, CBOR, protobuf. In addition it also has functions to work with URL:s, convert to/from hex, number bases, search for things etc.&lt;br /&gt;
&lt;br /&gt;
In summary it aims to be jq, hexdump, dd and gdb for files combined into one.&lt;br /&gt;
&lt;br /&gt;
== User Experiences ==&lt;br /&gt;
&amp;lt;!-- Add hotlinks to user experiences with the tool (eg. blog posts). These should illustrate the effectiveness (or otherwise) of the tool. Use a bullet list. --&amp;gt;&lt;br /&gt;
* [https://www.youtube.com/watch?v=GJOq_b0eb-s Binary Tools Summit 2022]&lt;br /&gt;
* [https://www.youtube.com/watch?v=-Pwt5KL-xRs&amp;amp;t=1450s NTTW (No Time To Wait) 2022]&lt;br /&gt;
&lt;br /&gt;
= Development Activity =&lt;br /&gt;
&amp;lt;!-- Provide *evidence* of development activity of the tool. For example, RSS feeds for code issues or commits. --&amp;gt;&lt;br /&gt;
All development activity is visible on GitHub: https://github.com/wader/fq/commits&lt;br /&gt;
 &lt;br /&gt;
=== Release Feed ===&lt;br /&gt;
Below are the the latest FQ releases:&lt;br /&gt;
&amp;lt;rss max=5&amp;gt;https://github.com/wader/fq/releases.atom&amp;lt;/rss&amp;gt;&lt;/div&gt;</summary>
		<author><name>172.104.134.96</name></author>
	</entry>
	<entry>
		<id>https://coptr.digipres.org/index.php?title=Bad_Peggy&amp;diff=6015</id>
		<title>Bad Peggy</title>
		<link rel="alternate" type="text/html" href="https://coptr.digipres.org/index.php?title=Bad_Peggy&amp;diff=6015"/>
		<updated>2022-11-30T09:40:15Z</updated>

		<summary type="html">&lt;p&gt;172.104.134.96: corrected typo&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Infobox tool&lt;br /&gt;
|image=BadPeggy.png&lt;br /&gt;
|purpose=Scans for damaged images and photos.&lt;br /&gt;
|homepage=https://www.coderslagoon.com/#/product/badpeggy&lt;br /&gt;
|license=GPLv3&lt;br /&gt;
|platforms=Windows, Linux, OSX&lt;br /&gt;
|formats_in=JPEG, PNG, BMP, GIF&lt;br /&gt;
|function=Quality Assurance, Validation&lt;br /&gt;
|content=Image&lt;br /&gt;
}}&lt;br /&gt;
{{Infobox tool details}}&lt;br /&gt;
== Description ==&lt;br /&gt;
&amp;lt;!-- Describe the what the tool does, focusing on it's digital preservation value. Keep it factual. --&amp;gt;&lt;br /&gt;
Bad Peggy scans images ([[JPEG]], [[PNG]], [[BMP]], [[GIF]]) for damages and other blemishes, and shows the results and files instantly. It enables you to find such broken files quickly, inspect and then either delete or move them to a different location. &lt;br /&gt;
 &lt;br /&gt;
Requires Java 6 or higher. &lt;br /&gt;
 &lt;br /&gt;
Licensed under the GPLv3.&lt;br /&gt;
&lt;br /&gt;
Quoted from the documentation:&lt;br /&gt;
&amp;quot;Bad Beggy uses the Java Image IO (JIIO) library to examine image files. Its decoder emits warnings and errors while an image gets loaded. Thus the results do depend on it being up-to-date and also its changes in functionality. Bad Peggy checks though on startup if in general, well-known errors in images do get detected, i.e. if JIIO is still functioning in detecting damaged images as expected. What &amp;quot;damaged&amp;quot; truly means depends and can be&lt;br /&gt;
*small difference from the official format, e.g. extra data appended after the actual image.&lt;br /&gt;
*non-critical issues like unknown values, which do not affect displaying the image at all.&lt;br /&gt;
*minor damage which only disturb smaller parts of the image.&lt;br /&gt;
*major damage, which causes the display to be corrupted after a particular position.&lt;br /&gt;
*completely truncated or i.e. incomplete images.&lt;br /&gt;
*errors at the beginning of the files, so that decoding can't even commence.&lt;br /&gt;
*files with are not images at all, but accidentally carry the file extension.&lt;br /&gt;
*image files which don't get recognized b the JIIO, but can be processed by other image viewers, e.g. if additional information is stored before the *image data starts (which smarter or more aggressive decoders then skip).&lt;br /&gt;
*an image which looks damaged because it got loaded as such, and saved again in another application - and thus is structurally fine.&lt;br /&gt;
*an image which is logically damaged but does not cause complains by the JIIO, although the flaws are clearly visible - this is one of the most problematic cases, since such files won't be detected by Bad Peggy - detection for such problems is difficult, you can compare this with a text editor loading and displaying a file with the word &amp;quot;text pr{cessor&amp;quot; in the, where the 'a' to '{' change was caused by a faulty transmission but the text *still makes sense to the editor itself.&lt;br /&gt;
In general it is not recommended to just discard every image reported as damaged but to check out if repairing or re-saving the file in other applications into a generally valid image format is possible.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
== User Experiences ==&lt;br /&gt;
&amp;lt;!-- Add hotlinks to user experiences with the tool (eg. blog posts). These should illustrate the effectiveness (or otherwise) of the tool. Use a bullet list. --&amp;gt;&lt;br /&gt;
* '''KOST-CECO:''' Used in [[KOST-Val]] for the JPEG validation module. [[KOST-Val]] evaluates the error message &amp;quot;Not a JPEG file&amp;quot; further.&lt;br /&gt;
* '''Error detection of JPEG files with JHOVE and Bad Peggy:''' http://openpreservation.org/blog/2016/11/29/jpegvalidation/ (December 2016)&lt;br /&gt;
&lt;br /&gt;
* To validate other file formats than JPEG, make sure the file extension (e. g. gif, png, bmp) appears on the List &amp;quot;options&amp;quot; -&amp;gt; &amp;quot;file extension&amp;quot;. If it does not appear there, maybe you should re-download directly from '''Coderslagoon''' https://www.coderslagoon.com/#/product/badpeggy&lt;br /&gt;
&lt;br /&gt;
== Development Activity ==&lt;br /&gt;
&amp;lt;!-- Provide *evidence* of development activity of the tool. For example, RSS feeds for code issues or commits. --&amp;gt;&lt;br /&gt;
&amp;lt;!-- Add the OpenHub.com ID for the tool, if known. --&amp;gt;&lt;br /&gt;
New in Version 2.0:&lt;br /&gt;
* Support for PNG, BMP and GIF images.&lt;br /&gt;
* Simplified status bar.&lt;br /&gt;
* Visual error differentiation changed to grayscale.&lt;br /&gt;
* Error differentiation now in done in gray tones.&lt;br /&gt;
* Message box button text is now translated.&lt;br /&gt;
* Minor bug fixes and cosmetic changes.&lt;br /&gt;
 &lt;br /&gt;
Bad Peggy Sources: https://www.coderslagoon.com/files/badpeggy20_src.tar.xz&lt;/div&gt;</summary>
		<author><name>172.104.134.96</name></author>
	</entry>
	<entry>
		<id>https://coptr.digipres.org/index.php?title=Bagger&amp;diff=5994</id>
		<title>Bagger</title>
		<link rel="alternate" type="text/html" href="https://coptr.digipres.org/index.php?title=Bagger&amp;diff=5994"/>
		<updated>2022-11-25T14:28:19Z</updated>

		<summary type="html">&lt;p&gt;172.104.134.96: /* Activity Feed */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Infobox tool&lt;br /&gt;
|purpose=GUI application to  facilitate the creation and verification of [[BagIt]] bags.&lt;br /&gt;
|homepage=https://github.com/LibraryOfCongress/bagger&lt;br /&gt;
|license=Open License, but relies on several Apache based and GNU licensed components.&lt;br /&gt;
|function=Fixity, Transfer&lt;br /&gt;
}}&lt;br /&gt;
{{Infobox tool details&lt;br /&gt;
|ohloh_id=Bagger&lt;br /&gt;
}}&lt;br /&gt;
= Description =&lt;br /&gt;
The [[BagIt]] specification is a hierarchical file packaging format for the creation of standardised digital containers called &amp;amp;#39;bags,&amp;amp;#39; which are used for storing and transferring digital content. Derived from work by the Library of Congress and the California Digital Library, a bag consists of a &amp;amp;lsquo;payload&amp;amp;rsquo; - the digital content - and &amp;amp;lsquo;tags&amp;amp;#39; - metadata files to document the storage and transfer of the bag. There are a number of [http://sourceforge.net/projects/loc-xferutils/ Bagit-specific tools] to ease bag creation, including the BagIt Library, a Java-based software library to support the creation, manipulation, and validation of bags. For those less comfortable with command-line interface, the Bagger application provides a graphical user interface to the BagIt Library.&lt;br /&gt;
====Provider====&lt;br /&gt;
The United States Library of Congress, and the National Digital Information Infrastructure and Preservation Program (NDIIPP)&lt;br /&gt;
====Licensing and cost====&lt;br /&gt;
[http://www.linfo.org/bsdlicense.html BSD License] - free. &amp;amp;nbsp;The BagIt Library is public domain.&lt;br /&gt;
====Development activity====&lt;br /&gt;
Bagger 2.1.2 was released in February 2012. BagIt Library 4.1 was released in January 2012.&lt;br /&gt;
The Library of Congress website implies ongoing development of the Transfer Utilities.&amp;amp;nbsp;&lt;br /&gt;
====Platform and interoperability====&lt;br /&gt;
Both the BagIt Library and Bagger require Java 6.&lt;br /&gt;
====Functional notes====&lt;br /&gt;
Bags contain at minimum three elements: a &amp;amp;lsquo;payload&amp;amp;rsquo; and at least two &amp;amp;lsquo;tags.&amp;amp;rsquo; The payload consists of the content being preserved. The first tag is&amp;amp;nbsp;a manifest itemising the files making up the content along with their checksums; the second is a bagit.txt file identifying the container as a bag and giving the version of the specification used and the character encoding of the tags. &amp;amp;nbsp;The specification additionally allows for several optional tags. &amp;amp;nbsp;&lt;br /&gt;
====Documentation and user support====&lt;br /&gt;
Documentation is extremely sparse, primarily consisting of README files detailing release notes. The [http://www.digitalpreservation.gov/documents/bagitspec.pdf BagIt specification itself ]can be found through the Library of Congress website.&lt;br /&gt;
It appears that the main user support consists of a mailing list hosted by Sourceforge; however, the list archive only shows 11 posts for 2011.&lt;br /&gt;
====Usability====&lt;br /&gt;
The BagIt Library uses a command-line interface, while Bagger provides a graphical user interface. No installation is required; the tools can simply be downloaded and run, although it may not be immediately clear to users how to do so.&lt;br /&gt;
====Expertise required====&lt;br /&gt;
BagIt is designed to create a common language for users exchanging digital materials, essentially negating the need for expertise about others&amp;amp;rsquo; protocols. However, for configuration, familiarity with one&amp;amp;rsquo;s own repository&amp;amp;rsquo;s technical protocols is essential.&lt;br /&gt;
====Standards compliance====&lt;br /&gt;
The BagIt specification is an Internet Engineering Task Force (IETF) internet draft.&lt;br /&gt;
====Influence and take-up====&lt;br /&gt;
The BagIt specification has become widely accepted in the preservation community, and is used by the Library of Congress, Chronopolis, and The Stanford Digital Repository, among others. The Transfer Utilities have been downloaded nearly 4000 times from Sourceforge.&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= User Experiences =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Development Activity =&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
=== Activity Feed===&lt;br /&gt;
Link to any RSS feed that is updated when issue or code updates occur, if any, e.g:&lt;br /&gt;
&amp;lt;rss max=7&amp;gt;https://github.com/LibraryOfCongress/bagger/commits/master.atom&amp;lt;/rss&amp;gt;&lt;br /&gt;
&lt;br /&gt;
=== Release Feed ===&lt;/div&gt;</summary>
		<author><name>172.104.134.96</name></author>
	</entry>
	<entry>
		<id>https://coptr.digipres.org/index.php?title=Siegfried&amp;diff=5993</id>
		<title>Siegfried</title>
		<link rel="alternate" type="text/html" href="https://coptr.digipres.org/index.php?title=Siegfried&amp;diff=5993"/>
		<updated>2022-11-25T14:24:46Z</updated>

		<summary type="html">&lt;p&gt;172.104.134.96: /* Activity Feed */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Infobox tool&lt;br /&gt;
|purpose=A PRONOM based, command line, file format identification tool using Aho Corasick matching and no buffer limits.&lt;br /&gt;
|homepage=http://www.itforarchivists.com/siegfried&lt;br /&gt;
|license=Apache License 2.0&lt;br /&gt;
|function=File Format Identification&lt;br /&gt;
}}&lt;br /&gt;
{{Infobox tool details}}&lt;br /&gt;
== Description ==&lt;br /&gt;
&amp;lt;!-- Describe the what the tool does, focusing on it's digital preservation value. Keep it factual. --&amp;gt;&lt;br /&gt;
Siegried is a file format identification tool that, like DROID and Fido, is based on PRONOM. However, it uses a different pattern matching algorithm that offers different strengths and weaknesses to those other PRONOM based tools. A detailed description of the tool and why it was created can be found in [http://www.openplanetsfoundation.org/blogs/2014-09-27-siegfried-pronom-based-file-format-identification-tool this blog post].&lt;br /&gt;
Besides, there is a more detailed description in terms of functionality on the [https://github.com/richardlehane/siegfried Github Page of Siegfried].&lt;br /&gt;
&lt;br /&gt;
Siegfried was first publicly released the 28th february 2014 in version 0.1.&lt;br /&gt;
&lt;br /&gt;
== User Experiences ==&lt;br /&gt;
&amp;lt;!-- Add hotlinks to user experiences with the tool (eg. blog posts). These should illustrate the effectiveness (or otherwise) of the tool. Use a bullet list. --&amp;gt;&lt;br /&gt;
* '''ZBW:'''&lt;br /&gt;
** The command line tool is very easy to handle. &amp;lt;br /&amp;gt;The default output is yaml. This can be changed to csv or json.&lt;br /&gt;
***Usual command: sf file.ext (will output in yaml)&lt;br /&gt;
***Change output to csv: sf -csv file.ext&lt;br /&gt;
***Change output to json: sf -json file.ext&lt;br /&gt;
**It is also possible to save the output in an external file:&lt;br /&gt;
***sf file.ext &amp;gt;output.yml&lt;br /&gt;
***sf -csv file.ext &amp;gt;output.csv&lt;br /&gt;
***sf -json file.ext &amp;gt;output.json&lt;br /&gt;
&lt;br /&gt;
== Development Activity ==&lt;br /&gt;
&amp;lt;!-- Provide *evidence* of development activity of the tool. For example, RSS feeds for code issues or commits. --&amp;gt;&lt;br /&gt;
All development activity is visible on GitHub: http://github.com/richardlehane/siegfried/commits&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
=== Release Feed ===&lt;br /&gt;
Below the last 3 release feeds:&lt;br /&gt;
&amp;lt;rss max=3&amp;gt;https://github.com/richardlehane/siegfried/releases.atom&amp;lt;/rss&amp;gt;&lt;br /&gt;
  &lt;br /&gt;
 &lt;br /&gt;
=== Activity Feed ===&lt;br /&gt;
Below the last 5 commits:&lt;br /&gt;
&amp;lt;rss max=5&amp;gt;https://github.com/richardlehane/siegfried/commits/main.atom&amp;lt;/rss&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Add the Ohloh.com ID for the tool, if known. --&amp;gt;&lt;/div&gt;</summary>
		<author><name>172.104.134.96</name></author>
	</entry>
	<entry>
		<id>https://coptr.digipres.org/index.php?title=Rosetta&amp;diff=5984</id>
		<title>Rosetta</title>
		<link rel="alternate" type="text/html" href="https://coptr.digipres.org/index.php?title=Rosetta&amp;diff=5984"/>
		<updated>2022-11-13T17:42:19Z</updated>

		<summary type="html">&lt;p&gt;172.104.134.96: /* Description */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Infobox tool&lt;br /&gt;
|purpose=Ex Libris Rosetta enables institutions to preserve and provide access to the collections in their care.&lt;br /&gt;
|homepage=http://www.exlibrisgroup.com/category/RosettaOverview&lt;br /&gt;
|license=Commercially licensed product.&lt;br /&gt;
|formats_in=METS (Metadata Encoding and Transmission Standard), PREMIS (Preservation Metadata Implementation Strategies)&lt;br /&gt;
|formats_out=METS (Metadata Encoding and Transmission Standard), PREMIS (Preservation Metadata Implementation Strategies)&lt;br /&gt;
|function=Access, File Format Migration, Metadata Processing, Preservation System&lt;br /&gt;
}}&lt;br /&gt;
{{Infobox tool details}}&lt;br /&gt;
== Description ==&lt;br /&gt;
Designed in collaboration with the National Library of New Zealand and reviewed by an international peer group of recognized leaders and innovators, Ex Libris Rosetta enables institutions to preserve and provide access to the collections in their care, now and in the future.&lt;br /&gt;
&lt;br /&gt;
Rosetta is a complete preservation solution. Its focus is to archive and preserve the digitized and born digital materials stored at academic and memory institutions like libraries and archives, research organizations, and government institutions.&lt;br /&gt;
It aims to ensure data integrity and access over time for the archived digital data.&lt;br /&gt;
&lt;br /&gt;
Rosetta supports the acquisition, validation, ingest, storage, preservation, and dissemination of digital objects that are in various formats and originate from many sources.&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
Rosetta uses standards like Premis and METS. The METS profile of Ex Libris is published and open. The data model of Rosetta is as follows:&lt;br /&gt;
&lt;br /&gt;
* intellectual entity (coherent set of content, the whole unit like e. g. a digitized book)&lt;br /&gt;
* representation (the set of files, including all the metadata)&lt;br /&gt;
* file&lt;br /&gt;
* bit-stream&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
Tools integrated in Rosetta are as follows: &lt;br /&gt;
* BIRT (open source, eclipse-based Business Intelligence and Reporting Tools reporting system)&lt;br /&gt;
* thanks to a SDK, Rosetta users can easily build their own tools, submission applications and plug-ins&lt;br /&gt;
* Pronom connection&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Rosetta is an OAIS-compliant solution.&lt;br /&gt;
&lt;br /&gt;
== User Experiences ==&lt;br /&gt;
&amp;lt;!-- Add hotlinks to user experiences with the tool (eg. blog posts). These should illustrate the effectiveness (or otherwise) of the tool. Use a bullet list. --&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
== Development Activity ==&lt;br /&gt;
&amp;lt;!-- Provide *evidence* of development activity of the tool. For example, RSS feeds for code issues or commits. --&amp;gt;&lt;br /&gt;
&amp;lt;!-- Add the OpenHub.com ID for the tool, if known. --&amp;gt;&lt;/div&gt;</summary>
		<author><name>172.104.134.96</name></author>
	</entry>
	<entry>
		<id>https://coptr.digipres.org/index.php?title=Rosetta&amp;diff=5983</id>
		<title>Rosetta</title>
		<link rel="alternate" type="text/html" href="https://coptr.digipres.org/index.php?title=Rosetta&amp;diff=5983"/>
		<updated>2022-11-13T17:40:35Z</updated>

		<summary type="html">&lt;p&gt;172.104.134.96: /* Description */&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Infobox tool&lt;br /&gt;
|purpose=Ex Libris Rosetta enables institutions to preserve and provide access to the collections in their care.&lt;br /&gt;
|homepage=http://www.exlibrisgroup.com/category/RosettaOverview&lt;br /&gt;
|license=Commercially licensed product.&lt;br /&gt;
|formats_in=METS (Metadata Encoding and Transmission Standard), PREMIS (Preservation Metadata Implementation Strategies)&lt;br /&gt;
|formats_out=METS (Metadata Encoding and Transmission Standard), PREMIS (Preservation Metadata Implementation Strategies)&lt;br /&gt;
|function=Access, File Format Migration, Metadata Processing, Preservation System&lt;br /&gt;
}}&lt;br /&gt;
{{Infobox tool details}}&lt;br /&gt;
== Description ==&lt;br /&gt;
Designed in collaboration with the National Library of New Zealand and reviewed by an international peer group of recognized leaders and innovators, Ex Libris Rosetta enables institutions to preserve and provide access to the collections in their care, now and in the future.&lt;br /&gt;
&lt;br /&gt;
Rosetta is a complete preservation solution. Its focus is to archive and preserve the digitized and born digital materials stored at academic and memory institutions like libraries and archives, research organizations and government institutions.&lt;br /&gt;
It aims to ensure data integrity and access over time for the archived digital data.&lt;br /&gt;
&lt;br /&gt;
Rosetta supports the acquisition, validation, ingest, storage, preservation and dissemination of digitale objects that are in various formats and originate from many sources.&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
Rosetta uses standards like Premis and Mets. The METS profile of Ex Libris is  published and open. The data model of Rosetta is as following:&lt;br /&gt;
&lt;br /&gt;
* intellectual entity (coherent set of content, the whole unit like e. g. a digitized book)&lt;br /&gt;
* representation (the set of files, including all the metadata)&lt;br /&gt;
* file&lt;br /&gt;
* bit-stream&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
Tools integrated in Rosetta&lt;br /&gt;
* BIRT (open source, eclipse-based Business Intelligence and Reporting Tools reporting system)&lt;br /&gt;
* thanks to a SDK, Rosetta users can easily build their own tools, submission applications and plug-ins&lt;br /&gt;
* Pronom connection&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
Rosetta is an OAIS-compliant solution.&lt;br /&gt;
&lt;br /&gt;
== User Experiences ==&lt;br /&gt;
&amp;lt;!-- Add hotlinks to user experiences with the tool (eg. blog posts). These should illustrate the effectiveness (or otherwise) of the tool. Use a bullet list. --&amp;gt;&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
== Development Activity ==&lt;br /&gt;
&amp;lt;!-- Provide *evidence* of development activity of the tool. For example, RSS feeds for code issues or commits. --&amp;gt;&lt;br /&gt;
&amp;lt;!-- Add the OpenHub.com ID for the tool, if known. --&amp;gt;&lt;/div&gt;</summary>
		<author><name>172.104.134.96</name></author>
	</entry>
	<entry>
		<id>https://coptr.digipres.org/index.php?title=AudiAnnotate&amp;diff=5950</id>
		<title>AudiAnnotate</title>
		<link rel="alternate" type="text/html" href="https://coptr.digipres.org/index.php?title=AudiAnnotate&amp;diff=5950"/>
		<updated>2022-11-02T20:56:09Z</updated>

		<summary type="html">&lt;p&gt;172.104.134.96: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Infobox tool&lt;br /&gt;
|purpose=To make audio and its interpretations more discoverable and usable by extending the use of the newest IIIF (International Image Interoperability Framework) standard for audio with the development of the AudiAnnotate web application, documented workflows and workshops that will facilitate the use of existing best-of-breed, open source tools for audio annotation&lt;br /&gt;
(Sonic Visualiser), for public code and document repositories (GitHub), and audio presentation (Universal Viewer) to produce, publish, and sustain shareable W3C Web Annotations for individual and collaborative audio projects.&lt;br /&gt;
|homepage=http://audiannotate.brumfieldlabs.com&lt;br /&gt;
|cost=None&lt;br /&gt;
|function=Academic Social Networking, Access, Annotation, Personal Archiving, Preservation System, Service, Version Control, Workflow, Metadata Processing, Rendering, Discovery, Persistent Identification, Managing Active Research Data&lt;br /&gt;
|content=Audio, Video&lt;br /&gt;
}}&lt;br /&gt;
{{Infobox tool details}}&lt;br /&gt;
== Description ==&lt;br /&gt;
The AudiAnnotate project originates from the premise that facilitating the annotation of audio collections will accelerate access to, promote scholarship with, and extend our understanding of important audio collections, some of which may be currently inaccessible and others which could potentially be lost forever. Audio collections are not discoverable without annotations. If we cannot discover an audio file, we will not use it in scholarship. If we do not use audio collections, libraries and archives that hold massive collections of audio recordings from a diverse range of bygone timeframes, cultures, and contexts will not preserve them.&lt;br /&gt;
&lt;br /&gt;
Broadly speaking, the application and workflows that we will develop in the AudiAnnotate project will help users to translate their own analyses of audio recordings into media annotations that will be publishable as easy-to-maintain, static, W3C Web Annotations associated with IIIF manifests and hosted in a GitHub repository that are viewable through presentation software such as Universal Viewer.&lt;br /&gt;
&lt;br /&gt;
In response to the need for a workflow that supports IIIF manifest creation, collaborative editing, flexible modes of presentation, and permissions control, the AudiAnnotate project is developing AWE, a documented workflow using the recently adopted IIIF standard for AV materials that will help libraries, archives, and museums (LAMs), scholars, and the public access and use AV cultural heritage items. We will achieve this goal by connecting existing best-of-breed, open source tools for AV management (Aviary), annotation (such as Audacity and OHMS), public code and document repositories (GitHub), and the AudiAnnotate web application for creating and sharing IIIF manifests and annotations. Usually limited by proprietary software and LAM systems with restricted access to AV, users will use AWE as a complete sequence of tools and transformations for accessing, identifying, annotating, and sharing AWE “projects” such as singular pages or multi-page exhibits or editions with AV materials. LAMs will benefit from AWE as it facilitates metadata generation, is built on W3C web standards in IIIF for sharing online scholarship, and generates static web pages that are lightweight and easy to preserve and harvest. AWE represents a new kind of AV ecosystem where the exchange is opened between institutional repositories, annotation software, online repositories and publication platforms, and all kinds of users.&lt;br /&gt;
&lt;br /&gt;
The AudiAnnotate Project has been awarded a 2019 Digital Extension Grant from the American Council of Learned Societies. AWE has been generously funded by the Andrew W. Mellon Foundation. &lt;br /&gt;
&lt;br /&gt;
== User Experiences ==&lt;br /&gt;
&amp;lt;!-- Add hotlinks to user experiences with the tool (eg. blog posts). These should illustrate the effectiveness (or otherwise) of the tool. Use a bullet list. --&amp;gt;&lt;/div&gt;</summary>
		<author><name>172.104.134.96</name></author>
	</entry>
	<entry>
		<id>https://coptr.digipres.org/index.php?title=AudiAnnotate&amp;diff=5949</id>
		<title>AudiAnnotate</title>
		<link rel="alternate" type="text/html" href="https://coptr.digipres.org/index.php?title=AudiAnnotate&amp;diff=5949"/>
		<updated>2022-11-02T20:50:32Z</updated>

		<summary type="html">&lt;p&gt;172.104.134.96: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Infobox tool&lt;br /&gt;
|purpose=To make audio and its interpretations more discoverable and usable by extending the use of the newest IIIF (International Image Interoperability Framework) standard for audio with the development of the AudiAnnotate web application, documented workflows and workshops that will facilitate the use of existing best-of-breed, open source tools for audio annotation&lt;br /&gt;
(Sonic Visualiser), for public code and document repositories (GitHub), and audio presentation (Universal Viewer) to produce, publish, and sustain shareable W3C Web Annotations for individual and collaborative audio projects.&lt;br /&gt;
|homepage=http://audiannotate.brumfieldlabs.com&lt;br /&gt;
|cost=None&lt;br /&gt;
|function=Access, Annotation, Preservation System, Workflow, Academic Social Networking, Personal Archiving, Service, Version Control&lt;br /&gt;
|content=Audio, Video&lt;br /&gt;
}}&lt;br /&gt;
{{Infobox tool details}}&lt;br /&gt;
== Description ==&lt;br /&gt;
The AudiAnnotate project originates from the premise that facilitating the annotation of audio collections will accelerate access to, promote scholarship with, and extend our understanding of important audio collections, some of which may be currently inaccessible and others which could potentially be lost forever. Audio collections are not discoverable without annotations. If we cannot discover an audio file, we will not use it in scholarship. If we do not use audio collections, libraries and archives that hold massive collections of audio recordings from a diverse range of bygone timeframes, cultures, and contexts will not preserve them.&lt;br /&gt;
&lt;br /&gt;
Broadly speaking, the application and workflows that we will develop in the AudiAnnotate project will help users to translate their own analyses of audio recordings into media annotations that will be publishable as easy-to-maintain, static, W3C Web Annotations associated with IIIF manifests and hosted in a GitHub repository that are viewable through presentation software such as Universal Viewer.&lt;br /&gt;
&lt;br /&gt;
In response to the need for a workflow that supports IIIF manifest creation, collaborative editing, flexible modes of presentation, and permissions control, the AudiAnnotate project is developing AWE, a documented workflow using the recently adopted IIIF standard for AV materials that will help libraries, archives, and museums (LAMs), scholars, and the public access and use AV cultural heritage items. We will achieve this goal by connecting existing best-of-breed, open source tools for AV management (Aviary), annotation (such as Audacity and OHMS), public code and document repositories (GitHub), and the AudiAnnotate web application for creating and sharing IIIF manifests and annotations. Usually limited by proprietary software and LAM systems with restricted access to AV, users will use AWE as a complete sequence of tools and transformations for accessing, identifying, annotating, and sharing AWE “projects” such as singular pages or multi-page exhibits or editions with AV materials. LAMs will benefit from AWE as it facilitates metadata generation, is built on W3C web standards in IIIF for sharing online scholarship, and generates static web pages that are lightweight and easy to preserve and harvest. AWE represents a new kind of AV ecosystem where the exchange is opened between institutional repositories, annotation software, online repositories and publication platforms, and all kinds of users.&lt;br /&gt;
&lt;br /&gt;
The AudiAnnotate Project has been awarded a 2019 Digital Extension Grant from the American Council of Learned Societies. AWE has been generously funded by the Andrew W. Mellon Foundation. &lt;br /&gt;
&lt;br /&gt;
== User Experiences ==&lt;br /&gt;
&amp;lt;!-- Add hotlinks to user experiences with the tool (eg. blog posts). These should illustrate the effectiveness (or otherwise) of the tool. Use a bullet list. --&amp;gt;&lt;/div&gt;</summary>
		<author><name>172.104.134.96</name></author>
	</entry>
	<entry>
		<id>https://coptr.digipres.org/index.php?title=AudiAnnotate&amp;diff=5948</id>
		<title>AudiAnnotate</title>
		<link rel="alternate" type="text/html" href="https://coptr.digipres.org/index.php?title=AudiAnnotate&amp;diff=5948"/>
		<updated>2022-11-02T19:43:38Z</updated>

		<summary type="html">&lt;p&gt;172.104.134.96: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Infobox tool&lt;br /&gt;
|purpose=To make audio and its interpretations more discoverable and usable by extending the use of the newest IIIF (International Image Interoperability Framework) standard for audio with the development of the AudiAnnotate web application, documented workflows and workshops that will facilitate the use of existing best-of-breed, open source tools for audio annotation&lt;br /&gt;
(Sonic Visualiser), for public code and document repositories (GitHub), and audio presentation (Universal Viewer) to produce, publish, and sustain shareable W3C Web Annotations for individual and collaborative audio projects.&lt;br /&gt;
|homepage=http://audiannotate.brumfieldlabs.com&lt;br /&gt;
|cost=None&lt;br /&gt;
|function=Access, Annotation, Preservation System, Workflow&lt;br /&gt;
|content=Audio, Video&lt;br /&gt;
}}&lt;br /&gt;
{{Infobox tool details}}&lt;br /&gt;
== Description ==&lt;br /&gt;
The AudiAnnotate project originates from the premise that facilitating the annotation of audio collections will accelerate access to, promote scholarship with, and extend our understanding of important audio collections, some of which may be currently inaccessible and others which could potentially be lost forever. Audio collections are not discoverable without annotations. If we cannot discover an audio file, we will not use it in scholarship. If we do not use audio collections, libraries and archives that hold massive collections of audio recordings from a diverse range of bygone timeframes, cultures, and contexts will not preserve them.&lt;br /&gt;
&lt;br /&gt;
Broadly speaking, the application and workflows that we will develop in the AudiAnnotate project will help users to translate their own analyses of audio recordings into media annotations that will be publishable as easy-to-maintain, static, W3C Web Annotations associated with IIIF manifests and hosted in a GitHub repository that are viewable through presentation software such as Universal Viewer.&lt;br /&gt;
&lt;br /&gt;
In response to the need for a workflow that supports IIIF manifest creation, collaborative editing, flexible modes of presentation, and permissions control, the AudiAnnotate project is developing AWE, a documented workflow using the recently adopted IIIF standard for AV materials that will help libraries, archives, and museums (LAMs), scholars, and the public access and use AV cultural heritage items. We will achieve this goal by connecting existing best-of-breed, open source tools for AV management (Aviary), annotation (such as Audacity and OHMS), public code and document repositories (GitHub), and the AudiAnnotate web application for creating and sharing IIIF manifests and annotations. Usually limited by proprietary software and LAM systems with restricted access to AV, users will use AWE as a complete sequence of tools and transformations for accessing, identifying, annotating, and sharing AWE “projects” such as singular pages or multi-page exhibits or editions with AV materials. LAMs will benefit from AWE as it facilitates metadata generation, is built on W3C web standards in IIIF for sharing online scholarship, and generates static web pages that are lightweight and easy to preserve and harvest. AWE represents a new kind of AV ecosystem where the exchange is opened between institutional repositories, annotation software, online repositories and publication platforms, and all kinds of users.&lt;br /&gt;
&lt;br /&gt;
The AudiAnnotate Project has been awarded a 2019 Digital Extension Grant from the American Council of Learned Societies. AWE has been generously funded by the Andrew W. Mellon Foundation. &lt;br /&gt;
&lt;br /&gt;
== User Experiences ==&lt;br /&gt;
&amp;lt;!-- Add hotlinks to user experiences with the tool (eg. blog posts). These should illustrate the effectiveness (or otherwise) of the tool. Use a bullet list. --&amp;gt;&lt;/div&gt;</summary>
		<author><name>172.104.134.96</name></author>
	</entry>
	<entry>
		<id>https://coptr.digipres.org/index.php?title=AudiAnnotate&amp;diff=5947</id>
		<title>AudiAnnotate</title>
		<link rel="alternate" type="text/html" href="https://coptr.digipres.org/index.php?title=AudiAnnotate&amp;diff=5947"/>
		<updated>2022-11-02T19:42:48Z</updated>

		<summary type="html">&lt;p&gt;172.104.134.96: summary&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Infobox tool&lt;br /&gt;
|purpose=To make audio and its interpretations more discoverable and usable by extending the use of the newest IIIF (International Image Interoperability Framework) standard for audio with the development of the AudiAnnotate web application, documented workflows and workshops that will facilitate the use of existing best-of-breed, open source tools for audio annotation&lt;br /&gt;
(Sonic Visualiser), for public code and document repositories (GitHub), and audio presentation (Universal Viewer) to produce, publish, and sustain shareable W3C Web Annotations for individual and collaborative audio projects.&lt;br /&gt;
|homepage=http://audiannotate.brumfieldlabs.com&lt;br /&gt;
|cost=0&lt;br /&gt;
|function=Access, Annotation, Preservation System, Workflow&lt;br /&gt;
|content=Audio, Video&lt;br /&gt;
}}&lt;br /&gt;
{{Infobox tool details}}&lt;br /&gt;
== Description ==&lt;br /&gt;
The AudiAnnotate project originates from the premise that facilitating the annotation of audio collections will accelerate access to, promote scholarship with, and extend our understanding of important audio collections, some of which may be currently inaccessible and others which could potentially be lost forever. Audio collections are not discoverable without annotations. If we cannot discover an audio file, we will not use it in scholarship. If we do not use audio collections, libraries and archives that hold massive collections of audio recordings from a diverse range of bygone timeframes, cultures, and contexts will not preserve them.&lt;br /&gt;
&lt;br /&gt;
Broadly speaking, the application and workflows that we will develop in the AudiAnnotate project will help users to translate their own analyses of audio recordings into media annotations that will be publishable as easy-to-maintain, static, W3C Web Annotations associated with IIIF manifests and hosted in a GitHub repository that are viewable through presentation software such as Universal Viewer.&lt;br /&gt;
&lt;br /&gt;
In response to the need for a workflow that supports IIIF manifest creation, collaborative editing, flexible modes of presentation, and permissions control, the AudiAnnotate project is developing AWE, a documented workflow using the recently adopted IIIF standard for AV materials that will help libraries, archives, and museums (LAMs), scholars, and the public access and use AV cultural heritage items. We will achieve this goal by connecting existing best-of-breed, open source tools for AV management (Aviary), annotation (such as Audacity and OHMS), public code and document repositories (GitHub), and the AudiAnnotate web application for creating and sharing IIIF manifests and annotations. Usually limited by proprietary software and LAM systems with restricted access to AV, users will use AWE as a complete sequence of tools and transformations for accessing, identifying, annotating, and sharing AWE “projects” such as singular pages or multi-page exhibits or editions with AV materials. LAMs will benefit from AWE as it facilitates metadata generation, is built on W3C web standards in IIIF for sharing online scholarship, and generates static web pages that are lightweight and easy to preserve and harvest. AWE represents a new kind of AV ecosystem where the exchange is opened between institutional repositories, annotation software, online repositories and publication platforms, and all kinds of users.&lt;br /&gt;
&lt;br /&gt;
The AudiAnnotate Project has been awarded a 2019 Digital Extension Grant from the American Council of Learned Societies. AWE has been generously funded by the Andrew W. Mellon Foundation. &lt;br /&gt;
&lt;br /&gt;
== User Experiences ==&lt;br /&gt;
&amp;lt;!-- Add hotlinks to user experiences with the tool (eg. blog posts). These should illustrate the effectiveness (or otherwise) of the tool. Use a bullet list. --&amp;gt;&lt;/div&gt;</summary>
		<author><name>172.104.134.96</name></author>
	</entry>
	<entry>
		<id>https://coptr.digipres.org/index.php?title=EPADD&amp;diff=5939</id>
		<title>EPADD</title>
		<link rel="alternate" type="text/html" href="https://coptr.digipres.org/index.php?title=EPADD&amp;diff=5939"/>
		<updated>2022-10-24T04:35:20Z</updated>

		<summary type="html">&lt;p&gt;172.104.134.96: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Infobox tool&lt;br /&gt;
|image=Epadd_logo_orig.png&lt;br /&gt;
|purpose=ePADD is a software package developed by Stanford University's Special Collections &amp;amp; University Archives that supports archival processes around the appraisal, ingest, processing, discovery, and delivery of email archives.&lt;br /&gt;
|homepage=https://library.stanford.edu/projects/epadd&lt;br /&gt;
|license=Apache 2.0&lt;br /&gt;
|platforms=Java&lt;br /&gt;
|Wikidata ID=Q59652265&lt;br /&gt;
|formats_out=PREMIS (Preservation Metadata Implementation Strategies)&lt;br /&gt;
|function=Access, Appraisal, Content Profiling, Metadata Extraction, Metadata Processing&lt;br /&gt;
|content=Email&lt;br /&gt;
}}&lt;br /&gt;
{{Infobox tool details&lt;br /&gt;
|ohloh_id=epadd&lt;br /&gt;
}}&lt;br /&gt;
&amp;lt;!-- Use the structure provided in this template, do not change it! --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Note that to use the image field, you should leave the value as {{PAGENAMEE}}.png (or similar) and upload a copy of the image. Hot-linking is not supported. If you don't want an image, just remove that line. --&amp;gt;&lt;br /&gt;
&lt;br /&gt;
== Description ==&lt;br /&gt;
&amp;lt;!-- Describe the what the tool does, focusing on it's digital preservation value. Keep it factual. --&amp;gt;&lt;br /&gt;
From the [https://github.com/ePADD/epadd project Github page]:&lt;br /&gt;
&amp;quot;ePADD is a software package developed by Stanford University's Special Collections &amp;amp; University Archives that supports archival processes around the appraisal, ingest, processing, discovery, and delivery of email archives.&lt;br /&gt;
&lt;br /&gt;
The software is comprised of four modules:&lt;br /&gt;
&lt;br /&gt;
* '''Appraisal:''' Allows donors, dealers, and curators to easily gather and review email archives prior to transferring those files to an archival repository.&lt;br /&gt;
&lt;br /&gt;
* '''Processing:''' Provides archivists with the means to arrange and describe email archives.&lt;br /&gt;
&lt;br /&gt;
* '''Discovery:''' Provides the tools for repositories to remotely share a redacted view of their email archives with users through a web server discovery environment. (Note that this module is downloaded separately).&lt;br /&gt;
&lt;br /&gt;
* '''Delivery:''' Enables archival repositories to provide moderated full-text access to unrestricted email archives within a reading room environment.&amp;quot;&lt;br /&gt;
&lt;br /&gt;
From the [https://library.stanford.edu/projects/epadd/ Project page]:&lt;br /&gt;
&lt;br /&gt;
'''ePADD Technical Information'''&lt;br /&gt;
&lt;br /&gt;
ePADD is written in Java and Javascript and powered by Apache Tomcat (v7.0) using Java EE Servlet API (v3.x) and Java Mail (v1.4.2). Text and metadata extraction, indexing and retrieval is performed by Apache Lucene (v4.7) and Apache Tika (v1.8). Charting and visualization is supported using the D3-based reusable chart library (v0.4.10). Oracle's Java Application Bundler and Launch4J are used for packaging on Mac and Windows platforms respectively. Other Java libraries from Apache (Lang, commons, CLI, IO, logging, etc.) are also used. JSON formatting is performed with the libraries org.json and Gson.&lt;br /&gt;
 &lt;br /&gt;
ePADD has implemented its own natural language processing (NLP) toolkit which is used for named entity extraction, disambiguation and other tasks. This toolkit supplants the Apache OpenNLP used in earlier beta versions of the ePADD software. We continue to use Muse as an internal library within ePADD. However, the Apache OpenNLP proved insufficient for our needs (at least for name recognition), and after various rounds of customization, we built our own named entity recognizer. This toolkit uses external datasets such as Wikipedia/DBpedia, Freebase, Geonames, OCLC FAST and LC Subject Headings/LC Name Authority File.&lt;br /&gt;
 &lt;br /&gt;
The project is developed with IDEs like IntelliJ Idea and Eclipse, built with Apache Maven, Ant, and custom shell scripts, and tracked using Git for source control and issue tracking. The ePADD software client is browser-based and compatible with Chrome and Firefox. It is optimized for Windows 7 SP1/10,  OSX 10.12/10.13, and Ubuntu 16.04 machines, using Java 8.&lt;br /&gt;
 &lt;br /&gt;
&lt;br /&gt;
== User Experiences ==&lt;br /&gt;
&amp;lt;!-- Add hotlinks to user experiences with the tool (eg. blog posts). These should illustrate the effectiveness (or otherwise) of the tool. Use a bullet list. --&amp;gt;&lt;br /&gt;
* On migrating from different email formats before ingest to ePADD https://groups.google.com/forum/#!topic/digital-curation/srt-oIVwAGU&lt;br /&gt;
* [https://twitter.com/e_padd?lang=en ePADD on Twitter]&lt;br /&gt;
* [http://library.stanford.edu/blogs/special-collections-unbound/2018/07/epadd-60-beta-released ePADD 6.0 beta released!]&lt;br /&gt;
* [https://docs.google.com/document/d/1CVIpWK5FNs5KWVHgvtWTa7u0tZjUrFrBHq6_6ZJVfEA ePADD User Guide]&lt;br /&gt;
* [https://docs.google.com/document/d/10U9Hxh9MS9C9bS8M7uYuXk5m7EBFpgd0yiwCcOSM6D ePADD Shared Discovery Module Website Collection Contributor Guide]&lt;br /&gt;
* [https://library.stanford.edu/projects/epadd/presentations-publications Full list of presentations and publications]&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
== Development Activity ==&lt;br /&gt;
&amp;lt;!-- Provide *evidence* of development activity of the tool. For example, RSS feeds for code issues or commits. --&amp;gt;&lt;br /&gt;
All development activity is visible on GitHub: http://github.com/ePADD/epadd/commits&lt;br /&gt;
 &lt;br /&gt;
 &lt;br /&gt;
=== Release Feed ===&lt;br /&gt;
Below the last 3 release feeds:&lt;br /&gt;
&amp;lt;rss max=3&amp;gt;https://github.com/ePADD/epadd/releases.atom&amp;lt;/rss&amp;gt;&lt;br /&gt;
  &lt;br /&gt;
 &lt;br /&gt;
=== Activity Feed ===&lt;br /&gt;
Below the last 5 commits:&lt;br /&gt;
&amp;lt;rss max=5&amp;gt;https://github.com/ePADD/epadd/commits/master.atom&amp;lt;/rss&amp;gt;&lt;/div&gt;</summary>
		<author><name>172.104.134.96</name></author>
	</entry>
	<entry>
		<id>https://coptr.digipres.org/index.php?title=Minimum_Preservation_Tool&amp;diff=5929</id>
		<title>Minimum Preservation Tool</title>
		<link rel="alternate" type="text/html" href="https://coptr.digipres.org/index.php?title=Minimum_Preservation_Tool&amp;diff=5929"/>
		<updated>2022-09-15T19:13:44Z</updated>

		<summary type="html">&lt;p&gt;172.104.134.96: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Infobox tool&lt;br /&gt;
|image=MPT_horizontal_no_logo.png&lt;br /&gt;
|purpose=The Minimum Preservation Tool (MPT) can be used to create an interim preservation storage environment for files awaiting preservation in a longer term repository solution. It supports checksum generation, fixity checking, and replication across two or more storage nodes.&lt;br /&gt;
|homepage=https://github.com/britishlibrary/mpt&lt;br /&gt;
|license=http://www.apache.org/licenses/LICENSE-2.0&lt;br /&gt;
|function=Fixity, Preservation System&lt;br /&gt;
}}&lt;br /&gt;
{{Infobox tool details}}&lt;br /&gt;
== Description ==&lt;br /&gt;
The MPT was designed to make use of existing network storage and compute resources available at most institutions. It is intended to provide greater protection to digital content than a standard network storage offering. The key differences between an MPT solution and standard network storage are replication of content across two or more storage “nodes” and regular fixity checking (checksum validation) of all files on all nodes to help ensure content remains authentic and unchanged. &lt;br /&gt;
&lt;br /&gt;
Papers &amp;amp; Posters:&lt;br /&gt;
Back to Basics: The Minimum Preservation Tool (iPRES2021) - short paper.  Pennock, Maureen;  Beaman, John;  May, Peter;  Davies, Kevin&lt;br /&gt;
https://zenodo.org/record/5788586#.YyN4Q3bMJhE&lt;br /&gt;
&lt;br /&gt;
Upscaling the MPT (iPRES2022)- poster. May, Peter; Davies, Kevin (forthcoming)&lt;br /&gt;
&lt;br /&gt;
== User Experiences ==&lt;br /&gt;
Introduction to the MPT https://www.dpconline.org/blog/minimum-preservation-tool-mpt&lt;br /&gt;
&lt;br /&gt;
== Development Activity ==&lt;br /&gt;
https://github.com/britishlibrary/mpt - Main site&lt;br /&gt;
&lt;br /&gt;
https://github.com/anjackson/mpt/releases/tag/v1.1.6-UI - Experimental MPT user interface from @anjacks0n&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Provide *evidence* of development activity of the tool. For example, RSS feeds for code issues or commits. --&amp;gt;&lt;br /&gt;
&amp;lt;!-- Add the OpenHub.com ID for the tool, if known. --&amp;gt;&lt;/div&gt;</summary>
		<author><name>172.104.134.96</name></author>
	</entry>
	<entry>
		<id>https://coptr.digipres.org/index.php?title=SIARD&amp;diff=5928</id>
		<title>SIARD</title>
		<link rel="alternate" type="text/html" href="https://coptr.digipres.org/index.php?title=SIARD&amp;diff=5928"/>
		<updated>2022-09-13T13:37:31Z</updated>

		<summary type="html">&lt;p&gt;172.104.134.96: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Infobox format&lt;br /&gt;
|Wikidata ID=Q2206173&lt;br /&gt;
|File formats wiki ID=SIARD&lt;br /&gt;
|PRONOM PUID=fmt/161, fmt/995, fmt/1196&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>172.104.134.96</name></author>
	</entry>
</feed>