AudiAnnotate

From COPTR

Latest revision as of 20:56, 2 November 2022




AudiAnnotate aims to make audio and its interpretations more discoverable and usable by extending the newest IIIF (International Image Interoperability Framework) standard for audio through the AudiAnnotate web application, together with documented workflows and workshops. These facilitate the use of existing best-of-breed, open source tools for audio annotation (Sonic Visualiser), public code and document repositories (GitHub), and audio presentation (Universal Viewer) to produce, publish, and sustain shareable W3C Web Annotations for individual and collaborative audio projects.
Homepage: http://audiannotate.brumfieldlabs.com
Cost: None
Function: Academic Social Networking, Access, Annotation, Personal Archiving, Preservation System, Service, Version Control, Workflow, Metadata Processing, Rendering, Discovery, Persistent Identification, Managing Active Research Data
Content type: Audio, Video



Description

The AudiAnnotate project originates from the premise that facilitating the annotation of audio collections will accelerate access to, promote scholarship with, and extend our understanding of important audio collections, some of which are currently inaccessible and others of which could be lost forever. Audio collections are not discoverable without annotations. If we cannot discover an audio file, we will not use it in scholarship. If we do not use audio collections, the libraries and archives that hold massive collections of audio recordings from a diverse range of eras, cultures, and contexts will not preserve them.

Broadly speaking, the application and workflows developed in the AudiAnnotate project will help users translate their own analyses of audio recordings into media annotations publishable as easy-to-maintain, static W3C Web Annotations associated with IIIF manifests, hosted in a GitHub repository, and viewable through presentation software such as Universal Viewer.
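To make the workflow above concrete, the sketch below builds a minimal W3C Web Annotation for a segment of an audio recording as a plain Python dictionary and serializes it to JSON, the form in which such annotations would sit as static files in a GitHub repository. The URLs and identifiers are hypothetical placeholders, not addresses from the AudiAnnotate project; the structure follows the W3C Web Annotation Data Model, with a Media Fragment selector (`t=30,45`) marking the annotated time span.

```python
import json

# A minimal sketch of a W3C Web Annotation targeting an audio segment.
# All URLs below are hypothetical placeholders for illustration only.
annotation = {
    "@context": "http://www.w3.org/ns/anno.jsonld",
    "id": "https://example.org/annotations/anno-1",  # hypothetical annotation ID
    "type": "Annotation",
    "motivation": "commenting",
    "body": {
        # The annotation text a user would produce in a tool like Sonic Visualiser
        "type": "TextualBody",
        "value": "Speaker introduces the second movement.",
        "format": "text/plain",
    },
    "target": {
        # Hypothetical IIIF canvas representing the recording
        "source": "https://example.org/iiif/recording/canvas/1",
        "selector": {
            # W3C Media Fragment: seconds 30 through 45 of the recording
            "type": "FragmentSelector",
            "conformsTo": "http://www.w3.org/TR/media-frags/",
            "value": "t=30,45",
        },
    },
}

# Serialize to JSON, as it would be committed to a repository as a static file
serialized = json.dumps(annotation, indent=2)
print(serialized)
```

Because the result is ordinary static JSON, it can be versioned, reviewed, and published through normal GitHub workflows, which is what makes the annotations lightweight to maintain and preserve.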

In response to the need for a workflow that supports IIIF manifest creation, collaborative editing, flexible modes of presentation, and permissions control, the AudiAnnotate project is developing AWE, a documented workflow built on the recently adopted IIIF standard for AV materials that will help libraries, archives, and museums (LAMs), scholars, and the public access and use AV cultural heritage items. The project will achieve this goal by connecting existing best-of-breed, open source tools for AV management (Aviary), annotation (such as Audacity and OHMS), and public code and document repositories (GitHub) with the AudiAnnotate web application for creating and sharing IIIF manifests and annotations. Users, who are usually limited by proprietary software and LAM systems that restrict access to AV, will be able to use AWE as a complete sequence of tools and transformations for accessing, identifying, annotating, and sharing AWE "projects" such as single pages, multi-page exhibits, or editions with AV materials. LAMs will benefit from AWE because it facilitates metadata generation, is built on W3C web standards in IIIF for sharing online scholarship, and generates static web pages that are lightweight and easy to preserve and harvest. AWE represents a new kind of AV ecosystem in which exchange is opened between institutional repositories, annotation software, online repositories and publication platforms, and all kinds of users.
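The IIIF manifests that AWE creates and shares follow the IIIF Presentation API, which since version 3.0 supports time-based AV media. The sketch below shows, under assumed placeholder URLs, the skeleton of such a manifest: a single Canvas with a duration, carrying a painting annotation whose body is a Sound resource. This is an illustrative shape, not a manifest emitted by the AudiAnnotate application itself.

```python
import json

# Skeleton of a IIIF Presentation 3.0 manifest for a single audio recording.
# All identifiers are hypothetical placeholders.
manifest = {
    "@context": "http://iiif.io/api/presentation/3/context.json",
    "id": "https://example.org/iiif/recording/manifest.json",
    "type": "Manifest",
    "label": {"en": ["Example audio recording"]},
    "items": [
        {
            # In IIIF 3.0, a Canvas for AV media carries a duration in seconds
            "id": "https://example.org/iiif/recording/canvas/1",
            "type": "Canvas",
            "duration": 180.0,
            "items": [
                {
                    "id": "https://example.org/iiif/recording/canvas/1/page",
                    "type": "AnnotationPage",
                    "items": [
                        {
                            # A "painting" annotation attaches the actual
                            # audio content to the canvas
                            "id": "https://example.org/iiif/recording/canvas/1/page/1",
                            "type": "Annotation",
                            "motivation": "painting",
                            "body": {
                                "id": "https://example.org/audio/recording.mp3",
                                "type": "Sound",
                                "format": "audio/mpeg",
                                "duration": 180.0,
                            },
                            "target": "https://example.org/iiif/recording/canvas/1",
                        }
                    ],
                }
            ],
        }
    ],
}

manifest_json = json.dumps(manifest, indent=2)
print(manifest_json)
```

Scholarly W3C Web Annotations, like the workflow's user-produced commentary, would then reference the canvas ID in this manifest as their target, which is how a viewer such as Universal Viewer can line annotations up against the playing audio.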

The AudiAnnotate Project has been awarded a 2019 Digital Extension Grant from the American Council of Learned Societies. AWE has been generously funded by the Andrew W. Mellon Foundation.

User Experiences