Heritrix

{{Infobox_tool_details
|releases_rss=https://github.com/internetarchive/heritrix3/commits/master.atom
|issues_rss=https://webarchive.jira.com/sr/jira.issueviews:searchrequest-rss/temp/SearchRequest.xml?jqlQuery=project+%3D+HER&tempMax=100
|mailing_lists=https://groups.yahoo.com/neo/groups/archive-crawler/info
|ohloh_id=Heritrix
}}

Heritrix is an open-source web crawler, allowing users to target websites they wish to include in a collection and to harvest an instance of each site.
Homepage: http://crawler.archive.org
License: GNU Lesser General Public License 2.1
Platforms: Written in Java. Requires a Java Runtime Environment (JRE, http://www.java.com/en/download/index.jsp), version 5.0 or later. The default heap size is 256MB RAM.
Appears in COW: Quality Assurance: Iterative Seed Issue Decision Tree, Web Archiving Quality Assurance (QA) Workflow, Web Archiving Quality Assurance Lifecycle


Description

Heritrix is an open-source web crawler, allowing users to target websites they wish to include in a collection and to harvest an instance of each site. The software is most often used as a powerful back-end tool incorporated into a web archiving workflow.

Provider

Internet Archive

Licensing and cost

Apache License, Version 2.0 – free. Some individual source code files are subject to or offered under other licenses.

Development activity

Version 3.1.1 was released in May 2012. Heritrix powers the Internet Archive, and so receives ongoing support.

Platform and interoperability

As a Java application, Heritrix is theoretically platform agnostic; however, only Linux is supported.  The software requires Java Runtime Environment 1.6 or higher, and at least 256MB of available RAM.
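
These minimums can be checked quickly before configuring a crawl. The following is a minimal standalone sketch, not part of Heritrix itself: it simply reports the JVM version and maximum heap, and the 256MB threshold and the class name HeritrixPreflight are taken from, or made up for, this example.

 // Standalone pre-flight check (not Heritrix code): reports the JVM version and
 // maximum heap, and warns if the heap is below the 256MB minimum noted above.
 public class HeritrixPreflight {
     public static void main(String[] args) {
         String version = System.getProperty("java.version");            // e.g. "1.6.0_45"
         long maxHeapMb = Runtime.getRuntime().maxMemory() / (1024 * 1024);

         System.out.println("JVM version: " + version);
         System.out.println("Max heap:    " + maxHeapMb + " MB");

         if (maxHeapMb < 256) {
             System.out.println("Warning: heap below 256MB; raise it (e.g. with -Xmx1024m) before crawling.");
         }
     }
 }

If the heap is too small, it can be raised with the standard JVM -Xmx option when launching Heritrix.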

Functional notes

Web crawls are carried out by configuring a ‘job,’ which itself is an instance of a crawl template called a ‘profile.’ Although they contain the same configurations, these two entities have different functions: a profile records a set of configurations and acts as a starting point for shaping a new job, but only a job can execute a crawl. The software will crawl FTP sites in addition to HTTP. Users can examine the results of a crawl by opening its log files, which include information about crawl problems and errors, each URI that was collected, and statistics about the job as a whole. Users can also create reports showing a summary of the crawl’s activity. Heritrix stores the web resources it crawls in an ARC file. The software includes a command-line tool called arcreader which can be used to extract the contents.
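
As an illustration of working with those log files, the sketch below tallies fetch status codes from a crawl.log. It is not part of Heritrix; the class name CrawlLogSummary is invented for the example, and the assumption that the log uses whitespace-separated columns with the status code in the second column reflects the usual crawl.log layout but may differ between versions.

 // Illustrative only (not Heritrix code): counts occurrences of each fetch
 // status code in a crawl.log, assuming whitespace-separated columns with the
 // status code in the second column.
 import java.io.BufferedReader;
 import java.io.FileReader;
 import java.io.IOException;
 import java.util.Map;
 import java.util.TreeMap;

 public class CrawlLogSummary {
     public static void main(String[] args) throws IOException {
         Map<String, Integer> counts = new TreeMap<String, Integer>();
         BufferedReader in = new BufferedReader(new FileReader(args[0]));
         try {
             String line;
             while ((line = in.readLine()) != null) {
                 line = line.trim();
                 if (line.length() == 0) {
                     continue;
                 }
                 String[] cols = line.split("\\s+");
                 if (cols.length > 1) {
                     String status = cols[1];              // second column: fetch status code
                     Integer n = counts.get(status);
                     counts.put(status, n == null ? 1 : n + 1);
                 }
             }
         } finally {
             in.close();
         }
         for (Map.Entry<String, Integer> e : counts.entrySet()) {
             System.out.println(e.getKey() + "\t" + e.getValue());
         }
     }
 }

Run against a job’s crawl.log, this prints a quick status-code breakdown of the harvest.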

Documentation and user support

The User Guide for versions 3.0 and 3.1 is in the form of a wiki, which at the time of writing is not structured in any obvious narrative order; while detailed, it is very difficult to navigate. The User Manual for version 2.0 is structured and can be used as a reference for navigation. Extensive documentation is available, including release notes, Javadoc API documentation, and FAQs linked within the wiki. Heritrix’s website links to two active mailing lists: a Yahoo discussion group and a SourceForge list distributing source code commits. The project also uses a public JIRA instance for bug, feature, and issue tracking.

Usability

Heritrix is installed via a command line interface, but once installed the user can launch a web-based interface for configuration. Setting up a crawl requires a significant number of adjustments.

Expertise required

Installation requires solid knowledge of Linux and command line interfaces. As with any web archiving software, deep understanding of the project’s scope and collections policy is essential in order to set up appropriate targets.

Standards compliance

Heritrix does not offer metadata support. The software is designed to respect robots.txt exclusion directives and META robots tags.
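
To make the exclusion mechanism concrete, the sketch below shows a generic, much simplified robots.txt check of the kind a polite crawler performs. It is an illustration only, not Heritrix’s own implementation: the class name RobotsCheck, the host, and the sample path are placeholders, and real robots.txt handling involves more than prefix matching.

 // Generic illustration of a robots.txt exclusion check, not Heritrix's own
 // implementation. Collects Disallow rules for the wildcard user-agent and
 // tests a sample path by simple prefix matching; host and path are examples.
 import java.io.BufferedReader;
 import java.io.IOException;
 import java.io.InputStreamReader;
 import java.net.URL;
 import java.util.ArrayList;
 import java.util.List;

 public class RobotsCheck {
     public static void main(String[] args) throws IOException {
         URL robots = new URL("http://example.org/robots.txt");     // placeholder host
         List<String> disallows = new ArrayList<String>();
         boolean inWildcardGroup = false;

         BufferedReader in = new BufferedReader(new InputStreamReader(robots.openStream(), "UTF-8"));
         try {
             String line;
             while ((line = in.readLine()) != null) {
                 line = line.trim();
                 String lower = line.toLowerCase();
                 if (lower.startsWith("user-agent:")) {
                     inWildcardGroup = line.substring("user-agent:".length()).trim().equals("*");
                 } else if (inWildcardGroup && lower.startsWith("disallow:")) {
                     String path = line.substring("disallow:".length()).trim();
                     if (path.length() > 0) {
                         disallows.add(path);
                     }
                 }
             }
         } finally {
             in.close();
         }

         String candidate = "/cgi-bin/search";                      // placeholder path
         boolean excluded = false;
         for (String rule : disallows) {
             if (candidate.startsWith(rule)) {
                 excluded = true;
                 break;
             }
         }
         System.out.println(candidate + (excluded ? " is excluded" : " is allowed"));
     }
 }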

Influence and take-up

Heritrix is extremely influential; as of March 2012 the SourceForge site reports nearly 240,000 downloads. Users include the Internet Archive, The British Library, the United States Library of Congress, and the French National Library. The software powers Netarchive Suite and the Web Curator Tool.



