{{Infobox_tool
|purpose=Heritrix is an open-source web crawler, allowing users to target websites they wish to include in a collection and to harvest an instance of each site.
|image=
|homepage=http://crawler.archive.org
}}
  
 
= Description =
[https://webarchive.jira.com/wiki/display/Heritrix/Heritrix Heritrix] is an open-source web crawler, allowing users to target websites they wish to include in a collection and to harvest an instance of each site. The software is most often used as a powerful back-end tool incorporated into a web archiving workflow.
====Provider====
Internet Archive
====Licensing and cost====
[http://www.apache.org/licenses/LICENSE-2.0.html Apache License, Version 2.0] – free. Some individual source code files are subject to or offered under other licenses.
====Development activity====
Version 3.1.1 was released in May 2012.
Heritrix powers the Internet Archive, and so receives ongoing support.
====Platform and interoperability====
As a Java application, Heritrix is theoretically platform-agnostic; however, only Linux is officially supported. The software requires Java Runtime Environment 1.6 or higher and at least 256MB of available RAM.
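As a rough pre-installation check, these requirements can be read back from the standard Java library. The snippet below is only an illustrative sketch (the class name is made up); the 1.6 and 256MB thresholds simply mirror the figures quoted above.

<pre>
// EnvCheck.java -- illustrative sketch only: reports the JRE version and maximum
// heap so they can be compared against the requirements quoted above
// (Java Runtime Environment 1.6 or higher, at least 256MB of RAM).
public class EnvCheck {
    public static void main(String[] args) {
        String javaVersion = System.getProperty("java.version"); // e.g. "1.6.0_45"
        long maxHeapMb = Runtime.getRuntime().maxMemory() / (1024 * 1024);

        System.out.println("Java version: " + javaVersion);
        System.out.println("Maximum heap: " + maxHeapMb + "MB");

        if (maxHeapMb < 256) {
            System.out.println("Warning: less than 256MB of heap is available; "
                    + "consider raising -Xmx before running Heritrix.");
        }
    }
}
</pre>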
====Functional notes====
Web crawls are carried out by configuring a ‘job,’ which is itself an instance of a crawl template called a ‘profile.’ Although they contain the same configuration settings, the two entities have different functions: profiles record a set of configurations and act as a starting point for shaping a new job, but only the job itself can execute a crawl.
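The profile/job relationship is essentially that of template and instance. The sketch below is purely illustrative and assumes a Heritrix 3 style layout in which each job directory holds a crawler-beans.cxml configuration file; the directory names are hypothetical, and in normal use jobs are created through the web interface rather than by copying files by hand.

<pre>
// NewJobFromProfile.java -- illustrative sketch only. Assumes a Heritrix 3 style
// layout (each job directory holding a crawler-beans.cxml configuration); the
// paths are hypothetical. In practice jobs are created via the web interface.
import java.io.IOException;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.Paths;

public class NewJobFromProfile {
    public static void main(String[] args) throws IOException {
        Path profileConfig = Paths.get("jobs", "profile-default", "crawler-beans.cxml");
        Path jobDir = Paths.get("jobs", "my-first-crawl");

        Files.createDirectories(jobDir);
        // The new job starts out as a copy of the profile's configuration;
        // seeds, scope and politeness settings are then edited before launch,
        // and only the job (not the profile) can be launched.
        Files.copy(profileConfig, jobDir.resolve("crawler-beans.cxml"));

        System.out.println("Created job at " + jobDir.toAbsolutePath());
    }
}
</pre>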
The software will crawl FTP sites in addition to HTTP. Users can examine the results of a crawl by opening its log files, which include information about crawl problems and errors, each URI that was collected, and statistics about the job as a whole. Users can also create reports showing a summary of the crawl’s activity.
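For example, each line of the main crawl log records a fetch status code and the URI processed, among other fields. The sketch below tallies URIs per status code; it assumes a whitespace-separated layout with the status code in the second field and the URI in the fourth, which matches Heritrix's crawl.log, but the file name and field positions should be checked against the documentation for the version in use.

<pre>
// CrawlLogSummary.java -- illustrative sketch only. Assumes a whitespace-separated
// log with the fetch status code in field 2 and the URI in field 4 (as in
// Heritrix's crawl.log); verify the layout for your Heritrix version.
import java.io.BufferedReader;
import java.io.FileReader;
import java.io.IOException;
import java.util.HashMap;
import java.util.Map;

public class CrawlLogSummary {
    public static void main(String[] args) throws IOException {
        Map<String, Integer> statusCounts = new HashMap<String, Integer>();
        BufferedReader in = new BufferedReader(new FileReader("crawl.log")); // hypothetical path
        String line;
        while ((line = in.readLine()) != null) {
            String[] fields = line.trim().split("\\s+");
            if (fields.length < 4) {
                continue; // skip blank or truncated lines
            }
            // fields[3] is the URI that was processed; here we only count per status code.
            String status = fields[1]; // e.g. "200"; Heritrix also uses negative codes for fetch problems
            Integer seen = statusCounts.get(status);
            statusCounts.put(status, seen == null ? 1 : seen + 1);
        }
        in.close();
        System.out.println("URIs per fetch status code: " + statusCounts);
    }
}
</pre>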
Heritrix stores the web resources it crawls in ARC files. The software includes a command-line tool called arcreader which can be used to extract their contents.
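For orientation, an uncompressed ARC (version 1) file consists of records that each begin with a one-line header listing the URL, IP address, 14-digit timestamp, content type, and record length in bytes, followed by that many bytes of content. The sketch below lists record URLs and sizes on that assumption; it is deliberately naive, does not handle the gzip-compressed .arc.gz files that Heritrix typically writes, and is no substitute for the arcreader tool mentioned above.

<pre>
// ArcLister.java -- illustrative sketch only. Assumes an uncompressed ARC v1 file
// in which every record starts with a one-line header
// "URL IP-address 14-digit-date content-type length" followed by exactly
// 'length' bytes of content. Usage: java ArcLister crawl.arc
import java.io.DataInputStream;
import java.io.FileInputStream;
import java.io.IOException;

public class ArcLister {
    public static void main(String[] args) throws IOException {
        DataInputStream in = new DataInputStream(new FileInputStream(args[0]));
        String header;
        while ((header = readLine(in)) != null) {
            if (header.trim().isEmpty()) {
                continue; // blank separator line between records
            }
            String[] fields = header.trim().split(" ");
            if (fields.length < 5) {
                continue; // not a record header in the expected form
            }
            String url = fields[0];
            long length = Long.parseLong(fields[fields.length - 1]);
            System.out.println(url + " (" + length + " bytes)");

            // Skip over the record content.
            long remaining = length;
            while (remaining > 0) {
                long skipped = in.skip(remaining);
                if (skipped <= 0) {
                    break;
                }
                remaining -= skipped;
            }
        }
        in.close();
    }

    // Reads one LF-terminated line byte-by-byte (no buffering, so record offsets stay correct).
    private static String readLine(DataInputStream in) throws IOException {
        StringBuilder sb = new StringBuilder();
        int b;
        while ((b = in.read()) != -1 && b != '\n') {
            sb.append((char) b);
        }
        return (b == -1 && sb.length() == 0) ? null : sb.toString();
    }
}
</pre>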
====Documentation and user support====
The [https://webarchive.jira.com/wiki/display/Heritrix/Heritrix+3.0+and+3.1+User+Guide User Guide for versions 3.0 and 3.1] takes the form of a wiki which, at the time of writing, is not structured in any obvious narrative order; while detailed, it is very difficult to navigate. The [http://crawler.archive.org/articles/user_manual/ User Manual for version 2.0] is better structured and can be used as a reference. Extensive documentation is available, including release notes, Javadoc API documentation, and FAQs linked from within the wiki.
Heritrix’s website links to two active mailing lists: a Yahoo! discussion group and a SourceForge list distributing source code commits. The project also uses a public JIRA instance for bug, feature, and issue tracking.
====Usability====
Heritrix is installed via a command line interface, but once installed the user can launch a web-based interface for configuration. Setting up a crawl requires a significant number of adjustments.
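Once started from the command line, the Heritrix 3 web interface is by default served over HTTPS on port 8443 of the local machine. The snippet below is a minimal, illustrative reachability check for that port; the host, port, and timeout are assumptions to be adjusted for the actual installation.

<pre>
// UiCheck.java -- illustrative sketch only: tests whether something is listening
// on the port the Heritrix web interface is assumed to use (Heritrix 3 default:
// HTTPS on localhost:8443). Adjust host and port for your installation.
import java.io.IOException;
import java.net.InetSocketAddress;
import java.net.Socket;

public class UiCheck {
    public static void main(String[] args) {
        Socket socket = new Socket();
        try {
            socket.connect(new InetSocketAddress("localhost", 8443), 2000); // 2-second timeout
            System.out.println("Port 8443 is open; browse to https://localhost:8443/ to configure crawls.");
        } catch (IOException e) {
            System.out.println("Could not reach port 8443 -- is Heritrix running?");
        } finally {
            try {
                socket.close();
            } catch (IOException ignored) {
                // nothing useful to do if closing fails
            }
        }
    }
}
</pre>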
====Expertise required====
Installation requires solid knowledge of Linux and command line interfaces. As with any web archiving software, deep understanding of the project’s scope and collections policy is essential in order to set up appropriate targets.
====Standards compliance====
Heritrix does not offer metadata support. The software is designed to respect robots.txt exclusion directives and META robots tags.
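To illustrate what respecting robots.txt means, the following is a heavily simplified, illustrative check of a path against a site's Disallow rules; the site and path are hypothetical, and a real crawler such as Heritrix also handles user-agent groups, Allow rules, and other directives that are ignored here.

<pre>
// RobotsCheck.java -- illustrative, heavily simplified sketch of a robots.txt
// Disallow check. It treats every Disallow line as applying to all user agents
// and ignores Allow rules, wildcards, and crawl-delay; a real crawler does more.
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.net.URL;
import java.util.ArrayList;
import java.util.List;

public class RobotsCheck {
    public static void main(String[] args) throws Exception {
        String site = "http://www.example.org";         // hypothetical site
        String candidatePath = "/private/report.html";  // hypothetical path to test

        List<String> disallowed = new ArrayList<String>();
        BufferedReader in = new BufferedReader(new InputStreamReader(
                new URL(site + "/robots.txt").openStream()));
        String line;
        while ((line = in.readLine()) != null) {
            line = line.trim();
            if (line.toLowerCase().startsWith("disallow:")) {
                String prefix = line.substring("disallow:".length()).trim();
                if (!prefix.isEmpty()) {
                    disallowed.add(prefix);
                }
            }
        }
        in.close();

        boolean blocked = false;
        for (String prefix : disallowed) {
            if (candidatePath.startsWith(prefix)) {
                blocked = true;
                break;
            }
        }
        System.out.println(candidatePath + (blocked ? " is disallowed" : " may be crawled"));
    }
}
</pre>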
====Influence and take-up====
Heritrix is extremely influential; as of March 2012 the SourceForge site reported nearly 240,000 downloads. [https://webarchive.jira.com/wiki/display/Heritrix/Users+of+Heritrix Users] include the Internet Archive, the British Library, the United States Library of Congress, and the French National Library. The software powers [http://www.dcc.ac.uk/node/9380 Netarchive Suite] and the [http://www.dcc.ac.uk/node/9394 Web Curator Tool].

= User Experiences =

= Development Activity =
