<?xml version="1.0"?>
<feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en-GB">
	<id>https://coptr.digipres.org/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Danielle+plumer</id>
	<title>COPTR - User contributions [en-gb]</title>
	<link rel="self" type="application/atom+xml" href="https://coptr.digipres.org/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=Danielle+plumer"/>
	<link rel="alternate" type="text/html" href="https://coptr.digipres.org/Special:Contributions/Danielle_plumer"/>
	<updated>2026-04-12T16:31:23Z</updated>
	<subtitle>User contributions</subtitle>
	<generator>MediaWiki 1.35.14</generator>
	<entry>
		<id>https://coptr.digipres.org/index.php?title=GNU_Wget&amp;diff=1918</id>
		<title>GNU Wget</title>
		<link rel="alternate" type="text/html" href="https://coptr.digipres.org/index.php?title=GNU_Wget&amp;diff=1918"/>
		<updated>2014-10-01T03:54:09Z</updated>

		<summary type="html">&lt;p&gt;Danielle plumer: Added information about features, license, platforms &amp;amp; installation, documentation, and user experiences. Fixed Ohloh ID.&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Infobox_tool&lt;br /&gt;
|purpose= Non-interactive network downloader &lt;br /&gt;
|image=Gnu2.png&lt;br /&gt;
|homepage=http://www.gnu.org/software/wget/&lt;br /&gt;
|license=GNU General Public License&lt;br /&gt;
|platforms=Unix, Linux, Windows, Macintosh&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Delete the Categories that do not apply --&amp;gt;&lt;br /&gt;
[[Category:Web Crawl]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Description =&lt;br /&gt;
GNU Wget is a free software package for retrieving files using HTTP,  HTTPS and FTP,  the most widely-used Internet protocols. It is a non-interactive command line tool,  so it may easily be called from scripts,  cron jobs,  terminals without X-Windows support,  etc. &lt;br /&gt;
&lt;br /&gt;
== Features ==&lt;br /&gt;
&lt;br /&gt;
From the Wget manual: &lt;br /&gt;
&lt;br /&gt;
* Wget is non-interactive, meaning that it can work in the background, while the user is not logged on. This allows you to start a retrieval and disconnect from the system, letting Wget finish the work. By contrast, most of the Web browsers require constant user’s presence, which can be a great hindrance when transferring a lot of data.&lt;br /&gt;
* Wget can follow links in HTML, XHTML, and CSS pages, to create local versions of remote web sites, fully recreating the directory structure of the original site. This is sometimes referred to as “recursive downloading.” While doing that, Wget respects the Robot Exclusion Standard (/robots.txt).  Wget can be instructed to convert the links in downloaded files to point at the local files, for offline viewing.&lt;br /&gt;
* File name wildcard matching and recursive mirroring of directories are available when retrieving via FTP. Wget can read the time-stamp information given by both HTTP and FTP servers, and store it locally. Thus Wget can see if the remote file has changed since last retrieval, and automatically retrieve the new version if it has. This makes Wget suitable for mirroring of FTP sites, as well as home pages.&lt;br /&gt;
* Wget has been designed for robustness over slow or unstable network connections; if a download fails due to a network problem, it will keep retrying until the whole file has been retrieved. If the server supports regetting, it will instruct the server to continue the download from where it left off.&lt;br /&gt;
* Wget supports proxy servers, which can lighten the network load, speed up retrieval and provide access behind firewalls. Wget uses the passive FTP downloading by default, active FTP being an option.&lt;br /&gt;
* Wget supports IP version 6, the next generation of IP. IPv6 is autodetected at compile-time, and can be disabled at either build or run time. Binaries built with IPv6 support work well in both IPv4-only and dual family environments.&lt;br /&gt;
* Built-in features offer mechanisms to tune which links you wish to follow (see Following Links).&lt;br /&gt;
* The progress of individual downloads is traced using a progress gauge. Interactive downloads are tracked using a “thermometer”-style gauge, whereas non-interactive ones are traced with dots, each dot representing a fixed amount of data received (1KB by default). Either gauge can be customized to your preferences.&lt;br /&gt;
* Most of the features are fully configurable, either through command line options, or via the initialization file .wgetrc (see Startup File). Wget allows you to define global startup files (/usr/local/etc/wgetrc by default) for site settings. You can also specify the location of a startup file with the --config option.&lt;br /&gt;
* Finally, GNU Wget is free software. This means that everyone may use it, redistribute it and/or modify it under the terms of the GNU General Public License, as published by the Free Software Foundation (see the file COPYING that came with GNU Wget, for details).&lt;br /&gt;
&lt;br /&gt;
As of version 1.14, Wget supports WARC output. See http://www.archiveteam.org/index.php?title=Wget_with_WARC_output for details of the development of this feature.&lt;br /&gt;
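As an illustrative sketch (the URL and output name below are placeholders, not from the Wget documentation), a crawl that both mirrors a site and records it as a WARC file might be invoked as:&lt;br /&gt;

```shell
# Requires Wget 1.14 or later. Mirrors the site and records the crawl
# to example-crawl.warc.gz (Wget appends .warc.gz and compresses by default).
# URL and file name are illustrative placeholders.
wget --mirror --warc-file=example-crawl https://example.org/
```
&lt;br /&gt;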
&lt;br /&gt;
== Platform ==&lt;br /&gt;
&lt;br /&gt;
GNU Wget can be installed on Unix-like systems (UNIX, Linux), Mac OS, and Windows computers.&lt;br /&gt;
&lt;br /&gt;
=== Installation ===&lt;br /&gt;
&lt;br /&gt;
* Unix-like systems: Most package managers include Wget, but they may not include the latest version. To get a later version with support for WARC, for example, Linux and UNIX users should compile the latest version of the source code following the instructions at http://wget.addictivecode.org/FrequentlyAskedQuestions#How_do_I_compile_Wget.3F.&lt;br /&gt;
&lt;br /&gt;
* Macintosh: The default Mac OS does not include Wget. Source code can be compiled for Mac OS X or users can install an alternative package manager such as Homebrew (Homebrew installs the latest version by default). See http://coolestguidesontheplanet.com/install-and-configure-wget-on-os-x/ for instructions on how to install from source.&lt;br /&gt;
&lt;br /&gt;
* Windows: packages for later versions of Wget compiled for Windows are available at http://eternallybored.org/misc/wget/.&lt;br /&gt;
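The compile-from-source route mentioned above follows the standard GNU autotools flow; a minimal sketch (tarball name and configure flag are typical examples, so check the FAQ linked above for the current instructions):&lt;br /&gt;

```shell
# Standard GNU autotools build; the tarball name is a placeholder for
# whichever release you download, and --with-ssl depends on your system.
tar xzf wget-latest.tar.gz
cd wget-*/
./configure --with-ssl=openssl
make
sudo make install
```
&lt;br /&gt;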
&lt;br /&gt;
==Documentation==&lt;br /&gt;
The user manual is available at http://www.gnu.org/software/wget/manual/wget.html. The manual is also available via man wget in Unix-like systems.&lt;br /&gt;
&lt;br /&gt;
Additional documentation, including an FAQ, is available on the Wget wiki, http://wget.addictivecode.org/Wget.&lt;br /&gt;
&lt;br /&gt;
= User Experiences =&lt;br /&gt;
&lt;br /&gt;
* Milligan, Ian. (2012). Automated downloading with Wget. http://programminghistorian.org/lessons/automated-downloading-with-wget&lt;br /&gt;
* ArchiveTeam. (2014). Wget. http://www.archiveteam.org/index.php?title=Wget&lt;br /&gt;
&lt;br /&gt;
= Development Activity =&lt;br /&gt;
&lt;br /&gt;
{{Infobox_tool_details&lt;br /&gt;
|ohloh_id=Wget&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Danielle plumer</name></author>
	</entry>
	<entry>
		<id>https://coptr.digipres.org/index.php?title=User:Danielle_plumer&amp;diff=1917</id>
		<title>User:Danielle plumer</title>
		<link rel="alternate" type="text/html" href="https://coptr.digipres.org/index.php?title=User:Danielle_plumer&amp;diff=1917"/>
		<updated>2014-10-01T02:22:02Z</updated>

		<summary type="html">&lt;p&gt;Danielle plumer: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== About Me ==&lt;br /&gt;
I am a digital collections consultant working with cultural heritage institutions interested in putting their collections online. I work primarily in the areas of project planning and metadata creation, standards, and normalization. I also do research into information extraction from textual materials and natural language processing for information retrieval.&lt;br /&gt;
&lt;br /&gt;
From 2005-2011, I coordinated the Texas Heritage Online program at the Texas State Library and Archives Commission, where I consulted with and assisted participants from the library, archives, and museum communities who were developing digital library projects. &lt;br /&gt;
&lt;br /&gt;
I also teach graduate-level courses for the College of Information at the University of North Texas and for the School of Information at The University of Texas at Austin, including courses on metadata, digitization, and digital preservation. In addition to these formal courses, I developed and co-taught a series of workshops offered throughout Texas as part of a grant funded by the Institute of Museum and Library Services. These workshops covered Digital Project Planning and Management Basics, Digital Archives Systems and Applications, Metadata Standards and Crosswalks, and Principles of Controlled Vocabulary and Thesaurus Design as well as supplemental courses on Digital Preservation Planning and Management and Digital Preservation Tools.&lt;br /&gt;
&lt;br /&gt;
I earned an M.S. in Information Studies at The University of Texas at Austin in 2003. Prior to that, I earned a Ph.D. in English at the University of California, Davis.&lt;br /&gt;
&lt;br /&gt;
==== Personal Links: ====&lt;br /&gt;
* [[Help:Editing | Guidelines for Editing COPTR]]&lt;br /&gt;
* [https://www.mediawiki.org/wiki/Help:Editing MediaWiki Editing Help]&lt;br /&gt;
&lt;br /&gt;
==== Notes: ====&lt;br /&gt;
* To add an image to the InfoBox, first upload an image. For best results, images should be approx. 200 px. wide. PNG and JPG are both verified to work.&lt;br /&gt;
** Use the syntax |image=Gnu2.png within the InfoBox.&lt;br /&gt;
** Example: http://coptr.digipres.org/index.php?title=GNU_Wget&lt;br /&gt;
 &lt;br /&gt;
== Conflict of Interest ==&lt;br /&gt;
I am an independent consultant. My business, [http://www.dcplumer.com dcplumer associates], works with various libraries, archives, museums, and nonprofit organizations, and some of my paid work involves search engine optimization and use of social media (including Wikipedia) for digital collections. My goal as an editor is to advance the aims of COPTR. If other editors feel that my editing behavior is not advancing the aims of COPTR or that I am not following the [[Help:Editing | Guidelines for Editing COPTR]], I will be happy to modify my editing behavior. Please do not hesitate to contact me, via my [[User:Danielle_plumer | user talk]] page, if you have questions about my behavior.&lt;/div&gt;</summary>
		<author><name>Danielle plumer</name></author>
	</entry>
	<entry>
		<id>https://coptr.digipres.org/index.php?title=GNU_Wget&amp;diff=1916</id>
		<title>GNU Wget</title>
		<link rel="alternate" type="text/html" href="https://coptr.digipres.org/index.php?title=GNU_Wget&amp;diff=1916"/>
		<updated>2014-10-01T02:19:42Z</updated>

		<summary type="html">&lt;p&gt;Danielle plumer: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Infobox_tool&lt;br /&gt;
|purpose= Non-interactive network downloader &lt;br /&gt;
|image=Gnu2.png&lt;br /&gt;
|homepage=http://www.gnu.org/software/wget/&lt;br /&gt;
|license=GNU General Public License&lt;br /&gt;
|platforms=Unix, Linux, Windows, Macintosh&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Delete the Categories that do not apply --&amp;gt;&lt;br /&gt;
[[Category:Web Crawl]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Description =&lt;br /&gt;
GNU Wget is a free software package for retrieving files using HTTP,  HTTPS and FTP,  the most widely-used Internet protocols. It is a non-interactive command line tool,  so it may easily be called from scripts,  cron jobs,  terminals without X-Windows support,  etc. &lt;br /&gt;
&lt;br /&gt;
== Features ==&lt;br /&gt;
&lt;br /&gt;
From the Wget manual: &lt;br /&gt;
&lt;br /&gt;
* Wget is non-interactive, meaning that it can work in the background, while the user is not logged on. This allows you to start a retrieval and disconnect from the system, letting Wget finish the work. By contrast, most of the Web browsers require constant user’s presence, which can be a great hindrance when transferring a lot of data.&lt;br /&gt;
* Wget can follow links in HTML, XHTML, and CSS pages, to create local versions of remote web sites, fully recreating the directory structure of the original site. This is sometimes referred to as “recursive downloading.” While doing that, Wget respects the Robot Exclusion Standard (/robots.txt). Wget can be instructed to convert the links in downloaded files to point at the local files, for offline viewing.&lt;br /&gt;
* File name wildcard matching and recursive mirroring of directories are available when retrieving via FTP. Wget can read the time-stamp information given by both HTTP and FTP servers, and store it locally. Thus Wget can see if the remote file has changed since last retrieval, and automatically retrieve the new version if it has. This makes Wget suitable for mirroring of FTP sites, as well as home pages.&lt;br /&gt;
* Wget has been designed for robustness over slow or unstable network connections; if a download fails due to a network problem, it will keep retrying until the whole file has been retrieved. If the server supports regetting, it will instruct the server to continue the download from where it left off.&lt;br /&gt;
* Wget supports proxy servers, which can lighten the network load, speed up retrieval and provide access behind firewalls. Wget uses the passive FTP downloading by default, active FTP being an option.&lt;br /&gt;
* Wget supports IP version 6, the next generation of IP. IPv6 is autodetected at compile-time, and can be disabled at either build or run time. Binaries built with IPv6 support work well in both IPv4-only and dual family environments.&lt;br /&gt;
* Built-in features offer mechanisms to tune which links you wish to follow (see Following Links).&lt;br /&gt;
* The progress of individual downloads is traced using a progress gauge. Interactive downloads are tracked using a “thermometer”-style gauge, whereas non-interactive ones are traced with dots, each dot representing a fixed amount of data received (1KB by default). Either gauge can be customized to your preferences.&lt;br /&gt;
* Most of the features are fully configurable, either through command line options, or via the initialization file .wgetrc (see Startup File). Wget allows you to define global startup files (/usr/local/etc/wgetrc by default) for site settings. You can also specify the location of a startup file with the --config option.&lt;br /&gt;
* Finally, GNU Wget is free software. This means that everyone may use it, redistribute it and/or modify it under the terms of the GNU General Public License, as published by the Free Software Foundation (see the file COPYING that came with GNU Wget, for details).&lt;br /&gt;
&lt;br /&gt;
As of version 1.14, Wget supports WARC output. See http://www.archiveteam.org/index.php?title=Wget_with_WARC_output for details of the development of this feature.&lt;br /&gt;
&lt;br /&gt;
== Platform ==&lt;br /&gt;
&lt;br /&gt;
GNU Wget can be installed on Unix-like systems (UNIX, Linux), Mac OS, and Windows computers.&lt;br /&gt;
&lt;br /&gt;
=== Installation ===&lt;br /&gt;
&lt;br /&gt;
* Unix-like systems: Most package managers include Wget, but they may not include the latest version. To get a later version with support for WARC, for example, Linux and UNIX users should compile the latest version of the source code following the instructions at http://wget.addictivecode.org/FrequentlyAskedQuestions#How_do_I_compile_Wget.3F.&lt;br /&gt;
&lt;br /&gt;
* Macintosh: The default Mac OS does not include Wget. Source code can be compiled for Mac OS X or users can install an alternative package manager such as Homebrew (it is unknown which version of Wget Homebrew installs). See http://coolestguidesontheplanet.com/install-and-configure-wget-on-os-x/ for instructions on how to install from source.&lt;br /&gt;
&lt;br /&gt;
* Windows: packages for later versions of Wget compiled for Windows are available at http://eternallybored.org/misc/wget/.&lt;br /&gt;
&lt;br /&gt;
==Documentation==&lt;br /&gt;
The user manual is available at http://www.gnu.org/software/wget/manual/wget.html. The manual is also available via man wget in Unix-like systems.&lt;br /&gt;
&lt;br /&gt;
Additional documentation, including an FAQ, is available on the Wget wiki, http://wget.addictivecode.org/Wget.&lt;br /&gt;
&lt;br /&gt;
= User Experiences =&lt;br /&gt;
&lt;br /&gt;
* Milligan, Ian. (2012). Automated downloading with Wget. http://programminghistorian.org/lessons/automated-downloading-with-wget&lt;br /&gt;
* ArchiveTeam. (2014). Wget. http://www.archiveteam.org/index.php?title=Wget&lt;br /&gt;
&lt;br /&gt;
= Development Activity =&lt;br /&gt;
&lt;br /&gt;
{{Infobox_tool_details&lt;br /&gt;
|ohloh_id=Wget&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Danielle plumer</name></author>
	</entry>
	<entry>
		<id>https://coptr.digipres.org/index.php?title=File:Gnu2.png&amp;diff=1915</id>
		<title>File:Gnu2.png</title>
		<link rel="alternate" type="text/html" href="https://coptr.digipres.org/index.php?title=File:Gnu2.png&amp;diff=1915"/>
		<updated>2014-10-01T02:18:30Z</updated>

		<summary type="html">&lt;p&gt;Danielle plumer: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Danielle plumer</name></author>
	</entry>
	<entry>
		<id>https://coptr.digipres.org/index.php?title=GNU_Wget&amp;diff=1914</id>
		<title>GNU Wget</title>
		<link rel="alternate" type="text/html" href="https://coptr.digipres.org/index.php?title=GNU_Wget&amp;diff=1914"/>
		<updated>2014-10-01T02:06:20Z</updated>

		<summary type="html">&lt;p&gt;Danielle plumer: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Infobox_tool&lt;br /&gt;
|purpose= Non-interactive network downloader &lt;br /&gt;
|image=gnu_small.png&lt;br /&gt;
|homepage=http://www.gnu.org/software/wget/&lt;br /&gt;
|license=GNU General Public License&lt;br /&gt;
|platforms=Unix, Linux, Windows, Macintosh&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Delete the Categories that do not apply --&amp;gt;&lt;br /&gt;
[[Category:Web Crawl]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Description =&lt;br /&gt;
GNU Wget is a free software package for retrieving files using HTTP,  HTTPS and FTP,  the most widely-used Internet protocols. It is a non-interactive command line tool,  so it may easily be called from scripts,  cron jobs,  terminals without X-Windows support,  etc. &lt;br /&gt;
&lt;br /&gt;
== Features ==&lt;br /&gt;
&lt;br /&gt;
From the Wget manual: &lt;br /&gt;
&lt;br /&gt;
* Wget is non-interactive, meaning that it can work in the background, while the user is not logged on. This allows you to start a retrieval and disconnect from the system, letting Wget finish the work. By contrast, most of the Web browsers require constant user’s presence, which can be a great hindrance when transferring a lot of data.&lt;br /&gt;
* Wget can follow links in HTML, XHTML, and CSS pages, to create local versions of remote web sites, fully recreating the directory structure of the original site. This is sometimes referred to as “recursive downloading.” While doing that, Wget respects the Robot Exclusion Standard (/robots.txt). Wget can be instructed to convert the links in downloaded files to point at the local files, for offline viewing.&lt;br /&gt;
* File name wildcard matching and recursive mirroring of directories are available when retrieving via FTP. Wget can read the time-stamp information given by both HTTP and FTP servers, and store it locally. Thus Wget can see if the remote file has changed since last retrieval, and automatically retrieve the new version if it has. This makes Wget suitable for mirroring of FTP sites, as well as home pages.&lt;br /&gt;
* Wget has been designed for robustness over slow or unstable network connections; if a download fails due to a network problem, it will keep retrying until the whole file has been retrieved. If the server supports regetting, it will instruct the server to continue the download from where it left off.&lt;br /&gt;
* Wget supports proxy servers, which can lighten the network load, speed up retrieval and provide access behind firewalls. Wget uses the passive FTP downloading by default, active FTP being an option.&lt;br /&gt;
* Wget supports IP version 6, the next generation of IP. IPv6 is autodetected at compile-time, and can be disabled at either build or run time. Binaries built with IPv6 support work well in both IPv4-only and dual family environments.&lt;br /&gt;
* Built-in features offer mechanisms to tune which links you wish to follow (see Following Links).&lt;br /&gt;
* The progress of individual downloads is traced using a progress gauge. Interactive downloads are tracked using a “thermometer”-style gauge, whereas non-interactive ones are traced with dots, each dot representing a fixed amount of data received (1KB by default). Either gauge can be customized to your preferences.&lt;br /&gt;
* Most of the features are fully configurable, either through command line options, or via the initialization file .wgetrc (see Startup File). Wget allows you to define global startup files (/usr/local/etc/wgetrc by default) for site settings. You can also specify the location of a startup file with the --config option.&lt;br /&gt;
* Finally, GNU Wget is free software. This means that everyone may use it, redistribute it and/or modify it under the terms of the GNU General Public License, as published by the Free Software Foundation (see the file COPYING that came with GNU Wget, for details).&lt;br /&gt;
&lt;br /&gt;
As of version 1.14, Wget supports WARC output. See http://www.archiveteam.org/index.php?title=Wget_with_WARC_output for details of the development of this feature.&lt;br /&gt;
&lt;br /&gt;
== Platform ==&lt;br /&gt;
&lt;br /&gt;
GNU Wget can be installed on Unix-like systems (UNIX, Linux), Mac OS, and Windows computers.&lt;br /&gt;
&lt;br /&gt;
=== Installation ===&lt;br /&gt;
&lt;br /&gt;
* Unix-like systems: Most package managers include Wget, but they may not include the latest version. To get a later version with support for WARC, for example, Linux and UNIX users should compile the latest version of the source code following the instructions at http://wget.addictivecode.org/FrequentlyAskedQuestions#How_do_I_compile_Wget.3F.&lt;br /&gt;
&lt;br /&gt;
* Macintosh: The default Mac OS does not include Wget. Source code can be compiled for Mac OS X or users can install an alternative package manager such as Homebrew (it is unknown which version of Wget Homebrew installs). See http://coolestguidesontheplanet.com/install-and-configure-wget-on-os-x/ for instructions on how to install from source.&lt;br /&gt;
&lt;br /&gt;
* Windows: packages for later versions of Wget compiled for Windows are available at http://eternallybored.org/misc/wget/.&lt;br /&gt;
&lt;br /&gt;
==Documentation==&lt;br /&gt;
The user manual is available at http://www.gnu.org/software/wget/manual/wget.html. The manual is also available via man wget in Unix-like systems.&lt;br /&gt;
&lt;br /&gt;
Additional documentation, including an FAQ, is available on the Wget wiki, http://wget.addictivecode.org/Wget.&lt;br /&gt;
&lt;br /&gt;
= User Experiences =&lt;br /&gt;
&lt;br /&gt;
* Milligan, Ian. (2012). Automated downloading with Wget. http://programminghistorian.org/lessons/automated-downloading-with-wget&lt;br /&gt;
* ArchiveTeam. (2014). Wget. http://www.archiveteam.org/index.php?title=Wget&lt;br /&gt;
&lt;br /&gt;
= Development Activity =&lt;br /&gt;
&lt;br /&gt;
{{Infobox_tool_details&lt;br /&gt;
|ohloh_id=Wget&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Danielle plumer</name></author>
	</entry>
	<entry>
		<id>https://coptr.digipres.org/index.php?title=File:Gnu_small.png&amp;diff=1913</id>
		<title>File:Gnu small.png</title>
		<link rel="alternate" type="text/html" href="https://coptr.digipres.org/index.php?title=File:Gnu_small.png&amp;diff=1913"/>
		<updated>2014-10-01T02:05:12Z</updated>

		<summary type="html">&lt;p&gt;Danielle plumer: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Danielle plumer</name></author>
	</entry>
	<entry>
		<id>https://coptr.digipres.org/index.php?title=GNU_Wget&amp;diff=1912</id>
		<title>GNU Wget</title>
		<link rel="alternate" type="text/html" href="https://coptr.digipres.org/index.php?title=GNU_Wget&amp;diff=1912"/>
		<updated>2014-10-01T01:48:22Z</updated>

		<summary type="html">&lt;p&gt;Danielle plumer: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Infobox_tool&lt;br /&gt;
|purpose= Non-interactive network downloader &lt;br /&gt;
|image=[[File:Gnu.jpg]]&lt;br /&gt;
|homepage=http://www.gnu.org/software/wget/&lt;br /&gt;
|license=GNU General Public License&lt;br /&gt;
|platforms=Unix, Linux, Windows, Macintosh&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Delete the Categories that do not apply --&amp;gt;&lt;br /&gt;
[[Category:Web Crawl]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Description =&lt;br /&gt;
GNU Wget is a free software package for retrieving files using HTTP,  HTTPS and FTP,  the most widely-used Internet protocols. It is a non-interactive command line tool,  so it may easily be called from scripts,  cron jobs,  terminals without X-Windows support,  etc. &lt;br /&gt;
&lt;br /&gt;
== Features ==&lt;br /&gt;
&lt;br /&gt;
From the Wget manual: &lt;br /&gt;
&lt;br /&gt;
* Wget is non-interactive, meaning that it can work in the background, while the user is not logged on. This allows you to start a retrieval and disconnect from the system, letting Wget finish the work. By contrast, most of the Web browsers require constant user’s presence, which can be a great hindrance when transferring a lot of data.&lt;br /&gt;
* Wget can follow links in HTML, XHTML, and CSS pages, to create local versions of remote web sites, fully recreating the directory structure of the original site. This is sometimes referred to as “recursive downloading.” While doing that, Wget respects the Robot Exclusion Standard (/robots.txt). Wget can be instructed to convert the links in downloaded files to point at the local files, for offline viewing.&lt;br /&gt;
* File name wildcard matching and recursive mirroring of directories are available when retrieving via FTP. Wget can read the time-stamp information given by both HTTP and FTP servers, and store it locally. Thus Wget can see if the remote file has changed since last retrieval, and automatically retrieve the new version if it has. This makes Wget suitable for mirroring of FTP sites, as well as home pages.&lt;br /&gt;
* Wget has been designed for robustness over slow or unstable network connections; if a download fails due to a network problem, it will keep retrying until the whole file has been retrieved. If the server supports regetting, it will instruct the server to continue the download from where it left off.&lt;br /&gt;
* Wget supports proxy servers, which can lighten the network load, speed up retrieval and provide access behind firewalls. Wget uses the passive FTP downloading by default, active FTP being an option.&lt;br /&gt;
* Wget supports IP version 6, the next generation of IP. IPv6 is autodetected at compile-time, and can be disabled at either build or run time. Binaries built with IPv6 support work well in both IPv4-only and dual family environments.&lt;br /&gt;
* Built-in features offer mechanisms to tune which links you wish to follow (see Following Links).&lt;br /&gt;
* The progress of individual downloads is traced using a progress gauge. Interactive downloads are tracked using a “thermometer”-style gauge, whereas non-interactive ones are traced with dots, each dot representing a fixed amount of data received (1KB by default). Either gauge can be customized to your preferences.&lt;br /&gt;
* Most of the features are fully configurable, either through command line options, or via the initialization file .wgetrc (see Startup File). Wget allows you to define global startup files (/usr/local/etc/wgetrc by default) for site settings. You can also specify the location of a startup file with the --config option.&lt;br /&gt;
* Finally, GNU Wget is free software. This means that everyone may use it, redistribute it and/or modify it under the terms of the GNU General Public License, as published by the Free Software Foundation (see the file COPYING that came with GNU Wget, for details).&lt;br /&gt;
&lt;br /&gt;
As of version 1.14, Wget supports WARC output. See http://www.archiveteam.org/index.php?title=Wget_with_WARC_output for details of the development of this feature.&lt;br /&gt;
&lt;br /&gt;
== Platform ==&lt;br /&gt;
&lt;br /&gt;
GNU Wget can be installed on Unix-like systems (UNIX, Linux), Mac OS, and Windows computers.&lt;br /&gt;
&lt;br /&gt;
=== Installation ===&lt;br /&gt;
&lt;br /&gt;
* Unix-like systems: Most package managers include Wget, but they may not include the latest version. To get a later version with support for WARC, for example, Linux and UNIX users should compile the latest version of the source code following the instructions at http://wget.addictivecode.org/FrequentlyAskedQuestions#How_do_I_compile_Wget.3F.&lt;br /&gt;
&lt;br /&gt;
* Macintosh: The default Mac OS does not include Wget. Source code can be compiled for Mac OS X or users can install an alternative package manager such as Homebrew (it is unknown which version of Wget Homebrew installs). See http://coolestguidesontheplanet.com/install-and-configure-wget-on-os-x/ for instructions on how to install from source.&lt;br /&gt;
&lt;br /&gt;
* Windows: packages for later versions of Wget compiled for Windows are available at http://eternallybored.org/misc/wget/.&lt;br /&gt;
&lt;br /&gt;
==Documentation==&lt;br /&gt;
The user manual is available at http://www.gnu.org/software/wget/manual/wget.html. The manual is also available via man wget in Unix-like systems.&lt;br /&gt;
&lt;br /&gt;
Additional documentation, including an FAQ, is available on the Wget wiki, http://wget.addictivecode.org/Wget.&lt;br /&gt;
&lt;br /&gt;
= User Experiences =&lt;br /&gt;
&lt;br /&gt;
* Milligan, Ian. (2012). Automated downloading with Wget. http://programminghistorian.org/lessons/automated-downloading-with-wget&lt;br /&gt;
* ArchiveTeam. (2014). Wget. http://www.archiveteam.org/index.php?title=Wget&lt;br /&gt;
&lt;br /&gt;
= Development Activity =&lt;br /&gt;
&lt;br /&gt;
{{Infobox_tool_details&lt;br /&gt;
|ohloh_id=Wget&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Danielle plumer</name></author>
	</entry>
	<entry>
		<id>https://coptr.digipres.org/index.php?title=File:Gnu.jpg&amp;diff=1911</id>
		<title>File:Gnu.jpg</title>
		<link rel="alternate" type="text/html" href="https://coptr.digipres.org/index.php?title=File:Gnu.jpg&amp;diff=1911"/>
		<updated>2014-10-01T01:47:32Z</updated>

		<summary type="html">&lt;p&gt;Danielle plumer: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;&lt;/div&gt;</summary>
		<author><name>Danielle plumer</name></author>
	</entry>
	<entry>
		<id>https://coptr.digipres.org/index.php?title=GNU_Wget&amp;diff=1909</id>
		<title>GNU Wget</title>
		<link rel="alternate" type="text/html" href="https://coptr.digipres.org/index.php?title=GNU_Wget&amp;diff=1909"/>
		<updated>2014-10-01T00:58:14Z</updated>

		<summary type="html">&lt;p&gt;Danielle plumer: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Infobox_tool&lt;br /&gt;
|purpose= Non-interactive network downloader &lt;br /&gt;
|image=&lt;br /&gt;
|homepage=http://www.gnu.org/software/wget/&lt;br /&gt;
|license=GNU General Public License&lt;br /&gt;
|platforms=Unix, Linux, Windows, Macintosh&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Delete the Categories that do not apply --&amp;gt;&lt;br /&gt;
[[Category:Web Crawl]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Description =&lt;br /&gt;
GNU Wget is a free software package for retrieving files using HTTP,  HTTPS and FTP,  the most widely-used Internet protocols. It is a non-interactive command line tool,  so it may easily be called from scripts,  cron jobs,  terminals without X-Windows support,  etc. &lt;br /&gt;
&lt;br /&gt;
== Features ==&lt;br /&gt;
&lt;br /&gt;
From the Wget manual: &lt;br /&gt;
&lt;br /&gt;
* Wget is non-interactive, meaning that it can work in the background while the user is not logged on. This allows you to start a retrieval and disconnect from the system, letting Wget finish the work. By contrast, most Web browsers require the user’s constant presence, which can be a great hindrance when transferring a lot of data.&lt;br /&gt;
* Wget can follow links in HTML, XHTML, and CSS pages, to create local versions of remote web sites, fully recreating the directory structure of the original site. This is sometimes referred to as “recursive downloading.” While doing that, Wget respects the Robot Exclusion Standard (/robots.txt).&lt;br /&gt;
* Wget can be instructed to convert the links in downloaded files to point at the local files, for offline viewing.&lt;br /&gt;
* File name wildcard matching and recursive mirroring of directories are available when retrieving via FTP. Wget can read the time-stamp information given by both HTTP and FTP servers, and store it locally. Thus Wget can see if the remote file has changed since last retrieval, and automatically retrieve the new version if it has. This makes Wget suitable for mirroring of FTP sites, as well as home pages.&lt;br /&gt;
* Wget has been designed for robustness over slow or unstable network connections; if a download fails due to a network problem, it will keep retrying until the whole file has been retrieved. If the server supports regetting, it will instruct the server to continue the download from where it left off.&lt;br /&gt;
* Wget supports proxy servers, which can lighten the network load, speed up retrieval, and provide access behind firewalls. Wget uses passive FTP downloading by default; active FTP is an option.&lt;br /&gt;
* Wget supports IP version 6, the next generation of IP. IPv6 is autodetected at compile-time, and can be disabled at either build or run time. Binaries built with IPv6 support work well in both IPv4-only and dual family environments.&lt;br /&gt;
* Built-in features offer mechanisms to tune which links you wish to follow (see Following Links).&lt;br /&gt;
* The progress of individual downloads is traced using a progress gauge. Interactive downloads are tracked using a “thermometer”-style gauge, whereas non-interactive ones are traced with dots, each dot representing a fixed amount of data received (1KB by default). Either gauge can be customized to your preferences.&lt;br /&gt;
* Most of the features are fully configurable, either through command line options, or via the initialization file .wgetrc (see Startup File). Wget allows you to define global startup files (/usr/local/etc/wgetrc by default) for site settings. You can also specify the location of a startup file with the --config option.&lt;br /&gt;
* Finally, GNU Wget is free software. This means that everyone may use it, redistribute it and/or modify it under the terms of the GNU General Public License, as published by the Free Software Foundation (see the file COPYING that came with GNU Wget, for details).&lt;br /&gt;
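The startup-file mechanism described above can be sketched with a minimal .wgetrc; the option names below are taken from the Wget manual, but the values are purely illustrative:&lt;br /&gt;
&lt;br /&gt;
```
# ~/.wgetrc -- per-user startup file (values are examples only)
tries = 3          # retry failed downloads up to 3 times
wait = 1           # wait 1 second between retrievals
timestamping = on  # only re-fetch files newer than the local copy
robots = on        # honour the Robot Exclusion Standard
```
&lt;br /&gt;
The same settings placed in /usr/local/etc/wgetrc would apply site-wide, and a per-invocation file can be selected with the --config option.&lt;br /&gt;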
&lt;br /&gt;
As of version 1.14, Wget supports WARC output. See http://www.archiveteam.org/index.php?title=Wget_with_WARC_output for details of the development of this feature.&lt;br /&gt;
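A minimal sketch of the WARC feature, assuming Wget 1.14 or later; example.org is a stand-in URL, and the command is shown for illustration rather than as a tested recipe:&lt;br /&gt;
&lt;br /&gt;
```
# Fetch a page one level deep and record the HTTP traffic as a WARC file;
# this produces example.warc.gz alongside the downloaded files.
wget --recursive --level=1 --warc-file=example https://example.org/
```
&lt;br /&gt;
The --warc-file option names the output file (Wget appends .warc.gz); for a full-site capture, --mirror can replace --recursive --level=1.&lt;br /&gt;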
&lt;br /&gt;
== Platform ==&lt;br /&gt;
&lt;br /&gt;
GNU Wget can be installed on Unix-like systems (UNIX, Linux), Mac OS, and Windows computers.&lt;br /&gt;
&lt;br /&gt;
=== Installation ===&lt;br /&gt;
&lt;br /&gt;
* Unix-like systems: Most package managers include Wget, but they may not include the latest version. To get a later version with support for WARC, for example, Linux and UNIX users should compile the latest version of the source code following the instructions at http://wget.addictivecode.org/FrequentlyAskedQuestions#How_do_I_compile_Wget.3F.&lt;br /&gt;
&lt;br /&gt;
* Macintosh: Mac OS X does not include Wget by default. Users can compile the source code for Mac OS X or install it through a third-party package manager such as Homebrew (which version of Wget Homebrew installs has not been verified). See http://coolestguidesontheplanet.com/install-and-configure-wget-on-os-x/ for instructions on how to install from source.&lt;br /&gt;
&lt;br /&gt;
* Windows: Packages for later versions of Wget compiled for Windows are available at http://eternallybored.org/misc/wget/.&lt;br /&gt;
&lt;br /&gt;
==Documentation==&lt;br /&gt;
The user manual is available at http://www.gnu.org/software/wget/manual/wget.html. On Unix-like systems the manual is also available via the man wget command.&lt;br /&gt;
&lt;br /&gt;
Additional documentation, including an FAQ, is available on the Wget wiki, http://wget.addictivecode.org/Wget.&lt;br /&gt;
&lt;br /&gt;
= User Experiences =&lt;br /&gt;
&lt;br /&gt;
* Milligan, Ian. (2012). Automated downloading with Wget. http://programminghistorian.org/lessons/automated-downloading-with-wget&lt;br /&gt;
* ArchiveTeam. (2014). Wget. http://www.archiveteam.org/index.php?title=Wget&lt;br /&gt;
&lt;br /&gt;
= Development Activity =&lt;br /&gt;
&lt;br /&gt;
{{Infobox_tool_details&lt;br /&gt;
|ohloh_id=Wget&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Danielle plumer</name></author>
	</entry>
	<entry>
		<id>https://coptr.digipres.org/index.php?title=GNU_Wget&amp;diff=1908</id>
		<title>GNU Wget</title>
		<link rel="alternate" type="text/html" href="https://coptr.digipres.org/index.php?title=GNU_Wget&amp;diff=1908"/>
		<updated>2014-10-01T00:31:43Z</updated>

		<summary type="html">&lt;p&gt;Danielle plumer: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;{{Infobox_tool&lt;br /&gt;
|purpose= Non-interactive network downloader &lt;br /&gt;
|image=&lt;br /&gt;
|homepage=http://www.gnu.org/software/wget/&lt;br /&gt;
|license=GNU General Public License&lt;br /&gt;
|platforms=Unix, Linux, Windows, Macintosh&lt;br /&gt;
}}&lt;br /&gt;
&lt;br /&gt;
&amp;lt;!-- Delete the Categories that do not apply --&amp;gt;&lt;br /&gt;
[[Category:Web Crawl]]&lt;br /&gt;
&lt;br /&gt;
&lt;br /&gt;
= Description =&lt;br /&gt;
GNU Wget is a free software package for retrieving files using HTTP, HTTPS, and FTP, the most widely used Internet protocols. It is a non-interactive command-line tool, so it may easily be called from scripts, cron jobs, terminals without X-Windows support, etc.&lt;br /&gt;
&lt;br /&gt;
== Features ==&lt;br /&gt;
&lt;br /&gt;
From the Wget manual: &lt;br /&gt;
&lt;br /&gt;
* Wget is non-interactive, meaning that it can work in the background while the user is not logged on. This allows you to start a retrieval and disconnect from the system, letting Wget finish the work. By contrast, most Web browsers require the user’s constant presence, which can be a great hindrance when transferring a lot of data.&lt;br /&gt;
* Wget can follow links in HTML, XHTML, and CSS pages, to create local versions of remote web sites, fully recreating the directory structure of the original site. This is sometimes referred to as “recursive downloading.” While doing that, Wget respects the Robot Exclusion Standard (/robots.txt).&lt;br /&gt;
* Wget can be instructed to convert the links in downloaded files to point at the local files, for offline viewing.&lt;br /&gt;
* File name wildcard matching and recursive mirroring of directories are available when retrieving via FTP. Wget can read the time-stamp information given by both HTTP and FTP servers, and store it locally. Thus Wget can see if the remote file has changed since last retrieval, and automatically retrieve the new version if it has. This makes Wget suitable for mirroring of FTP sites, as well as home pages.&lt;br /&gt;
* Wget has been designed for robustness over slow or unstable network connections; if a download fails due to a network problem, it will keep retrying until the whole file has been retrieved. If the server supports regetting, it will instruct the server to continue the download from where it left off.&lt;br /&gt;
* Wget supports proxy servers, which can lighten the network load, speed up retrieval, and provide access behind firewalls. Wget uses passive FTP downloading by default; active FTP is an option.&lt;br /&gt;
* Wget supports IP version 6, the next generation of IP. IPv6 is autodetected at compile-time, and can be disabled at either build or run time. Binaries built with IPv6 support work well in both IPv4-only and dual family environments.&lt;br /&gt;
* Built-in features offer mechanisms to tune which links you wish to follow (see Following Links).&lt;br /&gt;
* The progress of individual downloads is traced using a progress gauge. Interactive downloads are tracked using a “thermometer”-style gauge, whereas non-interactive ones are traced with dots, each dot representing a fixed amount of data received (1KB by default). Either gauge can be customized to your preferences.&lt;br /&gt;
* Most of the features are fully configurable, either through command line options, or via the initialization file .wgetrc (see Startup File). Wget allows you to define global startup files (/usr/local/etc/wgetrc by default) for site settings. You can also specify the location of a startup file with the --config option.&lt;br /&gt;
* Finally, GNU Wget is free software. This means that everyone may use it, redistribute it and/or modify it under the terms of the GNU General Public License, as published by the Free Software Foundation (see the file COPYING that came with GNU Wget, for details).&lt;br /&gt;
&lt;br /&gt;
As of version 1.14, Wget supports WARC output. See http://www.archiveteam.org/index.php?title=Wget_with_WARC_output for details of the development of this feature.&lt;br /&gt;
&lt;br /&gt;
==Documentation==&lt;br /&gt;
The user manual is available at http://www.gnu.org/software/wget/manual/wget.html. On Unix-like systems the manual is also available via the man wget command.&lt;br /&gt;
&lt;br /&gt;
Additional documentation, including an FAQ, is available on the Wget wiki, http://wget.addictivecode.org/Wget.&lt;br /&gt;
&lt;br /&gt;
== Installation ==&lt;br /&gt;
&lt;br /&gt;
Unix-like systems: Most package managers include Wget, but they may not include the latest version. To get a later version with support for WARC, for example, Linux, Unix, and Macintosh users should compile the latest version of the source code following the instructions at http://wget.addictivecode.org/FrequentlyAskedQuestions#How_do_I_compile_Wget.3F.&lt;br /&gt;
&lt;br /&gt;
Windows: Packages for later versions of Wget compiled for Windows are available at http://eternallybored.org/misc/wget/.&lt;br /&gt;
&lt;br /&gt;
= User Experiences =&lt;br /&gt;
&lt;br /&gt;
* Milligan, Ian. (2012). Automated downloading with Wget. http://programminghistorian.org/lessons/automated-downloading-with-wget&lt;br /&gt;
* ArchiveTeam. (2014). Wget. http://www.archiveteam.org/index.php?title=Wget&lt;br /&gt;
&lt;br /&gt;
= Development Activity =&lt;br /&gt;
&lt;br /&gt;
{{Infobox_tool_details&lt;br /&gt;
|ohloh_id=Wget&lt;br /&gt;
}}&lt;/div&gt;</summary>
		<author><name>Danielle plumer</name></author>
	</entry>
	<entry>
		<id>https://coptr.digipres.org/index.php?title=User:Danielle_plumer&amp;diff=1907</id>
		<title>User:Danielle plumer</title>
		<link rel="alternate" type="text/html" href="https://coptr.digipres.org/index.php?title=User:Danielle_plumer&amp;diff=1907"/>
		<updated>2014-09-30T23:36:47Z</updated>

		<summary type="html">&lt;p&gt;Danielle plumer: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== About Me ==&lt;br /&gt;
I am a digital collections consultant working with cultural heritage institutions interested in putting their collections online. I work primarily in the areas of project planning and metadata creation, standards, and normalization. I also do research into information extraction from textual materials and natural language processing for information retrieval.&lt;br /&gt;
&lt;br /&gt;
From 2005-2011, I coordinated the Texas Heritage Online program at the Texas State Library and Archives Commission, where I consulted with and assisted participants from the library, archives, and museum communities who were developing digital library projects. &lt;br /&gt;
&lt;br /&gt;
I also teach graduate-level courses for the College of Information at the University of North Texas and for the School of Information at The University of Texas at Austin, including courses on metadata, digitization, and digital preservation. In addition to these formal courses, I developed and co-taught a series of workshops offered throughout Texas as part of a grant funded by the Institute for Museum and Library Services. These workshops covered Digital Project Planning and Management Basics, Digital Archives Systems and Applications, Metadata Standards and Crosswalks, and Principles of Controlled Vocabulary and Thesaurus Design as well as supplemental courses on Digital Preservation Planning and Management and Digital Preservation Tools.&lt;br /&gt;
&lt;br /&gt;
I earned an M.S. in Information Studies at The University of Texas at Austin in 2003. Prior to that, I earned a Ph.D. in English at the University of California, Davis.&lt;br /&gt;
&lt;br /&gt;
==== Personal Links: ====&lt;br /&gt;
* [[Help:Editing | Guidelines for Editing COPTR]]&lt;br /&gt;
* [https://www.mediawiki.org/wiki/Help:Editing MediaWiki Editing Help]&lt;br /&gt;
 &lt;br /&gt;
== Conflict of Interest ==&lt;br /&gt;
I am an independent consultant. My business, [http://www.dcplumer.com dcplumer associates], works with various libraries, archives, museums, and nonprofit organizations, and some of my paid work involves search engine optimization and use of social media (including Wikipedia) for digital collections. My goal as an editor is to advance the aims of COPTR. If other editors feel that my editing behavior is not advancing the aims of COPTR or that I am not following the [[Help:Editing | Guidelines for Editing COPTR]], I will be happy to modify my editing behavior. Please do not hesitate to contact me, via my [[User:Danielle_plumer | user talk]] page, if you have questions about my behavior.&lt;/div&gt;</summary>
		<author><name>Danielle plumer</name></author>
	</entry>
	<entry>
		<id>https://coptr.digipres.org/index.php?title=User:Danielle_plumer&amp;diff=1906</id>
		<title>User:Danielle plumer</title>
		<link rel="alternate" type="text/html" href="https://coptr.digipres.org/index.php?title=User:Danielle_plumer&amp;diff=1906"/>
		<updated>2014-09-30T23:35:29Z</updated>

		<summary type="html">&lt;p&gt;Danielle plumer: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== About Me ==&lt;br /&gt;
I am a digital collections consultant working with cultural heritage institutions interested in putting their collections online. I work primarily in the areas of project planning and metadata creation, standards, and normalization. I also do research into information extraction from textual materials and natural language processing for information retrieval.&lt;br /&gt;
&lt;br /&gt;
From 2005-2011, I coordinated the Texas Heritage Online program at the Texas State Library and Archives Commission, where I consulted with and assisted participants from the library, archives, and museum communities who were developing digital library projects. &lt;br /&gt;
&lt;br /&gt;
I also teach graduate-level courses for the College of Information at the University of North Texas and for the School of Information at The University of Texas at Austin, including courses on metadata, digitization, and digital preservation. In addition to these formal courses, I developed and co-taught a series of workshops offered throughout Texas as part of a grant funded by the Institute for Museum and Library Services. These workshops covered Digital Project Planning and Management Basics, Digital Archives Systems and Applications, Metadata Standards and Crosswalks, and Principles of Controlled Vocabulary and Thesaurus Design as well as supplemental courses on Digital Preservation Planning and Management and Digital Preservation Tools.&lt;br /&gt;
&lt;br /&gt;
I earned an M.S. in Information Studies at The University of Texas at Austin in 2003. Prior to that, I earned a Ph.D. in English at the University of California, Davis.&lt;br /&gt;
&lt;br /&gt;
==== Personal Links: ====&lt;br /&gt;
* [[Help:Editing | Guidelines for Editing COPTR]]&lt;br /&gt;
* [https://www.mediawiki.org/wiki/Help:Editing MediaWiki Editing Help]&lt;br /&gt;
 &lt;br /&gt;
== Conflict of Interest ==&lt;br /&gt;
I am an independent consultant. My business, [http://www.dcplumer.com dcplumer associates], works with various libraries, archives, museums, and nonprofit organizations, and some of my paid work involves search engine optimization and use of social media (including Wikipedia) for digital collections. My goal as an editor is to advance the aims of COPTR. If other editors feel that my editing behavior is not advancing the aims of COPTR or that I am not following the [[Help:Editing | Guidelines for Editing COPTR]], I will be happy to modify my editing behavior. Please do not hesitate to contact me, via my [http://coptr.digipres.org/User:Danielle_plumer user talk] page, if you have questions about my behavior.&lt;/div&gt;</summary>
		<author><name>Danielle plumer</name></author>
	</entry>
	<entry>
		<id>https://coptr.digipres.org/index.php?title=User:Danielle_plumer&amp;diff=1905</id>
		<title>User:Danielle plumer</title>
		<link rel="alternate" type="text/html" href="https://coptr.digipres.org/index.php?title=User:Danielle_plumer&amp;diff=1905"/>
		<updated>2014-09-30T23:34:37Z</updated>

		<summary type="html">&lt;p&gt;Danielle plumer: &lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== About Me ==&lt;br /&gt;
I am a digital collections consultant working with cultural heritage institutions interested in putting their collections online. I work primarily in the areas of project planning and metadata creation, standards, and normalization. I also do research into information extraction from textual materials and natural language processing for information retrieval.&lt;br /&gt;
&lt;br /&gt;
From 2005-2011, I coordinated the Texas Heritage Online program at the Texas State Library and Archives Commission, where I consulted with and assisted participants from the library, archives, and museum communities who were developing digital library projects. &lt;br /&gt;
&lt;br /&gt;
I also teach graduate-level courses for the College of Information at the University of North Texas and for the School of Information at The University of Texas at Austin, including courses on metadata, digitization, and digital preservation. In addition to these formal courses, I developed and co-taught a series of workshops offered throughout Texas as part of a grant funded by the Institute for Museum and Library Services. These workshops covered Digital Project Planning and Management Basics, Digital Archives Systems and Applications, Metadata Standards and Crosswalks, and Principles of Controlled Vocabulary and Thesaurus Design as well as supplemental courses on Digital Preservation Planning and Management and Digital Preservation Tools.&lt;br /&gt;
&lt;br /&gt;
I earned an M.S. in Information Studies at The University of Texas at Austin in 2003. Prior to that, I earned a Ph.D. in English at the University of California, Davis.&lt;br /&gt;
&lt;br /&gt;
==== Personal Links: ====&lt;br /&gt;
* [[Help:Editing | Guidelines for Editing COPTR]]&lt;br /&gt;
* [https://www.mediawiki.org/wiki/Help:Editing MediaWiki Editing Help]&lt;br /&gt;
 &lt;br /&gt;
== Conflict of Interest ==&lt;br /&gt;
I am an independent consultant. My business, [http://www.dcplumer.com dcplumer associates], works with various libraries, archives, museums, and nonprofit organizations, and some of my paid work involves search engine optimization and use of social media (including Wikipedia) for digital collections. My goal as an editor is to advance the aims of COPTR. If other editors feel that my editing behavior is not advancing the aims of COPTR or that I am not following the [[Help:Editing | Guidelines for Editing COPTR]], I will be happy to modify my editing behavior. Please do not hesitate to contact me, via my User:Danielle_plumer page, if you have questions about my behavior.&lt;/div&gt;</summary>
		<author><name>Danielle plumer</name></author>
	</entry>
	<entry>
		<id>https://coptr.digipres.org/index.php?title=User:Danielle_plumer&amp;diff=1904</id>
		<title>User:Danielle plumer</title>
		<link rel="alternate" type="text/html" href="https://coptr.digipres.org/index.php?title=User:Danielle_plumer&amp;diff=1904"/>
		<updated>2014-09-30T23:32:10Z</updated>

		<summary type="html">&lt;p&gt;Danielle plumer: Created page with &amp;quot;== About Me == I am a digital collections consultant working with cultural heritage institutions interested in putting their collections online. I work primarily in the areas ...&amp;quot;&lt;/p&gt;
&lt;hr /&gt;
&lt;div&gt;== About Me ==&lt;br /&gt;
I am a digital collections consultant working with cultural heritage institutions interested in putting their collections online. I work primarily in the areas of project planning and metadata creation, standards, and normalization. I also do research into information extraction from textual materials and natural language processing for information retrieval.&lt;br /&gt;
&lt;br /&gt;
From 2005-2011, I coordinated the Texas Heritage Online program at the Texas State Library and Archives Commission, where I consulted with and assisted participants from the library, archives, and museum communities who were developing digital library projects. &lt;br /&gt;
&lt;br /&gt;
I also teach graduate-level courses for the College of Information at the University of North Texas and for the School of Information at The University of Texas at Austin, including courses on metadata, digitization, and digital preservation. In addition to these formal courses, I developed and co-taught a series of workshops offered throughout Texas as part of a grant funded by the Institute for Museum and Library Services. These workshops covered Digital Project Planning and Management Basics, Digital Archives Systems and Applications, Metadata Standards and Crosswalks, and Principles of Controlled Vocabulary and Thesaurus Design as well as supplemental courses on Digital Preservation Planning and Management and Digital Preservation Tools.&lt;br /&gt;
&lt;br /&gt;
I earned an M.S. in Information Studies at The University of Texas at Austin in 2003. Prior to that, I earned a Ph.D. in English at the University of California, Davis.&lt;br /&gt;
&lt;br /&gt;
==== Personal Links: ====&lt;br /&gt;
* [[Help:Editing | Guidelines for Editing COPTR]]&lt;br /&gt;
* [https://www.mediawiki.org/wiki/Help:Editing MediaWiki Editing Help]&lt;br /&gt;
 &lt;br /&gt;
== Conflict of Interest ==&lt;br /&gt;
I am an independent consultant. My business, [http://www.dcplumer.com dcplumer associates], works with various libraries, archives, museums, and nonprofit organizations, and some of my paid work involves search engine optimization and use of social media (including Wikipedia) for digital collections. My goal as an editor is to advance the aims of COPTR. If other editors feel that my editing behavior is not advancing the aims of COPTR or that I am not following the [[Help:Editing | Guidelines for Editing COPTR]], I will be happy to modify my editing behavior. Please do not hesitate to contact me, via my [[User:Danielle_plumer | user talk]] page, if you have questions about my behavior.&lt;/div&gt;</summary>
		<author><name>Danielle plumer</name></author>
	</entry>
</feed>