{{Infobox tool
|purpose=Matchbox: Duplicate detection tool for digital document collections.
|homepage=https://github.com/openplanets/scape/tree/master/pc-qa-matchbox
|license=Open source
|function=Quality Assurance, De-Duplication
|content=Image
}}
{{Infobox tool details
|ohloh_id=Matchbox Tool
}}
= Description =
The Matchbox tool finds duplicate pairs in a collection of digital documents based on SIFT features and SSIM methods. It takes a collection path and associated parameters as input. Three scenarios are currently implemented:
- Duplicate search in one turn (parameter ‘all’)
- Professional duplicate search (an experienced user can execute individual steps of the ‘FindDuplicates’ workflow)
- Quick check whether two documents are duplicates (based on a previously built BoW dictionary)
Further parameters that influence and adjust the duplicate analysis are currently being investigated.
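
To illustrate how these scenarios relate to the individual workflow steps listed under Usage below, here is a minimal Python sketch of a dispatcher. It is hypothetical: the step ordering and the helper functions are assumptions, not Matchbox's actual control flow.

<pre>
# Hypothetical sketch: the step names come from the FindDuplicates usage below,
# but this ordering and these helpers are assumptions, not the tool's own code.
import subprocess

ASSUMED_STEP_ORDER = ["extract", "train", "bowhist", "compare"]


def run_step(collection_dir, step):
    """Scenario 2: an experienced user runs one workflow step at a time."""
    subprocess.check_call(["python2.7", "FindDuplicates.py", collection_dir, step])


def run_all(collection_dir):
    """Scenario 1: duplicate search in one turn, as with the 'all' parameter."""
    for step in ASSUMED_STEP_ORDER:
        run_step(collection_dir, step)
</pre>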
Image processing method:
The image processing algorithm can be described in 4 steps:
1. Document feature extraction
- Interest point detection (applying Scale Invariant Feature Transform (SIFT) keypoint extraction)
- Derivation of local feature descriptors (invariant to geometric or radiometric distortions)
2. Learning visual dictionary
- Clustering applied to the SIFT descriptors of all images using the k-means algorithm
- Run over the collection and collect the local descriptors into a visual dictionary using the Bag-of-Words (BoW) approach
3. Create a visual histogram for each image document
4. Detect similar images based on the visual histograms and local descriptors. Evaluate a similarity score by pair-wise comparison of the corresponding keyword frequency histograms of all documents, then conduct a structural similarity analysis applying the Structural SIMilarity (SSIM) approach (1 means identical, 0 means very different):
- Rotate
- Scale
- Mask
- Overlay
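
To make the four steps concrete, the following Python sketch strings them together using OpenCV's SIFT and Bag-of-Words helpers and scikit-image's SSIM. It is an illustrative reconstruction rather than the Matchbox source code: the function names, the histogram-intersection first pass, and all thresholds are assumptions.

<pre>
# Illustrative sketch of the four-step pipeline, assuming OpenCV (cv2) with SIFT
# support and scikit-image are installed; this is not the Matchbox implementation.
import glob

import cv2
import numpy as np
from skimage.metrics import structural_similarity as ssim


def build_bow_dictionary(image_paths, vocab_size=1000):
    """Steps 1-2: extract SIFT descriptors and cluster them (k-means) into a visual dictionary."""
    sift = cv2.SIFT_create()
    trainer = cv2.BOWKMeansTrainer(vocab_size)
    for path in image_paths:
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        _, descriptors = sift.detectAndCompute(img, None)
        if descriptors is not None:
            trainer.add(np.float32(descriptors))
    return trainer.cluster()  # vocab_size x 128 cluster centres


def visual_histograms(image_paths, vocabulary):
    """Step 3: represent each document as a BoW histogram over the visual dictionary."""
    sift = cv2.SIFT_create()
    bow_extractor = cv2.BOWImgDescriptorExtractor(sift, cv2.BFMatcher(cv2.NORM_L2))
    bow_extractor.setVocabulary(vocabulary)
    histograms = {}
    for path in image_paths:
        img = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        hist = bow_extractor.compute(img, sift.detect(img, None))
        histograms[path] = hist[0] if hist is not None else np.zeros(len(vocabulary), np.float32)
    return histograms


def find_duplicates(image_paths, hist_threshold=0.9, ssim_threshold=0.9):
    """Step 4: pair-wise histogram comparison, then SSIM on the candidate pairs."""
    vocabulary = build_bow_dictionary(image_paths)
    histograms = visual_histograms(image_paths, vocabulary)
    duplicates = []
    for i, a in enumerate(image_paths):
        for b in image_paths[i + 1:]:
            # Histogram intersection as a cheap first-pass score (thresholds are illustrative).
            score = float(np.minimum(histograms[a], histograms[b]).sum())
            if score < hist_threshold:
                continue
            img_a = cv2.imread(a, cv2.IMREAD_GRAYSCALE)
            img_b = cv2.imread(b, cv2.IMREAD_GRAYSCALE)
            img_b = cv2.resize(img_b, (img_a.shape[1], img_a.shape[0]))
            # SSIM: 1 means identical, values near 0 mean very different.
            if ssim(img_a, img_b, data_range=255) >= ssim_threshold:
                duplicates.append((a, b, score))
    return duplicates


if __name__ == "__main__":
    collection = sorted(glob.glob("collection/*.png"))
    for a, b, score in find_duplicates(collection):
        print(a, b, score)
</pre>

The extract, train, bowhist and compare actions listed under Usage below appear to map onto these stages; a real run over a large collection would persist the extracted features and the dictionary between steps rather than recompute them as this sketch does.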
= Usage =
The FindDuplicates script can be invoked from the command line. For standard usage, two parameters are required: the path to the collection documents and ‘all’.
<pre>
scape/pc-qa-matchbox/Python# python2.7 FindDuplicates.py -h
usage: FindDuplicates.py [-h] [-threads THREADS|--threads THREADS]
                         [-sdk SDK|--sdk SDK]
                         [-precluster PRECLUSTER|--precluster PRECLUSTER]
                         [-clahe CLAHE|--clahe CLAHE]
                         [-config CONFIG|--config CONFIG]
                         [-featdir FEATDIR|--featdir FEATDIR]
                         [-bowsize BOWSIZE|--bowsize BOWSIZE]
                         [-csv|--csv] [-v]
                         dir all,extract,compare,train,bowhist,clean
</pre>
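
For example, a complete duplicate search over a collection, followed by the ‘clean’ action, could look like this (the collection path is illustrative):

<pre>
python2.7 FindDuplicates.py /path/to/collection all
python2.7 FindDuplicates.py /path/to/collection clean
</pre>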
= User Experiences =
Currently installed at the Austrian National Library.