NIOD Institute for War, Holocaust, and Genocide Studies · PyLaia · Published February 14, 2023

NIOD_WarLet_1935-1950_NoBasemodel

Text Recognition

Description

The HTR model 'NIOD_WarLet_1935-1950_NoBasemodel' was trained on 968 'Ground Truth' transcriptions of high-resolution scans of handwritten letters. The letters are all written in Dutch and date from the period 1935-1950. The training set contains personal correspondence from a wide variety of letter writers (e.g., children, soldiers, Jewish people in hiding). All of this correspondence is part of the archival collection '247 Correspondentie', held by the NIOD Institute for War, Holocaust, and Genocide Studies in Amsterdam.

This model was created as part of the project 'First-Hand Accounts of War: War letters (1935-1950) from NIOD digitised'. All documents used for training and validation were scanned and transcribed within this project, which ran from 2020 to 2023 and was funded by the Mondriaan Fund, the Dutch Ministry of Health, Welfare, and Sport, and the NIOD Institute for War, Holocaust, and Genocide Studies in Amsterdam.

The 'Ground Truth' training set was created by project members Annelies van Nispen, Carlijn Keijzer, and Milan van Lange. Additional transcription and correction of 'Ground Truth' transcriptions was carried out, under the supervision of Muriël Bouman, by citizen scientists Hillebrand Verkroost, Bart Cohen, Evelien Bachrach, Marjo Janssens, and Cocky Sietses. The validation set contains a sample of 17 'Ground Truth' transcriptions from various writers and sub-collections.

The model was trained with PyLaia HTR for a maximum of 500 epochs (321 epochs actually trained) at a learning rate of 0.0003. No base model was used.

Low error rate: 5.3% CER

Character Error Rate (CER) measures the percentage of characters incorrectly recognised. Lower is better. This model scored 5.3% on its validation set. As a rule of thumb, a CER below 10% is considered good for most handwritten material. This is a larger model trained on diverse material, which generally makes it more robust across different handwriting styles. That said, larger training sets also make it harder to push the CER down further.

Measured on the model's own validation data. Results on your documents may differ depending on handwriting style, document condition, language, and how closely your material resembles the training data.
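To make the metric concrete, CER is conventionally computed as the Levenshtein (edit) distance between the reference transcription and the model's output, divided by the length of the reference. The sketch below is illustrative only; it is not the evaluation code used by Transkribus or PyLaia, and the sample strings are invented.

```python
def levenshtein(ref: str, hyp: str) -> int:
    """Minimum number of character insertions, deletions, and
    substitutions needed to turn ref into hyp (edit distance)."""
    prev = list(range(len(hyp) + 1))
    for i, r in enumerate(ref, start=1):
        curr = [i]
        for j, h in enumerate(hyp, start=1):
            cost = 0 if r == h else 1
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + cost))  # substitution
        prev = curr
    return prev[-1]

def cer(reference: str, hypothesis: str) -> float:
    """Character Error Rate as a percentage of the reference length."""
    if not reference:
        return 0.0 if not hypothesis else 100.0
    return 100.0 * levenshtein(reference, hypothesis) / len(reference)

# One wrong character in a 16-character line gives a CER of 6.25%.
print(cer("brieven uit 1943", "brieven uit 1948"))
```

A corpus-level CER (like the 5.3% reported above) is normally the total edit distance over all validation lines divided by the total number of reference characters, rather than an average of per-line percentages, so long lines weigh more than short ones.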

Words: 160,955
Lines: 20,935
Training Pages: 968
Model ID: 50053
Languages: Dutch