Recoll: Indexing performance and index sizes

The time needed to index a given set of documents and the resulting index size depend on many factors. The index size is mostly determined by the total data size and the proportion of actual text content, while indexing speed depends on CPU speed, available memory, and the average file size and format.

We try here to give a number of reference points which can be used to roughly estimate the resources needed to create and store an index. Obviously, your data set will never exactly match one of the samples, so the results cannot be precisely predicted.

The following data was obtained on a machine with a 1800 MHz AMD Duron CPU, 768 MB of RAM, and a 7200 RPM, 160 GB IDE disk, running SUSE 10.1.

recollindex (version 1.8.2 with Xapian 1.0.0) was executed with the default flush threshold value. The peak process memory usage is the one reported by ps.

Data                                        Data size                  Indexing time   Index size   Peak process memory
Random PDFs harvested on Google             1.7 GB, 3564 files         27 min          230 MB       225 MB
IETF mailing list archive                   211 MB, 44,000 messages    8 min           350 MB       90 MB
Partial Wikipedia dump                      15 GB, one million files   6 h 30 min      10 GB        324 MB
Random PDFs (Recoll 1.9, idxflushmb = 10)   1.7 GB, 3564 files         25 min          262 MB       65 MB

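As a rough illustration of how such reference points can be used, here is a small back-of-the-envelope sketch (not part of Recoll; the estimate function and profile names are purely illustrative) which linearly scales the PDF and mail figures from the table to a hypothetical data set. Real results will vary widely with document formats and average file size.

    # Crude linear extrapolation from the reference points above:
    # PDFs: 1.7 GB indexed in 27 min, producing a 230 MB index.
    # Mail: 211 MB indexed in 8 min, producing a 350 MB index.
    PROFILES = {
        # profile: (indexing minutes per GB of data, index size / data size)
        "pdf":  (27 / 1.7, 0.230 / 1.7),
        "mail": (8 / 0.211, 350 / 211),
    }

    def estimate(data_gb, profile="pdf"):
        """Return (indexing minutes, index size in GB) for data_gb GB of data."""
        minutes_per_gb, index_ratio = PROFILES[profile]
        return data_gb * minutes_per_gb, data_gb * index_ratio

    if __name__ == "__main__":
        minutes, index_gb = estimate(10.0, "pdf")  # e.g. 10 GB of random PDFs
        print(f"~{minutes:.0f} min of indexing, ~{index_gb:.1f} GB of index")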
Notice how the index for the mail archive is bigger than the original data. This is typical for myriads of small, pure-text documents. The expansion factor would of course be even worse with compressed folders (the test was run on uncompressed data).

The last test was performed with Recoll 1.9.0, which has an adjustable flush threshold (the idxflushmb parameter), here set to 10 MB. Notice the much lower peak memory usage, with no performance degradation. The resulting index is bigger, though; the exact reason is not known to me, possibly additional fragmentation.
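For reference, the flush threshold is an ordinary configuration variable. A minimal example of setting it, assuming the default personal configuration file location (~/.recoll/recoll.conf):

    # ~/.recoll/recoll.conf
    # Flush the index to disk after roughly this many megabytes of indexed
    # text (Recoll 1.9 and later); lower values reduce peak memory usage.
    idxflushmb = 10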