comparing language performance and memory usage


With the small coding contest a few weeks ago we got many comments, so it is worth drawing a conclusion about the solutions in the different languages. Which language is easier to write, uses less memory, or performs better?

To clarify the question:

Remove duplicate lines from a file

We have a text file with 13 million entries, all numbers in the range from 1,000,000 to 100,000,000. Each line contains exactly one number. The file is unordered and contains about 1% duplicate entries. Remove the duplicates.

Be efficient with memory usage and/or find a performant solution. The script should be started with the filename and the number of lines and print the result to stdout.

benchmark preparation

First I generated the test files with random numbers using a simple Python script. All tests are started with bench.py over all test files. See the source code on github.com.
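The real generator script lives in the repository linked above; a rough sketch, in which the output filename, the argument handling and the plain uniform sampling are assumptions rather than the original code (the original presumably controls the ~1% duplicate rate explicitly), could look like this:

  import random, sys

  # write `count` random numbers from the allowed range, one per line
  count = int(sys.argv[1]) if len(sys.argv) > 1 else 13000000
  out = open("rand_numbers.txt", "w")
  for _ in xrange(count):
      out.write("%d\n" % random.randint(1000000, 100000000))
  out.close()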

The command line tool /usr/bin/time can measure the CPU and memory consumption. Each solution is started as a sub-process, and after each run the user time, system time and maximum memory usage are saved. The result is stored as a JavaScript object for the Highcharts chart. You can see the charts in the middle of this article.
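bench.py does the real measurement; the rough idea can be sketched like this (the GNU time format string and the parsing here are assumptions, not the actual bench.py code):

  import subprocess

  # run one solution under GNU time and collect user time, system time
  # and the maximum resident set size (in KB) from its stderr output
  def measure(cmd):
      proc = subprocess.Popen(["/usr/bin/time", "-f", "%U %S %M"] + cmd,
                              stdout=open("/dev/null", "w"),
                              stderr=subprocess.PIPE)
      _, err = proc.communicate()
      user, system, max_kb = err.strip().split()[-3:]
      return float(user), float(system), int(max_kb)

  print measure(["sort", "-u", "-n", "rand_numbers.txt"])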

Feel free to add new solutions and make some improvements.

first solution: sorting the numbers

The first solution in the comments was to sort all numbers. The expected memory usage is 13,000,000 * 4 bytes ≈ 50 MB (100,000,000 fits into a 32-bit integer), provided the sorting algorithm does not use an extra array for swapping. The average-case effort of the sorting algorithm (quicksort or merge sort) is O(n log n).

The command line tool sort can do this:

  sort -u rand_numbers.txt > unique_numbers.txt

A small optimization with sort is to compare numerically instead of as strings; it will use less memory:

  sort -u -n rand_numbers.txt > unique_numbers.txt

The same solution in C should be comparable with the sort command. But I could not find out which exact sorting algorithm is behind the qsort function: the implementation of the standard qsort function may be merge sort, partition-exchange sort or quicksort, and the memory usage will be higher than the pure memory needed for an array of integers.

solve_qsort.c

  #include <stdio.h>
  #include <stdlib.h>
  #include <stdint.h>

  //compare function for qsort: ascending order of two int32_t values
  int compare(const void *a, const void *b) {
    int32_t x = *(const int32_t *)a, y = *(const int32_t *)b;
    return (x > y) - (x < y);
  }

  int main(int argc, char *argv[]) {

    char line[100];
    int i = 0, last;
    FILE *fp = fopen(argv[1], "r");
    int count = atoi(argv[2]);
    //allocating the size for the n values
    int32_t *digits = (int32_t *)malloc(count * sizeof(int32_t));

    // reading the lines, convert into an int, push into array
    while (fgets(line, 100, fp)) {
      digits[i++] = (int32_t)atoi(line);
    }
    fclose(fp);

    //sort the complete array
    qsort(digits, count, sizeof(int32_t), compare);

    //print all entries, ignoring duplicates
    last = -1;
    for (i = 0; i < count; i++) {
      if (last != digits[i]) printf("%d\n", digits[i]);
      last = digits[i];
    }
    return 0;
  }

This example is simple and not optimized. The same solution in Python is not surprising (but shorter):

solve_number_sort.py

  import sys

  # sort the lines as strings and print each value only once
  last = ""
  for n in sorted(open(sys.argv[1])):
      if last != n:
          sys.stdout.write(n)
      last = n

This simple Python version sorts the input file as strings and prints the values (sys.stdout.write is faster than the print statement!). Converting the input into an integer array saves memory. The two result files differ considerably because sorting as numbers and sorting as strings produces different orders.
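A tiny interactive Python example illustrates why the two orderings diverge once numbers have different lengths:

  >>> sorted(["9999999\n", "10000000\n"])   # string sort: "1..." < "9..."
  ['10000000\n', '9999999\n']
  >>> sorted([9999999, 10000000])           # numeric sort
  [9999999, 10000000]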

solve_number_sort.py

  import sys

  # parse all lines as integers and sort them numerically
  values = map(int, open(sys.argv[1]))
  values.sort()

  last = 0
  for n in values:
      if last != n:
          sys.stdout.write("%d\n" % n)
      last = n

benchmark result of sorting

In the charts you can see the O(n log n) execution time. I separated the user and system time to distinguish the algorithm's calculation time from the system's reading time.

The non-linear computing time results from the randomized entries in the test files.

limiting the memory usage

The sort command can split the input into small chunks, sort each of them, save them to disk and merge them with limited memory usage. The command line tool ulimit -d limits the memory for all processes and forces sort to use temporary files, but I chose the built-in --buffer-size=SIZE (-S) option of the sort command instead:

  sort -u -n -S 20M rand_numbers.txt

After successful testing I built a memory-limited sorting variant in Python as a second example. The magic merge component can be found in the heapq module: it can merge a list of iterators over the pre-sorted chunks.

merge_sort.py

  import sys, tempfile, heapq

  limit = 40000

  def sortedFileIterator(digits):
      # sort one chunk, write it to a temporary file and return an
      # iterator that yields the values as integers again
      fp = tempfile.TemporaryFile()
      digits.sort()
      fp.write("\n".join(map(str, digits)))
      fp.seek(0)
      return (int(line) for line in fp)

  iters = []
  digits = []
  for line in file(sys.argv[1]):
      digits.append(int(line.strip()))
      if len(digits) == limit:
          iters.append(sortedFileIterator(digits))
          digits = []
  iters.append(sortedFileIterator(digits))

  # merge all sorted chunks and filter duplicates
  oldItem = -1
  for sortItem in heapq.merge(*iters):
      if oldItem != sortItem:
          print sortItem
      oldItem = sortItem

The two flat lines in the chart are the two examples with constant memory usage.

remove duplicates with hashmap

The second solution simply puts all entries into a hashmap. The values are used as keys, and the data structure removes the duplicate entries automatically. This can be seen in the Perl example.

Some languages offer a set: a data structure that stores only unique keys without values. The point of interest while benchmarking will be the memory usage of these "easy to use" built-in data structures for millions of integers.

perl command line

  perl -lne'exists $h{$_} or print $h{$_}=$_'

solve_set.py

  import sys
  for n in set(open(sys.argv[1])):
      sys.stdout.write(n)

solve_set.lua

  local set = {}
  for n in io.lines(arg[1]) do
      if not set[n] then
          print(n)
          set[n] = true
      end
  end

All of these solutions can be written in a short time and they work. But the memory usage is terrible: they need about 10 times the memory of the raw integer array. And the usual O(n) estimate for inserting and looking up n keys in a hashmap does not tell the whole story: brute-force inserting millions of keys repeatedly triggers the reorganisation of the hashmap's bucket tables. You can see this in the memory usage chart.
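To make the overhead tangible, here is a small illustrative Python 2 snippet (not part of the benchmark) comparing the size of a set's internal hash table with a packed 32-bit array of the same element count; sys.getsizeof() does not even include the stored integer objects themselves:

  import sys

  # size of the set's hash table alone vs. a packed 32-bit array;
  # the int objects stored in the set need additional memory on top
  values = set(range(1000000, 2000000))
  print "set hash table: %.1f MB" % (sys.getsizeof(values) / 1024.0 / 1024.0)
  print "packed int32:   %.1f MB" % (len(values) * 4 / 1024.0 / 1024.0)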

using a bitarray instead of a hashmap

The limits in the problem description allow a more elegant solution. The raw memory usage for 13 million 32-bit integers is ~49 MB. Mapping every integer from 1 million to 100 million to a bit position in linear memory needs only ~12 MB (99 million bits / 8). So the memory usage of the bitarray is lower and constant, and the computation for the very simple mapping function is cheap.

solve_bittarray.c

  #include <stdio.h>
  #include <stdlib.h>

  int main(int argc, char *argv[]) {

    char line[100];
    const int minValue = 1000000;
    const int maxValue = 100000000;
    // one bit per possible value; calloc zeroes the memory,
    // +1 byte covers the bit for maxValue itself
    char *bitarray = (char *)calloc((maxValue - minValue) / 8 + 1, 1);
    FILE *fp = fopen(argv[1], "r");

    int pos;

    while (fgets(line, 100, fp)) {
      pos = atoi(line) - minValue;
      if (!(bitarray[pos >> 3] & (1 << (pos % 8)))) {
        printf("%d\n", pos + minValue);
        bitarray[pos >> 3] |= (1 << (pos % 8));
      }
    }
    fclose(fp);
    return 0;
  }

We got a Python solution using the bitarray module. If you don't have the bitarray module you have to install it; on a default Ubuntu installation you need the python-dev and setuptools packages, after which bitarray is available via easy_install:

  sudo apt-get install python-dev python-setuptools
  sudo easy_install bitarray

The Python solution is short, but the execution time is much longer than that of the C variant.

solve_bitarray.py

  import sys, bitarray

  minValue = 1000000
  maxValue = 100000000
  bits = bitarray.bitarray(maxValue - minValue + 1)
  bits.setall(False)   # bitarray() does not initialize the bits

  for line in file(sys.argv[1]):
      i = int(line)
      if not bits[i - minValue]:
          bits[i - minValue] = True
          sys.stdout.write(line)

using mmap instead of normal file I/O

A co-worker rated the file access as more expensive than the computation and offered a variant with mmap. The mmap function maps the input file into memory so you can iterate over it as a byte array. Because the integer parsing is cheap, this variant should be comparable with the bitarray C variant.

https://github.com/ChristianHarms/uc_sniplets/blob/master/no_duplicates/...
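The linked variant itself is written in C; purely as an illustration of the idea, a comparable sketch in Python (assuming the input file ends with a newline) could combine the mmap module with bitarray:

  import sys, mmap, bitarray

  minValue, maxValue = 1000000, 100000000
  bits = bitarray.bitarray(maxValue - minValue + 1)
  bits.setall(False)

  # map the whole input file into memory and scan it as one big buffer
  f = open(sys.argv[1], "rb")
  buf = mmap.mmap(f.fileno(), 0, access=mmap.ACCESS_READ)
  start = 0
  while True:
      end = buf.find("\n", start)
      if end == -1:
          break
      n = int(buf[start:end])
      if not bits[n - minValue]:
          bits[n - minValue] = True
          sys.stdout.write(buf[start:end + 1])
      start = end + 1
  buf.close()
  f.close()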

The memory usage of the bitarray-mmap variant appears higher because the command line tool counts the mapped file as part of the process's memory usage.

conclusion

The bitarray solution has constant, low memory usage and a fast execution time. It only works because the possible numbers are limited to a fixed range.

Using hashmap / set solutions can result in massive memory usage.

Sorting the input entries is the only solution that also offers the possibility to work with limited memory.

Find out more about language performance in the "C++ / Go / Java / Scala" language performance benchmark by Google on readwriteweb.com.

Attachment: charts.js.txt (11.96 KB)