Perl: Iterating through large hash, runs out of memory


I have been trying to find values that match between two columns (column a and column b) of a large file and print the common values, plus the corresponding column d. I have been doing this by iterating through hashes; however, because the file is so large, there is not enough memory to produce the output file. Is there another way to do the same thing that uses less memory?

Any help is much appreciated.

The script I have written thus far is below:

#!/usr/bin/perl
use warnings;
use strict;

open (FILE1, "<input.txt") || die "$!\n Couldn't open input.txt\n";
open (Output, ">output.txt") || die "Can't open output.txt\n";
my $hash1={};
my $hash2={};

while (<FILE1>) {
    chomp (my $line=$_);
    my ($a, $b, $c, $d) = split (/\t/, $line);

    if ($a) {
        $hash1->{$a}{info1} = "$d"; #original_ID-> YOB
    }
    if ($b) {
        $hash2->{$b}{info2} = "$a"; #original_ID-> sire
    }

    foreach my $key (keys %$hash2) {
        if (exists $hash1->{$key}) {
            my $info1 = $hash1->{$key}{info1};
            print Output "$key\t$info1\n";
        }
    }
}

close FILE1;
close Output;
print "Done\n";

To clarify, the input file is a large pedigree file. An example is:

1    2   3   1977
2    4   5   1944
3    4   5   1950
4    5   6   1930
5    7   6   1928

An example of the output file is:

2   1944
4   1950
5   1928
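
For reference, a minimal sketch of a two-pass alternative that keeps only the column b values in memory, as bare hash keys with no stored payload: read the file once to collect the b values, then read it again and print column a and column d for any row whose a value was seen in column b. This is only a sketch; the tab-separated layout and the input.txt/output.txt filenames are assumptions carried over from the script above.

#!/usr/bin/perl
use strict;
use warnings;

# Pass 1: record which IDs appear in column b (keys only, no values stored).
my %in_b;
open (FILE1, "<input.txt") || die "$!\n Couldn't open input.txt\n";
while (<FILE1>) {
    chomp;
    my (undef, $b) = split (/\t/, $_);
    $in_b{$b} = 1 if defined $b;
}
close FILE1;

# Pass 2: print column a and column d for rows whose a value was seen in column b.
open (FILE1, "<input.txt") || die "$!\n Couldn't reopen input.txt\n";
open (Output, ">output.txt") || die "Can't open output.txt\n";
while (<FILE1>) {
    chomp;
    my ($a, undef, undef, $d) = split (/\t/, $_);
    print Output "$a\t$d\n" if defined $a && exists $in_b{$a};
}
close FILE1;
close Output;

Only the %in_b hash is held in memory, and it stores bare keys rather than nested hashes of strings, so the footprint is roughly one hash entry per distinct column b value.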

1 Answer

Answered by Georgi Rangelov:

Does the below work for you?

#!/usr/local/bin/perl

use strict;
use warnings;
use DBM::Deep;
use List::MoreUtils qw(uniq);

my @seen;

# Disk-backed hash: the column a -> column d mapping lives in foo.db on disk,
# not in RAM.
my $db = DBM::Deep->new(
    file      => "foo.db",
    autoflush => 1
);

while (<>) {
    chomp;
    my @fields = split /\s+/;
    $$db{$fields[0]} = $fields[3];    # column a -> column d
    push @seen, $fields[1];           # collect every column b value
}

# Print each distinct column b value that also appeared in column a,
# together with that row's column d.
for (uniq @seen) {
    print $_ . " " . $$db{$_} . "\n" if exists $$db{$_};
}