I have a program that reads a very large Berkeley DB data file [which is mounted on SAN storage]. It works perfectly fine on a Solaris machine running Perl 5.6.0.
However, the same program reads fewer values from the same file on a Linux machine. Could this be a problem with the size of the data file?
Any pointers to solve this mystery are welcome.
Thanks, Shobha Deepthi
Edit: Shobha's reproducer, included from the comments:
#!/usr/cisco/bin/perl5.6
use strict;
use Fcntl qw(O_RDONLY);    # O_RDONLY is not exported by DB_File; it comes from Fcntl
use Data::Dumper;          # needed for Dumper()
use DB_File qw($DB_HASH);

my $db_file = "/vws/aak/qddts/data/value_cache/To-be-fixed";
my $db_ref;
my %db;

# Tie the hash read-only to the Berkeley DB hash file and dump its contents.
if (tie(%db, 'DB_File', $db_file, O_RDONLY, 0444, $DB_HASH)) {
    $db_ref = \%db;
    print Dumper($db_ref);
}
1;
This sounds like a filesystem issue to me. What is your fstype? Oh, and what versions / distros / archs are your OSs?
Things to check: the filesystem type on both machines, and the Perl / DB_File / Berkeley DB versions on each side.
[yet another edit] I would also run strace on your script on Linux to see whether anything strange is happening around record 12,000.