Why is a 2D hashmap memory-inefficient?


I was told by friends that using a 2D hashmap is strongly discouraged due to fragmentation problems. Could anyone tell me whether that's the case and why?


There are 2 answers

Jonathan Holland (Best Answer)

Personally, I don't see any reason to discourage it if there is a legitimate need for a 2D hashmap.

What they may be referring to is how the table deals with collisions. If two different keys hash to the same position, we still need to store both entries. There are a few different techniques for handling this. One is to start with a very large table of possible positions so that collisions are rare, which can waste a lot of space. A better method is open addressing: on a collision, simply probe the next position until a free slot is found.

It has been a while since I studied how these types are stored, but that seems like what they may be talking about. It is not a major issue and certainly not a reason to never use hashmaps (including 2D ones). I'm not sure about this, but I think the issues above compound as more dimensions are used (hence more of an issue with a 2D hashmap than a 1D one).
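For what it's worth, here is a minimal sketch of the probing idea described above (linear probing), assuming a small fixed-size table that never fills up completely. This is not how java.util.HashMap resolves collisions (it uses separate chaining), so treat the class name and design purely as an illustration:

```java
// Toy open-addressing table illustrating linear probing. The name ProbingTable
// and the fixed capacity are assumptions made for this example only.
class ProbingTable {
    private final String[] keys;
    private final String[] values;

    ProbingTable(int capacity) {
        keys = new String[capacity];
        values = new String[capacity];
    }

    void put(String key, String value) {
        int i = Math.floorMod(key.hashCode(), keys.length);
        // On a collision, step to the next slot until a free one (or the same key) is found.
        // Assumes the table is never allowed to become completely full.
        while (keys[i] != null && !keys[i].equals(key)) {
            i = (i + 1) % keys.length;
        }
        keys[i] = key;
        values[i] = value;
    }

    String get(String key) {
        int i = Math.floorMod(key.hashCode(), keys.length);
        while (keys[i] != null) {
            if (keys[i].equals(key)) {
                return values[i];
            }
            i = (i + 1) % keys.length;
        }
        return null;    // key not present
    }

    public static void main(String[] args) {
        ProbingTable t = new ProbingTable(8);
        t.put("a", "1");
        t.put("b", "2");
        System.out.println(t.get("a") + " " + t.get("b"));   // prints: 1 2
    }
}
```

The trade-off it illustrates is the one above: probing avoids pre-allocating a huge table, but still relies on keeping enough empty slots to terminate quickly.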

Mark Ransom

In order to be efficient, a hashmap needs a certain amount of empty space, otherwise the collision rate will be too high. If the hashmap contains more hashmaps, the effect multiplies: if the outer map and each inner map are 50% full, only about 25% of the allocated slots actually hold data.

A more effective strategy might be to combine the two keys into a single composite key and use a single-level hashmap.
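As a rough sketch of that idea (the Grid2D and Cell names are invented for this example, and it assumes Java 16+ for records), the two coordinates can be wrapped in one key object that supplies value-based equals() and hashCode():

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative only: a single-level map keyed by a composite (row, col) pair
// instead of a HashMap of HashMaps.
class Grid2D<V> {
    // A record gives value-based equals() and hashCode(), so it works as a map key.
    record Cell(int row, int col) {}

    private final Map<Cell, V> cells = new HashMap<>();

    void put(int row, int col, V value) {
        cells.put(new Cell(row, col), value);
    }

    V get(int row, int col) {
        return cells.get(new Cell(row, col));   // null if the cell was never set
    }

    public static void main(String[] args) {
        Grid2D<String> g = new Grid2D<>();
        g.put(3, 7, "hello");
        System.out.println(g.get(3, 7));        // prints: hello
    }
}
```

This way there is only one table's worth of empty slots to pay for, rather than one per inner map.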