I'm wondering if anyone knows a good way to remove duplicate values in a LinkedHashMap. I have a LinkedHashMap with pairs of String and List<String>, and I'd like to remove duplicates across the lists. This is to improve some downstream processing.
The only thing I can think of is keeping a log of the values already processed as I iterate over the map and through each list, and checking whether I've encountered a value before. This approach seems like it would degrade in performance as the lists grow. Is there a way to pre-process the map to remove duplicates from the list values?
To illustrate: if I have String1 > List1 (a, b, c) and String2 > List2 (c, d, e), I would want to remove "c" so there are no duplicates across the lists within the map.
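To make the log-keeping idea concrete, here is a rough sketch of what I mean (the class and method names are just for illustration). Backing the log with a HashSet keeps the membership check at roughly constant time per value, rather than a linear scan:

import java.util.*;

public class DedupExistingMap {
    // Walk the map in insertion order, keeping a set of values already kept;
    // any value seen before is removed from the current list.
    static void dedup(Map<String, List<String>> map) {
        Set<String> seen = new HashSet<>();          // the "log" of processed values
        for (List<String> list : map.values()) {
            Iterator<String> it = list.iterator();
            while (it.hasNext()) {
                if (!seen.add(it.next())) {          // add() returns false if already present
                    it.remove();                     // drop the duplicate
                }
            }
        }
    }

    public static void main(String[] args) {
        Map<String, List<String>> map = new LinkedHashMap<>();
        map.put("String1", new ArrayList<>(Arrays.asList("a", "b", "c")));
        map.put("String2", new ArrayList<>(Arrays.asList("c", "d", "e")));
        dedup(map);
        System.out.println(map);   // {String1=[a, b, c], String2=[d, e]}
    }
}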
I believe you could build a second structure that can be sorted by value (alphabetically or numerically), then do a single sweep through the sorted list: check whether the current element is equal to the next one, and if it is, remove the next one and keep the index where it is, so the sweep stays at the same position in the sorted list. A sketch of that idea follows.
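Something along these lines (a rough sketch, not tested against your real data; the class and method names are mine). Note that which occurrence survives depends on the sort order, not on the map's insertion order:

import java.util.*;

public class SortSweepDedup {
    // Flatten every value into (value, owning list) pairs, sort by value so
    // duplicates become adjacent, then sweep once and remove repeated values
    // from the list they belong to.
    static void dedupBySort(Map<String, List<String>> map) {
        List<Map.Entry<String, List<String>>> flat = new ArrayList<>();
        for (List<String> list : map.values()) {
            for (String value : list) {
                flat.add(new AbstractMap.SimpleEntry<>(value, list));
            }
        }
        flat.sort(Map.Entry.comparingByKey());       // duplicates are now adjacent

        for (int i = 1; i < flat.size(); i++) {
            if (flat.get(i).getKey().equals(flat.get(i - 1).getKey())) {
                // remove this duplicate occurrence from the list that holds it
                flat.get(i).getValue().remove(flat.get(i).getKey());
            }
        }
    }

    public static void main(String[] args) {
        Map<String, List<String>> map = new LinkedHashMap<>();
        map.put("String1", new ArrayList<>(Arrays.asList("a", "b", "c")));
        map.put("String2", new ArrayList<>(Arrays.asList("c", "d", "e")));
        dedupBySort(map);
        System.out.println(map);   // {String1=[a, b, c], String2=[d, e]}
    }
}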
Or, when you are adding values, you can check whether any list in the map already contains that value.
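A minimal sketch of that check-on-insert approach, assuming you control the code that populates the map (the wrapper class and its names are made up for illustration); a shared HashSet makes the "already contains" check O(1) instead of scanning every list:

import java.util.*;

public class DedupOnInsert {
    private final Map<String, List<String>> map = new LinkedHashMap<>();
    private final Set<String> seen = new HashSet<>();    // every value stored so far

    // Only add the value if no list in the map already contains it.
    void add(String key, String value) {
        if (seen.add(value)) {                           // constant-time membership check
            map.computeIfAbsent(key, k -> new ArrayList<>()).add(value);
        }
    }

    public static void main(String[] args) {
        DedupOnInsert d = new DedupOnInsert();
        for (String v : new String[] {"a", "b", "c"}) d.add("String1", v);
        for (String v : new String[] {"c", "d", "e"}) d.add("String2", v);
        System.out.println(d.map);   // {String1=[a, b, c], String2=[d, e]}
    }
}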