I suspect I've corrupted the sstables for one of my tables, so I'm running the sstableverify utility while the node is down. I'm receiving messages like [GC overhead limit exceeded] (full output below).
Can this issue be worked around or otherwise addressed? Thanks in advance!
sstableverify -v enterprise ale_state_access_point
Exception in thread "main" java.lang.OutOfMemoryError: GC overhead limit exceeded
    at java.util.Arrays.copyOf(Arrays.java:3332)
    at java.lang.AbstractStringBuilder.ensureCapacityInternal(AbstractStringBuilder.java:124)
    at java.lang.AbstractStringBuilder.append(AbstractStringBuilder.java:649)
    at java.lang.StringBuilder.append(StringBuilder.java:202)
    at org.apache.cassandra.io.sstable.Descriptor.filenameFor(Descriptor.java:170)
    at org.apache.cassandra.io.sstable.Descriptor.filenameFor(Descriptor.java:125)
    at org.apache.cassandra.io.sstable.format.SSTableReader.load(SSTableReader.java:709)
    at org.apache.cassandra.io.sstable.format.SSTableReader.load(SSTableReader.java:672)
    at org.apache.cassandra.io.sstable.format.SSTableReader.open(SSTableReader.java:466)
    at org.apache.cassandra.io.sstable.format.SSTableReader.openNoValidation(SSTableReader.java:377)
    at org.apache.cassandra.tools.StandaloneVerifier.main(StandaloneVerifier.java:89)
ERROR 20:33:15 LEAK DETECTED: a reference (org.apache.cassandra.utils.concurrent.Ref$State@6d42f926) to class org.apache.cassandra.io.sstable.format.SSTableReader$InstanceTidier@1047072254:/cassandra/data/enterprise/ale_state_access_point-ae4c50d0d67a11e696b25735df805631/lb-79600-big was not released before the reference was garbage collected
ERROR 20:33:15 LEAK DETECTED: a reference (org.apache.cassandra.utils.concurrent.Ref$State@69f4a15d) to class org.apache.cassandra.io.sstable.format.SSTableReader$InstanceTidier@1968390106:/cassandra/data/enterprise/ale_state_access_point-ae4c50d0d67a11e696b25735df805631/lb-58267-big was not released before the reference was garbage collected
There is only so much you can do if you have tampered with the sstables and broken them; the node would hit the same problem when it tries to load the table at startup. I would suggest restoring from a backup or scrubbing the table.
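If you go the scrub route, here is a minimal sketch of the two usual options, assuming the same keyspace and table names as your command above (check the exact options for your Cassandra version):

sstablescrub enterprise ale_state_access_point      # offline scrub, run while the node is down
nodetool scrub enterprise ale_state_access_point    # online scrub, once the node is back up

Scrub rewrites the sstables and discards rows it cannot read, so plan on running a repair afterwards to recover anything that gets dropped. As for the OutOfMemoryError itself, the standalone sstable tools start with a small fixed heap; in many versions the wrapper script honors a MAX_HEAP_SIZE environment variable, so check the sstableverify script in your installation if you want to retry with more memory.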