I have two flavors of the same ALGOL code; one is a one-to-one replacement of the other:
- One uses RESIZE (to return the memory to the library pool)
- One uses DEALLOCATE (to return the memory to the system)

The DEALLOCATE version consumes more CPU time and, in turn, a higher percentage of processor usage.

Why does DEALLOCATE consume more CPU, and how can I mitigate this?
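For reference, the two variants look roughly like this (a minimal sketch in Unisys Extended ALGOL; the array name, bounds, and the exact RESIZE arguments are my own illustration, not the real code, so check the ALGOL reference for the precise parameter list):

    BEGIN
        REAL ARRAY BUF[0:99999];

        % ... fill and use BUF ...

        % Flavor 1: shrink the array (to whatever target size the real code
        % uses) so the bulk of the space goes back to a pool.
        RESIZE(BUF, 1000);

        % Flavor 2: deallocate the array so the space goes back to the system.
        % DEALLOCATE(BUF);
    END.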
Burroughs/Unisys/A-Series, I presume?
It's been a few years since I used one of those systems, but I presume this hasn't changed too much.
RESIZE changes the size of your object (let's say it's an array to simplify life). When the original array was created, it was pulled from an ASD pool. There are several pools of various sizes. The actual memory assigned to your program may not have been the exact size you requested, although your descriptor will be "doctored" so that it appears to be exactly that size (so various intrinsic calls work properly). If you RESIZE within the actual size of that memory item, then only that doctoring has to be updated. Fast, easy.
Otherwise, RESIZE actually calls the MCP procedure EXPANDAROW (rather than RESIZEANDDEALLOCATE). Usually, though not always, EXPANDAROW can find the additional memory without having to return the original memory to the ASD pools.
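As a made-up illustration of the two RESIZE paths (the sizes are invented, and the actual memory-item granularity is an MCP detail, so treat this purely as a sketch of when each path is taken):

    REAL ARRAY A[0:999];   % requested size; the memory item backing it may be larger
    % ... use A ...
    RESIZE(A, 1100);       % still fits within the actual memory item:
                           % only the descriptor "doctoring" is updated, so it's cheap
    RESIZE(A, 50000);      % no longer fits: the MCP's EXPANDAROW has to go
                           % find more memory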
In the second case, DEALLOCATE, the MCP procedure RESIZEANDDEALLOCATE is indeed called: the memory is returned to the ASD pools, dope vectors are deallocated, memory is cleared, memory link words are updated, and the ASD pools themselves are updated. Your program pays for all of that (just as it paid for the original allocation).
Your question doesn't give enough background to answer the mitigation part. Maybe you don't need to mitigate at all: why are you calling RESIZE/DEALLOCATE in the first place? This cleanup generally happens on BLOCKEXIT already.
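If the arrays are only needed for a bounded stretch of work, one option is simply to scope them so that block exit returns the memory for you, rather than calling RESIZE or DEALLOCATE explicitly. A rough sketch (names and sizes invented):

    BEGIN
        % Long-lived data lives out here.

        BEGIN
            % SCRATCH exists only inside this inner block.
            REAL ARRAY SCRATCH[0:99999];

            % ... use SCRATCH ...

        END;   % BLOCKEXIT returns SCRATCH's memory here,
               % with no explicit RESIZE or DEALLOCATE in your code.
    END.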