'set' action upon encountering a miss in a Java LRU cache implementation


I'm implementing an LRU cache in Java using my own implementation of a DoublyLinkedList, with each node holding an integer key and value, where the key denotes the page identifier and the value denotes its location on disk. I am also using a HashMap for O(1) access. I know the following basics of the implementation:

If the requested key is a cache hit, return its value (i.e. its location) and move its node to the front of the DoublyLinkedList.

My doubt concerns the other important case: a miss. According to various resources, on a miss the requested page number (the key, in my case) is inserted at the head of the linked list; if the cache is at capacity, we first delete the element at the tail and then insert the new one at the front. For the purpose of implementation, what should the 'value' of such a page number be when I bring it into the cache? Should I set it to garbage? In my implementation of the 'get' method below, I currently return -1 on a miss.

Also, in practical production code of an LRU cache, does the cache contain the page numbers or the actual values stored at those page numbers?

Please help me clarify this aspect. Thanks.

import java.util.HashMap;


public class LRU {

private static class Node {

    Node previous;
    Node next;
    int key;
    int value;

    public Node(int key, int value) {
        this.key = key;
        this.value = value;
    }
}

Node head = null;
Node tail = null;

HashMap<Integer, Node> hmap = new HashMap<>();

public int get(int key) {
    if (hmap.containsKey(key)) {
        Node n = hmap.get(key);
        remove(n);
        moveToFront(n);
        return n.value;
    }
    return -1;
}

public void remove(Node n) {
    // if n is head
    if (n.previous == null) {
        head = head.next;
    } else {
        n.previous.next = n.next;
    }

    //if n is tail
    if (n.next == null) {
        tail = n.previous;
    } else {
        n.next.previous = n.previous;
    }
}

public void moveToFront(Node n) {
    // assumes n has already been unlinked (get() calls remove(n) first)
    n.previous = null;
    n.next = head;
    if (head != null) {
        head.previous = n;
    }
    head = n;
    if (tail == null) {
        tail = n;
    }
}

public void set(int key, int value) {

}
}

Accepted answer, by Roberto Attias:

The term cache generally identifies a key-value map with limited capacity. Attempting to add a new pair when capacity has been reached causes one of the existing key-value pairs to be evicted and the new one to be added. Usually, addition of a pair also happens implicitly as part of retrieving the value for a key that is not currently stored in the cache (i.e. the cache performs the eviction, loads the new pair, and returns the value).
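This "evict when at capacity, add implicitly" behavior can be sketched with Java's own LinkedHashMap, whose access-order mode plus the removeEldestEntry hook give exactly an LRU policy. A minimal sketch (the class name BoundedCache is illustrative, not from the question):

```java
import java.util.LinkedHashMap;
import java.util.Map;

public class BoundedCache<K, V> extends LinkedHashMap<K, V> {
    private final int capacity;

    public BoundedCache(int capacity) {
        // accessOrder = true: iteration order is least-recently-accessed first
        super(16, 0.75f, true);
        this.capacity = capacity;
    }

    @Override
    protected boolean removeEldestEntry(Map.Entry<K, V> eldest) {
        // called after every put(); returning true evicts the LRU entry
        return size() > capacity;
    }
}
```

For example, with capacity 2, putting keys 1 and 2, then reading 1, then putting 3 evicts key 2, since 2 is now the least recently used.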

LRU (Least Recently Used) is one of several possible policies for selecting the pair to be evicted, not the only one. A processor data cache is one example of a cache, implemented in hardware, where the key is obtained from an address (typically considering only a certain number of most-significant bits) and the value is the portion of memory containing that address. In this case, the cache line contains a copy of the data residing in memory. Another example, in operating systems supporting virtual memory, is paging, implemented as a combination of hardware (the MMU) and software in the OS kernel. In this example, which is closer to what you describe, the value of the pair is the physical address of the memory frame hosting the virtual page.

So when you ask "in practical production code of an LRU cache, does the cache contain the page numbers or the actual values stored at those page numbers?", the answer depends on what cache we're talking about. In your case it's not clear what the cache is backed by; I'll assume it's an offset in some file. In that case, I would expect your value to be a byte array or string representing a portion of the file. An access to offset x in the file would map to key = x / PAGE_SIZE; if the key is not present in the table and the table is at capacity, the last node could be "recycled": replace its existing key, read the content of the file at offsets [key * PAGE_SIZE, (key + 1) * PAGE_SIZE - 1] into the node's byte buffer, and finally move the node to the front.
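Under those assumptions (file-backed pages, value = the page's bytes), the get-or-load scheme above might look like the following sketch. The names PageCache and PAGE_SIZE are illustrative, not from the question; a miss evicts if needed, loads the page from disk, and returns its contents, so no "garbage" value or -1 sentinel is ever needed:

```java
import java.io.IOException;
import java.io.RandomAccessFile;
import java.util.HashMap;

public class PageCache {

    static final int PAGE_SIZE = 4096;

    private static class Node {
        Node previous, next;
        int key;        // page number = file offset / PAGE_SIZE
        byte[] value;   // contents of that page, read from the file

        Node(int key, byte[] value) {
            this.key = key;
            this.value = value;
        }
    }

    private final RandomAccessFile file;
    private final int capacity;
    private final HashMap<Integer, Node> map = new HashMap<>();
    private Node head, tail;

    public PageCache(RandomAccessFile file, int capacity) {
        this.file = file;
        this.capacity = capacity;
    }

    // A miss performs the eviction and the load itself, then returns the
    // freshly loaded page, exactly like a hit.
    public byte[] get(int pageNumber) throws IOException {
        Node n = map.get(pageNumber);
        if (n != null) {
            unlink(n);                  // hit: just move the node to the front
        } else {
            if (map.size() == capacity) {
                map.remove(tail.key);   // evict the least recently used page
                unlink(tail);
            }
            byte[] page = new byte[PAGE_SIZE];
            file.seek((long) pageNumber * PAGE_SIZE);
            file.read(page);            // a short read at EOF leaves trailing zeros
            n = new Node(pageNumber, page);
            map.put(pageNumber, n);
        }
        insertAtFront(n);
        return n.value;
    }

    private void unlink(Node n) {
        if (n.previous == null) head = n.next; else n.previous.next = n.next;
        if (n.next == null) tail = n.previous; else n.next.previous = n.previous;
    }

    private void insertAtFront(Node n) {
        n.previous = null;
        n.next = head;
        if (head != null) head.previous = n;
        head = n;
        if (tail == null) tail = n;
    }
}
```

Note that this reuses the same doubly-linked-list bookkeeping as your class; the only structural change is that get() loads on a miss instead of returning -1.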