Choosing salt and key size for Rfc2898DeriveBytes

I'm working on upgrading the password hashing function in a legacy ASP.NET application.

It was using Rfc2898DeriveBytes with the default SHA-1. I've now upgraded the application to .NET 4.7.2, so I'm able to choose a better hashing algorithm.

This is my function so far...

public static string GeneratePasswordHash(string password)
{
    // Application-wide secret ("pepper") appended to every password before hashing.
    var private_key = "{my-secret-key}";

    // 32-byte random salt, 10,000 iterations, HMAC-SHA384 as the PRF.
    using (var derived_bytes = new Rfc2898DeriveBytes(password + private_key, 32, 10000, HashAlgorithmName.SHA384))
    {
        byte[] hash = derived_bytes.GetBytes(64);   // 64-byte derived key
        byte[] salt = derived_bytes.Salt;           // the 32-byte salt generated above

        // Store salt + hash together: first 32 bytes are the salt, next 64 are the hash.
        byte[] combined_hash = new byte[96];

        System.Buffer.BlockCopy(salt, 0, combined_hash, 0, 32);
        System.Buffer.BlockCopy(hash, 0, combined_hash, 32, 64);

        return Convert.ToBase64String(combined_hash);
    }
}
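
For context, this is roughly how I plan to verify a login attempt against the stored value. It's just a sketch that assumes the same pepper, iteration count, and sizes as above (the method name and the hand-rolled comparison loop are placeholders, not anything I've settled on):

public static bool VerifyPasswordHash(string password, string stored_hash)
{
    var private_key = "{my-secret-key}";

    byte[] combined_hash = Convert.FromBase64String(stored_hash);

    // First 32 bytes are the salt, remaining 64 bytes are the stored derived key.
    byte[] salt = new byte[32];
    byte[] stored_key = new byte[64];
    System.Buffer.BlockCopy(combined_hash, 0, salt, 0, 32);
    System.Buffer.BlockCopy(combined_hash, 32, stored_key, 0, 64);

    // Re-derive with the stored salt and the same parameters.
    using (var derived_bytes = new Rfc2898DeriveBytes(password + private_key, salt, 10000, HashAlgorithmName.SHA384))
    {
        byte[] candidate_key = derived_bytes.GetBytes(64);

        // Compare without an early exit so timing doesn't reveal where the mismatch is.
        int diff = 0;
        for (int i = 0; i < 64; i++)
            diff |= candidate_key[i] ^ stored_key[i];
        return diff == 0;
    }
}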

I'm not sure how to go about choosing an appropriate salt and key size, though. At the moment I've picked 32 bytes for the salt and 64 bytes for the key, but these were really just chosen arbitrarily. I know the minimum salt size is 8 bytes, but I don't really know why I would choose a larger vs. smaller salt.

Also, is there some kind of rule of thumb for deciding the ratio between salt and key size?

Performance is not really a consideration, as it will never run frequently enough to put any noticeable load on the server.

Likewise, storage space for the hash is pretty much irrelevant. Whether the hashed passwords require 88 characters or 128 makes no real difference.

Previously, the salt and key size were both 32 bytes. If I were to keep them the same, does that somehow negate the benefit of changing to SHA-384, since my final hash would be the same size as before?
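
For reference, the old code looked something like this. I'm reconstructing it from memory, so treat the exact iteration count as a placeholder; only the 32-byte salt and 32-byte key are what I'm asking about:

// Legacy version: default SHA-1 PRF, 32-byte salt, 32-byte derived key.
using (var derived_bytes = new Rfc2898DeriveBytes(password + private_key, 32, 10000))
{
    byte[] hash = derived_bytes.GetBytes(32);
    byte[] salt = derived_bytes.Salt;
    // ...salt and hash were then combined and Base64-encoded, same as above.
}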

With my very basic understanding of hashing algorithms, bigger numbers = better security. However, I don't want to choose ridiculously large key sizes etc. if doing so is completely pointless from a security perspective.
