PyTorch: Overfitting on a small batch: Debugging

I am building a multi-class image classifier.
There is a debugging trick: overfit on a single batch to check whether there are any deeper bugs in the program.
How can I design the code so that this check is portable and reusable?
One arduous, not-very-smart way is to build a hold-out train/test folder for a small batch, where the test set consists of two distributions - seen data and unseen data - and if the model performs well on the seen data and poorly on the unseen data, then we can conclude that the network doesn't have any deeper structural bug.
But this does not seem like a smart or portable approach, and it has to be redone for every problem.
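
To make the goal concrete, this is the kind of portable check I have in mind (a rough sketch; overfit_single_batch is a placeholder name, and model, loader, criterion and optimizer stand in for the real objects):

import torch

def overfit_single_batch(model, loader, criterion, optimizer,
                         steps=200, device='cuda'):
    """Train repeatedly on one fixed batch; the loss should approach
    zero if the model/loss/optimizer wiring has no structural bug."""
    model.train()
    inputs, targets = next(iter(loader))          # one fixed batch
    inputs, targets = inputs.to(device), targets.to(device)
    for step in range(steps):
        optimizer.zero_grad()
        loss = criterion(model(inputs), targets)  # forward pass
        loss.backward()                           # backward pass
        optimizer.step()
        if step % 20 == 0:
            print(f'step {step}: loss {loss.item():.4f}')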

Currently, I have a dataset class where I partition the data into train/dev/test as below:

import math

import pandas as pd
from sklearn.model_selection import train_test_split


def split_equal_into_val_test(csv_file=None, stratify_colname='y',
                              frac_train=0.6, frac_val=0.15, frac_test=0.25,
                              random_state=None,
                              ):
    """
    Split a Pandas dataframe into three subsets (train, val, and test).

    Following fractional ratios provided by the user, where val and
    test set have the same number of each classes while train set have
    the remaining number of left classes
    Parameters
    ----------
    csv_file : Input data csv file to be passed
    stratify_colname : str
        The name of the column that will be used for stratification. Usually
        this column would be for the label.
    frac_train : float
    frac_val   : float
    frac_test  : float
        The ratios with which the dataframe will be split into train, val, and
        test data. The values should be expressed as float fractions and should
        sum to 1.0.
    random_state : int, None, or RandomStateInstance
        Value to be passed to train_test_split().

    Returns
    -------
    df_train, df_val, df_test :
        Dataframes containing the three splits.

    """
    df = pd.read_csv(csv_file).iloc[:, 1:]

    if not math.isclose(frac_train + frac_val + frac_test, 1.0):
        raise ValueError('fractions %f, %f, %f do not add up to 1.0' %
                         (frac_train, frac_val, frac_test))

    if stratify_colname not in df.columns:
        raise ValueError('%s is not a column in the dataframe' %
                         (stratify_colname))

    df_input = df

    no_of_classes = 4  # this dataset has four classes, labelled 1-4
    # Reserve roughly 10% of the data, split evenly across classes,
    # as the dev/test pool.
    sfact = int((0.1 * len(df)) / no_of_classes)

    # Shuffle the dataframe.
    df_input = df_input.sample(frac=1, random_state=random_state)

    # Take the first sfact rows of each class for the dev/test pool.
    df_temp_1 = df_input[df_input[stratify_colname] == 1][:sfact]
    df_temp_2 = df_input[df_input[stratify_colname] == 2][:sfact]
    df_temp_3 = df_input[df_input[stratify_colname] == 3][:sfact]
    df_temp_4 = df_input[df_input[stratify_colname] == 4][:sfact]

    dev_test_df = pd.concat([df_temp_1, df_temp_2, df_temp_3, df_temp_4])
    dev_test_y = dev_test_df[stratify_colname]

    # Split the dev/test pool into val and test dataframes of equal size.
    df_val, df_test, dev_y, test_y = train_test_split(
        dev_test_df, dev_test_y,
        stratify=dev_test_y,
        test_size=0.5,
        random_state=random_state,
        )

    # The train set is everything not drawn into the dev/test pool.
    df_train = df[~df['img'].isin(dev_test_df['img'])]

    assert len(df_input) == len(df_train) + len(df_val) + len(df_test)

    return df_train, df_val, df_test

def train_val_to_ids(train, val, test, stratify_columns='labels'): # noqa
    """
    Convert the stratified dataset in the form of dictionary : partition['train] and labels.

    To generate the parallel code according to https://stanford.edu/~shervine/blog/pytorch-how-to-generate-data-parallel
    Parameters
    -----------
    csv_file : Input data csv file to be passed
    stratify_columns : The label column

    Returns
    -----------
    partition, labels:
        partition dictionary containing train and validation ids and label dictionary containing ids and their labels # noqa

    """
    train_list, val_list, test_list = train['img'].to_list(), val['img'].to_list(), test['img'].to_list() # noqa
    partition = {"train_set": train_list,
                 "val_set": val_list,
                 }
    labels = dict(zip(train.img, train.labels))
    labels.update(dict(zip(val.img, val.labels)))
    return partition, labels
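
For completeness, the partition/labels pair is consumed by a Dataset in the style of the linked Stanford post; a rough sketch (load_image is a placeholder for the actual image-loading code):

import torch
from torch.utils.data import Dataset

class ImageDataset(Dataset):
    def __init__(self, list_ids, labels):
        self.list_ids = list_ids          # e.g. partition['train_set']
        self.labels = labels              # id -> label mapping

    def __len__(self):
        return len(self.list_ids)

    def __getitem__(self, index):
        img_id = self.list_ids[index]
        x = load_image(img_id)            # placeholder: load/transform image
        y = self.labels[img_id]
        return x, y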

P.S. - I know about PyTorch Lightning and that it has an overfitting feature which can be used easily, but I don't want to move to PyTorch Lightning.
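
(For reference, the Lightning feature in question is the overfit_batches flag on the Trainer, roughly:)

import pytorch_lightning as pl

trainer = pl.Trainer(overfit_batches=1)  # repeatedly train on a single batch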

Answer by hkchengrex (accepted):

I don't know how portable it will be, but a trick that I use is to modify the __len__ method in the Dataset.

If I modify it from

def __len__(self):
    return len(self.data_list)

to

def __len__(self):
    return 20

then the dataset will only expose its first 20 elements (regardless of shuffling). You only need to change one line of code and the rest should work just fine, so I think it's pretty neat.
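
If you would rather not edit the Dataset class at all, torch.utils.data.Subset gives the same effect non-invasively (full_dataset stands in for your dataset instance):

from torch.utils.data import DataLoader, Subset

small_ds = Subset(full_dataset, range(20))   # first 20 items only
loader = DataLoader(small_ds, batch_size=4, shuffle=True)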