I can't save the cleaned df to target directory


I am trying to remove duplicates from large CSV files and save the cleaned copies into a different directory. I ran the code below, but it saved (overwrote) the files in the root directory instead. I know that if I switch to inplace=False it won't overwrite the files in the root directory, but it doesn't write them to the target directory either, so that doesn't help.

Please advise and thank you! :)

import os
import pandas as pd
from glob import glob
import csv
from pathlib import Path

root = Path(r'C:\my root directory') 
target = Path(r'C:\my root directory\target')
file_list = root.glob("*.csv")

desired_columns = ['ZIP', 'COUNTY', 'COUNTYID']

for csv_file in file_list:
    df = pd.read_csv(csv_file)
    df.drop_duplicates(subset=desired_columns, keep="first", inplace=True)
    df.to_csv(os.path.join(target,csv_file))

Example:

ZIP    COUNTYID  COUNTY
32609  1         ALACHUA
32609  1         ALACHUA
32666  1         ALACHUA
32694  1         ALACHUA
32694  1         ALACHUA
32694  1         ALACHUA
32666  1         ALACHUA
32666  1         ALACHUA
32694  1         ALACHUA

1 Answer

Answer from ddejohn (accepted):

This should work, while also reducing your dependencies:

import pandas as pd
import pathlib

root = pathlib.Path(r"C:\my root directory")
target = root / "target"
file_list = root.glob("*.csv")

desired_columns = ["ZIP", "COUNTY", "COUNTYID"]
for csv_file in file_list:
    df = pd.read_csv(csv_file)
    df.drop_duplicates(subset=desired_columns, keep="first", inplace=True)
    # Join only the file's *name* onto target; csv_file itself is a full path.
    # index=False keeps to_csv from adding an extra unnamed index column.
    df.to_csv(target / csv_file.name, index=False)

Note that since target is a subdirectory of your root directory, you can simply build its path with the / operator. The important change is csv_file.name: glob yields full paths, so you need to join only the file's name onto target.
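To see why the original code overwrote the source files: os.path.join discards everything before an absolute second argument, and glob yields absolute paths. A minimal sketch (using hypothetical POSIX-style paths for illustration):

```python
import os.path
import pathlib

root = pathlib.Path("/data/root")
target = root / "target"
csv_file = root / "example.csv"  # glob yields a full path, not a bare name

# os.path.join sees an absolute second argument and throws target away,
# so the write lands back in the root directory.
joined = os.path.join(target, csv_file)
print(joined)

# Joining only the file's name keeps the write inside target.
fixed = target / csv_file.name
print(fixed)
```

This is why `target / csv_file.name` in the answer writes into the target directory while `os.path.join(target, csv_file)` in the question silently overwrote the originals.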