I am trying to remove duplicates from large files and save the results into a different directory. I ran the code below, but it saved (overwrote) the files in the root directory instead. I know that if I switch to inplace=False it won't overwrite the files in the root directory, but it doesn't copy them into the target directory either, so that doesn't help.
Please advise and thank you! :)
import os
import pandas as pd
from glob import glob
import csv
from pathlib import Path
root = Path(r'C:\my root directory')
target = Path(r'C:\my root directory\target')
file_list = root.glob("*.csv")
desired_columns = ['ZIP', 'COUNTY', 'COUNTYID']
for csv_file in file_list:
    df = pd.read_csv(csv_file)
    df.drop_duplicates(subset=desired_columns, keep="first", inplace=True)
    df.to_csv(os.path.join(target, csv_file))
Example:
ZIP COUNTYID COUNTY
32609 1 ALACHUA
32609 1 ALACHUA
32666 1 ALACHUA
32694 1 ALACHUA
32694 1 ALACHUA
32694 1 ALACHUA
32666 1 ALACHUA
32666 1 ALACHUA
32694 1 ALACHUA
This should work, while also reducing your dependencies. Note that since target sits inside your root directory, you can build it from root by joining with pathlib's / operator instead of os.path.join.
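In your original loop, csv_file yielded by root.glob() is an absolute path, and os.path.join() discards target whenever the second argument is absolute, which is why the files were written back into root. Joining target with just csv_file.name avoids that. A minimal sketch of the idea (keeping the original file names, creating the target directory, and skipping the index are assumptions):

import pandas as pd
from pathlib import Path

root = Path(r'C:\my root directory')
target = root / 'target'        # join with the / operator
target.mkdir(exist_ok=True)     # make sure the output directory exists

desired_columns = ['ZIP', 'COUNTY', 'COUNTYID']

for csv_file in root.glob('*.csv'):
    df = pd.read_csv(csv_file)
    # drop_duplicates returns a new DataFrame, so no inplace is needed
    deduped = df.drop_duplicates(subset=desired_columns, keep='first')
    # join with the file name only, not the full source path,
    # so the result lands in target instead of overwriting the source file
    deduped.to_csv(target / csv_file.name, index=False)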