Aggregation and Grouping with Pandas


I am trying to sum a column of values by account number and show the result in a new column. I am also identifying the first instance of each contract account as unique and the others as duplicates. For example:

Index   CA#   Duplicate?    $     $$
1      1101   True        440.4  880.80 
2      1101   False       440.4  -   
3      1102   True        440.8  440.80 
4      1103   True        441.2  441.20 
5      1106   True        442.4  1,327.20 
6      1106   False       442.4  -   
7      1106   False       442.4  -   

My first column, 'CA#', contains identifiers that I want to flag as True (or 1) if they are the first occurrence of that CA#; otherwise I want them flagged as False (or 0). For instance, CA# 1101 on Index 1 would receive a True and CA# 1101 on Index 2 would receive a False.

Then I'm trying to use that True flag to sum the total that each CA# is linked to in the $ column. In the CA# 1101 case, the total $$ is 880.80. So far I have only managed to generate a new column with a boolean identifier for the unique values in the CA# column, but the series my code creates contains only True values, which I know is wrong.

import pandas as pd
from pandas import DataFrame
import numpy as np

file_name = r'A:\LEO\Documents\Mock data.xlsx'  # raw string so backslashes are not treated as escapes
sheet_name = 'Sheet1'
data = pd.read_excel(io=file_name, sheet_name=sheet_name)  # keyword is sheet_name, not sheet
data.sort_values('CA#', inplace=True)
data_ltd = DataFrame(data, columns=['CA#', '$'])
bool_series = data_ltd['CA#'].duplicated()  # column is 'CA#', not 'CA'
data_ltd['bool_series'] = bool_series
print(data_ltd[bool_series].head(10))

3 Answers

1
Erfan On Best Solutions

Use the inverse of duplicated:

~df.duplicated('CA#')

0     True
1    False
2     True
3     True
4     True
5    False
6    False
dtype: bool
df['Duplicate?'] = ~df.duplicated('CA#')

    CA#  Duplicate?      $        $$
0  1101        True  440.4    880.80
1  1101       False  440.4         -
2  1102        True  440.8    440.80
3  1103        True  441.2    441.20
4  1106        True  442.4  1,327.20
5  1106       False  442.4         -
6  1106       False  442.4         -

To get your $$ column, we can use groupby and np.where:

df['$$'] = df.groupby('CA#')['$'].transform('sum')
df['$$'] = np.where(df['$$'].duplicated(), '-', df['$$'])

    CA#  Duplicate?      $                  $$
0  1101        True  440.4               880.8
1  1101       False  440.4                   -
2  1102        True  440.8               440.8
3  1103        True  441.2               441.2
4  1106        True  442.4  1327.1999999999998
5  1106       False  442.4                   -
6  1106       False  442.4                   -
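Note the floating-point noise in the 1106 total (1327.1999999999998). Rounding the group sums to cents before converting them to display strings avoids this; a minimal sketch using the same sample data, with a `.loc` mask in place of `np.where`:

```python
import pandas as pd

# Sample data matching the question's table
df = pd.DataFrame({
    'CA#': [1101, 1101, 1102, 1103, 1106, 1106, 1106],
    '$':   [440.4, 440.4, 440.8, 441.2, 442.4, 442.4, 442.4],
})

# Sum per account, round to cents, then show the total only on the first occurrence
totals = df.groupby('CA#')['$'].transform('sum').round(2)
df['$$'] = totals.astype(str)
df.loc[df.duplicated('CA#'), '$$'] = '-'
print(df)
```

This keeps the first row of each CA# group showing a clean rounded total (e.g. 1327.2) and blanks the repeats.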
1
Mike On

This should do the trick for the Duplicate column:

df = pd.DataFrame({'CA#': [1101, 1101, 1102,1103, 1106, 1106, 1106]})
seen = set()
def already(x):
    global seen
    if x in seen:
        return False
    else:
        seen.add(x)
        return True

df['Duplicate'] = df['CA#'].apply(already)
df
#     CA#  Duplicate
# 0  1101       True
# 1  1101      False
# 2  1102       True
# 3  1103       True
# 4  1106       True
# 5  1106      False
# 6  1106      False
0
Leo On

data_fr.sort_values(by='CA', ascending=True, inplace=True)    # Start with sorting the values
data_fr['Unique Px'] = ~data_fr.duplicated('CA')              # Identify duplicates
data_fr['$$'] = data_fr.groupby('CA')['$'].transform('sum')   # Group and aggregate in a new column
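Putting the pieces together, a minimal self-contained sketch (the sample DataFrame stands in for the Excel data, and the column names are assumed to match the question's table):

```python
import pandas as pd

# Sample data standing in for the Excel sheet (deliberately unsorted)
data_fr = pd.DataFrame({
    'CA#': [1102, 1101, 1101, 1103, 1106, 1106, 1106],
    '$':   [440.8, 440.4, 440.4, 441.2, 442.4, 442.4, 442.4],
})

data_fr.sort_values('CA#', inplace=True)              # group identical accounts together
data_fr['Duplicate?'] = ~data_fr.duplicated('CA#')    # True only for the first occurrence
data_fr['$$'] = data_fr.groupby('CA#')['$'].transform('sum').round(2)
print(data_fr)
```

Here every row of a group carries the group total; if only the first row should show it, mask the rest with `data_fr.loc[~data_fr['Duplicate?'], '$$'] = '-'` as in the accepted answer.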