I have a square matrix like this.

            ACSM3     ACSX12    ADXM28  ...   UGT2B15      VCAN        XK
ACSM3    1.000000  0.929347  0.999914  ...  0.986433  0.999947 -0.999680
ACSX12    0.929347  1.000000  0.924428  ...  0.977350  0.925496 -0.919704
ADXM28   0.999914  0.924428  1.000000  ...  0.984196  0.999996 -0.999925
ADAM28   0.999976  0.926774  0.999981  ...  0.985275  0.999994 -0.999831
ADH1B   -0.999509 -0.917317 -0.999834  ... -0.980802 -0.999778  0.999982
ADTRP   -0.999039 -0.912273 -0.999528  ... -0.978290 -0.999438  0.999828
AEBP1    0.983312  0.846668  0.985611  ...  0.940104  0.985133 -0.987601
AKR1B10 -0.999658 -0.919371 -0.999915  ... -0.981800 -0.999874  1.000000
UBL3     0.997347  0.900002  0.998215  ...  0.971864  0.998043 -0.998870
UGT2B15  0.986433  0.977350  0.984196  ...  1.000000  0.984690 -0.981961
VCAN     0.999947  0.925496  0.999996  ...  0.984690  1.000000 -0.999887
XK      -0.999680 -0.919704 -0.999925  ... -0.981961 -0.999887  1.000000

After using the stack function I bring the data into the shape I want, but as you can see every pair appears twice, because each gene is compared against every other gene in both directions.

# stack the matrix into long (gene1, gene2, score) form and drop the self-comparisons
dfHealty = df_healtyWithGenes.stack().reset_index()
dfHealty.columns = ['gene1', 'gene2', 'score']
dfHealty = dfHealty[dfHealty.gene1 != dfHealty.gene2]

I could filter by score, but that is not a good idea; different pairs could share the same score and the data might break.

How can I filter on the gene columns instead?

gene1     gene2     score
EPB41L4B  PGC       0.496713249
PGC       EPB41L4B  0.496713249
CHGA      MT1G      0.496751983
MT1G      CHGA      0.496751983
AEBP1     FCER1G    0.497061368
FCER1G    AEBP1     0.497061368
ADTRP     CAPN9     0.497122603
CAPN9     ADTRP     0.497122603
FAM189A2  GLUL      0.49721763
GLUL      FAM189A2  0.49721763
CA9       DUOX1     0.497233294
DUOX1     CA9       0.497233294
EDNRA     MSLN      0.497267565
MSLN      EDNRA     0.497267565
HRASLS2   LIPF      0.497581499
LIPF      HRASLS2   0.497581499
EPB41L4B  NEDD4L    0.497613643
NEDD4L    EPB41L4B  0.497613643

I need to convert the data to look like this, with only one row per pair.

gene1     gene2     score
EPB41L4B  PGC       0.496713249
CHGA      MT1G      0.496751983
AEBP1     FCER1G    0.497061368
ADTRP     CAPN9     0.497122603
FAM189A2  GLUL      0.49721763
CA9       DUOX1     0.497233294
EDNRA     MSLN      0.497267565

3 Answers

Matthew Barlowe (best solution)

Using the data given, you can remove the duplicate pairs like this:

import pandas as pd

cols = ['gene1','gene2','score']
data = [['EPB41L4B', 'PGC',0.496713249], 
        ['PGC','EPB41L4B',0.496713249], 
        ['CHGA','MT1G',0.496751983],
        ['MT1G','CHGA',0.496751983],
        ['AEBP1','FCER1G',0.497061368 ],
        ['FCER1G','AEBP1',0.497061368], 
        ['ADTRP','CAPN9',0.497122603],
        ['CAPN9','ADTRP',0.497122603],
        ['FAM189A2','GLUL',0.49721763],
        ['GLUL','FAM189A2',0.49721763],
        ['CA9','DUOX1',0.497233294],
        ['DUOX1','CA9',0.497233294],
        ['EDNRA','MSLN',0.497267565],
        ['MSLN','EDNRA',0.497267565],
        ['HRASLS2','LIPF',0.497581499],
        ['LIPF','HRASLS2',0.497581499],
        ['EPB41L4B','NEDD4L',0.497613643],
        ['NEDD4L','EPB41L4B',0.497613643]]

df = pd.DataFrame(data, columns=cols)
# keep only the row where gene1 sorts before gene2, which drops the mirrored duplicate of each pair
df = df[df['gene1'] < df['gene2']]
print(df)

This produces the following output:

       gene1   gene2     score
0   EPB41L4B     PGC  0.496713
2       CHGA    MT1G  0.496752
4      AEBP1  FCER1G  0.497061
6      ADTRP   CAPN9  0.497123
8   FAM189A2    GLUL  0.497218
10       CA9   DUOX1  0.497233
12     EDNRA    MSLN  0.497268
14   HRASLS2    LIPF  0.497581
16  EPB41L4B  NEDD4L  0.497614
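
If your stacked frame is named dfHealty with the columns gene1, gene2 and score as in the question, the same filter should work directly on it. This relies on every pair appearing in both orientations, which is the case after stacking a symmetric matrix:

# keep one orientation of each pair; dfHealty is assumed to be the stacked frame from the question
dfHealty = dfHealty[dfHealty['gene1'] < dfHealty['gene2']]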

YOLO

I think you can do it like this. Here's a minimal example of what you are trying to do:

import pandas as pd
import numpy as np

# sample data frame
df = pd.DataFrame({'col1': ['a','b'], 'col2':['b','a'], 'col3':[1,1]})

   col1 col2  col3
0    a    b     1
1    b    a     1

# take first two columns from where to remove duplicates
df2 = df.iloc[:,:2]

# sort the columns based on their corresponding values and create a new df 
df3 = pd.DataFrame(np.sort(df2.values, axis=1), index=df2.index, columns=df2.columns)

# finally drop duplicates
result = pd.concat([df3, df['col3']], axis=1).drop_duplicates(subset=['col1','col2'])

  col1 col2  col3
0    a    b     1
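
Applied to the gene data, the same idea would look roughly like this (assuming the stacked frame is called dfHealty and has the gene1, gene2 and score columns from the question):

# sort gene1/gene2 within each row so both orientations of a pair become identical,
# then drop the duplicated rows; dfHealty is assumed to come from the question
pairs = pd.DataFrame(np.sort(dfHealty[['gene1', 'gene2']].values, axis=1),
                     index=dfHealty.index, columns=['gene1', 'gene2'])
result = pd.concat([pairs, dfHealty['score']], axis=1).drop_duplicates(subset=['gene1', 'gene2'])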

Marcus Lim

It seems to me that you can just take every second row, since each mirrored pair sits on consecutive rows in your sample.

print(df.iloc[::2].reset_index(drop=True))

Output:

      gene1   gene2        score
0  EPB41L4B     PGC  0.496713249
1      CHGA    MT1G  0.496751983
2     AEBP1  FCER1G  0.497061368
3     ADTRP   CAPN9  0.497122603
4  FAM189A2    GLUL   0.49721763
5       CA9   DUOX1  0.497233294
6     EDNRA    MSLN  0.497267565
7   HRASLS2    LIPF  0.497581499
8  EPB41L4B  NEDD4L  0.497613643

You can also use frozenset to filter out duplicates:

# a frozenset is unordered, so (gene1, gene2) and (gene2, gene1) collapse to the same key
without_dupes = {frozenset([first, second]): score for first, second, score in df.values}.items()

result = pd.DataFrame([(*k, v) for k, v in without_dupes], columns=['gene1', 'gene2', 'score'])
print(result)

Output:

     gene1     gene2        score
0       PGC  EPB41L4B  0.496713249
1      CHGA      MT1G  0.496751983
2    FCER1G     AEBP1  0.497061368
3     ADTRP     CAPN9  0.497122603
4  FAM189A2      GLUL   0.49721763
5     DUOX1       CA9  0.497233294
6     EDNRA      MSLN  0.497267565
7   HRASLS2      LIPF  0.497581499
8    NEDD4L  EPB41L4B  0.497613643
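
Note that the order of gene1 and gene2 within each row comes out arbitrary here, because a frozenset is unordered. If you want a deterministic order, a small variation (not part of the original answer) is to key the dictionary on a sorted tuple instead:

# a sorted tuple keeps gene1 <= gene2 in every row while still collapsing mirrored pairs
without_dupes = {tuple(sorted([first, second])): score for first, second, score in df.values}.items()
result = pd.DataFrame([(*k, v) for k, v in without_dupes], columns=['gene1', 'gene2', 'score'])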