I am having a hard time finding any references to my issue after searching. I have a function that updates an Asana task. It runs just fine in a simple script, but after copying it into a class I now get an invalid JSON error.
The traceback is as follows:
Traceback (most recent call last):
File "C:\Users\johcalvi\AppData\Local\Programs\Python\Python39\lib\tkinter\__init__.py", line 1892, in __call__
return self.func(*args)
File "c:\Users\johcalvi\Documents\Scripts\Sandbox\asana_reporting\LD_Reporting.py", line 934, in controlsTrainingKPI
kpi.actualHrs()
File "c:\Users\johcalvi\Documents\Scripts\Sandbox\asana_reporting\LD_Reporting.py", line 237, in actualHrs
client.tasks.update_task(task_gid,
File "C:\Users\johcalvi\AppData\Local\Programs\Python\Python39\lib\site-packages\asana\resources\gen\tasks.py", line 446, in update_task
return self.client.put(path, params, **options)
File "C:\Users\johcalvi\AppData\Local\Programs\Python\Python39\lib\site-packages\asana\client.py", line 216, in put
return self.request('put', path, data=body, headers=headers, **options)
File "C:\Users\johcalvi\AppData\Local\Programs\Python\Python39\lib\site-packages\asana\client.py", line 91, in request
raise STATUS_MAP[response.status_code](response)
asana.error.InvalidRequestError: Invalid Request: Could not parse request data, invalid JSON
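When a server rejects a request body as invalid JSON, one way to narrow it down is to serialize the same `params` dict locally with strict encoder settings; any value that is not JSON-representable (such as a float NaN) fails immediately on the client side. A minimal diagnostic sketch, assuming the same payload shape as the `update_task` call (the NaN stands in for a value that may have leaked out of a DataFrame cell):

```python
import json

# Same shape as the body passed to update_task; float('nan') stands in
# for a value that may have come from a missing DataFrame cell.
params = {'notes': 'Task automatically updated',
          'custom_fields': {'1201357453126283': float('nan')}}

try:
    # allow_nan=False makes the encoder strict: NaN/Infinity are rejected
    # instead of being emitted as non-standard JSON tokens.
    json.dumps(params, allow_nan=False)
    print('payload is valid JSON')
except ValueError as exc:
    print('not JSON-serializable:', exc)
```

This does not talk to Asana at all; it only tells you whether the body you are about to send can be encoded as standards-compliant JSON.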
Example 1: the function as defined within the class:
def actualHrs(self):
    global df
    ar.asanaAuth()
    # Initial queries for actual hours worked (workHrs) and projected hours (projectedHrs)
    self.workHrs = client.tasks.search_in_workspace(
        workspace='8442528107068',
        params={'projects.any': self.controlsTM,
                'due_on.after': '2021-12-31',
                'iterator_type': 'items',
                'resource_type': 'task',
                'opt_fields': ['name', 'start_on',
                               'due_on', 'assignee.name',
                               'custom_fields.name',
                               'custom_fields.display_value']},
        opt_pretty=True)
    # Build dataframe for actual hours worked data
    self.work_results = [self.x for self.x in self.workHrs]
    self.work_col_list = ['gid',
                          'Trainer',
                          'Task Scope',
                          'Hours_worked',
                          'Extra Hours',
                          'Barrier']
    json_normalize(self.work_results)
    self.dic_flattened = [flatten(self.d) for self.d in self.work_results]
    df = pd.DataFrame(self.dic_flattened)
    # Clean up dataframe
    df = df.rename(columns={'assignee_name': 'Trainer',
                            'custom_fields_3_display_value': 'Task Scope',
                            'custom_fields_4_display_value': 'Hours_worked',
                            'custom_fields_5_display_value': 'Barrier',
                            'custom_fields_12_display_value': 'Extra Hours',
                            'due_on': 'Due',
                            'start_on': 'Start'})
    # Set dtype for calculated fields
    df['Due'] = pd.to_datetime(df['Due'])
    df['Start'] = pd.to_datetime(df['Start'])
    df['Hours_worked'] = df['Hours_worked'].astype(float, errors='raise')
    df['Extra Hours'] = df['Extra Hours'].astype(float, errors='raise')
    # Calculate extra hours worked (>8 in a day)
    df['Extra Hours'] = df['Hours_worked'] - 8
    df['Extra Hours'] = np.where(df['Extra Hours'] < 0, 0, df['Extra Hours'])
    df = df.drop(columns=[col for col in df if col not in self.work_col_list]).fillna(value=np.nan).reindex(columns=self.work_col_list)
    # For troubleshooting data mismatch
    df.to_csv('data_csv.csv')
    # Update Asana calculated fields for Extra Hours
    for self.index, self.row in df.iterrows():
        task_gid = self.row['gid']
        # Calculate % utilization based on Proj_Hours / total available monthly
        # hours per trainer for a 30-day outlook (206 hrs per month per trainer)
        # Calculate Extra_Hours based on any over 9 in a day
        extra_hrs_val = self.row['Hours_worked'] - 9
        if extra_hrs_val < 0:
            extra_hrs_val = 0
        else:
            extra_hrs_val = extra_hrs_val
        print('Updating Extra Hours')
        client.tasks.update_task(task_gid,
                                 params={'notes': 'Task automatically updated',
                                         'custom_fields': {'1201357453126283': extra_hrs_val}},
                                 opt_pretty=False)
    return df
Example 2: the same function from the test that I ran in a separate script:
def actualHrs():
    global df
    asanaAuth()
    # Initial queries for actual hours worked (workHrs) and projected hours (projectedHrs)
    workHrs = client.tasks.search_in_workspace(
        workspace='8442528107068',
        params={'projects.any': controlsTM,
                'due_on.after': '2021-12-31',
                'iterator_type': 'items',
                'resource_type': 'task',
                'opt_fields': ['name', 'start_on',
                               'due_on', 'assignee.name',
                               'custom_fields.name',
                               'custom_fields.display_value']},
        opt_pretty=True)
    # Build dataframe for actual hours worked data
    work_results = [x for x in workHrs]
    work_col_list = ['gid',
                     'Trainer',
                     'Task Scope',
                     'Hours_worked',
                     'Extra Hours',
                     'Barrier']
    json_normalize(work_results)
    dic_flattened = [flatten(d) for d in work_results]
    df = pd.DataFrame(dic_flattened)
    # Clean up dataframe
    df = df.rename(columns={'assignee_name': 'Trainer',
                            'custom_fields_3_display_value': 'Task Scope',
                            'custom_fields_4_display_value': 'Hours_worked',
                            'custom_fields_5_display_value': 'Barrier',
                            'custom_fields_12_display_value': 'Extra Hours',
                            'due_on': 'Due',
                            'start_on': 'Start'})
    # Set dtype for calculated fields
    df['Due'] = pd.to_datetime(df['Due'])
    df['Start'] = pd.to_datetime(df['Start'])
    df['Hours_worked'] = df['Hours_worked'].astype(float, errors='raise')
    df['Extra Hours'] = df['Extra Hours'].astype(float, errors='raise')
    # Calculate extra hours worked (>8 in a day)
    df['Extra Hours'] = df['Hours_worked'] - 8
    df['Extra Hours'] = np.where(df['Extra Hours'] < 0, 0, df['Extra Hours'])
    df = df.drop(columns=[col for col in df if col not in work_col_list]).fillna(value=np.nan).reindex(columns=work_col_list)
    # For troubleshooting data mismatch
    df.to_csv('data_csv.csv')
    # Update Asana calculated fields for Projected Hours, Extra Hours, and % Utilization
    for index, row in df.iterrows():
        task_gid = row['gid']
        # Calculate Extra Hours
        extra_hrs_val = row['Hours_worked'] - 9
        if extra_hrs_val < 0:
            extra_hrs_val = 0
        else:
            extra_hrs_val = extra_hrs_val
        print('Updating Asana')
        print(task_gid)
        client.tasks.update_task(task_gid,
                                 {'notes': 'Task automatically updated',
                                  'custom_fields': {'1201357453126283': extra_hrs_val}},
                                 opt_pretty=False)
    return df
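Regardless of where a bad value comes from, a small guard in front of each `update_task` call keeps non-JSON values out of the request body. A minimal sketch; `safe_number` is a hypothetical helper written for illustration, not part of the asana client:

```python
import math

def safe_number(value, default=0.0):
    """Coerce None/NaN to a JSON-safe number before it reaches the API."""
    if value is None:
        return default
    value = float(value)
    return default if math.isnan(value) else value

# Usage idea (hypothetical), in place of passing extra_hrs_val directly:
#   client.tasks.update_task(task_gid,
#       {'custom_fields': {'1201357453126283': safe_number(extra_hrs_val)}})
print(safe_number(float('nan')))  # 0.0
print(safe_number(2.5))           # 2.5
print(safe_number(None))          # 0.0
```

This converts the server-side "invalid JSON" rejection into a deterministic default on the client, which is easier to spot in a CSV dump than a failed request.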
For all I know I am chasing my tail over something as simple as this, as I am still pretty new to this.
Thank you all for the sound advice, both on my potential issue and on improving my style. It turns out that I was filling null values with NaN via the pd.fillna method in the function in question, which was causing the JSON request body to fail to parse. After changing the fill value to 0, everything works as designed. I think I had an errant undo or a copy/paste error.
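For anyone hitting the same thing: `fillna(value=np.nan)` is effectively a no-op, so the missing values survive into the per-row numbers sent to Asana, and NaN is not representable in JSON. Filling with 0 removes them. A minimal sketch with made-up data:

```python
import numpy as np
import pandas as pd

df = pd.DataFrame({'Hours_worked': [10.0, None, 6.0]})

# Replacing NaN with NaN changes nothing -- the missing value survives.
noop = df.fillna(value=np.nan)
print(noop['Hours_worked'].isna().tolist())   # [False, True, False]

# Filling with 0 yields values that serialize cleanly to JSON.
fixed = df.fillna(value=0)
print(fixed['Hours_worked'].tolist())         # [10.0, 0.0, 6.0]
```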