Python (2.7.10): KeyError: 'id'


I am trying to follow a tutorial to make a Reddit and Twitter bot in Python. I am using Python 2.7.10, as I believe that was the version used in the tutorial; however, I get the following error:

Traceback (most recent call last):
    File "C:\Python27\twitterbot.py", line 82, in <module>
        main()
    File "C:\Python27\twitterbot.py", line 63, in main
        post_dict, post_ids = tweet_creator(subreddit)
    File "C:\Python27\twitterbot.py", line 30, in tweet_creator
        short_link = shorten(post_link)
    File "C:\Python27\twitterbot.py", line 46, in shorten
        link = json.loads(r.text)['id']
KeyError: 'id'
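As far as I can tell, the KeyError means that json.loads(r.text) produced a dict with no 'id' key, i.e. the goo.gl API replied with something other than a success payload. A minimal guard I sketched (the response bodies below are made up, not real API output):

```python
import json

def parse_short_link(response_text):
    # The goo.gl API normally returns {"id": "<short url>", ...};
    # on failure it returns an "error" object instead, so guard the lookup.
    data = json.loads(response_text)
    if 'id' in data:
        return data['id']
    raise ValueError('urlshortener error: %r' % (data.get('error'),))

# Hypothetical response bodies:
ok = '{"kind": "urlshortener#url", "id": "https://goo.gl/abc123", "longUrl": "http://example.com/"}'
bad = '{"error": {"code": 403, "message": "Forbidden"}}'
```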

The full script can be seen below (tokens and keys removed):

import praw
import json
import requests
import tweepy
import time

access_token = 'REMOVED'
access_token_secret = 'REMOVED'
consumer_key = 'REMOVED'
consumer_secret = 'REMOVED'

def strip_title(title):
    if len(title) < 94:
        return title
    else:
        return title[:93] + "..."

def tweet_creator(subreddit_info):
    post_dict = {}
    post_ids = []
    print "[Computer] Getting posts from Reddit"
    for submission in subreddit_info.get_hot(limit=20):
        post_dict[strip_title(submission.title)] = submission.url
        post_ids.append(submission.id)
    print "[Computer] Generating short link using goo.gl"
    mini_post_dict = {}
    for post in post_dict:
        post_title = post
        post_link = post_dict[post]         
        short_link = shorten(post_link)
        mini_post_dict[post_title] = short_link 
    return mini_post_dict, post_ids

def setup_connection_reddit(subreddit):
    print "[Computer] setting up connection with Reddit"
    r = praw.Reddit('yasoob_python reddit twitter Computer '
                'monitoring %s' %(subreddit)) 
    subreddit = r.get_subreddit(subreddit)
    return subreddit

def shorten(url):
    headers = {'content-type': 'application/json'}
    payload = {"longUrl": url}
    url = "https://www.googleapis.com/urlshortener/v1/url"
    r = requests.post(url, data=json.dumps(payload), headers=headers)
    link = json.loads(r.text)['id']
    return link

def duplicate_check(id):
    found = 0
    with open('posted_posts.txt', 'r') as file:
        for line in file:
            if id in line:
                found = 1
    return found

def add_id_to_file(id):
    with open('posted_posts.txt', 'a') as file:
        file.write(str(id) + "\n")

def main():
    subreddit = setup_connection_reddit('showerthoughts')
    post_dict, post_ids = tweet_creator(subreddit)
    tweeter(post_dict, post_ids)

def tweeter(post_dict, post_ids):
    auth = tweepy.OAuthHandler(consumer_key, consumer_secret)
    auth.set_access_token(access_token, access_token_secret)
    api = tweepy.API(auth)
    for post, post_id in zip(post_dict, post_ids):
        found = duplicate_check(post_id)
        if found == 0:
            print "[Computer] Posting this link on twitter"
            print post+" "+post_dict[post]+" #Python #reddit #Computer"
            api.update_status(post+" "+post_dict[post]+" #Python #reddit #Computer")
            add_id_to_file(post_id)
            time.sleep(30)
        else:
            print "[Computer] Already posted" 

if __name__ == '__main__':
    main()

1 Answer

Answered by Glaive

I ran into a similar problem, though I'm not sure it's the same one. As of PRAW 3.0, the Redditor class uses the lazy-loading behaviour that the Subreddit class already had in PRAW 2.x. You can use assert redditor.has_fetched to check whether the object has actually been loaded.

For the Redditor class specifically, both 'id' and 'name' are lazily loaded attributes, as are others such as 'link_karma'. I used to read them directly with vars(redditor)['id']; that worked in PRAW 2.x but raises an error in PRAW 3.0, because nothing has been fetched yet. My fix is to access redditor.link_karma first, which forces the object to fetch all of its attributes.
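The mechanism can be mimicked with a minimal stand-in class (this is not real PRAW code, just a sketch of why vars(obj)['id'] fails on a lazily loaded object while attribute access succeeds):

```python
class LazyRedditor(object):
    """Toy stand-in for a lazily loaded PRAW object (not real PRAW)."""

    def __init__(self):
        self.has_fetched = False

    def _fetch(self):
        # In PRAW this would hit the Reddit API; values are hard-coded here.
        self.id = 'abc123'
        self.link_karma = 42
        self.has_fetched = True

    def __getattr__(self, name):
        # Called only when normal attribute lookup fails:
        # fetch the data, then retry the lookup.
        if name in ('id', 'link_karma') and not self.has_fetched:
            self._fetch()
            return getattr(self, name)
        raise AttributeError(name)

redditor = LazyRedditor()
'id' in vars(redditor)   # False: nothing fetched yet, so vars() has no 'id'
redditor.link_karma      # attribute access triggers the fetch
vars(redditor)['id']     # 'abc123' now that the object is populated
```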