Issue with web crawler: IndexError: string index out of range

I am making a web crawler. I'm not using Scrapy or anything; I'm trying to have my script do most things itself. I have searched for this issue but can't find anything that helps with the error. I've tried switching around some of the variables to narrow down the problem. I am getting an error on line 24 saying IndexError: string index out of range. The functions run on the first URL (the original URL) and the second, then fail on the third URL in the list. I'm lost; any help would be greatly appreciated! Note: I'm only printing everything for testing; I'll eventually write the results to a text file.

import requests
from bs4 import BeautifulSoup

# creating requests from user input
url = raw_input("Please enter a domain to crawl, without the 'http://www' part : ")

def makeRequest(url):
    r = requests.get('http://' + url)
    # Using BS4 to parse the HTML and find <a> tags
    soup = BeautifulSoup(r.content, 'html.parser')
    # Collect every <a> tag found in the page
    output = soup.find_all('a')
    return output


def makeFilter(link):
    # Creating array for our links
    found_link = []
    for a in link:
        a = a.get('href')
        a_string = str(a)

        # if statement to filter our links
        if a_string[0] == '/': # this is the line with the error
            # Relative links
            found_link.append(a_string)

        if 'http://' + url in a_string:
            # Links from the same site
            found_link.append(a_string)

        if 'https://' + url in a_string:
            # Links from the same site with SSL
            found_link.append(a_string)

        if 'http://www.' + url in a_string:
            # Links from the same site
            found_link.append(a_string)

        if 'https://www.' + url in a_string:
            # Links from the same site with SSL
            found_link.append(a_string)
        #else:  
        #   found_link.write(a_string + '\n') # testing only
    output = found_link

    return output   

# Function for removing duplicates
def remove_duplicates(values):
    output = []
    seen = set()
    for value in values:
        if value not in seen:
            output.append(value)
            seen.add(value)
    return output

# Run the functions in order: make the request -> filter the links -> remove duplicates
def createURLList(values):
    requests = makeRequest(values)
    new_list = makeFilter(requests)
    filtered_list = remove_duplicates(new_list)

    return filtered_list

result = createURLList(url)

# print result

# for verifying and crawling resulting pages
for b in result:
    sub_directories = createURLList(url + b)
    crawler = []
    crawler.append(sub_directories)

    print crawler

1 Answer

Answered by Jack (accepted):

After a_string = str(a), try adding:

    if not a_string:
        continue

Some <a> tags have an empty href (href=""), so a_string ends up as an empty string and a_string[0] raises the IndexError; skipping those entries avoids it.
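
To see why that works in context, here is a minimal sketch of makeFilter with the guard in place. It is only a sketch, not part of the original answer: it checks the raw href before converting it to a string (which also skips tags with no href at all), uses startswith('/') so nothing is ever indexed, and collapses the four same-site checks into one loop over a tuple of prefixes. Like the original, it still reads the module-level url variable.

def makeFilter(link):
    # List of links we decide to keep
    found_link = []
    # Same-site prefixes from the original checks (with/without www and SSL)
    prefixes = ('http://' + url, 'https://' + url,
                'http://www.' + url, 'https://www.' + url)
    for a in link:
        href = a.get('href')
        # Skip <a> tags with no href or an empty href="";
        # an empty string is exactly what made a_string[0] fail
        if not href:
            continue
        a_string = str(href)

        if a_string.startswith('/'):
            # Relative links; startswith() is safe on any string length
            found_link.append(a_string)
        elif any(p in a_string for p in prefixes):
            # Absolute links pointing back at the same site
            found_link.append(a_string)

    return found_link

The rest of the script (makeRequest, remove_duplicates, createURLList) can stay as it is for this particular error; only the filter needed the guard.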