502 error when scraping LinkedIn using scrapy with splash


I tried scraping the LinkedIn company page for Netflix using Scrapy with Splash. It works perfectly when I use scrapy shell, but it gives a 502 error when I run the script.

Error:

2017-01-06 16:06:45 [scrapy.core.engine] INFO: Spider opened
2017-01-06 16:06:45 [scrapy.extensions.logstats] INFO: Crawled 0 pages (at 0 pages/min), scraped 0 items (at 0 items/min)
2017-01-06 16:06:52 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET https://www.linkedin.com/company/netflix via http://localhost:8050/render.html> (failed 1 times): 502 Bad Gateway
2017-01-06 16:06:55 [scrapy.downloadermiddlewares.retry] DEBUG: Retrying <GET https://www.linkedin.com/company/netflix via http://localhost:8050/render.html> (failed 2 times): 502 Bad Gateway
2017-01-06 16:07:05 [scrapy.downloadermiddlewares.retry] DEBUG: Gave up retrying <GET https://www.linkedin.com/company/netflix via http://localhost:8050/render.html> (failed 3 times): 502 Bad Gateway
2017-01-06 16:07:05 [scrapy.core.engine] DEBUG: Crawled (502) <GET https://www.linkedin.com/company/netflix via http://localhost:8050/render.html> (referer: None)
2017-01-06 16:07:05 [scrapy.spidermiddlewares.httperror] INFO: Ignoring response <502 https://www.linkedin.com/company/netflix>: HTTP status code is not handled or not allowed
2017-01-06 16:07:05 [scrapy.core.engine] INFO: Closing spider (finished)

In Splash Terminal:

2017-01-06 10:36:52.186410 [render] [139764812670456] loadFinished: RenderErrorInfo(type='HTTP', code=999, text='Request denied', url='https://www.linkedin.com/company/netflix')
2017-01-06 10:36:52.205523 [events] {"fds": 18, "qsize": 0, "args": {"url": "https://www.linkedin.com/company/netflix", "headers": {"User-Agent": "Scrapy/1.3.0 (+http://scrapy.org)", "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8", "Accept-Language": "en"}, "uid": 139764812670456, "wait": 0.5}, "rendertime": 6.674675464630127, "timestamp": 1483699012, "user-agent": "Scrapy/1.3.0 (+http://scrapy.org)", "maxrss": 87956, "error": {"info": {"url": "https://www.linkedin.com/company/netflix", "code": 999, "type": "HTTP", "text": "Request denied"}, "error": 502, "description": "Error rendering page", "type": "RenderError"}, "active": 0, "load": [0.51, 0.67, 0.8], "status_code": 502, "client_ip": "172.17.0.1", "method": "POST", "_id": 139764812670456, "path": "/render.html"}
2017-01-06 10:36:52.206259 [-] "172.17.0.1" - - [06/Jan/2017:10:36:51 +0000] "POST /render.html HTTP/1.1" 502 192 "-" "Scrapy/1.3.0 (+http://scrapy.org)"

Code for spider:

import scrapy
from scrapy_splash import SplashRequest
from linkedin.items import LinkedinItem


class LinkedinScrapy(scrapy.Spider):
    name = 'linkedin_spider'  # spider name
    allowed_domains = ['linkedin.com']
    start_urls = ['https://www.linkedin.com/company/netflix']

    def start_requests(self):
        for url in self.start_urls:
            yield SplashRequest(url, self.parse,
                                endpoint='render.html', args={'wait': 0.5})

    def parse(self, response):
        item = LinkedinItem()
        item['name'] = response.xpath('//*[@id="stream-promo-top-bar"]/div[2]/div[1]/div[1]/div/h1/span/text()').extract_first()
        item['followers'] = response.xpath('//*[@id = "biz-follow-mod"]/div/div/div/p/text()').extract_first().split()[0]
        item['description'] = response.xpath('//*[@id="stream-about-section"]/div[2]/div[1]/div/p/text()').extract_first()
        yield item

1 Answer

Granitosaurus (best answer, 4 votes):

It's probably because LinkedIn is denying access based on the user-agent string your request is using (note the `code=999, text='Request denied'` in your Splash log):

"User-Agent": "Scrapy/1.3.0 (+http://scrapy.org)"

You should change the user agent in your spider to something else; see Mozilla's documentation on the User-Agent header for common browser values.
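For example (a sketch; the exact user-agent string below is illustrative, not something LinkedIn requires), you can override the default Scrapy user agent project-wide in `settings.py`, or pass it per request so Splash forwards it:

```python
# Option 1: project-wide override in settings.py.
# The UA string is an example browser user agent, not a required value.
USER_AGENT = (
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64) '
    'AppleWebKit/537.36 (KHTML, like Gecko) '
    'Chrome/55.0.2883.87 Safari/537.36'
)

# Option 2: per request, inside start_requests() of your spider:
#
#     yield SplashRequest(url, self.parse, endpoint='render.html',
#                         args={'wait': 0.5},
#                         headers={'User-Agent': USER_AGENT})
```

Either way, the request Splash makes no longer advertises itself as `Scrapy/1.3.0 (+http://scrapy.org)`, which is what triggered the 999 "Request denied" response in your log.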