If I perform a code search using the GitHub Search API and request 100 results per page, I get a varying number of results each time I run the query:
import requests
# url = "https://api.github.com/search/code?q=torch +in:file + language:python+size:0..250&page=1&per_page=100"
url = "https://api.github.com/search/code?q=torch +in:file + language:python&page=1&per_page=100"
headers = {
    'Authorization': 'Token xxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
}
response = requests.request("GET", url, headers=headers).json()
print(len(response['items']))  # prints a different count on each run, even with per_page=100
Thanks to this answer, I have the following workaround: I run the same query repeatedly until a page comes back with the expected number of results.
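Roughly, that retry looks like this (expected_count and max_retries are just names I'm using for illustration):

import requests

def fetch_page(url, headers, expected_count, max_retries=10):
    # Re-issue the same query until the page contains the expected number
    # of items (e.g. 100), giving up after max_retries attempts.
    items = []
    for _ in range(max_retries):
        items = requests.get(url, headers=headers).json().get('items', [])
        if len(items) >= expected_count:
            break
    return items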
My current project requires me to iterate through the Search API, querying for files in different size ranges. I am basically repeating the procedure described here, so my code looks something like this:
url = "https://api.github.com/search/code?q=torch +in:file + language:python+size:0..250&page=1&per_page=100"
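Concretely, the loop over size ranges is roughly this (reusing the headers dict from above; the size buckets here are just an example):

# Issue one search per file-size range (example buckets).
for size in ["0..250", "251..500", "501..1000"]:
    url = (
        "https://api.github.com/search/code"
        f"?q=torch +in:file + language:python+size:{size}&page=1&per_page=100"
    )
    response = requests.request("GET", url, headers=headers).json()
    print(size, len(response['items']))  # count varies between runs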
In this case, I don't know in advance how many results a page should actually have. Could someone suggest a workaround for this? Or am I using the Search API incorrectly?
GitHub provides documentation about Using pagination in the REST API. Each response includes a Link header that contains a link to the next set of results (along with other links); you can use this to iterate over the complete result set.

For the particular search you're doing ("every Python file that contains the word 'torch'"), you're going to run into rate limits fairly quickly, but, for example, the following code would iterate over results, 10 at a time (or so), until 50 or more results have been read:
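A sketch of that loop; it assumes httplink's parse_link_header returns a mapping-like object whose "next" entry exposes the URL as .target, and it reuses the placeholder token from the question:

import requests
from httplink import parse_link_header

headers = {
    'Authorization': 'Token xxxxxxxxxxxxxxxxxxxxxxxxxxxxx'
}

# Ask for small pages (10 items each) and follow the "next" links
# until at least 50 items have been collected.
url = "https://api.github.com/search/code?q=torch+in:file+language:python&per_page=10"
items = []

while len(items) < 50:
    response = requests.get(url, headers=headers)
    response.raise_for_status()
    items.extend(response.json()['items'])

    # The Link header points at the next page; stop when there isn't one.
    link_header = response.headers.get('link')
    if not link_header:
        break
    links = parse_link_header(link_header)
    if 'next' not in links:
        break
    url = links['next'].target

print(len(items))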
Here I'm using the httplink module to parse the Link header, but you could accomplish the same thing with an appropriate regular expression and the re module.
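For example, a regex version of the next-link extraction could look like this (just a sketch; it only handles the rel="next" entry):

import re

def next_url(link_header):
    # Extract the URL marked rel="next" from a GitHub Link header, if any.
    match = re.search(r'<([^>]+)>;\s*rel="next"', link_header)
    return match.group(1) if match else None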