skip callbacks when a hoster is unreachable & cleanup/refactoring #22
```diff
@@ -1,15 +1,14 @@
-from typing import Dict, Any, Type, Union
+from typing import Dict
 from crawlers.lib.platforms.i_crawler import ICrawler
 from crawlers.lib.platforms.gitea import GiteaCrawler
 from crawlers.lib.platforms.gitlab import GitLabCrawler
 from crawlers.lib.platforms.bitbucket import BitBucketCrawler
-from crawlers.lib.platforms.github import GitHubV4Crawler, GitHubRESTCrawler
+from crawlers.lib.platforms.github import GitHubV4Crawler


 platforms: Dict[str, ICrawler] = {
     GiteaCrawler.type: GiteaCrawler,
     GitLabCrawler.type: GitLabCrawler,
     GitHubV4Crawler.type: GitHubV4Crawler,
-    GitHubRESTCrawler.type: GitHubRESTCrawler,
     BitBucketCrawler.type: BitBucketCrawler,
 }
```
|
|
```diff
@@ -1,5 +1,8 @@
 import logging
-from typing import List, Tuple
+from typing import List, Tuple, Union
+
+from requests import ConnectionError, Timeout, TooManyRedirects
+from urllib3.exceptions import MaxRetryError

 from crawlers.constants import GITEA_PER_PAGE_MAX, DEFAULT_REQUEST_TIMEOUT
 from crawlers.lib.platforms.i_crawler import ICrawler
@@ -25,7 +28,7 @@ def set_state(cls, state: dict = None) -> dict:
         state = super().set_state(state)
         return state

-    def crawl(self, state: dict = None) -> Tuple[bool, List[dict], dict]:
+    def crawl(self, state: dict = None) -> Tuple[bool, List[dict], dict, Union[Exception, None]]:
         state = state or self.state
         while self.has_next_crawl(state):
             params = dict(
```

```diff
@@ -40,9 +43,13 @@ def crawl(self, state: dict = None) -> Tuple[bool, List[dict], dict]:
                         f"- response not ok, status: {response.status_code}")
                     return False, [], state  # nr.1 - we skip rest of this block, hope we get it next time
                 result = response.json()
+            except (MaxRetryError, ConnectionError, Timeout, TooManyRedirects) as e:
+                logger.exception(f"{self} - crawler cannot reach hoster")
+                # we re-raise these, as we want to avoid returning empty results to the indexer
+                raise e
             except Exception as e:
                 logger.exception(f"(skipping block chunk) gitea crawler crashed")
-                return False, [], state  # nr.2 - we skip rest of this block, hope we get it next time
+                return False, [], state, e  # nr.2 - we skip rest of this block, hope we get it next time

             state['is_done'] = len(result['data']) != state['per_page']  # finish early, we reached the end

```

Contributor (Author):

The first except, with the specific exceptions, is the one we want to re-raise, as those mean we have a complete failure for the block. We could try to salvage these requests more, but I think we should start like this. On the other side, all other exceptions are, like I said, handled as before: we catch them and continue (but now we also yield any caught exceptions that occurred).
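In isolation, the pattern described in that comment looks roughly like this; it is only a sketch, and `fetch_chunk`/`do_request` are hypothetical names, not code from this PR:

```python
# Standalone sketch of the error-handling split; `do_request` is a
# hypothetical callable standing in for the actual HTTP request.
from requests import ConnectionError, Timeout, TooManyRedirects
from urllib3.exceptions import MaxRetryError


def fetch_chunk(do_request, state):
    """Connection-level failures propagate to the caller; everything else
    is swallowed and handed back alongside empty results."""
    try:
        result = do_request()
    except (MaxRetryError, ConnectionError, Timeout, TooManyRedirects):
        # complete failure for this block: re-raise so the caller can skip
        # the callback entirely
        raise
    except Exception as e:
        # any other crash only empties this chunk; the caller still gets
        # the exception to inspect
        return False, [], state, e
    return True, result, state, None
```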
```diff
@@ -1,2 +1 @@
 from .github_v4 import GitHubV4Crawler
-from .github_rest import GitHubRESTCrawler
```
This file was deleted.
This is the main part that fixes the potential issue: we simply don't issue the callback request if we encounter these specific exceptions.
All other exceptions are handled like previously, meaning they will only cause the chunk where they occur to be empty. This one instead ignores any crawled content (if any), just drops the block without a callback, and asks for the next block (if we're in the automated workflow, like in docker-compose).
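Sketched as a caller, that behaviour might look like the following; `run_forever`, `get_next_block`, and `send_callback` are illustrative placeholders, not the project's real API:

```python
# Hypothetical automated-workflow loop; placeholder names throughout.
from requests import ConnectionError, Timeout, TooManyRedirects
from urllib3.exceptions import MaxRetryError


def run_forever(crawler, get_next_block, send_callback):
    while True:
        block = get_next_block()  # ask the indexer for more work
        try:
            results = crawler.crawl(block["state"])
        except (MaxRetryError, ConnectionError, Timeout, TooManyRedirects):
            # hoster unreachable: drop this block without a callback and
            # ask for the next one
            continue
        # every other exception was already caught inside crawl(), so the
        # callback is sent as before, possibly with an empty chunk
        send_callback(block, results)
```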
I'm not sure anymore if we can deal with it like that.
Since we only clean up dead blocks on add_blocks (when the crawler sends the "answer"), we would only ever hand out more blocks, but never clean up these dead blocks.
Also, right now, the indexer schedules blocks "hoster with the oldest run timestamp first", meaning that crawling would get stuck on this hoster the indexer never gets answers for. (And even if we change the way we schedule, this hoster would only pile up more and more dead blocks, until Redis uses up all the RAM and crashes.)
Maybe we should think about a proper communication protocol first: something that wraps the repo list we return in some kind of state, so that the indexer has a way to, for example, pause a run on a hoster for a day on connection errors, or end the run completely without making an export - something like that?
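One possible shape for such a wrapped answer, purely as a sketch of the idea being floated here (names and fields are invented, not an agreed design):

```python
# Hypothetical "crawler answer" envelope that an indexer could act on.
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional


class CrawlStatus(Enum):
    OK = "ok"                           # repos crawled, callback as usual
    HOSTER_UNREACHABLE = "unreachable"  # indexer could pause this hoster's run for a while
    RUN_FAILED = "failed"               # indexer could end the run without making an export


@dataclass
class CrawlerAnswer:
    status: CrawlStatus
    repos: List[dict] = field(default_factory=list)
    state: Optional[dict] = None
    error: Optional[str] = None         # stringified exception, if any
```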