scrapy does not follow all URLs within a page












I am new to Scrapy and have just crawled a site for the first time, retrieving 58 results even though 120 results are available.

The problem seems to be that when a page contains 4 links, Scrapy follows the first one, but the other three are never visited: the links to those pages exist only on that one page, and it is never crawled again. I assume this because those 3 results are missing from the result set, even though the links work fine when I open the page in a browser.



The spider:



import scrapy
from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import CrawlSpider, Rule

#CLOSESPIDER_PAGECOUNT=1

from bid.items import riegerItem

class GetbidSpider(CrawlSpider):
    name = 'example'
    allowed_domains = ['www.example.co.uk']
    start_urls = ['https://www.example.co.uk/']

    rules = (
        Rule(
            LinkExtractor(allow=['test/.*,item,.*u']),
            callback='parse_item'
        ),
        # follow all urls in beta folder that are not schmuck
        Rule(
            LinkExtractor(allow=['test/[^dismiss|this].*']),
            follow=True
        ),
    )
    ...


Output:



{'downloader/request_bytes': 31681,
'downloader/request_count': 101,
'downloader/request_method_count/GET': 101,
'downloader/response_bytes': 1129752,
'downloader/response_count': 101,
'downloader/response_status_count/200': 101,
'dupefilter/filtered': 746,
'finish_reason': 'finished',
'finish_time': datetime.datetime(2018, 11, 22, 23, 0, 30, 937420),
'item_scraped_count': 58,
'log_count/DEBUG': 161,
'log_count/INFO': 8,
'memusage/max': 49242112,
'memusage/startup': 49242112,
'request_depth_max': 4,
'response_received_count': 101,
'scheduler/dequeued': 100,
'scheduler/dequeued/memory': 100,
'scheduler/enqueued': 100,
'scheduler/enqueued/memory': 100,
'start_time': datetime.datetime(2018, 11, 22, 23, 0, 26, 78036)}
2018-11-23 00:00:30 [scrapy.core.engine] INFO: Spider closed (finished)


I am using the default settings from the template spider.



If I run it again, a slightly different number of results is fetched.



How can I debug this problem in order to retrieve all results?
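
One way to see which links a rule actually matches is to test the LinkExtractor interactively in scrapy shell. A minimal sketch, assuming a placeholder listing-page URL; scrapy shell exposes the fetched page as response:

$ scrapy shell 'https://www.example.co.uk/test/'
>>> from scrapy.linkextractors import LinkExtractor
>>> le = LinkExtractor(allow=[r'test/[^dismiss|this].*'])
>>> [link.url for link in le.extract_links(response)]

Comparing that list with the links visible in the browser shows immediately whether the regex, rather than the crawl logic, is dropping URLs.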










1 Answer

Found the problem: the regex in the second rule was excluding those URLs. I reworked the regex and now it runs OK.
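
The answer does not show the corrected pattern, but note that [^dismiss|this] is a character class, not a word exclusion: it matches any single character that is not one of d, i, s, m, t, h or |, so it also rejects every URL whose path after test/ merely begins with one of those letters. A minimal sketch of the second rule rewritten with LinkExtractor's deny parameter, assuming the intent was to skip URLs containing "dismiss" or "this":

from scrapy.linkextractors import LinkExtractor
from scrapy.spiders import Rule

# Follow everything under test/ except links matching a deny pattern;
# a link must match allow and must not match deny to be followed.
Rule(
    LinkExtractor(
        allow=[r'test/.*'],
        deny=[r'dismiss', r'this'],  # assumed unwanted keywords
    ),
    follow=True,
)

A negative lookahead such as r'test/(?!dismiss|this)' would also work, but deny keeps the intent explicit.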





