Sometimes the crawler reports the following timeout error:
Traceback (most recent call last):
  File "/opt/pyspider/pyspider/run.py", line 351, in <lambda>
    app.config['fetch'] = lambda x: umsgpack.unpackb(fetcher_rpc.fetch(x).data)
  File "/usr/lib/python2.7/xmlrpclib.py", line 1233, in __call__
    return self.__send(self.__name, args)
  File "/usr/lib/python2.7/xmlrpclib.py", line 1587, in __request
    verbose=self.__verbose
  File "/usr/lib/python2.7/xmlrpclib.py", line 1273, in request
    return self.single_request(host, handler, request_body, verbose)
  File "/usr/lib/python2.7/xmlrpclib.py", line 1321, in single_request
    response.msg,
ProtocolError:
Is there a good way to avoid this?
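Not the maintainer's answer, just a hedged sketch: if the underlying fetches are genuinely slow, one common mitigation is to loosen the fetch limits in the project script via `crawl_config`. The options below (`connect_timeout`, `timeout`, `retries`) are standard `self.crawl` parameters; the specific values and the douban URL are only illustrative.

```python
from pyspider.libs.base_handler import *


class Handler(BaseHandler):
    # Applied to every self.crawl() in this project; values are illustrative.
    crawl_config = {
        'connect_timeout': 60,   # seconds allowed for establishing the connection
        'timeout': 300,          # seconds allowed for the whole request
        'retries': 5,            # extra retries before the task counts as failed
    }

    @every(minutes=24 * 60)
    def on_start(self):
        self.crawl('https://movie.douban.com/', callback=self.index_page)

    def index_page(self, response):
        return {'url': response.url, 'title': response.doc('title').text()}
```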
This error only appears during debugging.
@足兆叉虫
That one is indeed the frontend error shown during debugging, but the backend fetcher also reports errors like this:
[E 161014 23:45:09 tornado_fetcher:202] [599] douban:f25b579c7b441d19bc800412cccb145b https://movie.douban.com/revi... ValueError('No JSON object could be decoded',) 50.00s
After I finish debugging and start the real crawl, a large number of these errors appear after a while, and the web UI shows the crawler's status as "PAUSED". What is causing this, and how can I fix it?
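Not an authoritative answer, but for reference: pyspider's scheduler pauses a project after a long enough streak of failed tasks, so a burst of 599 fetch errors like the one above can flip the status to PAUSED. Assuming that is what is happening here, one hedged workaround is to let the callback handle error responses itself with the `@catch_status_code_error` decorator, so those fetches are not recorded as failures; the handling inside the callback below is purely illustrative.

```python
from pyspider.libs.base_handler import *


class Handler(BaseHandler):
    @catch_status_code_error
    def detail_page(self, response):
        # With @catch_status_code_error the callback also runs for 599 and
        # other non-200 responses, instead of the task being marked failed.
        if response.error or response.status_code != 200:
            # Illustrative handling: record the failure and move on, so the
            # scheduler does not see an unbroken run of failed tasks.
            return {'url': response.url, 'error': str(response.error)}
        return {'url': response.url, 'title': response.doc('title').text()}
```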