This time I will show you how to write a crawler with Node.js and what to watch out for while doing so. The following is a practical case; let's take a look.
Background
Recently I planned to review the Node.js material I had studied before and write a few crawlers to pass the time. I ran into some issues during the crawling process, so I am recording them here for future reference.

Dependencies

The widely used cheerio library handles the crawled content, superagent handles the requests, and log4js records the logs.

Log configuration
Without further ado, here is the code:

const log4js = require('log4js');

log4js.configure({
  appenders: {
    cheese: {
      type: 'dateFile',
      filename: 'cheese.log',
      pattern: '-yyyy-MM-dd.log',
      // include the date pattern in the rolled log file names
      alwaysIncludePattern: true,
      maxLogSize: 1024,
      backups: 3
    }
  },
  categories: {
    default: { appenders: ['cheese'], level: 'info' }
  }
});

const logger = log4js.getLogger('cheese');
logger.level = 'INFO';

module.exports = logger;
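Other modules can then share this logger. A minimal usage sketch, assuming the configuration module above is saved as logger.js next to the crawler script:

// logger.js is an assumed file name for the log4js configuration module above
const logger = require('./logger');

logger.info('crawler started');
logger.error('something went wrong');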
Crawling content and processing
// Dependencies used below; the logger path is assumed to point at the
// configuration module shown above, and `ep` is an eventproxy instance.
const superagent = require('superagent');
const cheerio = require('cheerio');
const fs = require('fs');
const path = require('path');
const EventProxy = require('eventproxy');
const logger = require('./logger');

const ep = new EventProxy();

// cityItemUrl: the list page being crawled (defined elsewhere in the script)
superagent.get(cityItemUrl).end((err, res) => {
  if (err) {
    return console.error(err);
  }
  const $ = cheerio.load(res.text);
  // parse the current page and collect the link address of each city
  const cityInfoEle = $('.newslist1 li a');
  cityInfoEle.each((idx, element) => {
    const $element = $(element);
    const sceneURL = $element.attr('href');   // page URL
    const sceneName = $element.attr('title'); // city name
    if (!sceneName) {
      return;
    }
    logger.info(`Destination parsed: ${sceneName}, URL: ${sceneURL}`);
    getDesInfos(sceneURL, sceneName); // fetch the detailed city info (see the sketch below)
  });

  // registered once, outside the loop: runs after getDirInfoComplete has been
  // emitted for every city link found above
  ep.after('getDirInfoComplete', cityInfoEle.length, (dirInfos) => {
    const content = JSON.parse(fs.readFileSync(path.join(__dirname, './imgs.json')));
    dirInfos.forEach((element) => {
      logger.info(`Current record: ${JSON.stringify(element)}`);
      Object.assign(content, element);
    });
    fs.writeFileSync(path.join(__dirname, './imgs.json'), JSON.stringify(content));
  });
});
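getDesInfos is not shown in the original snippet. A minimal sketch of what it might look like, assuming it fetches one city page, collects image URLs, and reports back through the eventproxy getDirInfoComplete event that the code above waits on (the selector and the result shape here are assumptions):

function getDesInfos(sceneURL, sceneName) {
  superagent.get(sceneURL).end((err, res) => {
    if (err) {
      logger.error(err);
      // still emit so ep.after can reach its expected count
      return ep.emit('getDirInfoComplete', { [sceneName]: [] });
    }
    const $ = cheerio.load(res.text);
    // assumed selector: collect every image URL on the detail page
    const imgs = $('img').map((i, el) => $(el).attr('src')).get();
    ep.emit('getDirInfoComplete', { [sceneName]: imgs });
  });
}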
Create folder
// recursively create a directory, similar to `mkdir -p`
function mkdirSync(dirname) {
  if (fs.existsSync(dirname)) {
    return true;
  }
  // make sure the parent directory exists first, then create this one
  if (mkdirSync(path.dirname(dirname))) {
    fs.mkdirSync(dirname);
    return true;
  }
  return false;
}
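For reference, on Node.js 10.12 and later the same thing can be done with the built-in recursive option, so the helper above is only needed on older versions (the target path here is just an example):

const fs = require('fs');
const path = require('path');

// creates any missing parent directories in one call (Node >= 10.12)
fs.mkdirSync(path.join(__dirname, 'imgs', 'cities'), { recursive: true });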
Read and write files
const content = JSON.parse(fs.readFileSync(path.join(__dirname, './dir.json')));

dirInfos.forEach((element) => {
  logger.info(`Current record: ${JSON.stringify(element)}`);
  // merge each crawled record into the existing content
  Object.assign(content, element);
});

fs.writeFileSync(path.join(__dirname, './dir.json'), JSON.stringify(content));
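readFileSync throws if dir.json does not exist yet, so the file has to be created before the first run. A small guarded variant, assuming an empty JSON object is an acceptable starting point:

const fs = require('fs');
const path = require('path');

const dirJsonPath = path.join(__dirname, './dir.json');

// fall back to an empty object when the file has not been created yet
const content = fs.existsSync(dirJsonPath)
  ? JSON.parse(fs.readFileSync(dirJsonPath))
  : {};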
Batch download resources
Downloaded resources may include pictures, audio, and so on. Bagpipe is used to limit asynchronous concurrency (see the Bagpipe documentation for details):

const Bagpipe = require('bagpipe');

// allow at most 10 downloads to run at the same time
const bagpipe = new Bagpipe(10);

bagpipe.push(downloadImage, url, dstpath, (err, data) => {
  if (err) {
    console.log(err);
    return;
  }
  console.log(`[${dstpath}]: ${data}`);
});
const request = require('request');
const fs = require('fs');

function downloadImage(src, dest, callback) {
  request.head(src, (err, res, body) => {
    // only download when the source looks like an http(s) URL
    if (src && (src.indexOf('http') > -1 || src.indexOf('https') > -1)) {
      request(src)
        .pipe(fs.createWriteStream(dest))
        .on('close', () => {
          callback(null, dest);
        });
    }
  });
}
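The snippet above only pushes a single URL. Pushing a whole list through the same bagpipe instance might look like this (the imgUrls array and the destination directory are assumptions for illustration):

const path = require('path');

// imgUrls: a hypothetical array of image URLs collected by the crawler
imgUrls.forEach((imgUrl, idx) => {
  const dstpath = path.join(__dirname, 'imgs', `${idx}${path.extname(imgUrl) || '.jpg'}`);
  // bagpipe queues the calls and keeps at most 10 downloads in flight
  bagpipe.push(downloadImage, imgUrl, dstpath, (err, data) => {
    if (err) {
      console.log(err);
      return;
    }
    console.log(`[${data}] downloaded`);
  });
});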
Encoding
Sometimes the page content loaded directly with cheerio.load turns out to be HTML-entity-encoded text once it is written to a file. You can avoid this by disabling entity decoding:

const $ = cheerio.load(buf, { decodeEntities: false });
If only the plain text is needed, a non-greedy regular expression can strip the remaining HTML tags:

const reg = /<.*?>/g;
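A short usage sketch, where html is a hypothetical string of markup, e.g. returned by cheerio's .html():

// remove every tag matched by the non-greedy pattern, keeping only the text
const plainText = html.replace(reg, '');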