How to implement HTTP transfer of large files based on Node.js? (Sharing of practical methods)
How can we implement HTTP transfer of large files based on Node.js? This article introduces several practical solutions. I hope you find it helpful!

HTTP transfer of large files based on Node.js plays an important role in today's front-end and back-end full-stack development. In this article, I will walk through several solutions for implementing it. Before implementing anything, we first use the fs module of Node.js to write a large file to the local project directory:
const fs = require('fs');

// Write 100,001 numbered lines to file.txt in the current directory
const writeStream = fs.createWriteStream(__dirname + "/file.txt");
for (let i = 0; i <= 100000; i++) {
  writeStream.write(`${i} —— 我是${i}号文件\n`, "utf-8"); // "I am file No. i"
}
writeStream.end();
After the above code runs successfully, a text file of about 3.2 MB is generated in the current working directory; it will serve as the "large file" material for the programs below. Before listing the transfer solutions, we first wrap the two utility methods used later, a file-reading method and a file-compression method, as Promises:
const fs = require('fs');
const zlib = require('zlib');

// Wrap file reading in a Promise
const readFile = async (paramsData) => {
  return new Promise((resolve, reject) => {
    fs.readFile(paramsData, (err, data) => {
      if (err) {
        reject('file read error');
      } else {
        resolve(data);
      }
    })
  })
}

// Wrap gzip compression in a Promise
const gzip = async (paramsData) => {
  return new Promise((resolve, reject) => {
    zlib.gzip(paramsData, (err, result) => {
      if (err) {
        reject('file compression error');
      } else {
        resolve(result);
      }
    })
  })
}
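As a quick sanity check, here is a minimal sketch (not part of the original article's flow) that uses the two helpers above to compare the raw and compressed sizes of the generated file:

(async () => {
  const raw = await readFile(__dirname + '/file.txt');   // Buffer; .length is the byte count
  const zipped = await gzip(raw);
  console.log(`raw: ${raw.length} bytes, gzipped: ${zipped.length} bytes`);
})();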
1. Transfer via data compression
When the browser sends a request, it carries the accept and accept-* request headers, which tell the server the file types, compression formats, and languages the current browser supports. The Accept-Encoding field in the request header tells the server which content encodings (usually compression algorithms) the client can understand. The server chooses one of the methods the client supports and announces its choice through the Content-Encoding response header. The response headers below tell the browser that the returned JS script was processed with the gzip compression algorithm:
// Request headers
accept-encoding: gzip, deflate, br
// Response headers
cache-control: max-age=2592000
content-encoding: gzip
content-type: application/x-javascript
With the Accept-Encoding and Content-Encoding fields understood, let's compare the effect of serving the file with gzip disabled and with gzip enabled.
// A simple file server (gzip disabled)
const http = require('http');

const server = http.createServer(async (req, res) => {
  res.writeHead(200, {
    "Content-Type": "text/plain;charset=utf-8",
  });
  const buffer = await readFile(__dirname + '/file.txt');
  res.write(buffer);
  res.end();
})

server.listen(3000, () => {
  console.log('server started successfully')
})
// A simple file server (gzip enabled)
const http = require('http');

const server = http.createServer(async (req, res) => {
  res.writeHead(200, {
    "Content-Type": "text/plain;charset=utf-8",
    "Content-Encoding": "gzip"
  });
  const buffer = await readFile(__dirname + '/file.txt');
  const gzipData = await gzip(buffer);
  res.write(gzipData);
  res.end();
})

server.listen(3000, () => {
  console.log('server started successfully')
})
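In practice you would normally branch on the client's Accept-Encoding header instead of compressing unconditionally. The following is a minimal sketch of that negotiation, assuming the readFile and gzip helpers defined earlier:

const http = require('http');

const server = http.createServer(async (req, res) => {
  const buffer = await readFile(__dirname + '/file.txt');
  const acceptEncoding = req.headers['accept-encoding'] || '';

  if (acceptEncoding.includes('gzip')) {
    // The client advertises gzip support: compress before responding
    res.writeHead(200, {
      "Content-Type": "text/plain;charset=utf-8",
      "Content-Encoding": "gzip"
    });
    res.end(await gzip(buffer));
  } else {
    // Fall back to the uncompressed file
    res.writeHead(200, { "Content-Type": "text/plain;charset=utf-8" });
    res.end(buffer);
  }
})

server.listen(3000, () => {
  console.log('server started successfully')
})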
2. Transfer via data chunking
Chunked transfer is useful in scenarios such as generating a large HTML table from the results of a database query, or transmitting a large number of images.
Transfer-Encoding: chunked
Transfer-Encoding: gzip, chunked
When the value of the Transfer-Encoding response header is chunked, the data is sent in a series of chunks. Note that Transfer-Encoding and Content-Length are mutually exclusive: the two fields must not appear in the same response message.
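For reference, a chunked response body on the wire consists of each chunk's size in hexadecimal on its own line followed by the chunk data, terminated by a zero-length chunk (\r\n marks the CRLF line endings):

HTTP/1.1 200 OK
Content-Type: text/plain
Transfer-Encoding: chunked

7\r\n
Mozilla\r\n
9\r\n
Developer\r\n
7\r\n
Network\r\n
0\r\n
\r\n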
// Chunked data transfer
const http = require('http');

// Split the file into chunks of 10 lines each
const splitChunks = async () => {
  const buffer = await readFile(__dirname + '/file.txt');
  const lines = buffer.toString('utf-8').split('\n');
  let [chunks, i, n] = [[], 0, lines.length];
  while (i < n) {
    chunks.push(lines.slice(i, i += 10));
  }
  return chunks;
}

const server = http.createServer(async (req, res) => {
  res.writeHead(200, {
    "Content-Type": "text/plain;charset=utf-8",
    "Transfer-Encoding": "chunked",
    "Access-Control-Allow-Origin": "*",
  });
  const chunks = await splitChunks();
  // Write one chunk per second. Node's http module applies the chunked
  // framing (hex size + CRLF) to each write itself, so we only write
  // the payload here
  for (let i = 0; i < chunks.length; i++) {
    setTimeout(() => {
      res.write(`${chunks[i].join("&")}\n`);
    }, i * 1000);
  }
  setTimeout(() => {
    res.end();
  }, chunks.length * 1000);
})

server.listen(3000, () => {
  console.log('server started successfully')
})
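To watch the data arrive over time, here is a small Node.js client sketch; it assumes the server above is running on port 3000:

const http = require('http');

http.get('http://localhost:3000', (res) => {
  console.log('transfer-encoding:', res.headers['transfer-encoding']); // "chunked"
  res.setEncoding('utf-8');
  // Each 'data' event fires as data arrives (roughly one chunk per second here)
  res.on('data', (chunk) => {
    console.log(`received ${chunk.length} characters`);
  });
  res.on('end', () => console.log('transfer complete'));
});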
3. Transfer via data streams
When returning a large file to the client with Node.js, streaming the file back avoids loading the whole file into memory at once. The concrete implementation is as follows. When file data is returned as a stream, the value of the Transfer-Encoding response header is chunked, indicating that the data is sent in a series of chunks:
const http = require('http');
const fs = require('fs');
const zlib = require('zlib');

const server = http.createServer((req, res) => {
  res.writeHead(200, {
    "Content-Type": "text/plain;charset=utf-8",
    "Content-Encoding": "gzip",
    "Transfer-Encoding": "chunked"
  });
  // Stream the file through gzip straight into the response,
  // so the whole file is never held in memory at once
  fs.createReadStream(__dirname + "/file.txt")
    .pipe(zlib.createGzip())
    .pipe(res);
})

server.listen(3000, () => {
  console.log('server started successfully')
})
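A browser decompresses a gzip-encoded response transparently. To consume it from Node.js instead, a small client sketch can pipe the response through a gunzip stream (it assumes the streaming server above is running on port 3000):

const http = require('http');
const zlib = require('zlib');

http.get('http://localhost:3000', (res) => {
  // Decompress the gzip-encoded body and print it as it streams in
  res.pipe(zlib.createGunzip()).pipe(process.stdout);
});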