```js
// Method 1: slice an array of length l into n chunks and append n times.
$('.btn1').click(function(event) {
  var arr = ['中国', '美国', '法国', '英国', '俄罗斯', '朝鲜', '瑞典', '挪威', '德国',
             '意大利', '南非', '埃及', '巴基斯坦', '哈萨克斯坦', '印度', '越南', '加拿大', '澳大利亚'];
  var result = [];
  var size = 3; // items per chunk
  for (var i = 0; i < arr.length; i += size) {
    result.push(arr.slice(i, i + size));
  }
  // Append one chunk of <li> elements per iteration.
  for (var j = 0; j < result.length; j++) {
    var html = '';
    for (var k = 0; k < result[j].length; k++) {
      html += '<li>' + result[j][k] + '</li>';
    }
    console.log('html--' + html);
    $('.list').append(html);
  }
});

// Method 2: build the markup for the whole array (looping l times) and append once.
$('.btn2').click(function(event) {
  var arr = ['中国', '美国', '法国', '英国', '俄罗斯', '朝鲜', '瑞典', '挪威', '德国',
             '意大利', '南非', '埃及', '巴基斯坦', '哈萨克斯坦', '印度', '越南', '加拿大', '澳大利亚'];
  var html = '';
  for (var i = 0; i < arr.length; i++) {
    html += '<li>' + arr[i] + '</li>';
  }
  $('.list').append(html);
});
```
Here are the times measured with Firebug: appending in groups (method one) takes 8.3 ms; appending directly (method two) takes 5.74 ms.
Personally I thought that grouping would reduce DOM rendering work and avoid lag, but the Firebug results contradict that. Is it because the data set is too small?
In fact, your grouping does not really spread out the pressure. You need to add a timer, that is, use a time-chunking function.
A classic example is creating the QQ friend list for WebQQ. The list usually contains hundreds or thousands of friends, and if each friend is represented by one node, rendering the list may mean creating hundreds or thousands of DOM nodes in the page at once.
Adding a large number of DOM nodes to the page in a short period of time will obviously overwhelm the browser, and the result is often a stuttering or seemingly frozen browser. The code is as follows:
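A sketch of the kind of naive rendering the book describes (illustrative, not the book's exact listing):

```js
// Build a mock friend list with 1000 entries.
var ary = [];
for (var i = 1; i <= 1000; i++) {
  ary.push(i);
}

// Naive rendering: create all 1000 nodes in one synchronous pass,
// which is exactly what can make the browser stutter or freeze.
var renderFriendList = function(data) {
  for (var i = 0, l = data.length; i < l; i++) {
    var div = document.createElement('div');
    div.innerHTML = data[i];
    document.body.appendChild(div);
  }
};

renderFriendList(ary);
```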
One solution to this problem is the timeChunk function below. timeChunk lets the node-creation work be performed in batches: for example, instead of creating 1000 nodes in 1 second, it creates 8 nodes every 200 milliseconds.
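A minimal sketch of such a timeChunk function, reconstructed from the description above (the parameter names are illustrative):

```js
// ary:   the data to render
// fn:    the function that creates one node from one item
// count: how many items to process per tick (e.g. 8)
var timeChunk = function(ary, fn, count) {
  var t;
  var start = function() {
    // Process up to `count` items in this tick.
    for (var i = 0; i < Math.min(count || 1, ary.length); i++) {
      fn(ary.shift());
    }
  };
  return function() {
    t = setInterval(function() {
      if (ary.length === 0) {   // everything rendered: stop the timer
        return clearInterval(t);
      }
      start();
    }, 200);                    // one batch every 200 ms
  };
};

// Usage: 8 nodes every 200 ms instead of 1000 in one go.
var data = [];
for (var i = 1; i <= 1000; i++) { data.push(i); }
var renderFriendList = timeChunk(data, function(item) {
  var div = document.createElement('div');
  div.innerHTML = item;
  document.body.appendChild(div);
}, 8);
renderFriendList();
```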
The above is excerpted from the JavaScript Design Patterns book.
The reason your first solution is slower is this:
```js
$('.list').append(html);
```
Every time this line runs inside the loop, jQuery has to re-locate the DOM element: `$('.list')` re-queries the document for `.list`, so looping N times means N lookups, which is of course inefficient. Your second option locates the element once and appends everything to it, which is obviously faster. As for the array sharding, it buys you nothing in the JavaScript itself: the total number of iterations is the same, and sharding adds an extra step. JavaScript is single-threaded, so splitting the work into pieces, without yielding control back to the browser between them, leaves the total cost unchanged.
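If you do keep the batched version, at least hoist the lookup out of the loop; a hypothetical tweak to method one's appending loop, reusing its `result` chunks:

```js
var $list = $('.list'); // query the document once and reuse the wrapped set
for (var j = 0; j < result.length; j++) {
  var html = '';
  for (var k = 0; k < result[j].length; k++) {
    html += '<li>' + result[j][k] + '</li>';
  }
  $list.append(html); // no repeated DOM lookup on each batch
}
```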
Fetching thousands of records in one go suggests there is something wrong with your API design.
Personally, I think the first method costs more memory and time (at this small volume, at least): first, the extra statements above run more times in total; second, appending several times is never as good as appending everything in one go.
That said, thousands of records fetched at once should not all be appended in one shot either.
The reasonable approach is to load by display priority: render what the user can see first, then insert the rest into the page in batches while the browser is idle.
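A sketch of this priority-based loading; `visibleItems` and `restItems` are hypothetical inputs that your own code would have to split out:

```js
function renderByPriority(visibleItems, restItems) {
  var $list = $('.list');

  // 1. Render what the user can see immediately.
  $list.append(visibleItems.map(function(item) {
    return '<li>' + item + '</li>';
  }).join(''));

  // 2. Feed the rest in small batches whenever the browser is idle,
  //    falling back to setTimeout where requestIdleCallback is unavailable.
  var schedule = window.requestIdleCallback
    ? window.requestIdleCallback.bind(window)
    : function(fn) { setTimeout(fn, 200); };

  function insertBatch() {
    if (restItems.length === 0) { return; }
    var batch = restItems.splice(0, 8); // 8 items per batch
    $list.append(batch.map(function(item) {
      return '<li>' + item + '</li>';
    }).join(''));
    schedule(insertBatch);
  }
  schedule(insertBatch);
}
```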