Is the JS native array.flat slow for depth = 1?

This gist is a small benchmark I wrote comparing the performance of 4 alternatives for flattening arrays of depth = 1 in JS (the code can be copied as-is into the Chrome console). If I'm not missing anything, the native Array.prototype.flat has the worst performance by far - on the order of 30-50 times slower than any of the alternatives. It should be noted that the 4th implementation in this benchmark is consistently the most performant, often running about 70 times faster than the native flat. The code was tested several times in Node v12 and the Chrome console.

This result is most pronounced for the large inputs - see the last 2 arrays tested below. It is very surprising given the spec, and given the V8 implementation, which seems to follow the spec to the letter. My C++ knowledge is non-existent, as is my familiarity with the V8 rabbit hole, but from the recursive definition it seems to me that once we reach a sub-array at the final depth, no further recursive calls are made for it (the shouldFlatten flag is false once the decremented depth reaches 0, i.e. at the final sub-level), and adding to the flattened result is just an iterative loop over the sub-elements with a simple per-element call. Therefore, I cannot see a good reason why a.flat should suffer so much in performance.
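For reference, here is a rough JavaScript transcription of the spec's FlattenIntoArray steps - not V8's actual C++, just my approximation to illustrate the reasoning: with depth = 1 the recursion happens at most once per sub-array, with depth - 1 = 0, so shouldFlatten is false for every sub-element and each one is simply copied into the target.

```js
// Rough sketch of the spec's FlattenIntoArray (mapper arguments omitted).
function flattenIntoArray(target, source, sourceLen, start, depth) {
  let targetIndex = start;
  for (let sourceIndex = 0; sourceIndex < sourceLen; sourceIndex++) {
    if (!(sourceIndex in source)) continue;          // HasProperty check: skip holes
    const element = source[sourceIndex];
    const shouldFlatten = depth > 0 && Array.isArray(element);
    if (shouldFlatten) {
      // Recurse with the decremented depth; at depth = 1 this happens once per sub-array.
      targetIndex = flattenIntoArray(target, element, element.length, targetIndex, depth - 1);
    } else {
      target[targetIndex] = element;                 // spec: CreateDataPropertyOrThrow
      targetIndex++;
    }
  }
  return targetIndex;
}

// Roughly what arr.flat(1) does per the spec:
// const result = [];
// flattenIntoArray(result, arr, arr.length, 0, 1);
```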

I thought that perhaps the fact that the result size is not pre-allocated in the native flat could explain the difference. However, the second implementation in this benchmark, which does not pre-allocate either, shows that this alone cannot explain it - it still outperforms the native flat by a factor of 5-10. What could the reason be?

The implementations tested (in the same order as in the code, stored in the implementations array - the two I wrote myself appear at the end of the snippet):

  1. My own flatten implementation, which pre-allocates the final flattened length (thereby avoiding any re-allocations). Apologies for the imperative code, I wanted maximal performance. (A rough sketch of the variants follows this list.)
  2. The simplest naive implementation, iterating over every sub-array and pushing into the final array (thereby risking re-allocations).
  3. Array.prototype.flat (the native flat).
  4. [].concat(...arr) (i.e. spreading the array and then concatenating the results; a popular way to achieve a depth = 1 flatten).
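The exact code lives in the gist; the following is only an illustrative sketch of what the first two variants roughly look like (the names here are mine, not taken from the gist):

```js
// Illustrative sketches only - the exact code is in the gist.

// 1. Pre-allocated flatten: compute the total length first, then fill by index.
function flatPreallocated(arr) {
  let total = 0;
  for (const sub of arr) total += sub.length;
  const result = new Array(total);   // allocate the final size up front
  let i = 0;
  for (const sub of arr) {
    for (const el of sub) result[i++] = el;
  }
  return result;
}

// 2. Naive flatten: push every element and let the result grow as needed.
function flatNaive(arr) {
  const result = [];
  for (const sub of arr) {
    for (const el of sub) result.push(el);
  }
  return result;
}

// 3. The native flat:              arr => arr.flat()
// 4. Spread + concat (depth = 1):  arr => [].concat(...arr)
```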

The arrays tested (in the same order as in the code, stored in the benchmarks object):

  1. 1000 sub-arrays of 10 elements each (10K elements total).
  2. 10 sub-arrays of 1000 elements each (10K elements total).
  3. 10000 sub-arrays of 1000 elements each (10M elements total).
  4. 100 sub-arrays of 100000 elements each (10M elements total).
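For completeness, here is a minimal sketch of how arrays like these can be built and timed in both Node and the Chrome console. The function and property names below are illustrative assumptions, not necessarily those used in the gist; console.time is used because it works in both environments.

```js
// Illustrative harness - the gist's actual code and names may differ.
function makeBenchmark(numSubarrays, subarrayLength) {
  return Array.from({ length: numSubarrays }, () =>
    Array.from({ length: subarrayLength }, (_, i) => i)
  );
}

const benchmarks = {
  '1000 x 10':    makeBenchmark(1000, 10),     // 10K elements total
  '10 x 1000':    makeBenchmark(10, 1000),     // 10K elements total
  '10000 x 1000': makeBenchmark(10000, 1000),  // 10M elements total
  '100 x 100000': makeBenchmark(100, 100000),  // 10M elements total
};

// The pre-allocated and naive sketches above can be added to this map as well.
const implementations = {
  nativeFlat:   arr => arr.flat(),
  spreadConcat: arr => [].concat(...arr),
};

for (const [arrName, arr] of Object.entries(benchmarks)) {
  for (const [implName, impl] of Object.entries(implementations)) {
    const label = `${arrName} / ${implName}`;
    console.time(label);
    impl(arr);
    console.timeEnd(label);
  }
}
```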