I have checked every question and article about this that I could find, but nothing really answers it. I make it sound as if I have just one question; there are actually several.
Trying to use an Object for this task just does not work, because it is way slower than an Array (now, I understand that the differences in speed, even in the millions of ops, are not that big, but why not dig for ultimate performance, right?).
So here is a list of questions that I cannot seem to find a reliable answer to:
I probably have some more questions, but I cannot think of them right now.
The fastest configuration I found that supports IDs as indexes, holes between them, and starting at a high index is to use one Array that holds the data you want to store, and another Array that holds just the indexes, so that when you are searching for the object with ID 10, you touch the index Array instead of the Array with data.
You can see an example of this at http://jsperf.com/array-management-performance/2.
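A minimal sketch of that two-array layout (the names IdStore and indexById are mine, not from the jsperf; treat it as an illustration, not the exact test code):

```js
// Two-array layout: the data stays densely packed, and only a light
// index array takes the sparse, high-index writes.
function IdStore() {
  this.data = [];      // densely packed records
  this.indexById = []; // sparse: maps an external ID to a position in `data`
}

IdStore.prototype.set = function (id, record) {
  var pos = this.indexById[id];
  if (pos === undefined) {
    pos = this.data.length;
    this.indexById[id] = pos; // only this array gets the sparse write
  }
  this.data[pos] = record;
};

IdStore.prototype.get = function (id) {
  var pos = this.indexById[id];
  return pos === undefined ? undefined : this.data[pos];
};

// Usage: IDs can start high and have holes between them.
var store = new IdStore();
store.set(100000, { name: "first" });
store.set(100010, { name: "second" });
console.log(store.get(100010)); // { name: "second" }
```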
EDIT: If you want to see the performance degradation, please "review" this jsperf and change minId and maxId to some big numbers.
Here are some stats (in million ops/sec):

                    Firefox    Chrome
    Read Defined    165.173    351.699
    Read Undefined   98.582     54.666
    Write Close       7.599    291.244
    Write Far         5.599     93.733
    Write Overwrite   7.599    291.244

                    Firefox    Chrome
    Read Defined    681.206    401.522
    Read Undefined  681.206     62.827
    Write Close     400.234    121.519
    Write Far       348.560    121.519
    Write Overwrite 400.234    234.337
P.S. Did you know that Read Defined is faster in Chrome on mobile than in Chrome on desktop?
I jsperf-ed every single question I have here, so either my tests (and others') were incorrectly written, or this is some really funky stuff.
P.S.2 I know that one of the Array tests is actually testing Object, and one of the Object tests is actually testing Array; please do not point that out, I am aware. I wrote this, and the error slipped in because jsperf is very poorly made and I was trying to check a lot of different settings relatively quickly.
P.S.3 I am sorry that some of these tests are really messy; I did not think I would actually need to show them to anyone, but I still think they are sufficient.
EDIT: (Answer, hopefully enough to re-open the question so that I can post it properly.)
Here I go with an answer to all of our (my) problems. Now that I read my questions again while having the answer, I can see how they could not be answered easily, or at all really, without digging into the sources of each browser.
I myself still do not know the exact answers to 1, 2, 3, 4 or even 5, but the correct answer is "because of implementation differences"; it is as simple as that. Obviously, I have not returned here to write just that: I have a solution to the problem I am showing in this question.
Simplified problem: you have a set of IDs assigned to data, most likely from the DB. You want that data in an array in JS so that you can work with it very easily, but you DO NOT WANT to lose performance while putting 100k elements into a JS array. How does one put N elements into an array in JS without losing read/write performance? (I did not test write, because it is not that important to me.)
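For concreteness, this is the naive pattern that causes the problem; the row data here is made up for illustration:

```js
// Naive approach: index the array directly by the database ID.
// With IDs that start high and have holes between them, the engine
// can fall back to a much slower, dictionary-like representation.
var rows = [
  { id: 100000, name: "a" },
  { id: 150000, name: "b" },
  { id: 999999, name: "c" }
];

var byId = [];
for (var i = 0; i < rows.length; i++) {
  byId[rows[i].id] = rows[i]; // sparse, high-index writes
}

console.log(byId.length); // 1000000 -- almost all holes
```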
Solution and explanation: while studying this problem, I checked a multitude of potential solutions, like splines, Fourier transforms, hash tables, binary search, and a twist on map/hash tables that actually performed VERY well, but not well enough for my taste (it would also perform worse as the size increased).
The final solution seems dead simple and obvious, but somehow (I guess I'm dumb) it took me 10 days to figure out. It scales perfectly, and access to any element is extremely fast, O(1).
To be able to write this solution, I use this property of JS engines: http://jsperf.com/array-of-arrays-with-high-indexes/2 . It would seem that access to the first 1000 elements IS SUPER FAST (even in arrays of arrays), and because I know my data is keyed by unsigned integers, I can easily map the integers from 1D to 3D, so that the FastArray can hold up to a billion elements.
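A simplified sketch of that 1D-to-3D mapping (the exact code lives in the jsperf below; the method names here are illustrative):

```js
// FastArray: split an unsigned integer ID into three base-1000 digits,
// so every inner array stays within the engine's fast first-1000 range.
// Supports IDs from 0 up to 1000^3 - 1 (a billion elements).
function FastArray() {
  this.buckets = [];
}

FastArray.prototype.set = function (id, value) {
  var x = (id / 1000000) | 0;        // most significant base-1000 digit
  var y = ((id / 1000) | 0) % 1000;  // middle digit
  var z = id % 1000;                 // least significant digit
  var level1 = this.buckets[x] || (this.buckets[x] = []);
  var level2 = level1[y] || (level1[y] = []);
  level2[z] = value;
};

FastArray.prototype.get = function (id) {
  var level1 = this.buckets[(id / 1000000) | 0];
  if (level1 === undefined) return undefined;
  var level2 = level1[((id / 1000) | 0) % 1000];
  if (level2 === undefined) return undefined;
  return level2[id % 1000];
};

var fa = new FastArray();
fa.set(123456789, "hello"); // lands at buckets[123][456][789]
console.log(fa.get(123456789)); // "hello"
```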
You can see this solution in action right here: http://jsperf.com/map-check-v2/5 -- The first test is slow because I read the sample data from a simple array, and that is very slow... EDIT: This does not seem to be the case; I really do not know what is causing it. I think jsperf is kind of freaking out, because running just that test yields pretty good results. Very weird.
P.S.4 Moderators, please re-open this question so that people searching for ultimate Array performance can see this answer as a nice, actual Answer. Thanks.
P.S.5 I do not know why, but I guess there is a bug in my final code that makes Chrome perform relatively poorly; it is probably easily fixable.
First of all, JS engines are different in every browser, so they do not perform equally in every case.
When you create an array with just one element at a high index, the object will be created with all the other "spaces" filled with undefined values, so the JS engine has to traverse the whole structure to find your value.
As @RobG says, the native array methods like `forEach` will perform a lot better in this case, because they skip the holes.
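A rough illustration of that difference (a sketch, not a benchmark; the array and counters are made up):

```js
// A sparse array: one element at a high index.
var sparse = [];
sparse[1000000] = "value";

// A plain for loop visits every index, holes included.
var visitedByFor = 0;
for (var i = 0; i < sparse.length; i++) {
  visitedByFor++; // runs 1,000,001 times, almost all on holes
}

// forEach only invokes the callback for indexes that actually exist.
var visitedByForEach = 0;
sparse.forEach(function () {
  visitedByForEach++; // runs exactly once
});

console.log(visitedByFor, visitedByForEach); // 1000001 1
```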