1) Indices are a lot more portable across different environments than pointers. They can be serialized to disk and/or sent over the network, along with the data structure they refer to. Pointers can't even be shared between different processes, since they're local to an address space by design.
2) Indices enable relocation, but pointers restrict it. A struct that stores a pointer to itself cannot be trivially moved/copied, but a struct containing an integer offset into itself can. A live pointer to an object pool element prevents the object pool from being safely moved around in memory, but an index into the object pool does not impose such restriction.
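To make the relocation point concrete, here's a minimal C sketch (struct and field names are made up for illustration). The offset-based version can be memcpy'd, realloc'd, or written to disk and read back; the pointer-based version dangles after any move:

    #include <stddef.h>
    #include <string.h>

    typedef struct {
        char buf[32];
        char *cursor;    /* points into buf: dangles when the struct moves */
    } PtrSelf;

    typedef struct {
        char buf[32];
        size_t cursor;   /* offset into buf: survives any move or copy */
    } OffSelf;

    void demo(void) {
        OffSelf a = { "hello", 5 };
        OffSelf b;
        memcpy(&b, &a, sizeof a);        /* trivially relocatable */
        char *end = b.buf + b.cursor;    /* resolve against the copy */
        (void)end;
    }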
* Indices are likely to increase register pressure slightly, as unoptimized code must keep around the base as well (and you can't assume optimization will happen). In many cases the base is stored in a struct so you'll also have to pay for an extra load stall.
* With indices, you're likely to give up on type safety unless your language supports zero-overhead types and you bother to define and use appropriate wrappers. Note in particular that "difference between two indices" should be a different type than "index", just like for pointers.
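In C, for instance, you can get such zero-overhead wrappers from single-field structs; the names below are hypothetical, but they show the pattern of keeping "index" and "difference of two indices" as distinct types:

    #include <stdint.h>

    typedef struct { uint32_t v; } NodeIndex;   /* a position in the pool */
    typedef struct { int32_t v; } NodeOffset;   /* difference of two indices */

    static NodeOffset node_sub(NodeIndex a, NodeIndex b) {
        return (NodeOffset){ (int32_t)(a.v - b.v) };
    }

    static NodeIndex node_add(NodeIndex a, NodeOffset d) {
        return (NodeIndex){ a.v + (uint32_t)d.v };
    }

The wrappers compile to plain integer arithmetic, but accidentally passing a NodeOffset where a NodeIndex is expected becomes a compile error.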
It's a problem in practice. Of the three times I've ever had to use a debugger on Rust code, two came from code someone had written to do their own index allocation. They'd created a race condition that Rust would ordinarily prevent.
But I agree, it does give up some of the benefits of using native references.
Great that in the days of Electron garbage, this kind of stuff gets rediscovered.
I've done this in small applications in C (where nodes were already being statically allocated) and/or assembly (hacking on an existing binary).
No idea about the effect on speed in general; I was trying to save a few bytes of storage in a place where that mattered.
The early versions of FORTRAN did not have dynamic memory allocation, so the main program pre-allocated one or more work arrays, which were either known globally or passed as arguments to all procedures.
Then wherever a C program might use malloc, an item would be allocated in a work array and the references between data structures would use the indices of the allocated items. Items could be freed as described in TFA, by putting them in a free list.
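A minimal C sketch of that pattern (pool size, names, and the NIL sentinel are made up for illustration): the work array is a static pool, "malloc" pops an index off the free list, and "free" pushes it back:

    #include <stdint.h>

    #define POOL_SIZE 1024
    #define NIL 0xFFFFFFFFu

    typedef struct {
        int value;
        uint32_t next;   /* index of the next node, or NIL */
    } Node;

    static Node pool[POOL_SIZE];
    static uint32_t free_head = NIL;

    static void pool_init(void) {
        for (uint32_t i = 0; i < POOL_SIZE; i++)
            pool[i].next = (i + 1 < POOL_SIZE) ? i + 1 : NIL;
        free_head = 0;
    }

    static uint32_t node_alloc(void) {        /* the "malloc" */
        uint32_t i = free_head;
        if (i != NIL)
            free_head = pool[i].next;
        return i;                              /* NIL on exhaustion */
    }

    static void node_free(uint32_t i) {        /* the "free" */
        pool[i].next = free_head;
        free_head = i;
    }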
The use of data items allocated in work arrays in FORTRAN was made easier by the fact that the language allowed aliasing any chunk of memory to a variable of any type (e.g. via EQUIVALENCE), whether a scalar or an array of any rank and dimensions.
So this suggestion just recommends the return to the old ways. Despite its limitations, when maximum speed is desired, FORTRAN remains unbeatable by any of its successors.
Wait a minute, I've seen it stated many times that a primary reason FORTRAN can be better optimised than C is that it doesn't allow aliasing memory as easily as C does (perhaps you can clarify what that means), and that's why 'restrict' was added to C. On the other hand, C's "strict aliasing rule" lets compilers assume that pointers of different types don't alias the same memory, which enables optimisations.
yeah, i feel like it's low key ECS (minus object/slot polymorphism)
I had a decent sized C library that I could conditionally compile (via macros and ifdefs) to use pointers (64-bit) or indexes (32-bit), and I saw no performance improvement, at least for static allocation.
- You can check whether an index is out of bounds without running into formal undefined behavior. ISO C does not require pointers to distinct objects to be comparable via inequality, only exact equality. (In practice inequality works fine in any flat-address-space implementation and may be regarded as a common extension.)
- Indices are implicitly scaled. If a range check shows the index is valid, then it refers to an entry in your array; at worst it is some unoccupied/free entry that the caller shouldn't be using. If you have checked that a pointer points into the array, you still don't know that it's valid; you also have to check that its displacement from the base of the array is a multiple of the element size, i.e. that it is aligned to an element boundary (see the sketch below).
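A sketch of both points in C, assuming a flat address space and a hypothetical Node type:

    #include <stdbool.h>
    #include <stddef.h>

    typedef struct Node { int value; } Node;

    bool index_valid(size_t i, size_t len) {
        return i < len;   /* one comparison; at worst it names a free slot */
    }

    bool pointer_valid(const Node *p, const Node *base, size_t len) {
        /* Relational comparison of unrelated pointers is formally UB in
           ISO C; this relies on the common flat-address-space extension. */
        if (p < base || p >= base + len)
            return false;
        /* The displacement must also be a multiple of the element size,
           i.e. p must land on an element boundary. */
        size_t off = (size_t)((const char *)p - (const char *)base);
        return off % sizeof(Node) == 0;
    }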
For shared data structures you have more to worry about: regardless of whether you use indices or pointers, you must use atomic operations, some means of ensuring exclusive access to the entire data structure, or some means of detecting the need for retries when using optimistic accesses.
Solving ABA is probably a point in favor of indices (if we are working in a higher-level language), because their integer type supports the bit operations needed for tagging (sketch below). However, some hardware has support for tagging pointers directly, e.g. ARM's memory tagging, which Android uses.
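As a sketch of the tagging idea, here is a hypothetical lock-free free-list pop in C11 (the caller retries on failure): a 32-bit index is packed with a 32-bit generation tag into one 64-bit word, and the tag is bumped on every successful update so a recycled index can't be mistaken for the old one:

    #include <stdatomic.h>
    #include <stdbool.h>
    #include <stdint.h>

    #define NIL UINT32_MAX

    static uint64_t pack(uint32_t index, uint32_t tag) {
        return ((uint64_t)tag << 32) | index;
    }

    /* next[] holds the free-list links; head packs (tag, index). */
    static bool try_pop(_Atomic uint64_t *head, const uint32_t *next,
                        uint32_t *out) {
        uint64_t old = atomic_load(head);
        uint32_t idx = (uint32_t)old;
        if (idx == NIL)
            return false;                            /* list is empty */
        uint32_t tag = (uint32_t)(old >> 32);
        uint64_t want = pack(next[idx], tag + 1);    /* tag bump beats ABA */
        if (!atomic_compare_exchange_strong(head, &old, want))
            return false;                            /* lost a race; retry */
        *out = idx;
        return true;
    }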
However, a C compiler may choose to emit whichever of indices or pointers is more efficient on the target machine, regardless of which one the source program uses.
Allocate your nodes in contiguous memory, but use pointers to refer to them instead of indices. This removes an indirection when resolving node references: a plain dereference vs. (storage_base_address + element_size * index). Resizing your storage does become potentially painful: you have to re-point all your inter-node pointers. But maybe an alternative there is to just add another contiguous (memory-page-sized?) region for more nodes; see the sketch below.
Lots of trade offs to consider :)
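One way to get that alternative, sketched in C (chunk size and names are made up): grow the pool in fixed-size chunks, so existing node pointers are never invalidated, at the cost of giving up a single contiguous block:

    #include <stdlib.h>

    #define CHUNK_NODES 4096

    typedef struct Node { struct Node *next; int value; } Node;
    typedef struct Chunk { struct Chunk *next; Node nodes[CHUNK_NODES]; } Chunk;

    typedef struct {
        Chunk *chunks;   /* newest chunk at the head of the list */
        size_t used;     /* nodes handed out from the newest chunk */
    } Pool;

    static Node *pool_alloc(Pool *p) {
        if (!p->chunks || p->used == CHUNK_NODES) {
            Chunk *c = malloc(sizeof *c);   /* grow: add a chunk, never move */
            if (!c) return NULL;
            c->next = p->chunks;
            p->chunks = c;
            p->used = 0;
        }
        return &p->chunks->nodes[p->used++];
    }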
I'd like to point out that most of the benefits explained in the article are already given to you by default on the Java virtual machine, even if you design your tree node classes in the straightforward way:
> Smaller Nodes: A pointer costs 8 bytes to store on a modern 64-bit system, but unless you're planning on storing over 4 billion nodes in memory, an index can be stored in just 4 bytes.
You can use the compressed OOPs (ordinary object pointers) JVM option, -XX:+UseCompressedOops, which on 64-bit JVMs drops the size of a pointer from 8 bytes to 4 bytes; it is enabled by default for heaps smaller than about 32 GB.
> Faster Access: [...] nodes are stored contiguously in memory, the data structure will fit into fewer memory pages and more nodes will fit in the cpu’s cache line, which generally improves access times significantly
If you are using a copying garbage collector (as opposed to reference counting or mark-and-sweep), then memory allocation is basically incrementing a pointer, and nodes allocated consecutively in time are consecutive in memory as well.
> Less Allocation Overhead: [...] make a separate allocation for each individual node, one at a time. This is a very naive way of allocating memory, however, as each memory allocation comes with a small but significant overhead
Also not true for a garbage-collected memory system with bump allocation. The memory allocator only needs to keep a single pointer to keep track of where the next allocation needs to be. The memory system doesn't need to keep track of which blocks are in use or keep free lists - because those are implied by tracing all objects from the known roots. What I'm saying is, the amount of bookkeeping for a C-style malloc()+free() system is completely different than a copying garbage collector.
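A bump allocator really is that small; here's a minimal sketch (in a real copying collector the out-of-space branch would trigger a collection that evacuates live objects and resets top):

    #include <stddef.h>

    typedef struct {
        char *top;   /* next free byte */
        char *end;   /* end of the current region */
    } BumpHeap;

    static void *bump_alloc(BumpHeap *h, size_t n) {
        n = (n + 7) & ~(size_t)7;            /* keep 8-byte alignment */
        if ((size_t)(h->end - h->top) < n)
            return NULL;                     /* a real GC would collect here */
        void *p = h->top;
        h->top += n;                         /* allocation: one increment */
        return p;
    }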
> Instant Frees: [...] entire structure has to be traversed to find and individually free each node [...] freeing the structure becomes just a single free call
This is very much the key benefit of copying garbage collectors: Unreachable objects require zero effort to free. If you null out the pointer to the root of the tree, then the whole tree is unreachable and no work is needed to traverse or free each individual object.
Now, am I claiming that copying garbage collection is the solution to all problems? No, not at all. But I am pointing out that as evidenced by the article, this style of memory allocation and deallocation is a common pattern, and it fits extremely well with copying garbage collection. I am more than willing to admit that GCs are more complicated to design, less suitable for hard real-time requirements, etc. So, a small number of incredibly smart people design the general GC systems instead of a larger number of ordinary programmers coming up with the tricks described in the article.
https://ziglang.org/documentation/0.15.1/std/#std.heap.memor...