Indices, not Pointers (joegm.github.io)
68 points by vitalnodo 9 hours ago | 14 comments
10000truths 7 hours ago
There are a couple other advantages that are unstated in the article, yet very important from a software design perspective:

1) Indices are a lot more portable across different environments than pointers. They can be serialized to disk and/or sent over the network, along with the data structure they refer to. Pointers can't even be shared between different processes, since they're local to an address space by design.

2) Indices enable relocation, but pointers restrict it. A struct that stores a pointer to itself cannot be trivially moved/copied, but a struct containing an integer offset into itself can. A live pointer to an object pool element prevents the object pool from being safely moved around in memory, but an index into the object pool does not impose such a restriction.
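For illustration, a minimal C sketch of point 2 (names are invented): a struct that stores a pointer into itself breaks when copied to a new address, while one that stores an offset stays self-consistent.

    #include <stdint.h>

    /* Pointer version: after a memcpy to a new address, `current`
     * still points into the OLD copy's buffer. */
    struct RingP { char buf[64]; char *current; };

    /* Offset version: `current` is an offset into this struct's own
     * buffer, so a byte-for-byte copy remains self-consistent. */
    struct RingI { char buf[64]; uint32_t current; };

    static char *ring_cursor(struct RingI *r) {
        return r->buf + r->current;  /* resolved relative to wherever r lives now */
    }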

hinkley 2 hours ago
Someone regaled me with a story of the distributed computing system on the NeXT machines that utilized 64-bit pointers, where the upper bytes were the machine address and the lower bytes the memory address on that machine.
cma 5 hours ago
Data memory-dependent prefetchers, like the ones in Apple's newer chips, I think only work with full pointers and not offsets, so it could be a decent perf hit.
astrange 4 hours ago
Indices can be much smaller than pointers (which are 8 bytes), so they have plenty of cache advantages of their own which can make up for that.
whstl 1 hour ago
Still depends. If the indices point into a dense, flat, contiguous array, access will still be faster than following pointers into scattered heap allocations, with or without prefetching, because of how CPU caching works.
o11c 6 hours ago
Some minor disadvantages:

* Indices are likely to increase register pressure slightly, as unoptimized code must keep around the base as well (and you can't assume optimization will happen). In many cases the base is stored in a struct so you'll also have to pay for an extra load stall.

* With indices, you're likely to give up on type safety unless your language supports zero-overhead types and you bother to define and use appropriate wrappers. Note in particular that "difference between two indices" should be a different type than "index", just like for pointers.
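Even in C you can claw back some of that type safety with single-member struct wrappers; a zero-overhead sketch (names invented):

    #include <stdint.h>

    /* Distinct wrapper types: the compiler rejects passing a NodeDelta
     * where a NodeIndex is expected, and vice versa. */
    typedef struct { uint32_t v; } NodeIndex;  /* position in the node pool */
    typedef struct { int32_t  v; } NodeDelta;  /* difference between two indices */

    static NodeDelta node_sub(NodeIndex a, NodeIndex b) {
        return (NodeDelta){ (int32_t)(a.v - b.v) };
    }

    static NodeIndex node_add(NodeIndex a, NodeDelta d) {
        return (NodeIndex){ a.v + (uint32_t)d.v };
    }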

IshKebab 41 minutes ago
On the latter point, I always use this in Rust: https://github.com/zheland/typed-index-collections
skulk 7 hours ago
This is a very tempting and commonly used strategy in Rust to bypass the borrow checker. I've used it to implement tries/DFAs with great success (though I can't find the code anymore)
Animats 3 hours ago
The trouble is, you've just replicated the problems of raw pointers. You can have dangling indices if the underlying object is reused or cleared. You can have aliasing, with two indices to the same object.

It's a problem in practice. Of the three times I've ever had to use a debugger on Rust code, two came from code someone had written to do their own index allocation. They'd created a race condition that Rust would ordinarily prevent.

IshKebab 40 minutes ago
You don't replicate all the problems of raw pointers. You can't have type confusion or undefined behaviour. It's totally memory safe. That's a pretty huge difference.

But I agree, it does give up some of the benefits of using native references.

bombela 3 hours ago
You don't bypass the borrow checker. Instead you use it the way it wants to be used.
lmm 2 hours ago
Often you are bypassing it. E.g. if you rebalance a tree then references into that tree may now be invalid, so the borrow checker will prevent you from doing that while you have live references. But you may also invalidate indices into the tree, and the borrow checker can't help you with that.
pjmlp 1 hour ago
This is an old-school technique. Anyone who has been coding since the days when writing business applications in assembly was common knows this kind of stuff.

Great that in the days of Electron garbage, this kind of stuff gets rediscovered.

kitd 1 hour ago
It's also the basis for efficient memory usage with the ECS pattern used extensively by game engines.
zahlman 9 hours ago
> There is a pattern I’ve learned while using Zig which I’ve never seen used in any other language.

I've done this in small applications in C (where nodes were already being statically allocated) and/or assembly (hacking on an existing binary).

No idea about the effect on speed in general; I was trying to save a few bytes of storage in a place where that mattered.

adrian_b 2 hours ago
While the author has seen this pattern in Zig, it was the normal way of writing programs in FORTRAN, decades before the appearance of the C language.

The early versions of FORTRAN did not have dynamic memory allocation. Therefore the main program pre-allocated one or more work arrays, which were either known globally or passed as arguments to all procedures.

Then wherever a C program might use malloc, an item would be allocated in a work array and the references between data structures would use the indices of the allocated items. Items could be freed as described in TFA, by putting them in a free list.

The use of the data items allocated in work arrays in FORTRAN was made easier by the fact that the language allowed the aliasing of any chunk of memory to a variable of any type, either a scalar or an array of any rank and dimensions.

So this suggestion just recommends a return to the old ways. Despite its limitations, when maximum speed is desired, FORTRAN remains unbeatable by any of its successors.

versteegen 19 minutes ago
> the language allowed the aliasing of any chunk of memory to a variable of any type, either a scalar or an array of any rank and dimensions.

Wait a minute, I've seen it stated many times that a primary reason FORTRAN can be better optimised than C is that it doesn't allow aliasing memory as easily as C does (what that means, maybe you can say), and that's why 'restrict' was added to C. On the other hand, C's "strict aliasing rule" allows compilers to assume that pointers of different types don't alias the same memory, which allows optimisations.

throwawaymaths 7 hours ago
> There is a pattern I’ve learned while using Zig which I’ve never seen used in any other language.

yeah, i feel like it's low key ECS (minus object/slot polymorphism)

anonnon 2 hours ago
> No idea about the effect on speed in general; I was trying to save a few bytes of storage in a place where that mattered.

I had a decent-sized C library that I could conditionally compile (via macros and ifdefs) to use pointers (64-bit) or indexes (32-bit), and I saw no performance improvement, at least for static allocation.
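Presumably something along these lines (a hypothetical sketch of such a compile-time switch, not the actual library):

    #include <stdint.h>

    /* Build with -DUSE_INDICES to switch node links from 64-bit pointers
     * to 32-bit indices into a single node array. */
    #ifdef USE_INDICES
    typedef uint32_t node_ref;                 /* 4 bytes per link */
    extern struct node *node_pool;
    #define NODE_DEREF(r) (&node_pool[(r)])
    #else
    typedef struct node *node_ref;             /* 8 bytes per link */
    #define NODE_DEREF(r) (r)
    #endif

    struct node {
        int      value;
        node_ref left, right;
    };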

anonymousiam 7 hours ago
But in C, there's not really any difference between pointers and indices. You can access array elements using either method, and they're both equally efficient.
kazinator 6 hours ago
There are some differences.

- You can check indices that are out of bounds without running into formal undefined behavior. ISO C does not require pointers to distinct objects to be comparable via inequality, only exact equality. (In practice it works fine in any flat-address-space implementation and may be regarded as a common extension.)

- Indices are implicitly scaled. If you have done a range check showing the index is valid, then it refers to an entry in your array; at worst it is some unoccupied/free entry that the caller shouldn't be using. If you have only checked that a pointer points into the array, you don't know that it's valid; you also have to check that its displacement from the base of the array is a multiple of the element size, i.e. that it is aligned.
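In code, the difference looks roughly like this (a sketch; `pool` and `len` are assumed names):

    #include <stdbool.h>
    #include <stddef.h>
    #include <stdint.h>

    typedef struct { int value; } Item;

    /* One comparison validates an index. */
    static bool index_valid(size_t idx, size_t len) {
        return idx < len;
    }

    /* A pointer needs a range check AND a stride/alignment check; the
     * relational comparison is only unproblematic here because we assume
     * a flat address space, as noted above. */
    static bool pointer_valid(const Item *p, const Item *pool, size_t len) {
        if (p < pool || p >= pool + len)
            return false;
        return ((uintptr_t)p - (uintptr_t)pool) % sizeof(Item) == 0;
    }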

cyber_kinetist 5 hours ago
You do have to take care of the ABA problem - if you access memory through an index that was invalidated earlier and whose slot another object is now using, you will get weird, hard-to-debug logic errors (worse than a use-after-free, since even Valgrind can't save you). To prevent this you need a generational counter stored alongside your id (either incremented on every reuse or assigned a random hash).
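A common shape for that fix, sketched in C (not from TFA; names invented): pack a generation counter next to the slot, bump it on every free, and refuse to resolve stale handles.

    #include <stdbool.h>
    #include <stdint.h>

    /* A handle carries both the slot index and the generation it was
     * issued for; the slot bumps its generation every time it is freed. */
    typedef struct { uint32_t index; uint32_t generation; } Handle;

    typedef struct {
        uint32_t generation;
        bool     in_use;
        int      value;        /* payload */
    } Slot;

    #define POOL_SIZE 1024
    static Slot slots[POOL_SIZE];

    /* Returns NULL for stale handles instead of silently aliasing the
     * slot's new occupant. */
    static Slot *resolve(Handle h) {
        if (h.index >= POOL_SIZE) return NULL;
        Slot *s = &slots[h.index];
        if (!s->in_use || s->generation != h.generation) return NULL;
        return s;
    }

    static void release(Handle h) {
        Slot *s = resolve(h);
        if (s) { s->in_use = false; s->generation++; }
    }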
adrian_b 3 hours ago
This matters only for shared data structures. It is irrelevant for thread-local data.

For shared data structures, you have more to worry about, so regardless of whether you use indices or pointers you must use atomic operations, or some means of ensuring exclusive access to the entire data structure, or some way of detecting the need for retries when using optimistic accesses.

kazinator 3 hours ago
Well, solving/mitigating the ABA ambiguity can help debug use-after-free errors in single-threaded programs too. When an object A is freed and its memory is later recycled for a new object, we can hand out a different pointer A' (e.g. with a tagging scheme). Then, when stale copies of the old A pointer are still lingering around, we can tell they are invalid because they carry the wrong tag.

Solving ABA is probably a point in favor of indices (if we are working in a higher-level language), because their integer type supports the bit operations needed for tagging. However, some hardware supports tagging pointers directly, e.g. ARM's memory tagging, which Android uses.

smadge 5 hours ago
The distinction in the article really is between calling malloc for every added node in your data structure (“pointer”) and using the pre-allocated memory in the next element of an array (“index”).
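A minimal sketch of that array-plus-free-list idea in C (names are invented, not TFA's code):

    #include <stdint.h>

    #define NONE UINT32_MAX

    typedef struct {
        int      value;
        uint32_t next;         /* link to another node, or the free list */
    } Node;

    #define CAPACITY 1024
    static Node     pool[CAPACITY];
    static uint32_t pool_used = 0;      /* next never-used slot */
    static uint32_t free_head = NONE;   /* head of the free list */

    /* "malloc" = pop the free list, or bump the high-water mark. */
    static uint32_t node_alloc(void) {
        if (free_head != NONE) {
            uint32_t i = free_head;
            free_head = pool[i].next;
            return i;
        }
        if (pool_used < CAPACITY) return pool_used++;
        return NONE;   /* pool exhausted */
    }

    /* "free" = push the slot back onto the free list. */
    static void node_free(uint32_t i) {
        pool[i].next = free_head;
        free_head = i;
    }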
adrian_b 3 hours ago
This is only one of the advantages discussed in TFA. The others are those due to using indices instead of pointers (like smaller size, cache locality, range checking, possibility of data exchange between systems with distinct address spaces).
adrian_b 3 hours ago
After being translated literally into machine language, they are not equally efficient.

However, a C compiler may choose to use in machine language whichever of indices or pointers is more efficient on the target machine, regardless of whether the source program uses indices or pointers.

LegionMammal978 7 hours ago
Depending on how many elements you have, you can save some space using 32-bit or even 16-bit indices in place of 64-bit pointers. (Just make sure there isn't any route to overflow the index type.)
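For a rough sense of the savings (typical sizes on a 64-bit system; exact padding is implementation-dependent):

    #include <stdint.h>

    /* Two 8-byte pointers per node: usually 24 bytes with padding. */
    struct NodeP { struct NodeP *left, *right; int32_t value; };

    /* Two 16-bit indices per node (enough for up to 65,536 nodes): 8 bytes. */
    struct NodeI { uint16_t left, right; int32_t value; };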
octoberfranklin 5 hours ago
It also is/was common in Java when you need to reduce the pressure on the JVM garbage collector.
nielsbot 3 hours ago
Wonder how this compares to combining pointers with indices (effectively):

Allocate your nodes in contiguous memory, but use pointers to refer to them instead of indices. This would remove an indirection when resolving node references: a plain dereference vs. (storage_base_address + element_size * index). Resizing your storage does become potentially painful: you have to re-point all your inter-node pointers. But maybe an alternative there is to just add another contiguous (memory page-sized?) region for more nodes.

Lots of trade offs to consider :)
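A sketch of that last variant in C (invented names): grow the pool by adding fixed-size chunks, so existing nodes never move and plain pointers between them stay valid.

    #include <stddef.h>
    #include <stdlib.h>

    typedef struct Node { struct Node *left, *right; int value; } Node;

    enum { CHUNK_NODES = 1024 };   /* roughly page-sized chunks */

    typedef struct Chunk {
        struct Chunk *next;
        size_t        used;
        Node          nodes[CHUNK_NODES];
    } Chunk;

    typedef struct { Chunk *head; } NodePool;

    /* Allocate from the current chunk; add a new chunk when it fills up.
     * Nodes are never relocated, so inter-node pointers remain valid. */
    static Node *pool_alloc(NodePool *p) {
        if (!p->head || p->head->used == CHUNK_NODES) {
            Chunk *c = malloc(sizeof *c);
            if (!c) return NULL;
            c->next = p->head;
            c->used = 0;
            p->head = c;
        }
        return &p->head->nodes[p->head->used++];
    }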

munch117 2 hours ago
You have just reinvented the slab allocator.
userbinator 2 hours ago
I've done this before for easily relocatable (realloc'able) blocks but the code can get bigger and slower due to the additional address arithmetic. If you know the allocation pattern then allocating in large blocks is useful. It's always a tradeoff, and there is no right or best answer, so keep this in mind as another technique to consider.
adonovan 5 hours ago
One other benefit of indices is that they are ordered, whereas pointers in many languages (e.g. Go, but not C) are unordered. So you can binary search over an array of indices, for example, or use the relative sign of two indices to indicate a push or a pop operation in a balanced parenthesis tree.
ajxs 3 hours ago
One disadvantage that I don't think I've seen anyone mention so far: You need to use an in-band value to represent the absence of a node. That may or may not be a problem for your particular implementation.
adrian_b 3 hours ago
A null index works fine. Obviously, that location in the array must remain unused, but that does not necessarily waste memory: the base of the array can point one word before its actual start, as long as checks against using null indices are always done.
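The simpler variant, sketched in C, just leaves slot 0 unused (invented names; the base-offset trick above avoids even that one-slot cost):

    #include <stdint.h>

    #define NIL 0u   /* index 0 is reserved as the "no node" value */

    typedef struct {
        int32_t  value;
        uint32_t left, right;   /* NIL means "no child" */
    } Node;

    /* The allocator never hands out slot 0, so NIL cannot collide
     * with a real node. */
    static Node nodes[1024];

    static int has_left(uint32_t i) { return nodes[i].left != NIL; }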
quantified 9 hours ago
Reinventing malloc to some degree.
femto 8 hours ago
malloc with NEAR and FAR pointers (as was used in MSDOS on processors with segmented memory).
adamnemecek 7 hours ago
Generational arenas is the name of the pattern.
nayuki 6 hours ago
Managing your own buffer of object slots, allocating from them, and using integer indexes instead of typed object pointers - all of these are a small amount of extra work compared to using native language features.

I'd like to point out that most of the benefits explained in the article are already given to you by default on the Java virtual machine, even if you designed tree object classes the straightforward way:

> Smaller Nodes: A pointer costs 8 bytes to store on a modern 64-bit system, but unless your planning on storing over 4 billion nodes in memory, an index can be stored in just 4 bytes.

You can use the compressed OOPs (ordinary object pointers) JVM option, which on 64-bit JVMs drops the size of a pointer from 8 bytes to 4 bytes.

> Faster Access: [...] nodes are stored contiguously in memory, the data structure will fit into fewer memory pages and more nodes will fit in the cpu’s cache line, which generally improves access times significantly

If you are using a copying garbage collector (as opposed to reference counting or mark-and-sweep), then memory allocation is basically incrementing a pointer, and consecutively allocated nodes in time are consecutive in memory as well.

> Less Allocation Overhead: [...] make a separate allocation for each individual node, one at a time. This is a very naive way of allocating memory, however, as each memory allocation comes with a small but significant overhead

Also not true for a garbage-collected memory system with bump allocation. The memory allocator only needs to keep a single pointer to track where the next allocation goes. The memory system doesn't need to track which blocks are in use or keep free lists - those are implied by tracing all objects from the known roots. What I'm saying is, the amount of bookkeeping for a C-style malloc()+free() system is completely different from that of a copying garbage collector.
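A minimal sketch of bump allocation in C, just to show how little bookkeeping is involved (there is no free(); in a copying collector, reclamation happens by evacuating live objects and resetting the bump pointer):

    #include <stddef.h>

    typedef struct { char *next, *end; } BumpArena;

    static void *bump_alloc(BumpArena *a, size_t size) {
        size = (size + 7) & ~(size_t)7;        /* round up to 8-byte alignment */
        if ((size_t)(a->end - a->next) < size)
            return NULL;                       /* arena exhausted */
        void *p = a->next;
        a->next += size;
        return p;
    }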

> Instant Frees: [...] entire structure has to be traversed to find and individually free each node [...] freeing the structure becomes just a single free call

This is very much the key benefit of copying garbage collectors: Unreachable objects require zero effort to free. If you null out the pointer to the root of the tree, then the whole tree is unreachable and no work is needed to traverse or free each individual object.

Now, am I claiming that copying garbage collection is the solution to all problems? No, not at all. But I am pointing out that as evidenced by the article, this style of memory allocation and deallocation is a common pattern, and it fits extremely well with copying garbage collection. I am more than willing to admit that GCs are more complicated to design, less suitable for hard real-time requirements, etc. So, a small number of incredibly smart people design the general GC systems instead of a larger number of ordinary programmers coming up with the tricks described in the article.

unnah 56 minutes ago
All good points. On the other side of the balance, by using pointers everywhere you give the tracing garbage collector a lot more work. If it were possible to keep all your data in a single array of value types, the garbage collector would need to do basically no work at all. Maybe Project Valhalla will one day allow that. At the moment the closest you can get is a structure-of-arrays setup.
Zambyte 6 hours ago
It seems like this pretty much reimplements std.heap.MemoryPool, no?

https://ziglang.org/documentation/0.15.1/std/#std.heap.memor...

lowbloodsugar 6 hours ago
It’s 1980 again!