Got all the way here and had to look back up to see this post was from 2019. The MSVC standard library has been open source for several years now. https://github.com/microsoft/STL
Though to be perfectly honest, setting a breakpoint and looking at the disassembly is probably easier than reading standard library code.
I agree with you; I too prefer looking at optimized assembly with symbols over following the code through source files (which are usually filled with #ifdefs and macros).
But you're correct, while I can read https://doc.rust-lang.org/src/alloc/sync.rs.html (where Rust's Arc is defined) ...
... good luck to me in https://github.com/microsoft/STL/blob/main/stl/inc/memory
There are tricks to cope with C++ macros not being hygienic, layered on top of tricks to cope with the fact that C++ doesn't have ZSTs, tricks to reduce the redundancy of writing all this out for related types, and hacks to improve compiler diagnostics when you do something especially stupid. Do its maintainers just learn to read code like this? I guess so, since it's Open Source.
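For a taste of what those tricks look like, here's a rough sketch of two of them (illustrative only, loosely shaped like the _Compressed_pair machinery you'll find in such libraries, heavily simplified and not lifted from the MSVC STL):

    #include <type_traits>

    // Reserved "_Ugly" names (leading underscore + capital) are used because user
    // code may not #define such identifiers, so macros can't break them.
    // Empty-base-class optimization stands in for the zero-sized types Rust gets
    // for free: an empty allocator/deleter adds no storage to the pair.
    template <class _Ty1, class _Ty2,
              bool = std::is_empty_v<_Ty1> && !std::is_final_v<_Ty1>>
    struct _Compressed_pair : private _Ty1 {      // EBO path: _Ty1 occupies no space
        _Ty2 _Myval2;
        _Ty1& _Get_first() noexcept { return *this; }
    };

    template <class _Ty1, class _Ty2>
    struct _Compressed_pair<_Ty1, _Ty2, false> {  // fallback: store both members
        _Ty1 _Myval1;
        _Ty2 _Myval2;
        _Ty1& _Get_first() noexcept { return _Myval1; }
    };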
/// This is inline markdown documentation in Rust source,
Ughh, this brings back bad memories of the days I spent trying to diagnose why glibc would often give wrong answers for some users and not others (they've since mitigated this problem slightly by combining pthreads and libdl into the same library). I wish they would get rid of this, since even the comment on it notes that the optimization is unsound (the ability to make syscalls directly, as used by Go and others, makes this optimization potentially dangerous). It also upsets static analysis tools, since they see that glibc doesn't appear to have the synchronization the library promises.
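The pattern in question looks roughly like this; a loose sketch modeled on the libstdc++/glibc gthread machinery (exact symbols and checks differ by version, and threads_active/refcount_add are illustrative names, not real library symbols):

    #include <pthread.h>

    // Weak reference: stays null unless libpthread was actually linked in.
    extern "C" int __pthread_key_create(pthread_key_t*, void (*)(void*))
        __attribute__((weak));

    static bool threads_active()
    {
        return &__pthread_key_create != nullptr;
    }

    static void refcount_add(int* mem, int val)
    {
        if (threads_active())
            __atomic_fetch_add(mem, val, __ATOMIC_ACQ_REL);  // real atomic RMW
        else
            *mem += val;  // "optimized" non-atomic path: unsound if threads were
                          // created behind the library's back, e.g. via clone(2)
    }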
> Parallelism without pthread
To get __atomic_add_dispatch to take the atomic path, it looks like one is expected to ensure pthread_create is referenced. One way to do that without actually creating a pthread or std::thread is to reference it outside LTO'd files, or do it like they did above.
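If you just need that reference, one sketch of it (the variable name is mine, purely illustrative; on glibc 2.34+ this is largely moot since libpthread is merged into libc): take pthread_create's address somewhere the linker and LTO can't discard, which pulls in the real library and makes the __gthread_active_p()-style weak-symbol checks report that threads are active.

    #include <pthread.h>

    extern "C" {
        // A strong reference to pthread_create; volatile plus external linkage
        // to discourage the optimizer/LTO from dropping it.
        int (*volatile force_pthread_link)(pthread_t*, const pthread_attr_t*,
                                           void* (*)(void*), void*) = &pthread_create;
    }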
> > It is possible to create threads by using the OS syscalls bypassing completely the requirement of pthead
As the other person said, it is impractical to do so; it's easier to just reimplement the gthread and pthread functions as hooks (some toolchains do this).
I have tried and failed to do this for a C++ program because the amount of C++ runtime static init/shutdown stuff you would need to deal with isn't practical to implement yourself.
Maybe there's a reason I'd never run into, but this seems like a missed opportunity. Even if I have no idea what Goose is, I can see it's a type; that seems like a win.
If you go to the CPU-specific tables, LOCK ADD is something like 10-50 cycles of latency (Zen 3: 8, Zen 2: 20, Bulldozer: 55, lol) vs the expected 1 cycle for a regular ADD, and about 10 cycles on Intel CPUs.
So it can be starkly slower on some older AMD platforms, and merely ~10x slower on modern x86 platforms.
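A crude way to see that gap without the instruction tables: a single-threaded, uncontended microbenchmark (my own sketch; absolute numbers vary a lot by CPU, and the volatile adds some store-forwarding cost to the plain loop, so treat the ratio as a rough lower bound):

    #include <atomic>
    #include <chrono>
    #include <cstdio>

    int main() {
        constexpr long N = 100'000'000;

        volatile long plain = 0;             // volatile keeps the loop from being optimized away
        std::atomic<long> atomic_counter{0};

        auto t0 = std::chrono::steady_clock::now();
        for (long i = 0; i < N; ++i)
            plain = plain + 1;               // plain ADD (through a load/store)
        auto t1 = std::chrono::steady_clock::now();
        for (long i = 0; i < N; ++i)
            atomic_counter.fetch_add(1, std::memory_order_relaxed);  // LOCK ADD / LOCK XADD
        auto t2 = std::chrono::steady_clock::now();

        std::printf("plain:  %.2f ns/op\n",
            std::chrono::duration<double, std::nano>(t1 - t0).count() / N);
        std::printf("atomic: %.2f ns/op\n",
            std::chrono::duration<double, std::nano>(t2 - t1).count() / N);
    }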
Writing performant parallel code always means absolutely minimizing communication between threads.
Atomic operations work inside the confines of the cache coherence protocol. Nothing has to be flushed to main memory, or even to a lower-level cache.
An atomic operation does something more along the lines of emitting an invalidation, putting the cache line into an exclusive state, and then holding off read and invalidation requests (snoops) from other cores while it operates.
Even if you're one of the crazy people who thinks that's the sane default, the value of analysing this key type and choosing a better ordering is enormous. When you do that analysis, your answer is going to be acquire-release, and only for some edge cases; in many places the relaxed atomic ordering is fine.
All RMW operations have sequentially consistent semantics on x86.
It's not exactly a store buffer flush, but any subsequent loads in the pipeline will stall until the store has completed.
Sequential consistency is a property of a programming language's semantics and cannot simply be inferred from the hardware. It is possible for the hardware operations to all be SC, yet for the compiler to still provide weaker memory orderings through its own optimizations.
There is a pretty clear mapping from C++ atomic operations to hardware instructions, and while the C++ memory model is not defined in terms of instruction reordering, that mapping is still useful when talking about performance. Sequential consistency is also a pretty broadly accepted concept outside of the C++ memory model; I think you're being a little too nitpicky about terminology.
There are algorithms whose correctness depends on sequential consistency and which cannot be implemented on x86 without explicit barriers, for example Dekker's algorithm.
What x86 does provide is TSO semantics, not sequential consistency.
From the Intel SDM:
> Synchronization mechanisms in multiple-processor systems may depend upon a strong memory-ordering model. Here, a program can use a locking instruction such as the XCHG instruction or the LOCK prefix to ensure that a read-modify-write operation on memory is carried out atomically. Locking operations typically operate like I/O operations in that they wait for all previous instructions to complete and for all buffered writes to drain to memory (see Section 8.1.2, “Bus Locking”).
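To make the Dekker's-algorithm point concrete: its core is the classic store-buffering pattern, which is exactly where TSO and sequential consistency part ways. A standard litmus-test sketch (textbook code, not something from the article): with plain release stores (MOV on x86), both threads can read 0, an outcome SC forbids; only seq_cst stores (XCHG, or MOV plus MFENCE) rule it out.

    #include <atomic>
    #include <thread>

    std::atomic<int> x{0}, y{0};
    int r1, r2;

    void t1() {
        x.store(1, std::memory_order_release);    // plain MOV: sits in the store buffer
        r1 = y.load(std::memory_order_acquire);   // may complete before the store drains
    }

    void t2() {
        y.store(1, std::memory_order_release);
        r2 = x.load(std::memory_order_acquire);
    }

    int main() {
        std::thread a(t1), b(t2);
        a.join(); b.join();
        // Under x86-TSO with these orderings, r1 == 0 && r2 == 0 is a legal outcome.
        // Switch both stores to std::memory_order_seq_cst and it becomes impossible.
        return (r1 == 0 && r2 == 0) ? 1 : 0;
    }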
I don't believe that shared_ptr uses seq-cst because I can just look at the source code, and I know that inc ref is relaxed and dec ref is acq-rel, as they should be.
However, none of this makes a difference on x86, where RMW atomic operations all lower to the same instructions (like LOCK ADD). Loads also don't care about memory order there; stores sometimes do, and that was what my comment was about.
Hence the sequentially consistent ordering doesn't come into the picture.
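Concretely, this is roughly how the orderings lower on x86-64 with mainstream compilers (the codegen noted in comments is what compilers typically emit, not a guarantee; check your own compiler on a disassembly or compiler explorer):

    #include <atomic>

    std::atomic<int> g{0};

    int  load_any_order()     { return g.load(std::memory_order_seq_cst); }       // mov (same for relaxed/acquire)
    void store_release(int v) { g.store(v, std::memory_order_release); }          // mov
    void store_seq_cst(int v) { g.store(v, std::memory_order_seq_cst); }          // xchg (or mov + mfence)
    int  rmw_relaxed()        { return g.fetch_add(1, std::memory_order_relaxed); } // lock xadd
    int  rmw_seq_cst()        { return g.fetch_add(1, std::memory_order_seq_cst); } // lock xadd, identical
    void rmw_no_result()      { g.fetch_add(1, std::memory_order_relaxed); }        // lock add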
And yeah, no, you don't get sequentially consistent ordering for free on x86. x86 has total store order, but firstly that's not quite enough on its own to deliver sequentially consistent semantics in the machine, and secondly the ordering you ask for also acts as a barrier to the compiler during optimisation, so that's impacted too. So if you insist on this ordering (which, to be clear again, you almost never should; the fact it's the default in C++ is IMO a mistake), it does make a difference on x86.
Why would shared_ptr refcounting need anything other than relaxed? Acq/rel are for implementing multi-variable atomic protocols, and shared_ptr refcounting simply doesn't have other variables.
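For context, the usual argument for the acq-rel decrement mentioned upthread, as a minimal generic sketch (not the code of any particular standard library): the increment can be relaxed because a new reference is only ever created from an existing one, but the last decrement has to publish, and then observe, all writes to the object before destroying it.

    #include <atomic>

    struct ControlBlock {
        std::atomic<long> refs{1};
        // ... the managed object lives alongside this ...
    };

    void add_ref(ControlBlock* cb) {
        // Creating a new reference requires already holding one, so no ordering
        // with other memory is needed: relaxed is sufficient.
        cb->refs.fetch_add(1, std::memory_order_relaxed);
    }

    void release(ControlBlock* cb) {
        // Release: make this thread's writes to the object visible before the
        // reference is given up.
        if (cb->refs.fetch_sub(1, std::memory_order_release) == 1) {
            // Acquire: observe the writes of every thread that released earlier,
            // before running the destructor.
            std::atomic_thread_fence(std::memory_order_acquire);
            delete cb;   // last owner tears the object down
        }
    }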
There's essentially nothing but DMA/PCIe accesses that won't check the shared/global cache in hopes of a read hit before going to the underlying memory, at least on any system (more specifically, CPU) you'd want to run modern Linux on.
There are non-temporal memory accesses, where reads don't leave a trace in the cache and writes only use a limited amount of buffering for some modest "early reported completion"/throughput-smoothing effects, as well as some special-purpose memory access types.
For example, on x86 there's "write-combining": a memory type set in the page table entry for that virtual address, where writes go through a small write-combining buffer (typically a single-digit number of cache lines) local to the core, used as a writeback cache. Small writes from a loop (like translating a CPU-side pixel buffer to a GPU pixel encoding while writing through a PCIe mapping into VRAM) can then accumulate into full cache lines, which eliminates any need for read-before-write transfers of those lines and generally makes the writeback transfers more efficient when you go through PCIe/Infiniband/RoCE (where bundling typically up to 64 cache lines together reduces packet/header overhead).
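A concrete illustration of that kind of loop (my own sketch; it assumes dst points at a 16-byte-aligned write-combining mapping and that bytes is a multiple of 16):

    #include <emmintrin.h>   // SSE2 intrinsics
    #include <cstddef>

    void copy_to_wc_mapping(void* dst, const void* src, std::size_t bytes)
    {
        auto*       d = static_cast<__m128i*>(dst);   // e.g. a WC-mapped PCIe/VRAM aperture
        const auto* s = static_cast<const __m128i*>(src);
        for (std::size_t i = 0; i < bytes / 16; ++i) {
            __m128i v = _mm_loadu_si128(s + i);       // ordinary cached read
            _mm_stream_si128(d + i, v);               // non-temporal write: bypasses the cache,
                                                      // accumulates in the WC buffers
        }
        _mm_sfence();   // make the streamed stores globally visible before the buffer is reused
    }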
What is slow, though, at least on some contemporary relevant architectures like Zen 3 (naming it only because I checked it in some detail), are single-thread-originated random reads that defeat the L2 cache's prefetcher (especially if they never hit the same DRAM page twice). The L1D cache has a fairly limited number of asynchronous cache-miss handlers (for Zen 1 [0] and Zen 2 [1] I could find mention of 22). With random DRAM read latency around 50~100 ns (assuming you use 1G "giant" pages and stay within the 32 GB of DRAM that the 32 L1 TLB entries can then cover, and especially once some concurrency causes minor congestion at the DDR4 interface), that drops request inverse throughput to around 5 ns per cache line, i.e. 12.8 GB/s. That's a fraction of the 51.2 GB/s per CCD (compute die; a 5950X has two of those plus a northbridge, and it's the link to the northbridge that's limiting here) that caps streaming reads on spec-conforming DDR4-3200 with a mainstream Zen 3 desktop processor like a "Ryzen 9 5900". (Technically it'd be around 2% lower, because you'd have to either fill the DDR4 data interface 100%, which isn't quite possible in practice, or add some reads through PCIe, which is attached to the northbridge's central data hub and doesn't seem to have throughput limits other than those of the access ports themselves.)
[0]: https://www.7-cpu.com/cpu/Zen.html [1]: https://www.7-cpu.com/cpu/Zen2.html
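Back-of-envelope for those figures, using the ~22 outstanding misses and ~100 ns latency cited above:

    \frac{\sim 100\,\text{ns miss latency}}{\sim 22\ \text{outstanding misses}} \approx 5\,\text{ns per 64-byte line}
    \qquad \frac{64\,\text{B}}{5\,\text{ns}} \approx 12.8\,\text{GB/s (latency-bound random reads)}

    3200\,\text{MT/s} \times 8\,\text{B} \times 2\ \text{channels} = 51.2\,\text{GB/s (streaming limit)}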