Description
Happens in 3.11.3 on Linux, but other versions appear to be affected as well.
When opening an existing shared memory segment with multiprocessing.shared_memory, the underlying object (/dev/shm/...) gets destroyed after the Python process exits, even if the segment was opened with create=False and even if .close() was called beforehand.
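A minimal reproduction sketch of the behavior described above (Linux only; the child process just attaches and closes, yet on affected versions its resource_tracker removes the file when the child exits):

```python
import os
import subprocess
import sys
from multiprocessing import shared_memory

# Parent creates and owns the segment.
owner = shared_memory.SharedMemory(create=True, size=16)

# Child merely attaches with create=False and closes -- it never
# calls unlink(), so the segment should survive the child's exit.
child_code = (
    "from multiprocessing import shared_memory\n"
    f"s = shared_memory.SharedMemory(name={owner.name!r}, create=False)\n"
    "s.close()\n"
)
result = subprocess.run([sys.executable, "-c", child_code], check=True)

# On affected versions the child's resource_tracker has already
# unlinked /dev/shm/<name>, so this may print False instead of True.
print(os.path.exists(f"/dev/shm/{owner.name}"))

owner.close()
try:
    owner.unlink()
except FileNotFoundError:
    pass  # the child's tracker already removed the file
```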
This happens because multiprocessing.resource_tracker bypasses the encapsulation in shared_memory entirely and calls shm_unlink itself, disregarding shared_memory's internal state, see here:
This cleanup hook is applied even if the shared memory was opened with create=False, and it also fires when the shared memory has already been closed with .close().
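For illustration, the tracker's cleanup table can be inspected directly; note that _CLEANUP_FUNCS is a private CPython detail whose exact contents may differ between versions:

```python
# On POSIX, the resource_tracker maps the "shared_memory" resource
# type straight to the low-level unlink function, which it calls at
# process exit without consulting the SharedMemory object's state.
from multiprocessing import resource_tracker

print(resource_tracker._CLEANUP_FUNCS.get("shared_memory"))
```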
Suggested fix:
shared_memory should register itself with the resource_tracker only if it is responsible for removing the underlying file, which is clearly not the case when the segment was opened with create=False.
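Until such a fix lands, the registration can be undone manually after attaching. This workaround sketch relies on the private _name attribute and the internal "shared_memory" resource type string, so it may break between versions:

```python
from multiprocessing import resource_tracker, shared_memory


def attach_untracked(name):
    """Attach to an existing segment without letting this process's
    resource_tracker unlink it at interpreter exit.

    Workaround sketch only: uses the private SharedMemory._name
    attribute and the internal "shared_memory" resource type.
    """
    shm = shared_memory.SharedMemory(name=name, create=False)
    # __init__ registered the segment with the tracker even though
    # this process is not the owner; undo that registration.
    resource_tracker.unregister(shm._name, "shared_memory")
    return shm


owner = shared_memory.SharedMemory(create=True, size=16)
owner.buf[:5] = b"hello"

user = attach_untracked(owner.name)
print(bytes(user.buf[:5]))  # b'hello'
user.close()

owner.close()
owner.unlink()  # only the creator removes the segment
```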