This is the mail archive of the gcc-help@gcc.gnu.org mailing list for the GCC project.



Re: SUSv3's "memory location" and threads


Ian Lance Taylor wrote:
> "Adam Olsen" <rhamph@gmail.com> writes:
>
>> For example, if I had "struct { char foo; char bar[3]; }", where my
>> first thread had a pointer to foo and was modifying it, while my
>> second thread had a pointer to bar and was modifying it, would that
>> meet the requirements?  My understanding is that a C compiler can (and
>> in many cases, will) use larger writes so long as they appear the same
>> for a single-threaded program; this obviously breaks threading though.
>
> Yes, that can happen.

Can you cite an example with GCC on a given CPU, or do you mean that because the C specification doesn't rule it out, it is possible for some compiler somewhere to use this method of access? In that strict sense "it can happen", but in reality an example may be difficult to find.


Byte accesses are usually done with byte-sized load and store instructions. It would seemingly require more instructions and more work to load / shift / mask / store with a larger-width access than is necessary.

It interests me to learn in what cases any compiler would elect to use a larger memory access than the one implied by the C type used to access a given memory location.


To refine my query: I'm interested in cases where the default alignment and padding the compiler provides when laying out structs (and similar aggregates) would be insufficient.


The problem case is specifically one where a memory location is rewritten by another thread/CPU with the contents it already holds, in the course of writing to an adjacent byte. So, taking the original poster's example, there are 4 bytes of memory location to choose from: pick any two different byte locations and hypothesize a concurrent access from different threads/CPUs that would not be thread safe.



Agreed on sig_atomic_t: it is guaranteed to be safe for direct load and store operations only. For example, this rules out expecting pre/post increment/decrement to be atomic. That still makes it useful for posting a signaled condition from within a signal handler (such as telling an application that needs a controlled shutdown to exit its main loop and perform the shutdown tasks from its main thread).


Darryl


