This is the mail archive of the gcc-help@gcc.gnu.org mailing list for the GCC project.



Re: volatile shared memory


NightStrike <nightstrike@gmail.com> writes:

> Do you have to use volatile if you're writing to memory mapped
> hardware, or just reading?

Most memory mapped hardware responds to both read and write requests, so
you normally have to use volatile in either case.  Of course the details
are going to depend on the specific hardware in question.


>> This is because the issues related to making code multi-processor safe
>> are related to memory barriers and memory cache behaviour.  Adding a
>> volatile qualifier will not change the program's behaviour with respect
>> to either.
>
> Is caching the reason that makes another process sharing a memory
> address different than a piece of hardware sharing a memory address?

I'm not sure I completely understand the question.  Memory mapped
hardware is not memory in the conventional sense.  It's hardware that is
manipulated by direct memory reads and writes.  Memory mapped hardware
is not cached, but that is not the most important difference.


>> Don't think volatile.  Think memory barriers invoked via asm
>> constructs.  Use the new atomic builtins.
>
> I thought (probably incorrectly) that the atomic builtins were only
> for atomic actions between threads in a process, not between separate
> processes.  Do they really work with the latter?  The information I
> got on freenode's #gcc (not oftc) was that gcc can't do anything to
> protect shared memory between processes, that you have to use a system
> semaphore feature.

Shared memory on a multiprocessor machine is shared memory.  It really
doesn't matter whether the memory is shared between threads or between
processes.  The only significant difference between a thread and a
process on a modern OS is whether memory is shared by default or not
(there are other differences regarding signal delivery that are
irrelevant here).  Once you create memory shared between processes, you
are effectively dealing with threads.

So, yes, the atomic builtins work fine.  However, my observation is that
very few people can use them correctly.  I would never use them myself,
except for the limiting cases of atomic increment and atomic compare and
swap with __ATOMIC_SEQ_CST.

Use mutexes instead.  Not all operating systems support mutexes in
process shared memory, but they should work fine on GNU/Linux.  You do
have to be careful to ensure that only one process initializes the
mutex.

Ian

