Re: [PATCH, HPPA] Atomic builtins using kernel helpers for Linux


> Below is the equivalent patch for HPPA.
> As mentioned before, there are not many differences between this version
> and the ARM version.

I like the idea of this patch, but I believe it needs careful
thought before the interface is cast in stone.

+/* Determine kernel LWS function call (0=32bit, 1=64bit userspace)  */
+#define LWS_CAS (sizeof(unsigned long) == 4 ? 0 : 1)
+
+/* Kernel helper for compare-and-exchange.  */
+#define __kernel_cmpxchg( oldval, newval, mem )                                \
+  ({                                                                   \
+    register long lws_ret   asm("r28");                                        \
+    register long lws_errno asm("r21");                                        \
+    register unsigned long lws_mem asm("r26") = (unsigned long) (mem); \
+    register long lws_old asm("r25") = (oldval);                       \
+    register long lws_new asm("r24") = (newval);                       \
+    asm volatile(      "ble    0xb0(%%sr2, %%r0)       \n\t"           \
+                       "ldi    %5, %%r20               \n\t"           \
+       : "=r" (lws_ret), "=r" (lws_errno), "=r" (lws_mem),             \
+         "=r" (lws_old), "=r" (lws_new)                                \
+       : "i" (LWS_CAS), "2" (lws_mem), "3" (lws_old), "4" (lws_new)    \
+       : "r1", "r20", "r22", "r23", "r31", "memory"                    \
+    );                                                                         \
+    lws_errno;                                                         \
+   })

From a style standpoint, GCC macro defines and their arguments are usually
in upper case, and there should be no space before or after the arguments.

You are using the 32-bit and 64-bit cmpxchgs in the 32-bit and 64-bit
runtimes, respectively.  However, the implementation that's currently
in the kernel for lws_compare_and_swap64 operates on a 32-bit object.
Thus, the type for oldval and newval should be int.
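
Concretely, applying both the style and type comments, the define
might look something like this (a sketch only; the name and layout
are illustrative, and the asm body is unchanged from your patch):

  #define __KERNEL_CMPXCHG(OLDVAL, NEWVAL, MEM)                        \
    ({                                                                 \
      register long lws_ret asm("r28");                                \
      register long lws_errno asm("r21");                              \
      register unsigned long lws_mem asm("r26") = (unsigned long) (MEM); \
      register int lws_old asm("r25") = (OLDVAL);                      \
      register int lws_new asm("r24") = (NEWVAL);                      \
      asm volatile ("ble     0xb0(%%sr2, %%r0)      \n\t"              \
                    "ldi     %5, %%r20              \n\t"              \
                    : "=r" (lws_ret), "=r" (lws_errno), "=r" (lws_mem), \
                      "=r" (lws_old), "=r" (lws_new)                   \
                    : "i" (LWS_CAS), "2" (lws_mem), "3" (lws_old),     \
                      "4" (lws_new)                                    \
                    : "r1", "r20", "r22", "r23", "r31", "memory");     \
      lws_errno;                                                       \
    })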

Possibly, the kernel implementation should be modified to add byte,
half and dword operations.  This would avoid some of the shift and
mask operations in the subsequent defines.  There's currently no
64-bit runtime, so changing lws_compare_and_swap64 shouldn't be a
problem.

I would like to see kernel support for lws_compare_and_swap8,
lws_compare_and_swap16, lws_compare_and_swap32 on 32-bit kernels.
I would also like to see lws_compare_and_swap64 changed to do
a 64-bit swap.  Hopefully, this could be done without running
out of space on the gateway page.
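
To make the interface concrete, each width could get its own LWS
index, letting libgcc dispatch on operand size directly.  The indices
and names below are invented purely for illustration; only the 32-bit
entry exists in today's kernel:

  /* Hypothetical per-width LWS indices (illustrative only).  */
  #define LWS_CAS_8  2   /* lws_compare_and_swap8: does not exist yet  */
  #define LWS_CAS_16 3   /* lws_compare_and_swap16: does not exist yet */
  #define LWS_CAS_32 0   /* the existing lws_compare_and_swap          */
  #define LWS_CAS_64 1   /* lws_compare_and_swap64, widened to 64 bits */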

+/* Kernel helper for memory barrier.  */
+#define __kernel_dmb() asm volatile ( "" : : : "memory" );

Comment for above?  The comment says "kernel helper", but no kernel
helper is called.
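
As written, that is only a compiler barrier: it constrains compiler
reordering but emits no instruction.  If a hardware barrier is
actually intended, something with a real instruction would be needed.
The names below are illustrative, and the use of PA-RISC's sync
instruction is my assumption, not something the patch does:

  /* Compiler barrier only: no instruction is emitted.  */
  #define __compiler_barrier() asm volatile ("" : : : "memory")

  /* A hardware barrier needs an actual ordering instruction.  */
  #define __hardware_barrier() asm volatile ("sync" : : : "memory")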

+/* Note: we implement byte, short and int versions of atomic operations
+   using the above kernel helpers, but there is no support for
+   "long long" (64-bit) operations as yet.  */

This comment assumes the 32-bit runtime.  "long long" and "long" are
both 64 bits in the 64-bit runtime, so this swap could be done easily
with a 64-bit kernel.

+#if __BYTE_ORDER == __LITTLE_ENDIAN
+#define INVERT_MASK_1 0
+#define INVERT_MASK_2 0
+#elif __BYTE_ORDER == __BIG_ENDIAN
+#define INVERT_MASK_1 24
+#define INVERT_MASK_2 16
+#else
+#error "Endianess missing"
+#endif

As far as I know, GCC supports no working little-endian implementations
on PA-RISC, so the little-endian defines are just additional clutter.
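
That is, for a big-endian-only target the defines could simply be:

  /* PA-RISC is big-endian.  */
  #define INVERT_MASK_1 24
  #define INVERT_MASK_2 16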

+#define MASK_1 0xffu
+#define MASK_2 0xffffu

With the above kernel support, I am hoping the mask defines will
go away.

+#define SUBWORD_SYNC_OP(OP, PFX_OP, INF_OP, TYPE, WIDTH, RETURN)       \
+  TYPE HIDDEN                                                          \
+  NAME##_##RETURN (OP, WIDTH) (TYPE *ptr, TYPE val)                    \
+  {                                                                    \
+    int *wordptr = (int *) ((unsigned int) ptr & ~3);                  \

The cast is wrong for 64-bit runtime.  Should be unsigned long.
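
That is, something like the following (a sketch that also covers the
Dittos below, since unsigned long is pointer-sized in both the 32-bit
and 64-bit runtimes):

  int *wordptr = (int *) ((unsigned long) ptr & ~3UL);
  shift = (((unsigned long) ptr & 3) << 3) ^ INVERT_MASK_##WIDTH;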

+    unsigned int mask, shift, oldval, newval;                          \
+    int failure;                                                       \
+                                                                       \
+    shift = (((unsigned int) ptr & 3) << 3) ^ INVERT_MASK_##WIDTH;     \

Ditto.

+    int *wordptr = (int *)((unsigned int) ptr & ~3), fail;             \

Ditto.

+    shift = (((unsigned int) ptr & 3) << 3) ^ INVERT_MASK_##WIDTH;     \

Ditto.

+    int *wordptr = (int *) ((unsigned int) ptr & ~3);                  \

Ditto.

+    shift = (((unsigned int) ptr & 3) << 3) ^ INVERT_MASK_##WIDTH;     \

Ditto.

I don't much like the fact that the implementations loop forever
when the kernel returns EFAULT or ENOSYS.
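
One way out, assuming the helper really does hand back -EFAULT and
-ENOSYS in lws_errno, would be to trap on the fatal errors instead of
retrying (a sketch, reusing the macro arguments from your patch;
EFAULT and ENOSYS come from <errno.h>):

  do
    {
      tmp = *ptr;
      failure = __kernel_cmpxchg (tmp, PFX_OP (tmp INF_OP val), ptr);
      /* EFAULT and ENOSYS can never succeed on retry, so bail out
         rather than spin forever.  */
      if (__builtin_expect (failure == -EFAULT || failure == -ENOSYS, 0))
        __builtin_trap ();
    }
  while (failure != 0);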

Dave
-- 
J. David Anglin                                  dave.anglin@nrc-cnrc.gc.ca
National Research Council of Canada              (613) 990-0752 (FAX: 952-6602)

