This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.


Re: [PATCH] PowerPC: Implement TARGET_ATOMIC_ASSIGN_EXPAND_FENV


On 03-09-2014 11:01, Maciej W. Rozycki wrote:
> On Tue, 2 Sep 2014, Adhemerval Zanella wrote:
>
>> Ping.
>>
>> On 19-08-2014 13:54, Adhemerval Zanella wrote:
>>> Ping.
>>>
>>> On 06-08-2014 17:21, Adhemerval Zanella wrote:
>>>> On 01-08-2014 12:31, Joseph S. Myers wrote:
>>>>> On Thu, 31 Jul 2014, David Edelsohn wrote:
>>>>>
>>>>>> Thanks for implementing the FENV support.  The patch generally looks 
>>>>>> good to me.
>>>>>>
>>>>>> My one concern is a detail in the implementation of "update".  I do not
>>>>>> have enough experience with GENERIC to verify the details, and it seems
>>>>>> to be missing an outer COMPOUND_EXPR containing update_mffs and the
>>>>>> CALL_EXPR for the update mtfsf.
>>>>> I suppose what's actually odd there is that you have
>>>>>
>>>>> +  tree update_mffs = build2 (MODIFY_EXPR, void_type_node, old_fenv, call_mffs);
>>>>> +
>>>>> +  tree old_llu = build1 (VIEW_CONVERT_EXPR, uint64_type_node, update_mffs);
>>>>>
>>>>> so you build a MODIFY_EXPR in void_type_node but then convert it with a 
>>>>> VIEW_CONVERT_EXPR.  If you'd built the MODIFY_EXPR in double_type_node 
>>>>> then the VIEW_CONVERT_EXPR would be meaningful (the value of an assignment 
>>>>> a = b being the new value of a), but reinterpreting a void value doesn't 
>>>>> make sense.  Or you could probably just use call_mffs directly in the 
>>>>> VIEW_CONVERT_EXPR without explicitly creating the old_fenv variable.
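>>>>>
>>>>> To illustrate the first alternative (a sketch only, reusing the names
>>>>> from the snippet above rather than anything from the actual patch):
>>>>>
>>>>>   /* Build the assignment in double_type_node so its value (the new
>>>>>      value of old_fenv) can meaningfully be reinterpreted as a 64-bit
>>>>>      integer.  */
>>>>>   tree update_mffs = build2 (MODIFY_EXPR, double_type_node, old_fenv, call_mffs);
>>>>>   tree old_llu = build1 (VIEW_CONVERT_EXPR, uint64_type_node, update_mffs);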
>>>>>
>>>> Thanks for the review, Joseph.  I have changed it to avoid the void
>>>> reinterpretation and to use call_mffs directly.  I have also removed the
>>>> mask generation in 'clear', as suggested in your previous message; it now
>>>> reuses the mask used in feholdexcept.  The testcase patch is the same as
>>>> before.
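>>>>
>>>> Roughly, the relevant construction now looks like this (just a sketch;
>>>> the variable names are illustrative rather than copied from the patch):
>>>>
>>>>   /* Reinterpret the double returned by the mffs call as a 64-bit
>>>>      integer directly, with no intermediate void-typed MODIFY_EXPR.  */
>>>>   tree old_llu = build1 (VIEW_CONVERT_EXPR, uint64_type_node, call_mffs);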
>>>>
>>>> Checked on both powerpc64-linux and powerpc64le-linux; no regressions found.
>>>>
>>>> --
>>>>
>>>> 2014-08-06  Adhemerval Zanella  <azanella@linux.vnet.ibm.com>
>>>>
>>>> gcc:
>>>> 	* config/rs6000/rs6000.c (rs6000_atomic_assign_expand_fenv): New
>>>> 	function.
>>>>
>>>> gcc/testsuite:
>>>> 	* gcc.dg/atomic/c11-atomic-exec-5.c
>>>> 	(test_main_long_double_add_overflow): Define and run only for
>>>> 	LDBL_MANT_DIG != 106.
>>>> 	(test_main_complex_long_double_add_overflow): Likewise.
>>>> 	(test_main_long_double_sub_overflow): Likewise.
>>>> 	(test_main_complex_long_double_sub_overflow): Likewise.
>  FWIW I pushed it through regression testing across my usual set of 
> powerpc-linux-gnu multilibs with the results (for c11-atomic-exec-5.c) as 
> follows:
>
> -mcpu=603e						PASS
> -mcpu=603e -msoft-float					UNSUPPORTED
> -mcpu=8540 -mfloat-gprs=single -mspe=yes -mabi=spe	UNSUPPORTED
> -mcpu=8548 -mfloat-gprs=double -mspe=yes -mabi=spe	UNSUPPORTED
> -mcpu=7400 -maltivec -mabi=altivec			PASS
> -mcpu=e6500 -maltivec -mabi=altivec			PASS
> -mcpu=e5500 -m64					PASS
> -mcpu=e6500 -m64 -maltivec -mabi=altivec		PASS

Thanks for testing it; I'll add these permutations to my own testbench.

>
> (The floating-point environment is of course unsupported for soft-float 
> targets, and for the SPE FPU another change is required to implement 
> floating-point environment handling, complementing the one proposed here.)  
> No regressions otherwise.
>
>  While at it, may I propose another change on top of this?
>
>  I've noticed the test case is rather slow; it certainly takes much more 
> time than the average one.  I've seen elapsed times of well over a minute 
> on reasonably fast hardware, and occasionally a timeout midway through even 
> though the test case was otherwise progressing just fine.  I think lock 
> contention or unrelated system activity such as hardware interrupts (think 
> a busy network!) may contribute to this on systems that use LL/SC loops 
> for atomicity.
>
>  So I think the default timeout that's used for really quick tests should 
> be extended a bit.  I propose a factor of 2, so as not to make it too 
> excessive, at least for a start (maybe it will have to be higher 
> eventually).

Do you mind if I incorporate this change into my patchset?

>
>  OK?
>
> 2014-09-03  Maciej W. Rozycki  <macro@codesourcery.com>
>
> 	gcc/testsuite/
> 	* gcc.dg/atomic/c11-atomic-exec-5.c (dg-timeout-factor): New 
> 	setting.
>
>   Maciej
>
> gcc-test-c11-atomic-exec-5-timeout-factor.diff
> Index: gcc-fsf-trunk-quilt/gcc/testsuite/gcc.dg/atomic/c11-atomic-exec-5.c
> ===================================================================
> --- gcc-fsf-trunk-quilt.orig/gcc/testsuite/gcc.dg/atomic/c11-atomic-exec-5.c	2014-09-02 17:34:06.718927043 +0100
> +++ gcc-fsf-trunk-quilt/gcc/testsuite/gcc.dg/atomic/c11-atomic-exec-5.c	2014-09-03 14:51:12.958927233 +0100
> @@ -9,6 +9,7 @@
>  /* { dg-additional-options "-D_XOPEN_SOURCE=600" { target *-*-solaris2.1[0-9]* } } */
>  /* { dg-require-effective-target fenv_exceptions } */
>  /* { dg-require-effective-target pthread } */
> +/* { dg-timeout-factor 2 } */
>
>  #include <fenv.h>
>  #include <float.h>
>

