This is the mail archive of the gcc-patches@gcc.gnu.org mailing list for the GCC project.
Re: [PATCH] Reduce stack usage in sha512 (PR target/77308)
- From: Bernd Edlinger <bernd dot edlinger at hotmail dot de>
- To: Eric Botcazou <ebotcazou at adacore dot com>, Richard Biener <richard dot guenther at gmail dot com>
- Cc: "gcc-patches at gcc dot gnu dot org" <gcc-patches at gcc dot gnu dot org>, Nick Clifton <nickc at redhat dot com>, Richard Earnshaw <richard dot earnshaw at arm dot com>, "Ramana Radhakrishnan" <ramana dot radhakrishnan at arm dot com>
- Date: Fri, 30 Sep 2016 13:34:06 +0000
- Subject: Re: [PATCH] Reduce stack usage in sha512 (PR target/77308)
- References: <AM4PR0701MB21629E281C1C4538834D9806E4C10@AM4PR0701MB2162.eurprd07.prod.outlook.com> <CAFiYyc3hqNF5oZR1PfYboW=EjruEu3+LiWw10246PoRKjhqHVg@mail.gmail.com> <5106795.3uCmH4qeSv@polaris> <AM4PR0701MB21623769225A738C608614F2E4C10@AM4PR0701MB2162.eurprd07.prod.outlook.com>
On 09/30/16 12:14, Bernd Edlinger wrote:
> Eric Botcazou wrote:
>>> A comment before the SETs and a testcase would be nice. IIRC
>>> we do have stack size testcases via -fstack-usage.
>>
>> Or -Wstack-usage, which might be more appropriate here.
>
> Yes, good idea. I was not aware that we already have that kind of test.
>
> When trying to write this test, I noticed that I had not tried -Os so far.
> But for -Os the stack usage is still unchanged at 3500 bytes.
>
> However, for embedded targets I am often inclined to use -Os, and
> would certainly not expect the stack usage to explode...
>
> I see in arm.md there are places like
>
> /* If we're optimizing for size, we prefer the libgcc calls. */
> if (optimize_function_for_size_p (cfun))
> FAIL;
>
Oh, yeah. The comment is completely misleading.
If this pattern FAILs, expmed.c simply expands some
less efficient RTL, which also results in two shifts
and one OR operation. There are no libgcc calls at all.
So in simple cases without spilling, the resulting
assembler is the same regardless of whether this pattern
fails or not. But the half-defined out registers
make a big difference when they have to be spilled.
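To illustrate the word-level decomposition that both paths boil down to
for a constant shift amount 0 < n < 32, here is a minimal C sketch
(illustrative names only, not the actual expander or expmed.c code,
assuming 32-bit unsigned int as on ARM):

/* 64-bit logical right shift by a constant 0 < n < 32, expressed in
   terms of 32-bit operations on the low and high input words.  */
unsigned long long
lshr64_by_small_const (unsigned int in_lo, unsigned int in_hi, unsigned int n)
{
  unsigned int out_lo = (in_lo >> n) | (in_hi << (32 - n)); /* low word */
  unsigned int out_hi = in_hi >> n;                         /* high word */
  return ((unsigned long long) out_hi << 32) | out_lo;
}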
> /* Expand operation using core-registers.
> 'FAIL' would achieve the same thing, but this is a bit smarter. */
> scratch1 = gen_reg_rtx (SImode);
> scratch2 = gen_reg_rtx (SImode);
> arm_emit_coreregs_64bit_shift (LSHIFTRT, operands[0], operands[1],
> operands[2], scratch1, scratch2);
>
>
> ... that explains why this happens. I think it would be better to
> use arm_emit_coreregs_64bit_shift for shift counts >= 32, because these
> are effectively 32-bit shifts.
>
> Will try whether that can be improved, and come back with the
> results.
>
The test case with -Os has 3520 bytes of stack usage.
When only shift counts >= 32 are handled, we
still have 3000 bytes of stack usage.
And when arm_emit_coreregs_64bit_shift is always
allowed to run, we have 2360 bytes of stack usage.
Code size is also better when this pattern does not
FAIL. So I propose to remove this exception in all
three expansions.
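For reference, a 64-bit shift by a constant of 32 or more really is just
a 32-bit shift of the upper input word. A minimal C sketch for a logical
right shift by 32 <= n < 64 (illustrative names only, not the actual
expander code, assuming 32-bit unsigned int):

/* 64-bit logical right shift by a constant 32 <= n < 64: the low result
   word is a 32-bit shift of the high input word, the high result word
   is zero.  */
unsigned long long
lshr64_by_big_const (unsigned int in_hi, unsigned int n)
{
  unsigned int out_lo = in_hi >> (n - 32);
  unsigned int out_hi = 0;
  return ((unsigned long long) out_hi << 32) | out_lo;
}

For an arithmetic right shift the high result word is the sign instead,
i.e. in_hi >> 31, which is what the ASHIFTRT branch in the patch below emits.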
Here is an improved patch with the test case from the PR,
and a comment on the redundant SET explaining why it is better to clear
the out register first.
Bootstrapped and reg-tested on arm-linux-gnueabihf.
Is it OK for trunk?
Thanks
Bernd.
2016-09-29 Bernd Edlinger <bernd.edlinger@hotmail.de>
PR target/77308
* config/arm/arm.c (arm_emit_coreregs_64bit_shift): Clear the result
register explicitly.
* config/arm/arm.md (ashldi3, ashrdi3, lshrdi3): Don't FAIL if
optimizing for size.
testsuite:
2016-09-29 Bernd Edlinger <bernd.edlinger@hotmail.de>
PR target/77308
* gcc.target/arm/pr77308.c: New test.
Index: gcc/config/arm/arm.c
===================================================================
--- gcc/config/arm/arm.c (revision 240645)
+++ gcc/config/arm/arm.c (working copy)
@@ -29226,6 +29226,10 @@ arm_emit_coreregs_64bit_shift (enum rtx_code code,
/* Shifts by a constant less than 32. */
rtx reverse_amount = GEN_INT (32 - INTVAL (amount));
+ /* Clearing the out register in DImode first avoids lots
+ of spilling and results in less stack usage.
+ Later this redundant insn is completely removed. */
+ emit_insn (SET (out, const0_rtx));
emit_insn (SET (out_down, LSHIFT (code, in_down, amount)));
emit_insn (SET (out_down,
ORR (REV_LSHIFT (code, in_up, reverse_amount),
@@ -29237,12 +29241,11 @@ arm_emit_coreregs_64bit_shift (enum rtx_code code,
/* Shifts by a constant greater than 31. */
rtx adj_amount = GEN_INT (INTVAL (amount) - 32);
+ emit_insn (SET (out, const0_rtx));
emit_insn (SET (out_down, SHIFT (code, in_up, adj_amount)));
if (code == ASHIFTRT)
emit_insn (gen_ashrsi3 (out_up, in_up,
GEN_INT (31)));
- else
- emit_insn (SET (out_up, const0_rtx));
}
}
else
Index: gcc/config/arm/arm.md
===================================================================
--- gcc/config/arm/arm.md (revision 240645)
+++ gcc/config/arm/arm.md (working copy)
@@ -4016,10 +4016,6 @@
cheaper to have the alternate code being generated than moving
values to iwmmxt regs and back. */
- /* If we're optimizing for size, we prefer the libgcc calls. */
- if (optimize_function_for_size_p (cfun))
- FAIL;
-
/* Expand operation using core-registers.
'FAIL' would achieve the same thing, but this is a bit smarter. */
scratch1 = gen_reg_rtx (SImode);
@@ -4089,10 +4085,6 @@
cheaper to have the alternate code being generated than moving
values to iwmmxt regs and back. */
- /* If we're optimizing for size, we prefer the libgcc calls. */
- if (optimize_function_for_size_p (cfun))
- FAIL;
-
/* Expand operation using core-registers.
'FAIL' would achieve the same thing, but this is a bit smarter. */
scratch1 = gen_reg_rtx (SImode);
@@ -4159,10 +4151,6 @@
cheaper to have the alternate code being generated than moving
values to iwmmxt regs and back. */
- /* If we're optimizing for size, we prefer the libgcc calls. */
- if (optimize_function_for_size_p (cfun))
- FAIL;
-
/* Expand operation using core-registers.
'FAIL' would achieve the same thing, but this is a bit smarter. */
scratch1 = gen_reg_rtx (SImode);
Index: gcc/testsuite/gcc.target/arm/pr77308.c
===================================================================
--- gcc/testsuite/gcc.target/arm/pr77308.c (revision 0)
+++ gcc/testsuite/gcc.target/arm/pr77308.c (working copy)
@@ -0,0 +1,164 @@
+/* { dg-do compile } */
+/* { dg-options "-Os -Wstack-usage=2500" } */
+
+#define SHA_LONG64 unsigned long long
+#define U64(C) C##ULL
+
+#define SHA_LBLOCK 16
+#define SHA512_CBLOCK (SHA_LBLOCK*8)
+
+typedef struct SHA512state_st {
+ SHA_LONG64 h[8];
+ SHA_LONG64 Nl, Nh;
+ union {
+ SHA_LONG64 d[SHA_LBLOCK];
+ unsigned char p[SHA512_CBLOCK];
+ } u;
+ unsigned int num, md_len;
+} SHA512_CTX;
+
+static const SHA_LONG64 K512[80] = {
+ U64(0x428a2f98d728ae22), U64(0x7137449123ef65cd),
+ U64(0xb5c0fbcfec4d3b2f), U64(0xe9b5dba58189dbbc),
+ U64(0x3956c25bf348b538), U64(0x59f111f1b605d019),
+ U64(0x923f82a4af194f9b), U64(0xab1c5ed5da6d8118),
+ U64(0xd807aa98a3030242), U64(0x12835b0145706fbe),
+ U64(0x243185be4ee4b28c), U64(0x550c7dc3d5ffb4e2),
+ U64(0x72be5d74f27b896f), U64(0x80deb1fe3b1696b1),
+ U64(0x9bdc06a725c71235), U64(0xc19bf174cf692694),
+ U64(0xe49b69c19ef14ad2), U64(0xefbe4786384f25e3),
+ U64(0x0fc19dc68b8cd5b5), U64(0x240ca1cc77ac9c65),
+ U64(0x2de92c6f592b0275), U64(0x4a7484aa6ea6e483),
+ U64(0x5cb0a9dcbd41fbd4), U64(0x76f988da831153b5),
+ U64(0x983e5152ee66dfab), U64(0xa831c66d2db43210),
+ U64(0xb00327c898fb213f), U64(0xbf597fc7beef0ee4),
+ U64(0xc6e00bf33da88fc2), U64(0xd5a79147930aa725),
+ U64(0x06ca6351e003826f), U64(0x142929670a0e6e70),
+ U64(0x27b70a8546d22ffc), U64(0x2e1b21385c26c926),
+ U64(0x4d2c6dfc5ac42aed), U64(0x53380d139d95b3df),
+ U64(0x650a73548baf63de), U64(0x766a0abb3c77b2a8),
+ U64(0x81c2c92e47edaee6), U64(0x92722c851482353b),
+ U64(0xa2bfe8a14cf10364), U64(0xa81a664bbc423001),
+ U64(0xc24b8b70d0f89791), U64(0xc76c51a30654be30),
+ U64(0xd192e819d6ef5218), U64(0xd69906245565a910),
+ U64(0xf40e35855771202a), U64(0x106aa07032bbd1b8),
+ U64(0x19a4c116b8d2d0c8), U64(0x1e376c085141ab53),
+ U64(0x2748774cdf8eeb99), U64(0x34b0bcb5e19b48a8),
+ U64(0x391c0cb3c5c95a63), U64(0x4ed8aa4ae3418acb),
+ U64(0x5b9cca4f7763e373), U64(0x682e6ff3d6b2b8a3),
+ U64(0x748f82ee5defb2fc), U64(0x78a5636f43172f60),
+ U64(0x84c87814a1f0ab72), U64(0x8cc702081a6439ec),
+ U64(0x90befffa23631e28), U64(0xa4506cebde82bde9),
+ U64(0xbef9a3f7b2c67915), U64(0xc67178f2e372532b),
+ U64(0xca273eceea26619c), U64(0xd186b8c721c0c207),
+ U64(0xeada7dd6cde0eb1e), U64(0xf57d4f7fee6ed178),
+ U64(0x06f067aa72176fba), U64(0x0a637dc5a2c898a6),
+ U64(0x113f9804bef90dae), U64(0x1b710b35131c471b),
+ U64(0x28db77f523047d84), U64(0x32caab7b40c72493),
+ U64(0x3c9ebe0a15c9bebc), U64(0x431d67c49c100d4c),
+ U64(0x4cc5d4becb3e42b6), U64(0x597f299cfc657e2a),
+ U64(0x5fcb6fab3ad6faec), U64(0x6c44198c4a475817)
+};
+
+#define B(x,j) (((SHA_LONG64)(*(((const unsigned char *)(&x))+j)))<<((7-j)*8))
+#define PULL64(x) (B(x,0)|B(x,1)|B(x,2)|B(x,3)|B(x,4)|B(x,5)|B(x,6)|B(x,7))
+#define ROTR(x,s) (((x)>>s) | (x)<<(64-s))
+#define Sigma0(x) (ROTR((x),28) ^ ROTR((x),34) ^ ROTR((x),39))
+#define Sigma1(x) (ROTR((x),14) ^ ROTR((x),18) ^ ROTR((x),41))
+#define sigma0(x) (ROTR((x),1) ^ ROTR((x),8) ^ ((x)>>7))
+#define sigma1(x) (ROTR((x),19) ^ ROTR((x),61) ^ ((x)>>6))
+#define Ch(x,y,z) (((x) & (y)) ^ ((~(x)) & (z)))
+#define Maj(x,y,z) (((x) & (y)) ^ ((x) & (z)) ^ ((y) & (z)))
+
+#define ROUND_00_15(i,a,b,c,d,e,f,g,h) do { \
+ T1 += h + Sigma1(e) + Ch(e,f,g) + K512[i]; \
+ h = Sigma0(a) + Maj(a,b,c); \
+ d += T1; h += T1; } while (0)
+#define ROUND_16_80(i,j,a,b,c,d,e,f,g,h,X) do { \
+ s0 = X[(j+1)&0x0f]; s0 = sigma0(s0); \
+ s1 = X[(j+14)&0x0f]; s1 = sigma1(s1); \
+ T1 = X[(j)&0x0f] += s0 + s1 + X[(j+9)&0x0f]; \
+ ROUND_00_15(i+j,a,b,c,d,e,f,g,h); } while (0)
+void sha512_block_data_order(SHA512_CTX *ctx, const void *in,
+ unsigned int num)
+{
+ const SHA_LONG64 *W = in;
+ SHA_LONG64 a, b, c, d, e, f, g, h, s0, s1, T1;
+ SHA_LONG64 X[16];
+ int i;
+
+ while (num--) {
+
+ a = ctx->h[0];
+ b = ctx->h[1];
+ c = ctx->h[2];
+ d = ctx->h[3];
+ e = ctx->h[4];
+ f = ctx->h[5];
+ g = ctx->h[6];
+ h = ctx->h[7];
+
+ T1 = X[0] = PULL64(W[0]);
+ ROUND_00_15(0, a, b, c, d, e, f, g, h);
+ T1 = X[1] = PULL64(W[1]);
+ ROUND_00_15(1, h, a, b, c, d, e, f, g);
+ T1 = X[2] = PULL64(W[2]);
+ ROUND_00_15(2, g, h, a, b, c, d, e, f);
+ T1 = X[3] = PULL64(W[3]);
+ ROUND_00_15(3, f, g, h, a, b, c, d, e);
+ T1 = X[4] = PULL64(W[4]);
+ ROUND_00_15(4, e, f, g, h, a, b, c, d);
+ T1 = X[5] = PULL64(W[5]);
+ ROUND_00_15(5, d, e, f, g, h, a, b, c);
+ T1 = X[6] = PULL64(W[6]);
+ ROUND_00_15(6, c, d, e, f, g, h, a, b);
+ T1 = X[7] = PULL64(W[7]);
+ ROUND_00_15(7, b, c, d, e, f, g, h, a);
+ T1 = X[8] = PULL64(W[8]);
+ ROUND_00_15(8, a, b, c, d, e, f, g, h);
+ T1 = X[9] = PULL64(W[9]);
+ ROUND_00_15(9, h, a, b, c, d, e, f, g);
+ T1 = X[10] = PULL64(W[10]);
+ ROUND_00_15(10, g, h, a, b, c, d, e, f);
+ T1 = X[11] = PULL64(W[11]);
+ ROUND_00_15(11, f, g, h, a, b, c, d, e);
+ T1 = X[12] = PULL64(W[12]);
+ ROUND_00_15(12, e, f, g, h, a, b, c, d);
+ T1 = X[13] = PULL64(W[13]);
+ ROUND_00_15(13, d, e, f, g, h, a, b, c);
+ T1 = X[14] = PULL64(W[14]);
+ ROUND_00_15(14, c, d, e, f, g, h, a, b);
+ T1 = X[15] = PULL64(W[15]);
+ ROUND_00_15(15, b, c, d, e, f, g, h, a);
+
+ for (i = 16; i < 80; i += 16) {
+ ROUND_16_80(i, 0, a, b, c, d, e, f, g, h, X);
+ ROUND_16_80(i, 1, h, a, b, c, d, e, f, g, X);
+ ROUND_16_80(i, 2, g, h, a, b, c, d, e, f, X);
+ ROUND_16_80(i, 3, f, g, h, a, b, c, d, e, X);
+ ROUND_16_80(i, 4, e, f, g, h, a, b, c, d, X);
+ ROUND_16_80(i, 5, d, e, f, g, h, a, b, c, X);
+ ROUND_16_80(i, 6, c, d, e, f, g, h, a, b, X);
+ ROUND_16_80(i, 7, b, c, d, e, f, g, h, a, X);
+ ROUND_16_80(i, 8, a, b, c, d, e, f, g, h, X);
+ ROUND_16_80(i, 9, h, a, b, c, d, e, f, g, X);
+ ROUND_16_80(i, 10, g, h, a, b, c, d, e, f, X);
+ ROUND_16_80(i, 11, f, g, h, a, b, c, d, e, X);
+ ROUND_16_80(i, 12, e, f, g, h, a, b, c, d, X);
+ ROUND_16_80(i, 13, d, e, f, g, h, a, b, c, X);
+ ROUND_16_80(i, 14, c, d, e, f, g, h, a, b, X);
+ ROUND_16_80(i, 15, b, c, d, e, f, g, h, a, X);
+ }
+
+ ctx->h[0] += a;
+ ctx->h[1] += b;
+ ctx->h[2] += c;
+ ctx->h[3] += d;
+ ctx->h[4] += e;
+ ctx->h[5] += f;
+ ctx->h[6] += g;
+ ctx->h[7] += h;
+
+ W += SHA_LBLOCK;
+ }
+}