


Re: [RFA][PATCH][PR middle-end/61118] Improve tree CFG accuracy for setjmp/longjmp


On Wed, Feb 28, 2018 at 1:16 AM, Jeff Law <law@redhat.com> wrote:
> Richi, you worked on 57147 which touches on the issues here.  Your
> thoughts would be greatly appreciated.
>
>
> So 61118 is one of several bugs related to the clobbered-by-longjmp warning.
>
> In 61118 we are unable to coalesce all the objects in the key
> partitions.  To remove the relevant PHIs we have to create two
> assignments to the key pseudos.
>
> Pseudos with more than one assignment are subject to the
> clobbered-by-longjmp analysis:
>
> /* True if register REGNO was alive at a place where `setjmp' was
>    called and was set more than once or is an argument.  Such regs may
>    be clobbered by `longjmp'.  */
>
> static bool
> regno_clobbered_at_setjmp (bitmap setjmp_crosses, int regno)
> {
>   /* There appear to be cases where some local vars never reach the
>      backend but have bogus regnos.  */
>   if (regno >= max_reg_num ())
>     return false;
>
>   return ((REG_N_SETS (regno) > 1
>            || REGNO_REG_SET_P (df_get_live_out (ENTRY_BLOCK_PTR_FOR_FN (cfun)),
>                                regno))
>           && REGNO_REG_SET_P (setjmp_crosses, regno));
> }
>
>
> The fact that no single path sets the pseudo more than once is not
> considered.  If there is more than one static set of the pseudo, then
> it is considered a candidate for the warning.
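>
> To illustrate (a hypothetical sketch, not the testcase from the PR): a
> variable assigned on two mutually exclusive paths has two static sets
> even though no execution assigns it twice; once such a variable is
> live across a setjmp, this analysis can flag it:
>
>   #include <setjmp.h>
>
>   extern jmp_buf env;
>   extern int cond (void);
>   extern void may_longjmp (void);   /* may call longjmp (env, 1) */
>
>   int
>   f (void)
>   {
>     int v;
>     if (cond ())
>       v = 1;
>     else
>       v = 2;
>     if (setjmp (env) == 0)
>       may_longjmp ();
>     return v;   /* -Wclobbered may report that "v" might be clobbered.  */
>   }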
>
> --
>
>
> I looked at the propagations which led to the inability to coalesce.
> They all seemed valid to me.  We have always allowed copy propagation to
> replace one pseudo with another as long as neither has
> SSA_NAME_USED_IN_ABNORMAL_PHI set.
>
> We have a PHI like
>
> x1(ab) = (x0, x3 (ab))
>
> x0 is not marked as abnormal because the edge isn't abnormal and thus we
> can propagate into the x0 argument of the PHI.  This is consistent with
> behavior since, well, forever.   We propagate a value for x0 resulting
> in something like
>
> x1(ab) = (y0, x3 (ab))
>
>
> Where y0 is still live across the PHI.  Thus the partition for x1/x3,
> etc. conflicts with the partition for y0 and they cannot be coalesced.
> This leads to the multiple assignments to the pseudo for the x1/x3
> partition.  I briefly looked at marking all the PHI arguments as
> abnormal when the destination is abnormal, but it just doesn't seem
> right.
>
> Anyway, I'd already been looking at 21161 and was aware that the CFGs
> we're building in the presence of setjmp/longjmp were slightly
> inaccurate.
>
> In particular, a longjmp returns to the point immediately after the
> setjmp, not to the setjmp itself.  But our CFG building has the edge
> from the abnormal dispatcher going to the block containing the setjmp call.
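>
> At the source level (a hypothetical example, not from the PR), the
> second "return" of setjmp resumes execution immediately after the call:
>
>   #include <setjmp.h>
>
>   extern jmp_buf env;
>   extern void do_work (void);   /* may call longjmp (env, 1) */
>   extern void recover (void);
>
>   void
>   g (void)
>   {
>     if (setjmp (env) == 0)
>       do_work ();   /* Direct return from setjmp: value 0.  */
>     else
>       recover ();   /* The longjmp makes setjmp "return" again with a
>                        nonzero value, i.e. control resumes at the point
>                        immediately after the setjmp call.  */
>   }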

Yeah...  for SJLJ EH we get this right via __builtin_setjmp_receiver.

> This creates unnecessary irreducible loops.  It turns out that if we fix
> the tree CFG, then lifetimes become more accurate (and more
> constrained).  The more constrained, more accurate lifetime information
> is enough to allow things to coalesce the way we want and everything for
> 61118 just works.

Sounds good.

> It's actually pretty easy to fix the CFG.  We just need to recognize
> that a "returns twice" function returns not to the call, but to the
> point immediately after the call.  So if we have a call to a
> returns-twice function that ends a block with a single successor, when
> we wire up the abnormal dispatcher, we target the single successor
> rather than the block containing the returns-twice call.

Hmm, I think you need to check whether the successor has a single
predecessor, not whether we have a single successor (we always have
that unless setjmp also throws).  If you check that instead, you keep
the CFG "incorrect" when there are multiple predecessors, so I think in
addition to properly creating the edges you have to work on the
BB-building part to ensure that there's a single-predecessor block after
returns-twice function calls.  Note that currently we force returns-twice
calls to be the first (and only) stmt of a block -- your fix would relax
this; returns-twice no longer needs to start a new BB.

-               handle_abnormal_edges (dispatcher_bbs, bb, bb_to_omp_idx,
-                                      &ab_edge_call, false);
+               {
+                 bool target_after_setjmp = false;
+
+                 /* If the returns twice statement looks like a setjmp
+                    call at the end of a block with a single successor
+                    then we want the edge from the dispatcher to target
+                    that single successor.  That more accurately reflects
+                    actual control flow.  The more accurate CFG also
+                    results in fewer false positive warnings.  */
+                 if (gsi_stmt (gsi_last_nondebug_bb (bb)) == call_stmt
+                     && gimple_call_fndecl (call_stmt)
+                     && setjmp_call_p (gimple_call_fndecl (call_stmt))
+                     && single_succ_p (bb))
+                   target_after_setjmp = true;
+                 handle_abnormal_edges (dispatcher_bbs, bb, bb_to_omp_idx,
+                                        &ab_edge_call, false,
+                                        target_after_setjmp);
+               }

I don't exactly get the hoops you jump through here -- I think it's
better to split the returns-twice (always the last stmt of a block after
the fixing) and the setjmp-receiver (always the first stmt of a block)
cases.  So, remove the handling of returns-twice from the above case and
handle returns-twice via

  gimple *last = last_stmt (bb);
  if (last && ...)

Also handle all returns-twice calls this way, not only those matching
setjmp_call_p.
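
Concretely, that check might look something like this (a rough, untested
sketch; the condition intentionally left open above is filled in only
for illustration):

  gimple *last = last_stmt (bb);
  if (last
      && is_gimple_call (last)
      && (gimple_call_flags (last) & ECF_RETURNS_TWICE)
      && single_succ_p (bb)
      && single_pred_p (single_succ (bb)))
    {
      /* Wire the abnormal dispatcher edge to single_succ (bb), i.e. the
         block control falls through to after the returns-twice call.  */
    }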

>
> This compromises the test gcc.dg/torture/pr57147-2.c
>
>
> Prior to this change the CFG looks like
>
>      2
>     / \
>    3<->4
>    |
>    R
>
> Where block #3 contains the setjmp.  The edges 2->4, 3->4 and 4->3 are
> abnormals.  Block #4 is the abnormal dispatcher.
>
> Eventually we remove the 2->3 edge because the last statement in
> block #2 is a call to a non-returning function.  But we leave the
> abnormal edge 2->4 (on purpose), resulting in:
>
>
>      2
>      |
>   +->4
>   |  |
>   +--3
>      |
>      R
>
> The test then proceeds to verify there is a call to setjmp in the
> resulting .optimized dump -- which there is because block #3 remains
> reachable.
>
>
> With this change the CFG looks like:
>
>
>
>      2
>     / \
>    3-->4
>    |  /
>    | /
>    |/
>    R
>
>
> Where the edges 2->4 and 3->4 and 4->R are abnormals.  Block #4 is still
> the dispatcher and the setjmp is still in block #3.
>
> We realize block #2 ends with a call to a noreturn function and again we
> remove the 2->3 edge.  That makes block #3 unreachable and it gets
> removed, resulting in:
>
>     2
>     |
>     4
>     |
>     R
>
> Where 2->4 and 4->R are still abnormal edges.  With bb3 becoming
> unreachable, the setjmp is unreachable and gets removed, thus breaking
> the scan part of the test.
>
>
>
>
> If we review the source of the test:
>
>
> struct __jmp_buf_tag {};
> typedef struct __jmp_buf_tag jmp_buf[1];
> extern int _setjmp (struct __jmp_buf_tag __env[1]);
>
> jmp_buf g_return_jmp_buf;
>
> void SetNaClSwitchExpectations (void)
> {
>   __builtin_longjmp (g_return_jmp_buf, 1);
> }
> void TestSyscall(void)
> {
>   SetNaClSwitchExpectations();
>   _setjmp (g_return_jmp_buf);
> }
>
>
> We can easily see that the call to _setjmp can never be reached given
> that we consider the longjmp call as non-returning.  So AFAICT
> everything is as expected.  I think the right thing is to just
> remove this compromised test.

I agree.  Bonus points if you look at PR57147 and see if the testcase
was misreduced (maybe it was just for an ICE so we can keep it
and just remove the dump scanning?)

Richard.

> --
>
>
>
> The regression test from pr61118 disables -ftracer, as -ftracer creates
> an additional assignment to the key objects which gets carried through
> into RTL, thus triggering the problem all over again.  My RTL fixes for
> 21161 do not fix this.  So if the patch is accepted I propose we keep
> 61118 open, but without the gcc-8 regression marker.  It's still a
> deficiency that -ftracer can trigger a bogus clobbered-by-longjmp
> warning.
>
> This has been bootstrapped and regression tested on x86_64.
>
> Thoughts?  OK for the trunk?
>
> Jeff
>
>         PR middle-end/61118
>         * tree-cfg.c (handle_abnormal_edges): Accept new argument.
>         (make_edges): Callers of handle_abnormal_edges changed.
>
>         * gcc.dg/torture/pr61118.c: New test.
>         * gcc.dg/torture/pr57147.c: Remove compromised test.
>
>
> diff --git a/gcc/tree-cfg.c b/gcc/tree-cfg.c
> index b87e48d..551195a 100644
> --- a/gcc/tree-cfg.c
> +++ b/gcc/tree-cfg.c
> @@ -35,6 +35,7 @@ along with GCC; see the file COPYING3.  If not see
>  #include "fold-const.h"
>  #include "trans-mem.h"
>  #include "stor-layout.h"
> +#include "calls.h"
>  #include "print-tree.h"
>  #include "cfganal.h"
>  #include "gimple-fold.h"
> @@ -776,13 +777,22 @@ get_abnormal_succ_dispatcher (basic_block bb)
>  static void
>  handle_abnormal_edges (basic_block *dispatcher_bbs,
>                        basic_block for_bb, int *bb_to_omp_idx,
> -                      auto_vec<basic_block> *bbs, bool computed_goto)
> +                      auto_vec<basic_block> *bbs, bool computed_goto,
> +                      bool target_after_setjmp)
>  {
>    basic_block *dispatcher = dispatcher_bbs + (computed_goto ? 1 : 0);
>    unsigned int idx = 0;
> -  basic_block bb;
> +  basic_block bb, target_bb;
>    bool inner = false;
>
> +  /* Determine the block the abnormal dispatcher will transfer
> +     control to.  It may be FOR_BB, or in some cases it may be the
> +     single successor of FOR_BB.  */
> +  if (target_after_setjmp)
> +    target_bb = single_succ (for_bb);
> +  else
> +    target_bb = for_bb;
> +
>    if (bb_to_omp_idx)
>      {
>        dispatcher = dispatcher_bbs + 2 * bb_to_omp_idx[for_bb->index];
> @@ -878,7 +888,7 @@ handle_abnormal_edges (basic_block *dispatcher_bbs,
>         }
>      }
>
> -  make_edge (*dispatcher, for_bb, EDGE_ABNORMAL);
> +  make_edge (*dispatcher, target_bb, EDGE_ABNORMAL);
>  }
>
>  /* Creates outgoing edges for BB.  Returns 1 when it ends with an
> @@ -1075,11 +1085,11 @@ make_edges (void)
>                  potential target for a computed goto or a non-local goto.  */
>               if (FORCED_LABEL (target))
>                 handle_abnormal_edges (dispatcher_bbs, bb, bb_to_omp_idx,
> -                                      &ab_edge_goto, true);
> +                                      &ab_edge_goto, true, false);
>               if (DECL_NONLOCAL (target))
>                 {
>                   handle_abnormal_edges (dispatcher_bbs, bb, bb_to_omp_idx,
> -                                        &ab_edge_call, false);
> +                                        &ab_edge_call, false, false);
>                   break;
>                 }
>             }
> @@ -1094,8 +1104,24 @@ make_edges (void)
>                   && ((gimple_call_flags (call_stmt) & ECF_RETURNS_TWICE)
>                       || gimple_call_builtin_p (call_stmt,
>                                                 BUILT_IN_SETJMP_RECEIVER)))
> -               handle_abnormal_edges (dispatcher_bbs, bb, bb_to_omp_idx,
> -                                      &ab_edge_call, false);
> +               {
> +                 bool target_after_setjmp = false;
> +
> +                 /* If the returns twice statement looks like a setjmp
> +                    call at the end of a block with a single successor
> +                    then we want the edge from the dispatcher to target
> +                    that single successor.  That more accurately reflects
> +                    actual control flow.  The more accurate CFG also
> +                    results in fewer false positive warnings.  */
> +                 if (gsi_stmt (gsi_last_nondebug_bb (bb)) == call_stmt
> +                     && gimple_call_fndecl (call_stmt)
> +                     && setjmp_call_p (gimple_call_fndecl (call_stmt))
> +                     && single_succ_p (bb))
> +                   target_after_setjmp = true;
> +                 handle_abnormal_edges (dispatcher_bbs, bb, bb_to_omp_idx,
> +                                        &ab_edge_call, false,
> +                                        target_after_setjmp);
> +               }
>             }
>         }
>
> diff --git a/gcc/testsuite/gcc.dg/torture/pr57147-2.c b/gcc/testsuite/gcc.dg/torture/pr57147-2.c
> deleted file mode 100644
> index fc5fb39..0000000
> --- a/gcc/testsuite/gcc.dg/torture/pr57147-2.c
> +++ /dev/null
> @@ -1,22 +0,0 @@
> -/* { dg-do compile } */
> -/* { dg-options "-fdump-tree-optimized" } */
> -/* { dg-skip-if "" { *-*-* } { "-fno-fat-lto-objects" } { "" } } */
> -/* { dg-require-effective-target indirect_jumps } */
> -
> -struct __jmp_buf_tag {};
> -typedef struct __jmp_buf_tag jmp_buf[1];
> -extern int _setjmp (struct __jmp_buf_tag __env[1]);
> -
> -jmp_buf g_return_jmp_buf;
> -
> -void SetNaClSwitchExpectations (void)
> -{
> -  __builtin_longjmp (g_return_jmp_buf, 1);
> -}
> -void TestSyscall(void)
> -{
> -  SetNaClSwitchExpectations();
> -  _setjmp (g_return_jmp_buf);
> -}
> -
> -/* { dg-final { scan-tree-dump "setjmp" "optimized" } } */
> diff --git a/gcc/testsuite/gcc.dg/torture/pr61118.c b/gcc/testsuite/gcc.dg/torture/pr61118.c
> new file mode 100644
> index 0000000..12be892
> --- /dev/null
> +++ b/gcc/testsuite/gcc.dg/torture/pr61118.c
> @@ -0,0 +1,652 @@
> +/* { dg-options "-Wextra -fno-tracer" } */
> +typedef unsigned char __u_char;
> +typedef unsigned short int __u_short;
> +typedef unsigned int __u_int;
> +typedef unsigned long int __u_long;
> +typedef signed char __int8_t;
> +typedef unsigned char __uint8_t;
> +typedef signed short int __int16_t;
> +typedef unsigned short int __uint16_t;
> +typedef signed int __int32_t;
> +typedef unsigned int __uint32_t;
> +typedef signed long int __int64_t;
> +typedef unsigned long int __uint64_t;
> +typedef long int __quad_t;
> +typedef unsigned long int __u_quad_t;
> +typedef unsigned long int __dev_t;
> +typedef unsigned int __uid_t;
> +typedef unsigned int __gid_t;
> +typedef unsigned long int __ino_t;
> +typedef unsigned long int __ino64_t;
> +typedef unsigned int __mode_t;
> +typedef unsigned long int __nlink_t;
> +typedef long int __off_t;
> +typedef long int __off64_t;
> +typedef int __pid_t;
> +typedef struct { int __val[2]; } __fsid_t;
> +typedef long int __clock_t;
> +typedef unsigned long int __rlim_t;
> +typedef unsigned long int __rlim64_t;
> +typedef unsigned int __id_t;
> +typedef long int __time_t;
> +typedef unsigned int __useconds_t;
> +typedef long int __suseconds_t;
> +typedef int __daddr_t;
> +typedef int __key_t;
> +typedef int __clockid_t;
> +typedef void * __timer_t;
> +typedef long int __blksize_t;
> +typedef long int __blkcnt_t;
> +typedef long int __blkcnt64_t;
> +typedef unsigned long int __fsblkcnt_t;
> +typedef unsigned long int __fsblkcnt64_t;
> +typedef unsigned long int __fsfilcnt_t;
> +typedef unsigned long int __fsfilcnt64_t;
> +typedef long int __fsword_t;
> +typedef long int __ssize_t;
> +typedef long int __syscall_slong_t;
> +typedef unsigned long int __syscall_ulong_t;
> +typedef __off64_t __loff_t;
> +typedef __quad_t *__qaddr_t;
> +typedef char *__caddr_t;
> +typedef long int __intptr_t;
> +typedef unsigned int __socklen_t;
> +static __inline unsigned int
> +__bswap_32 (unsigned int __bsx)
> +{
> +  return __builtin_bswap32 (__bsx);
> +}
> +static __inline __uint64_t
> +__bswap_64 (__uint64_t __bsx)
> +{
> +  return __builtin_bswap64 (__bsx);
> +}
> +typedef long unsigned int size_t;
> +typedef __time_t time_t;
> +struct timespec
> +  {
> +    __time_t tv_sec;
> +    __syscall_slong_t tv_nsec;
> +  };
> +typedef __pid_t pid_t;
> +struct sched_param
> +  {
> +    int __sched_priority;
> +  };
> +struct __sched_param
> +  {
> +    int __sched_priority;
> +  };
> +typedef unsigned long int __cpu_mask;
> +typedef struct
> +{
> +  __cpu_mask __bits[1024 / (8 * sizeof (__cpu_mask))];
> +} cpu_set_t;
> +extern int __sched_cpucount (size_t __setsize, const cpu_set_t *__setp)
> +  __attribute__ ((__nothrow__ , __leaf__));
> +extern cpu_set_t *__sched_cpualloc (size_t __count) __attribute__ ((__nothrow__ , __leaf__)) ;
> +extern void __sched_cpufree (cpu_set_t *__set) __attribute__ ((__nothrow__ , __leaf__));
> +extern int sched_setparam (__pid_t __pid, const struct sched_param *__param)
> +     __attribute__ ((__nothrow__ , __leaf__));
> +extern int sched_getparam (__pid_t __pid, struct sched_param *__param) __attribute__ ((__nothrow__ , __leaf__));
> +extern int sched_setscheduler (__pid_t __pid, int __policy,
> +          const struct sched_param *__param) __attribute__ ((__nothrow__ , __leaf__));
> +extern int sched_getscheduler (__pid_t __pid) __attribute__ ((__nothrow__ , __leaf__));
> +extern int sched_yield (void) __attribute__ ((__nothrow__ , __leaf__));
> +extern int sched_get_priority_max (int __algorithm) __attribute__ ((__nothrow__ , __leaf__));
> +extern int sched_get_priority_min (int __algorithm) __attribute__ ((__nothrow__ , __leaf__));
> +extern int sched_rr_get_interval (__pid_t __pid, struct timespec *__t) __attribute__ ((__nothrow__ , __leaf__));
> +typedef __clock_t clock_t;
> +typedef __clockid_t clockid_t;
> +typedef __timer_t timer_t;
> +struct tm
> +{
> +  int tm_sec;
> +  int tm_min;
> +  int tm_hour;
> +  int tm_mday;
> +  int tm_mon;
> +  int tm_year;
> +  int tm_wday;
> +  int tm_yday;
> +  int tm_isdst;
> +  long int tm_gmtoff;
> +  const char *tm_zone;
> +};
> +struct itimerspec
> +  {
> +    struct timespec it_interval;
> +    struct timespec it_value;
> +  };
> +struct sigevent;
> +extern clock_t clock (void) __attribute__ ((__nothrow__ , __leaf__));
> +extern time_t time (time_t *__timer) __attribute__ ((__nothrow__ , __leaf__));
> +extern double difftime (time_t __time1, time_t __time0)
> +     __attribute__ ((__nothrow__ , __leaf__)) __attribute__ ((__const__));
> +extern time_t mktime (struct tm *__tp) __attribute__ ((__nothrow__ , __leaf__));
> +extern size_t strftime (char *__restrict __s, size_t __maxsize,
> +   const char *__restrict __format,
> +   const struct tm *__restrict __tp) __attribute__ ((__nothrow__ , __leaf__));
> +typedef struct __locale_struct
> +{
> +  struct __locale_data *__locales[13];
> +  const unsigned short int *__ctype_b;
> +  const int *__ctype_tolower;
> +  const int *__ctype_toupper;
> +  const char *__names[13];
> +} *__locale_t;
> +typedef __locale_t locale_t;
> +extern size_t strftime_l (char *__restrict __s, size_t __maxsize,
> +     const char *__restrict __format,
> +     const struct tm *__restrict __tp,
> +     __locale_t __loc) __attribute__ ((__nothrow__ , __leaf__));
> +extern struct tm *gmtime (const time_t *__timer) __attribute__ ((__nothrow__ , __leaf__));
> +extern struct tm *localtime (const time_t *__timer) __attribute__ ((__nothrow__ , __leaf__));
> +extern struct tm *gmtime_r (const time_t *__restrict __timer,
> +       struct tm *__restrict __tp) __attribute__ ((__nothrow__ , __leaf__));
> +extern struct tm *localtime_r (const time_t *__restrict __timer,
> +          struct tm *__restrict __tp) __attribute__ ((__nothrow__ , __leaf__));
> +extern char *asctime (const struct tm *__tp) __attribute__ ((__nothrow__ , __leaf__));
> +extern char *ctime (const time_t *__timer) __attribute__ ((__nothrow__ , __leaf__));
> +extern char *asctime_r (const struct tm *__restrict __tp,
> +   char *__restrict __buf) __attribute__ ((__nothrow__ , __leaf__));
> +extern char *ctime_r (const time_t *__restrict __timer,
> +        char *__restrict __buf) __attribute__ ((__nothrow__ , __leaf__));
> +extern char *__tzname[2];
> +extern int __daylight;
> +extern long int __timezone;
> +extern char *tzname[2];
> +extern void tzset (void) __attribute__ ((__nothrow__ , __leaf__));
> +extern int daylight;
> +extern long int timezone;
> +extern int stime (const time_t *__when) __attribute__ ((__nothrow__ , __leaf__));
> +extern time_t timegm (struct tm *__tp) __attribute__ ((__nothrow__ , __leaf__));
> +extern time_t timelocal (struct tm *__tp) __attribute__ ((__nothrow__ , __leaf__));
> +extern int dysize (int __year) __attribute__ ((__nothrow__ , __leaf__)) __attribute__ ((__const__));
> +extern int nanosleep (const struct timespec *__requested_time,
> +        struct timespec *__remaining);
> +extern int clock_getres (clockid_t __clock_id, struct timespec *__res) __attribute__ ((__nothrow__ , __leaf__));
> +extern int clock_gettime (clockid_t __clock_id, struct timespec *__tp) __attribute__ ((__nothrow__ , __leaf__));
> +extern int clock_settime (clockid_t __clock_id, const struct timespec *__tp)
> +     __attribute__ ((__nothrow__ , __leaf__));
> +extern int clock_nanosleep (clockid_t __clock_id, int __flags,
> +       const struct timespec *__req,
> +       struct timespec *__rem);
> +extern int clock_getcpuclockid (pid_t __pid, clockid_t *__clock_id) __attribute__ ((__nothrow__ , __leaf__));
> +extern int timer_create (clockid_t __clock_id,
> +    struct sigevent *__restrict __evp,
> +    timer_t *__restrict __timerid) __attribute__ ((__nothrow__ , __leaf__));
> +extern int timer_delete (timer_t __timerid) __attribute__ ((__nothrow__ , __leaf__));
> +extern int timer_settime (timer_t __timerid, int __flags,
> +     const struct itimerspec *__restrict __value,
> +     struct itimerspec *__restrict __ovalue) __attribute__ ((__nothrow__ , __leaf__));
> +extern int timer_gettime (timer_t __timerid, struct itimerspec *__value)
> +     __attribute__ ((__nothrow__ , __leaf__));
> +extern int timer_getoverrun (timer_t __timerid) __attribute__ ((__nothrow__ , __leaf__));
> +typedef unsigned long int pthread_t;
> +union pthread_attr_t
> +{
> +  char __size[56];
> +  long int __align;
> +};
> +typedef union pthread_attr_t pthread_attr_t;
> +typedef struct __pthread_internal_list
> +{
> +  struct __pthread_internal_list *__prev;
> +  struct __pthread_internal_list *__next;
> +} __pthread_list_t;
> +typedef union
> +{
> +  struct __pthread_mutex_s
> +  {
> +    int __lock;
> +    unsigned int __count;
> +    int __owner;
> +    unsigned int __nusers;
> +    int __kind;
> +    short __spins;
> +    short __elision;
> +    __pthread_list_t __list;
> +  } __data;
> +  char __size[40];
> +  long int __align;
> +} pthread_mutex_t;
> +typedef union
> +{
> +  char __size[4];
> +  int __align;
> +} pthread_mutexattr_t;
> +typedef union
> +{
> +  struct
> +  {
> +    int __lock;
> +    unsigned int __futex;
> +    __extension__ unsigned long long int __total_seq;
> +    __extension__ unsigned long long int __wakeup_seq;
> +    __extension__ unsigned long long int __woken_seq;
> +    void *__mutex;
> +    unsigned int __nwaiters;
> +    unsigned int __broadcast_seq;
> +  } __data;
> +  char __size[48];
> +  __extension__ long long int __align;
> +} pthread_cond_t;
> +typedef union
> +{
> +  char __size[4];
> +  int __align;
> +} pthread_condattr_t;
> +typedef unsigned int pthread_key_t;
> +typedef int pthread_once_t;
> +typedef union
> +{
> +  struct
> +  {
> +    int __lock;
> +    unsigned int __nr_readers;
> +    unsigned int __readers_wakeup;
> +    unsigned int __writer_wakeup;
> +    unsigned int __nr_readers_queued;
> +    unsigned int __nr_writers_queued;
> +    int __writer;
> +    int __shared;
> +    unsigned long int __pad1;
> +    unsigned long int __pad2;
> +    unsigned int __flags;
> +  } __data;
> +  char __size[56];
> +  long int __align;
> +} pthread_rwlock_t;
> +typedef union
> +{
> +  char __size[8];
> +  long int __align;
> +} pthread_rwlockattr_t;
> +typedef volatile int pthread_spinlock_t;
> +typedef union
> +{
> +  char __size[32];
> +  long int __align;
> +} pthread_barrier_t;
> +typedef union
> +{
> +  char __size[4];
> +  int __align;
> +} pthread_barrierattr_t;
> +typedef long int __jmp_buf[8];
> +enum
> +{
> +  PTHREAD_CREATE_JOINABLE,
> +  PTHREAD_CREATE_DETACHED
> +};
> +enum
> +{
> +  PTHREAD_MUTEX_TIMED_NP,
> +  PTHREAD_MUTEX_RECURSIVE_NP,
> +  PTHREAD_MUTEX_ERRORCHECK_NP,
> +  PTHREAD_MUTEX_ADAPTIVE_NP
> +  ,
> +  PTHREAD_MUTEX_NORMAL = PTHREAD_MUTEX_TIMED_NP,
> +  PTHREAD_MUTEX_RECURSIVE = PTHREAD_MUTEX_RECURSIVE_NP,
> +  PTHREAD_MUTEX_ERRORCHECK = PTHREAD_MUTEX_ERRORCHECK_NP,
> +  PTHREAD_MUTEX_DEFAULT = PTHREAD_MUTEX_NORMAL
> +};
> +enum
> +{
> +  PTHREAD_MUTEX_STALLED,
> +  PTHREAD_MUTEX_STALLED_NP = PTHREAD_MUTEX_STALLED,
> +  PTHREAD_MUTEX_ROBUST,
> +  PTHREAD_MUTEX_ROBUST_NP = PTHREAD_MUTEX_ROBUST
> +};
> +enum
> +{
> +  PTHREAD_PRIO_NONE,
> +  PTHREAD_PRIO_INHERIT,
> +  PTHREAD_PRIO_PROTECT
> +};
> +enum
> +{
> +  PTHREAD_RWLOCK_PREFER_READER_NP,
> +  PTHREAD_RWLOCK_PREFER_WRITER_NP,
> +  PTHREAD_RWLOCK_PREFER_WRITER_NONRECURSIVE_NP,
> +  PTHREAD_RWLOCK_DEFAULT_NP = PTHREAD_RWLOCK_PREFER_READER_NP
> +};
> +enum
> +{
> +  PTHREAD_INHERIT_SCHED,
> +  PTHREAD_EXPLICIT_SCHED
> +};
> +enum
> +{
> +  PTHREAD_SCOPE_SYSTEM,
> +  PTHREAD_SCOPE_PROCESS
> +};
> +enum
> +{
> +  PTHREAD_PROCESS_PRIVATE,
> +  PTHREAD_PROCESS_SHARED
> +};
> +struct _pthread_cleanup_buffer
> +{
> +  void (*__routine) (void *);
> +  void *__arg;
> +  int __canceltype;
> +  struct _pthread_cleanup_buffer *__prev;
> +};
> +enum
> +{
> +  PTHREAD_CANCEL_ENABLE,
> +  PTHREAD_CANCEL_DISABLE
> +};
> +enum
> +{
> +  PTHREAD_CANCEL_DEFERRED,
> +  PTHREAD_CANCEL_ASYNCHRONOUS
> +};
> +extern int pthread_create (pthread_t *__restrict __newthread,
> +      const pthread_attr_t *__restrict __attr,
> +      void *(*__start_routine) (void *),
> +      void *__restrict __arg) __attribute__ ((__nothrow__)) __attribute__ ((__nonnull__ (1, 3)));
> +extern void pthread_exit (void *__retval) __attribute__ ((__noreturn__));
> +extern int pthread_join (pthread_t __th, void **__thread_return);
> +extern int pthread_detach (pthread_t __th) __attribute__ ((__nothrow__ , __leaf__));
> +extern pthread_t pthread_self (void) __attribute__ ((__nothrow__ , __leaf__)) __attribute__ ((__const__));
> +extern int pthread_equal (pthread_t __thread1, pthread_t __thread2)
> +  __attribute__ ((__nothrow__ , __leaf__)) __attribute__ ((__const__));
> +extern int pthread_attr_init (pthread_attr_t *__attr) __attribute__ ((__nothrow__ , __leaf__)) __attribute__ ((__nonnull__ (1)));
> +extern int pthread_attr_destroy (pthread_attr_t *__attr)
> +     __attribute__ ((__nothrow__ , __leaf__)) __attribute__ ((__nonnull__ (1)));
> +extern int pthread_attr_getdetachstate (const pthread_attr_t *__attr,
> +     int *__detachstate)
> +     __attribute__ ((__nothrow__ , __leaf__)) __attribute__ ((__nonnull__ (1, 2)));
> +extern int pthread_attr_setdetachstate (pthread_attr_t *__attr,
> +     int __detachstate)
> +     __attribute__ ((__nothrow__ , __leaf__)) __attribute__ ((__nonnull__ (1)));
> +extern int pthread_attr_getguardsize (const pthread_attr_t *__attr,
> +          size_t *__guardsize)
> +     __attribute__ ((__nothrow__ , __leaf__)) __attribute__ ((__nonnull__ (1, 2)));
> +extern int pthread_attr_setguardsize (pthread_attr_t *__attr,
> +          size_t __guardsize)
> +     __attribute__ ((__nothrow__ , __leaf__)) __attribute__ ((__nonnull__ (1)));
> +extern int pthread_attr_getschedparam (const pthread_attr_t *__restrict __attr,
> +           struct sched_param *__restrict __param)
> +     __attribute__ ((__nothrow__ , __leaf__)) __attribute__ ((__nonnull__ (1, 2)));
> +extern int pthread_attr_setschedparam (pthread_attr_t *__restrict __attr,
> +           const struct sched_param *__restrict
> +           __param) __attribute__ ((__nothrow__ , __leaf__)) __attribute__ ((__nonnull__ (1, 2)));
> +extern int pthread_attr_getschedpolicy (const pthread_attr_t *__restrict
> +     __attr, int *__restrict __policy)
> +     __attribute__ ((__nothrow__ , __leaf__)) __attribute__ ((__nonnull__ (1, 2)));
> +extern int pthread_attr_setschedpolicy (pthread_attr_t *__attr, int __policy)
> +     __attribute__ ((__nothrow__ , __leaf__)) __attribute__ ((__nonnull__ (1)));
> +extern int pthread_attr_getinheritsched (const pthread_attr_t *__restrict
> +      __attr, int *__restrict __inherit)
> +     __attribute__ ((__nothrow__ , __leaf__)) __attribute__ ((__nonnull__ (1, 2)));
> +extern int pthread_attr_setinheritsched (pthread_attr_t *__attr,
> +      int __inherit)
> +     __attribute__ ((__nothrow__ , __leaf__)) __attribute__ ((__nonnull__ (1)));
> +extern int pthread_attr_getscope (const pthread_attr_t *__restrict __attr,
> +      int *__restrict __scope)
> +     __attribute__ ((__nothrow__ , __leaf__)) __attribute__ ((__nonnull__ (1, 2)));
> +extern int pthread_attr_setscope (pthread_attr_t *__attr, int __scope)
> +     __attribute__ ((__nothrow__ , __leaf__)) __attribute__ ((__nonnull__ (1)));
> +extern int pthread_attr_getstackaddr (const pthread_attr_t *__restrict
> +          __attr, void **__restrict __stackaddr)
> +     __attribute__ ((__nothrow__ , __leaf__)) __attribute__ ((__nonnull__ (1, 2))) __attribute__ ((__deprecated__));
> +extern int pthread_attr_setstackaddr (pthread_attr_t *__attr,
> +          void *__stackaddr)
> +     __attribute__ ((__nothrow__ , __leaf__)) __attribute__ ((__nonnull__ (1))) __attribute__ ((__deprecated__));
> +extern int pthread_attr_getstacksize (const pthread_attr_t *__restrict
> +          __attr, size_t *__restrict __stacksize)
> +     __attribute__ ((__nothrow__ , __leaf__)) __attribute__ ((__nonnull__ (1, 2)));
> +extern int pthread_attr_setstacksize (pthread_attr_t *__attr,
> +          size_t __stacksize)
> +     __attribute__ ((__nothrow__ , __leaf__)) __attribute__ ((__nonnull__ (1)));
> +extern int pthread_attr_getstack (const pthread_attr_t *__restrict __attr,
> +      void **__restrict __stackaddr,
> +      size_t *__restrict __stacksize)
> +     __attribute__ ((__nothrow__ , __leaf__)) __attribute__ ((__nonnull__ (1, 2, 3)));
> +extern int pthread_attr_setstack (pthread_attr_t *__attr, void *__stackaddr,
> +      size_t __stacksize) __attribute__ ((__nothrow__ , __leaf__)) __attribute__ ((__nonnull__ (1)));
> +extern int pthread_setschedparam (pthread_t __target_thread, int __policy,
> +      const struct sched_param *__param)
> +     __attribute__ ((__nothrow__ , __leaf__)) __attribute__ ((__nonnull__ (3)));
> +extern int pthread_getschedparam (pthread_t __target_thread,
> +      int *__restrict __policy,
> +      struct sched_param *__restrict __param)
> +     __attribute__ ((__nothrow__ , __leaf__)) __attribute__ ((__nonnull__ (2, 3)));
> +extern int pthread_setschedprio (pthread_t __target_thread, int __prio)
> +     __attribute__ ((__nothrow__ , __leaf__));
> +extern int pthread_once (pthread_once_t *__once_control,
> +    void (*__init_routine) (void)) __attribute__ ((__nonnull__ (1, 2)));
> +extern int pthread_setcancelstate (int __state, int *__oldstate);
> +extern int pthread_setcanceltype (int __type, int *__oldtype);
> +extern int pthread_cancel (pthread_t __th);
> +extern void pthread_testcancel (void);
> +typedef struct
> +{
> +  struct
> +  {
> +    __jmp_buf __cancel_jmp_buf;
> +    int __mask_was_saved;
> +  } __cancel_jmp_buf[1];
> +  void *__pad[4];
> +} __pthread_unwind_buf_t __attribute__ ((__aligned__));
> +struct __pthread_cleanup_frame
> +{
> +  void (*__cancel_routine) (void *);
> +  void *__cancel_arg;
> +  int __do_it;
> +  int __cancel_type;
> +};
> +extern void __pthread_register_cancel (__pthread_unwind_buf_t *__buf)
> +     ;
> +extern void __pthread_unregister_cancel (__pthread_unwind_buf_t *__buf)
> +  ;
> +extern void __pthread_unwind_next (__pthread_unwind_buf_t *__buf)
> +     __attribute__ ((__noreturn__))
> +     __attribute__ ((__weak__))
> +     ;
> +struct __jmp_buf_tag;
> +extern int __sigsetjmp (struct __jmp_buf_tag *__env, int __savemask) __attribute__ ((__nothrow__));
> +extern int pthread_mutex_init (pthread_mutex_t *__mutex,
> +          const pthread_mutexattr_t *__mutexattr)
> +     __attribute__ ((__nothrow__ , __leaf__)) __attribute__ ((__nonnull__ (1)));
> +extern int pthread_mutex_destroy (pthread_mutex_t *__mutex)
> +     __attribute__ ((__nothrow__ , __leaf__)) __attribute__ ((__nonnull__ (1)));
> +extern int pthread_mutex_trylock (pthread_mutex_t *__mutex)
> +     __attribute__ ((__nothrow__)) __attribute__ ((__nonnull__ (1)));
> +extern int pthread_mutex_lock (pthread_mutex_t *__mutex)
> +     __attribute__ ((__nothrow__)) __attribute__ ((__nonnull__ (1)));
> +extern int pthread_mutex_timedlock (pthread_mutex_t *__restrict __mutex,
> +        const struct timespec *__restrict
> +        __abstime) __attribute__ ((__nothrow__)) __attribute__ ((__nonnull__ (1, 2)));
> +extern int pthread_mutex_unlock (pthread_mutex_t *__mutex)
> +     __attribute__ ((__nothrow__)) __attribute__ ((__nonnull__ (1)));
> +extern int pthread_mutex_getprioceiling (const pthread_mutex_t *
> +      __restrict __mutex,
> +      int *__restrict __prioceiling)
> +     __attribute__ ((__nothrow__ , __leaf__)) __attribute__ ((__nonnull__ (1, 2)));
> +extern int pthread_mutex_setprioceiling (pthread_mutex_t *__restrict __mutex,
> +      int __prioceiling,
> +      int *__restrict __old_ceiling)
> +     __attribute__ ((__nothrow__ , __leaf__)) __attribute__ ((__nonnull__ (1, 3)));
> +extern int pthread_mutex_consistent (pthread_mutex_t *__mutex)
> +     __attribute__ ((__nothrow__ , __leaf__)) __attribute__ ((__nonnull__ (1)));
> +extern int pthread_mutexattr_init (pthread_mutexattr_t *__attr)
> +     __attribute__ ((__nothrow__ , __leaf__)) __attribute__ ((__nonnull__ (1)));
> +extern int pthread_mutexattr_destroy (pthread_mutexattr_t *__attr)
> +     __attribute__ ((__nothrow__ , __leaf__)) __attribute__ ((__nonnull__ (1)));
> +extern int pthread_mutexattr_getpshared (const pthread_mutexattr_t *
> +      __restrict __attr,
> +      int *__restrict __pshared)
> +     __attribute__ ((__nothrow__ , __leaf__)) __attribute__ ((__nonnull__ (1, 2)));
> +extern int pthread_mutexattr_setpshared (pthread_mutexattr_t *__attr,
> +      int __pshared)
> +     __attribute__ ((__nothrow__ , __leaf__)) __attribute__ ((__nonnull__ (1)));
> +extern int pthread_mutexattr_gettype (const pthread_mutexattr_t *__restrict
> +          __attr, int *__restrict __kind)
> +     __attribute__ ((__nothrow__ , __leaf__)) __attribute__ ((__nonnull__ (1, 2)));
> +extern int pthread_mutexattr_settype (pthread_mutexattr_t *__attr, int __kind)
> +     __attribute__ ((__nothrow__ , __leaf__)) __attribute__ ((__nonnull__ (1)));
> +extern int pthread_mutexattr_getprotocol (const pthread_mutexattr_t *
> +       __restrict __attr,
> +       int *__restrict __protocol)
> +     __attribute__ ((__nothrow__ , __leaf__)) __attribute__ ((__nonnull__ (1, 2)));
> +extern int pthread_mutexattr_setprotocol (pthread_mutexattr_t *__attr,
> +       int __protocol)
> +     __attribute__ ((__nothrow__ , __leaf__)) __attribute__ ((__nonnull__ (1)));
> +extern int pthread_mutexattr_getprioceiling (const pthread_mutexattr_t *
> +          __restrict __attr,
> +          int *__restrict __prioceiling)
> +     __attribute__ ((__nothrow__ , __leaf__)) __attribute__ ((__nonnull__ (1, 2)));
> +extern int pthread_mutexattr_setprioceiling (pthread_mutexattr_t *__attr,
> +          int __prioceiling)
> +     __attribute__ ((__nothrow__ , __leaf__)) __attribute__ ((__nonnull__ (1)));
> +extern int pthread_mutexattr_getrobust (const pthread_mutexattr_t *__attr,
> +     int *__robustness)
> +     __attribute__ ((__nothrow__ , __leaf__)) __attribute__ ((__nonnull__ (1, 2)));
> +extern int pthread_mutexattr_setrobust (pthread_mutexattr_t *__attr,
> +     int __robustness)
> +     __attribute__ ((__nothrow__ , __leaf__)) __attribute__ ((__nonnull__ (1)));
> +extern int pthread_rwlock_init (pthread_rwlock_t *__restrict __rwlock,
> +    const pthread_rwlockattr_t *__restrict
> +    __attr) __attribute__ ((__nothrow__ , __leaf__)) __attribute__ ((__nonnull__ (1)));
> +extern int pthread_rwlock_destroy (pthread_rwlock_t *__rwlock)
> +     __attribute__ ((__nothrow__ , __leaf__)) __attribute__ ((__nonnull__ (1)));
> +extern int pthread_rwlock_rdlock (pthread_rwlock_t *__rwlock)
> +     __attribute__ ((__nothrow__)) __attribute__ ((__nonnull__ (1)));
> +extern int pthread_rwlock_tryrdlock (pthread_rwlock_t *__rwlock)
> +  __attribute__ ((__nothrow__)) __attribute__ ((__nonnull__ (1)));
> +extern int pthread_rwlock_timedrdlock (pthread_rwlock_t *__restrict __rwlock,
> +           const struct timespec *__restrict
> +           __abstime) __attribute__ ((__nothrow__)) __attribute__ ((__nonnull__ (1, 2)));
> +extern int pthread_rwlock_wrlock (pthread_rwlock_t *__rwlock)
> +     __attribute__ ((__nothrow__)) __attribute__ ((__nonnull__ (1)));
> +extern int pthread_rwlock_trywrlock (pthread_rwlock_t *__rwlock)
> +     __attribute__ ((__nothrow__)) __attribute__ ((__nonnull__ (1)));
> +extern int pthread_rwlock_timedwrlock (pthread_rwlock_t *__restrict __rwlock,
> +           const struct timespec *__restrict
> +           __abstime) __attribute__ ((__nothrow__)) __attribute__ ((__nonnull__ (1, 2)));
> +extern int pthread_rwlock_unlock (pthread_rwlock_t *__rwlock)
> +     __attribute__ ((__nothrow__)) __attribute__ ((__nonnull__ (1)));
> +extern int pthread_rwlockattr_init (pthread_rwlockattr_t *__attr)
> +     __attribute__ ((__nothrow__ , __leaf__)) __attribute__ ((__nonnull__ (1)));
> +extern int pthread_rwlockattr_destroy (pthread_rwlockattr_t *__attr)
> +     __attribute__ ((__nothrow__ , __leaf__)) __attribute__ ((__nonnull__ (1)));
> +extern int pthread_rwlockattr_getpshared (const pthread_rwlockattr_t *
> +       __restrict __attr,
> +       int *__restrict __pshared)
> +     __attribute__ ((__nothrow__ , __leaf__)) __attribute__ ((__nonnull__ (1, 2)));
> +extern int pthread_rwlockattr_setpshared (pthread_rwlockattr_t *__attr,
> +       int __pshared)
> +     __attribute__ ((__nothrow__ , __leaf__)) __attribute__ ((__nonnull__ (1)));
> +extern int pthread_rwlockattr_getkind_np (const pthread_rwlockattr_t *
> +       __restrict __attr,
> +       int *__restrict __pref)
> +     __attribute__ ((__nothrow__ , __leaf__)) __attribute__ ((__nonnull__ (1, 2)));
> +extern int pthread_rwlockattr_setkind_np (pthread_rwlockattr_t *__attr,
> +       int __pref) __attribute__ ((__nothrow__ , __leaf__)) __attribute__ ((__nonnull__ (1)));
> +extern int pthread_cond_init (pthread_cond_t *__restrict __cond,
> +         const pthread_condattr_t *__restrict __cond_attr)
> +     __attribute__ ((__nothrow__ , __leaf__)) __attribute__ ((__nonnull__ (1)));
> +extern int pthread_cond_destroy (pthread_cond_t *__cond)
> +     __attribute__ ((__nothrow__ , __leaf__)) __attribute__ ((__nonnull__ (1)));
> +extern int pthread_cond_signal (pthread_cond_t *__cond)
> +     __attribute__ ((__nothrow__)) __attribute__ ((__nonnull__ (1)));
> +extern int pthread_cond_broadcast (pthread_cond_t *__cond)
> +     __attribute__ ((__nothrow__)) __attribute__ ((__nonnull__ (1)));
> +extern int pthread_cond_wait (pthread_cond_t *__restrict __cond,
> +         pthread_mutex_t *__restrict __mutex)
> +     __attribute__ ((__nonnull__ (1, 2)));
> +extern int pthread_cond_timedwait (pthread_cond_t *__restrict __cond,
> +       pthread_mutex_t *__restrict __mutex,
> +       const struct timespec *__restrict __abstime)
> +     __attribute__ ((__nonnull__ (1, 2, 3)));
> +extern int pthread_condattr_init (pthread_condattr_t *__attr)
> +     __attribute__ ((__nothrow__ , __leaf__)) __attribute__ ((__nonnull__ (1)));
> +extern int pthread_condattr_destroy (pthread_condattr_t *__attr)
> +     __attribute__ ((__nothrow__ , __leaf__)) __attribute__ ((__nonnull__ (1)));
> +extern int pthread_condattr_getpshared (const pthread_condattr_t *
> +     __restrict __attr,
> +     int *__restrict __pshared)
> +     __attribute__ ((__nothrow__ , __leaf__)) __attribute__ ((__nonnull__ (1, 2)));
> +extern int pthread_condattr_setpshared (pthread_condattr_t *__attr,
> +     int __pshared) __attribute__ ((__nothrow__ , __leaf__)) __attribute__ ((__nonnull__ (1)));
> +extern int pthread_condattr_getclock (const pthread_condattr_t *
> +          __restrict __attr,
> +          __clockid_t *__restrict __clock_id)
> +     __attribute__ ((__nothrow__ , __leaf__)) __attribute__ ((__nonnull__ (1, 2)));
> +extern int pthread_condattr_setclock (pthread_condattr_t *__attr,
> +          __clockid_t __clock_id)
> +     __attribute__ ((__nothrow__ , __leaf__)) __attribute__ ((__nonnull__ (1)));
> +extern int pthread_spin_init (pthread_spinlock_t *__lock, int __pshared)
> +     __attribute__ ((__nothrow__ , __leaf__)) __attribute__ ((__nonnull__ (1)));
> +extern int pthread_spin_destroy (pthread_spinlock_t *__lock)
> +     __attribute__ ((__nothrow__ , __leaf__)) __attribute__ ((__nonnull__ (1)));
> +extern int pthread_spin_lock (pthread_spinlock_t *__lock)
> +     __attribute__ ((__nothrow__)) __attribute__ ((__nonnull__ (1)));
> +extern int pthread_spin_trylock (pthread_spinlock_t *__lock)
> +     __attribute__ ((__nothrow__)) __attribute__ ((__nonnull__ (1)));
> +extern int pthread_spin_unlock (pthread_spinlock_t *__lock)
> +     __attribute__ ((__nothrow__)) __attribute__ ((__nonnull__ (1)));
> +extern int pthread_barrier_init (pthread_barrier_t *__restrict __barrier,
> +     const pthread_barrierattr_t *__restrict
> +     __attr, unsigned int __count)
> +     __attribute__ ((__nothrow__ , __leaf__)) __attribute__ ((__nonnull__ (1)));
> +extern int pthread_barrier_destroy (pthread_barrier_t *__barrier)
> +     __attribute__ ((__nothrow__ , __leaf__)) __attribute__ ((__nonnull__ (1)));
> +extern int pthread_barrier_wait (pthread_barrier_t *__barrier)
> +     __attribute__ ((__nothrow__)) __attribute__ ((__nonnull__ (1)));
> +extern int pthread_barrierattr_init (pthread_barrierattr_t *__attr)
> +     __attribute__ ((__nothrow__ , __leaf__)) __attribute__ ((__nonnull__ (1)));
> +extern int pthread_barrierattr_destroy (pthread_barrierattr_t *__attr)
> +     __attribute__ ((__nothrow__ , __leaf__)) __attribute__ ((__nonnull__ (1)));
> +extern int pthread_barrierattr_getpshared (const pthread_barrierattr_t *
> +        __restrict __attr,
> +        int *__restrict __pshared)
> +     __attribute__ ((__nothrow__ , __leaf__)) __attribute__ ((__nonnull__ (1, 2)));
> +extern int pthread_barrierattr_setpshared (pthread_barrierattr_t *__attr,
> +        int __pshared)
> +     __attribute__ ((__nothrow__ , __leaf__)) __attribute__ ((__nonnull__ (1)));
> +extern int pthread_key_create (pthread_key_t *__key,
> +          void (*__destr_function) (void *))
> +     __attribute__ ((__nothrow__ , __leaf__)) __attribute__ ((__nonnull__ (1)));
> +extern int pthread_key_delete (pthread_key_t __key) __attribute__ ((__nothrow__ , __leaf__));
> +extern void *pthread_getspecific (pthread_key_t __key) __attribute__ ((__nothrow__ , __leaf__));
> +extern int pthread_setspecific (pthread_key_t __key,
> +    const void *__pointer) __attribute__ ((__nothrow__ , __leaf__)) ;
> +extern int pthread_getcpuclockid (pthread_t __thread_id,
> +      __clockid_t *__clock_id)
> +     __attribute__ ((__nothrow__ , __leaf__)) __attribute__ ((__nonnull__ (2)));
> +extern int pthread_atfork (void (*__prepare) (void),
> +      void (*__parent) (void),
> +      void (*__child) (void)) __attribute__ ((__nothrow__ , __leaf__));
> +extern __inline __attribute__ ((__gnu_inline__)) int
> +__attribute__ ((__nothrow__ , __leaf__)) pthread_equal (pthread_t __thread1, pthread_t __thread2)
> +{
> +  return __thread1 == __thread2;
> +}
> +void cleanup_fn(void *mutex);
> +typedef struct {
> +  size_t progress;
> +  size_t total;
> +  pthread_mutex_t mutex;
> +  pthread_cond_t cond;
> +  double min_wait;
> +} dmnsn_future;
> +void
> +dmnsn_future_wait(dmnsn_future *future, double progress)
> +{
> +  pthread_mutex_lock(&future->mutex);
> +  while ((double)future->progress/future->total < progress) {
> +    if (progress < future->min_wait) {
> +      future->min_wait = progress;
> +    }
> +    do { __pthread_unwind_buf_t __cancel_buf; void (*__cancel_routine) (void *) = (cleanup_fn); void *__cancel_arg = (&future->mutex); int __not_first_call = __sigsetjmp ((struct __jmp_buf_tag *) (void *) __cancel_buf.__cancel_jmp_buf, 0); if (__builtin_expect ((__not_first_call), 0)) { __cancel_routine (__cancel_arg); __pthread_unwind_next (&__cancel_buf); } __pthread_register_cancel (&__cancel_buf); do {;
> +    pthread_cond_wait(&future->cond, &future->mutex);
> +    do { } while (0); } while (0); __pthread_unregister_cancel (&__cancel_buf); if (0) __cancel_routine (__cancel_arg); } while (0);
> +  }
> +  pthread_mutex_unlock(&future->mutex);
> +}
>

