[Patch, Fortran] PR 55548: SYSTEM_CLOCK with integer(8) provides nanosecond resolution, but only microsecond precision (without -lrt)

Janus Weil janus@gcc.gnu.org
Sat Dec 1 23:57:00 GMT 2012

Hi Janne,

thanks for your feedback ...

>> here is a straightforward patch for the intrinsic procedure
>> SYSTEM_CLOCK. It does two things:
>> 1) It reduces the resolution of the int8 version from 1 nanosecond to
>> 1 microsecond (COUNT_RATE = 1000000).
>> 2) It adds an int16 version with nanosecond precision.
>> The motivation for item #1 was mainly that the actual precision is
>> usually not better than 1 microsec anyway (unless linking with -lrt).
>> This results in SYSTEM_CLOCK giving values whose last three digits are
>> zero. One can argue that this is not a dramatic misbehavior, but it
>> has disadvantages for certain applications, like e.g. using
>> SYSTEM_CLOCK to initialize the random seed in a Monte-Carlo
>> simulation. In general, I would say that the value of COUNT_RATE
>> should not be larger than the actual precision of the clock used.
>> Moreover, the microsecond resolution for int8 arguments has the
>> advantage that it is compatible with ifort's behavior. Also I think a
>> resolution of 1 microsecond is sufficient for most applications. If
>> someone really needs more, he can now use the int16 version (and link
>> with -lrt).
>> Regtested on x86_64-unknown-linux-gnu (although we don't actually seem
>> to have any test cases for SYSTEM_CLOCK yet). Ok for trunk?
>> Btw, does it make sense to also add an int2 version? If yes, which
>> resolution? Note that most other compilers seem to have an int2
>> version of SYSTEM_CLOCK ...
> No, not Ok.
> IIRC there was some discussion about COUNT_RATE back when the
> nanosecond resolution system_clock feature was developed, and one idea
> was to have different count rates depending on whether clock_gettime
> is available, as you also suggest in the PR. In the end it was decided
> to keep a constant count rate as a consistent rate was seen as more
> important than "wasting" a few of the least significant bits when
> nanosecond resolution is not available, since int8 has sufficient
> range for several centuries even with nanosecond resolution. Anyway, I
> don't feel particularly strongly about this, and if there now is a
> consensus to have a changing count rate, I can live with that.

my patch does not implement such a "changing" (or library-dependent)
count rate, but I would indeed prefer this over the current behavior
(even more so if there is consensus to reject my current patch).

> But, I do object to providing only microsecond resolution for the int8
> version. Nanosecond resolution is indeed not necessary in most cases,
> but like the saying goes, "Precision, like virginity, is never
> regained once lost.". Reducing the resolution is a regression for
> those users who have relied on this feature;

well, I wouldn't count it as a real regression, since:
a) The Fortran standard does not require a certain resolution, but
defines it to be processor dependent.
b) You cannot rely on different compilers using the same resolution,
and therefore you cannot expect a fixed resolution if you want to
write portable code.

> I, for one, have several
> test and benchmark programs which depend on nanosecond resolution for
> the int8 system_clock.

You mean they depend on the count values being delivered in units of
nanoseconds? In portable code you cannot depend on getting actual
nanosecond precision, since it may not be available on a given system.

> OTOH, if one wants a microsecond count rate,
> converting the count value is just a single statement following the
> system_clock call.

Certainly converting the units is not a problem.

BUT: In a way I think that the current behavior is "lying" to the
user. It pretends to provide a nanosecond resolution, while in fact
the values you get might only be precise up to 1 microsecond.

In my opinion the COUNT_RATE argument should rather return a value
that corresponds to the actual precision of the clock. It should not
just report some fixed unit of measure, but tell me something about
the precision of my clock. If it gives me a value in microseconds, I
can simply convert it to nanoseconds if I need to (knowing that the
precision is only a microsecond). But if it gives me a value in
nanoseconds, how can I tell whether I am actually getting nanosecond
precision?

In summary, I think we should do one of the following:
1) Provide different versions with fixed COUNT_RATE and let the user
choose one via the kind of the argument. This is what we do right now,
and my patch was expanding on this by providing an additional option.
If you want nanosecond resolution, you can still use the int16
version, but now you can also choose to have microsecond resolution
(via the int8 option).
2) Adapt the COUNT_RATE according to the actual precision of the clock
on the system.
3) A combination of both, where the user could choose via
int4/int8/int16 that he does not need more than milli/micro/nanosecond
resolution. Then SYSTEM_CLOCK would try to provide this precision, but
reduce COUNT_RATE to a lower value if the desired precision cannot be
achieved.

I think the last one would be most desirable. Any other opinions?

> As for the int16 and int2 variants, F2003 requires that the arguments
> can be of any kind. AFAICS, this means that different arguments can be
> of different kind.

I will not go into this discussion right now, since this was not the
original intention of my patch.

