This is the mail archive of the gcc-bugs@gcc.gnu.org mailing list for the GCC project.
[Bug fortran/55548] New: SYSTEM_CLOCK with integer(8) provides nanosecond resolution, but only microsecond precision (without -lrt)
- From: "janus at gcc dot gnu.org" <gcc-bugzilla at gcc dot gnu dot org>
- To: gcc-bugs at gcc dot gnu dot org
- Date: Fri, 30 Nov 2012 14:07:01 +0000
- Subject: [Bug fortran/55548] New: SYSTEM_CLOCK with integer(8) provides nanosecond resolution, but only microsecond precision (without -lrt)
- Auto-submitted: auto-generated
http://gcc.gnu.org/bugzilla/show_bug.cgi?id=55548
Bug #: 55548
Summary: SYSTEM_CLOCK with integer(8) provides nanosecond resolution, but only microsecond precision (without -lrt)
Classification: Unclassified
Product: gcc
Version: 4.8.0
Status: UNCONFIRMED
Keywords: wrong-code
Severity: normal
Priority: P3
Component: fortran
AssignedTo: unassigned@gcc.gnu.org
ReportedBy: janus@gcc.gnu.org
Simple test case:

  integer(8) :: t, rate, cmax
  call system_clock(t, rate, cmax)
  print *, t, rate, cmax
  end
When this is compiled without any special flags (and in particular without -lrt),
SYSTEM_CLOCK reports a COUNT_RATE of 1000000000 (i.e. one count per nanosecond),
but the values of t are only precise to 1 microsecond: the last three digits are
always zero. This is on x86_64-unknown-linux-gnu (Linux 3.4.11, glibc 2.15).
I am aware that linking with -lrt (as mentioned in the documentation) solves this
problem and makes SYSTEM_CLOCK yield values that indeed have nanosecond
precision. However, the precision claimed via the COUNT_RATE argument should
match the precision actually delivered, also with default flags!
Possible solutions:
1) Use a nanosecond COUNT_RATE only when -lrt is given, and microsecond
otherwise.
2) Always use microsecond with integer(8), and nanosecond with integer(16).
Note that using SYSTEM_CLOCK with integer(16) arguments currently results in a
link error:

sysclock.f90:(.text+0x455): undefined reference to `_gfortran_system_clock_16'