On Tue, 2021-07-20 at 16:45 -0300, Adhemerval Zanella wrote:
> On 20/07/2021 08:37, Michael J. Baars wrote:
> > On Mon, 2021-07-19 at 09:04 -0300, Adhemerval Zanella wrote:
> > > On 19/07/2021 08:34, Michael J. Baars via Libc-alpha wrote:
> > > > Hi,
> > > >
> > > > I've been using the clock() function for years now. Until recently I
> > > > thought the timing mechanism worked perfectly; then I tried to let the
> > > > actual time run next to it. As it appears, the clock() function isn't
> > > > working as perfectly as I thought.
> > > >
> > > > As a consequence, my internet connection from T-Mobile (which I don't
> > > > have anymore, so I can't show you the actual speed with the clock()
> > > > corrected) wasn't running at 100mbit/s but a lot slower. The same
> > > > holds for all other T-Mobile customers in Holland. I hope that someone
> > > > is willing to have a look at the glibc clock() function and repair it.
> > > > A lot of people would benefit from that.
> > > >
> > > > Attached: the benchmark of the 100mbit internet connection, the
> > > > corrected clock() function, and an application that shows the
> > > > malfunction.
> > >
> > > I didn't fully understand how the clock_gettime() implementation would
> > > be related to your internet speed, nor from which architecture, kernel
> > > version, and glibc version you obtained your numbers.
> >
> > architecture: x86_64
> > kernel: kernel-5.10.8-100.fc32.x86_64
> > glibc: glibc-2.31-5
> >
> > > In any case the clock_gettime() implementation has been changed
> > > recently to support 64-bit time_t on legacy architectures. Another
> > > change in a previous release was to move the vDSO pointer setup to the
> > > loader, so there is no need to demangle it before running (the pointers
> > > are set on a read-only page, and demangling might increase the latency
> > > a bit).
> > >
> > > Currently, for ABIs with default 64-bit time_t there is no change
> > > (x86_64 for instance). On legacy ABIs with 32-bit time_t support, it
> > > would first try to use the vDSO (first the 64-bit one, then the
> > > 32-bit), then the 64-bit syscall, and if that is not available, the
> > > 32-bit time_t one.
> > >
> > > So the potential issues you might find are either if you are running
> > > on an architecture without any vDSO support on a pre v5.1 kernel
> > > (without 64-bit support), or if you are running on a pre v5.1 kernel
> > > with vDSO support in y2038 or later. For the former, glibc will issue
> > > an additional 64-bit syscall that will return ENOSYS; for the latter
> > > it would first run the vDSO, fall back to the 64-bit syscall, and then
> > > to the 32-bit time_t syscall.
> >
> > Are you telling me the clock from the example application runs normally
> > on your machine with "#undef CLOCK_CORRECTED"?
>
> No, because clock() uses CLOCK_PROCESS_CPUTIME_ID, while your code for
> CLOCK_CORRECTED uses CLOCK_REALTIME. That's why I was puzzled how this
> could be related to your internet connection at all, or why one would use
> clock_gettime(CLOCK_REALTIME) as a replacement for clock() (each
> interface uses a completely different clock).
>
> The clock() implementation was changed in 2.18 (released in 2013) to use
> CLOCK_PROCESS_CPUTIME_ID instead of times() plus _SC_CLK_TCK, to fix
> BZ#12515 [1]. It allows much better precision, since it lets the kernel
> handle the timer precision instead of trying to emulate it in userspace
> (which has inherent issues).
>
> [1] https://sourceware.org/bugzilla/show_bug.cgi?id=12515
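
To illustrate the distinction drawn above (a minimal standalone sketch, not
code from the attached test program): clock() accounts the CPU time consumed
by the calling process, so it barely advances while the process is blocked,
whereas clock_gettime(CLOCK_REALTIME) tracks wall-clock time:

/* Illustrative sketch: clock() accounts this process's CPU time, while
   CLOCK_REALTIME is wall-clock time, so the two diverge as soon as the
   process blocks (sleeping, waiting on the network, ...).  */
#include <stdio.h>
#include <time.h>
#include <unistd.h>

int
main (void)
{
  struct timespec r0, r1;
  clock_t c0 = clock ();
  clock_gettime (CLOCK_REALTIME, &r0);

  sleep (1);    /* blocked: wall time advances, CPU time almost does not */

  clock_t c1 = clock ();
  clock_gettime (CLOCK_REALTIME, &r1);

  printf ("cpu  time: %f s\n", (double) (c1 - c0) / CLOCKS_PER_SEC);
  printf ("wall time: %f s\n", (double) (r1.tv_sec - r0.tv_sec)
                               + (r1.tv_nsec - r0.tv_nsec) / 1e9);
  return 0;
}

For an otherwise idle process this prints roughly 0 s of CPU time against
about 1 s of wall time, which is exactly the gap the two interfaces expose.
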
So what you are saying is that this is the correct way to measure the speed
of an internet connection? If you want to prevent other processes from
sending data during your time measurement, you use clock()
(CLOCK_PROCESS_CPUTIME_ID).

That's strange, because that makes the speed of my emmc internal flash
memory end up at 17.5Gb/s, while dd is running at 25mb/s. The guys at the
coreutils mailing list simply did not believe me.
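
For comparison, a hedged sketch of a throughput measurement along the lines
of what dd reports: bytes transferred divided by wall-clock elapsed time
(CLOCK_MONOTONIC). The descriptor fd and the buffer size are made-up
placeholders for illustration, not anything taken from the attached
benchmark:

/* Hedged sketch: fd and BUFSZ are placeholders.  Throughput is bytes
   divided by wall-clock (CLOCK_MONOTONIC) elapsed time.  */
#include <time.h>
#include <unistd.h>

#define BUFSZ 65536

/* Read fd until EOF and return the observed throughput in bytes/second.  */
static double
measure_throughput (int fd)
{
  static char buf[BUFSZ];
  struct timespec t0, t1;
  size_t total = 0;
  ssize_t n;

  clock_gettime (CLOCK_MONOTONIC, &t0);
  while ((n = read (fd, buf, sizeof buf)) > 0)
    total += n;
  clock_gettime (CLOCK_MONOTONIC, &t1);

  double wall = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
  return total / wall;
}

Dividing the same byte count by clock()'s CPU time instead inflates the
figure whenever the process spends most of the interval blocked in read(),
which is the usual case for both network sockets and block devices.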