public inbox for libc-alpha@sourceware.org
* [RFC] Test time/tst-cpuclock1.c intermittent failures
@ 2020-01-24 13:53 Lucas A. M. Magalhaes
  2020-01-24 15:17 ` Adhemerval Zanella
  2020-01-28 19:01 ` [PATCH] Fix time/tst-cpuclock1 " Lucas A. M. Magalhaes
  0 siblings, 2 replies; 10+ messages in thread
From: Lucas A. M. Magalhaes @ 2020-01-24 13:53 UTC (permalink / raw)
  To: GlibC Alpha List

The time/tst-cpuclock1.c test fails when run with a high "nice" value
while other loads compete for the CPU.

First of all, I fail to understand the purpose of this test.  It seems
to me that it's a realtime test, as it expects times that are reasonable
for realtime applications.  Indeed, it was moved from rt/ to time/.  So,
what is this testing?  And why was it moved to time/?

Next, in my attempt to fix it I tried to amortize the failure rate:
I ran the timing-sensitive code multiple times and looked for a 70%
success rate.  However, this failed as well; in my first test under
some CPU load it still failed.  I would appreciate suggestions for a
solution.
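
Roughly, the amortization attempt looked like this (illustrative
sketch only, not the exact code I tried):

/* Run the timing-sensitive check several times and require a fraction
   of the runs to pass, instead of requiring a single run to pass.  */
static int
run_with_retries (int (*check) (void), int runs, int required)
{
  int passed = 0;
  for (int i = 0; i < runs; i++)
    if (check () == 0)
      passed++;
  /* E.g. runs = 10, required = 7 for a 70% success rate.  */
  return passed >= required ? 0 : 1;
}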

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [RFC] Test time/tst-cpuclock1.c intermittent failures
  2020-01-24 13:53 [RFC] Test time/tst-cpuclock1.c intermittent failures Lucas A. M. Magalhaes
@ 2020-01-24 15:17 ` Adhemerval Zanella
  2020-01-24 15:30   ` Florian Weimer
  2020-01-28 19:01 ` [PATCH] Fix time/tst-cpuclock1 " Lucas A. M. Magalhaes
  1 sibling, 1 reply; 10+ messages in thread
From: Adhemerval Zanella @ 2020-01-24 15:17 UTC (permalink / raw)
  To: libc-alpha



On 24/01/2020 09:57, Lucas A. M. Magalhaes wrote:
> The time/tst-cpuclock1.c test fails when run with a high "nice" value
> while other loads compete for the CPU.
> 
> First of all, I fail to understand the purpose of this test.  It seems
> to me that it's a realtime test, as it expects times that are reasonable
> for realtime applications.  Indeed, it was moved from rt/ to time/.  So,
> what is this testing?  And why was it moved to time/?

It tests 'clock_getcpuclockid', and the idea is to check whether the CPU
timer of a CPU-bound process is correctly obtained with clock_gettime,
within an expected range.
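
The basic pattern under test is roughly this (simplified sketch, not
the actual test code):

#include <stdio.h>
#include <time.h>

int
main (void)
{
  clockid_t cid;
  struct timespec ts;

  /* Get the CPU-time clock id of the target process (0 == self).  */
  if (clock_getcpuclockid (0, &cid) != 0)
    return 1;

  /* Read how much CPU time the process has consumed so far.  */
  if (clock_gettime (cid, &ts) != 0)
    return 1;

  printf ("CPU time: %ld.%09ld\n", (long) ts.tv_sec, ts.tv_nsec);
  return 0;
}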

However, since the interface returns the CLOCK_PROCESS_CPUTIME_ID clockid
of the target pid, its result is subject to scheduling pressure.  This means
that even with priority boosting, incorrect results might happen depending
on the system load.

It was moved from rt/ to time/ because the symbol was moved from librt to
libc.

> 
> Next, in my attempt to fix it I tried to amortize the failure rate:
> I ran the timing-sensitive code multiple times and looked for a 70%
> success rate.  However, this failed as well; in my first test under
> some CPU load it still failed.  I would appreciate suggestions for a
> solution.
> 

I think that to get reliable results on such a test we will need to use a
scheduler with different guarantees.  For instance, SCHED_DEADLINE with a
runtime of X, a deadline of Y (Y >> X), and an even longer period so as not
to interfere with the parent's probing.  But it would require a 3.14 kernel
and privileged access.

Maybe one option would be to relax the time delta between the two
clock_gettime probes to accept values in the range 0s to 0.5s, and, if
SCHED_DEADLINE is available, use a stricter timing check.
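
For reference, there is no glibc wrapper for sched_setattr, so using
SCHED_DEADLINE would need the raw syscall, roughly like this (untested
sketch; the struct layout and policy value follow sched_setattr(2), and
the runtime/deadline/period values are placeholders):

#include <stdint.h>
#include <sys/syscall.h>
#include <unistd.h>

struct sched_attr
{
  uint32_t size;
  uint32_t sched_policy;
  uint64_t sched_flags;
  int32_t sched_nice;
  uint32_t sched_priority;
  uint64_t sched_runtime;   /* Nanoseconds.  */
  uint64_t sched_deadline;  /* Nanoseconds.  */
  uint64_t sched_period;    /* Nanoseconds.  */
};

static int
set_deadline (uint64_t runtime_ns, uint64_t deadline_ns, uint64_t period_ns)
{
  struct sched_attr attr =
    {
      .size = sizeof (attr),
      .sched_policy = 6,	/* SCHED_DEADLINE, from <linux/sched.h>.  */
      .sched_runtime = runtime_ns,
      .sched_deadline = deadline_ns,
      .sched_period = period_ns,
    };
  /* Requires Linux >= 3.14 and CAP_SYS_NICE.  */
  return syscall (SYS_sched_setattr, 0, &attr, 0);
}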

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [RFC] Test time/tst-cpuclock1.c intermittent failures
  2020-01-24 15:17 ` Adhemerval Zanella
@ 2020-01-24 15:30   ` Florian Weimer
  2020-01-24 16:38     ` Carlos O'Donell
  0 siblings, 1 reply; 10+ messages in thread
From: Florian Weimer @ 2020-01-24 15:30 UTC (permalink / raw)
  To: Adhemerval Zanella; +Cc: libc-alpha

* Adhemerval Zanella:

> It tests 'clock_getcpuclockid', and the idea is to check whether the CPU
> timer of a CPU-bound process is correctly obtained with clock_gettime,
> within an expected range.
>
> However, since the interface returns the CLOCK_PROCESS_CPUTIME_ID clockid
> of the target pid, its result is subject to scheduling pressure.  This means
> that even with priority boosting, incorrect results might happen depending
> on the system load.
>
> It was moved from rt/ to time/ because the symbol was moved from librt to
> libc.

We could move it back to rt.  It might help somewhat because the rt
tests are serialized.

Thanks,
Florian

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [RFC] Test time/tst-cpuclock1.c intermittent failures
  2020-01-24 15:30   ` Florian Weimer
@ 2020-01-24 16:38     ` Carlos O'Donell
  2020-01-24 16:50       ` Lucas A. M. Magalhaes
  0 siblings, 1 reply; 10+ messages in thread
From: Carlos O'Donell @ 2020-01-24 16:38 UTC (permalink / raw)
  To: Florian Weimer, Adhemerval Zanella; +Cc: libc-alpha

On 1/24/20 10:17 AM, Florian Weimer wrote:
> * Adhemerval Zanella:
> 
>> It tests 'clock_getcpuclockid', and the idea is to check whether the CPU
>> timer of a CPU-bound process is correctly obtained with clock_gettime,
>> within an expected range.
>>
>> However, since the interface returns the CLOCK_PROCESS_CPUTIME_ID clockid
>> of the target pid, its result is subject to scheduling pressure.  This means
>> that even with priority boosting, incorrect results might happen depending
>> on the system load.
>>
>> It was moved from rt/ to time/ because the symbol was moved from librt to
>> libc.
> 
> We could move it back to rt.  It might help somewhat because the rt
> tests are serialized.

I would like to get away from serialized tests, so if we can write this test
to be more robust and take into account the system load or uncertainty, then
that would be a win IMO.

Any other alternative is costly for the project:
- Run tests serially (bad for developer experience)
- Write our own test scheduler (increases maintenance cost)

-- 
Cheers,
Carlos.

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [RFC] Test time/tst-cpuclock1.c intermittent failures
  2020-01-24 16:38     ` Carlos O'Donell
@ 2020-01-24 16:50       ` Lucas A. M. Magalhaes
  2020-01-24 17:04         ` Adhemerval Zanella
  0 siblings, 1 reply; 10+ messages in thread
From: Lucas A. M. Magalhaes @ 2020-01-24 16:50 UTC (permalink / raw)
  To: Adhemerval Zanella, Carlos O'Donell, Florian Weimer; +Cc: libc-alpha

Quoting Carlos O'Donell (2020-01-24 12:30:15)
> On 1/24/20 10:17 AM, Florian Weimer wrote:
> > * Adhemerval Zanella:
> > 
> >> It tests 'clock_getcpuclockid', and the idea is to check whether the CPU
> >> timer of a CPU-bound process is correctly obtained with clock_gettime,
> >> within an expected range.
> >>
> >> However, since the interface returns the CLOCK_PROCESS_CPUTIME_ID clockid
> >> of the target pid, its result is subject to scheduling pressure.  This means
> >> that even with priority boosting, incorrect results might happen depending
> >> on the system load.
> >>
> >> It was moved from rt/ to time/ because the symbol was moved from librt to
> >> libc.
> > 
> > We could move it back to rt.  It might help somewhat because the rt
> > tests are serialized.
> 
> I would like to get away from serialized tests, so if we can write this test
> to be more robust and take into account the system load or uncertainty, then
> that would be a win IMO.
> 
> Any other alternative is costly for the project:
> - Run tests serially (bad for developer experience)
> - Write our own test scheduler (increases maintenance cost)
>

I totally agree with Carlos here.  However, in the absence of a good
solution I find Florian's approach acceptable.  At least we won't have
the other tests messing with the result of this one.

Thanks,
Lucas Magalhães

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [RFC] Test time/tst-cpuclock1.c intermittent failures
  2020-01-24 16:50       ` Lucas A. M. Magalhaes
@ 2020-01-24 17:04         ` Adhemerval Zanella
  2020-01-25  2:18           ` H.J. Lu
  0 siblings, 1 reply; 10+ messages in thread
From: Adhemerval Zanella @ 2020-01-24 17:04 UTC (permalink / raw)
  To: Lucas A. M. Magalhaes, Carlos O'Donell, Florian Weimer; +Cc: libc-alpha



On 24/01/2020 13:46, Lucas A. M. Magalhaes wrote:
> Quoting Carlos O'Donell (2020-01-24 12:30:15)
>> On 1/24/20 10:17 AM, Florian Weimer wrote:
>>> * Adhemerval Zanella:
>>>
>>>> It tests 'clock_getcpuclockid', and the idea is to check whether the CPU
>>>> timer of a CPU-bound process is correctly obtained with clock_gettime,
>>>> within an expected range.
>>>>
>>>> However, since the interface returns the CLOCK_PROCESS_CPUTIME_ID clockid
>>>> of the target pid, its result is subject to scheduling pressure.  This means
>>>> that even with priority boosting, incorrect results might happen depending
>>>> on the system load.
>>>>
>>>> It was moved from rt/ to time/ because the symbol was moved from librt to
>>>> libc.
>>>
>>> We could move it back to rt.  It might help somewhat because the rt
>>> tests are serialized.
>>
>> I would like to get away from serialized tests, so if we can write this test
>> to be more robust and take into account the system load or uncertainty, then
>> that would be a win IMO.
>>
>> Any other alternative is costly for the project:
>> - Run tests serially (bad for developer experience)
>> - Write our own test scheduler (increases maintenance cost)
>>
> 
> I totally agree with Carlos here.  However, in the absence of a good
> solution I find Florian's approach acceptable.  At least we won't have
> the other tests messing with the result of this one.

But that only masks potential issues: if the system load is independent
of the glibc testing, the failure can still occur.  I still think a better
solution is just to relax the expected clock_gettime delta and add an
explanation about the potential scheduling pressure.

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [RFC] Test time/tst-cpuclock1.c intermittent failures
  2020-01-24 17:04         ` Adhemerval Zanella
@ 2020-01-25  2:18           ` H.J. Lu
  0 siblings, 0 replies; 10+ messages in thread
From: H.J. Lu @ 2020-01-25  2:18 UTC (permalink / raw)
  To: Adhemerval Zanella
  Cc: Lucas A. M. Magalhaes, Carlos O'Donell, Florian Weimer,
	GNU C Library

On Fri, Jan 24, 2020 at 8:50 AM Adhemerval Zanella
<adhemerval.zanella@linaro.org> wrote:
>
>
>
> On 24/01/2020 13:46, Lucas A. M. Magalhaes wrote:
> > Quoting Carlos O'Donell (2020-01-24 12:30:15)
> >> On 1/24/20 10:17 AM, Florian Weimer wrote:
> >>> * Adhemerval Zanella:
> >>>
> >>>> It tests 'clock_getcpuclockid', and the idea is to check whether the CPU
> >>>> timer of a CPU-bound process is correctly obtained with clock_gettime,
> >>>> within an expected range.
> >>>>
> >>>> However, since the interface returns the CLOCK_PROCESS_CPUTIME_ID clockid
> >>>> of the target pid, its result is subject to scheduling pressure.  This means
> >>>> that even with priority boosting, incorrect results might happen depending
> >>>> on the system load.
> >>>>
> >>>> It was moved from rt/ to time/ because the symbol was moved from librt to
> >>>> libc.
> >>>
> >>> We could move it back to rt.  It might help somewhat because the rt
> >>> tests are serialized.
> >>
> >> I would like to get away from serialized tests, so if we can write this test
> >> to be more robust and take into account the system load or uncertainty, then
> >> that would be a win IMO.
> >>
> >> Any other alternative is costly for the project:
> >> - Run tests serially (bad for developer experience)
> >> - Write our own test scheduler (increases maintenance cost)
> >>
> >
> > I totally agree with Carlos here.  However in the absense of a good
> > solution I find Florians aproach acceptable.  At least we don't have the
> > other tests messing with the result of this one.
>
> But that only masks potential issues: if the system load is independent
> of the glibc testing, the failure can still occur.  I still think a
> better solution is just to relax

I agree.  My glibc testing machines are heavily loaded and I saw this
failure all the time.

> the expected clock_gettime delta and add an explanation about the
> potential scheduling pressure.



-- 
H.J.

^ permalink raw reply	[flat|nested] 10+ messages in thread

* [PATCH] Fix time/tst-cpuclock1 intermittent failures
  2020-01-24 13:53 [RFC] Test time/tst-cpuclock1.c intermittent failures Lucas A. M. Magalhaes
  2020-01-24 15:17 ` Adhemerval Zanella
@ 2020-01-28 19:01 ` Lucas A. M. Magalhaes
  2020-01-29 21:58   ` Carlos O'Donell
  1 sibling, 1 reply; 10+ messages in thread
From: Lucas A. M. Magalhaes @ 2020-01-28 19:01 UTC (permalink / raw)
  To: libc-alpha

This test fails intermittently on systems with heavy load, as
CLOCK_PROCESS_CPUTIME_ID is subject to scheduler pressure.  Thus the
test boundaries were relaxed to keep it from failing on these systems.

--

Hi,

I tried to implement the solution suggested by Adhemerval and it worked
fine on my tests.

The curious thing is that I had many more problems with nanosleep
returning a lot earlier than expected.  On the 1s sleep it often
returned in 0.2s on a heavily loaded system.

 time/tst-cpuclock1.c | 11 ++++++++---
 1 file changed, 8 insertions(+), 3 deletions(-)

diff --git a/time/tst-cpuclock1.c b/time/tst-cpuclock1.c
index 0120906f23..4968038207 100644
--- a/time/tst-cpuclock1.c
+++ b/time/tst-cpuclock1.c
@@ -157,14 +157,17 @@ do_test (void)
 
   struct timespec diff = { .tv_sec = after.tv_sec - before.tv_sec,
 			   .tv_nsec = after.tv_nsec - before.tv_nsec };
+  /* In ideal scheduler pressure this diff should be closer to 0.5s.  But in
+     a heavy loaded system the scheduler pressure can make this times to be
+     uncertain.  That's why the upper bound is 0.7s and there is no lower bound
+   */
   if (diff.tv_nsec < 0)
     {
       --diff.tv_sec;
       diff.tv_nsec += 1000000000;
     }
   if (diff.tv_sec != 0
-      || diff.tv_nsec > 600000000
-      || diff.tv_nsec < 100000000)
+      || diff.tv_nsec > 700000000)
     {
       printf ("before - after %ju.%.9ju outside reasonable range\n",
 	      (uintmax_t) diff.tv_sec, (uintmax_t) diff.tv_nsec);
@@ -196,13 +199,15 @@ do_test (void)
 	{
 	  struct timespec d = { .tv_sec = afterns.tv_sec - after.tv_sec,
 				.tv_nsec = afterns.tv_nsec - after.tv_nsec };
+	  /* scheduler pressure may affect sleep time so this test have relaxed
+	     time restrictions.  */
 	  if (d.tv_nsec < 0)
 	    {
 	      --d.tv_sec;
 	      d.tv_nsec += 1000000000;
 	    }
 	  if (d.tv_sec > 0
-	      || d.tv_nsec < sleeptime.tv_nsec
+	      || d.tv_nsec < 100000000
 	      || d.tv_nsec > sleeptime.tv_nsec * 2)
 	    {
 	      printf ("nanosleep time %ju.%.9ju outside reasonable range\n",
-- 
2.20.1

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH] Fix time/tst-cpuclock1 intermittent failures
  2020-01-28 19:01 ` [PATCH] Fix time/tst-cpuclock1 " Lucas A. M. Magalhaes
@ 2020-01-29 21:58   ` Carlos O'Donell
  2020-01-30 13:38     ` Lucas A. M. Magalhaes
  0 siblings, 1 reply; 10+ messages in thread
From: Carlos O'Donell @ 2020-01-29 21:58 UTC (permalink / raw)
  To: Lucas A. M. Magalhaes, libc-alpha

On 1/28/20 12:02 PM, Lucas A. M. Magalhaes wrote:
> This test fails intermittently on systems with heavy load, as
> CLOCK_PROCESS_CPUTIME_ID is subject to scheduler pressure.  Thus the
> test boundaries were relaxed to keep it from failing on these systems.
> 
> --
> 
> Hi,
> 
> I tried to implement the solution suggested by Adhemerval and it worked
> fine on my tests.
> 
> The curious thing is that I had many more problems with nanosleep
> returning a lot earlier than expected.  On the 1s sleep it often
> returned in 0.2s on a heavily loaded system.

(a) Idea.

I like the idea behind this patch because it makes the test more robust
while still keeping the intent of the test which is to catch gross mistakes.

At the high level I think we need a slight refactoring and we can fix two
tests with one change.

We see the same failures from rt/tst-cpuclock2.c and I'm wondering if we
couldn't refactor a function into support/ to help with both tests?
Like percent_diff_check(time1, time2, percent_difference_allowed)?

Then both tests can be adjusted to just use the new function.

We could also do with a time_sub() function to compute time1 - time2, and
time_add() function to compute time1 + time2.
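
Something along these lines, perhaps (sketch; the names just follow
the suggestion above):

#include <time.h>

static struct timespec
time_sub (struct timespec t1, struct timespec t2)
{
  /* Normalized timespec subtraction: t1 - t2.  */
  struct timespec r = { .tv_sec = t1.tv_sec - t2.tv_sec,
                        .tv_nsec = t1.tv_nsec - t2.tv_nsec };
  if (r.tv_nsec < 0)
    {
      --r.tv_sec;
      r.tv_nsec += 1000000000;
    }
  return r;
}

static struct timespec
time_add (struct timespec t1, struct timespec t2)
{
  /* Normalized timespec addition: t1 + t2.  */
  struct timespec r = { .tv_sec = t1.tv_sec + t2.tv_sec,
                        .tv_nsec = t1.tv_nsec + t2.tv_nsec };
  if (r.tv_nsec >= 1000000000)
    {
      ++r.tv_sec;
      r.tv_nsec -= 1000000000;
    }
  return r;
}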

(b) Implementation.

You adjust the places which need adjusting for time difference calculation
and that's correct.

(c) Details.

There are a few nit-picky details below.

>  time/tst-cpuclock1.c | 11 ++++++++---
>  1 file changed, 8 insertions(+), 3 deletions(-)
OK.

> diff --git a/time/tst-cpuclock1.c b/time/tst-cpuclock1.c
> index 0120906f23..4968038207 100644
> --- a/time/tst-cpuclock1.c
> +++ b/time/tst-cpuclock1.c
> @@ -157,14 +157,17 @@ do_test (void)
>  
>    struct timespec diff = { .tv_sec = after.tv_sec - before.tv_sec,
>  			   .tv_nsec = after.tv_nsec - before.tv_nsec };
> +  /* In ideal scheduler pressure this diff should be closer to 0.5s.  But in
> +     a heavy loaded system the scheduler pressure can make this times to be
> +     uncertain.  That's why the upper bound is 0.7s and there is no lower bound
> +   */

Suggest:

/* Under ideal scheduler pressure this difference should be closer to 0.5s.
   Under heavy load the scheduler pressure makes the timing uncertain.
   Given the uncertainty we set the upper bound to 0.7s and omit the lower bound.  */

Note:
- Trailing comment close follows on same line after 2 spaces.


>    if (diff.tv_nsec < 0)
>      {
>        --diff.tv_sec;
>        diff.tv_nsec += 1000000000;
>      }
>    if (diff.tv_sec != 0
> -      || diff.tv_nsec > 600000000
> -      || diff.tv_nsec < 100000000)
> +      || diff.tv_nsec > 700000000)

Could we rewrite this as a percent difference function check?

% diff = | e1 - e2 | / (0.5 * (e1 + e2)) * 100

Then subtract 0.5s from the "after" time.

Then do a percent_diff_check (time1, time2, 10%)

And consider them equal within 10% or 5% or whatever.

At least this way we quantify the jitter as a percentage.
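
Concretely, percent_diff_check might look like this (sketch, using
double arithmetic and the formula above):

#include <math.h>
#include <stdbool.h>
#include <time.h>

static bool
percent_diff_check (struct timespec t1, struct timespec t2, double percent)
{
  /* Symmetric percent difference: |e1 - e2| / (0.5 * (e1 + e2)) * 100.  */
  double e1 = t1.tv_sec + t1.tv_nsec / 1e9;
  double e2 = t2.tv_sec + t2.tv_nsec / 1e9;
  double diff = fabs (e1 - e2) / (0.5 * (e1 + e2)) * 100.0;
  return diff <= percent;
}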

>      {
>        printf ("before - after %ju.%.9ju outside reasonable range\n",
>  	      (uintmax_t) diff.tv_sec, (uintmax_t) diff.tv_nsec);
> @@ -196,13 +199,15 @@ do_test (void)
>  	{
>  	  struct timespec d = { .tv_sec = afterns.tv_sec - after.tv_sec,
>  				.tv_nsec = afterns.tv_nsec - after.tv_nsec };
> +	  /* scheduler pressure may affect sleep time so this test have relaxed
> +	     time restrictions.  */
>  	  if (d.tv_nsec < 0)
>  	    {
>  	      --d.tv_sec;
>  	      d.tv_nsec += 1000000000;
>  	    }
>  	  if (d.tv_sec > 0
> -	      || d.tv_nsec < sleeptime.tv_nsec
> +	      || d.tv_nsec < 100000000
>  	      || d.tv_nsec > sleeptime.tv_nsec * 2)

Likewise this also becomes a percent difference test.

>  	    {
>  	      printf ("nanosleep time %ju.%.9ju outside reasonable range\n",
> 


-- 
Cheers,
Carlos.

^ permalink raw reply	[flat|nested] 10+ messages in thread

* Re: [PATCH] Fix time/tst-cpuclock1 intermittent failures
  2020-01-29 21:58   ` Carlos O'Donell
@ 2020-01-30 13:38     ` Lucas A. M. Magalhaes
  0 siblings, 0 replies; 10+ messages in thread
From: Lucas A. M. Magalhaes @ 2020-01-30 13:38 UTC (permalink / raw)
  To: Carlos O'Donell, libc-alpha

Quoting Carlos O'Donell (2020-01-29 18:54:19)
> On 1/28/20 12:02 PM, Lucas A. M. Magalhaes wrote:
> > This test fails intermittently on systems with heavy load, as
> > CLOCK_PROCESS_CPUTIME_ID is subject to scheduler pressure.  Thus the
> > test boundaries were relaxed to keep it from failing on these systems.
> > 
> > --
> > 
> > Hi,
> > 
> > I tried to implement the solution suggested by Adhemerval and it worked
> > fine on my tests.
> > 
> > The curious thing is that I had many more problems with nanosleep
> > returning a lot earlier than expected.  On the 1s sleep it often
> > returned in 0.2s on a heavily loaded system.
> 
> (a) Idea.
> 
> I like the idea behind this patch because it makes the test more robust
> while still keeping the intent of the test which is to catch gross mistakes.
> 
> At the high level I think we need a slight refactoring and we can fix two
> tests with one change.
> 
> We see the same failures from rt/tst-cpuclock2.c and I'm wondering if we
> couldn't refactor a function into support/ to help with both tests?
> Like percent_diff_check(time1, time2, percent_difference_allowed)?
> 

Thanks, Carlos, for this review.  I also like the idea of this refactor
and will work on it.

> Then both tests can be adjusted to just use the new function.
> 
> We could also do with a time_sub() function to compute time1 - time2, and
> time_add() function to compute time1 + time2.
> 
> (b) Implementation.
> 
> You adjust the places which need adjusting for time difference calculation
> and that's correct.
> 
> (c) Details.
> 
> There are a few nit-picky details below.
> 
> >  time/tst-cpuclock1.c | 11 ++++++++---
> >  1 file changed, 8 insertions(+), 3 deletions(-)
> OK.
> 
> > diff --git a/time/tst-cpuclock1.c b/time/tst-cpuclock1.c
> > index 0120906f23..4968038207 100644
> > --- a/time/tst-cpuclock1.c
> > +++ b/time/tst-cpuclock1.c
> > @@ -157,14 +157,17 @@ do_test (void)
> >  
> >    struct timespec diff = { .tv_sec = after.tv_sec - before.tv_sec,
> >                          .tv_nsec = after.tv_nsec - before.tv_nsec };
> > +  /* In ideal scheduler pressure this diff should be closer to 0.5s.  But in
> > +     a heavy loaded system the scheduler pressure can make this times to be
> > +     uncertain.  That's why the upper bound is 0.7s and there is no lower bound
> > +   */
> 
> Suggest:
> 
> /* Under ideal scheduler pressure this difference should be closer to 0.5s.
>    Under heavy load the scheduler pressure makes the timing uncertain.
>    Given the uncertainty we set the upper bound to 0.7s and omit the lower bound.  */
> 
> Note:
> - Trailing comment close follows on same line after 2 spaces.
> 
> 

OK, I will rewrite this.  It will need another description after the refactor.

> >    if (diff.tv_nsec < 0)
> >      {
> >        --diff.tv_sec;
> >        diff.tv_nsec += 1000000000;
> >      }
> >    if (diff.tv_sec != 0
> > -      || diff.tv_nsec > 600000000
> > -      || diff.tv_nsec < 100000000)
> > +      || diff.tv_nsec > 700000000)
> 
> Could we rewrite this as a percent difference function check?
> 
> % diff = | e1 - e2 | / (0.5 * (e1 + e2)) * 100
> 
> Then subtract 0.5s from the "after" time.
> 
> Then do a percent_diff_check (time1, time2, 10%)
> 
> And consider them equal within 10% or 5% or whatever.
> 
> At least this way we quantify the jitter as a percentage.
> 

Yes. I agree.

^ permalink raw reply	[flat|nested] 10+ messages in thread

end of thread, other threads:[~2020-01-30 13:12 UTC | newest]

Thread overview: 10+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2020-01-24 13:53 [RFC] Test time/tst-cpuclock1.c intermittent failures Lucas A. M. Magalhaes
2020-01-24 15:17 ` Adhemerval Zanella
2020-01-24 15:30   ` Florian Weimer
2020-01-24 16:38     ` Carlos O'Donell
2020-01-24 16:50       ` Lucas A. M. Magalhaes
2020-01-24 17:04         ` Adhemerval Zanella
2020-01-25  2:18           ` H.J. Lu
2020-01-28 19:01 ` [PATCH] Fix time/tst-cpuclock1 " Lucas A. M. Magalhaes
2020-01-29 21:58   ` Carlos O'Donell
2020-01-30 13:38     ` Lucas A. M. Magalhaes
