From: Adhemerval Zanella Netto <adhemerval.zanella@linaro.org>
To: DJ Delorie <dj@redhat.com>
Cc: libc-alpha@sourceware.org
Subject: Re: [PATCH] malloc: Use C11 like atomics on memusage
Date: Thu, 23 Feb 2023 13:39:05 -0300	[thread overview]
Message-ID: <2a430a50-4c14-d986-014f-8c816f105f52@linaro.org> (raw)
In-Reply-To: <xnedqh8123.fsf@greed.delorie.com>

Hi DJ,

Wilco has objected that these should use relaxed MO instead [1], so I
plan to send an updated version that fixes this.

[1] https://sourceware.org/pipermail/libc-alpha/2023-February/145665.html
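
For reference, a rough sketch of what the relaxed-MO variant might look
like (a sketch only, assuming glibc's atomic_load_relaxed and
atomic_compare_exchange_weak_relaxed macros; the counter updates would
likewise become atomic_fetch_add_relaxed, and the actual v2 may differ):

static inline void
peak_atomic_max (size_t *peak, size_t val)
{
  size_t v;
  do
    {
      /* The memusage counters are statistics only, so no ordering with
         other memory accesses is required.  */
      v = atomic_load_relaxed (peak);
      if (v >= val)
	break;
    }
  while (! atomic_compare_exchange_weak_relaxed (peak, &v, val));
}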

On 23/02/23 01:20, DJ Delorie wrote:
> 
> LGTM.
> 
> Reviewed-by: DJ Delorie <dj@redhat.com>
> 
> Adhemerval Zanella via Libc-alpha <libc-alpha@sourceware.org> writes:
>> +static inline void
>> +peak_atomic_max (size_t *peak, size_t val)
>> +{
>> +  size_t v;
>> +  do
>> +    {
>> +      v = atomic_load_relaxed (peak);
>> +      if (v >= val)
>> +	break;
>> +    }
>> +  while (! atomic_compare_exchange_weak_acquire (peak, &v, val));
>> +}
>> +
> 
> This is the only call without a direct replacement.  This inline
> replicates what <atomic.h> does.  Ok.
> 
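Right, it is meant to replace the old catomic_max.  For anyone reading
along, the same compare-and-swap max pattern expressed with standard C11
<stdatomic.h> (illustrative only; memusage itself uses the glibc-internal
macros shown above):

#include <stdatomic.h>
#include <stddef.h>

/* Atomically raise *PEAK to VAL if VAL is larger; otherwise leave it
   unchanged.  A failed compare-exchange reloads the current value into
   V and the loop retries.  */
static void
peak_max (_Atomic size_t *peak, size_t val)
{
  size_t v = atomic_load_explicit (peak, memory_order_relaxed);
  while (v < val
         && ! atomic_compare_exchange_weak_explicit (peak, &v, val,
                                                     memory_order_relaxed,
                                                     memory_order_relaxed))
    ;
}
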
>> -    = catomic_exchange_and_add (&current_heap, len - old_len) + len - old_len;
>> -  catomic_max (&peak_heap, heap);
>> +    = atomic_fetch_add_acquire (&current_heap, len - old_len) + len - old_len;
>> +  peak_atomic_max (&peak_heap, heap);
> 
> Ok.
> 
>> -  catomic_max (&peak_stack, current_stack);
>> +  peak_atomic_max (&peak_stack, current_stack);
> 
> Ok.
> 
>> -  catomic_max (&peak_total, heap + current_stack);
>> +  peak_atomic_max (&peak_total, heap + current_stack);
> 
> Ok.
> 
>> -      uint32_t idx = catomic_exchange_and_add (&buffer_cnt, 1);
>> +      uint32_t idx = atomic_fetch_add_acquire (&buffer_cnt, 1);
> 
> Ok.
> 
>> -          catomic_compare_and_exchange_val_acq (&buffer_cnt, reset, idx + 1);
>> +	  uint32_t expected = idx + 1;
>> +	  atomic_compare_exchange_weak_acquire (&buffer_cnt, &expected, reset);
> 
> Ok.
> 
>> -  catomic_increment (&calls[idx_malloc]);
>> +  atomic_fetch_add_acquire (&calls[idx_malloc], 1);
> 
> Ok.
> 
>> -  catomic_add (&total[idx_malloc], len);
>> +  atomic_fetch_add_acquire (&total[idx_malloc], len);
> 
> Ok.
> 
>> -  catomic_add (&grand_total, len);
>> +  atomic_fetch_add_acquire (&grand_total, len);
> 
> Ok.
> 
>>    if (len < 65536)
>> -    catomic_increment (&histogram[len / 16]);
>> +    atomic_fetch_add_acquire (&histogram[len / 16], 1);
>>    else
>> -    catomic_increment (&large);
>> +    atomic_fetch_add_acquire (&large, 1);
> 
> Ok.
> 
>> -  catomic_increment (&calls_total);
>> +  atomic_fetch_add_acquire (&calls_total, 1);
> 
> Ok.
> 
>> -      catomic_increment (&failed[idx_malloc]);
>> +      atomic_fetch_add_acquire (&failed[idx_malloc], 1);
> 
> Ok.
> 
>> -  catomic_increment (&calls[idx_realloc]);
>> +  atomic_fetch_add_acquire (&calls[idx_realloc], 1);
> 
> Ok.
> 
>> -      catomic_add (&total[idx_realloc], len - old_len);
>> +      atomic_fetch_add_acquire (&total[idx_realloc], len - old_len);
> 
> Ok.
> 
>> -      catomic_add (&grand_total, len - old_len);
>> +      atomic_fetch_add_acquire (&grand_total, len - old_len);
> 
> Ok.
> 
>> -      catomic_increment (&realloc_free);
>> +      atomic_fetch_add_acquire (&realloc_free, 1);
> 
> Ok.
> 
>> -      catomic_add (&total[idx_free], real->length);
>> +      atomic_fetch_add_acquire (&total[idx_free], real->length);
> 
> Ok.
> 
>>    if (len < 65536)
>> -    catomic_increment (&histogram[len / 16]);
>> +    atomic_fetch_add_acquire (&histogram[len / 16], 1);
>>    else
>> -    catomic_increment (&large);
>> +    atomic_fetch_add_acquire (&large, 1);
> 
> Ok.
> 
>> -  catomic_increment (&calls_total);
>> +  atomic_fetch_add_acquire (&calls_total, 1);
> 
> Ok.
> 
>> -      catomic_increment (&failed[idx_realloc]);
>> +      atomic_fetch_add_acquire (&failed[idx_realloc], 1);
> 
> Ok.
> 
>> -    catomic_increment (&inplace);
>> +    atomic_fetch_add_acquire (&inplace, 1);
> 
> Ok.
> 
>> -    catomic_increment (&decreasing);
>> +    atomic_fetch_add_acquire (&decreasing, 1);
> 
> Ok.
> 
>> -  catomic_increment (&calls[idx_calloc]);
>> +  atomic_fetch_add_acquire (&calls[idx_calloc], 1);
> 
> Ok.
> 
>> -  catomic_add (&total[idx_calloc], size);
>> +  atomic_fetch_add_acquire (&total[idx_calloc], size);
> 
> Ok.
> 
>> -  catomic_add (&grand_total, size);
>> +  atomic_fetch_add_acquire (&grand_total, size);
> 
> Ok.
> 
>>    if (size < 65536)
>> -    catomic_increment (&histogram[size / 16]);
>> +    atomic_fetch_add_acquire (&histogram[size / 16], 1);
>>    else
>> -    catomic_increment (&large);
>> +    atomic_fetch_add_acquire (&large, 1);
> 
> Ok.
> 
>> -      catomic_increment (&failed[idx_calloc]);
>> +      atomic_fetch_add_acquire (&failed[idx_calloc], 1);
> 
> Ok.
> 
>> -      catomic_increment (&calls[idx_free]);
>> +      atomic_fetch_add_acquire (&calls[idx_free], 1);
> 
> Ok.
> 
>> -  catomic_increment (&calls[idx_free]);
>> +  atomic_fetch_add_acquire (&calls[idx_free], 1);
> 
> Ok.
> 
>> -  catomic_add (&total[idx_free], real->length);
>> +  atomic_fetch_add_acquire (&total[idx_free], real->length);
> 
> Ok.
> 
>> -      catomic_increment (&calls[idx]);
>> +      atomic_fetch_add_acquire (&calls[idx], 1);
> 
> Ok.
> 
>> -      catomic_add (&total[idx], len);
>> +      atomic_fetch_add_acquire (&total[idx], len);
> 
> Ok.
> 
>> -      catomic_add (&grand_total, len);
>> +      atomic_fetch_add_acquire (&grand_total, len);
> 
> Ok.
> 
>>        if (len < 65536)
>> -        catomic_increment (&histogram[len / 16]);
>> +        atomic_fetch_add_acquire (&histogram[len / 16], 1);
>>        else
>> -        catomic_increment (&large);
>> +        atomic_fetch_add_acquire (&large, 1);
> 
> Ok.
> 
>> -      catomic_increment (&calls_total);
>> +      atomic_fetch_add_acquire (&calls_total, 1);
> 
> Ok.
> 
>> -        catomic_increment (&failed[idx]);
>> +        atomic_fetch_add_acquire (&failed[idx], 1);
> 
> Ok.
> 
>> -      catomic_increment (&calls[idx]);
>> +      atomic_fetch_add_acquire (&calls[idx], 1);
> 
> Ok.
> 
>> -      catomic_add (&total[idx], len);
>> +      atomic_fetch_add_acquire (&total[idx], len);
> 
> Ok.
> 
>> -      catomic_add (&grand_total, len);
>> +      atomic_fetch_add_acquire (&grand_total, len);
> 
> Ok.
> 
>>        if (len < 65536)
>> -        catomic_increment (&histogram[len / 16]);
>> +        atomic_fetch_add_acquire (&histogram[len / 16], 1);
>>        else
>> -        catomic_increment (&large);
>> +        atomic_fetch_add_acquire (&large, 1);
> 
> Ok.
> 
>> -      catomic_increment (&calls_total);
>> +      atomic_fetch_add_acquire (&calls_total, 1);
> 
> Ok.
> 
>> -        catomic_increment (&failed[idx]);
>> +        atomic_fetch_add_acquire (&failed[idx], 1);
> 
> Ok.
> 
>> -      catomic_increment (&calls[idx_mremap]);
>> +      atomic_fetch_add_acquire (&calls[idx_mremap], 1);
> 
> Ok.
> 
>> -          catomic_add (&total[idx_mremap], len - old_len);
>> +          atomic_fetch_add_acquire (&total[idx_mremap], len - old_len);
> 
> Ok.
> 
>> -          catomic_add (&grand_total, len - old_len);
>> +          atomic_fetch_add_acquire (&grand_total, len - old_len);
> 
> Ok.
> 
> 
>>        if (len < 65536)
>> -        catomic_increment (&histogram[len / 16]);
>> +        atomic_fetch_add_acquire (&histogram[len / 16], 1);
>>        else
>> -        catomic_increment (&large);
>> +        atomic_fetch_add_acquire (&large, 1);
> 
> Ok.
> 
>> -      catomic_increment (&calls_total);
>> +      atomic_fetch_add_acquire (&calls_total, 1);
> 
> Ok.
> 
>> -        catomic_increment (&failed[idx_mremap]);
>> +        atomic_fetch_add_acquire (&failed[idx_mremap], 1);
> 
> Ok.
> 
>> -            catomic_increment (&inplace_mremap);
>> +            atomic_fetch_add_acquire (&inplace_mremap, 1);
> 
> Ok.
> 
>> -            catomic_increment (&decreasing_mremap);
>> +            atomic_fetch_add_acquire (&decreasing_mremap, 1);
> 
> Ok.
> 
>> -      catomic_increment (&calls[idx_munmap]);
>> +      atomic_fetch_add_acquire (&calls[idx_munmap], 1);
> 
> Ok.
> 
>> -          catomic_add (&total[idx_munmap], len);
>> +          atomic_fetch_add_acquire (&total[idx_munmap], len);
> 
> Ok.
> 
>> -        catomic_increment (&failed[idx_munmap]);
>> +        atomic_fetch_add_acquire (&failed[idx_munmap], 1);
> 
> Ok.
> 
