public inbox for libc-help@sourceware.org
From: 清水祐太郎 <simiyu@shift-crops.net>
To: "Carlos O'Donell" <carlos@redhat.com>, "Ondřej Bílka" <neleai@seznam.cz>
Cc: "libc-help@sourceware.org" <libc-help@sourceware.org>
Subject: RE: malloc/free: tcache security patch
Date: Sat, 21 Apr 2018 05:37:00 -0000	[thread overview]
Message-ID: <20180421053700.AAAE110E8014@mail.shift-crops.net> (raw)
In-Reply-To: <f5950991-a3fa-974f-0008-f47c5e8ed10e@redhat.com>

>> However, as long as there is a possibility that a bug exists, it is
>> necessary to protect it with glibc.
>
> This is not true at all.
>
> We assume a correctly functioning program and optimize for that.

I understand; I was wrong to expect glibc to guard against bugs in user programs.
Thank you.


> What performance impact do your patches have on x86_64?

I measured the following program on x86_64 with perf; each iteration allocates
and frees ten 0x20-byte blocks, so the loop stays on the small-allocation fast
path that the patch touches. As the output below shows, the patched build
executes roughly 4% more instructions and branches (about 18.9M vs. 18.2M
instructions), while cycle counts and elapsed time stay within run-to-run
noise, so the patch does not appear to have a significant performance impact.

```
#include <stdio.h>
#include <stdlib.h>

int main(void){
        int i;
        size_t j;
        void *p[10];

        /* Repeatedly allocate and free ten small (0x20-byte) blocks so the
           malloc/free fast paths, including tcache, dominate the run. */
        for(i=0; i<10000; i++){
                for(j=0; j<sizeof(p)/sizeof(void*); j++)
                        p[j] = malloc(0x20);

                for(j=0; j<sizeof(p)/sizeof(void*); j++)
                        free(p[j]);
        }

        return 0;
}
```

# unpatched
 % perf stat ./testrun.sh ../test/heap

 Performance counter stats for './testrun.sh ../test/heap':

         11.495723      task-clock (msec)         #    0.873 CPUs utilized
                 4      context-switches          #    0.348 K/sec
                 4      cpu-migrations            #    0.348 K/sec
               231      page-faults               #    0.020 M/sec
        11,618,824      cycles                    #    1.011 GHz                      (65.05%)
         4,394,009      stalled-cycles-frontend   #   37.82% frontend cycles idle
         2,770,936      stalled-cycles-backend    #   23.85% backend  cycles idle
        18,236,716      instructions              #    1.57  insns per cycle
                                                  #    0.24  stalled cycles per insn
         4,206,369      branches                  #  365.907 M/sec
            21,151      branch-misses             #    0.50% of all branches          (49.81%)

       0.013160740 seconds time elapsed

 % perf stat ./testrun.sh ../test/heap

 Performance counter stats for './testrun.sh ../test/heap':

         11.263904      task-clock (msec)         #    0.872 CPUs utilized
                 4      context-switches          #    0.355 K/sec
                 4      cpu-migrations            #    0.355 K/sec
               231      page-faults               #    0.021 M/sec
        11,713,045      cycles                    #    1.040 GHz                      (64.94%)
         4,100,644      stalled-cycles-frontend   #   35.01% frontend cycles idle
         2,506,411      stalled-cycles-backend    #   21.40% backend  cycles idle
        18,237,748      instructions              #    1.56  insns per cycle
                                                  #    0.22  stalled cycles per insn
         4,206,986      branches                  #  373.493 M/sec
            17,595      branch-misses             #    0.42% of all branches          (59.86%)

       0.012922059 seconds time elapsed


# patched
 % perf stat ./testrun.sh ../test/heap

 Performance counter stats for './testrun.sh ../test/heap':

         11.486561      task-clock (msec)         #    0.883 CPUs utilized
                 4      context-switches          #    0.348 K/sec
                 5      cpu-migrations            #    0.435 K/sec
               229      page-faults               #    0.020 M/sec
        11,053,931      cycles                    #    0.962 GHz                      (63.01%)
         4,309,748      stalled-cycles-frontend   #   38.99% frontend cycles idle
         2,725,241      stalled-cycles-backend    #   24.65% backend  cycles idle
        18,978,492      instructions              #    1.72  insns per cycle
                                                  #    0.23  stalled cycles per insn
         4,388,405      branches                  #  382.047 M/sec
            26,860      branch-misses             #    0.61% of all branches          (58.63%)

       0.013005366 seconds time elapsed

 % perf stat ./testrun.sh ../test/heap

 Performance counter stats for './testrun.sh ../test/heap':

         11.107714      task-clock (msec)         #    0.876 CPUs utilized
                 5      context-switches          #    0.450 K/sec
                 4      cpu-migrations            #    0.360 K/sec
               230      page-faults               #    0.021 M/sec
        11,560,568      cycles                    #    1.041 GHz                      (64.43%)
         3,919,384      stalled-cycles-frontend   #   33.90% frontend cycles idle
         2,508,035      stalled-cycles-backend    #   21.69% backend  cycles idle
        18,938,825      instructions              #    1.64  insns per cycle
                                                  #    0.21  stalled cycles per insn
         4,380,386      branches                  #  394.355 M/sec
            17,324      branch-misses             #    0.40% of all branches          (59.11%)

       0.012677606 seconds time elapsed

Sincerely

Sent from Mail for Windows 10

From: Carlos O'Donell
Sent: April 21, 2018 11:33
To: 清水祐太郎; Ondřej Bílka
Cc: libc-help@sourceware.org
Subject: Re: malloc/free: tcache security patch

On 04/20/2018 07:58 PM, 清水祐太郎 wrote:
> However, as long as there is a possibility that a bug exists, it is
> necessary to protect it with glibc.

This is not true at all.

We assume a correctly functioning program and optimize for that.

For example the dynamic loader does not protect against all forms of
errors in ELF files.

Nor does malloc catch all forms of corruption, and it should not,
because doing so is too expensive.

The checks in malloc, particularly checks in the hot path that add
instructions to tcache, *must* be rationalized as a balance between
catching corruption for debugging purposes and performance. Such checking
provides only marginal post-attack mitigation, which is why it must be
very low cost, particularly in tcache.
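
As a concrete illustration of the kind of very low-cost check described in
the preceding paragraph, here is a minimal, self-contained sketch. It is not
the patch under discussion and not glibc's actual code; all names
(sketch_tcache, sketch_entry, sketch_put) are invented for illustration. The
idea, similar in spirit to the double-free key check later adopted in glibc's
tcache, is that a correctly functioning program pays only one extra compare
per free, and the slow confirmation walk runs only when that compare already
looks suspicious.

```
#include <stdio.h>
#include <stdlib.h>

struct sketch_tcache;                    /* forward declaration for the key */

struct sketch_entry {
        struct sketch_entry *next;       /* singly linked free list */
        struct sketch_tcache *key;       /* set while the block is cached */
};

struct sketch_tcache {
        struct sketch_entry *bin;        /* one bin is enough for the sketch */
};

static void sketch_put(struct sketch_tcache *tc, struct sketch_entry *e)
{
        /* Hot-path cost for a correct program: a single compare.  Only when
           the entry already carries our key (a likely double free) do we pay
           for the confirmation walk over the bin. */
        if (e->key == tc) {
                for (struct sketch_entry *p = tc->bin; p != NULL; p = p->next) {
                        if (p == e) {
                                fprintf(stderr, "sketch: double free detected\n");
                                abort();
                        }
                }
        }
        e->key = tc;
        e->next = tc->bin;
        tc->bin = e;
}

int main(void)
{
        struct sketch_tcache tc = { NULL };
        struct sketch_entry *e = calloc(1, sizeof *e);
        if (e == NULL)
                return 1;

        sketch_put(&tc, e);   /* first free: accepted */
        sketch_put(&tc, e);   /* second free of the same block: caught above */
        return 0;
}
```

This is exactly the balance described above: the check is essentially free on
the hot path and serves as a post-corruption debugging aid, not a complete
defense.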

Please see this for a detailed discussion on the topic:
https://sourceware.org/glibc/wiki/Style_and_Conventions#Error_Handling

What performance impact do your patches have on x86_64?

-- 
Cheers,
Carlos.


Thread overview: 5+ messages
2018-04-20 12:44 清水祐太郎
2018-04-20 21:36 ` Ondřej Bílka
2018-04-21  0:58   ` 清水祐太郎
2018-04-21  2:06     ` Carlos O'Donell
2018-04-21  5:37       ` 清水祐太郎 [this message]
