public inbox for kawa@sourceware.org
From: Damien Mattei <damien.mattei@gmail.com>
To: Per Bothner <per@bothner.com>
Cc: kawa@sourceware.org
Subject: Re: n arity with method
Date: Tue, 21 Nov 2023 09:48:13 +0100	[thread overview]
Message-ID: <CADEOadchQ1nDr8jBqk1B=TSwee8b1_E-_oii4hzn1_W7hhThow@mail.gmail.com> (raw)
In-Reply-To: <CADEOadfsiHDLxNUkd4fpDw_==O5tEqcjC0kQZdcXOQF4Hpt72g@mail.gmail.com>


and I just realized there are even more braces { } in the above
expression that the infix operator precedence analyser does not need; we
can rewrite the expression:
{M_i_o[j {i + 1}]  <-  M_i_o[j {i + 1}] - {(- η) * z_input[i] *
მzⳆმz̃(z_output[j] z̃_output[j]) * ᐁ_i_o[j]}}

as:

{M_i_o[j {i + 1}]  <-  M_i_o[j {i + 1}] - (- η) * z_input[i] *
მzⳆმz̃(z_output[j] z̃_output[j]) * ᐁ_i_o[j]}

which is:
{M_i_o[j {i + 1}]  <-  M_i_o[j {i + 1}] + η * z_input[i] *
მzⳆმz̃(z_output[j] z̃_output[j]) * ᐁ_i_o[j]}
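To see why the braces are redundant, here is a minimal precedence-climbing parser, purely as an illustration (this is Python, not the actual Scheme+ $nfx$ code; the names PREC, tokenize and parse are mine), assuming the usual precedences where * binds tighter than binary - :

```python
import re

# binary operator precedences: * and / bind tighter than + and -
PREC = {'+': 1, '-': 1, '*': 2, '/': 2}

def tokenize(s):
    # numbers, parentheses and the four operators
    return re.findall(r'\d+\.?\d*|[()+\-*/]', s)

def parse(tokens, min_prec=0):
    # parse a primary expression
    tok = tokens.pop(0)
    if tok == '(':
        left = parse(tokens, 0)
        tokens.pop(0)                    # drop the closing ')'
    elif tok == '-':
        left = -parse(tokens, 3)         # unary minus binds tightest
    else:
        left = float(tok)
    # fold in binary operators whose precedence is high enough (left-associative)
    while tokens and tokens[0] in PREC and PREC[tokens[0]] >= min_prec:
        op = tokens.pop(0)
        right = parse(tokens, PREC[op] + 1)
        if op == '+':   left += right
        elif op == '-': left -= right
        elif op == '*': left *= right
        else:           left /= right
    return left

# the braces around {(- eta) * ...} change nothing: * already binds
# tighter than the binary -, so all three forms give the same value
print(parse(tokenize("10 - ( - 2 ) * 3")))   # same value as 10 + 2 * 3
print(parse(tokenize("10 - - 2 * 3")))
print(parse(tokenize("10 + 2 * 3")))
```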

and my infix operator precedence analyser deals correctly with the
precedences of *, +, and -, and the computation still gives the correct result:
################## NOT ##################
*init* : nc=#(1 2 1)
z=#(#(0) #(0 0) #(0))
z̃=#(#(0) #(0 0) #(0))
M=#(matrix@5942ee04 matrix@5e76a2bb)
ᐁ=#(#(0) #(0 0) #(0))
nbiter=5000
exo_retropropagationNhidden_layers_matrix_v2_by_vectors4kawa.scm:131:2:
warning - no known slot 'apprentissage' in java.lang.Object
0
1000
2000
3000
4000
exo_retropropagationNhidden_layers_matrix_v2_by_vectors4kawa.scm:132:2:
warning - no known slot 'test' in java.lang.Object
Test des exemples :
#(1) --> #(0.006614618861519643) : on attendait #(0)
#(0) --> #(0.9929063781049513) : on attendait #(1)
Error on examples=1.1928342099764103E-4

;-)

so much computation in deep learning just to compute a boolean NOT, which a
1950s light-bulb computer did faster..... :-)
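For the curious, the same weight-update rule shrinks to a few lines for a single neuron. This is a hypothetical miniature in Python, not my Kawa/Scheme+ program; eta, w and b are my own names, and the rule is the one quoted above, weight <- weight + eta * input * sigmoid'(z) * error:

```python
import math, random

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

random.seed(1)
w, b = random.uniform(-1, 1), random.uniform(-1, 1)
eta = 0.5

# train a single sigmoid neuron to compute boolean NOT
for _ in range(5000):
    for x, target in ((0.0, 1.0), (1.0, 0.0)):
        z = sigmoid(w * x + b)
        delta = target - z            # output error
        grad = z * (1.0 - z)          # derivative of the sigmoid at z
        w += eta * x * grad * delta   # same update rule as in the matrix loop
        b += eta * 1.0 * grad * delta # bias update, with input fixed at 1.0

print(sigmoid(w * 0.0 + b))  # NOT 0, close to 1
print(sigmoid(w * 1.0 + b))  # NOT 1, close to 0
```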

Damien



On Mon, Nov 20, 2023 at 2:51 PM Damien Mattei <damien.mattei@gmail.com>
wrote:

> yes, indexing is overloaded, using the same procedure for vector, string,
> and hash table; this takes time, but the slowest part is the infix operator
> precedence algorithm, which is called each time even if the formula in a
> loop never changes. Example:
> (for-each-in (j (in-range len_layer_output)) ; line
>    (for-each-in (i (in-range len_layer_input)) ; column: iterate over the
> columns of the row, except the bias
>        {M_i_o[j {i + 1}]  <-  M_i_o[j {i + 1}] - {(- η) * z_input[i] *
> მzⳆმz̃(z_output[j] z̃_output[j]) * ᐁ_i_o[j]}})
>
>         ; and update the bias
>      {M_i_o[j 0]  <-  M_i_o[j 0] - {(- η) * 1.0 * მzⳆმz̃(z_output[j]
> z̃_output[j]) * ᐁ_i_o[j]}}))
>
> {M_i_o[j {i + 1}]  <-  M_i_o[j {i + 1}] - {(- η) * z_input[i] *
> მzⳆმz̃(z_output[j] z̃_output[j]) * ᐁ_i_o[j]}} is expanded at each 'for'
> loop iteration into:
>
> ($nfx$
>   (bracket-apply M_i_o j (+ i 1))
>   <-
>   (bracket-apply M_i_o j (+ i 1))
>   -
>   (*
>    (- η)
>    (bracket-apply z_input i)
>    (მzⳆმz̃ (bracket-apply z_output j) (bracket-apply z̃_output j))
>    (bracket-apply ᐁ_i_o j)))
>
> with evaluation of $nfx$ (see the code at
> https://github.com/damien-mattei/Scheme-PLUS-for-Kawa/blob/main/scheme-infix.scm)
> and many bracket-apply calls (
> https://github.com/damien-mattei/Scheme-PLUS-for-Kawa/blob/main/apply-square-brackets.scm
> ), even though the expression never changes: the numeric computation changes,
> but not the symbolic evaluation of operator precedence.
>
> This could be precomputed by Scheme+ before compilation by Kawa, or by any
> Scheme that uses Scheme+, but that is a big piece of work I have not yet begun.
>
> Damien
>
> On Sun, Nov 19, 2023 at 6:50 PM Per Bothner <per@bothner.com> wrote:
>
>> On 11/18/23 23:23, Damien Mattei via Kawa wrote:
>> > when comparing speed on the first part of my program written in Scheme+
>> it
>> > run in 15" with Kawa and 7" in Racket.
>>
>> It is likely you can speed up Kawa quite a bit by fixing a few slow spots.
>> Specifically, anytime you do run-time reflection (or worse: eval/load)
>> you're
>> going to lose a lot of performance. If you can replace generic arithmetic
>> or vector/list indexing with type-specific arithmetic/indexing that can
>> also make a big difference. List processing (especially if you call cons
>> a lot) is always going to be relatively expensive.
>>
>> Profiling is probably the thing to try. I do little-to-no Java programming
>> these days, so I can't be any more specific with help.
>> --
>>         --Per Bothner
>> per@bothner.com   http://per.bothner.com/
>>
>


Thread overview: 7+ messages / expand[flat|nested]  mbox.gz  Atom feed  top
2023-11-16 22:53 Damien Mattei
2023-11-18 11:14 ` Damien Mattei
2023-11-19  7:23   ` Damien Mattei
2023-11-19 17:50     ` Per Bothner
2023-11-20 13:51       ` Damien Mattei
2023-11-21  8:48         ` Damien Mattei [this message]
2023-11-22 10:03           ` Damien Mattei
