and I just ported the code to Guile ( https://github.com/damien-mattei/AI_Deep_Learning/blob/main/exo_retropropagationNhidden_layers_matrix_v2_by_vectors4guile%2B.scm ), which I know is built on, and easily interfaced with, the C language (the fastest language except assembly), and yet it is 20% slower than Kawa, which is based on Java...

Damien

On Tue, Nov 21, 2023 at 9:48 AM Damien Mattei wrote:

> and I just realized there are even more parentheses { } in the above
> expression than the infix operator precedence analyser needs; we
> can rewrite the expression:
>
> {M_i_o[j {i + 1}] <- M_i_o[j {i + 1}] - {(- η) * z_input[i] *
> მzⳆმz̃(z_output[j] z̃_output[j]) * ᐁ_i_o[j]}}
>
> as:
>
> {M_i_o[j {i + 1}] <- M_i_o[j {i + 1}] - (- η) * z_input[i] *
> მzⳆმz̃(z_output[j] z̃_output[j]) * ᐁ_i_o[j]}
>
> which is:
>
> {M_i_o[j {i + 1}] <- M_i_o[j {i + 1}] + η * z_input[i] *
> მzⳆმz̃(z_output[j] z̃_output[j]) * ᐁ_i_o[j]}
>
> and my infix operator precedence parser deals well with the precedences
> of * and + or -, and the computation still gives the correct result:
>
> ################## NOT ##################
> *init* : nc=#(1 2 1)
> z=#(#(0) #(0 0) #(0))
> z̃=#(#(0) #(0 0) #(0))
> M=#(matrix@5942ee04 matrix@5e76a2bb)
> ᐁ=#(#(0) #(0 0) #(0))
> nbiter=5000
> exo_retropropagationNhidden_layers_matrix_v2_by_vectors4kawa.scm:131:2:
> warning - no known slot 'apprentissage' in java.lang.Object
> 0
> 1000
> 2000
> 3000
> 4000
> exo_retropropagationNhidden_layers_matrix_v2_by_vectors4kawa.scm:132:2:
> warning - no known slot 'test' in java.lang.Object
> Test des exemples :
> #(1) --> #(0.006614618861519643) : on attendait #(0)
> #(0) --> #(0.9929063781049513) : on attendait #(1)
> Error on examples=1.1928342099764103E-4
>
> ;-)
>
> so many computations in deep learning just to compute a boolean NOT; what a
> 1950s light-bulb computer was doing faster.....
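[Editor's note] The sign simplification in the rewritten update rule above can be checked in a few lines. This is an illustrative Python sketch (an assumption, not the author's Kawa/Guile code; η and the gradient factor follow the thread's notation): subtracting (- η) * g is exactly the same as adding η * g.

```python
eta = 0.5  # learning rate η (value chosen arbitrarily for the demo)

def update_sub(w, g):
    # original form from the thread: w <- w - (- η) * g
    return w - (-eta) * g

def update_add(w, g):
    # simplified form: w <- w + η * g
    return w + eta * g

# both forms produce the same updated weight
print(update_sub(1.0, 0.2), update_add(1.0, 0.2))
```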
> :-)
>
> Damien

On Mon, Nov 20, 2023 at 2:51 PM Damien Mattei wrote:

>> yes, indexing is overloaded, using the same procedure for vectors, strings
>> and hash tables; this takes time, but the slowest part is the infix operator
>> precedence algorithm, which is called each time even if the formula in a
>> loop never changes. Example:
>>
>> (for-each-in (j (in-range len_layer_output)) ; row
>>   (for-each-in (i (in-range len_layer_input)) ; column, iterates over the columns of the row except the bias
>>     {M_i_o[j {i + 1}] <- M_i_o[j {i + 1}] - {(- η) * z_input[i] *
>>       მzⳆმz̃(z_output[j] z̃_output[j]) * ᐁ_i_o[j]}})
>>
>>   ; and update the bias
>>   {M_i_o[j 0] <- M_i_o[j 0] - {(- η) * 1.0 * მzⳆმz̃(z_output[j]
>>     z̃_output[j]) * ᐁ_i_o[j]}})
>>
>> {M_i_o[j {i + 1}] <- M_i_o[j {i + 1}] - {(- η) * z_input[i] *
>> მzⳆმz̃(z_output[j] z̃_output[j]) * ᐁ_i_o[j]}} is expanded at each 'for'
>> loop iteration into:
>>
>> ($nfx$
>>  (bracket-apply M_i_o j (+ i 1))
>>  <-
>>  (bracket-apply M_i_o j (+ i 1))
>>  -
>>  (*
>>   (- η)
>>   (bracket-apply z_input i)
>>   (მzⳆმz̃ (bracket-apply z_output j) (bracket-apply z̃_output j))
>>   (bracket-apply ᐁ_i_o j)))
>>
>> with evaluation of $nfx$ (see the code at
>> https://github.com/damien-mattei/Scheme-PLUS-for-Kawa/blob/main/scheme-infix.scm)
>> and many bracket-apply calls (
>> https://github.com/damien-mattei/Scheme-PLUS-for-Kawa/blob/main/apply-square-brackets.scm
>> ), even though the expression never changes: the numeric computation
>> changes, but not the symbolic evaluation of operator precedence.
>>
>> This could be precomputed by Scheme+ before compilation by Kawa, or by any
>> Scheme that uses Scheme+, but this is a big piece of work I have not yet begun.
>>
>> Damien
>>
>> On Sun, Nov 19, 2023 at 6:50 PM Per Bothner wrote:
>>
>>> On 11/18/23 23:23, Damien Mattei via Kawa wrote:
>>> > when comparing speed on the first part of my program written in Scheme+,
>>> > it runs in 15" with Kawa and 7" in Racket.
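[Editor's note] The observation that the symbolic precedence analysis never changes inside the loop is what makes precomputation attractive: the infix expression could be parsed once into a prefix tree, and only the numeric evaluation repeated. As a sketch of the idea only (plain Python with an assumed precedence table, not Scheme+'s actual $nfx$/bracket-apply implementation):

```python
PREC = {'+': 1, '-': 1, '*': 2, '/': 2}  # assumed precedence table

def infix_to_prefix(tokens):
    """Shunting-yard: parse an infix token list once into a prefix tree,
    e.g. ['a', '-', 'b', '*', 'c'] -> ['-', 'a', ['*', 'b', 'c']]."""
    out, ops = [], []

    def reduce_top():
        # pop one operator and its two operands, build a prefix node
        op = ops.pop()
        right = out.pop()
        left = out.pop()
        out.append([op, left, right])

    for tok in tokens:
        if tok in PREC:
            # reduce any stacked operator of higher or equal precedence
            while ops and PREC[ops[-1]] >= PREC[tok]:
                reduce_top()
            ops.append(tok)
        else:
            out.append(tok)  # operand
    while ops:
        reduce_top()
    return out[0]
```

The resulting tree can then be evaluated on every loop iteration without re-running the precedence analysis, which is exactly the precomputation described in the message.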
>>>
>>> It is likely you can speed up Kawa quite a bit by fixing a few slow spots.
>>> Specifically, anytime you do run-time reflection (or worse: eval/load) you're
>>> going to lose a lot of performance. If you can replace generic arithmetic
>>> or vector/list indexing with type-specific arithmetic/indexing, that can
>>> also make a big difference. List processing (especially if you call cons
>>> a lot) is always going to be relatively expensive.
>>>
>>> Profiling is probably the thing to try. I do little-to-no Java programming
>>> these days, so I can't be any more specific with help.
>>> --
>>> --Per Bothner
>>> per@bothner.com  http://per.bothner.com/
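[Editor's note] Per's remarks are about Kawa on the JVM, but the cost pattern generalizes: work that is re-resolved on every iteration (reflection, generic dispatch, re-parsing) should be hoisted out of the hot loop. A small Python analogy, purely illustrative and not Kawa code:

```python
class Matrix:
    """Toy matrix with an accessor method, standing in for a generic container."""
    def __init__(self, rows):
        self.rows = rows

    def get(self, j, i):
        return self.rows[j][i]

m = Matrix([[1.0, 2.0], [3.0, 4.0]])

def trace_reflective(m, n):
    # analogous to run-time reflection: look the method up by name
    # on every single iteration of the loop
    return sum(getattr(m, 'get')(i, i) for i in range(n))

def trace_hoisted(m, n):
    # resolve the operation once, outside the loop, then call it directly
    get = m.get
    return sum(get(i, i) for i in range(n))
```

Both functions compute the same trace; the second simply moves the name resolution out of the loop, which is the general shape of the fix Per suggests.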