From: Damien Mattei
Date: Wed, 22 Nov 2023 11:03:31 +0100
Subject: Re: n arity with method
To: Per Bothner
Cc: kawa@sourceware.org

and I just ported the code to Guile (
https://github.com/damien-mattei/AI_Deep_Learning/blob/main/exo_retropropagationNhidden_layers_matrix_v2_by_vectors4guile%2B.scm
), which as I understand is built on and easily interfaced with the C language, the fastest language (except assembly), and it is 20% slower than Kawa, which is based on Java...

Damien

On Tue, Nov 21, 2023 at 9:48 AM Damien Mattei wrote:

> and I just realized there are even more braces { } in the above
> expression that the infix operator precedence analyser does not need; we
> can rewrite the expression:
> {M_i_o[j {i + 1}] <- M_i_o[j {i + 1}] - {(- η) * z_input[i] *
> მzⳆმz̃(z_output[j] z̃_output[j]) * ᐁ_i_o[j]}}
>
> as:
> {M_i_o[j {i + 1}] <- M_i_o[j {i + 1}] - (- η) * z_input[i] *
> მzⳆმz̃(z_output[j] z̃_output[j]) * ᐁ_i_o[j]}
>
> which is:
> {M_i_o[j {i + 1}] <- M_i_o[j {i + 1}] + η * z_input[i] *
> მzⳆმz̃(z_output[j] z̃_output[j]) * ᐁ_i_o[j]}
>
> and my infix operator precedence analyser deals correctly with the
> precedences of *, +, and -, and the computation still gives the right
> result:
>
> ################## NOT ##################
> *init* : nc=#(1 2 1)
> z=#(#(0) #(0 0) #(0))
> z̃=#(#(0) #(0 0) #(0))
> M=#(matrix@5942ee04 matrix@5e76a2bb)
> ᐁ=#(#(0) #(0 0) #(0))
> nbiter=5000
> exo_retropropagationNhidden_layers_matrix_v2_by_vectors4kawa.scm:131:2:
> warning - no known slot 'apprentissage' in java.lang.Object
> 0
> 1000
> 2000
> 3000
> 4000
> exo_retropropagationNhidden_layers_matrix_v2_by_vectors4kawa.scm:132:2:
> warning - no known slot 'test' in java.lang.Object
> Test des exemples :
> #(1) --> #(0.006614618861519643) : on attendait #(0)
> #(0) --> #(0.9929063781049513) : on attendait #(1)
> Error on examples=1.1928342099764103E-4
>
> ;-)
>
> so much computation in deep learning just to compute a boolean NOT; what a
> '50s light-bulb computer was doing faster.....
> :-)
>
> Damien
>
> On Mon, Nov 20, 2023 at 2:51 PM Damien Mattei wrote:
>
>> yes, indexing is overloaded, using the same procedure for vector, string,
>> and hash table; this takes time, but the slowest part is the infix
>> operator precedence algorithm, which is called each time even if the
>> formula in a loop never changes, for example in:
>>
>> (for-each-in (j (in-range len_layer_output)) ; line
>>   (for-each-in (i (in-range len_layer_input)) ; column, walks the columns of the row except the bias
>>     {M_i_o[j {i + 1}] <- M_i_o[j {i + 1}] - {(- η) * z_input[i] *
>> მzⳆმz̃(z_output[j] z̃_output[j]) * ᐁ_i_o[j]}})
>>
>>   ; and update the bias
>>   {M_i_o[j 0] <- M_i_o[j 0] - {(- η) * 1.0 * მzⳆმz̃(z_output[j]
>> z̃_output[j]) * ᐁ_i_o[j]}}))
>>
>> {M_i_o[j {i + 1}] <- M_i_o[j {i + 1}] - {(- η) * z_input[i] *
>> მzⳆმz̃(z_output[j] z̃_output[j]) * ᐁ_i_o[j]}} is expanded at each 'for'
>> loop iteration into:
>>
>> ($nfx$
>>  (bracket-apply M_i_o j (+ i 1))
>>  <-
>>  (bracket-apply M_i_o j (+ i 1))
>>  -
>>  (*
>>   (- η)
>>   (bracket-apply z_input i)
>>   (მzⳆმz̃ (bracket-apply z_output j) (bracket-apply z̃_output j))
>>   (bracket-apply ᐁ_i_o j)))
>>
>> with evaluation of $nfx$ (see the code in
>> https://github.com/damien-mattei/Scheme-PLUS-for-Kawa/blob/main/scheme-infix.scm)
>> and many calls to bracket-apply (
>> https://github.com/damien-mattei/Scheme-PLUS-for-Kawa/blob/main/apply-square-brackets.scm
>> ) even though the expression never changes: the numeric computation
>> changes, but not the symbolic evaluation of operator precedence.
>>
>> This could be precomputed by Scheme+ before compilation, for Kawa or any
>> Scheme that uses Scheme+, but that is a big piece of work I have not yet
>> begun.
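The precomputation described above, parsing each formula's operator precedence once instead of at every loop iteration, can be sketched as a memoised precedence-climbing parser. This is only an illustration in Python, not Scheme+'s actual implementation: the names `PREC`, `_climb`, and `infix_to_prefix`, the token-tuple representation, and the restriction to binary operators are all assumptions made for the sketch.

```python
from functools import lru_cache

# Toy precedence table; Scheme+'s real table lives in scheme-infix.scm.
PREC = {"+": 1, "-": 1, "*": 2, "/": 2}

def _climb(tokens, pos, min_prec):
    # Precedence climbing over binary operators only; returns (tree, next_pos).
    lhs = tokens[pos]
    pos += 1
    while pos < len(tokens) and PREC.get(tokens[pos], 0) >= min_prec:
        op = tokens[pos]
        # Left-associative: the right operand binds at precedence + 1.
        rhs, pos = _climb(tokens, pos + 1, PREC[op] + 1)
        lhs = (op, lhs, rhs)
    return lhs, pos

@lru_cache(maxsize=None)
def infix_to_prefix(tokens):
    # The cache is the point: a formula inside a loop is analysed once,
    # and every later iteration reuses the already-built prefix tree.
    tree, _ = _climb(tokens, 0, 1)
    return tree
```

For example, `infix_to_prefix(("a", "-", "b", "*", "c"))` yields `("-", "a", ("*", "b", "c"))`, grouping `*` above `-` just as the precedence analyser does; calling it again with the same tokens returns the cached tree without reparsing.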
>>
>> Damien
>>
>> On Sun, Nov 19, 2023 at 6:50 PM Per Bothner wrote:
>>
>>> On 11/18/23 23:23, Damien Mattei via Kawa wrote:
>>> > when comparing speed on the first part of my program written in
>>> > Scheme+ it run in 15" with Kawa and 7" in Racket.
>>>
>>> It is likely you can speed up Kawa quite a bit by fixing a few slow
>>> spots. Specifically, anytime you do run-time reflection (or worse:
>>> eval/load) you're going to lose a lot of performance. If you can
>>> replace generic arithmetic or vector/list indexing with type-specific
>>> arithmetic/indexing that can also make a big difference. List
>>> processing (especially if you call cons a lot) is always going to be
>>> relatively expensive.
>>>
>>> Profiling is probably the thing to try. I do little-to-no Java
>>> programming these days, so I can't be any more specific with help.
>>> --
>>> --Per Bothner
>>> per@bothner.com http://per.bothner.com/
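Per's point about generic versus type-specific indexing connects directly to the overloaded indexing mentioned earlier in the thread: a generic accessor pays a dispatch test on every access, which a type-specific accessor avoids entirely. A toy Python analogue, where `bracket_apply` is an illustrative stand-in and not Scheme+'s actual bracket-apply code:

```python
def bracket_apply(container, key):
    """Generic indexing: dispatch on the container type at every access."""
    if isinstance(container, dict):                # hash table
        return container[key]
    if isinstance(container, (list, tuple, str)):  # vector / string
        return container[key]
    raise TypeError(f"no indexing rule for {type(container).__name__}")

def vector_ref(vec, i):
    """Type-specific indexing: no dispatch, just the raw access."""
    return vec[i]
```

In a tight inner loop the repeated type tests in the generic version add up; the same idea presumably applies in Kawa, where annotating values with concrete types (for instance a Java array of doubles) can let the compiler emit a direct access instead of reflective dispatch.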