public inbox for kawa@sourceware.org
* n arity with method
@ 2023-11-16 22:53 Damien Mattei
  2023-11-18 11:14 ` Damien Mattei
  0 siblings, 1 reply; 7+ messages in thread
From: Damien Mattei @ 2023-11-16 22:53 UTC (permalink / raw)
  To: kawa mailing list

[-- Attachment #1: Type: text/plain, Size: 1154 bytes --]

(import (rename (scheme base) (* orig*)))

(define * (make-procedure
           method: (lambda (x ::number y ::number) (orig* x y))
           method: (lambda (x ::matrix y ::matrix) (multiply-matrix-matrix x y))
           method: (lambda (x ::matrix y ::vector) (multiply-matrix-vector x y))))

Is there a way to keep * an n-arity operator while still having typed methods? Because now I get this error:
(* 2 3 4)
Argument  (null) has wrong type
at gnu.mapping.CallContext.matchError(CallContext.java:185)
at gnu.expr.GenericProc.applyToConsumerGP(GenericProc.java:132)
at gnu.kawa.functions.ApplyToArgs.applyToConsumerA2A(ApplyToArgs.java:132)
at gnu.mapping.CallContext.runUntilDone(CallContext.java:586)
at gnu.expr.ModuleExp.evalModule2(ModuleExp.java:343)
at gnu.expr.ModuleExp.evalModule(ModuleExp.java:211)
at kawa.Shell.run(Shell.java:289)
at kawa.Shell.run(Shell.java:196)
at kawa.Shell.run(Shell.java:183)
at kawa.repl.processArgs(repl.java:724)
at kawa.repl.main(repl.java:830)
The problem is that * is no longer an n-arity operator.

Damien

Anyway, there is perhaps a possibility of using a variable number of args, but I did not think it through this evening... perhaps tomorrow...
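One possible sketch of that variable-number-of-args idea (assuming the same make-procedure keyword API as above; generic-mul2 is an illustrative name, not code from this thread):

```scheme
;; Sketch: keep the typed two-argument methods in a helper, and make *
;; itself variadic by folding the helper over the argument list.
(define generic-mul2
  (make-procedure
   method: (lambda (x ::number y ::number) (orig* x y))
   method: (lambda (x ::matrix y ::matrix) (multiply-matrix-matrix x y))
   method: (lambda (x ::matrix y ::vector) (multiply-matrix-vector x y))))

(define (* . args)
  (if (null? args)
      1                                 ; identity, as for the standard *
      (let loop ((acc (car args)) (rest (cdr args)))
        (if (null? rest)
            acc
            (loop (generic-mul2 acc (car rest)) (cdr rest))))))
```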

^ permalink raw reply	[flat|nested] 7+ messages in thread

* Re: n arity with method
  2023-11-16 22:53 n arity with method Damien Mattei
@ 2023-11-18 11:14 ` Damien Mattei
  2023-11-19  7:23   ` Damien Mattei
  0 siblings, 1 reply; 7+ messages in thread
From: Damien Mattei @ 2023-11-18 11:14 UTC (permalink / raw)
  To: kawa mailing list


It seems there is some difference between 'load' and 'import': the same program can run with 'load' but not with 'import'.
About the overloading of n-arity operators, I found a solution using 'make-procedure' with a variadic fallback method:

(define ⋅ (make-procedure
           method: (lambda (x ::number y ::number) (* x y))
           method: (lambda (x ::matrix y ::matrix) (multiply-matrix-matrix x y))
           method: (lambda (x ::matrix y ::vector) (multiply-matrix-vector x y))
           method: (lambda lyst (apply * lyst))))

(insert-operator! * ⋅)
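As a quick sanity check (a hypothetical session; the matrix methods assume the program's own matrix type), the variadic fallback restores n-arity while the typed methods still fire on two arguments:

```scheme
(⋅ 2 3 4)    ; no two-argument method matches three args, so the
             ; fallback runs (apply * '(2 3 4)), giving 24
(⋅ 2.5 4)    ; handled by the number × number method
;; (⋅ m v)   ; would use the matrix × vector method
```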

I compile with:

kawa -d classes -Dkawa.import.path=".:/Users/mattei/Scheme-PLUS-for-Kawa:./kawa" -C exo_retropropagationNhidden_layers_matrix_v2_by_vectors4kawa_classes.scm

I hoped to get a speedup by compiling the Kawa code to .class files, but the timing is the same. I suppose 'load' compiles the code too, and that it does so more leniently than 'require': with 'load' the overloading features of Scheme+ worked fine, but not with 'require'.





* Re: n arity with method
  2023-11-18 11:14 ` Damien Mattei
@ 2023-11-19  7:23   ` Damien Mattei
  2023-11-19 17:50     ` Per Bothner
  0 siblings, 1 reply; 7+ messages in thread
From: Damien Mattei @ 2023-11-19  7:23 UTC (permalink / raw)
  To: kawa mailing list


Comparing speed on the first part of my program written in Scheme+: it runs in 15 seconds with Kawa and 7 seconds with Racket.
The speed is the same in every Kawa version.
the REPL version is:
https://github.com/damien-mattei/AI_Deep_Learning/blob/main/exo_retropropagationNhidden_layers_matrix_v2_by_vectors4kawa%2B.scm
the compiled classes version is:
https://github.com/damien-mattei/AI_Deep_Learning/blob/main/exo_retropropagationNhidden_layers_matrix_v2_by_vectors4kawa_classes%2B.scm
The latest version of Scheme+ for Kawa is now tested and available here:

https://github.com/damien-mattei/Scheme-PLUS-for-Kawa

docs are here:
https://damien-mattei.github.io/Scheme-PLUS-for-Racket/Scheme+io.html




* Re: n arity with method
  2023-11-19  7:23   ` Damien Mattei
@ 2023-11-19 17:50     ` Per Bothner
  2023-11-20 13:51       ` Damien Mattei
  0 siblings, 1 reply; 7+ messages in thread
From: Per Bothner @ 2023-11-19 17:50 UTC (permalink / raw)
  To: kawa

On 11/18/23 23:23, Damien Mattei via Kawa wrote:
> when comparing speed on the first part of my program written in Scheme+ it
> run in 15" with Kawa and 7" in Racket.

It is likely you can speed up Kawa quite a bit by fixing a few slow spots.
Specifically, anytime you do run-time reflection (or worse: eval/load) you're
going to lose a lot of performance. If you can replace generic arithmetic
or vector/list indexing with type-specific arithmetic/indexing that can
also make a big difference. List processing (especially if you call cons
a lot) is always going to be relatively expensive.
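A minimal sketch of that kind of type-specific rewrite in Kawa (illustrative only; `dot` and the double[] storage are assumptions, not code from this thread):

```scheme
;; Sketch: a type-specific inner product.  With ::double[] and ::double
;; declarations Kawa can compile the loop to primitive JVM arithmetic,
;; avoiding generic number dispatch and boxing.
(define (dot a ::double[] b ::double[]) ::double
  (let loop ((i ::int 0) (acc ::double 0.0))
    (if (< i a:length)
        (loop (+ i 1) (+ acc (* (a i) (b i))))
        acc)))
```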

Profiling is probably the thing to try. I do little-to-no Java programming
these days, so I can't be any more specific with help.
-- 
	--Per Bothner
per@bothner.com   http://per.bothner.com/


* Re: n arity with method
  2023-11-19 17:50     ` Per Bothner
@ 2023-11-20 13:51       ` Damien Mattei
  2023-11-21  8:48         ` Damien Mattei
  0 siblings, 1 reply; 7+ messages in thread
From: Damien Mattei @ 2023-11-20 13:51 UTC (permalink / raw)
  To: Per Bothner; +Cc: kawa


Yes, indexing is overloaded, using the same procedure for vectors, strings, and hash tables; that takes time, but the slowest part is the infix operator precedence algorithm, which is called every time even when the formula in a loop never changes. Example:
(for-each-in (j (in-range len_layer_output))   ; row
   (for-each-in (i (in-range len_layer_input)) ; column: walk the columns of the row, except the bias
       {M_i_o[j {i + 1}]  <-  M_i_o[j {i + 1}] - {(- η) * z_input[i] * მzⳆმz̃(z_output[j] z̃_output[j]) * ᐁ_i_o[j]}})

     ; and update the bias
     {M_i_o[j 0]  <-  M_i_o[j 0] - {(- η) * 1.0 * მzⳆმz̃(z_output[j] z̃_output[j]) * ᐁ_i_o[j]}}))

At each 'for' loop iteration, {M_i_o[j {i + 1}]  <-  M_i_o[j {i + 1}] - {(- η) * z_input[i] * მzⳆმz̃(z_output[j] z̃_output[j]) * ᐁ_i_o[j]}} is expanded into:

($nfx$
  (bracket-apply M_i_o j (+ i 1))
  <-
  (bracket-apply M_i_o j (+ i 1))
  -
  (*
   (- η)
   (bracket-apply z_input i)
   (მzⳆმz̃ (bracket-apply z_output j) (bracket-apply z̃_output j))
   (bracket-apply ᐁ_i_o j)))

with evaluation of $nfx$ (see the code at:
https://github.com/damien-mattei/Scheme-PLUS-for-Kawa/blob/main/scheme-infix.scm)
and many bracket-apply calls (
https://github.com/damien-mattei/Scheme-PLUS-for-Kawa/blob/main/apply-square-brackets.scm
), even though the expression never changes: the numeric computation changes, but not the symbolic evaluation of operator precedence.

This could be precomputed by Scheme+ before compilation by Kawa (or by any Scheme that uses Scheme+), but that is a big piece of work I have not yet begun.
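A toy sketch of how that precomputation could work (an illustration only, not Scheme+'s actual $nfx$ algorithm; it handles just a flat token list with * binding tighter than + and -, so a macro could emit the resolved prefix form once, at expansion time):

```scheme
;; Toy sketch of expansion-time precedence parsing (not the real $nfx$).
;; A token list such as (a + b * c - d) is resolved into prefix form once,
;; instead of re-parsing precedence on every loop iteration.
(define (parse-factors toks)            ; handle *, left-associative
  (let loop ((lhs (car toks)) (rest (cdr toks)))
    (if (and (pair? rest) (eq? (car rest) '*))
        (loop (list '* lhs (cadr rest)) (cddr rest))
        (cons lhs rest))))

(define (parse-terms toks)              ; handle + and - over factors
  (let loop ((acc (parse-factors toks)))
    (if (and (pair? (cdr acc)) (memq (cadr acc) '(+ -)))
        (let ((rhs (parse-factors (cddr acc))))
          (loop (cons (list (cadr acc) (car acc) (car rhs)) (cdr rhs))))
        (car acc))))

;; (parse-terms '(a + b * c - d))  =>  (- (+ a (* b c)) d)
```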

Damien



* Re: n arity with method
  2023-11-20 13:51       ` Damien Mattei
@ 2023-11-21  8:48         ` Damien Mattei
  2023-11-22 10:03           ` Damien Mattei
  0 siblings, 1 reply; 7+ messages in thread
From: Damien Mattei @ 2023-11-21  8:48 UTC (permalink / raw)
  To: Per Bothner; +Cc: kawa


And I just realized there are even more braces { } in the above expression than the infix operator precedence analyser needs; we can rewrite the expression:
{M_i_o[j {i + 1}]  <-  M_i_o[j {i + 1}] - {(- η) * z_input[i] * მzⳆმz̃(z_output[j] z̃_output[j]) * ᐁ_i_o[j]}}

as:

{M_i_o[j {i + 1}]  <-  M_i_o[j {i + 1}] - (- η) * z_input[i] * მzⳆმz̃(z_output[j] z̃_output[j]) * ᐁ_i_o[j]}

which is:

{M_i_o[j {i + 1}]  <-  M_i_o[j {i + 1}] + η * z_input[i] * მzⳆმz̃(z_output[j] z̃_output[j]) * ᐁ_i_o[j]}

My infix operator precedence handles the precedence of * versus + and - correctly, and the computation still gives the right result:
################## NOT ##################
*init* : nc=#(1 2 1)
z=#(#(0) #(0 0) #(0))
z̃=#(#(0) #(0 0) #(0))
M=#(matrix@5942ee04 matrix@5e76a2bb)
ᐁ=#(#(0) #(0 0) #(0))
nbiter=5000
exo_retropropagationNhidden_layers_matrix_v2_by_vectors4kawa.scm:131:2:
warning - no known slot 'apprentissage' in java.lang.Object
0
1000
2000
3000
4000
exo_retropropagationNhidden_layers_matrix_v2_by_vectors4kawa.scm:132:2:
warning - no known slot 'test' in java.lang.Object
Test des exemples :
#(1) --> #(0.006614618861519643) : on attendait #(0)
#(0) --> #(0.9929063781049513) : on attendait #(1)
Error on examples=1.1928342099764103E-4

;-)

So much computation in deep learning just to compute a boolean NOT, something a 1950s light-bulb computer did faster... :-)

Damien





* Re: n arity with method
  2023-11-21  8:48         ` Damien Mattei
@ 2023-11-22 10:03           ` Damien Mattei
  0 siblings, 0 replies; 7+ messages in thread
From: Damien Mattei @ 2023-11-22 10:03 UTC (permalink / raw)
  To: Per Bothner; +Cc: kawa


And I have just ported the code to Guile (
https://github.com/damien-mattei/AI_Deep_Learning/blob/main/exo_retropropagationNhidden_layers_matrix_v2_by_vectors4guile%2B.scm
), which I know is implemented in and easily interfaced with the C language, the fastest language (short of assembly), and yet it is 20% slower than the Java-based Kawa...
Damien



end of thread, other threads:[~2023-11-22 10:03 UTC | newest]

Thread overview: 7+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2023-11-16 22:53 n arity with method Damien Mattei
2023-11-18 11:14 ` Damien Mattei
2023-11-19  7:23   ` Damien Mattei
2023-11-19 17:50     ` Per Bothner
2023-11-20 13:51       ` Damien Mattei
2023-11-21  8:48         ` Damien Mattei
2023-11-22 10:03           ` Damien Mattei

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for read-only IMAP folder(s) and NNTP newsgroup(s).