public inbox for gsl-discuss@sourceware.org
From: Gerard Jungman <jungman@lanl.gov>
To: gsl-discuss@sourceware.org
Subject: Re: containers tentative design summary
Date: Mon, 05 Oct 2009 23:00:00 -0000	[thread overview]
Message-ID: <1254783367.28192.98.camel@manticore.lanl.gov> (raw)
In-Reply-To: <7f1eaee30910050750l738876b1p41e6bd8ae5aa6d16@mail.gmail.com>

On Mon, 2009-10-05 at 10:50 -0400, James Bergstra wrote:
> Two comments:
> 
> I'm a bit rusty with my C structs... but you would need two distinct
> static classes to have const and non-const data pointers for your view
> right?

Yes, sorry. I left that out.


> Also, it sounds like writing code that will work for a tensor of any
> rank (e.g. add two tensors together) might be either tedious or
> impossible.  I recognize that part of the problem is the lack of
> templating and polymorphism, but it would at least be comforting to
> see just how bad the situation is via a use case or two in the design
> documentation.   I (naively?) fear that to get good performance will
> require a whole library of functions for even the most basic of
> operations.
> 
> gsl_marray_add_1_0( gsl_marray_1, double );
> gsl_marray_add_1_1( gsl_marray_1, gsl_marray_1);
> gsl_marray_add_1_2( gsl_marray_1, gsl_marray_2);
> gsl_marray_add_2_2(... )
> ...
> 
> gsl_marray_sub_1_0( ... )
> 
> Maybe a system of macros could be designed to help here, but it sounds
> like it will never be as easy as writing a couple of for-loops.


First, I want to be sure that we distinguish the multi-array
case from the vector/matrix/tensor case. Algebra for vectors and
matrices is well-defined and already in place; we only need to
re-write some wrappers, etc. It is handled by BLAS calls. As I
mentioned, this places a constraint on matrices: the fast
traversals must be of unit stride. It seems we just have
to live with that constraint for the matrix type.

Addition, subtraction, and scaling of multi-arrays are not hard,
because they are only defined within a single rank. So the number
of interfaces grows only linearly with rank; there is no
combinatorial explosion for these operations. Of course, there
are issues of shape conformance at run time, but that is
also true for vectors and matrices.

That leaves multi-array algebra as an open question. By algebra,
I mean technically the "cross-rank" operations, which form
some kind of general multi-linear algebra. Sounds pretty
hairy, as you suspect.

First Idea: In fact, none of these operations need to come
from the library. If you have a good (fast) indexing scheme,
then the user can implement whatever they want. This is the
boost solution; boost::multi_array has no support for algebra
operations. So we just punt on this. This was my implicit
choice in the design summary.

Second Idea: Implement as much as seems reasonable, in a way which
is equivalent to what a user would do, with some loops. I am not
sure that efficiency is an issue, since I don't see how you
could do better than some loops, in the general case. Higher
efficiency can be obtained for the vector and matrix types,
using a good BLAS, but then it should be clear that the
vector and matrix types are what you want, and not the
general multi-array types.

Third Idea: Figure out a way to generate everything automatically.
Hmmm. Not likely. And the interface explosion would be ridiculous.


Finally, we come to tensors. As defined in the design summary,
tensors are multi-index objects with the same dimension at
each place. We definitely do want to support the usual
tensor algebra operations: tensor product and contraction.

The question seems to be: How much simplicity do we gain in
going from the general multi-array case to the restricted
tensor case? If the interfaces for doing the tensor stuff
are no less complicated than the general case, then we
might as well go back and solve it for general
multi-arrays.


In any case, I think multi-array support would be a good thing,
even without multi-linear algebra. Fixing up vectors and matrices
so that the view types make sense is also a good thing. These
are mostly orthogonal tasks; I just want to get them both
clear in my head so I understand the limited ways in which
they will interact. Right now I think the interaction is
limited to some light-weight conversion functions between
them and a common slicing mechanism.

Tensors are another mostly orthogonal task. Again, they will
benefit from the generic slicing mechanism, and there will be
some light-weight conversion functions. But we can solve the
problems here as we come to them.


Does this make sense?

--
G. Jungman


