From: Jochem Huhmann <joh@gmx.net>
To: docbook-tools-discuss@sourceware.cygnus.com
Subject: Re: images in Docbook with Red Hat 6.2
Date: Wed, 27 Dec 2000 06:36:00 -0000 [thread overview]
Message-ID: <m3r9c86bvn.fsf@nova.revier.com> (raw)
In-Reply-To: <38F6E77C.66C4280E@cybercable.tm.fr>
* Eric Bischoff <ebisch@cybercable.tm.fr> wrote:
> Jochem Huhmann wrote:
> >
> > * Eric Bischoff <ebisch@cybercable.tm.fr> wrote:
> > > The stylesheets solution could perfectly work if there was a
> > > way to easily detect which extension(s) do exist for this
> > > file from DSSSL.
> >
> > Could this be done by looking at this from the converting
> > (shell-)backend that builds the command-line for jade? It knows the
> > output format and could pass a variable to jade via "-V variable=value",
> > which is available in the stylesheet then (or isn't it?). Just a fuzzy
> > idea...
>
> Your idea is good, but this would mean that you would have to
> pass *for each screenshot* its file type while calling jade. If
> your doc includes, say, 120 screenshots, you can imagine the
> number of parameters to Jade... Well, a shell script could do
> the job, yes, but would jade stand that many variables?
I didn't mean it that way. The wrapping shell backend knows the
target format, and therefore it should know which image file type is
best for that target format. It could then pass this format (as in
"png", "eps", "gif", ...) in a variable to jade and make it available
in the stylesheet.
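For what it's worth, such a wrapper might look roughly like this. This is only a sketch: the `ext_for_target` helper and the target names are invented here, and whether `-V` can override `%graphic-default-extension%` (the parameter name used by the DocBook DSSSL stylesheets) this cleanly is exactly what would need checking.

```shell
#!/bin/sh
# Hypothetical wrapper sketch: map the requested output format to
# the image extension that suits it best, then hand that choice to
# jade as a stylesheet variable via -V.
ext_for_target() {
    case "$1" in
        html)      echo png ;;
        tex|print) echo eps ;;
        *)         echo gif ;;
    esac
}

target=${1:-html}
imgext=$(ext_for_target "$target")
# Echo instead of exec, since this is only an illustration of the
# command line the wrapper would build.
echo jade -t "$target" -V "%graphic-default-extension%=$imgext" doc.sgml
```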
The stylesheet then has the bare filename (from the SGML source) and
the needed extension (from that variable). This leads me to believe
that this should be enough to build the right filename (name +
extension) in the stylesheet.
Granted, this wouldn't ensure that a file in the needed format really
exists. But if it does, this could work. I haven't got the time right
now to dig through the stylesheets to see how this could be done in a
driver file, though, and I may well have missed something important.
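One way around the existence problem would be for the wrapper itself to probe for the file before building the jade command line. A rough sketch (the helper name and the fallback order are invented for illustration):

```shell
#!/bin/sh
# Hypothetical helper: given a bare image name and the preferred
# extension for the current target, return an image file that
# actually exists, trying a few fallback formats otherwise.
pick_image() {
    base=$1
    ext=$2
    if [ -f "$base.$ext" ]; then
        echo "$base.$ext"
        return 0
    fi
    for alt in png eps gif jpg; do
        if [ -f "$base.$alt" ]; then
            echo "$base.$alt"
            return 0
        fi
    done
    # Nothing found: return the preferred name and let jade complain.
    echo "$base.$ext"
    return 1
}
```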
Jochem
--
Hi! I'm a .signature virus! Copy me into your ~/.signature to help me spread!