public inbox for ecos-discuss@sourceware.org
* [ECOS]  typo in i2c example in reference manual?
@ 2007-02-17 16:50 Grant Edwards
  2007-02-19 17:09 ` Bart Veer
  0 siblings, 1 reply; 7+ messages in thread
From: Grant Edwards @ 2007-02-17 16:50 UTC (permalink / raw)
  To: ecos-discuss

I tried to follow the example in the reference manual shown below:

   Instantiating a bit-banged I2C bus requires the following:
   
      #include <cyg/io/i2c.h>                                         
                                                                      
      static cyg_bool                                                 
      hal_alaia_i2c_bitbang(cyg_i2c_bus* bus, cyg_i2c_bitbang_op op)  
      {                                                               
          cyg_bool result    = 0;                                     
          switch(op) {                                                
              ...
          }                                                           
          return result;                                              
      }                                                               
                                                                      
      CYG_I2C_BITBANG_BUS(&hal_alaia_i2c_bus, &hal_alaia_i2c_bitbang);
   
   This gives a structure hal_alaia_i2c_bus which can be used when defining the
   cyg_i2c_device structures.

I get a syntax error unless I remove the "&" before
hal_alaia_i2c_bus.  If this macro is declaring and allocating
(is that what "gives" means in this context?) a structure named
"hal_alaia_i2c_bus", the "&" doesn't really make sense.
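For reference, here is a minimal host-compilable sketch of the corrected usage. The typedefs and macro body below are hypothetical stand-ins for the real declarations in <cyg/io/i2c.h> (the real macro also sets up delay and locking fields); only the shape of the invocation is the point.

```c
/* Hypothetical stand-ins for the real declarations in <cyg/io/i2c.h>,
 * present only so this sketch compiles on a host. */
typedef int cyg_bool;
typedef int cyg_i2c_bitbang_op;
typedef struct cyg_i2c_bus {
    cyg_bool (*bitbang)(struct cyg_i2c_bus *bus, cyg_i2c_bitbang_op op);
} cyg_i2c_bus;

/* The macro DEFINES the named bus object, so it takes the bare name. */
#define CYG_I2C_BITBANG_BUS(_name_, _fn_) cyg_i2c_bus _name_ = { _fn_ }

static cyg_bool
hal_alaia_i2c_bitbang(cyg_i2c_bus *bus, cyg_i2c_bitbang_op op)
{
    (void)bus;
    (void)op;   /* a real driver switches on op and drives the GPIO pins */
    return 0;
}

/* Corrected invocation: no "&" before hal_alaia_i2c_bus. */
CYG_I2C_BITBANG_BUS(hal_alaia_i2c_bus, &hal_alaia_i2c_bitbang);
```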
   
-- 
Grant Edwards                   grante             Yow!  Hold the MAYO & pass
                                  at               the COSMIC AWARENESS...
                               visi.com            


-- 
Before posting, please read the FAQ: http://ecos.sourceware.org/fom/ecos
and search the list archive: http://ecos.sourceware.org/ml/ecos-discuss


* Re: [ECOS]  typo in i2c example in reference manual?
  2007-02-17 16:50 [ECOS] typo in i2c example in reference manual? Grant Edwards
@ 2007-02-19 17:09 ` Bart Veer
  2007-02-19 17:34   ` [ECOS] " Grant Edwards
  0 siblings, 1 reply; 7+ messages in thread
From: Bart Veer @ 2007-02-19 17:09 UTC (permalink / raw)
  To: grante; +Cc: ecos-discuss

>>>>> "Grant" == Grant Edwards <grante@visi.com> writes:

    Grant> I tried to follow the example in the reference manual shown below:
    Grant>    Instantiating a bit-banged I2C bus requires the following:
    <snip>
    Grant>       CYG_I2C_BITBANG_BUS(&hal_alaia_i2c_bus, &hal_alaia_i2c_bitbang);
   
    Grant>    This gives a structure hal_alaia_i2c_bus which can be used when defining the
    Grant>    cyg_i2c_device structures.

    Grant> I get a syntax error unless I remove the "&" before
    Grant> hal_alaia_i2c_bus. If this macro is declaring and
    Grant> allocating (is that what "gives" means in this context?) a
    Grant> structure named "hal_alaia_i2c_bus", the "&" doesn't really
    Grant> make sense.
   
Yes, that is a typo. I'll fix it in the master docs when I get a
chance, there are a couple of other things in there that need
improving.

    Grant> I'm trying to use the i2c (with a bit-banged driver).  I've run
    Grant> into a couple glitches so far:

    Grant> 1) The "delay" that's specified appears to be just added on to
    Grant>    the intrinsic overhead of a bit-banged driver.  Specifying a
    Grant>    delay of 10,000ns on my platform results in an actual clock
    Grant>    period of about 59,000ns.  The description of the delay
    Grant>    parameter in the reference manual appears to assume that
    Grant>    there is zero overhead involved in the driver.  Is this the
    Grant>    expected behavior?

It is assumed that the bitbang function just needs to manipulate a
couple of registers related to GPIO pins, which should be near enough
instantaneous. If for some reason the operation is more expensive,
there would be no easy way to measure that and allow for it. Hence the
specified delay is just used to generate the HAL_DELAY_US() parameter.
Developers still have some control since they fill in the delay field
when instantiating an I2C device.

    Grant> 2) There doesn't seem to be any way to determine when writing
    Grant>    zero bytes of data with cyg_i2c_tx() whether the operation
    Grant>    was successful or not, since it returns 0 for both cases.  I
    Grant>    presume one should use the lower-level "transaction"
    Grant>    routines for this case?

Under what circumstances does it make sense to write zero bytes of
data?

    >> 2) There doesn't seem to be any way to determine when writing
    >> zero bytes of data with cyg_i2c_tx() whether the operation was
    >> successful or not, since it returns 0 for both cases. I presume
    >> one should use the lower-level "transaction" routines for this
    >> case?

    Grant> That doesn't seem to work. i2c_transaction_tx always seems
    Grant> to write an extra byte. If I tell it to send 1 byte, it
    Grant> sends 2.

    Grant> How do I send a single byte on the i2c bus??

I suspect you are setting the start flag. That means the I2C code has
to send the device address and the direction bit before the byte of
data. I2C does not have the concept of sending a raw byte of data onto
the bus. Data must always be addressed to a device on the bus, which
means sending address bytes. The address byte also includes one bit
for the direction, so that the addressed device knows whether it
should accept or transmit data.
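The byte that follows the start condition can be sketched as the 7-bit device address shifted up one place, with the direction in bit 0 (0 = write, 1 = read). The 0x50 address below is just an illustrative EEPROM address, not something from this thread:

```c
#include <stdint.h>

/* First byte on the wire after a start condition: the 7-bit device
 * address in bits 7..1 and the direction in bit 0 (0 = write, 1 = read). */
static uint8_t i2c_addr_byte(uint8_t addr7, int read)
{
    return (uint8_t)((addr7 << 1) | (read ? 1u : 0u));
}
```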

    Grant> It will be called from both driver init() routines and from
    Grant> threads. How do I tell the difference so that the function can
    Grant> call HAL_DELAY_US() in the former case and
    Grant> cyg_thread_delay() in the latter?

cyg_thread_delay() generally operates in terms of many milliseconds.
Typically low-level device drivers do not deal with things on such
long timescales, instead that is left to higher-level code or the
application. Instead typical device drivers need delays of the order
of microseconds, which always requires HAL_DELAY_US() rather than
cyg_thread_delay().

If there is a valid reason for having milliseconds of delay inside
driver code, the best bet is to check whether or not interrupts are
enabled. Typically that does not happen until the scheduler is started
and threads begin to run.
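A host-runnable sketch of that suggestion follows. The interrupts_on flag and the two delay stubs are stand-ins of mine for the eCos HAL's interrupt-state query, HAL_DELAY_US() and cyg_thread_delay(); the counters exist only so the branch taken can be observed:

```c
/* Stand-ins so the sketch runs on a host: in a real driver these would
 * be the HAL interrupt query, HAL_DELAY_US() and cyg_thread_delay(). */
static int interrupts_on;                  /* pretend HAL state */
static int busy_waits, thread_sleeps;      /* demo counters     */

static void hal_delay_us_stub(long us)    { (void)us; busy_waits++;    }
static void cyg_thread_delay_stub(long t) { (void)t;  thread_sleeps++; }

/* Millisecond delay that is safe both before the scheduler starts
 * (interrupts still disabled) and from a running thread. */
static void delay_ms(long ms)
{
    if (interrupts_on)
        cyg_thread_delay_stub(ms / 10);  /* assumes a 100Hz system clock */
    else
        hal_delay_us_stub(ms * 1000);    /* busy-wait; no scheduler yet  */
}
```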

Bart

-- 
Bart Veer                                 eCos Configuration Architect
http://www.ecoscentric.com/               The eCos and RedBoot experts


* [ECOS]  Re: typo in i2c example in reference manual?
  2007-02-19 17:09 ` Bart Veer
@ 2007-02-19 17:34   ` Grant Edwards
  2007-02-22 22:53     ` Bart Veer
  0 siblings, 1 reply; 7+ messages in thread
From: Grant Edwards @ 2007-02-19 17:34 UTC (permalink / raw)
  To: ecos-discuss

On 2007-02-19, Bart Veer <bartv@ecoscentric.com> wrote:

>    Grant> 1) The "delay" that's specified appears to be just added on to
>    Grant>    the intrinsic overhead of a bit-banged driver.  Specifying a
>    Grant>    delay of 10,000ns on my platform results in an actual clock
>    Grant>    period of about 59,000ns.  The description of the delay
>    Grant>    parameter in the reference manual appears to assume that
>    Grant>    there is zero overhead involved in the driver.  Is this the
>    Grant>    expected behavior?
>
> It is assumed that the bitbang function just needs to manipulate a
> couple of registers related to GPIO pins, which should be near enough
> instantaneous.

Changing a pin state requires a single instruction on my
platform.  Still, setting the delay parameter to 0 results in a
SCK period of 50,000ns.  Setting the delay parameter to a
non-zero value adds to that 50,000ns.  [I'm running on a NIOS2
CPU at 44MHz.]

> If for some reason the operation is more expensive, there
> would be no easy way to measure that and allow for it.

Right, but the reference manual implies that it does when it
states that the delay value will be the SCK period.  That could
only be true if the overhead is either zero or is measured and
compensated for.

> Hence the specified delay is just used to generate the
> HAL_DELAY_US() parameter. Developers still have some control
> since they fill in the delay field when instantiating an I2C
> device.

Yup.  I've set it to 0, and I get an SCK of 20KHz.  I suppose I
could trace execution through the i2c routines and try to
figure out where the time is going.
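Restating the measurements as a model (mine, not the I2C package's internals): the requested delay appears to be added to a fixed driver overhead rather than setting the whole period. With the zero-delay period measured at 50,000ns (20kHz), a 10,000ns request lands near the observed 59,000ns:

```c
/* Empirical model of the bit-banged SCK period on this platform:
 * observed period = requested delay + fixed driver overhead. */
static long model_period_ns(long requested_delay_ns, long overhead_ns)
{
    return requested_delay_ns + overhead_ns;
}
```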

>    Grant> 2) There doesn't seem to be any way to determine when writing
>    Grant>    zero bytes of data with cyg_i2c_tx() whether the operation
>    Grant>    was successful or not, since it returns 0 for both cases.  I
>    Grant>    presume one should use the lower-level "transaction"
>    Grant>    routines for this case?
>
> Under what circumstances does it make sense to write zero bytes of
> data?

The datasheet for the EEPROM I'm using states that in order to
determine if a write cycle has completed, one should
send an address/control byte with the r/*w bit cleared.  If
that byte is acked, then the write cycle is finished.  If it
isn't then the write cycle is still in progress.  I've
determined that sending an extra byte after the control byte
doesn't seem to hurt anything, but I'd prefer to do things
according to the datasheet.
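The datasheet's acknowledge-polling loop can be sketched as below. The bus access is abstracted behind a probe() function pointer of my invention, precisely because cyg_i2c_tx() cannot distinguish "zero bytes written" from "address not acked"; a real probe() would use the cyg_i2c_transaction_* routines to send start + address/control byte with the R/W bit clear and report whether the ack arrived:

```c
/* Poll until the device acks its address byte (write cycle finished),
 * giving up after max_tries attempts.  probe() returns nonzero on ack. */
static int eeprom_wait_ready(int (*probe)(void), int max_tries)
{
    int i;
    for (i = 0; i < max_tries; i++)
        if (probe())
            return 1;   /* acked: write cycle complete */
    return 0;           /* still busy after max_tries  */
}

/* Demo stub: pretend the EEPROM stays busy for the first two polls. */
static int demo_calls;
static int demo_probe(void) { return ++demo_calls >= 3; }
```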

>    Grant> How do I send a single byte on the i2c bus??
>
> I suspect you are setting the start flag. That means the I2C
> code has to send the device address and the direction bit
> before the byte of data.

Yup.  That's what I finally deduced.

> I2C does not have the concept of sending a raw byte of data
> onto the bus. Data must always be addressed to a device on the
> bus, which means sending address bytes. The address byte also
> includes one bit for the direction, so that the addressed
> device knows whether it should accept or transmit data.

If I'm going to poll the device to see if it's done with a
program cycle, according to the device's datasheet, I need to
send start + address/write and check for the ACK. AFAICT, I can
only do that by specifying 0 data bytes, but then I can't tell if
the address byte was ACKed or not since both cases return 0.

My testing seems to indicate that sending a single byte after
the address byte doesn't hurt anything (all it does is set an
internal register value that will be changed later anyway).

>    Grant> It will be called from both driver init() routines and from
>    Grant> threads. How do I tell the difference so that the function can
>    Grant> call HAL_DELAY_US() in the former case and
>    Grant> cyg_thread_delay() in the latter?
>
> cyg_thread_delay() generally operates in terms of many
> milliseconds.

I know.

> Typically low-level device drivers do not deal with things on
> such long timescales, instead that is left to higher-level
> code or the application.

Except there are operations in driver init() methods that may
need delays of several milliseconds in order to detect whether
or not peripherals are installed and/or working properly.

> Instead typical device drivers need delays of the order of
> microseconds, which always requires HAL_DELAY_US() rather than
> cyg_thread_delay().

Right.  But, I have a routine that requires millisecond delays
that is called from driver init() functions, RedBoot, and
normal threads which may or may not have the scheduler locked.

Using HAL_DELAY_US all the time would be bad for performance
during normal thread calls.  Using cyg_thread_delay() won't
work for the init() and locked-scheduler cases.

I know how to check the scheduler lock.  I know how to
determine if the function is being compiled for RedBoot.  What
I hadn't figured out is how to tell whether the scheduler has
been started or not.

> If there is a valid reason for having milliseconds of delay
> inside driver code,

There is.  I need to time-out in initialization code if
peripherals don't respond (they may not actually be there).

> the best bet is to check whether or not interrupts are
> enabled. Typically that does not happen until the scheduler is
> started and threads begin to run.

Thanks.  That should be pretty simple.

-- 
Grant Edwards                   grante             Yow!  Are you selling NYLON
                                  at               OIL WELLS?? If so, we can
                               visi.com            use TWO DOZEN!!



* Re: [ECOS]  Re: typo in i2c example in reference manual?
  2007-02-19 17:34   ` [ECOS] " Grant Edwards
@ 2007-02-22 22:53     ` Bart Veer
  2007-02-23  4:28       ` Grant Edwards
                         ` (2 more replies)
  0 siblings, 3 replies; 7+ messages in thread
From: Bart Veer @ 2007-02-22 22:53 UTC (permalink / raw)
  To: grante; +Cc: ecos-discuss

>>>>> "Grant" == Grant Edwards <grante@visi.com> writes:

    Grant> On 2007-02-19, Bart Veer <bartv@ecoscentric.com> wrote:
    Grant> 1) The "delay" that's specified appears to be just added on
    Grant> to the intrinsic overhead of a bit-banged driver. Specifying
    Grant> a delay of 10,000ns on my platform results in an actual
    Grant> clock period of about 59,000ns. The description of the delay
    Grant> parameter in the reference manual appears to assume that
    Grant> there is zero overhead involved in the driver. Is this the
    Grant> expected behavior?
    >> 
    >> It is assumed that the bitbang function just needs to manipulate a
    >> couple of registers related to GPIO pins, which should be near enough
    >> instantaneous.

    Grant> Changing a pin state requires a single instruction on my
    Grant> platform. Still, setting the delay parameter to 0 results
    Grant> in a SCK period of 50,000ns. Setting the delay parameter to
    Grant> a non-zero value adds to that 50,000ns. [I'm running on a
    Grant> NIOS2 CPU at 44MHz.]

So apparently it takes 25us to change a pin state. Sounds like there
is a big problem somewhere.

    Grant> Right, but the reference manual implies that it does when
    Grant> it states that the delay value will be the SCK period. That
    Grant> could only be true if the overhead is either zero or is
    Grant> measured and compensated for.

On processors which have dedicated I2C bus master support (as opposed
to bitbanging GPIO lines) the delay is likely to be exact since it
will be used to set a clock register within the I2C hardware. For a
bit-banged bus the delay should be accurate to within a few percent,
which should be good enough for all practical purposes. It will not be
any more accurate than that because HAL_DELAY_US() is not expected to
be any more accurate than that. There is a reasonable assumption here
that the low-level bitbang operations are sufficiently cheap as to be
negligible.

    Grant> 2) There doesn't seem to be any way to determine when
    Grant> writing zero bytes of data with cyg_i2c_tx() whether the
    Grant> operation was successful or not, since it returns 0 for
    Grant> both cases. I presume one should use the lower-level
    Grant> "transaction" routines for this case?
    >> 
    >> Under what circumstances does it make sense to write zero bytes
    >> of data?

    Grant> The datasheet for the EEPROM I'm using states that in order
    Grant> to determine if a write cycle has completed, one should
    Grant> send an address/control byte with the r/*w bit
    Grant> cleared. If that byte is acked, then the write cycle is
    Grant> finished. If it isn't then the write cycle is still in
    Grant> progress. I've determined that sending an extra byte after
    Grant> the control byte doesn't seem to hurt anything, but I'd
    Grant> prefer to do things according to the datasheet.

So the EEPROM will not acknowledge the address byte immediately after
the start condition. cyg_i2c_bitbang_tx() should detect this and
return immediately after the address byte, with no attempt to send the
data byte. As long as the data byte is effectively a no-op when the
write cycle has completed and the EEPROM does accept the data byte,
no harm will be done. There is a marginal inefficiency in that your
polling code ends up transmitting one unnecessary byte, and arguably
the I2C API should have allowed for this case, but I do not think it
is worth changing the API at this stage.

Bart

-- 
Bart Veer                                 eCos Configuration Architect
http://www.ecoscentric.com/               The eCos and RedBoot experts


* [ECOS]  Re: typo in i2c example in reference manual?
  2007-02-22 22:53     ` Bart Veer
@ 2007-02-23  4:28       ` Grant Edwards
  2007-02-24 15:52       ` Grant Edwards
  2007-02-24 17:03       ` Grant Edwards
  2 siblings, 0 replies; 7+ messages in thread
From: Grant Edwards @ 2007-02-23  4:28 UTC (permalink / raw)
  To: ecos-discuss

On 2007-02-22, Bart Veer <bartv@ecoscentric.com> wrote:

>    Grant> Changing a pin state requires a single instruction on my
>    Grant> platform. Still, setting the delay parameter to 0 results
>    Grant> in a SCK period of 50,000ns. Setting the delay parameter to
>    Grant> a non-zero value adds to that 50,000ns. [I'm running on a
>    Grant> NIOS2 CPU at 44MHz.]
>
> So apparently it takes 25us to change a pin state. Sounds like
> there is a big problem somewhere.

It sure does.  I took a look at the C++ code, and I don't see
how there could be that much overhead anywhere.

> On processors which have dedicated I2C bus master support (as
> opposed to bitbanging GPIO lines) the delay is likely to be
> exact since it will be used to set a clock register within the
> I2C hardware. For a bit-banged bus the delay should be
> accurate to within a few percent, which should be good enough
> for all practical purposes.

There's got to be something wrong with my hardware platform --
not that I'd be surprised, the NIOS2 has been a giant headache
from the beginning.

> It will not be any more accurate than that because
> HAL_DELAY_US() is not expected to be any more accurate than
> that. There is a reasonable assumption here that the low-level
> bitbang operations are sufficiently cheap as to be negligible.

I'm also going to take a look at the HAL_DELAY_US() macro.
Altera's NIOS2 eCos port has also been the source of headaches.

>>> Under what circumstances does it make sense to write zero bytes
>>> of data?
>
>    Grant> The datasheet for the EEPROM I'm using states that in order
>    Grant> to determine if a write cycle has completed, one should
>    Grant> send an address/control byte with the r/*w bit
>    Grant> cleared. If that byte is acked, then the write cycle is
>    Grant> finished. If it isn't then the write cycle is still in
>    Grant> progress. I've determined that sending an extra byte after
>    Grant> the control byte doesn't seem to hurt anything, but I'd
>    Grant> prefer to do things according to the datasheet.
>
> So the EEPROM will not acknowledge the address byte immediately after
> the start condition.

Correct (if it's busy).  If it's not, it will ack the address
byte and the subsequent data byte which is written into a
pointer register.

> cyg_i2c_bitbang_tx() should detect this and return immediately
> after the address byte, with no attempt to send the data byte.
> As long as the data byte is effectively a no-op when the write
> cycle has completed and the EEPROM does accept the data byte,
> no harm will be done.

As far as I can tell, it's effectively a no-op if you follow it
with another write command to set the register back to the
desired state.  The datasheet seems quite explicit that the
address byte (either acked or not) is followed by another start
condition and then the command you actually want to execute.
But, I can't see any reason why you can't send a data byte
along with the address byte.

> There is a marginal inefficiency in that your polling code
> ends up transmitting one unnecessary byte, and arguably the
> I2C API should have allowed for this case, but I do not think
> it is worth changing the API at this stage.

Probably not.

-- 
Grant Edwards                   grante             Yow!  An Italian is COMBING
                                  at               his hair in suburban DES
                               visi.com            MOINES!



* [ECOS]  Re: typo in i2c example in reference manual?
  2007-02-22 22:53     ` Bart Veer
  2007-02-23  4:28       ` Grant Edwards
@ 2007-02-24 15:52       ` Grant Edwards
  2007-02-24 17:03       ` Grant Edwards
  2 siblings, 0 replies; 7+ messages in thread
From: Grant Edwards @ 2007-02-24 15:52 UTC (permalink / raw)
  To: ecos-discuss

On 2007-02-22, Bart Veer <bartv@ecoscentric.com> wrote:

>     >> It is assumed that the bitbang function just needs to manipulate a
>     >> couple of registers related to GPIO pins, which should be near enough
>     >> instantaneous.
>
>    Grant> Changing a pin state requires a single instruction on my
>    Grant> platform. Still, setting the delay parameter to 0 results
>    Grant> in a SCK period of 50,000ns. Setting the delay parameter to
>    Grant> a non-zero value adds to that 50,000ns. [I'm running on a
>    Grant> NIOS2 CPU at 44MHz.]
>
> So apparently it takes 25us to change a pin state. Sounds like there
> is a big problem somewhere.

It's not a hardware problem -- that 25us is all in the i2c
infrastructure.

The following loop generates an SCK of around 20MHz.  (I'm
not sure of the exact frequency, my digital scope has a max
sample rate of 40MHz).

      while (1)
        {
          BitSet(Sck);
          BitClr(Sck);
          BitSet(Sck);
          BitClr(Sck);
          BitSet(Sck);
          BitClr(Sck);
          BitSet(Sck);
          BitClr(Sck);
          BitSet(Sck);
          BitClr(Sck);
          BitSet(Sck);
          BitClr(Sck);
          BitSet(Sck);
          BitClr(Sck);
        }

Adding two nops slows it down to the point where I can
actually measure it:

      while (1)
        {
          BitSet(Sck);
          asm(" nop");
          asm(" nop");
          BitClr(Sck);
          asm(" nop");
          asm(" nop");
          BitSet(Sck);

          [...]

That produces an SCK of 5.6MHz.

Adding in the overhead of calling the "bitbang" function

      while (1)
        {
          dm2_i2c_bitbang(NULL,CYG_I2C_BITBANG_SCL_HIGH);
          dm2_i2c_bitbang(NULL,CYG_I2C_BITBANG_SCL_LOW);
          dm2_i2c_bitbang(NULL,CYG_I2C_BITBANG_SCL_HIGH);
          dm2_i2c_bitbang(NULL,CYG_I2C_BITBANG_SCL_LOW);
          [...]
        }

That slows SCK down to about 400KHz.

Add in the layer above that by calling the tx/rx routines, and
the fastest clock rate I can get is about 20KHz.

>    Grant> Right, but the reference manual implies that it does when
>    Grant> it states that the delay value will be the SCK period. That
>    Grant> could only be true if the overhead is either zero or is
>    Grant> measured and compensated for.
>
> On processors which have dedicated I2C bus master support (as opposed
> to bitbanging GPIO lines) the delay is likely to be exact since it
> will be used to set a clock register within the I2C hardware. For a
> bit-banged bus the delay should be accurate to within a few percent,

I don't see how that can be true unless you specify long delays
on a very fast processor.  For the typical i2c clocks and a
44MHz processor, the overhead isn't negligible -- it's 10X
larger than the requested delay.

> which should be good enough for all practical purposes. It
> will not be any more accurate than that because HAL_DELAY_US()
> is not expected to be any more accurate than that. There is a
> reasonable assumption here that the low-level bitbang
> operations are sufficiently cheap as to be negligible.

A reasonable assumption?  We must be assuming a 1GHz processor
with a cache big enough to hold the entire application.

-- 
Grant Edwards                   grante             Yow!  The PINK SOCKS were
                                  at               ORIGINALLY from 1952!! But
                               visi.com            they went to MARS around
                                                   1953!!



* [ECOS]  Re: typo in i2c example in reference manual?
  2007-02-22 22:53     ` Bart Veer
  2007-02-23  4:28       ` Grant Edwards
  2007-02-24 15:52       ` Grant Edwards
@ 2007-02-24 17:03       ` Grant Edwards
  2 siblings, 0 replies; 7+ messages in thread
From: Grant Edwards @ 2007-02-24 17:03 UTC (permalink / raw)
  To: ecos-discuss

On 2007-02-22, Bart Veer <bartv@ecoscentric.com> wrote:

> On processors which have dedicated I2C bus master support (as
> opposed to bitbanging GPIO lines) the delay is likely to be
> exact since it will be used to set a clock register within the
> I2C hardware.

Probably true.

> For a bit-banged bus the delay should be accurate to within a
> few percent,

OK, let's do some math.

On every I2C bus I've ever seen the clock period was 2500ns.

There are two sources of error which directly sum: inaccuracy
in HAL_DELAY_US() and overhead.  

We'll assume HAL_DELAY_US() has zero error.  That allows us "a
few percent" of 2500ns for overhead (we'll say 3%). We have
37ns per half-bit for overhead.

We're talking about a 44MHz CPU, so a single CPU clock is 23ns.
I'm going to claim I should be allowed 1 CPU clock for the one
machine instruction that sets/clears an I/O bit.  That means we now
have 14ns per half-bit left over for all of the C/C++ overhead.

That's about 1/4 of an average machine instruction execution
time (which is about 2 clock cycles).  1/4 of a machine
instruction for C++ code that's shifting data around in a loop
while making indrect calls through structure fields to a
function that's executing a switch statement.

> which should be good enough for all practical purposes. It
> will not be any more accurate than that because HAL_DELAY_US()
> is not expected to be any more accurate than that.

I've assumed HAL_DELAY_US() has 0 error, yet expecting the
delay in a bit-banged driver to be within a few percent is
clearly unrealistic.

> There is a reasonable assumption here that the low-level
> bitbang operations are sufficiently cheap as to be negligible.

No, that's just not a reasonable assumption.  On a 44MHz CPU
which executes an average of 1 instruction every 2 clock
cycles, an instruction time is 45ns.  A half-bit time is
1250ns.  That means that you can execute about 28 machine
instructions per half bit.  

Simply calling dm2_i2c_bitbang(bus,op) is using up over 100% of
the available time.  Adding another layer of C++ code on top of
that uses up 2000% of the available time.

How can we add another 100% for HAL_DELAY_US() and expect the
result to be accurate to within a few percent?
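The arithmetic above, as a small host-runnable check (my rounding follows the post: a "few percent" taken as 3%, and an average of 2 cycles per instruction at 44MHz):

```c
/* 3% of a 2500ns (400kHz) period split over two half-bits leaves ~37ns
 * of allowable overhead per half-bit, while a 1250ns half-bit at ~45ns
 * per instruction allows only about 27-28 instructions in total. */
static long budget_per_half_bit_ns(void)
{
    long period_ns   = 2500;                 /* 400kHz I2C clock    */
    long overhead_ns = period_ns * 3 / 100;  /* "a few percent"     */
    return overhead_ns / 2;                  /* per half-bit: ~37ns */
}

static long instructions_per_half_bit(void)
{
    long half_bit_ns = 2500 / 2;             /* 1250ns              */
    long insn_ns     = 45;                   /* ~2 cycles at 44MHz  */
    return half_bit_ns / insn_ns;            /* ~27 instructions    */
}
```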

-- 
Grant Edwards                   grante             Yow!  ... Just enough
                                  at               time to do my LIBERACE
                               visi.com            impression...


