I am currently working on a device that will sit on an IDE bus,
between the host and the drives, where it will perform some specific
tasks:
- There will be a delay through the circuit, maybe as much as 10
blocks of 16 bits from the databus (when transmitting data).
- The device will be transparent as seen from the host, and I plan
on letting abort and reset signals and such go right through without
any delay. For protocol data there will only be a short delay, but for
data that is going to be stored there will be, as I stated above, a
delay of approximately 10 blocks of 16 bits (roughly as in the little
model below).
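
To make the delay concrete, here is a rough software model of that
data path (plain C, purely for illustration; DELAY_WORDS and the ring
buffer are just my way of describing it, not anything taken from the
actual design):

#include <stdint.h>
#include <stdio.h>

#define DELAY_WORDS 10                 /* the ~10 x 16-bit pipeline depth */

static uint16_t fifo[DELAY_WORDS];
static int count = 0;                  /* words currently held in the pipeline */
static int head  = 0;                  /* next slot to write (oldest word when full) */

/* Push one word coming from the host; returns 1 and sets *out when a
   delayed word falls out the other end toward the drive. */
static int push_word(uint16_t in, uint16_t *out)
{
    int full = (count == DELAY_WORDS);
    if (full)
        *out = fifo[head];             /* oldest word leaves the pipeline */
    fifo[head] = in;
    head = (head + 1) % DELAY_WORDS;
    if (!full)
        count++;
    return full;
}

int main(void)
{
    for (int w = 0; w < 15; w++) {
        uint16_t out;
        if (push_word((uint16_t)w, &out))
            printf("in=%2d  out=%2d\n", w, (int)out);   /* out lags in by 10 words */
        else
            printf("in=%2d  (pipeline still filling)\n", w);
    }
    return 0;
}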
And now to my questions:
1. What problems could arise from the fact that I introduce a delay?
2. How important would it (really) be to deal with those problems?
3. Is it possible to "stall" commands that are problematic when a
delay is introduced, so that an "answer" can be fetched from the
host/device, even though the protocol states stuff like "wait 400 ms
for an answer, and then do this and that"? (A rough sketch of what I
have in mind follows below.)
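
Regarding question 3, the kind of thing I am imagining is holding the
host off with wait states during PIO data transfers. This is only a
sketch in plain C: I am assuming here that my device is allowed to
drive IORDY toward the host, and every function name below is a
placeholder for whatever the real logic ends up being, not part of any
existing design.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* ---- Stubs standing in for the real hardware, only for illustration ---- */

static int poll_count = 0;

static bool fifo_has_word(void)        /* pretend the delayed word shows up  */
{                                      /* after a few polls                  */
    return ++poll_count > 3;
}

static uint16_t fifo_pop_word(void)    { return 0xBEEF; }

static void set_iordy(bool ready)      /* would drive IORDY toward the host  */
{
    printf("IORDY %s\n", ready ? "asserted (cycle may complete)"
                               : "negated  (host held in wait states)");
}

static void put_word_on_bus(uint16_t w)   /* would present the word on DD0-15 */
{
    printf("data word 0x%04X put on the bus\n", (unsigned)w);
}

/* ---- The idea itself: what happens when the host starts a PIO data read ---- */

static void on_host_pio_read(void)
{
    /* Hold the host off by negating IORDY (inserting wait states) until
       the delayed word has made it through the pipeline... */
    set_iordy(false);
    while (!fifo_has_word())
        ;                              /* in hardware: just sit in a wait state */

    /* ...then present the word and let the cycle complete. */
    put_word_on_bus(fifo_pop_word());
    set_iordy(true);
}

int main(void)
{
    on_host_pio_read();
    return 0;
}

In real hardware the busy-wait would of course just be the state
machine sitting in a wait state, but it shows the handshake I am
after.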
I have a system up and running that doesn't even try to address any of
the problems introduced by the delay, and it works just fine. The
question is, will it keep doing so if put through extensive "stress
testing", where all sorts of crazy command sequences are sent to the
device?
P.S. No need to reply with RTFM. I'm on it, but the bugger is huge...