Selecting the right I/O board may seem to be a difficult and complex process. Sometimes data acquisition (DAQ) system requirements are well defined and not subject to change. More often, however, the specific requirements are not completely determined until a project is under way. Then the flexibility, adaptability and expandability of the data acquisition system may be critical.
The first specification to consider is throughput. Throughput is a measure of the rate at which a data acquisition system can acquire and store samples using an A-D converter. Five major factors contribute to throughput: multiplexer settling time, amplifier settling time, sample/hold acquisition time, A-D conversion time and the time required to read data from the ADC and store it in memory.
Most multichannel DAQ boards multiplex multiple inputs into a single analog-to-digital (A-D) conversion system. It is tempting to merely take the reciprocal of the A-D conversion time and assume the result is the maximum achievable sampling speed. However, A-D conversion time is only one of many factors that affect the sampling speed of the system. The typical signal path is through a solid-state multiplexer to a programmable gain amplifier (PGA), then to a sample/hold amplifier and finally to the ADC. Each element in the chain requires a short period of time to settle to its stated accuracy (typically 0,01% for a 12 bit system). All these elements must be included when determining system throughput, not just the A-D conversion time. The system throughput rate is always slower than the A-D throughput alone, but represents true system performance.
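The throughput budget described above can be sketched in a few lines. The component times below are purely illustrative assumptions for a hypothetical 12 bit board, not figures from any data sheet; the point is that the sum of all settling and transfer times, not the conversion time alone, sets the real sample rate.

```python
# Hypothetical per-sample timing budget for a multiplexed DAQ system.
# All component values are illustrative assumptions.
MUX_SETTLING_US = 2.0      # multiplexer settling time
PGA_SETTLING_US = 2.0      # programmable-gain amplifier settling time
SH_ACQUISITION_US = 1.5    # sample/hold acquisition time
ADC_CONVERSION_US = 8.0    # A-D conversion time
READ_STORE_US = 1.5        # time to read the result and store it in memory

def max_throughput_sps(*component_times_us: float) -> float:
    """Return the maximum sustainable rate in samples per second."""
    return 1e6 / sum(component_times_us)

# Naive figure: reciprocal of the A-D conversion time alone.
adc_only = max_throughput_sps(ADC_CONVERSION_US)

# True system figure: every element in the signal chain included.
system = max_throughput_sps(MUX_SETTLING_US, PGA_SETTLING_US,
                            SH_ACQUISITION_US, ADC_CONVERSION_US,
                            READ_STORE_US)
```

With these example numbers the ADC-only figure is 125 000 sample/s, while the full signal chain sustains only about 66 700 sample/s - the gap a single-channel specification conveniently hides.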
You should also be cautious of a single-channel throughput specification since the settling times of the multiplexer and PGA are left out of the equation (switching between channels is not involved). This is typically a fairly impressive specification, but seldom representative of a real-world situation.
Another significant factor that influences overall throughput is the architecture of the DAQ board. The methods used to select channels, start conversions and get the resulting data from the board can have a significant impact on throughput - regardless of the speed of the analog front end and ADC.
All the architectural factors have one thing in common - they minimise the software interaction required on a sample-by-sample basis to acquire data. In general, the less involvement the computer has with the details of the sampling process, the faster the board can acquire data. The computer is used only to set up the DAQ process and perhaps to read data from the board. The board itself controls the actual sampling process.
Hardware pacing
The architectural feature that has the most significant influence on a board's throughput is its ability to start A-D conversions using a hardware pacing signal rather than a software event. Typically, this signal is generated by a programmable timebase controlled by a crystal oscillator - often called a pacer. On more sophisticated boards, such as the Intelligent Instrumentation PCI-20098C series, an external signal can also start conversions.
Hardware pacing eliminates one significant element of software overhead - the need to start each A-D conversion with a software instruction. In addition (and perhaps even more significant), hardware pacing eliminates timing 'jitter' from the sampling process.
The goal of most high-speed DAQ processes is to reconstruct a waveform. To do this, the system must know the exact time each sample was taken as accurately as it knows its value. Any error in the timing of the samples has just as much effect on accuracy as errors in the magnitudes of the samples. For a variety of reasons, using software timing loops to pace DAQ runs is not accurate enough to produce acceptable results in many applications.
Automated channel selection
Most DAQ applications require sampling of multiple channels. In these applications, another way to reduce software overhead is to provide a way for the board to automatically select the channels to be scanned. Again, the goal is to relieve the computer of the software burden of selecting each channel before it is sampled. In the set-up process, the computer specifies the sequence of channels to be scanned and the hardware on the DAQ board handles the details of switching channels at the appropriate time.
How do you know if you need a board with hardware-automated channel selection? It is difficult to specify the speed at which hardware channel selection becomes necessary; it depends on the speed of your computer and what you want the software to do in addition to controlling DAQ. Typically, if your application involves sampling at speeds greater than 1000 sample/s, you should consider a board with hardware selection capabilities. If you have a fast computer, a very 'tight' DAQ loop and a highly skilled programmer, you could possibly sample at 10 kHz without resorting to hardware channel selection - but, in general, it is not recommended.
Automated channel scanning is typically provided in one of two ways: sequential scanning or channel/gain scan lists. A sequential scanner simply applies the output of a counter to the channel selection logic of the multiplexer. Each time a conversion begins, the counter advances the multiplexer to the next channel. Usually, you can program either the beginning channel, the ending channel or both.
The channel/gain scan list is one step beyond the sequential scanner. It operates in a fashion similar to the sequential scanner with one significant difference - the output of the counter, instead of being applied to the multiplexer, is applied to the address inputs of an on-board memory. The data stored in the memory is then applied to the selection logic of the multiplexer. Depending on the capabilities of the board, this data can also be applied to the gain-selection logic on a programmable-gain amplifier or to the logic that controls single-ended vs differential operation and so forth.
Using a channel/gain scan list you can scan any sequence of channels in any order and each channel can have a different gain, depending on what is stored in the memory. You can also specify repeat channels in the list, which allows more exotic scanning strategies.
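The counter-addresses-memory arrangement can be modelled as follows. This is a software sketch of the hardware mechanism, assuming a scan memory that holds (channel, gain) pairs; the particular entries are hypothetical.

```python
# Software model of a channel/gain scan list: a counter addresses an
# on-board memory, and the memory output drives the multiplexer and
# PGA selection logic. Entries below are hypothetical.
from itertools import cycle

scan_list = [
    (0, 1),    # channel 0 at gain 1
    (3, 10),   # channel 3 at gain 10
    (0, 1),    # channel 0 repeated - channels may appear more than once
    (7, 100),  # channel 7 at gain 100
]

# The hardware counter advances on each conversion and wraps around.
counter = cycle(range(len(scan_list)))

def next_conversion():
    """Return the (channel, gain) selected for the next A-D conversion."""
    return scan_list[next(counter)]
```

Unlike a sequential scanner, nothing forces the entries to be in order, unique or at the same gain - the scan memory contents alone determine the pattern.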
The Intelligent Instrumentation PCI-20098C series multifunction DAQ boards and PCI-20501C series EISA multifunction DAQ boards are examples of boards that use state-of-the-art channel/gain scan lists.
Direct memory access
The ability to make use of direct memory access or DMA is a major means of reducing software overhead in the DAQ process and enhancing system throughput.
A DAQ board with DMA support can store samples directly in memory without any intervention from a computer program. The host computer sets up the DAQ process, starts it and is then free to perform other tasks. The DAQ board, along with the host computer's DMA controller, handles the actual DMA process.
In the DMA process, the DAQ board communicates with the computer's bus using a special set of handshake signals that allows the board to 'steal' cycles from the processor. Industry Standard Architecture (ISA) computers have seven DMA 'channels'. Four of the channels handle 8 bit DMA transfers and the other three handle 16 bit transfers. When the DAQ board has data to send to the host, it asserts the DRQn line (where n is the DMA channel number). When the host activates the corresponding DACKn line, the board writes its data to the bus. The host computer's DMA controller ensures that the data is stored to the right memory location.
The simplest form of DMA (and the one most commonly used with DAQ boards) allows the board to transfer a fixed number of samples to memory. A software command starts the process. Each sample acquired by the board's ADC is transferred to the host via DMA until a preprogrammed number of samples has been sent. At that time, the process stops. This is terminal count mode DMA.
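Terminal count mode reduces to a very simple rule, modelled here in software: transfer every sample that arrives until a preset count is reached, then stop.

```python
# Software model of terminal-count-mode DMA: samples stream from the
# ADC into memory until a preprogrammed count is reached, then the
# transfer stops with no further software involvement.
def terminal_count_dma(sample_source, terminal_count):
    """Transfer at most `terminal_count` samples into a buffer."""
    buffer = []
    for sample in sample_source:
        buffer.append(sample)
        if len(buffer) == terminal_count:  # terminal count reached
            break
    return buffer
```

The limitation the article goes on to describe is visible in the model: capture begins the moment the process is started, so the software must already know when the event of interest will occur.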
While this process is simple to perform and to understand, it is not very well suited to data acquisition. In many DAQ applications, the computer does not control the timing of the event being captured; rather, it must respond to the event. For example, to analyse the response of a shock absorber after subjecting it to an impact, it would be best to trigger the DAQ process when the impact occurs. However, if you have to start the process with a software command, you also have to have some method of knowing when the event of interest is about to occur. This is typically difficult to manage. The solution is to trigger a sequence of A-D conversions from an external signal.
Triggering the DMA process
Many DAQ boards are advertised as having triggering capabilities, but not all manufacturers mean the same thing by the word trigger. Some boards only trigger a single A-D conversion from an external digital signal. A few boards perform analog triggering of a complete sequence of DAQ channels. Fewer still perform pre- and post-triggered DAQ, which enhances the DMA process and makes it the most useful for data acquisition.
The pre- and post-triggered DAQ technique allows you to capture the entire event of interest, rather than just the portion that occurred after the trigger criterion was satisfied. This feature must be built into the board, however, and cannot be added later. Since this is a very desirable capability, manufacturers mention it prominently in the data sheets for such boards. If you do not see pre- and post-triggered DAQ mentioned on the first page of a data sheet, it is likely that the board does not have it.
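The usual mechanism behind pre- and post-triggered capture is a circular buffer that continuously holds the most recent samples, so that history already exists when the trigger fires. The sketch below is a software model of that idea, with hypothetical parameter names.

```python
# Model of pre- and post-triggered capture. A circular (ring) buffer
# continuously retains the last `pre_count` samples; when the trigger
# condition is met, `post_count` further samples are appended and the
# complete record - before and after the event - is returned.
from collections import deque

def pre_post_capture(samples, trigger, pre_count, post_count):
    history = deque(maxlen=pre_count)  # oldest samples fall off the end
    it = iter(samples)
    for s in it:
        if trigger(s):
            record = list(history) + [s]
            for _ in range(post_count):
                record.append(next(it))
            return record
        history.append(s)
    return None  # trigger condition never satisfied
```

In the shock-absorber example, the trigger condition would be the impact itself, yet the returned record still contains the samples leading up to it.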
DMA and background processing
So far, we have focused on DMA as a way of speeding up DAQ throughput by minimising software overhead. DMA also offers another advantage not directly related to speed. DMA proceeds in hardware without any software intervention. Effectively, it is a background task. During a DMA process, your program is free to perform other tasks. For example, you could start a DMA process and then compute an FFT on the data from the last DMA process while waiting for the current one to finish. Since DMA requires no processor instructions while it is running, the PC can do something else. This is true even though DOS is not a multitasking operating system.
A DAQ system gathers information about a signal by capturing a series of instantaneous snapshots of the signal taken at fixed time intervals. Each sample represents the input signal at a specific instant in time. To reconstruct the characteristics of the waveform from this stream of samples, you must know the precise time of each sample and its exact value. An error in sample timing or analog-to-digital conversion value can lead to inaccurate results.
To illustrate this concept, imagine that you are sampling a 100 Hz signal by polling a timing signal and starting a conversion by software command when the edge of the timing signal is detected. If the execution time of your polling loop were 5 µs (ie the amount of time it takes to detect the edge), you would have a 5 µs window of uncertainty, or jitter, in the time period between two successive snapshots. Thus, if your input signal were a 10 V, full-scale sinewave with a frequency of 100 Hz, it could change by approximately 31 mV within that 5 µs period.
Due to this timing uncertainty, it is possible that you could have an error of 31 mV in the value of any data sample. For a 12 bit ADC, this amounts to approximately 13 LSBs of error - effectively reducing your 12 bit system to 8 bit accuracy.
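The arithmetic behind these figures is worth making explicit. The worst-case slew of a sinewave A·sin(2πft) is A·2πf, and the calculation below reproduces the article's numbers under the assumption (implicit in the example) of a 10 V amplitude and a 10 V converter span.

```python
# Reproduce the jitter-error arithmetic from the 100 Hz example.
# Assumptions: 10 V sine amplitude and a 10 V span on the 12 bit ADC.
import math

F_SIGNAL_HZ = 100.0   # signal frequency
AMPLITUDE_V = 10.0    # sine amplitude (assumed)
JITTER_S = 5e-6       # 5 us polling-loop timing uncertainty
SPAN_V = 10.0         # converter span (assumed)
BITS = 12

# Worst-case slew of A*sin(2*pi*f*t) is A*2*pi*f, so the largest value
# error the timing jitter can cause is:
max_error_v = AMPLITUDE_V * 2 * math.pi * F_SIGNAL_HZ * JITTER_S  # ~31 mV

lsb_v = SPAN_V / 2**BITS                       # one LSB, ~2.44 mV
error_lsbs = max_error_v / lsb_v               # ~13 LSBs of error
effective_bits = BITS - math.log2(error_lsbs)  # ~8.3, i.e. 8 bit accuracy
```

Running the numbers gives roughly 31 mV, 13 LSBs and just over 8 effective bits - exactly the degradation described above.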
Timing errors can be reduced to a level far below the A-D linearity errors simply by starting conversions with a crystal-controlled oscillator clock. Many DAQ boards available today provide this capability. Intelligent Instrumentation's PCI-20098C series boards, for example, reduce sampling jitter to a fraction of a nanosecond, effectively removing jitter from the equation.
Some DAQ applications do not require sampling at regular intervals; rather, sampling is based upon some other physical parameter. For example, a system monitoring the vibration of bearings in a rotating machine would take samples based on n degrees of rotation of the machine, rather than on a fixed time interval. In applications such as this, the DAQ board must accept input from an external signal.
In multichannel DAQ systems, it is standard practice to set the sampling clock to the desired rate and then sample a different channel on each successive tick of the clock. The channels are typically selected by some sort of automated scanner. The advantage of this method is that it eliminates jitter and is simple to understand and implement. The drawback is that there is a time skew of one sample period between successive channel readings. If you are using values from two physical channels to mathematically derive a third parameter, this method of sequential sampling can lead to significant errors.
Burst sampling
Another sampling strategy that has become popular in multichannel DAQ systems is burst sampling. Burst sampling has also been called 'pseudo-simultaneous' sampling - it gets you close to the benefits of simultaneous sampling for a much lower cost.
With a burst sampling scheme you must specify the group of channels you want to sample and how often you want to sample them. The system then reads all the channels in the group at the maximum rate of the ADC, allowing the specified amount of time in between group samples. This method results in a jitter-free sample rate (important in DAQ) and minimises the time skew between channel readings.
To illustrate the difference between sequential and burst sampling, let us consider the example of an automobile engine test system that monitors two analog signals - one proportional to torque and one proportional to RPMs. After monitoring these two signals with a PCI-20098C series board at a rate of 500 sample/s each, you could multiply the two together to get horsepower as a function of time.
Without burst sampling you would set the sampling rate to 1000 sample/s and there would always be 1 ms between the time the torque signal is acquired and the associated RPM signal is acquired. If either parameter changed in that 1 ms, then the calculated horsepower would be incorrect.
With burst sampling, you would specify a burst rate of 500 bursts, or scans, per second. Every 2 ms the PCI-20098C board would sample first the torque signal and then, as quickly as the ADC allows, the RPM signal. The time skew between these two samples would be about 22 µs instead of 1 ms. In this example, the time skew would be reduced by 98% through the use of burst sampling and the probability of an erroneous result would be similarly reduced.
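The skew comparison in the engine-test example works out as follows, using the figures quoted above (the 22 µs within-burst spacing is taken from the text).

```python
# Channel-to-channel skew: sequential versus burst sampling, using the
# engine-test figures quoted in the text.
SCAN_RATE_SPS = 500.0          # 500 scans per second of the channel pair
N_CHANNELS = 2                 # torque and RPM
BURST_SPACING_S = 22e-6        # per-sample spacing within a burst

# Sequential sampling: channels alternate on a 1000 sample/s clock,
# so the skew between torque and RPM is one full sample period.
sequential_skew_s = 1.0 / (SCAN_RATE_SPS * N_CHANNELS)  # 1 ms

# Burst sampling: both samples are taken back to back at ADC speed.
burst_skew_s = BURST_SPACING_S                          # 22 us

reduction = 1 - burst_skew_s / sequential_skew_s        # ~0.98
```

The skew drops from 1 ms to 22 µs, a reduction of about 98%, while the 2 ms interval between bursts - and hence the jitter-free effective sample rate per channel - is unchanged.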
Burst sampling is just as beneficial in situations where sampling needs to be related to some physical parameter other than time (such as in the example of the rotating machine described earlier). In such applications, however, the burst of samples must be initiated by an external signal, such as a tachometer pulse, rather than by the internal timebase. The PCI-20098C series DAQ boards are among the few on the market that can handle this situation without any problem; bursts can be triggered by external signals, by the internal timebase or by any of several other events.
For further information, contact Bobby Holdcroft of Designs Unique on telephone (011) 646 1171.
Email: [email protected]
www: www.designsunique.co.za
© Technews Publishing (Pty) Ltd | All Rights Reserved