In my application I am using the niScope Fetch (2D I16) VI to fetch data from a multi-record, multichannel acquisition on a PXI-5122 digitizer. While trying to optimize my code, I ran into the following question.
I take 32 successive acquisitions and sum (average) the waveforms together. Based on the advice in Scaling and Normalization of Binary Data, I first apply the wfminfo.gain parameter to the I16 data, sum the scaled waveforms, and then apply the offset. In MATLAB-like pseudocode, my LabVIEW code looks something like this:
sum = zeros(1,1000);
offsetsum = 0;
for i = 1:32
    [wfminfo, samples_i16] = niscope_fetch_i16(...);
    % scale each fetch by its own gain; accumulate the offsets separately
    samples_float = wfminfo.gain .* float(samples_i16[0,:]);
    sum = sum + samples_float;
    offsetsum = offsetsum + wfminfo.offset;
end
% apply the accumulated offset once at the end
sum = sum + ones(1,1000) .* offsetsum;
So my question: is it safe to assume that the wfminfo gain (and offset) parameters remain constant for the duration of an acquisition session, i.e. from the time niScope Initiate Acquisition is called until the acquisition completes? The reference above indicates that these coefficients depend on the vertical resolution and calibration settings. If they really are constant across all 32 fetches, I could move the gain multiplication outside the acquisition loop and do the accumulation entirely in integer arithmetic. Is this a safe assumption?
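For reference, this is the optimization I have in mind, in the same MATLAB-like pseudocode as above. It is only a sketch: it assumes the gain and offset really do stay constant for the whole session, and it reuses the placeholder niscope_fetch_i16 from my snippet.

acc = int32(zeros(1,1000));                  % integer accumulator for the raw counts
for i = 1:32
    [wfminfo, samples_i16] = niscope_fetch_i16(...);
    acc = acc + int32(samples_i16[0,:]);     % sum raw I16 counts, no scaling inside the loop
end
% scale once at the end, assuming gain and offset did not change between fetches
sum = wfminfo.gain .* double(acc) + 32 * wfminfo.offset;

As far as I can tell this is mathematically equivalent to my original loop only if all 32 gain values (and offsets) are identical; if they can differ between fetches, the per-fetch scaling has to stay.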