LV2013
As is my custom, I checked out two alternate ways of doing something, to see which was faster.
Objective: To convert a "Generic" U8 Enum to a "Specific" U8 Enum.
IOW, both are TYPEDEFed enums with 256 values.
One will remain generic, to be used in a generic transceiver method, the other will be specific, to be used in a specific application.
Directly connecting a generic control to a specific indicator will not work, as LV (properly) complains that they are not compatible.
Scheme 1 flattens the Enum into a string, then unflattens it as the specific type:
![G2S Flatten.PNG]()
Scheme 2 does a typecast from generic to specific:
![G2S TypeCast.PNG]()
Both are in INLINE SUBROUTINE VIs, so call overhead should be ruled out.
My timing mechanism is well tested; it has been in use for 20 years.
I was surprised to find that the FLATTEN + UNFLATTEN scheme was faster. Significantly. (227 nSec vs. 393 nSec).
My question (strictly academically) is WHY?
My thought is that the TYPECAST simply reuses the space occupied by the incoming variable and re-declares it to be the new type. IOW, no time at all.
Whereas the FLATTEN has to allocate space for a string and create a LENGTH word, and then the UNFLATTEN has to take that string, extract the one byte, and create a new variable.
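To make that concrete, here is roughly what I picture each scheme doing, written as a C-style sketch. This is purely an analogy for my mental model, not what LabVIEW actually generates under the hood; the type names and the length-word layout are my own stand-ins.

```c
#include <stdint.h>
#include <string.h>

typedef uint8_t GenericEnum;    /* stand-ins for the two typedef'd U8 enums */
typedef uint8_t SpecificEnum;

/* What I picture the TYPECAST doing: same byte, new label, no data motion. */
SpecificEnum via_typecast(GenericEnum g)
{
    return (SpecificEnum)g;
}

/* What I picture FLATTEN + UNFLATTEN doing: build a length-prefixed string,
   then pull the single data byte back out as the specific type. */
SpecificEnum via_flatten(GenericEnum g)
{
    uint8_t buf[5];
    uint32_t len = 1;
    memcpy(buf, &len, sizeof len);  /* the LENGTH word for the string */
    buf[4] = g;                     /* the one byte of flattened data */
    return (SpecificEnum)buf[4];    /* unflatten: extract the byte as the new type */
}
```

On that picture, the first function should be essentially free and the second should cost an allocation plus a couple of copies, which is the opposite of what I measured.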
Obviously I am mistaken, so WHY? Is the optimizer really smart enough to figure out what I'm doing and bypass some of that work? If so, why not in the TYPECAST case?
The VIs are attached if you care to play with it.