Channel: LabVIEW topics

Loop drops when connected wireless (NI 9191)


Hi,

I cannot figure out why an NI cDAQ-9191 chassis has connection drops when connected wirelessly but works perfectly when connected through LAN.

I wrote a little test VI to check the connection reliability. The chassis has an NI 9375 I/O module plugged in, and a Moxa AWK-3121 is used as the wireless access point. What I do is check 5 digital inputs and control 3 digital outputs. The outputs are continuously toggled on/off at different rates. I use an event loop and a main loop that communicates with the chassis at a loop rate of 10 ms.

 

EventLoop.jpg

 

 MainLoop.jpg

 

The chassis, laptop and Moxa A.P. are all close together on my desk. The Wi-Fi signal is excellent.

Now, the code I wrote is probably not optimal. But... if I start by connecting the chassis to my laptop with a LAN cable (Wi-Fi disabled, link-local), the communication is perfect. No drops between the loop time and the real time, and no communication errors. The runtime was 52 minutes.

TimingLANtoPC.jpg

 

Second test: the chassis is connected to the Moxa A.P. with a LAN cable, and the laptop is connected wirelessly to the Moxa A.P. The chassis is still link-local, and I can access it in MAX without any problem.

When I run my test, I immediately see loop pauses. After one minute there can already be a difference of 20 seconds between the calculated loop time and the real time. Sometimes the active light on the chassis goes off and on. When the active light is out, the loop is on hold (which is normal, as I assume the DAQ VIs are waiting for a response). After approximately 10 minutes I get error -50405, "No transfer in progress because transfer was aborted by client". If I look at the timings at that point, there is already a difference of more than 5 minutes between the loop time and the real time.

TimingWifiToPC_ChassisLANtoMoxa.jpg

 

Finally, I tested with both the chassis and the PC connected wirelessly to the Moxa A.P. I did not get the -50405 error, but again there were loop drops. After 10 minutes there was a shift of 2 minutes.

TimingAllThruWifitoMoxa.jpg
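For reference, the "calculated loop time vs. real time" check I mention above amounts to the following (sketched in Python only to show the idea; the real code is the LabVIEW main loop in MainLoop.jpg):

# Rough sketch of the timing check. Each iteration should take 10 ms;
# "calculated loop time" is iterations * 10 ms, "real time" is the wall clock.
# Over Wi-Fi the two drift apart because the I/O calls block the loop.
import time

PERIOD_S = 0.010          # 10 ms target loop rate
start = time.monotonic()

for i in range(1, 100_001):
    # ... here the real VI reads 5 digital inputs and writes 3 digital outputs ...
    time.sleep(PERIOD_S)  # stand-in for the 10 ms wait in the main loop

    calculated = i * PERIOD_S            # what the loop thinks has elapsed
    real = time.monotonic() - start      # what actually elapsed
    if i % 6000 == 0:                    # report roughly once a minute
        print(f"iter {i}: calculated {calculated:.1f} s, real {real:.1f} s, "
              f"drift {real - calculated:.1f} s")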

 

Does anyone have experience with wireless connections to the NI 9191 chassis?

Can I change my code? Did I miss something? (I'm a newbie to LabVIEW.)

 

Any help or suggestions would be appreciated.

Thanks in advance.

 

Frankie.

 

LabVIEW 2014 SP1 (32-bit) on Windows 7 Enterprise (x64).

The chassis and the Moxa are both updated to the latest firmware.


Where can I find an up-to-date IMAQ manual?


I'm writing software to control a camera with a high frame rate (~100 fps) for scientific research. I've made some VIs that illustrate the basic principles using the IMAQ High Level VIs, but to save the data at high speed and trigger the acquisition properly, I know I'll have to use the IMAQ Low Level VIs.

 

I'm using the most recent IMAQ software release (I believe NI Vision from February 2015) and LabVIEW 2012, and I have been unable to find much documentation on the IMAQ Low Level VIs. The two-sentence summaries in the IMAQ Reference Help aren't giving me much of an understanding of what they do and how to use them, and the most recent IMAQ manual I could find was from 2004, and it references outdated VIs that are not in the newer version.

 

Is there a more up-to-date manual or guide for the IMAQ Low Level VIs somewhere?

2D array of string to 2D array of void


I'm trying to wire the connector pane of my subVI so that one of its inputs is a 2D array of strings, which I will then use inside the subVI. What I did was place a 2D void array on the front panel as a control and link it to the connector pane. When I try to wire a 2D array of strings to this input on my subVI, it gives an error, since the source is a 2D array of strings and the sink is a 2D array of "void."

 

How can I simply create a 2D string array input for my subVI so that I can wire any 2D array of strings and have it be accepted by the subVI?

Excel bold style


I am trying to create a VI that saves data in Excel format. At the moment I am putting the data in a 2D array and using the invoke node Export Data To Excel to create the Excel file. However, my problem is that I would like to make the text in some cells bold, and I am unable to do this programmatically from LabVIEW (doing it manually is not an option). Could someone please give me a hand? Thank you!
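Just to illustrate the effect I am after (programmatically bold cells), outside LabVIEW it would look something like the snippet below in Python with openpyxl; in LabVIEW I assume it has to go through ActiveX or the Report Generation Toolkit instead. The file name and cell addresses are placeholders.

# Illustration only (not LabVIEW): making specific cells bold programmatically.
from openpyxl import Workbook
from openpyxl.styles import Font

wb = Workbook()
ws = wb.active
ws["A1"] = "Measurement"          # header cell that should be bold
ws["A1"].font = Font(bold=True)   # set the bold style on just this cell
ws["A2"] = 3.14                   # ordinary data cell, left as-is
wb.save("report.xlsx")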

Is it possible to include the lvproj file in a source distribution build specification?


It seems to me that a complete source distribution would include the project files associated with it, but I cannot see that it's possible to do so. Am I missing something, or should I just continue to zip up the top folder that contains everything outside of LabVIEW?

64-bit vs 32-bit LabVIEW on an AMD Opteron 6380


When running my code on an AMD Opteron 6380 processor, LabVIEW 2014 64-bit runs 4 times slower than LabVIEW 2014 32-bit. However, the same code running on an Intel Core i7 ran twice as fast with LabVIEW 64-bit vs 32-bit. Does anyone know why? Note that the code is math intensive, running a parallel loop that does not communicate with the outside world, i.e. no file access, property nodes, etc. Thank you.

Already-running vi 'not compiled'?


I have a mixed-mode FPGA device we access by ethernet, and I have a VI on this device, and it runs, and that's all wonderful.

 

I have an application which has a reference to this VI. It used to work.

 

Then we moved to a different place and the network address changed. We adjusted the address on the device and then we could get to the VI on it and run it again. That was good.

 

But the reference to the VI in the other application? It didn't update. Fair enough. I drag the VI from the target in the project window into the reference, to replace it.

 

One would think - I would have thought - that this would fix the problem. But no, I still have a broken arrow, and the error is that the referred-to VI has not been compiled. Not compiled? It's Running!

 

So, I'm a bit confused about that. Any ideas what might be going on and how I could fix it?

Using a state machine


Hello all,

 

I have attached my project. I am trying to create a state machine that will begin at 1 V, wait 1 second, then transition to 2 V, then to 3 V, and so on. I can only get it through the first iteration without problems; when it gets to the second, it keeps alarming and giving me error -200088. I have this connected to a USB-6002 DAQ device. If I take what I have in the "initialize" case, without the case structures, I am able to enter a value into the DAQmx VI, and each time I change that constant it changes the output of AO0. Putting it in this state machine, I can't get it to work. Any help would be greatly appreciated.
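In text form, the behaviour I am after is roughly the following, sketched with the Python nidaqmx API only to show the intended state sequence (the device/channel name is a placeholder; my actual code is the attached LabVIEW state machine):

# Sketch of the intended state sequence: create the AO task once, then step
# 1 V -> 2 V -> 3 V ... with a 1 s wait between states, and clear the task
# only once at the end. "Dev1/ao0" is a placeholder for the USB-6002 channel.
import time
import nidaqmx

with nidaqmx.Task() as task:                      # initialize once
    task.ao_channels.add_ao_voltage_chan("Dev1/ao0")
    for volts in (1.0, 2.0, 3.0, 4.0, 5.0):       # one "state" per value
        task.write(volts)                         # update AO0
        time.sleep(1.0)                           # wait 1 second in this state
# the task is stopped and cleared automatically when the with-block exits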

 

Thank you


Network Shared Variables syncing damaged in LV 2013 crash with RT project


I'm posting this to the Forum hoping to save somebody else from what I just spent days dealing with, as I had no luck searching the forum for a solution.

 

LabVIEW 2013 32-bit kept crashing with DAbort 0x89B93EF0 during various phases while working on a Real-Time/FPGA project that, prior to a few small updates, had been working great. It happened very sporadically, sometimes when compiling the Real-Time app, sometimes when deploying the app to a cRIO-9073. While debugging with code sections disabled, sometimes running the app in debug mode and shutting it down using a network shared variable set up for this purpose caused the DAbort as well, but it was inconsistent.

 

The change to the project preceding the issue was the addition of a single element to a complex Type Def (nested clusters) and re-ordering the elements in the sub-cluster, followed by updating a Network Shared Variable (NSV) using this Type Def (disconnecting the type def and re-setting the NSV data type to the modified Type Def), and a few relatively minor coding changes. In the middle of updating the Type Def, LabVIEW crashed, but seemed to recover. After recovery, it seemed to need more effort to compile the app - but did so without error - and deploy it, but the cRIO refused to run it. The cRIO would reboot as normal, but monitoring its CPU utilization using Distributed System Manager showed it trying to start, but then the CPU dropped to ~2.5% very quickly after the normal boot-sequence extreme utilization period. When my app is working, it hovers in the 20% CPU utilization range, so I knew it had crashed. Since then, LabVIEW has been crashing while performing different actions with the same DAbort message - sometimes while compiling the Real-Time top-level VI, sometimes mid-deployment (after a compile completed without error), and other times while working in the top-level VI.

 

In troubleshooting, I created a new Build, I copied all the elements from my top-level VI to a new VI and made a new Build for this new top-level VI, I disabled various sub-sections of the top VI and built it, made separate VIs of the sub-sections that seemed to be linked to the crashing (which all worked), re-compiled my FPGA code, cleared the compiled object cache and rebooted the PC multiple times. I even tried a different PC. The same DAbort kept recurring. I performed a mass compile too, though that isn't useful for Real-Time code. Early in the process, I also made sure to "touch" all the VIs containing my modified NSV as well as the sub-VIs modified during this update.

 

As a last attempt before calling an NI engineer for support, I changed the name of the LVLIB file containing my modified NSV, something I do to prevent our lab technicians from trying to interface to a cRIO with an outdated UI. This initially had no effect, but on a hunch I then proceeded to copy the entirety of the top-level VI to a new VI, section by section. At that point, LabVIEW finally reported an issue: it said a handful of sub-VIs accessing NSVs (all of which are set to Target Relative in my VIs, hosted on the cRIO, some with buffering, most without) in the renamed LVLIB file could not be found. When I opened the properties of these NSVs, they were still pointing to the old LVLIB file name. It seems LabVIEW, likely on that very first crash, lost track of whatever "links" exist behind the scenes in the project between the NSV representations on my block diagram and the NSV definitions in the project.

 

Once these newly found errors were fixed, the code compiled, deployed, and ran fine. Note that I haven't tried to reproduce this (how exactly do you reproduce something that occurs during a crash?).

 


 

More detail on the DAbort, from the lvlog.txt file created in the first crash:

DAbort 0x89B93EF0: bad image in ValidateImage
c:\builds\penguin\labview\components\LVManager\trunk\13.0\source\image.cpp(13809) : DAbort 0x89B93EF0: bad image in ValidateImage

Include other program installers


Is there a way to include other installers in an installer build? I don't mean NI support files like VISA and 488.2; I know how to include those. But I have an application that needs other programs installed, like the NHR IVI drivers, and I would also like to install MS XML Notepad for editing config files.

 

It would be nice to be able to include these other installers in the installer package.

FPGA compilation problem (generating cores)


Hello. I am working in a lab trying to compile an FPGA VI for the cRIO-9074. There are no errors when beginning the compilation, and it runs smoothly until it reaches the "generating cores" step. At this point the following message repeats itself:

 

All runtime messages will be recorded in
C:\NIFPGA\jobs\XR923g4_G4EhsMk\coregen.log
Saved CGP file for project 'coregen'.
Resolving generics for 'ReallyLongUniqueName_ReallyLongUniqueName'...
Applying external generics to 'ReallyLongUniqueName_ReallyLongUniqueName'...
Delivering associated files for 'ReallyLongUniqueName_ReallyLongUniqueName'...
Generating implementation netlist for
'ReallyLongUniqueName_ReallyLongUniqueName'...
Running synthesis for 'ReallyLongUniqueName_ReallyLongUniqueName'

 

This message repeats every 5 minutes or so for around 29 minutes, at which point the compiler stops with the following error:

 

ERROR:sim - Cannot rename dependency database for library "mult_gen_v11_0", file
   is
   "_cg/_dbg/ReallyLongUniqueName_ReallyLongUniqueName_xsd/mult_gen_v11_0/hdpdep
   s.ref", Temporary database file
   "C:\NIFPGA\jobs\I1wyNri_G4EhsMk\core_NiLvXipFloat32Add\tmp\_cg\_dbg\ReallyLon
   gUniqueName_ReallyLongUniqueName_xsd\mult_gen_v11_0\xil_95296_48" will
   remain.  System error message is:  File exists

ERROR:sim - Failed executing Tcl generator.
ERROR:sim - Failed to generate 'ReallyLongUniqueName_ReallyLongUniqueName'.
   Failed executing Tcl generator.
ERROR:sim:877 - Error found during execution of IP 'Floating-point v5.0'

 

I have included the Xilinx log. For good measure, I compiled an older FPGA VI we have running on a different chassis, and it compiled just fine, so it's not a problem with Xilinx (at least I don't think it is). I have spent the better part of two days wrestling with this issue and have found no viable solutions. Any and all help would be greatly appreciated. Thanks!

Cheers,
David

Agilent 34970A


I have a problem with acquiring from Agilent 34970A devices.

 

Currently I have several Agilent 34970A devices, each with 3 DAQ cards, all connected to thermocouples, and I have written software that reads from all of them, but sequentially, meaning that I have to acquire all the data from the first device's buffer before starting to acquire from the second device. This is causing a problem with my sampling rate.

 

From what I have read, I understand that I can first initialize all the devices with the required settings, then send a trigger to all the devices, and then acquire the data from the buffers once all of them finish scanning, re-sending the trigger every time I need a new scan (which in my case is continuous scanning).

 

I have looked into the Advanced Scan example from the Agilent library, but I cannot quite understand how to set the instrument up so that the trigger is sent programmatically by me; the options are not clear to me. If anyone can help, it would be much appreciated.

 

 

I have attached the Agilent library and my code, which is a modification of the Advanced Scan example.
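To make the question concrete, the pattern I am trying to get to is the one below, sketched with PyVISA and what I believe are the right 34970A SCPI commands (the VISA resource names and channel lists are just placeholders; my real code is the attached LabVIEW modification of the Advanced Scan example):

# Sketch of "configure all, trigger all, then fetch all" for several 34970A
# units, so the instruments scan in parallel instead of one after another.
import pyvisa

rm = pyvisa.ResourceManager()
units = [rm.open_resource(addr) for addr in ("GPIB0::9::INSTR", "GPIB0::10::INSTR")]

for inst in units:                                   # 1) configure every unit
    inst.write("CONF:TEMP TC,K,(@101:120)")          #    thermocouple channels
    inst.write("ROUT:SCAN (@101:120)")
    inst.write("TRIG:SOUR BUS")                      #    wait for a software trigger
    inst.write("INIT")                               #    arm the scan

for inst in units:                                   # 2) trigger all units
    inst.write("*TRG")                               #    they now scan concurrently

readings = [inst.query("FETC?") for inst in units]   # 3) read back each buffer
print(readings)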

 

All Day Installation


As an Alliance Partner we have access to all of NI's software, which is marvelous. But when a new release comes out, it is an all-day installation affair, and the virus scanner cannot be turned off. Is there a solution anywhere? We have clients who need to install RIO, FPGA and device drivers and who are in a hurry, but who then spend a whole day downloading and installing. Is there a better way?

Open VI reference from a static reference


Hello guys,

To open VIs dynamically, instead of hardcoding the VI name in a string constant, I do it as shown below to keep a dependency on my VI, so if someone ever deletes the VI from the project, moves it, or renames it, it will create a broken wire somewhere and we'll be able to fix it faster.

 

However, don't you think it's a bit weird to already have a reference to the VI, yet still need to open a new reference? Is there a way to get rid of the property node used to get the VI path? Should the static VI reference be closed after the other reference is opened?

 

Would it make sense to be able to right-click the static reference and specify an option to open a call-and-forget reference?

 

Is there a better practice?

Cheers,

2015-06-05 16_26_46-hpht.lvlib_hpht.warning.vi Block Diagram on ABT.lvproj_My Computer rev. 21 _.png

Is an FPGA Host Interface a blocking shared resource?


I'm curious to know whether the FPGA Host Interface node is a shared resource. That is, if I have a Timed Loop on a LabVIEW Real-Time target (e.g. a cRIO) that is reading from and writing to the FPGA using the Host Interface, will the determinism of that loop be adversely affected if I access other signals on the FPGA using the Host Interface in a separate loop (either a Timed Loop or a normal While Loop)?

 

Said more tersely: is the FPGA Host Interface a shared resource that can cause a priority inversion if called from two separate loops?

 

Thanks.


Robotics-based project


Hello all,

 

I am currently pursuing an MSc in Mechatronics in the UK. I would like to do my thesis/dissertation/final-year project in the robotics field using LabVIEW. I have worked with LabVIEW and cRIO, but I don't know anything about embedded control. I tried to order the embedded motion control kit, but it was very costly. I don't know how to proceed further. What are the things required to start my project that I can afford, as I am a student?

 

 

My project details:

 

A robot that moves around under either manual control or autonomously

A robot that detects cracks in a drum (which could be used in a nuclear reactor)

The robot also detects temperature, pressure and gamma radiation

Vision and wireless transmission

 

 

 

Kindly help me to proceed with my project.

 

 

 

thanks in advance

 

 

kind regards, 

Threshold


In this VI, a CWT (continuous wavelet transform) is applied to a 1D waveform.

 

I want to set the coefficients of the CWT scalogram to zero below a certain threshold.

 

Please help me in this regard.
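In NumPy terms, what I am trying to do is essentially this (the array and threshold values are placeholders; my actual data is the CWT scalogram from the attached VI):

# Zero out every scalogram coefficient whose magnitude is below the threshold.
import numpy as np

scalogram = np.random.randn(64, 1024)    # placeholder for the CWT coefficients
threshold = 0.5                          # placeholder threshold value

scalogram[np.abs(scalogram) < threshold] = 0.0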

Error -2501 Invalid TDMS File Reference


Hello. I'm using LabVIEW 2013 for myRIO on Windows 7. I'm trying to simulate sensor readings from 0-5 and then log the data; I've used the random number generator to obtain the readings. My code runs fine (almost). I've used notifiers to stop multiple loops with one Stop button, and I've put timers in too. I'm using TDMS files to log the data: I've opened one common TDMS file in which I want to log 4 different groups of readings. I'm getting error -2501, "Invalid File Reference". How do I get rid of it? My VI is attached below.
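What I am trying to achieve is, in effect, the pattern below: open the file once, write several groups into it, and release the reference only after every loop has stopped. The sketch uses the Python npTDMS package purely to describe the intent; the group and channel names are placeholders.

# Intent: open the TDMS file once, write four groups of readings into the same
# file, and close the reference only after all logging has finished.
import numpy as np
from nptdms import TdmsWriter, ChannelObject

with TdmsWriter("sensor_log.tdms") as writer:        # single shared file reference
    for group in ("Sensor1", "Sensor2", "Sensor3", "Sensor4"):
        data = np.random.uniform(0.0, 5.0, 100)      # simulated 0-5 readings
        writer.write_segment([ChannelObject(group, "Reading", data)])
# the file reference is released here, after every group has been written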

Thanks in advance!

Help accomplishing Detrended Fluctuation Analysis (DFA)


Hello,

I am trying to produce a DFA graph of a time series.

I have an ECG signal, from which I obtained the RR intervals. To accomplish the DFA analysis I need to do the following:

 

The method of detrended fluctuation analysis has proven useful in revealing the extent of long-range correlations in time series. Briefly, the time series to be analyzed (with N samples) is first integrated. Next, the integrated time series is divided into boxes of equal length, n. In each box of length n, a least squares line is fit to the data (representing the trend in that box). The y coordinate of the straight line segments is denoted by yn(k).

Next, we detrend the integrated time series, y(k), by subtracting the local trend, yn(k), in each box. The root-mean-square fluctuation of this integrated and detrended time series is calculated by

F(n) = \sqrt{\frac{1}{N}\sum_{k=1}^{N}\bigl[y(k) - y_n(k)\bigr]^2}

This computation is repeated over all time scales (box sizes) to characterize the relationship between F(n), the average fluctuation, and the box size, n. Typically, F(n) will increase with box size. A linear relationship on a log-log plot indicates the presence of power law (fractal) scaling. Under such conditions, the fluctuations can be characterized by a scaling exponent, the slope of the line relating log F(n) to log n.

 

 

In the first step, we compute the integrated signal according to the formula y(k) = \sum_{i=1}^{k}\bigl[B(i) - B_{ave}\bigr], where B_{ave} is the mean value of the signal. OK, I tried to integrate the signal by subtracting its mean value, but it seems I am not doing it correctly.
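To make it concrete, this is roughly the computation I am trying to reproduce, sketched with NumPy (the input array and box sizes are placeholders; my real input is the RR-interval series):

# Rough sketch of the DFA steps described above. "signal" stands in for the
# RR-interval series; the box sizes are arbitrary placeholders.
import numpy as np

signal = np.random.randn(4096)                 # placeholder for the RR intervals
y = np.cumsum(signal - signal.mean())          # step 1: integrated series y(k)

box_sizes = [4, 8, 16, 32, 64, 128]
fluctuations = []
for n in box_sizes:
    n_boxes = len(y) // n
    f2 = 0.0
    for b in range(n_boxes):                   # step 2: fit a line in each box
        seg = y[b * n:(b + 1) * n]
        k = np.arange(n)
        trend = np.polyval(np.polyfit(k, seg, 1), k)
        f2 += np.sum((seg - trend) ** 2)       # step 3: detrend and accumulate
    fluctuations.append(np.sqrt(f2 / (n_boxes * n)))   # F(n)

# step 4: the scaling exponent is the slope of log F(n) vs log n
alpha = np.polyfit(np.log(box_sizes), np.log(fluctuations), 1)[0]
print("DFA scaling exponent alpha:", alpha)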
 
At these links you can find more data and an example of what to do:
 
Any help is appreciated.
Regards, Fred.

Value Signaling Property Causing VI To Run Infinitely When Running Multiple Times In TestStand


Hi everyone. I suspect my issue is with using the Value (Signaling) property for a Boolean when communicating between two while loops.

 

My VI: I have a VI that has 2 while loops.

 

Loop 1: a while loop with an event structure in it.

Loop 2: a while loop that uses a Value (Signaling) property (ComError) to tell Loop 1 when a communication error has happened and that it should shut down.

 

This VI is a generically rewritten copy of my VI, just for the purpose of figuring out the issue.

 

Issue: The idea of this VI is that the operator hits the Done button after filling out some Booleans for data input (Loop 1), while Loop 2 checks for communication (data) from a device to confirm it has fully powered up. If Loop 2 fails to communicate, it sets the ComError Value (Signaling) property node to trigger an event called "ComError" that shuts down Loop 1.

 

I need to run this VI twice in my TestStand sequence. On the first run, when there is a ComError, the VI completes; but on the second run the DONE button stays depressed and the code never runs. The VI is caught in an infinite loop.

 

I have attached my VI and TestStand sequence. Here is how to reproduce my issue:

 

1. Run the TestStand sequence.

2. On the first run of the VI (sequence step UI Run 1), first select the "DONE" button and then select the "Trigger Error" stop button.

3. On the second run of the VI (sequence step UI Run 2), select the "DONE" button.

 

You will notice that after you hit the DONE button, it stays depressed and the VI is in an infinite loop. It's as if the ComError event already triggered (from the Value (Signaling) property in Loop 2) and stopped Loop 1, so when you hit the DONE button, since Loop 1 is already stopped, the DONE button doesn't do anything.

 

Thanks for everyone's help.  I really appreciate it!  Have a great weekend!
