Channel: LabVIEW topics

All of a sudden having network issues with my CompactRIO


I've got a CompactRIO plugged into a Linksys Wi-Fi extender, which is connected wirelessly to the main router here in the building. This is so I can connect to the RIO wirelessly while it's moving around on a forklift.

 

It was working, and now it isn't, and I'm pretty stumped as to why. I'm hoping someone has an idea or can spot something wrong here.

 

CompactRIO info:

IP address: 10.0.10.10 (static)

Subnet Mask: 255.255.255.0

Gateway: 0.0.0.0

 

Wi-Fi Extender info:

IP address: 10.0.10.32

Subnet Mask: 255.255.255.0

Gateway: 10.0.10.1

Connected correctly with the main router's credentials

 

- I can connect to the Wi-Fi extender and access the internet, so that part is working.

- With my laptop connected to the Wi-Fi extender, I can ping the CompactRIO and get a reply.

- With my laptop connected to the main router, I can also ping the CompactRIO and get a reply, which means packets are successfully going through the main router, to the Wi-Fi extender, and on to the CompactRIO.

- When my laptop is connected to the main router, NI MAX does not see the CompactRIO at all; it reports the status as disconnected.

- When my laptop is connected to the Wi-Fi extender, NI MAX can see the CompactRIO and shows the status as connected, but it also says "There was a problem contacting the System Web Server on the target." and won't give me any other information (a quick way to check that web server from the laptop is sketched below). I'm also unable to connect to the RIO from my LabVIEW project.
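As a quick, generic way to check the System Web Server from the laptop, here is a plain TCP port test in Python (this is my own suggestion, not an NI tool; ports 80 and 443 are assumed defaults for the target's web server and may differ on your system):

import socket

CRIO_IP = "10.0.10.10"
PORTS = [80, 443]  # assumed default HTTP/HTTPS ports for the target's web server

for port in PORTS:
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.settimeout(2.0)
        result = s.connect_ex((CRIO_IP, port))  # 0 means the TCP connection was accepted
        status = "open" if result == 0 else "closed/filtered (errno %d)" % result
        print("port %d: %s" % (port, status))

If ping works but neither port accepts a connection from either network, that points at the web server or a firewall on the target rather than at the Wi-Fi path.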


Build error - Cannot Open/Create/Replace AB_Cache file


Hello, I am getting an error when I try to build an executable, or even just generate the file preview. It looks like there is an Application Builder cache file that either cannot be created or cannot be found. When I go to the directory where it should be, I see other files that were modified today, but not the file named in the error message. The directory also has the "Read-only" attribute shown as a filled (black) square inside the checkbox.

 

I am using LV 2011 SP1 (32-bit) on Win10 64-bit.

 

This is the error message I get when trying to generate the preview, and I get a similar error message when trying to build the executable.

 

Open/Create/Replace File in NI_LVConfig.lvlib:Parse Config to Queue.vi->NI_LVConfig.lvlib:Load.vi->NI_LVConfig.lvlib:Open Config Data.vi->AB_RW_Project_Cache_Info.vi->AB_Build.lvclass:Save_Cache.vi->AB_Build.lvclass:Build_from_Wizard.vi->AB_UI_Page_Preview.vi<APPEND>
C:\Users\username\AppData\Local\Temp\AB_Cache_{EFDF26AD-2AD0-4620-86D5-40D6087192ED}.txt
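As a quick sanity check (my own suggestion, not part of the LabVIEW error), you can confirm whether anything at all can create files in that Temp folder; if this fails, it points at permissions or antivirus rather than at the Application Builder itself:

import os
import tempfile

temp_dir = os.environ.get("TEMP", tempfile.gettempdir())
test_path = os.path.join(temp_dir, "ab_cache_write_test.txt")  # arbitrary test file name

try:
    with open(test_path, "w") as f:   # create a file the same way any program would
        f.write("write test")
    os.remove(test_path)
    print("OK: %s is writable" % temp_dir)
except OSError as exc:
    print("FAILED: cannot create files in %s: %s" % (temp_dir, exc))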

Modbus setup


I am brand new to Modbus, and I have the new free Modbus libraries for Master and Slave. There are so many VIs that I don't know where to start. In my application, LabVIEW is the Master and a UR robot controller is the Slave. How does the Master identify and communicate with the Slave: is it done through an IP address setup or a device ID, and which VIs are needed for simple read/write handshaking?
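For orientation: with Modbus TCP, the Master addresses the Slave by its IP address and TCP port (502 by default) plus a one-byte unit ID, and then reads or writes numbered registers/coils. Roughly speaking, the same pieces of information go into the LabVIEW library's VI that creates the TCP Master instance (the Slave's IP) and into its read/write register VIs (register address and count). Below is a minimal Python sketch over a raw socket, only to illustrate those fields; the IP address, unit ID, and register address are made-up example values, and the UR controller's actual register map is documented by Universal Robots.

import socket
import struct

ROBOT_IP = "192.168.1.10"   # example value; use your UR controller's address
UNIT_ID = 1                 # example unit ID (often ignored by Modbus TCP slaves)
START_REGISTER = 0          # example holding-register address
COUNT = 2                   # number of registers to read

# "Read holding registers" request = MBAP header + PDU
pdu = struct.pack(">BHH", 0x03, START_REGISTER, COUNT)      # function code, start address, quantity
mbap = struct.pack(">HHHB", 1, 0, len(pdu) + 1, UNIT_ID)    # transaction ID, protocol ID, length, unit ID

with socket.create_connection((ROBOT_IP, 502), timeout=2.0) as s:
    s.sendall(mbap + pdu)
    reply = s.recv(256)   # MBAP (7 bytes) + function code + byte count + register data
    print(reply.hex())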

 

DAQmx setup for high resolution, multi channel scan and graph


Hi,

 

TL;DR: I wanted to see if someone more experienced could provide some guidance on the DAQmx functions to help me efficiently read multiple channels sequentially at the highest sampling rate achievable by the DAQ device. I would like to read 3 ms of data across 24 channels in a while loop. Each channel would be read at 250 kS/s, and my program will loop through all 24 channels sequentially. My target loop rate (reading all 24 channels) is 250 ms. Is this possible, or is there a better way to do this?

 

Hardware: NI USB-6218, Windows 10 machine

Software: LabVIEW 2017 Full Development Edition

NI DAQmx driver: 17.0

 

--Detail--

I am trying to use three USB-6218 DAQs to sample and graph 72 analog inputs (RSE) at the highest sample rate the DAQ can support (250 kS/s). I am doing this by creating an individual DAQmx task for each channel and splitting all the tasks across the three DAQs.


The main signals of interest are on the order of 10 kHz, and I would like to view ~3 ms of data (750 samples) on each iteration of the loop, but there are aperiodic noise/dropouts across all channels that are much higher in frequency (50-120 kHz). I am also interested in viewing these 'broadband' noise bursts. The graphs do not need to be synchronous, as I am hoping to capture the phenomenon on at least one channel.

 

The problem I am running into is that, because each task is currently hardware timed, I believe a DAQmx Start/Stop/Control call must be included in my main loop (not sure if this is true). This appears to be slowing down my acquisition significantly. I have a sequence structure that times the Start Task, Read, and Control (I am using 'unreserve') calls, as well as the overall time required to scan through all 24 channels per DAQ. For the former, each channel takes about 31 ms on average. For the latter, each DAQ takes about 800 ms to run through all 24 channels.

 

Based on what I've read so far, it appears that software timing will not be able to achieve a resolution better than 1 ms (I am on a Windows 10 machine).

 

What I've tried:

To try to alleviate this, I used a producer-consumer structure to queue up data but upon using the Tools->Profile->Timing it appeared that the majority of the timing overhead was coming from the DAQmx function calls, not graphing. 

 

I then attempted adding all channels to a single task, starting the task outside of the main loop, and then using channel property nodes to select a single channel at a time per iteration of the main loop, using an incrementing counter to iterate through all 24 channels. The instance of the DAQmx Read in this case was the '1 channel N samples'. This appeared to make the loop run much faster; however, the returned data was an empty array and I got an 'ADC overrun' error.  
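For reference, here is roughly what a single hardware-timed task containing all 24 channels looks like in text form. This is a Python sketch using the nidaqmx package (which wraps the same NI-DAQmx driver the LabVIEW VIs call); the device name and rates are example values. Note that the USB-6218 multiplexes one ADC across its inputs, so the channels in a single task share the device's maximum aggregate rate (e.g. 24 channels at 10 kS/s each stays under 250 kS/s aggregate).

import nidaqmx
from nidaqmx.constants import AcquisitionType, TerminalConfiguration

SAMPLE_RATE = 10_000      # per-channel rate; 24 channels x 10 kS/s = 240 kS/s aggregate
SAMPLES_PER_READ = 750    # number of samples returned per channel on each read

with nidaqmx.Task() as task:
    # One task, all 24 channels, started once -- no per-channel start/stop in the loop
    task.ai_channels.add_ai_voltage_chan("Dev1/ai0:23",
                                         terminal_config=TerminalConfiguration.RSE)
    task.timing.cfg_samp_clk_timing(SAMPLE_RATE,
                                    sample_mode=AcquisitionType.CONTINUOUS)
    task.start()
    for _ in range(100):
        # Returns a list of 24 lists, one per channel, in channel order
        data = task.read(number_of_samples_per_channel=SAMPLES_PER_READ)

As far as I can tell, reading each channel at the full 250 kS/s one at a time is what forces the per-channel start/stop overhead you measured.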

 

I have not experimented with changing the convert rate/sample rate clocks, or using external clocks. I am not sure that this would be the correct way to go.  

 

Apologies in advance for not including code - my company policy does not allow me to upload. I appreciate any help greatly!

 

 

 

 

How to scan numbers with both decimal separators from string


Hello everyone,

 

How to scan for a number regardless of whether decimal separator is a comma (,) or a dot (.)?

 

I am aware that you can specify one or the other (%. or %,), but how can I make it so that both "6,4" and "6.4" are converted to a number?

 

Now I am using "Scan From String".

 

The only option I see is to replace all the dots with commas. Can this be done natively with "Scan From String"?
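For comparison, the usual text-language workaround is exactly that normalization step: pick one separator, replace the other, and then parse. A minimal Python sketch (in LabVIEW the equivalent would be Search and Replace String followed by Scan From String with the matching %. or %, specifier you mentioned):

def parse_number(text):
    # Normalize the decimal separator, then parse.
    # Assumes there are no thousands separators such as "1.234,56";
    # those would need extra handling.
    return float(text.replace(",", "."))

print(parse_number("6,4"))  # 6.4
print(parse_number("6.4"))  # 6.4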

 

NI MAX Task Editing Disabled


I have an NI-DAQmx task that displays the following message when I select it in NI MAX. Does anyone know why I don't get the task settings? Is this a special feature to prevent changing the task settings, or is something corrupted? See the image attached.

 

"Interactive editing has been disabled for this task"

Adding timestamps to raw CAN frames


Hi NI community members,

I am writing a program that collects sensor data and broadcasts it over the CAN bus in real time. For CAN broadcasting, I am using an XNET session of type Signal Single-Point. After getting a list of signals from the DBC file (each signal corresponds to an individual sensor), I create an array that contains all sensor data at a given instant and write it to the XNET session. I have confirmed that this part of the VI runs successfully.

I also want to save all collected data in a TDMS file. My strategy has been to first get raw CAN frames from the CAN signals and then feed those into the TDMS Write VI. The issue, however, is that I am unable to add timestamps to the CAN frames. I have run the program and the data logging is successful, aside from the absence of timestamps.

If you have any tips on how I could add timestamps to the frames I am writing to the TDMS file, please let me know. I tried adding a second channel for an array of timestamps, but that doesn't seem to do the job.

 

Thank you!

(PS. In the VI that I am attaching with this post, you will notice I have assigned random values to each CAN signal. This is to avoid complications related to data acquisition. Once I get this program working, I will substitute the randomly generated values with real time sensor data.)

NI Scope Trigger Mode


I am trying to do an NI-SCOPE capture and I'm having some difficulty. It is a capture of an I2C bus, and it works with the NI-SCOPE Soft Front Panel (SFP) but not with my VI. With the SFP I set the Mode to SGL and the capture works. The problem is that the NI-SCOPE VIs don't have that option available in the property nodes. I am guessing that functionality may be achieved by the way the record length and other parameters are configured, but I'm not sure. Can someone point me to the VI that sets the mode to single, or tell me how to achieve the same result using the available VI options?


How to pass data out of one case structure (if-else) and into another


Hello friends

 

I am using one case structure (if-else) in my project, and it runs at a specific time.

Now I want to take the data out of it, feed it into another case structure, and run that one at a different time.

How can I do that?

Any help is appreciated.

Thanks,
Asif

Excel Easy Title.vi can't run


Hi, 

 

My New Report and Excel Easy Title.vi can't run when I click the run icon; this is what I see:

 

[Attached screenshot: Lchjxlz_0-1627972871705.png]

 

Different reading of myRIO FPGA connectors B (with MXP) and C


Hi,

 

I made a simple test with the myRIO Connector B (MXP). On the hardware, I directly connect ConnectorB/AO0 to ConnectorB/AI0, so the reading should be approximately the same as the output value, in this case around 3 V. However, the reading is completely different. Is there something wrong with my setup or code? (screenshot below)

 

I tried this because I had used ConnectorC/AI* and it was working fine, but in that case I used LabVIEW 2017 instead of LabVIEW myRIO 2017.

 

[Attached screenshot: fpga_myRIO.png]

LabVIEW license


Hello all

 

My deployment PC has a license expiration date. We renew our SSP license every year, but the deployment PC has no internet connection and they don't want to connect it to the internet. Does that mean I have to reactivate the license manually every year?

 

thanks in advance 

 

[Attached screenshot: Honour20_0-1627981665841.png]

 

Setting Radio buttons value to No selection


Hi All,

I have a radio buttons control with 3 options to select/deselect a particular item. By default, at least one of the 3 options is always selected. Is there any way to make all 3 options unchecked (set to false) during initialization of the software?

 

 

Associate a description with the physical channels to be displayed on the graph and in the output file


I would like to associate a descriptive name with each channel. I would like the saved file and the graph to show that channel's description as well as the name of the physical channel.

 

How can I make a graph that shows me the description of each channel on the legend (instead of or in addition to the name of the physical channel)?

 

Sorry, but I can't figure out how to do it, being a newbie with LabVIEW.
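For what it's worth, the usual approach is to give each virtual channel a descriptive name when the task is created, so that graphs, legends, and log files can use that name instead of the physical channel. The sketch below uses the nidaqmx Python package only to show the idea; the device, physical channels, and names are made-up examples, and in LabVIEW the corresponding input is "name to assign" on DAQmx Create Virtual Channel.

import nidaqmx
from nidaqmx.constants import TerminalConfiguration

# Map physical channels to descriptive names (example values)
CHANNELS = {
    "Dev1/ai0": "CoolantTemperature",
    "Dev1/ai1": "InletPressure",
}

with nidaqmx.Task() as task:
    for physical, name in CHANNELS.items():
        task.ai_channels.add_ai_voltage_chan(
            physical,
            name_to_assign_to_channel=name,
            terminal_config=TerminalConfiguration.RSE)
    data = task.read(number_of_samples_per_channel=100)
    # data holds one list of samples per channel, in the order the channels were added
    for name, samples in zip(CHANNELS.values(), data):
        print(name, samples[:3])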

 

[Attached screenshots: RobotC08_0-1627987269112.png, RobotC08_1-1627987299363.png, RobotC08_2-1627987339712.png]

 

How to display an intensity graph from a subVI in the main VI?


Hi, I want to make a simulation of 2D thermal distribution with 3 heat sources at different positions (x1,y1; x2,y2; x3,y3). I already have a simulation of the thermal distribution with 1 source (which wasn't made by me). I will attach pictures of the VI.

 

I only have basic knowledge of LabVIEW, so I keep trying to find an easy way to simulate it. One idea I have is to turn the single-source thermal distribution into a subVI. Then, in the main VI, I would call that subVI 3 times to build the simulation with 3 sources, and the results of those 3 subVI calls would be combined in the main VI. I don't really know whether that's possible.

 

So first, I want to show the intensity graph result from the subVI in the main VI. From this forum I learned about control refnums as a way to show a graph that is created in a subVI, but the block diagram of my thermal distribution VI is pretty complex and I don't really understand it.

 

Is it possible to pass 3 sets of parameters and show 3 intensity graph results from the subVI in the main VI? Sorry if my words are confusing. Thank you for your time and attention.

[Attached screenshots: block diagram.png, front panel.png]


Serial Communication with 9871 [FPGA] + cRIO 9039


Hi,

I am new to the NI environment. I am trying to implement serial communication from an NI 9871 connected to a cRIO-9039 to external hardware that uses 6-byte messages:

Byte1 - 00001001

Byte 2 - 00000100

Byte 3 - ...

.

.

Please share some info on how I can do that, or let me know if anyone has already tried it before. A rough sketch of composing and sending the 6 bytes is below.
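Just to illustrate the data side (not the NI 9871/FPGA configuration itself), sending the message comes down to packing the 6 bytes and writing them to the serial port. A Python/pySerial sketch under assumed settings; the port name, serial parameters, and the last four byte values are placeholders:

import serial  # pySerial

# First two bytes taken from the post; the remaining four are placeholders
frame = bytes([0b00001001, 0b00000100, 0x00, 0x00, 0x00, 0x00])

# Port name and serial settings are assumptions -- match them to the external hardware.
with serial.Serial("COM3", baudrate=9600, bytesize=8, parity="N",
                   stopbits=1, timeout=1.0) as port:
    port.write(frame)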

 

Thank You.!

Problem with TDMS writing and case timing


Hello to all,

I would like to acquire my analog inputs and write them to a TDMS file inside a case structure (true/false) that is triggered every second (see the PDF below).

The problem is that the write never happens: the True case containing my write is never executed, and neither is the False case. It is as if the case structure were blocked and no case runs at all.

The other problem is that, before I added the once-per-second timing, my Excel file only recorded one line of samples; I would like to be able to record the data every second into the same Excel file.
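In text form, the key point is to open the file once and append a new block of samples on every iteration, rather than reopening or overwriting it each time; in LabVIEW that corresponds to opening the TDMS reference before the loop and calling TDMS Write inside the timed case. A Python sketch using the npTDMS package, with placeholder group/channel names and random data standing in for the acquired samples:

import time
import numpy as np
from nptdms import TdmsWriter, ChannelObject

with TdmsWriter("log.tdms") as writer:           # open the file once
    for _ in range(10):                          # e.g. ten one-second iterations
        samples = np.random.rand(1000)           # placeholder for one second of acquired data
        channel = ChannelObject("Acquisition", "AI0", samples, properties={})
        writer.write_segment([channel])          # each call appends a segment to the same file
        time.sleep(1.0)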

 

Semantic segmentation using Deep learning VIs in LabVIEW


Hi!

 

I want to perform semantic segmentation of a series of input image using a deep learning model.

I have an already-trained CNN model in MATLAB; the trained model is saved in ".mat" format. I want to load that trained model into LabVIEW and perform semantic segmentation on a given input image (similar to the images used for training the model). I can find examples explaining how to load a frozen graph of a trained model from a file in .pb format, but I am not finding any sources related to loading a trained model in .mat format.
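For context, "loading a frozen graph from a .pb file" refers to a serialized TensorFlow GraphDef; a minimal Python sketch of what those examples do is below (the file name is a placeholder). A .mat file, by contrast, is a MATLAB container, so as far as I know the model would first have to be exported from MATLAB into an interchange format like that before anything outside MATLAB can load it.

import tensorflow as tf  # assumes TensorFlow 2.x with the v1 compatibility layer

def load_frozen_graph(pb_path):
    # Read the serialized GraphDef from the .pb file
    with tf.io.gfile.GFile(pb_path, "rb") as f:
        graph_def = tf.compat.v1.GraphDef()
        graph_def.ParseFromString(f.read())
    # Import it into a fresh graph so its tensors can be inspected or run
    graph = tf.Graph()
    with graph.as_default():
        tf.compat.v1.import_graph_def(graph_def, name="")
    return graph

graph = load_frozen_graph("frozen_model.pb")  # placeholder file name
print([op.name for op in graph.get_operations()][:10])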

 

Then I want to perform semantic segmentation using that model. I can see that there are some advanced VIs under Vision and Motion-->Machine Vision-->Machine Learning-->Deep Learning, but there are VIs only for image classification and object detection. How can I perform semantic segmentation in this case?

 

Questions:

1. How can a saved, trained deep learning model in '.mat' format be loaded into LabVIEW?

2. If the trained model can be loaded, which VIs or which framework should be used to perform semantic segmentation?

 

Any suggestions are appreciated.

 

Best

Arya

Finding out the IP Address of a sbRIO


Hello Friends,

 

I have a problem finding out the IP address of an sbRIO-9627. I have forgotten the IP address that I configured months ago. Now I want to connect to the sbRIO, but I can't. Is there any way to find out the configured IP address?

 

Thanks in advance. 🙂

What are the different targets which can be added to a LabVIEW project?


Hi.
I have seen different targets added to LabVIEW projects.
Of course, there is NI hardware, including cRIO, PXI, DAQ, sbRIO, ...
Moreover, desktop PCs can be used as RT targets.
I have also seen Raspberry Pi boards and Arduino boards added to a LabVIEW project (though I am not sure whether those are actually real-time or not!).
I have a general question:
What other types of targets can be added to a LabVIEW project?
