Waveform fails to connect to UHD_USRP device when launched in domain manager from IDE - redhawksdr

I have created a waveform in the REDHAWK IDE (version 2.0.2) comprising a USRP_UHD device and a DataConverter. I carefully followed Section 7.2, "Associating a Waveform with an FEI Device," in the manual and used the "Use Frontend Tuner Device" artifact to define a device port and a connection to the DataConverter. The port and connection render on the diagram. I saved the waveform and exported it to SDR.
I created a USRP node, added the UHD_USRP device to the node, and set the IP address. I can launch the node's DeviceManager, allocate the UHD_USRP frontend tuner manually, and confirm data flow on a NextMidas plot.
When I launch the waveform in the REDHAWK_DEV domain manager, the UHD_USRP device in the waveform diagram is missing its port and connection. The REDHAWK Explorer shows that the UHD_USRP device is allocated, but the output port shows an unknown "Connection_1" and the DataConverter input port shows no connection. I recreated the connection manually using the "Connect" menu, but I get no data flow.
This basic USRP connection should be very simple, but I can find no useful discussion in this forum or elsewhere. One observation: I can get the connection working in Python with the following commands:
from ossie.utils import sb
import frontend

sb.catalog(objType='devices')                 # list available devices
usrp = sb.launch('rh.USRP_UHD')
usrp.target_device.ip_address = '192.168.10.2'

# Allocate a controlling RX_DIGITIZER tuner
alloc = frontend.createTunerAllocation("RX_DIGITIZER",
                                       allocation_id="testing",
                                       center_frequency=925.0e6,
                                       sample_rate=20.0e6,
                                       sample_rate_tolerance=20.0)
usrp.allocateCapacity(alloc)

# Add a listener allocation; its id is used as the connection id below
alloc1 = frontend.createTunerListenerAllocation("testing", "listener1")
usrp.allocateCapacity(alloc1)

converter = sb.launch('rh.DataConverter')
converter.maxTransferSize = 262144
usrp.connect(converter, usesPortName="dataShort_out",
             providesPortName="dataShort", connectionId='listener1')

plot2 = sb.RasterPSD(nfft=8192, frameSize=8192)
converter.connect(plot2, usesPortName="dataFloat_out", providesPortName="FloatIn")
sb.start()

This answer is based on old notes, so it may not work. We do most of our connections dynamically through Python these days, and I didn't have a USRP on hand to test with.
I generated a quick waveform the way you described, and the following XML was generated in the .sad.xml file:
<connections>
  <connectinterface id="connection_1">
    <usesport>
      <usesidentifier>dataShort_out</usesidentifier>
      <deviceusedbyapplication usesrefid="rh.USRP_UHD_1"/>
    </usesport>
    <providesport>
      <providesidentifier>dataShort</providesidentifier>
      <componentinstantiationref refid="DataConverter_1"/>
    </providesport>
  </connectinterface>
</connections>
<usesdevicedependencies>
  <usesdevice id="rh.USRP_UHD_1">
    <propertyref refid="DCE:cdc5ee18-7ceb-4ae6-bf4c-31f983179b4d" value="FRONTEND::TUNER"/>
    <propertyref refid="DCE:0f99b2e4-9903-4631-9846-ff349d18ecfb" value="USRP"/>
    <structref refid="FRONTEND::tuner_allocation">
      <simpleref refid="FRONTEND::tuner_allocation::tuner_type" value="RX_DIGITIZER"/>
      <simpleref refid="FRONTEND::tuner_allocation::allocation_id" value="devuser:6e463f2c-fe8f-4997-98e9-39bf1364c861"/>
      <simpleref refid="FRONTEND::tuner_allocation::center_frequency" value="9.0E8"/>
      <simpleref refid="FRONTEND::tuner_allocation::bandwidth" value="0.0"/>
      <simpleref refid="FRONTEND::tuner_allocation::bandwidth_tolerance" value="20.0"/>
      <simpleref refid="FRONTEND::tuner_allocation::sample_rate" value="0.0"/>
      <simpleref refid="FRONTEND::tuner_allocation::sample_rate_tolerance" value="20.0"/>
      <simpleref refid="FRONTEND::tuner_allocation::device_control" value="true"/>
      <simpleref refid="FRONTEND::tuner_allocation::group_id" value=""/>
      <simpleref refid="FRONTEND::tuner_allocation::rf_flow_id" value=""/>
    </structref>
  </usesdevice>
</usesdevicedependencies>
The id in the connectinterface element used to be generated as the same string as the allocation_id of the device. In other words,
<connectinterface id="connection_1">
should be
<connectinterface id="devuser:6e463f2c-fe8f-4997-98e9-39bf1364c861">
in the generated XML above instead of "connection_1". Replace the "devuser:6e463f2c-fe8f-4997-98e9-39bf1364c861" string with whatever allocation_id was generated for your allocation.
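One way to apply this fix programmatically is sketched below with Python's standard xml.etree. The element names match the snippet above, but the SAD document here is a trimmed-down, hypothetical stand-in (a real .sad.xml has more structure and a DOCTYPE), so treat this as an illustration of the id-matching rule rather than a drop-in tool:

```python
import xml.etree.ElementTree as ET

# Trimmed-down stand-in for a real .sad.xml, using the element
# names from the generated XML shown above
SAD = """<softwareassembly>
  <connections>
    <connectinterface id="connection_1"/>
  </connections>
  <usesdevicedependencies>
    <usesdevice id="rh.USRP_UHD_1">
      <structref refid="FRONTEND::tuner_allocation">
        <simpleref refid="FRONTEND::tuner_allocation::allocation_id"
                   value="devuser:6e463f2c-fe8f-4997-98e9-39bf1364c861"/>
      </structref>
    </usesdevice>
  </usesdevicedependencies>
</softwareassembly>"""

def fix_connection_id(sad_xml):
    """Rewrite each connectinterface id to the tuner allocation_id."""
    root = ET.fromstring(sad_xml)
    # Pull the allocation_id out of the tuner allocation structref
    alloc_id = root.find(
        ".//simpleref[@refid='FRONTEND::tuner_allocation::allocation_id']"
    ).get("value")
    # Stamp it onto every connectinterface so connection id == allocation id
    for conn in root.iter("connectinterface"):
        conn.set("id", alloc_id)
    return ET.tostring(root, encoding="unicode")

fixed = fix_connection_id(SAD)
```

The point is the invariant, not the tooling: after the rewrite, the connection id and the allocation_id are the same string, which is what the domain manager needs to wire the FEI port.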

Related

Hazelcast using HD

I am trying to test Hazelcast HD (High-Density Memory Store).
<map name="testMap">
  <!-- <in-memory-format>BINARY</in-memory-format> -->
  <in-memory-format>NATIVE</in-memory-format>
  <backup-count>1</backup-count>
  <async-backup-count>0</async-backup-count>
  <read-backup-data>false</read-backup-data>
</map>
<native-memory allocator-type="POOLED" enabled="true">
  <size unit="GIGABYTES" value="150"/>
</native-memory>
I have no idea where the data is stored. I checked with Management Center and found that max native memory is 30G, but used is always 0.
Log from node below:
INFO: [192.168.129.155]:5701 [dev] [3.5.1] processors=4, physical.memory.total=38.4G, physical.memory.free=2.3G, swap.space.total=1024.0M, swap.space.free=997.4M, heap.memory.used=261.6M, heap.memory.free=205.4M, heap.memory.total=467.0M, heap.memory.max=8.5G, heap.memory.used/total=56.01%, heap.memory.used/max=3.00%, native.memory.used=0, native.memory.free=30.6G, native.memory.total=0, native.memory.max=30.6G, minor.gc.count=324, minor.gc.time=3225ms, major.gc.count=1, major.gc.time=74ms, load.process=100.00%, load.system=100.00%, load.systemAverage=0.40, thread.count=57, thread.peakCount=61, cluster.timeDiff=3, event.q.size=0, executor.q.async.size=0, executor.q.client.size=0, executor.q.query.size=0, executor.q.scheduled.size=0, executor.q.io.size=0, executor.q.system.size=0, executor.q.operation.size=0, executor.q.priorityOperation.size=0, executor.q.response.size=0, operations.remote.size=0, operations.running.size=0, operations.pending.invocations.count=0, operations.pending.invocations.percentage=0.00%, proxy.count=2, clientEndpoint.count=1, connection.active.count=13, client.connection.count=1, connection.count=9
Heap memory does not increase, non-heap shows no change, and native shows no change. Where is the data stored?
Am I missing something?
Update: we are using Hazelcast version 3.5 and Management Center version 3.5; both are licensed versions.
You should use Hazelcast and Management Center version 3.8.1.
The latest version of Management Center shows memory consumption and entry cost for both binary and native memory.
Thanks

Hazelcast Cluster members going out of memory due to huge number of "IsStillRunningService" objects

We have a system that makes use of the Hazelcast IExecutorService and IMap on version 3.5. We recently encountered Hazelcast cluster members going out of memory in production, one after the other, until all nodes had crashed with OOM.
While doing the root-cause analysis, we found thousands of the log entries below, and the log file size grew exponentially. The storage volume holding the logs also ran out of space.
WARNING: [10.7.90.189]:30103 [FB] [3.5] Asking if operation execution has been started: com.hazelcast.spi.impl.operationservice.impl.IsStillRunningService$InvokeIsStillRunningOperationRunnable#48b3ac3b
Mar 30, 2016 11:09:29 AM com.hazelcast.spi.impl.operationservice.impl.Invocation
WARNING: [10.7.90.189]:30103 [FB] [3.5] While asking 'is-executing': Invocation{ serviceName='hz:core:partitionService', op=com.hazelcast.spi.impl.operationservice.impl.operations.IsStillExecutingOperation{serviceName='hz:core:partitionService', partitionId=-1, callId=59834, invocationTime=1459349279980, waitTimeout=-1, callTimeout=5000}, partitionId=-1, replicaIndex=0, tryCount=0, tryPauseMillis=0, invokeCount=1, callTimeout=5000, target=Address[1.2.3.4]:30102, backupsExpected=0, backupsCompleted=0}
com.hazelcast.core.OperationTimeoutException: No response for 10000 ms. Aborting invocation! Invocation{ serviceName='hz:core:partitionService', op=com.hazelcast.spi.impl.operationservice.impl.operations.IsStillExecutingOperation{serviceName='hz:core:partitionService', partitionId=-1, callId=268177, invocationTime=1459349295209, waitTimeout=-1, callTimeout=5000}, partitionId=-1, replicaIndex=0, tryCount=0, tryPauseMillis=0, invokeCount=1, callTimeout=5000, target=Address[10.7.90.190]:30102, backupsExpected=0, backupsCompleted=0} No response has been received! backups-expected:0 backups-completed: 0
at com.hazelcast.spi.impl.operationservice.impl.Invocation.newOperationTimeoutException(Invocation.java:491)
at com.hazelcast.spi.impl.operationservice.impl.IsStillRunningService$IsOperationStillRunningCallback.setOperationTimeout(IsStillRunningService.java:224)
at com.hazelcast.spi.impl.operationservice.impl.IsStillRunningService$IsOperationStillRunningCallback.onFailure(IsStillRunningService.java:219)
at com.hazelcast.spi.impl.operationservice.impl.InvocationFuture$1.run(InvocationFuture.java:137)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)
at com.hazelcast.util.executor.HazelcastManagedThread.executeRun(HazelcastManagedThread.java:76)
at com.hazelcast.util.executor.HazelcastManagedThread.run(HazelcastManagedThread.java:92)
I understand that cluster members keep sending heartbeats to make sure all the members are alive, and I believe the default interval is 10 seconds. The problem is that if any member becomes unresponsive or hung, the rest of the members keep making is-executing calls. After looking into the heap dump, we found that more than 73% of the heap was filled with "IsStillRunningService" objects.
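The accumulation described above can be illustrated with a toy model. This is not Hazelcast code, and the class and numbers are invented for illustration: each heartbeat interval spawns an is-executing check per pending invocation, and checks against a hung member never complete, so they pile up in an unbounded queue.

```python
from collections import deque

class IsExecutingProbe:
    """Toy model (not Hazelcast code) of why a hung member fills the
    heap: every heartbeat tick, each pending invocation spawns an
    'is-still-running' check; checks against a hung member never get
    an answer, so they accumulate without bound."""
    def __init__(self):
        self.pending_checks = deque()

    def heartbeat_tick(self, pending_invocations, member_responsive):
        # One new check object per pending invocation, per tick
        for inv in pending_invocations:
            self.pending_checks.append(("is-executing?", inv))
        if member_responsive:
            self.pending_checks.clear()  # answers drain the queue

probe = IsExecutingProbe()
invocations = range(100)          # 100 in-flight operations
for _ in range(50):               # 50 heartbeat ticks, member hung
    probe.heartbeat_tick(invocations, member_responsive=False)

len(probe.pending_checks)         # 100 * 50 = 5000 and still growing
```

With a responsive member the queue drains each tick; with a hung member it grows linearly in ticks times in-flight operations, which matches a heap dominated by these check objects.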
Questions:
How can we determine what exactly went wrong?
Is running out of storage space just a coincidence, or could there be a correlation? We suspect one might have led to the other, since it happened twice within a week.
Hazelcast XML Configuration:
<hazelcast xsi:schemaLocation="http://www.hazelcast.com/schema/config http://www.hazelcast.com/schema/config/hazelcast-config-3.5.xsd"
           xmlns="http://www.hazelcast.com/schema/config"
           xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance">
  <map name="myMap">
    <backup-count>0</backup-count>
    <time-to-live-seconds>43200</time-to-live-seconds>
    <eviction-policy>LRU</eviction-policy>
    <max-size policy="USED_HEAP_PERCENTAGE">75</max-size>
    <eviction-percentage>10</eviction-percentage>
    <in-memory-format>OBJECT</in-memory-format>
  </map>
  <executor-service name="calculation">
    <pool-size>10</pool-size>
    <queue-capacity>400</queue-capacity>
  </executor-service>
  <executor-service name="loader">
    <pool-size>5</pool-size>
    <queue-capacity>400</queue-capacity>
  </executor-service>
  <properties>
    <property name="hazelcast.icmp.timeout">5000</property>
    <property name="hazelcast.initial.wait.seconds">10</property>
    <property name="hazelcast.connection.monitor.interval">5000</property>
  </properties>
  <network>
    <port auto-increment="true" port-count="100">30101</port>
    <join>
      <multicast enabled="false">
        <multicast-group>224.2.2.3</multicast-group>
        <multicast-port>54327</multicast-port>
      </multicast>
      <tcp-ip enabled="true">
        <interface>1.2.3.4</interface>
        <interface>1.2.3.5</interface>
        <interface>1.2.3.6</interface>
      </tcp-ip>
      <aws enabled="false"/>
    </join>
    <interfaces enabled="false">
      <interface>127.0.0.1</interface>
    </interfaces>
  </network>
</hazelcast>
(Heap-dump screenshots: the stack trace, and a LinkedBlockingQueue holding the IsStillRunningService objects)
Can you upgrade to 3.6? Fixes were added to prevent running into OOME through the is-still-running mechanism. In 3.7 the whole mechanism will be removed and replaced by a less problematic approach.
https://github.com/hazelcast/hazelcast/pull/7719

inbound sftp channel adapter custom filter not accepting same file again

I have a very simple custom filter for the inbound SFTP channel adapter, where I just check whether the file extension is in a list of accepted extensions. If so, it returns true and should allow the file to be processed.
What is happening: the first time a file is processed, it works fine. If the same file is dropped on my SFTP server again, it reaches the filter, which returns true (i.e., the file is accepted), yet the adapter does not put a message on the downstream queue. Here is what my sample config looks like:
<int-sftp:inbound-channel-adapter id="sftpAdapter"
        channel="ftpChannel"
        session-factory="sftpSessionFactory"
        local-directory="c:\\temp"
        remote-directory="//test//inbound"
        remote-file-separator="/"
        auto-create-local-directory="true"
        delete-remote-files="true"
        filter="customfilter"
        preserve-timestamp="true">
    <int:poller cron="0/5 * * * * *" max-messages-per-poll="1"/>
</int-sftp:inbound-channel-adapter>
That's because there is one more FileListFilter in the AbstractInboundFileSynchronizingMessageSource:
private volatile FileListFilter<File> localFileListFilter = new AcceptOnceFileListFilter<File>();
Since your filter="customfilter" already takes care of the duplicate-handling logic, you should configure a local-filter that accepts everything:
<int-sftp:inbound-channel-adapter id="sftpAdapter"
        channel="ftpChannel"
        ....
        local-filter="acceptAllFileFilter"/>

<bean id="acceptAllFileFilter"
      class="org.springframework.integration.file.filters.AcceptAllFileListFilter"/>
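The behavior of the default AcceptOnceFileListFilter versus AcceptAllFileListFilter can be shown with a short language-agnostic sketch (Python here, standing in for the Spring classes): once the accept-once filter has seen a file name, it never passes it again, which is exactly why the second drop of the same file silently disappears.

```python
class AcceptOnceFilter:
    """Mimics Spring Integration's AcceptOnceFileListFilter: each file
    passes at most once, no matter how often it reappears."""
    def __init__(self):
        self._seen = set()

    def filter_files(self, files):
        accepted = [f for f in files if f not in self._seen]
        self._seen.update(accepted)
        return accepted

class AcceptAllFilter:
    """Mimics AcceptAllFileListFilter: passes everything, every time."""
    def filter_files(self, files):
        return list(files)

once = AcceptOnceFilter()
first = once.filter_files(["report.csv"])   # passes on the first poll
second = once.filter_files(["report.csv"])  # dropped: already seen

always = AcceptAllFilter()
third = always.filter_files(["report.csv"]) # passes on every poll
```

Swapping the local-filter to the accept-all variant (as in the config above) hands duplicate control entirely to your remote custom filter.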

USRP_UHD source and sink for redhawk

I installed the latest version of REDHAWK (v1.9.0) and USRP_UHD from the GitHub repository (https://github.com/RedhawkSDR/USRP_UHD), but I have no idea how to build a USRP source/sink.
Are these components already available in some repository?
If not, can someone help me build this source/sink?
Thanks in advance,
Carmine
As John C said, in order to control the USRP_UHD Device, you must perform an allocation onto one of the tuners. In RedHawk 1.9, the steps to do this are as follows:
Open the waveform in which you would like to perform the allocation and view the *.sad.xml file
Below the connections section (after the </connections> tag) add the following:
<usesdevicedependencies>
  <usesdevice id="DCE:#UUID#" type="usesUSRP">
    <propertyref refid="DCE:cdc5ee18-7ceb-4ae6-bf4c-31f983179b4d" value="FRONTEND"/>
    <propertyref refid="DCE:0f99b2e4-9903-4631-9846-ff349d18ecfb" value="USRP"/>
    <structref refid="FRONTEND::tuner_allocation">
      <simpleref refid="FRONTEND::tuner_allocation::tuner_type" value="#TUNER_TYPE#"/>
      <simpleref refid="FRONTEND::tuner_allocation::allocation_id" value="usrpAllocation"/>
      <simpleref refid="FRONTEND::tuner_allocation::center_frequency" value="99100000"/>
      <simpleref refid="FRONTEND::tuner_allocation::bandwidth" value="1000000"/>
      <simpleref refid="FRONTEND::tuner_allocation::sample_rate" value="1000000"/>
      <simpleref refid="FRONTEND::tuner_allocation::group_id" value=""/>
      <simpleref refid="FRONTEND::tuner_allocation::rf_flow_id" value=""/>
    </structref>
  </usesdevice>
</usesdevicedependencies>
In the connections section, (after the <connections> tag) add the following:
<connectinterface id="usrpAllocation">
  <usesport>
    <usesidentifier>dataShort_out</usesidentifier>
    <deviceusedbyapplication usesrefid="DCE:#SAME_UUID_AS_ABOVE#"/>
  </usesport>
  <providesport>
    <providesidentifier>#INPUT_PORT_NAME#</providesidentifier>
    <componentinstantiationref refid="#NAME_OF_COMPONENT_IN_WAVEFORM#"/>
  </providesport>
</connectinterface>
Save the waveform and install it in SDRROOT
In the second step, what you are actually doing is specifying that the waveform depends on a certain device. The dependency is uniquely identified by the usesdevice id. Here you should replace #UUID# with the output of running the uuidgen command in a terminal. Next, you must identify which device the waveform depends on, which is accomplished with the FRONTEND and USRP property references. Finally, you have to specify the parameters of the allocation to the device so that it will set the tuner up for you. Replace #TUNER_TYPE# with RX_DIGITIZER if you would like to receive data, or TX if you would like to transmit data. The allocation id can remain as it is, unless you intend to have multiple allocations on the device; in that case, each allocation must have a unique allocation id. The rest of the parameters are fairly self-explanatory, although it should be noted that the center_frequency parameter should be specified in Hz, and the sample_rate parameter is the complex sample rate.
In the third step, what you are doing is connecting the device to a component in your waveform. The connectinterface id should match the allocation id from the second step, and the deviceusedbyapplication usesrefid should match the usesdevice id from the second step. The #INPUT_PORT_NAME# should match the name of an input port on your component and the #NAME_OF_COMPONENT_IN_WAVEFORM# should match the usagename of the component you want to connect to.
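To make the matching explicit: the usesdevice id and the deviceusedbyapplication usesrefid must be the same string. A small sketch in plain Python (standing in for running uuidgen and editing the file by hand; the fragment is a hypothetical trimmed-down version of the template above) that fills both placeholders from one generated UUID:

```python
import uuid

# Hypothetical, trimmed-down fragment of the SAD template above; the
# only point here is that both DCE ids come from the same UUID.
TEMPLATE = """\
<usesdevice id="DCE:{uid}" type="usesUSRP">
  ...
</usesdevice>
<connectinterface id="usrpAllocation">
  <usesport>
    <usesidentifier>dataShort_out</usesidentifier>
    <deviceusedbyapplication usesrefid="DCE:{uid}"/>
  </usesport>
</connectinterface>"""

uid = uuid.uuid4()               # same role as running `uuidgen`
filled = TEMPLATE.format(uid=uid)
```

If the two ids drift apart (say, after a copy-paste edit), the domain manager cannot tie the connection back to the device dependency, which is exactly the failure mode described above.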
The USRP device is a FrontEnd Interfaces (FEI) compliant device, so in order to get data to flow out of the dataShort_out port, a tuner must be allocated. When you perform this allocation you will provide an allocation ID that is unique. If the allocation request returns successfully, you can then use this allocation ID as the connection ID for the dataShort_out port. For more information on FrontEnd Interfaces, check out the documentation here.

Spring Integration: create dynamic directories using ftp:outbound-adapter

We would like to be able to change the FTP directory on a channel after the channel has been created. In our particular use case, the subdirectory for an FTP put is determined at runtime. For example, we have daily reports uploaded by users; they should be stored on the FTP server in day-wise folders, e.g. test/reports/27-11-2012/abc.pdf, test/reports/28-11-2012/abc.pdf, etc.
Somewhat like this:
<int-ftp:outbound-channel-adapter id="ftpOutbound" channel="ftpChannel" remote-directory="remoteDirectoryPath"
session-factory="ftpClientFactory" />
remoteDirectoryPath should be appended at runtime.
Can anybody give us a solution?
Use remote-directory-expression.
#beanName.method() is currently not available in this expression, so you will need to use SpEL for the directory generation:
"'test' + T(java.io.File).separator + new java.text.SimpleDateFormat('yyyyMMdd').format(new java.util.Date())"
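That SpEL expression just concatenates a base directory, the platform separator, and a formatted date. A quick sanity check of the resulting layout in plain Python (for illustration only; the adapter itself needs the SpEL form, and note that in Java's SimpleDateFormat the day-of-month pattern is lowercase dd, while uppercase DD means day-of-year):

```python
import os
from datetime import date

def daily_remote_dir(base="test"):
    """Day-wise remote directory name, e.g. 'test/20121128'."""
    # strftime %Y%m%d matches SimpleDateFormat 'yyyyMMdd'
    return base + os.sep + date.today().strftime("%Y%m%d")

remote_dir = daily_remote_dir()
```

Each poll evaluates the expression afresh, so at midnight the uploads automatically roll over into the next day's folder.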
You can assign a directory/path at runtime in ftp:outbound-channel-adapter.
I am copying my working configuration here so you can check it out.
xml side:
<int-ftp:outbound-channel-adapter id="ftpOutboundAdapter"
        session-factory="ftpSessionFactory"
        channel="sftpOutboundChannel"
        remote-directory-expression="#targetDir.get()"
        remote-filename-generator="fileNameGenerator"/>

<bean id="targetDir" class="java.util.concurrent.atomic.AtomicReference">
  <constructor-arg value="D:\PATH\"/>
</bean>
In this block, remote-directory-expression="#targetDir.get()" is what sets the directory/path at runtime.
Java side:
AtomicReference<String> targetDir = (AtomicReference<String>) appContext.getBean("targetDir", AtomicReference.class);
targetDir.set("E:\\PATH\\");  // backslashes must be escaped in a Java string literal
This way you can set your path at runtime.
