USRP_UHD source and sink for redhawk - redhawksdr

I installed the latest version of REDHAWK (v1.9.0) and USRP_UHD from the GitHub repository (https://github.com/RedhawkSDR/USRP_UHD), but I have no idea how to build a USRP source/sink.
Are these components already available in some repository?
If not, can someone help me build this source/sink?
Thanks in advance,
Carmine

As John C said, in order to control the USRP_UHD Device, you must perform an allocation onto one of the tuners. In REDHAWK 1.9, the steps to do this are as follows:
Open the waveform in which you would like to perform the allocation and view the *.sad.xml file
Below the connections section (after the </connections> tag), add the following:
<usesdevicedependencies>
    <usesdevice id="DCE:#UUID#" type="usesUSRP">
        <propertyref refid="DCE:cdc5ee18-7ceb-4ae6-bf4c-31f983179b4d" value="FRONTEND"/>
        <propertyref refid="DCE:0f99b2e4-9903-4631-9846-ff349d18ecfb" value="USRP"/>
        <structref refid="FRONTEND::tuner_allocation">
            <simpleref refid="FRONTEND::tuner_allocation::tuner_type" value="#TUNER_TYPE#"/>
            <simpleref refid="FRONTEND::tuner_allocation::allocation_id" value="usrpAllocation"/>
            <simpleref refid="FRONTEND::tuner_allocation::center_frequency" value="99100000"/>
            <simpleref refid="FRONTEND::tuner_allocation::bandwidth" value="1000000"/>
            <simpleref refid="FRONTEND::tuner_allocation::sample_rate" value="1000000"/>
            <simpleref refid="FRONTEND::tuner_allocation::group_id" value=""/>
            <simpleref refid="FRONTEND::tuner_allocation::rf_flow_id" value=""/>
        </structref>
    </usesdevice>
</usesdevicedependencies>
In the connections section (after the <connections> tag), add the following:
<connectinterface id="usrpAllocation">
    <usesport>
        <usesidentifier>dataShort_out</usesidentifier>
        <deviceusedbyapplication usesrefid="DCE:#SAME_UUID_AS_ABOVE#"/>
    </usesport>
    <providesport>
        <providesidentifier>#INPUT_PORT_NAME#</providesidentifier>
        <componentinstantiationref refid="#NAME_OF_COMPONENT_IN_WAVEFORM#"/>
    </providesport>
</connectinterface>
Save the waveform and install it in SDRROOT
In the second step, what you are actually doing is specifying that the waveform depends on a certain device. The dependency is uniquely identified by the usesdevice id. Here you should replace #UUID# with the output of running the uuidgen command in a terminal. Next, you must identify which device the waveform depends on, which is accomplished with the FRONTEND and USRP property references. Finally, you have to specify the parameters of the allocation to the device so that it will set the tuner up for you. Replace #TUNER_TYPE# with RX_DIGITIZER if you would like to receive data, or TX if you would like to transmit data. The allocation id can remain as it is, unless you intend to have multiple allocations onto the device; in that case, you must have a unique allocation id for each allocation. The rest of the parameters are fairly self-explanatory, although it should be noted that the center_frequency parameter is specified in Hz, and the sample_rate parameter is the complex sample rate.
In the third step, what you are doing is connecting the device to a component in your waveform. The connectinterface id should match the allocation id from the second step, and the deviceusedbyapplication usesrefid should match the usesdevice id from the second step. The #INPUT_PORT_NAME# should match the name of an input port on your component and the #NAME_OF_COMPONENT_IN_WAVEFORM# should match the usagename of the component you want to connect to.

The USRP device is a FrontEnd Interfaces (FEI) compliant device, so in order to get data to flow out of the dataShort_out port, a tuner must be allocated. When you perform this allocation, you provide a unique allocation ID. If the allocation request succeeds, you can then use this allocation ID as the connection ID for the dataShort_out port. For more information on FrontEnd Interfaces, check out the documentation here
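As a concrete illustration, here is a minimal Python sandbox sketch of that allocate-then-connect pattern. It assumes a 2.0-style install where the device is named rh.USRP_UHD, a placeholder IP address and tuner settings, and a DataSink whose short input port is named 'shortIn'; adjust all of these to your own setup:
from ossie.utils import sb
import frontend

usrp = sb.launch('rh.USRP_UHD')
usrp.target_device.ip_address = '192.168.10.2'  # placeholder IP for your USRP

# Request an RX_DIGITIZER tuner; center_frequency is in Hz, sample_rate is the complex sample rate
alloc = frontend.createTunerAllocation("RX_DIGITIZER",
    allocation_id="usrpAllocation", center_frequency=99.1e6,
    sample_rate=1.0e6, sample_rate_tolerance=20.0)

if usrp.allocateCapacity(alloc):
    sink = sb.DataSink()
    # Reuse the allocation ID as the connection ID on dataShort_out
    usrp.connect(sink, usesPortName="dataShort_out", providesPortName="shortIn",
        connectionId="usrpAllocation")
    sb.start()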

Related

Spring File Inbound Channel Adapter slows down

I have a spring-integration flow that starts with a file inbound-channel-adapter activated by a transactional poller (tx is handled by atomikos).
The text in the file is processed and the message goes down through the flow until it gets sent to one of the JMS queues (JMS outbound-channel-adapter).
In the middle, there are some database writes within a nested transaction.
The system is meant to run 24/7.
It happens that the single message flow progressively slows down, and when I investigated, I found that the stage responsible for the increasing delay is the read from the filesystem.
Below is the first portion of the integration flow:
<logging-channel-adapter id="logger" level="INFO"/>
<transaction-synchronization-factory id="sync-factory">
    <after-commit expression="payload.delete()" channel="after-commit"/>
</transaction-synchronization-factory>
<service-activator input-channel="after-commit" output-channel="nullChannel" ref="tx-after-commit-service"/>
<!-- typeb inbound from filesystem -->
<file:inbound-channel-adapter id="typeb-file-inbound-adapter"
        auto-startup="${fs.typeb.enabled:true}"
        channel="typeb-inbound"
        directory="${fs.typeb.directory.in}"
        filename-pattern="${fs.typeb.filter.filenamePattern:*}"
        prevent-duplicates="${fs.typeb.preventDuplicates:false}">
    <poller id="poller"
            fixed-delay="${fs.typeb.polling.millis:1000}"
            max-messages-per-poll="${fs.typeb.polling.numMessages:-1}">
        <transactional synchronization-factory="sync-factory"/>
    </poller>
</file:inbound-channel-adapter>
<channel id="typeb-inbound">
    <interceptors>
        <wire-tap channel="logger"/>
    </interceptors>
</channel>
I read something about issues related to the prevent-duplicates option that stores a list of seen files, but that is not the case because I turned it off.
I don't think it is related to the filter (filename-pattern), because the expression I use in my config (*.RCV) is easy to apply and the input folder does not contain a lot of files (fewer than 100) at any one time.
Still, something gradually makes the read from the filesystem slower and slower over time, from a few milliseconds to over 3 seconds within a few days of uptime.
Any hints?
You should remove or move files after they have been processed; otherwise the whole directory has to be rescanned.
In newer versions, you can use a WatchServiceDirectoryScanner which is more efficient.
But it's still best practice to clean up old files.
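If you are on a version that supports it (Spring Integration 4.3 or later), here is a sketch of how the adapter from the question could be switched over to the watch-service-based scanner; the use-watch-service and watch-events attribute names assume the 4.3+ file namespace, so check them against your schema version:
<!-- Sketch only: assumes Spring Integration 4.3+, which adds watch-service support to the file adapter -->
<file:inbound-channel-adapter id="typeb-file-inbound-adapter"
        channel="typeb-inbound"
        directory="${fs.typeb.directory.in}"
        filename-pattern="${fs.typeb.filter.filenamePattern:*}"
        use-watch-service="true"
        watch-events="CREATE">
    <poller id="poller"
            fixed-delay="${fs.typeb.polling.millis:1000}"
            max-messages-per-poll="${fs.typeb.polling.numMessages:-1}">
        <transactional synchronization-factory="sync-factory"/>
    </poller>
</file:inbound-channel-adapter>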
Finally I got the solution.
The issue was related to the specific version of Spring I was using (4.3.4), which is affected by a bug I had not yet discovered.
The problem involves DefaultConversionService and its use of the converterCache (see https://jira.spring.io/browse/SPR-14929 for details).
Upgrading to a more recent version resolved it.

Waveform fails to connect to UHD_USRP device when launched in domain manager from IDE

I have created a waveform in the REDHAWKSDR IDE (Version 2.0.2) comprising a USRP_UHD device and a DataConverter. I carefully followed Section 7.2 "Associating a Waveform with an FEI Device" in the manual and used the "Use Frontend Tuner Device" artifact to define a device port and connection to the DataConverter. The port and connection render on the diagram. I saved the waveform and exported it to SDR.
I created a USRP node, added the UHD_USRP device to the node, and set the IP address. I can launch the node's DeviceManager, allocate the UHD_USRP frontend tuner manually, and confirm data flow on a NextMidas plot.
When I launch the waveform in the REDHAWK_DEV domain manager, the UHD_USRP device in the waveform diagram is missing its port and connection. The REDHAWK Explorer shows that the UHD_USRP device is allocated, but the output port shows an unknown "Connection_1" and the DataConverter input port shows no connection. I recreated the connection manually using the "Connect" menu, but I get no data flow.
This basic USRP connection should be very simple, but I find no useful discussion in this forum or elsewhere. One observation: I can get the connection working in Python with the following commands:
from ossie.utils import sb
import frontend
sb.catalog(objType='devices')
usrp = sb.launch('rh.USRP_UHD')
usrp.target_device.ip_address = '192.168.10.2'
alloc = frontend.createTunerAllocation("RX_DIGITIZER",
    allocation_id="testing", center_frequency=925.0e6, sample_rate=20.0e6, sample_rate_tolerance=20.0)
usrp.allocateCapacity(alloc)
alloc1 = frontend.createTunerListenerAllocation("testing", "listener1")
usrp.allocateCapacity(alloc1)
converter = sb.launch('rh.DataConverter')
converter.maxTransferSize = 262144
usrp.connect(converter, usesPortName="dataShort_out", providesPortName="dataShort", connectionId='listener1')
plot2 = sb.RasterPSD(nfft=8192, frameSize=8192)
converter.connect(plot2, usesPortName="dataFloat_out", providesPortName="FloatIn")
sb.start()
This answer is based on old notes so it may not work. We do most of our connections dynamically through python these days. I didn't have a USRP on hand to test with.
I generated a quick waveform the way you described and the following XML was generated in the .sad.xml file:
<connections>
    <connectinterface id="connection_1">
        <usesport>
            <usesidentifier>dataShort_out</usesidentifier>
            <deviceusedbyapplication usesrefid="rh.USRP_UHD_1"/>
        </usesport>
        <providesport>
            <providesidentifier>dataShort</providesidentifier>
            <componentinstantiationref refid="DataConverter_1"/>
        </providesport>
    </connectinterface>
</connections>
<usesdevicedependencies>
    <usesdevice id="rh.USRP_UHD_1">
        <propertyref refid="DCE:cdc5ee18-7ceb-4ae6-bf4c-31f983179b4d" value="FRONTEND::TUNER"/>
        <propertyref refid="DCE:0f99b2e4-9903-4631-9846-ff349d18ecfb" value="USRP"/>
        <structref refid="FRONTEND::tuner_allocation">
            <simpleref refid="FRONTEND::tuner_allocation::tuner_type" value="RX_DIGITIZER"/>
            <simpleref refid="FRONTEND::tuner_allocation::allocation_id" value="devuser:6e463f2c-fe8f-4997-98e9-39bf1364c861"/>
            <simpleref refid="FRONTEND::tuner_allocation::center_frequency" value="9.0E8"/>
            <simpleref refid="FRONTEND::tuner_allocation::bandwidth" value="0.0"/>
            <simpleref refid="FRONTEND::tuner_allocation::bandwidth_tolerance" value="20.0"/>
            <simpleref refid="FRONTEND::tuner_allocation::sample_rate" value="0.0"/>
            <simpleref refid="FRONTEND::tuner_allocation::sample_rate_tolerance" value="20.0"/>
            <simpleref refid="FRONTEND::tuner_allocation::device_control" value="true"/>
            <simpleref refid="FRONTEND::tuner_allocation::group_id" value=""/>
            <simpleref refid="FRONTEND::tuner_allocation::rf_flow_id" value=""/>
        </structref>
    </usesdevice>
</usesdevicedependencies>
The id in the connectinterface element needs to be the same as the allocation_id of the device allocation.
<connectinterface id="connection_1">
should be
<connectinterface id="devuser:6e463f2c-fe8f-4997-98e9-39bf1364c861">
in the generated code above instead of "connection_1". Replace the "devuser:6e463f2c-fe8f-4997-98e9-39bf1364c861" string with whatever was generated for your allocation.
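For clarity, the corrected connection section from the generated SAD above would then read as follows (with whatever allocation_id the IDE generated for you in place of this one):
<connectinterface id="devuser:6e463f2c-fe8f-4997-98e9-39bf1364c861">
    <usesport>
        <usesidentifier>dataShort_out</usesidentifier>
        <deviceusedbyapplication usesrefid="rh.USRP_UHD_1"/>
    </usesport>
    <providesport>
        <providesidentifier>dataShort</providesidentifier>
        <componentinstantiationref refid="DataConverter_1"/>
    </providesport>
</connectinterface>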

spring-xd custom redis-sink

So I think I need to extend the current redis-sink provided in spring-xd to write into a Redis capped list, rather than creating a new sink. Unfortunately it seems to get worse, as I will have to go deeper into spring-integration and further back into spring-data (spring-data-redis), because the whole redis-sink seems to be based on the generic pub/sub abstraction on Redis. Or is there some type of handler that can be defined once the message arrives at the channel?
In order to get the "effect of a capped list" when I push data to Redis, I need to execute both a Redis "push" and then an "rtrim", as outlined here: http://redis.io/topics/data-types-intro. If I am to build a custom spring-integration / spring-data module, I believe I see support for the "ltrim" but not the "rtrim" operation here: http://docs.spring.io/spring-data/redis/docs/1.7.0.RC1/api/
Any advice on how/where to start, or an easier approach, would be appreciated.
Actually, even Redis doesn't have an RTRIM command. We don't need it, because we can achieve the same behavior with negative indexes for LTRIM:
start and end can also be negative numbers indicating offsets from the end of the list, where -1 is the last element of the list, -2 the penultimate element and so on.
I think you should use <redis:store-outbound-channel-adapter> and add something like this into its configuration:
<int-redis:request-handler-advice-chain>
    <beans:bean class="org.springframework.integration.handler.advice.ExpressionEvaluatingRequestHandlerAdvice">
        <beans:property name="onSuccessExpression" value="#redisTemplate.boundListOps(${keyExpression}).trim(1, -1)"/>
    </beans:bean>
</int-redis:request-handler-advice-chain>
This removes the oldest element from the Redis list after each successful store.
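Putting the pieces together, here is a sketch of what the full adapter could look like; the channel name, the key 'redisList', and the #redisTemplate reference in the expression are assumptions, so wire in your own names (and your own template/key) as appropriate:
<int-redis:store-outbound-channel-adapter id="cappedListAdapter"
        channel="toRedis"
        collection-type="LIST"
        key="redisList">
    <int-redis:request-handler-advice-chain>
        <beans:bean class="org.springframework.integration.handler.advice.ExpressionEvaluatingRequestHandlerAdvice">
            <!-- after each successful store, drop the oldest element (index 0) from the list -->
            <beans:property name="onSuccessExpression"
                    value="#redisTemplate.boundListOps('redisList').trim(1, -1)"/>
        </beans:bean>
    </int-redis:request-handler-advice-chain>
</int-redis:store-outbound-channel-adapter>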

aggregator release strategy depend on another service activator running

I understand how aggregating based on size works, but I also want to make the release strategy depend on another step in the pipeline still running. The idea is that I move files to a certain dir "source", aggregate enough files, then move them from "source" to "stage", and then process the staged files. While this process is running I don't want to put more files into stage, but I do want to continue adding files to the source folder (that part is handled by a dispatcher channel connected to a file inbound adapter before the aggregator).
<int:aggregator id="filesBuffered"
        input-channel="sourceFilesProcessed"
        output-channel="stagedFiles"
        release-strategy-expression="size() == 10"
        correlation-strategy-expression="'mes-group'"
        expire-groups-upon-completion="true"
        />
<int:channel id="stagedFiles" />
<int:service-activator input-channel="stagedFiles"
        output-channel="readyForMes"
        ref="moveToStage"
        method="move" />
So, as you can see, I don't want to release the aggregated messages while an existing instance of the moveToStage service activator is running.
I thought about making the stagedFiles channel a queue channel, but that doesn't seem right, because I want the files to be passed to moveToStage as a Collection rather than one file at a time, which I assume is what a queue channel would do. Instead I want to reach a threshold, e.g. 10 files, and pass those to stagedFiles so that moveToStage can process them; until that step is done I want the aggregator to keep aggregating files and only then release everything it has collected.
Thanks
I suggest you keep a flag as an AtomicBoolean bean, toggle it from your moveToStage#move, and check its state from:
release-strategy-expression="size() >= 10 and #stagingFlag.get()"
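A minimal sketch of how that could be wired, assuming a bean named stagingFlag; moveToStage#move would set the flag to false when it starts and back to true in a finally block when it finishes:
<!-- Sketch only: bean name and initial value are assumptions -->
<beans:bean id="stagingFlag" class="java.util.concurrent.atomic.AtomicBoolean">
    <beans:constructor-arg value="true"/>
</beans:bean>

<int:aggregator id="filesBuffered"
        input-channel="sourceFilesProcessed"
        output-channel="stagedFiles"
        correlation-strategy-expression="'mes-group'"
        <!-- referenced here as a bean (@stagingFlag); the #stagingFlag form above assumes the flag is exposed as a SpEL variable -->
        release-strategy-expression="size() >= 10 and @stagingFlag.get()"
        expire-groups-upon-completion="true"/>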

Design: Spring Integration: Jdbc-Inbound-adapter in Clustered Environment

We have a clustered environment with 2 nodes on Oracle WebLogic 10.3.6, and requests are load-balanced round-robin.
I have a service which gets messages from an external system and puts them into the database (Oracle DB).
I am using a jdbc-inbound-adapter to convert these messages and pass them to the channels.
To have each message processed only once, I am planning to add a column (NODE_NAME) to the DB table. The service that gets the message from the external system also updates the column with the NODE_NAME (weblogic.Name). If I then filter on NODE_NAME in the SELECT query of the jdbc-inbound-adapter, each message would be processed only once.
i.e., if Service1 (on Node1) saves the message in the DB, then inbound-adapter1 (on Node1) passes the message to the channel.
Example:
<si-jdbc:inbound-channel-adapter id="jdbcInboundAdapter"
        channel="queueChannel" data-source="myDataSource"
        auto-startup="true"
        query="SELECT * FROM STAGE_TABLE WHERE STATUS='WAITING' and NODE_NAME = '${weblogic.Name}'"
        update="UPDATE STAGE_TABLE SET STATUS='IN_PROGRESS' WHERE ID IN (:Id)"
        max-rows-per-poll="100" row-mapper="rowMapper"
        update-per-row="true">
    <si:poller fixed-rate="5000">
        <si:advice-chain>
            <ref bean="txAdvice"/>
            <ref bean="inboundAdapterConfiguration"/>
        </si:advice-chain>
    </si:poller>
</si-jdbc:inbound-channel-adapter>
Is this a good design?
A second approach: use the SELECT SQL below in the jdbc-inbound-adapter, but I am guessing this would fail since I am using an Oracle database.
SELECT * FROM TABLE WHERE STATUS='WAITING' FOR UPDATE SKIP LOCKED
It would be great if some one could point me in the right direction.
Actually, FOR UPDATE SKIP LOCKED is exactly an Oracle feature: https://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:2060739900346201280
If you are in doubt, here is code from Spring Integration that relies on it: https://github.com/spring-projects/spring-integration/blob/master/spring-integration-jdbc/src/main/java/org/springframework/integration/jdbc/store/channel/OracleChannelMessageStoreQueryProvider.java#L39
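Based on that, here is a sketch of what the second approach could look like, reusing the adapter from the question but with the SKIP LOCKED query; table and column names are taken from the question, and the poller must stay transactional so the row locks are held until the UPDATE commits:
<si-jdbc:inbound-channel-adapter id="jdbcInboundAdapter"
        channel="queueChannel" data-source="myDataSource"
        auto-startup="true"
        query="SELECT * FROM STAGE_TABLE WHERE STATUS='WAITING' FOR UPDATE SKIP LOCKED"
        update="UPDATE STAGE_TABLE SET STATUS='IN_PROGRESS' WHERE ID IN (:Id)"
        max-rows-per-poll="100" row-mapper="rowMapper"
        update-per-row="true">
    <!-- the poll runs in a transaction so the SKIP LOCKED row locks are held until the UPDATE commits -->
    <si:poller fixed-rate="5000">
        <si:advice-chain>
            <ref bean="txAdvice"/>
        </si:advice-chain>
    </si:poller>
</si-jdbc:inbound-channel-adapter>
With this, the NODE_NAME filter is no longer needed, because the row locks keep the two nodes from picking up the same rows.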
