How can I solve the "64-bit block cipher 3DES vulnerable to SWEET32 attack" and "Key exchange (dh 1024) of lower strength than certificate key" problems for port 5061 on CentOS 7?
PORT     STATE SERVICE
5061/tcp open  sip-tls
| ssl-enum-ciphers:
|   TLSv1.2:
|     ciphers:
|       TLS_DHE_RSA_WITH_3DES_EDE_CBC_SHA (dh 1024) - D
|       TLS_DHE_RSA_WITH_AES_128_CBC_SHA (dh 1024) - A
|       TLS_DHE_RSA_WITH_AES_128_CBC_SHA256 (dh 1024) - A
|       TLS_DHE_RSA_WITH_AES_128_GCM_SHA256 (dh 1024) - A
|       TLS_DHE_RSA_WITH_AES_256_CBC_SHA (dh 1024) - A
|       TLS_DHE_RSA_WITH_AES_256_CBC_SHA256 (dh 1024) - A
|       TLS_DHE_RSA_WITH_AES_256_GCM_SHA384 (dh 1024) - A
|       TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA (secp256r1) - C
|       TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA (secp256r1) - A
|       TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 (secp256r1) - A
|       TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 (secp256r1) - A
|       TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA (secp256r1) - A
|       TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384 (secp256r1) - A
|       TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (secp256r1) - A
|       TLS_RSA_WITH_3DES_EDE_CBC_SHA (rsa 2048) - C
|       TLS_RSA_WITH_AES_128_CBC_SHA (rsa 2048) - A
|       TLS_RSA_WITH_AES_128_CBC_SHA256 (rsa 2048) - A
|       TLS_RSA_WITH_AES_128_GCM_SHA256 (rsa 2048) - A
|       TLS_RSA_WITH_AES_256_CBC_SHA (rsa 2048) - A
|       TLS_RSA_WITH_AES_256_CBC_SHA256 (rsa 2048) - A
|       TLS_RSA_WITH_AES_256_GCM_SHA384 (rsa 2048) - A
|     compressors:
|       NULL
|     cipher preference: client
|     warnings:
|       64-bit block cipher 3DES vulnerable to SWEET32 attack
|       Key exchange (dh 1024) of lower strength than certificate key
|_  least strength: D
I guess you are asking this question as an administrator of a SIP service.
You should offer a different cipher suite and configure it based on your security requirements.
This is an example of a cipher suite for a pretty strong service:
cipher_list = HIGH:!COMPLEMENTOFDEFAULT:!kRSA:!PSK:!SRP
If you wish to keep your current cipher suite and just remove 3DES, you can do so by disabling 3DES alone:
cipher_list = YOURCURRENTCIPHERSUITE:!3DES
The examples above, of course, show a typical configuration when OpenSSL is used. You need to adapt them if you are using another stack.
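Either way, you can preview what a given OpenSSL cipher string expands to before deploying it. Here is a minimal Python sketch (Python's ssl module is backed by OpenSSL, so the same cipher-string syntax applies):

import ssl

# Expand an OpenSSL cipher string and list the suites it selects.
ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
ctx.set_ciphers("HIGH:!COMPLEMENTOFDEFAULT:!kRSA:!PSK:!SRP")
for cipher in ctx.get_ciphers():
    print(cipher["name"], cipher["protocol"])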
You should also have some way to set a DH parameter in your configuration, and you need to configure it with a higher number of bits. For example:
$> openssl dhparam -out dhparam.pem 3072
The result would then look like this:
PORT     STATE SERVICE
5061/tcp open  sip-tls
| ssl-enum-ciphers:
|   TLSv1.2:
|     ciphers:
|       TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384 (secp256r1) - A
|       TLS_DHE_RSA_WITH_AES_256_GCM_SHA384 (dh 3072) - A
|       TLS_ECDHE_RSA_WITH_CHACHA20_POLY1305_SHA256 (secp256r1) - A
|       TLS_DHE_RSA_WITH_CHACHA20_POLY1305_SHA256 (dh 3072) - A
|       TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256 (secp256r1) - A
|       TLS_DHE_RSA_WITH_AES_128_GCM_SHA256 (dh 3072) - A
|       TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA384 (secp256r1) - A
|       TLS_DHE_RSA_WITH_AES_256_CBC_SHA256 (dh 3072) - A
|       TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256 (secp256r1) - A
|       TLS_DHE_RSA_WITH_AES_128_CBC_SHA256 (dh 3072) - A
|       TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA (secp256r1) - A
|       TLS_DHE_RSA_WITH_AES_256_CBC_SHA (dh 3072) - A
|       TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA (secp256r1) - A
|       TLS_DHE_RSA_WITH_AES_128_CBC_SHA (dh 3072) - A
|     compressors:
|       NULL
|     cipher preference: server
|_  least strength: A
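To double-check from a client machine that 3DES is really gone, you can attempt a handshake that offers only 3DES suites. A minimal Python sketch (the hostname is a placeholder; note that set_ciphers() itself may fail if your local OpenSSL no longer ships 3DES at all, which also answers the question):

import socket
import ssl

ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_CLIENT)
ctx.check_hostname = False
ctx.verify_mode = ssl.CERT_NONE
ctx.set_ciphers("3DES")  # offer nothing but 3DES suites
try:
    with socket.create_connection(("sip.example.com", 5061), timeout=5) as sock:
        with ctx.wrap_socket(sock) as tls:
            print("3DES still accepted:", tls.cipher())
except ssl.SSLError as err:
    print("3DES rejected, as expected:", err)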
[Question posted by a user on YugabyteDB Community Slack]
I'm running a yb-cluster of 3 logical nodes on 1 VM. I am trying an SSL-mode-enabled cluster. Below is the config file I am using to start the cluster with SSL mode ON:
./bin/yugabyted start --config /data/ybd1/config_1.config
./bin/yugabyted start --base_dir=/data/ybd2 --listen=127.0.0.2 --join=192.168.56.12
./bin/yugabyted start --base_dir=/data/ybd3 --listen=127.0.0.3 --join=192.168.56.12
my config file:
{
    "base_dir": "/data/ybd1",
    "listen": "192.168.56.12",
    "certs_dir": "/root/192.168.56.12/",
    "allow_insecure_connections": "false",
    "use_node_to_node_encryption": "true",
    "use_client_to_server_encryption": "true"
}
I am able to connect using:
bin/ysqlsh -h 127.0.0.3 -U yugabyte -d yugabyte
ysqlsh (11.2-YB-2.11.1.0-b0)
Type "help" for help.
yugabyte=# \l
                                  List of databases
      Name       |  Owner   | Encoding | Collate |    Ctype    |   Access privileges
-----------------+----------+----------+---------+-------------+-----------------------
 postgres        | postgres | UTF8     | C       | en_US.UTF-8 |
 system_platform | postgres | UTF8     | C       | en_US.UTF-8 |
 template0       | postgres | UTF8     | C       | en_US.UTF-8 | =c/postgres          +
                 |          |          |         |             | postgres=CTc/postgres
 template1       | postgres | UTF8     | C       | en_US.UTF-8 | =c/postgres          +
                 |          |          |         |             | postgres=CTc/postgres
 yugabyte        | postgres | UTF8     | C       | en_US.UTF-8 |
But when I try to connect to the yb-cluster from a psql client, I get the errors below.
psql -h 192.168.56.12 -p 5433
psql: error: connection to server at "192.168.56.12", port 5433 failed: FATAL: Timed out: OpenTable RPC (request call id 2) to 192.168.56.12:9100 timed out after 120.000s
postgres#acff2570dfbc:~$
And in the yb-tserver logs I see the following errors:
I0228 05:00:21.248733 21631 async_initializer.cc:90] Successfully built ybclient
2022-02-28 05:02:21.248 UTC [21624] FATAL: Timed out: OpenTable RPC (request call id 2) to 192.168.56.12:9100 timed out after 120.000s
I0228 05:02:21.251086 21627 poller.cc:66] Poll stopped: Service unavailable (yb/rpc/scheduler.cc:80): Scheduler is shutting down (system error 108)
2022-02-28 05:54:20.987 UTC [23729] LOG: invalid length of startup packet
Any help in this regard is really appreciated.
You're setting your config incorrectly when using the yugabyted tool. You want to use --master_flags and --tserver_flags, as explained in the docs: https://docs.yugabyte.com/latest/reference/configuration/yugabyted/#flags.
An example:
bin/yugabyted start --base_dir=/data/ybd1 --listen=192.168.56.12 --tserver_flags=use_client_to_server_encryption=true,ysql_enable_auth=true,use_cassandra_authentication=true,certs_for_client_dir=/root/192.168.56.12/
Sending the parameters this way should work on your cluster.
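Once the flags are in place, you can also verify the client-to-server TLS path from Python. A sketch using psycopg2 (an assumption; any PostgreSQL driver works, and you will need password= if ysql_enable_auth is on):

import psycopg2

# sslmode="require" refuses to fall back to plaintext, so a broken
# certs_for_client_dir setup fails fast instead of silently connecting.
conn = psycopg2.connect(host="192.168.56.12", port=5433,
                        user="yugabyte", dbname="yugabyte",
                        sslmode="require")
with conn.cursor() as cur:
    cur.execute("SELECT version()")
    print(cur.fetchone()[0])
conn.close()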
I'm invoking my chaincode via the node-sdk, queueing the submissions via the async package. However, I very often get the error: [EventService]: EventService[org1-peer1] timed out after:3000. There's no additional information in the peer's log. The transaction gets processed, though.
Here's the client log (GRPC_VERBOSITY=DEBUG, GRPC_TRACE=all):
--> Starting task
2020-08-13T08:49:36.081Z | call_stream | [2] Received server headers:
:status: 200
content-type: application/grpc
2020-08-13T08:49:36.083Z | call_stream | [2] receive HTTP/2 data frame of length 1476
2020-08-13T08:49:36.083Z | call_stream | [2] parsed message of length 1476
2020-08-13T08:49:36.084Z | call_stream | [2] filterReceivedMessage of length 1476
2020-08-13T08:49:36.084Z | call_stream | [2] pushing to reader message of length 1471
2020-08-13T08:49:36.088Z | call_stream | [2] Received server trailers:
grpc-status: 0
grpc-message:
2020-08-13T08:49:36.088Z | call_stream | [2] received status code 0 from server
2020-08-13T08:49:36.088Z | call_stream | [2] received status details string "" from server
2020-08-13T08:49:36.089Z | call_stream | [2] ended with status: code=0 details=""
2020-08-13T08:49:36.090Z | subchannel | 10.100.136.32:30171 callRefcount 3 -> 2
2020-08-13T08:49:36.090Z | call_stream | [2] HTTP/2 stream closed with code 0
2020-08-13T08:49:36.098Z | call_stream | [8] Received server headers:
:status: 200
content-type: application/grpc
2020-08-13T08:49:36.099Z | call_stream | [8] receive HTTP/2 data frame of length 1476
2020-08-13T08:49:36.099Z | call_stream | [8] parsed message of length 1476
2020-08-13T08:49:36.099Z | call_stream | [8] filterReceivedMessage of length 1476
2020-08-13T08:49:36.099Z | call_stream | [8] pushing to reader message of length 1471
2020-08-13T08:49:36.100Z | call_stream | [8] Received server trailers:
grpc-status: 0
grpc-message:
2020-08-13T08:49:36.100Z | call_stream | [8] received status code 0 from server
2020-08-13T08:49:36.100Z | call_stream | [8] received status details string "" from server
2020-08-13T08:49:36.100Z | call_stream | [8] ended with status: code=0 details=""
2020-08-13T08:49:36.101Z | subchannel | 10.100.136.32:30171 callRefcount 2 -> 1
2020-08-13T08:49:36.101Z | call_stream | [8] HTTP/2 stream closed with code 0
2020-08-13T08:49:36.101Z | call_stream | [5] Received server headers:
:status: 200
content-type: application/grpc
2020-08-13T08:49:36.101Z | call_stream | [5] receive HTTP/2 data frame of length 1476
2020-08-13T08:49:36.101Z | call_stream | [5] parsed message of length 1476
2020-08-13T08:49:36.101Z | call_stream | [5] filterReceivedMessage of length 1476
2020-08-13T08:49:36.102Z | call_stream | [5] pushing to reader message of length 1471
2020-08-13T08:49:36.102Z | call_stream | [5] Received server trailers:
grpc-status: 0
grpc-message:
2020-08-13T08:49:36.102Z | call_stream | [5] received status code 0 from server
2020-08-13T08:49:36.102Z | call_stream | [5] received status details string "" from server
2020-08-13T08:49:36.102Z | call_stream | [5] ended with status: code=0 details=""
2020-08-13T08:49:36.102Z | subchannel | 10.100.136.32:30171 callRefcount 1 -> 0
2020-08-13T08:49:36.102Z | call_stream | [5] HTTP/2 stream closed with code 0
2020-08-13T08:49:36.103Z | call_stream | [6] Received server headers:
:status: 200
content-type: application/grpc
2020-08-13T08:49:36.103Z | call_stream | [6] receive HTTP/2 data frame of length 1476
2020-08-13T08:49:36.103Z | call_stream | [6] parsed message of length 1476
2020-08-13T08:49:36.103Z | call_stream | [6] filterReceivedMessage of length 1476
2020-08-13T08:49:36.103Z | call_stream | [6] pushing to reader message of length 1471
2020-08-13T08:49:36.103Z | call_stream | [6] Received server trailers:
grpc-status: 0
grpc-message:
2020-08-13T08:49:36.103Z | call_stream | [6] received status code 0 from server
2020-08-13T08:49:36.103Z | call_stream | [6] received status details string "" from server
2020-08-13T08:49:36.103Z | call_stream | [6] ended with status: code=0 details=""
2020-08-13T08:49:36.104Z | subchannel | 10.100.136.32:30151 callRefcount 3 -> 2
2020-08-13T08:49:36.104Z | call_stream | [6] HTTP/2 stream closed with code 0
2020-08-13T08:49:36.104Z | call_stream | [0] Received server headers:
:status: 200
content-type: application/grpc
2020-08-13T08:49:36.104Z | call_stream | [0] receive HTTP/2 data frame of length 1477
2020-08-13T08:49:36.104Z | call_stream | [0] parsed message of length 1477
2020-08-13T08:49:36.104Z | call_stream | [0] filterReceivedMessage of length 1477
2020-08-13T08:49:36.104Z | call_stream | [0] pushing to reader message of length 1472
2020-08-13T08:49:36.104Z | call_stream | [0] Received server trailers:
grpc-status: 0
grpc-message:
2020-08-13T08:49:36.104Z | call_stream | [0] received status code 0 from server
2020-08-13T08:49:36.104Z | call_stream | [0] received status details string "" from server
2020-08-13T08:49:36.105Z | call_stream | [0] ended with status: code=0 details=""
2020-08-13T08:49:36.105Z | subchannel | 10.100.136.32:30151 callRefcount 2 -> 1
2020-08-13T08:49:36.105Z | call_stream | [0] HTTP/2 stream closed with code 0
2020-08-13T08:49:36.105Z | call_stream | [3] Received server headers:
:status: 200
content-type: application/grpc
2020-08-13T08:49:36.105Z | call_stream | [3] receive HTTP/2 data frame of length 1476
2020-08-13T08:49:36.105Z | call_stream | [3] parsed message of length 1476
2020-08-13T08:49:36.105Z | call_stream | [3] filterReceivedMessage of length 1476
2020-08-13T08:49:36.106Z | call_stream | [3] pushing to reader message of length 1471
2020-08-13T08:49:36.106Z | call_stream | [3] Received server trailers:
grpc-status: 0
grpc-message:
2020-08-13T08:49:36.106Z | call_stream | [3] received status code 0 from server
2020-08-13T08:49:36.106Z | call_stream | [3] received status details string "" from server
2020-08-13T08:49:36.106Z | call_stream | [3] ended with status: code=0 details=""
2020-08-13T08:49:36.106Z | subchannel | 10.100.136.32:30151 callRefcount 1 -> 0
2020-08-13T08:49:36.106Z | call_stream | [3] HTTP/2 stream closed with code 0
2020-08-13T08:49:36.107Z | call_stream | [7] Received server headers:
:status: 200
content-type: application/grpc
2020-08-13T08:49:36.107Z | call_stream | [7] receive HTTP/2 data frame of length 1477
2020-08-13T08:49:36.107Z | call_stream | [7] parsed message of length 1477
2020-08-13T08:49:36.107Z | call_stream | [7] filterReceivedMessage of length 1477
2020-08-13T08:49:36.107Z | call_stream | [7] pushing to reader message of length 1472
2020-08-13T08:49:36.107Z | call_stream | [7] Received server trailers:
grpc-status: 0
grpc-message:
2020-08-13T08:49:36.107Z | call_stream | [7] received status code 0 from server
2020-08-13T08:49:36.107Z | call_stream | [7] received status details string "" from server
2020-08-13T08:49:36.108Z | call_stream | [7] ended with status: code=0 details=""
2020-08-13T08:49:36.108Z | subchannel | 10.100.136.32:30161 callRefcount 3 -> 2
2020-08-13T08:49:36.108Z | call_stream | [7] HTTP/2 stream closed with code 0
2020-08-13T08:49:36.116Z | resolving_load_balancer | dns:worker2.example.com:30151 IDLE -> IDLE
2020-08-13T08:49:36.116Z | connectivity_state | dns:worker2.example.com:30151 IDLE -> IDLE
2020-08-13T08:49:36.116Z | dns_resolver | Resolver constructed for target dns:worker2.example.com:30151
2020-08-13T08:49:36.117Z | call_stream | [1] Received server headers:
:status: 200
content-type: application/grpc
2020-08-13T08:49:36.117Z | call_stream | [1] receive HTTP/2 data frame of length 1477
2020-08-13T08:49:36.117Z | call_stream | [1] parsed message of length 1477
2020-08-13T08:49:36.117Z | call_stream | [1] filterReceivedMessage of length 1477
2020-08-13T08:49:36.117Z | call_stream | [1] pushing to reader message of length 1472
2020-08-13T08:49:36.117Z | call_stream | [1] Received server trailers:
grpc-status: 0
grpc-message:
2020-08-13T08:49:36.117Z | call_stream | [1] received status code 0 from server
2020-08-13T08:49:36.118Z | call_stream | [1] received status details string "" from server
2020-08-13T08:49:36.118Z | call_stream | [1] ended with status: code=0 details=""
2020-08-13T08:49:36.118Z | subchannel | 10.100.136.32:30161 callRefcount 2 -> 1
2020-08-13T08:49:36.118Z | call_stream | [1] HTTP/2 stream closed with code 0
2020-08-13T08:49:36.122Z | dns_resolver | Resolution update requested for target dns:worker2.example.com:30151
2020-08-13T08:49:36.122Z | resolving_load_balancer | dns:worker2.example.com:30151 IDLE -> CONNECTING
2020-08-13T08:49:36.122Z | connectivity_state | dns:worker2.example.com:30151 IDLE -> CONNECTING
2020-08-13T08:49:36.122Z | resolving_load_balancer | dns:worker2.example.com:30151 CONNECTING -> CONNECTING
2020-08-13T08:49:36.122Z | connectivity_state | dns:worker2.example.com:30151 CONNECTING -> CONNECTING
2020-08-13T08:49:36.132Z | dns_resolver | Resolved addresses for target dns:worker2.example.com:30151: [10.100.136.32:30151]
2020-08-13T08:49:36.132Z | pick_first | IDLE -> IDLE
2020-08-13T08:49:36.132Z | resolving_load_balancer | dns:worker2.example.com:30151 CONNECTING -> IDLE
2020-08-13T08:49:36.132Z | connectivity_state | dns:worker2.example.com:30151 CONNECTING -> IDLE
2020-08-13T08:49:36.132Z | dns_resolver | Resolution update requested for target dns:worker2.example.com:30151
2020-08-13T08:49:36.132Z | resolving_load_balancer | dns:worker2.example.com:30151 IDLE -> CONNECTING
2020-08-13T08:49:36.132Z | connectivity_state | dns:worker2.example.com:30151 IDLE -> CONNECTING
2020-08-13T08:49:36.133Z | resolving_load_balancer | dns:worker2.example.com:30151 CONNECTING -> CONNECTING
2020-08-13T08:49:36.133Z | connectivity_state | dns:worker2.example.com:30151 CONNECTING -> CONNECTING
2020-08-13T08:49:36.133Z | pick_first | Connect to address list 10.100.136.32:30151
2020-08-13T08:49:36.134Z | subchannel | 10.100.136.32:30151 refcount 0 -> 1
2020-08-13T08:49:36.135Z | subchannel | 10.100.136.32:30151 refcount 1 -> 2
2020-08-13T08:49:36.135Z | pick_first | Start connecting to subchannel with address 10.100.136.32:30151
2020-08-13T08:49:36.135Z | pick_first | IDLE -> CONNECTING
2020-08-13T08:49:36.135Z | resolving_load_balancer | dns:worker2.example.com:30151 CONNECTING -> CONNECTING
2020-08-13T08:49:36.135Z | connectivity_state | dns:worker2.example.com:30151 CONNECTING -> CONNECTING
2020-08-13T08:49:36.135Z | subchannel | 10.100.136.32:30151 IDLE -> CONNECTING
2020-08-13T08:49:36.135Z | pick_first | CONNECTING -> CONNECTING
2020-08-13T08:49:36.135Z | resolving_load_balancer | dns:worker2.example.com:30151 CONNECTING -> CONNECTING
2020-08-13T08:49:36.135Z | connectivity_state | dns:worker2.example.com:30151 CONNECTING -> CONNECTING
2020-08-13T08:49:36.136Z | call_stream | [4] Received server headers:
:status: 200
content-type: application/grpc
2020-08-13T08:49:36.136Z | call_stream | [4] receive HTTP/2 data frame of length 1477
2020-08-13T08:49:36.136Z | call_stream | [4] parsed message of length 1477
2020-08-13T08:49:36.137Z | call_stream | [4] filterReceivedMessage of length 1477
2020-08-13T08:49:36.137Z | call_stream | [4] pushing to reader message of length 1472
2020-08-13T08:49:36.137Z | call_stream | [4] Received server trailers:
grpc-status: 0
grpc-message:
2020-08-13T08:49:36.137Z | call_stream | [4] received status code 0 from server
2020-08-13T08:49:36.137Z | call_stream | [4] received status details string "" from server
2020-08-13T08:49:36.137Z | call_stream | [4] ended with status: code=0 details=""
2020-08-13T08:49:36.137Z | subchannel | 10.100.136.32:30161 callRefcount 1 -> 0
2020-08-13T08:49:36.137Z | call_stream | [4] HTTP/2 stream closed with code 0
2020-08-13T08:49:36.142Z | dns_resolver | Resolved addresses for target dns:worker2.example.com:30151: [10.100.136.32:30151]
2020-08-13T08:49:36.142Z | subchannel | 10.100.136.32:30151 refcount 2 -> 1
2020-08-13T08:49:36.142Z | pick_first | Connect to address list 10.100.136.32:30151
2020-08-13T08:49:36.142Z | subchannel | 10.100.136.32:30151 refcount 1 -> 2
2020-08-13T08:49:36.142Z | pick_first | CONNECTING -> CONNECTING
2020-08-13T08:49:36.143Z | resolving_load_balancer | dns:worker2.example.com:30151 CONNECTING -> CONNECTING
2020-08-13T08:49:36.143Z | connectivity_state | dns:worker2.example.com:30151 CONNECTING -> CONNECTING
2020-08-13T08:49:36.237Z | subchannel | 10.100.136.32:30151 CONNECTING -> READY
2020-08-13T08:49:36.237Z | pick_first | Pick subchannel with address 10.100.136.32:30151
2020-08-13T08:49:36.237Z | pick_first | CONNECTING -> READY
2020-08-13T08:49:36.237Z | resolving_load_balancer | dns:worker2.example.com:30151 CONNECTING -> READY
2020-08-13T08:49:36.237Z | connectivity_state | dns:worker2.example.com:30151 CONNECTING -> READY
2020-08-13T08:49:36.237Z | subchannel | 10.100.136.32:30151 refcount 2 -> 3
2020-08-13T08:49:36.237Z | subchannel | 10.100.136.32:30151 refcount 3 -> 2
2020-08-13T08:49:36.239Z | channel | dns:worker2.example.com:30151 createCall [9] method="/protos.Deliver/DeliverFiltered", deadline=Infinity
2020-08-13T08:49:36.239Z | call_stream | [9] Sending metadata
2020-08-13T08:49:36.239Z | channel | Pick result: COMPLETE subchannel: 10.100.136.32:30151 status: undefined undefined
2020-08-13T08:49:36.240Z | call_stream | [9] write() called with message of length 1111
2020-08-13T08:49:36.241Z | channel | dns:worker2.example.com:30151 createCall [10] method="/protos.Deliver/DeliverFiltered", deadline=Infinity
2020-08-13T08:49:36.241Z | call_stream | [10] Sending metadata
2020-08-13T08:49:36.241Z | channel | Pick result: COMPLETE subchannel: 10.100.136.32:30151 status: undefined undefined
2020-08-13T08:49:36.241Z | call_stream | [10] write() called with message of length 1110
2020-08-13T08:49:36.241Z | call_stream | [9] deferring writing data chunk of length 1116
2020-08-13T08:49:36.241Z | call_stream | [10] deferring writing data chunk of length 1115
2020-08-13T08:49:36.241Z | subchannel | Starting stream with headers
grpc-accept-encoding: identity,deflate,gzip
accept-encoding: identity,gzip
:authority: worker2.example.com
user-agent: grpc-node-js/1.0.3
content-type: application/grpc
:method: POST
:path: /protos.Deliver/DeliverFiltered
te: trailers
2020-08-13T08:49:36.241Z | call_stream | [9] attachHttp2Stream from subchannel 10.100.136.32:30151
2020-08-13T08:49:36.241Z | subchannel | 10.100.136.32:30151 callRefcount 0 -> 1
2020-08-13T08:49:36.241Z | call_stream | [9] sending data chunk of length 1116 (deferred)
2020-08-13T08:49:36.242Z | subchannel | Starting stream with headers
grpc-accept-encoding: identity,deflate,gzip
accept-encoding: identity,gzip
:authority: worker2.example.com
user-agent: grpc-node-js/1.0.3
content-type: application/grpc
:method: POST
:path: /protos.Deliver/DeliverFiltered
te: trailers
2020-08-13T08:49:36.242Z | call_stream | [10] attachHttp2Stream from subchannel 10.100.136.32:30151
2020-08-13T08:49:36.242Z | subchannel | 10.100.136.32:30151 callRefcount 1 -> 2
2020-08-13T08:49:36.242Z | call_stream | [10] sending data chunk of length 1115 (deferred)
2020-08-13T08:49:36.242Z | channel | dns:worker2.example.com:30151 createCall [11] method="/protos.Deliver/DeliverFiltered", deadline=Infinity
2020-08-13T08:49:36.243Z | call_stream | [11] Sending metadata
2020-08-13T08:49:36.243Z | channel | Pick result: COMPLETE subchannel: 10.100.136.32:30151 status: undefined undefined
2020-08-13T08:49:36.243Z | call_stream | [11] write() called with message of length 1111
2020-08-13T08:49:36.243Z | call_stream | [11] deferring writing data chunk of length 1116
2020-08-13T08:49:36.243Z | subchannel | Starting stream with headers
grpc-accept-encoding: identity,deflate,gzip
accept-encoding: identity,gzip
:authority: worker2.example.com
user-agent: grpc-node-js/1.0.3
content-type: application/grpc
:method: POST
:path: /protos.Deliver/DeliverFiltered
te: trailers
2020-08-13T08:49:36.243Z | call_stream | [11] attachHttp2Stream from subchannel 10.100.136.32:30151
2020-08-13T08:49:36.243Z | subchannel | 10.100.136.32:30151 callRefcount 2 -> 3
2020-08-13T08:49:36.243Z | call_stream | [11] sending data chunk of length 1116 (deferred)
2020-08-13T08:49:36.273Z | call_stream | [9] Received server headers:
:status: 200
content-type: application/grpc
2020-08-13T08:49:36.273Z | call_stream | [9] receive HTTP/2 data frame of length 164
2020-08-13T08:49:36.274Z | call_stream | [9] parsed message of length 164
2020-08-13T08:49:36.274Z | call_stream | [9] filterReceivedMessage of length 164
2020-08-13T08:49:36.274Z | call_stream | [9] pushing to reader message of length 159
2020-08-13T08:49:36.279Z | call_stream | [11] Received server headers:
:status: 200
content-type: application/grpc
2020-08-13T08:49:36.279Z | call_stream | [11] receive HTTP/2 data frame of length 164
2020-08-13T08:49:36.279Z | call_stream | [11] parsed message of length 164
2020-08-13T08:49:36.279Z | call_stream | [11] filterReceivedMessage of length 164
2020-08-13T08:49:36.279Z | call_stream | [11] pushing to reader message of length 159
2020-08-13T08:49:36.286Z | channel | dns:worker2.example.com:30011 createCall [12] method="/orderer.AtomicBroadcast/Broadcast", deadline=Infinity
2020-08-13T08:49:36.286Z | call_stream | [12] Sending metadata
2020-08-13T08:49:36.286Z | channel | Pick result: COMPLETE subchannel: 10.100.136.32:30011 status: undefined undefined
2020-08-13T08:49:36.287Z | call_stream | [12] write() called with message of length 5483
2020-08-13T08:49:36.287Z | call_stream | [12] deferring writing data chunk of length 5488
2020-08-13T08:49:36.287Z | subchannel | Starting stream with headers
grpc-accept-encoding: identity,deflate,gzip
accept-encoding: identity,gzip
:authority: worker2.example.com
user-agent: grpc-node-js/1.0.3
content-type: application/grpc
:method: POST
:path: /orderer.AtomicBroadcast/Broadcast
te: trailers
2020-08-13T08:49:36.287Z | call_stream | [12] attachHttp2Stream from subchannel 10.100.136.32:30011
2020-08-13T08:49:36.287Z | subchannel | 10.100.136.32:30011 callRefcount 0 -> 1
2020-08-13T08:49:36.287Z | call_stream | [12] sending data chunk of length 5488 (deferred)
2020-08-13T08:49:36.287Z | call_stream | [10] Received server headers:
:status: 200
content-type: application/grpc
2020-08-13T08:49:36.287Z | call_stream | [10] receive HTTP/2 data frame of length 164
2020-08-13T08:49:36.287Z | call_stream | [10] parsed message of length 164
2020-08-13T08:49:36.287Z | call_stream | [10] filterReceivedMessage of length 164
2020-08-13T08:49:36.288Z | call_stream | [10] pushing to reader message of length 159
2020-08-13T08:49:36.315Z | call_stream | [12] Received server headers:
:status: 200
content-type: application/grpc
2020-08-13T08:49:36.315Z | call_stream | [12] receive HTTP/2 data frame of length 8
2020-08-13T08:49:36.316Z | call_stream | [12] parsed message of length 8
2020-08-13T08:49:36.316Z | call_stream | [12] filterReceivedMessage of length 8
2020-08-13T08:49:36.316Z | call_stream | [12] pushing to reader message of length 3
2020-08-13T08:49:36.319Z | call_stream | [12] end() called
2020-08-13T08:49:36.319Z | call_stream | [12] calling end() on HTTP/2 stream
2020-08-13T08:49:36.351Z | call_stream | [12] Received server trailers:
grpc-status: 0
grpc-message:
2020-08-13T08:49:36.351Z | call_stream | [12] received status code 0 from server
2020-08-13T08:49:36.352Z | call_stream | [12] received status details string "" from server
2020-08-13T08:49:36.352Z | call_stream | [12] ended with status: code=0 details=""
2020-08-13T08:49:36.352Z | subchannel | 10.100.136.32:30011 callRefcount 1 -> 0
2020-08-13T08:49:36.352Z | call_stream | [12] cancelWithStatus code: 1 details: "Cancelled on client"
2020-08-13T08:49:36.352Z | call_stream | [12] ended with status: code=1 details="Cancelled on client"
2020-08-13T08:49:36.352Z | call_stream | [12] HTTP/2 stream closed with code 0
2020-08-13T08:49:38.336Z | call_stream | [10] receive HTTP/2 data frame of length 91
2020-08-13T08:49:38.336Z | call_stream | [10] parsed message of length 91
2020-08-13T08:49:38.336Z | call_stream | [10] filterReceivedMessage of length 91
2020-08-13T08:49:38.336Z | call_stream | [10] pushing to reader message of length 86
2020-08-13T08:49:38.336Z | call_stream | [9] receive HTTP/2 data frame of length 91
2020-08-13T08:49:38.336Z | call_stream | [9] parsed message of length 91
2020-08-13T08:49:38.336Z | call_stream | [9] filterReceivedMessage of length 91
2020-08-13T08:49:38.336Z | call_stream | [9] pushing to reader message of length 86
2020-08-13T08:49:38.337Z | call_stream | [11] receive HTTP/2 data frame of length 91
2020-08-13T08:49:38.337Z | call_stream | [11] parsed message of length 91
2020-08-13T08:49:38.337Z | call_stream | [11] filterReceivedMessage of length 91
2020-08-13T08:49:38.337Z | call_stream | [11] pushing to reader message of length 86
2020-08-13T08:49:39.240Z - error: [EventService]: EventService[org1-peer1] timed out after:3000
2020-08-13T08:49:39.241Z - error: [EventService]: send[org1-peer1] - #1 - Starting stream to org1-peer1 failed
Client code:
'use strict';

const FabricCAServices = require('fabric-ca-client');
const { Wallets, Gateway } = require('fabric-network');
const fs = require('fs');
const path = require('path');
const yaml = require('js-yaml');
const async = require("async");

const user = 'benchmark';
const userpw = 'benchmarkPW';
const mspID = 'Org1MSP';
const BENCHMARK_CONCURRENCY = 1;
const BENCHMARK_OPERATIONS = 3;

const txQueue = async.queue(async (task) => {
    console.log('--> Starting task');
    await task;
}, BENCHMARK_CONCURRENCY);

const connectionPath = path.join(process.cwd(), 'gateway/connection.yaml');
const data = fs.readFileSync(connectionPath);
const ccp = yaml.safeLoad(data);

async function createWallet() {
    try {
        const walletPath = path.join(process.cwd(), 'identity/wallet');
        const wallet = await Wallets.newFileSystemWallet(walletPath);
        return wallet;
    } catch (error) {
        console.error(`Error: ${error}`);
    }
}

async function enrollUser(wallet) {
    try {
        // Create a new CA client for interacting with the CA.
        const caTLSCACerts = ccp.certificateAuthorities['org1-ca'].tlsCACerts.pem;
        const caUrl = ccp.certificateAuthorities['org1-ca'].url;
        const ca = new FabricCAServices(caUrl, { trustedRoots: caTLSCACerts, verify: false }, 'CA');
        // Check to see if we've already enrolled the user.
        const userExists = await wallet.get(user);
        if (userExists) {
            console.log(`An identity for the client user "${user}" already exists in the wallet`);
        } else {
            // Enroll signing material
            let enrollment = await ca.enroll({ enrollmentID: user, enrollmentSecret: userpw });
            let x509Identity = {
                credentials: {
                    certificate: enrollment.certificate,
                    privateKey: enrollment.key.toBytes(),
                },
                mspId: mspID,
                type: 'X.509',
            };
            await wallet.put(user, x509Identity);
            console.log(`Successfully enrolled msp for user "${user}" and imported it into the wallet`);
            const tlsca = new FabricCAServices(caUrl, { trustedRoots: caTLSCACerts, verify: false }, `TLSCA`);
            enrollment = await tlsca.enroll({ enrollmentID: user, enrollmentSecret: userpw, profile: 'tls' });
            x509Identity = {
                credentials: {
                    certificate: enrollment.certificate,
                    privateKey: enrollment.key.toBytes(),
                },
                mspId: mspID,
                type: 'X.509',
            };
            await wallet.put(`${user}-tls`, x509Identity);
            console.log(`Successfully enrolled tls-msp for user "${user}" and imported it into the wallet`);
        }
    } catch (error) {
        console.error(`Error enrolling user "${user}": ${error}`);
        process.exit(1);
    }
}

async function startBenchmark(wallet) {
    try {
        const gateway = new Gateway();
        let connectionOptions = {
            identity: `${user}`,
            clientTlsIdentity: `${user}-tls`,
            wallet: wallet,
            discovery: { enabled: false, asLocalhost: false },
        };
        await gateway.connect(ccp, connectionOptions);
        const network = await gateway.getNetwork('channel1');
        const contract = network.getContract('cc-abac');
        let startTime = 0;
        let left = BENCHMARK_OPERATIONS;
        startTime = Date.now();
        while (left > 0) {
            txQueue.push(contract.submitTransaction('invoke', 'a', 'b', '1'));
            left--;
        }
        await txQueue.drain();
        let endTime = Date.now();
        let duration = (endTime - startTime) / 1000;
        let tps = BENCHMARK_OPERATIONS / duration;
        console.log('======== Benchmark finished ========');
        console.log(`Processed ${BENCHMARK_OPERATIONS} ops in ${duration} seconds. TPS: ${tps}`);
        gateway.disconnect();
        process.exit(0);
    } catch (error) {
        console.error(`Got error:": ${error}`);
        process.exit(1);
    }
}

async function main() {
    try {
        const wallet = await createWallet();
        await enrollUser(wallet);
        await startBenchmark(wallet);
    } catch (error) {
        console.error(`Error: ${error}`);
        process.exit(1);
    }
}

main();
Connection Profile:
name: "Network"
version: "1.1"
client:
organization: Org1
connection:
timeout:
peer:
endorser: 120
eventHub: 60
eventReg: 30
orderer: 30
options:
grpc.keepalive_timeout_ms: 10000
channels:
channel1:
orderers:
- org1-orderer
- org2-orderer
- org3-orderer
peers:
- org1-peer1
- org2-peer1
- org3-peer1
organizations:
Org1:
mspid: Org1MSP
peers:
- org1-peer1
certificateAuthorities:
- org1-ca
Org2:
mspid: Org2MSP
peers:
- org2-peer1
certificateAuthorities:
- org2-ca
Org3:
mspid: Org3MSP
peers:
- org3-peer1
certificateAuthorities:
- org3-ca
certificateAuthorities:
org1-ca:
url: https://worker2.example.com:30051
tlsCACerts:
path: "./crypto/org1-tls-ca-ca-cert.pem"
httpOptions:
verify: false
caName: CA
org2-ca:
url: https://worker2.example.com:30052
tlsCACerts:
path: "./crypto/org2-tls-ca-ca-cert.pem"
httpOptions:
verify: false
caName: CA
org3-ca:
url: https://worker2.example.com:30053
tlsCACerts:
path: "./crypto/org3-tls-ca-ca-cert.pem"
httpOptions:
verify: false
caName: CA
orderers:
org1-orderer:
url: grpcs://worker2.example.com:30011
grpcOptions:
ssl-target-name-override: worker2.example.com
tlsCACerts:
path: "./crypto/org1-tls-ca-ca-cert.pem"
org2-orderer:
url: grpcs://worker2.example.com:30012
grpcOptions:
ssl-target-name-override: worker2.example.com
tlsCACerts:
path: "./crypto/org2-tls-ca-ca-cert.pem"
org3-orderer:
url: grpcs://worker2.example.com:30013
grpcOptions:
ssl-target-name-override: worker2.example.com
tlsCACerts:
path: "./crypto/org3-tls-ca-ca-cert.pem"
peers:
org1-peer1:
url: grpcs://worker2.example.com:30151
httpOptions:
verify: false
grpcOptions:
ssl-target-name-override: worker2.example.com
tlsCACerts:
path: "./crypto/org1-tls-ca-ca-cert.pem"
org2-peer1:
url: grpcs://worker2.example.com:30161
httpOptions:
verify: false
grpcOptions:
ssl-target-name-override: worker2.example.com
tlsCACerts:
path: "./crypto/org2-tls-ca-ca-cert.pem"
org3-peer1:
url: grpcs://worker2.example.com:30171
httpOptions:
verify: false
grpcOptions:
ssl-target-name-override: worker2.example.com
tlsCACerts:
path: "./crypto/org3-tls-ca-ca-cert.pem"
Versions used:
HLF: 2.1
Node SDK: 2.2
Node: 10.22
Every idea is highly appreciated!
That error is generated when no events are received within a timeout window after the event service connects to receive block events from peers. It's possible that this one peer is not responding, but transactions can still flow because there are enough contactable peers for endorsement to be successful. If the logs confirm that all of your peers are communicating correctly and your transactions all execute OK, then this error is probably spurious and you can ignore it.
However, even if the error log is spurious, it would be nice if it weren't there. It might be that an unsuccessful reconnect is being triggered by the ordering of operations within the Gateway.disconnect() process, but I'm just guessing here. It might be worth raising a bug report in Jira against the Node SDK for further investigation:
https://jira.hyperledger.org/projects/FABN
Recently I wanted to add a simple progress bar to my script. I use tqdm for that, but what puzzles me is that the output is different in IDLE and in cmd.
For example, this
from tqdm import tqdm
import time
def test():
    for i in tqdm(range(100)):
        time.sleep(0.1)
gives the expected output in cmd
30%|███ | 30/100 [00:03<00:07, 9.14it/s]
but in IDLE the output looks like this
0%| | 0/100 [00:00<?, ?it/s]
1%|1 | 1/100 [00:00<00:10, 9.14it/s]
2%|2 | 2/100 [00:00<00:11, 8.77it/s]
3%|3 | 3/100 [00:00<00:11, 8.52it/s]
4%|4 | 4/100 [00:00<00:11, 8.36it/s]
5%|5 | 5/100 [00:00<00:11, 8.25it/s]
6%|6 | 6/100 [00:00<00:11, 8.17it/s]
7%|7 | 7/100 [00:00<00:11, 8.12it/s]
8%|8 | 8/100 [00:00<00:11, 8.08it/s]
9%|9 | 9/100 [00:01<00:11, 8.06it/s]
10%|# | 10/100 [00:01<00:11, 8.04it/s]
11%|#1 | 11/100 [00:01<00:11, 8.03it/s]
12%|#2 | 12/100 [00:01<00:10, 8.02it/s]
13%|#3 | 13/100 [00:01<00:10, 8.01it/s]
14%|#4 | 14/100 [00:01<00:10, 8.01it/s]
15%|#5 | 15/100 [00:01<00:10, 8.01it/s]
16%|#6 | 16/100 [00:01<00:10, 8.00it/s]
17%|#7 | 17/100 [00:02<00:10, 8.00it/s]
18%|#8 | 18/100 [00:02<00:10, 8.00it/s]
19%|#9 | 19/100 [00:02<00:10, 8.00it/s]
20%|## | 20/100 [00:02<00:09, 8.00it/s]
21%|##1 | 21/100 [00:02<00:09, 8.00it/s]
22%|##2 | 22/100 [00:02<00:09, 8.00it/s]
23%|##3 | 23/100 [00:02<00:09, 8.00it/s]
24%|##4 | 24/100 [00:02<00:09, 8.00it/s]
25%|##5 | 25/100 [00:03<00:09, 8.00it/s]
26%|##6 | 26/100 [00:03<00:09, 8.00it/s]
27%|##7 | 27/100 [00:03<00:09, 8.09it/s]
28%|##8 | 28/100 [00:03<00:09, 7.77it/s]
29%|##9 | 29/100 [00:03<00:09, 7.84it/s]
30%|### | 30/100 [00:03<00:08, 7.89it/s]
31%|###1 | 31/100 [00:03<00:08, 7.92it/s]
32%|###2 | 32/100 [00:03<00:08, 7.94it/s]
33%|###3 | 33/100 [00:04<00:08, 7.96it/s]
34%|###4 | 34/100 [00:04<00:08, 7.97it/s]
35%|###5 | 35/100 [00:04<00:08, 7.98it/s]
36%|###6 | 36/100 [00:04<00:08, 7.99it/s]
37%|###7 | 37/100 [00:04<00:07, 7.99it/s]
38%|###8 | 38/100 [00:04<00:07, 7.99it/s]
39%|###9 | 39/100 [00:04<00:07, 8.00it/s]
40%|#### | 40/100 [00:04<00:07, 8.00it/s]
41%|####1 | 41/100 [00:05<00:07, 8.00it/s]
I also get the same result if I make my own progress bar, like this:
import sys
def progress_bar_cmd(count, total, suffix="", *, bar_len=60, file=sys.stdout):
    filled_len = round(bar_len * count / total)
    percents = round(100 * count / total, 2)
    bar = "#" * filled_len + "-" * (bar_len - filled_len)
    file.write("[%s] %s%s ...%s\r" % (bar, percents, "%", suffix))
    file.flush()

for i in range(101):
    time.sleep(1)
    progress_bar_cmd(i, 100, "range 100")
Why is that? And is there a way to fix it?
Limiting ourselves to ASCII characters, the program output of your second code is the same in both cases -- a stream of ASCII bytes representing ASCII chars. The language definition does not and cannot specify what an output device or display program will do with the bytes, in particular with control characters such as '\r'.
The Windows Command Prompt console at least sometimes interprets '\r' as 'return the cursor to the beginning of the current line without erasing anything'.
In a Win10 console:
>>> import sys; out=sys.stdout
>>> out.write('abc\rdef')
def7
However, when I run your second code, with the missing time import added, I do not see the overwrite behavior, but see the same continued line output as with IDLE.
C:\Users\Terry>python f:/python/mypy/tem.py
[------------------------------------------------------------] 0.0% ...range 100[#-----------------------------------------------------------] ...
On the third hand, if I shorten the write to file.write("[%s]\r" % bar), then I do see one output line overwritten over and over.
The tk Text widget used by IDLE only interprets \t and \n, but not other control characters. To some of us, this seems appropriate for a development environment, where erasing characters is less appropriate than in a production environment.
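If you need the script to behave sensibly in both environments, one pragmatic sketch is to use the in-place bar only when stdout is a real terminal; IDLE's pseudo-file reports isatty() as False:

import sys
import time
from tqdm import tqdm

def test():
    if sys.stdout.isatty():  # real console: '\r' overwriting works
        for i in tqdm(range(100)):
            time.sleep(0.1)
    else:  # IDLE and other non-terminals: coarse, line-based progress
        for i in range(100):
            time.sleep(0.1)
            if (i + 1) % 10 == 0:
                print("%d/100 done" % (i + 1))

test()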
I'm developing a network-based application for control and telemetry on a Linux-based embedded system. I'm using the ZMQ network library and the Google Protocol Buffers serialization library for communication.
I took a look at CurveZMQ; however, there is no official binding for C++ and I do not want to mess up my implementation with the CZMQ binding. Therefore I decided against the CurveZMQ extension, and I want to use some external library for authentication and encryption instead.
I want to apply the following security measures to my application/system:
"CURVE" security mechanism, to give me strong encryption on data, and (as far as we know) unbreakable authentication.
Client public key authentication.
I would appreciate a library with bindings to many programming languages, because my client application is going to run on many different platforms.
Nice to have:
Curve25519 elliptic curve cryptography (ECC) algorithm.
I believe I can encapsulate handshake messages in my Protocol Buffers messages. So basically the idea is to establish a secure tunnel between server and client and somehow enforce client and server authentication. Below you can find how my insecure system works and where I imagine putting the encryption layer. I do not have any idea how to solve the authentication issue at the moment.
Here is the big picture of my insecure system.
 ______________________                       ______________________
|                      |                     |                      |   APPLICATION
|        client        |                     |        server        |   LAYER
|      application     |                     |      application     |
|______________________|                     |______________________|
       /\      ||                                   /\      ||
       ||      ||                                   ||      ||
 ______||______\/______                       ______||______\/______
|                      |                     |                      |   SERIALIZATION
|       protobuf       |                     |       protobuf       |   DESERIALIZATION
|______________________|                     |______________________|   LAYER
       /\      ||                                   /\      ||
       ||      ||                                   ||      ||
 ______||______\/______                       ______||______\/______
|                      |                     |                      |   TRANSPORT
| zmq_socket.send(...) |-------------------->| zmq_socket.recv(...) |   LAYER
|                      |       (TCP/IP)      |                      |
| zmq_socket.recv(...) |<--------------------| zmq_socket.send(...) |
|______________________|                     |______________________|
     python client                                  C++ server
Here is how I imagine my secure system.
 ______________________                       ______________________
|                      |                     |                      |   APPLICATION
|        client        |                     |        server        |   LAYER
|      application     |                     |      application     |
|______________________|                     |______________________|
       /\      ||                                   /\      ||
       ||      ||                                   ||      ||
 ______||______\/______                       ______||______\/______
|                      |                     |                      |   SERIALIZATION
|       protobuf       |                     |       protobuf       |   DESERIALIZATION
|______________________|                     |______________________|   LAYER
       /\      ||                                   /\      ||
       ||      ||                                   ||      ||
 ______||______\/______                       ______||______\/______
|                      |                     |                      |   ENCRYPTION
|      encryption      |                     |      encryption      |   LAYER
|______________________|                     |______________________|
       /\      ||                                   /\      ||
       ||      ||                                   ||      ||
 ______||______\/______                       ______||______\/______
|                      |                     |                      |   TRANSPORT
| zmq_socket.send(...) |-------------------->| zmq_socket.recv(...) |   LAYER
|                      |       (TCP/IP)      |                      |
| zmq_socket.recv(...) |<--------------------| zmq_socket.send(...) |
|______________________|                     |______________________|
     python client                                  C++ server
Basically I need help with making my system secure. Could you help me with that?
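For what it's worth, the encryption layer in the second diagram can be prototyped with a NaCl/libsodium binding. PyNaCl is assumed here only because the client side is Python; libsodium has bindings for many languages, including C++ for the server. A minimal round trip (with key distribution and any handshake deliberately left out) might look like:

from nacl.public import PrivateKey, Box

# In a real deployment each side generates its own key pair and learns the
# peer's public key out of band; generating both here is just for the demo.
client_key = PrivateKey.generate()
server_key = PrivateKey.generate()

client_box = Box(client_key, server_key.public_key)  # Curve25519 + XSalsa20-Poly1305
server_box = Box(server_key, client_key.public_key)

payload = b"..."  # stand-in for message.SerializeToString()
wire = client_box.encrypt(payload)  # this is what zmq_socket.send(...) would carry
assert server_box.decrypt(wire) == payload

Because each Box is derived from the local secret key and the peer's public key, a successful decrypt also authenticates the peer, which addresses the client public key authentication requirement.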
I'm using Redhawk 1.10.0 on a 64-bit CentOS 6.5 machine and a USRP B100 with UHD driver 3.7.2.
The USRP b100 is recognized correctly by the system. It is a USB device.
I downloaded the latest version of the USRP_UHD Device ver. 3.0 for REDHAWK and created a Node including a GPP and a USRP_UHD device.
The Node starts without any problem, but when I run a simple waveform to read data from the USRP as an RX_DIGITIZER I get the following error:
Failed to create application: usrp_test_248_173059195 Failed to satisfy 'usesdevice'
dependencies DCE:18964b3d-392e-4b98-a90d-0569b5d46ffefor application 'usrp_test_248_173059195'
IDL:CF/ApplicationFactory/CreateApplicationError:1.0
The Device Manager log reports:
2014-09-05 17:31:03 TRACE FrontendTunerDevice:369 - CORBA::Boolean frontend::FrontendTunerDevice<TunerStatusStructType>::allocateCapacity(const CF::Properties&) [with TunerStatusStructType = frontend_tuner_status_struct_struct]
2014-09-05 17:31:03 INFO FrontendTunerDevice:502 - allocateCapacity: NO AVAILABLE TUNER. Make sure that the device has an initialized frontend_tuner_status
2014-09-05 17:31:03 TRACE FrontendTunerDevice:578 - void frontend::FrontendTunerDevice<TunerStatusStructType>::deallocateCapacity(const CF::Properties&) [with TunerStatusStructType = frontend_tuner_status_struct_struct]
2014-09-05 17:31:03 DEBUG FrontendTunerDevice:603 - ALLOCATION_ID NOT FOUND: [usrpAllocation]
2014-09-05 17:31:03 DEBUG FrontendTunerDevice:637 - ERROR WHEN DEALLOCATING. SKIPPING...
The Node console:
2014-09-05 17:31:39 TRACE ApplicationFactory_impl:2132 - Done building Component Info From SPD File
2014-09-05 17:31:39 TRACE ApplicationFactory_impl:1040 - Application has 1 usesdevice dependencies
2014-09-05 17:31:39 TRACE prop_utils:509 - setting struct item FRONTEND::tuner_allocation::allocation_id
2014-09-05 17:31:39 TRACE prop_utils:509 - setting struct item FRONTEND::tuner_allocation::bandwidth
2014-09-05 17:31:39 TRACE prop_utils:509 - setting struct item FRONTEND::tuner_allocation::center_frequency
2014-09-05 17:31:39 TRACE prop_utils:509 - setting struct item FRONTEND::tuner_allocation::group_id
2014-09-05 17:31:39 TRACE prop_utils:509 - setting struct item FRONTEND::tuner_allocation::rf_flow_id
2014-09-05 17:31:39 TRACE prop_utils:509 - setting struct item FRONTEND::tuner_allocation::sample_rate
2014-09-05 17:31:39 TRACE prop_utils:509 - setting struct item FRONTEND::tuner_allocation::tuner_type
2014-09-05 17:31:39 TRACE AllocationManager_impl:134 - Servicing 1 allocation request(s)
2014-09-05 17:31:39 TRACE AllocationManager_impl:144 - Allocation request DCE:18964b3d-392e-4b98-a90d-0569b5d46ffe contains 3 properties
2014-09-05 17:31:39 TRACE AllocationManager_impl:243 - Allocating against device uhd_node:USRP_UHD_1
2014-09-05 17:31:39 TRACE AllocationManager_impl:353 - Matching DCE:cdc5ee18-7ceb-4ae6-bf4c-31f983179b4d 'FRONTEND::TUNER' eq 'FRONTEND::TUNER'
2014-09-05 17:31:39 TRACE AllocationManager_impl:353 - Matching DCE:0f99b2e4-9903-4631-9846-ff349d18ecfb 'USRP' eq 'USRP'
2014-09-05 17:31:39 TRACE AllocationManager_impl:395 - Adding external property FRONTEND::tuner_allocation
2014-09-05 17:31:39 TRACE AllocationManager_impl:407 - Matched 2 properties
2014-09-05 17:31:39 TRACE AllocationManager_impl:264 - Allocating 1 properties (1 calls)
2014-09-05 17:31:39 TRACE AllocationManager_impl:267 - Device lacks sufficient capacity
2014-09-05 17:31:39 TRACE AllocationManager_impl:243 - Allocating against device uhd_node:GPP_1
2014-09-05 17:31:39 TRACE AllocationManager_impl:353 - Matching DCE:cdc5ee18-7ceb-4ae6-bf4c-31f983179b4d 'GPP' eq 'FRONTEND::TUNER'
2014-09-05 17:31:39 TRACE AllocationManager_impl:248 - Matching failed
2014-09-05 17:31:39 DEBUG ApplicationFactory_impl:1060 - Failed to satisfy 'usesdevice' dependencies DCE:18964b3d-392e-4b98-a90d-0569b5d46ffefor application 'usrp_test_248_173059195'
2014-09-05 17:31:39 TRACE ApplicationFactory_impl:528 - Unbinding the naming context
2014-09-05 17:31:39 TRACE Properties:85 - Destruction for properties
2014-09-05 17:31:39 TRACE PRF:312 - Deleting PRF containing 4 properties
I used the following parameters:
<usesdevicedependencies>
    <usesdevice id="DCE:18964b3d-392e-4b98-a90d-0569b5d46ffe"
                type="usesUSRP">
        <propertyref refid="DCE:cdc5ee18-7ceb-4ae6-bf4c-31f983179b4d"
                     value="FRONTEND::TUNER" />
        <propertyref refid="DCE:0f99b2e4-9903-4631-9846-ff349d18ecfb"
                     value="USRP" />
        <structref refid="FRONTEND::tuner_allocation">
            <simpleref refid="FRONTEND::tuner_allocation::tuner_type"
                       value="RX_DIGITIZER" />
            <simpleref refid="FRONTEND::tuner_allocation::allocation_id"
                       value="usrpAllocation" />
            <simpleref refid="FRONTEND::tuner_allocation::center_frequency"
                       value="102500000" />
            <simpleref refid="FRONTEND::tuner_allocation::bandwidth"
                       value="320000" />
            <simpleref refid="FRONTEND::tuner_allocation::sample_rate"
                       value="250000" />
            <simpleref refid="FRONTEND::tuner_allocation::group_id"
                       value="" />
            <simpleref refid="FRONTEND::tuner_allocation::rf_flow_id"
                       value="" />
        </structref>
    </usesdevice>
</usesdevicedependencies>
The B100 configuration is the following:
-- USRP-B100 clock control: 10
-- r_counter: 2
-- a_counter: 0
-- b_counter: 20
-- prescaler: 8
-- vco_divider: 5
-- chan_divider: 5
-- vco_rate: 1600.000000MHz
-- chan_rate: 320.000000MHz
-- out_rate: 64.000000MHz
--
  _____________________________________________________
 /
|       Device: B-Series Device
|     _____________________________________________________
|    /
|   |       Mboard: B100
|   |   revision: 8192
|   |   serial: E5R10Z1B1
|   |   FW Version: 4.0
|   |   FPGA Version: 11.4
|   |
|   |   Time sources: none, external, _external_
|   |   Clock sources: internal, external, auto
|   |   Sensors: ref_locked
|   |     _____________________________________________________
|   |    /
|   |   |       RX DSP: 0
|   |   |   Freq range: -32.000 to 32.000 Mhz
|   |     _____________________________________________________
|   |    /
|   |   |       RX Dboard: A
|   |   |   ID: WBX v3, WBX v3 + Simple GDB (0x0057)
|   |   |   Serial: E5R1BW6XW
|   |   |     _____________________________________________________
|   |   |    /
|   |   |   |       RX Frontend: 0
|   |   |   |   Name: WBXv3 RX+GDB
|   |   |   |   Antennas: TX/RX, RX2, CAL
|   |   |   |   Sensors: lo_locked
|   |   |   |   Freq range: 68.750 to 2200.000 Mhz
|   |   |   |   Gain range PGA0: 0.0 to 31.5 step 0.5 dB
|   |   |   |   Connection Type: IQ
|   |   |   |   Uses LO offset: No
|   |   |     _____________________________________________________
|   |   |    /
|   |   |   |       RX Codec: A
|   |   |   |   Name: ad9522
|   |   |   |   Gain range pga: 0.0 to 20.0 step 1.0 dB
|   |     _____________________________________________________
|   |    /
|   |   |       TX DSP: 0
|   |   |   Freq range: -32.000 to 32.000 Mhz
|   |     _____________________________________________________
|   |    /
|   |   |       TX Dboard: A
|   |   |   ID: WBX v3 (0x0056)
|   |   |   Serial: E5R1BW6XW
|   |   |     _____________________________________________________
|   |   |    /
|   |   |   |       TX Frontend: 0
|   |   |   |   Name: WBXv3 TX+GDB
|   |   |   |   Antennas: TX/RX, CAL
|   |   |   |   Sensors: lo_locked
|   |   |   |   Freq range: 68.750 to 2200.000 Mhz
|   |   |   |   Gain range PGA0: 0.0 to 31.0 step 1.0 dB
|   |   |   |   Connection Type: IQ
|   |   |   |   Uses LO offset: No
|   |   |     _____________________________________________________
|   |   |    /
|   |   |   |       TX Codec: A
|   |   |   |   Name: ad9522
|   |   |   |   Gain range pga: -20.0 to 0.0 step 0.1 dB
Where is my mistake?
Thanks in advance for any help.
The frontend_tuner_status struct sequence is what defines the device capabilities and capacity. If the sequence is empty, the allocation will always result in lack of capacity. An empty frontend_tuner_status property is usually the result of not specifying the target device, or not being able to find the specified target device.
You must specify the target device using the target_device struct property. This can be done within the Node or at run-time. Previous versions of the USRP_UHD REDHAWK Device only allowed the IP address to be specified using a property, but in order to support USB-connected (and in general, non-network-connected) USRP devices, this has been replaced with the target_device struct property.
The target_device property allows the user to specify ip_address, name, serial, or type, and will use the first USRP hardware device that meets the criteria, if one is found. The information that should be specified in the target_device struct can be determined by setting the update_available_devices property to true, causing the available_devices struct sequence to be populated with all found devices (the same devices and information that is reported by the command line tool uhd_find_devices).
To determine whether the USRP_UHD REDHAWK Device is connected to a target device, inspect the properties. Specifically, the frontend_tuner_status sequence will be empty if the Device is not linked to a hardware device, as will the device_motherboards and device_channels properties.
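As an illustration, selecting the hardware at run-time from the REDHAWK Python sandbox might look roughly like this; the property names follow the description above, but treat the exact attribute access and values as assumptions to check against your installation:

from ossie.utils import redhawk

dom = redhawk.attach()  # attach to the running domain

# Find the USRP_UHD device among the launched devices (name match is an assumption).
usrp = [d for mgr in dom.devMgrs for d in mgr.devs if "USRP_UHD" in d.name][0]

# Populate available_devices with everything uhd_find_devices would report.
usrp.update_available_devices = True
print(usrp.available_devices)

# Point the device at the B100; for a USB device, type or serial is the usual key.
usrp.target_device.type = "b100"
usrp.target_device.serial = "E5R10Z1B1"

# If frontend_tuner_status is now non-empty, the allocation should succeed.
print(usrp.frontend_tuner_status)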