RabbitMQ Shovel won't start - linux
I have a pair of computers running Arch Linux with RabbitMQ message queues, and I want to use a shovel to move messages from the queue on the first computer to the queue on the second. Unfortunately I can't seem to create a shovel, or even verify that my rabbitmq.config file is being read.
Computer 1 has an IP address of 192.168.6.66
/etc/rabbitmq/rabbitmq-env.conf
NODENAME=bunny
NODE_IP_ADDRESS=192.168.6.66
NODE_PORT=5672
LOG_BASE=/var/log/rabbitmq
MNESIA_BASE=/var/lib/rabbitmq/mnesia
RABBITMQ_PLUGINS_DIR=/usr/lib/rabbitmq/lib/rabbitmq_server-2.7.1/plugins
/etc/rabbitmq/rabbitmq.conf
[ {mnesia, [{dump_log_write_threshold, 100}]},
{bunny, [{vm_memory_high_watermark, 0.3}]},
{rabbitmq_shovel,
[{shovels,
[{test_shovel,
[{sources, [{broker, "amqp://shoveluser:shoveluser@192.168.6.64:5672/"}]},
{destinations, [{broker, ""}]},
{queue, <<"observation2">>}
]
}]
}]
}
].
Computer 2 has an IP address of 192.168.6.64
/etc/rabbitmq/rabbitmq-env.conf
NODENAME=bunny
NODE_IP_ADDRESS=0.0.0.0
NODE_PORT=5672
LOG_BASE=/var/log/rabbitmq
MNESIA_BASE=/var/lib/rabbitmq/mnesia
RABBITMQ_PLUGINS_DIR=/usr/lib/rabbitmq/lib/rabbitmq_server-2.7.1/plugins
When I restart the rabbitmq-server on computer 1 this is the output:
[root@test_toshiba ~]# /etc/rc.d/rabbitmq-server restart
:: Stopping rabbitmq-server daemon [BUSY] Stopping and halting node bunny@localhost ...
...done.
[DONE]
:: Starting rabbitmq-server daemon [BUSY] Activating RabbitMQ plugins ...
********************************************************************************
********************************************************************************
9 plugins activated:
* amqp_client-2.7.1
* erlando-2.7.1
* mochiweb-1.3-rmq2.7.1-git
* rabbitmq_management-2.7.1
* rabbitmq_management_agent-2.7.1
* rabbitmq_mochiweb-2.7.1
* rabbitmq_shovel-2.7.1
* rabbitmq_shovel_management-2.7.1
* webmachine-1.7.0-rmq2.7.1-hg
I expected to see "config file(s) : /etc/rabbitmq/rabbitmq.config" in this output, given the description in the config file documentation here.
And after the rabbitmq-server has started I run this command and don't see a shovel:
[root@test_toshiba ~]# rabbitmqctl eval 'rabbit_shovel_status:status().'
[]
...done.
Here is the RabbitMQ status:
[root@test_toshiba ~]# rabbitmqctl status
Status of node bunny@localhost ...
[{pid,14225},
{running_applications,
[{rabbitmq_shovel,"Data Shovel for RabbitMQ","2.7.1"},
{erlando,"Syntax extensions for Erlang","2.7.1"},
{rabbitmq_shovel_management,"Shovel Status","2.7.1"},
{rabbitmq_management,"RabbitMQ Management Console","2.7.1"},
{rabbitmq_management_agent,"RabbitMQ Management Agent","2.7.1"},
{amqp_client,"RabbitMQ AMQP Client","2.7.1"},
{rabbit,"RabbitMQ","2.7.1"},
{os_mon,"CPO CXC 138 46","2.2.9"},
{sasl,"SASL CXC 138 11","2.2.1"},
{rabbitmq_mochiweb,"RabbitMQ Mochiweb Embedding","2.7.1"},
{webmachine,"webmachine","1.7.0-rmq2.7.1-hg"},
{mochiweb,"MochiMedia Web Server","1.3-rmq2.7.1-git"},
{inets,"INETS CXC 138 49","5.9"},
{mnesia,"MNESIA CXC 138 12","4.7"},
{stdlib,"ERTS CXC 138 10","1.18.1"},
{kernel,"ERTS CXC 138 10","2.15.1"}]},
{os,{unix,linux}},
{erlang_version,
"Erlang R15B01 (erts-5.9.1) [source] [smp:4:4] [async-threads:30] [hipe] [kernel-poll:true]\n"},
{memory,
[{total,18530752},
{processes,6813815},
{processes_used,6813800},
{system,11716937},
{atom,428361},
{atom_used,414658},
{binary,182176},
{code,8197217},
{ets,911776}]},
{vm_memory_high_watermark,0.39999999942574066},
{vm_memory_limit,417929625}]
...done.
The logs at /var/log/rabbitmq didn't have any error messages in them.
How can I verify that my config file is being used, and why won't my shovel start?
You need to define a destination for the shovel.
[ {mnesia, [{dump_log_write_threshold, 100}]},
{bunny, [{vm_memory_high_watermark, 0.3}]},
{rabbitmq_shovel,
[{shovels,
[{test_shovel,
[{sources, [{broker, "amqp://shoveluser:shoveluser@192.168.6.64:5672/"}]},
{destinations, [{broker, "amqp://shoveluser:shoveluser@192.168.6.66:5672/"}]},
{queue, <<"observation2">>}
]
}]
}]
}
].
I think that "" is not sufficient to specify the localhost broker in the 'destinations' term; I believe that "amqp://" is required instead.
So, your destinations term should read:
{destinations, [{broker, "amqp://"}]}
Specifying the local broker's username and hostname (as in the other answer) may also work, but I don't think that's necessary.
Related
IP addressing of the equipment behind the router for SNMP
I am trying to develop a trap receiver for SNMPv1 and SNMPv2c with OpenVPN. I have used the puresnmp and pysnmp libraries with Python; pysnmp can receive SNMPv1 and SNMPv2c traps, while puresnmp only works with SNMPv2c. My project works as follows: the OpenVPN profile is loaded onto a router so that it can connect, and several devices behind the router are monitored over SNMP.

OpenVPN router IP: 10.8.0.18
Router LAN IP: 192.168.2.1
Device A IP: 192.168.2.3
OpenVPN server IP: 10.8.0.1
Client2 IP: 10.8.0.10

I manage to receive the traps, but I cannot tell which device on the LAN behind the router an alarm came from. This is the device's SNMPv1 trap as received:

Agent is listening SNMP Trap on 10.8.0.10 , Port : 162
--------------------------------------------------------------------------
SourceIP: ('10.8.0.18', 49181)
----------------------Received new Trap message---------------------------
1.3.6.1.2.1.1.3.0 = 910296
1.3.6.1.6.3.1.1.4.1.0 = 1.3.6.1.4.1.1918.2.13.0.700
1.3.6.1.6.3.18.1.3.0 = 10.168.2.3
1.3.6.1.6.3.18.1.4.0 = public
1.3.6.1.6.3.1.1.4.3.0 = 1.3.6.1.4.1.1918.2.13
1.3.6.1.4.1.1918.2.13.10.111.12.0 = 1
1.3.6.1.4.1.1918.2.13.10.111.10.0 = 3
1.3.6.1.4.1.1918.2.13.10.111.11.0 = Digital Input 1
1.3.6.1.4.1.1918.2.13.10.111.14.0 = 4
1.3.6.1.4.1.1918.2.13.10.10.40.0 = System Location
1.3.6.1.4.1.1918.2.13.10.10.50.0 = SiteName
1.3.6.1.4.1.1918.2.13.10.10.60.0 = SiteAddress
1.3.6.1.4.1.1918.2.13.10.111.13.0 = Input D01 Disconnected.

The "IP Source" line comes from a function that obtains the source IP, but it shows the router's IP, not the alarmed device's. The third varbind is a source address that the SNMPv1 protocol itself incorporates: 1.3.6.1.6.3.18.1.3.0 = 10.168.2.3. But that matches neither the device nor the router; it is as if the router's network segment had been mixed with the device's host segment.
This is the response when receiving SNMPv2c traps:

Agent is listening SNMP Trap on 10.8.0.10 , Port : 162
--------------------------------------------------------------------------
SourceIP: ('10.8.0.18', 49180)
----------------------Received new Trap message---------------------------
1.3.6.1.2.1.1.3.0 = 896022
1.3.6.1.6.3.1.1.4.1.0 = 1.3.6.1.4.1.1918.2.13.20.700
1.3.6.1.4.1.1918.2.13.10.111.12.0 = 1
1.3.6.1.4.1.1918.2.13.10.111.10.0 = 3
1.3.6.1.4.1.1918.2.13.10.111.11.0 = Digital Input 1
1.3.6.1.4.1.1918.2.13.10.111.14.0 = 5
1.3.6.1.4.1.1918.2.13.10.10.40.0 = System Location
1.3.6.1.4.1.1918.2.13.10.10.50.0 = SiteName
1.3.6.1.4.1.1918.2.13.10.10.60.0 = SiteAddress
1.3.6.1.4.1.1918.2.13.10.111.13.0 = Input D01 Disconnected.

With SNMPv2c traps I can only get the transport source IP, which is the router's; that does not help me, since several devices sit behind the router and there is no way to identify which one the alarm came from. I am doing the tests from an OpenVPN client, not yet from the server; it will be moved to the server once it works well. Could you help me figure out why this happens? I thought it was a problem with the libraries, but I tried other trap-receiving software and the result was the same.
This is the server configuration:

port 1194
proto udp
dev tun
ca ca.crt
cert server.crt
key server.key
dh none
server 10.8.0.0 255.255.0.0
ifconfig-pool-persist /var/log/openvpn/ipp.txt
client-config-dir ccd
route 192.168.0.0 255.255.0.0
keepalive 10 120
tls-crypt ta.key
cipher AES-256-GCM
auth SHA256
user nobody
group nogroup
persist-key
persist-tun
status /var/log/openvpn/openvpn-status.log
verb 3
explicit-exit-notify 1

This is the client configuration:

client
dev tun
proto udp
remote x.x.x.x 1194
resolv-retry infinite
nobind
user nobody
group nogroup
persist-key
persist-tun
remote-cert-tls server
cipher AES-256-GCM
auth SHA256
key-direction 1
verb 3
<ca>
</ca>
<cert>
</cert>
<key>
</key>
<tls-crypt>
#
#
</tls-crypt>

Firewall configuration on the server:

# START OPENVPN RULES
# NAT table rules
*nat
:POSTROUTING ACCEPT [0:0]
# Allow traffic from OpenVPN client to eth0 (change to the interface you discovered!)
-A POSTROUTING -s 10.8.0.0/8 -o eth0 -j MASQUERADE
COMMIT
# END OPENVPN RULES

This is the code that receives the SNMPv1 and SNMPv2c traps:

from pysnmp.entity import engine, config
from pysnmp.carrier.asyncore.dgram import udp
from pysnmp.entity.rfc3413 import ntfrcv

datosnmp = []

snmpEngine = engine.SnmpEngine()
TrapAgentAddress = '10.8.0.10'  # trap listener address
Port = 162                      # trap listener port

print("Agent is listening SNMP Trap on " + TrapAgentAddress + " , Port : " + str(Port))
print('--------------------------------------------------------------------------')

config.addTransport(snmpEngine, udp.domainName + (1,),
                    udp.UdpTransport().openServerMode((TrapAgentAddress, Port)))
# Configure community here
config.addV1System(snmpEngine, 'my-area', 'public')

def cbFun(snmpEngine, stateReference, contextEngineId, contextName, varBinds, cbCtx):
    global datosnmp
    execContext = snmpEngine.observer.getExecutionContext('rfc3412.receiveMessage:request')
    print("IP Source: ", execContext['transportAddress'])  # source IP of the trap
    print('{0}Received new Trap message{0}\n'.format('-' * 40))
    for oid, val in varBinds:
        datosnmp.append(val.prettyPrint())
        print('%s = %s' % (oid.prettyPrint(), val.prettyPrint()))  # oid = OID, val = the OID's value

ntfrcv.NotificationReceiver(snmpEngine, cbFun)
snmpEngine.transportDispatcher.jobStarted(1)
try:
    snmpEngine.transportDispatcher.runDispatcher()
except:
    snmpEngine.transportDispatcher.closeDispatcher()
    raise

Could it be a configuration error on the OpenVPN server, or does something else need to be added? Have you seen anything similar? Any comment is appreciated.
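Since an SNMPv1 trap carries the originating agent's address as a varbind (snmpTrapAddress.0, OID 1.3.6.1.6.3.18.1.3.0), one way to distinguish devices behind the router is to prefer that varbind over the transport-layer source. A minimal sketch, independent of pysnmp, with varbinds represented as plain (oid, value) string pairs; `trap_origin` is a hypothetical helper, not a pysnmp API:

```python
# snmpTrapAddress.0: the agent address embedded in a translated SNMPv1 trap.
SNMP_TRAP_ADDRESS_OID = '1.3.6.1.6.3.18.1.3.0'

def trap_origin(var_binds, transport_source):
    """Best guess at the device that emitted the trap.

    Prefers the agent address carried inside the SNMPv1 trap PDU;
    falls back to the transport-layer source (the router, after NAT)
    when the varbind is absent, as it is for SNMPv2c traps.
    """
    for oid, value in var_binds:
        if oid == SNMP_TRAP_ADDRESS_OID:
            return value
    return transport_source

# Example using the varbinds from the question:
v1_binds = [('1.3.6.1.2.1.1.3.0', '910296'),
            ('1.3.6.1.6.3.18.1.3.0', '10.168.2.3'),
            ('1.3.6.1.6.3.18.1.4.0', 'public')]
print(trap_origin(v1_binds, '10.8.0.18'))  # agent address from the trap itself
print(trap_origin([], '10.8.0.18'))        # SNMPv2c case: only the NATed sender
```

Inside cbFun the same check could be applied after calling oid.prettyPrint() on each varbind. For SNMPv2c traps, which do not carry this varbind, the devices would have to be distinguished some other way (for example, a distinct community string per device).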
Nextflow with Azure Batch - Cannot find a matching VM image
While trying to set up Nextflow with Azure Batch (nf-core), I am getting the following error. I tried this on multiple workflows (sarek, atacseq, etc.) and I get the same error:

N E X T F L O W ~ version 22.04.0
Pulling nf-core/atacseq ...
downloaded from https://github.com/nf-core/atacseq.git
Launching `https://github.com/nf-core/atacseq` [rhl6d5529] DSL1 - revision: 1b3a832db5 [1.2.1]
Downloading plugin nf-azure@0.13.1
----------------------------------------------------
nf-core/atacseq v1.2.1
----------------------------------------------------
Run Name             : rhl6d5529
Data Type            : Paired-End
Design File          : https://raw.githubusercontent.com/nf-core/test-datasets/atacseq/design.csv
Genome               : Not supplied
Fasta File           : https://raw.githubusercontent.com/nf-core/test-datasets/atacseq/reference/genome.fa
GTF File             : https://raw.githubusercontent.com/nf-core/test-datasets/atacseq/reference/genes.gtf
Mitochondrial Contig : MT
MACS2 Genome Size    : 1.2E+7
Min Consensus Reps   : 1
MACS2 Narrow Peaks   : No
MACS2 Broad Cutoff   : 0.1
Trim R1              : 0 bp
Trim R2              : 0 bp
Trim 3' R1           : 0 bp
Trim 3' R2           : 0 bp
NextSeq Trim         : 0 bp
Fingerprint Bins     : 100
Save Genome Index    : No
Max Resources        : 6 GB memory, 2 cpus, 12h time per job
Container            : docker - nfcore/atacseq:1.2.1
Output Dir           : ./results
Launch Dir           : /
Working Dir          : /nextflow/atacseq/rhl6d5529
Script Dir           : /.nextflow/assets/nf-core/atacseq
User                 : root
Config Profile       : test,azurebatch
Config Description   : Minimal test dataset to check pipeline function
Config Contact       : Venkat Malladi (@vsmalladi)
Config URL           : https://azure.microsoft.com/services/batch/
----------------------------------------------------
Uploading local `bin` scripts folder to az://nextflow/atacseq/rhl6d5529/tmp/66/bd55d79e42999df38ba04a81c3aa04/bin
[58/55b7f7] process > CHECK_DESIGN (design.csv) [100%] 1 of 1, failed: 1
Error executing process > 'CHECK_DESIGN (design.csv)'
Caused by:
Cannot find a matching VM image with publisher=microsoft-azure-batch; offer=centos-container; OS type=linux; verification type=verified

I tried looking into the source code of Nextflow and found the error is raised in AzBatchService.groovy (line linked below):
https://github.com/nextflow-io/nextflow/blob/0e593e6ab82880810d8139a4fe6e3c47ff69a531/plugins/nf-azure/src/main/nextflow/cloud/azure/batch/AzBatchService.groovy#L442
I did some further digging in my Azure Batch account instance. Basically, I wanted to confirm whether the list of supported images returned by the Azure Batch account includes the one required by this pipeline, and I could confirm that the server did indeed respond with the required image.
What could be the issue here? I remember running the exact same pipeline a few weeks back and it did work a few times. Am I missing something?
Just had another look through the Azure Cloud docs and think this might be relevant:

By default, Nextflow creates CentOS 8-based pool nodes, but this behavior can be customised in the pool configuration. Below are the configurations for the image reference/SKU combinations that select two popular systems.

Ubuntu 20.04:
sku = "batch.node.ubuntu 20.04"
offer = "ubuntu-server-container"
publisher = "microsoft-azure-batch"

CentOS 8 (default):
sku = "batch.node.centos 8"
offer = "centos-container"
publisher = "microsoft-azure-batch"

I think the issue here is a mismatched nodeAgentSkuId: Nextflow is expecting a CentOS 8 node agent SKU, but you have a CentOS 7 SKU. If it's not possible to change the nodeAgentSkuId somehow, the node agent SKU that Nextflow uses should be able to be overridden by adding this to your nextflow.config:

azure.batch.pools.<name>.sku = 'batch.node.centos 7'

Where <name> is the pool identifier:

azure.batch.pools.<name>.sku
Specify the ID of the Compute Node agent SKU which the pool identified with <name> supports (default: batch.node.centos 8, requires nf-azure@0.11.0).

https://www.nextflow.io/docs/edge/azure.html#advanced-settings
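Spelled out as a config block, a minimal nextflow.config sketch of the override might look like this (the pool name `mypool` is an assumption; substitute the pool identifier you actually use):

```
azure {
    batch {
        pools {
            mypool {
                sku = 'batch.node.centos 7'
            }
        }
    }
}
```

The dotted form azure.batch.pools.mypool.sku = 'batch.node.centos 7' is equivalent; use whichever matches the rest of your config file.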
Gammu - Entry is empty, cannot set SMSC
Short description: when I try to send an SMS I receive the error "Failed to get SMSC number from phone.", so I try to set the SMSC number and receive the error "Entry is empty."

The commands are:

root@mail:/home/victor# echo "Dragon Ball super is Awsome!" | gammu --sendsms TEXT +40740863629
Failed to get SMSC number from phone.
root@mail:/home/victor# gammu setsmsc 1 "+40748438759"
Entry is empty.

The result of the command gammu identify is:

root@mail:/home/victor# gammu identify
Device       : /dev/ttyUSB0
Manufacturer : Qualcomm
Model        : unknown (HSDPA Modem)
Firmware     : 01.02.04 1 [Nov 27 2015 14:33:39]
SIM IMSI     : +CIMI:226102317883481

Maybe my device is not supported by gammu? This is my configuration file (I tried different configurations):

[gammu]
port = /dev/ttyUSB0
model =
connection = at19200
synchronizetime = no
logfile = /var/log/gammu.log
logformat = textall
use_locking =
gammuloc =

I am using gammu version 1.37 on Ubuntu.
I just read the manual of the device; the solution is to load the right driver:

modprobe usbserial vendor=0x5c6 product=0x6000

After this the SMS can be sent, with no need to set the SMSC manually.
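To avoid running modprobe by hand after every reboot, one option (an assumption about a Debian/Ubuntu-style setup, not something from the device manual) is to persist the module parameters in a modprobe options file:

```
# /etc/modprobe.d/usbserial.conf  (hypothetical file name)
options usbserial vendor=0x5c6 product=0x6000
```

and add a `usbserial` line to /etc/modules so the module is loaded at boot with those parameters.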
Ceph-rgw Service stop automatically after installation
In my local cluster (4 Raspberry Pis) I am trying to configure an RGW gateway. Unfortunately the service disappears automatically after 2 minutes.

[ceph_deploy.rgw][INFO ] The Ceph Object Gateway (RGW) is now running on host OSD1 and default port 7480

cephuser@admin:~/mycluster $ ceph -s
cluster:
id: 745d44c2-86dd-4b2f-9c9c-ab50160ea353
health: HEALTH_WARN
too few PGs per OSD (24 < min 30)
services:
mon: 1 daemons, quorum admin
mgr: admin(active)
osd: 4 osds: 4 up, 4 in
rgw: 1 daemon active
data:
pools: 4 pools, 32 pgs
objects: 80 objects, 1.09KiB
usage: 4.01GiB used, 93.6GiB / 97.6GiB avail
pgs: 32 active+clean
io:
client: 5.83KiB/s rd, 0B/s wr, 7op/s rd, 1op/s wr

After one minute the service (rgw: 1 daemon active) is no longer visible:

cephuser@admin:~/mycluster $ ceph -s
cluster:
id: 745d44c2-86dd-4b2f-9c9c-ab50160ea353
health: HEALTH_WARN
too few PGs per OSD (24 < min 30)
services:
mon: 1 daemons, quorum admin
mgr: admin(active)
osd: 4 osds: 4 up, 4 in
data:
pools: 4 pools, 32 pgs
objects: 80 objects, 1.09KiB
usage: 4.01GiB used, 93.6GiB / 97.6GiB avail
pgs: 32 active+clean

Many thanks for the help.
Solution: on the gateway node, open the Ceph configuration file in the /etc/ceph/ directory and find an RGW client section similar to this example:

[client.rgw.gateway-node1]
host = gateway-node1
keyring = /var/lib/ceph/radosgw/ceph-rgw.gateway-node1/keyring
log file = /var/log/ceph/ceph-rgw-gateway-node1.log
rgw frontends = civetweb port=192.168.178.50:8080 num_threads=100

https://access.redhat.com/documentation/en-us/red_hat_ceph_storage/3/html-single/object_gateway_guide_for_red_hat_enterprise_linux/index
IPv6 encapsulation on Azure ILPIP
Is the use of IPv6 tunnels (like tunnelbroker.net) possible on an Azure VM via ILPIP (Instance Level Public IP)? I tried to use one, but I don't get replies to ping packets sent to the IPv6 gateway.

Internet Protocol Version 4, Src: 100.90.204.79, Dst: 216.66.84.46
    Version: 4
    Header Length: 20 bytes
    Differentiated Services Field: 0x00 (DSCP: CS0, ECN: Not-ECT)
    Total Length: 124
    Identification: 0x33d7 (13271)
    Flags: 0x02 (Don't Fragment)
    Fragment offset: 0
    Time to live: 255
    Protocol: IPv6 (41)
    Header checksum: 0xea66 [validation disabled]
    Source: 100.90.204.79
    Destination: 216.66.84.46
Internet Protocol Version 6, Src: 2001:470:1f14:105a::2, Dst: 2001:470:1f14:105a::1
    Version: 6
    Traffic class: 0x00 (DSCP: CS0, ECN: Not-ECT)
    Flowlabel: 0x0009776a
    Payload length: 64
    Next header: ICMPv6 (58)
    Hop limit: 64
    Source: 2001:470:1f14:105a::2
    Destination: 2001:470:1f14:105a::1
Internet Control Message Protocol v6
    Type: Echo (ping) request (128)
    Code: 0
    Checksum: 0xd3f8 [correct]
    Identifier: 0x5016
    Sequence: 1
    [No response seen to ICMPv6 request in frame 212]
    Data (56 bytes)
    [Length: 56]

I suspect that Azure is rejecting IP protocol 41 (IPv6-in-IPv4 encapsulation); am I correct?
The following is documented: Microsoft has played a leading role in helping customers to smoothly transition from IPv4 to IPv6 for the past several years. To date, Microsoft has built IPv6 support into many of its products and solutions like Windows 8 and Windows Server 2012 R2. Microsoft is committed to expanding the worldwide capabilities of the Internet through IPv6 and enabling a variety of valuable and exciting scenarios, including peer-to-peer and mobile applications. The foundational work to enable IPv6 in the Azure environment is well underway. However, we are unable to share a date when IPv6 support will be generally available at this time.