I am trying to set up remote logging for my OpenWrt system. For that I configured /etc/config/system like this:
config system
    option hostname 'MySystem'
    option timezone 'UTC'
    option log_file '/var/log/messages'
    option log_type 'file'
    option log_size '64'
    option log_rotated '10'
    option log_ip '192.168.1.200'
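After changing these options I restart the logging daemon so they take effect; a sketch, assuming the standard OpenWrt init script:

/etc/init.d/log restart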
On my Ubuntu system I am trying to receive those log messages. syslog-ng is installed, and /etc/syslog-ng/syslog-ng.conf looks like this:
@version: 3.5
@include "scl.conf"
@include "`scl-root`/system/tty10.conf"
# First, set some global options.
options { chain_hostnames(off); flush_lines(0); use_dns(no); use_fqdn(no);
owner("root"); group("adm"); perm(0640); stats_freq(0);
bad_hostname("^gconfd$");
};
source s_net { udp(); };
destination s_messages { file("/var/log/my_test/remote.log");};
log { source(s_net); destination(s_messages);};
#include "/etc/syslog-ng/conf.d/*.conf"
Whenever a log message is written on OpenWrt, /var/log/messages shows:
Mon Dec 19 15:11:18 2016 daemon.emerg logread[1021]: Logread connected to 192.168.1.200:514
Mon Dec 19 15:11:27 2016 local0.info my_service[1348]: My logging message
Mon Dec 19 15:11:27 2016 daemon.emerg logread[1021]: failed to send log data to 192.168.1.200:514 via udp
What could be the problem? Ping from OpenWrt to 192.168.1.200 is successful, so I guess OpenWrt itself is working fine. The problem is the syslog-ng configuration, right?
Thanks for any help!
Finally it worked. The problem was on my Ubuntu system (the firewall). OpenWrt worked fine.
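For reference, opening the syslog port on Ubuntu looks like this; a minimal sketch, assuming ufw is the active firewall (the raw iptables equivalent is shown second):

# allow inbound syslog over UDP
sudo ufw allow 514/udp
# or, with plain iptables:
sudo iptables -I INPUT -p udp --dport 514 -j ACCEPT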
I just used the config system part of this question and the server configuration instructions on this page and it worked like a charm.
I created a /etc/rsyslog.d/10-openwrt-remote-logread.conf file with this content (no iptables needed):
# Load the UDP input module and listen on the standard syslog port
$ModLoad imudp
$UDPServerRun 514
# Write messages from the router to a dedicated file...
:fromhost-ip, isequal, "192.168.0.1" /var/log/openwrt.log
# ...and discard them so they don't also land in the default logs
& ~
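After saving the file, rsyslog needs a restart to pick it up; a sketch, assuming a systemd-based install:

sudo systemctl restart rsyslog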
Now I have a nice openwrt.log file on my Raspberry Pi.
I am currently working on the ARM Cortex-M4 inside the NXP i.MX8M Mini.
I am able to compile a project for the M4 in the Eclipse IDE on an Ubuntu VM.
I would now like to debug on the M4 via a SEGGER Flasher ARM probe, still from Ubuntu.
My probe is recognized correctly by Ubuntu, and I can launch the J-Link GDB server by simply typing the command:
$ sudo ./JLinkGDBServerCLExe
However, if I type the same command without sudo, I get:
$ ./JLinkGDBServerCLExe
SEGGER J-Link GDB Server V7.58b Command Line Version
JLinkARM.dll V7.58b (DLL compiled Nov 16 2021 15:04:27)
-----GDB Server start settings-----
GDBInit file: none
GDB Server Listening port: 2331
SWO raw output listening port: 2332
Terminal I/O port: 2333
Accept remote connection: yes
Generate logfile: off
Verify download: off
Init regs on start: off
Silent mode: off
Single run mode: off
Target connection timeout: 0 ms
------J-Link related settings------
J-Link Host interface: USB
J-Link script: none
J-Link settings file: none
------Target related settings------
Target device: Unspecified
Target interface: JTAG
Target interface speed: 4000kHz
Target endian: little
Connecting to J-Link...
Connecting to J-Link failed. Connected correctly?
GDBServer will be closed...
Shutting down...
Could not connect to J-Link.
Please check power, connection and settings.
My problem is that when I start Eclipse, I get the same result as when starting the GDB server without sudo.
It seems to be a permissions issue; how can I solve it?
As @KamilCuk said, the problem came from the udev rules.
So you just have to copy the rules file provided by SEGGER with the J-Link Software onto the system:
$ sudo cp 99-jlink.rules /etc/udev/rules.d
Then you have to reboot the system:
$ reboot
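A reboot is the simple route; alternatively, the rules can usually be reloaded in place (then unplug and replug the probe), a sketch:

sudo udevadm control --reload-rules
sudo udevadm trigger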
I've been trying to troubleshoot this problem for some days now.
A couple of minutes after starting an SSH connection to my Namecheap server (from Mac, Windows, or cPanel's "Terminal"), it crashes and gives the following error message:
Error: The connection to the server ended in failure at {TIME} PM. (SIGKILL)
and:
Exit Code: 137
I've tried to set up some kind of logging for SIGKILL signals, but it seems that none can be done on a Namecheap server:
auditctl doesn't exist,
and we can't install SystemTap because no package manager is available.
Details:
uname -a: Linux [-n] 2.6.32-954.3.5.lve1.4.78.el6.x86_64 #1 SMP Thu Mar 26 08:20:27 EDT 2020 x86_64 x86_64 x86_64 GNU/Linux
I measured the time between crashes: around 6 minutes.
I don't have a very good knowledge of Linux servers and may have left out needed information, so please ask for any specifics!
I am trying to run my OpenVPN client on my Windows 10 machine in order to connect to a remote OpenVPN CentOS 7 server, but it does not work. I get the errors below:
Options error: --capath fails with 'C:\Users\Desktop\OpenVPN\ca.crt': No such process (errno=3)
Options error: --cert fails with 'C:\Users\Desktop\OpenVPN\Win10client.crt': No such process (errno=3)
Fri Mar 22 22:56:20 2019 WARNING: cannot stat file 'C:\Users\Desktop\OpenVPN\Win10client.key': No such process (errno=3)
Options error: --key fails with 'C:\Users\Desktop\OpenVPN\Win10client.key'
Fri Mar 22 22:56:20 2019 WARNING: cannot stat file 'C:\Users\Desktop\OpenVPN\myvpn.tlsauth': No such process (errno=3)
Options error: --tls-crypt fails with 'C:\Users\Desktop\OpenVPN\myvpn.tlsauth': No such process (errno=3)
This is the config that I have in my .ovpn file:
client
tls-client
--capath C:\\Users\\Desktop\\OpenVPN\\ca.crt
--cert C:\\Users\\Desktop\\OpenVPN\\Win10client.crt
--key C:\\Users\\Desktop\\OpenVPN\\Win10client.key
--tls-crypt C:\\Users\\Desktop\\OpenVPN\\myvpn.tlsauth
remote-cert-eku "TLS Web Client Authentication"
proto udp
remote serveraddress 1194 udp
dev tun
topology subnet
pull
Assuming your config file is correct: try reinstalling OpenVPN and putting your config file into the C:\Program Files\OpenVPN\config folder. Then you can start the OpenVPN service, so you don't need to use the OpenVPN GUI.
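For what it's worth, two more guesses: capath expects a directory of CA certificates, so a single file like ca.crt belongs to the ca option instead; and the paths may simply not exist as written, since C:\Users\Desktop\... has no user name between Users and Desktop. A sketch of the relevant directives once the files are in place (YourName is a placeholder):

ca "C:\\Users\\YourName\\Desktop\\OpenVPN\\ca.crt"
cert "C:\\Users\\YourName\\Desktop\\OpenVPN\\Win10client.crt"
key "C:\\Users\\YourName\\Desktop\\OpenVPN\\Win10client.key"
tls-crypt "C:\\Users\\YourName\\Desktop\\OpenVPN\\myvpn.tlsauth"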
The same pub-sub code works on local machine (Linux zephyr 3.13.0-27-generic #50-Ubuntu SMP Thu May 15 18:08:16 UTC 2014 i686 i686 i686 GNU/Linux).
However, on EC2 machine (Linux <host> 3.2.0-60-virtual #91-Ubuntu SMP Wed Feb 19 04:13:28 UTC 2014 x86_64 x86_64 x86_64 GNU/Linux) it fails.
The security group is set to allow everything on port 19019 and also on all TCP ports starting from 0.
I tried adding prints in the Node.js ZMQ module and was able to see the data I am sending when I added one in the flush function.
What else could be the problem?
I tried listening to pub traffic using tcpflow on port 19019 but it didn't work. How can I listen to this traffic?
sudo tcpflow -i eth0 port 19019 and sudo tcpflow -i lo port 19019
Neither worked. Is there any tool through which I can debug this?
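For reference, a tcpdump equivalent I could try; a sketch, capturing on the loopback interface since the pub socket binds to 127.0.0.1:

sudo tcpdump -i lo -A 'tcp port 19019'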
Pub.coffee
zmq = require 'zmq'

dpush_socket = zmq.socket 'pub'
dpush_socket.bind 'tcp://127.0.0.1:19019', (err) ->
  if not err?
    console.log "Bind successful"

# req comes from the surrounding HTTP request handler (not shown)
dpush_socket.send 'pid' + ' req ' + req.query.pid
Sub.coffee
zmq = require "zmq"
endPoint = "tcp://0.0.0.0:19019"
sub = zmq.socket "sub"
sub.identity = 'worker' + process.pid;
sub.connect endPoint
console.log "worker connected!"
sub.subscribe('')
sub.on "message", (msg) ->
console.log(sub.identity + 'got ' + msg.toString())
The PUB and SUB sides of the transport must meet on the same IP:PORT.
Sub.coffee
zmq = require "zmq"
# # rather set URL, where PUB .bind() listens
endPoint = "tcp://127.0.0.1:19019" # endPoint = "tcp://0.0.0.0:19019"
Part of the answer is probably what user3666197 pointed out: you need to bind and connect on the same IP. I'm not sure what you intend with the 0.0.0.0 address, and it shouldn't work even on your local machine unless you found some undocumented corner of your network stack that supports this behavior.
The other thing is that you either want to include your send call in your callback, or probably want to use bindSync to ensure that the socket is bound before you attempt to send anything. What may be happening is that the socket is discarding your sent message because the socket hasn't completed binding by the time you get to the call. This could well be different between different machines.
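As a minimal sketch of the second point, using the same zmq module (the payload is a placeholder, since req in the question comes from a surrounding request handler that isn't shown):

zmq = require 'zmq'

dpush_socket = zmq.socket 'pub'
# bindSync returns only once the socket is bound,
# so the send below is not racing the bind
dpush_socket.bindSync 'tcp://127.0.0.1:19019'
dpush_socket.send 'pid req 1234'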
The problem was that I use the Node.js cluster module, and in each of the workers a zmq pub socket is created that binds to the same port, which caused the issue. On my local machine only a single worker is spawned.
In order to develop a cross-platform syslog client, I am trying to do it without using the syslog() call. I am developing this client in C++ and, for now, testing on Linux. The old syslog client that I am replacing worked perfectly fine with the syslog() call.
For now, it simply doesn't work. The trace is not in /var/log/user.log where it should be, nor anywhere else (grepped). But I do receive it when I listen on the right port with netcat. Shouldn't port 514 already be in use, by the way?
The trace is sent on UDP/514 as it should be. I tried to stick to RFC 3164, but something is still obviously wrong.
I'd really appreciate it if someone could give me a hint to solve this.
Trace: severity: 2 (Critical); facility: 23 (Local Use 7) ==> priority: 186
sh$> sudo nc -ul localhost -p 514
<186>Oct 18 10:36:03 hostname test_trace: | 10:36:03.242995 | CRIT | xxx-MAIN[5473-000] | 00000 | 0008 : main : user_msg
Thank you!
I think I found the problem in my own question: Rsyslog (my syslog server) doesn't listen on UDP/514 correctly.
/etc/rsyslog.conf
$ModLoad imudp
$UDPServerAddress 0.0.0.0
$UDPServerRun 514
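One way to check whether rsyslog actually opened the socket after a restart; a sketch:

sudo ss -ulpn | grep 514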
If someone has any idea why it still doesn't listen on UDP/514, I'd be really thankful, because I really don't see why.
Thank you again.
The syslog() call writes to /dev/log, and the system logger reads this Unix domain socket to pick up the message. UDP/514 is for network transmission.
So it is not clear what you want.
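A quick way to see the difference from a shell, assuming a stock rsyslog setup: logger goes through /dev/log and ends up in the local log files, while nc sends a raw RFC 3164 datagram to UDP/514, which only arrives if a daemon is listening on that port:

# local route: writes to /dev/log (facility local7, severity crit, i.e. priority 186)
logger -p local7.crit "user_msg"
# network route: raw datagram to UDP/514
echo '<186>Oct 18 10:36:03 hostname test: user_msg' | nc -u -w1 127.0.0.1 514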