rsyslog: store & forward logs that have been forwarded to this server

What I want to do is covered here almost exactly: https://scubarda.com/2015/11/06/rsyslog-store-and-forward-messages-to-other-hosts/
There is one issue, however: it suggests
$template RemoteHost,"/var/spool/rsyslog/%HOSTNAME%/%$YEAR%/%$MONTH%/%$DAY%/syslog.log"
to first store the logs locally. However, I'm already storing them locally with a series of directives like:
:HOSTNAME, isequal, "netdevice01" /var/log/network.log
& ~
:HOSTNAME, isequal, "netdevice02" /var/log/network.log
& ~
etc., and I want to keep those as they are. I only want logs from those devices to go into this one file and also get forwarded. When I use the $template RemoteHost,"/var/spool/rsyslog/%HOSTNAME%/%$YEAR%/%$MONTH%/%$DAY%/syslog.log" directive, it stops logging to my /var/log/network.log and instead stores everything (including logs from other servers) in those new per-host files.
Can anyone suggest how to keep everything in /var/log/network.log the way it is now but also forward these to the remote syslog server?
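One approach that might work with the existing rules is to chain a forward action onto each HOSTNAME filter with &, so a matching message is written to the file, forwarded, and only then discarded (the remote address is a placeholder):

# store locally, forward (@@ is TCP; a single @ would be UDP), then discard
:HOSTNAME, isequal, "netdevice01" /var/log/network.log
& @@remote-syslog.example.com:514
& ~
:HOSTNAME, isequal, "netdevice02" /var/log/network.log
& @@remote-syslog.example.com:514
& ~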

Related

linux redirect localhost port to url port

I need to redirect localhost:8080 to http://url:8080/.
Some background:
I am using Docker swarm stack services. One service (MAPS) runs a simple HTTP server on port 8080 that lists XML files, and another service (WAS) runs WebSphere Application Server with a connector that uses these files; to be more precise, it reads a file maps.xml that lists the URLs of the other files as http://localhost:8080/<file-name>.xml.
I know Docker allows me to call on the service name and port from within the services, so I can run curl http://MAPS:8080/ from inside my WAS service and it outputs my list of XML files.
However, this will not always be true. The prod team may change the port number they want to publish or they might update the maps.xml file and forget to change localhost:8080 to MAPS:8080.
Is there a way to make any call to localhost:8080 get redirected to another URL, preferably using a configuration file? It also needs to be lightweight, since the WAS service is already quite heavy and I can't make it much larger to deploy.
Solutions I tried:
iptables: Installed it in the WAS service container, but when I tried to use it, it reported that my kernel was outdated
tinyproxy: Tried setting it up as a reverse proxy, but I couldn't make it work
ncat with inetd: Tried this solution as well, but it also didn't work
I am NO expert so please excuse any noob mistakes I made. And thanks in advance!
It is generally not a good idea to redirect localhost to another location, as it might disrupt your local environment in surprising ways. Many packages depend on localhost being localhost :-)
It is possible to add MAPS to your hosts file (/etc/hosts), giving it the address of the MAPS service.
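For example, a line like the following in the WAS container's /etc/hosts would pin the name; the address is a placeholder for whatever MAPS actually resolves to in your overlay network:

# /etc/hosts inside the WAS container; 10.0.0.5 stands in for the MAPS address
10.0.0.5    MAPS

Note that this pins the name MAPS, not localhost, so it only helps once maps.xml refers to http://MAPS:8080/ rather than localhost:8080.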

How to get haproxy to use a specific cluster computer via the URI

I have successfully set up haproxy on my server cluster. I have run into one snag that I can't find a solution for...
TESTING INDIVIDUAL CLUSTER COMPUTERS
It can happen that, for one reason or another, one computer in the cluster gets a configuration variation. I can't find a way to tell haproxy that I want to use a specific computer out of the cluster.
Basically, mysite.com (and several other domains) are served up by boxes web1, web2 and web3. And they round-robin perfectly.
I want to add something to the URL to tell haproxy that I specifically want to talk to web2 only because in a specific case, only that server is throwing an error on one web page.
Does anyone know how to do that without building a new cluster with a URI filter that has only one computer in it? I am hoping to use the cluster as-is, but add something to the URI that tells haproxy which server in the cluster to use.
Thanks!
Have you thought about using a different port for this? You could define a new listen section on a different port since, as I understand it, you can modify your URL by any means.
Basically, haproxy cannot do what I was hoping. There is no way to add a param to the URL to suggest which host in the cluster to use.
I solved my testing issue by setting up unique ports for each server in the cluster at the firewall. This could also be done at the haproxy level.
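A sketch of the haproxy variant, with one extra listen section per cluster member (the port and backend address are placeholders, not taken from the setup above):

# direct access to a single cluster member on its own port
listen web2-direct
    bind *:8082
    mode http
    server web2 10.0.0.12:80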
To secure this path from the outside world, I told the firewall to only accept traffic from inside our own network.
This lets us test specific servers within the cluster. We did have to add a trap in our PHP app to deal with an oversized session cookie, because we have haproxy manipulating this cookie to keep users on the server they first hit. So when the invalid session cookie is detected, we have the page simply drop the session and reload.
This is working well for our testing purposes.

syslogd modify $msg freebsd

I have a simple question; let me give you some context first.
I am setting up an rsyslog server (CentOS 7) to gather (actually receive) messages from many distributed devices running syslogd (the FreeBSD/pfSense default).
Everything works as expected, but I cannot identify WHO sent the syslog messages, because many of the distributed devices sit on public dynamic IPs.
Then the point is:
Is there any way to modify the syslogd conf file to prepend some characters to, for instance, the MSG field? Any field would do, actually, as that would solve my problem; I would then only have to set up something like
"if $msg contains blabla then HELL YEAH!"
Thank you in advance!
To identify the sender, you have a few options: you can either set up VPN tunnels from each machine, so that they use a static VPN address, or use a DNS service and track them by name. In rsyslog, reverse lookups are enabled by default, so they would just show up under their assigned names.
I don't think you can rewrite messages in rsyslog, but syslog-ng has filters and rewrite rules that would do what you want.
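A sketch of what that could look like in syslog-ng; the tag and the source/destination names are placeholders:

# prepend a fixed per-device tag to the message body
rewrite r_tag {
    set("pfsense01: ${MESSAGE}", value("MESSAGE"));
};
log { source(s_net); rewrite(r_tag); destination(d_logs); };

With a distinct tag per device, the receiving side can then match on it, much like the "if $msg contains blabla" filter mentioned above.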

A node.js server as a hub for other web server projects

I want to have a personal website where I can put my web stuff, say with the domain name something.com. Now, I can run multiple instances of node or whatever else I want, and I just need to make sure they listen on different ports. Then, to access any of those projects, I just go to something.com:portnumber.
But I don't really like how that looks. I'd much rather be able to use something.com/project1 and something.com/project2. Is there a way to make one node server listen on port 80, take those names in the URL, and then somehow forward the request to the appropriate server?
What's more, it would be great if the URL parameters that got forwarded didn't contain the name of the actual project. To illustrate what I mean take this example:
One node server is running and listening on port 1000. It plays songs that you give it in the URL and is accessed like this: something.com:1000/songname/
The other node server is listening on port 2000. It does math or something and is used like this: something.com:2000/add/2/3/
What I want is to be able to use them like the following:
something.com/musicplayer/songname/ and something.com/math/add/2/3/
preferably without having to change the code of the two servers.
Now, I could do something hacky like redirecting something.com/musicplayer/songname/ to something.com:1000/songname/ but I don't want the address bar to show any redirection.
I realize that this seems very specific but I'm pretty sure I'm not the first one that had the idea. I know a lot of people that have their own personal websites where they mess around with stuff but I don't know how exactly something like this is done.
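A minimal sketch of such a hub, using only Node's built-in http module; the project names and ports are the ones from the example above, everything else is illustrative:

const http = require('http');

// map URL prefixes to the backend ports they should be proxied to
const routes = {
  '/musicplayer': 1000,
  '/math': 2000,
};

http.createServer((clientReq, clientRes) => {
  const prefix = Object.keys(routes).find((p) => clientReq.url.startsWith(p));
  if (!prefix) {
    clientRes.writeHead(404);
    clientRes.end('Unknown project\n');
    return;
  }
  // strip the prefix so the backend never sees the project name
  const proxyReq = http.request({
    host: 'localhost',
    port: routes[prefix],
    path: clientReq.url.slice(prefix.length) || '/',
    method: clientReq.method,
    headers: clientReq.headers,
  }, (proxyRes) => {
    clientRes.writeHead(proxyRes.statusCode, proxyRes.headers);
    proxyRes.pipe(clientRes);
  });
  proxyReq.on('error', () => {
    clientRes.writeHead(502);
    clientRes.end('Backend unavailable\n');
  });
  clientReq.pipe(proxyReq);
}).listen(80); // binding port 80 needs root or an equivalent capability

Because this proxies rather than redirects, the address bar keeps showing something.com/musicplayer/songname/ while the backend only ever sees /songname/. Ready-made options such as nginx's proxy_pass or the node-http-proxy package do the same job with less hand-rolled code.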

Setup virtual hosts file to host the source code from remote server

I would really appreciate your support with the inquiry below.
Current Situation:
I have a web app (containing a module to upload documents) on a Linux Apache server "A" that can only be reached over HTTP from the intranet.
Required:
Another Linux Apache server "B" is required to host the same web app, while maintaining the source code on server "A" only. Server "B" can be reached over HTTP from both the internet and the intranet.
Blocking points:
Under the current circumstances we are unable to host the website on server "B" directly (which would seem like the logical solution).
Question:
Is it possible to set up the virtual hosts in the httpd.conf file for such a requirement?
Research:
Most of my findings were posts about deploying a load-sharing/load-balancing solution (not my objective) or setting up a two-way synchronization process between "A" and "B" (a last-resort solution).
Googled strings:
share website between two servers, host website on two servers, virtual host to another server, run single website on multiple servers setup, virtual host for website on another server, host a website on two different servers, setup two linux servers to host the same website
Server Details:
Server A:
Server IP: 192.168.xxx.xxx (accessible through the intranet only)
Hosts the website source code
Apache server
OS: RHEL5
Server B:
Accessible through the intranet and internet
Apache server
OS: Same as A (RHEL5)
Summing up what you've probably found yourself by now: unfortunately, there are two things that are called proxying. The one you are interested in is a reverse proxy, in which B takes requests and forwards them to A. The client never sees that A even exists. There are a few security concerns, depending on what angle of security you look at:
server A only ever sees requests from B, not the original client, so any IP-based restrictions you want should be configured on server B.
The usually mentioned security concern is that a (forward) proxy will ask arbitrary servers for things on behalf of the client, so it masks the client's identity. I don't think you need to worry about this as long as you put ProxyRequests Off to disable forward proxying.
Server A might accidentally reveal its IP, which you might not be comfortable with. When B passes back to the client the answer it received from A, it does not look at the payload. So, if you return HTML documents, they had better contain only relative paths. I think this might be the problem you are having: if your code still contains references to 192.168.x.y, those won't work for the external client. If you are changing paths (i.e. you have something like ProxyPass /somepath http://internal-server/otherpath), things become even more complicated, so try to avoid that. (In general, your backend application would need to know what its publicly visible URIs are; how to do this depends on the application.)
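A sketch of what the corresponding vhost on server B might look like; the ServerName is a placeholder, the target is server A's intranet address as listed above, and mod_proxy plus mod_proxy_http must be loaded:

# reverse-proxy vhost on server B; ProxyRequests Off disables forward proxying
<VirtualHost *:80>
    ServerName webapp.example.com
    ProxyRequests Off
    ProxyPass        / http://192.168.xxx.xxx/
    ProxyPassReverse / http://192.168.xxx.xxx/
</VirtualHost>

ProxyPassReverse rewrites the Location headers of redirects coming back from A, which covers one common way the internal address would otherwise leak.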
