dnsmasq - forwarding local DNS queries

I have two OpenWrt APs with dnsmasq on each. Let's call them DNS1 (main AP on 192.168.10.1) and DNS2 (dumb AP on 192.168.10.2). DNS1 is also the only DHCP server on my local network. I have stubby running on each instance to resolve external DNS requests on port 5453. I have a couple of static DHCP hosts on DNS1 which I sync to DNS2, and they resolve fine. My problem is that I cannot get DNS2 to query DNS1 when it cannot resolve a local (LAN) query. To clarify further: since DNS1 also handles DHCP, a new client (client1) will only get resolved by DNS1. Any client using DNS2 as its DNS server/resolver will not be able to resolve client1 or client1.lan. I thought adding 'server=/lan/192.168.10.1' would do the trick, but no luck. Here is my /etc/config/dhcp and the auto-generated dnsmasq.conf from DNS2:
config dnsmasq
	option leasefile '/tmp/dhcp.leases'
	option localservice '1'
	option quietdhcp '1'
	option cachesize '4096'
	option readethers '1'
	option localise_queries '1'
	option expandhosts '1'
	option noresolv '1'
	option rebind_protection '1'
	option rebind_localhost '1'
	option filterwin2k '1'
	option domain 'lan'
	option domainneeded '1'
	list addnhosts '/adblock/custom'
	list addnhosts '/adblock/dlhosts'
	list addnhosts '/adblock/dlhosts-ipv6'
	option local_ttl '300'
	list server '/lan/192.168.10.1'
	list server '127.0.0.1#5453'

# auto-generated config file from /etc/config/dhcp
conf-file=/etc/dnsmasq.conf
domain-needed
filterwin2k
no-resolv
localise-queries
read-ethers
enable-ubus=dnsmasq
expand-hosts
bind-dynamic
local-service
quiet-dhcp
cache-size=4096
domain=lan
server=/lan/192.168.10.1
server=127.0.0.1#5453
addn-hosts=/tmp/hosts
addn-hosts=/adblock/custom
addn-hosts=/adblock/dlhosts
addn-hosts=/adblock/dlhosts-ipv6
dhcp-leasefile=/tmp/dhcp.leases
local-ttl=300
stop-dns-rebind
rebind-localhost-ok
dhcp-broadcast=tag:needs-broadcast
conf-dir=/tmp/dnsmasq.d
user=dnsmasq
group=dnsmasq
dhcp-ignore-names=tag:dhcp_bogus_hostname
bogus-priv
conf-file=/usr/share/dnsmasq/rfc6761.conf

This is likely dnsmasq's rebind protection kicking in, from stop-dns-rebind. Check your logs; if you see lines like this, then that is your issue:
dnsmasq[3835]: possible DNS-rebind attack detected: hostname.lan
You want to add rebind-domain-ok=lan to your dnsmasq.conf. Your OpenWrt config should look like this:
config dnsmasq
	list rebind_domain 'lan'
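To verify (a quick sketch; client1 is the hostname from the question), restart dnsmasq on DNS2 and query it directly:

/etc/init.d/dnsmasq restart
nslookup client1.lan 192.168.10.2

The query should now be forwarded to DNS1 via the server=/lan/192.168.10.1 entry instead of being dropped by rebind protection.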

Related

rsyslog: collect logs from files outside /var/log

I have various logs written to our mounted NFS share that I need to send to our syslog server (Graylog); they are located outside the /var/log folder.
So I added some extra conf files in /etc/rsyslog.d/.
For this example I have two files with the following config:
atlassian-application-confluence-log.conf
module(load="imfile")
module(load="imklog")
$MaxMessageSize 50k
global(workDirectory="/atlassian/test/confluence/logs")
# This is the main application log file
input(type="imfile"
File="/atlassian/test/confluence/logs/atlassian-confluence.log"
Tag="atlassian"
PersistStateInterval="200"
)
# This file contains entries related to the search index.
input(type="imfile"
File="/atlassian/test/confluence/logs/atlassian-confluence-index.log"
Tag="atlassian"
PersistStateInterval="200"
)
# Send to Graylog
action(type="omfwd" target="log-server-company.com" port="5140")
# if you want to keep a local copy of the logs.
action(type="omfile" File="/var/log/rsyslog.log" template="RSYSLOG_TraditionalFileFormat")
atlassian-application-jira-log.conf
module(load="imfile")
module(load="imklog")
$MaxMessageSize 50k
global(workDirectory="/atlassian/test/jira/log")
# Contains logging for most of Jira, including logs that aren’t specifically written elsewhere
input(type="imfile"
File="/atlassian/test/jira/log/atlassian-jira.log"
Tag="atlassian"
PersistStateInterval="200"
)
# Send to Graylog
action(type="omfwd" target="log-server-company.com" port="5140")
# if you want to keep a local copy of the logs.
action(type="omfile" File="/var/log/rsyslog.log" template="RSYSLOG_TraditionalFileFormat")
So, to my problem.
When I check the rsyslogd configuration with the following command:
rsyslogd -N1 -f /etc/rsyslog.d/atlassian-application-confluence-log.conf
it says it is valid.
When I restart the rsyslog service, I get the following errors:
module 'imfile' already in this config, cannot be added [v8.2102.0-10.el8 try https://www.rsyslog.com/e/2221 ]
module 'imklog' already in this config, cannot be added [v8.2102.0-10.el8 try https://www.rsyslog.com/e/2221 ]
error during parsing file /etc/rsyslog.d/atlassian-tomcat-confluence-log.conf, on or before line 6: parameter 'workdirectory' specified more than once - one instance is ignored. Fix config [v8.2102.0-10.el8 try https://www.rsyslog.com/e/2207]>
error during parsing file /etc/rsyslog.d/atlassian-tomcat-confluence-log.conf, on or before line 6: parameter 'workDirectory' not known -- typo in config file? [v8.2102.0-10.el8 try https://www.rsyslog.com/e/2207]
module 'imfile' already in this config, cannot be added [v8.2102.0-10.el8 try https://www.rsyslog.com/e/2221 ]
module 'imklog' already in this config, cannot be added [v8.2102.0-10.el8 try https://www.rsyslog.com/e/2221 ]
[origin software="rsyslogd" swVersion="8.2102.0-10.el8" x-pid="379288" x-info="https://www.rsyslog.com"] start
imjournal: journal files changed, reloading... [v8.2102.0-10.el8 try https://www.rsyslog.com/e/0 ]
How can I get rid of the warnings?
I have already tried to put the two modules in /etc/rsyslog.conf.
I get the following errors from that config:
parameter 'PersistStateInterval' not known
parameter 'Tag' not known
parameter 'File' not known
If there are multiple configuration files, they are processed in ascending sort order of the file names (numerically/alphabetically); see $IncludeConfig.
Therefore you don't have to include any configuration parameters (modules, work directories, rulesets, etc.) multiple times. You can include them once in the config that is loaded first.
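For example (a sketch; the 00- prefix is just one way to make a file sort first, and the work directory path is an assumption), load the shared modules and the work directory once, and keep only inputs and actions in the per-application files:

# /etc/rsyslog.d/00-modules.conf -- sorts first, holds everything shared
module(load="imfile")
module(load="imklog")
global(workDirectory="/var/lib/rsyslog")   # assumption: one shared state directory

# /etc/rsyslog.d/atlassian-application-confluence-log.conf -- inputs/actions only
input(type="imfile"
      File="/atlassian/test/confluence/logs/atlassian-confluence.log"
      Tag="atlassian"
      PersistStateInterval="200"
)
action(type="omfwd" target="log-server-company.com" port="5140")

With the duplicate module(load=...) and global(workDirectory=...) lines removed from the per-application files, the "already in this config" and "specified more than once" messages go away.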

How do I set up oVirt Engine in Cockpit? I get an error with "deployment failed"

I can't seem to set up an oVirt engine.
In my DNS (Pi-hole, so basically dnsmasq) I set a local DNS record for the engine [192.168.0.235 -> ovirt-engine.MYDUMMY.DOMAIN], as well as for the node [192.168.0.97 -> ovirt-1.MYDUMMY.DOMAIN; this is the IP I have set in the router's DHCP for the machine running oVirt].
During the setup wizard in Cockpit, I set the DNS server to the local IP of my Pi-hole instance (192.168.0.66), and the network to static (192.168.0.235). Under "Advanced", I also set the "HOST FQDN" to "ovirt-1.MYDUMMY.DOMAIN".
The FQDN validates, but if I click next and try preparing the VM, I get "Deployment failed". In the logs I get some permission errors:
cockpit-bridge
/var/lib/ovirt-hosted-engine-setup/cockpit/ansibleVarFilepQBj37.var.1: couldn't remove temp file: Permission denied
COCKPIT_DOMAIN cockpit-bridge
PRIORITY 4
SYSLOG_IDENTIFIER cockpit-bridge
_AUDIT_LOGINUID 1000
_AUDIT_SESSION 15
_BOOT_ID 7ff6008112ed4dec863cd7daa5c7a49d
_CAP_EFFECTIVE 0
_CMDLINE cockpit-bridge
_COMM cockpit-bridge
_EXE /usr/bin/cockpit-bridge
_GID 1001
_HOSTNAME ovirt-1.MYDUMMY.DOMAIN
_MACHINE_ID 5383b28b838b48bfb83e51082ce922be
_PID 72856
_SELINUX_CONTEXT unconfined_u:unconfined_r:unconfined_t:s0-s0:c0.c1023
_SOURCE_REALTIME_TIMESTAMP 1604910992144612
_SYSTEMD_CGROUP /user.slice/user-1000.slice/session-15.scope
_SYSTEMD_INVOCATION_ID 6ab1b13e1e754c4385527d093d72880c
_SYSTEMD_OWNER_UID 1000
_SYSTEMD_SESSION 15
_SYSTEMD_SLICE user-1000.slice
_SYSTEMD_UNIT session-15.scope
_SYSTEMD_USER_SLICE -.slice
_TRANSPORT journal
_UID 1000
__CURSOR s=3f72f173a7184617893f8997f7e868c1;i=f2c;b=7ff6008112ed4dec863cd7daa5c7a49d;m=39c1504502;t=5b3a875953937;x=c0f9c1bd3c843a54
__MONOTONIC_TIMESTAMP 248056399106
__REALTIME_TIMESTAMP 1604910992144695
Any idea what I'm doing wrong, and how to fix it?
Fixing permissions for the directory /var/lib/ovirt-hosted-engine-setup/cockpit/ worked for me!
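A minimal sketch of what that can look like (the correct owner and mode depend on your installation, so treat these values as assumptions and check what your setup expects):

chown -R 1000:1000 /var/lib/ovirt-hosted-engine-setup/cockpit/   # assumption: deploying user is UID 1000, as shown in the journal
chmod -R u+rwX /var/lib/ovirt-hosted-engine-setup/cockpit/
restorecon -Rv /var/lib/ovirt-hosted-engine-setup/cockpit/       # reset SELinux labels; harmless if already correct

The journal entry above shows the failing cockpit-bridge process running as _UID 1000, which is why the temp file under that directory couldn't be removed.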

RabbitMQ Over SSL

I'm trying to set up RabbitMQ to work over SSL.
I have changed the configuration file (/etc/rabbitmq/rabbitmq.config) as described in https://www.rabbitmq.com/ssl.html to:
# Defaults to rabbit. This can be useful if you want to run more than one node
# per machine - RABBITMQ_NODENAME should be unique per erlang-node-and-machine
# combination. See the clustering on a single machine guide for details:
# http://www.rabbitmq.com/clustering.html#single-machine
#NODENAME=rabbit
# By default RabbitMQ will bind to all interfaces, on IPv4 and IPv6 if
# available. Set this if you only want to bind to one network interface or#
# address family.
#NODE_IP_ADDRESS=127.0.0.1
# Defaults to 5672.
#NODE_PORT=5672
listeners.ssl.default = 5671
ssl_options.cacertfile = /home/myuser/rootca.crt
ssl_options.certfile = /home/myuser/mydomain.com.crt
ssl_options.keyfile = /home/myuser/mydomain.com.key
ssl_options.verify = verify_peer
ssl_options.password = 1234
ssl_options.fail_if_no_peer_cert = false
I keep getting the following errors:
sudo rabbitmq-server
/usr/lib/rabbitmq/bin/rabbitmq-server: 15: /etc/rabbitmq/rabbitmq-env.conf: listeners.ssl.default: not found
If I remove the above line, I get the following error:
sudo rabbitmq-server
/usr/lib/rabbitmq/bin/rabbitmq-server: 17: /etc/rabbitmq/rabbitmq-env.conf: ssl_options.cacertfile: not found
It is worth mentioning that without the above SSL configuration, everything works just fine.
Could you please assist?
Thanks :)
It's very important when you request assistance with software that you always state what version of the software you're using. In the case of RabbitMQ, providing the Erlang version and operating system used is also necessary.
In your case, you have (commented-out) environment configuration in /etc/rabbitmq/rabbitmq-env.conf mixed with RabbitMQ configuration, which is not correct. The following lines must be removed from rabbitmq-env.conf and put into the /etc/rabbitmq/rabbitmq.conf file:
listeners.ssl.default = 5671
ssl_options.cacertfile = /home/myuser/rootca.crt
ssl_options.certfile = /home/myuser/mydomain.com.crt
ssl_options.keyfile = /home/myuser/mydomain.com.key
ssl_options.verify = verify_peer
ssl_options.password = 1234
ssl_options.fail_if_no_peer_cert = false
Please also see the documentation
NOTE: the RabbitMQ team monitors the rabbitmq-users mailing list and only sometimes answers questions on StackOverflow.
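Once the lines are moved, a quick way to check the TLS listener (a sketch; the paths and port are the ones from the question, and the restart command assumes a systemd host):

sudo systemctl restart rabbitmq-server
openssl s_client -connect localhost:5671 -CAfile /home/myuser/rootca.crt

If the handshake completes and the server certificate is printed, the listener is configured correctly.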
In rabbitmq.conf, change the following to listen on 5673 instead:
listeners.ssl.default = 5673

Logstash output to https

I have Logstash running on Windows 8.1.
The output is set to an HTTPS address, which is hosted on Windows Server 2012 R2.
When the request gets to the server, it gets rejected. This is what I get from Logstash:
Unhandled exception {:request=><FTW::Request(#13468)
  #headers=FTW::HTTP::Headers <{"host"=>"taasukaesb.ewavetest.co.il", "connection"=>"keep-alive", "content-type"=>"application/json", "content-length"=>517}>
  #method="POST"
  #body="{\"ActivityTime\":\"2015-08-25T08:48:11.691Z\",\"LogType\":\"Info\",\"ServiceName\":\"Portal.Nlog2Esbtest\",\"Context\":\"Portal\",\"UserId\":\"8664e362-f63d-4d10-8a23-3b86b9f22cc7\",\"LogSubType\":\"Info\",\"TransactionCode\":\"5d14a403-a566-49ab-b8bc-a40c999eebe5\",\"Servers\":\"SlavaNili\",\"ServiceId\":\"8664e362-f63d-4d10-8a23-3b86b9f22cc7\",\"RequestIPs\":\"172.19.5.8\",\"EntityClass\":\" \",\"Methods\":\"Button2_Click\",\"SourceFilePath\":\" \",\"SourceLineNum\":34,\"Title\":\"TEST INFO\",\"Details\":\"this is a test to tell that something happend, but its cool..\"}"
  #logger=#<Cabin::Channel:0xd0816b0 #metrics=#<Cabin::Metrics:0x192e6ba1 #metrics_lock=#<Mutex:0x306a119a>, #metrics={}, #channel=#<Cabin::Channel:0xd0816b0 ...>>, #subscriber_lock=#<Mutex:0x7fd74c77>, #level=:info, #subscribers={}, #data={}>
  #request_uri="/TaasukaService.svc/TransactionLog/CreateJson?Context=Portal&UserToken=a441b37f-3403-43fd-8f58-d1da3024133a"
  #version=1.1 #port=443 #protocol="https" >,
 :response=>nil,
 :exception=>#<OpenSSL::SSL::SSLError: An existing connection was forcibly closed by the remote host>,
 :stacktrace=>[
  "org/jruby/ext/openssl/SSLSocket.java:195:in `connect_nonblock'",
  "C:/logstash/install/vendor/bundle/jruby/1.9/gems/ftw-0.0.44/lib/ftw/connection.rb:413:in `do_secure'",
  "C:/logstash/install/vendor/bundle/jruby/1.9/gems/ftw-0.0.44/lib/ftw/connection.rb:393:in `secure'",
  "C:/logstash/install/vendor/bundle/jruby/1.9/gems/ftw-0.0.44/lib/ftw/agent.rb:449:in `connect'",
  "C:/logstash/install/vendor/bundle/jruby/1.9/gems/ftw-0.0.44/lib/ftw/agent.rb:283:in `execute'",
  "C:/logstash/install/vendor/bundle/jruby/1.9/gems/logstash-output-http-1.0.0/lib/logstash/outputs/http.rb:126:in `receive'",
  "C:/logstash/install/vendor/bundle/jruby/1.9/gems/logstash-core-1.5.3-java/lib/logstash/outputs/base.rb:88:in `handle'",
  "(eval):83:in `output_func'",
  "C:/logstash/install/vendor/bundle/jruby/1.9/gems/logstash-core-1.5.3-java/lib/logstash/pipeline.rb:244:in `outputworker'",
  "C:/logstash/install/vendor/bundle/jruby/1.9/gems/logstash-core-1.5.3-java/lib/logstash/pipeline.rb:166:in `start_outputs'"
 ], :level=>:warn}
Notice the SSL error.
This service is fine and accessible, and works with no problem from this computer, but this happens only when setting the Logstash output to HTTPS.
This is a follow-up to this post:
https://discuss.elastic.co/t/logstash-output-to-https/27946/3
which no one at Elastic answered.
Thanks for any help.
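One way to narrow this down (a diagnostic sketch, not a confirmed fix): the exception is raised in connect_nonblock, i.e. during the TLS handshake itself, so test the handshake to the same endpoint from the same machine and see what gets negotiated:

openssl s_client -connect taasukaesb.ewavetest.co.il:443

If the handshake fails, or only succeeds with specific protocol versions, the server (Windows Server 2012 R2) and the JRuby/FTW client used by this Logstash version likely disagree on the TLS version or cipher suites.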

Improper behavior of Host Alias

I have two host aliases in my directory structure that fail to register properly with G-WAN. My folder structure is as follows:
/srv/gwan_linux64-bit/192.168.3.101_80/$dg.lcl
/srv/gwan_linux64-bit/192.168.3.101_80/$myapp
/srv/gwan_linux64-bit/192.168.3.101_80/#192.168.3.101
/srv/gwan_linux64-bit/192.168.3.101_80/#192.168.3.101:gwan.klickitat.lcl
/srv/gwan_linux64-bit/192.168.3.101_80/#192.168.3.101:test.lcl
When starting g-wan, I receive the error:
loading.........
* unresolved aliases: 2
From the sample server report in the default G-WAN configuration:
Listeners 5 host(s): 192.168.3.101_80
Virtual: $dg.lcl
Root: #test.lcl
Root: #gwan.klickitat.lcl
Virtual: $myapp
Root: #192.168.3.101
As you can see, G-WAN identifies the two aliases as additional root hosts. G-WAN only allows a single root host, so the two aliases fail to function in the browser, returning a 404 error. Each of the hosts responds properly to ping, so they are accounted for by the DNS. The virtual hosts and root host function as expected.
Thoughts?
Additional research:
I have corrected my posting error and simplified the presentation. I hope that you will find this to be concise.
My hosts file is as follows for all tests:
127.0.0.1 localhost.klickitat.lcl localhost
192.168.3.101 gwan.klickitat.lcl test.lcl
I implemented an example identical to your test, except that I used a different IP address to match my local subnet and eliminated the virtual hosts, which do not affect my results.
The only changes to the default G-WAN configuration are as follows:
Changed the listener from 0.0.0.0_8080 to 192.168.3.101_8080
Changed the root host IP from #0.0.0.0 to #192.168.3.101
Added two host aliases, #192.168.3.101:gwan.klickitat.lcl and #192.168.3.101:test.lcl
This is my folder structure:
/srv/gwan_linux64-bit/192.168.3.101_8080
/srv/gwan_linux64-bit/192.168.3.101_8080/#192.168.3.101
/srv/gwan_linux64-bit/192.168.3.101_8080/#192.168.3.101:gwan.klickitat.lcl
/srv/gwan_linux64-bit/192.168.3.101_8080/#192.168.3.101:test.lcl
This is my result as reported by G-WAN's included server report application:
3 host(s): 192.168.3.101_8080
Root: #test.lcl
Root: #gwan.klickitat.lcl
Root: #192.168.3.101
G-WAN does not recognize the aliases and I cannot access the aliased URLs. My result is inconsistent with yours.
The rest of this post is intended only to illustrate that aliases are reported by G-WAN in alternate configurations in my environment, but with some inconsistencies in the expected outcome. I simply list the folder structure and my result.
Alternate Config 1
/srv/gwan_linux64-bit/0.0.0.0_8080
/srv/gwan_linux64-bit/0.0.0.0_8080/#localhost
/srv/gwan_linux64-bit/0.0.0.0_8080/#localhost:gwan.klickitat.lcl
/srv/gwan_linux64-bit/0.0.0.0_8080/#localhost:test.lcl
Result:
3 host(s): 0.0.0.0_8080
Root:  #localhost
Alias:  0.0.0.0:#gwan.klickitat.lcl
Alias:  0.0.0.0:#test.lcl
Alternate Config 2
/srv/gwan_linux64-bit/192.168.3.101_8080
/srv/gwan_linux64-bit/192.168.3.101_8080/#localhost
/srv/gwan_linux64-bit/192.168.3.101_8080/#localhost:gwan.klickitat.lcl
/srv/gwan_linux64-bit/192.168.3.101_8080/#localhost:test.lcl
Result:
3 host(s): 192.168.3.101_8080
Root:  #localhost
Alias:  192.168.3.101:#gwan.klickitat.lcl
Alias:  192.168.3.101:#test.lcl
While the alternate configurations function, note that the alias naming varies from the explicit naming in the folder structure. It appears that the listeners are being set up properly, but that there is some issue in how the host aliases are being generated. I'm happy to test further if you so desire.
Using G-WAN v4.18 I used the following structure without problem:
5 host(s): 192.168.2.8:8080
Root: #192.168.2.8
Alias: 192.168.2.8:#gwan.ch
Virtual: $trustleap.com
Alias: 192.168.2.8:#gwan.com
Virtual: $secure.gwan.ch
The hosts were defined on a LAN with /etc/hosts, which is atomic (changes are reflected immediately).
As expected, they are all reachable from the browser and display the expected documents.
Note that, unlike in your report, there is no such thing as Root: #gwan.ch (the alias is reported as expected: Alias: 192.168.2.8:#gwan.com).
I suggest that (1) you make sure you are using v4.18 (today's latest release) and (2) test your configuration with /etc/hosts so you don't have possible DNS issues.
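For reference, a minimal reproduction of that test (a sketch; the hostnames and IP are the ones from this answer, and the directory naming follows the IP_port and #root:alias pattern shown earlier in the thread):

# /etc/hosts on the test machine
192.168.2.8  gwan.ch gwan.com

# alias directories under the listener, quoted because of the # and : characters
mkdir -p '/srv/gwan_linux64-bit/192.168.2.8_8080/#192.168.2.8'
mkdir '/srv/gwan_linux64-bit/192.168.2.8_8080/#192.168.2.8:gwan.ch'
mkdir '/srv/gwan_linux64-bit/192.168.2.8_8080/#192.168.2.8:gwan.com'

After restarting G-WAN, the server report should list #192.168.2.8 as Root and the two others as Alias entries, as in the output above.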
