DRBD Parse error: got 'incon-degr-cmd' (TK 282) on CentOS - linux

Setup
I currently have two NFS servers. The plan is for them to mirror their data to each other in real time using DRBD and to monitor each other using Heartbeat.
This is my current /etc/drbd.d/t0.res config.
resource t0 {
  protocol C;
  incon-degr-cmd "halt -f";
  startup {
    degr-wfc-timeout 120; # 2 minutes.
  }
  disk {
    on-io-error detach;
  }
  net {
  }
  syncer {
    rate 10M;
    group 1;
    al-extents 257;
  }
  on node1 {
    device /dev/drbd0;
    disk /dev/loop0;
    address 172.16.2.101:7788;
    meta-disk internal;
  }
  on node2 {
    device /dev/drbd0;
    disk /dev/loop0;
    address 172.16.2.102:7788;
    meta-disk internal;
  }
}
Error
When I try to use a drbdadm command I get the following error:
drbd.d/contentserver.res:4: Parse error: 'protocol | on | disk | net | syncer | startup | handlers | ignore-on | stacked-on-top-of' expected,
but got 'incon-degr-cmd' (TK 282)

I believe your resource file should read like this. The old incon-degr-cmd option is gone in DRBD 8; its replacement, pri-on-incon-degr, is a handler, so it belongs in a handlers section (one of the keywords the parse error lists as valid):
resource t0 {
  protocol C;
  handlers {
    pri-on-incon-degr "halt -f";
  }
  startup {
    degr-wfc-timeout 120; # 2 minutes.
  }
  disk {
    on-io-error detach;
  }
  net {
  }
  syncer {
    rate 10M;
    group 1;
    al-extents 257;
  }
  on node1 {
    device /dev/drbd0;
    disk /dev/loop0;
    address 172.16.2.101:7788;
    meta-disk internal;
  }
  on node2 {
    device /dev/drbd0;
    disk /dev/loop0;
    address 172.16.2.102:7788;
    meta-disk internal;
  }
}
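As a quick sanity check once the file parses, dumping the resource should print it back without complaints (t0 being the resource name from the config above):
drbdadm dump t0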

Related

If "keyword" in message not working for logstash

I am receiving logs from 5 different sources on a single port. It is actually a collection of files being forwarded through syslog from one server in real time. That server stores logs from 4 VPN servers and one DNS server. The server admin started sending all 5 types of files on a single port, although I had asked for something different. Anyway, I am now trying to make this work as-is.
Below are the different types of samples-
------------------
<13>Sep 30 22:03:28 xx2.20.43.100 370 <134>1 2021-09-30T22:03:28+05:30 canopus.domain1.com1 PulseSecure: - - - id=firewall time="2021-09-30 22:03:28" pri=6 fw=xx2.20.43.100 vpn=ive user=System realm="google_auth" roles="" proto= src=1xx.99.110.19 dst= dstname= type=vpn op= arg="" result= sent= rcvd= agent="" duration= msg="AUT23278: User Limit realm restrictions successfully passed for /google_auth "
------------------
<134>Sep 30 22:41:43 xx2.20.43.101 1 2021-09-30T22:41:43+05:30 canopus.domain1.com2 PulseSecure: - - - id=firewall time="2021-09-30 22:41:43" pri=6 fw=xx2.20.43.101 vpn=ive user=user22 realm="google_auth" roles="Domain_check_role" proto= src=1xx.200.27.62 dst= dstname= type=vpn op= arg="" result= sent= rcvd= agent="" duration= msg="NWC24328: Transport mode switched over to SSL for user with NCIP xx2.20.210.252 "
------------------
<134>Sep 30 22:36:59 vpn-dns-1 named[130237]: 30-Sep-2021 22:36:59.172 queries: info: client #0x7f8e0f5cab50 xx2.30.16.147#63335 (ind.event.freefiremobile.com): query: ind.event.freefiremobile.com IN A + (xx2.31.0.171)
------------------
<13>Sep 30 22:40:31 xx2.20.43.101 394 <134>1 2021-09-30T22:40:31+05:30 canopus.domain1.com2 PulseSecure: - - - id=firewall time="2021-09-30 22:40:31" pri=6 fw=xx2.20.43.101 vpn=ive user=user3 realm="google_auth" roles="Domain_check_role" proto= src=1xx.168.77.166 dst= dstname= type=vpn op= arg="" result= sent= rcvd= agent="" duration= msg="NWC23508: Key Exchange number 1 occurred for user with NCIP xx2.20.214.109 "
Below is my config file (input section):
input {
  syslog {
    port => 1301
    ecs_compatibility => disabled
    tags => ["vpn"]
  }
}
I tried to apply a condition first to catch the VPN logs (1st sample log line) and pass them to dissect:
filter {
  if "vpn" in [tags] {
    #if ([message] =~ /vpn=ive/) {
    if "vpn=ive" in [message] {
      dissect {
        mapping => { "message" => "%{reserved} id=firewall %{message1}" }
        # using id=firewall to get the KV pairs into message1
      }
    }
  }
  else { drop {} }
  # \/ end of filter brace
}
But when I run with this config file, I get a mixture of all 5 types of logs in Kibana, and I don't see any dissect failures either. I remember this worked on another server for a different type of log, but it is not working here.
Another question: if I have to process all 5 types of logs in one config file, would the approach below be a good one?
if "VPN-logline" in [message] { use KV plugin and add tag of "vpn" }
else if "DNS-logline" in [message] { use JSON plugin and tag of "dns"}
else if "something-irrelevant" in [message] { drop {} }
Or can it be done in the input section of the config?
So, the problem was that I was assigning every log line the tag of vpn. I was doing so because I had to merge this config into a larger config file that carries many more tags. Anyway, I have now decided to keep this config file separate.
input {
  syslog {
    port => 1301
    ecs_compatibility => disabled
  }
}
filter {
  if "vpn=ive" in [message] {
    dissect {
      mapping => { "message" => "%{reserved} id=firewall %{message1}" }
    }
  }
  else { drop {} }
}
output {
  elasticsearch {
    hosts => "localhost"
    index => "vpn1oct"
    user => "elastic"
    password => "xxxxxxxxxx"
  }
  stdout { }
}
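If you later fold all 5 log types back into one pipeline, the conditional routing sketched in the question should work. A rough sketch (the match strings "vpn=ive" and "named[" are only taken from the sample lines above, and the kvpairs field name is made up; both may need adjusting):
filter {
  if "vpn=ive" in [message] {
    # PulseSecure VPN lines: split off the key=value part, then parse it
    dissect {
      mapping => { "message" => "%{reserved} id=firewall %{kvpairs}" }
    }
    kv {
      source  => "kvpairs"
      add_tag => ["vpn"]
    }
  }
  else if "named[" in [message] {
    # BIND query-log lines from the DNS server
    mutate { add_tag => ["dns"] }
  }
  else {
    drop { }
  }
}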

Bind9: limit query from subnet

I have a Linux machine with a WiFi hotspot assigning IPs in the 172.30.108.0/24 network.
I have BIND 9 installed.
My named.conf only contains include "/etc/bind/named.conf.local"; everything else is disabled.
My named.conf.local has:
options {
    listen-on port 53 { 0.0.0.0; };
    listen-on-v6 port 53 { ::1; };
    directory "/var/cache/bind";
    allow-query { localhost; };
    recursion yes;
    querylog yes;
};
acl clients {
    172.30.108.0/24;
};
view "internal-view" {
    match-clients { internal; };
    allow-query { internal; };
    allow-query-cache { internal; };
    zone "limit.com." {
        type master;
        file "/etc/bind/db.limit.com";
    };
    # Mapping: Everything else to 127.0.0.1
    zone "." {
        type master;
        file "/etc/bind/db.mapping";
    };
};
view "external-view" {
    match-clients { any; };
    allow-query { any; };
    allow-recursion { any; };
    allow-query-cache { any; };
    zone "wiincon.de." {
        type master;
        file "/etc/bind/db.limit.com";
    };
    include "/etc/bind/named.conf.default-zones";
};
My db.limit.com:
; BIND reverse data file for broadcast zone
;
$TTL    180
#       IN      SOA     localhost. root.localhost. (
                              5         ; Serial
                         604800         ; Refresh
                          86400         ; Retry
                        2419200         ; Expire
                         604800 )       ; Negative Cache TTL
;
#       IN      NS      localhost.
#       IN      A       192.168.5.5
www     IN      A       192.168.5.5
and finally my db.mapping:
; BIND reverse data file for broadcast zone
;
$TTL    3600
#       IN      SOA     localhost. root.localhost. (
                              4         ; Serial
                         604800         ; Refresh
                          86400         ; Retry
                        2419200         ; Expire
                         604800 )       ; Negative Cache TTL
;
#       IN      NS      localhost.
*       IN      A       127.0.0.1
My problem now: clients in 172.30.108.0/24 can query anything BUT www.limit.com.
What I actually want: clients from 172.30.108.0/24 should be able to resolve limit.com and www.limit.com, and everything else should be answered with 127.0.0.1.
When doing an nslookup I get:
Non-authoritative answer:
*** can't find limit.com: no answer
I'm sure I'm missing something very obvious here. Any help is highly appreciated.
Found it. The problem was that the system itself had 8.8.8.8 as its name server, hence bypassing everything local.
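In other words, the machine's own resolver must point at the local BIND instance instead of 8.8.8.8. Assuming the resolver is configured via /etc/resolv.conf (and not rewritten by a DHCP client or NetworkManager), that means something like:
# /etc/resolv.conf
nameserver 127.0.0.1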

Error while submitting a spark job using spark-jobserver

I occasionally face the following error while submitting a job. The error goes away if I remove the rootdir of filedao, datadao and sqldao, but that means I have to restart the job server and re-upload my jar.
{
  "status": "ERROR",
  "result": {
    "message": "Ask timed out on [Actor[akka://JobServer/user/context-supervisor/1995aeba-com.spmsoftware.distributed.job.TestJob#-1370794810]] after [10000 ms]. Sender[null] sent message of type \"spark.jobserver.JobManagerActor$StartJob\".",
    "errorClass": "akka.pattern.AskTimeoutException",
    "stack": ["akka.pattern.PromiseActorRef$$anonfun$1.apply$mcV$sp(AskSupport.scala:604)", "akka.actor.Scheduler$$anon$4.run(Scheduler.scala:126)", "scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:601)", "scala.concurrent.BatchingExecutor$class.execute(BatchingExecutor.scala:109)", "scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:599)", "akka.actor.LightArrayRevolverScheduler$TaskHolder.executeTask(LightArrayRevolverScheduler.scala:331)", "akka.actor.LightArrayRevolverScheduler$$anon$4.executeBucket$1(LightArrayRevolverScheduler.scala:282)", "akka.actor.LightArrayRevolverScheduler$$anon$4.nextTick(LightArrayRevolverScheduler.scala:286)", "akka.actor.LightArrayRevolverScheduler$$anon$4.run(LightArrayRevolverScheduler.scala:238)", "java.lang.Thread.run(Thread.java:745)"]
  }
}
My config file is as follows:
# Template for a Spark Job Server configuration file
# When deployed these settings are loaded when job server starts
#
# Spark Cluster / Job Server configuration
spark {
  # spark.master will be passed to each job's JobContext
  master = <spark_master>
  # Default # of CPUs for jobs to use for Spark standalone cluster
  job-number-cpus = 4
  jobserver {
    port = 8090
    context-per-jvm = false
    context-creation-timeout = 100 s
    # Note: JobFileDAO is deprecated from v0.7.0 because of issues in
    # production and will be removed in future, now defaults to H2 file.
    jobdao = spark.jobserver.io.JobSqlDAO
    filedao {
      rootdir = /tmp/spark-jobserver/filedao/data
    }
    datadao {
      rootdir = /tmp/spark-jobserver/upload
    }
    sqldao {
      slick-driver = slick.driver.H2Driver
      jdbc-driver = org.h2.Driver
      rootdir = /tmp/spark-jobserver/sqldao/data
      jdbc {
        url = "jdbc:h2:file:/tmp/spark-jobserver/sqldao/data/h2-db"
        user = ""
        password = ""
      }
      dbcp {
        enabled = false
        maxactive = 20
        maxidle = 10
        initialsize = 10
      }
    }
    result-chunk-size = 1m
    short-timeout = 60 s
  }
  context-settings {
    num-cpu-cores = 2        # Number of cores to allocate. Required.
    memory-per-node = 512m   # Executor memory per node, -Xmx style eg 512m, #1G, etc.
  }
}
akka {
  remote.netty.tcp {
    # This controls the maximum message size, including job results, that can be sent
    # maximum-frame-size = 200 MiB
  }
}
# check the reference.conf in spray-can/src/main/resources for all defined settings
spray.can.server.parsing.max-content-length = 250m
I am using the spark-2.0-preview version.
I have faced the same error before and it was related to the timeout. Since this is a synchronous request (sync=true), you must also provide the timeout (in seconds), a value that reflects how long your request takes to process.
This is an example of how the request should look:
curl -k --basic -d '' 'http://localhost:5050/jobs?appName=app&classPath=Main&context=test-context&sync=true&timeout=40'
If your request needs more than 40 seconds, you may also need to modify the application.conf located at
spark-jobserver-master/job-server/src/main/resources/application.conf
and, in the spray.can.server section, modify:
idle-timeout = 210 s
request-timeout = 200 s
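In application.conf those two settings sit inside the spray.can.server block, so the change would look roughly like this:
spray.can.server {
  idle-timeout = 210 s
  request-timeout = 200 s
}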

Given an array of hostnames, how can I generate a set of files based on those hostnames in puppet

I am not sure there is even a way to do this in Puppet, but here is what I am trying to do.
Given this skeletal puppet class definition ...
class make_files (
  $rabbit_servers = ['rabbit-1','rabbit-2'],
  $mongo_servers  = ['mongo-1','mongo-2'],
) {
  ...
}
... generate the files ...
# pwd
/root/conf/hosts
# more rabbit-*
::::::::::::::
rabbit-1.cfg
::::::::::::::
define host {
    use         linux-server
    host_name   rabbit-1
    alias       Rabbit MQ host
    hostgroups  rabbit_hosts
    address     10.29.103.33
}
::::::::::::::
rabbit-2.cfg
::::::::::::::
define host {
    use         linux-server
    host_name   rabbit-2
    alias       Rabbit MQ host
    hostgroups  rabbit_hosts
    address     10.29.103.34
}
# more mongo-*
::::::::::::::
mongo-1.cfg
::::::::::::::
define host {
    use         linux-server
    host_name   mongo-1
    alias       Mongo DB host
    hostgroups  mongo_hosts
    address     10.29.103.31
}
::::::::::::::
mongo-2.cfg
::::::::::::::
define host {
    use         linux-server
    host_name   mongo-2
    alias       Mongo DB host
    hostgroups  mongo_hosts
    address     10.29.103.32
}
Where each IP address is the address of the corresponding host.
Any help is much appreciated.
If the names belong to nodes that are managed by Puppet, you can fetch the addresses from PuppetDB directly using the puppetdbquery module
$ipaddress = query_facts("hostname=$host_name", ['ipaddress'])['ipaddress']
The code is untested, I'm not sure whether the invocation is correct.
I have found a solution ...
class make_files (
  $rabbit_servers = ['rabbit-1:10.29.103.33','rabbit-2:10.29.103.34'],
  $mongo_servers  = ['ost-mongo-el7-001:10.29.103.31'],
) {
  define my_file ($content, $hostgroup) {
    $tuple     = split($name, ':')
    $host_name = $tuple[0]
    $file_name = "/tmp/$host_name.cfg"
    $ipaddress = $tuple[1]
    $config    = "define host {
        use         linux-server
        host_name   $host_name
        alias       $host_name
        hostgroups  $hostgroup
        address     $ipaddress
}"
    file { $file_name:
      ensure  => file,
      content => $config,
    }
  }
  $rabbit_content = join($rabbit_servers, ',')
  my_file { $rabbit_servers: content => $rabbit_content, hostgroup => 'rabbit_hosts' }
  $mongo_content = join($mongo_servers, ',')
  my_file { $mongo_servers: content => $mongo_content, hostgroup => 'mongo_hosts' }
}
... and I found that I needed to change the content of the files slightly.
This is probably not the best answer but it seems to work. I am open to suggested improvements.
Thanks
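As a usage note, a resource-like declaration of the class above, overriding the default host:IP lists, might look like the following (the hostnames and addresses are just placeholders taken from the question):
class { 'make_files':
  rabbit_servers => ['rabbit-1:10.29.103.33', 'rabbit-2:10.29.103.34'],
  mongo_servers  => ['mongo-1:10.29.103.31', 'mongo-2:10.29.103.32'],
}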

BIND9, nsupdate and damn DDNS

I've scoured through so many HOWTO pages on DDNS to try and fix this... I'm at a loss.
WorkstationX = CentOS 6.2 x64
ServerX = Ubuntu 12.04 LTS x64
I don't understand why it's not working... I'm literally out of ideas. I have regenerated and reconfigured everything several times.
I've made sure:
NTPD is running on both hosts, and I have verified NTP is working
The TZ is correct on both nodes (the hardware clock is UTC)
I've followed these guides:
http://linux.yyz.us/nsupdate/
http://agiletesting.blogspot.com.au/2012/03/dynamic-dns-updates-with-nsupdate-and.html
http://www.cheshirekow.com/wordpress/?p=457
https://www.erianna.com/nsupdate-dynamic-dns-updates-with-bind9
http://consultancy.edvoncken.net/index.php/HOWTO_Manage_Dynamic_DNS_with_nsupdate
http://blog.philippklaus.de/2013/01/updating-dns-entries-with-nsupdate-or-alternative-implementations-your-own-ddns/
Some of them vary in how the key is generated, but the rest is the same... and still, when I try nsupdate, even on the server where dnssec-keygen was run (and where BIND lives), I get the same log entries:
Aug 14 11:20:38 vps named[31247]: 14-Aug-2013 11:20:38.032 security: error: client 127.0.0.1#29403: view public: request has invalid signature: TSIG domain2.com.au.: tsig verify failure (BADKEY)
from this nsupdate:
nsupdate -k Kdomain2.com.au.+157+35454.key
server localhost
zone domain2.com.au.
update add test.domain2.com.au. 86400 IN A 10.20.30.40
show
send
What I gather is the correct way to generate the key:
dnssec-keygen -a HMAC-MD5 -b 512 -n HOST domain2.com.au.
named.conf (IPs have been changed for privacy):
acl ipv4 { 0.0.0.0/0; };
acl ipv6 { 2000::/3; ::1; fe80::/10; fec0::/10; };
acl safehosts { 127.0.0.0/8; 3.2.2.40; 44.44.14.12; };
include "/etc/bind/rndc.key";
controls {
    inet * port 953
    allow { safehosts; } keys { "rndc-key"; };
};
options
{
    auth-nxdomain yes;
    empty-zones-enable no;
    zone-statistics yes;
    dnssec-enable yes;
    listen-on { any; };
    listen-on-v6 { any; };
    directory "/etc/bind/db";
    managed-keys-directory "/etc/bind/keys";
    memstatistics-file "/etc/bind/data/bind.memstats";
    statistics-file "/etc/bind/data/bind.qstats";
};
logging
{
    ## CUT ##
};
view "public"
{
    recursion yes;
    allow-query-cache { safehosts; };
    allow-recursion { safehosts; };
    zone "." IN {
        type hint;
        file "root.zone";
    };
    zone "0.0.127.in-addr.arpa" {
        type master;
        allow-update { none; };
        allow-transfer { none; };
        file "0.0.127.in-addr.arpa.zone";
    };
    zone "localhost" {
        type master;
        allow-update { none; };
        allow-transfer { none; };
        file "localhost.zone";
    };
    zone "3.2.2.in-addr.arpa" {
        type master;
        allow-update { none; };
        allow-transfer { none; };
        file "3.2.2.in-addr.arpa.zone";
    };
    zone "domain1.com.au" {
        type master;
        notify yes;
        allow-update { key "rndc-key"; };
        allow-transfer { key "rndc-key"; };
        file "domain1.com.au.zone";
    };
    zone "domain2.com.au" {
        type master;
        notify yes;
        allow-update { key "rndc-key"; };
        allow-transfer { key "rndc-key"; };
        file "doomain2.com.au.zone";
    };
};
/etc/bind/rndc.key:
key "rndc-key" {
algorithm hmac-md5;
secret "vZwCYBx4OAOsBrbdlooUfBaQx+kwEi2eLDXdr+JMs4ykrwXKQTtDSg/jp7eHnw39IehVLMtuVECTqfOwhXBm0A==";
};
Kdomain1.com.au.+157+35454.private
Private-key-format: v1.3
Algorithm: 157 (HMAC_MD5)
Key: vZwCYBx4OAOsBrbdlooUfBaQx+kwEi2eLDXdr+JMs4ykrwXKQTtDSg/jp7eHnw39IehVLMtuVECTqfOwhXBm0A==
Bits: AAA=
Created: 20130814144733
Publish: 20130814144733
Activate: 20130814144733
SOLUTION:
I have no idea why, but it is now working. The only things I did were the following; presumably named just needed write access to the zone directory so it can create the journal (.jnl) files that dynamic updates require:
# chown -R named:named /var/named
# find . -type d -exec chmod 770 {} \;
# find . -type f -exec chmod 660 {} \;
