rsync not working in two-way lsyncd configuration - Linux

I have configured lsyncd on two servers. Below are the lsyncd config settings for both:
For server 1, the rsync setting is:
source = server1
destination = server2
and for server 2 it is the reverse:
source = server2
destination = server1
rsync works perfectly on server 1 and immediately reflects new changes on server 2, even though the folder there has a different user and permissions.
On the second server, rsync only works when I run the lsyncd config file manually (command: sudo service lsyncd status -l).
rsync does not work for server 2; changes are not reflected immediately on server 1.
Here is my lsyncd config:
server1:
-- lsyncd config file for 2-way sync
settings {
    logfile        = "/var/log/lsyncd/lsyncd.log",
    statusFile     = "/var/log/lsyncd/lsyncd.status",
    statusInterval = 1,
    nodaemon       = true,
    insist         = true
}
sync {
    default.rsync,
    source = "path/uploads",
    target = "bitnami@<IP_server2>:/path/uploads",
    delete = 'running',
    --delay = 5,
    rsync = {
        -- timeout = 3000,
        update   = true,
        times    = true,
        archive  = true,
        compress = true,
        perms    = true,
        acls     = true,
        owner    = true,
        rsh      = "<ssh_key>"
    }
}
server 2:
settings {
    logfile        = "/var/log/lsyncd/lsyncd.log",
    statusFile     = "/var/log/lsyncd/lsyncd.status",
    statusInterval = 1,
    nodaemon       = true,
    insist         = true
}
sync {
    default.rsync,
    source = "path/uploads",
    target = "bitnami@<IP_server1>:/path/uploads",
    delete = 'running',
    --delay = 5,
    rsync = {
        -- timeout = 3000,
        update   = true,
        times    = true,
        archive  = true,
        compress = true,
        perms    = true,
        acls     = true,
        owner    = true,
        rsh      = "<ssh_key>"
    }
}
Can anyone help me out with this?
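As a side note on the configs above (not part of the original question): in lsyncd's rsync block, the rsh option normally carries the full remote-shell command line, i.e. the ssh binary plus the identity file to use, rather than a bare key reference. A minimal sketch, with placeholder paths and the bitnami user from the question:

-- hypothetical sketch only: rsh is the complete ssh command rsync should invoke
sync {
    default.rsync,
    source = "/path/uploads",
    target = "bitnami@<IP_server2>:/path/uploads",
    rsync  = {
        archive  = true,
        compress = true,
        rsh      = "/usr/bin/ssh -i /home/bitnami/.ssh/id_rsa -o StrictHostKeyChecking=no"
    }
}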

Related

When sending the configuration to the puppet agent, I get an error

When sending the configuration to the agent, I get an error:
Error 500 on SERVER: Server Error: Evaluation Error: Error while evaluating a Resource Statement, Evaluation Error: Error while evaluating a Function Call, no implicit conversion of String into Integer (file:/etc/puppetlabs/code/environments/production/modules/accounts/manifests/init.pp, line: 24, column: 24) on node
init.pp
# See README.md for details.
class accounts(
  $groups = {},
  $groups_membership = undef,
  $ssh_keys = {},
  $users = {},
  $usergroups = {},
  $accounts = {},
  $start_uid = undef,
  $start_gid = undef,
  $purge_ssh_keys = false,
  $ssh_authorized_key_title = '%{ssh_key}-on-%{account}',
  $shell = undef,
  $managehome = true,
  $forcelocal = true,
) {
  include ::accounts::config
  create_resources(group, $groups)
  create_resources(accounts::account, $accounts)

  # Remove users marked as absent
  $absent_users = keys(absents($users))
  user { $absent_users:
    ensure     => absent,
    managehome => $managehome,
    forcelocal => $forcelocal,
  }
}
_______________________________________________________________________________________________
I'm new to Puppet and am using it together with Foreman.
Puppet master version 7.2.0
Puppet agent version 6.27.0
Foreman version 3.4.0
All settings were made in Foreman; manifests, like any other changes, were not edited from the console.
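Not from the original post, but one hedged observation: class parameters supplied through Foreman smart class parameters arrive as strings unless the parameter type is changed from "string", which can surface as Ruby errors such as "no implicit conversion of String into Integer". A purely illustrative comparison for the start_uid parameter declared above:

# illustrative only - how the same Foreman value reaches Puppet
start_uid: "3000"   # parameter type "string"  -> Puppet receives the String '3000'
#start_uid: 3000    # parameter type "integer" -> Puppet receives the Integer 3000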

KVM with Terraform: SSH permission denied (Cloud-Init)

I have a KVM host. I'm using Terraform to create some virtual servers using the KVM provider. Here's the relevant section of the Terraform file:
provider "libvirt" {
  uri = "qemu+ssh://root@192.168.60.7"
}

resource "libvirt_volume" "ubuntu-qcow2" {
  count  = 1
  name   = "ubuntu-qcow2-${count.index+1}"
  pool   = "default"
  source = "https://cloud-images.ubuntu.com/bionic/current/bionic-server-cloudimg-amd64.img"
  format = "qcow2"
}

resource "libvirt_network" "vm_network" {
  name      = "vm_network"
  mode      = "bridge"
  bridge    = "br0"
  addresses = ["192.168.60.224/27"]
  dhcp {
    enabled = true
  }
}

# Use CloudInit to add our ssh-key to the instance
resource "libvirt_cloudinit_disk" "commoninit" {
  name           = "commoninit.iso"
  pool           = "default"
  user_data      = "data.template_file.user_data.rendered"
  network_config = "data.template_file.network_config.rendered"
}

data "template_file" "user_data" {
  template = file("${path.module}/cloud_config.yaml")
}

data "template_file" "network_config" {
  template = file("${path.module}/network_config.yaml")
}
The cloud_config.yaml file contains the following info:
manage_etc_hosts: true
users:
  - name: ubuntu
    sudo: ALL=(ALL) NOPASSWD:ALL
    groups: users, admin
    home: /home/ubuntu
    shell: /bin/bash
    lock_passwd: false
    ssh-authorized-keys:
      - ${file("/path/to/keyfolder/homelab.pub")}
ssh_pwauth: false
disable_root: false
chpasswd:
  list: |
    ubuntu:linux
  expire: False
package_update: true
packages:
  - qemu-guest-agent
growpart:
  mode: auto
  devices: ['/']
The server is created successfully and I can ping it from the host on which I ran the Terraform script, but I cannot log in over SSH even though I pass my SSH key through the cloud-init file.
From the folder where all my keys are stored I run:
homecomputer:keyfolder wim$ ssh -i homelab ubuntu@192.168.80.86
ubuntu@192.168.60.86: Permission denied (publickey).
In this command, homelab is my private key.
Any reason why I cannot log in? Any way to debug? I cannot log in to the server to troubleshoot. I tried setting the password in the cloud-config file, but that does not work either.
Additional information
1) The rendered template is as follows:
> data.template_file.user_data.rendered
manage_etc_hosts: true
users:
  - name: ubuntu
    sudo: ALL=(ALL) NOPASSWD:ALL
    groups: users, admin
    home: /home/ubuntu
    shell: /bin/bash
    lock_passwd: false
    ssh-authorized-keys:
      - ssh-rsa AAAAB3NzaC1y***Homelab_Wim
ssh_pwauth: false
disable_root: false
chpasswd:
  list: |
    ubuntu:linux
  expire: False
package_update: true
packages:
  - qemu-guest-agent
growpart:
  mode: auto
  devices: ['/']
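To address the "any way to debug?" part generically (not from the thread): running the ssh client with maximum verbosity shows which keys are offered and why the server rejects them. A sketch using the key and address from the question:

ssh -vvv -i homelab ubuntu@192.168.60.86
# look for "Offering public key" and "Authentications that can continue" in the output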
I also faced the same problem, because I was missing the first line
#cloud-config
in the cloudinit.cfg file.
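In other words, the very first line of the user-data file has to be the #cloud-config header, otherwise cloud-init ignores it. A minimal sketch based on the cloud_config.yaml from the question:

#cloud-config
manage_etc_hosts: true
users:
  - name: ubuntu
    sudo: ALL=(ALL) NOPASSWD:ALL
    ssh-authorized-keys:
      - ${file("/path/to/keyfolder/homelab.pub")}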
You need to add a libvirt_cloudinit_disk resource to pass the ssh key to the VM. Code from my TF script:
# Use CloudInit ISO to add ssh-key to the instance
resource "libvirt_cloudinit_disk" "commoninit" {
  count = length(var.hostname)
  name  = "${var.hostname[count.index]}-commoninit.iso"
  #name = "${var.hostname}-commoninit.iso"
  # pool = "default"
  user_data      = data.template_file.user_data[count.index].rendered
  network_config = data.template_file.network_config.rendered
}
Hi, I had the same problem. I resolved it this way:
user_data = data.template_file.user_data.rendered
without double quotes!
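The quotes matter because in Terraform 0.12+ a quoted value is treated as a literal string, while an unquoted reference is evaluated. A short comparison using the resource names from the question:

# passes the literal text "data.template_file.user_data.rendered" to cloud-init
user_data = "data.template_file.user_data.rendered"

# passes the rendered template content
user_data = data.template_file.user_data.rendered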

Problem with the puppetlabs tagmail module

I'm using Puppet 6.14.0 and the tagmail module 3.2.0 on CentOS 7.
Below is my config on the master:
[master]
dns_alt_names=*******
vardir = /opt/puppetlabs/server/data/puppetserver
logdir = /var/log/puppetlabs/puppetserver
rundir = /var/run/puppetlabs/puppetserver
pidfile = /var/run/puppetlabs/puppetserver/puppetserver.pid
codedir = /etc/puppetlabs/code
confdir = /etc/puppetlabs/puppet
reports = puppetdb,console,tagmail
tagmap = $confdir/tagmail.conf
tagmail.conf (using a local SMTP server; I'm able to telnet to it):
[transport]
reportfrom = **********
smtpserver = localhost
smtpport = 25
smtphelo = localhost
[tagmap]
all: my_email_address
And below is my config on one managed node:
[main]
certname = *********
server = *********
environment = uat
runinterval = 120
[agent]
report = true
pluginsync = true
But I'm not receiving any reports from tagmail.
Is anyone having the same problem, or am I missing something in my config?
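For comparison (not from the thread): the [tagmap] section of tagmail.conf accepts comma-separated tag lists, with ! to exclude a tag, and as far as I recall tagmail only sends mail when a run actually produces log messages matching a tag. Addresses below are placeholders:

[tagmap]
all: log-archive@example.org
webserver, !mailserver: httpadmins@example.org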

Error while submitting a spark job using spark-jobserver

I occasionally face the following error while submitting a job. The error goes away if I remove the rootdirs of filedao, datadao and sqldao, which means I have to restart the job server and re-upload my jar.
{
"status": "ERROR",
"result": {
"message": "Ask timed out on [Actor[akka://JobServer/user/context-supervisor/1995aeba-com.spmsoftware.distributed.job.TestJob#-1370794810]] after [10000 ms]. Sender[null] sent message of type \"spark.jobserver.JobManagerActor$StartJob\".",
"errorClass": "akka.pattern.AskTimeoutException",
"stack": ["akka.pattern.PromiseActorRef$$anonfun$1.apply$mcV$sp(AskSupport.scala:604)", "akka.actor.Scheduler$$anon$4.run(Scheduler.scala:126)", "scala.concurrent.Future$InternalCallbackExecutor$.unbatchedExecute(Future.scala:601)", "scala.concurrent.BatchingExecutor$class.execute(BatchingExecutor.scala:109)", "scala.concurrent.Future$InternalCallbackExecutor$.execute(Future.scala:599)", "akka.actor.LightArrayRevolverScheduler$TaskHolder.executeTask(LightArrayRevolverScheduler.scala:331)", "akka.actor.LightArrayRevolverScheduler$$anon$4.executeBucket$1(LightArrayRevolverScheduler.scala:282)", "akka.actor.LightArrayRevolverScheduler$$anon$4.nextTick(LightArrayRevolverScheduler.scala:286)", "akka.actor.LightArrayRevolverScheduler$$anon$4.run(LightArrayRevolverScheduler.scala:238)", "java.lang.Thread.run(Thread.java:745)"]
}
}
My config file is as follows:
# Template for a Spark Job Server configuration file
# When deployed these settings are loaded when job server starts
#
# Spark Cluster / Job Server configuration
spark {
  # spark.master will be passed to each job's JobContext
  master = <spark_master>

  # Default # of CPUs for jobs to use for Spark standalone cluster
  job-number-cpus = 4

  jobserver {
    port = 8090
    context-per-jvm = false
    context-creation-timeout = 100 s

    # Note: JobFileDAO is deprecated from v0.7.0 because of issues in
    # production and will be removed in future, now defaults to H2 file.
    jobdao = spark.jobserver.io.JobSqlDAO

    filedao {
      rootdir = /tmp/spark-jobserver/filedao/data
    }
    datadao {
      rootdir = /tmp/spark-jobserver/upload
    }
    sqldao {
      slick-driver = slick.driver.H2Driver
      jdbc-driver = org.h2.Driver
      rootdir = /tmp/spark-jobserver/sqldao/data
      jdbc {
        url = "jdbc:h2:file:/tmp/spark-jobserver/sqldao/data/h2-db"
        user = ""
        password = ""
      }
      dbcp {
        enabled = false
        maxactive = 20
        maxidle = 10
        initialsize = 10
      }
    }
    result-chunk-size = 1m
    short-timeout = 60 s
  }

  context-settings {
    num-cpu-cores = 2        # Number of cores to allocate. Required.
    memory-per-node = 512m   # Executor memory per node, -Xmx style eg 512m, #1G, etc.
  }
}

akka {
  remote.netty.tcp {
    # This controls the maximum message size, including job results, that can be sent
    # maximum-frame-size = 200 MiB
  }
}

# check the reference.conf in spray-can/src/main/resources for all defined settings
spray.can.server.parsing.max-content-length = 250m
I am using the spark-2.0-preview version.
I have faced the same error before, and it was related to the timeout. For a synchronous request (sync=true) you must also provide the timeout (in seconds), a value that reflects how long it takes to process your request.
This is an example of how the request should look:
curl -k --basic -d '' 'http://localhost:5050/jobs?appName=app&classPath=Main&context=test-context&sync=true&timeout=40'
If your request needs more than 40 seconds, you may also need to modify the application.conf located at
spark-jobserver-master/job-server/src/main/resources/application.conf
and, in the spray.can.server section, change:
idle-timeout = 210 s
request-timeout = 200 s
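Written out as a block in application.conf, that change would look roughly like this (values taken from the answer above; spray expects idle-timeout to stay larger than request-timeout):

spray.can.server {
  idle-timeout    = 210 s
  request-timeout = 200 s
}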

No value from Hiera in Puppet manifests when Foreman is installed

If I try to get data for a module using calling_class, the data does not reach the Puppet manifests; if I put the variable into the common or osfamily YAML file, the value is available from the manifests.
My environment:
Puppet Master 3.7.4 + Foreman 1.7 + Hiera 1.3.4
Hiera configs:
---
:backends:
  - yaml
:hierarchy:
  - "%{::environment}/node/%{::fqdn}"            # node settings
  - "%{::environment}/profile/%{calling_class}"  # profile settings
  - "%{::environment}/%{::environment}"          # environment settings
  - "%{::environment}/%{::osfamily}"             # osfamily settings
  - common                                       # common settings
:yaml:
  :datadir: '/etc/puppet/hiera'
/etc/puppet/hiera/production/profile/common.yaml
profile::common::directory_hierarchy:
  - "C:\\SiteName"
  - "C:\\SiteName\\Config"
profile::common::system: "common"
And in the profile module manifest /etc/puppet/environments/production/modules/profile/manifests/common.pp:
class profile::common (
  $directory_hierarchy = undef,
  $system              = undef
) {
  notify { "Dir is- $directory_hierarchy my fqdn is $fqdn, system = $system": }
}
Puppet config /etc/puppet/puppet.conf:
[main]
logdir = /var/log/puppet
rundir = /var/run/puppet
ssldir = $vardir/ssl
privatekeydir = $ssldir/private_keys { group = service }
hostprivkey = $privatekeydir/$certname.pem { mode = 640 }
autosign = $confdir/autosign.conf { mode = 664 }
show_diff = false
hiera_config = $confdir/hiera.yaml
[agent]
classfile = $vardir/classes.txt
localconfig = $vardir/localconfig
default_schedules = false
report = true
pluginsync = true
masterport = 8140
environment = production
certname = puppet024.novalocal
server = puppet024.novalocal
listen = false
splay = false
splaylimit = 1800
runinterval = 1800
noop = false
configtimeout = 120
usecacheonfailure = true
[master]
autosign = $confdir/autosign.conf { mode = 664 }
reports = foreman
external_nodes = /etc/puppet/node.rb
node_terminus = exec
ca = true
ssldir = /var/lib/puppet/ssl
certname = puppet024.novalocal
strict_variables = false
environmentpath = /etc/puppet/environments
basemodulepath = /etc/puppet/environments/common:/etc/puppet/modules:/usr/share/puppet/modules
parser = future
And the more interesting thing is that if I deploy the same code without Foreman, it works.
Maybe I've missed some configuration or plugins?
You need to have an environment ("production" in your sample) folder structure as below:
/etc/puppet/hiera/environments/production/node/%{::fqdn}.yaml
/etc/puppet/hiera/environments/production/profile/%{calling_class}.yaml
/etc/puppet/hiera/environments/production/production/*.yaml
/etc/puppet/hiera/environments/production/%{::osfamily}.yaml
/etc/puppet/hiera/environments/common.yaml
So the path you pasted is also wrong:
/etc/puppet/hiera/production/profile/common.yaml
Side notes
At first view, you shouldn't mix hieradata with the modulepath, so if you can, move the modules out of basemodulepath:
basemodulepath = /etc/puppet/environments/common
With the puppet.conf you pasted, the real profile module path is one of these three folders:
/etc/puppet/environments/common/modules/profile
/etc/puppet/modules/profile
/usr/share/puppet/modules/profile
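One way to check where a key actually resolves, independent of Foreman (a sketch, assuming the Hiera 1.x command-line tool and the facts the hierarchy interpolates):

# hypothetical debug lookup; scope variables are passed as key=value pairs
hiera -d -c /etc/puppet/hiera.yaml profile::common::system \
  ::environment=production calling_class=profile::common ::osfamily=RedHat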
