I want to change the variable "enabled" in jail.conf of Fail2ban according to the status of Proftpd on the agent machine.
Ex: If Proftpd is running on the agent machine, "enabled = true" (Fail2ban will monitor Proftpd).
If Proftpd is stopped, "enabled = false" (Fail2ban won't monitor Proftpd).
My init.pp file:
class fail2ban {
  package { 'fail2ban':
    ensure => installed,
  }

  service { 'fail2ban':
    ensure  => running,
    enable  => true,
    require => Package['fail2ban'],
  }

  $path   = '/var/run/proftpd.pid'
  $status = inline_template("<% if File.exist?('${path}') -%>true<% else -%>false<% end -%>")

  file { 'jail.conf':
    ensure  => file,
    path    => '/etc/fail2ban/jail.conf',
    require => Package['fail2ban'],
    content => template('fail2ban/jail.conf.erb'),
    notify  => Service['fail2ban'],
  }
}
My template jail.conf.erb file:
[proftpd]
enabled = <%= @status %>
port = ftp,ftp-data,ftps,ftps-data
filter = proftpd
logpath = /var/log/proftpd/proftpd.log
maxretry = 5
The problem is that the "enabled" value is set according to a check on the Puppet master, not on the agent machine, while I need the check to happen on the agent.
Can anyone help me?
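One direction that might help (a sketch only, not tested): inline_template is evaluated during catalog compilation on the Puppet master, so the File.exist? check never runs on the agent. Facts, however, are evaluated on the agent, so a small custom fact shipped in the module (the fact name proftpd_running is made up here) could carry that information back to the master:

# fail2ban/lib/facter/proftpd_running.rb -- synced to agents via pluginsync
Facter.add(:proftpd_running) do
  setcode do
    # Evaluated on the agent at the start of each run
    File.exist?('/var/run/proftpd.pid')
  end
end

The manifest could then set $status from $facts['proftpd_running'] (or $::proftpd_running on older Puppet) instead of calling inline_template, and jail.conf.erb would keep using <%= @status %>.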
I am trying to build a VM on my Proxmox host (from a template I created with Packer), and all is well except that it does not take the IP I specified, but gets one from DHCP instead.
This is my provider config:
# Proxmox Provider
# ---
# Initial Provider Configuration for Proxmox
terraform {
  required_version = ">= 0.13.0"

  required_providers {
    proxmox = {
      source  = "telmate/proxmox"
      version = "2.9.3"
    }
  }
}
variable "proxmox_api_url" {
type = string
}
variable "proxmox_api_token_id" {
type = string
}
variable "proxmox_api_token_secret" {
type = string
}
provider "proxmox" {
pm_api_url = var.proxmox_api_url
pm_api_token_id = var.proxmox_api_token_id
pm_api_token_secret = var.proxmox_api_token_secret
# (Optional) Skip TLS Verification
pm_tls_insecure = true
}
And this is my .tf file:
# Proxmox Full-Clone
# ---
# Create a new VM from a clone
resource "proxmox_vm_qemu" "doc-media-0" {
# VM General Settings
target_node = "proxmox01"
vmid = "100"
name = "doc-media-0"
desc = "Docker media server running on Ubuntu"
# VM Advanced General Settings
onboot = true
# VM OS Settings
clone = "ubuntu-server-jammy-docker"
# The destination resource pool for the new VM
pool = "prod"
# VM System Settings
agent = 1
# VM CPU Settings
cores = 3
sockets = 2
cpu = "host"
# Storage settings
disk {
/* id = 0 */
type = "virtio"
storage = "data-fast"
/* storage_type = "directory" */
size = "20G"
/* backup = true */
}
# VM Memory Settings
memory = 10240
# VM Network Settings
network {
bridge = "vmbr0"
model = "virtio"
}
# VM Cloud-Init Settings
os_type = "cloud-init"
# (Optional) IP Address and Gateway
ipconfig0 = "ip=192.168.1.20/16,gw=192.168.1.1"
# (Optional) Name servers
nameserver = "192.168.1.1"
# (Optional) Default User
ciuser = "fabrice"
# (Optional) Add your SSH KEY
sshkeys = <<EOF
ssh-ed25519 <publick-ssh-key-removed>
EOF
}
Expected result
IP is 192.168.1.20
by virtue of ipconfig0 = "ip=192.168.1.20/16,gw=192.168.1.1"
Actual result
VM got a DHCP address
What is odd is that the other settings applied: my gateway is correct, my user is there, and so is my public SSH key.
Ignore me; I think it did apply it, just not right after the Terraform run completed.
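In case it helps anyone else, two hedged ways to verify this (commands assumed, not from the original post): on the Proxmox node, qm config 100 should list the cloud-init settings (ipconfig0, nameserver, ciuser), and inside the guest, cloud-init status --wait reports when cloud-init has actually finished applying them.

qm config 100 | grep -E 'ipconfig0|nameserver|ciuser'
cloud-init status --wait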
I am trying to use Terraform without any cloud instance, only to install a cloudflared tunnel locally, using this construct:
resource "null_resource" "tunell_install" {
triggers = {
always_run = timestamp()
}
provisioner "local-exec" {
command = "/home/uzer/script/tunnel.sh"
}
}
instead of something like:
provider "google" {
project = var.gcp_project_id
}
but after running
$ terraform apply -auto-approve
it created /etc/cloudflared/cert.json with this content:
{
  "AccountTag" : "${account}",
  "TunnelID" : "${tunnel_id}",
  "TunnelName" : "${tunnel_name}",
  "TunnelSecret" : "${secret}"
}
but as I understand it, there should be actual values here instead of the variable placeholders? It seems that metadata_startup_script from instance.tf is only applied to a Google instance. How can I change this so that Terraform installs the CF tunnel locally and runs it? Maybe I also need to use templatefile, but in another .tf file? The current metadata_startup_script block:
// This is where we configure the server (aka instance). Variables like web_zone take a Terraform variable and provide it to the server so that it can use them as local variables.
metadata_startup_script = templatefile("./server.tpl",
  {
    web_zone    = var.cloudflare_zone,
    account     = var.cloudflare_account_id,
    tunnel_id   = cloudflare_argo_tunnel.auto_tunnel.id,
    tunnel_name = cloudflare_argo_tunnel.auto_tunnel.name,
    secret      = random_id.tunnel_secret.b64_std
  })
Content of server.tpl file:
# Script to install Cloudflare Tunnel
# cloudflared configuration
cd
# The package for this OS is retrieved
wget https://bin.equinox.io/c/VdrWdbjqyF/cloudflared-stable-linux-amd64.deb
sudo dpkg -i cloudflared-stable-linux-amd64.deb
# A local user directory is first created before we can install the tunnel as a system service
mkdir ~/.cloudflared
touch ~/.cloudflared/cert.json
touch ~/.cloudflared/config.yml
# Another heredoc is used to dynamically populate the JSON credentials file
cat > ~/.cloudflared/cert.json << "EOF"
{
  "AccountTag" : "${account}",
  "TunnelID" : "${tunnel_id}",
  "TunnelName" : "${tunnel_name}",
  "TunnelSecret" : "${secret}"
}
EOF
# Same concept with the Ingress Rules the tunnel will use
cat > ~/.cloudflared/config.yml << "EOF"
tunnel: ${tunnel_id}
credentials-file: /etc/cloudflared/cert.json
logfile: /var/log/cloudflared.log
loglevel: info
ingress:
  - hostname: ssh.${web_zone}
    service: ssh://localhost:22
  - hostname: "*"
    service: hello-world
EOF
# Now we install the tunnel as a systemd service
sudo cloudflared service install
# The credentials file does not get copied over so we'll do that manually
sudo cp -via ~/.cloudflared/cert.json /etc/cloudflared/
# Now we can start the tunnel
sudo service cloudflared start
In argo.tf, this code already exists:
data "template_file" "init" {
template = file("server.tpl")
vars = {
web_zone = var.cloudflare_zone,
account = var.cloudflare_account_id,
tunnel_id = cloudflare_argo_tunnel.auto_tunnel.id,
tunnel_name = cloudflare_argo_tunnel.auto_tunnel.name,
secret = random_id.tunnel_secret.b64_std
}
}
If you are asking about how to create the file locally and populate the values, here is an example:
resource "local_file" "cloudflare_tunnel_script" {
content = templatefile("${path.module}/server.tpl",
{
web_zone = "webzone"
account = "account"
tunnel_id = "id"
tunnel_name = "name"
secret = "secret"
}
)
filename = "${path.module}/server.sh"
}
For this to work, you would have to assign the real values for all the template variables listed above. From what I see, there are already examples of how to use variables for those values. In other words, instead of hardcoding the values for template variables you could use standard variables:
resource "local_file" "cloudflare_tunnel_script" {
content = templatefile("${path.module}/server.tpl",
{
web_zone = var.cloudflare_zone
account = var.cloudflare_account_id
tunnel_id = cloudflare_argo_tunnel.auto_tunnel.id
tunnel_name = cloudflare_argo_tunnel.auto_tunnel.name
secret = random_id.tunnel_secret.b64_std
}
)
filename = "${path.module}/server.sh"
}
This code will populate all the values and create a server.sh script in the same directory you are running the Terraform code from.
You could complement this code with the null_resource you wanted:
resource "null_resource" "tunnel_install" {
depends_on = [
local_file.cloudflare_tunnel_script,
]
triggers = {
always_run = timestamp()
}
provisioner "local-exec" {
command = "${path.module}/server.sh"
}
}
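One small caveat (an assumption about the local environment, not something stated in the question): local-exec runs the generated script directly, so server.sh has to be executable. If it is not, you could either set the file_permission argument on the local_file resource or invoke the script through a shell, for example:

provisioner "local-exec" {
  command = "bash ${path.module}/server.sh"
}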
I am new to Terraform and I am trying to deploy a machine (with a qcow2 image) on KVM automatically via Terraform.
I found this .tf file:
provider "libvirt" {
uri = "qemu:///system"
}
#provider "libvirt" {
# alias = "server2"
# uri = "qemu+ssh://root#192.168.100.10/system"
#}
resource "libvirt_volume" "centos7-qcow2" {
name = "centos7.qcow2"
pool = "default"
source = "https://cloud.centos.org/centos/7/images/CentOS-7-x86_64-GenericCloud.qcow2"
#source = "./CentOS-7-x86_64-GenericCloud.qcow2"
format = "qcow2"
}
# Define KVM domain to create
resource "libvirt_domain" "db1" {
name = "db1"
memory = "1024"
vcpu = 1
network_interface {
network_name = "default"
}
disk {
volume_id = "${libvirt_volume.centos7-qcow2.id}"
}
console {
type = "pty"
target_type = "serial"
target_port = "0"
}
graphics {
type = "spice"
listen_type = "address"
autoport = true
}
}
My questions are:
(source) Does the path of my qcow2 file have to be local on my computer?
I have a KVM machine that I connect to remotely by its IP. Where should I put this IP in this .tf file?
When I did it manually, I ran virt-manager. Do I need to reference it anywhere here?
Thanks a lot.
No. It can also be an https URL.
Do you mean a KVM host on which the VMs will be created? Then you need to configure remote KVM access on that host, and put its IP in the uri of the provider block:
uri = "qemu+ssh://username#IP_OF_HOST/system"
You don't need virt-manager when you use Terraform. You should use Terraform resources to manage the VM.
https://registry.terraform.io/providers/dmacvicar/libvirt/latest/docs
https://github.com/dmacvicar/terraform-provider-libvirt/tree/main/examples/v0.13
I have a server running Plex and two other services I want to monitor with Icinga2, and for the life of me I can't figure out how to get that to work. If I run the following command:
./check_procs -c 1:1 -a '/usr/lib/plexmediaserver/Plex Media Server'
it returns the following when I manually kill Plex:
PROCS CRITICAL: 0 processes with args '/usr/lib/plexmediaserver/Plex Media Server' | procs=0;;1:1;0;
I just can't figure out how to add this check to the server. Where do I put it?
I tried adding another declaration to /etc/icinga2/conf.d/services.conf as follows:
apply Service "procs"
{
import "generic-service"
check_command = "procs"
assign where host.name == NodeName
arguments =
{
"-a" =
{
value = "/usr/lib/plexmediaserver/Plex Media Server"
description = "service name"
required = true
}
}
}
But then the agent wouldn't start at all.
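One likely reason (a guess, since the validation error isn't shown) is that arguments blocks belong to CheckCommand definitions, not to Service objects. With the procs CheckCommand shipped in the Icinga Template Library, the same check can usually be expressed through its custom variables instead; a minimal sketch (the service name is made up):

apply Service "plex-procs" {
  import "generic-service"
  check_command = "procs"
  // custom variables of the ITL "procs" CheckCommand
  vars.procs_critical = "1:1"
  vars.procs_argument = "/usr/lib/plexmediaserver/Plex Media Server"
  assign where host.name == NodeName
}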
I solved this by defining a service:
apply Service for (service => config in host.vars.processes_linux) {
  import "generic-service"
  check_command = "nrpe"
  display_name = config.display_name
  vars.nrpe_command = "check_process"
  vars.nrpe_arguments = [ config.process, config.warn_range, config.crit_range ]
}
In the host definition I then just add a config, let's say for mongodb:
vars.processes_linux["trench-srv-lin-process-mongodb"] = {
display_name = "MongoDB processes"
process = "mongod"
warn_range = "1:"
crit_range = "1:"
}
On the remote host, I need to install the nagios-nrpe-server package,
and in the config file /etc/nagios/nrpe_local.cfg I add this line:
command[check_procs]=/usr/lib/nagios/plugins/check_procs -w $ARG1$ -c $ARG2$ -s $ARG3$
I am running a small cluster of Raspberry Pis which I am monitoring with Icinga2. On the master node of my cluster I have a DHCP server running. I check its status the following way.
First I downloaded the check service status plugin from the Icinga Exchange, made it executable and moved it to /usr/lib/nagios/plugins (your path may differ).
Then I defined a check command for it:
object CheckCommand "Check Service" {
import "plugin-check-command"
command = [ PluginDir + "/check_service.sh" ]
arguments += {
"-o" = {
required = true
value = "$check_service_os$"
}
"-s" = {
required = true
value = "$check_service_name$"
}
}
}
Now all that was left was defining a Service:
object Service "Check DHCP" {
host_name = "Localhost"
check_command = "Check Service"
enable_perfdata = true
event_command = "Restart DHCP"
vars.check_service_name = "isc-dhcp-server"
vars.check_service_os = "linux"
}
As a bonus, you can even define an event command that restarts your service:
object EventCommand "Restart DHCP" {
import "plugin-event-command"
command = [ "/usr/bin/sudo", "systemctl", "restart" ]
arguments += {
"(no key)" = {
skip_key = true
value = "$check_service_name$"
}
}
vars.check_service_name = "isc-dhcp-server"
}
But for this to work, you have to give your nagios user (or whatever user runs your icinga service) sudo privileges to restart services. Add this line to your sudoers file:
nagios ALL = (ALL) NOPASSWD: /bin/systemctl restart *
I hope this helps you with your problem :-)
Jan
After this existing block
prefix 2a03:2267:4e6f:7264:0000:0000:0000:0000/64
{
};
I want to add a new block, if it doesn't exist already:
prefix fdda:fee6:0187:0000:0000:0000:0000:0000/64
{
};
in /etc/radvd.conf (not at the end of the file),
and then run /etc/init.d/radvd restart.
How do I manage this with Puppet?
Install the stdlib module, which provides file_line:
puppet module install puppetlabs-stdlib
Then create a manifest addblock.pp:
file_line { "ensure $line in /etc/radvd.conf":
path => '/etc/radvd.conf',
line => "prefix fdda:fee6:0187:0000:0000:0000:0000:0000/64\n{\n};",
}
exec { "restart":
command => '/etc/init.d/radvd restart',
provider => shell,
require => File_line["ensure $line in /etc/radvd.conf"],
}