Transferring syslog-ng config from a systemd to an initd system - linux

Our team has changed Linux distro from Arch to CRUX, and I have a problem moving the logging setup that was built on the old distro. I think the issue is mainly with the template functions and rewrite rules in the syslog-ng config:
template-function "node" "${.SDATA.NODE:-}";
template-function "host" "${.SDATA.HOST:-}";
template-function "login" "${.SDATA.LOGIN:-}";
template-function "type" "${.SDATA.TYPE:-}";
template-function "menu" "${.SDATA.MENU:-}";
template-function "action" "${.SDATA.ACTION:-}";
template-function "status" "${.SDATA.STATUS:-}";
template-function "unit" "${.SDATA._SYSTEMD_UNIT:-}";
template-function "priority" "${LEVEL_NUM:-}";
template-function "facility" "${FACILITY_NUM:-}";
template-function "hostname" "${HOST:-}";
template-function "program" "${PROGRAM:-}";
template-function "search" "${MESSAGE:-}";
template-function "address" "${FULLHOST_FROM:-}";
source s_local {
    channel {
        source {
            #systemd-journal(prefix(".SDATA.journald."));
            file("/var/log/messages");
            internal();
        };
        rewrite { set("localhost" value("NODE")); };
        junction {
            channel {
                parser { json-parser(prefix(".json.")); };
                rewrite {
                    set("${.json.context.HOST}" value("HOST"));
                    set("${.json.context.LOGIN}" value("LOGIN"));
                    set("${.json.context.TYPE}" value("TYPE"));
                    set("${.json.context.MENU}" value("MENU"));
                    set("${.json.context.MSG.action}" value("ACTION"));
                    set("${.json.context.MSG.status}" value("STATUS"));
                };
                flags(final);
            };
            channel {
                flags(final);
            };
        };
    };
};
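One thing the config above leaves implicit: once the `systemd-journal()` source is commented out, macros such as `${.SDATA._SYSTEMD_UNIT}` have no producer, and plain name-value pairs created with `set()` are not automatically put on the wire. A hedged sketch of one way to expose them to Promtail, assuming syslog-ng's convention that `.SDATA.<group>.<name>` pairs are serialised into the RFC 5424 STRUCTURED-DATA field by the `syslog()` destination (the `meta` group name, `r_sdata`, and `d_promtail` are invented for illustration, not from the original config):

```
# Move the rewritten values into the .SDATA namespace so they survive transport...
rewrite r_sdata {
    set("${NODE}" value(".SDATA.meta.NODE"));
    set("${HOST}" value(".SDATA.meta.HOST"));
    set("${LOGIN}" value(".SDATA.meta.LOGIN"));
};
# ...and forward over the RFC 5424 syslog() destination Promtail listens on.
destination d_promtail {
    syslog("localhost" port(1514) transport("tcp"));
};
log { source(s_local); rewrite(r_sdata); destination(d_promtail); };
```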
...
Promtail config:
server:
  http_listen_address: localhost
  grpc_listen_address: localhost
  http_listen_port: 9080
  grpc_listen_port: 0
positions:
  filename: /var/lib/promtail/positions.yaml
clients:
  - url: http://localhost:3100/loki/api/v1/push
scrape_configs:
  - job_name: syslog
    syslog:
      listen_address: localhost:1514
      label_structured_data: true
      labels:
        job: syslog
    relabel_configs:
      - source_labels:
          - __syslog_message_hostname
        target_label: host
      - source_labels:
          - __syslog_message_NODE
        target_label: node
      - source_labels:
          - __syslog_message_HOST
        target_label: host
      - source_labels:
          - __syslog_message_LOGIN
        target_label: login
      - source_labels:
          - __syslog_message_TYPE
        target_label: type
      - source_labels:
          - __syslog_message_MENU
        target_label: menu
      - source_labels:
          - __syslog_message_ACTION
        target_label: action
      - source_labels:
          - __syslog_message_STATUS
        target_label: status
      - source_labels:
          - __syslog_message__SYSTEMD_UNIT
        target_label: unit
      - source_labels:
          - __syslog_message_PRIORITY
        target_label: priority
      - source_labels:
          - __syslog_message_SYSLOG_FACILITY
        target_label: facility
      - source_labels:
          - __syslog_message_hostname
        target_label: hostname
      - source_labels:
          - __syslog_message_app_name
        target_label: program
Loki config:
auth_enabled: false
server:
  http_listen_address: 0.0.0.0
  grpc_listen_address: localhost
  http_listen_port: 3100
ingester:
  wal:
    enabled: true
    dir: /var/lib/loki/wal
  lifecycler: { address: 127.0.0.1, ring: { kvstore: { store: inmemory }, replication_factor: 1 }, final_sleep: 0s }
  chunk_idle_period: 1h
  max_chunk_age: 1h
  chunk_target_size: 1048576
  chunk_retain_period: 30s
  max_transfer_retries: 0
schema_config:
  configs: [{ from: '2020-10-24', store: boltdb-shipper, object_store: filesystem, schema: v11, index: { prefix: index_, period: 24h } }]
storage_config:
  boltdb_shipper: { active_index_directory: /var/lib/loki/boltdb-shipper-active, cache_location: /var/lib/loki/boltdb-shipper-cache, cache_ttl: 24h, shared_store: filesystem }
  filesystem: { directory: /var/lib/loki/chunks }
compactor:
  working_directory: /var/lib/loki/boltdb-shipper-compactor
  shared_store: filesystem
limits_config:
  reject_old_samples: true
  reject_old_samples_max_age: 168h
chunk_store_config:
  max_look_back_period: 30d
table_manager:
  retention_deletes_enabled: true
  retention_period: 30d
frontend:
  address: 0.0.0.0
I have already modified the old configs by changing lines such as
- source_labels:
    - __syslog_message_sd_journald_HOST
# into
- source_labels:
    - __syslog_message_HOST
and
template-function "login" "${.SDATA.journald.LOGIN:-}";
# into
template-function "login" "${.SDATA.LOGIN:-}";
Well, I guess I did something wrong, since it doesn't work as before :) I test it by querying /loki/api/v1/query?query={job="syslog"}.
I would appreciate it if someone could decode this config for me, explain what I am missing, and suggest what I should check.
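Not part of the original post, but one low-tech way to narrow this down is to ask Loki which label names it has actually indexed and diff them against the `target_label`s in the Promtail config. A sketch in Python (the `/loki/api/v1/labels` endpoint and the port come from the configs above; everything else is an assumption):

```python
import json
from urllib.request import urlopen

def known_labels(payload):
    """Parse a /loki/api/v1/labels JSON response into a set of label names."""
    body = json.loads(payload)
    return set(body.get("data", []))

def missing_labels(payload, expected):
    """Labels configured in Promtail that Loki has never seen."""
    return sorted(set(expected) - known_labels(payload))

# The target_label values from the Promtail config above:
EXPECTED = ["job", "host", "node", "login", "type", "menu", "action",
            "status", "unit", "priority", "facility", "hostname", "program"]

def fetch_missing(base="http://localhost:3100"):
    # Assumes Loki is reachable at the address from the Loki config above.
    with urlopen(base + "/loki/api/v1/labels") as resp:
        return missing_labels(resp.read().decode(), EXPECTED)
```

If `fetch_missing()` reports most of the custom labels (node, login, type, ...) as missing, the structured-data fields are being dropped before Promtail, not inside Loki.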

Related

How to configure trojan to make it fall back to the site correctly?

I use the jwilder/nginx-proxy image for automatic HTTPS, and I deploy the trojan-go service through the compose.yml file shown below. I can open the HTTPS website correctly by domain name, but trojan-go does not fall back to the website correctly, and the log shows
github.com/p4gefau1t/trojan-go/proxy.(*Node).BuildNext:stack.go:29 invalid redirect address. check your http server: trojan_web:80 | dial tcp 172.18.0.2:80: connect: connection refused
Where is the problem? Thank you very much!
version: '3'
services:
  trojan-go:
    image: teddysun/trojan-go:latest
    restart: always
    volumes:
      - ./config.json:/etc/trojan-go/config.json
      - /opt/trojan/nginx/certs/:/opt/crt/:ro
    environment:
      - "VIRTUAL_HOST=domain name"
      - "VIRTUAL_PORT=38232"
      - "LETSENCRYPT_HOST=domain name"
      - "LETSENCRYPT_EMAIL=xxx#gmail.com"
    expose:
      - "38232"
  web1:
    image: nginx:latest
    restart: always
    expose:
      - "80"
    volumes:
      - /opt/trojan/nginx/html:/usr/share/nginx/html:ro
    environment:
      - VIRTUAL_HOST=domain name
      - VIRTUAL_PORT=80
      - LETSENCRYPT_HOST=domain name
      - LETSENCRYPT_EMAIL=xxx#gmail.com
networks:
  default:
    external:
      name: proxy_nginx-proxy
The content of the trojan-go config.json is shown below:
{
    "run_type": "server",
    "local_addr": "0.0.0.0",
    "local_port": 38232,
    "remote_addr": "trojan_web",
    "remote_port": 80,
    "log_level": 1,
    "password": [
        "mypasswd"
    ],
    "ssl": {
        "verify": true,
        "verify_hostname": true,
        "cert": "/opt/crt/domain name.crt",
        "key": "/opt/crt/domain name.key",
        "sni": "domain name"
    },
    "router": {
        "enabled": true,
        "block": [
            "geoip:private"
        ]
    }
}
(PS: I confirm that the trojan-go service and the web container are on the same internal network and can communicate with each other.)
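No answer is recorded here, but note that the error log complains about `trojan_web:80` while the compose file names the nginx service `web1`; unless something else provides that name, `remote_addr: "trojan_web"` cannot resolve. A hedged sketch of one way to make the name resolvable (the alias is invented purely to match `remote_addr`; it is not in the original file):

```yaml
  web1:
    image: nginx:latest
    networks:
      default:
        aliases:
          - trojan_web   # makes the name used by remote_addr in config.json resolvable
```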

How to deploy Express Gateway to Azure

I am able to run an Express Gateway Docker container and a Redis Docker container locally and would like to deploy this setup to Azure. How do I go about it?
This is my docker-compose.yml file:
version: '2'
services:
  eg_redis:
    image: redis
    hostname: redis
    container_name: redisdocker
    ports:
      - "6379:6379"
    networks:
      gateway:
        aliases:
          - redis
  express_gateway:
    build: .
    container_name: egdocker
    ports:
      - "9090:9090"
      - "8443:8443"
      - "9876:9876"
    volumes:
      - ./system.config.yml:/usr/src/app/config/system.config.yml
      - ./gateway.config.yml:/usr/src/app/config/gateway.config.yml
    networks:
      - gateway
networks:
  gateway:
And this is my system.config.yml file:
# Core
db:
  redis:
    host: 'redis'
    port: 6379
    namespace: EG
# plugins:
#   express-gateway-plugin-example:
#     param1: 'param from system.config'
crypto:
  cipherKey: sensitiveKey
  algorithm: aes256
  saltRounds: 10
# OAuth2 Settings
session:
  secret: keyboard cat
  resave: false
  saveUninitialized: false
accessTokens:
  timeToExpiry: 7200000
refreshTokens:
  timeToExpiry: 7200000
authorizationCodes:
  timeToExpiry: 300000
And this is my gateway.config.yml file:
http:
  port: 9090
admin:
  port: 9876
  hostname: 0.0.0.0
apiEndpoints:
  # see: http://www.express-gateway.io/docs/configuration/gateway.config.yml/apiEndpoints
  api:
    host: '*'
    paths: '/ip'
    methods: ["POST"]
serviceEndpoints:
  # see: http://www.express-gateway.io/docs/configuration/gateway.config.yml/serviceEndpoints
  httpbin:
    url: 'https://httpbin.org/'
policies:
  - basic-auth
  - cors
  - expression
  - key-auth
  - log
  - oauth2
  - proxy
  - rate-limit
  - request-transformer
pipelines:
  # see: https://www.express-gateway.io/docs/configuration/gateway.config.yml/pipelines
  basic:
    apiEndpoints:
      - api
    policies:
      - request-transformer:
          - action:
              body:
                add:
                  payload: "'Test'"
              headers:
                remove: ["'Authorization'"]
                add:
                  Authorization: "'new key here'"
      - key-auth:
      - proxy:
          - action:
              serviceEndpoint: httpbin
              changeOrigin: true
Mounting the YAML files and then hitting the /ip endpoint is where I am stuck.
According to the configuration file you've posted, I'd say you need to instruct Express Gateway to listen on 0.0.0.0 when run from a container, otherwise it won't be able to listen for external connections.
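A sketch of that change in gateway.config.yml, assuming the `hostname` key is accepted under `http` just as it appears under `admin` in the config above:

```yaml
http:
  port: 9090
  hostname: 0.0.0.0   # bind all interfaces so the container's published port is reachable
```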

How can I set proxyTimeout in express-gateway?

Where exactly do I put proxyTimeout in gateway.config.yml?
You can set a timeout for the proxy policy in pipelines:
pipelines:
  - name: default
    apiEndpoints:
      - test
    policies:
      - proxy:
          - action:
              serviceEndpoint: testService
              proxyTimeout: 6000

puphpet fails to install packages

I'm new to PuPHPet and I'm trying to deploy a machine with the following config file:
vagrantfile:
  target: local
  vm:
    provider:
      local:
        box: puphpet/ubuntu1404-x64
        box_url: puphpet/ubuntu1404-x64
        box_version: '0'
      chosen_virtualizer: virtualbox
      virtualizers:
        virtualbox:
          modifyvm:
            natdnshostresolver1: false
          showgui: 0
    machines:
      vflm_iuacklmx1q1l:
        id: roes
        hostname: roes.puphpet
        network:
          private_network: 192.168.50.101
          forwarded_port:
            vflmnfp_f8witajbpcdk:
              host: '80'
              guest: '80'
        memory: '3072'
        cpus: '3'
    provision:
      puppet:
        manifests_path: puphpet/puppet/manifests
        module_path: puphpet/puppet/modules
        options:
          - '--verbose'
          - '--hiera_config /vagrant/puphpet/puppet/hiera.yaml'
    synced_folder:
      vflsf_c0lu72vfwayv:
        source: 'C:\Users\Jelle\symfony\VrijwilligersTool'
        target: /var/www/VrijwilligersTool
        sync_type: default
        smb:
          smb_host: ''
          smb_username: ''
          smb_password: ''
          mount_options:
            dir_mode: '0775'
            file_mode: '0664'
        rsync:
          args:
            - '--verbose'
            - '--archive'
            - '-z'
          exclude:
            - .vagrant/
            - .git/
          auto: 'true'
        owner: www-data
        group: www-data
    usable_port_range:
      start: 10200
      stop: 10500
    post_up_message: ''
  ssh:
    host: 'false'
    port: 'false'
    private_key_path: 'false'
    username: vagrant
    guest_port: 'false'
    keep_alive: '1'
    forward_agent: 'false'
    forward_x11: 'false'
    shell: 'bash -l'
    insert_key: 'false'
  vagrant:
    host: detect
proxy:
  http: ''
  https: ''
  ftp: ''
  no_proxy: ''
server:
  install: '1'
  packages: { }
users_groups:
  install: '1'
  groups: { }
  users: { }
locale:
  install: '1'
  settings:
    default_locale: nl_BE.UTF-8
    locales:
      - nl_BE.UTF-8
    timezone: Europe/Brussels
firewall:
  install: '1'
  rules: { }
cron:
  install: '1'
  jobs: { }
python:
  install: '1'
  packages: { }
  versions: { }
hhvm:
  install: '1'
  nightly: 0
  composer: '1'
  composer_home: ''
  settings: { }
  server_ini:
    hhvm.server.host: 127.0.0.1
    hhvm.server.port: '9000'
    hhvm.log.use_log_file: '1'
    hhvm.log.file: /var/log/hhvm/error.log
  php_ini:
    display_errors: 'On'
    error_reporting: '-1'
    date.timezone: UTC
mysql:
  install: '1'
  settings:
    version: '5.7'
    root_password: secret
    override_options: { }
  adminer: 0
  users:
    mysqlnu_rw5biplek6za:
      name: homestead
      password: secret
  databases:
    mysqlnd_osndwm6tnrw5:
      name: homestead
      sql: ''
  grants:
    mysqlng_j4jxg3loo0o1:
      user: homestead
      table: '*.*'
      privileges:
        - ALL
elastic_search:
  install: '0'
  settings:
    version: 2.3.1
    java_install: true
  instances:
    esi_cygoi2r38jmu:
      name: es-01
When I execute vagrant up, I get a sea of errors.
Normally I'd go through them all, but this is a very niche thing to google and I'm totally new to this. Can someone help me out and show me my mistakes?
Log file: http://pastebin.com/YK65A7zR

How to configure vagrant to work with node.js

I have a problem running node.js with Vagrant.
My project has the following structure:
- public
  - hello.js
- vagrant
  - puphpet
  - Vagrantfile
Here's my puphpet config:
---
vagrantfile-local:
  vm:
    box: puphpet/debian75-x64
    box_url: 'E:\vagrant boxes\debian-7.5-x86_64-v1.2-virtualbox.box'
    hostname: ''
    memory: '1024'
    cpus: '1'
    chosen_provider: virtualbox
    network:
      private_network: 192.168.56.102
      forwarded_port:
        BD200PpFPN2U:
          host: '3000'
          guest: '3000'
    post_up_message: ''
    provider:
      virtualbox:
        modifyvm:
          natdnshostresolver1: on
      vmware:
        numvcpus: 1
      parallels:
        cpus: 1
    provision:
      puppet:
        manifests_path: puphpet/puppet
        manifest_file: site.pp
        module_path: puphpet/puppet/modules
        options:
          - '--verbose'
          - '--hiera_config /vagrant/puphpet/puppet/hiera.yaml'
          - '--parser future'
    synced_folder:
      uREBTumUq032:
        owner: www-data
        group: www-data
        source: ../
        target: /var/www
        sync_type: default
        rsync:
          args:
            - '--verbose'
            - '--archive'
            - '-z'
          exclude:
            - .vagrant/
          auto: 'false'
    usable_port_range:
      start: 10200
      stop: 10500
  ssh:
    host: null
    port: null
    private_key_path: null
    username: vagrant
    guest_port: null
    keep_alive: true
    forward_agent: false
    forward_x11: false
    shell: 'bash -l'
  vagrant:
    host: detect
server:
  install: '1'
  packages: { }
firewall:
  install: '1'
  rules: null
apache:
  install: '1'
  settings:
    user: www-data
    group: www-data
    default_vhost: true
    manage_user: false
    manage_group: false
    sendfile: 0
  modules:
    - rewrite
  vhosts:
    XWIOX0y1wPTF:
      servername: nodeapp.com
      docroot: /var/www/public
      port: '80'
      setenv:
        - 'APP_ENV dev'
      override:
        - All
      options:
        - Indexes
        - FollowSymLinks
        - MultiViews
      engine: php
      custom_fragment: ''
      ssl_cert: ''
      ssl_key: ''
      ssl_chain: ''
      ssl_certs_dir: ''
  mod_pagespeed: 0
Here is hello.js file
var http = require('http');
var server = http.createServer(function (request, response) {
    response.writeHead(200, {"Content-Type": "text/plain"});
    response.end("Hello World!");
});
server.listen(3000);
I log in over SSH, then go to /var/www/public, where the file hello.js is. I run
node hello.js
I don't get any error or message.
Then I go to 192.168.56.102:3000, and after a while I get:
The connection has timed out
Address 192.168.56.102 returns a 404 status code, so Apache is working.
I tried changing the host and guest ports in config.yaml to 8080, but it didn't work.
Did I do something wrong?
Try removing the forwarded port for 3000 and adding that port to the firewall section instead.
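A hedged sketch of what that might look like in the puphpet config above (the rule key is arbitrary, and the field names follow the format of puphpet's firewall section as I understand it; treat them as assumptions):

```yaml
firewall:
  install: '1'
  rules:
    frule_node:          # arbitrary rule key, invented here
      port: ['3000']
      priority: ['100']
      proto: tcp
      action: accept
```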
