Can't restrict API access by positional args via external_auth in SaltStack (CherryPy)

I'm trying to restrict calls to state.apply to specific SLS files via the pam module:
external_auth:
  pam:
    myuser:
      - '#runner':
          - jobs.list_job
      - '*':
          - test.ping
          - 'state.apply':
            args:
              - 'path/to/sls'
When I call this via the CherryPy API, I get a 401.
curl http://sat_master/run -H 'content-type: application/json' \
  -d '[{"tgt":"target","arg":["path/to/sls"],"kwarg":{"pillar":{"foo1":"bar1","foo2":"bar2"}},"client":"local_async","fun":"state.apply","username":"myuser","password":"<password>","eauth":"pam"}]'
What I also tried:
external_auth:
  pam:
    myuser:
      - '#runner':
          - jobs.list_job
      - '*':
          - test.ping
          - 'state.apply':
            args:
              - '.*'
external_auth:
  pam:
    myuser:
      - '#runner':
          - jobs.list_job
      - '*':
          - test.ping
          - 'state.apply':
            args:
              - '.*'
            kwargs:
              '.*': '.*'
If I don't specify args it works:
external_auth:
  pam:
    myuser:
      - '#runner':
          - jobs.list_job
      - '*':
          - test.ping
          - state.apply
How do I do this correctly?

The args field should be a field of the function object, i.e.:
Wrong:
'*':
  - state.apply:
    args:
      - 'path/to/sls'
The JSON equivalent
{
  "*": [
    {
      "state.apply": null,
      "args": [
        "path/to/sls"
      ]
    }
  ]
}
Right:
'*':
  - state.apply:
      args:
        - 'path/to/sls'
The JSON equivalent
{
  "*": [
    {
      "state.apply": {
        "args": [
          "path/to/sls"
        ]
      }
    }
  ]
}
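Applied to the config from the question, the fix is just to indent args one level deeper so it nests under 'state.apply'. A minimal sketch of the corrected external_auth block, assuming the same user, targets and SLS path as above:
external_auth:
  pam:
    myuser:
      - '#runner':
          - jobs.list_job
      - '*':
          - test.ping
          - 'state.apply':
              args:
                - 'path/to/sls'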

Related

Transferring a syslog-ng config from a systemd to an initd system

Our team has changed Linux distro from Arch to CRUX, and I have a problem moving the logging setup that was built on the old distro. I think the issue is mainly with the template functions and rewrite rules in the syslog-ng config:
template-function "node" "${.SDATA.NODE:-}";
template-function "host" "${.SDATA.HOST:-}";
template-function "login" "${.SDATA.LOGIN:-}";
template-function "type" "${.SDATA.TYPE:-}";
template-function "menu" "${.SDATA.MENU:-}";
template-function "action" "${.SDATA.ACTION:-}";
template-function "status" "${.SDATA.STATUS:-}";
template-function "unit" "${.SDATA._SYSTEMD_UNIT:-}";
template-function "priority" "${LEVEL_NUM:-}";
template-function "facility" "${FACILITY_NUM:-}";
template-function "hostname" "${HOST:-}";
template-function "program" "${PROGRAM:-}";
template-function "search" "${MESSAGE:-}";
template-function "address" "${FULLHOST_FROM:-}";
source s_local {
  channel {
    source {
      #systemd-journal(prefix(".SDATA.journald."));
      file("/var/log/messages");
      internal();
    };
    rewrite { set("localhost" value("NODE")); };
    junction {
      channel {
        parser { json-parser(prefix(".json.")); };
        rewrite {
          set("${.json.context.HOST}" value("HOST"));
          set("${.json.context.LOGIN}" value("LOGIN"));
          set("${.json.context.TYPE}" value("TYPE"));
          set("${.json.context.MENU}" value("MENU"));
          set("${.json.context.MSG.action}" value("ACTION"));
          set("${.json.context.MSG.status}" value("STATUS"));
        };
        flags(final);
      };
      channel {
        flags(final);
      };
    };
  };
};
...
Promtail config:
server:
  http_listen_address: localhost
  grpc_listen_address: localhost
  http_listen_port: 9080
  grpc_listen_port: 0
positions:
  filename: /var/lib/promtail/positions.yaml
clients:
  - url: http://localhost:3100/loki/api/v1/push
scrape_configs:
  - job_name: syslog
    syslog:
      listen_address: localhost:1514
      label_structured_data: true
      labels:
        job: syslog
    relabel_configs:
      - source_labels:
          - __syslog_message_hostname
        target_label: host
      - source_labels:
          - __syslog_message_NODE
        target_label: node
      - source_labels:
          - __syslog_message_HOST
        target_label: host
      - source_labels:
          - __syslog_message_LOGIN
        target_label: login
      - source_labels:
          - __syslog_message_TYPE
        target_label: type
      - source_labels:
          - __syslog_message_MENU
        target_label: menu
      - source_labels:
          - __syslog_message_ACTION
        target_label: action
      - source_labels:
          - __syslog_message_STATUS
        target_label: status
      - source_labels:
          - __syslog_message__SYSTEMD_UNIT
        target_label: unit
      - source_labels:
          - __syslog_message_PRIORITY
        target_label: priority
      - source_labels:
          - __syslog_message_SYSLOG_FACILITY
        target_label: facility
      - source_labels:
          - __syslog_message_hostname
        target_label: hostname
      - source_labels:
          - __syslog_message_app_name
        target_label: program
Loki config:
auth_enabled: false
server:
  http_listen_address: 0.0.0.0
  grpc_listen_address: localhost
  http_listen_port: 3100
ingester:
  wal:
    enabled: true
    dir: /var/lib/loki/wal
  lifecycler: { address: 127.0.0.1, ring: { kvstore: { store: inmemory }, replication_factor: 1 }, final_sleep: 0s }
  chunk_idle_period: 1h
  max_chunk_age: 1h
  chunk_target_size: 1048576
  chunk_retain_period: 30s
  max_transfer_retries: 0
schema_config:
  configs: [{ from: '2020-10-24', store: boltdb-shipper, object_store: filesystem, schema: v11, index: { prefix: index_, period: 24h } }]
storage_config:
  boltdb_shipper: { active_index_directory: /var/lib/loki/boltdb-shipper-active, cache_location: /var/lib/loki/boltdb-shipper-cache, cache_ttl: 24h, shared_store: filesystem }
  filesystem: { directory: /var/lib/loki/chunks }
compactor:
  working_directory: /var/lib/loki/boltdb-shipper-compactor
  shared_store: filesystem
limits_config:
  reject_old_samples: true
  reject_old_samples_max_age: 168h
chunk_store_config:
  max_look_back_period: 30d
table_manager:
  retention_deletes_enabled: true
  retention_period: 30d
frontend:
  address: 0.0.0.0
I have already modified the old configs by changing lines like these:
- source_labels:
- __syslog_message_sd_journald_HOST
# into
- source_labels:
- __syslog_message_HOST
And
template-function "login" "${.SDATA.journald.LOGIN:-}";
# into
template-function "login" "${.SDATA.LOGIN:-}";
Well, I guess I did something wrong, since it doesn't work as it did before :) I test it by querying /loki/api/v1/query?query={job="syslog"}.
I would appreciate it if someone could decode this config for me, explain what I am missing, and tell me what I should check.

GitLab CI/CD: use functions inside .gitlab-ci.yml

I have a .gitlab-ci.yml file that needs to execute the same command in every step. I have the following, and it works:
image:
  name: hashicorp/terraform
before_script:
  - export MYDATE=$(date "+%d/%m/%y - %H:%M:%S")
stages:
  - validate
  - plan
validate:
  stage: validate
  script:
    - terraform validate
    - 'curl --request POST --header "Authorization: Bearer $bearer" --form "text=$MYDATE $msg" https://api.teams.com/v1/messages'
  variables:
    msg: "Example1"
plan:
  stage: plan
  script:
    - terraform validate
    - 'curl --request POST --header "Authorization: Bearer $bearer" --form "text=$MYDATE $msg" https://api.teams.com/v1/messages'
  variables:
    msg: "Example2"
Given it is always the same curl command, I wanted to use a function which I declare once and can then use in every step, something along the lines of the snippet below.
image:
  name: hashicorp/terraform
before_script:
  - export MYDATE=$(date "+%d/%m/%y - %H:%M:%S")
.send_message: &send_message
  script:
    - 'curl --request POST --header "Authorization: Bearer $bearer" --form "text=$MYDATE $msg" https://api.teams.com/v1/messages'
stages:
  - validate
  - plan
validate:
  stage: validate
  script:
    - terraform validate
    - &send_message
  variables:
    msg: "Example1"
plan:
  stage: plan
  script:
    - terraform validate
    - &send_message
  variables:
    msg: "Example2"
How could I use such a function in a .gitlab-ci.yml file?
You can use include together with !reference, like this:
functions.yml
.send_message:
  script:
    - 'curl --request POST --header "Authorization: Bearer $bearer" --form "text=$MYDATE $msg" https://api.teams.com/v1/messages'
.gitlab-ci.yml
include:
  - local: functions.yml
default:
  image:
    name: hashicorp/terraform
  before_script:
    - export MYDATE=$(date "+%d/%m/%y - %H:%M:%S")
stages:
  - validate
  - plan
validate:
  stage: validate
  script:
    - terraform validate
    - !reference [.send_message, script]
  variables:
    msg: "Example1"
plan:
  stage: plan
  script:
    - terraform validate
    - !reference [.send_message, script]
  variables:
    msg: "Example2"
ref: https://docs.gitlab.com/ee/ci/yaml/yaml_optimization.html#reference-tags
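If you'd rather keep everything in one file, !reference also works with a hidden job defined in the same .gitlab-ci.yml, so a sketch of a single-file variant (same job names as above; only the validate job shown) would be:
.send_message:
  script:
    - 'curl --request POST --header "Authorization: Bearer $bearer" --form "text=$MYDATE $msg" https://api.teams.com/v1/messages'
validate:
  stage: validate
  script:
    - terraform validate
    - !reference [.send_message, script]
  variables:
    msg: "Example1"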
You can also use a regular old bash function defined at the top:
before_script:
  - export MYDATE=$(date "+%d/%m/%y - %H:%M:%S")
  - send_bearer () { terraform validate; curl --request POST --header "Authorization: Bearer $bearer" --form "text=$MYDATE $1" https://api.teams.com/v1/messages; }
...
validate:
  stage: validate
  script:
    - send_bearer $msg
  variables:
    msg: "Example1"
plan:
  stage: plan
  script:
    - send_bearer $msg
  variables:
    msg: "Example2"

Can we match multiple conditions in Terraform before creating a resource?

I am trying to convert an AWS CloudFormation template to Terraform, but the problem I am facing is that CloudFormation has something called conditions, where we can specify multiple conditions that must match before a resource is created, and I am struggling to replicate the same in Terraform.
Example CloudFormation code:
Conditions:
  NACLDefaultPublicAllowed: !Equals [ !Ref NACLOpenByDefault, "true"]
  NACLDefaultPrivateOnly: !Equals [ !Ref NACLOpenByDefault, "false"]
  InboundSSHIsAllowed: !Equals [ !Ref AllowInboundSSH, "true"]
  InboundRDPIsAllowed: !Equals [ !Ref AllowInboundRDP, "true"]
  InboundVPNIsAllowed: !Equals [ !Ref AllowInboundVPN, "true"]
  OutboundHTTPIsAllowed: !Equals [ !Ref AllowOutboundHTTP, "true"]
  OutboundHTTPSIsAllowed: !Equals [ !Ref AllowOutboundHTTPS, "true"]
  HasRemoteHomeNetwork: !Not [ !Equals [ !Ref RemoteHomeNetworkCIDR, ""]]
  HasRemoteRepositories: !Not [ !Equals [ !Ref RemoteRepositoriesCIDR, ""]]
  AddMGMTInboundSSHRules: !And
    - !Condition HasRemoteHomeNetwork
    - !Condition NACLDefaultPrivateOnly
    - !Condition InboundSSHIsAllowed
  AddMGMTInboundRDPRules: !And
    - !Condition HasRemoteHomeNetwork
    - !Condition NACLDefaultPrivateOnly
    - !Condition InboundRDPIsAllowed
  AddMGMTInboundVPNRules: !And
    - !Condition HasRemoteHomeNetwork
    - !Condition NACLDefaultPrivateOnly
    - !Condition InboundVPNIsAllowed
  AddMGMTOutboundEphemeralRemoteHomeNetworkRules: !Or
    - !Condition AddMGMTInboundSSHRules
    - !Condition AddMGMTInboundVPNRules
  AddOutboundHTTPAnywhereRules: !And
    - !Condition OutboundHTTPIsAllowed
    - !Condition NACLDefaultPrivateOnly
  AddOutboundHTTPSAnywhereRules: !And
    - !Condition OutboundHTTPSIsAllowed
    - !Condition NACLDefaultPrivateOnly
  AddInboundEphemeralAnywhereRules: !Or
    - !Condition AddOutboundHTTPAnywhereRules
    - !Condition AddOutboundHTTPSAnywhereRules
  AddRemoteRepositoriesCIDR: !And
    - !Condition HasRemoteRepositories
    - !Condition NACLDefaultPrivateOnly
Now, when I create a resource (in CloudFormation), I can directly use:
rNACLEntryAllowOutboundHTTPfromPUBLtoRemoteRepositories:
  Type: "AWS::EC2::NetworkAclEntry"
  Condition: AddRemoteRepositoriesCIDR
  Properties:
    xxxx
rNACLEntryAllowOutboundHTTPSfromPUBLtoRemoteRepositories:
  Type: "AWS::EC2::NetworkAclEntry"
  Condition: HasRemoteHomeNetwork
  Properties:
    xxxx
and so on
How can I get the same result in Terraform?
In Terraform we represent this sort of condition as conditionally choosing between zero or one instances of a resource. If you want to factor out the conditions and give them names like you did in CloudFormation then you can assign the conditions to named local values like this:
variable "allow_inbound_ssh" {
type = bool
}
variable "nacl_open_by_default" {
type = bool
}
variable "remote_home_network_cidr" {
type = string
default = null
}
locals {
inbound_ssh_is_allowed = var.allow_inbound_ssh
nacl_default_private_only = !var.nacl_open_by_default
has_remote_home_network = var.remote_home_network_cidr != null
add_management_inbound_ssh_rules = (
local.has_remote_home_network &&
local.nacl_default_private_only &&
local.inbound_ssh_is_allowed
)
}
You can then use these local values as part of the conditional count expression in each resource, like this:
# (I'm assuming that aws_network_acl_rule is the Terraform
# equivalent of CloudFormation's AWS::EC2::NetworkAclEntry,
# but I'm not sure.)
resource "aws_network_acl_rule" "example" {
  count = local.add_management_inbound_ssh_rules ? 1 : 0

  # ...
}
With that special count argument in place, aws_network_acl_rule will either be a single-element list or a zero-element list depending on the final value of local.add_management_inbound_ssh_rules.

Hyperledger Fabric Node SDK: orderer client fails with "Failed to connect before the deadline"

I'm using the Fabric sample project 'basic network' as my environment to develop chaincode and a Node.js client app (REST API) based on the Fabric Node client SDK. The Node app resides on the same host as the Fabric peer.
While all the Docker containers (ca, orderer, peer, couchdb, client) were on one host, I succeeded in creating and joining the channel and installing and instantiating the chaincode, and with the Node.js client both the query and invoke functions performed successfully. The connection.json file was copied from the basic network sample.
When I moved the orderer container to another host, I modified the docker-compose YAML file and connection.json. The operations run from the client container still all succeed, and the Node.js client app's query operation still works, but invoke (insert and modify) fails. The log is:
2019-03-23T03:32:38.769Z - debug: [Remote.js]: getUrl::grpc://192.168.122.6:7050
2019-03-23T03:32:38.769Z - error: [Remote.js]: Error: Failed to connect before the deadline URL:grpc://192.168.122.6:7050
2019-03-23T03:32:38.770Z - error: [Remote.js]: Error: Failed to connect before the deadline URL:grpc://192.168.122.6:7050
2019-03-23T03:32:38.772Z - debug: [Remote.js]: getUrl::grpc://192.168.122.6:7050
2019-03-23T03:32:38.772Z - error: [Orderer.js]: Orderer grpc://192.168.122.6:7050 has an error Error: Failed to connect before the deadline URL:grpc://192.168.122.6:7050
2019-03-23T03:32:38.772Z - error: [Orderer.js]: Orderer grpc://192.168.122.6:7050 has an error Error: Failed to connect before the deadline URL:grpc://192.168.122.6:7050
Here, '192.168.122.6' is the host the orderer container resides on. Below is the connection.json file used by the Node.js app; I've turned off TLS between the orderer and the peer:
{
  "name": "basic-network",
  "version": "1.0.0",
  "client": {
    "organization": "Org1",
    "connection": {
      "timeout": {
        "peer": {
          "endorser": "300"
        },
        "orderer": "300"
      }
    }
  },
  "channels": {
    "mychannel": {
      "orderers": [
        "orderer.example.com"
      ],
      "peers": {
        "peer0.org1.example.com": {}
      }
    }
  },
  "organizations": {
    "Org1": {
      "mspid": "Org1MSP",
      "peers": [
        "peer0.org1.example.com"
      ],
      "certificateAuthorities": [
        "ca.example.com"
      ]
    }
  },
  "orderers": {
    "orderer.example.com": {
      "url": "grpc://192.168.122.6:7050"
    }
  },
  "peers": {
    "peer0.org1.example.com": {
      "url": "grpc://127.0.0.1:7051"
    }
  },
  "certificateAuthorities": {
    "ca.example.com": {
      "url": "http://127.0.0.1:7054",
      "caName": "ca.example.com"
    }
  }
}
I guess there is something wrong in connection.json, but I don't know what it is.
Below is the orderer and peer content from docker-compose.yaml:
orderer.example.com:
  container_name: orderer.example.com
  image: hyperledger/fabric-orderer
  environment:
    - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=net_basic
    - ORDERER_GENERAL_LOGLEVEL=info
    - FABRIC_LOGGING_SPEC=info
    - ORDERER_GENERAL_LISTENADDRESS=0.0.0.0
    - ORDERER_GENERAL_GENESISMETHOD=file
    - ORDERER_GENERAL_GENESISFILE=/etc/hyperledger/configtx/genesis.block
    - ORDERER_GENERAL_LOCALMSPID=OrdererMSP
    - ORDERER_GENERAL_LOCALMSPDIR=/etc/hyperledger/msp/orderer/msp
    - ORDERER_GENERAL_TLS_ENABLED=false
  working_dir: /opt/gopath/src/github.com/hyperledger/fabric/orderer
  command: orderer
  ports:
    - 7050:7050
  volumes:
    - ./config/:/etc/hyperledger/configtx
    - ./crypto-config/ordererOrganizations/example.com/orderers/orderer.example.com/:/etc/hyperledger/msp/orderer
    - ./crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/:/etc/hyperledger/msp/peerOrg1
  networks:
    - basic

peer0.org1.example.com:
  container_name: peer0.org1.example.com
  image: hyperledger/fabric-peer
  environment:
    - CORE_LEDGER_STATE_STATEDATABASE=CouchDB
    - CORE_LEDGER_STATE_COUCHDBCONFIG_COUCHDBADDRESS=couchdb:5984
    - CORE_PEER_NETWORKID=basic
    - CORE_PEER_ID=peer0.org1.example.com
    - CORE_PEER_ADDRESS=peer0.org1.example.com:7051
    - CORE_PEER_CHAINCODEADDRESS=peer0.org1.example.com:7052
    - CORE_PEER_CHAINCODELISTENADDRESS=0.0.0.0:7052
    - CORE_PEER_GOSSIP_BOOTSTRAP=peer0.org1.zte.com:7051
    - CORE_PEER_GOSSIP_EXTERNALENDPOINT=peer0.org1.zte.com:7051
    - CORE_PEER_LOCALMSPID=Org1MSP
    - CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock
    # the following setting starts chaincode containers on the same
    # bridge network as the peers
    # https://docs.docker.com/compose/networking/
    - CORE_VM_DOCKER_HOSTCONFIG_NETWORKMODE=net_basic
    - FABRIC_LOGGING_SPEC=debug
    - CORE_CHAINCODE_LOGGING_LEVEL=debug
    - CORE_PEER_MSPCONFIGPATH=/etc/hyperledger/msp/peer/
    # The CORE_LEDGER_STATE_COUCHDBCONFIG_USERNAME and CORE_LEDGER_STATE_COUCHDBCONFIG_PASSWORD
    # provide the credentials for ledger to connect to CouchDB. The username and password must
    # match the username and password set for the associated CouchDB.
    - CORE_LEDGER_STATE_COUCHDBCONFIG_USERNAME=
    - CORE_LEDGER_STATE_COUCHDBCONFIG_PASSWORD=
    - CORE_PEER_TLS_ENABLED=false
    - CORE_PEER_GOSSIP_SKIPHANDSHAKE=true
    - CORE_PEER_GOSSIP_USELEADERELECTION=true
    - CORE_PEER_GOSSIP_ORGLEADER=false
    - CORE_PEER_PROFILE_ENABLED=false
  working_dir: /opt/gopath/src/github.com/hyperledger/fabric
  command: peer node start
  # command: peer node start --peer-chaincodedev=true
  ports:
    - 7051:7051
    - 7052:7052
    - 7053:7053
  volumes:
    - /var/run/:/host/var/run/
    - ./crypto-config/peerOrganizations/org1.example.com/peers/peer0.org1.example.com/msp:/etc/hyperledger/msp/peer
    - ./crypto-config/peerOrganizations/org1.example.com/users:/etc/hyperledger/msp/users
    - ./config:/etc/hyperledger/configtx
  depends_on:
    # - orderer.example.com
    - couchdb
  networks:
    - basic
  extra_hosts:
    - "orderer.example.com:192.168.122.6"
    - "peer0.org1.example.com:127.0.0.1"

PuPHPet fails to install packages

I'm new to PuPHPet and I'm trying to deploy a machine with the following config file:
vagrantfile:
  target: local
  vm:
    provider:
      local:
        box: puphpet/ubuntu1404-x64
        box_url: puphpet/ubuntu1404-x64
        box_version: '0'
        chosen_virtualizer: virtualbox
        virtualizers:
          virtualbox:
            modifyvm:
              natdnshostresolver1: false
            showgui: 0
    machines:
      vflm_iuacklmx1q1l:
        id: roes
        hostname: roes.puphpet
        network:
          private_network: 192.168.50.101
          forwarded_port:
            vflmnfp_f8witajbpcdk:
              host: '80'
              guest: '80'
        memory: '3072'
        cpus: '3'
    provision:
      puppet:
        manifests_path: puphpet/puppet/manifests
        module_path: puphpet/puppet/modules
        options:
          - '--verbose'
          - '--hiera_config /vagrant/puphpet/puppet/hiera.yaml'
    synced_folder:
      vflsf_c0lu72vfwayv:
        source: 'C:\Users\Jelle\symfony\VrijwilligersTool'
        target: /var/www/VrijwilligersTool
        sync_type: default
        smb:
          smb_host: ''
          smb_username: ''
          smb_password: ''
          mount_options:
            dir_mode: '0775'
            file_mode: '0664'
        rsync:
          args:
            - '--verbose'
            - '--archive'
            - '-z'
          exclude:
            - .vagrant/
            - .git/
          auto: 'true'
        owner: www-data
        group: www-data
    usable_port_range:
      start: 10200
      stop: 10500
    post_up_message: ''
  ssh:
    host: 'false'
    port: 'false'
    private_key_path: 'false'
    username: vagrant
    guest_port: 'false'
    keep_alive: '1'
    forward_agent: 'false'
    forward_x11: 'false'
    shell: 'bash -l'
    insert_key: 'false'
  vagrant:
    host: detect
proxy:
  http: ''
  https: ''
  ftp: ''
  no_proxy: ''
server:
  install: '1'
  packages: { }
users_groups:
  install: '1'
  groups: { }
  users: { }
locale:
  install: '1'
  settings:
    default_locale: nl_BE.UTF-8
    locales:
      - nl_BE.UTF-8
    timezone: Europe/Brussels
firewall:
  install: '1'
  rules: { }
cron:
  install: '1'
  jobs: { }
python:
  install: '1'
  packages: { }
  versions: { }
hhvm:
  install: '1'
  nightly: 0
  composer: '1'
  composer_home: ''
  settings: { }
  server_ini:
    hhvm.server.host: 127.0.0.1
    hhvm.server.port: '9000'
    hhvm.log.use_log_file: '1'
    hhvm.log.file: /var/log/hhvm/error.log
  php_ini:
    display_errors: 'On'
    error_reporting: '-1'
    date.timezone: UTC
mysql:
  install: '1'
  settings:
    version: '5.7'
    root_password: secret
    override_options: { }
  adminer: 0
  users:
    mysqlnu_rw5biplek6za:
      name: homestead
      password: secret
  databases:
    mysqlnd_osndwm6tnrw5:
      name: homestead
      sql: ''
  grants:
    mysqlng_j4jxg3loo0o1:
      user: homestead
      table: '*.*'
      privileges:
        - ALL
elastic_search:
  install: '0'
  settings:
    version: 2.3.1
    java_install: true
  instances:
    esi_cygoi2r38jmu:
      name: es-01
When I execute vagrant up I get a sea of errors.
Normally I'd go through them all, but this is a very niche thing to google and I'm totally new to this. Can someone help me out and show me my mistakes?
log file: http://pastebin.com/YK65A7zR
