I'm trying to get a basic Suave application running in IIS (IIS 10.0) using HttpPlatformHandler (version 1.2).
When I have it return a single WebPart such as
(OK "Hello World")
The application runs in IIS fine and I can make requests to it by name at http://localhost/testapp (testapp is the name of the application under the Default Web Site).
However, if I use something more complex for the WebPart such as
let app =
  choose
    [ GET >=> choose
        [ path "/hello" >=> OK "Hello GET"
          path "/goodbye" >=> OK "Good bye GET" ]
      POST >=> choose
        [ path "/hello" >=> OK "Hello POST"
          path "/goodbye" >=> OK "Good bye POST" ] ]
The website starts up, but I cannot reach it by application name; I am still able to reach it directly by port, however.
When I hit the application by name I receive an HTTP 502.3 (Bad Gateway) response.
The application is started from a FAKE script executed by the HttpPlatformHandler. For context, this is that script:
#r "./tools/FakeLib.dll"
#r "Suave.dll"
open System
open Suave
open Suave.Successful
open Fake
open System.Net
open Suave.Filters
open Suave.Sockets
open Suave.Operators
open System.IO
Environment.CurrentDirectory <- __SOURCE_DIRECTORY__
let port = Sockets.Port.Parse <| getBuildParamOrDefault "port" "8083"
let serverConfig =
{ defaultConfig with
logger = Logging.Loggers.saneDefaultsFor Logging.LogLevel.Verbose
bindings = [ HttpBinding.mk HTTP IPAddress.Loopback port ]
}
let app =
choose
[ GET >=> choose
[ path "/hello" >=> OK "Hello GET"
path "/goodbye" >=> OK "Good bye GET" ]
POST >=> choose
[ path "/hello" >=> OK "Hello POST"
path "/goodbye" >=> OK "Good bye POST" ] ]
startWebServer serverConfig (OK "Hello")
//startWebServer serverConfig app
The above script works as expected. However, if I use the app WebPart instead of (OK "Hello"), I encounter the issue described above.
For completeness here is the web.config set up for the HttpPlatformHandler:
<?xml version="1.0" encoding="UTF-8"?>
<configuration>
  <system.webServer>
    <handlers>
      <remove name="httpplatformhandler" />
      <add name="httpplatformhandler" path="*" verb="*" modules="httpPlatformHandler" resourceType="Unspecified" />
    </handlers>
    <httpPlatform
        stdoutLogEnabled="true" startupTimeLimit="20"
        processPath=".\tools\FAKE.exe"
        arguments=".\test.fsx port=%HTTP_PLATFORM_PORT%">
      <environmentVariables>
      </environmentVariables>
    </httpPlatform>
  </system.webServer>
</configuration>
I've reviewed the logs, but unfortunately I can't see anything that indicates an error.
I've checked the Event Viewer, and the only clue that something might be wrong is this information event in the application log:
The description for Event ID 1001 from source HttpPlatformHandler cannot be found. Either the component that raises this event is not installed on your local computer or the installation is corrupted. You can install or repair the component on the local computer.
Here is a portion of the log from the case where the app does run as expected (without choose):
[V] 2016-01-19T02:43:40.6932823Z: initialising BufferManager with 827392 bytes [Suave.Socket.BufferManager]
[I] 2016-01-19T02:43:40.7114149Z: listener started in 20.885 ms with binding 127.0.0.1:18450 [Suave.Tcp.tcpIpServer]
[V] 2016-01-19T02:43:40.8146603Z: 127.0.0.1 connected, total: 1 clients [Suave.Tcp.job]
[V] 2016-01-19T02:43:40.8166665Z: reserving buffer: 811008, free count: 99 [Suave.Tcp.job] [Suave.Socket.BufferManager]
[V] 2016-01-19T02:43:40.8217965Z: -> processor [Suave.Web.httpLoop.loop]
[V] 2016-01-19T02:43:40.8228181Z: reading first line of request [Suave.Web.processRequest]
[V] 2016-01-19T02:43:40.8378128Z: reserving buffer: 802816, free count: 98 [Suave.Web.readMoreData] [Suave.Socket.BufferManager]
[V] 2016-01-19T02:43:40.8498033Z: reading headers [Suave.Web.processRequest]
[V] 2016-01-19T02:43:40.8776578Z: freeing buffer: 802816, free count: 99 [Suave.Web.split] [Suave.Socket.BufferManager]
[V] 2016-01-19T02:43:40.8866594Z: parsing post data [Suave.Web.processRequest]
[V] 2016-01-19T02:43:40.8886553Z: <- processor [Suave.Web.httpLoop.loop]
[V] 2016-01-19T02:43:40.9057610Z: 'Connection: keep-alive' recurse [Suave.Web.httpLoop.loop]
[V] 2016-01-19T02:43:40.9057610Z: -> processor [Suave.Web.httpLoop.loop]
[V] 2016-01-19T02:43:40.9057610Z: reading first line of request [Suave.Web.processRequest]
[V] 2016-01-19T02:43:40.9057610Z: reserving buffer: 802816, free count: 98 [Suave.Web.readMoreData] [Suave.Socket.BufferManager]
[V] 2016-01-19T02:43:45.2531307Z: reading headers [Suave.Web.processRequest]
[V] 2016-01-19T02:43:45.2541141Z: freeing buffer: 802816, free count: 99 [Suave.Web.split] [Suave.Socket.BufferManager]
[V] 2016-01-19T02:43:45.2541141Z: parsing post data [Suave.Web.processRequest]
[V] 2016-01-19T02:43:45.2541141Z: <- processor [Suave.Web.httpLoop.loop]
[V] 2016-01-19T02:43:45.2551164Z: 'Connection: keep-alive' recurse [Suave.Web.httpLoop.loop]
And here is a portion of the log where the app does not work as expected (with routing via choose):
[V] 2016-01-19T02:44:59.6356127Z: initialising BufferManager with 827392 bytes [Suave.Socket.BufferManager]
[I] 2016-01-19T02:44:59.6537478Z: listener started in 20.987 ms with binding 127.0.0.1:18708 [Suave.Tcp.tcpIpServer]
[V] 2016-01-19T02:44:59.8848907Z: 127.0.0.1 connected, total: 1 clients [Suave.Tcp.job]
[V] 2016-01-19T02:44:59.8879891Z: reserving buffer: 811008, free count: 99 [Suave.Tcp.job] [Suave.Socket.BufferManager]
[V] 2016-01-19T02:44:59.8929862Z: -> processor [Suave.Web.httpLoop.loop]
[V] 2016-01-19T02:44:59.8939749Z: reading first line of request [Suave.Web.processRequest]
[V] 2016-01-19T02:44:59.9068548Z: reserving buffer: 802816, free count: 98 [Suave.Web.readMoreData] [Suave.Socket.BufferManager]
[V] 2016-01-19T02:44:59.9209857Z: reading headers [Suave.Web.processRequest]
[V] 2016-01-19T02:44:59.9259688Z: freeing buffer: 802816, free count: 99 [Suave.Web.split] [Suave.Socket.BufferManager]
[V] 2016-01-19T02:44:59.9338521Z: parsing post data [Suave.Web.processRequest]
[V] 2016-01-19T02:44:59.9378580Z: <- processor [Suave.Web.httpLoop.loop]
[V] 2016-01-19T02:44:59.9518518Z: freeing buffer: 811008, free count: 100 [Suave.Tcp.job] [Suave.Socket.BufferManager]
[V] 2016-01-19T02:44:59.9518518Z: Shutting down transport. [Suave.Tcp.job]
[V] 2016-01-19T02:44:59.9528516Z: 127.0.0.1 disconnected, total: 0 clients [Suave.Tcp.job]
When the app executes, the connection opens and then closes immediately. When I hit the app by port, a new connection opens and then closes immediately again.
Am I doing something wrong with the host configuration for the app, or am I missing something in how I'm using the choose function? Any help would be appreciated. Thank you!
I think the routes need to include the IIS application path, so they should look like:
path "/testapp/hello"
I am struggling with how to capture systemd-journald properties in rsyslog files.
My setup
ubuntu inside docker on arm (raspberrypi): FROM arm64v8/ubuntu:20.04
docker command (all subsequent actions taken inside running docker container)
$ docker run --privileged -ti --cap-add SYS_ADMIN --security-opt seccomp=unconfined --cgroup-parent=docker.slice --cgroupns private --tmpfs /tmp --tmpfs /run --tmpfs /run/lock systemd:origin
rsyslog, as reported by $ systemctl status rsyslog:
● rsyslog.service - System Logging Service
Loaded: loaded (/lib/systemd/system/rsyslog.service; enabled; vendor prese>
Active: active (running)
...
[origin software="rsyslogd" swVersion="8.2001.0" x-pid="39758" x-info="https://www.rsyslog.com"] start
...
My plan
I have a small C program that puts some information into the journal:
#include <systemd/sd-journal.h>
#include <stdio.h>
#include <unistd.h>

int main(int argc, char** argv) {
    char buffer[50];
    sprintf(buffer, "%lu", (unsigned long)getpid());
    printf("writing to journal\n");
    sd_journal_print(LOG_WARNING, "%s", "a little journal test message");
    sd_journal_send("MESSAGE=%s", "there shoud be a text", "SYSLOG_PID=%s", buffer, "PRIORITY=%i", LOG_ERR, "DOCUMENTATION=%s", "any doc link", "MESSAGE_ID=%s", "e5e4132e441541f89bca0cc3e7be3381", "MEAS_VAL=%d", 1394, NULL);
    return 0;
}
Compile it: $ gcc joutest.c -lsystemd -o jt
Execute it: $ ./jt
The resulting journal entry, shown via $ journalctl -r -o json-pretty:
{
        "_GID" : "0",
        "MESSAGE" : "there shoud be a text",
        "_HOSTNAME" : "f1aad951c039",
        "SYSLOG_IDENTIFIER" : "jt",
        "_TRANSPORT" : "journal",
        "CODE_FILE" : "joutest.c",
        "DOCUMENTATION" : "any doc link",
        "_BOOT_ID" : "06a36b314cee462591c65a2703c8b2ad",
        "CODE_LINE" : "14",
        "MESSAGE_ID" : "e5e4132e441541f89bca0cc3e7be3381",
        "_CAP_EFFECTIVE" : "3fffffffff",
        "__REALTIME_TIMESTAMP" : "1669373862349599",
        "_SYSTEMD_UNIT" : "init.scope",
        "CODE_FUNC" : "main",
        "_MACHINE_ID" : "5aba31746bf244bba6081297fe061445",
        "SYSLOG_PID" : "39740",
        "PRIORITY" : "3",
        "_COMM" : "jt",
        "_SYSTEMD_SLICE" : "-.slice",
        "MEAS_VAL" : "1394",
        "__MONOTONIC_TIMESTAMP" : "390853282189",
        "_PID" : "39740",
        "_SOURCE_REALTIME_TIMESTAMP" : "1669373862336503",
        "_UID" : "0",
        "_SYSTEMD_CGROUP" : "/init.scope",
        "__CURSOR" : "s=63a46a30bbbb4b8c9288a9b12c622b37;i=6cb;b=06a36b314cee46>
}
Now, as a test, I want to extract all properties from that journal entry via rsyslog. In rsyslog jargon, a property is essentially the name of a key in the formatted JSON entry. If a property (key name) matches, the whole dictionary item (key and value) should be captured.
To start with this, I've configured rsyslog as:
module(load="imjournal")
module(load="mmjsonparse")
action(type="mmjsonparse")
if $programname == 'jt' and $syslogseverity == 3 then
    action(type="omfile" file="/var/log/jt_err.log" template="RSYSLOG_DebugFormat")
This config is located in /etc/rsyslog.d/filter.conf and gets automatically included by /etc/rsyslog.conf:
# /etc/rsyslog.conf configuration file for rsyslog
#
# For more information install rsyslog-doc and see
# /usr/share/doc/rsyslog-doc/html/configuration/index.html
#
# Default logging rules can be found in /etc/rsyslog.d/50-default.conf
#################
#### MODULES ####
#################
#module(load="imuxsock") # provides support for local system logging
#module(load="immark") # provides --MARK-- message capability
# provides UDP syslog reception
#module(load="imudp")
#input(type="imudp" port="514")
# provides TCP syslog reception
#module(load="imtcp")
#input(type="imtcp" port="514")
# provides kernel logging support and enable non-kernel klog messages
module(load="imklog" permitnonkernelfacility="on")
###########################
#### GLOBAL DIRECTIVES ####
###########################
#
# Use traditional timestamp format.
# To enable high precision timestamps, comment out the following line.
#
$ActionFileDefaultTemplate RSYSLOG_TraditionalFileFormat
# Filter duplicated messages
$RepeatedMsgReduction on
#
# Set the default permissions for all log files.
#
$FileOwner syslog
$FileGroup adm
$FileCreateMode 0640
$DirCreateMode 0755
$Umask 0022
$PrivDropToUser syslog
$PrivDropToGroup syslog
#
# Where to place spool and state files
#
$WorkDirectory /var/spool/rsyslog
#
# Include all config files in /etc/rsyslog.d/
#
$IncludeConfig /etc/rsyslog.d/*.conf
Applied this config: $ systemctl restart rsyslog
Which results in the following: $ cat /var/log/jt_err.log
Debug line with all properties:
FROMHOST: 'f1aad951c039', fromhost-ip: '127.0.0.1', HOSTNAME:
'f1aad951c039', PRI: 11,
syslogtag 'jt[39765]:', programname: 'jt', APP-NAME: 'jt', PROCID:
'39765', MSGID: '-',
TIMESTAMP: 'Nov 25 11:47:50', STRUCTURED-DATA: '-',
msg: ' there shoud be a text'
escaped msg: ' there shoud be a text'
inputname: imuxsock rawmsg: '<11>Nov 25 11:47:50 jt[39765]: there
shoud be a text'
$!:{ "msg": "there shoud be a text" }
$.:
$/:
My problem
Looking at the resulting rsyslog output, I am missing the majority, if not all, of the items originating from the journal entry.
Really, no property (key) is matched. Shouldn't all properties be matched, since this is a debug output?
Specifically, I am concentrating on my custom property, MEAS_VAL; it is not there.
The only property which occurs is "msg", and even that is a questionable match from the journal, since the property name originally attached to the content "there shoud be a text" is MESSAGE.
So it feels like I am not hitting the journal capturing mechanism at all. Why?
Can we be sure that imjournal gets loaded properly?
I would say yes because of systemd's startup messages:
Nov 28 16:27:38 f1aad951c039 rsyslogd[144703]: imjournal: Journal indicates no msgs when positioned at head. [v8.2212.0.master try https://www.rsyslog.com/e/0 ]
Nov 28 16:27:38 f1aad951c039 rsyslogd[144703]: imjournal: journal files changed, reloading... [v8.2212.0.master try https://www.rsyslog.com/e/0 ]
Nov 28 16:27:38 f1aad951c039 rsyslogd[144703]: imjournal: Journal indicates no msgs when positioned at head. [v8.2212.0.master try https://www.rsyslog.com/e/0 ]
Edit 2022-11-29
Meanwhile I've compiled my own version 8.2212.0.master. But the phenomenon persists.
You're missing most items originating from the journal because neither of the templates RSYSLOG_DebugFormat and RSYSLOG_TraditionalFileFormat contains the needed properties (see Reserved template names). RSYSLOG_DebugFormat does, however, include at least some fields, e.g. procid, msgid and structured-data, which can be seen in the output you've provided.
This means that if you want to include all the fields, you'll have to create your own template.
The journal fields are stored in key-value pairs. The imjournal module is able to parse these key-value pairs and generate the jsonf property,
which then can be used to access fields of the log message as if they were fields in a JSON object.
# load imjournal module
module(load="imjournal")

# specify journal as input source
input(type="imjournal")

template(name="journalTemplate" type="list") {
    property(name="timestamp" dateFormat="rfc3339")
    constant(value=" ")
    property(name="hostname")
    constant(value=" ")
    property(name="syslogtag")
    constant(value=": {")
    property(name="jsonf")
    constant(value="}")
}

if $programname == 'jt' and $syslogseverity == 3 then {
    action(type="omfile" file="/var/log/jt_err.log" template="journalTemplate")
    stop
}
The output of the provided log would then look something like the following:
YYYY-MM-DDTHH:mm:ss myHostname syslogtag: {"_GID" : "0", "MESSAGE" : "there shoud be a text", ... }
As seen in the line above, the output of the provided properties will be in JSON. By using the JSON property parser this can be prevented, as the output can be tailored as desired. If this approach is used, however, each property must be defined explicitly.
template(name="journalTemplate" type="list") {
    property(name="timestamp" dateFormat="rfc3339")
    constant(value=" ")
    property(name="hostname")
    constant(value=" ")
    property(name="syslogtag")
    constant(value=": _GID=")
    property(name="$._GID" format="json")
    constant(value=" MESSAGE=")
    property(name="$.MESSAGE" format="json")
    constant(value=" _HOSTNAME=")
    property(name="$._HOSTNAME" format="json")
    ...
}
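With that second template, the journal entry from above would come out roughly like this (illustrative, substituting the values from the entry shown earlier; note that format="json" emits the values JSON-escaped but without surrounding quotes):
YYYY-MM-DDTHH:mm:ss f1aad951c039 jt[39740]: _GID=0 MESSAGE=there shoud be a text _HOSTNAME=f1aad951c039 ...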
I am using sparklyr in a batch setup where multiple concurrent jobs with different parameters arrive and are processed by the same sparklyr codebase. In "certain" random situations the code gives errors (as below); I suspect this happens under high load.
I am seeking guidance on the best way to troubleshoot it (including understanding the architecture of the different components in the call chain). So, besides pointing out any corrections to the code used to establish the connection, any pointers for further study will be appreciated.
Thanks.
Stack versions:
Spark version 2.3.2.3.1.5.6091-7
Using Scala version 2.11.12 (OpenJDK 64-Bit Server VM, Java 1.8.0_322)
SparklyR version: sparklyr-2.1-2.11.jar
Yarn Cluster: hdp-3.1.5
Error:
2022-03-03 23:00:09 | Connecting to SPARK ...
2022-03-03 23:02:19 | Couldn't connect to SPARK (Error). Error in force(code): Failed while connecting to sparklyr to port (10980) for sessionid (38361): Sparklyr gateway did not respond while retrieving ports information after 120 seconds
Path: /usr/hdp/3.1.5.6091-7/spark2/bin/spark-submit
Parameters: --driver-memory, 3G, --executor-memory, 3G, --keytab, /etc/security/keytabs/appuser.headless.keytab, --principal, appuser@myorg.com, --class, sparklyr.Shell, '/usr/lib64/R/library/sparklyr/java/sparklyr-2.1-2.11.jar', 10980, 38361
Log: /tmp/RtmpyhpKkv/file187b82097bb55_spark.log
---- Output Log ----
22/03/03 23:00:17 INFO sparklyr: Session (38361) is starting under 127.0.0.1 port 10980
22/03/03 23:00:17 INFO sparklyr: Session (38361) found port 10980 is available
22/03/03 23:00:17 INFO sparklyr: Gateway (38361) is waiting for sparklyr client to connect to port 10980
22/03/03 23:01:17 INFO sparklyr: Gateway (38361) is terminating backend since no client has connected after 60 seconds to 192.168.1.55/10980.
22/03/03 23:01:17 INFO ShutdownHookManager: Shutdown hook called
22/03/03 23:01:17 INFO ShutdownHookManager: Deleting directory /tmp/spark-4fec5364-e440-41a8-87c4-b5e94472bb2f
---- Error Log ----
Connection code:
conf <- spark_config()
conf$spark.executor.memory <- "10G"
conf$spark.executor.cores <- 6
conf$spark.executor.instances <- 6
conf$spark.driver.memory <- "10g"
conf$spark.driver.memoryOverhead <- "3g"
conf$spark.shuffle.service.enabled <- "true"
conf$spark.port.maxRetries <- 125
conf$spark.sql.hive.convertMetastoreOrc <- "true"
conf$spark.local.dir <- '/var/log/myapp/sparkjobs'
conf$'sparklyr.shell.driver-memory' <- "3G"
conf$'sparklyr.shell.executor-memory' <- "3G"
conf$spark.serializer <- "org.apache.spark.serializer.KryoSerializer"
conf$hive.metastore.uris <- configs$DEFAULT$HIVE_METASTORE_URL
conf$spark.sql.session.timeZone <- "UTC"
# fix as per Cloudera suggestion for a future timeout issue
conf$spark.sql.broadcastTimeout <- 1200
conf$sparklyr.shell.keytab <- "/etc/security/keytabs/appuser.headless.keytab"
conf$sparklyr.shell.principal <- "appuser@myorg.com"
conf$spark.yarn.keytab <- "/etc/security/keytabs/appuser.headless.keytab"
conf$spark.yarn.principal <- "appuser@myorg.com"
conf$spark.sql.catalogImplementation <- "hive"
conf$sparklyr.gateway.config.retries <- 10
conf$sparklyr.connect.timeout <- 120
conf$sparklyr.gateway.port.query.attempts <- 10
conf$sparklyr.gateway.port.query.retry.interval.seconds <- 60
conf$sparklyr.gateway.port <- 10090 + round(runif(1, 1, 1000))
tryCatch(
  {
    logging(paste0("Connecting to SPARK ... "))
    # withTimeout() comes from the R.utils package
    withTimeout({
      sc <- spark_connect(master = "yarn-client",
                          spark_home = eval(SPARK_HOME_PATH),
                          version = "2.1.0",
                          app_name = "myjobname",
                          config = conf)
    }, timeout = 540)
    if (!is.null(sc)) {
      return(sc)
    }
  },
  TimeoutException = function(ex) {
    logging(paste0("Couldn't connect to SPARK (Timed out). ", ex))
    stop("Timeout occurred")
  },
  error = function(err) {
    logging(paste0("Couldn't connect to SPARK (Error). ", err))
    stop("Exception occurred")
  }
)
I'm testing a deployment of the Eclipse IoT Cloud2Edge package and have followed the instructions here https://www.eclipse.org/packages/packages/cloud2edge/tour/ to test. After creating the new tenant and device, and configuring the connection between Hono and Ditto, I can send telemetry to the new device via the Hono http adapter as shown here:
curl -i -u my-auth-id-1@my-tenant:my-password -H 'application/json' --data-binary '{
  "topic": "my-tenant/org.acme:my-device-1/things/twin/commands/modify",
  "headers": {},
  "path": "/features/temperature/properties/value",
  "value": 53
}' http://${HTTP_ADAPTER_IP}:${HTTP_ADAPTER_PORT_HTTP}/telemetry
HTTP/1.1 202 Accepted
vary: origin
content-length: 0
and expected to see this property value updated in Ditto. However, the device property value does not update in Ditto, and when I check the Ditto logs I see the following entries:
2022-02-13 20:11:35,265 INFO [] o.e.d.c.s.m.a.AmqpConsumerActor akka://ditto-cluster/system/sharding/connection/3/hono-connection-for-my-tenant/pa/$a/c1/amqpConsumerActor-0-telemetry%2Fmy-tenant-010 - Received message from AMQP 1.0 with externalMessageHeaders: {orig_adapter=hono-http, qos=0, device_id=org.acme:my-device-1, creation-time=1644783095260, message-id=ID:AMQP_NO_PREFIX:GenericSenderLink-12, content-type=application/x-www-form-urlencoded, to=telemetry/my-tenant, orig_address=/telemetry}
2022-02-13 20:11:35,271 INFO [81c41f10-4d59-435b-8ae1-bf5194dcf6bf] o.e.d.c.s.m.InboundDispatchingSink - onMapped mappedHeaders ImmutableDittoHeaders [{ditto-entity-id=thing:my-tenant:org.acme:my-device-1, ditto-inbound-payload-mapper=default, content-type=application/x-www-form-urlencoded, hono-device-id=org.acme:my-device-1, ditto-reply-target=0, ditto-expected-response-types=["response","error"], ditto-origin=hono-connection-for-my-tenant, ditto-auth-context={"type":"pre-authenticated-connection","subjects":["pre-authenticated:hono-connection"]}, correlation-id=81c41f10-4d59-435b-8ae1-bf5194dcf6bf}]
2022-02-13 20:11:35,278 INFO [b3b11410-6df8-4bfc-a940-fafa87d65be2] o.e.d.c.s.m.InboundDispatchingSink - Got exception <connectivity:connection.id.enforcement.failed> when processing external message with mapper <default>: <The configured filters could not be matched against the given target with ID 'org.acme:my-device-1'.>
2022-02-13 20:11:35,278 INFO [b3b11410-6df8-4bfc-a940-fafa87d65be2] o.e.d.c.s.m.InboundDispatchingSink - Resolved mapped headers of ImmutableDittoHeaders [{ditto-inbound-payload-mapper=default, ditto-entity-id=thing:my-tenant:org.acme:my-device-1, response-required=false, content-type=application/x-www-form-urlencoded, hono-device-id=org.acme:my-device-1, ditto-reply-target=0, ditto-expected-response-types=["response","error"], ditto-origin=hono-connection-for-my-tenant, ditto-auth-context={"type":"pre-authenticated-connection","subjects":["pre-authenticated:hono-connection"]}, correlation-id=b3b11410-6df8-4bfc-a940-fafa87d65be2}] : with HeaderMapping Optional[ImmutableHeaderMapping [mapping={hono-device-id={{ header:device_id }}, content-type={{ header:content-type }}}]] : and external headers {orig_adapter=hono-http, qos=0, device_id=org.acme:my-device-1, creation-time=1644783095260, message-id=ID:AMQP_NO_PREFIX:GenericSenderLink-12, content-type=application/x-www-form-urlencoded, to=telemetry/my-tenant, orig_address=/telemetry}
2022-02-13 20:11:35,283 INFO [] o.e.d.c.s.m.a.AmqpConsumerActor akka://ditto-cluster/system/sharding/connection/3/hono-connection-for-my-tenant/pa/$a/c1/amqpConsumerActor-0-telemetry%2Fmy-tenant-010 - Acking <ID:AMQP_NO_PREFIX:GenericSenderLink-12> with original external message headers=<{orig_adapter=hono-http, qos=0, device_id=org.acme:my-device-1, creation-time=1644783095260, message-id=ID:AMQP_NO_PREFIX:GenericSenderLink-12, content-type=application/x-www-form-urlencoded, to=telemetry/my-tenant, orig_address=/telemetry}>, isSuccess=<true>, ackType=<1 accepted>
I think the problem is the "connectivity:connection.id.enforcement.failed" error but I don't know how to troubleshoot. Any advice appreciated.
What you configured is the connection source enforcement, which makes sure that a Hono device (identified via the AMQP header device_id) may only update the twin with the same "thing ID" in Ditto.
That enforcement fails because the thing ID you set in the Ditto Protocol JSON is my-tenant:org.acme:my-device-1: the topic's first segment is the namespace, the second segment is the name, and combined those two segments become the "thing ID". For the device org.acme:my-device-1, the expected thing ID is therefore org.acme:my-device-1. See also: Protocol topic specification.
So you probably want to send the following message instead:
{
  "topic": "org.acme/my-device-1/things/twin/commands/modify",
  ...
}
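Putting that into the original request, the corrected call would look roughly like this (a sketch based on the command from the question; the Content-Type header is also written out in full here, since the Ditto log shows the original request fell back to application/x-www-form-urlencoded):
curl -i -u my-auth-id-1@my-tenant:my-password -H 'Content-Type: application/json' --data-binary '{
  "topic": "org.acme/my-device-1/things/twin/commands/modify",
  "headers": {},
  "path": "/features/temperature/properties/value",
  "value": 53
}' http://${HTTP_ADAPTER_IP}:${HTTP_ADAPTER_PORT_HTTP}/telemetry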
In Hiera I have the node variable solr_enabled = true. On the same node I also have a list of fstab mount points like:
fstab_homes:
'/home1':
device: 'UUID=ac2ca97e-8bce-4774-92d7-051482253089'
'/home2':
device: 'UUID=d9daaeed-4e4e-40e9-aa6b-73632795e661'
'/home3':
device: 'UUID=21a358cf-2579-48cb-b89d-4ff43e4dd104'
'/home4':
device: 'UUID=c68041de-542a-4f72-9488-337048c41947'
'/home16':
device: 'UUID=d55eff53-3087-449b-9667-aeff49c556e7'
In solr.pp I want to get the first mounted home disk, create a folder there, and make a symbolic link to /home/cpanelsolr.
For this I wrote the following code in /etc/puppet/environments/testing/modules/cpanel/manifests/solr.pp:
# Install SOLR - dovecot full text search plugin
class cpanel::solr(
  $solr_enable = hiera('solr_enabled', false),
  $homes       = hiera_hash('fstab_homes', false),
  $homesKeys   = keys($homes),
) {
  if $solr_enable == true {
    notify { "Starting Solr Installation ${homesKeys[0]}": }

    if $homes != false and $homesKeys[0] != '/home' {
      file { "Create Solr home symlink to ${homesKeys[0]}":
        path   => '/home/cpanelsolr',
        ensure => 'link',
        target => "${homesKeys[0]}/cpanelsolr",
      }
    }

    exec { 'cpanel-dovecot-solr':
      command => "/bin/bash -c '/usr/local/cpanel/scripts/install_dovecot_fts'",
    }
  }
}
But when I run this on the dev node I get an error:
root@webcloud2 [/home1]# puppet agent -t --no-use_srv_records --server=puppet.development.internal --environment=testing --tags=cpanel::solr
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Loading facts
2018-08-03 6:04:54 140004666824672 [Note] libgovernor.so found
2018-08-03 6:04:54 140004666824672 [Note] All governors functions found too
2018-08-03 6:04:54 140004666824672 [Note] Governor connected
2018-08-03 6:04:54 140004666824672 [Note] All governors lve functions found too
Error: Could not retrieve catalog from remote server: Error 400 on SERVER: keys(): Requires hash to work with at
/etc/puppet/environments/testing/modules/cpanel/manifests/solr.pp:6 on node webcloud2.development.internal
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run
What's wrong?
You have at least two problems there.
The first problem is that $homes won't be set at all in that context, so keys($homes) cannot be evaluated in the parameter list. You would need to rewrite it as:
class cpanel::solr(
  $solr_enable = hiera('solr_enabled', false),
  $homes       = hiera_hash('fstab_homes', false),
) {
  $homes_keys = keys($homes)
  ...
}
The second problem is that your YAML isn't correctly indented, so fstab_homes would not actually return a hash. It should be:
fstab_homes:
  '/home1':
    device: 'UUID=ac2ca97e-8bce-4774-92d7-051482253089'
  '/home2':
    device: 'UUID=d9daaeed-4e4e-40e9-aa6b-73632795e661'
  '/home3':
    device: 'UUID=21a358cf-2579-48cb-b89d-4ff43e4dd104'
  '/home4':
    device: 'UUID=c68041de-542a-4f72-9488-337048c41947'
  '/home16':
    device: 'UUID=d55eff53-3087-449b-9667-aeff49c556e7'
Finally, be aware that use of camelCase in parameter names in Puppet can cause you issues in some contexts, so best to use snake_case.
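Putting both fixes together (and renaming the key list to snake_case), the class from the question would look roughly like this; a sketch reflecting the corrections above, not a tested manifest:
# Install SOLR - dovecot full text search plugin
class cpanel::solr(
  $solr_enable = hiera('solr_enabled', false),
  $homes       = hiera_hash('fstab_homes', false),
) {
  # keys() can only be evaluated here, in the class body, where $homes is bound
  $homes_keys = keys($homes)

  if $solr_enable == true {
    notify { "Starting Solr Installation ${homes_keys[0]}": }

    if $homes != false and $homes_keys[0] != '/home' {
      file { "Create Solr home symlink to ${homes_keys[0]}":
        ensure => 'link',
        path   => '/home/cpanelsolr',
        target => "${homes_keys[0]}/cpanelsolr",
      }
    }

    exec { 'cpanel-dovecot-solr':
      command => "/bin/bash -c '/usr/local/cpanel/scripts/install_dovecot_fts'",
    }
  }
}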
I have the following configuration for my Snap server:
Local.withPool 2 $ \pool -> do
    Local.parallel_ pool [ httpServe (setPort (read port) config) Main.skite
                         --, httpServe (setPort 8003 config) Ws.brz
                         ]
    --httpServe (setPort 8003 config) Ws.brz
  where
    config =
        setErrorLog ConfigNoLog $
        setAccessLog ConfigNoLog $
        setSSLPort 443 $
        setSSLCert "/etc/letsencrypt/../cert.pem" $
        setSSLKey "/etc/letsencrypt/../privkey.pem" $
        defaultConfig
After building and uploading, all the certs are in place, yet https:// won't work. Do you have any clues?
Thanks
I got it working:
First of all, I added these two lines to the config:
setSSLBind "0.0.0.0" $
setSSLChainCert False $
After this, it is very important to build with ghc -threaded; that will get it up and running.
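For reference, here is a sketch of the full config with those two settings merged into the original from the question (paths and ports unchanged):
config =
    setErrorLog ConfigNoLog $
    setAccessLog ConfigNoLog $
    setSSLBind "0.0.0.0" $           -- accept HTTPS connections on all interfaces
    setSSLPort 443 $
    setSSLCert "/etc/letsencrypt/../cert.pem" $
    setSSLKey "/etc/letsencrypt/../privkey.pem" $
    setSSLChainCert False $          -- the certificate file is not a full chain
    defaultConfig
Then build with the threaded runtime, e.g. ghc -threaded Main.hs.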