On my Sensu server (non-enterprise) I first installed the https://github.com/sensu-plugins/sensu-plugins-slack plugin via sudo sensu-install -p slack.
The configuration files on my Sensu server are as follows.
/etc/sensu/conf.d/handler_config_slack.json:
{
"handlers": {
"slack": {
"type": "pipe",
"command": "/usr/local/bin/handler-slack.rb",
"severites": ["critical", "unknown"]
}
},
"slack": {
"webhook_url": "https://hooks.slack.com/services/...",
"username": "sensu",
"channel": "#ops",
"timeout": 10
}
}
/etc/sensu/conf.d/client.json:
{
"client": {
"name": "sensu-server-client-test",
"address": "x.x.x.x",
"subscriptions": [
"test"
],
"keepalive": {
"thresholds": {
"warning": 30,
"critical": 40
},
"handlers": ["slack"],
"refresh": 300
}
}
}
And on the remote Sensu client server, the file /etc/sensu/conf.d/client.json:
{
"client": {
"name": "sensu-client-test",
"address": "x.x.x.x",
"subscriptions": [
"test"
],
"keepalive": {
"thresholds": {
"warning": 30,
"critical": 40
},
"handlers": ["slack"],
"refresh": 300
}
}
}
/var/log/sensu/sensu-server.log:
{"timestamp":"2016-02-21T15:04:59.771989+0000","level":"info","message":"handler output","handler":{"type":"pipe","command":"handler-slack.rb","severites":["critical","unknown"],"name":"slack"},"output":["only handling every 180 occurrences: sensu-server-client-test/disk\n"]}
I have a remote Sensu client running and connected, and I then deliberately stop the remote client server to produce warning and critical events from the keepalive checks. I would expect a message to be sent to my Slack channel, but nothing is being sent.
What am I doing wrong?
Simple error; I changed the following:
"command": "/usr/local/bin/handler-slack.rb",
To the following:
"command": "handler-slack.rb",
I currently have a set of Node.js modules (containers) that connect to a local MongoDB (also a container).
All modules work perfectly, but with one of them I have the following problem:
https://i.stack.imgur.com/xuFqw.png
The connection seems to be accepted by Mongo, but the client doesn't receive the response and keeps waiting until it times out after 30 seconds (see the timestamps in the log).
The code of the client is the following:
// MongoClient comes from the official 'mongodb' driver; logger is defined elsewhere in the module
const { MongoClient } = require('mongodb')

async function connectToMongo() {
  try {
    const mongoUrl = 'mongodb://mongo:27017/LocalDB'
    // In the failing module this call never resolves and times out after ~30 seconds
    let result = await MongoClient.connect(mongoUrl, { useUnifiedTopology: true })
    logger.info("Connected to Mongo DB")
    return result
  } catch (err) {
    logger.error("Cannot connect to Mongo - " + err)
    throw err
  }
}
And it is exactly the same as in the other modules (which work).
Mongo is started without any settings or authentication.
I have tried using the following hostnames:
mongo : name of the container in which mongo is running
host.docker.internal : to address the host, since port 27017 of the container is bound to the host port
I honestly don't know what to try anymore... does anybody have an idea of what the problem could be?
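For what it's worth, a debugging variant like the sketch below (the shortened driver timeouts are assumptions, not part of the original module) makes the failure surface after a few seconds instead of 30 and logs the underlying driver error:
// Debugging sketch: same connection, but with short timeouts so a
// server-selection failure shows up quickly and reveals its cause.
const { MongoClient } = require('mongodb')

async function connectToMongoDebug() {
  const mongoUrl = 'mongodb://mongo:27017/LocalDB'
  try {
    const client = await MongoClient.connect(mongoUrl, {
      useUnifiedTopology: true,
      serverSelectionTimeoutMS: 5000, // fail fast instead of the ~30s default
      connectTimeoutMS: 5000
    })
    console.log('Connected to Mongo DB')
    return client
  } catch (err) {
    // err.message usually indicates whether it is DNS, a refused connection or a timeout
    console.error('Cannot connect to Mongo - ' + err.message)
    throw err
  }
}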
UPDATE: as requested, I'm adding info about how I'm starting the containers. I originally omitted this part because they are started as Azure IoT Edge modules. Here is the info from the deployment template:
"modules": {
"mongo": {
"settings": {
"image": "mongodb:0.0.1-replset-noauth",
"createOptions": "{\"HostConfig\":
{\"PortBindings\":{\"27017/tcp\":[{\"HostPort\":\"27017\"}]}}}"
},
"type": "docker",
"version": "1.0",
"status": "running",
"restartPolicy": "always"
},
"healthcheck": {
"settings": {
"image": "${MODULES.HealthCheck.debug}",
"createOptions": {
"HostConfig": {
"PortBindings": {
"9229/tcp": [
{
"HostPort": "9233"
}
]
}
}
}
},
"type": "docker",
"version": "1.0",
"env": {
"MONGO_CONTAINER_NAME": {
"value": "mongo"
},
"MONGO_SERVER_PORT": {
"value": "27017"
},
"DB_NAME": {
"value": "LocalDB"
}
},
"status": "running",
"restartPolicy": "always"
}
},
While this is the configuration of another module that perfectly works:
"shovel": {
"settings": {
"image": "${MODULES.Shovel.debug}",
"createOptions": {
"HostConfig": {
"PortBindings": {
"9229/tcp": [
{
"HostPort": "9229"
}
]
}
}
}
},
"type": "docker",
"version": "1.0",
"env": {
"MONGO_CONTAINER_NAME": {
"value": "mongo"
},
"MONGO_SERVER_PORT": {
"value": "27017"
},
"DB_NAME": {
"value": "LocalDB"
}
},
"status": "running",
"restartPolicy": "always"
},
(it's basically the same)
The modules are currently being launched using the IoT Simulator or on an IoT Edge device as Single Device deployments.
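One quick reachability check, assuming the modules sit on the default azure-iot-edge Docker network, is to run a throwaway container on that network and ping the mongo container by name:
# Run on the IoT Edge host: checks that the name "mongo" resolves and answers
docker run -it --rm --network=azure-iot-edge alpine ping -c 3 mongo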
My colleagues and I have been working to fix a reported issue on our Amazon Alexa CBT Test regarding the value “DeepQuery=true”.
Our code has been modified so that every state change is reported automatically, and all the interfaces we use have the properties “proactivelyReported” and “retrievable” set to true.
As has been suggested by the WWA-Support we used the Smart Home Debugger of the Developer Console to validate the ReportEvents (e.g. Discovery or ChangeReport) and we checked the state of our device on the “View Device State” page (both pages are referenced on: https://developer.amazon.com/en-US/docs/alexa/smarthome/debug-your-smart-home-skill.html).
For debugging purposes we scaled our device capabilities down to just the PowerController. The AddOrUpdateReport of Alexa.Discovery now looks exactly as expected/documented to us. The same goes for the ChangeReport, which we proactively send right after the AddOrUpdateReport (two sample reports for both are provided at the end).
Unfortunately we are still faced with the issue that “DeepQuery=true” is shown on the “View Device State” page.
If we set the interface property “retrievable” to false, then “DeepQuery=false”, but the Alexa app does not retain the current state of the device. In this configuration the Alexa app can only be used to send commands, which unfortunately causes other test cases to fail.
Does anyone know how to solve this issue?
How can we set “proactivelyReported” and “retrievable” to true and have “DeepQuery=false”?
Any help would be greatly appreciated, and I will gladly provide more information if needed.
Sample AddOrUpdateReport from Smart Home Debugger
{
"header": {
"namespace": "SkillDebugger",
"name": "CaptureDebuggingInfo",
"messageId": "05b030fb-6393-4ae0-80d0-47fc27876f0e"
},
"payload": {
"skillId": "amzn1.ask.skill.055ca62d-3cf8-4f51-a683-9a98b36f4637",
"timestamp": "2021-09-09T13:28:21.629Z",
"dialogRequestId": null,
"skillRequestId": null,
"type": "SmartHomeAddOrUpdateReportSuccess",
"content": {
"addOrUpdateReport": {
"event": {
"header": {
"namespace": "Alexa.Discovery",
"name": "AddOrUpdateReport",
"messageId": "2458b969-7c3e-47e2-ab0b-6e13a999be76",
"payloadVersion": "3"
},
"payload": {
"endpoints": [
{
"manufacturerName": "Our Company Name",
"description": "Our Product Name",
"endpointId": "device--cb12b420-1171-11ec-81f3-cb34e87ea438",
"friendlyName": "Lampe 1",
"capabilities": [
{
"type": "AlexaInterface",
"version": "3",
"interface": "Alexa.PowerController",
"properties": {
"supported": [
{
"name": "powerState"
}
],
"proactivelyReported": true,
"retrievable": true
}
},
{
"type": "AlexaInterface",
"interface": "Alexa",
"version": "3"
}
],
"displayCategories": [
"LIGHT"
],
"connections": [],
"relationships": {},
"cookie": {}
}
],
"scope": null
}
}
}
}
}
}
Sample ChangeReport from Smart Home Debugger
{
"header": {
"namespace": "SkillDebugger",
"name": "CaptureDebuggingInfo",
"messageId": "194a96a1-6747-46ba-8751-5c9ef715fd34"
},
"payload": {
"skillId": "amzn1.ask.skill.055ca62d-3cf8-4f51-a683-9a98b36f4637",
"timestamp": "2021-09-09T13:28:23.227Z",
"dialogRequestId": null,
"skillRequestId": null,
"type": "SmartHomeChangeReportSuccess",
"content": {
"changeReport": {
"event": {
"header": {
"namespace": "Alexa",
"name": "ChangeReport",
"messageId": "8972e386-9622-40e6-85e7-1a7d81c79c8a",
"payloadVersion": "3"
},
"endpoint": {
"scope": null,
"endpointId": "device--cb12b420-1171-11ec-81f3-cb34e87ea438"
},
"payload": {
"change": {
"cause": {
"type": "APP_INTERACTION"
},
"properties": [
{
"namespace": "Alexa.PowerController",
"name": "powerState",
"value": "ON",
"timeOfSample": "2021-09-09T13:28:18.088Z",
"uncertaintyInMilliseconds": 500
}
]
}
}
},
"context": {
"properties": []
}
}
}
}
}
I am working on an Azure IoT Edge module: I created a custom module that receives messages from another module within the edge runtime.
My requirement is to send data to an external gateway (for example ThingsBoard) over HTTP POST. In the Dockerfile I expose port 80 and bind it to port 80 on the local machine.
The problem is that whenever I try to send a message from the custom module to the external gateway (ThingsBoard) using HTTP POST, I get an HTTP 500 error.
If I run the container directly (not under the edge runtime), it works fine.
The code in my container needs to send messages to the external world (an external device or gateway) over the internet. I am sending messages to ThingsBoard (http://demo.thingsboard.io/api/v1/camera/telemetry).
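For illustration only (the actual module code is not shown here, and the module may not be written in Node.js), the call boils down to something like the sketch below, with the response status logged so the 500 can be compared between the edge-runtime and standalone runs:
// Illustrative sketch: posts a placeholder telemetry value to the ThingsBoard
// device API and logs the HTTP status code of the response.
const http = require('http')

function postTelemetry(payload) {
  const body = JSON.stringify(payload)
  const req = http.request('http://demo.thingsboard.io/api/v1/camera/telemetry', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Content-Length': Buffer.byteLength(body)
    }
  }, res => {
    console.log('ThingsBoard responded with HTTP', res.statusCode)
    res.resume() // drain the response body
  })
  req.on('error', err => console.error('Request failed:', err.message))
  req.write(body)
  req.end()
}

postTelemetry({ temperature: 25 }) // placeholder payload, not the module's real data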
Edge deployment manifest:
{
"modulesContent": {
"$edgeAgent": {
"properties.desired": {
"schemaVersion": "1.0",
"runtime": {
"type": "docker",
"settings": {
"minDockerVersion": "v1.25",
"loggingOptions": "",
"registryCredentials": {
"cameraedgedev": {
"username": "username",
"password": "password",
"address": "cameraedgedev.azurecr.io"
}
}
}
},
"systemModules": {
"edgeAgent": {
"type": "docker",
"settings": {
"image": "mcr.microsoft.com/azureiotedge-agent:1.0",
"createOptions": "{}"
}
},
"edgeHub": {
"type": "docker",
"status": "running",
"restartPolicy": "always",
"settings": {
"image": "mcr.microsoft.com/azureiotedge-hub:1.0",
"createOptions": "{\"HostConfig\":{\"PortBindings\":{\"5671/tcp\":[{\"HostPort\":\"5671\"}],\"8883/tcp\":[{\"HostPort\":\"8883\"}],\"443/tcp\":[{\"HostPort\":\"443\"}],\"8082/tcp\":[{\"HostPort\":\"8082\"}]}}}"
}
}
},
"modules": {
"lvaEdge": {
"version": "1.0",
"type": "docker",
"status": "running",
"restartPolicy": "always",
"settings": {
"image": "mcr.microsoft.com/media/live-video-analytics:2",
"createOptions": "{\"HostConfig\":{\"PortBindings\":{\"8082/tcp\":[{\"HostPort\":\"8082\"}]},\"LogConfig\":{\"Type\":\"\",\"Config\":{\"max-size\":\"10m\",\"max-file\":\"10\"}},\"Binds\":[\"/var/media:/var/media/\",\"/var/lib/azuremediaservices:/var/lib/azuremediaservices\"]}}"
}
},
"rtspsim": {
"version": "1.0",
"type": "docker",
"status": "running",
"restartPolicy": "always",
"settings": {
"image": "mcr.microsoft.com/lva-utilities/rtspsim-live555:1.2",
"createOptions": "{\"HostConfig\":{\"Binds\":[\"/home/cameraedgevm_user/samples/input:/live/mediaServer/media\"]}}"
}
},
"lvaSupport": {
"version": "1.0",
"type": "docker",
"status": "running",
"restartPolicy": "always",
"settings": {
"image": "cameraedgedev.azurecr.io/lvasupport:1.2",
"createOptions": "{\"HostConfig\":{\"PortBindings\":{\"80/tcp\":[{\"HostPort\":\"80\"}]}}}"
}
},
"ai_module": {
"version": "1.0",
"type": "docker",
"status": "running",
"restartPolicy": "always",
"env": {
"PYTHONUNBUFFERED": {
"value": "1"
},
"NVIDIA_VISIBLE_DEVICES": {
"value": "all"
}
},
"settings": {
"image": "cameraedgedev.azurecr.io/ai_module:1.2",
"createOptions": "{\"HostConfig\":{\"Runtime\":\"nvidia\",\"PortBindings\":{\"8082/tcp\":[{\"HostPort\":\"8082\"}]},\"Binds\":[\"/home/cameraedgevm_user/ai_module/data:/ai_module/data/\"]}}"
}
}
}
}
},
"$edgeHub": {
"properties.desired": {
"schemaVersion": "1.0",
"routes": {
"LVAToHub": "FROM /messages/modules/lvaEdge/outputs/* INTO $upstream",
"LVATolvaSupportModule": "FROM /messages/modules/lvaEdge/outputs/* INTO BrokeredEndpoint(\"/modules/lvaSupport/inputs/input1\")"
},
"storeAndForwardConfiguration": {
"timeToLiveSecs": 7200
}
}
},
"lvaEdge": {
"properties.desired": {
some properties
}
},
"lvaSupport": {
"properties.desired": {
"Gateway": "http://demo.thingsboard.io/api/v1/camera/telemetry",
"gatewayMethod": "POST"
}
}
}
}
How should I resolve this problem?
How can I send messages to the external gateway?
Thanks.
By default, your modules in Azure IoT Edge will be able to access the internet, unless the configuration was changed or a firewall is in place that restricts you from reaching thingsboard.io. To verify whether your module can reach that host, you can try the following:
docker run -i --rm --network="azure-iot-edge" alpine ping thingsboard.io
This will run a container on the same Docker network as your modules, so you can check whether the host name resolves and is reachable.
You mentioned the HTTP status code 500, which indicates that a server (from Thingsboard) is having an issue with your request. Because you mentioned that this works if you just run the container manually, this could be because your payload is different when you run your module in the Azure IoT Edge runtime. An easy way to find out is to use logging to check what you're sending to Thingsboard in each scenario.
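To compare the two scenarios, it can also help to reproduce the request from the edge host with a minimal curl call (the JSON body below is only a placeholder, not the module's real payload):
curl -v -X POST \
  -H "Content-Type: application/json" \
  -d '{"temperature": 25}' \
  http://demo.thingsboard.io/api/v1/camera/telemetry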
Note: in your question, you said
In the Dockerfile I expose port 80 and bind it to port 80 on the local machine
This is only needed if you want your module to accept incoming requests over port 80. For outgoing requests (such as messages to Thingsboard) this is not needed.
Today I was trying to create a Confluence add-on for my company, following the Atlassian documentation.
My problem comes when trying to run the Express app after adding a new customContent module to atlassian-connect.json; after running npm start I get the following error.
Failed to register with host https://admin:xxx#xxx.atlassian.net/wiki (200)
{"type":"INSTALL","pingAfter":300,"status":{"done":true,"statusCode":200,"con
tentType":"application/vnd.atl.plugins.task.install.err+json","subCode":"upm.
pluginInstall.error.descriptor.not.from.marketplace","source":"https://1a0adc
8f.ngrok.io/atlassian-connect.json","name":"https://1a0adc8f.ngrok.io/atlassi
an-connect.json"},"links":{"self":"/wiki/rest/plugins/1.0/pending/b88594d3-c3
c2-4760-b687-c8d860c0a377","alternate":"/wiki/rest/plugins/1.0/tasks/b88594d3
-c3c2-4760-b687-c8d860c0a377"},"timestamp":1502272147602,"userKey":"xxx","id":"xxx"}
Add-on not registered; no compatible hosts detected
This is my atlassian-connect.json file:
{
"key": "my-add-on",
"name": "Ping Pong",
"description": "My very first add-on",
"vendor": {
"name": "Angry Nerds",
"url": "https://www.atlassian.com/angrynerds"
},
"baseUrl": "{{localBaseUrl}}",
"links": {
"self": "{{localBaseUrl}}/atlassian-connect.json",
"homepage": "{{localBaseUrl}}/atlassian-connect.json"
},
"authentication": {
"type": "jwt"
},
"lifecycle": {
"installed": "/installed"
},
"scopes": [
"READ"
],
"modules": {
"generalPages": [
{
"key": "hello-world-page-jira",
"location": "system.top.navigation.bar",
"name": {
"value": "Hello World"
},
"url": "/hello-world",
"conditions": [{
"condition": "user_is_logged_in"
}]
},
{
"key": "customersViewer",
"location": "system.header/left",
"name": {
"value": "Hello World"
},
"url": "/hello-world",
"conditions": [{
"condition": "user_is_logged_in"
}]
}
],
"customContent": [
{
"key": "customer",
"name": {
"value": "Customers"
},
"uiSupport": {
"contentViewComponent": {
"moduleKey": "customersViewer"
},
"listViewComponent": {
"moduleKey": "customerList"
},
"icons": {
"item": {
"url": "/images/customers.png"
}
}
},
"apiSupport": {
"supportedContainerTypes": ["space"]
}
}
]
}
}
Does anybody have an idea of what's going on?
The contentViewComponent can't find the generalPage it is referencing in moduleKey.
From the docs:
In the snippet above, the moduleKey “customersViewer” maps to a
generalPage module we have defined in our add-on. This generalPage is
passed the context parameters we specify, and visualizes our content
accordingly.
If you change the generalPage with the key hello-world-page-confluence to customersViewer, you will be able to install it and get up and running.
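In other words, the moduleKey referenced under uiSupport has to match the key of a generalPage defined in the same descriptor. A minimal sketch of that mapping, reusing the values from the descriptor above, looks like this:
"modules": {
  "generalPages": [
    {
      "key": "customersViewer",
      "location": "system.header/left",
      "name": { "value": "Hello World" },
      "url": "/hello-world"
    }
  ],
  "customContent": [
    {
      "key": "customer",
      "name": { "value": "Customers" },
      "uiSupport": {
        "contentViewComponent": { "moduleKey": "customersViewer" }
      },
      "apiSupport": { "supportedContainerTypes": ["space"] }
    }
  ]
}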
I want to forward only newly created log entries from logstash-forwarder to the Logstash server.
I have the following logstash-forwarder.conf file:
{
"network": {
"servers": [ "X.X.X.X:5001" ],
"ssl certificate": "/etc/ssl/certs/logstash-forwarder.crt",
"ssl key": "/etc/ssl/private/logstash-forwarder.key",
"ssl ca": "/etc/ssl/certs/logstash-forwarder.crt",
"timeout": 300
},
# The list of files configurations
"files": [
{
"paths": [ "/home/user/Programs/zookeeperlog.log" ],
"fields": { "logtype": "zookeeper-log", "type": "logs" }
},
{
"paths": [ "/var/log/syslog" ],
"fields": { "logtype": "kafka-log", "type": "logs" }
},
{
"paths": [ "/var/log/syslog" ],
"fields": { "logtype": "storm-log", "type": "logs" }
}
]
}
I wanted to know if there is an option, something like "start_position => end", that I can add to this configuration file.
I also tried using -tail=true while running logstash-forwarder. It takes snapshots, but the harvester doesn't seem to register events. It registers events only without the -tail command-line option, or when -tail=false is given, which forwards all logs from the beginning.