When I add a replication to CouchDB, it doesn't start; that is, I get the following doc after saving:
{
"_id": "xxx",
"_rev": "yyy",
"target": "https://user:pswd.domain/db",
"source": "db",
"create_target": true,
"continuous": true,
"user_ctx": {
"name": "admin",
"roles": [
"_admin"
]
},
"owner": "admin"
}
Usually, after a replication is created, it is triggered and the doc is updated to include:
"_replication_state": "triggered" or "error",
"_replication_state_time": "some time",
"_replication_id": "some ID"
I am using CouchDB 1.6.0 on Ubuntu 16.04. What could cause this to happen? Replication was working fine until about an hour ago, when 80 of 140 or so replications failed at once.
There are 60 replications that show as 'triggered' in CouchDB, but the _active_tasks endpoint only shows 46.
As it turned out, our server was experiencing high traffic from super dodgy origins. This was causing replications to time out and preventing them from restarting. The Nginx access logs showed repeated attempts at fishing for insecure PHP settings and open MySQL databases.
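To spot which replications are stuck in this situation, one option is to compare the _replicator documents against the server's active tasks. A minimal sketch in Python, assuming CouchDB is reachable on localhost:5984 and using placeholder admin credentials:
import requests  # third-party HTTP client

BASE = "http://admin:password@localhost:5984"  # placeholder credentials

# Replication docs that claim to be triggered
rows = requests.get(BASE + "/_replicator/_all_docs",
                    params={"include_docs": "true"}).json()["rows"]
triggered = {row["id"] for row in rows
             if row["doc"].get("_replication_state") == "triggered"}

# Replication tasks the server is actually running
tasks = requests.get(BASE + "/_active_tasks").json()
running = {task.get("doc_id") for task in tasks
           if task.get("type") == "replication"}

print("triggered but not running:", sorted(triggered - running))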
I'm working on converting the nested Python dictionary of whatportistoll to a new format with a new organisation of keys, but I'm encountering the following issue:
- key skipping problem
Here is the format of the existing nested dictionary:
{
"_default": {
"0": {
"name": "Service name",
"port": "Port Number",
"protocol": "Transport Protocol",
"description": "Description"
},
"1": {
"name": "",
"port": "0",
"protocol": "tcp,udp",
"description": "Port 0 is reserved by IANA, it is technically invalid to use, but possible. It is sometimes used to fingerprint machines, because different operating systems respond to this port in different ways. Some ISPs may block it because of exploits. Port 0 can be used by applications when calling the bind() command to request the next available dynamically allocated source port number."
},
...
}
}
Here is the targeted format:
{
"0": {
"tcp": {
"name": "",
"port": "0",
"protocol": "tcp,udp",
"description": "Port 0 is reserved by IANA, it is technically invalid to use, but possible. It is sometimes used to fingerprint machines, because different operating systems respond to this port in different ways. Some ISPs may block it because of exploits. Port 0 can be used by applications when calling the bind() command to request the next available dynamically allocated source port number."
},
"udp": {
"name": "",
"port": "0",
"protocol": "tcp,udp",
"description": "Port 0 is reserved by IANA, it is technically invalid to use, but possible. It is sometimes used to fingerprint machines, because different operating systems respond to this port in different ways. Some ISPs may block it because of exploits. Port 0 can be used by applications when calling the bind() command to request the next available dynamically allocated source port number."
}
},
"1": {
"tcp": {
"name": "tcpmux",
"port": "1",
"protocol": "tcp",
"description": "Scans against this port are commonly used to test if a machine runs SGI Irix (as SGI is the only system that typically has this enabled). This service is almost never used in practice.RFC1078 - TCPMUX acts much like Sun's portmapper, or Microsoft's end-point mapper in that it allows services to run on arbitrary ports. In the case of TCPMUX, however, after the \"lookup\" phase, all further communication continues to run over that port.builtins.c in Xinetd before 2.3.15 does not check the service type when the tcpmux-server service is enabled, which exposes all enabled services and allows remote attackers to bypass intended access restrictions via a request to tcpmux port 1 (TCP/UDP). References: [CVE-2012-0862] [BID-53720] [OSVDB-81774]Trojans that use this port: Breach.2001, SocketsDeTroieAlso see: CERT: CA-95.15.SGI.lp.vul"
},
"udp": {}
},
...
}
Could you please help me resolve this issue?
Your help is much appreciated.
Best Regards,
Jihane
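In case it helps, here is a minimal sketch of one way to do this conversion. It assumes the real port number is taken from each entry's "port" field (skipping the header-like record whose port is "Port Number"), and that the "protocol" field is a comma-separated list that decides which of the "tcp"/"udp" sub-keys get filled:
import json

def convert(old):
    """Re-key the whatportistoll data by port number and protocol."""
    new = {}
    for entry in old.get("_default", {}).values():
        port = entry.get("port", "")
        if not port.isdigit():  # skip the header-like record
            continue
        protocols = {p.strip() for p in entry.get("protocol", "").split(",")}
        new[port] = {
            "tcp": dict(entry) if "tcp" in protocols else {},
            "udp": dict(entry) if "udp" in protocols else {},
        }
    return new

with open("ports.json") as fh:  # hypothetical input file
    converted = convert(json.load(fh))
print(json.dumps(converted, indent=4)[:400])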
I have enabled HTTP request logging for my web app in Azure Insights. I don't understand why there is such a large response-time disparity between identical requests received just a few seconds or a few minutes apart. A log example with three records:
{ "time": "2022-07-20T06:08:41.7548330Z", "EventTime": "2022-07-20T06:08:41.7548330Z", "resourceId": "***", "properties": "{\"CsHost\":"***",\"CIp\":\"195.235.205.153\",\"SPort\":\"80\",\"CsUriStem\":\"\\/mensajes\",\"CsUriQuery\":\"desde=20220705T125027\",\"CsMethod\":\"GET\",\"TimeTaken\":1426,\"ScStatus\":\"200\",\"Result\":\"Success\",\"CsBytes\":\"976\",\"ScBytes\":\"302\",\"UserAgent\":\"RestSharp 104.2.0.0\",\"Cookie\":\"--\",\"CsUsername\":\"\",\"Referer\":\"\",\"ComputerName\":\"RD501AC5BF5D04\"}", "category": "AppServiceHTTPLogs", "EventStampType": "Stamp", "EventPrimaryStampName": "waws-prod-am2-325", "EventStampName": "waws-prod-am2-325d", "Host": "RD501AC5BF5D04", "EventIpAddress": "1*.*.*.*"}
{ "time": "2022-07-20T06:09:42.2283150Z", "EventTime": "2022-07-20T06:09:42.2283150Z", "resourceId": "***", "properties": "{\"CsHost\":"***",\"CIp\":\"195.235.205.153\",\"SPort\":\"80\",\"CsUriStem\":\"\\/mensajes\",\"CsUriQuery\":\"desde=20220705T125027\",\"CsMethod\":\"GET\",\"TimeTaken\":279,\"ScStatus\":\"200\",\"Result\":\"Success\",\"CsBytes\":\"976\",\"ScBytes\":\"302\",\"UserAgent\":\"RestSharp 104.2.0.0\",\"Cookie\":\"--\",\"CsUsername\":\"\",\"Referer\":\"\",\"ComputerName\":\"RD501AC5BF5D04\"}", "category": "AppServiceHTTPLogs", "EventStampType": "Stamp", "EventPrimaryStampName": "waws-prod-am2-325", "EventStampName": "waws-prod-am2-325d", "Host": "RD501AC5BF5D04", "EventIpAddress": "*.*.*.*"}
{ "time": "2022-07-20T06:10:15.0636460Z", "EventTime": "2022-07-20T06:10:15.0636460Z", "resourceId": "***", "properties": "{\"CsHost\":"***",\"CIp\":\"195.235.205.153\",\"SPort\":\"80\",\"CsUriStem\":\"\\/mensajes\",\"CsUriQuery\":\"desde=20220705T125027\",\"CsMethod\":\"GET\",\"TimeTaken\":2629,\"ScStatus\":\"200\",\"Result\":\"Success\",\"CsBytes\":\"976\",\"ScBytes\":\"302\",\"UserAgent\":\"RestSharp 104.2.0.0\",\"Cookie\":\"--\",\"CsUsername\":\"\",\"Referer\":\"\",\"ComputerName\":\"RD501AC5BF5D04\"}", "category": "AppServiceHTTPLogs", "EventStampType": "Stamp", "EventPrimaryStampName": "waws-prod-am2-325", "EventStampName": "waws-prod-am2-325d", "Host": "RD501AC5BF5D04", "EventIpAddress": "*.*.*.*"}
The three requests are identical and therefore trigger the same process on the server side (same endpoint, called from the same origin). The result can be seen in the 'TimeTaken' field: 1426 ms / 279 ms / 2629 ms. Any suggestion is appreciated.
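For reference, the interesting numbers live inside the "properties" field, which is itself a JSON-encoded string and needs a second parse. A small sketch for pulling out TimeTaken, assuming the records above are exported one per line to a hypothetical http_logs.jsonl file:
import json

with open("http_logs.jsonl") as fh:
    for line in fh:
        record = json.loads(line)
        # "properties" is a JSON string embedded in JSON, so parse it again
        props = json.loads(record["properties"])
        print(record["time"], props["CsUriStem"], props["TimeTaken"], "ms")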
According to the Microsoft documentation:
If they remain idle for a predetermined amount of time, web apps are by default unloaded. You can activate the Always On feature in the Basic and Standard service levels to keep the app constantly loaded. This eliminates longer load times after the app is idle.
To enable it, open your web app => Configuration => General settings => switch Always On to On.
References:
Smart detection - performance anomalies - Azure Monitor | Microsoft Docs
Exploring performance issues with Azure Application Insights | by Thomas Weiss | Medium
I'm trying to consume data from Hono. I am following the Starting a consumer guide in the Hono documentation.
I'm currently trying to subscribe to all tenants by adding --tenant.id=* at the end of the mvn command. This results in the following command:
mvn spring-boot:run -Drun.arguments=--hono.client.host=localhost,--hono.client.username=consumer#HONO,--hono.client.password=verysecret,--destination.TopicTemplate=gw/\!{tenant}/\!{device}/alp,--destination.Host=localhost,--destination.Port=11883,--tenant.id=*
I'm not getting any messages when I subscribe like this. When I subscribe using the example command (only for DEFAULT_TENANT), I'm consuming the messages.
The current user permission looks like this:
"consumer#HONO": {
"mechanism": "PLAIN",
"password": "verysecret",
"authorities": [ "application" ]
}
The current application role looks like this:
"application": [
{
"resource": "telemetry/*",
"activities": [ "READ" ]
},
{
"resource": "event/*",
"activities": [ "READ" ]
},
{
"resource": "control/*",
"activities": [ "READ", "WRITE" ]
}
]
Both of them are still the original ones from the Hono GitHub repository.
EDIT: The consumer also subscribes to event/tenant; in my case this is event/*. Events published on the topics event/DEFAULT_TENANT and event/MY_TENANT are consumed. However, the consumer for telemetry/* does not seem to be registered.
I've finally found out what was going on.
It seems the message was blocked in the Qpid Dispatch Router because of the following error: "Parse tree match not found".
This can be resolved by changing the Qpid configuration. In this configuration you should be able to find the following records:
["linkRoute", {
"prefix": "event/",
"direction": "in",
"connection": "broker"
}],
["linkRoute", {
"prefix": "event/",
"direction": "out",
"connection": "broker"
}],
["address", {
"prefix": "telemetry/",
"distribution": "balanced"
}],
This creates link routes (in and out) for the event prefix but not for the telemetry prefix. Adding the corresponding records for the telemetry prefix resolves the problem:
["linkRoute", {
"prefix": "event/",
"direction": "in",
"connection": "broker"
}],
["linkRoute", {
"prefix": "event/",
"direction": "out",
"connection": "broker"
}],
["linkRoute", {
"prefix": "telemetry/",
"direction": "in",
"connection": "broker"
}],
["linkRoute", {
"prefix": "telemetry/",
"direction": "out",
"connection": "broker"
}],
["address", {
"prefix": "telemetry/",
"distribution": "balanced"
}],
Hono does not (as of now) support consuming messages of all tenants. The consumer is always scoped to a single tenant only. This is also reflected in the (northbound) Telemetry and Event API specifications.
The usage of wildcard characters in order to receive data for multiple/all tenants is not supported. The change you have made to the Dispatch Router configuration may have led you to believe that it does work. However, defining the telemetry address to use link routing instead of the default message routing has some consequences you should be aware of:
All telemetry messages will be routed to the message broker (Artemis) instead of being routed directly to consumers attached to the Dispatch Router. This means that all messages will be written to a queue/topic in Artemis. Depending on the Artemis configuration this might also mean that (telemetry) messages get persisted which will have quite a negative impact on throughput.
Your clients/consumers will now explicitly depend on the (Artemis) broker's support for wildcards being used in AMQP 1.0 link source addresses to receive messages from multiple addresses. While this might be what you want to achieve in the first place, beware that it ties your application to the specific implementation of the AMQP Messaging Network (in this case Artemis) which is not part of Hono.
I just started using ThingsBoard and came across this guide: https://thingsboard.io/docs/iot-gateway/getting-started/. I have implemented it, but the problems I'm facing are:
1. I can transmit only one key-value pair. How can I transmit multiple key-value sensor readings?
2. Also, is there any other way to access the Cassandra database so that I can retrieve all my data from ThingsBoard?
Please help. Thank you.
You are asking two very different things.
1) You can transmit more key-value pairs at once by correctly mapping the gateway's incoming messages. I suppose you are working with the MQTT protocol. The default mapping for this protocol is specified in /etc/tb-gateway/conf/mqtt-config.json. This file specifies how to translate the incoming MQTT messages from the broker into the ThingsBoard key-value format before sending them to the ThingsBoard server instance.
To map more than one reading from a sensor, you can do something like this:
{
"brokers": [
{
"host": "localhost",
"port": 1883,
"ssl": false,
"retryInterval": 5000,
"credentials": {
"type": "anonymous"
},
"mapping": [
{
"topicFilter": "WeatherSensors",
"converter": {
"type": "json",
"filterExpression": "",
"deviceNameJsonExpression": "${$.WeatherStationName}",
"timeout": 120000,
"timeseries": [
{
"type": "double",
"key": "temperature",
"value": "${$.temperature}"
},
{
"type": "double",
"key": "humidity",
"value": "${$.humidity}"
}
]
}
}
]
}
]
}
This way, if you send a message like {"WeatherStationName":"test", "temperature":25, "humidity":40} to the topic WeatherSensors, you will see the two key-value pairs on the ThingsBoard server, under a device named "test".
2) The best way to access data stored in the internal ThingsBoard server is via the REST API, so that you can query any ThingsBoard instance with the same piece of code regardless of the technology used for the database (Cassandra, PostgreSQL, etc.). You can find a Python example in this repo.
The alternative is to use a query language specific to the database, such as SQL for PostgreSQL or CQL for Cassandra.
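As an illustration of the REST API approach, here is a minimal Python sketch using the requests library; the host, credentials and device id below are placeholders you would replace with your own:
import requests

TB = "http://localhost:8080"  # placeholder ThingsBoard host

# Log in to obtain a JWT token
login = requests.post(TB + "/api/auth/login",
                      json={"username": "tenant@thingsboard.org",
                            "password": "tenant"})
headers = {"X-Authorization": "Bearer " + login.json()["token"]}

# Fetch the latest timeseries values for a device
device_id = "00000000-0000-0000-0000-000000000000"  # placeholder device UUID
resp = requests.get(
    TB + "/api/plugins/telemetry/DEVICE/" + device_id + "/values/timeseries",
    params={"keys": "temperature,humidity"},
    headers=headers)
print(resp.json())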
For example, suppose a device reports humidity, temperature and gas.
In this case you use one access token / a single MQTT session and send the data in a single JSON payload like this:
{"humidity":42.2, "temperature":23.3, "gas":45}
If you have multiple sensors attached to a single device, send them like this:
{"sensorA.humidity":42.2, "sensorB.temperature":23.3, "sensorC.gas":45}
Available topics are static and listed here:
https://thingsboard.io/docs/reference/mqtt-api/#telemetry-upload-api
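To make this concrete, a minimal sketch of such a single-session upload using the paho-mqtt Python client; the broker host and access token are placeholders:
import json
import paho.mqtt.client as mqtt

ACCESS_TOKEN = "YOUR_DEVICE_ACCESS_TOKEN"  # placeholder device access token

client = mqtt.Client()
client.username_pw_set(ACCESS_TOKEN)  # ThingsBoard expects the token as the MQTT username
client.connect("localhost", 1883, keepalive=60)

# All readings travel in one JSON payload over the single session
payload = {"sensorA.humidity": 42.2, "sensorB.temperature": 23.3, "sensorC.gas": 45}
client.publish("v1/devices/me/telemetry", json.dumps(payload), qos=1)
client.disconnect()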
I am using Marathon to deploy my Docker-containerised Node.js application. My Marathon app spec is as follows:
{
"id": "<some-name>",
"cmd": null,
"cpus": 1,
"mem": 2800,
"disk": 30720,
"instances": 1,
"container": {
"docker": {
"image": "<some-docker-registry-IP>:5000/<repo>",
"network": "BRIDGE",
"privileged": true,
"forcePullImage": true,
"parameters": [
{
"key": "net",
"value": "host"
}
],
"portMappings": [
{
"containerPort": <some-port>,
"hostPort": <some-port>,
"protocol": "tcp",
"name": null
}
]
},
"type": "DOCKER"
}
}
The problem, however, is that once the application runs out of memory, the server it is deployed on ends up being restarted. I need my services to listen on the private IP of the host machine, which is why I am using --net=host.
Is it possible to just kill the task and free up the memory so that Marathon can re-spawn it without restarting/shutting down the server? Or is there any other way to make the Docker container routable to the outside world without using --net=host?
Basically, I think there is a problem with your Node application if it shows memory-leaking behaviour. That's the first point I'd address.
The second is that you should use something like pm2 in your application's Docker image, which will take care of restarting your application (inside the container itself) when it encounters a problem.
Furthermore, you could implement a health endpoint and a corresponding Marathon health check, so that Marathon can recognize when the application actually has problems (see the sketch below).
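For illustration, such a health check could be added to the app spec above along these lines; it assumes your Node application exposes a hypothetical /health route, and the timing values are only starting points:
"healthChecks": [
  {
    "protocol": "HTTP",
    "path": "/health",
    "portIndex": 0,
    "gracePeriodSeconds": 300,
    "intervalSeconds": 60,
    "timeoutSeconds": 20,
    "maxConsecutiveFailures": 3
  }
]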
For some redundancy, I'd strongly advise running at least two instances of the application and using Mesos-DNS together with a load balancer like marathon-lb on the public slave node(s), which will take care of the routing. This also allows you to use bridged networking if you want to.