I encountered a problem when using a SWUpdate image built with Yocto.
Software Update started !
[network_initializer] : Software update started
[extract_file_to_tmp] : Found file
[extract_file_to_tmp] : filename sw-description
[extract_file_to_tmp] : size 303
[get_common_fields] : Version 0.1.0
[get_common_fields] : Description Firmware update for XXXXX Project
[parse_hw_compatibility] : Accepted Hw Revision : 1.0
[parse_hw_compatibility] : Accepted Hw Revision : 1.2
[parse_hw_compatibility] : Accepted Hw Revision : 1.3
[_parse_images] : Found Image: rootfs.ext4.gz in device : /dev/mmcblk2p4 for handler raw
[check_hw_compatibility] : Hardware myir Revision: 1.0
[check_hw_compatibility] : Hardware compatibility verified
[extract_files] : Found file
[extract_files] : filename rootfs.ext4.gz
[extract_files] : size 373258053 required
ERROR : Not enough free space to extract rootfs.ext4.gz (needed 373258053, got 223219712)
Image invalid or corrupted. Not installing ...
[network_initializer] : Main thread sleep again !
Waiting for requests...
ERROR : Writing to IPC fails due to Broken pipe
As shown in the log above, it says there is not enough space, so I used resize2fs /dev/mmcblk2p4 to expand the partition; it now has 1 GB of space. But I still get the same error. Please let me know what you think.
Try using the installed-directly flag in the sw-description file. SWUpdate normally extracts each artifact to a temporary file (under TMPDIR, typically a RAM-backed tmpfs) before installing it, which is why the free-space check still fails after you resize the target partition; with installed-directly the image is streamed straight to the handler without the temporary copy.
files: (
    {
        filename = "rootfs.ext4.gz";
        sha256 = "bc57b9c737033d0d6826db51618d596da7ecf3fdc0cb48dc9986a6094f529413";
        type = "archive";
        path = "/path/to/extract";
        preserve-attributes = true;
        installed-directly = true;   /* <---- this option */
        properties: {
            create-destination = "true";
        }
    }
);
I'm trying to read the state of the PRM_RSTST register of my ARM Cortex-A8 processor (a TI8148) to find the reason for resets, because WDIOC_GETBOOTSTATUS isn't implemented for my processor. I know from the datasheet that the offset/address is supposed to be 0xA8. However, if I try to read it in my kernel driver with __raw_readl(0xA8) I get a seg fault. The other idea I had was to use /dev/mem, but if I go in with devmem2 0xA8 I get
/dev/mem opened.
Memory mapped at address 0x40127000.
Unhandled fault: Precise External Abort on non-linefetch (0x018) at 0x401270a8
Bus error (core dumped)
So I looked at the memory map with cat /proc/iomem:
00000000-00000000 : omap2-nand.0
08000000-08000003 : omap2-nand
20000000-2fffffff : pcie-nonprefetch
47400000-47400fff : usbss
47401000-474017ff : musb0
  47401000-474017ff : musb0
47401800-47401fff : musb1
  47401800-47401fff : musb1
48010000-480100ff : omap-iommu.1
  48010000-480100ff : omap-iommu.1
48020000-48021fff : omap_uart.0
  48020000-48021fff : omap_uart
48022000-48023fff : omap_uart.1
  48022000-48023fff : omap_uart
48024000-48025fff : omap_uart.2
  48024000-48025fff : omap_uart
48028000-48028fff : omap_i2c.1
  48028000-48028fff : omap_i2c
4802a000-4802afff : omap_i2c.2
  4802a000-4802afff : omap_i2c
48030100-480301ff : omap2_mcspi.1
  48030100-480301ff : omap2_mcspi.1
48032000-48032fff : omap_gpio.0
48038000-4803afff : mcasp
  48038000-4803afff : davinci-mcasp
4803c000-4803efff : mcasp
  4803c000-4803efff : davinci-mcasp
4804c000-4804cfff : omap_gpio.1
48080000-48081fff : omap2_elm.1
  48080000-48081fff : omap2_elm.1
480c0000-480c0fff : omap_rtc
  480c0000-480c0fff : omap_rtc
480c8000-480c8143 : omap-mailbox
48105500-481058ff : ti81xxvin
48105a00-48105dff : ti81xxvin
481a0100-481a01ff : omap2_mcspi.2
  481a0100-481a01ff : omap2_mcspi.2
481a2100-481a21ff : omap2_mcspi.3
  481a2100-481a21ff : omap2_mcspi.3
481a4100-481a41ff : omap2_mcspi.4
  481a4100-481a41ff : omap2_mcspi.4
481a6000-481a7fff : omap_uart.3
  481a6000-481a7fff : omap_uart
481a8000-481a9fff : omap_uart.4
  481a8000-481a9fff : omap_uart
481aa000-481abfff : omap_uart.5
  481aa000-481abfff : omap_uart
481ac000-481acfff : omap_gpio.2
481ae000-481aefff : omap_gpio.3
481c7000-481c7fff : omap_wdt
  481c7000-481c7fff : omap_wdt
481cc000-481cffff : d_can
  481cc000-481cffff : d_can
481d8100-481e80ff : mmci-omap-hs.0
  481d8100-481e80ff : mmci-omap-hs
49000000-49007fff : edma_cc0
  49000000-49007fff : edma
49800000-498003ff : edma_tc0
49900000-499003ff : edma_tc1
49a00000-49a003ff : edma_tc2
49b00000-49b003ff : edma_tc3
4a100000-4a1007ff : cpsw.0
  4a100000-4a1007ff : eth0
4a100800-4a1008ff : davinci_mdio.0
  4a100800-4a1008ff : davinci_mdio.0
4a100900-4a1009ff : cpsw.0
  4a100900-4a1009ff : eth0
4a140000-4a150fff : ahci.0
51000000-51003fff : pcie-regs
55082000-550820ff : omap-iommu.0
  55082000-550820ff : omap-iommu.0
80000000-bfffffff : pcie-inbound0
  80000000-917fffff : System RAM
    80044000-8058cfff : Kernel text
    8058e000-8061770f : Kernel data
  bd000000-bf7fffff : System RAM
So apparently 0x40127000, where devmem2 wants to look, isn't mapped.
So where do I find the register with offset 0xA8?
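Note that 0xA8 is only the register's offset within the PRM module, not a complete physical address, which is why both attempts fault: __raw_readl(0xA8) dereferences 0xA8 as a kernel virtual address, and devmem2 0xA8 pokes physical address 0xA8. From a kernel driver, the usual pattern is to ioremap() the PRM base address from the TRM and read at base + offset. A minimal module sketch; the PRCM base 0x48180000 here is my assumption for the TI814x, so verify it against your TRM:

#include <linux/init.h>
#include <linux/io.h>
#include <linux/module.h>

/* Assumed PRCM base for TI814x -- check the TRM for your part.
 * 0xA8 is the PRM_RSTST offset from the question. */
#define PRCM_BASE      0x48180000
#define PRM_RSTST_OFF  0xA8

static int __init rstst_init(void)
{
	void __iomem *prcm;
	u32 rstst;

	/* Map the physical PRCM region into kernel virtual address space;
	 * a raw physical address cannot be dereferenced directly. */
	prcm = ioremap(PRCM_BASE, 0x1000);
	if (!prcm)
		return -ENOMEM;

	rstst = readl(prcm + PRM_RSTST_OFF);
	pr_info("PRM_RSTST = 0x%08x\n", rstst);

	iounmap(prcm);
	return 0;
}

static void __exit rstst_exit(void)
{
}

module_init(rstst_init);
module_exit(rstst_exit);
MODULE_LICENSE("GPL");

From user space the same logic applies: devmem2 needs the full physical address rather than the offset, e.g. devmem2 0x481800A8 (again assuming that base).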
After working smoothly for more than 10 months, I suddenly started getting this error in production while doing simple search queries.
{
"error" : {
"root_cause" : [
{
"type" : "circuit_breaking_exception",
"reason" : "[parent] Data too large, data for [<http_request>] would be [745522124/710.9mb], which is larger than the limit of [745517875/710.9mb]",
"bytes_wanted" : 745522124,
"bytes_limit" : 745517875
}
],
"type" : "circuit_breaking_exception",
"reason" : "[parent] Data too large, data for [<http_request>] would be [745522124/710.9mb], which is larger than the limit of [745517875/710.9mb]",
"bytes_wanted" : 745522124,
"bytes_limit" : 745517875
},
"status" : 503
}
Initially, I was getting this circuit_breaking_exception error while doing simple term queries. To debug it I tried a _cat/health query on the Elasticsearch cluster, but got the same error; even the simplest request, GET localhost:9200, gives the same error. I am not sure what happened to the cluster so suddenly.
Here is my circuit breaker status:
"breakers" : {
"request" : {
"limit_size_in_bytes" : 639015321,
"limit_size" : "609.4mb",
"estimated_size_in_bytes" : 0,
"estimated_size" : "0b",
"overhead" : 1.0,
"tripped" : 0
},
"fielddata" : {
"limit_size_in_bytes" : 639015321,
"limit_size" : "609.4mb",
"estimated_size_in_bytes" : 406826332,
"estimated_size" : "387.9mb",
"overhead" : 1.03,
"tripped" : 0
},
"in_flight_requests" : {
"limit_size_in_bytes" : 1065025536,
"limit_size" : "1015.6mb",
"estimated_size_in_bytes" : 560,
"estimated_size" : "560b",
"overhead" : 1.0,
"tripped" : 0
},
"accounting" : {
"limit_size_in_bytes" : 1065025536,
"limit_size" : "1015.6mb",
"estimated_size_in_bytes" : 146387859,
"estimated_size" : "139.6mb",
"overhead" : 1.0,
"tripped" : 0
},
"parent" : {
"limit_size_in_bytes" : 745517875,
"limit_size" : "710.9mb",
"estimated_size_in_bytes" : 553214751,
"estimated_size" : "527.5mb",
"overhead" : 1.0,
"tripped" : 0
}
}
I found a similar GitHub issue that suggests either increasing the circuit breaker memory or disabling it. But I am not sure which to choose. Please help!
Elasticsearch Version 6.3
After some more research, I finally found a solution:
We should not disable the circuit breaker, as that might result in an OOM error and eventually crash Elasticsearch.
Dynamically increasing the circuit breaker memory percentage works, but it is only a temporary solution, because the increased limit might eventually fill up as well.
Finally, there is a third option: increase the overall JVM heap size, which is 1 GB by default. In production it should be sized to the workload, but kept below roughly 30-32 GB and below 50% of the total available memory.
For more info on good JVM memory configuration of Elasticsearch in production, check Heap: Sizing and Swapping.
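For reference, here is a sketch of both knobs. The _cluster/settings endpoint and the indices.breaker.total.limit setting are standard Elasticsearch; the values are only illustrative:

# Temporary: raise the parent circuit breaker limit at runtime
curl -X PUT "localhost:9200/_cluster/settings" -H 'Content-Type: application/json' -d'
{
  "transient" : { "indices.breaker.total.limit" : "80%" }
}'

# Permanent: raise the heap in config/jvm.options (keep Xms equal to Xmx)
-Xms4g
-Xmx4g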
In my case I have an index with large documents: each document is ~30 KB and has more than 130 fields (nested objects, arrays, dates and ids), and I was searching all fields using this DSL query:
query_string: {
query: term,
analyze_wildcard: true,
fields: ['*'], // search all fields
fuzziness: 'AUTO'
}
Full-text searches are expensive, and searching through multiple fields at once is even more so (expensive in terms of computing power, not storage). Therefore:
The more fields a query_string or multi_match query targets, the
slower it is. A common technique to improve search speed over multiple
fields is to copy their values into a single field at index time, and
then use this field at search time.
Please refer to the Elasticsearch docs, which recommend searching as few fields as possible with the help of the copy_to directive.
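For illustration, a minimal copy_to mapping sketch; the index and source field names here are made up, search_field matches the query below, and on 6.x the properties block sits under the mapping type:

PUT my-index
{
  "mappings": {
    "_doc": {
      "properties": {
        "first_name":   { "type": "text", "copy_to": "search_field" },
        "last_name":    { "type": "text", "copy_to": "search_field" },
        "search_field": { "type": "text" }
      }
    }
  }
}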
After I changed my query to search one field:
query_string: {
query: term,
analyze_wildcard: true,
fields: ['search_field'] // search in one field
}
everything worked like a charm.
I got this error with my Docker container, so I increased ES_JAVA_OPTS to 1 GB and now it works without any error.
Here is the docker-compose.yml:
version: '3'
services:
elasticsearch-cont:
image: docker.elastic.co/elasticsearch/elasticsearch:7.9.2
container_name: elasticsearch
environment:
- "ES_JAVA_OPTS=-Xms1024m -Xmx1024m"
- discovery.type=single-node
ulimits:
memlock:
soft: -1
hard: -1
ports:
- 9200:9200
- 9300:9300
networks:
- elastic
networks:
elastic:
driver: bridge
In my case, I also have an index with large documents that store system running logs, and my searches returned every field of each document. I use the Java client API, like this:
TermQueryBuilder termQueryBuilder = QueryBuilders.termQuery("uid", uid);
searchSourceBuilder.query(termQueryBuilder);
When I changed my code like this:
TermQueryBuilder termQueryBuilder = QueryBuilders.termQuery("uid", uid);
searchSourceBuilder.fetchField("uid");   // return only the uid field
searchSourceBuilder.fetchSource(false);  // do not load the large _source at all
searchSourceBuilder.query(termQueryBuilder);
the error disappeared, since the responses no longer carried each document's large _source.
I have a log file with Vert.x logs that I want to show in Kibana. How do I parse the following log? I want to extract only the IP, the request and the response time.
Here is my sample log.
[vert.x-eventloop-thread-7] 2016-09-27T07:13:53.263Z INFO [com.term.local.rest.server.Server] SUCCESS RESPONSE : 1.get.mydata, remoteIp : 192.168.1.1, processing time : 115
From this log I want only the following information to visualise in Kibana:
SUCCESS RESPONSE : 1.get.mydata, remoteIp : 192.168.1.1, processing time : 115
grok {
match => {
"message" => [
"%{GREEDYDATA}SUCCESS RESPONSE : %{NOTSPACE:response}, remoteIp : %{IP:remoteip}, processing time : %{INT:proceesingTime}"
]
}
}
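Applied to the sample line above, this filter should produce fields along these lines (grok captures are strings by default; use %{INT:processingTime:int} in the pattern if you want an actual number):

{
        "response" => "1.get.mydata",
        "remoteip" => "192.168.1.1",
  "processingTime" => "115"
}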
We have created a real-time chat application with Socket.IO, Node.js & MongoDB. It works fine on a local server and on Modulus, but not as expected on AWS.
The socket gets disconnected randomly with a ping timeout after around 60 seconds. I also set "Heart Timeout" & "Heart Interval", but it still gets disconnected.
Here is the config file for Node:
var config = {
local : {
mode : "LOCAL",
port : 8080,
db_path : "mongodb://localhost/local_db",
site_loc : "http://dummy.local/",
api_loc : "http://dummy.dummy.com/"
},
dev : {
mode : "DEV",
port : 8080,
db_path : "mongodb://dbath:27017/dev_db",
site_loc : 'http://dummy.dummy.com/',
api_loc : 'http://dummy.dummy.com/'
},
stage : {
mode : "STAGE",
port : 3000,
db_path : "mongodb://localhost:27017/stage_db",
site_loc : 'http://dummy.dummy.com/',
api_loc : 'http://dummy.dummy.com/'
},
production : {
mode : "PROD",
port : 443,
db_path : "mongodb://localhost:27017/live_db",
site_loc : 'https://dummy.com/',
api_loc : 'https://dummy.dummy.com/'
}
}
module.exports = function(mode) {
return config[mode || process.argv[2]] || config.local;
}
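For reference, the exported function picks the mode from its argument or from the command line, falling back to local; a hypothetical usage sketch:

// server.js (hypothetical) -- run as: node server.js stage
var config = require('./config')();
console.log(config.mode, config.port); // prints: STAGE 3000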
I resolved this issue by just adding the port number to the link, like:
http://www.chat.com:3000/
By the way, thanks everyone...
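For what it's worth, that fix suggests the disconnects came from a proxy or load balancer in front of the instance: an AWS ELB's default idle timeout is 60 seconds, which matches the ~60-second ping timeouts, and connecting straight to the Node port bypasses it. On the client side the explicit port simply goes into the connection URL (a minimal sketch, assuming the Socket.IO 1.x client):

var socket = io('http://www.chat.com:3000/');
socket.on('connect', function () {
    console.log('connected to chat server');
});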