Varnish version: 4.0.2
I can see that Varnish crashes frequently; the error message is:
Child (2094) died signal=6
Child (2094) Panic message:
Assert error in vbf_fetch_thread(), cache/cache_fetch.c line 842:
Condition(uu == bo->fetch_obj->len) not true....
Is this a problem with my configuration, or a Varnish bug?
You are using a very old Varnish version, and this crash is related to an old bug; upgrading to a current release should resolve it.
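If it helps, you can confirm the installed version before upgrading. The upgrade line below is only a Debian/Ubuntu example; the package name and repository depend on your distribution:
# Print the installed Varnish version (varnishd writes this to stderr)
varnishd -V
# Upgrade to a current release (Debian/Ubuntu example; adjust for your distro
# or use the official Varnish package repositories)
sudo apt-get update && sudo apt-get install --only-upgrade varnish
sudo systemctl restart varnish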
I am getting an intermittent issue during server startup of hybris (version 1905).
--> Wrapper Started as Console
wrapper-macosx-universal-64(12108,0x2001ce600) malloc: Heap corruption detected, free list is damaged at 0x600002104910
*** Incorrect guard value: 105553116300640
wrapper-macosx-universal-64(12108,0x2001ce600) malloc: *** set a breakpoint in malloc_error_break to debug
After hitting this issue, as a workaround I run ant clean all (the exact steps I use are listed below); sometimes it works and sometimes it doesn't. Is there any solution?
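For reference, these are the workaround steps I run, assuming the standard platform layout (so the paths are assumptions):
# run from <HYBRIS_HOME>/hybris/bin/platform
. ./setantenv.sh      # set up the ant environment shipped with the platform
ant clean all         # the full clean build mentioned above
./hybrisserver.sh     # start the server again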
After following the instructions specified here to compile the RedisJSON source code, I got the rejson.so file at project_root/target/release. I then ran sudo redis-server --loadmodule /home/username/RedisJSON/target/release/rejson.so to load the module, but I got this error message:
Server initialized
7666:M 14 Sep 2021 13:27:38.795 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
7666:M 14 Sep 2021 13:27:38.795 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
7666:M 14 Sep 2021 13:27:38.862 * <ReJSON> Exported RedisJSON_V1 API
thread '<unnamed>' panicked at 'called `Option::unwrap()` on a `None` value', /root/.cargo/registry/src/github.com-1ecc6299db9ec823/redis-module-0.23.0/src/raw.rs:580:42
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
fatal runtime error: failed to initiate panic, error 5
Aborted
How can I get this fixed, please?
RedisJSON requires Redis 6+; it looks like you're running an older version of Redis.
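A quick way to check which Redis version you are actually starting (the redis-server on your PATH may not be the one you expect):
# Version of the redis-server binary on the PATH
redis-server --version
# Or ask an already running instance directly
redis-cli INFO server | grep redis_version
# If this reports anything below 6.0, upgrade Redis before loading rejson.so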
Node version: 4.8.0
Platform: Linux 3.16.0-4-amd64 #1 SMP Debian 3.16.36-1+deb8u2 (2016-10-19) x86_64 GNU/Linux
Node crashed during garbage collection, but without any other high-level pattern (maybe related to https://github.com/nodejs/node/issues/3715).
Unfortunately I don't have any code to reproduce it, as I was not able to isolate the problem.
This is the crash stack trace captured with the segfault-handler module:
PID 24495 received SIGSEGV for address: 0x3809f3d021f8
<path_node_modules>/segfault-handler/build/Release/segfault-handler.node(+0x1a5b)[0x7f7dd565ca5b]
/lib/x86_64-linux-gnu/libpthread.so.0(+0xf890)[0x7f7dd9c20890]
/usr/bin/nodejs(_ZN2v88internal20MarkCompactCollector22ProcessWeakCollectionsEv+0xfd)[0xaec4dd]
/usr/bin/nodejs(_ZN2v88internal20MarkCompactCollector15MarkLiveObjectsEv+0x214)[0xaf3a14]
/usr/bin/nodejs(_ZN2v88internal20MarkCompactCollector14CollectGarbageEv+0x11)[0xaf47e1]
/usr/bin/nodejs(_ZN2v88internal4Heap11MarkCompactEv+0x60)[0xaaafe0]
/usr/bin/nodejs(_ZN2v88internal4Heap24PerformGarbageCollectionENS0_16GarbageCollectorENS_15GCCallbackFlagsE+0x4c0)[0xac2be0]
/usr/bin/nodejs(_ZN2v88internal4Heap14CollectGarbageENS0_16GarbageCollectorEPKcS4_NS_15GCCallbackFlagsE+0x238)[0xac30f8]
/usr/bin/nodejs(_ZN2v88internal4Heap15HandleGCRequestEv+0x8f)[0xac3aef]
/usr/bin/nodejs(_ZN2v88internal10StackGuard16HandleInterruptsEv+0x31c)[0xa6041c]
/usr/bin/nodejs(_ZN2v88internal18Runtime_StackGuardEiPPNS0_6ObjectEPNS0_7IsolateE+0x2b)[0xca51ab]
[0x2f2137d0963b]
And sometimes I also get this other stack:
PID 7545 received SIGSEGV for address: 0x68233500009
/home/documentapp/node_modules/segfault-handler/build/Release/segfault-handler.node(+0x1a5b)[0x7f89249bfa5b]
/lib/x86_64-linux-gnu/libpthread.so.0(+0xf890)[0x7f8928f83890]
/usr/bin/nodejs(_ZN2v88internal32IncrementalMarkingMarkingVisitor26VisitFixedArrayIncrementalEPNS0_3MapEPNS0_10HeapObjectE+0x3fe)[0xad51ee]
/usr/bin/nodejs(_ZN2v88internal18IncrementalMarking4StepElNS1_16CompletionActionENS1_18ForceMarkingActionENS1_21ForceCompletionActionE+0x30c)[0xad2a7c]
/usr/bin/nodejs(_ZN2v88internal8NewSpace15SlowAllocateRawEiNS0_19AllocationAlignmentE+0x78)[0xb00f18]
/usr/bin/nodejs(_ZN2v88internal4Heap11AllocateRawEiNS0_15AllocationSpaceES2_NS0_19AllocationAlignmentE+0x109)[0xa64719]
/usr/bin/nodejs(_ZN2v88internal4Heap20AllocateFillerObjectEibNS0_15AllocationSpaceE+0x19)[0xaabd19]
/usr/bin/nodejs(_ZN2v88internal7Factory15NewFillerObjectEibNS0_15AllocationSpaceE+0x2d)[0xa64c5d]
/usr/bin/nodejs(_ZN2v88internal29Runtime_AllocateInTargetSpaceEiPPNS0_6ObjectEPNS0_7IsolateE+0x5e)[0xca52ee]
[0x1e31ede06355]
Can someone give me a hint on how I can find the problem? Thanks.
If you prefer, you can also answer on the Node issues that I have created:
https://github.com/nodejs/node/issues/11606
Additional information:
Node framework: express, Sails.js
My native modules, found with find node_modules -name '*.node', are:
node_modules/bcrypt/build/Release/bcrypt_lib.node
node_modules/bcrypt/build/Release/obj.target/bcrypt_lib.node
node_modules/segfault-handler/build/Release/segfault-handler.node
node_modules/segfault-handler/build/Release/obj.target/segfault-handler.node
The problem seems to be caused by MongoDB logs that fill up the disk space at some point. It was actually hard to see because we clean them up periodically, so the situation was not critical at the moment I checked.
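For anyone hitting something similar, this is roughly how I check it now; the log path below is an assumption, use the systemLog.path from your mongod.conf:
# Overall disk usage per filesystem
df -h
# Space taken by the MongoDB logs (path is an assumption)
du -sh /var/log/mongodb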
Fatal Error -32988: LoadLibrary failed, rc=193 [MsgId: MERR-32988]
Fatal Error -26000: xfbLrwiWebInfraGlobalInitOK failed [MsgId: MERR-26000]
Warning: Extension lrwreplaymain.dll reports error -1 on call to function ExtPerProcessInitialize
Error: Thread Context: Call to service of the driver failed, reason - thread context wasn't initialized on this thread.
The recording passes, but it cannot be replayed due to the above error.
Thanks for the help in advance.
Either lrwreplaymain.dll or one of the DLLs it depends on is corrupted (error code 193 = ERROR_BAD_EXE_FORMAT). This may have been caused by a virus attack, file system corruption, etc.
The easiest solution is to re-install LoadRunner. You can also check which DLLs are affected with Dependency Walker (x86): just start it and open <LoadRunner installation folder>\bin\lrwreplaymain.dll.
I am trying to upgrade a Cassandra 2.1.0 cluster to 2.1.8 (latest release).
When I start the first node with the 2.1.8 runtime, I get an error and the node refuses to start.
This is the error's stack trace:
org.apache.cassandra.io.FSReadError: java.lang.NullPointerException
at org.apache.cassandra.db.ColumnFamilyStore.removeUnfinishedCompactionLeftovers(ColumnFamilyStore.java:642) ~[apache-cassandra-2.1.8.jar:2.1.8]
at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:302) [apache-cassandra-2.1.8.jar:2.1.8]
at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:524) [apache-cassandra-2.1.8.jar:2.1.8]
at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:613) [apache-cassandra-2.1.8.jar:2.1.8]
Caused by: java.lang.NullPointerException: null
at org.apache.cassandra.db.ColumnFamilyStore.removeUnfinishedCompactionLeftovers(ColumnFamilyStore.java:634) ~[apache-cassandra-2.1.8.jar:2.1.8]
... 3 common frames omitted
FSReadError in Failed to remove unfinished compaction leftovers (file: /home/nudgeca2/datas/data/main/segment-97b5ba00571011e49a928bffe429b6b5/main-segment-ka-15432-Statistics.db). See log for details.
at org.apache.cassandra.db.ColumnFamilyStore.removeUnfinishedCompactionLeftovers(ColumnFamilyStore.java:642)
at org.apache.cassandra.service.CassandraDaemon.setup(CassandraDaemon.java:302)
at org.apache.cassandra.service.CassandraDaemon.activate(CassandraDaemon.java:524)
at org.apache.cassandra.service.CassandraDaemon.main(CassandraDaemon.java:613)
Caused by: java.lang.NullPointerException
at org.apache.cassandra.db.ColumnFamilyStore.removeUnfinishedCompactionLeftovers(ColumnFamilyStore.java:634)
... 3 more
Exception encountered during startup: java.lang.NullPointerException
The cluster has 7 nodes and runs on AWS Linux EC2 instances.
The node I am trying to upgrade was stopped after a nodetool drain.
I then tried to roll back to the 2.1.0 runtime, but I now get a similar error.
I also tried to stop and start another node, and everything was OK; it restarted without any problem.
I tried to touch the missing file (since it is supposed to be removed, I thought it might not need any specific content). Two other files produced the same error, so I touched them as well. In the end the node fails further on while trying to read these files.
Does anyone have any idea what I should do?
Thank you for any help.
It might be worth opening a Jira for that issue, so if nothing else, they can catch the NPE and provide a better error message.
It looks like it's trying to open:
file: /home/nudgeca2/datas/data/main/segment-97b5ba00571011e49a928bffe429b6b5/main-segment-ka-15432-Statistics.db
It's possible that it's trying to read that file because it finds the associated data file (/home/nudgeca2/datas/data/main/segment-97b5ba00571011e49a928bffe429b6b5/main-segment-ka-15432-Data.db). Does that data file exist? I'd be tempted to move it out of the way and see if the node starts properly; see the sketch below.
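Something along these lines, with the node stopped first (the backup directory is just an assumption; keep the files so you can put them back if needed):
# Move the whole sstable generation (Data, Index, Statistics, ...) out of the data directory
mkdir -p /home/nudgeca2/sstable-backup
mv /home/nudgeca2/datas/data/main/segment-97b5ba00571011e49a928bffe429b6b5/main-segment-ka-15432-* \
   /home/nudgeca2/sstable-backup/
# Then try starting the node again; if it comes up cleanly, a nodetool repair can
# re-stream the data that lived in those sstables from the other replicas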