Unable to load Redis module RedisJSON - Linux

After following the instructions specified here to compile the RedisJSON source code, I got the rejson.so file at project_root/target/release. I then ran sudo redis-server --loadmodule /home/username/RedisJSON/target/release/rejson.so to load the module, but got this error message:
Server initialized
7666:M 14 Sep 2021 13:27:38.795 # WARNING overcommit_memory is set to 0! Background save may fail under low memory condition. To fix this issue add 'vm.overcommit_memory = 1' to /etc/sysctl.conf and then reboot or run the command 'sysctl vm.overcommit_memory=1' for this to take effect.
7666:M 14 Sep 2021 13:27:38.795 # WARNING you have Transparent Huge Pages (THP) support enabled in your kernel. This will create latency and memory usage issues with Redis. To fix this issue run the command 'echo never > /sys/kernel/mm/transparent_hugepage/enabled' as root, and add it to your /etc/rc.local in order to retain the setting after a reboot. Redis must be restarted after THP is disabled.
7666:M 14 Sep 2021 13:27:38.862 * <ReJSON> Exported RedisJSON_V1 API
thread '<unnamed>' panicked at 'called `Option::unwrap()` on a `None` value', /root/.cargo/registry/src/github.com-1ecc6299db9ec823/redis-module-0.23.0/src/raw.rs:580:42
note: run with `RUST_BACKTRACE=1` environment variable to display a backtrace
fatal runtime error: failed to initiate panic, error 5
Aborted
How can I get this fixed, please?

RedisJSON requires Redis 6+; it seems you're running an older version of Redis.
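If you're not sure which version you have, you can check it before loading the module, for example:
redis-server --version
redis-cli INFO server | grep redis_version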

Submitting first job to pacemaker

I followed this guide:
https://clusterlabs.org/doc/en-US/Pacemaker/1.1-pcs/html/Clusters_from_Scratch/
I stayed with the Active/Passive DRBD file system sharing. I had to reboot my cluster and now I am getting the following error:
Current DC: rbx-1 (version 1.1.16-12.el7_4.4-94ff4df) - partition with quorum
Last updated: Tue Nov 28 17:01:14 2017
Last change: Tue Nov 28 16:40:09 2017 by root via cibadmin on rbx-1
2 nodes configured
5 resources configured
Node rbx-2: UNCLEAN (offline)
Online: [ rbx-1 ]
Full list of resources:
ClusterIP (ocf::heartbeat:IPaddr2): Started rbx-1
WebSite (ocf::heartbeat:apache): Stopped
Master/Slave Set: WebDataClone [WebData]
WebData (ocf::linbit:drbd): FAILED rbx-1 (blocked)
Stopped: [ rbx-2 ]
WebFS (ocf::heartbeat:Filesystem): Stopped
Failed Actions:
* WebData_stop_0 on rbx-1 'invalid parameter' (2): call=20, status=complete, exitreason='none',
last-rc-change='Tue Nov 28 16:27:58 2017', queued=0ms, exec=3ms
Daemon Status:
corosync: active/disabled
pacemaker: active/disabled
pcsd: active/enabled
Any ideas?
Also, does anyone have any recommended guides for submitting jobs?
This post is relatively old at this point but I'll leave this here for others to find if they stumble upon the same issue.
This problem has to do with the DRBD resource agent script that Pacemaker uses. If it's broken, missing, or has incorrect permissions, you can get an error like this. In CentOS 7 that script is located at /usr/lib/ocf/resource.d/linbit/drbd
Note: This is specifically for the guide mentioned by OP but may help you:
Section 7.1 has a big "IMPORTANT" block that talks about replacing the Pacemaker integration script due to a bug. If you run the command it gives you there, you actually replace the script with a 404 error page, which obviously doesn't work and causes the error above. You can fix this by restoring the original script, either by reinstalling DRBD...
yum remove -y kmod-drbd84 drbd84-utils
yum install -y kmod-drbd84 drbd84-utils
...or by finding just the drbd script elsewhere and adding/replacing it at /usr/lib/ocf/resource.d/linbit/drbd. Make sure its permissions are correct and that it is executable.
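To sanity-check the restored script (assuming the CentOS 7 path above), verify it is actually a shell script rather than a saved 404 page, and make it executable:
head -n 1 /usr/lib/ocf/resource.d/linbit/drbd
chmod 755 /usr/lib/ocf/resource.d/linbit/drbd
The first command should print a shebang line such as #!/bin/sh; if you see HTML instead, the script is still the broken download.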
Hope that helps!

Developer sample docker compose up timeout

I am trying the Chaincode for Developers samples, and running docker compose up for the runtime (Terminal 1 - Start the network) fails:
orderer | 2017-11-11 13:48:52.252 UTC [orderer/common/deliver] deliverBlocks -> DEBU 32c [channel: myc] Received seekInfo (0xc420a12e60) start: > stop: > from 172.18.0.3:33048
ERROR: compose.cli.errors.log_timeout_error: An HTTP request took too long to complete. Retry with --verbose to obtain debug information.
If you encounter this issue regularly because of slow network conditions, consider setting COMPOSE_HTTP_TIMEOUT to a higher value (current value: 60).
This is running on Ubuntu 16.04. I don't see anything that would stop connections, and the compose file looks right. Everything was just downloaded yesterday, Nov 10.
This is not really a major error in terms of actually running the sample - it's an error occasionally thrown by Docker Compose itself. As the error says, you can simply set the variable prior to running the sample(s):
export COMPOSE_HTTP_TIMEOUT=600
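If you don't want to export it in every session, Docker Compose also picks the variable up from a .env file next to your docker-compose.yml, for example:
echo "COMPOSE_HTTP_TIMEOUT=600" >> .env
docker-compose up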

Neo4j refused to connect

Characteristics:
Linux
Neo4j version 3.2.1
Remote access
Installation
I installed Neo4j and gave the folder chmod 777.
I'm running it remotely on my machine and have already enabled non-local access.
Running neo4j start, I get this message:
Active database: graph.db
Directories in use:
home: /home/cloudera/Muna/apps/neo4j
config: /home/cloudera/Muna/apps/neo4j/conf
logs: /home/cloudera/Muna/apps/neo4j/logs
plugins: /home/cloudera/Muna/apps/neo4j/plugins
import: /home/cloudera/Muna/apps/neo4j/import
data: /home/cloudera/Muna/apps/neo4j/data
certificates: /home/cloudera/Muna/apps/neo4j/certificates
run: /home/cloudera/Muna/apps/neo4j/run
Starting Neo4j.
WARNING: Max 1024 open files allowed, minimum of 40000 recommended. See the Neo4j manual.
Started neo4j (pid 9469). It is available at http://0.0.0.0:7474/
There may be a short delay until the server is ready.
See /home/cloudera/Muna/apps/neo4j/logs/neo4j.log for current status.
and it is not connecting in the browser.
Running neo4j console gives:
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 409600000 bytes for AllocateHeap
# An error report file with more information is saved as:
# /home/cloudera/hs_err_pid18598.log
Where could the problem be coming from?
Firstly, you should set the maximum open files to 40000, which is the recommended value; then you will no longer get the WARNING. See: http://neo4j.com/docs/1.6.2/configuration-linux-notes.html
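A minimal sketch of raising the limit (the account name below is an assumption; use whichever user runs Neo4j):
ulimit -n 40000
or persist it in /etc/security/limits.conf:
neo4j soft nofile 40000
neo4j hard nofile 40000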
Secondly, 'failed to allocate memory' means that the Java virtual machine cannot allocate the memory it was started with.
It can be a misconfiguration, or you physically do not have enough memory.
Please read the memory sizing guidelines here:
https://neo4j.com/docs/operations-manual/current/performance/
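If it is a misconfiguration, here is a sketch of what you could set in conf/neo4j.conf for Neo4j 3.x (the values are illustrative, not recommendations; size them to your machine):
dbms.memory.heap.initial_size=256m
dbms.memory.heap.max_size=256m
dbms.memory.pagecache.size=128m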

I have just begun to use Android Studio and I can't seem to get my Gradle to sync with my application. Here is what it shows:

7:46:20 PM Gradle sync started
7:46:35 PM Gradle sync failed: Unable to start the daemon process.
This problem might be caused by incorrect configuration of the daemon.
For example, an unrecognized jvm option is used.
Please refer to the user guide chapter on the daemon at https://docs.gradle.org/2.10/userguide/gradle_daemon.html
Please read the following process output to find out more:
-----------------------
Error occurred during initialization of VM
Could not reserve enough space for object heap
Error: Could not create the Java Virtual Machine.
Error: A fatal exception has occurred. Program will exit.
Consult IDE log for more details (Help | Show Log)
The JVM version is 1.7.0_79 and the Studio version is 2.1.1.
Error occurred during initialization of VM Could not reserve enough space for object heap Error: Could not create the Java Virtual Machine.
There's not enough space available in RAM. To fix it, go to /android-studio-dir/bin and edit studio.vmoptions and studio64.vmoptions to increase -Xmx and reserve more memory for Java. Note that the number of active processes may influence this.
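For example, studio64.vmoptions might end up looking like this (illustrative values, not recommendations):
-Xms256m
-Xmx2048m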
Probably the /tmp location is full. Check with the df command:
df
You should see output with a line like this:
tmpfs 102400 102312 88 100% /tmp
To change the size of /tmp, remount it with a larger size:
sudo mount -o remount,size=2G /tmp
Done! Now it should work.
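To keep the larger /tmp across reboots, you can also update the tmpfs entry in /etc/fstab (a sketch; adapt it to your existing entry):
tmpfs /tmp tmpfs defaults,size=2G 0 0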

PHP exec(myexe) fails in PHP App, but not CLI. Fails Running Under User "apache"

I have a custom program (e.g. myexe) executed by a web app using PHP's exec() function. It does not fail when run via the PHP CLI, nor does it fail when I run it from the command line as my own user. I have profiled myexe with valgrind and there are no memory issues. myexe is about 26MB in size.
To simplify the situation, I have run myexe on the command line under the user 'apache' and reproduced the failure.
su -s /bin/sh apache -c "/usr/local/bin/myexe parm1 parm2..."
==> Segmentation fault (core dumped)
BUT when I change the user to myself and run the same command above, it works.
su -s /bin/sh mike -c "/usr/local/bin/myexe parm1 parm2..."
==> WORKS
Here's the error from the system log file:
Jul 9 18:26:15 DEVSTN-1 kernel: myexe[27352]: segfault at 7fffa2bf9ff8 ip 0000000000410324 sp 00007fffa2bfa000 error 6 in myexe[400000+5ae000]
Jul 9 18:26:16 DEVSTN-1 abrt[27353]: Saved core dump of pid 27352 (/usr/local/bin/myexe) to /var/spool/abrt/ccpp-2015-07-09-18:26:15-27352 (13631488 bytes)
Jul 9 18:26:16 DEVSTN-1 abrtd: Directory 'ccpp-2015-07-09-18:26:15-27352' creation detected
Jul 9 18:26:17 DEVSTN-1 abrtd: Executable '/usr/local/bin/myexe' doesn't belong to any package and ProcessUnpackaged is set to 'no'
Jul 9 18:26:17 DEVSTN-1 abrtd: 'post-create' on '/var/spool/abrt/ccpp-2015-07-09-18:26:15-27352' exited with 1
Jul 9 18:26:17 DEVSTN-1 abrtd: Deleting problem directory '/var/spool/abrt/ccpp-2015-07-09-18:26:15-27352'
My configuration:
CentOS6 2.6.32-504.23.4.el6.x86_64
Apache/2.2.15 (CentOS)
PHP Version 5.3.3
Am I correct with assuming that PHP has nothing to do with the error?
What should I do next?
Correct; PHP has nothing to do with the error. This is a segmentation fault caused by invalid memory access (either overflowing a buffer, or accessing already-freed memory) in myexe. It seems to have saved a core dump to /var/spool/abrt/ccpp-2015-07-09-18:26:15-27352, so try debugging with GDB:
gdb /usr/local/bin/myexe -c /var/spool/abrt/ccpp-2015-07-09-18:26:15-27352
(gdb) bt
And try to see where the executable is failing. To get useful output, it will need to be compiled with debugging symbols. If it doesn't fail running as root or a different user, or running in an interactive terminal, I'd look for bugs that could be triggered by being unable to open a file, unable to read an expected environment variable, etc. to help isolate your problem.
Running the executable under strace might help figure out what's going on as well.
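For example, tracing file access while running as apache (parm1 and parm2 are the OP's placeholders):
su -s /bin/sh apache -c "strace -f -e trace=file /usr/local/bin/myexe parm1 parm2"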
Found the problem by entering a bash shell as user apache and running the program under gdb.
It turns out myexe was trying to create a directory under the user's home dir (/home/apache), which doesn't exist.
What helped me was knowing how to start a shell under a different user and using gdb.
Here's the command to start a shell under another user (apache):
su -s /bin/bash apache
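From that shell you can then run the program under gdb and grab a backtrace when it crashes (again, parm1/parm2 are placeholders):
gdb --args /usr/local/bin/myexe parm1 parm2
(gdb) run
(gdb) bt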
