This is the error message:
/home/hulukfyacom/nodevenv/myapp/10/bin/npm: fork: Cannot allocate memory
/home/hulukfyacom/nodevenv/myapp/10/bin/npm: line 13: /home/hulukfyacom/myapp/activate: No such file or directory
/home/hulukfyacom/nodevenv/myapp/10/bin/npm: line 14: /home/hulukfyacom/myapp/set_env_vars.py: No such file or directory
mkdir: cannot create directory '/lib/node_modules': Read-only file system
ln: failed to create symbolic link '/lib/package.json': Read-only file system
/home/hulukfyacom/nodevenv/myapp/10/bin/npm: line 35: /usr/bin/npm: No such file or directory
I am currently hosting my Node.js website, and when I try to install npm it shows me this error. How can I solve it? Any help would be appreciated.
You will need to create symlinks to your node and npm executables:
ln -s /opt/cpanel/ea-nodejs10/bin/npm /usr/local/sbin/npm
ln -s /opt/cpanel/ea-nodejs10/bin/node /usr/local/sbin/node
Not 100% sure if this is accurate, since I don't have access to cPanel to test.
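Before creating the links, it may be worth confirming where the executables actually live (the ea-nodejsXX directory name depends on which Node.js version cPanel installed, so treat the path below as an example):
ls /opt/cpanel/ | grep nodejs
ls -l /opt/cpanel/ea-nodejs10/bin/node /opt/cpanel/ea-nodejs10/bin/npm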
On further inspection, though, it looks like you may be out of memory, or the file permissions may be wrong. If the symlinks do not help, can you run the following commands?
htop
If your memory is maxed out, this may be the source of your problem.
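If htop is not installed (it often isn't on shared hosts), free -m reports the same memory figures, assuming a standard procps installation:
free -m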
df -lh
This command will list the free hard disk space. Make sure the disk is not full.
ls -l
This will list the files and their permissions; make sure that the file permissions are correct.
The above presumes that you have command-line access. You should also be able to check the resource usage from the cPanel interface if you need to.
Related
I was using git fine, but suddenly when I tried to use git status it showed me
fatal: unable to access '/home/username/.gitconfig': Bad message
At first I thought it was because my git config file is in the venv, but that one was OK; as the error shows, it is because of the .gitconfig file in my user directory!
First of all, following what I found on the internet, I used git config --global user.name "NewUser" and git config --global user.email newuser@example.com,
but it ended in the same error as before.
After I closed and reopened VS Code, the Git extension gave me a warning that it can't detect any git executable on the system and that I need to install Git again!
And when I checked git --version it gave me the same error again!
I literally can't use any git command!
Then I tried to write the configuration by myself, but when I opened the file in the terminal with nano there was nothing in it! And when I wanted to save the changes it gave another error:
[ Error writing .gitconfig: Bad message ]
Then I looked at the permissions of the file, and there were just question marks instead of the normal permissions.
Then I tried to change the permissions with chmod, but it gave an error again:
chmod: cannot access '.gitconfig': Bad message
Then I tried to remove it, but it can't be removed either!
rm: cannot remove '.gitconfig': Bad message
I installed and removed git several times! That did not work either.
I am using Manjaro 20.2.
Any ideas would be appreciated!
UPDATE
These are the errors from dmesg; a bunch of them appear here and there.
As mentioned here, the question marks in the ls output just indicate that it could not stat() the directory entry.
It could be a disk error or possibly filesystem corruption: on an Arch-based distro, see sudo journalctl --since=today or, for the previous boot, sudo journalctl -b -1 to reveal further details.
Check the partition you are in (df -h .) and see if this is a filesystem mounting issue.
If you see a bunch of messages like:
EXT4-fs error (device nvme0n1p2): ext4_lookup:1574: inode #4833958: comm ls: iget: checksum invalid
You should look at your partition from an external session (meaning a live session from a USB disk), as in here:
Rebooted into the live USB
Mounted a secondary drive.
fsck.ext4 -p [my root device]
fsck.ext4 -z [undo file on secondary drive] [my root device]
Stepped through all errors and allowed fsck to fix them
Rebooted into the system and opted to reinstall all packages (see script below)
Rebooted again ...
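For reference, a minimal sketch of that "reinstall all packages" step on an Arch-based system such as Manjaro, assuming the package database itself survived the corruption:
# Reinstall every natively-installed package from the repositories
pacman -Qqn | sudo pacman -S -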
The OP amir-mohammadian confirms in the comments:
Because I am in my first steps with Linux, I always have a live boot of my distro,
so it was quick. When I used fsck it first said I can't use -p or -z because of some errors and have to do it manually, so I was just typing y and y and y!
And when I booted again it worked!!
Maybe this will help:
sudo chown username .gitconfig
My server is very slow right now. I asked my provider to check what's wrong, and I got this:
root@Pixel:~# tail -f /var/log/apache2/error.log
sh: 1: compgen: not found
ls: cannot access /usr/bin/convert: No such file or directory
ls: cannot access /usr/local/bin/convert: No such file or directory
sh: 1: compgen: not found
ls: cannot access /usr/bin/convert: No such file or directory
ls: cannot access /usr/local/bin/convert: No such file or directory
sh: 1: compgen: not found
ls: cannot access /usr/bin/convert: No such file or directory
ls: cannot access /usr/local/bin/convert: No such file or directory
sh: 1: compgen: not found
and that Apache is consuming CPU.
We are using WordPress with WooCommerce on our site.
Any fix?
You have a program, or a WordPress add-on, trying to use ImageMagick's convert program for image manipulation. But ImageMagick is not installed, so you need to install it.
If you have SSH access, I would suggest:
sudo apt-get install imagemagick
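Once it's installed, you can confirm that the binary the log is complaining about is now on the PATH (assuming a Debian/Ubuntu host, where convert ships with the imagemagick package):
command -v convert
convert -version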
Let me know if this is not an option or doesn't help.
I'm using a Linux machine. The Elasticsearch installation guide told me to simply download and unzip the file, and then run bin/elasticsearch. But when I ran it, I got this error:
$ bin/elasticsearch
-bash: bin/elasticsearch: No such file or directory
I can clearly see the elasticsearch directory in my home directory, and all the files within it seem intact. What do I do?
If you are trying to run the file with:
bin/elasticsearch
you get that error because the path is resolved relative to your current directory.
Try giving the full path.
First, move to the directory with cd:
cd path/to/your/directory
Then use . to run it from your current position:
./bin/elasticsearch
Note: If you get a permission error or something similar, try giving execute permission to your file:
chmod u+x ./bin/elasticsearch
I am trying to run Go's executable file after using the go build command instead of typing go run filename.go.
I typed go build in the directory where the Go source file resides. After the executable file had been created, I typed ./filename to run it. Then the terminal displayed this line:
bash: ./filename: Permission denied
I had tried to change the permissions of the file by typing:
chmod u+x filename
But this action doesn't have any effect. The permission-denied error still occurs whenever I type ./filename.
Is there another way to build a Go application from source code and then run it from the executable file?
Everything works well if I do this task in the Windows command prompt: after typing go build, filename.exe is created and there is no problem when I run it by typing ./filename.exe.
NTFS and FAT have different permission models than Unix. In particular, there is no executable flag on such a file system, so calling chmod a+x FILE is a no-op. Linux emulates classical Unix permissions on NTFS and FAT file systems through mount options: a permission mask determines the apparent permissions of every file on the mount.
To fix these problems, either move the executable to a different file system or change the mount flags to use a permission mask that enables the executable flag (for all files).
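As a rough sketch, assuming the source tree lives on an NTFS partition mounted at /mnt/windows (a hypothetical mount point) and that your home directory is on a native Linux filesystem:
# Check which filesystem and mount options the working directory uses
df -h .
mount | grep /mnt/windows
# Option 1: write the binary to a native Linux filesystem and run it there
go build -o ~/filename .
~/filename
# Option 2: remount with a mask that keeps the execute bit
# (option names depend on the driver, e.g. fmask/dmask for ntfs-3g and vfat;
# if remount doesn't take the new options, unmount and mount again)
sudo mount -o remount,fmask=0022,dmask=0022 /mnt/windows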
We're trying to use oprofile to track down performance problems on a server cluster. However, the servers in question have a read-only file system, where /var/tmp is the only writeable directory.
OProfile wants to create two directories whenever it runs: /root/.oprofile and /var/lib/oprofile, but it can't, because the filesystem is read-only. I can use the --session-dir command-line option to make it write its logs somewhere other than /var/lib, but I can't find any such option to make it use a directory other than /root/.oprofile.
The filesystem is read-only because it is on non-writeable media, not because of permissions -- i.e., not even the superuser can write to those directories. We can cook a new ROM image of the filesystem (which is how we installed oprofile, obviously), but there is no way for a runtime program to write to /root, whether it is superuser or not.
I tried creating a symlink in the ROM that points /root/.oprofile -> /var/tmp/oprofile, but apparently oprofile doesn't see this symlink as a directory, and fails when run:
redacted@redacted:~$ sudo opcontrol --no-vmlinux --start --session-dir=/var/tmp/oprofile/foo
mkdir: cannot create directory `/root/.oprofile': File exists
Couldn't mkdir -p /root/.oprofile
We must run our profilers on this particular system, because the performance issues we're trying to investigate don't manifest if we build and run the app on a development server. We can't just run our tests on a programmer's workstation and profile the app there, because the problem doesn't happen there.
Is there some way to configure oprofile so that it doesn't use /root?
I guess it should be as simple as overriding the HOME environment variable:
HOME=/tmp/fakehome sudo -E opcontrol --no-vmlinux --start --session-dir=/var/tmp/oprofile/foo
If that doesn't work out, you could have a look at
unionfs
aufs
to create a writable overlay. You might even just mount tmpfs on /root, or something simple like that.
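A minimal sketch of the tmpfs route, assuming it's acceptable for /root to be an empty in-memory directory for the duration of the profiling run:
# Hide the read-only /root behind a small writable tmpfs
sudo mount -t tmpfs -o size=16m tmpfs /root
# opcontrol can now create /root/.oprofile as it wants to
sudo opcontrol --no-vmlinux --start --session-dir=/var/tmp/oprofile/foo
# Undo it afterwards (the tmpfs contents are discarded)
sudo umount /root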
It turns out that this directory is hardcoded into the opcontrol bash script:
# location for daemon setup information
SETUP_DIR="/root/.oprofile"
SETUP_FILE="$SETUP_DIR/daemonrc"
Editing those lines seemed to get it working, more or less.
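For example, when building the ROM image (paths are illustrative; adjust to wherever opcontrol is installed and to a directory that is writable at runtime):
sed -i 's|^SETUP_DIR="/root/.oprofile"|SETUP_DIR="/var/tmp/oprofile-setup"|' /usr/bin/opcontrol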