I suppose that "nft list ruleset" only lists the current rules on my system. However, when I use...
nft list ruleset > nftables.conf
...I expected to see the rules in this file, but instead the file ends up totally blank.
I suspect the command "nft list ruleset > nftables.conf" is wrong, but even so, why does it erase my nftables file?
I ran into this when I didn't run it as the root user. The nft tool quits with no errors when you're not root.
If you run the command nft list ruleset without root rights, it will print nothing. Redirecting nothing to the file with > nftables.conf will then end up erasing anything you had in that file.
I'd suggest first checking that the ruleset gets correctly displayed by running sudo nft list ruleset without the redirection to a file, and when the output looks good, add > nftables.conf to the command to save the output to file.
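For example (a minimal sketch; the file name is simply the one from your question):
sudo nft list ruleset                    # first check that the rules are actually printed
sudo nft list ruleset > nftables.conf    # then save them; nftables.conf is created and owned by your own user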
So I was wondering if there's a way to force all files created in e.g. the /tmp/test directory to always have execute permission?
I tried via setfacl, but that doesn't allow files to have execute permissions; strangely enough, it does allow it for directories...
Any other ideas? I could do the obvious thing and make a cron job or script that loops and just adds those permissions in that directory, but that's a bit ham-fisted and rough.
Files cannot be given execute permission by default, as that could be a security concern.
Other permissions can be set using umask:
umask [-p] [-S] [mode]
The user file-creation mask is set to mode. If mode begins with a digit, it is interpreted as an octal number; otherwise it is interpreted as a symbolic mode mask similar to that accepted by chmod(1). If mode is omitted, the current value of the mask is printed. The -S option causes the mask to be printed in symbolic form; the default output is an octal number. If the -p option is supplied, and mode is omitted, the output is in a form that may be reused as input. The return status is 0 if the mode was successfully changed or if no mode argument was supplied, and false otherwise.
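For example (a quick sketch of how the mask shapes the permissions of newly created files):
umask          # print the current mask as an octal number, e.g. 0022
umask -S       # print it in symbolic form, e.g. u=rwx,g=rx,o=rx
umask 022      # new files are then created 644 (rw-r--r--) and new directories 755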
You will not be able to access (change to) a directory unless it is executable. Otherwise, a permission denied error will occur. I don't believe it is possible to make every file executable upon creation with setfacl.
I want to empty (not delete) log files daily at a particular time. Something like
echo "" > /home/user/dir/log/*.log
but it returns
-bash: /home/user/dir/log/*.log: ambiguous redirect
Is there any way to achieve this?
You can't redirect to more than one file, but you can tee to multiple files.
tee /home/user/dir/log/*.log </dev/null
The redirect from /dev/null also avoids writing an empty line to the beginning of each file, which was another bug in your attempt. (Perhaps specify nullglob to avoid creating a file with the name *.log if the wildcard doesn't match any existing files, though.)
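A minimal sketch of that suggestion, using the directory from your question:
shopt -s nullglob                        # an unmatched *.log now expands to nothing instead of the literal pattern
tee /home/user/dir/log/*.log </dev/null  # truncates every matching log file without writing anything to it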
However, a much better solution is probably to use the utility logrotate, which is installed out of the box on every Debian (and thus also Ubuntu, Mint, etc.) installation. It runs nightly by default and can be configured by dropping a file into its configuration directory. It lets you compress the previous version of a log file instead of just overwriting it, and takes care to preserve ownership, permissions, etc.
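As a rough illustration (the file name and the exact options are assumptions, not a recommendation for your setup):
# hypothetical drop-in, e.g. /etc/logrotate.d/myapp: rotate daily, keep a week of
# compressed copies, and truncate the live file in place instead of moving it
/home/user/dir/log/*.log {
    daily
    rotate 7
    compress
    missingok
    notifempty
    copytruncate
}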
When I type
(module load /scratch/userName/productName/modules/d
followed by a tab in order to get
(module load /scratch/userName/productName/modules/debug
bash hangs for some time and does not accept input.
If I use strace to debug this, I can see that bash is calling stat() on more than 5000 (unrelated) files in 800 (unrelated) directories.
Could anybody explain this to me? Or even better, explain how to tell bash to only search in the specified directory?
edit:
The modules directory exists and contains only two normal files (debug and release). All of the parent directories are normal directories.
edit:
I guess this has something to do with bash's ability to delegate filename completion to the command being completed. In this case that command is module, but I've also seen it for git.
Somebody somewhere registered some bash function to perform filename completion for the module command. In order to disable this I added the following line to my ~/.bashrc:
complete -o default module
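If you want to inspect what was registered, or remove the spec rather than override it, these standard bash builtins should help (just a sketch):
complete -p module    # show which completion spec is currently registered for module
complete -r module    # or remove that spec entirely and fall back to bash's default behaviour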
Thanks to https://stackoverflow.com/users/3266847/benjamin-w for the hint!
I attempted to download VirtualBox from the terminal. Now, when I try to update or input a command, this reads out:
tyiese@penguin:~$ apt-get update
E: Malformed entry 1 in list file /etc/apt/sources.list.d/virtualbox.list (Component)
E: The list of sources could not be read.
tyiese@penguin:~$ rm /etc/apt/sources.list.d/virtualbox.list
rm: remove write-protected regular file '/etc/apt/sources.list.d/virtualbox.list'? Y
rm: cannot remove '/etc/apt/sources.list.d/virtualbox.list': Permission denied
I did attempt to remove the file, I think, but as you can see it was not accepted.
As for the file removal, the last line of the output you provided hints what the problem is. Given your question, I assume you're not too familiar with users and permissions in GNU/Linux. The $ sign means you're running your commands as ordinary user, whereas to modify most system/configuration files (such as those pertaining to apt) you need root privileges. You typically obtain those on a per-command basis by prepending a command with sudo. So in your case that would be:
sudo rm /etc/apt/sources.list.d/virtualbox.list
After that you would be prompted for your password and (assuming your user is allowed to do so) the command would be run as root.
As for your original problem - malformed entry in sources file - I cannot help you unless you post the contents of said file. It might be a missing keyword or missing newline at the end. Hard to say.
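For reference, a well-formed entry follows the usual one-line deb format; the codename and keyring path below are only illustrative assumptions:
deb [arch=amd64 signed-by=/usr/share/keyrings/oracle-virtualbox-2016.gpg] https://download.virtualbox.org/virtualbox/debian bookworm contrib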
One remark for the future. When pasting multi-line transcripts or snippets of code, please place them between two sets of triple backquotes (```) on lines of their own for better formatting.
The root cause of this error is a recent update you made. Generally, a copied-and-pasted line gets appended to the file, which for some reason leaves the file in an invalid state.
Use sudo to edit the file and remove the unnecessary line.
This will work in 99% of cases.
Cheers
I'm new to Linux Bash scripting and still learning. I'm just wondering whether it's possible to redirect stderr to a file only if the stderr contains ERROR.
I am executing Hadoop Hive commands, which I put in a Bash script file to schedule the process. The hive command generates a lot of logs, and I don't want to redirect the logs to a file every time. But if the log contains errors, then I want to redirect the log to a file and mail the error file to someone.
Please let me know how to achieve this. Thanks in advance.
Regards,
Jeeva
If I understand correctly, if an error occurs, you want to preserve the entire error log in a file (including lines that might not match your error-detection pattern). I don't think there's any way to achieve what you want purely through I/O redirection.
Instead, you can unconditionally redirect stderr to its own file. Then, as a post-processing step, you can grep through that file to see if ERROR shows up, and depending on the outcome, either mail the file to someone or delete it.
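A minimal sketch of that approach; the script name, log path and recipient address are hypothetical:
#!/bin/bash
errfile=/tmp/hive_errors.log             # hypothetical location for the captured stderr

hive -f my_job.hql 2>"$errfile"          # run the job, sending only stderr to the file

if grep -q "ERROR" "$errfile"; then
    # at least one ERROR line was logged: mail the whole file to someone
    mail -s "Hive job reported errors" someone@example.com < "$errfile"
else
    # nothing went wrong: throw the log away
    rm -f "$errfile"
fi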
You have to use the stderr file descriptor, which is 2.
For example:
rm this-file-does-not-exist.txt
>>>> rm: cannot remove ‘this-file-does-not-exist.txt’: No such file or directory
# to redirect that error to a file you do this
rm this-file-does-not-exist.txt 2>/tmp/logError.txt
cat /tmp/logError.txt
>>>> rm: cannot remove ‘this-file-does-not-exist.txt’: No such file or directory
# if you want to check whether the output contains `ERROR`, do this
badcommand 2>&1 | grep "ERROR" >/tmp/logError.txt  # 2>&1 sends stderr into the pipe; grep writes any matching lines to /tmp/logError.txt
# 2 is the file descriptor for stderr and 1 is the file descriptor for stdout
See also: How to use the Linux mail command