smbclient -c with ls -l option - linux

I am trying to get folder listings from a remote server, and it is not possible to mount the remote server on my local computer (because of a permission issue).
I used
smbclient "//165.186.89.21/DeptDQ_141Q_FOTA" "--user=myid" -c 'ls;'
to get the listing of the folder,
and it succeeded.
But I actually want to use ls -l with the above command line,
and when I try to get results using the line
smbclient "//165.186.89.21/DeptDQ_141Q_FOTA" "--user=LGE\final.lee" -c 'ls -l;'
it returns
NT_STATUS_NO_SUCH_FILE listing \-l
64000 blocks of size 16777216. 6503 blocks available
...
How should I use the smbclient command with the ls -l option?
Please help me!

smbclient ls does not run a native ls command, but rather invokes built-in functionality. As such, it does not support the usual options which a native, POSIX-compliant ls command would provide.
Thus, you cannot do this.
If your goal is to read metadata, consider trying the smbclient stat [filename] subcommand instead (if your server supports UNIX extensions), or smbclient allinfo [filename] (otherwise).
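For example, to dump the metadata of a single file on that share with allinfo (the filename somefile.txt is just a placeholder here):
smbclient "//165.186.89.21/DeptDQ_141Q_FOTA" "--user=myid" -c 'allinfo somefile.txt'
Several subcommands can also be chained inside -c by separating them with semicolons, e.g. -c 'cd somedir; allinfo somefile.txt'.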

Related

How to Create a long listing of all the files in this directory to a file called etcFiles.txt (In Linux)

I started to learn Linux, but I don't know how to solve this problem. I want to create a long listing of all the files in the /etc/ directory to a file called etcFiles.txt. When I try to run this, the terminal says "Permission denied".
To create a long listing of files in Linux, you use the command
ls -l
This displays the listing on the console. To store it in a file, you need to redirect it to the file using the redirection operator >, like
ls -l directoryPath > outputFile.txt
Here, to store the result of a long listing of /etc/ in a file, you need to use
ls -l /etc/ > etcFiles.txt
In the linked image, to store the contents of the current directory in a file, you need to provide the current directory as the argument to the ls command. In Unix/Linux the current directory is represented by ., and as the screenshot shows you are already in the /etc/ directory, so to store the long listing of the current directory (i.e. /etc/) in the file, you need to use
ls -l . > ~/etcFiles.txt
However, since the ls command takes the current directory as its default argument, the . above can be omitted and the following command will also work:
ls -l > ~/etcFiles.txt
By default, Linux/Unix does not give ordinary users permission to write or create files in the /etc/ directory; elevated permission is required to make any changes there. Since you do not have permission to create a new file in /etc/, you either need to redirect the output to a file in some directory where you do have permission (above, we store it in the home directory ~), or you will have to use sudo for superuser permission to create the new file in /etc/ itself.
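You can confirm this from the directory's own permission bits: only the owner (root) has write access. The output will look roughly like
ls -ld /etc
drwxr-xr-x 130 root root 12288 Jan  1 00:00 /etc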
Since we need the redirection operator to write the file in /etc/, we can't simply run
sudo ls -l > etcFiles.txt
because ls would run with superuser permission while the redirection would still be done with your default user permission. You need both parts to run with elevated permission.
To achieve that, spawn a new shell with elevated permission using sudo sh and pass the command as a string with the -c option, as shown below.
Solution 1
sudo sh -c 'ls -l . > etcFiles.txt'
Solution 2
You can make use of a pipe | by piping the output of ls -l to a command called tee, which reads standard input and writes it to both standard output and one or more files.
Since you need to write to a file inside /etc/ directory, you need to run tee with sudo for elevated permission.
ls -l | sudo tee etcFiles.txt
This will also print the output to the console. To avoid that, redirect the output to /dev/null (think of it as a dustbin for unwanted output), and your final command becomes
ls -l | sudo tee etcFiles.txt > /dev/null

How Do I Create A User & Set Password Without User Interaction?

I have recently been working on a project named arch loop, which is an automated installer for Arch Linux. I have seen a few installers and scripts that make Arch installation easier, but I am someone who installs Arch Linux more than three times a day, so following the Arch way takes a long time and constantly requires user interaction.
The Problem:
The password and the details of the non-root user to be created are collected beforehand, and when the appropriate time comes, we use the following commands:
arch-chroot /mnt useradd -m -g users -G wheel -s /usr/bin/bash archuser
arch-chroot /mnt bash -c "echo -e 'password\npassword\n' | passwd"
arch-chroot /mnt bash -c "echo -e 'rootpassword\nrootpassword\n' | passwd root"
to send the passwords to the passwd binary in the chroot system. But I don't know why it does not work: when the password is verified by the sudo command after the installation is finished, it seems to work perfectly, but when I try to log in as the non-root user from a tty, the password seems to be incorrect.
Things I Have Already Tried:
Manually encrypting the provided password with the code below and passing it to the useradd binary with the -p option:
perl -e 'print crypt("password", "\$6\$SALTsalt\$") . "\n"'
Please guide me on how to set a user's provided password at a later time, without requiring any user interaction.
Thank You :)
There is the chpasswd command; it exists precisely to make passwd usable from batch scripts. Just do:
echo "root:rootpassword" | arch-chroot /mnt chpasswd
or, maybe better, without the need to mount -o bind the sys, proc, and dev directories:
echo "root:rootpassword" | chpasswd -R /mnt
#subjective: Sorry for the opinion: the project looks OK, however much more work is to be done. I guess the aim is to bring Arch Linux closer to "normal" users. However, I don't like the choice of Python for the project; going with plain POSIX sh would make this available to everyone. I don't like the hardcoded partitions, mlocate (do you really use mlocate?), multiple arch-chroot calls where you could just run a single big script, not handling os.system error codes (!), multiple pacman calls without even -Sy (!) (pacman can fail if upstream updates the repos), and a few more things. Apart from that, nice Python abstraction and a cool aim. I remember the old Arch Linux installation scripts from a few (or more) years ago; they were nice, though I think they just used the commands themselves anyway. Good luck.
The approach below works for Ubuntu; I think it should work for Arch too.
First, you need a machine that already has Arch installed. Then you add the user that you need with the two commands useradd and passwd. After that, you can run cat /etc/shadow | grep [username] to get the password information of the user you just added; it will be a string, let's say XXX.
Now, on your target system, after arch-chroot /mnt useradd -m -g users -G wheel -s /usr/bin/bash archuser, you put the string obtained from cat /etc/shadow | grep [username] into the /etc/shadow of the target system. The command would be something like arch-chroot /mnt sed -i 's|^archuser:.*|XXX|' /etc/shadow, where XXX is the whole copied shadow line.
One more thing: you must make sure that the version of Arch from which you take the password information and the version on the target system are the same.
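A variant of the same idea that avoids hand-editing /etc/shadow is to copy only the hash field and hand it to usermod -p, which stores an already-encrypted string as-is. An untested sketch, assuming the user archuser exists on both systems:
HASH=$(grep '^archuser:' /etc/shadow | cut -d: -f2)
arch-chroot /mnt usermod -p "$HASH" archuser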

Is there a way to set kptr_restrict to 0?

I am currently having trouble running linux perf, mostly because /proc/sys/kernel/kptr_restrict is currently set to 1.
However, if I try to modify /proc/sys/kernel/kptr_restrict by echoing 0 to it as follows...
echo 0 > /proc/sys/kernel/kptr_restrict
I get a permission denied error. I don't think I can change permissions on it either.
Is there a way to set this directly somehow? I am super user. I don't think perf will function acceptably without this being set.
In your example, even if echo runs as root (e.g. via sudo), the redirection is performed by your shell, which is running as you.
So please try this command:
sudo sh -c " echo 0 > /proc/sys/kernel/kptr_restrict"
Almost all of the files located in /proc/sys can only be modified by root (check with ls -l). Therefore you have to use sudo to modify those files (or your preferred way of executing commands as root).
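For instance, the permission bits on kptr_restrict itself show that only root may write to it (the timestamp will differ on your machine):
$ ls -l /proc/sys/kernel/kptr_restrict
-rw-r--r-- 1 root root 0 Jan  1 00:00 /proc/sys/kernel/kptr_restrict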
The proper way to modify the files in /proc/sys is to use the sysctl tool. Note that you should replace the slashes (/) with dots (.) and omit the /proc/sys/ prefix... read the fine manual.
Read the current value:
$ sysctl kernel.kptr_restrict
kernel.kptr_restrict = 1
Modify the value:
$ sudo sysctl -w kernel.kptr_restrict=0
kernel.kptr_restrict = 0
To make your modification persist across reboots, you should edit /etc/sysctl.conf or create a file such as /etc/sysctl.d/50-mytest.conf (edit the file as root or using sudoedit), containing:
kernel.kptr_restrict=0
In which case you should execute this command to reload your configuration:
$ sysctl -p /etc/sysctl.conf
P.S. It is possible to write directly to the virtual file. cdyson37's command (https://stackoverflow.com/users/321730/cdyson37) is quite elegant: echo 0 | sudo tee /proc/sys/kernel/kptr_restrict

rsync over SSH preserve ownership only for www-data owned files

I am using rsync to replicate a web folder structure from a local server to a remote server. Both servers are ubuntu linux. I use the following command, and it works well:
rsync -az /var/www/ user@10.1.1.1:/var/www/
The usernames for the local system and the remote system are different. From what I have read it may not be possible to preserve all file and folder owners and groups. That is OK, but I would like to preserve owners and groups just for the www-data user, which does exist on both servers.
Is this possible? If so, how would I go about doing that?
** EDIT **
There is some mention of rsync being able to preserve ownership and groups on remote file syncs here: http://lists.samba.org/archive/rsync/2005-August/013203.html
** EDIT 2 **
I ended up getting the desired effect thanks to many of the helpful comments and answers here. Assuming the IP of the source machine is 10.1.1.2 and the IP of the destination machine is 10.1.1.1, I can use this line from the destination machine:
sudo rsync -az user@10.1.1.2:/var/www/ /var/www/
This preserves the ownership and groups of the files that have a common user name, like www-data. Note that using rsync without sudo does not preserve these permissions.
You can also sudo the rsync on the target host by using the --rsync-path option:
# rsync -av --rsync-path="sudo rsync" /path/to/files user@targethost:/path
This lets you authenticate as user on targethost, but still get privileged write permission through sudo. You'll have to modify your sudoers file on the target host to avoid sudo's password prompt; see man sudoers or run sudo visudo for instructions and samples.
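For example, a minimal sudoers entry (added via sudo visudo; the account name user and the path /usr/bin/rsync are assumptions to adjust for your system) could look like:
user ALL=(ALL) NOPASSWD: /usr/bin/rsync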
You mention that you'd like to retain the ownership of files owned by www-data, but not other files. If this is really true, then you may be out of luck unless you implement chown or a second run of rsync to update permissions. There is no way to tell rsync to preserve ownership for just one user.
That said, you should read about rsync's --files-from option.
rsync -av /path/to/files user@targethost:/path
find /path/to/files -user www-data -print | \
rsync -av --files-from=- --rsync-path="sudo rsync" /path/to/files user@targethost:/path
I haven't tested this, so I'm not sure exactly how piping find's output into --files-from=- will work. You'll undoubtedly need to experiment.
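One wrinkle worth noting: --files-from interprets the listed paths relative to the source argument, while the find invocation above prints absolute paths, so an equally untested sketch that keeps the two consistent would be:
cd /path/to/files && \
find . -user www-data -print | \
rsync -av --files-from=- --rsync-path="sudo rsync" . user@targethost:/path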
As far as I know, you cannot chown files to somebody other than yourself unless you are root. So you would either have to rsync using the www-data account (since files are created owned by the connecting user), or chown the files afterwards.
"The root users for the local system and the remote system are different."
What does this mean? The root user is uid 0. How are they different?
Any user with read permission to the directories you want to copy can determine what usernames own what files. Only root can change the ownership of files being written.
You're currently running the command on the source machine, which restricts your writes to the permissions associated with user@10.1.1.1. Instead, you can try to run the command as root on the target machine. Your read access on the source machine isn't an issue.
So on the target machine (10.1.1.1), assuming the source is 10.1.1.2:
# rsync -az user@10.1.1.2:/var/www/ /var/www/
Make sure your groups match on both machines.
Also, set up access to user@10.1.1.2 using a DSA or RSA key, so that you can avoid having passwords floating around. For example, as root on your target machine, run:
# ssh-keygen -d
Then take the contents of the file /root/.ssh/id_dsa.pub and add it to ~user/.ssh/authorized_keys on the source machine. You can ssh user@10.1.1.2 as root from the target machine to see if it works. If you get a password prompt, check your error log to see why the key isn't working.
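If ssh-copy-id is available, that copy step can be done in one command instead of editing authorized_keys by hand (run as root on the target machine, using the key generated above):
# ssh-copy-id -i /root/.ssh/id_dsa.pub user@10.1.1.2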
I had a similar problem and worked around it by chaining a chown onto the rsync command:
rsync -avz --delete root@x.x.x.x:/home//domains/site/public_html/ /home/domains2/public_html && chown -R wwwusr:wwwgrp /home/domains2/public_html/
The && runs the chown against the folder only when the rsync completes successfully (a single & would instead put the rsync in the background and run the chown immediately, regardless of the rsync's completion status).
Well, you could skip the challenges of rsync altogether, and just do this through a tar tunnel.
sudo tar zcf - /path/to/files | \
ssh user@remotehost "cd /some/path; sudo tar zxf -"
You'll need to set up your SSH keys as Graham described.
Note that this handles full directory copies, not incremental updates like rsync.
The idea here is that:
you tar up your directory,
instead of creating a tar file, you send the tar output to stdout,
that stdout is piped through an SSH command to a receiving tar on the other host,
but that receiving tar is run by sudo, so it has privileged write access to set usernames.
rsync version 3.1.2
I mostly use Windows locally, so this is the command line I use to sync files with the server (Debian):
user@user-PC /cygdrive/c/wamp64/www/projects
$ rsync -rptgoDvhP --chown=www-data:www-data --exclude=.env --exclude=vendor --exclude=node_modules --exclude=.git --exclude=tests --exclude=.phpintel --exclude=storage ./website/ username@hostname:/var/www/html/website
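As I understand it, --chown=www-data:www-data is shorthand for mapping every transferred owner and group, so the same thing can be spelled with the underlying options (excludes omitted for brevity); either way, the receiving rsync needs enough privilege to actually change ownership:
rsync -rptgoDvhP --usermap='*:www-data' --groupmap='*:www-data' ./website/ username@hostname:/var/www/html/website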

cygwin ssh batch script for windows 2008

I configured Cygwin on Windows Server 2008, and now we need to implement automation.
I am writing a batch script to add a user to the cygwin\etc\passwd file using the following command:
mkpasswd -l -u %username% -p /home >> /etc/passwd
Please help me execute the following command from a batch file:
echo off
C:
chdir C:\cygwin\bin
bash --login -i
mkpasswd -l -u %username% -p /home >> /etc/passwd
It's not working
You're mixing Windows and Unix in your Windows batch file. The batch file runs as a Windows command script, and so does the mkpasswd command in it. Windows has no concept of /etc/passwd and will throw an error, probably something like:
D:\cygwin\bin>mkpasswd -l -u testusr -p /home >> /etc/passwd
The system cannot find the path specified.
Given what you want to do with mkpasswd, I'd suggest you find a way to run your automation from within Cygwin, perhaps by setting up a cron job.
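If it has to stay a Windows batch file, one pattern that usually works (a sketch, assuming Cygwin is installed in C:\cygwin as in your script) is to hand the whole command to bash -c, so that the >> /etc/passwd redirection is resolved inside Cygwin rather than by cmd.exe:
@echo off
C:\cygwin\bin\bash.exe --login -c "mkpasswd -l -u '%username%' -p /home >> /etc/passwd"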
