I have Cygwin+OpenSSH installed on a Windows XP workstation.
A simple SFTP process (batch file) runs fine when launched from the Command Prompt but fails when launched by a Scheduler from Novell Desktop Management. The scheduled job uses exactly the same batch file and runs with Interactive User impersonation.
WhoAmI embedded in the batch file returns the same user string. Cygwin itself, called from a scheduled job, shows the same user id in the prompt as when launched directly from the desktop icon.
I get the following error from a scheduled sftp job:
"Permission denied (publickey,keyboard-interactive).
Connection closed"
The problem was with file permissions on the id_dsa file.
Apparently, though both the manual and scheduled processes ran under the same user id, they used different authentication: domain vs. workstation.
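For reference, OpenSSH refuses to use a private key whose permissions are too loose, which produces exactly this "Permission denied (publickey,...)" error. A minimal sketch of the fix, shown on a stand-in directory (the real path would be ~/.ssh/id_dsa for the account the scheduled job authenticates as):

```shell
# Sketch: OpenSSH rejects private keys that are group/world-readable,
# so the key used by the scheduled job needs owner-only permissions.
# /tmp/demo_ssh stands in for the real ~/.ssh directory.
SSH_DIR=/tmp/demo_ssh
mkdir -p "$SSH_DIR"
touch "$SSH_DIR/id_dsa"        # stand-in for the real private key
chmod 700 "$SSH_DIR"           # directory: owner-only
chmod 600 "$SSH_DIR/id_dsa"    # key: owner read/write only
ls -l "$SSH_DIR/id_dsa"
```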
I have an odd set of constraints and I'm not sure if what I want to do is possible. I'm writing a Python script that can restart programs/services for me via an Uvicorn/FastAPI server. I need the following:
For the script to always be running and to restart if it stops
To be constantly logged on as the standard (non-admin) user
To stop/start a Windows service that requires admin privileges
To start a program as the current (non-admin) user that displays a GUI
I've set up Task Scheduler to run this script as admin, whether logged in or not. This was the only way I found to be able to stop/start Windows services. With this, I'm able to do everything I need except for running a program as the current user. If I set the task to run as the current user, I can do everything except the services.
Within Python, I've tried running the program with os.startfile(), subprocess.Popen(), and subprocess.run(), but it always runs with no GUI, and seemingly as the admin since I can't kill the process without running Task Manager as admin. I'm aware of the 'user' flag in subprocess, but as I'm on Windows 8, the latest Python version I can use is 3.8.10, and 'user' wasn't introduced until Python 3.9.
I've tried the 'runas' cmd command (run through os.system() as well as a separate batch script), but this doesn't work as it prompts for the user's password. I've tried the /savecred flag and I've run the script manually both as a user and as admin just fine, but if I run this through Task Scheduler, either nothing happens, or there is a perpetual 'RunAs' process that halts my script.
I've tried PsExec, but again that doesn't work in Task Scheduler. If I run even a basic one-line batch file with PsExec as a task, I get error 0xC0000142, which from what I can tell is some DLL error: NT_STATUS_DLL_INIT_FAILED.
The only solution I can think of is running two different Python scripts in Task Scheduler (one as admin, one as non-admin), but this is not ideal as I want only one Uvicorn/FastAPI server running with one single port.
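One more avenue sometimes used for this situation (a sketch only, not verified against this exact setup): pre-register a second scheduled task configured to run as the interactive user, then have the elevated script trigger it on demand with schtasks /run, so the GUI appears in the user's session. The task name "LaunchGuiApp" is hypothetical.

```python
# Sketch: trigger a pre-registered Task Scheduler task that is configured
# to run as the interactive (non-admin) user. "LaunchGuiApp" is a
# hypothetical task name; the task itself must be created beforehand.
import shutil
import subprocess

def launch_as_logged_on_user(task_name: str = "LaunchGuiApp") -> bool:
    """Return True if the task was triggered successfully."""
    if shutil.which("schtasks") is None:
        return False  # not on Windows (or schtasks unavailable)
    result = subprocess.run(
        ["schtasks", "/run", "/tn", task_name],
        capture_output=True, text=True,
    )
    return result.returncode == 0
```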
EDIT -
I figured out a way to grant service permissions to the user account with ServiceSecurityEditor, but I'm still open to any suggestions that may be better. I want the setup process for a new machine to be as simple as possible.
I am reaching out to the community because I am stuck executing a sequence of commands from a Linux machine using JMeter.
A bit of background:
I have an external VM which is used to mimic the transfer of files to various inbound channels.
This VM basically acts as a third party which hosts files that are then transferred to different locations by following a sequence of commands.
The sequence of commands that I am trying to execute to mimic the third party is as below:
ls (to list the files in the home dir)
mv test123.txt test456.txt (this renames the file in the home dir from test123.txt to test456.txt)
Then we connect to the file exchange server using the command below:
sftp -P 24033 testuser@test-perf.XYZ.com
The password is test#123456
Once connected, we execute the below sequence:
ls (this will list the folders Inbound and Route)
cd Route (to change dir to Route)
ls (to list the account IDs)
put test456.txt 12345 (12345 is the account ID)
After the last command executes, the file is transferred to an internal folder based on the account ID.
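For what it's worth, the interactive steps above can usually be collapsed into sftp's batch mode (-b), assuming key-based authentication is set up, because -b cannot answer a password prompt. A sketch that only prepares the batch file and shows the command without actually connecting:

```shell
# Sketch of the sequence as an sftp batch file. Host, port, and account
# ID come from the question; key-based auth is assumed (sftp -b cannot
# supply a password). The connection itself is left commented out.
cd /tmp
printf 'sample payload' > test123.txt
mv test123.txt test456.txt           # the rename step

cat > sftp_batch.txt <<'EOF'
cd Route
put test456.txt 12345
bye
EOF

# The unattended transfer would then be:
#   sftp -b sftp_batch.txt -P 24033 testuser@test-perf.XYZ.com
cat sftp_batch.txt
```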
I did some searching on Stack Overflow and found a couple of links, but I was not able to use them to simulate the above sequence of commands.
The closest one I could find is below:
How to execute Linux command or shell script from APACHE JMETER
But this does not cover executing from a Linux machine itself.
Any help on how to approach this would be appreciated. Thanks in advance.
PS: I am using JMeter because I have to keep this sequence executing continuously until I transfer the expected number of files within a peak-hour duration, and these files are of different sizes, ranging from a few MB to a couple of GB.
New Edit
I used the JSR223 PreProcessor, where I have my sequence of commands, and then I call that command in the OS Process Sampler; I created a script as below.
The script executes on the Linux box without any error, but the file is not transferred to the destination. Am I missing something?
While researching I found the lftp command, but I am not sure how to use it in my case, or whether it will work.
Any suggestions?
To execute commands on local Linux machine you can use OS Process Sampler
To execute commands on remote Linux machine you can use SSH Command Sampler
See How to Run External Commands and Programs Locally and Remotely from JMeter article for more information if needed.
To transfer the file from local to remote you can use SSH SFTP Sampler
In order to get the SSH Command and SSH SFTP samplers, install the SSH Protocol Support plugin using JMeter Plugins Manager.
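Regarding the lftp idea from the question's edit: unlike sftp -b, lftp accepts credentials inline, so an OS Process Sampler could run a single command. A sketch using the question's host, port, credentials, and account ID; the command is only built and printed here, not executed:

```shell
# Sketch: an lftp one-liner equivalent to the interactive sftp session.
# lftp takes the password inline (-u user,password), which plain sftp
# cannot do. Values are the ones from the question.
LFTP_CMD='lftp -u testuser,test#123456 -p 24033 sftp://test-perf.XYZ.com -e "cd Route; put test456.txt -o 12345; bye"'
echo "$LFTP_CMD" | tee /tmp/lftp_cmd.txt
```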
I have been searching for a while to find a way to trigger a logon on a remote machine as a different user.
This is for a Blueprism RPA requirement. We have a few virtual machines that run RPA processes, and these machines need to be logged in with the bot account for the processes to run. We have a Login Agent that can be used to trigger logons on the machines, but it has to be done on a per-machine basis, which can sometimes be time consuming.
I can remote-login to those machines to initiate the logons, but the automation fails if I close the session due to some display issue.
A command that I could trigger from my CMD to do the job for me would be of great help.
TIA
If you'd like to ensure that the machine is logged in before the process starts, you can build it into the scheduler.
Set the first step in the process to "login" and, no matter whether it completes or fails, run the process after a set amount of time.
Finally managed to get this done using the AutomateC.exe utility that comes with Blueprism. You can run pretty much any process on any VM and also specify input parameters. This is handy when there is a need to interact with many VMs.
So I have some child processes that need to be able to adjust the system time on a Windows 10 system. In past iterations of Windows, this was done simply by forking the children as Administrator so they would have permission to edit the system time.
Things I have tried:
Opening up the permissions for changing system time through the Local Security Policy so that Admin privileges were no longer required.
Making a custom task in tasksched.msc to run the child process as administrator.
Passing runas /user:Administrator app.exe as the executing command to run the child process; the problem here is that prompting for the password is not an option every time this process needs to run.
Elevating the parent process is not an option sadly, though it does work.
I'm not sure what to try next.
So I found a workaround. I used the Windows Server 2003 Resource Kit utility ntrights.exe to open up the permission on Windows 10.
From the terminal, I ran the command:
ntrights -U "UserAccountName" +R SeSystemtimePrivilege
This allowed the process to set the time as necessary without needing administrator privilege.
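As a sketch of what the now-unprivileged child process can do once SeSystemtimePrivilege is granted: call the documented Win32 SetSystemTime API via ctypes. The struct layout below follows the documented SYSTEMTIME; the actual call is guarded so it only fires on Windows.

```python
# Sketch: set the Windows system clock via the Win32 SetSystemTime API,
# assuming SeSystemtimePrivilege has been granted to the account
# (e.g. with ntrights as above). Guarded so it is a no-op off Windows.
import ctypes
import sys
from datetime import datetime, timezone

class SYSTEMTIME(ctypes.Structure):
    # Field order and types follow the documented Win32 SYSTEMTIME struct.
    _fields_ = [
        ("wYear", ctypes.c_ushort),
        ("wMonth", ctypes.c_ushort),
        ("wDayOfWeek", ctypes.c_ushort),   # 0 = Sunday
        ("wDay", ctypes.c_ushort),
        ("wHour", ctypes.c_ushort),
        ("wMinute", ctypes.c_ushort),
        ("wSecond", ctypes.c_ushort),
        ("wMilliseconds", ctypes.c_ushort),
    ]

def to_systemtime(dt: datetime) -> SYSTEMTIME:
    """Convert an aware UTC datetime into a SYSTEMTIME struct."""
    return SYSTEMTIME(dt.year, dt.month, dt.isoweekday() % 7, dt.day,
                      dt.hour, dt.minute, dt.second, dt.microsecond // 1000)

def set_system_time(dt: datetime) -> bool:
    """Set the system clock (UTC). Returns False off Windows or on failure."""
    st = to_systemtime(dt)
    if sys.platform != "win32":
        return False  # SetSystemTime is Windows-only
    return bool(ctypes.windll.kernel32.SetSystemTime(ctypes.byref(st)))
```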
I have created a web application where users can run Java code in the browser.
I am using chroot to execute the user-submitted code on the web server.
In the chroot script I mount and then unmount some required directories.
This works very well normally, but when I fire the executing requests in a row, like 20-30 requests, then for some responses I get the message /bin/su: user XXX does not exist, where XXX is the username on the Linux system for which I am mounting the required directories.
For the other requests I get the expected output.
My concern is: is there any side effect of mounting and unmounting repeatedly on the Linux box?
Or is there any setting in Linux to make this configuration work?
In order to use /bin/su you need to have the user information provided by /etc/passwd. Have you mounted that directory or (as I would recommend) copied it to the /etc/ in the new root directory?
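A minimal sketch of the copy approach recommended here; $JAIL is a placeholder for the real chroot directory:

```shell
# Sketch: give the chroot its own user database so /bin/su inside the
# jail can resolve the user. $JAIL stands in for the real chroot root.
JAIL=/tmp/demo_jail
mkdir -p "$JAIL/etc"
cp /etc/passwd "$JAIL/etc/passwd"
# su usually needs the group database too:
cp /etc/group "$JAIL/etc/group"
```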
Concerning your mount issues: yes, mounting and unmounting take some time and are not guaranteed to be instantaneous (unmounting in particular can simply fail if something is still active on the mounted file system). So you should check whether the unmount failed and retry in that case.
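When requests can overlap, serializing the mount/run/unmount section prevents one request from unmounting while another is still inside the jail; flock(1) is the usual tool for this. A sketch with the real mount/umount lines left as comments and two simulated concurrent requests:

```shell
# Sketch: serialize the critical section with an exclusive flock so
# concurrent web requests cannot interleave mount/unmount. The mount
# and umount lines are placeholders for the real chroot script's steps.
LOCK=/tmp/chroot_demo.lock
OUT=/tmp/chroot_demo.out
: > "$OUT"

run_request() {
  (
    flock -x 9                        # block until we hold the lock
    # mount --bind /etc "$JAIL/etc"   # real mount would go here
    echo "request $1: mounted, executed, unmounted" >> "$OUT"
    # umount "$JAIL/etc" || { sleep 1; umount "$JAIL/etc"; }  # retry once
  ) 9>"$LOCK"
}

run_request 1 &
run_request 2 &
wait
cat "$OUT"
```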
Thanks for the reply. Yes, you are absolutely right, Alfe! It is a problem with mounting/unmounting in a row. I verified this by SSH login to my web server: when I executed 20-30 program commands repeatedly (separated by semicolons), I got the desired output, in sequence, in my window. Then I opened another SSH window and executed 10 commands from that window and 20 commands from the previous window; for some commands in both windows I got the message "/bin/su: user XXX does not exist". So one conclusion is that when I make web requests concurrently, the execution of the commands (chroot/unchroot) is not in sync; that is why I am getting this message. I am not very good with Linux, and I don't know how to address this issue.