Azure startup tasks not always firing

Working with a WorkerRole, I have two startup tasks defined. The first one is a .cmd file that adds firewall rules, downloads and installs Cygwin, and calls PowerShell scripts.
The problem I am seeing is that sometimes it is not executed. One deployment will work, and another will not. The logs are not providing any help.
I have tried ensuring the file encoding is correct, but there does not seem to be any consistency to the problem.
The .cmd file is as follows:
netsh advfirewall firewall add rule name="SSH" dir=in action=allow service=any enable=yes profile=any localport=22 protocol=tcp
IF EXIST C:\alreadySetup\NUL GOTO ALREADYSETUP
mkdir c:\alreadySetup
mkdir e:\approot\uploads
set remoteuser=user
set remotepassword=password
net user %remoteuser% %remotepassword% /add
net localgroup Administrators %remoteuser% /add
.\Startup\ntrights.exe +r SeServiceLogonRight -u "%remoteuser%"
.\Startup\ntrights.exe +r SeServiceLogonRight -u "Administrators"
netsh advfirewall firewall add rule name="SSH" dir=in action=allow service=any enable=yes profile=any localport=22 protocol=tcp
powershell -executionpolicy unrestricted -file .\Startup\DownloadBlob.ps1
c:\cygwin\bin\dos2unix.exe e:\approot\Startup\startSSHd.sh
schtasks /CREATE /TN "StartSSHd" /SC ONCE /SD 01/01/2020 /ST 00:00:00 /RL HIGHEST /RU %remoteuser% /RP %remotepassword% /TR "c:\cygwin\bin\bash --login /cygdrive/e/approot/startup/startSSHd.sh" /F
icacls d:\windows\system32\tasks\StartSSHd /grant:r Administrators:(RX)
schtasks /RUN /TN "StartSSHd"
schtasks /CREATE /TN "CopyAndCleanupSFTPFiles" /SC MINUTE /SD 01/01/2000 /ST 00:00:00 /RL HIGHEST /RU %remoteuser% /RP %remotepassword% /TR "e:\approot\Startup\CopyAndCleanupSFTPFiles.cmd" /F
schtasks /RUN /TN "CopyAndCleanupSFTPFiles"
:ALREADYSETUP
Of note is that when it doesn't run, the alreadySetup folder is not even created, which is the first thing the script does.

Do not use drive letter E: in the script. Deployments sometimes use E: and sometimes F:; I suspect this is why your script fails intermittently.
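One way to avoid hardcoding the drive letter is the %RoleRoot% environment variable, which Azure defines for startup tasks; it resolves to the drive hosting approot (E: or F: depending on the deployment). A sketch reusing the paths from the question:

```bat
REM %RoleRoot% is set by Azure for startup tasks and points at the drive
REM containing approot, so no drive letter needs to be hardcoded
mkdir %RoleRoot%\approot\uploads
c:\cygwin\bin\dos2unix.exe %RoleRoot%\approot\Startup\startSSHd.sh
```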

Related

Setting more than one trigger with SCHTASKS command

I want to add multiple triggers to a task I'm creating through the Windows command-line interface. I went through the documentation for schtasks create here, but it doesn't specify how to do this, or whether it is possible at all.
An example of how I create a task with one trigger, specifically for Every day at 23:00:
SCHTASKS /CREATE /SC DAILY /ST 23:00 /TN "TaskName" /TR "D:\TaskDir\TaskName.exe taskArgument" /NP
I also want to set two additional triggers for 09:00 and 16:00.
How can I achieve this?
To run the task at a regular interval within the day, for example every 7 hours over a 24-hour window, combine an hourly schedule with a modifier and duration, per the documentation (https://learn.microsoft.com/en-us/windows-server/administration/windows-commands/schtasks-create#examples-1):
SCHTASKS /CREATE /TN "TaskName" /TR "D:\TaskDir\TaskName.exe taskArgument" /NP /SC HOURLY /MO 7 /ST 00:00 /DU 0024:00
For arbitrary time points, however, it cannot be done with a single command; create three separate tasks instead:
SCHTASKS /CREATE /SC DAILY /ST 09:00 /TN "TaskName" /TR "D:\TaskDir\TaskName.exe taskArgument" /NP
SCHTASKS /CREATE /SC DAILY /ST xx:00 /TN "TaskName2" /TR "D:\TaskDir\TaskName.exe taskArgument" /NP
SCHTASKS /CREATE /SC DAILY /ST xx:00 /TN "TaskName3" /TR "D:\TaskDir\TaskName.exe taskArgument" /NP
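Alternatively, a single task can carry several triggers if you define it via XML: export the one-trigger task, duplicate the <CalendarTrigger> element for the extra times, and re-import it with /XML. A sketch; task.xml is just an illustrative file name:

```bat
REM export the existing one-trigger task definition
SCHTASKS /QUERY /TN "TaskName" /XML > task.xml
REM edit task.xml: duplicate the <CalendarTrigger> block for 09:00 and 16:00
SCHTASKS /DELETE /TN "TaskName" /F
REM re-import the task, now with three triggers
SCHTASKS /CREATE /TN "TaskName" /XML task.xml
```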

How to run Shell script through crontab or cron job on windows

I want to do backup and restore as an automated process. I am looking at MongoDB's native tooling, as it takes much less time to back up and restore than other npm modules.
For that, I have created the following shell script:
#!/bin/sh
DIR=$(date +%m%d%y)
DEST="/db_backups/$DIR"
# -p also creates /db_backups if it does not exist yet
mkdir -p "$DEST"
mongodump -h 127.0.0.1:27017 -d mydbname -o "$DEST"
Now I want to run this script through cron. What is the best approach for doing this?
I am on a Node.js Windows environment. Any help is really appreciated.
Windows has no cron; the equivalent is the Task Scheduler. Open CMD and create a job that runs on the schedule you need:
schtasks /create /tn "mongodb_automated_job" /tr "location/mongo.sh" /sc minute /mo 1
It will run every minute.
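One caveat: Task Scheduler will not run a .sh file by itself, so point the task at a bash interpreter and pass the script as an argument. The Git Bash path and script location below are assumptions about your setup:

```bat
REM bash path is hypothetical - adjust to wherever bash.exe lives on your machine
schtasks /create /tn "mongodb_automated_job" /tr "\"C:\Program Files\Git\bin\bash.exe\" C:\scripts\mongo.sh" /sc minute /mo 1
```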

Prevent stop auditd service in Redhat 7

Currently, I want the auditd service to run forever, so that users cannot stop it via any command.
My current auditd unit:
~]# systemctl cat auditd
# /usr/lib/systemd/system/auditd.service
[Unit]
Description=Security Auditing Service
DefaultDependencies=no
After=local-fs.target systemd-tmpfiles-setup.service
Conflicts=shutdown.target
Before=sysinit.target shutdown.target
RefuseManualStop=yes
ConditionKernelCommandLine=!audit=0
[Service]
ExecStart=/sbin/auditd -n
## To not use augenrules, copy this file to /etc/systemd/system/auditd.service
## and comment/delete the next line and uncomment the auditctl line.
## NOTE: augenrules expect any rules to be added to /etc/audit/rules.d/
ExecStartPost=-/sbin/augenrules --load
#ExecStartPost=-/sbin/auditctl -R /etc/audit/audit.rules
ExecReload=/bin/kill -HUP $MAINPID
[Install]
WantedBy=multi-user.target
# /etc/systemd/system/auditd.service.d/override.conf
[Service]
ExecReload=
ExecReload=/bin/kill -HUP $MAINPID ; /sbin/augenrules --load
I can't stop the service with:
# systemctl stop auditd.service
Failed to stop auditd.service: Operation refused, unit auditd.service may be requested by dependency only.
But when I use the service auditd stop command, I can stop the service normally.
# service auditd stop
Stopping logging: [ OK ]
How can I prevent this? Thanks
The administrator (root) will always be able to manually kill the auditd process (which is what the service command does). What systemd is doing here is only to prevent the administrator from doing it via the systemctl interface.
In both cases, unprivileged users can not kill the daemon.
If you want to restrict even what root can do, you will have to use SELinux and customize the policy.
Some actions of the service command are not redirected to systemctl but instead run specific scripts located in /usr/libexec/initscripts/legacy-actions.
In this case, the stop command will call this script:
/usr/libexec/initscripts/legacy-actions/auditd/stop
If you want the auditd service to be unstoppable via the service command as well, you can remove this script; the "stop" action will then be redirected to systemctl, which will block it because of the RefuseManualStop=yes setting.
But this doesn't mean that you can't kill the process, of course.
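Concretely, renaming the legacy action (rather than deleting it) keeps a backup while making service fall through to systemctl; a sketch, to be run as root:

```shell
# move the legacy stop action out of the way; `service auditd stop` is then
# redirected to systemctl, which refuses it because of RefuseManualStop=yes
mv /usr/libexec/initscripts/legacy-actions/auditd/stop \
   /usr/libexec/initscripts/legacy-actions/auditd/stop.disabled
```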

Arch Linux / systemd - prevent any kind of shutdown/reboot

I'm running Arch-based Manjaro Linux and wrote myself a little update program that starts every 7 hours and runs completely in the background. This update program is started by systemd.
What I want to know is: how can I prevent any system shutdown/reboot while this program runs, no matter whether the user wants to turn the machine off or some program requests it?
Ideally, a shutdown/reboot action would not be cancelled but delayed, so that once the update program has finished its run, the shutdown/reboot continues.
My systemd parts are:
uupgrades.timer
[Unit]
Description=UU Upgrades Timer
[Timer]
OnBootSec=23min
OnUnitActiveSec=7h
Unit=uupgrades.target
[Install]
WantedBy=basic.target
uupgrades.target
[Unit]
Description=UU Upgrades Timer Target
StopWhenUnneeded=yes
and in the folder uupgrades.target.wants
uupgrades.service
[Unit]
Description=UU Update Program
[Service]
Nice=19
IOSchedulingClass=2
IOSchedulingPriority=7
ExecStart=/usr/bin/uupgrades
How can I achieve this?
If a user has sufficient permissions to reboot the server or manipulate processes, you can't stop them; that's just how Linux works. You should set up permissions and accounts so that no other users have root permissions, or permissions sufficient to manipulate the process or the user the process runs as.
When I want to block myself from rebooting or shutting down, I alias my usual shutdown and reboot commands to beep;beep;beep;.
In multiuser environments you could move the reboot, shutdown, etc. binaries away and move them back when shutdown should be allowed again.
You could also temporarily put an executable shell script in place of each binary that prints a note about the postponed shutdown. This script could set a flag if a shutdown was requested.
Q&D example script:
#!/usr/bin/env bash
echo "preventing reboot"
# stash the real reboot binary in a temp file
BACKUPBINARY_REBOOT=$(mktemp)
mv /bin/reboot "$BACKUPBINARY_REBOOT"
FLAGFILE=$(mktemp)
# install a stub that records the reboot request instead of executing it
cat > /bin/reboot <<EOF
#!/usr/bin/env bash
# original reboot binary was moved to $BACKUPBINARY_REBOOT
echo request-reboot > "$FLAGFILE"
echo "reboot is prevented, your request will trigger later"
EOF
chmod 666 "$FLAGFILE"
chmod +x /bin/reboot
echo "postponed reboot - press enter to allow it again and make up for requested reboot"
read -r
# restore the real binary and honour any recorded reboot request
mv "$BACKUPBINARY_REBOOT" /bin/reboot
if grep -q request-reboot "$FLAGFILE"; then
    rm "$FLAGFILE"
    /bin/reboot
fi
You can also drop an executable into /usr/lib/systemd/system-shutdown/; systemd runs everything in that directory during shutdown. Note, however, that these hooks run very late in the shutdown sequence, so they can detect that your update script was still running but can no longer cancel or delay the shutdown at that point.
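If the goal is to delay (not permanently block) a shutdown while the updater runs, systemd's inhibitor locks fit well: wrapping ExecStart in systemd-inhibit takes a shutdown lock for the duration of the command, and logind (and systemctl, unless invoked with -i) will refuse shutdown while it is held. A sketch of uupgrades.service with this change; everything except the systemd-inhibit wrapper is taken from the question:

```ini
[Unit]
Description=UU Update Program

[Service]
Nice=19
IOSchedulingClass=2
IOSchedulingPriority=7
# hold a shutdown/reboot inhibitor lock while the updater runs
ExecStart=/usr/bin/systemd-inhibit --what=shutdown --who=uupgrades --why="update in progress" --mode=block /usr/bin/uupgrades
```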

How to get IIS AppPool Worker Process ID

I have a PowerShell script that is run automatically when our monitoring service detects that a website is down.
It is supposed to stop the AppPool (using Stop-WebAppPool -name $AppPool;), wait until it is really stopped, and then restart it.
Sometimes the process does not actually stop, manifested by the error
Cannot Start Application Pool:
The service cannot accept control messages at this time.
(Exception from HRESULT: 0x80070425)"
when you try to start it again.
If it takes longer than a certain number of seconds to stop (I will choose that amount of time after timing several stops to see how long it usually takes), I want to just kill the process.
I know that I can get the list of processes used by workers in the AppPool by doing dir IIS:\AppPools\MyAppPool\WorkerProcesses\,
Process ID State Handles Start Time
---------- ----- ------- ----------
7124 Running
but I can't figure out how to actually capture the process id so I can kill it.
In case that Process ID really is the id of the process to kill, you can:
$id = dir IIS:\AppPools\MyAppPool\WorkerProcesses\ | Select-Object -expand processId
Stop-Process -id $id
or
dir IIS:\AppPools\MyAppPool\WorkerProcesses\ | % { Stop-Process -id $_.processId }
In Command Prompt on the server, I just do the following for a list of running AppPool PIDs so I can kill them with taskkill or Task Mgr:
cd c:\windows\system32\inetsrv
appcmd list wp
taskkill /f /pid *PIDhere*
(Adding an answer from Roman's comment, since there may be cache issues with stej's solution)
Open Powershell as an Administrator on the web server, then run:
gwmi -NS 'root\WebAdministration' -class 'WorkerProcess' | select AppPoolName,ProcessId
You should see something like:
AppPoolName ProcessId
----------- ---------
AppPool_1 8020
AppPool_2 8568
You can then use Task Manager to kill it or in Powershell use:
Stop-Process -Id xxxx
If you get Get-WmiObject : Could not get objects from namespace root/WebAdministration. Invalid namespace then you need to enable the IIS Management Scripts and Tools feature using:
ipmo ServerManager
Add-WindowsFeature Web-Scripting-Tools
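Putting the pieces together, the stop, wait, then kill sequence from the question could be sketched like this. Untested outline: the pool name and the 30-second timeout are placeholders, and it assumes the WebAdministration module is available:

```powershell
Import-Module WebAdministration

$AppPool  = "MyAppPool"                  # placeholder pool name
$deadline = (Get-Date).AddSeconds(30)    # placeholder timeout

Stop-WebAppPool -Name $AppPool
# poll until the pool reports Stopped or the deadline passes
while ((Get-WebAppPoolState -Name $AppPool).Value -ne 'Stopped' -and (Get-Date) -lt $deadline) {
    Start-Sleep -Seconds 1
}

# if a worker process is still alive, kill it directly
if ((Get-WebAppPoolState -Name $AppPool).Value -ne 'Stopped') {
    dir IIS:\AppPools\$AppPool\WorkerProcesses\ | ForEach-Object {
        Stop-Process -Id $_.processId -Force
    }
}

Start-WebAppPool -Name $AppPool
```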

Resources