How can I run a cron job only when my laptop is active [WSL]? - cron

I have a small script (on WSL / Debian) to rsync my files to my Debian server.
I've set the script to run every two hours.
My laptop is a Windows 10 machine.
When put to sleep / idle, the laptop goes into a Modern Standby state:
When Modern Standby-capable systems enter sleep, the system is still
in S0 (a fully running state, ready and able to do work).
This is different from the old S3 state which is where the machine truly was on idle:
Windows and the SoC hardware are always listening for interesting
events (such as a network packet or user input at a keyboard) and will
wake up instantly when needed. The system will wake when there is real
time action required, such as for OS maintenance or when a user wakes
the system
This means that even when I put my laptop to sleep, it's still running the cron job every two hours, which is unnecessary.
Is there any way that WSL can retrieve the power state, e.g. through powercfg so that it only runs the script when the computer is awake?
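One practical approach (an untested sketch, not a verified recipe): rather than reading the power state directly, ask Windows how long it has been since the last user input and skip the sync when the machine looks idle. The `powershell.exe` bridge works on stock WSL installs, and `GetLastInputInfo` is the standard Win32 call for the last-input timestamp, but treat the threshold, the paths, and the overall structure below as assumptions to adapt.

```shell
#!/bin/bash
# Hypothetical cron wrapper: only rsync when the Windows side reports recent
# user input. Assumes powershell.exe is on the WSL PATH (it is by default).

IDLE_LIMIT_SECS=600   # treat >10 min without input as "asleep/idle"

# PowerShell payload: the standard Win32 GetLastInputInfo P/Invoke.
PS_IDLE='
Add-Type -TypeDefinition @"
using System;
using System.Runtime.InteropServices;
public class UserInput {
    [StructLayout(LayoutKind.Sequential)]
    public struct LASTINPUTINFO { public uint cbSize; public uint dwTime; }
    [DllImport("user32.dll")]
    public static extern bool GetLastInputInfo(ref LASTINPUTINFO plii);
    public static uint IdleSeconds() {
        LASTINPUTINFO lii = new LASTINPUTINFO();
        lii.cbSize = (uint)Marshal.SizeOf(typeof(LASTINPUTINFO));
        GetLastInputInfo(ref lii);
        return ((uint)Environment.TickCount - lii.dwTime) / 1000;
    }
}
"@
[UserInput]::IdleSeconds()
'

win_idle_seconds() {
    powershell.exe -NoProfile -Command "$PS_IDLE" 2>/dev/null | tr -d '\r'
}

should_sync() {
    # $1 = idle seconds; an empty or non-numeric value means the query failed,
    # in which case we sync anyway rather than silently stop backing up.
    case "$1" in ''|*[!0-9]*) return 0 ;; esac
    [ "$1" -lt "$IDLE_LIMIT_SECS" ]
}

idle="$(win_idle_seconds)"
if should_sync "$idle"; then
    echo "active (idle=${idle:-unknown}s) - syncing"
    # rsync -az "$HOME/docs/" user@server:/backup/docs/   # hypothetical paths
else
    echo "idle for ${idle}s - skipping this run"
fi
```

Call this wrapper from the crontab instead of the rsync script itself; the fail-open behaviour in `should_sync` is deliberate, so a broken PowerShell bridge degrades to the current always-sync behaviour.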

Related

A sharp increase in the number of threads when executing a program Net.Core on Debian OS

Good afternoon.
There is a console program written in .NET Core 3.1.
On Windows it works fine, and on a test machine running Debian Linux it also works fine, but on the customer's Debian Linux machine the following problem occurs:
The thread count grows constantly (for example, +10 threads every minute). After a few hours it exceeds several thousand (e.g. 6000), at which point the program freezes, tries to restart, and crashes.
Question:
Where should I look for the cause: in the settings of the virtual machine and Debian Linux, or in the peculiarities of running .NET Core 3.1 on Linux?
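Whatever the root cause turns out to be, a cheap first step is to watch the leak from outside the process. On Linux, each entry under /proc/&lt;pid&gt;/task is one thread, so counting entries counts threads; the process name below is a placeholder.

```shell
#!/bin/sh
# Log the thread count of a process: /proc/<pid>/task contains one
# directory per thread, so counting entries counts threads.

thread_count() {
    ls "/proc/$1/task" | wc -l
}

# Hypothetical usage - replace MyNetCoreApp with the real process name:
#   pid=$(pgrep -f MyNetCoreApp)
#   while sleep 60; do echo "$(date +%T) $(thread_count "$pid")"; done

# Quick self-check against this shell's own PID:
thread_count $$
```

If the count climbs steadily, the .NET-side tool `dotnet-counters monitor` (attached to the same PID) can help narrow down whether the threads belong to the thread pool or are being created directly.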

Yocto runs only one task inside at a time

I have set up my development environment inside a virtual machine running Ubuntu 14.04. My company doesn't allow me to run a Linux OS directly, probably for security reasons.
One thing I have observed is that in the VM it only runs one task at a time, whereas on my personal laptop it runs multiple tasks at a time.
Is there any way to configure poky, in local.conf for example or in any other file, to run multiple tasks at the same time? I have given more than 6 GB of RAM to the VM.
Because it is running one task at a time, the build is taking a lot of time.
Thanks for your time
The bitbake task executor queries the number of CPUs dynamically, so it seems you may have allocated only 1 CPU to your VM. You can check the CPUs visible inside the VM with the command below:
lscpu
You might want to allocate more CPUs; VirtualBox lets you do that:
Stop the virtual machine.
Click Settings -> System -> Processor and change the number of processors.
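It is also worth making sure local.conf is not capping parallelism. BitBake defaults both settings below to the detected CPU count, so they normally only need touching if someone pinned them; the values here assume a 4-core VM.

```
# conf/local.conf
BB_NUMBER_THREADS = "4"   # how many BitBake tasks run in parallel
PARALLEL_MAKE = "-j 4"    # passed to make inside each compile task
```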

Linux command on wake up from hibernate/suspend

I'm new to Linux and running Mint. I've seen a lot of documentation on creating commands that run when the OS starts up after the computer has been powered off. Is there a way to make similar commands run when the OS wakes up from hibernate or suspend? (For context, I'm running 'rfkill block bluetooth' on startup and would like to run it when my PC wakes from hibernate as well.)
Place your commands in a script in /lib/systemd/system-sleep/ and make sure it is executable (correct owner and permissions). systemd will then run it both when the system goes to sleep and when it wakes up.
For more information
man systemd-sleep
https://askubuntu.com/questions/226278/run-script-on-wakeup
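A minimal sketch of such a hook, assuming the standard systemd-sleep contract (each script is called with "pre &lt;verb&gt;" before sleeping and "post &lt;verb&gt;" after resuming, where the verb is suspend, hibernate, or similar); the filename is made up:

```shell
#!/bin/sh
# Hypothetical /lib/systemd/system-sleep/rfkill-bluetooth.sh (chmod +x it).
# systemd runs every executable in that directory with $1 = pre|post and
# $2 = suspend|hibernate|hybrid-sleep.

on_resume() {
    # Re-block bluetooth on every wake-up; tolerate rfkill being absent.
    rfkill block bluetooth 2>/dev/null || true
    echo "resumed from ${1:-unknown}"
}

case "$1" in
    post) on_resume "$2" ;;
esac
```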

Does the system execution time of a program change if it's running on a virtual machine?

A friend asked for a command that can be used to find the real system
execution time for a program in Linux. I replied that the time
command is a great one.
He then asked, is the time of execution (via time) for a program which is returned by the virtual machine when you query for the execution time, the same as the real system execution time of the program?
My instinct was to say it depends:
If there are enough resources on the machine, then the VM time returned would be the same as the real/wall clock time of the program execution.
If the system does NOT have sufficient resources, then the VM time can differ significantly from the real (system) execution time. Moreover, the VM is an application on top of the host OS, which has its own scheduler. The VM must invoke system calls that are processed by the host OS, which in turn communicates with the hardware, before a real (system) execution time can be reported. Hence, the time returned can differ from the real time in this situation.
If the executed program is simple, then the VM time could be equal to real (system) time.
If the executed program is NOT simple, then the VM time could be much different.
Are my assumptions correct?
I now wonder: how could you find/calculate the real execution time of a program run on a virtual machine? Any ideas?
The complexity of the program doesn't matter.
The guest OS doesn't have visibility into the host. If you run 'time' on the guest, the 'sys' value returned is describing guest system resources used, only.
Or to rephrase: in a typical virtual machine setup, you're going to allocate only a portion of the host CPU resources to the guest OS. But that's all the guest can see, and thus, that's all the 'time' command can report on. Since you allocated this when you launched the VM, the guest cannot exhaust all of the host resources.
This is true of pretty much any VM: it would be a major security issue if the guest had knowledge of the hypervisor.
So yes, the 'sys' time could absolutely differ between a VM and real hardware, because the guest won't have the full resources. You could also see variability depending on whether you're dealing with hardware or software virtualization.
Some good reading here (sections 10.3 through 10.5):
https://www.virtualbox.org/manual/ch10.html#hwvirt
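One small addition: a Linux guest does get a single hint that the hypervisor took CPU away from it, namely the "steal" column in /proc/stat (the %st field in top). A quick way to read it:

```shell
# Field 9 of the aggregate "cpu" line in /proc/stat is steal time:
# clock ticks during which the hypervisor ran something other than this
# guest. On bare metal it stays 0.
awk '/^cpu /{print "steal ticks:", $9}' /proc/stat
```

A rising steal count during a benchmark is a good sign that 'time' output from the guest understates what the host actually spent.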

Starting of a remote device manager

I am running a waveform that has devices on more than one computer. The domain manager and a device manager start up on one GPP (1); a device manager starts up on the other GPP (2). The domain manager and the device managers are started when the GPPs boot up.
I can have a procedure for the operator, that states start GPP(2) up first, then GPP(1). But this approach is not preferable. I would like the ability to start the Device Manager on GPP(2) from GPP(1) after the domain manager has been started.
Does REDHAWK have a standard way of starting remote Device Managers?
The DeviceManager is designed to wait for the DomainManager on startup. So, the standard procedure is to add a script to /etc/init.d so that the remote DeviceManager will start up whenever the remote machine running it boots.
To clarify, let's elaborate using your example. Machine 1 will run the DomainManager and DeviceManager(1)/GPP(1). This machine could be at 192.168.1.1. Machine 2 will run DeviceManager(2)/GPP(2). This machine could be at 192.168.1.2.
The DomainManager will start up whenever machine 1 boots. It will wait happily for DeviceManagers to register with it.
Machine 2's /etc/omniORB.cfg file is set to point to 192.168.1.1. When it boots, the DeviceManager will attempt to register with the DomainManager. One of two things will happen:
The DomainManager at 192.168.1.1 is already up and running. In this case, DeviceManager(2) registers successfully and is ready to run applications.
The DomainManager at 192.168.1.1 is not yet running. In this case, DeviceManager(2) will hang out and wait for the DomainManager to come up.
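For reference, the pointing mentioned above is a one-line naming-service entry in the usual omniORB InitRef form; the IP matches the example, but double-check the exact syntax and port against your REDHAWK install.

```
# /etc/omniORB.cfg on machine 2
InitRef = NameService=corbaname::192.168.1.1
```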
