I want to know if there is a way to run random processes that simulate the normal activity of a user working on a PC.
For example, generate random processes that consume resources the way a browser or a PDF reader would, until 50% or 60% of the memory is in use.
I am collecting data from virtual machines, and I would like that data to be as heterogeneous as possible.
I have tried the following:
Run random command in bash script
https://unix.stackexchange.com/questions/174688/how-can-i-start-a-process-with-any-name-which-does-nothing
But that's not exactly what I am looking for.
Can you help me?
Thanks in advance.
Would something like this work for you?
#!/usr/bin/env python3
# -*- coding: utf-8 -*-

import random
import subprocess
import time

import psutil

timeout = 5 * 60  # seconds
poll_time = 5  # seconds
mem_limit = psutil.virtual_memory().total * (50 / 100)  # cap memory usage at 50%

cmds = [
    'firefox www.some_website.com',
    'firefox www.other_website.com',
    'okular some_document.pdf',
    'vlc some_video.mp4',
    'vlc some_audio.mp3',
    # etc.
]

procs = []
quit = False
init_time = time.time()
while not quit:
    # only spawn another process while memory usage is still below the limit
    if psutil.virtual_memory().used < mem_limit:
        cmd = random.choice(cmds)  # randint(0, len(cmds)) could go out of range
        procs.append(subprocess.Popen(cmd, shell=True))
    time.sleep(poll_time)
    # once the timeout has elapsed, kill everything we started and stop
    if (time.time() - init_time) > timeout:
        for proc in procs:
            proc.kill()
        quit = True
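A possible refinement (my own sketch, not part of the answer above): instead of only spawning processes until the limit is reached, you could also kill a random spawned process whenever usage goes over the limit, so memory oscillates around the 50% target. The commands below are placeholders, as in the script above.
import random
import subprocess
import time

import psutil

cmds = ['firefox www.some_website.com', 'okular some_document.pdf']  # placeholders
mem_limit = psutil.virtual_memory().total * 0.5  # 50% target

procs = []
for _ in range(100):  # run for roughly 100 poll intervals
    if psutil.virtual_memory().used < mem_limit:
        # below the target: start another resource-consuming process
        procs.append(subprocess.Popen(random.choice(cmds), shell=True))
    elif procs:
        # above the target: stop one of the processes we started
        victim = random.choice(procs)
        victim.kill()
        procs.remove(victim)
    time.sleep(5)

for proc in procs:  # clean up whatever is still running
    proc.kill()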
Process P1:
# sub.py
# Find the sum of two numbers

def sum_ab(a, b):
    return a + b

def main():
    print(sum_ab(3, 6))

if __name__ == '__main__':
    main()
Process P2:
# run.py
# Execute sub.py 10 times
import psutil as ps

cmd = ["python3", "sub.py"]
for i in range(10):
    process = ps.Popen(cmd)
The above is the scenario I'm working with. I need to find the CPU and memory utilization of each subprocess launched by the 'run.py' script. Can anyone help me derive the resource information of the running processes? Specifically, how do I obtain the following in Python:
What is the CPU utilization of each 'sub.py' subprocess?
What is the memory utilization of each 'sub.py' subprocess?
With quite a bit of searching and effort, I found that it is possible to get resource-utilization figures for the subprocesses.
The updated Process P2 is as follows:
# run.py
# Execute sub.py 10 times

# Import the required utilities
import time
from subprocess import PIPE

import psutil as ps

# define the command for the subprocess
cmd = ["python3", "sub.py"]

for i in range(10):
    # Create the process
    process = ps.Popen(cmd, stdout=PIPE)
    peak_mem = 0
    peak_cpu = 0

    # while the process is running, calculate resource utilization
    while process.is_running():
        # set the sleep time to monitor at an interval of every second
        time.sleep(1)

        # capture the memory and cpu utilization at an instance
        mem = process.memory_info().rss / float(2 ** 30)
        cpu = process.cpu_percent()

        # track the peak utilization of the process
        if mem > peak_mem:
            peak_mem = mem
        if cpu > peak_cpu:
            peak_cpu = cpu
        if mem == 0.0:
            break

    # Print the results to the monitor for each subprocess run
    print("Peak memory usage for the iteration {} is {} GB".format(i, peak_mem))
    print("Peak CPU utilization for the iteration {} is {} %".format(i, peak_cpu))
I am trying to find out what the defaults are when gsutil mv is called without the -m option. From the config.py source code it looks like even without -m the default is to use the number of CPU cores as the process count, along with 5 threads. So by default, on a 4-core machine you would get 4 processes and 5 threads, i.e. multi-threaded out of the box. How do we find out what -m actually does? I think I saw in some documentation that -m defaults to 10 threads, but how many processes are spawned? I know you can override these settings, but what is the default with -m?
should_prohibit_multiprocessing, unused_os = ShouldProhibitMultiprocessing()
if should_prohibit_multiprocessing:
    DEFAULT_PARALLEL_PROCESS_COUNT = 1
    DEFAULT_PARALLEL_THREAD_COUNT = 24
else:
    DEFAULT_PARALLEL_PROCESS_COUNT = min(multiprocessing.cpu_count(), 32)
    DEFAULT_PARALLEL_THREAD_COUNT = 5
Also, would a mv command in a for loop take advantage of -m, or would it just feed gsutil one file at a time, making the parallelism pointless? I ask because the loop below took 24 hours to complete for 50,000 files, and I wanted to know whether the -m option would have helped. I'm not sure whether calling gsutil on each iteration would allow full threading, or whether it would just use 10 processes and 10 threads, making it roughly twice as fast.
#!/bin/bash
for files in $(cat listing2.txt) ; do
echo "Renaming: $files --> ${files#removeprefix-}"
gsutil mv gs://testbucket/$files gs://testbucket/${files#removeprefix-}
done
Thanks to the commenter @guillaume blaquiere,
I engineered a Python program that parallelizes the move calls in the cloud with 25 concurrent processes. I will share the code here in the hope that it helps others.
import os
import subprocess
import multiprocessing


class GsRenamer:
    def __init__(self):
        # expand ~ so the list form of Popen (shell=False) can find the binary
        self.gs_cmd = os.path.expanduser('~/google-cloud-sdk/bin/gsutil')
        self.file_list = []

    def execute_jobs(self, cmd):
        try:
            print('RUNNING PARALLEL RENAME: [{0}]'.format(cmd))
            subprocess.run(cmd, check=True, shell=True)
        except subprocess.CalledProcessError as e:
            # .format must be inside print(), otherwise it is called on None
            print('[{0}] FATAL: Command failed with error [{1}]'.format(cmd, e))

    def get_filenames_from_gs(self):
        self.file_list = []
        cmd = [self.gs_cmd, 'ls', 'gs://gs-bucket/jason_testing']
        p = subprocess.Popen(cmd, stdout=subprocess.PIPE)
        output = p.stdout.readlines()
        for files in output:
            files = files.decode('utf-8').strip()
            tokens = files.split('/')[-1]
            self.file_list.append(tokens)
        self.file_list = list(filter(None, self.file_list))

    def rename_files(self, string_original, string_replace):
        final_rename_list = []
        for files in self.file_list:
            renamed_files = files.replace(string_original, string_replace)
            rename_command = "{0} mv gs://gs-bucket/jason_testing/{1} " \
                             "gs://gs-bucket/jason_testing/{2}".format(
                                 self.gs_cmd, files, renamed_files)
            final_rename_list.append(rename_command)

        final_rename_list.sort()
        # use a local Pool object instead of assigning to the multiprocessing module
        with multiprocessing.Pool(processes=25) as pool:
            pool.map(self.execute_jobs, final_rename_list)


def main():
    gsr = GsRenamer()
    gsr.get_filenames_from_gs()
    # gsr.rename_files('sample', 'jason')
    gsr.rename_files('jason', 'sample')


if __name__ == "__main__":
    main()
Here's the Python code to run an arbitrary command returning its stdout data, or raise an exception on non-zero exit codes:
import subprocess

proc = subprocess.Popen(
    cmd,  # cmd is the arbitrary command string
    stderr=subprocess.STDOUT,  # Merge stdout and stderr
    stdout=subprocess.PIPE,
    shell=True)
stdout_data, _ = proc.communicate()
if proc.returncode != 0:
    raise subprocess.CalledProcessError(proc.returncode, cmd, output=stdout_data)
However, the subprocess module does not report execution time, and it does not support a timeout, i.e. the ability to kill a process that has been running for more than X seconds.
What is the simplest way to implement get_execution_time and a timeout in a Python 2.6 program meant to run on Linux?
Good question. Here is the complete code for this:
import time, subprocess # Importing modules.
timeoutInSeconds = 1 # Our timeout value.
cmd = "sleep 5" # Your desired command.
proc = subprocess.Popen(cmd,shell=True) # Starting main process.
timeStarted = time.time() # Save start time.
cmdTimer = "sleep "+str(timeoutInSeconds) # Waiting for timeout...
cmdKill = "kill "+str(proc.pid)+" 2>/dev/null" # And killing process.
cmdTimeout = cmdTimer+" && "+cmdKill # Combine commands above.
procTimeout = subprocess.Popen(cmdTimeout,shell=True) # Start timeout process.
proc.communicate() # Process is finished.
timeDelta = time.time() - timeStarted # Get execution time.
print("Finished process in "+str(timeDelta)+" seconds.") # Output result.
I'm trying to build a Scapy program that scans for beacon frames. Every router should broadcast beacon frames at an interval of X milliseconds so that potential hosts know the router (AP) is alive.
I'm getting nothing; the only kind of Dot11 frames I've been able to capture so far are probe requests, and very rarely some data or control frames. I set my wireless card to monitor mode before running the script, and it supports it. I don't know what I might be doing wrong... Here's the code:
from scapy.all import *

global list_prob
list_prob = []

def search_prob(packet1):
    # type 0, subtype 8 == beacon frame
    if packet1.haslayer(Dot11) and packet1[Dot11].type == 0 and \
            packet1[Dot11].subtype == 8:
        if packet1[Dot11].addr2 not in list_prob:
            if packet1[Dot11].info not in list_prob:
                print('[>]AP', packet1[Dot11].addr2, 'SSID', packet1[Dot11].info)
                list_prob.append(packet1[Dot11].addr2)
                list_prob.append(packet1[Dot11].info)

sniff(iface='wlan0mon', prn=search_prob)
I've also tried it with Dot11Beacon instead of subtype 8, and nothing changed. I'm programming with Python 3.5 on Linux.
Any ideas?
Code to constantly change the channel of the network interface using Python:
from threading import Thread
import subprocess, shlex, time
import threading

locky = threading.Lock()

def Change_Freq_channel(channel_c):
    print('Channel:', str(channel_c))
    command = 'iwconfig wlan1mon channel ' + str(channel_c)
    command = shlex.split(command)
    subprocess.Popen(command, shell=False)  # To prevent shell injection attacks!

while True:
    for channel_c in range(1, 15):
        t = Thread(target=Change_Freq_channel, args=(channel_c,))
        t.daemon = True
        locky.acquire()
        t.start()
        time.sleep(0.1)
        locky.release()
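For completeness, here is a sketch of my own (untested on your hardware) of the sniffing side to run alongside the channel hopper: it filters on the Dot11Beacon layer directly and pulls the SSID from the first information element. The interface name wlan0mon is taken from the question.
from scapy.all import Dot11, Dot11Beacon, Dot11Elt, sniff

seen = set()

def handle_beacon(pkt):
    # Beacon frames carry the SSID in the first Dot11Elt (element ID 0)
    if pkt.haslayer(Dot11Beacon):
        bssid = pkt[Dot11].addr2
        ssid = pkt[Dot11Elt].info.decode(errors='ignore')
        if bssid not in seen:
            seen.add(bssid)
            print('[>] AP', bssid, 'SSID', ssid)

sniff(iface='wlan0mon', prn=handle_beacon)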
Firstly, I'd like to say that I have only just begun learning Python, and I want to execute a Maven command inside my Python script (see the partial code below):
os.system("mvn surefire:test")
But unfortunately, this command sometimes runs for too long, so I want to know how to set a timeout threshold to control it.
That is to say, if the execution time exceeds X seconds, the program should skip the command.
Also, are there other useful solutions to this problem? Thanks in advance!
Use the subprocess module instead. By passing the command as a list and sticking with the default shell=False, we can simply kill the process when the timeout hits.
import subprocess

p = subprocess.Popen(['mvn', 'surefire:test'])
try:
    p.wait(my_timeout)
except subprocess.TimeoutExpired:
    p.kill()
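A small follow-up of my own (my_timeout below is a hypothetical 60-second value): after kill() it is worth calling wait() again so the dead process is reaped and its return code becomes available.
import subprocess

my_timeout = 60  # hypothetical timeout in seconds
p = subprocess.Popen(['mvn', 'surefire:test'])
try:
    p.wait(my_timeout)
except subprocess.TimeoutExpired:
    p.kill()
    p.wait()  # reap the killed process so it does not linger as a zombie
print("mvn exited with return code", p.returncode)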
You can also use the terminal's timeout command, like this:
import os
os.system('timeout 5s [Type Command Here]')
You can use s, m, h, or d for seconds, minutes, hours, or days.
You can also send a different signal to the command. If you want to learn more, see:
https://linuxize.com/post/timeout-command-in-linux/
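For example (my own illustration, using the -s option described in the article above), the Maven command from the question could be limited to five minutes and sent SIGKILL instead of the default TERM signal when the timeout expires:
import os

# Send SIGKILL if "mvn surefire:test" is still running after 5 minutes
os.system('timeout -s SIGKILL 5m mvn surefire:test')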
Simple answer
os.system does not support a timeout.
You can use Python 3's subprocess instead, which supports a timeout parameter, for example:
import subprocess

yourCommand = "mvn surefire:test"
timeoutSeconds = 5
subprocess.check_output(yourCommand, shell=True, timeout=timeoutSeconds)
Detailed Explanation
Going further, I have encapsulated this into a function, getCommandOutput, for you:
import subprocess

def getCommandOutput(consoleCommand, consoleOutputEncoding="utf-8", timeout=2):
    """Get command output from the terminal.

    Args:
        consoleCommand (str): console/terminal command string
        consoleOutputEncoding (str): console output encoding, default is utf-8
        timeout (int): max time in seconds to wait for the console command
    Returns:
        (bool, str): whether the command succeeded, and the console output
    Raises:
    """
    # print("getCommandOutput: consoleCommand=%s" % consoleCommand)
    isRunCmdOk = False
    consoleOutput = ""
    try:
        # consoleOutputByte = subprocess.check_output(consoleCommand)
        consoleOutputByte = subprocess.check_output(consoleCommand, shell=True, timeout=timeout)

        # commandPartList = consoleCommand.split(" ")
        # consoleOutputByte = subprocess.check_output(commandPartList)
        # print("type(consoleOutputByte)=%s" % type(consoleOutputByte))  # <class 'bytes'>
        # print("consoleOutputByte=%s" % consoleOutputByte)  # b'640x360\n'

        consoleOutput = consoleOutputByte.decode(consoleOutputEncoding)  # '640x360\n'
        consoleOutput = consoleOutput.strip()  # '640x360'
        isRunCmdOk = True
    except subprocess.CalledProcessError as callProcessErr:
        cmdErrStr = str(callProcessErr)
        print("Error %s for run command %s" % (cmdErrStr, consoleCommand))
    except subprocess.TimeoutExpired as timeoutErr:
        # check_output raises TimeoutExpired (not CalledProcessError) when the timeout is hit
        print("Timeout %s for run command %s" % (str(timeoutErr), consoleCommand))

    # print("isRunCmdOk=%s, consoleOutput=%s" % (isRunCmdOk, consoleOutput))
    return isRunCmdOk, consoleOutput
Demo:
isRunOk, cmdOutputStr = getCommandOutput("mvn surefire:test", timeout=5)