How to lock VirtualBox to get a screenshot through the SOAP API - python-3.x

I'm trying to use the SOAP interface of VirtualBox 6.1 from Python to get a screenshot of a machine. I can start the machine, but I get locking errors whenever I try to retrieve the screen layout.
This is the code:
import zeep

# helper to show the session lock status
def show_lock_state(session_id):
    session_state = service.ISession_getState(session_id)
    print('current session state:', session_state)

# connect
client = zeep.Client('http://127.0.0.1:18083?wsdl')
service = client.create_service("{http://www.virtualbox.org/}vboxBinding", 'http://127.0.0.1:18083?wsdl')
manager_id = service.IWebsessionManager_logon('fakeuser', 'fakepassword')
session_id = service.IWebsessionManager_getSessionObject(manager_id)

# get the machine id and start it
machine_id = service.IVirtualBox_findMachine(manager_id, 'Debian')
progress_id = service.IMachine_launchVMProcess(machine_id, session_id, 'gui')
service.IProgress_waitForCompletion(progress_id, -1)
print('Machine has been started!')
show_lock_state(session_id)

# unlock and then lock to be sure, doesn't have any effect apparently
service.ISession_unlockMachine(session_id)
service.IMachine_lockMachine(machine_id, session_id, 'Shared')
show_lock_state(session_id)

console_id = service.ISession_getConsole(session_id)
display_id = service.IConsole_getDisplay(console_id)
print(service.IDisplay_getGuestScreenLayout(display_id))
The machine starts properly, but the last line fails with VirtualBox error: rc=0x80004001, which from what I have read means the session is locked.
I tried releasing and acquiring the lock again, but even though that succeeds the error remains. I went through the documentation but cannot find other lock types I'm supposed to use, except the Write lock, which is not usable here since the machine is running. I could not find any example in any language.

I found an Android app called VBoxManager with this SOAP screenshot capability.
Running it through a MITM proxy, I reconstructed the calls it performs and rewrote them as their Zeep equivalents. In case anyone is interested in the future, the last lines of the above script now become:
import base64

console_id = service.ISession_getConsole(session_id)
display_id = service.IConsole_getDisplay(console_id)

resolution = service.IDisplay_getScreenResolution(display_id, 0)
print(f'display data: {resolution}')

image_data = service.IDisplay_takeScreenShotToArray(
    display_id,
    0,
    resolution['width'],
    resolution['height'],
    'PNG')

with open('screenshot.png', 'wb') as f:
    f.write(base64.b64decode(image_data))

Related

Boto3 client in multiprocessing pool fails with "botocore.exceptions.NoCredentialsError: Unable to locate credentials"

I'm using boto3 to connect to s3, download objects and do some processing. I'm using a multiprocessing pool to do the above.
Following is a synopsis of the code I'm using:
import json
import multiprocessing as mp

import boto3

session = None

def set_global_session():
    global session
    if not session:
        session = boto3.Session(region_name='us-east-1')

def function_to_be_sent_to_mp_pool():
    s3 = session.client('s3', region_name='us-east-1')
    # placeholder names below stand in for the real bucket/object handling
    list_of_b_n_o = list_of_buckets_and_objects
    for bucket, object in list_of_b_n_o:
        content = s3.get_object(Bucket=bucket, Key=object)
        data = json.loads(content['Body'].read().decode('utf-8'))
        write_processed_data_to_a_location()

def main():
    pool = mp.Pool(initializer=set_global_session, processes=40)
    pool.starmap(function_to_be_sent_to_mp_pool, list_of_b_n_o_i)
Now, when processes=40, everything works fine. With processes=64 it is still fine.
However, when I increase to processes=128, I get the following error:
botocore.exceptions.NoCredentialsError: Unable to locate credentials
Our machine has the required IAM roles for accessing S3. Moreover, the weird thing is that for some processes it works fine, whereas for others it throws the credentials error. Why is this happening, and how can I resolve it?
Another weird thing is that I'm able to trigger two jobs in two separate terminal tabs (each with its own SSH login shell to the machine). Each job spawns 64 processes, and that works fine as well, which means 128 processes are running simultaneously. But 80 processes in a single login shell fail.
Follow up:
In one approach I tried creating separate sessions for separate processes; in the other, I created the S3 client directly using boto3.client. Both throw the same error with 80 processes.
I also created separate clients with the following extra config:
Config(retries=dict(max_attempts=40), max_pool_connections=800)
This allowed me to use 80 processes at once, but anything > 80 fails with the same error.
Post follow up:
Can someone confirm if they've been able to use boto3 in multiprocessing with 128 processes?
This is actually a race condition when fetching the credentials. I'm not sure how credential fetching works under the hood, but I saw this question on Stack Overflow and this ticket on GitHub.
I was able to resolve this by keeping a random wait time for each of the processes. The following is the updated code which works for me:
client_config = Config(retries=dict(max_attempts=400), max_pool_connections=800)
time.sleep(random.randint(0, num_processes*10)/1000) # random sleep time in milliseconds
s3 = boto3.client('s3', region_name='us-east-1', config=client_config)
I tried keeping the range for the sleep time smaller than num_processes*10, but that failed again with the same issue.
@DenisDmitriev, since you are fetching the credentials and storing them explicitly, I think that avoids the race condition, and hence the issue is resolved.
PS: the values for max_attempts and max_pool_connections aren't based on any particular logic; I was plugging in different values until the race condition was figured out.
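For completeness, here is a sketch of how this fits into the pool initializer, so every worker builds its client at a slightly different moment (names mirror my earlier snippet; not a tested recipe):
import random
import time

import boto3
from botocore.config import Config

num_processes = 128
client_config = Config(retries=dict(max_attempts=400), max_pool_connections=800)
s3 = None

def set_global_session():
    global s3
    # stagger client creation so the workers don't all hit the
    # credential provider at exactly the same moment
    time.sleep(random.randint(0, num_processes * 10) / 1000)
    s3 = boto3.client('s3', region_name='us-east-1', config=client_config)

# pool = mp.Pool(initializer=set_global_session, processes=num_processes)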
I suspect that AWS recently reduced throttling limits for metadata requests because I suddenly started running into the same issue. The solution that appears to work is to query credentials once before creating the pool and have the processes in the pool use them explicitly instead of making them query credentials again.
I am using fsspec with s3fs, and here's what my code for this looks like:
import logging
import random
import time

log = logging.getLogger(__name__)

def get_aws_credentials():
    '''
    Retrieve current AWS credentials.
    '''
    import asyncio, s3fs
    fs = s3fs.S3FileSystem()
    # Try getting credentials
    num_attempts = 5
    for attempt in range(num_attempts):
        credentials = asyncio.run(fs.session.get_credentials())
        if credentials is not None:
            if attempt > 0:
                log.info('received credentials on attempt %s', 1 + attempt)
            return asyncio.run(credentials.get_frozen_credentials())
        time.sleep(15 * (random.random() + 0.5))
    raise RuntimeError('failed to request AWS credentials '
                       'after %d attempts' % num_attempts)

def process_parallel(fn_d, max_processes):
    # [...]
    c = get_aws_credentials()

    # Cache credentials
    import fsspec.config
    prev_s3_cfg = fsspec.config.conf.get('s3', {})
    try:
        fsspec.config.conf['s3'] = dict(prev_s3_cfg,
                                        key=c.access_key,
                                        secret=c.secret_key)
        num_processes = min(len(fn_d), max_processes)

        from concurrent.futures import ProcessPoolExecutor
        with ProcessPoolExecutor(max_workers=num_processes) as pool:
            for data in pool.map(process_file, fn_d, chunksize=10):
                yield data
    finally:
        fsspec.config.conf['s3'] = prev_s3_cfg
Raw boto3 code will look essentially the same, except instead of the whole fs.session and asyncio.run() song and dance, you'll work with boto3.Session itself and call its get_credentials() and get_frozen_credentials() methods directly.
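An untested boto3-only sketch of that idea, with illustrative names:
import boto3

def get_frozen_aws_credentials():
    # Resolve credentials once, in the parent process
    credentials = boto3.Session().get_credentials()
    if credentials is None:
        raise RuntimeError('failed to resolve AWS credentials')
    return credentials.get_frozen_credentials()

def make_s3_client(frozen):
    # Workers build their clients from the already-resolved keys,
    # so they never have to query the metadata service themselves
    return boto3.client(
        's3',
        aws_access_key_id=frozen.access_key,
        aws_secret_access_key=frozen.secret_key,
        aws_session_token=frozen.token,
    )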
I ran into the same problem in a multiprocessing situation. I suspect there is a client initialization problem when you use multiple processes, so I suggest using a getter function to obtain the S3 client. It works for me.
import boto3

g_s3_cli = None

def get_s3_client(refresh=False):
    global g_s3_cli
    if not g_s3_cli or refresh:
        g_s3_cli = boto3.client('s3')
    return g_s3_cli

Flask app simulating a blocking API to abstract away a webhook-based callback

I have an Angular 8 web app. It needs to send some data for analysis to a Python Flask app. The Flask app sends the data to a third-party service and gets the response back through a webhook.
I want to provide a clean interface to the client so that the client does not need to expose a webhook of its own.
Hence I am trying to initiate the request from the Flask app, wait until the data arrives through the webhook, and then return.
Here is the code.
# In an autoscaled micro-service this will not work. Right now the scaling is manual
# and set to 1 instance, hence keeping this sliding-window list in RAM is okay.
reqSlidingWindow = []

@app.route('/handlerequest', methods=['POST'])
def Process_client_request_and_respond(param1, param2, param3):
    # payload code removed
    # CORS header part removed
    querystring = {APIKey, 'https://mysvc.mycloud.com/postthirdpartyresponsehook'}
    response = requests.post(thirdpartyurl, json=payload, headers=headers, params=querystring)
    if response.status_code == SUCCESS:
        respdata = response.json()
        requestId = respdata["request_id"]
        requestobject = {}
        requestobject['requestId'] = requestId
        reqSlidingWindow.append(requestobject)

    # Now wait for the response to arrive through the webhook.
    # Keep checking the list until reqRecord["AllAnalysisDoneFlag"] is set to True.
    # If set, read reqRecord["AnalysisResult"] = jsondata
    jsondata = None
    while jsondata is None:
        time.sleep(2)
        for reqRecord in reqSlidingWindow:
            if reqRecord["requestId"] == da_analysis_req_Id:
                print("Found matching req id in req list. Checking if analysis is done.")
                print(reqRecord)
                if reqRecord["AllAnalysisDoneFlag"] == True:
                    jsondata = reqRecord["AnalysisResult"]
    return jsonify({"AnalysisResult": "Success", "AnalysisData": jsondata})

@app.route('/webhookforresponse', methods=['POST'])
def process_thirdparty_svc_response(reqIdinResp):
    print(request.data)
    print("response received at")
    data = request.data.decode('utf-8')
    jsondata = json.loads(data)

    for reqRecord in reqSlidingWindow:
        # print("In loop. current analysis type" + reqRecord["AnalysisType"])
        if reqRecord["requestId"] == reqIdinResp:
            reqRecord["AllAnalysisDoneFlag"] = True
            reqRecord["AnalysisResult"] = jsondata
    return
I'm trying to maintain a sliding window of requests in a list.
Observations so far:
First, this does not work. The 'webhookforresponse' function does not seem to run until my request function returns, i.e. my while loop appears to block everything even though it contains a time.sleep(). My assumption was that the Flask framework would ensure the callback gets called, since the sleep in the other route gives it adequate time and Flask is internally multithreaded.
I tried running Python threads for the sliding-window data structure and also used RLocks. This did not alter the behavior. I have not tried Flask-specific threading.
Questions:
What is the right architecture for the above need with Flask? I need a clean REST interface, without callbacks, for the Angular client. Everything else can change.
If the above code is to be used, what changes should I make? Is threading required at all? (See the sketch after these questions for the kind of structure I have in mind.)
Though I could use a database and then read from there, that still requires polling the sliding window.
Should I use Flask-specific multithreading? Is there a concrete example with a skeletal design and threading library choices?
Please suggest the right way to proceed to provide an abstract REST API for the Angular client that hides the back-end callbacks and other complexities.
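To make question 2 concrete, this is roughly the structure I imagine, with the polling loop replaced by a per-request threading.Event that the webhook handler sets. It is an untested sketch: submit_to_third_party() stands in for the removed payload code, and it still assumes the server handles requests concurrently (e.g. app.run(threaded=True) or a multi-worker WSGI server).
import threading
from flask import Flask, jsonify, request

app = Flask(__name__)
pending = {}                     # request_id -> {'event': Event, 'result': None}
pending_lock = threading.Lock()

@app.route('/handlerequest', methods=['POST'])
def handle_request():
    request_id = submit_to_third_party()          # hypothetical helper
    event = threading.Event()
    with pending_lock:
        pending[request_id] = {'event': event, 'result': None}
    # block this worker thread until the webhook fires, or time out
    if not event.wait(timeout=120):
        return jsonify({'AnalysisResult': 'Timeout'}), 504
    with pending_lock:
        result = pending.pop(request_id)['result']
    return jsonify({'AnalysisResult': 'Success', 'AnalysisData': result})

@app.route('/webhookforresponse', methods=['POST'])
def webhook():
    data = request.get_json(force=True)
    with pending_lock:
        entry = pending.get(data['request_id'])
    if entry is not None:
        entry['result'] = data
        entry['event'].set()      # wake up the blocked request handler
    return ''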

Communication between Python and SuperCollider through OSC

I'm trying to connect Python with SuperCollider through OSC, but it's not working.
I'm using Python 3 and the osc4py3 library.
The original idea was to send a text word by word, but while trying this I realized the connection was not working.
Here's the SC code:
(
OSCdef.new(\texto, {
    |msg, time, addr, port|
    [msg, time, addr, port].postIn;
},
'/texto/supercollider',
n
)
)
OSCFunc.trace(true);
o = OSCFunc(\texto);
And here's the Python code:
osc_startup()
osc_udp_client("127.0.0.1", 57120, "supercollider")

## here goes a function called leerpalabras to separate words in rows
with open("partitura.txt", "r") as f:
    for palabra in leerpalabras(f):
        msg = oscbuildparse.OSCMessage("/texto/supercollider", ",s", palabra)
        osc_send(msg, "supercollider")
        sleep(2)

osc_terminate()
I've also tried this, to see if maybe something was wrong with my for loop (with the startup and terminate calls, of course):
msg = oscbuildparse.OSCMessage("/texto/supercollider", ",s", "holis")
osc_send(msg, "supercollider")
I run the trace method in SC; nothing appears in the post window when I run the Python script in the terminal, but no error appears in either of them, so I'm a bit lost on what to test to make sure it's getting somewhere.
It doesn't print in the SC post window; it just says OSCdef(texto, /texto/supercollider, nil, nil, nil).
When I run the SuperCollider piece of your example, and then run:
n = NetAddr("127.0.0.1", 57120);
n.sendMsg('/texto/supercollider', 1, 2, 3);
... I see the message printed immediately (note that you used postIn instead of postln, if you don't fix that you'll get an error instead of a printed message).
Like you, I don't see anything when I send via the Python library - my suspicion is that there's something wrong on the Python side. There's a hint in this response that you have to call osc_process() after sends, but that still doesn't work for me.
You can try three things:
Run OSCFunc.trace in SuperCollider and watch for messages (this will print ALL incoming OSC messages), to see if your OSCdef is somehow not receiving messages.
Try a network analyzer like Packet Peeper (http://packetpeeper.org/) to watch network traffic on your local loopback network lo0. When I do this, I can clearly see messages sent by SuperCollider, but I don't see any of the messages I send from Python, even when I send in a loop and call osc_process().
If you can't find any sign of Python sending OSC packets, try a different Python library - there are many others available.
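For example, a quick cross-check with the python-osc package (assuming it is installed) takes only a couple of lines:
# hypothetical cross-check with python-osc instead of osc4py3
from pythonosc.udp_client import SimpleUDPClient

client = SimpleUDPClient("127.0.0.1", 57120)   # SuperCollider's default language port
client.send_message("/texto/supercollider", "holis")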
(I'm the osc4py3 author.)
osc4py3 stores messages to send in internal lists and returns immediately. These lists are processed during osc_process() calls or directly by background threads (depending on the selected threading model).
So, if you have selected the as_eventloop threading model, you need to call osc_process() from time to time, like:
…
with open("partitura.txt", "r") as f:
    for palabra in leerpalabras(f):
        msg = oscbuildparse.OSCMessage("/texto/supercollider", ",s", palabra)
        osc_send(msg, "supercollider")
        for missme in range(4):
            osc_process()
            sleep(0.5)
…
See doc: https://osc4py3.readthedocs.io/en/latest/userdoc.html#threading-model
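In other words, the threading model is chosen by which osc4py3 flavour you import from; with the event-loop flavour your own script has to pump osc_process(). A complete minimal example with as_eventloop (adapted from the question, not copied from the docs) would be:
# as_eventloop: your own code must call osc_process() regularly
from osc4py3.as_eventloop import (osc_startup, osc_udp_client,
                                  osc_send, osc_process, osc_terminate)
from osc4py3 import oscbuildparse
from time import sleep

osc_startup()
osc_udp_client("127.0.0.1", 57120, "supercollider")
msg = oscbuildparse.OSCMessage("/texto/supercollider", ",s", ["holis"])
osc_send(msg, "supercollider")
for _ in range(10):
    osc_process()      # actually pushes the queued message onto the wire
    sleep(0.1)
osc_terminate()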

Pyppeteer behaving differently on Linux and Windows

I have pyppeteer code that browses around. Let's assume it only clicks on <a> tags.
It runs fine on my local Windows machine, but breaks whenever I run it remotely on a Linux server. Same conda env, same code.
The relevant part of my code, simplified, looks like:
async def act(self):
    element = self.element

    async def get_action():
        tag_name = await self.page.evaluate(
            'elem => { return elem.tagName.toLowerCase(); }',
            element)
        action = None
        if tag_name == 'a':
            action = element.click()
        else:
            action = async_pass()
        return action

    async def get_action_future():
        # gather syntax based on:
        # https://miyakogi.github.io/pyppeteer/reference.html#pyppeteer.page.Page.click
        action = await get_action()
        future_action = asyncio.gather(
            action,
            asyncio.sleep(0.001),  # dirty, dirty work-around, doesn't work nicely otherwise
        )
        waited_future = await asyncio.shield(future_action)
        if waited_future[0] is None:
            await self.page.waitForNavigation(self.wait_options)
        return None

    await get_action_future()
It runs fine on my Windows machine.
When I start it on a Linux machine, it starts off OK, whether there's navigation or not. Then, after a few navigation clicks, I'm getting a timeout, then another error:
Error encountered: Navigation Timeout Exceeded: 20000 ms exceeded.
# then I trigger the element selector and the act method again, wrapped in try/except
Error encountered: Protocol Error (Runtime.callFunctionOn): Session closed. Most likely the page has been closed.
I'm stuck on this problem for a while and would appreciate any help!
My environment includes:
python=3.6, pyppeteer=0.0.25.
BTW:
I noticed that this question has a similar error, but the error is different (Protocol error (Page.navigate): Target closed instead of Protocol Error (Runtime.callFunctionOn)), as is the environment (Node.js, Puppeteer, etc.).
It seems that pyppeteer is no longer maintained. I was experiencing the same issue and migrated to Selenium, which is working well.
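For anyone making the same switch, a bare-bones Selenium equivalent of "click the <a> tags" might look roughly like this (sketch only; assumes a chromedriver is available on PATH and example.com stands in for the real site):
from selenium import webdriver
from selenium.webdriver.common.by import By

driver = webdriver.Chrome()                      # assumes chromedriver is installed
driver.get('https://example.com')
for link in driver.find_elements(By.TAG_NAME, 'a'):
    if link.is_displayed():
        link.click()                             # explicit waits may still be needed
        break                                    # after navigation
driver.quit()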

Disable fingerprint in Twisted Web

I ran nmap against my server and saw the fingerprint below; how can I disable it?
443/tcp open ssl/http TwistedWeb httpd 9.0.0
The "fingerprint" is how server identifies itself at the start of http session. Thus we should look at what implements Web server in twisted and where does it keep its identification.
Now if we look at http://twistedmatrix.com/trac/browser/tags/releases/twisted-12.2.0/twisted/web/server.py line 498 states
version = "TwistedWeb/%s" % copyright.version
This variable is then used by the Request.process() method:
class Request(pb.Copyable, http.Request, components.Componentized):
    ....

    def process(self):
        "Process a request."

        # get site from channel
        self.site = self.channel.site

        # set various default headers
        self.setHeader('server', version)
        self.setHeader('date', http.datetimeToString())

        # Resource Identification
        self.prepath = []
        self.postpath = map(unquote, string.split(self.path[1:], '/'))
        try:
            resrc = self.site.getResourceFor(self)
            self.render(resrc)
        except:
            self.processingFailed(failure.Failure())
So you could easily subclass Request and override it to do what you like.
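A minimal sketch of that subclassing idea (the class names here are invented, and it filters the header in setHeader rather than re-implementing all of process; not tested against a specific Twisted release):
from twisted.web import server

class QuietRequest(server.Request):
    def setHeader(self, name, value):
        # drop the default Server banner, pass every other header through
        key = name if isinstance(name, str) else name.decode('ascii')
        if key.lower() == 'server':
            return
        server.Request.setHeader(self, name, value)

class QuietSite(server.Site):
    requestFactory = QuietRequest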
Or, in theory, you could do something like this in your application code:
from twisted.web import server
server.version = "COBOL HTTPD SERVICE"
overriding the version value in the imported module.
