Connection configuration loops - Prosys OPC UA Client - node.js

I'm using the sample code from the documentation and I'm trying to connect to the server using the Prosys OPC UA Client. I have tried opcua-commander and Integration Objects' OPC UA client, and it looks like the server works just fine.
Here's what is happening:
After entering the endpointUrl, the client appends urn:NodeOPCUA-Server-default to the URL.
The client asks me to specify security settings.
The client asks me to choose a server - there is only one option, urn:NodeOPCUA-Server-default.
Then it goes back to steps 2 and 3 over and over.
If I just minimize the Prosys client without closing the configuration dialog, after some time I get this in the terminal:
Server: closing SESSION new ProsysOpcUaClient Session15 because of timeout = 300000 has expired without a keep alive
channel = ::ffff:10.10.13.2 port = 51824
I have tried this project and it works -> node-opcua-htmlpanel. So what's missing in the sample code?
After opening the debugger I noticed that each time I select security settings and hit OK, server_publish_engine reports:
server_publish_engine:179 Cancelling pending PublishRequest with statusCode BadSecureChannelClosed (0x80860000) length = 0

This is due to a specific interoperability issue that was introduced in node-opcua 0.2.2. It will be fixed in the next version of node-opcua. The resolution can be tracked here: https://github.com/node-opcua/node-opcua/issues/464

The issue has been handled at the Prosys OPC Forum:
The error happens because the server sends different
EndpointDescriptions in GetEndpointsResponse and
CreateSessionResponse.
In GetEndpoints, the returned EndpointDescriptions contain
TransportProfileUri=http://opcfoundation.org/UA-Profile/Transport/uatcp-uasc-uabinary.
In CreateSessionResponse, the corresponding TransportProfileUri is
empty.
In principle, the server application is not working according to
specification. The part 4 of the OPC UA specification states that “The
Server shall return a set of EndpointDescriptions available for the
serverUri specified in the request. … The Client shall verify this
list with the list from a DiscoveryEndpoint if it used a
DiscoveryEndpoint to fetch the EndpointDescriptions. It is recommended
that Servers only include the server.applicationUri, endpointUrl,
securityMode, securityPolicyUri, userIdentityTokens,
transportProfileUri and securityLevel with all other parameters set to
null. Only the recommended parameters shall be verified by the
client.”
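If you want to verify this behaviour against your own server, one option is to dump the EndpointDescriptions it returns from GetEndpoints and inspect the TransportProfileUri field. A minimal sketch, assuming the python-opcua (FreeOpcUa) client and a placeholder endpoint URL:

from opcua import Client

# Connect, fetch the server's EndpointDescriptions, and disconnect.
client = Client("opc.tcp://localhost:26543")
for ep in client.connect_and_get_server_endpoints():
    print(ep.EndpointUrl, ep.SecurityMode, ep.SecurityPolicyUri,
          ep.TransportProfileUri or "<empty TransportProfileUri>")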

Related

Referring to an open Chrome window using Selenium [duplicate]

For some unknown reason, my browser opens test pages on my remote server very slowly. So I am wondering whether I can reconnect to the browser after quitting the script; not executing webdriver.quit() would leave the browser open. It is probably some kind of hook or webdriver handle.
I have looked through the Selenium API docs but didn't find any such function.
I'm using Chrome 62 (x64), Windows 7, and Selenium 3.8.0.
I'd appreciate any help, whether or not the question can be solved.
No, you can't reconnect to the previous Web Browsing Session after you quit the script. Even if you are able to extract the Session ID, Cookies and other session attributes from the previous Browsing Context, you still won't be able to pass those attributes as a HOOK to the WebDriver.
A cleaner way is to call webdriver.quit() and then spawn a new Browsing Context.
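In code, that recommendation is simply (a minimal sketch):

from selenium import webdriver

driver = webdriver.Chrome()   # original Browsing Context
# ... run the tests ...
driver.quit()                 # end the session cleanly
driver = webdriver.Chrome()   # spawn a fresh Browsing Context instead of reattaching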
Deep Dive
There have been a lot of discussions and attempts around reconnecting WebDriver to an existing running Browsing Context. In the discussion Allow webdriver to attach to a running browser, Simon Stewart (creator of WebDriver) clearly mentioned:
Reconnecting to an existing Browsing Context is a browser-specific feature, hence it can't be implemented in a generic way.
With internet-explorer, it's possible to iterate over the open windows in the OS and find the right IE process to attach to.
firefox and google-chrome need to be started in a specific mode and configuration, which effectively means that just attaching to a running instance isn't technically possible.
tl;dr
webdriver.firefox.useExisting not implemented
Yes, that's actually quite easy to do.
A selenium <-> webdriver session is represented by a connection URL and a session_id; you just reconnect to an existing one.
Disclaimer - the approach uses selenium internal properties ("private", in a way), which may change in new releases; you'd better not use it in production code; it's better not to use it against a remote SE (your hub, or a provider like BrowserStack/Sauce Labs), because of a caveat/resource-drainage issue explained at the end.
When a webdriver instance is initiated, you need to get the aforementioned properties; sample:
from selenium import webdriver
driver = webdriver.Chrome()
driver.get('https://www.google.com/')
# now Google is opened, the browser is fully functional; print the two properties
# command_executor._url (it's "private", not for a direct usage), and session_id
print(f'driver.command_executor._url: {driver.command_executor._url}')
print(f'driver.session_id: {driver.session_id}')
With those two properties now known, another instance can connect; the "trick" is to initiate a Remote driver, and provide the _url above - thus it will connect to that running selenium process:
driver2 = webdriver.Remote(command_executor=the_known_url)
# when the started selenium is a local one, the url is in the form 'http://127.0.0.1:62526'
When that is run, you'll see a new browser window being opened.
That's because upon initiating the driver, the selenium library automatically starts a new session for it - and now you have 1 webdriver process with 2 sessions (browser instances).
If you navigate to a URL, you'll see it is executed on that new browser instance, not the one left over from the previous start - which is not the desired behavior.
At this point, two things need to be done - a) close the current SE session ("the new one"), and b) switch this instance to the previous session:
if driver2.session_id != the_known_session_id:   # this is pretty much guaranteed to be the case
    driver2.close()  # this closes the session's window - it is currently the only one, thus the session itself will be auto-killed, yet:
    driver2.quit()   # for remote connections (like ours), this deletes the session, but does not stop the SE server

# take over the session that's already running
driver2.session_id = the_known_session_id

# do something with the now-hijacked session:
driver2.get('https://www.bing.com/')
And that is it - you're now connected to the previous/already existing session, with all its properties (cookies, LocalStorage, etc.).
By the way, you do not have to provide desired_capabilities when initiating the new remote driver - those are stored and inherited from the existing session you took over.
Caveat - having a SE process running can lead to some resource drainage in the system.
Whenever one is started and then not closed - like in the first piece of code - it will stay there until you manually kill it. By this I mean - on Windows, for example - you'll see a "chromedriver.exe" process that you have to terminate manually once you are done with it. It cannot be closed by a driver that has connected to it as to a remote selenium process.
The reason: whenever you initiate a local browser instance and then call its quit() method, there are 2 parts to it - the first is to delete the session from the Selenium instance (what's done in the second code piece up there), and the other is to stop the local service (the chromedriver/geckodriver) - which generally works fine.
The thing is, for Remote sessions the second part is missing - your local machine cannot control a remote process; that's the job of that remote's hub. So that 2nd part is literally a pass python statement - a no-op.
If you start too many selenium services on a remote hub and don't have control over it, that'll lead to resource drainage on that server. Cloud providers like BrowserStack take measures against this - they close services with no activity in the last 60 s, etc. - yet this is something you don't want to rely on.
And as for local SE services - just don't forget to occasionally clean up the OS from orphaned selenium drivers you forgot about :)
OK, after mixing the various solutions shared here and tweaking, I have this working now as below. The script will reuse a previously left-open Chrome window if present - the remote connection is perfectly able to kill the browser if needed, and the code functions just fine.
I would love a way to automate getting the session_id and URL of the previously active session without having to write them out to a file during that session for later pick-up...
This is my first post on here, so apologies for breaking any norms.
from selenium import webdriver
from selenium.webdriver.chrome.service import Service
from selenium.webdriver.remote.webdriver import WebDriver
from webdriver_manager.chrome import ChromeDriverManager

# Set manually - read/write from a file for automation
session_id = "e0137cd71ab49b111f0151c756625d31"
executor_url = "http://localhost:50491"

def attach_to_session(executor_url, session_id):
    original_execute = WebDriver.execute
    def new_command_execute(self, command, params=None):
        if command == "newSession":
            # Mock the response so that no new session is created
            return {'success': 0, 'value': None, 'sessionId': session_id}
        else:
            return original_execute(self, command, params)
    # Patch the function before creating the driver object
    WebDriver.execute = new_command_execute
    driver = webdriver.Remote(command_executor=executor_url, desired_capabilities={})
    driver.session_id = session_id
    # Restore the original function
    WebDriver.execute = original_execute
    return driver

myoptions = webdriver.ChromeOptions()  # adjust the options as needed

# Try to connect to the last opened session - if that fails, open a new window
try:
    driver = attach_to_session(executor_url, session_id)
    driver.current_url
    print(" Driver has an active window, we have connected to it and are running here now:")
    print(" Chrome session ID", session_id)
    print(" executor_url", executor_url)
except Exception:
    print("No driver window open - making a new one")
    driver = webdriver.Chrome(service=Service(ChromeDriverManager().install()), options=myoptions)
    session_id = driver.session_id
    executor_url = driver.command_executor._url
Without getting into why you think that leaving an open browser window will solve the problem of it being slow: you don't really need a handle to do that. Just keep running the tests without closing the session or, in other words, without calling driver.quit(), as you have mentioned yourself. The question here though: are you using a framework that comes with its own runner, like Cucumber?
In any case, you must have some "setup" and "cleanup" code. So what you need to do is to ensure, during the "cleanup" phase, that the browser is back to its initial state (see the sketch after this list). That means:
A blank page is displayed
Cookies are erased for the session
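A minimal sketch of such a cleanup phase, assuming a plain Selenium driver object:

def cleanup(driver):
    # Reset the browser to its initial state instead of quitting it:
    driver.delete_all_cookies()   # erase the session's cookies
    driver.get("about:blank")     # leave a blank page displayed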

Server Sent Events with Pyramid - How to detect if the connection to the client has been lost

I have a pyramid application that sends SSE messages. It works basically like this:
import json
import random
import time

from pyramid.response import Response
from pyramid.view import view_config

def message_generator():
    for i in range(100):
        print("Sending message:" + str(i))
        yield "data: %s\n\n" % json.dumps({'message': str(i)})
        time.sleep(random.randint(1, 10))

@view_config(route_name='events')
def events(request):
    headers = [('Content-Type', 'text/event-stream'),
               ('Cache-Control', 'no-cache')]
    response = Response(headerlist=headers)
    response.app_iter = message_generator()
    return response
When I browse to /events I get the events. When I move to another page the events stop; when I close the browser the events stop.
The problem happens, for example, if I am on /events and I switch off the computer. The server does not know that the client got lost, and message_generator keeps sending messages into the void.
The page A Look at Server-Sent Events mentions this:
...the server should detect this (when the client stops) and stop
sending further events as the client is no longer listening for them.
If the server does not do this, then it will essentially be sending
events out into a void.
Is there a way to detect this with Pyramid? I tried
request.add_finished_callback()
but this callback seems to be called when the view returns the response, not when the client disconnects.
I use Gunicorn with gevent to start the server.
Any idea is highly appreciated.
From PEP 3333:
Applications returning a generator or other custom iterator should not assume the entire iterator will be consumed, as it may be closed early by the server.
Basically, a WSGI server "should" invoke the close() method on the app_iter when a client disconnects (all generators, such as the one in your example, support this automatically). However, a server is not required to do it, and it seems many WSGI servers do not. For example, you mentioned gunicorn (which I haven't independently verified), but I did verify that waitress does not either. I opened [1] on waitress as a result and have been working on a fix. Streaming responses in WSGI environments are shaky at best and usually depend on the server. For example, on waitress, you need to set send_bytes=0 to avoid it buffering the response data.
[1] https://github.com/Pylons/waitress/issues/236
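If your server does call close(), a plain Python generator can observe the disconnect: close() raises GeneratorExit at the current yield point. A minimal sketch of the question's generator with disconnect handling (the print is illustrative):

import json
import random
import time

def message_generator():
    try:
        for i in range(100):
            yield "data: %s\n\n" % json.dumps({'message': str(i)})
            time.sleep(random.randint(1, 10))
    except GeneratorExit:
        # Raised when the WSGI server calls close() on the app_iter,
        # e.g. after it notices the client has disconnected.
        print("Client disconnected, stopping the event stream")
        raise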

Syncing app state with clients using socketio

I'm running a node server with SocketIO which keeps a large object (app state) that is updated regularly.
All clients receive the object after connecting to the server and should keep it updated in real-time using the socket (read-only).
Here's what I have considered:
1:
Emit a delta of changes to the clients using diff after updates
(this requires dealing with the reliability of delivery and lost updates)
2:
Use the diffsync package (however, it allows clients to push changes to the server, whereas I need updates to be unidirectional, i.e. server --> clients)
I'm confident there should be a readily available solution for this, but I was not able to find a definitive answer.
The solution is very easy. You must modify the server so that it accepts updates only from trusted clients.
let Server = require('diffsync').Server;
let receiveEdit = Server.prototype.receiveEdit;

Server.prototype.receiveEdit = function (connection, editMessage, sendToClient) {
  // checkIsTrustedClient is your own predicate that decides which
  // connections are allowed to push changes to the server state
  if (checkIsTrustedClient(connection)) {
    receiveEdit.call(this, connection, editMessage, sendToClient);
  }
};
But note this comment in the diffsync source:
// TODO: implement backup workflow
// has a low priority since `packets are not lost` - but don't quote me on that :P
console.log('error', 'patch rejected!!', edit.serverVersion, '->',
            clientDoc.shadow.serverVersion, ':',
            edit.localVersion, '->', clientDoc.shadow.localVersion);
The second option is to try to find another solution based on jsondiffpatch.
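If you go with option 1 and roll it yourself, the usual trick is to version the state so clients can detect lost deltas and request a resync. The server in the question is Node/Socket.IO, but to keep the examples here in one language, this is a sketch of the idea using the python-socketio server; apply_update and the event names are illustrative, not part of any library API:

import socketio

sio = socketio.Server()
app = socketio.WSGIApp(sio)   # serve with any WSGI server

state = {}     # the large app-state object (authoritative, server-side)
version = 0    # monotonically increasing state version

@sio.event
def connect(sid, environ):
    # New clients get the full state plus the current version,
    # giving them a consistent baseline for later deltas.
    sio.emit('full_state', {'version': version, 'state': state}, to=sid)

def apply_update(delta):
    # Call this from your own server-side code whenever the state changes.
    global version
    state.update(delta)
    version += 1
    # Broadcast only the delta; clients track the version number and
    # request a resync if they see a gap (i.e. a lost update).
    sio.emit('delta', {'version': version, 'delta': delta})

@sio.event
def resync(sid):
    # A client detected a missed delta and asks for the full state again.
    sio.emit('full_state', {'version': version, 'state': state}, to=sid)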

Error 2007 - SQLyog

Good Morning,
I am trying to connect to a MySQL database using SQLyog. I have created a new connection and entered all the necessary information (login, password & port), but when I click "OK" I get this error message: (Error 2007: Protocol mismatch; server version = 1, client version = 10). I have pinged the database server and it responds successfully!
Could you please tell me how I can solve this problem? I need to access the database urgently.
Thanks a lot.
This is usually due to using a very old MySQL server (before 3.22.x) which has a different protocol version.
You can take a look at this link to see which server versions are supported.
To check your server protocol version, go to the MySQL command line and type:
SHOW VARIABLES LIKE "%version%"
SQLyog supports only the protocol version 10.
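You can also check the protocol version programmatically; a minimal sketch using the mysql-connector-python package (host and credentials below are placeholders):

import mysql.connector

conn = mysql.connector.connect(host="db.example.local",
                               user="root", password="secret")
cur = conn.cursor()
cur.execute('SHOW VARIABLES LIKE "protocol_version"')
print(cur.fetchone())   # e.g. ('protocol_version', '10') on a modern server
conn.close()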

IIS Application pool identity

I am attempting to obtain a data feed from Yahoo Finance. I am doing this with the following code:
System.Net.WebRequest request = System.Net.WebRequest.Create(
    "http://download.finance.yahoo.com/download/quotes.csv?format=sl&ext=.csv&symbols=^ftse,^ftmc,^ftas,^ftt1x,^dJA");
request.UseDefaultCredentials = true;
// set properties of the request
using (System.Net.WebResponse response = request.GetResponse())
{
    using (System.IO.StreamReader reader = new System.IO.StreamReader(response.GetResponseStream()))
    {
        return reader.ReadToEnd();
    }
}
I have placed this code into a console application and, using Console.WriteLine on the output, I receive the information I require. I used the 'Run as...' command to execute this under a specific domain account.
When I use this code from within a page load, I receive the following error message: "No connection could be made because the target machine actively refused it 76.13.114.90:80".
This seems to suggest that the call is reaching Yahoo (is this true?) and that something is missing.
This would suggest there is an identity difference between the calls made from the console application and from the application pool.
Environment: Windows Server 2003, IIS 6.0, .NET 4.0
"Target machine actively refused it" indicates that the TCP connection itself is not succeeding. This could be due to the fact that the Proxy settings when run under IIS are not the same as those that apply when you run in the console.
You can fix this by setting a WebProxy on your request, that points to the proxy server being used in the environment.
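One way to check whether a proxy is the difference is to request the same URL both directly and through an explicit proxy; a sketch in Python (the proxy address is a placeholder for whatever your environment uses):

import urllib.request

URL = ("http://download.finance.yahoo.com/download/quotes.csv"
       "?format=sl&ext=.csv&symbols=^ftse,^ftmc,^ftas,^ftt1x,^dJA")

openers = {
    'direct':  urllib.request.build_opener(urllib.request.ProxyHandler({})),
    'proxied': urllib.request.build_opener(
        urllib.request.ProxyHandler({'http': 'http://proxy.example.local:8080'})),
}

# If only the proxied request succeeds, the IIS worker process needs the
# same proxy configured (e.g. by setting a WebProxy on the request above).
for name, opener in openers.items():
    try:
        opener.open(URL, timeout=10)
        print(name, 'OK')
    except OSError as exc:
        print(name, 'failed:', exc)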
Yes, an active refusal is an indication that the target machine received the request and that the information in the headers is either incorrect or insufficient to process it. Given that you had to run this call under a 'Run as' command in the console, it is entirely possible that the application pool's identity user does not have the appropriate permissions or username. You can try changing the identity user to that specific domain account to see if it alleviates the problem, but you may have to isolate this particular function into its own application pool in order to protect the rest of the website from that configuration.
