There was no endpoint listening at <URI> that could accept the message. This is often caused by an incorrect address or SOAP action

I have two WCF clients consuming a 3rd party web service.
These two clients execute the same method call. In one case it works every time; in the other I get the "There was no endpoint listening ..." message.
As far as I can tell, the only difference between the two calls is that they live in two different client executables, which means the .exe.config files are not the same. The source code is shared between the two Visual Studio projects, so that isn't different.
In fact the content of the two exe.config files is almost exactly the same; the only difference is that the exe.config for the failing call has larger values for the maxBufferSize and maxReceivedMessageSize attributes of the binding element, as well as a larger sendTimeout value.
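For reference, those attributes live on the binding element in the client's .exe.config. A fragment along these lines (the binding type, name, and values here are illustrative, not the actual ones):

    <bindings>
      <basicHttpBinding>
        <!-- Illustrative values only; the failing client simply had larger
             numbers here than the working one. -->
        <binding name="ThirdPartyServiceBinding"
                 maxBufferSize="2097152"
                 maxReceivedMessageSize="2097152"
                 sendTimeout="00:05:00" />
      </basicHttpBinding>
    </bindings>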

This isn't really an answer, it's an explanation.
The problem is that of the two clients above, one was a desktop Windows Forms app and the other a Windows Service. They both used the same code base (i.e. the same classes) and nearly the same app.config files.
BUT the service logged in under the SystemAccount, and at some sites that account doesn't seem to have the rights/profile to access the internet, so it could not find the web service endpoint. Obvious, when you know.

Proton BlockingConnection Listen Claims on Azure Service bus

I get the following error messages:
'Listen' claim(s) are required to perform this operation. Resource: 'sb:/XXXXX.servicebus.windows.net/queue'.
proton._utils.LinkDetached: receiver 21881c3f-a71a-41fc-92c5-4a8d82956cdf from queue closed due to: Condition('amqp:unauthorized-access', "Unauthorized access. 'Listen' claim(s) are required to perform this operation.")
I have Listen claims set up, so I'm not sure what I am doing wrong. The same Python code works fine when run from a separate VM, and I can send a message to the queue.
It is only when I run the code from an App Service that I hit the Listen claims error. I am using the proton qpid Python library, and I am passing the keyname and key to the connection string...
Without more detail it's hard to hazard a precise fix, but some thoughts come to mind, because, candidly, I trust that error (especially since it worked from another machine):
Have you checked that the queue name and key are making it into the App Service intact?
Have you checked that the connection string is formatted correctly? You mention "passing the keyname to the connection string", which makes me wonder whether you built it by hand rather than copying it from the portal (and, if so, whether it could have gotten mangled).
(At this point I'm totally guessing.) Perhaps qpid is picky about entity-level vs. namespace-level connection strings?
I wouldn't really suspect firewall/network failures; those would likely manifest totally differently.
My next debugging steps would be to validate the above and to try connecting without the qpid intermediary. I've seen other SO posts asking the same question (re the Listen failure) with no follow-ups, so perhaps there is an anomaly there, but my lack of familiarity with qpid makes it hard to do more than guess.
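For that last step, a minimal sketch using the azure-servicebus package (the queue name and connection string are placeholders; copy the real string from the portal rather than assembling it by hand). If this also fails with the unauthorized error, the problem is the claims/credentials rather than proton:

    from azure.servicebus import ServiceBusClient

    # Shape of a namespace-level connection string, for reference:
    # Endpoint=sb://<namespace>.servicebus.windows.net/;SharedAccessKeyName=<keyname>;SharedAccessKey=<key>
    conn_str = "<connection string copied from the portal>"

    with ServiceBusClient.from_connection_string(conn_str) as client:
        # Receiving requires the Listen claim, so this exercises exactly
        # the permission the error complains about.
        with client.get_queue_receiver(queue_name="<queue>", max_wait_time=5) as receiver:
            for msg in receiver:
                print(str(msg))
                receiver.complete_message(msg)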
Don't hesitate to reach out on our GitHub if you run into SDK-related trouble in the future; full disclosure, I am a maintainer of the Python library.

How can a web app work offline based on a Service Worker?

I know there are many documents about Service Workers, and many questions have already been asked.
But today has been a long day, so I'm too tired to read through lots of docs right now.
I just want to explain my understanding of Service Workers and how they help serve a web app offline, and I hope somebody can tell me whether it's right or not.
Everything I know about Service Workers is that they intercept the browser's network requests and do something with them. So I guess that when a Service Worker intercepts, it caches every request, and when the network isn't connected, it serves users the data it cached.
Thanks for all replies.
Yes, your thoughts are right. Here are some more details about how the whole thing functions.
A service worker (SW), like a web worker, runs on a different thread than the one used by the main web app. This allows a SW to keep running even when the web app is not open, allowing it, for instance, to receive and show web notifications.
Unlike a web worker, which is used for generic purposes, a SW acts specifically as a proxy between our web application and the network. However, it is up to us to define and implement what the SW caches locally and how; by default, the SW doesn't know what to store in the cache.
For this we have to implement caching strategies that target static assets (like .js or .css files, for instance) or even URLs (but keep in mind that the Cache API, used by the SW, can only cache GET calls, not PUT/POST).
Once the assets or URLs we are interested in are defined within the scope of a specific strategy, the SW will intercept all outgoing requests, check for a match, and, where appropriate, serve the data from the local cache instead of going over the network.
Of course this depends on the strategy we choose/implement.
Since the requested data is already available locally, the SW can deliver it even when the user is offline.
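As a minimal illustration, here is a sketch of a cache-first strategy (the cache name and asset list are placeholders):

    // sw.js -- minimal cache-first sketch
    const CACHE_NAME = 'static-v1';

    self.addEventListener('install', (event) => {
      // Pre-cache a few static assets at install time
      event.waitUntil(
        caches.open(CACHE_NAME).then((cache) =>
          cache.addAll(['/', '/app.js', '/styles.css'])
        )
      );
    });

    self.addEventListener('fetch', (event) => {
      // The Cache API only stores GET requests, so skip everything else
      if (event.request.method !== 'GET') return;
      event.respondWith(
        caches.match(event.request).then(
          (cached) =>
            cached ||
            fetch(event.request).then((response) => {
              // Keep a copy of the network response for offline use
              const copy = response.clone();
              caches.open(CACHE_NAME).then((cache) => cache.put(event.request, copy));
              return response;
            })
        )
      );
    });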
If you're interested, I wrote an article describing service workers in detail, along with some of the most common caching strategies applied to different scenarios.

How are C/S RTE ports implemented in AUTOSAR?

I'm wondering about this because they are vastly different from S/R RTE ports. Data sent through S/R can be observed/recorded; after all, the RTE is what takes the incoming data and copies it to a temporary/direct location, so that data is quantifiable. BUT when talking about C/S, the client somehow has access to functions offered by a server, and those functions are executed in the client's context, not the server's context. Does anybody know how this is implemented?
I do not really understand what your question is, because you have somehow already answered it yourself by writing "when talking about C/S, the client somehow has access to functions offered by a server, and those functions are executed in the client's context, not the server's context."
So, in the simplest case, the client simply invokes a function in the server.
When client and server live in different tasks, or even on different uC cores, events also get involved and the call becomes more complex.
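To make that concrete, here is a deliberately simplified sketch of the intra-task case. Real RTE code is generated from your configuration, and all names below are illustrative (following the Rte_Call_<port>_<operation> pattern):

    /* Server runnable, implemented in the server SWC (illustrative): */
    Std_ReturnType Ssw_GetTemperature(uint16 *value)
    {
        *value = 42u;
        return RTE_E_OK;
    }

    /* When client and server are mapped to the same partition/task, the
     * generated RTE glue can simply forward the call, so the server
     * runnable executes in the client's context: */
    Std_ReturnType Rte_Call_TempPort_GetTemperature(uint16 *value)
    {
        return Ssw_GetTemperature(value);
    }

In the cross-task or cross-core case, the RTE typically queues the call, activates the server runnable via an event in the server's context, and makes the client wait for (or poll) the result instead.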

How to properly test an Azure bot service

I'm able to successfully load test my bot server by getting the proper auth token from Microsoft's auth URL (basically through this page).
I was wondering if this is a valid test of the service, considering that we're not actually hitting the Bot Framework's endpoint (which has rate limiting).
Is there another way to load test a bot service wherein I can replicate the Bot Framework's throttling/rate limits?
I ended up using load tests with Visual Studio and Visual Studio Team Services.
The reason I used this approach is that you can set up a full suite of load tests. An Azure Bot Service is either a Web App or a Function App with an endpoint prepared for receiving messages via HTTP POST, so in the end it is just a web service.
You can set up load tests for different endpoints, including the number of hits to a selected endpoint. In the case of bots you can, for instance, set up a test with 100 fake messages sent to the bot to see how it performs, as sketched below.
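For reference, each fake message is just an HTTP POST of an Activity JSON payload to the bot's messaging endpoint; a sketch in Python with placeholder values (real channels send more fields than this):

    import requests

    token = "<JWT obtained from the Microsoft auth URL, as in the question>"

    activity = {
        # Minimal, illustrative Activity payload
        "type": "message",
        "channelId": "loadtest",
        "from": {"id": "user-1"},
        "recipient": {"id": "my-bot"},
        "conversation": {"id": "conv-1"},
        # Point serviceUrl at a mock so the bot's replies don't hit the
        # real connector during the test:
        "serviceUrl": "https://example.invalid/mock-connector",
        "text": "hello",
    }

    rsp = requests.post(
        "https://<your-bot-host>/api/messages",  # placeholder endpoint
        json=activity,
        headers={"Authorization": "Bearer " + token},
    )
    print(rsp.status_code)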
You can read more under these two links below:
Load test your app in the cloud using Visual Studio and VSTS
Quickstart: Create a load test project
Unfortunately, as stated in the documentation you linked, the rates are not publicly available because of how often they are adjusted.
Regarding user-side throttling: this should not actually have an effect either way as long as you simulate reasonable traffic, and even if you go a bit overboard, an individual user hitting rate limiting is functionally equivalent to just having a bit more traffic. A single user sending more messages to the bot is the same as three users sending the same messages slightly slower, and there is no limit on your bot in terms of how many customers you might have. That said, a user getting a message, reading it, and typing up a response should not put themselves into a situation where they are rate-limited.
Regarding bot-side throttling, however, it is useful to know whether your bot is sending messages too fast for the system. If you only ever reply directly to messages from users, this will not be an issue, as the system is built with replying to each user message in mind. The only area where you might run into trouble is if you are sending additional (or unsolicited) messages, but even here, as long as you are within reasonable limits, you should be OK (i.e. if you aren't sending several messages back to a user as fast as possible for each message they send you, you will probably not have problems). You can set a threshold for bot replies within your channel at some reasonable-sounding limit to test this.
If you would like to see how your bot responds in cases where throttling is occurring (and not necessarily forcing it into tripping the throttling threshold), consider setting your custom channel to send 429 errors to your bot every so often so that it has to retry sending the message.
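If you go that route, the retry behavior you want to verify on the bot side amounts to resending on 429 and honoring Retry-After when present; roughly (a hypothetical helper, not a Bot Framework API):

    import time
    import requests

    def post_with_retry(url, payload, headers, max_attempts=5):
        """Sketch: resend on HTTP 429, honoring Retry-After when present."""
        for attempt in range(max_attempts):
            rsp = requests.post(url, json=payload, headers=headers)
            if rsp.status_code != 429:
                return rsp
            # Fall back to exponential backoff if no Retry-After header
            time.sleep(float(rsp.headers.get("Retry-After", 2 ** attempt)))
        return rsp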

Ektorp querying performance against CouchDB is 4x slower when initiating the request from a remote host

I have a Spring MVC app running under Jetty. It connects to a CouchDB instance on the same host using Ektorp.
In this scenario, once the Web API request comes into Jetty, I don't have any code that connects to anything not on the localhost of where the Jetty instance is running. This is an important point for later.
I have debug statements in Jetty to show me the performance of various components of my app, including the performance of querying my CouchDB database.
Scenario 1: When I initiate the API request from localhost, i.e. I use Chrome to go to http://localhost:8080/, my debug statements indicate a CouchDB performance of X.
Scenario 2: When I initiate the exact same API request from a remote host, i.e. I use Chrome to go to http://<server>:8080/, my debug statements indicate a CouchDB performance of 4X.
It looks like something is causing the connection to CouchDB to be much slower in scenario 2 than scenario 1. That doesn't seem to make sense, since once the request comes into my app, regardless of where it came from, I don't have any code in the application that establishes the connection to CouchDB differently based on where the initial API request came from. As a matter of fact, I have nothing that establishes the connection to CouchDB differently based on anything.
It's always the same connection (from the application's perspective), and I have been able to reproduce this issue 100% of the time with a Jetty restart in between scenario 1 and 2, so it does not seem to be related to caching either.
I've gone fairly deep into StdCouchDbConnector and StdHttpClient to try to figure out if anything is different in these two scenarios, but cannot see anything different.
I have added timers around the executeRequest(HttpUriRequest request, boolean useBackend) call in StdHttpClient, and confirmed that this is where the delay is happening. The time difference between scenarios 1 and 2 is severalfold on client.execute(), which basically uses the Apache HttpClient to connect to CouchDB.
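(For context, the instrumentation is nothing fancier than a stopwatch around the call, roughly like this inside executeRequest, where client and request are already in scope:)

    long start = System.nanoTime();
    org.apache.http.HttpResponse rsp = client.execute(request); // the slow call
    long tookMs = java.util.concurrent.TimeUnit.NANOSECONDS.toMillis(System.nanoTime() - start);
    log.debug("client.execute() took {} ms", tookMs);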
I have also tried always using the "backend" HttpClient in StdHttpClient, just to take Apache HTTP caching out of the equation, and I've gotten the same results as Scenarios 1 and 2.
Has anyone run into this issue before, or does anyone have any idea what may be happening here? I have gone all the way down to the org.apache.http.impl.client.DefaultRequestDirectory to try to see if anything was different between scenarios 1 and 2, but couldn't find anything ...
A couple of additional notes:
a. I'm currently constrained to a Windows environment in EC2, so instances are virtualized.
b. Scenarios 1 and 2 give the same response time when the underlying instance is not virtualized. But see a - I have to be on AWS.
c. I can also reproduce the 4X slower performance of scenario 2 with a third scenario: instead of making the localhost:8080/ request with Chrome, I make it with Postman, which is a Chrome application. Using Postman from the Jetty instance itself, I can reproduce the 4X slower times.
The only difference I see in c. above is that the request headers in Chrome's developer tools indicate a Remote Address of [::1]:8080. I don't have any way to set that through Postman, so I don't know if that's the difference maker. And if it were, first I wouldn't understand why. And second, I'm not sure what I could do about it, since I can't control how every single client is going to connect to my API.
All theories, questions, ideas welcome. Thanks in advance!
