I am working with an extremely simple proxy service configured on the new WSO2 Micro Integrator 1.0.0. I use Integration Studio and its built-in Micro Integrator to run and test the functionality. However, for some reason I cannot call my proxy service.
I can clearly see my changes are reflected as it boots up and the following line appears:
ProxyService named 'myprox' has been deployed from file
Also, it mentions that the endpoints have been configured:
INFO {org.apache.synapse.transport.passthru.core.PassThroughListeningIOReactorManager} - Pass-through EI_INTERNAL_HTTP_INBOUND_ENDPOINT Listener started on 0.0.0.0:9201
The custom proxy service is now narrowed down to just a LOG and RESPOND mediator. Whatever URL I use, the same error keeps popping up:
WARN {org.wso2.carbon.inbound.endpoint.internal.http.api.InternalAPIDispatcher} - No Internal API found to dispatch the message
So far I have tried every combination I can imagine, with every one of them producing the above message. The latest I tried was:
http://localhost:9201/services/myprox
I tried with and without the "/services/" subdirectory. I tried with and without HTTPS, using the provided 9164 port. I also tried variations of the 8290 and 8253 ports, to no avail.
When I run this CAR file on EI 6.5.0, I get a result at the URL mentioned above.
What is going on here?
It seems you are trying to call the proxy through an internal inbound endpoint port; the WARN message you have shown indicates that. In the Micro Integrator, the default HTTP port for proxy services is 8290, so your proxy URL should look like the one below.
http://localhost:8290/services/myprox
(Please note that the above-mentioned port is the default one. It might change if you started the server with a port offset or configured it differently in your settings.)
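For example, here is a rough Node.js sketch of calling the proxy once it is deployed (the payload and Content-Type are placeholders for whatever your proxy expects):

const http = require('http');

// POST a sample payload to the proxy on the default 8290 HTTP port.
const req = http.request(
  { host: 'localhost', port: 8290, path: '/services/myprox', method: 'POST',
    headers: { 'Content-Type': 'application/json' } },
  res => {
    let body = '';
    res.on('data', chunk => (body += chunk));
    res.on('end', () => console.log(res.statusCode, body));
  }
);
req.end(JSON.stringify({ hello: 'world' }));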
Please go through this blog for a proxy sample created and deployed into Micro Integrator from Integration Studio.
https://www.yenlo.com/blog/a-first-look-at-wso2-enterprise-integrator-6.5.0-m5-micro-integrator-and-developer-studio
We are attempting to use websockets in ColdFusion (2018.0.13.329786) in an app we have running on Azure VMs behind Cloudflare. However, we are continually getting this error on the client side:
WebSocket connection to 'wss://www.*************.com/cfws' failed:
CFWebSocketWrapper.open # cfwebsocketCore.js:21
init # cfwebsocketChannel.js:49
_cf_websockets_init_6539553945348401 # strategies-for-devel…ing-with-impact:175
fire # cfajax.js:1214
$E.windowLoadHandler # cfajax.js:1321
Uncaught TypeError: Cannot set properties of undefined (setting 'readyState')
at WebSocket.wsConnection.onerror (cfwebsocketCore.js:54:29)
wsConnection.onerror # cfwebsocketCore.js:54
error (async)
CFWebSocketWrapper.open # cfwebsocketCore.js:53
init # cfwebsocketChannel.js:49
_cf_websockets_init_6539553945348401 # *************:175
fire # cfajax.js:1214
$E.windowLoadHandler # cfajax.js:1321
load (async)
$E.onWindowLoad # cfajax.js:1297
cfinit # cfajax.js:1332
(anonymous) # cfajax.js:1834
We have a CFC that's called when a message is posted to a channel; it writes to a log file on the server, and this log file never gets updated. This is unsurprising, as it appears that something is preventing the connection altogether.
From a configuration perspective, we run these updates when the VM is created:
webSocketObj = createObject("component", "cfide.adminapi.websocket");
webSocketObj.setProperty(propertyName="EnableWebSocketServer", propertyValue="true");
webSocketObj.setProperty(propertyName="EnableProxyPort", propertyValue="8581");
and then via cfExecute:
#server.coldfusion.rootdir#/lib/wsproxyconfig.jar -ws IIS -site All -host localhost -port 8581
and then the CF service and IIS are restarted.
We have also enabled the websocket 'switch' in Cloudflare.
This should be the same as going into CF Admin, opening the WebSockets tab, ticking "Use Proxy", and using the default port of 8581. This should send everything through IIS on port 443 from the client's perspective.
Cloudflare and Azure say that no special configuration is needed. And we can see that CF has port 8581 open.
The most infuriating thing is that we worked on this in our Dev environment last year and, after much trial and error, got it working. However, our notes from that time were not good, and when we did the above to try to get this working in our QA environment, it did not work. We're obviously missing a step somewhere, but have not been able to figure it out.
Can anyone who has gotten this working explain what steps are required to make ColdFusion websockets work on an Azure VM behind Cloudflare?
We solved this issue. Hopefully this will help anyone else who runs into the same problem:
Per this doc, we realized that wsproxyconfig was supposed to create an 'application' (like a virtual folder) in IIS called cfws, pointing to <CF_INSTALL_HOME>/config/wsproxy/1. However, it was not doing so. Once that application was created, everything started working as expected.
Upon further testing we found that it created this application only if it was run as administrator. Otherwise, it reported success and provided no warnings or failures, but did not create the application.
We were running wsproxyconfig from the command line via CFExecute, so it was running under an administrative user, but that was apparently not sufficient. So we moved the wsproxyconfig call to one of our PowerShell scripts and had it run with administrative privileges and that solved our problem.
This had apparently worked in our dev environment originally because we ran wsproxyconfig manually.
My client has a feature in the app that converts an audio file into text. For this, I am using the Google Cloud Speech-to-Text API. The client has a VM setup where no internet connection is available, and all network traffic must go through a proxy if it needs to reach the internet. The Speech-to-Text API calls don't go via the proxy but directly hit the firewall, which in turn blocks them, and the transcription fails.
I looked for ways of using global proxies in the app, which didn't work, as these calls are gRPC-based and not REST-based. Looking at the gRPC code as well, I used one of the environment variables it provides for proxy settings, but even that didn't work.
I also checked whether the Google Speech-to-Text client libraries provide any proxy-related settings, but there is no solution there either.
The Google Cloud API calls use gRPC, and the gRPC protocol uses HTTP/2, which doesn't seem to provide proxy-based control.
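For reference, a rough sketch of the environment-variable approach (illustrated here with Node.js and the @google-cloud/speech client; the proxy address is a placeholder, and in my environment this still did not route the traffic through the proxy):

// gRPC-based Google Cloud clients are expected to honor these proxy environment
// variables when they are set before the channel is created. Proxy address is a placeholder.
process.env.grpc_proxy = 'http://proxy.internal:3128';

const speech = require('@google-cloud/speech');

async function transcribe(audioBase64) {
  const client = new speech.SpeechClient();
  const [response] = await client.recognize({
    config: { encoding: 'LINEAR16', sampleRateHertz: 16000, languageCode: 'en-US' },
    audio: { content: audioBase64 }, // base64-encoded audio content
  });
  return response.results.map(r => r.alternatives[0].transcript).join('\n');
}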
I have already tried to follow the steps in the instructions below, but the traffic still does not go through the proxy.
Any ideas what else I can do?
https://medium.com/google-cloud/accessing-google-cloud-apis-though-a-proxy-fe46658b5f2a
https://developers.google.com/gdata/articles/proxy_setup
I have HTTP Version set to 2.0, but App Service is acting like it is not.
I'm using https://tools.keycdn.com/http2-test to test and it says Negative! <site> does not support HTTP/2.0.
Chrome is also using HTTP/1.1.
It looks like this is affecting all apps in the App Service Plan. I have 2 currently and neither have working HTTP/2. I added a third and it doesn't support HTTP/2 either. I have HTTPS setup on both apps and my requests are using HTTPS.
I've tried all sorts of combinations of changing the setting and restarting. I've tried stopping both apps and then restarting them.
I contacted Azure support and they found an issue with the server my app service was hosted on. They were able to fix the issue and it is working now.
I tested on my site and got the same error message as you. However, after waiting for several minutes, it switched to HTTP/2.
As you have already done, go to the App Service's Application Settings and set the HTTP version to 2.0. The issue may simply be caused by a delay before the change takes effect.
If you want to verify it, as Zahid said, you could refer to this blog to check whether the http20Enabled attribute value is true.
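You can also check what the endpoint actually negotiates with a small script; here is a rough Node.js sketch (the host name is a placeholder):

const tls = require('tls');

// Offer both h2 and http/1.1 and print whichever protocol the server picks.
const host = 'myapp.azurewebsites.net'; // replace with your app's host name
const socket = tls.connect(
  { host, port: 443, servername: host, ALPNProtocols: ['h2', 'http/1.1'] },
  () => {
    console.log('Negotiated protocol:', socket.alpnProtocol); // 'h2' once HTTP/2 is active
    socket.end();
  }
);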
Azure is only just starting to roll out full HTTP/2 support.
HTTP/2 is supported by the HTTP server on App Service, but the ILB (reverse-proxy) router doesn't support HTTP/2 on the client side.
So HTTP/2 is not available end to end because of the internal reverse proxy (ARR), but they are migrating to the YARP project, which supports HTTP/2 and gRPC.
We are using CF Diego API version 2.89. Currently I am able to use it and see the vcap and the app resources when running cf ssh myApp.
Now it's become harder :-)
I want to deploy App1 that will "talk" to App2 and have access to its file system via code (node.js), the way it's available on the command line when you run ls... Is that possible?
I've found this lib, which provides the ability to connect over SSH via code, but I'm not sure what I should put for the host, port, etc.
In the connect call I provided the password, which should instead be retrieved via code.
EDIT
const { Client } = require('ssh2');

const conn = new Client();
conn.on('ready', () => {
  // list the app container's file system, as I would on the command line
  conn.exec('ls', (err, stream) => {
    if (err) throw err;
    stream.on('data', d => console.log(d.toString()));
    stream.on('close', () => conn.end());
  });
}).connect({
  host: 'ssh.cf.mydomain.com',
  port: 2222,
  username: 'cf:181c32e2-7096-45b6-9ae6-1df4dbd74782/0',
  password: 'qG0Ztpu1Dh'
});
Now, when I use cf ssh-code (to get the password), I see a lot of requests, which I have tried to simulate via Postman without success.
Could someone assist? I need to get the password value somehow...
If I don't provide it, I get the following error:
SSH Error: All configured authentication methods failed
By the way, let's assume that I cannot use the CF networking functionality or volume services, and I know that the container is ephemeral...
The process of what happens behind the scenes when you run cf ssh is documented here.
It obtains an SSH token; this is the same as running cf ssh-code, which just gets an authorization code from UAA. If you run CF_TRACE=true cf ssh-code, you can see exactly what it's doing behind the scenes to get that code.
You would then need an SSH client (probably a programmatic one) to connect using the following details:
port -> 2222
user -> cf:<app-guid>/<app-instance-number> (ex: cf:54cccad6-9bba-45c6-bb52-83f56d765ff4/0)
host -> ssh.system_domain (look at cf curl /v2/info if you're not sure)
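Put together, a rough Node.js sketch (using the ssh2 library from the question; the system domain and app GUID are placeholders, and the passcode from cf ssh-code is single-use, so fetch a fresh one per connection):

const { execSync } = require('child_process');
const { Client } = require('ssh2');

// Shell out to the cf CLI for a fresh one-time SSH passcode.
const passcode = execSync('cf ssh-code').toString().trim();

const conn = new Client();
conn.on('ready', () => {
  console.log('connected');
  conn.end();
}).connect({
  host: 'ssh.example-system-domain.com', // ssh.<system_domain> from cf curl /v2/info
  port: 2222,
  username: 'cf:<app-guid>/0',           // placeholder app GUID and instance index
  password: passcode,
});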
Having said this, don't go this route. It's a bad idea. The file system for each app instance is ephemeral. Even if you're connecting from other app instances to share the local file system, you can still lose the contents of that file system pretty easily (cf restart) and for reasons possibly outside of your control (unexpected app crash, platform admin does a rolling upgrade, etc).
Instead store your files externally, perhaps on S3 or a similar service, or look at using Volume services.
I have exclusively worked with PCF, so please take my advice with a grain of salt given your Bluemix platform.
If you have a need to look at files created by App2 from App1, what you need is a common resource.
You can inject an S3 resource as a CUPS service, create a service instance, and bind it to both apps. That way both will read from and write to the same S3 endpoint.
Quick Google search for Bluemix S3 Resource shows - https://console.bluemix.net/catalog/infrastructure/cloud_object_storage
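For instance, a rough Node.js sketch of the shared-bucket approach (the service name and credential field names are assumptions; use whatever you defined in the CUPS entry):

const { S3Client, PutObjectCommand } = require('@aws-sdk/client-s3');

// Read the user-provided (CUPS) credentials that both apps are bound to.
const vcap = JSON.parse(process.env.VCAP_SERVICES || '{}');
const creds = (vcap['user-provided'] || [])
  .find(s => s.name === 'shared-object-store').credentials;

const s3 = new S3Client({
  region: creds.region,
  endpoint: creds.endpoint, // e.g. an S3-compatible object storage endpoint
  credentials: { accessKeyId: creds.access_key_id, secretAccessKey: creds.secret_access_key },
});

(async () => {
  // App2 writes a file; App1 can read the same key back with GetObjectCommand.
  await s3.send(new PutObjectCommand({ Bucket: creds.bucket, Key: 'shared/report.json', Body: '{"ok":true}' }));
})();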
Ver 1.11 of Pivotal Cloud Foundry comes with Volume Services.
Seems like Bluemix has a similar resource - https://console.bluemix.net/docs/containers/container_volumes_ov.html#container_volumes_ov
You may want to give that a try.
I am using a third-party API which accepts JSON input and responds with output in JSON format. Locally, I checked the API response on port 8181 and it works great. When I deploy and test the same on the production environment on AWS, it fails with the error:
Could not get any response
There seems to be an error connecting to https://ec2 instance public ip:8181/auth/raw
I am able to ping the public IP of the server. I have already tried to find a solution but could not find any.
Please suggest how I can resolve this.
I managed to solve it myself, after much head-scratching, by adding a Custom TCP rule for port 8181 to the Inbound rules in the instance's security group.
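For reference, the same inbound rule can also be added programmatically; a rough sketch with the AWS SDK for JavaScript v3 (the security group ID and region are placeholders, and 0.0.0.0/0 opens the port to everyone, so restrict the CIDR to your clients if you can):

const { EC2Client, AuthorizeSecurityGroupIngressCommand } = require('@aws-sdk/client-ec2');

const ec2 = new EC2Client({ region: 'us-east-1' }); // adjust to your instance's region

(async () => {
  // Allow inbound TCP traffic on port 8181 to the instance's security group.
  await ec2.send(new AuthorizeSecurityGroupIngressCommand({
    GroupId: 'sg-0123456789abcdef0', // placeholder security group ID
    IpPermissions: [{
      IpProtocol: 'tcp',
      FromPort: 8181,
      ToPort: 8181,
      IpRanges: [{ CidrIp: '0.0.0.0/0', Description: 'third-party API on 8181' }],
    }],
  }));
})();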