My client's app has a feature that converts an audio file into text. For this, I am using the Google Cloud Speech-to-Text API. The client has a VM setup where no direct internet connection is available, and all network traffic has to go through a proxy to reach the internet. The Speech-to-Text API calls don't go via the proxy but hit the firewall directly, which in turn blocks them, and the transcription fails.
I looked for ways to set a global proxy in the app, which didn't work because these calls are gRPC-based, not REST-based. I also looked into the gRPC code and used one of the environment variables it provides for proxy settings (see the sketch after the links below), but even that didn't work.
I also checked whether the Google Speech-to-Text client libraries provide any proxy-related settings, but there is no option for that either.
The Google Cloud API calls use gRPC, and the gRPC protocol runs over HTTP/2, which doesn't seem to provide proxy-based control.
I already tried to follow the steps in the instructions below, but the traffic still doesn't go through the proxy.
Any ideas what else I can do?
https://medium.com/google-cloud/accessing-google-cloud-apis-though-a-proxy-fe46658b5f2a
https://developers.google.com/gdata/articles/proxy_setup
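To make it concrete, this is roughly the kind of thing I tried (a minimal sketch assuming a Node.js client, shown only for illustration; the proxy address is a placeholder for our actual proxy, and the grpc_proxy / https_proxy variable names are the ones documented by gRPC):

// Set the proxy environment variables gRPC documents before creating the client.
process.env.grpc_proxy = 'http://my-proxy.internal:3128';   // placeholder address
process.env.https_proxy = 'http://my-proxy.internal:3128';  // placeholder address

const fs = require('fs');
const speech = require('@google-cloud/speech');

async function transcribe() {
  const client = new speech.SpeechClient();
  const [response] = await client.recognize({
    config: { encoding: 'LINEAR16', sampleRateHertz: 16000, languageCode: 'en-US' },
    audio: { content: fs.readFileSync('sample.wav').toString('base64') },
  });
  console.log(response.results.map(r => r.alternatives[0].transcript).join('\n'));
}

transcribe().catch(console.error);

Even with those variables set, the calls still bypass the proxy and get blocked at the firewall.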
I am working with an extremely simple proxy service configured on the new 1.0.0 Micro Integrator by WSO2. I use Integration Studio and its built-in integrator to run and test the functionality. However, it seems that for some reason I cannot call my proxy service.
I can clearly see that my changes are picked up, because the following line appears as it boots up:
ProxyService named 'myprox' has been deployed from file
Also, it mentions that the endpoints have been configured:
INFO {org.apache.synapse.transport.passthru.core.PassThroughListeningIOReactorManager} - Pass-through EI_INTERNAL_HTTP_INBOUND_ENDPOINT Listener started on 0.0.0.0:9201
The custom proxy service is now narrowed down to just a LOG and a RESPOND mediator. Whatever URL I use, the same warning keeps popping up:
WARN {org.wso2.carbon.inbound.endpoint.internal.http.api.InternalAPIDispatcher} - No Internal API found to dispatch the message
So far I have tried every combination I can imagine, with every one of them producing the above message. The latest one I tried was:
http://localhost:9201/services/myprox
I tried with and without the "/services/" part of the path. I tried with and without HTTPS, using the provided 9164 port. I also tried variations with the 8290 and 8253 ports, to no avail.
When I run this CAR file on EI 6.5.0, I do get a result at the URL mentioned above.
What is going on here?
It seems you are trying to call your proxy service on the internal inbound endpoint port; the WARN message you have shown indicates that. In Micro Integrator the default port for proxy services is 8290, so your proxy URL should look like the one below.
http://localhost:8290/services/myprox
(Please note that the above-mentioned port is the default one. It might change if you started the server with a port offset or configured it differently in your settings.)
Please go through this blog post for a sample proxy service created in Integration Studio and deployed to Micro Integrator.
https://www.yenlo.com/blog/a-first-look-at-wso2-enterprise-integrator-6.5.0-m5-micro-integrator-and-developer-studio
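If you want to verify the endpoint from code rather than a browser, something along these lines should get a response back from your LOG/RESPOND proxy (a rough sketch assuming Node.js 18+ for the built-in fetch; adjust the port if you run the server with an offset):

// Quick check against the default passthrough port (8290).
async function callProxy() {
  const res = await fetch('http://localhost:8290/services/myprox', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ ping: true }),
  });
  console.log(res.status, await res.text());
}

callProxy().catch(console.error);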
I've built a web app that is displayed in an Electron app with Nativefier. That already works great, but now I need to send requests from the website to the local network to talk to some local devices, which are configured (by their IP addresses) in the web app.
My idea is to use the Electron app as a "proxy" to the local network by using a JavaScript callback from the website to the Electron app (I don't know if this is possible, it's just an idea), which then makes the local request because it is running on a computer in the same network.
The reason for this post is that I need ideas/tips on how to secure this and prevent anything other than the desired web app (available under a certain domain) from talking to the devices, for example by checking or validating the server, validating the request by sending its hash back to the server, or other methods.
So my questions are: is it generally a good idea to do something like this, or is it a huge security problem? And does anyone have tips on securing the communication and only allowing communication with the devices configured in the web app?
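To make the idea more concrete, here is roughly what I have in mind (just a sketch using Electron's ipcMain/contextBridge APIs; the device IPs and channel name are placeholders):

// main.js - the Electron main process only forwards requests to allow-listed devices.
const { ipcMain } = require('electron');
const http = require('http');

const ALLOWED_DEVICES = ['192.168.1.50', '192.168.1.51']; // device IPs configured in the web app (placeholders)

ipcMain.handle('local-request', (event, deviceIp, path) => {
  if (!ALLOWED_DEVICES.includes(deviceIp)) {
    return Promise.reject(new Error('Device not allowed'));
  }
  return new Promise((resolve, reject) => {
    http.get({ host: deviceIp, path }, (res) => {
      let body = '';
      res.on('data', (chunk) => (body += chunk));
      res.on('end', () => resolve(body));
    }).on('error', reject);
  });
});

// preload.js - exposes one narrow function to the web app instead of full Node access.
const { contextBridge, ipcRenderer } = require('electron');
contextBridge.exposeInMainWorld('localBridge', {
  request: (deviceIp, path) => ipcRenderer.invoke('local-request', deviceIp, path),
});

The open question is how to make sure only my web app (served from its known domain) can trigger that bridge, and not some other page loaded into the window.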
Hi, I want to make a simple Google Home Action that will control an LED on my development board, but I want the data routing and handling to be hosted on AWS. I have MQTT communication running between the AWS server and the development board.
I am planning to deploy a Node.js server on AWS Elastic Beanstalk or Elastic Compute Cloud.
But I am not sure how to connect a request made on Google Home to the AWS service. Is there good documentation for this?
If possible, I want to know the options with "Dialogflow" and with the "Actions API".
Thanks.
Fulfillment for both Dialogflow and the Actions API goes through a webhook that you define. This will need to be an HTTPS server, with a valid non-self-signed SSL certificate, available at a public IP address. You can run this on EC2 in a variety of configurations - whatever works best for you.
On the Node.js side, most application servers use something like Express.js to handle routing and middleware processing. The libraries from Google assume that you'll pass them a request and a response object that have been processed by Express.js and its body parser, which turns the JSON HTTPS body into a JavaScript object. However, you don't need to use these libraries if you don't want to - you just need to parse and respond with JSON.
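As a rough illustration of the plain-Express route (not the Google client libraries, and with simplified response fields - check the Dialogflow/Actions documentation for the exact shape):

// Bare-bones webhook: parse the JSON body Google sends and reply with JSON.
const express = require('express');
const app = express();

app.use(express.json()); // body parser for the incoming JSON payload

app.post('/webhook', (req, res) => {
  console.log('Intent payload:', JSON.stringify(req.body));
  // ...here you would publish an MQTT message to the development board...
  res.json({ fulfillmentText: 'Turning the LED on.' }); // simplified Dialogflow-style reply
});

app.listen(process.env.PORT || 8080);

You would point the fulfillment URL in the Dialogflow console (or the Actions configuration) at wherever this server is exposed on AWS, behind HTTPS.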
I have a Windows Server 2012 VPS running a web app behind Cloudflare. The app needs to initiate outbound connections based on user actions (e.g. upload an image from a URL). The problem is that this 'leaks' my server's IP address and increases the risk of DDoS attacks.
So I would like to prevent my server's IP from being discovered by setting up a forward proxy. So far my research has shown that this is no simple task, and would involve setting up another VPS to act as a proxy.
Does this extra forward-proxy VPS have to be running Windows? Are there any paid services that could act as a forward proxy for my server (like Cloudflare's reverse proxy system)?
Also, it seems that the suggested IIS forward proxy plugin, Application Request Routing, does not work for HTTPS.
Is there a solution for both types of outgoing (HTTPS + HTTP) requests?
I'm really lost here, so any help or suggestions would be appreciated.
You are correct in needing a "Forward Proxy". A good analogy for this is the proxy settings your browser has for outbound requests. In your case, the web application behaves like a desktop browser and can be configured to make the resource request through a proxy.
Often you can control this for individual requests at the application layer. An example of doing so with C#: C# Connecting Through Proxy
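For instance, if any part of your stack happens to be Node.js, the request package lets you set a proxy per outbound call - shown here purely as an illustration of the application-layer approach, with a placeholder proxy address:

// Route a single outbound request through a forward proxy.
const request = require('request');

request({
  url: 'https://example.com/image.png',           // URL supplied by the user
  proxy: 'http://my-forward-proxy.internal:3128', // placeholder forward proxy
  encoding: null,                                 // keep the body as a Buffer for binary content
}, (err, res, body) => {
  if (err) return console.error(err);
  console.log('Fetched', body.length, 'bytes via the proxy');
});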
As far as the actual proxy server: No, it does not need to run Windows or IIS. Yes, you can use a proxy service. The vast majority of proxy services are targeted towards consumers and are used for personal privacy or to get around network restrictions. As such, I have no direct recommendations.
Cloudflare actually has recommendations regarding this: https://blog.cloudflare.com/ddos-prevention-protecting-the-origin/.
Features like "upload from URL" that allow the user to upload a photo from a given URL should be configured so that the server doing the download is not the website origin server.
This may be a more comfortable risk mitigator, as it wouldn't depend on a third party proxy service. A request for upload could be handled as a web service call to a dedicated "file downloader" server. Keep in mind that if you have a queued process for another server to do the work, and that server is hosted in the same infrastructure, both might be impacted by a DDoS, depending on the type of DDoS.
Your question implies that you may be comfortable using a non-Windows server. Plenty of software can operate as a proxy (most web servers can), but it suffers from the same problem as ARR - lack of support for the HTTP "CONNECT" verb, which is used by modern clients to open an HTTPS tunnel before issuing a "GET". Squid is very popular, open source, and supports just about everything needed to connect to... anything. It's not trivial to set up. Apache also has support for this in "mod_proxy_connect", but I have no experience with it and the online documentation isn't very robust. It's Apache, though, so it may be worth the extra investigation.
I'm having fits accomplishing something, and after scouring Google & SO I'm throwing my hands up after a few days. I'm trying to do something that I think is pretty common: debug / examine all HTTP traffic while developing a Node.js app.
On Windows it is as simple as firing up Fiddler, and I can see all HTTP & HTTPS traffic from all processes. But I've switched platforms over to OS X and am trying to make the same thing work.
I've tried using Charles & mitmproxy, but all I'm seeing is the traffic to my Node.js app, along with its responses. My Node.js app calls external services, some using the popular request package (which I have seen how to set up for a proxy), but also other packages like azure-storage. What's troubling me is that I can't get any of the debugging proxies to show me what the azure-storage package is sending to / receiving from the endpoints it calls.
Conceptually I think I get it... I have to tell these different things (like Node.js, request & azure-storage) to go through the proxy each of these tools uses... but how can you do that without modifying their source? Can't you, like Fiddler does on Windows, just say "all traffic goes through this proxy"?
I'd use Fiddler on OS X, but it is currently not working, with no ETA in sight after talking to Telerik.
So the problem I was having is what I thought... in my specific instance, the module that I was using to access Azure storage was not using the default proxy. I found a package (global-tunnel) that hijacks everything that uses the request package and forces it through a proxy. Now I saw stuff show up in the HTTP debuggers I was using.
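For anyone else hitting this, the setup was roughly the following (a sketch; 8888 is Charles's default port, adjust for whatever your debugging proxy listens on):

// Initialize global-tunnel before loading anything that uses the request package,
// so its traffic is forced through the local debugging proxy.
const globalTunnel = require('global-tunnel');

globalTunnel.initialize({
  host: '127.0.0.1',
  port: 8888,
});

// ...now require/use azure-storage, request, etc. - their HTTP calls show up in the proxy.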
The problem now is when I am trying to reach an HTTPS endpoint... using something like Charles, it uses its own SSL certificate, which wasn't trusted, so the connections were refused. Back to the drawing board...