GAE dispatch.yaml not properly routing to services - node.js

I'm having some trouble getting my dispatch.yaml to work, and I haven't found any answers that address my problem. Most seem to be cases where people list the default service first in the dispatch file, but I don't list mine at all.
dispatch:
  - url: "*/timestamps/*"
    service: timestamps
  - url: "*/reqheaders/*"
    service: reqheaders
I have my custom domain setting pointing to sub.example.com, and for some reason every route just hits the default service, so I get an error saying Cannot GET /timestamps or Cannot GET /timestamps/subpath.
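For comparison, here is a minimal sketch of what the two non-default services would each declare in their own app.yaml (directory layout and runtime are assumptions); dispatch rules can only route to services that have actually been deployed under these names:

# timestamps/app.yaml  (hypothetical layout)
runtime: nodejs10      # assumed runtime
service: timestamps    # must match the service name in dispatch.yaml

# reqheaders/app.yaml  (hypothetical layout)
runtime: nodejs10
service: reqheaders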

Related

Generic domain part with fixed subdomain using Caddy and auto SSL?

I'd like to set up a Caddy server where the subdomain is static but the domain part is a "wildcard", such as "api.*".
From my understanding of Caddy, the wildcard is possible for one part of the full domain (*.domain.com matches bar.domain.com but not foo.bar.domain.com).
Moreover, this configuration would automatically create an SSL certificate (which Caddy does in general, but I'm not sure here) for any new DNS entry that points to my server with a domain starting with "api.*".
The "*" here would be the domain directly, not any subdomain (it would work for api.domain.com, but not for api.foo.domain.com).
Is this something possible using a simple Caddy command (such as api.* { ... }, which I tried without luck), or does it need a more complex implementation?
Thank you for your help!
I found a working solution with the help of the Caddy Community.
Here's the code:
{
    on_demand_tls {
        ask https://static.site.com/domain/verify
        interval 2m
        burst 5
    }
}

static.site.com {
    ...
}

:443 {
    tls {
        on_demand
    }
    # Your custom config, for instance:
    reverse_proxy * ...
}
The nifty part is the tls { on_demand } block for your generic HTTPS, which will create a certificate automatically. But this can be abused by anyone who points one of their DNS entries at your server.
So to avoid that, the Caddy community highly recommends setting up on_demand_tls with an ask endpoint, so that an SSL certificate is only created if that endpoint approves the domain.
NOTE: The ask request is a GET that does NOT follow redirects! Anything but a 200 status code is considered a failure, even a 3xx!
The ask URL will have ?domain=<hostname> appended, which lets you verify that domain against your own logic, such as checking for a custom pattern like "starting with static.*" and verifying that the domain exists in your database (for example).
If your URL already contains query parameters, don't worry, Caddy is clever enough to append to them (https://static.site.com/domain/verify?some=query will become https://static.site.com/domain/verify?some=query&domain={domain}).
Caddy supports HTTPS for the ask parameter, and that URL can also be external with no problem at all (no need for localhost or a local server configuration).
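To make the ask contract concrete, here is a hedged sketch of such a verification endpoint as an ASP.NET Core minimal API (the route, the allow-list, and the domains are assumptions of this sketch; the only contract is returning 200 for approved domains and anything else otherwise):

// Hypothetical "ask" endpoint for on_demand_tls; the domain list is an assumption.
var allowedDomains = new HashSet<string>(StringComparer.OrdinalIgnoreCase)
{
    "api.customer-one.com",
    "api.customer-two.com",
};

var app = WebApplication.CreateBuilder(args).Build();

// Caddy calls: GET /domain/verify?domain=<hostname> (plus any query params already on the ask URL).
app.MapGet("/domain/verify", (string domain) =>
    allowedDomains.Contains(domain)
        ? Results.Ok()                // 200 -> Caddy issues the certificate
        : Results.StatusCode(403));   // anything else (even a 3xx) -> refused

app.Run();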
I ran into the same problem, and after being stuck on it for a day, here is my solution.
Assuming the site name is site.com, and I want Caddy to handle these domains for me:
a.dot.site.com
b.dot.site.com
c.dot.site.com
a.eth.site.com
b.eth.site.com
c.eth.site.com
1. Make sure SSL access is available, e.g. via Cloudflare.
2. Set the A records pointing to your Caddy server's IP.
3. The Caddyfile should look like this:
# the key is: you have to list all the patterns for your multiple subdomains
*.site.com *.eth.site.com *.dot.site.com {
    reverse_proxy 127.0.0.1:4567
    log {
        output file /var/log/access-wildcard-site.com.log
    }
    tls {
        dns cloudflare <your cloudflare api key>
    }
}
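For completeness, the DNS side of this setup (record values below are placeholders) is one wildcard A record per pattern listed in the site address; note that the dns cloudflare directive assumes a Caddy build that includes the Cloudflare DNS module:

*.site.com        A    203.0.113.10
*.dot.site.com    A    203.0.113.10
*.eth.site.com    A    203.0.113.10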

Azure Application Gateway - question about override backend path

I've been working with the Azure Application Gateway for a while and I have some doubts about the Override Backend Path option. I would appreciate it if someone could clarify whether my reasoning is correct.
Assumptions:
listener: mysite.mycompany.com
backend: myserver1.mycompany.com / myserver2.mycompany.com
HTTP Settings:
Override backend path: /images
Override with new hostname: Yes -> Pick hostname from backend target
Based on the settings above, if I send a request to mysite.mycompany.com, how will the App Gw forward it? My assumption would be that it will forward it to either myserver1.mycompany.com/images or myserver2.mycompany.com/images, but that does not seem to work properly.
Regards,
Wojtek
I send a request to mysite.mycompany.com, how will the App Gw forward it?
My assumption would be that it will forward it to either myserver1.mycompany.com/images or myserver2.mycompany.com/images, but that does not seem to work properly.
That's exactly how it works.
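In other words (restating the question's own settings, with the hostnames as placeholders), a request is rewritten roughly like this:

Client request:       GET https://mysite.mycompany.com/
Forwarded to backend: GET /images  on myserver1.mycompany.com (or myserver2.mycompany.com)
Host header sent:     myserver1.mycompany.com   <- because "Pick hostname from backend target" is enabled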

Azure Application Gateway Redirection from empty hostname

I have created an Application Gateway that needs to replicate the behavior of my previous resource (an F5).
As a listener I use a hostname: hostname.stackoverflow.com that listens on 443
As a Http Setting I am using a specific port being 4443
As a BackEnd pool I use the URL/FQDN of my dev VM.
This totally works if I create a VM in the VNET and add "hostname.stackoverflow.com" to the hosts file with the IP of the application gateway.
Now I want to get a little further and add paths to my Application Gateway.
The goal is that if I use "hostname.stackoverflow.com" I need to redirect this to "hostname.stackoverflow.com/login.aspx?guestLogin".
So far I have tried the following.
Adding "/login.aspx?guestLogin" to the HTTP settings (override backend path), like this.
When I try this inside my VM, the URL changes, but the path I added was not appended in the right way. This is what I got:
So that made me think the override backend path is maybe not the right way to do this.
I then wanted to create a redirection rule that redirects "hostname.stackoverflow.com" to "hostname.stackoverflow.com/login.aspx?guestLogin", but in the settings of the Application Gateway I need to provide a source path (meaning I cannot redirect from just the bare hostname to a new URL, I think).
I am very new to Azure and even more new to the Application Gateway. Is there something that I did wrong? Is there a better way to do this?
The iRule that I need to reproduce in Application Gateway is as follows:
if { [string tolower [HTTP::host]] equals "hostname.stackoverflow.com" } {
    if { [HTTP::path] eq "/" } {
        HTTP::redirect "login.aspx?guestLogin"
    }
    elseif { [string tolower [HTTP::uri]] starts_with "/login.aspx?id=" } {
        set tail [string range [HTTP::uri] 12 end]
        HTTP::redirect "login.aspx?guestLogin&$tail"
    }
    pool default.pool
}

What could be causing this mystery GCloud App Deploy error? (NodeJS, AppEngine. Standard Environment)

ERROR: (gcloud.app.deploy) Error Response: [9] Cloud build 6axxx...xxx9b status: FAILURE.
I'm trying to understand if I can use a NodeJS / Express server with Google Cloud App Engine, Standard Mode. My application started out from an Express-Generator framework. There is a single page app, and some function calls back to server via custom routes. Nothing terribly crazy.
I set up the repo with $ git clone https://gitlab.com/my_repo in the GCloud shell. Test, test, and retest using the sandbox (local development server). The test URL is of the form https://8080-dot-xxxxxx-dot-devshell.appspot.com. Yipee.
Next step is hard deploy: I start with $ gcloud app create followed by $ gcloud app deploy (had to make a side trip to ensure correct authorization and billing stuff is whole, etc...) . Website / server totally works as intended. URL is of the form https://my-custom-XYZ-website.appspot.com/ Works great.
I can check the version at the Google Cloud Platform -- App Engine -- Versions console. The output there shows me:
Version: 20181120t103136
Status: Deployed
Traffic Allocation: 100%
Instances: 1
Runtime: Node10
Environment: Standard
Size: 748.8 KB
Deployed: (Date/Time by me)
So that's the background. The problem is that now I can no longer update the content. I can easily push code to the terminal interface, but the command $ gcloud app deploy fails for any sort of update / new version. Sigh.
Log related info -- Build steps:
Fetcher = successful
Builder = status, Step Failed
Builder Arguments
--name=us.gcr.io/my-custom-XYZ-website/app-engine-tmp/app/ttl-2h:12xxxxxxa5a0 --directory=/workspace --destination=/srv --cache-repository=us.gcr.io/my-custom-XYZ-website/app-engine-tmp/build-cache/ttl-7d --cache --base=gcr.io/gae-runtimes/nodejs10:nodejs10_10_13_0_20181111_RC00
Directory /workspace/
"builder": Permission denied for "d71xxxxxxxxxxxxxxxxxx88b5" from request "/v2/my-custom-XYZ-website/app-engine-tmp/build-cache/ttl-7d/node-cache/manifests/d71xxxxxxxxxxxxxxxxxx88b5". : None
app.yaml
# [START runtime]
runtime: nodejs10
# [END runtime]

handlers:
  - url: /images
    static_dir: public/images
  - url: /javascript
    static_dir: public/javascript
  - url: /red-canoe
    static_dir: public/alt-content
  - url: /stylesheets
    static_dir: public/stylesheets
  - url: /.*
    secure: always
    redirect_http_response_code: 301
    script: auto
Any idea on how to identify and correct what's wrong here?
Note: I did create another simple test product in node.js, and I can easily update the versions there. That test product had only a simple app.js with a simple Hello World response. Version #2 had Hello There, World (okay, so yeah, not the world's most robust test...). But the version update via $ gcloud app deploy worked just fine there. I did note the version size on the Hello World app was around 245 KB or so.
So, after a whole lot of testing I think I figured out what is happening here.
The node.js application actually utilizes three different Google related components / tools.
Google Firebase Authentication
Google Sheets API, V4
Google App Engine (Deployment)
When I created those components, the system prompted me to either create a new project or use an existing project. I chose the exact same project for all three tools. I believe the fact that these were all tied together messed up the ability to perform updates to Google App Engine via gcloud app deploy.
The fix was to delete that combined three-in-one project and create three separate projects:
MyProject_Sheets
MyProject_Firebase_Auth
MyProject_AppEngineDeploy
This works reliably. All done.
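With the projects split like this, each deploy can be pointed at the dedicated App Engine project explicitly (project IDs below are placeholders):

# deploy straight to the dedicated App Engine project
$ gcloud app deploy --project my-appengine-deploy-project

# or make it the default for the session first
$ gcloud config set project my-appengine-deploy-project
$ gcloud app deploy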
And for anybody who may be interested in the Firebase / Sheets API stuff I did here, check out this link. I built an online phone directory, protected by login via mobile phone, with contact data stored on a private Google sheet.

Why should I use a LoadBalancerProbe instead of subscribing to the RoleEnvironment.StatusCheck event?

I was fiddling with the options Azure provides to balance load between multiple web roles.
I found three possible ways to do this.
The first would be to do nothing at all and let the default (round robin) implementation do the job.
The second possibility would be to define a custom LoadBalancerProbe in the service definition file, which I tried and did not get to work: from my understanding, the custom aspx page is called each time a status check is performed on the role, and depending on the HTTP response code the role changes its status to busy - but this never happens.
Also, I couldn't really find any examples of defining a custom LoadBalancerProbe.
Thus I looked for an alternative way to do this.
Now I am subscribing to the RoleEnvironment.StatusCheck event, which allows me to implement some logic and, depending on the results, set the role state to busy or available.
My Questions:
1) Supposing the custom LoadBalancerProbe works as described on MSDN, what is the difference between subscribing to the StatusCheck event and using a custom probe?
2) Why does my custom load balancer probe not work? I am just testing with the Azure emulator for now, and I am well aware that traffic still gets routed to the web role instances although they are set to busy in the emulator.
But my custom probe does not change the status of the web role instances at all.
Here is the very rudimentary code, which should - to my knowledge - set the status of the web role instance whose Id contains "_0" to busy.
using System.ServiceModel.Web;
using System.Web.Mvc;
using Microsoft.WindowsAzure.ServiceRuntime;

public class LoadBalanceController : Controller
{
    public ActionResult Index()
    {
        WebOperationContext woc = WebOperationContext.Current;
        if (RoleEnvironment.CurrentRoleInstance.Id.ToLower().Contains("_0"))
        {
            woc.OutgoingResponse.StatusCode = System.Net.HttpStatusCode.ServiceUnavailable;
        }
        else
        {
            woc.OutgoingResponse.StatusCode = System.Net.HttpStatusCode.OK;
        }
        return View(); // not relevant
    }
}
I've also configured my service definition file and set up a route so that calls to the healthcheck.aspx path defined in the custom probe are handled by this controller/action.
<LoadBalancerProbes>
  <LoadBalancerProbe name="WebRoleBalancerProbeHttp" protocol="http" path="healthcheck.aspx" intervalInSeconds="5" timeoutInSeconds="100"/>
</LoadBalancerProbes>
...
<InputEndpoint name="EndpointWeb" protocol="http" port="80" loadBalancerProbe="WebRoleBalancerProbeHttp"/>
The Route:
routes.MapRoute(
    name: "HealthCheck",
    url: "healthcheck.aspx",
    defaults: new { controller = "LoadBalance", action = "Index", id = UrlParameter.Optional }
);
Not sure why the custom probe isn't working, but for the differences: The health-check event lets you announce whether an instance is available, but you don't have any flexibility in terms of how often this is called. Also, you can't launch a separate service that listens on a custom port (or port type).
You have much more flexibility with custom probes, since you can create any type of port listener to determine health, even a separate exe.
With Virtual Machines, this is the only method of health probes, since Virtual Machines don't have the guest agent running and don't provide the health-check event.
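For comparison with the probe approach, the StatusCheck subscription mentioned in the question looks roughly like this; a minimal sketch, assuming a placeholder IsHealthy() readiness check of your own:

using Microsoft.WindowsAzure.ServiceRuntime;

public class WebRole : RoleEntryPoint
{
    public override bool OnStart()
    {
        // Raised by the guest agent at a fixed interval you cannot configure.
        RoleEnvironment.StatusCheck += (sender, e) =>
        {
            if (!IsHealthy())
            {
                e.SetBusy(); // reports this instance as Busy so it is taken out of rotation until the next check
            }
        };
        return base.OnStart();
    }

    private static bool IsHealthy()
    {
        return true; // placeholder for your own readiness logic (assumption)
    }
}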
In your service definition file, you have marked your probe to be hosted on an HTTP endpoint (and not HTTPS).
Is that the case with your web app as well? Is it exposing that endpoint on HTTP and not on HTTPS? If yes, then also check whether any automatic redirection to HTTPS is happening.
I think you have mostly set up everything properly. Here is a post that has some example code and another question from SO where they were successful in setting it up (it should provide some insight into the csdef file).
I agree with David Makogon's points about the differences between the two.
