502 Bad Gateway on Google Compute Engine, Nginx, and Meteor - node.js

I built a Meteor app and would like to run it on Google Compute Engine. I followed the guide found here to get a basic instance of my app up and running, changing the disk size/type, the instance type, and the disk and instance zones (to both match where I live), and adding export METEOR_SETTINGS={ ... } to the second-to-last line of the startup.sh file.
Everything seemed to work fine: I have the persistent disk and VM instance listed on my Google Cloud dashboard. I created a new firewall rule on my default network to allow incoming tcp:80 and tcp:443 traffic, but now when I access the instance's external IP address from my browser, I'm shown a 502 Bad Gateway nginx/1.8.0 page (where I'd instead expect the homepage of my Meteor app).
Is there anything in the configuration details in the startup.sh file that I'm missing or should modify? Could there be an issue with how the compute vm instance is communicating with the persistent disk? Frankly I'm far outside of my domain with this type of thing.

After ssh-ing into my instance and dinking around a bit, I called export ROOT_URL='<the_instances_external_ip>', rather than 'http://localhost', at which point everything started to work. It's unfortunate how little documentation there is on getting Meteor apps configured and running in production (I only thought to mess with ROOT_URL after searching something unrelated), so hopefully this will at least be helpful for someone else.
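For reference, the relevant part of my startup.sh ended up looking roughly like this; this is a sketch, assuming the script from the guide exports its environment variables just before launching the bundled app (the settings and IP are placeholders):

# excerpt from startup.sh (placeholders, not real values)
export METEOR_SETTINGS='{ ... }'
export ROOT_URL='http://<the_instances_external_ip>'   # was 'http://localhost'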

Related

AWS ALB return an empty page when used with ECS

I am working on containerizing a React app and provisioning it on an ECS cluster with an ALB.
Everything looks great, but whenever I access the ALB DNS name in the browser it returns an empty page with only the two words "react app".
I checked the health checks; all the backend instances are healthy and returning a 200 code.
When I use the EC2 instance's IP address in the browser, the page loads completely.
It seems to be an issue with the ALB; why isn't the complete page loading?
It is probably a problem with how you have configured the Dockerfile.
You may have added a default route to the nginx configuration which displays 'react app'.
To debug, try building and running the Dockerfile locally using Docker Desktop and see if you can browse the React app.
If it works fine locally, it will almost certainly work in ECS without any problem.
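As a point of comparison, a typical multi-stage Dockerfile that serves a React build through nginx looks roughly like this; the image tags and paths are assumptions, not taken from the question:

# build stage: compile the React app
FROM node:18 AS build
WORKDIR /app
COPY package*.json ./
RUN npm ci
COPY . .
RUN npm run build

# serve stage: copy the static build into nginx's web root
FROM nginx:alpine
COPY --from=build /app/build /usr/share/nginx/html
EXPOSE 80

If the container still shows only a stub page locally, the build output is not reaching nginx's web root, which would match the symptom seen behind the ALB.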

How to use an AWS elastic load balancer with React HTML5 page

I have a React-app web site that plays HTML5 video hosted on EC2. I have two instances running and I can view the web page on each of the two instances. If I set up an ELB that points to a single instance and use the load balancer's URL, the web page works as well. However, with both machines in the target group, the page fails about 50% of the time when going through the load balancer's URL.
This is the error in __webpack_require__:
TypeError: modules[moduleId] is undefined
modules[moduleId]: undefined
modules: (1) […]
modules["./node_modules/@babel/runtime/helpers/arr…dules/@babel/runtime/helpers/arrayWithoutHoles.js(module, exports)
installedModules: {…}
moduleId: "./node_modules/babel-preset-react-app/node_modules/@babel/runtime/helpers/esm/objectWithoutProperties.js"
An internet search for an undefined modules[moduleId] looks like it's all over the place. And I'm not sure I know enough to ask this question properly.
It looks like some people use nginx (https://www.freecodecamp.org/news/production-fullstack-react-express/), which presumably means not using the AWS ELB; I expect that would be much harder to manage than staying within AWS.
Most streaming protocols require session affinity. Try setting up sticky sessions on your load balancer. Here's how to do it for a classic ELB: https://docs.aws.amazon.com/elasticloadbalancing/latest/classic/elb-sticky-sessions.html
Unfortunately that pins all other traffic as well, but it's better than a 50% failure rate.
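For reference, a sketch of enabling duration-based sticky sessions on a classic ELB with the AWS CLI; the load balancer name, listener port, and expiration period are placeholders:

# create a duration-based (load-balancer-generated cookie) stickiness policy
aws elb create-lb-cookie-stickiness-policy \
    --load-balancer-name my-react-elb \
    --policy-name my-sticky-policy \
    --cookie-expiration-period 3600

# attach the policy to the listener on port 80
aws elb set-load-balancer-policies-of-listener \
    --load-balancer-name my-react-elb \
    --load-balancer-port 80 \
    --policy-names my-sticky-policy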

linux redirect localhost port to url port

I need to redirect localhost:8080 to http://url:8080/.
Some background:
I am using Docker Swarm stack services. One service (MAPS) runs a simple HTTP server that lists XML files on port 8080, and another service (WAS) runs WebSphere Application Server with a connector that uses these files. To be more precise, it reads a file maps.xml that contains the URLs of the other files as http://localhost:8080/<file-name>.xml.
I know Docker allows me to call on the service name and port from within the services, so I can use curl http://MAPS:8080/ from inside my WAS service and it outputs my list of XML files.
However, this will not always be true. The prod team may change the port number they want to publish, or they might update the maps.xml file and forget to change localhost:8080 to MAPS:8080.
Is there a way to make it so any call to localhost:8080 gets redirected to another URL, preferably using a configuration file? I also need it to be lightweight, since the WAS service is already quite heavy and I can't make it too large to deploy.
Solutions I tried:
iptables: Installed it on the WAS service container but when I tried using it it said my kernel was outdated
tinyproxy: Tried setting it up as a reverse proxy but I couldn't make it work
ncat with inetd: Tried to use this solution but it also didn't work
I am NO expert so please excuse any noob mistakes I made. And thanks in advance!
It is generally not a good idea to redirect localhost to another location as it might disrupt your local environment in surprising ways. Many packages depend on localhost being localhost :-)
It is possible to add MAPS to your hosts file (/etc/hosts), giving it the address of the MAPS service.
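A sketch of the hosts-file idea, plus an alternative; the IP address is a placeholder, and socat is a lightweight forwarder that was not among the tools tried in the question:

# hosts-file approach: pin the MAPS name inside the WAS container
echo "10.0.0.5    MAPS" >> /etc/hosts

# if the literal localhost:8080 URLs cannot be changed, a small relay
# such as socat can forward the local port to the MAPS service
socat TCP-LISTEN:8080,bind=127.0.0.1,fork,reuseaddr TCP:MAPS:8080 &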

How to get the cf ssh-code password

We are using CF Diego API version 2.89. Currently I am able to use it and see the vcap and the app resources when running cf ssh myApp.
Now it's become harder :-)
I want to deploy App1 that will "talk" to App2
and have access to its file system (as it is available in the command line when you run ls...) via code (node.js). Is that possible?
I've found this lib, which provides the ability to connect via SSH from code, but I'm not sure what I should put for host, port, etc.
In the connect call I provided the password, which should instead be retrieved
via code.
EDIT
const { Client } = require('ssh2');  // assuming the lib referenced above is ssh2
const conn = new Client();
conn.on('ready', () => {
  conn.exec('ls', (err, stream) => {   // list the app's files as a smoke test
    if (err) throw err;
    stream.pipe(process.stdout);
    stream.on('close', () => conn.end());
  });
}).connect({
  host: 'ssh.cf.mydomain.com',
  port: 2222,
  username: 'cf:181c32e2-7096-45b6-9ae6-1df4dbd74782/0',
  password: 'qG0Ztpu1Dh'
});
Now when I use cf ssh-code (to get the password) I see a lot of requests, which I have tried to simulate via Postman without success.
Could someone assist? I need to get the password value somehow...
If I don't provide it, I get the following error:
SSH Error: All configured authentication methods failed
Btw, let's say that I cannot use the CF networking functionality or volume services, and I know that the container is ephemeral...
The process of what happens behind the scenes when you run cf ssh is documented here.
It obtains an SSH token; this is the same as running cf ssh-code, which is just getting an auth code from UAA. If you run CF_TRACE=true cf ssh-code, you can see exactly what it's doing behind the scenes to get that code.
You would then need an SSH client (probably a programmatic one) to connect using the following details:
port -> 2222
user -> cf:<app-guid>/<app-instance-number> (ex: cf:54cccad6-9bba-45c6-bb52-83f56d765ff4/0)
host -> ssh.system_domain (look at cf curl /v2/info if you're not sure)
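Putting those details together from a shell, as a sketch (the system domain is a placeholder and the ssh-code value is a one-time password):

# look up the app guid and a one-time SSH code
APP_GUID=$(cf app App2 --guid)
cf ssh-code    # prints the one-time code to use as the SSH password

# connect as the app instance user; replace the domain with your system domain
ssh -p 2222 "cf:${APP_GUID}/0@ssh.cf.mydomain.com"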
Having said this, don't go this route. It's a bad idea. The file system for each app instance is ephemeral. Even if you're connecting from other app instances to share the local file system, you can still lose the contents of that file system pretty easily (cf restart) and for reasons possibly outside of your control (unexpected app crash, platform admin does a rolling upgrade, etc).
Instead store your files externally, perhaps on S3 or a similar service, or look at using Volume services.
I have exclusively worked with PCF, so please take my advice with a grain of salt given your Bluemix platform.
If you have a need to look at files created by App2 from App1, what you need is a common resource.
You can expose an S3 resource as a CUPS (user-provided) service, create a service instance, and bind it to both apps. That way both will read/write to the same S3 endpoint.
Quick Google search for Bluemix S3 Resource shows - https://console.bluemix.net/catalog/infrastructure/cloud_object_storage
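A sketch of that binding with the cf CLI; the service name and credential keys are placeholders:

# create a user-provided service holding the shared object-store credentials
cf create-user-provided-service shared-s3 -p '{"bucket":"my-bucket","access_key_id":"...","secret_access_key":"..."}'

# bind it to both apps so each sees the same credentials in VCAP_SERVICES
cf bind-service App1 shared-s3
cf bind-service App2 shared-s3
cf restage App1 && cf restage App2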
Ver 1.11 of Pivotal Cloud Foundry comes with Volume Services.
Seems like Bluemix has a similar resource - https://console.bluemix.net/docs/containers/container_volumes_ov.html#container_volumes_ov
You may want to give that a try.

How to get haproxy to use a specific cluster computer via the URI

I have successfully set up haproxy on my server cluster. I have run into one snag that I can't find a solution for...
TESTING INDIVIDUAL CLUSTER COMPUTERS
It can happen that for one reason or another, one computer in the cluster gets a configuration variation. I can't find a way to tell haproxy that I want to use a specific computer out of a cluster.
Basically, mysite.com (and several other domains) are served up by boxes web1, web2 and web3. And they round-robin perfectly.
I want to add something to the URL to tell haproxy that I specifically want to talk to web2 only because in a specific case, only that server is throwing an error on one web page.
Anyone know how to do that without building a new cluster with a URI filter and only have one computer in that cluster? I am hoping to use the cluster as-is but add something to the URI that will tell haproxy which server to use out of the cluster.
Thanks!
Have you thought about using a different port for this? You could define a new listen section with a different port since, as I understand it, you can modify your URL any way you like.
Basically, haproxy cannot do what I was hoping. There is no way to add a param to the URL to suggest which host in the cluster to use.
I solved my testing issue by setting up unique ports for each server in the cluster at the firewall. This could also be done at the haproxy level.
To secure this path from the outside world, I told the firewall to only accept traffic from inside our own network.
This lets us test specific servers within the cluster. We did have to add a trap in our PHP app to deal with a session cookie that is too large because we have haproxy manipulating this cookie to keep users on the server they first hit. So when the invalid session cookie is detected, we have the page simply drop the session and reload the page.
This is working well for our testing purposes.
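A sketch of the haproxy-level variant mentioned above; the section names, ports, and addresses are placeholders:

# haproxy.cfg excerpt: one dedicated listen section per backend server,
# so e.g. port 8082 always hits web2 (restrict these ports at the firewall)
listen web2_direct
    bind *:8082
    server web2 10.0.0.12:80 check

listen web3_direct
    bind *:8083
    server web3 10.0.0.13:80 check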
