Parse-server app not working on gcloud - node.js

I have a server-based backend deployed on App Engine and it's working (free trial), but it uses too many resources. I want to deploy an updated version of it with a more finely tuned app.yaml and some minor updates. The problem is that the new version is not working. It has some redirect code and that works; when I directly type:
https://myapp/parse
it shows:
{"error":"unauthorized"}
so that part is fine.
But when I try to log in or do any other CRUD operation, it just writes logs and doesn't do anything.
Because of that, all I can do is route traffic back to the old version, which works just fine, and ask for help.
Maybe Google made some updates in the last month and I have to redo something in my app?
UPD: The backend always returns a 307 redirect when I make an authorized request; the old version (deployed in November 16) works great.

The issue is not 100% clear to me; if possible, show us the whole output.
But apparently it is the HTTPS in your URL. Try it with HTTP; that is the most common scenario for the unauthorized error.
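One quick way to narrow this down is to make an authorized request by hand and inspect where the 307 points. Below is a minimal diagnostic sketch using Node's built-in https module; the host and app ID are placeholders, and GET /parse/login with an X-Parse-Application-Id header is the standard Parse REST login call:

const https = require('https');

const req = https.request({
  host: 'myapp',                                    // placeholder host from the question
  path: '/parse/login?username=test&password=test', // placeholder credentials
  method: 'GET',
  headers: { 'X-Parse-Application-Id': 'YOUR_APP_ID' }
}, (res) => {
  console.log('status:', res.statusCode);          // 307 on the broken version
  console.log('location:', res.headers.location);  // where the redirect points
});
req.on('error', console.error);
req.end();

Swapping the https module for http, as suggested above, shows whether the redirect only appears on one scheme.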

Related

Scraping AWS with Puppeteer runs locally but fails on Heroku

I know it sounds a lot like other issues here on Stack Overflow; bear with me, it's not (not that I could tell).
I have a scraping app (using Puppeteer) that I use to scrape an Amazon public page.
It works great, I've debugged it by setting the headless: false and I see it works, and it gives me back the expected result.
The same app fails on Heroku, but the problem is not with launching or using Puppeteer (I have several indications of that); it's probably that I'm being identified as a robot.
The error returned is:
waiting for selector `#link_continue input` failed: timeout 30000ms exceeded
Important to say that the error is a generic Puppeteer error that indicates that the selector I'm waiting for just doesn't appear on-page.
I know it should as it's a selector on the first page I navigate to, and it works locally (as mentioned before) - the selector always exists if the page loads.
I had exactly the same error when I tried to run the scraping on my local machine before setting a User-Agent header. But at that time I could use headless: false, so I saw with my own eyes that I was being rejected for robot-like operations on their page and redirected to an error page that didn't contain this selector.
For this reason, I suspect it recognizes me as a robot, but I don't know how to debug it; it drives me crazy.
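For reference, the local fix described above looks roughly like this in Puppeteer (the UA string below is just an example of a realistic desktop Chrome value):

const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch({ headless: true });
  const page = await browser.newPage();
  // Override the default HeadlessChrome user agent, which many
  // bot-detection layers reject outright.
  await page.setUserAgent(
    'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 ' +
    '(KHTML, like Gecko) Chrome/96.0.4664.110 Safari/537.36'
  );
  await page.goto('https://sellercentral.amazon.com/hz/fba/profitabilitycalculator/index');
  await browser.close();
})();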
Now, if you'd like to reproduce the problem:
You need to wait for the mentioned selector on this site:
https://sellercentral.amazon.com/hz/fba/profitabilitycalculator/index
and then deploy it to Heroku and try to run it maybe 2-3 times
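A minimal repro of the failing step might look like this; 30000 ms is Puppeteer's default waitForSelector timeout, matching the error above, and the sandbox flags are the ones commonly needed to launch Chromium on Heroku dynos:

const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch({
    args: ['--no-sandbox', '--disable-setuid-sandbox']
  });
  const page = await browser.newPage();
  await page.goto('https://sellercentral.amazon.com/hz/fba/profitabilitycalculator/index');
  // This is the call that times out on Heroku: the selector never appears
  // because an interstitial/robot-check page is served instead.
  await page.waitForSelector('#link_continue input', { timeout: 30000 });
  console.log('selector found - not blocked');
  await browser.close();
})();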
Two questions:
1. How can I proceed from here? I'm 99.9% sure it's the same issue I had previously, but I can't verify... any suggestions?
2. Given that this is actually the problem, can anyone suggest an easy-to-use/deploy hosting service that also allows easy VPN configuration? I think Heroku doesn't let you do that unless you have an enterprise account.
Thanks
I would like to point out that Amazon is very good at blocking IPs. It is very likely that they have already blacklisted the IPs of cloud services like Heroku, Azure, etc. I have previously observed services like Cloudflare, Akamai, etc. blacklisting these well-known IP ranges.
In this scenario, rotating proxies could help you avoid getting blocked.
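If you go the proxy route, Puppeteer can be pointed at one per launch via a Chromium flag; the proxy host and credentials below are placeholders:

const puppeteer = require('puppeteer');

(async () => {
  const browser = await puppeteer.launch({
    // Route all browser traffic through a (rotating) proxy endpoint
    args: ['--proxy-server=http://proxy.example.com:8000']
  });
  const page = await browser.newPage();
  // If the proxy requires authentication, supply it per page
  await page.authenticate({ username: 'user', password: 'pass' });
  await page.goto('https://sellercentral.amazon.com/hz/fba/profitabilitycalculator/index');
  await browser.close();
})();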

Netlify deploy of React application breaks on parameters

I am having trouble putting an app into production with Netlify. The app works smoothly locally, but when deployed to Netlify this endpoint breaks:
mywebsite.com/confirm?token=aaf21b43849e45d5576c6e8a3278d1c4&email=test#test.com
The page is simply not found.
If I enter
mywebsite.com/confirm?token=aaf21b43849e45d5576c6e8a3278d1c4&email=test#test
(without the .com) it works fine.
It seems like a Netlify bug/limitation, but I would like to confirm with a more experienced audience (you).
Thank you very much
My current workaround has been to encrypt the email and decrypt it again so no dots are in the URL. Not the best solution, but it works.
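If the goal is just to keep dots out of the query string, URL-safe base64 is one way to do it; a sketch with illustrative names, using the btoa/atob browser built-ins so it can run inside the React app:

// When building the confirmation link
const email = 'test@test.com';
// btoa emits '+', '/' and '=' padding; swap them for URL-safe characters
const encoded = btoa(email).replace(/\+/g, '-').replace(/\//g, '_').replace(/=+$/, '');
const link = 'https://mywebsite.com/confirm?token=TOKEN&email=' + encoded; // TOKEN is a placeholder

// On the /confirm page, reverse it
const raw = new URLSearchParams(window.location.search).get('email');
const b64 = raw.replace(/-/g, '+').replace(/_/g, '/');
const decoded = atob(b64 + '='.repeat((4 - b64.length % 4) % 4)); // restore padding
console.log(decoded); // test@test.com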

Azure 502 bad gateway

Has anyone seen this before? I am getting a 502 Bad Gateway error on my app. The issue I have is that the detailed error information says my requested URL is https://SOX:80/api, yet my site is configured to use https://sox.domain.com, and the site largely works, pulling in the various JS files required.
My App Service name is SOX in the Azure dashboard, so I assume that is where it is picking up SOX from, but I have no idea why it is using it.
So overall the issue had me perplexed... however, with more testing I soon figured out what was going on.
My backend is .NET Core; Azure throwing the 502 Bad Gateway was its way of handling exceptions, so ultimately the problem was code-based.
I am mentioning this purely so that it will help others.
My first issue was based on cert handling: it seems .NET runs in a container that is addressed by your app name, as I mentioned above (https://SOX:80).
The line below was causing my issues:
sslPolicyErrors = X509StoreStoreHelper.ValidateSSLPolicy(cert.Thumbprint, cert);
After commenting this out for testing, my problem went away (we are putting in a proper fix).
My second issue came from using an unsupported view in Azure SQL, master.sys.master_files, which again just threw a 502 Bad Gateway error referencing https://SOX:80.
Please note I have used https://SOX:80 as a reference to mask the real site.
Hope this helps the next person.
Based on your description, I have checked your site (https://sox.azurewebsites.net/) and found that it contains three static files (index.html, generic.html, elements.html). I viewed your website in a Chrome incognito window.
I did not find any requests against https://SOX:80/api in your HTML page or JavaScript files. Please try to access your website in a new incognito window to rule out a caching issue, or just press CTRL + F5 to refresh your current page to narrow this down. Moreover, you need to check whether you have configured URL Rewrite. If you still cannot solve this issue, please update your question with the details so we can reproduce it.

Could not retrieve the CDN endpoints in subscription with ID

Searched Google and SO - no luck.
Just got this message in Azure for 3 CDN endpoints.
There seems to be no way to know what is going on without MS support. It is a test account and I do not recall setting this up. I have been through similarly obfuscated MS error messages before, only to discover that Azure had crashed.
What does it mean?
This isn't really a direct answer, but could help with the general problem of "what happens if the CDN goes down?".
There is a recent development called the "Progressive Web App".
Basically, unless served from localhost, everything has to be over HTTPS, but the script is cached as a local application in your browser.
When your app makes requests to the registered domain, these are intercepted by a callback you put in your serviceWorker.js, so you can cache even application data locally and sync the local data occasionally with the server (or on receive events if you're using WebSockets).
Since the service worker intercepts REST calls to the registered domain, this in theory makes it fairly easy to add to just about any framework.
https://developers.google.com/web/fundamentals/getting-started/codelabs/your-first-pwapp/
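The interception mentioned above is just a fetch handler in serviceWorker.js; a minimal cache-first sketch (the cache name and strategy are illustrative):

// serviceWorker.js - serve cached responses first, fall back to the network
self.addEventListener('fetch', (event) => {
  event.respondWith(
    caches.match(event.request).then((cached) => {
      if (cached) return cached; // serve from the local cache if we have it
      return fetch(event.request).then((response) => {
        // Clone and store the fresh response for next time (e.g. a CDN outage)
        const copy = response.clone();
        caches.open('app-cache-v1').then((cache) => cache.put(event.request, copy));
        return response;
      });
    })
  );
});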
Sometimes there is a (global) problem with the CDN; it has happened before.
You can check the Azure CDN status on this page: https://azure.microsoft.com/en-us/status/
At the moment everything looks good; do you still have problems?

Heroku Node.js sample Facebook app does not work in Google Chrome

The Heroku app I'm trying to get to work (code here):
https://github.com/heroku/facebook-template-nodejs
"Unsafe Javascript attempt to access frame with URL" errors occur when the page is loaded in chrome.
The login button takes you to facebook but does not actually log you into the app and gives the same errors.
Has anyone got this app to work on Chrome or can anyone advise as to how to patch it up?
P.S. it seems to work fine on Mozilla.
Almost certain this is a cross domain policy issue, as stated above. Generally speaking, you just need to add the correct header info to the response.
Access-Control-Allow-Origin: *
In Node, I think it is just a matter of adding it as another header in the response, using response.writeHead.
See http://nodejs.org/api/http.html#http_response_writehead_statuscode_reasonphrase_headers
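A minimal sketch of that with the plain http module (the port and payload are placeholders):

const http = require('http');

http.createServer((req, res) => {
  // Send the CORS header along with the rest of the response headers
  res.writeHead(200, {
    'Content-Type': 'application/json',
    'Access-Control-Allow-Origin': '*'
  });
  res.end(JSON.stringify({ ok: true }));
}).listen(5000);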
Oh, and there are explicit instructions on how to do it if you're using Express. I see no reason why it can't work using plain old Node, then.
http://enable-cors.org/server_expressjs.html
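That recipe boils down to a small middleware registered before any routes (the port and route here are placeholders):

const express = require('express');
const app = express();

// Set the CORS headers before any other middleware or routes run
app.use((req, res, next) => {
  res.header('Access-Control-Allow-Origin', '*');
  res.header('Access-Control-Allow-Headers', 'Origin, X-Requested-With, Content-Type, Accept');
  next();
});

app.get('/', (req, res) => res.send('hello'));
app.listen(5000);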
So I looked at your link; in your case, I think you just have to set the header info before using any other Express app methods.
As to why it works in Firefox and not Chrome, I'm not sure. Both have supported CORS for many versions back. Maybe you have a Chrome extension that's interfering.
