A Spring Boot REST API is deployed as a web app by firing up a Docker image in Azure. After that I need to make a POST request to test the API. Here comes the issue: I can't seem to access the API. It is not an issue with the code itself, since I get the expected result when I deploy the code locally.
Here are some of my key steps.
I run the following command to fire up the application from the Docker image (the Docker image is saved in the Azure Container Registry):
docker run -d -p 8177:8177 my-api-image:latest
I log in to Azure from the Azure CLI:
az login
I call the POST method from the terminal:
curl -X POST -d 'from=161&to=169&limit=100' https://<my-app-name>.azurewebsites.net:8177/readRecords
But I keep getting a connection timeout error:
Failed to connect to <my-app-name>.azurewebsites.net port 8177: Connection timed out
I also tried running the curl command from the shell in the Azure Portal in the browser, and it gave me the same timeout error. Does anyone know the reason for this, and how can I solve it so that I can send a POST request?
Azure Web Apps only support port 80 for HTTP and port 443 for HTTPS.
So your port 8177 doesn't work. For more details, please read my answers in the posts below.
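The usual fix for a container app is to tell App Service which port the container listens on via the WEBSITES_PORT app setting, and then call the site on the default HTTPS port. A minimal sketch, assuming placeholder resource-group and app names:

az webapp config appsettings set --resource-group <my-rg> --name <my-app-name> --settings WEBSITES_PORT=8177
curl -X POST -d 'from=161&to=169&limit=100' https://<my-app-name>.azurewebsites.net/readRecords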
Related posts:
1. Strapi on Azure does not run
2. Django channels and azure
I've written a simple Node.js (Express.js) server that uses Puppeteer to generate PDF files from JSON data passed to it. While locally everything works like a charm, I'm struggling to run this server on Azure App Services.
I've created a resource group, and within it an App Service instance (running on Linux) that is connected to my repo on Azure DevOps (via the Deployment Center).
My server has two endpoints:
/ - returns JSON - { status: "ok" }. I'm using this to validate that the server instance is running.
/generate-pdf - uses Puppeteer to generate and return a PDF file.
After successfully starting the App Service instance, I'm able to access the "/" route and get a valid response, but upon accessing the "/generate-pdf" route the result is "502 - Bad Gateway".
Does my instance require some additional configuration that I haven't done?
Can App Services not run Puppeteer? Perhaps there is a different service on Azure that I need to use?
Is there a way to automate the process via the Azure DevOps pipeline or release?
Any questions/thoughts/FAQs are more than welcome. Thanks =)
I'm answering my own question: as was mentioned here https://stackoverflow.com... Azure App Services does not allow the use of GDI (which is required by Chrome), regardless of whether you're using a Linux- or Windows-based system. The solution was to put the Node.js application into a Docker container and manually install Chrome. Once you have a container, just upload it to Azure App Services and voilà!
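A minimal Dockerfile sketch of that kind of setup (the base image, package names, and server.js entry point are assumptions, not the exact file I used):

FROM node:18-slim
# Install a system Chromium for Puppeteer to drive (Debian package name assumed)
RUN apt-get update && apt-get install -y chromium fonts-liberation && rm -rf /var/lib/apt/lists/*
# Tell Puppeteer to skip downloading its bundled browser and use the system one instead
ENV PUPPETEER_SKIP_CHROMIUM_DOWNLOAD=true
ENV PUPPETEER_EXECUTABLE_PATH=/usr/bin/chromium
WORKDIR /app
COPY package*.json ./
RUN npm ci --omit=dev
COPY . .
CMD ["node", "server.js"]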
By default App Services exposes ports 80 and 443, so if your application listens on a different port, be sure to specify it via the WEBSITES_PORT environment variable.
In my case, I had to upload the Docker image to Docker Hub, but you can also set up a pipeline to automate the process.
I built the Docker image on my M1 Pro, which led to some architecture issues when the container was uploaded to Azure. Be sure to add --platform linux/amd64 to the image-building step if you're building for Linux.
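A sketch of the build-and-push steps (the image and account names are placeholders):

docker build --platform linux/amd64 -t <dockerhub-user>/pdf-service:latest .
docker push <dockerhub-user>/pdf-service:latest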
I am running a 3-tier application in Azure: a frontend (React), a backend (Spring Boot), and a managed Azure database.
To run the backend and frontend, I have Azure App Service leveraging containers.
It worked fine until we restricted the backend to be accessible only via a private endpoint in a VNet.
The frontend is obviously connected to the very same VNet via a SWIFT connection (known as VNet integration).
So far, it is all good.
The issue arises when there is a problem with the backend, e.g. it cannot connect to the database because I messed up the connection string. So I fixed that issue and restarted the backend with a new version containing the fixed connection string.
Buuuut, and here it comes... since the backend previously crashed, it is not running, and the way to bring up the new version is to simply call the App Service URL (curl https://my-backend.azurewebsites.net). The issue is that it is not possible to call it, since it is behind a private endpoint.
A workaround would be to start a VM inside the very same VNet and call the backend like this:
admin@debug:~$ curl -k https://10.0.20.4 -I -H "Host: my-backend.azurewebsites.net"
and this works.
But this is a very cumbersome solution, which in fact is not a solution at all.
Does anyone have an idea how to make this work?
I am using Terraform, and I have also noticed that when I completely un-deploy the App Service and deploy it again, it boots up again.
Thx
I am in a Kubernetes cluster with two services running. One of the services exposes an endpoint like /customer/servcie-endpoint, and the other service is a Node.js application which is trying to access data from the first one. Axios doesn't work with a bare path, since it needs a host to work with.
If I open a shell with kubectl exec and run curl against /customer/servcie-endpoint, I receive all the data.
I am not sure how to get this data in a Node.js application. Sorry for the naive ask!
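For what it's worth, the usual pattern is to address the other service by its in-cluster Service DNS name; a minimal sketch, assuming the first service is exposed by a Kubernetes Service named customer-service on port 80 in the same namespace:

// Call the other service via its in-cluster DNS name
// ("customer-service" is an assumed Service name; substitute your own)
const axios = require('axios');

async function fetchData() {
  const res = await axios.get('http://customer-service/customer/servcie-endpoint');
  return res.data;
}

fetchData().then(console.log).catch(console.error);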
I am executing a bunch of Postman APIs using the Newman command on Jenkins.
One of those APIs requires a webhook URL in the body:
{
"webhook_url": "http://localhost:8000"
}
which I want to use later on to retrieve the content posted on it.
I tested it on my local machine by creating a local web server using Node.js, which acted as the webhook URL, and I could verify/see the content getting echoed on that web server.
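A minimal sketch of the kind of echo server I mean (assuming Express and port 8000):

// Local webhook receiver: logs whatever gets POSTed to it
const express = require('express');
const app = express();
app.use(express.json());
app.post('/', (req, res) => {
  console.log('webhook received:', req.body); // echo the posted content
  res.sendStatus(200);
});
app.listen(8000, () => console.log('webhook listening on http://localhost:8000'));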
But I need help with achieving the same thing via Jenkins.
In short, I want to:
Create and start a web server via Jenkins which I can use as the webhook_url
Execute the collection of Postman APIs and verify the content posted to the webhook_url
Kill the web server
Fixed it by creating a web server image in Docker, so that I don't have to create, start, or kill it via Jenkins.
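For reference, a sketch of how such a container can be wrapped around the Newman run (the image name and collection file are placeholders, and it assumes the collection reads the URL from a {{webhook_url}} variable):

docker run -d --name webhook -p 8000:8000 <webhook-image>
newman run collection.json --env-var webhook_url=http://localhost:8000
docker rm -f webhook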
The app (Node.js) was deployed on Cloud Foundry (IBM Cloud, US-South) by my previous colleague, but the code in our private GitHub (separate from IBM DevOps) used for local testing doesn't look the same as what he deployed on Cloud Foundry. I have tried the following methods to download it from the cloud, but none of them work:
Bluemix file viewer - I can't find it in the new IBM Cloud interface. How can I use DevOps services to achieve it? Ref
bx cf download - it doesn't work because the app is running on the Diego backend. Ref
bx cf file - it doesn't work because the app is running on the Diego backend. Ref
bx cf curl - I got the following error message, and after I turned off the firewall the error message was still the same. I can't find a way to solve this problem. Ref
Invoking 'cf curl /v2/apps/7fe6cdb8-521f-4716-954d-d9598502d049/droplet/download'...
FAILED
Error creating request:
Error performing request: Get https://dal05.objectstorage.service.networklayer.com:443/v1/AUTH_9832c938-360c-442a-9713-a5ad3a5d5368/cc-droplets/20/ef/20efe5fb%!D(MISSING)0fa9%!D(MISSING)4ceb%!D(MISSING)8098%!D(MISSING)ec710c8ad0db/fb2ea5e85ec02b65e1d987a7223b92c414df5851?temp_url_sig=8e2b2f7ce7a420d323a0ed5f002669a095af5b12&temp_url_expires=1517896403: dial tcp 10.1.129.3:443: getsockopt: connection refused
TIP: If you are behind a firewall and require an HTTP proxy, verify the https_proxy environment variable is correctly set. Else, check your network connection.
Cloud Object Storage - I don't have permission. Ref
SSH without CLI - It doesn't make sense for it to ask for my password, because I use a federated user ID without a password. Ref
You should be able to 'bx cf ssh' into the runtime. Then you should be able to make a tar package of whatever you deem necessary. Then you just need a place to upload it to.
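A sketch of that flow (the app name is a placeholder; /home/vcap/app is the usual app directory inside a Cloud Foundry container, and the -c flag streams the command's output back over SSH):

bx cf ssh <app-name> -c "tar czf - -C /home/vcap app" > app.tgz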