How to deploy a .NET Core application on an Ubuntu server with Nginx? - linux

I have an application built on .NET Core 3.1. I tried to deploy it on an Ubuntu 18.04 server by following the steps given in this doc, but I am not able to access the app on port 80 (accessing through the public IP).
Here is the updated Nginx configuration.
The dotnet application is running on ports 5000 and 5001 (for now I haven't configured a service for it).
I get the following error when accessing it through the browser (public IP).
Am I missing any configuration?
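For reference, a minimal Nginx reverse-proxy sketch for this kind of setup (assuming the app listens on port 5000; the server_name is a placeholder):

```nginx
server {
    listen 80;
    server_name _;

    location / {
        # Forward everything to the Kestrel server on localhost:5000
        proxy_pass         http://localhost:5000;
        proxy_http_version 1.1;
        proxy_set_header   Upgrade $http_upgrade;
        proxy_set_header   Connection keep-alive;
        proxy_set_header   Host $host;
        proxy_set_header   X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header   X-Forwarded-Proto $scheme;
    }
}
```

After changing the config, `nginx -t` validates it and `sudo systemctl reload nginx` applies it.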

The problem was an IIS Express port-access issue.
By default, IIS Express does not allow the external network to access the port, and this access needs explicit configuration.
If you are facing the same problem, you can find the code snippet and other details here:
Accessing IISExpress for an asp.net core API via IP
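For reference, the usual fix is to edit the site binding in IIS Express's applicationhost.config (under the solution's .vs/config folder) so it accepts non-localhost clients; a hedged sketch, with the port number as a placeholder:

```xml
<!-- Before: <binding protocol="http" bindingInformation="*:5000:localhost" /> -->
<!-- After: accept connections on any hostname -->
<binding protocol="http" bindingInformation="*:5000:*" />
```

On Windows this typically also needs a URL reservation (e.g. `netsh http add urlacl url=http://*:5000/ user=Everyone` from an elevated prompt) and a firewall rule for the port.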

Related

Access to node server and react app hosted on vm via internet

I'm trying some stuff with VMs (VMware) and Node.js.
I set up 2 VMs running Ubuntu 20.04.5 LTS with static IPs, let's say 192.168.10.3 and 10.4, each accessible via SSH over the internet through my public IP:
ssh -p 103 admin@<public-ip>
ssh -p 104 admin@<public-ip>
I opened those ports (103 and 104) on the Windows firewall and forwarded them on the router; everything is fine so far.
On one VM (10.4) I installed a Node server running on port 3000 (which I can access from Windows at 192.168.10.4:3000) and a React app (it could be anything using a different port) on 3001.
How can I access the Node server and/or the React app using my public IP?
I tried public-ip:104 in the browser, but Firefox blocked it: "This address uses an unusual network port for web browsing. As a security measure, Firefox dropped the request."
public-ip:3000 is not working either.
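public-ip:3000 fails because only ports 103 and 104 were forwarded on the router; port 104 is the SSH daemon, not the web server, hence the browser warning. One workaround, assuming SSH access as described above, is to tunnel the remote port over SSH instead of exposing it publicly, then browse http://localhost:3000 locally:

```
# Forward local port 3000 to the VM's port 3000 over the existing SSH port
ssh -p 104 -L 3000:localhost:3000 admin@<public-ip>
```

Alternatively, forward ports 3000/3001 on the router the same way 103/104 were forwarded, so public-ip:3000 reaches the VM directly.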

How to access host nodejs app from docker nuxt app?

I am trying to get my Nuxt app working on the production servers. On my local machine, the generated Docker image runs well and can access the Node.js app that runs on localhost; the axios setting 'baseURL: http://127.0.0.1:6008/' seems to work fine. On the production servers, I used Docker to set up the Nuxt app the same way I tested on my local machine. Yet the Docker Nuxt app cannot reach the Node.js app on the host server. I can see this must be some kind of network setting issue.
In a Vue.js app, I usually set up a ProxyPass in the Apache web conf to match incoming backend queries and rewrite them to the localhost address:
ProxyPass /app/query http://localhost:6008/query
In the nuxt.config file, the axios setting looks like this:
axios: {
  baseURL: 'http://127.0.0.1:6008/',
  browserBaseURL: ''
},
Does Docker need additional settings, or should I configure Apache for this communication between my Docker container and a Node app running on the host under Apache/pm2?
localhost or 127.0.0.1 from within Docker will not resolve to the server's localhost. Instead you need to specify the name of the Node.js service (if you are using docker-compose) or the Node.js container's name (if you are just using docker run).
You can also try using the IP of the server where Docker is running instead of 127.0.0.1.
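A hedged docker-compose sketch of both options (service names, images and ports are placeholders):

```yaml
# Option A: run the Node.js API as a compose service; its service name
# resolves as a hostname inside the compose network.
services:
  nuxt:
    image: my-nuxt-app            # placeholder image
    environment:
      - API_URL=http://api:6008/  # "api" is the service name below
  api:
    image: my-node-api            # placeholder image

# Option B: keep the Node.js app on the host and map the host gateway
# into the container (Docker 20.10+), then use
# http://host.docker.internal:6008/ as the baseURL.
#  services:
#    nuxt:
#      extra_hosts:
#        - "host.docker.internal:host-gateway"
```

Note also that `baseURL` is used for server-side requests inside the container, while `browserBaseURL` is what the user's browser calls, so the two usually need different values in production.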

Keycloak on an AWS EC2 Windows instance is reachable inside the instance only, not from outside

I'm using Keycloak version 15.1.1 and a 64-bit Windows EC2 instance.
I downloaded the RDP file from AWS, logged into the instance using RDP, and added Keycloak and MySQL Connector 8.0.31 to connect Keycloak to an external database.
I followed this guide: https://jbjerksetmyr.medium.com/how-to-setup-a-keycloak-server-with-external-mysql-database-on-aws-ecs-fargate-in-clustered-mode-9775d01cd317
I did everything as mentioned there.
Note: to run Keycloak, the linked guide uses the "standalone.sh" command, but for a Windows instance "standalone" is enough.
So I ran Keycloak using the following commands.
Command 1: "standalone -b 172.31.35.208" (this is my private IP).
It runs successfully on 172.31.35.208:9090.
Here port 9090 is mapped to 8080, so my Keycloak currently runs on port 9090. But if I use this same address outside of the instance, it shows the error "This site can't be reached".
Command 2: "standalone -b http://ec2-35-180-74-78.eu-west-3.compute.amazonaws.com" (this is my public DNS).
(screenshot: running inside the instance)
It runs successfully on http://ec2-35-180-74-78.eu-west-3.compute.amazonaws.com:9090/
But if I use this same address outside of the instance, it shows the error "This site can't be reached".
(screenshot: running outside the instance)
Note: here -b tells the instance to bind Keycloak to my private IP or my public DNS.
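For reference (this is not the fix the asker ultimately used): with the WildFly-based Keycloak distributions, -b expects an interface address rather than a URL, and binding to all interfaces is the usual way to accept external connections, together with an inbound security-group rule for the port:

```
standalone -b 0.0.0.0
```

With this binding, Keycloak listens on every network interface, so the public DNS name works from outside as long as the security group allows the mapped port.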
I tried editing the inbound rules to allow traffic on ports 8080 and 9090, but it doesn't work.
Can anyone help me solve this issue?
The Keycloak service is not accessible from a public browser; it is only accessible inside the instance.
Help me find the issue.
Issue fixed. The issue was caused by version 15; after switching to version 19 it is working now.

How to access a StrongLoop LoopBack application running in a subfolder on a live server

I have installed a StrongLoop LoopBack application on a live server, on a domain, e.g. www.abc.com. I created the LoopBack project in a subfolder called "lb". After successful creation I executed the command "slc run"; the terminal logs that the LoopBack app is running on http://localhost:3000, but when I opened www.abc.com/lb or www.abc.com:3000/lb... it was not running there. What mistake did I make? Thanks in advance.
Your LoopBack app is running on the same domain but on another port, so you can access it at http://www.example.com:3000.
It doesn't matter which folder your application is stored in. When you run it, by default it runs on the default domain on port 3000.
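If the goal is to serve the app under www.abc.com/lb without exposing port 3000, a reverse proxy is the usual approach; a hedged Nginx sketch, assuming Nginx already fronts the domain:

```nginx
location /lb/ {
    # The trailing slash on proxy_pass replaces the matched /lb/ prefix,
    # so /lb/api/users is forwarded as /api/users.
    proxy_pass http://localhost:3000/;
    proxy_set_header Host $host;
}
```

Note that the app must then generate links and API paths that work under the /lb prefix.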
You should be able to deploy your app to that server using the command:
$ slc deploy http://user:pass@prod1.example.com:3000
This is also documented here

Strange behaviour of Mean.io on an Azure VM

I created an Azure virtual machine with the Ubuntu 14.04 LTS OS.
On this virtual machine I installed a mean.io application, version 0.3.3, with nginx proxying requests on port 80 to the app's HTTP port 3000.
I opened one endpoint in the Azure portal, for the TCP protocol, with private port 3000 and public port 80.
I installed the latest version of Node on the Azure VM.
The database (MongoDB) is hosted on compose.io.
With pm2 (https://www.npmjs.org/package/pm2) I created a daemon that runs the application.
Apparently everything worked fine: there was no CPU load and memory usage was low (only 100 MB).
But after a period, Node.js cannot process requests.
I tried to 'curl' localhost:3000 but I don't get any response.
The problem occurs only on the Azure VM: I tried the same application, with the same configuration, on my dev machine (Ubuntu 14.04 desktop) and on DigitalOcean (another Ubuntu 14.04 server distro), and everything works without problems.
Can you help me find the problem?
I tried to dockerize the whole infrastructure on the same machine (a CoreOS VM on Azure):
1 container with the MEAN app,
1 container with MongoDB,
but the problem still persisted!
Finally, I found the solution: keep the connection to MongoDB alive.
I modified the server.js file of the MEAN app like this:
var options = {
  server: {
    socketOptions: { keepAlive: 1 }
  }
};
var db = mongoose.connect(config.db, options);
This way the connection stays alive, and the problem was solved.
