Problems running .NET Core 3.1 and SQL Server 2017 Linux images on Linux host

I'm running a Web API using .NET Core 3.1 with EF Core 3.1 connecting to a SQL Server 2017 database. Every component is built in a Linux Docker image and I'm using docker-compose with Nginx as a reverse proxy to route to my Web API.
When I test my deployment on Windows 10 using Docker Desktop, everything works perfectly. When I test the exact same images, with the exact same docker-compose.yml file, on a Linux host (Ubuntu 18.04), the connection from the API to the database seems to drop from time to time, randomly, which is a problem when it happens in the middle of a DB write operation. I either get a 504 (timeout) or, worse, a lock on a database row which blocks access to that table.
Again, if I deploy on Windows, this never happens. It only happens when the Docker host is Linux.
Any help would be greatly appreciated. I've been trying to find a solution for the past 3 days, but nothing I've tried has corrected that behaviour on my Linux box.
One more thing. On Windows, instead of connecting the API to the DB by name (the container's name), I'm using host.docker.internal in the connection string. There is no equivalent for Linux. I've read about a couple of hacks, but none seem to solve my problem.
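For reference, a compose layout where the API reaches SQL Server by its service name works the same on Windows and Linux hosts, so the connection string doesn't need host.docker.internal at all. A minimal sketch, with the service names (api, db), image name, and sa password as placeholders rather than anything taken from the question:

cat > docker-compose.sample.yml <<'EOF'
services:
  db:
    image: mcr.microsoft.com/mssql/server:2017-latest
    environment:
      - ACCEPT_EULA=Y
      - SA_PASSWORD=<your-sa-password>
  api:
    image: <your-api-image>
    depends_on:
      - db
    environment:
      # the SQL Server container is reachable by its compose service name
      - ConnectionStrings__Default=Server=db;Database=AppDb;User Id=sa;Password=<your-sa-password>
EOF

If a container really does need to reach the host itself, Docker Engine 20.10 and later also accepts extra_hosts: ["host.docker.internal:host-gateway"] on Linux, which restores the Windows-style name.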

Related

M1 Apple Silicon MongoDB connection creates new database, also affects Docker deploy

I got a brand new MBP M1 Max last week and started porting over my development environment. Almost everything related to Node.js and our projects ported over fine; however, I did run into an issue with Docker. It turned out that I needed to tell Docker to build for a Linux environment, and that resolved the issues with bcrypt.
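In case it helps anyone hitting the same bcrypt problem: the usual way to force an x86-64 Linux build on an M1 is Docker's --platform flag (the image tag here is just an example, not one of ours):

docker buildx build --platform linux/amd64 -t myapp:qa .

The equivalent in a compose file is a platform: linux/amd64 entry on the service.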
What I was unaware of was that, apparently, that deploy (to QA) did not have a good database connection. It turns out that the build connects to the server, but then creates a database named "test" and apparently uses that database rather than the one specified in the URL.
To debug, I took the URL from QA and plugged it into development, and I am getting the same behavior.
On my old Intel Mac, I did full pulls and npm installs to get everything to match, and guess what, it works. So everything that was working on Thursday on the Intel Mac still works with the library and code updates made since Thursday, but on the M1 Mac... it doesn't fail, it just creates a whole new database! By the way, the Docker deploys to QA also work on the Intel Mac. Something about the M1 Mac is causing this issue.
This doesn't even seem like the kind of bug one could hit because of a chip change.
Here is our QA DB URL, minus the information that would make our DB public. It is hosted on MongoDB Atlas. Is anyone else seeing this behavior? Anyone got the workaround?
login: mongodb+srv://fake:password#cluster0.something.mongodb.net/databasenameqa?replicaSet=atlas-12hyiw-shard-0&readPreference=primary&connectTimeoutMS=10000&authSource=admin&authMechanism=SCRAM-SHA-1
I am going to take this over to the GitHub repo for the MongoDB Node.js driver. Also, I just checked my localhost connection URL and it looks like this:
mongodb://localhost:27017/databasenamedev
so it's not the database name after the / that is the issue.
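One quick check before filing anything: ask the shell which database it actually resolves from that URI, since MongoDB clients silently fall back to the default database "test" when no database name can be read out of the connection string. A sketch with mongosh and placeholder credentials:

mongosh "mongodb+srv://fake:password@cluster0.something.mongodb.net/databasenameqa?authSource=admin" --eval "db.getName()"

That should print databasenameqa if the URI parses as expected, and test if the path component is being lost somewhere.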

Umbraco 9 on Linux Server (Kestrel Service) - Site Dies After Deploying Code Changes

I've gotten an Umbraco 9.0.1 instance up and running on a Linux server (Ubuntu 20.04). I'm using what appears to be the recommended approach of using Nginx as a reverse proxy to a Kestrel service:
https://learn.microsoft.com/en-us/aspnet/core/host-and-deploy/linux-nginx?view=aspnetcore-5.0
The issue I'm having is that any time I do a code update and deploy it to my Linux server, followed by a restart of the Kestrel service, the site dies. I get a 502 Bad Gateway error. Reloading the daemon does nothing. The only thing I've found that seems to work is to navigate to the place in the filesystem where I've deployed the published application and run the command:
dotnet Umbraco.Nine.Linux.dll (Umbraco.Nine.Linux being the name of my project and the corresponding dll the result of the publish)
That command just hangs, so I hit Ctrl+C to stop it, but that brings the site back up.
Has anyone else experienced anything like this? If not, how are you handling Umbraco code deployments to a Linux server? I should add that if I use the default ASP.NET Core website from Visual Studio, I don't have this issue: I deploy my code, restart the Kestrel service, and everything is beautiful, no site outage, and the changes show up. So maybe there is some Umbraco startup process that's making things die?
I've checked the logs for the kestrel service and they just indicate the service stops/restarts as expected. The logs in Nginx confuse me. They say something like
2021/09/27 15:35:08 [error] 81048#81048: *1 connect() failed (111: Connection refused) while connecting to upstream, client: 184.67.248.194, server: <mysite.com>, request: "GET /umbraco/api/test/another HTTP/1.1", upstream: "http://127.0.0.1:2003/umbraco/api/test/another", ..."
EDIT:
I thought I should post some additional details, just in case it helps. For publishing the project, I am using Visual Studio's built-in publishing and choosing File System.
(Note: I've tried the single-file publish but I haven't even gotten that to run; I get an error indicating some library the project depends on will not run in a single-file publish. Again, this seems Umbraco-specific, as I can get a default ASP.NET Core site to publish just fine like this. I'm only adding this note in case it's relevant to something just being wrong with Umbraco 9, as we're still very early in the release.)
After that I just zip it up, upload it to the server, and unzip it with commands like:
unzip -o website.zip -d <path/to/my/web/app/>
Then
sudo systemctl restart <my.kestrel.service>
That's when browsing the site yields the 502 Bad Gateway I described above. Any help on what I'm doing wrong would be greatly appreciated.
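For anyone debugging the same thing: the nginx "connection refused" above just means nothing is listening on the upstream port after the restart. Before digging into Umbraco itself, it may be worth stopping the service before unzipping over the running files, then checking what Kestrel logs on startup (port 2003 is taken from the nginx error; the service name placeholder is the same one used above):

sudo systemctl stop <my.kestrel.service>
unzip -o website.zip -d <path/to/my/web/app/>
sudo systemctl start <my.kestrel.service>
sudo journalctl -u <my.kestrel.service> -n 50 --no-pager
ss -tlnp | grep :2003

If ss shows nothing on 2003, the journal output should say why the app exited.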
So, I solved this, sort of. I've abandoned the whole Kestrel service approach and I'm using a Docker container to run the ASP.NET Core application. I'm still hosting with Nginx as a reverse proxy; the only difference is that instead of a systemd Kestrel service listening on the specified port, it's a Docker container doing it.
I posted details here: https://our.umbraco.com/forum/umbraco-9/107189-umbraco-nine-on-linux-server-site-dies-after-deploying-code-changes
Hopefully it helps someone else out!
I'm pretty excited because the whole thing is scriptable, so I managed to automate it in an AWS CodePipeline.
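Roughly, the swap amounts to replacing the systemd Kestrel unit with a container publishing the same upstream port nginx already proxies to. A sketch, where the image name and the app's internal port (80) are assumptions rather than details from the post:

docker run -d --name umbraco-site --restart unless-stopped -p 127.0.0.1:2003:80 <my-umbraco-image>

Binding to 127.0.0.1 keeps the container reachable only through the nginx reverse proxy on the same host.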

Getting over a 426 Upgrade Required

I've been working on a web app (front Angular, back Node/Express/Mongo) for a few months now.
I run Angular on localhost:4200 and Node on localhost:3000
Some people in our team are running the backend in a VM that runs on their computers.
So that the app works in both cases, we've edited the Windows hosts file to make the app point to the correct place (either the VM or the backend on the local machine):
127.0.0.1 mysite
Developers using the VM replaced 127.0.0.1 with their VM's IP.
Everything worked smoothly.
A few days ago, our company installed BitLocker on every PC, and I believe it caused our setup to break for everyone not using the VM (which is not subject to BitLocker).
People working on localhost started receiving from the front app:
OPTIONS http://mysite:3000/auth/login 426 (Upgrade Required)
The requests are not even hitting the Node server. Looks like they're redirected to a websocket server?
If I change the requests to target localhost:3000, the app works again, but we lose the setup for people working on the VM (and committing code becomes annoying if we need to change the base URL each time).
I could make an environment for each case but it's not clean and I'd like to know why it suddenly broke.
Try changing the port from 3000 to something else.
I just ran into this issue when a coworker tried running an express app we've been building on a Windows machine for the first time, as opposed to an EC2 instance. I've been using a Mac during development.
The issue seemed to be that 0.0.0.0:3000 was already mapped on company Windows machines. If you run netstat -an in a command prompt you may see it in use already.
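Following up on that: plain netstat -an only shows that the port is bound, but on Windows you can also find out which process owns port 3000 (the PID below is just an example):

netstat -ano | findstr :3000
tasklist /FI "PID eq 1234"

Whatever shows up there is likely what's answering with the 426 instead of the Express app.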
Hello mate, this usually happens due to a protocol mismatch between the PC and the server. TLS 1.0 and 1.1 were permanently deprecated on June 4, 2018. I suspect you're using something that still uses an old version of TLS.

CentOS 6.6 TCadmin - Server not responding to query

Hello "stackers" i've been trying the past few days to set-up TCadmin GamePanel on my CentOS server.
We're running 64-bit CentOS with the 32-bit libraries installed.
- We can create Murmur / Counter-Strike: Source servers without problems.
- Other servers are not responding to query.
- We've allowed the port range 27015 - 27030.
- SteamCMD is running and we can connect to the Steam API (tried from the server).
The following folder names are all lowercase:
/home/tcagame/user
/home/tcadmin/tcafiles/games
/home/tcadmin/tcafiles/users
So, based on what my research led to, it isn't because of uppercase letters.
We've tried to reinstall the entire server but nothing works.
Does anybody know why this is happening?
(If any info is missing, I'll provide it to you.)
Best regards
Rune
Update:
TCadmin support responded to our ticket:
By default, TCAdmin runs "srcds.exe".
This is not the right file to run.
Click on the server, choose Service Settings, and change "srcds.exe" to "srcds_run".
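Separately from the srcds_run fix, one thing worth double-checking when game servers don't respond to query: Source-engine query traffic is UDP, so the range has to be open for UDP as well as TCP, and CentOS 6 still uses plain iptables. A sketch of what that would look like:

iptables -I INPUT -p udp --dport 27015:27030 -j ACCEPT
iptables -I INPUT -p tcp --dport 27015:27030 -j ACCEPT
service iptables save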

Launching an app using its URL on OpenShift Origin

I'm learning to use the open source version of OpenShift. I have downloaded the linux image and started it on a virtual machine (named VM1) on my PC, which runs Windows 7. On another VM (named VM2) I have installed another linux OS and configured the JBoss IDE to work with OpenShift. Then I have successfully created and hosted an app on my local OpenShift PaaS cloud. Here is where the problem starts:
On VM2 (the one running linux where I developed the app) I have no problem accessing my account webpage on OpenShift, viewing what apps I have created and testing them.
From any other PC on my network I can log in to the OpenShift web console and view my apps by simply entering the IP of VM1 (in my case 192.168.1.107). There I can see the URL to launch my app: http://localtest2-mydomain.openshift.local/ . But when I click on it, I get a message saying that the web page is not available. Again, if I use this link in VM2, it works like a charm.
I tried changing the system32\drivers\etc\hosts file so that any link ending in openshift.local would be sent to the IP address of VM1, but it doesn't work. Can anyone help me?
As far as I know, you cannot use wildcards in your hosts file; you would need to specify the full hostname in the hosts file for it to resolve correctly. Give that a try and see if it helps.
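Concretely, that means putting the app's full hostname (not a wildcard) into the Windows hosts file, pointing at VM1, for example:

192.168.1.107   localtest2-mydomain.openshift.local

If more apps get added later, a small DNS forwarder such as dnsmasq can handle the wildcard instead (address=/openshift.local/192.168.1.107), but a plain hosts entry per app is enough for testing.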
