In my Azure Pipeline I create a container with the following docker run command:
docker run -e 'ACCEPT_EULA=Y' -e 'SA_PASSWORD=Pa$$w0rd12' -p 1433:1433 -d mcr.microsoft.com/mssql/server:2017-latest-ubuntu
In another task I list the containers with docker ps, and it shows the container as below:
/usr/bin/docker ps
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
afca60eb6fde mcr.microsoft.com/mssql/server:2017-latest-ubuntu "/opt/mssql/bin/nonr…" 1 second ago Up Less than a second 0.0.0.0:1433->1433/tcp, :::1433->1433/tcp nostalgic_jemison
Finishing: list docker containers
After that I run my dotnet integration tests, which use the above container as the SQL server. These tests are supposed to create their own databases inside the server, run the tests, and then delete them. But it fails while running the tests with the below error:
Error Message:
System.AggregateException : One or more errors occurred. (A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: TCP Provider, error: 40 - Could not open a connection to SQL Server)) (The following constructor parameters did not have matching fixture data: DatabaseSetup databaseSetup)
---- Microsoft.Data.SqlClient.SqlException : A network-related or instance-specific error occurred while establishing a connection to SQL Server. The server was not found or was not accessible. Verify that the instance name is correct and that SQL Server is configured to allow remote connections. (provider: TCP Provider, error: 40 - Could not open a connection to SQL Server)
---- The following constructor parameters did not have matching fixture data: DatabaseSetup databaseSetup
The connection string I'm using in the integration tests is:
"Data Source=localhost,1433;Initial Catalog=dbname;User Id=SA;Password=Pa$$w0rd12"
I'm using Ubuntu 20.04 as the build agent. The same setup works fine on my local system under WSL with Ubuntu 20.04.
Update 1: I have replaced localhost with the container's IP address in the connection string. It works fine locally in WSL, but it still throws the same error on Azure Pipelines.
Update 2: I have just noticed that the container stops when dotnet test runs in the pipeline. The container is running before dotnet test, but it is no longer active afterwards.
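One race worth ruling out before dotnet test runs: SQL Server in the container needs several seconds after docker run before it accepts connections, and the docker ps output above shows it "Up Less than a second". A small wait loop between the container task and the test task can confirm or exclude this. This is a sketch in plain bash; the host and port come from the question, and the retry budget is arbitrary:

```shell
# Poll until a TCP port accepts connections (pure bash, no extra tools needed).
# Usage: wait_for_port HOST PORT RETRIES
wait_for_port() {
  local host=$1 port=$2 retries=$3
  local i
  for ((i = 0; i < retries; i++)); do
    # /dev/tcp is a bash builtin path; timeout caps a hanging connect attempt.
    if timeout 1 bash -c "cat < /dev/null > /dev/tcp/$host/$port" 2>/dev/null; then
      echo "ready"
      return 0
    fi
    sleep 1
  done
  echo "timed out"
  return 1
}

# In the pipeline, before `dotnet test` (60 retries is an arbitrary budget):
# wait_for_port localhost 1433 60
```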
A network-related or instance-specific error occurred while
establishing a connection to SQL Server. The server was not found or
was not accessible. Verify that the instance name is correct and that
SQL Server is configured to allow remote connections. (provider: TCP
Provider, error: 40 - Could not open a connection to SQL Server)
You can follow the steps below to troubleshoot the error:
Verify that the instance is running
Verify that the SQL Server Browser service is running
Verify the server name in the connection string
Verify the aliases on the client machines
Verify the firewall configuration
Verify the enabled protocols on SQL Server
Test TCP/IP connectivity
Test local connection
Test remote connection
The following constructor parameters did not have matching fixture
data: DatabaseSetup databaseSetup
You can refer to The following constructor parameters did not have matching fixture data for more help.
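The last three checks in the list can be sketched as shell commands. The container name and password are taken from the question; the sqlcmd path assumes the mssql-tools package that the 2017 Ubuntu image ships with, and the docker call is guarded so the sketch is safe to paste on a machine without docker:

```shell
# Container name as shown in the question's docker ps output.
CONTAINER=nostalgic_jemison

# Local test: run sqlcmd inside the container itself.
if command -v docker >/dev/null 2>&1; then
  docker exec "$CONTAINER" /opt/mssql-tools/bin/sqlcmd \
    -S localhost -U SA -P 'Pa$$w0rd12' -Q 'SELECT 1' \
    || echo "local sqlcmd test failed"
fi

# Remote test from the host, over the published port:
if timeout 2 bash -c 'cat < /dev/null > /dev/tcp/127.0.0.1/1433' 2>/dev/null; then
  echo "port 1433 reachable"
else
  echo "port 1433 unreachable"
fi
```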
The project has this message:
This project was scheduled for deletion, but failed with the following
message: Failed to open TCP connection to host.ru:5000 (Connection
refused - connect(2) for "host.ru" port 5000)
Can you tell me what this might be related to? Why does GitLab use a different port for deletion?
(default port is 30443)
How do I delete this message?
A lot of questions, but I really don't understand what this message is. Clearly this is an error :)
GitLab is running in Docker.
P.S. I'm now checking whether the port is open.
UPDATE: If you don't need the container registry, then disable it. This will solve the problem.
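For an Omnibus-based GitLab (which the official Docker image is), disabling the registry is a one-line change in /etc/gitlab/gitlab.rb. A sketch of that setting:

```ruby
# /etc/gitlab/gitlab.rb -- turn off the container registry integration
gitlab_rails['registry_enabled'] = false
```

Apply it by running gitlab-ctl reconfigure inside the container afterwards.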
Issue:
GitLab is running in one Docker container and the registry in a separate Docker container. The gitlab container is not able to resolve the DNS name of the registry and gives the error:
"this project was scheduled for deletion, but failed with the following message: failed to open tcp connection to registry:5000 (getaddrinfo: name or service not known)"
Solution:
Get the IP address of the registry container:
docker inspect (registry container name)
E.g. docker inspect registry
Log in to the gitlab container:
E.g. docker exec -it gitlab bash
Edit the hosts file:
vi /etc/hosts
Add the IP address and DNS name mapping for the registry container to the hosts file:
172.xx.x.1 registry
This will resolve the issue.
No restart required.
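The steps above can be collapsed into one small script. This is a sketch: it assumes the containers are literally named registry and gitlab, and the docker calls are guarded so it is safe to paste anywhere:

```shell
# Format a hosts-file line: "IP NAME".
hosts_entry() { printf '%s %s' "$1" "$2"; }

if command -v docker >/dev/null 2>&1; then
  # IP of the registry container on its network(s).
  REGISTRY_IP=$(docker inspect \
    -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' registry 2>/dev/null || true)
  if [ -n "$REGISTRY_IP" ]; then
    # Append the mapping inside the gitlab container; no restart required.
    docker exec gitlab sh -c "echo '$(hosts_entry "$REGISTRY_IP" registry)' >> /etc/hosts" || true
  fi
fi
```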
The error message is below
MongoDB shell version: 3.2.11
connecting to: test
2020-05-16T20:53:47.438+0000 W NETWORK [thread1] Failed to connect to 127.0.0.1:27017, reason: errno:111 Connection refused
2020-05-16T20:53:47.440+0000 E QUERY [thread1] Error: couldn't connect to server 127.0.0.1:27017, connection attempt failed :
connect#src/mongo/shell/mongo.js:229:14
#(connect):1:6
By the way, is there any way to automatically seed the database in the Docker container? I have to seed the database manually every time.
Thank you, guys.
Did you map your localhost port to the MongoDB container? If not, add -p 27017:27017 to your docker run command.
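A sketch of the suggested fix (the container name mongo-dev is just an example, and the docker call is guarded so the snippet is safe to paste on a machine without docker):

```shell
# The flag that publishes the container's 27017 on the host; without it,
# 127.0.0.1:27017 on the host never reaches mongod inside the container.
PORT_MAP="-p 27017:27017"

if command -v docker >/dev/null 2>&1; then
  docker run -d --name mongo-dev $PORT_MAP mongo || true
fi
```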
If you are looking to seed the database upon initialization, you can use the following as stated on docker hub:
When a container is started for the first time it will execute files with extensions .sh and .js that are found in /docker-entrypoint-initdb.d. Files will be executed in alphabetical order. .js files will be executed by mongo using the database specified by the MONGO_INITDB_DATABASE variable, if it is present, or test otherwise. You may also switch databases within the .js script.
You can do this by mounting a JavaScript or shell script file via -v "$(pwd)/path/to/file.js:/docker-entrypoint-initdb.d/file.js" (docker run requires an absolute host path for bind mounts) or via the volumes: ["./path/to/file.js:/docker-entrypoint-initdb.d/file.js"] key of your mongodb service if you are using docker-compose (Compose resolves relative paths against the compose file).
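For example (the file name seed.js and its contents are made up; any .js dropped into /docker-entrypoint-initdb.d runs once, on the container's first start with an empty data directory):

```shell
# A tiny seed script; it runs against MONGO_INITDB_DATABASE (or "test").
cat > seed.js <<'EOF'
db.users.insertOne({ name: "alice", role: "demo" });
EOF

if command -v docker >/dev/null 2>&1; then
  # Mount it with an absolute host path (docker run requires one).
  docker run -d -p 27017:27017 \
    -v "$(pwd)/seed.js:/docker-entrypoint-initdb.d/seed.js" mongo || true
fi
```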
I have setup a gitlab-ci build with the architecture illustrated below.
The listener container is unable to communicate with the postgres container using the hostname, ‘postgres’. The hostname is unrecognised. How can the listener container communicate with the postgres database instance?
The documentation recommends configuring a postgres instance as a service in .gitlab-ci.yml. CI jobs defined in .gitlab-ci.yml are able to connect to the postgres instance via the service name, 'postgres'.
The tusd, minio and listener containers are spawned within a docker-compose process, triggered inside the pytest CI job. The listener container writes information back to the postgres database.
Subsequently, I thought about using the IP address of the postgres service in place of the hostname. From within the pytest CI build job I have tried to determine the IP address of the postgres database using the following bash command sequence:
export DB_IP="$(docker inspect -f '{{range .NetworkSettings.Networks}}{{.IPAddress}}{{end}}' postgres)"
echo "DB IP ADDRESS IS $DB_IP"
However, postgres is not recognised as a container.
How do I determine the IP address of the postgres service? Alternatively can I use the IP address of the shared runner? How do I determine this?
Does anybody have any ideas?
Update 11/1/2019
Resolved by moving all services into the docker-compose file so that they can communicate with each other. This includes the postgres container etc. After some refactoring of the test environment initialisation, tests are now invoked via the docker-compose run command.
Now able to successfully run tests using gitlab-shared runner…
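For reference, a minimal sketch of the shape this ends up with (the image names, password, and the tests service are illustrative): once everything runs under one docker-compose project, Compose's network gives every service a DNS name, so 'postgres' resolves from the test container.

```yaml
services:
  postgres:
    image: postgres:11
    environment:
      POSTGRES_PASSWORD: example   # placeholder only
  tests:
    build: .
    depends_on:
      - postgres
    environment:
      DB_HOST: postgres            # resolvable on the Compose network
```

Tests are then invoked with something like docker-compose run tests pytest.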
Actually I'm trying to deploy Kubernetes via Rancher on a single server.
I created a new Cluster and added a new node.
But after some time, an error occurred:
This cluster is currently Provisioning; areas that interact directly with it will not be available until the API is ready.
[controlPlane] Failed to bring up Control Plane: Failed to verify healthcheck: Failed to check https://localhost:6443/healthz for service [kube-apiserver] on host [172.26.116.42]: Get https://localhost:6443/healthz: dial tcp [::1]:6443: connect: connection refused, log: standard_init_linux.go:190: exec user process caused "permission denied"
And when I check my docker containers, one of them is always restarting: rancher/hyperkube:v1.11.3-rancher1.
I ran docker logs my_container_id
and it shows standard_init_linux.go:190: exec user process caused "permission denied"
On the cloud vm, the config is:
OS: Ubuntu 18.04.1 LTS
Docker Version: 18.06.1-ce
Rancher: Rancher v2
Do you have any ideas about this error?
Thanks a lot ;)
What type of architecture are you on?
Please run:
uname --all
or
docker info | grep -i "Architecture"
to check this.
Rancher is only supported on x86.
Finally, I called the VM sub-contractor: they had created the VM with a noexec /var partition.
After a remount, it worked.
I've been following this tutorial to set up an Azure container service. I can successfully connect to the master load balancer via putty. However, I'm having trouble connecting to the Azure container via docker.
~ docker -H 192.168.33.400:2375 ps -a
error during connect: Get https://192.168.33.400:2375/v1.30/containers/json?all=1: dial tcp 192.168.33.400:2375: connectex: No connection could be made because the target machine actively refused it.
I've also tried
~ docker -H 127.0.0.1:2375 ps -a
This causes the docker terminal to hang forever.
192.168.33.400 is my docker machine ip.
My guess is I haven't setup the tunneling correctly and this has something to do with how docker runs on Windows 8.1 (via VM).
I've created an environment variable called DOCKER_HOST with a value of 2375. I've also tried changing the value to 192.168.33.400:2375.
I've tried the following tunnels in putty,
1. L2375 192.168.33.400:2375
2. L2375 127.0.0.1:2375
3. L22375 192.168.33.400:2375
4. L22375 127.0.0.1:2375 (as shown in the video)
Does anyone have any ideas/suggestions?
Here are some screenshots of the commands I ran:
We can follow these steps to set up the tunnel:
1. Add the Azure Container Service FQDN to Putty.
2. Add the private key (PPK) to Putty.
3. Add the tunnel information to Putty.
Then we can use cmd to test it:
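For anyone without Putty, the same tunnel can be sketched with the OpenSSH client. The user name and port 2200 are examples from a typical ACS deployment, and the function only wraps the two commands; call it with your own master's FQDN:

```shell
open_tunnel() {
  # Forward local port 2375 to the master's docker endpoint over SSH.
  ssh -fN -L 2375:localhost:2375 -p 2200 "azureuser@$1"
  # With the tunnel up, point the docker client at the local end:
  docker -H tcp://127.0.0.1:2375 ps -a
}
# Usage: open_tunnel mymaster.westus.cloudapp.azure.com
```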