Completely new to Docker here.
I have a very simple .NET 4.5 application that is built on the ServiceStack framework. It mainly exposes API web services.
The Docker image builds fine, and the container deploys and runs without issues. I am using Windows Containers (not Docker for Windows). However, one particular web service calls a third-party (legacy) API, and we use a SOAP client to do that. This web service always throws an EndpointNotFound exception:
"There was no endpoint listening at https://externalapi.asmx that could accept the message. This is often caused by an incorrect address or SOAP action. See InnerException, if present, for more details."
I have read up on the Windows NAT issue. My gut feeling is that the self-contained container is isolated and hence has no internet connectivity to the outside world. Is there anything that I need to configure on my microsoft/iis image?
My Dockerfile:
FROM microsoft/iis
SHELL ["powershell"]
# Enable ASP.NET 4.5 support in IIS
RUN Install-WindowsFeature NET-Framework-45-ASPNET ; \
    Install-WindowsFeature Web-Asp-Net45
# Copy the published site from the build context into the image
COPY . MyAwesomeWebsite
RUN Remove-WebSite -Name 'Default Web Site'
RUN New-Website -Name 'MyAwesomeWebsite' -Port 80 \
    -PhysicalPath 'c:\MyAwesomeWebsite' -ApplicationPool '.NET v4.5'
EXPOSE 80
You can easily check connectivity to the external world from your container: just log in to it and try to issue the same call from there. It can be a multitude of issues, and the simplest one can be a firewall running on your container host. For me, for example, it's not going to work unless I stop the McAfee firewall, since McAfee sees traffic coming from the 172.x subnet (which containers run on by default) and blocks it.
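For example, something along these lines using docker exec (the container name is a placeholder, and the URL is the one from your question):

# Open a PowerShell session inside the running container (replace the name with yours)
docker exec -it mycontainer powershell
# From inside the container, check name resolution / TCP reachability of the endpoint
Test-NetConnection externalapi.asmx -Port 443
# And try the actual HTTPS call
Invoke-WebRequest -Uri https://externalapi.asmx -UseBasicParsing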
Related
I've recently set up a Debian 8 Jessie VM on Google Cloud. I've installed Jenkins and have the service up and running (verified by "sudo service jenkins status"), yet I can't connect to the VM's external IP from another machine. I used to run Jenkins on my personal computer until I decided I needed a dedicated server to run it continuously. When I was running it on my personal machine I would just access localhost:8080 and the Jenkins dashboard would load fairly quickly. However, upon trying to access the external IP address of the VM running Jenkins, I'm usually greeted with "Connection refused" in my web browser.
At the suggestion of most posts I've seen regarding such issues, I've lifted all firewalls on the VM and have tried to ensure that the VM is listening at the correct IP address, but nothing seems to be able to change the outcome presented by my browser. Where does the issue most likely reside: the VM, Google Cloud, or Jenkins? I'm at a loss.
My first guess is a connection/firewall issue. To test this, you could try a port forward using SSH: SSH into your server with a local port forward (ssh -L 8080:localhost:8080 yourserver). You should then be able to point your web browser at http://localhost:8080/ and have your packets flow through the SSH connection. If that makes it work, have a good look at "How to open a specific port such as 9090 in Google Compute Engine". Or better yet, if you are the only one using that Jenkins server, just keep using the SSH tunnel. It's much more secure than opening Jenkins to the public world.
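If you do decide to open the port to the outside, the firewall rule can be created with the Cloud SDK, roughly like this (the rule name and source range are just examples):

gcloud compute firewall-rules create allow-jenkins-8080 \
    --allow tcp:8080 \
    --source-ranges 0.0.0.0/0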
Have you tried installing tcpdump on the VM and doing a packet capture? That way you can determine where the traffic is being dropped. If you don't see any traffic, then it is being dropped somewhere in the cloud before it gets to your VM. If you do see traffic, then you need to determine whether it is Jenkins or something on the host (perhaps a firewall, though you mentioned you cleared all the rules). I would suggest stopping the Jenkins service and then trying to access it again. Do you get the same "Connection refused" message? If so, then it is something on the VM. If not, then it is something at the application layer, i.e. Jenkins.
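A rough sketch of that capture (the interface and port are assumptions, adjust to your setup):

# Install tcpdump on Debian 8 and watch for traffic arriving for Jenkins
sudo apt-get install tcpdump
sudo tcpdump -i any -nn port 8080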
Happy hunting!!!
My problem is that I have a set of .NET Core applications that I created and pushed to Docker Hub:
$ docker push username/appname
On the other side I created an Azure Container Service cluster with DC/OS, and I log in to the server from a terminal:
$ ssh -i /Users/username/.ssh/id_rsa -L 80:localhost:80 -f -N username@servernamemgmt.westeurope.cloudapp.azure.com -p 2200 -v
but I can't understand how to deploy and run my Docker images.
In DC/OS, in order to deploy and run your Docker containers, you use Marathon (for long-running services such as an app server, etc.) or Jobs for one-off or scheduled tasks (think: distributed cron). You don't ssh into nodes and manually pull/run them.
If your Docker images are already on Docker Hub, you typically use Marathon to run them on your DC/OS cluster.
Since you say you configured an SSH tunnel with port forwarding (this is an important step), you should be able to access the Marathon UI at http://localhost/marathon. Then click on 'Create Application', where you can specify its settings. The part you are probably looking for is the second menu item, 'Docker Container' (in the menu to the left inside the dialog). There you can specify an image. This defaults to Docker Hub, so you can write 'username/appname' in the 'Image' text box.
There are additional settings but I think this is what your question was about.
More information: https://learn.microsoft.com/en-us/azure/container-service/container-service-mesos-marathon-ui
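If you prefer driving Marathon from the command line instead of the UI, a minimal app definition could look roughly like this (a sketch; the app id, resources and ports are assumptions, and the request goes through the same SSH tunnel):

cat > appname.json <<'EOF'
{
  "id": "/appname",
  "cpus": 0.5,
  "mem": 256,
  "instances": 1,
  "container": {
    "type": "DOCKER",
    "docker": {
      "image": "username/appname",
      "network": "BRIDGE",
      "portMappings": [{ "containerPort": 80, "hostPort": 0 }]
    }
  }
}
EOF
# POST it to Marathon through the admin router exposed by the tunnel
curl -X POST http://localhost/marathon/v2/apps \
     -H "Content-Type: application/json" -d @appname.json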
I have a Java-based application (JBoss 6.1 Community) with heavy traffic on it. Now I want to migrate this application's deployment to Docker, using Docker Swarm for clustering.
Scenario
My application needs two ports exposed from the Docker container: a web port (9080) and a database connection port (1521). There are also a few things, like a logs directory for each container, that are mounted on the host system.
Simple Docker example
docker run -it -d --name web1 -h myhostname -p 9080:9080 -p 1521:1521 -v /home/web1/log:/opt/web1/jboss/server/log/ -v /home/web1/license:/opt/web1/jboss/server/license/ MYIMAGE
Docker with Swarm example
docker service create --name jboss_service --mount type=bind,source=/home/web1/license,destination=/opt/web1/jboss/server/license/ --mount type=bind,source=/home/web1/log,destination=/opt/web1/jboss/server/log/ MYIMAGE
Now if I scale/replicate the above service to 2 or 3 replicas, which host port will each new container bind to, and which mount directories will be used for the newly created containers?
Can anyone help me understand how scaling and replication will work in this type of scenario?
I have also gone through --publish and --name global, but nothing helped in my case.
Thank you!
Supporting stateful containers is still immature in the Docker universe.
I'm not sure this is possible with Docker Swarm (if it is I'd like to know) and it's not a simple problem to solve.
I would suggest you review the Statefulset feature that comes in the latest version of Kubernetes:
https://kubernetes.io/docs/concepts/abstractions/controllers/statefulsets/
https://kubernetes.io/docs/tutorials/stateful-application/basic-stateful-set/
It supports the creation of a unique volume for each container in a scale-up event. As for port handling, that is part of Kubernetes' normal Service feature, which implements container load balancing.
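To give an idea, a minimal StatefulSet sketch with a per-replica volume might look like this (the names, image, mount path and storage size are assumptions based on your question):

cat > jboss-statefulset.yaml <<'EOF'
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: jboss
spec:
  serviceName: jboss
  replicas: 3
  selector:
    matchLabels:
      app: jboss
  template:
    metadata:
      labels:
        app: jboss
    spec:
      containers:
      - name: jboss
        image: MYIMAGE
        ports:
        - containerPort: 9080
        volumeMounts:
        - name: logs
          mountPath: /opt/web1/jboss/server/log
  volumeClaimTemplates:   # one PersistentVolumeClaim is created per replica
  - metadata:
      name: logs
    spec:
      accessModes: ["ReadWriteOnce"]
      resources:
        requests:
          storage: 1Gi
EOF
kubectl apply -f jboss-statefulset.yaml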
I would suggest building your stack into a docker-compose v3 file, which can be run on a Swarm cluster.
Instead of publishing those ports, you should expose them. That means the ports are NOT available on the host system directly, but only inside the Docker network. Every Compose file gets its own network by default, e.g. 172.18.0.0/24. Each container gets its own IP and makes the service available on the specified port.
If you scale up to 3 containers you will get:
172.18.0.1:9080,1521
172.18.0.2:9080,1521
172.18.0.3:9080,1521
You would need a load balancer to access those services. I use jwilder's nginx proxy if you prefer a container approach. I can also recommend Rancher, which comes with an internal load balancer.
In Swarm mode you have to use the overlay network driver and create the network yourself, otherwise the services would only be accessible from the local host itself.
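For example (the network and service names are placeholders; --publish uses Swarm's routing mesh, so every node answers on the published port and load-balances across the replicas):

docker network create --driver overlay jboss_net
docker service create --name jboss_service \
    --network jboss_net \
    --replicas 3 \
    --publish 9080:9080 \
    MYIMAGE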
Regarding logging, you should redirect your log files to stdout and catch them with a logging driver (fluentd, syslog, graylog2).
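As an illustration, with the fluentd driver (the address is a placeholder):

docker run -d --log-driver=fluentd \
    --log-opt fluentd-address=localhost:24224 \
    MYIMAGE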
For persistent storage you should have a look at Flocker. However, databases might not support those storage implementations; e.g. MySQL does not support them, while MongoDB does work with a Flocker volume.
It seems like you have a lot to read... :)
https://docs.docker.com/
I have a small company network with the following services/servers:
Jenkins
Stash (Atlassian)
Confluence (Atlassian)
LDAP
Owncloud
zabbix (monitoring)
puppet
and some Java web apps
all running in separate KVM (libvirt) VMs in separate virtual subnets on 2 machines (1 internal, 1 Hetzner root server) with Shorewall in between. I'm thinking about switching to Docker.
But I have two questions:
How can I achieve network security between Docker containers (i.e. I want to prevent OwnCloud from accessing any host in the network except the LDAP host's SSL port)?
Just by using Docker linking? If yes: does Docker really only allow access to linked containers and nothing else?
By using Kubernetes?
By adding multiple bridge network interfaces for each container?
Would you switch all my infrastructure services/servers to Docker, or go for a hybrid solution with just OwnCloud and the Java web apps on Docker?
Regarding the multi-host networking: you're right that Docker links won't work across hosts. With Docker 1.9+ you can use "Docker Networking" as described in their blog post http://blog.docker.com/2015/11/docker-multi-host-networking-ga/
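As a rough sketch of what that looks like (image and network names are placeholders; depending on your Docker version the overlay driver may need swarm mode and an attachable network, or an external key-value store on the 1.9-era engine):

docker network create --driver overlay ldap_net
docker run -d --name ldap --net ldap_net my-ldap-image
docker run -d --name owncloud --net ldap_net my-owncloud-image
# Containers can only reach containers on networks they share, so attaching each
# service only to the networks it needs also gives you the isolation you asked about.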
They don't explain how to secure the connections, though. I strongly suggest enabling TLS on your Docker daemons, which should also secure your multi-host network (that's an assumption, I haven't tried it).
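For reference, the daemon flags involved look roughly like this (the certificate paths are placeholders; see Docker's documentation on protecting the daemon socket for how to generate the certificates):

dockerd --tlsverify \
    --tlscacert=/etc/docker/ca.pem \
    --tlscert=/etc/docker/server-cert.pem \
    --tlskey=/etc/docker/server-key.pem \
    -H tcp://0.0.0.0:2376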
With Kubernetes you're going to add another layer of abstraction, so that you'll need to learn working with the pods and services concept. That's fine, but might be a bit too much. Keep in mind that you can still decide to use Kubernetes (or alternatives) later, so the first step should be to learn how you can wrap your services in Docker containers.
You won't necessarily have to switch everything to Docker. You should start with Jenkins, the Java apps, or OwnCloud and then get a bit more used to the Docker universe. Jenkins and OwnCloud will give you enough challenges to gain some experience in maintaining containers. Then you can evaluate much better if Docker makes sense in your setup and with your needs to be applied to the other services.
I personally tend to wrap everything in Docker, but only due to one reason: keeping the host clean. If you get to the point where everything runs in Docker you'll have much more freedom to choose where a service can run and you can move containers to other hosts much more easily.
You should also explore the Docker Hub, where you can find ready to run solutions, e.g. Atlassian Stash: https://hub.docker.com/r/atlassian/stash/
If you need inspiration for special applications and how to wrap them in Docker, I recommend having a look at https://github.com/jfrazelle/dockerfiles - you'll find a bunch of good examples there.
You can give containers their own IP from your subnet by creating a network like so:
docker network create \
--driver=bridge \
--subnet=135.181.x.y/28 \
--gateway=135.181.x.y+1 \
network
Your gateway is the IP of your subnet + 1, so if my subnet was 123.123.123.123 then my gateway would be 123.123.123.124.
Unfortunately I have not yet figured out how to make the containers appear to the public under their own IP; at the moment they appear under the dedicated server's IP. Let me know if you know how to fix that. I am able to access each container using its IP, though.
I am currently trying to set up a Linux service with IBM Tivoli Identity Manager (IBM Security Identity Manager), a.k.a. ITIM, against a Linux development server where I work, and have had some issues. All our Linux servers use SSH to connect. Our eventual goal is to implement single sign-on across our networks using Identity Manager.
In the ITIM web interface, I chose the option MANAGE SERVICES and was displayed a page like the following, where I click the CREATE button to create a new service:
Then I am next shown a page where I choose the kind of service I want to make, in this page I choose the POSIX LINUX option because I want to connect to a Linux Server.
Then on the next page, I am entering the information for my Linux server that I want to connect to, the domain name for the server is phongdev.fit.edu, a server for development work.
Note that on this page there is a field titled TIVOLI DIRECTORY INTEGRATOR (TDI) containing default information for the TDI installation. In my case, TDI is installed on the same server as ITIM, so the localhost domain name should be fine. However, when I check the server with the netstat command, nothing is running on that port (16231), so I looked up instructions for starting the TDI Dispatcher on Google and was told to run /etc/init.d/ITIMAd restart at the command line. That appeared to run successfully, but there is still nothing running on port 16231 on the server.
Since our servers use SSH, I was required by ITIM to set up key-based authentication. I did set up a key and passphrase on this Linux server using ssh and entered the data on the next screen of ITIM, which looks like the following, but as you can see an error is generated when I choose the TEST CONNECTION button:
I checked the logs and there is no information in them for these errors. I am not sure where to go next in trying to solve this issue; I suspect it may be related to the fact that the TDI Dispatcher does not appear to be running on port 16231.
Apart from what Matt said (the link especially is useful), the var/ibm/tivoli/common/TDI logs should tell you what the problem with TDI is when you start it up - if there's a problem.
The port number where it's listening ought to be mentioned somewhere in those logs.
Unless there was an upgrade or multiple attempts to configure the RMI dispatcher I don't see why the port shouldn't be 16231 or 1099.
TDI is probably running on a different port. You didn't specify whether TDI is running on Windows or Linux, so my answer assumes Linux since that is what I am most familiar with.
You can find your port # by looking in the solution.properties file in your TDI/timsol directory. It should be listed as api.remote.naming.port.
TDI runs on the default port 1099. Once you start TDI (service ITIMAd start, or however you start it on your system), use ps auxw | grep -i rmi (or something similar) to find the process. Then use netstat -anp | grep PID, where PID is the process ID of the TDI RMI process. You should see immediately what port it is listening on. I don't have access to a TDI server right now to give you the exact commands, but you should get the idea.
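Putting both suggestions together, the lookup might go something like this (the TDI install path and the PID are placeholders, adjust to your system):

# Option 1: read the configured port from the Dispatcher's solution.properties
grep -i api.remote.naming.port /opt/IBM/TDI/V7.1/timsol/solution.properties
# Option 2: start TDI and see which port the RMI process actually listens on
service ITIMAd start
ps auxw | grep -i rmi            # note the PID of the dispatcher's Java process
netstat -anp | grep 12345        # replace 12345 with that PID to see the listening port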
Here is a good article for ISIM 6 (should be the same for ITIM 5.1 on TDI 7) on changing the port # for the RMI:
http://publib.boulder.ibm.com/infocenter/tivihelp/v2r1/index.jsp?topic=%2Fcom.ibm.itim_pim.doc%2Fdispatcher%2Finstall_config%2Ft_changeportnum.htm
If you are experiencing error CTGIMT600E and you have multiple network interfaces on TDI 6 or lower, you may need to specify your server IP (or hostname) as a Java property so the TDI RMI binds to the correct interface. Edit <tdi_home>/ibmdisrv and insert -Djava.rmi.server.hostname=<yourhost>. For more information refer to this article:
http://www-01.ibm.com/support/docview.wss?uid=swg21381101
If you are still having issues, watch your ITIM msg.log and trace.log when you test the connection and look for clues. Also look at the TDI ibmdi.log which will be located under your TDI directory. That may also help you out.