Host multiple services that need the same ports open on GitLab CI

This is an issue I've been postponing for a while, but I need to get it fixed at some point.
Basically, I have two services which I have containerized and pushed to my GitLab registry. The two services represent different versions of the same program. In my automated tests I have two test suites which test for backwards compatibility with these services. The issue is that when my automated tests run, only one service runs fine; I assume there is a conflict over the ports they use, so GitLab can't run all of the services at the same time.
Is there a way to get around this without making the ports configurable in the code? That option would take the most time, and I'd rather leave it as a last resort.

After some digging, it seems that making the ports configurable is my only option. As per the GitLab Runner Kubernetes executor documentation: "You cannot use several services using the same port (e.g., you cannot have two mysql services at the same time)." https://docs.gitlab.com/runner/executors/kubernetes.html
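For what it's worth, once the ports are configurable, the CI config could look something like the sketch below. This is only an illustration: the image paths and the PORT environment variable are made-up stand-ins for whatever your services actually use, and per-service variables need a reasonably recent GitLab.

```yaml
# .gitlab-ci.yml -- minimal sketch, assuming each service image can be told
# which port to listen on via a PORT environment variable (a made-up
# convention here; your images would need to honour something similar).
backwards-compat-tests:
  stage: test
  services:
    - name: registry.example.com/my-service:v1   # hypothetical image paths
      alias: service-v1
      variables:
        PORT: "8081"
    - name: registry.example.com/my-service:v2
      alias: service-v2
      variables:
        PORT: "8082"
  script:
    # With the Kubernetes executor the services share the build pod's network
    # namespace, so both are reachable on localhost, just on different ports.
    - ./run-tests --old http://localhost:8081 --new http://localhost:8082
```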

Related

Remote Execution platform

I’m looking for a framework/platform that would allow me to execute remote commands on a Windows machine and report back the results.
These machines sit outside our company network, probably behind firewalls, proxies, etc. We have complete access to them and can configure them any way we want. Think ATMs with a 3G network.
I guess what I'm looking for is something like SaltStack remote execution, but that enterprise plan has a high cost per minion, and I need to install it on thousands of machines.
Another possible solution would be something like Octopus Deploy, Azure DevOps or any CD tool for that matter but without the need for environments.
I've also looked at Ansible, but without an agent to get past the targets being behind firewalls, routers, and proxies, I'm not sure how the reverse connection would work.
I would like to avoid Puppet or Chef for now. Ideally a cloud based solution would be wonderful, especially in azure.
Any recommendations or directions?
Octopus Deploy is currently working on "Ops Processes" which sounds like it might fit what you are looking for. It's on our roadmap if you are interested, and we are planning to have the first round of features from this ready to ship in the next 8 weeks or so.
Caveat: I work at Octopus, so read into that what you will.

How to deploy software builds from Azure DevOps to internal servers?

We have our software hosted in Git on Azure DevOps and built using a build pipeline (which primarily uses a Cake script). We are now looking to deploy this software using the Azure DevOps release pipeline. However, all of our application servers are behind our firewall, inside of our network, and don't have any port open except for 80 and 443 for the web applications. We have dev, staging, and production servers for our apps (including some for load balancing). All I really need is to copy the artifact, backup the current code to a separate folder on the server, deploy and unzip the artifact file in the root deployment folder, and restart IIS on those servers.
My company is rather large and bureaucratic so there are some hoops we have to jump through for due diligence before we even attempt this new process. In that spirit, I am trying to find the best solution. If you can offer your advice, and in particular, offer any other solution we did not think of, that would be helpful:
The obvious solution would be to stand up servers on Azure cloud and move completely to the cloud. I know this is a solution, and this may be where we go, but my request is for non-cloud solution options so I can present this properly and make a recommendation.
Use a Hyper VPN tunnel to securely transfer the files and restart IIS. Probably the easiest and simplest method in regards to our already built build process on AzDO. Technically, this is the one I am least comfortable with.
Use build agents inside the network, connect to them from AzDO, have them build the software, and then have them (or other agents) deploy it. It's a lot of work to set up, but so far the least intrusive to our security. I'm also not a fan because I wanted AzDO to handle builds and deployments.
Open the SFTP and SSH ports for each server and transfer the files that way. Maybe the least secure way but very simple?
If you have a better or more common solution for this problem, let me know. If you think I should use one of the 4 solutions above, let me know. If you can expand on any of the options above, please do.
ADO agents only require outbound connectivity: they talk to ADO, not the other way around. So you only need 443 outbound to a couple of ADO URLs.
Reading: https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/agents?view=azure-devops#communication
https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/v2-windows?view=azure-devops#im-running-a-firewall-and-my-code-is-in-azure-repos-what-urls-does-the-agent-need-to-communicate-with
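As a rough sketch of what that looks like in practice, a job just targets the self-hosted pool and the agent inside the network does the work; the pool and artifact names below are hypothetical.

```yaml
# azure-pipelines.yml -- sketch, assuming a self-hosted agent pool named
# 'OnPrem-Deploy' (hypothetical) whose agents run inside the network and only
# make outbound HTTPS (443) calls to Azure DevOps.
jobs:
  - job: deploy_internal
    pool:
      name: OnPrem-Deploy          # self-hosted pool, not a Microsoft-hosted one
    steps:
      - download: current          # pull the published build artifact to the agent
        artifact: webapp           # hypothetical artifact name
      - script: echo "Run the deployment from inside the network here"
```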
You could use Environments. Create an Environment for each VM (which includes registering an agent on your machine) and then use the environment parameter in a YAML pipeline's deployment job. The deployment job can then do whatever you need (deploy the web app, move files, back up, etc.) on your target machine, regardless of whether it's on a private network.
More reading - Azure DevOps Environments: https://learn.microsoft.com/en-us/azure/devops/pipelines/process/environments?view=azure-devops
Using deployment job: https://learn.microsoft.com/en-us/azure/devops/pipelines/process/deployment-jobs?view=azure-devops
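A deployment job against such an environment might look roughly like this; the environment name, artifact name, and paths are placeholders, so treat it as a sketch rather than a drop-in pipeline.

```yaml
# azure-pipelines.yml -- sketch of a deployment job, assuming an Environment
# named 'staging' with the target VMs registered as resources, and a published
# artifact 'webapp' containing site.zip (all names/paths hypothetical).
jobs:
  - deployment: deploy_web
    environment:
      name: staging
      resourceType: VirtualMachine   # run on the agents registered to the VMs
    strategy:
      runOnce:
        deploy:
          steps:
            # Back up the current site, unzip the new build over it, restart IIS.
            - powershell: |
                Copy-Item 'C:\inetpub\myapp' "C:\backups\myapp-$(Build.BuildId)" -Recurse
                Expand-Archive "$(Pipeline.Workspace)\webapp\site.zip" 'C:\inetpub\myapp' -Force
                iisreset
```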

Heroku workers in dev

I'm looking into using a worker as well as a web dyno for the first time, as I have to scrape a website. Before I commit to this, I'm wondering how to work in a dev environment. How do jobs in a queue get handled when I'm testing my app before it's pushed to Heroku?
I will probably be using RabbitMQ if that's relevant here.
I guess it depends on what you mean by testing. You can unit test the code that does the scraping in isolation from any queue, and you can provide a mock implementation of the queue operations to handle a goodly portion of your integration tests.
I suppose you might want a real instance of the queue for certain tests, but depending on the nature of your project, you might be satisfied with the sorts of tests described in the first paragraph.
If you simply must test the queue operation, and/or you want to run a complete copy of production locally, then you'll have to stand up an instance of RabbitMQ. You can stand one up locally or use one of the SaaS providers.
If you have multiple developers working on the project, you might want to make it easy for them by creating something like a Vagrant script that sets up a complete environment in a VM, or better still something like Docker. Doing so also gives you a lot more deployment options (making you less dependent on the Heroku tooling).
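If you go the Docker route for the local RabbitMQ, a minimal Compose file is enough; the official image and default guest credentials below are the standard ones, everything else is up to your project.

```yaml
# docker-compose.yml -- minimal sketch that stands up RabbitMQ locally for
# integration tests; your web and worker processes (run however you normally
# run them in dev) would point at amqp://guest:guest@localhost:5672.
services:
  rabbitmq:
    image: rabbitmq:3-management
    ports:
      - "5672:5672"     # AMQP port the app and worker connect to
      - "15672:15672"   # management UI at http://localhost:15672
```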
Lastly, numerous CI solutions like Travis CI provide instances of popular services (including RabbitMQ) for running tests.

Docker and product versions

I am working for a product company and we make a lot of releases of the product. In the current approach to testing multiple releases, we create a separate VM and install all the infrastructure software (DB, app server, etc.) on top of it. Later we deploy the application WARs on the respective VM. Recently I came across Docker and it seems to be very helpful, so I started exploring it with the examples listed on the site. But I am not able to figure out how Docker can be used to build environments suitable for the various releases.
Each product version will have DB schema changes.
Each application WAR will have enhancements/defect fixes, etc.
Consider the example below.
Every month our company releases a new version of the software, and in order to support it and fix defects we create a VM per release. The application's overall size is about 2 GB, while the OS takes close to 5 GB (and apart from disk space, each VM also takes up system resources as overhead). The VMs are required so we can restore any release and test any support issue reported against it. But looking at the additional infrastructure requirements, this seems like a very costly affair.
Can a Docker container/image hold everything required to run an application?
Can Docker package an application which consists of multiple WARs/DB schemas and allocate appropriate ports when started?
Will there be any space/memory/speed differences between VMs and Docker in the above scenario?
Do you think Docker is an appropriate solution, or should we continue using VMs? Can someone share pointers on how I can achieve the above requirements with Docker?
tl;dr: Yes, Docker can run most applications inside a container.
Docker runs a single process inside each container. When using VMs or real servers, this one process is usually the init system, which starts all system services. With Docker it is usually your app.
This difference gets you faster startup times for your app (you're not starting the whole operating system). The trade-off is that, if you depend on system services (such as cron, sshd…), you will need to start them yourself. There are some base images that provide a more "VM-like" environment… check Phusion's baseimage, for instance. To start more than a single process, you can also use a process manager such as supervisord.
Going forward, the recommended (although not required) approach is to start one process in each container (one per application server, one per database server, and so on) and not use containers as VMs.
Docker has no problem allocating ports either. It even has an explicit instruction for it in the Dockerfile: EXPOSE. Exposed ports can also be published on the Docker host with the --publish argument of docker run, so you don't even need to know the IP assigned to the container.
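To make that concrete for your release-per-month scenario, something like the Compose sketch below would run two releases side by side, each with its own DB and app container, published on different host ports. Image names, schema paths, and ports are hypothetical.

```yaml
# docker-compose.yml -- sketch of two product releases running side by side;
# image names, schema locations and host ports are hypothetical.
services:
  db-v1:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: example
    volumes:
      - ./schema/v1:/docker-entrypoint-initdb.d   # v1 schema scripts
  app-v1:
    image: registry.example.com/product:1.0       # WAR baked into the image
    ports:
      - "8081:8080"                               # v1 on host port 8081
    depends_on:
      - db-v1
  db-v2:
    image: mysql:8
    environment:
      MYSQL_ROOT_PASSWORD: example
    volumes:
      - ./schema/v2:/docker-entrypoint-initdb.d   # v2 schema scripts
  app-v2:
    image: registry.example.com/product:2.0
    ports:
      - "8082:8080"                               # v2 on host port 8082
    depends_on:
      - db-v2
```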
Regarding disk space, you will probably see significant savings. Docker images are created by stacking filesystem layers… this means that common layers are only stored once on the server. In your setup, you will likely have only one copy of the base operating system layer (with VMs, you have a copy on each VM).
On memory you will probably see less significant savings (mostly from not starting all the operating system services). Speed is still a subject of research… A few things that are clear so far are that for faster IO you will need to use Docker volumes, and that for network-heavy use cases you should use host networking. Check the IBM research paper "An Updated Performance Comparison of Virtual Machines and Linux Containers" for details, or a summary like InfoQ's.

Dev environment for a multiple-server setup - Node.js

This is my first time building out something with multiple servers. I wanted to know if anyone could point me towards a guide for setting up a dev environment (Windows) for a backend that will be split across multiple servers, i.e. one server for the API, one for another set of processes (e.g. file compression), and one for everything else.
Again, just trying to figure out if it's possible to set up a dev environment to test out the system on my local machine.
Thanks
You almost certainly want to run virtual machines (on something like VMware or VirtualBox) to really test multi-machine stuff. However, I also develop for multiple machines every day (we have an array of app servers, an array of background worker servers, e-commerce servers, cache stores, and front proxies), and I still just develop on one virtual machine that has all that stuff running on it. Provided you make hostnames and ports configurable for everything, there's not much difference between localhost port 9000 and some.server.tld port 8080. Actually running all the VMs on a single computer would likely be painful, both in terms of system resources and complexity.
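As an illustration of the "make hostnames and ports configurable" point, a dev config might look something like the sketch below (file name, keys, and ports are all hypothetical); a production config would carry the same keys with the real hostnames.

```yaml
# config/development.yml -- hypothetical sketch: in dev everything lives on
# localhost under different ports; config/production.yml would use the same
# keys with the real hostnames and ports.
api:
  host: localhost
  port: 9000
compression:            # the separate file-compression processes
  host: localhost
  port: 9100
app:
  host: localhost
  port: 9200
```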
There are tools to help with setting up VMs with similar or the same configurations too. Take a look at http://vagrantup.com/ and also http://babushka.me/.
Just my $0.02.

Resources