I am running some 3rd-party software on a VMSS instance, and I would like the scale set to scale based on the output of a command run on that instance. It's not dissimilar to having a set of licenses: once all (or most) of the licenses are in use, it's time to spin up another instance. The command for showing the number of licenses used is very simple.
Does anyone know how I may do this?
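For illustration, a minimal sketch of one possible approach: publish the license count as an Application Insights custom metric and point an autoscale rule at it. The licstat --used command, the metric name, and the instrumentation key are all placeholders, and the Microsoft.ApplicationInsights NuGet package is assumed:

    using System;
    using System.Diagnostics;
    using Microsoft.ApplicationInsights;
    using Microsoft.ApplicationInsights.Extensibility;

    class LicenseMetricPublisher
    {
        static void Main()
        {
            // Run the (hypothetical) vendor CLI and capture its numeric output.
            var psi = new ProcessStartInfo("licstat", "--used")
            {
                RedirectStandardOutput = true,
                UseShellExecute = false
            };
            double used;
            using (var proc = Process.Start(psi))
            {
                used = double.Parse(proc.StandardOutput.ReadToEnd().Trim());
                proc.WaitForExit();
            }

            // Publish the value as a custom metric that a scale rule can target.
            var config = TelemetryConfiguration.CreateDefault();
            config.InstrumentationKey = "<your-instrumentation-key>"; // placeholder
            var client = new TelemetryClient(config);
            client.TrackMetric("LicensesUsed", used);
            client.Flush(); // make sure the sample is sent before the process exits
        }
    }

Run something like this on a schedule from each instance, then configure the scale set's autoscale rule against the LicensesUsed metric.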
This is an issue I've been postponing for a while, but I need to get it fixed at some point.
Basically, I have two services that I have containerized and pushed to my GitLab registry. The two services represent different versions of the same program. In my automated tests I have two test suites that check for backwards compatibility with these services. The issue is that when my automated tests run, only one of the services runs fine; I assume there is a conflict over the ports they use, so GitLab can't run all of the services at the same time.
Is there a way to get around this without making the ports specifiable in the code? That option would take the most time, so I'd rather leave it as a last resort.
After some digging, it seems that making the ports configurable is my only option. As per the GitLab Runner documentation for the Kubernetes executor: "You cannot use several services using the same port (e.g., you cannot have two mysql services at the same time)." https://docs.gitlab.com/runner/executors/kubernetes.html
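If it helps anyone in the same spot, here is a minimal sketch of what "configurable ports" can look like, assuming a made-up SERVICE_PORT environment variable with a fallback default (the variable name and default are placeholders):

    using System;

    class PortConfig
    {
        // Read the listen port from the environment so each CI job/container
        // can be started on a different port without code changes.
        static int ListenPort()
        {
            string raw = Environment.GetEnvironmentVariable("SERVICE_PORT");
            return int.TryParse(raw, out int port) ? port : 8080; // default when unset
        }

        static void Main() => Console.WriteLine($"Listening on port {ListenPort()}");
    }

Each service entry in the CI configuration can then set SERVICE_PORT to a different value so the two versions don't collide.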
I want to patch my Linux instances hosted on Google Cloud Platform.
Is there any native tool available on Google Cloud Platform, like Azure Update Manager, or do we have to use a 3rd party tool?
At this moment GCP doesn't have a product that fulfills patch management the way Azure Update Management does. However, there are some workarounds for managing patch updates across a large number of VMs.
a) Set up a startup script to execute certain maintenance routines (a minimal sketch follows this list); however, restarting the VM is necessary. Startup scripts can perform many actions, such as installing software, performing updates, turning on services, and any other tasks defined in the script [1].
b) If we want to patch a large number of instances, a managed instance group [2] could also be an alternative: the managed instance group updater safely deploys new versions of software to the instances in a MIG and supports a flexible range of rollout scenarios. We can also control the speed and scope of the deployment, as well as the level of disruption to the service [3].
c) We could use OS Inventory Management [4] to collect and view operating system details for VM instances: hostname, operating system, and kernel version, as well as installed packages and available package updates for the operating system. The process is described here [5].
d) Finally, there is also the possibility of setting up automatic security updates directly in CentOS or Red Hat Enterprise Linux 7.
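As an illustration of option a), a minimal startup-script sketch, assuming a Debian/Ubuntu image (use yum on CentOS/RHEL images instead):

    #!/bin/bash
    # Hypothetical startup script: apply pending package updates on boot.
    apt-get update
    apt-get -y upgrade

You would attach it to an instance with something like gcloud compute instances add-metadata INSTANCE_NAME --metadata-from-file startup-script=update.sh (the file name is a placeholder).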
I hope the above information is useful.
RESOURCES:
[1] https://cloud.google.com/compute/docs/startupscript
[2] https://cloud.google.com/compute/docs/instance-groups/#managed_instance_groups
[3] https://cloud.google.com/compute/docs/instance-groups/rolling-out-updates-to-managed-instance-groups
[4] https://cloud.google.com/compute/docs/instances/os-inventory-management
[5] https://cloud.google.com/compute/docs/instances/view-os-details#query-inventory-data
Thank you to all who shared your knowledge!
GCP does not currently have any such patch management service. If you would like to patch your servers, you would have to set up a cron job (either with crontab or another cron service like a GKE CronJob) to run the appropriate update command.
I think it was released after this question was asked (around April 2020), but GCP now offers a native VM patching service called OS Patch Management. You can learn more about it here: https://cloud.google.com/compute/docs/os-patch-management
We created and tested several Azure Function Apps hosted on Windows. When creating a new Azure Function App, in what scenario would I select an OS other than Windows, meaning Linux or Docker?
I created test instances for all three OS options, and the basic settings of each appear to be very similar.
Linux or Docker is useful if your functions have dependencies that only work on Linux/Docker. For example, some Node.js native libraries only work on Linux and will never work on Windows.
If you don't need Linux for anything specific, then I suggest sticking to Windows since that is currently (at the time of writing) the best and most supported environment for running Azure Functions.
The Azure Functions 2.0 runtime is based on .NET Core, so it is cross-platform. If you choose Linux/Docker, the Functions runtime will be deployed on Linux.
2.0 is still in preview, so Linux/Docker is not yet supported in production. For now, the Consumption Plan (pay per call) is not supported either.
See The Azure Functions on Linux Preview. Quote:
Functions on Linux can be hosted in a dedicated App Service tier in 2 different modes:
You bring the Function App code and we provide and manage the container, no specific Docker related knowledge required.
You bring your own Docker container including the Azure Functions runtime 2.0, specific dependencies, and Function App code.
In Consumption mode, cold start varies a little between the OSes.
It looks like, although the average time is very close between Windows and Linux, the best and worst cases are much better for Linux, which kind of makes sense.
Check this as a good reference: https://mikhail.io/serverless/coldstarts/azure/
Now, if you are deploying to a dedicated App Service Plan, the OS plays a bigger role. Linux plans are cheaper than Windows plans due to the OS licensing cost.
For my system, I have a back-end process that uses a 3rd-party command line tool to do some occasional processing. This tool writes to and reads from the file system (I point it at some files, it works its magic, and then it writes the results out to another file).
This is obviously easy to do with an Azure Virtual Machine. Just write a Windows Service to manage this command line tool and have it read from a Queue to get the processing jobs.
Up to this point, however, I've been able to do everything in Azure without resorting to a full-blown VM. I like that. I like not having to worry about applying patches and other maintenance, downtime, and the like.
So, my question is, is there something in Azure that would let me have this service without resorting to a VM? Would a "Worker Role" be able to accomplish this? Can it read and write to/from the file system? Can it handle 3rd party tools with a bunch of arbitrary dependencies? Can I launch another process from C# code within the worker role?
Would a "Worker Role" be able to accomplish this?
Absolutely! Remember that a Worker Role is also a full-blown VM (running the same OS that powers Azure Virtual Machines).
Can it read and write to/from the file system?
Yes. However, there's a catch: you can't read/write to any arbitrary location on the VM. You have full access to a special folder on that VM called Local Storage. You can read more about it here: http://msdn.microsoft.com/en-us/library/azure/ee758708.aspx
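A minimal sketch of working with Local Storage, assuming a LocalStorage resource named "ScratchSpace" has been declared in ServiceDefinition.csdef (the resource name and file name are placeholders):

    using System.IO;
    using Microsoft.WindowsAzure.ServiceRuntime;

    class ScratchFiles
    {
        static string WriteInput(string payload)
        {
            // Resolve the root path of the "ScratchSpace" LocalStorage resource.
            LocalResource scratch = RoleEnvironment.GetLocalResource("ScratchSpace");
            string path = Path.Combine(scratch.RootPath, "input.dat");
            File.WriteAllText(path, payload); // the 3rd-party tool can now read this file
            return path;
        }
    }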
Can it handle 3rd party tools with a bunch of arbitrary dependencies?
Yes, again! But again, there's a catch. Since these are stateless VMs, anything you install after Microsoft stands the VM up for you is not guaranteed to still be there if Microsoft decides to tear that VM down for whatever reason. If you need to install any additional software, you would have to install it via a process called Startup Tasks. You can read about them here: http://msdn.microsoft.com/en-us/library/azure/hh180155.aspx
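For reference, a startup task is declared in ServiceDefinition.csdef; a minimal sketch, where the role name and script name are placeholders:

    <WorkerRole name="ProcessingRole">
      <Startup>
        <!-- Runs install.cmd with elevated privileges before the role starts -->
        <Task commandLine="install.cmd" executionContext="elevated" taskType="simple" />
      </Startup>
    </WorkerRole>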
Can I launch another process from C# code within the worker role?
Though I have not tried it personally, I think it is possible, because you get a VM running the latest version of Windows Server.
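A minimal sketch of launching a 3rd-party tool from C#, with a made-up tool path and argument list:

    using System.Diagnostics;

    class ToolRunner
    {
        static int RunTool(string inputPath, string outputPath)
        {
            var psi = new ProcessStartInfo
            {
                FileName = @"tools\magic.exe",          // hypothetical tool location
                Arguments = $"\"{inputPath}\" \"{outputPath}\"",
                UseShellExecute = false,
                RedirectStandardOutput = true
            };
            using (var proc = Process.Start(psi))
            {
                string log = proc.StandardOutput.ReadToEnd(); // capture output for diagnostics
                proc.WaitForExit();
                return proc.ExitCode;                   // non-zero usually means failure
            }
        }
    }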
I am working for a product company, and we make a lot of releases of the product. In the current approach to testing multiple releases, we create a separate VM and install all the infrastructure software (DB, app server, etc.) on top of it. Later, we deploy the application WARs on the respective VM. Recently I came across Docker, and it seems to be very helpful, so I started exploring it with the examples listed on the site. But I have not been able to figure out how Docker can be used to build environments suitable for the various releases:
Each product version will have DB schema changes.
Each application WAR will have enhancements, defect fixes, etc.
Consider the example below.
Every month our company releases a new version of the software, and in order to support it and fix defects we create VMs per release. The application is about 2 GB overall, while the OS takes close to 5 GB (apart from disk space, it also consumes system resources as extra overhead). The VMs are required so that we can restore any release and test any support issue reported against it. But looking at the additional infrastructure requirements, this seems like a very costly affair.
Can Docker have everything required to run an application inside a container/image?
Can Docker package an application that consists of multiple WARs/DB schemas and, when started, allocate the appropriate ports?
Will there be any space/memory/speed differences compared to VMs in the above scenario?
Do you think Docker is still the appropriate solution, or should we continue using VMs? Can someone share pointers on how I can achieve the above requirements with Docker?
tl;dr: Yes, Docker can run most applications inside a container.
Docker runs a single process inside each container. When using VMs or real servers, this one process is usually the init system, which starts all system services. With Docker, it is usually your app.
This difference gets you faster startup times for your app (you are not starting a whole operating system). The trade-off is that if you depend on system services (such as cron, sshd, etc.) you will need to start them yourself. There are some base images that provide a more "VM-like" environment; check phusion's baseimage, for instance. To start more than a single process, you can also use a process manager such as supervisord.
Going forward, the recommended (although not required) approach is to run one process per container (one per application server, one per database server, and so on) and not to use containers as VMs.
Docker has no problem allocating ports either. The Dockerfile even has an explicit instruction for it: EXPOSE. Exposed ports can be published on the Docker host with the --publish argument of docker run, so you don't even need to know the IP assigned to the container.
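A minimal sketch of what this looks like for one of your WARs, assuming the stock tomcat image and a made-up WAR name:

    # Hypothetical Dockerfile: one Tomcat container serving one application WAR.
    FROM tomcat:8
    COPY myapp.war /usr/local/tomcat/webapps/
    EXPOSE 8080

You could then run two releases side by side by publishing the container port to different host ports, e.g. docker run -d -p 8081:8080 myapp:v1 and docker run -d -p 8082:8080 myapp:v2 (image tags are placeholders).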
Regarding disk space, you will probably see significant savings. Docker images are created by stacking filesystem layers, which means that common layers are stored only once on the server. In your setup, you will likely have only one copy of the base operating system layer (with VMs, you have a copy in each VM).
On memory, you will probably see less significant savings (mostly from not starting all the operating system services). Speed is still a subject of research. A few things are clear so far: for faster I/O you will need to use Docker volumes, and for network-heavy use cases you should use host networking. Check the IBM research paper "An Updated Performance Comparison of Virtual Machines and Linux Containers" for details, or a summary like InfoQ's.