We are currently using Jenkins as our CI system, with one master server and slaves that are provisioned by SaltStack on OpenStack. If our Jenkins master goes down, we have to create a new master and copy the files over from the old one, which takes at least 30 minutes.
Is there any way to set up Jenkins with high availability?
I have already looked at the Gearman Plugin, but if the Gearman server goes down for some reason, we would need to set up HA for Gearman as well.
Are there any other ways to set up high availability for Jenkins?
Jenkins doesn't have a great HA story; the best you can do with the open source version is to put all of the files in $JENKINS_HOME on a shared file system, and then have a cold standby master machine that you can spin up if the active master goes down. That would reduce your failover time to however long it takes for the master to restart, which is usually just a few minutes.
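With the cold-standby approach you still need something that notices the active master is down before you bring the standby up. As a minimal sketch of that kind of external health check (this is not part of Jenkins or any plugin, and the master URL is a placeholder for your environment), you could poll the master's login page:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.time.Duration;

public class JenkinsHealthCheck {
    public static void main(String[] args) throws Exception {
        // Placeholder URL for the active Jenkins master; adjust to your environment.
        String masterUrl = "http://jenkins-master.example.com:8080/login";

        HttpClient client = HttpClient.newBuilder()
                .connectTimeout(Duration.ofSeconds(5))
                .build();
        HttpRequest request = HttpRequest.newBuilder(URI.create(masterUrl))
                .timeout(Duration.ofSeconds(5))
                .GET()
                .build();
        try {
            HttpResponse<String> response =
                    client.send(request, HttpResponse.BodyHandlers.ofString());
            System.out.println("Master responded with HTTP " + response.statusCode());
        } catch (Exception e) {
            // No response: time to bring up the standby master
            // against the shared $JENKINS_HOME.
            System.out.println("Master appears to be down: " + e.getMessage());
        }
    }
}
```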
You could also look at CloudBees' Jenkins Enterprise offering, which includes a High Availability Plugin.
I used the Clusters from Scratch doc to create a Jenkins WAN-HA active/passive cluster. See the attached architecture diagram for Jenkins HA using Pacemaker.
/etc/init.d/jenkins will need to be converted into an OCF resource agent script. Currently I manually start Jenkins via systemd on the pcmk-2 server when pcmk-1 is down.
I am currently running the whole GitLab EE stack as a single container. I need to scale out the service so that it can support more users and more operations (pulls, pushes, merge requests, etc.) simultaneously.
I need to run a separate Redis cluster
I need to run a separate PostgreSQL cluster
I need to integrate Elasticsearch for search
But how can I scale out the remaining core GitLab services? Do they support a scale-out architecture?
gitlab-workhorse
unicorn (gitlab-rails)
sidekiq (gitlab-rails)
gitaly
gitlab-shell
Do they support a scale-out architecture?
Not exactly, considering the GitLab Omnibus image is one package with bundled dependencies.
But I have never experienced so much traffic that it needed to be split up and scaled out.
There is though a proposal for splitting up the Omnibus image: gitlab-org/omnibus-gitlab issue 1800.
It points to gitlab-org/build/CNG, which does just what you are looking for:
Each directory contains the Dockerfile for a specific component of the infrastructure needed to run GitLab.
rails - The Rails code needed for both API and web.
unicorn - The Unicorn container that exposes Rails.
sidekiq - The Sidekiq container that runs async Rails jobs
shell - Running GitLab Shell and OpenSSH to provide git over ssh, and authorized keys support from the database
gitaly - The Gitaly container that provides distributed Git repos
The other option, using Kubernetes, is the charts/gitlab:
The gitlab chart is the best way to operate GitLab on Kubernetes. This chart contains all the required components to get started, and can scale to large deployments.
Some of the key benefits of this chart and corresponding containers are:
Improved scalability and reliability
No requirement for root privileges
Utilization of object storage instead of NFS for storage
The default deployment includes:
Core GitLab components: Unicorn, Shell, Workhorse, Registry, Sidekiq, and Gitaly
Optional dependencies: Postgres, Redis, Minio
An auto-scaling, unprivileged GitLab Runner using the Kubernetes executor
Automatically provisioned SSL via Let's Encrypt.
Update Sept. 2020:
GitLab 13.4 offers a feature which can help with scaling out GitLab on-premises:
Gitaly Cluster majority-wins reference transactions (beta)
Gitaly Cluster allows Git repositories to be replicated on multiple warm Gitaly nodes. This improves fault tolerance by removing single points of failure.
Reference transactions, introduced in GitLab 13.3, cause changes to be broadcast to all the Gitaly nodes in the cluster, but only the Gitaly nodes that vote in agreement with the primary node persist the changes to disk.
If all the replica nodes dissented, only one copy of the change would be persisted to disk, creating a single point of failure until asynchronous replication completed.
Majority-wins voting improves fault tolerance by requiring a majority of nodes to agree before persisting changes to disk. When the feature flag is enabled, writes must succeed on multiple nodes. Dissenting nodes are automatically brought in sync by asynchronous replication from the nodes that formed the quorum.
See Documentation and Issue.
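To make the quorum rule concrete, here is a toy sketch of the majority-wins idea described above. This is only an illustration of the voting rule, not GitLab's or Gitaly's actual code:

```java
public class MajorityWinsVote {
    /**
     * A write is persisted only if a majority of all nodes
     * (primary plus replicas) agree with the primary.
     */
    static boolean writePersists(boolean[] replicaAgrees) {
        int totalNodes = replicaAgrees.length + 1; // replicas plus the primary
        int votesForPrimary = 1;                   // the primary always votes for itself
        for (boolean agrees : replicaAgrees) {
            if (agrees) {
                votesForPrimary++;
            }
        }
        return votesForPrimary > totalNodes / 2;
    }

    public static void main(String[] args) {
        // Two of three nodes agree: the write is persisted, and the dissenting
        // node is later repaired by asynchronous replication.
        System.out.println(writePersists(new boolean[] {true, false}));  // true
        // Both replicas dissent: no quorum, so the write is not persisted.
        System.out.println(writePersists(new boolean[] {false, false})); // false
    }
}
```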
I am using an Icinga2 2.3.2 cluster HA setup with three nodes in the same zone and the IDO database on a separate server. All machines run CentOS 6.5. IcingaWeb2 is installed on the active master.
I configured four local checks for each node, including the cluster health check, as described in the documentation. I installed the Icinga Classic UI on all three nodes because I am not able to see the local checks configured for the nodes in Icinga Web 2.
Configs are syncing, checks are executing, and all three nodes are connected to each other. But the configured local checks, specific to each node, are not running properly; I verified this in the Classic UI.
a. All local checks are executed only once whenever
- one of the nodes is disconnected or reconnected
- configuration changes are made on the master and icinga2 is reloaded
b. But after that, only one check runs properly on one node and the remaining ones do not.
I have attached a screenshot of the Classic UI for all nodes.
Please help me fix this. Thanks in advance.
I am still a bit new to Hadoop and am currently setting up a small test cluster on Amazon AWS. My question is about how to structure the cluster so that it is possible to submit jobs from remote machines.
Currently I have 5 machines. 4 of them form the Hadoop cluster with the NameNode, YARN, etc. One machine is used as a manager machine (Cloudera Manager). I will describe my thinking on the setup, and if anyone can chime in on the points I am not clear about, that would be great.
I was thinking about the best setup for a small cluster, so I decided to expose only the manager machine and probably use it to submit all the jobs. The other machines will see each other, but will not be accessible from the outside world. I have a conceptual idea of how to do this, but I am not sure how to properly go about it, so if anyone could point me in the right direction that would be great.
Another big point is that I want to be able to submit jobs to the cluster through the exposed machine from a client machine (which might be Windows). I am not so clear on this setup either. Do I need to have Hadoop installed on the client machine in order to use the normal hadoop commands and to write/submit jobs, say from Eclipse or something similar?
To sum it up, my questions are:
Is this an OK setup for a small test cluster?
How can I go about using one exposed machine to submit/route jobs to the cluster, without running any of the Hadoop nodes on it?
How do I set up a client machine to submit jobs to a remote cluster, and is there an example of how to do it on Windows? Also, are there any reasons not to use Windows as a client machine in this setup?
Thanks, I would greatly appreciate any advice or help on this.
Since this has not been answered, I will attempt to answer it.
1. REST API to submit an application:
Resource 1(Cluster Applications API(Submit Application)): https://hadoop.apache.org/docs/current/hadoop-yarn/hadoop-yarn-site/ResourceManagerRest.html#Cluster_Applications_APISubmit_Application
Resource 2: https://docs.hortonworks.com/HDPDocuments/HDP2/HDP-2.6.5/bk_yarn-resource-management/content/ch_yarn_rest_apis.html
Resource 3: https://hadoop-forum.org/forum/general-hadoop-discussion/miscellaneous/2136-how-can-i-run-mapreduce-job-by-rest-api
Resource 4: Run a MapReduce job via rest api
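For the REST route, the flow is: POST to /ws/v1/cluster/apps/new-application on the ResourceManager web address to get an application ID, then POST the full application submission context to /ws/v1/cluster/apps. Here is a minimal sketch of the first step; the ResourceManager hostname is an assumption for your cluster (the default web port is 8088):

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class YarnNewApplication {
    public static void main(String[] args) throws Exception {
        // Assumed ResourceManager web address; replace with your exposed manager host.
        String rm = "http://resourcemanager.example.com:8088";

        HttpClient client = HttpClient.newHttpClient();
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create(rm + "/ws/v1/cluster/apps/new-application"))
                .POST(HttpRequest.BodyPublishers.noBody())
                .build();

        // The response JSON contains the new application-id and the
        // maximum resource capabilities of the cluster.
        HttpResponse<String> response =
                client.send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}
```

The JSON body for the second POST (queue, AM container launch context, resources, and so on) is documented in Resource 1 above.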
2. Submitting a Hadoop job from a client machine
Resource 1: https://pravinchavan.wordpress.com/2013/06/18/submitting-hadoop-job-from-client-machine/
3. Sending a program to a remote Hadoop cluster
It is possible to send a program to a remote Hadoop cluster and run it there. All you need to ensure is that you have set the ResourceManager address, fs.defaultFS, library files, and mapreduce.framework.name correctly before running the actual job.
Resource 1: (how to submit mapreduce job with yarn api in java)
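As a rough sketch of point 3: the client only needs the Hadoop client libraries on its classpath plus a Configuration pointing at the exposed cluster endpoints. The hostnames and paths below are placeholders, and the identity Mapper/Reducer are used only to keep the example self-contained:

```java
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.Reducer;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class RemoteJobSubmit {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Point the client at the remote cluster (placeholder hostnames).
        conf.set("fs.defaultFS", "hdfs://namenode.example.com:8020");
        conf.set("yarn.resourcemanager.hostname", "resourcemanager.example.com");
        conf.set("mapreduce.framework.name", "yarn");

        Job job = Job.getInstance(conf, "remote-identity-job");
        job.setJarByClass(RemoteJobSubmit.class);
        // Identity mapper/reducer keep the sketch self-contained.
        job.setMapperClass(Mapper.class);
        job.setReducerClass(Reducer.class);
        job.setOutputKeyClass(LongWritable.class);
        job.setOutputValueClass(Text.class);

        FileInputFormat.addInputPath(job, new Path("/user/test/input"));
        FileOutputFormat.setOutputPath(job, new Path("/user/test/output"));

        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}
```

On Windows, the client typically also needs HADOOP_HOME pointing at a directory containing winutils.exe; otherwise the setup is the same.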
I am trying to set up Hadoop permanently on Amazon EC2. Currently, every morning I launch EC2 instances and set up Hadoop. Is there any way I can avoid this tedious step? I am looking for a Hadoop image which can be loaded on EC2 and make things easier for me.
I know I can use EMR for Hadoop services, but I don't know how to start an EMR (Hadoop) cluster without submitting a job flow. I mean I need a Hadoop cluster without any jobs running on it.
Ultimately my aim is to run bioinformatics applications like DistMap and Seal. These applications have many dependencies, so I need a Hadoop cluster that is free of jobs, where I can set up the environment and then run these applications.
I hope it's clear what I am trying to do.
Thanks.
What you can do is one of the following:
Option 1. Start out with an EBS-backed EC2 instance with your favourite Linux distro. Go ahead and install the Hadoop software that you need. Create as many EC2 instances as the types of instances you are going to need (master / slaves / etc.). You can then create your own AMIs in the AWS Console (right-click the EC2 instance and click "Create AMI"). You can then launch your own instances, as many as you need, based on this AMI. You can also create AMIs from instance-store-backed instances, but that will mean dumping everything to S3 and creating an AMI from there. There are a lot of tutorials about this available; please leave a comment if you need directions :)
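For Option 1, if you would rather script the "Create AMI" step instead of clicking through the console, something along these lines should work with the AWS SDK for Java v1 (the instance ID and AMI name are placeholders, and credentials/region come from your default AWS configuration):

```java
import com.amazonaws.services.ec2.AmazonEC2;
import com.amazonaws.services.ec2.AmazonEC2ClientBuilder;
import com.amazonaws.services.ec2.model.CreateImageRequest;
import com.amazonaws.services.ec2.model.CreateImageResult;

public class CreateHadoopAmi {
    public static void main(String[] args) {
        // Uses credentials and region from the default provider chain.
        AmazonEC2 ec2 = AmazonEC2ClientBuilder.defaultClient();

        // Placeholder instance ID of the already-configured Hadoop instance.
        CreateImageRequest request = new CreateImageRequest()
                .withInstanceId("i-0123456789abcdef0")
                .withName("hadoop-master-ami");

        CreateImageResult result = ec2.createImage(request);
        System.out.println("Created AMI: " + result.getImageId());
    }
}
```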
Option 2. Start out with a Hadoop-based AMI and repeat the steps above after doing your own configuration / adding dependencies. I went ahead and searched for Hadoop AMIs from the AWS console and there are 48 in eu-west-1 (not sure which region you're working with).
Option 3. Start an EMR cluster in interactive mode. There is also an option to keep the cluster alive after finishing job flows. If you also set the EC2 keys for the EMR instances, you should be able to SSH into them and have a functional Hadoop cluster (not sure about the dependencies though; you might be better off rolling your own).
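For Option 3, the "keep the cluster alive" behaviour corresponds to the keep-job-flow-alive flag when you start the cluster. Here is a rough sketch with the AWS SDK for Java v1; the release label, instance types, key name, and IAM role names are assumptions, so adjust them to your account:

```java
import com.amazonaws.services.elasticmapreduce.AmazonElasticMapReduce;
import com.amazonaws.services.elasticmapreduce.AmazonElasticMapReduceClientBuilder;
import com.amazonaws.services.elasticmapreduce.model.JobFlowInstancesConfig;
import com.amazonaws.services.elasticmapreduce.model.RunJobFlowRequest;
import com.amazonaws.services.elasticmapreduce.model.RunJobFlowResult;

public class StartIdleEmrCluster {
    public static void main(String[] args) {
        AmazonElasticMapReduce emr = AmazonElasticMapReduceClientBuilder.defaultClient();

        RunJobFlowRequest request = new RunJobFlowRequest()
                .withName("hadoop-test-cluster")
                .withReleaseLabel("emr-5.30.0")        // assumed EMR release
                .withServiceRole("EMR_DefaultRole")    // default EMR roles assumed to exist
                .withJobFlowRole("EMR_EC2_DefaultRole")
                .withInstances(new JobFlowInstancesConfig()
                        .withEc2KeyName("my-keypair")  // so you can SSH into the nodes
                        .withInstanceCount(3)
                        .withMasterInstanceType("m4.large")
                        .withSlaveInstanceType("m4.large")
                        // Keep the cluster running even with no steps/jobs submitted.
                        .withKeepJobFlowAliveWhenNoSteps(true));

        RunJobFlowResult result = emr.runJobFlow(request);
        System.out.println("Cluster id: " + result.getJobFlowId());
    }
}
```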
I hope I understood correctly what you're trying to achieve and this helps a little bit.
This is more of a configuration management and automation problem. Try a configuration management tool like Chef or Puppet to get this done the way you want.
I have a Condor pool which consists of 4 dedicated machines. One is set up as the central manager, submitting, and executing node, while the other three are set up as executing nodes. I use CentOS 5.4 as the OS for all the machines. My problem is that when I submit a job from the central manager, it only runs on the central manager; when I specify in the JDL file that the job should run on any machine apart from the central manager, the job stays on hold and does not run. When I type condor_status, all nodes appear. I keep the MASTER and STARTD daemons in the daemon list for the executing nodes. Has anyone come across this problem?
There's not enough information to answer your question, but the first thing to do is to run condor_q -analyze <jobid> and see what it tells you. See the Condor manual Section 2.6.5: Why is the job not running?
One possible cause is that you're not telling Condor to transfer your input/output files for you, and your nodes have different "filesystem domains", so Condor is unable to find a host which shares a common filesystem with your submit host.