I want to make a script of sorts so that I can automate the mundane tasks I do to set up my Linux box after a clean install. The steps are, namely:
Install Perforce (as I work with Perforce) or Git, and check out code - this requires the user to enter a username/password
Install software such as sun-jdk, maven, mysql, tomcat, etc.
Install Eclipse and a couple of plugins
Mount a remote drive and copy the data from that mount to the local disk
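Roughly, what I do by hand today looks like this (simplified - the package names, URL, and paths are just placeholders):
#!/usr/bin/env bash
set -e
read -p "SCM username: " SCM_USER
sudo apt-get update
sudo apt-get install -y git maven mysql-server tomcat7 openjdk-7-jdk   # plus the Perforce client, Eclipse, and plugins
git clone "https://${SCM_USER}@scm.example.com/myproject.git" ~/code/myproject   # prompts for the password
sudo mkdir -p /mnt/share
sudo mount -t nfs fileserver:/export/share /mnt/share   # assumes an NFS export; use cifs for a Windows share
cp -r /mnt/share/data ~/data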
There are a couple of other tasks, and the target system is mostly Ubuntu/Debian. How can I do this? I know a preseed file is an option, but how do I handle user input and the like? Please help!
Try Fabric, which is
a Python (2.5 or higher) library and command-line tool for streamlining the use of SSH for application deployment or systems administration tasks.
You can write a fabfile with the various tasks you need, and then just run it against your freshly installed machine to configure the environment.
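To get going, the workflow is roughly this (assuming a task called setup defined in your fabfile.py; the 1.x line is the one described in the quote above):
pip install 'fabric<2'
fab -H user@your-new-box setup   # runs the hypothetical 'setup' task on the machine over SSH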
Gitpod.io provides automated dev environments as a service.
It not only provides full Docker-based terminals but also prebuilds your git branches, so you are ready to code immediately.
I'm using GitLab on a Synology NAS.
For safety reasons, I would like to automatically synchronise (mirror) the entire GitLab system on my NAS with my account on gitlab.com.
Is there an easy way to do this?
Your entire GitLab repository store lives under /volume1/docker/gitlab/repositories (or a similar directory).
To synchronize, you would need to run a script that pushes a given repository (which is stored in bare mode) to GitLab. The following gist shows how to do it for GitHub, so it should be similar for gitlab.com:
https://gist.github.com/joahking/780877
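A rough sketch of such a script (the repository path and the gitlab.com namespace are assumptions; each target repo must already exist on gitlab.com, and you may need to adjust the glob if your repositories sit in group subfolders):
#!/usr/bin/env bash
# mirror_repos.sh - push every bare repository on the NAS to gitlab.com
REPO_ROOT=/volume1/docker/gitlab/repositories
REMOTE_BASE=git@gitlab.com:your-account

for repo in "$REPO_ROOT"/*.git; do
    name=$(basename "$repo" .git)
    git --git-dir="$repo" push --mirror "$REMOTE_BASE/$name.git"
done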
Once configured, you can use the task manager in Synology's control panel to execute the script regularly.
Good luck!
I'm a bit new to version control and deployment environments and I've come to a halt in my learning about the matter: how do deployment environments work if developers can't work on the same local machine and are forced to always work on a remote server?
How should the flow of the deployment environments be set up according to best practices?
For this example I considered three deployment environments: development, staging and production; and three storage environments: local, repository server and final server.
This is the flow chart I came up with, but I have no idea if it's right or how to properly implement it.
PS. I was thinking the staging tests on the server could have restricted access through login or ip checks, in case you were wondering.
I can give you (based on my experience) a good and straightforward practice; this is not the only approach, as there is no single standard for how to work on all projects:
Use a distributed version control system (like git/github):
Make a private/public repository to handle your project
Local development:
Developers will clone the project from your repo and contribute to it; it is recommended that each one work on their own branch and create a new branch for each new feature
Within your team, one person is responsible for merging the branches that are ready into the master branch
I strongly suggest working in a virtual machine during development:
To isolate the dev environment from the host machine and deal with dependencies
To have a virtual machine identical to the remote production server
Easy to reset, remove, reproduce
...
I suggest using VirtualBox as the VM provider and Vagrant for provisioning
I suggest making your project folder a shared folder between your host machine and your VM, so you write your source code on your host OS using the editor you love, while the same code exists and runs inside your VM. Isn't that amazingly awesome?!
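A few starter commands, assuming VirtualBox and Vagrant are installed (the box name is just an example - pick one that matches your production OS):
vagrant init ubuntu/trusty64   # writes a Vagrantfile into the project folder
vagrant up                     # creates and boots the VM
vagrant ssh                    # shell into it; the project folder is shared as /vagrant by default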
If you are working with Python, I also strongly recommend using virtual environments (like virtualenv or Anaconda) to isolate and manage the project's dependencies
Then, after writing some source code, each developer can commit and push his changes to the repository
I suggest using project automation and setup tools (like fabric/fabtools for Python):
Make a script that, with one click or a few commands, reproduces the whole environment, all the dependencies, and everything the project needs to be up and running, so that all developers (backend, frontend, designers...), no matter their knowledge or their host machine type, can get the project running very easily. I also suggest doing the same thing for the remote servers, whether manually or with tools like fabric/fabtools
The script will mainly install OS dependencies, then project dependencies, then clone the project repo from your version control. To do so, you need to give the remote servers (testing, staging, and production) access to the repository: add the SSH public key of each server to the keys in your version control system (or use agent forwarding with fabric)
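For example, on each server (key type and git host are just examples; the printed public key is then added as a deploy key in your version control system):
ssh-keygen -t rsa -C "staging-server"   # run once per server, accept the defaults
cat ~/.ssh/id_rsa.pub                   # paste this into the repo's deploy keys
ssh -T git@github.com                   # or git@gitlab.com / your host, to verify access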
Remote servers:
You will need at least a production server, which makes your project accessible to the end users
It is recommended that you also have testing and staging servers (I assume you know the purpose of each one)
Deployment flow: local - repo - remote server. How does it work?
Give the remote servers (testing, staging, and production) access to the repository: add the SSH public key of each server to the keys in your version control system (or use agent forwarding with fabric)
The developer writes the code on his machine
He writes tests for his code and runs them locally (and on the testing server)
The developer commits and pushes his code, on the branch he is using, to the remote repository
Deployment:
5.1 If you would like to deploy a feature branch to testing or staging:
SSH into the server and then cd to the project folder (cloned from the repo manually or by the automation script)
git checkout <the branch used>
git pull origin <the branch used>
5.2 If you would like to deploy to production:
Make a pull request; after the pull request gets validated by the manager and merged into the master branch:
SSH into the server and then cd to the project folder (cloned from the repo manually or by the automation script)
git checkout master # not needed because it should already be on master
git pull origin master
I suggest writing a script with fabric/fabtools, or using a tool like Jenkins, to automate the deployment task.
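A minimal shell sketch of such a helper (host name, project path, and default branch are placeholders):
#!/usr/bin/env bash
# deploy.sh <host> [branch] - check out and pull the given branch on the remote server
HOST="$1"
BRANCH="${2:-master}"
ssh "$HOST" "cd /srv/myproject && git checkout $BRANCH && git pull origin $BRANCH"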
Voilà! Deployment is done!
This is a somewhat simplified approach; there are still plenty of other recommended tools, tasks, and best practices.
I am looking for a way to sync my git repositories from my Mac to an Intel NUC box I have running Ubuntu. The reason I want to do this is to take the load off my Mac for builds, and just have my Ubuntu box do them when it receives a change to one of the repos. Also, I'd like it to be automatic so that I can play about with Gradle's continuous build feature. Has anyone done anything like this, or is this possible? I've been looking into rsync and thinking this is the way to go, but I was looking for other input.
That's basic version control functionality: you can have the same copy of the repo on any number of machines.
You can clone your repository on any other machine (Linux, Windows, or another Mac), then run a cron job on the new machine to pull the latest changes periodically.
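For example, on the Ubuntu box (the repository URL, local path, and interval are placeholders):
git clone git@github.com:you/yourrepo.git ~/builds/yourrepo   # one-time clone
crontab -e   # then add a line such as:
*/5 * * * * cd ~/builds/yourrepo && git pull --quiet origin master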
I really love this tool - Syncthing
Download binaries for your systems
Run on both machines (just execute one file)
Connect machines to each other
Add desired folders to be synchronized
I am trying something similar; Unison is a nice alternative.
https://github.com/bcpierce00/unison
The latest version comes with a repeat configuration option that allows you to run it as a service. It doesn't have all the bells and whistles of a GUI, but it is basically a bidirectional rsync with conflict management.
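For example, a two-way sync with the NUC that re-runs every 60 seconds might look like this (paths and host are placeholders; both machines need compatible Unison versions installed):
unison ~/repos ssh://user@nuc//home/user/repos -repeat 60 -batch -auto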
I have a Go package running on Windows and it is working fine, but now I'm at a stage where I would like to test it on a production CentOS 6.5 server.
What is the best practice to deploy this from Windows to CentOS?
Would I have to use my Git repo to distribute it to the Linux operating system, compile it, and then deploy the binary to the server?
Also, I have multiple files, so I would imagine go build *.go would suffice - or are there better options for compilation?
What is the best practice to deploy this from Windows to CentOS?
As far as best practices go, I would recommend using continuous integration. You can set up Jenkins, or there are some cloud options out there: codeship.io, travis-ci.org, drone.io, wercker.com, ... Some of them have free plans available.
Basically, you'd commit your code to git and push it out to GitHub (or Bitbucket if you want free private repos). The continuous integration server will be notified whenever you push changes, and will build, test, and create a release tar archive of your project. You can then take the resulting tar and download it to your CentOS box. On 6.5 you'll need to create an init.d script to keep your program up and running. You can see an example here (the System V script).
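Going back to the build step, the CI job itself usually boils down to something like this (binary and archive names are placeholders; this assumes a main package in the repository root):
go test ./...                                                # run the package tests
go build -o myapp .                                          # compiles all the .go files of the package
tar -czf myapp-$(git rev-parse --short HEAD).tar.gz myapp    # release archive named after the commit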
CentOS 7 uses systemd now, which would be slightly easier to set up.
Taking this one step further, it's also possible to set up continuous deployment, in which the download, extraction, and installation can also be automated. Depending on your project, it may or may not make sense to set up continuous deployment. (Auto-pushing to production might be a little too automatic.) You can find an example in wercker here.
Although there is an up-front cost to setting up continuous integration, if this is a project that other people will contribute to, or one that you intend to work on long-term, the cost will definitely be worth it. (Future you will be grateful when you come back to this project 6 months from now, change 1 line of code, and don't have to remember all the manual steps it took to deploy.)
I write company internal software in PHP and C++.
What are the best methods of deploying this type of software to Linux machines? Currently we use svn export; are there any other methods?
We use checkinstall. Just write a simple Makefile that copies the files to the target directories on the target machine, and then run checkinstall to create an RPM, DEB, or TGZ package, which you can later easily install with the distribution's package management tools.
You can even add shell scripts that are executed before and after the files are copied, so you can do some pre- and post-processing like adding user accounts, crontab entries, etc.
Once you get more advanced, you can add dependencies to these packages so they can also pull in and install PHP, MySQL, Apache, GCC libraries, and even required PHP or Apache modules or some external C++ libs you might need, all with a single command.
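As a rough sketch, the packaging step on a Debian/Ubuntu box looks something like this (package name and version are examples; use -R instead of -D for an RPM):
make
sudo checkinstall -D --pkgname=mytool --pkgversion=1.0 --install=no make install
sudo dpkg -i mytool_1.0-1_amd64.deb   # exact file name depends on the release number and architecture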
I think it depends on what you mean by deploy. Typically a deploy process for web projects involves a configuration scripting step in which you can take the same deploy package and tailor it to specific servers (staging, development, production) by altering simple configuration directives.
In my experience with Linux servers, these systems are often custom built, and they often use rsync rather than svn export and/or scp alone.
A script might be executed from the command line like so:
$ deploy-site --package=app \
--platform=dev \
--title="Revsion 1.2"
Internally, the system would take whatever was in trunk for the given package from SVN (I'm sure you could adapt this really easily for git too) and generate a new unique tag with the log entry "deploying Revision 1.2".
Then it would patch any configuration scripts with the appropriate changes (URLs, hosts, database passwords, etc.) before rsyncing everything to the appropriate destination.
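In shell terms, that amounts to something like this (the repository URL, tag name, configuration token, and destination are all placeholders):
svn copy http://svn.example.com/app/trunk \
    http://svn.example.com/app/tags/20090714200154 \
    -m "deploying Revision 1.2"
svn export http://svn.example.com/app/tags/20090714200154 /tmp/app-build
sed -i 's/@DB_PASSWORD@/s3cret/' /tmp/app-build/config/settings.php
rsync -az --delete /tmp/app-build/ deploy@dev.example.com:/var/www/app/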
If there are issues with the deployment, it's as easy as running the same command again only this time using one of your auto-generated tags from an earlier deploy:
$ deploy-site --package=app \
--platform=dev \
--title="Reverting to Revision 1.1" \
--tag=20090714200154
If you also have to compile on the other end, you could include a Makefile as part of your configuration patching, and then execute a command via ssh that compiles the recently deployed code once the rsync process completes.
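For example (host and path are placeholders):
ssh deploy@dev.example.com 'cd /var/www/app && make'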
There is, in my experience, a tradeoff between security and ease of deployment.
For my deployment, I've never had a problem using scp to move the files from one machine to another. You can write a simple BASH script to take a list of machines (from a text file or STDIN) and push a given directory/application to a given directory on all of the machines. If you hypothetically pushed it to a bin directory, the end user would never know the difference.
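A sketch of such a script (the host list file, remote user, and target directory are placeholders):
#!/usr/bin/env bash
# push_app.sh <dir-to-push> - copy a build directory to every host listed in hosts.txt
APP_DIR="$1"
DEST=/usr/local/myapp/bin

while read -r host; do
    scp -r "$APP_DIR" "deploy@${host}:${DEST}"
done < hosts.txt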
The only problem with that would be when you have multiple architectures and OSes, where it has to be compiled on each one individually. In that case, you could just write a script (the first example that pops into my mind is Net::SSH from Ruby) to take that list of servers, cd to the given directory, and run the compilation script. However, if all machines use the same architecture and configuration, you can hypothetically just compile it once on the machine that you are using to distribute.