Provisioning vs. packaging a box with all the necessary tools in Vagrant - Linux

I am trying to set up a development environment with Vagrant, using CentOS 6. From what I have read about Vagrant, I should set up provisioning scripts to install the packages I need when I run vagrant up. For me, this process takes quite a while. However, it seems like it would be more efficient to install everything once and create a new box. Is there some advantage to provisioning that I'm missing? What is the best thing for me to do in this case?

You can provision everything once, and then when you want to run vagrant up for the nth time you can do so without provisioning:
vagrant up --no-provision
As to why provision at all? Mostly so that you can easily take the base box and then change, for example, one or more items in the list to see the effect. It keeps the base box clean and reusable.
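If you do decide to bake everything into a box, the packaging workflow is short. A minimal sketch, assuming the VirtualBox provider and an illustrative box name of centos6-dev:

vagrant up                                   # provision once, however long it takes
vagrant package --output centos6-dev.box     # snapshot the provisioned machine
vagrant box add centos6-dev centos6-dev.box  # register the box locally

New projects can then use centos6-dev as their base box and skip the slow provisioning, at the cost of re-packaging whenever the tool list changes.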

Related

How does Galaxy Meteor hosting for Windows work?

I have a Node.js application that I inherited from a more senior developer. I want to deploy it, and I know it will work because he has already deployed it several times. I am reading these instructions:
https://galaxy-guide.meteor.com/deploy-quickstart.html
I use Windows, as did he.
How does deployment work?
Take these instructions:
Windows: If you are using Windows, the commands to deploy are slightly different. You need to set the environment variable first, then run the deployment command second (the syntax is the same as everything you'd put for meteor deploy).
In the case of US East, the commands would be:
$ SET DEPLOY_HOSTNAME=galaxy.meteor.com
$ meteor deploy [hostname] --settings path-to-settings.json
Am I just supposed to go to the source directory on my laptop and run these commands? What then happens? Is the source uploaded to their server from my laptop and then their magic takes care of the rest?
What about when I want to make a change to the code? Do I just do the same thing, pointing to an existing container, and, again, they do the magic?
Am I just supposed to go to the source directory on my laptop and run these commands? What then happens? Is the source uploaded to their server from my laptop and then their magic takes care of the rest?
It is not magic. You basically go to your project root and enter these commands. Under the hood, this builds your app for production (including minification and production flags for optimization) and, once complete, opens a connection to the AWS infrastructure and pushes the build bundle.
See: https://github.com/meteor/meteor/blob/devel/tools/meteor-services/deploy.js
On the server, install and post-install scripts set up the whole environment for you and, if there are no errors in the process, start your app. These scripts have, of course, some automation, depending on your account settings and the commands you have entered.
What about when I want to make a change to the code? Do I just do the same thing, pointing to an existing container, and, again, they do the magic?
You will have to rebuild and redeploy (using the same deploy command), and Galaxy takes care of the rest.
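Put together, the cycle on Windows looks roughly like this; the project path, hostname, and settings file are placeholders, not values from the docs:

REM deploy (and later redeploy) from the project root
cd C:\projects\my-meteor-app
SET DEPLOY_HOSTNAME=galaxy.meteor.com
meteor deploy my-app.meteorapp.com --settings settings.json

Shipping a code change is the same command again; Galaxy replaces the running containers with the newly pushed bundle.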

How to set up a development environment for React when IT won't allow you to install anything on your Windows workstation

I am working for a client that does not allow setting up anything on the native Windows workstation.
I am, however, allowed to set up a virtual machine on which I can install anything I want.
So, I've set up a Linux VM and installed the React environment.
However, I would like to be able to use the native Windows tools that are allowed for development, since installing and using them on the VM is painfully slow.
I'm currently modifying the code with a native Windows IDE, pushing the changes to a Git repository, then pulling them down to the Linux VM to see them work. For debugging, though, where changes are constantly added, removed, and modified, this is also painfully slow.
I tried to set up a shared folder so I could work on the code locally and have it update on the Linux VM dynamically, but that doesn't work because "npx create-react-app" does a bunch of things, like setting up symlinks, that either don't work on a shared folder or aren't allowed by IT. I'm guessing it's the shared Windows folder that's limiting this. I also tried to set up a Samba share of the Linux folder, but I think this is blocked by IT, because I just can't see it from my Windows machine even though network discovery is turned on.
So, now that you know my pain, what would be the best way to set up a React development environment in this situation? Help...
I understand almost nothing about Linux and VMs, but here is something you can do.
When you create a React application with create-react-app and run npm start, your application is served at localhost:3000.
So, to do what you want, you need to set up the environment in the VM (e.g. create-react-app) and then configure (this is the part I don't know how to do) your VM in a way that lets you access the VM's localhost and your project files; see the sketch after the links below.
This way you can edit the files of the VM and also see the app changing in the windows browser.
How to share a VM's folder with the host
How to access a VM's localhost
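For the localhost part, a minimal sketch, assuming VirtualBox and a VM named dev-vm (both assumptions; other hypervisors have equivalent port-forwarding features):

# on the Windows host: forward host port 3000 to the VM's port 3000
VBoxManage modifyvm "dev-vm" --natpf1 "react,tcp,,3000,,3000"

# inside the VM: bind the dev server to all interfaces, not just loopback
HOST=0.0.0.0 npm start

With that in place, http://localhost:3000 in the Windows browser reaches the dev server running in the VM; whether IT's policies permit the port forward is another question.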

Setting up development environment for OpenBTS

I want to make some little changes in OpenBTS code and use it. Currently I am following this process
1. Make some changes in the code (can't test these changes at runtime).
2. Build the packages.
3. Install the packages.
4. Set up / run OpenBTS.
5. Test the behavior of OpenBTS to see whether those changes are reflected.
6. If not working, go to step 1.
This is quite a hectic process. Is there a smarter way to do it, e.g. running OpenBTS directly from the code rather than from packages installed on Ubuntu, so that a change in the code is directly reflected in my setup? How can I set up such a dev environment?
This answer is a bit late; I have just started to work on this myself. I don't bother installing the packages each time. My cycle is more like this:
Build the packages
Setup/run the database scripts (init the databases)
Install the packages that I don't need to re-build
Run each package manually (from the OpenBTS folders), e.g. run ../Transceiver, ../sipauthserver, ../OpenBTS, ../OpenBTSCLI, etc.
Then when I want to make a code change - I do:
Stop everything
Code change
Re-build (e.g. just OpenBTS)
Re-run everything as before.
I also scripted the startup/stop sequences to make this faster (open/run each app in a new terminal); a sketch follows.
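A minimal sketch of the kind of startup script I mean; the checkout location and the use of gnome-terminal are assumptions about the setup, not part of the original workflow:

#!/bin/bash
# launch each OpenBTS component in its own terminal window
OPENBTS_ROOT=~/openbts   # assumed build location
for app in Transceiver sipauthserver OpenBTS; do
    gnome-terminal -- bash -c "cd $OPENBTS_ROOT && ./$app; exec bash" &
done

A matching stop script can simply pkill the same process names.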

Bamboo 5.5.0 - How to delete a remote agent's capability via the bamboo-capabilities.properties file?

I am currently trying to automate the process of Bamboo remote agent installation and uninstallation. I have run into a problem regarding adding and removing capabilities.
What I am trying to automate:
(The following is what I do on the Bamboo server via the GUI; I want to do this on the remote agent machine via a bash script.)
1. I install the remote agent on a VM, then start it up. I go to the Bamboo interface and click on the newly created agent's name.
2. I add a custom capability type; for the key I put 'buildserver' and for the value I put the name of the agent.
3. I add an 'Executable' capability of type 'Command' with executable label 'cygwin' and path 'C:\cygwin64\bin\bash'.
4. I navigate to the Git executable and remove it by clicking 'delete.' <--- (the problem step)
What I've done:
I have looked here and found a way to automate steps 1-3 using the following "bamboo-capabilities.properties" file:
buildserver="AGENTNAME"
system.builder.command.cygwin="C:\cygwin64\bin\bash"
However, I am stuck on how to remove the Git capability (step 4). I've tried appending something like this to the file:
system.git.executable=""
but it does not seem to do anything. Does anyone know how I would do this? There seems to be very little documentation about this online.
Thanks very much.
I never found a way to do this directly, but I found a workaround. I later learned that the point of removing git in my situation was to allow a shared capability, also called git, to take precedence. My workaround was to set the non-shared capability to the value of the shared capability. I am not 100% sure that this does the same thing, and I am not in a position to test it yet, but as a capability seems to be only a key-value pair, I don't see why it wouldn't. Will update if anything breaks.
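In bamboo-capabilities.properties terms, the workaround looks something like this; the Git path is a placeholder for whatever value your shared capability actually holds:

buildserver="AGENTNAME"
system.builder.command.cygwin="C:\cygwin64\bin\bash"
# overwrite rather than delete: mirror the shared capability's value
system.git.executable="C:\Program Files\Git\bin\git.exe"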

Best way to do automated clean install for Fedora Linux server?

I have a Fedora 10 64-bit server where I want to set up a nightly fresh install. The server is an exact clone of our customer's hardware and is used for running acceptance tests.
I would have liked to set this up using a virtual machine, but that's prohibited due to problems we've had with the different video and network drivers on the VM.
Here are the basic steps I need to automate:
Reinstall base Fedora 10
Update to the latest packages
Install additional packages (some of these come from the rpmfusion repository and our own private repository, so the repo files for these need to be added to the configuration)
Restore file system table to include a NAS mount
Restore users and home directories.
I've looked at using Kickstart to do the installation, but it looks as if that will only satisfy the first step above, by just answering all the questions that you'd normally answer interactively during installation. Does anyone know of a more suitable tool that I could use?
Edit: looks like respin could also be very useful here.
You could look at something like
fog - http://www.fogproject.org/
clonezilla - http://clonezilla.org/
Basically, these two applications are for the automated, unattended deployment of backup images to machines. They tend to be used in large enterprises, but they can be used for what you want to achieve.
I have only used Clonezilla, but FOG can apparently run scripts after a PXE-boot install. You could clone the device after completing all the steps above and then just push the image back down with a nightly reboot, using either Clonezilla or FOG; alternatively, you could use FOG with a script to apply the changes after a clean image has been installed on the server.
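As an illustration of the nightly restore, Clonezilla images can be restored unattended from the command line; the image name and target disk below are assumptions:

# restore the saved baseline image to /dev/sda, then reboot
/usr/sbin/ocs-sr -g auto -e1 auto -e2 -r -j2 -p reboot restoredisk nightly-baseline sda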
Kickstart can do more using a %post section
Just wanted to elaborate on @BenBruscella's %post answer.
Kickstart has a section where you can include or call any post-install script to run after the main installation is done.
With this you could easily do your package updates and mounts.
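A minimal sketch of such a %post section; the repository URL, package names, and NAS export are placeholders for your own values:

%post --log=/root/ks-post.log
# add the private repository (rpmfusion would be added the same way)
cat > /etc/yum.repos.d/private.repo <<'EOF'
[private]
name=Private repo
baseurl=http://repo.example.com/fedora/10/x86_64
enabled=1
gpgcheck=0
EOF
# update and pull in the additional packages
yum -y update
yum -y install extra-package-1 extra-package-2
# restore the NAS mount
echo "nas.example.com:/export/shared /mnt/nas nfs defaults 0 0" >> /etc/fstab
%end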
