Best way to do automated clean install for Fedora Linux server?

I have a Fedora 10 64-bit server where I want to set up a nightly fresh install. The server is an exact clone of our customer's hardware and is used for running acceptance tests.
I would have liked to set this up using a virtual machine, but that's prohibited due to problems we've had with the different video and network drivers on the VM.
Here are the basic steps I need to automate:
Reinstall base Fedora 10
Update to the latest packages
Install additional packages (some of these come from the rpmfusion repository and our own private repository, so the repo files for these need to be added to the configuration)
Restore file system table to include a NAS mount
Restore users and home directories.
I've looked at using Kickstart to do the installation, but it looks as if that will only satisfy the first step above, by answering all the questions you'd normally answer interactively during installation. Does anyone know of a more suitable tool that I could use?
Edit: looks like respin could also be very useful here.

You could look at something like
fog - http://www.fogproject.org/
clonezilla - http://clonezilla.org/
Basically these two applications are for the automated, unattended deployment of backup images to machines. They tend to be used in large enterprises but can be used for what you want to achieve.
I have only used Clonezilla, but fog can apparently run scripts after a PXE-boot install. You could clone the device after completing all the steps above and then push the image back down with a nightly reboot, using either Clonezilla or fog. Alternatively, you could use fog with a script that applies the changes after a clean image has been installed on the server.
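The nightly part can then be as simple as a root cron entry that reboots the box into the PXE-managed restore (a minimal sketch, assuming your fog/Clonezilla server is already set to serve the restore task on the next boot):

# m h dom mon dow   command   (root crontab; the time is a placeholder)
0 2 * * *   /sbin/reboot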

Kickstart can do more using a %post section

Just wanted to elaborate on @BenBruscella's %post post.
Kickstart has a section where you can include or call any post-installation script, which runs after the main installation finishes.
With this you could easily do your package updates and mounts.
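A minimal %post sketch along those lines (the repo URLs, package names, NAS export and username are all placeholders, and the home-directory backup is assumed to live on the NAS):

%post --log=/root/ks-post.log
# Add the extra repositories (placeholder URLs)
rpm -Uvh http://download1.rpmfusion.org/free/fedora/rpmfusion-free-release-stable.noarch.rpm
wget -O /etc/yum.repos.d/internal.repo http://repo.example.com/internal.repo

# Update everything, then pull in the additional packages
yum -y update
yum -y install some-package another-package

# Restore the NAS mount
echo "nas.example.com:/export/share  /mnt/nas  nfs  defaults  0 0" >> /etc/fstab

# Restore users and home directories from a backup on the NAS
mkdir -p /mnt/nas && mount /mnt/nas
useradd -m testuser
tar xzf /mnt/nas/backups/home-testuser.tar.gz -C /home
%end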

Related

Upgrading MariaDB on AWS Linux machine

I have a Moodle site which runs on a Linux AWS box and I'm trying to upgrade it. I need MariaDB 10.3 on there, and I currently have 10.2.10.
I've followed the instructions for upgrading using yum from this webpage https://www.ryadel.com/en/mariadb-10-upgrade-10-3-without-losing-data-how-to/ and all goes fine until yum gets to Running Transaction Check, at which point I get the following:
Transaction check error:
file /etc/my.cnf from install of MariaDB-common-10.3.27-1.el7.centos.x86_64 conflicts with file from package mariadb-config-3:10.2.10-2.amzn2.0.3.x86_64
file /usr/lib64/libmysqlclient.so.18 from install of MariaDB-compat-10.3.27-1.el7.centos.x86_64 conflicts with file from package mariadb-libs-3:10.2.10-2.amzn2.0.3.x86_64
I'm not sure what to do now. Any help or pointers would be appreciated.
EC2 is not designed specifically for databases
You seem to be installing and running your database on EC2 (what you call a Linux AWS box). This means you can SSH into the instance, install software manually, carry out updates, and edit configuration files and settings.
RDS is designed for databases
RDS also has other really convenient features, like automatic version upgrades and maintenance-window management.
If your situation allows, I would suggest using a service designed for databases instead of having to configure things manually. It will save you a lot of time and troubleshooting, and it is also more secure.
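For example, on RDS a major-version upgrade like 10.2 to 10.3 becomes a single API call (a hedged sketch: the instance identifier is a placeholder, and the exact 10.3.x version string should be checked against what RDS currently supports):

aws rds modify-db-instance \
    --db-instance-identifier my-moodle-db \
    --engine-version 10.3.31 \
    --allow-major-version-upgrade \
    --apply-immediately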

Setting up development environment in Windows 10 without admin rights

Let me give a quick background of the work I do and then I'll explain the problem I am facing.
I am a software developer with over 15 years of work experience. My work involves a lot of varied tasks:
data analysis using R, Python
development of web applications using Ruby on Rails, JS, etc.
building models using open source libraries
So far, I have been doing all this in my personal laptop (Ubuntu 18.04) and have faced no issues.
But I will soon need to start using a laptop provided by the organisation that I am working for. This org is not an IT company; it's a public body. They only use Windows (10) and don't give admin access to anyone. It's very hard to get permission to install even "approved" software. Just to give an example, they refused to install Chrome on my laptop because they wouldn't be able to control the updates.
So here's my problem - what do I do to work peacefully using their laptop? The primary reason I have to use the work laptop is that there are a lot of important documents kept in shared drives that are accessible only in their machines.
I have been looking at options like WSL or Hyper-V. But, before I put in a request to the IT team to get them to agree, I wanted to know a few things:
1) Which of WSL or Hyper-V would be the better approach for setting up the dev environment I want?
2) If I get the IT team to install WSL/Hyper-V, would I be able to set up everything else without having to go back to them for each piece of software? Is there some form of secure local admin access these options provide that would ease their concerns?
3) Is there some other way of setting up what I want?
If this is still relevant, I can share my solution:
If you have to work on a Windows machine where you don't have administrative privileges, you can quite easily make a portable R/RStudio installation.
Download a recent version of R from the CRAN site and the latest version of RStudio. After downloading, extract the RStudio installer executable with 7-Zip and copy the files from $_OUTDIR to the desired location (if you are making an update, simply overwrite any files that already exist). Your RStudio executable will be in
your-chosen-directory/bin/rstudio.exe
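As a command line, the extraction step looks roughly like this (assuming 7-Zip is on your PATH; the installer file name and target folder are placeholders):

7z x RStudio-1.4.1717.exe -oC:\portable\RStudio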
Then run the CRAN R installer, ignore the warning that you don't have administrative privileges, and continue until the installation completes. Run RStudio, and from the menu
Tools->Global Options
point it at the location of your R installation.
If you are performing an update (a more recent version of R), copy all files from the library subfolder of the old R installation into the new one, but this time DON'T OVERWRITE! This preserves the packages you installed under the previous version of R. After copying, update all your packages from the RStudio window (Packages->Update). When the package update process ends, check which packages failed to update (you will see warning messages near them in the RStudio console). Remove those packages: write down the names of the failed packages and delete the corresponding folders from the library subfolder. For this you will need to exit RStudio. After the deletion, launch RStudio again and reinstall them with the install.packages command in the RStudio console:
install.packages(c("package1", "package2", "package3"))
Congratulations, you are ready to go!

Setting up development environment for OpenBTS

I want to make some small changes to the OpenBTS code and use it. Currently I am following this process:
Make some changes in the code (can't test these changes at runtime)
Build the packages
Install the packages
Setup or Run OpenBTS
Test the behavior of OpenBTS to see whether the changes are reflected
If not, go to step 1
This is quite a hectic process; is there a smarter way to do it? For example, running OpenBTS directly from the source tree rather than from packages installed on Ubuntu, so that if I make a change in the code it is directly reflected in my setup. How can I set up such a dev environment?
This answer is a bit late; I have just started working on this myself. I don't bother installing the packages each time. My cycle is more like this:
Build the packages
Setup/run the database scripts (init the databases)
Install the packages that I don't need to re-build
Run each package manually (from the OpenBTS folders), e.g. run ../Transceiver, ../sipauthserver, ../OpenBTS, ../OpenBTSCLI, etc.
Then when I want to make a code change - I do:
Stop everything
Code change
Re-build (e.g. just OpenBTS)
Re-run everything as before.
I also scripted the startup/stop sequences to make this faster (open/run each app in a new terminal), as in the sketch below.
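A minimal start-script sketch along those lines (the binary names are taken from the steps above; the tree location and terminal program are assumptions):

#!/bin/bash
# Start each OpenBTS component in its own terminal window.
BTS_DIR=$HOME/openbts          # assumption: location of your build tree
cd "$BTS_DIR" || exit 1
xterm -e ./Transceiver &
xterm -e ./sipauthserver &
xterm -e ./OpenBTS &
xterm -e ./OpenBTSCLI &
wait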

Provisioning vs Packaging a box with all the necessary tools in Vagrant

I am trying to set up a development environment with Vagrant, using CentOS 6. From what I have read about Vagrant, I should set up provisioning scripts to install the packages I need when I run vagrant up. For me, this process takes quite a while. However, it seems like it would be more efficient to install everything once and create a new box. Is there some advantage to provisioning that I'm missing? What is it best for me to do in this case?
You can provision everything and when you want to run vagrant up for the nth time you can do so without provisioning:
vagrant up --no-provision
As to why provision? It's mostly so that you can easily take the base box and then change, for example, one or more items in the provisioning list to see the effect.
But it keeps the base box clean and reusable.
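That said, once your provisioning scripts are stable you can bake the result into a reusable box and cut the startup time (a sketch for the VirtualBox provider; the box name and file name are placeholders):

vagrant up                                    # provision everything once
vagrant package --output centos6-dev.box      # snapshot the provisioned VM
vagrant box add centos6-dev centos6-dev.box   # register it for future Vagrantfiles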

Automated builds in monotouch

I am currently trying to implement a one-click build solution (without having to start the MonoDevelop IDE) for my MonoTouch projects, where I could specify provisioning profiles and code-signing certificates. I searched popular build tools like NAnt, Ant and Maven, but none seems to support MonoTouch. Has anyone tried something similar?
I've gotten it to work with Jenkins (aka Hudson).
You basically set up a Jenkins server and set up your Mac as a "slave" build server (I used a JNLP slave).
From there you can run any command line you want in the build, so you merely have to run mdtool with some arguments, like so:
/Applications/MonoDevelop.app/Contents/MacOS/mdtool -v build "--configuration:Release|iPhone" "Path/To/YourSolution.sln"
One thing to watch out for: to sign an iOS app, the slave process must run under your user. So you can't really create a Mac daemon for it; you'll have to run the slave process at startup for your user and minimize it, which is kind of annoying.
Have you tried TeamCity? It might be worth having a look at this thread: Buildserver for MonoTouch upon OS X?
Thanks for your answers. However, I don't need such heavy artillery in this case. Since I'm developing a single small app, I ended up creating 3 different build configurations in my solution, because I found out that it is possible to configure different code-signing identities and provisioning profiles for each one (Development, Ad-Hoc, AppleStore) in the MonoDevelop project options menu.
Then, in the AppleStore/Ad-Hoc configurations, I added post-build commands so I could create the .ipa file automatically (basically: create a "Payload" folder, copy the .app file into it, and then zip it into a .ipa file along with the icon and iTunesArtwork files).
Finally, I created a bash script that invokes mdtool with any of the configurations, so I can build and generate .ipa files just by executing the script (sketched below).
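A minimal sketch of such a script (the solution path, configuration name and app/output names are all placeholders):

#!/bin/bash
# Build the solution with mdtool, then package the .app into an .ipa.
SLN="Path/To/YourSolution.sln"
CONFIG="AppStore|iPhone"
APP="Path/To/bin/iPhone/AppStore/YourApp.app"

/Applications/MonoDevelop.app/Contents/MacOS/mdtool -v build "--configuration:$CONFIG" "$SLN"

rm -rf Payload YourApp.ipa
mkdir Payload
cp -R "$APP" Payload/
# iTunesArtwork sits next to Payload/ at the root of the archive
zip -qr YourApp.ipa Payload iTunesArtwork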
