Setting up a Linux machine on a webserver [closed]

I have a server with Hostinger, and I have SSH access.
It lacks a LOT of commands. Here's my bin folder.
https://gyazo.com/4509a9c8868e5a19c01f78ba3e0bf09e
I can use wget, meaning I can grab packages.
How can I get this up and running as an average Linux machine? My plan is to use Heroku on it (sneaky, I know) and run Django and such, but it lacks so much to start with that it's looking really hard. I'm missing essentials, including dpkg, apt, make, etc. Tips are appreciated.

There shouldn't be a case where a Linux-based server is missing core packages like a package manager (as I understand it, you don't have apt-get).
I'm lacking essentials, including dpkg, apt, make, etc.
For me, this server is broken and needs to be reinstalled.
I guess you can try to install apt with wget:
Look for an appropriate release here: http://security.ubuntu.com/ubuntu/pool/main/a/apt/
Example: wget http://security.ubuntu.com/ubuntu/pool/main/a/apt/apt_1.7.0_amd64.deb
Install it: dpkg -i apt_1.7.0_amd64.deb
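Note that the dpkg step assumes dpkg itself is present, which the question says it is not. If dpkg is also missing, a .deb can in principle be unpacked by hand, assuming ar and tar are available — a rough sketch only (no maintainer scripts run, and the package database is not updated):
ar x apt_1.7.0_amd64.deb    # yields debian-binary, control.tar.*, data.tar.*
tar -xf data.tar.xz -C /    # extract the payload onto the filesystem (older packages ship data.tar.gz)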
But maybe you are on a different OS than you think? Have you tried installing with yum or dnf? To check which OS you are running, type:
cat /etc/*release
or
lsb_release -a
Going back to your question on how to set up a Linux server.
1. Update
[package manager] update
If you run a Debian-based OS, use apt-get as the package manager; if CentOS-based, use yum or dnf (dnf is the updated yum); Arch uses pacman. For other distributions, look it up.
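For instance, the standard update commands for the common families:
apt-get update && apt-get upgrade    # Debian/Ubuntu
yum update                           # CentOS (dnf upgrade on newer releases)
pacman -Syu                          # Arch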
2. Install
Install the packages you require. To make life easier, you can install groups:
yum groupinstall [name_of_group]
To my knowledge, apt doesn't have group install but uses metapackages instead (they point to a group of packages). Ex.:
apt-get install build-essential
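For example, with a real group name on a yum-based system:
yum grouplist                            # list the available groups
yum groupinstall "Development Tools"     # compilers, make, and friends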
3. Create users
Avoid using root! Create users for services, processes, etc. This is tremendously important for security reasons.
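A minimal sketch of creating a dedicated service account (the account name is illustrative; on Red Hat-based systems the nologin shell lives at /sbin/nologin):
useradd --system --no-create-home --shell /usr/sbin/nologin myapp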
4. Configure
Maybe a silly point, but configure what needs to be configured. For instance, web servers, SSH, the workspace, etc. Each use case is different.
5. Orchestrate
If you don't want to set up the Linux environment by hand from the shell each time, you can use tools like Chef or Ansible to do it for you. (Of course you need to configure them first, which will take some time, but later you will save much more, trust me.)
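For example, with Ansible a prepared playbook can be applied to any number of hosts in one command (the inventory and playbook names here are placeholders):
ansible-playbook -i inventory.ini site.yml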
For setting up application environments, I really recommend using Docker. Thanks to this, your application will work on any Linux-based server that has the Docker engine installed. Not only maintenance but also deployment becomes child's play: just pull your image on any server, then run a container with the necessary parameters.
In the end you will need only a server with security updates, kernel updates, and the Docker engine; the rest of the dependencies are resolved inside your Docker image.
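The deployment flow then looks roughly like this (the image name and port are placeholders for your own):
docker pull registry.example.com/myapp:latest
docker run -d --name myapp -p 8000:8000 registry.example.com/myapp:latest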
Hope it helps.

Heroku isn't a web server in the same sense as Apache or Nginx. It's a platform as a service provider. You don't install it on your own server, you use its hosted platform (and it uses whatever web server you bundle into your slug).
I suggest you go through the getting started tutorial for Python, which walks you through deploying a simple Django app on Heroku. That should give you a good idea of how Heroku works.
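The basic flow from that tutorial looks roughly like this (the app name is a placeholder; older apps push to master instead of main):
heroku login
heroku create my-django-app
git push heroku main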
If your goal is to enable some kind of deploy workflow on your own server (via shared hosting or a server where you have full administrative access) you can search the web for deploy tools. There are all kinds of them, some of which may be more suitable to your needs than others.

Related

Automated installation of the operating system via IPMI

Please suggest a solution if one exists.
There are 20 empty bare-metal servers. Currently I need to go into the IPMI of each one and manually attach the image file to start the OS installation.
Question: are there any solutions to automate this process?
Since you tagged this question with "OpenStack", you must have heard of Ironic.
If the thought of installing OpenStack to automatically install servers frightens you, look at Cobbler. It was used by the now-defunct products Helion OpenStack and SUSE OpenStack Cloud to set up clouds.
Ubuntu uses MAAS for this purpose.
This is not a complete list.
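All of these ultimately drive the machines over IPMI/PXE. The low-level step they automate can also be scripted directly with ipmitool (host and credentials are placeholders):
ipmitool -I lanplus -H 10.0.0.21 -U admin -P secret chassis bootdev pxe    # boot from the network next time
ipmitool -I lanplus -H 10.0.0.21 -U admin -P secret power cycle            # reboot into the installer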

Installing a package independently of the package manager [closed]

I am trying to create an install script for my program that runs on Linux systems. I am choosing Bash for the install script, and I am wondering if there is any efficient way of installing a package independently of the system's package manager. I know that Debian uses aptitude, Red Hat uses the Yellowdog Updater (yum), etc. Given this diversity of package managers, it seems like a lot of if statements will be required, each of which checks the distribution and matches it with the default package manager.
The only thing I can think of apart from checking the distro is having the script manually download the package sources and build them. I'm not sure how popular projects do this, but I would be interested to know.
Your question seems inside out. Just make your script install things in /usr/local/bin with an option for those who want to create a package wrapper for their distro to easily override the destination (typically to use /usr/bin instead, ideally simply by setting DESTDIR). The Linux Standard Base should tell you where to similarly install auxiliary files, libraries, etc.
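A minimal sketch of that in an install script (the program name is illustrative; DESTDIR stays empty for a direct install and is set by packagers for staged installs):
PREFIX="${PREFIX:-/usr/local}"
install -d "${DESTDIR}${PREFIX}/bin"                 # create the target directory
install -m 0755 myprogram "${DESTDIR}${PREFIX}/bin/" # copy the binary with sane permissions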
As an aside, the core package manager on Debian is dpkg and on Red Hat it's rpm.
You are not the first to encounter this issue. Basically, there are two different solutions to this.
Use a package manager
Examples:
docker: https://get.docker.com/
https://github.com/firehol/netdata
Docker detects your OS and adds its own repository to the local package manager. This might be an option for you, depending on your project size.
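The detection part of such a script boils down to probing for known package managers, roughly like this (a simplified sketch of the approach, not Docker's actual script):
if command -v apt-get >/dev/null 2>&1; then PKG=apt-get
elif command -v dnf >/dev/null 2>&1; then PKG=dnf
elif command -v yum >/dev/null 2>&1; then PKG=yum
elif command -v pacman >/dev/null 2>&1; then PKG=pacman
else echo "no supported package manager found" >&2; exit 1
fi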
Standard compile and install approach
You might have heard of the following 3 lines:
./configure
make
make install
Make is not the only option; there are other build systems that do essentially the same thing.
There are still a lot of open-source projects out there where compiling locally and then moving/copying the files to the correct location is the preferred method of installation (sometimes only for development builds).
Examples:
aria2: https://github.com/aria2/aria2/
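The end-user side of that approach typically looks like this (the URL and version are placeholders):
wget https://example.com/project-1.0.tar.gz
tar -xzf project-1.0.tar.gz && cd project-1.0
./configure --prefix=/usr/local
make
sudo make install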

Distributing and updating software applications in a Linux environment

Currently I'm manually distributing and updating two applications across 50 computers running CentOS 6.5 and Ubuntu 14.04. Each time a new version of either application is available, I have to copy all the files and update every computer by hand. It's very time-consuming and frustrating.
To avoid this manual process across 50 computers, I'd like to maintain a central server that contains the latest version of the applications, so that whenever I need to install or update I can just type a command on the client PC, the way we install software on CentOS and Ubuntu:
in Ubuntu
sudo apt-get install vlc
and in CentOS
sudo yum install vlc
One of the programs is written in Java and the other in Python.
I've googled it and can't find any good, useful sources on how to do this.
If someone has already done this or knows how to achieve it, please help.
You need to create packages to make this happen.
Ubuntu uses the Debian package format, so you can use Debian's New Maintainer's Guide, which is the canonical tutorial on how to create a Debian package. It makes the assumption that you're going to upload the package to Debian, which in your case isn't true, but that just means you need to skip some sections of the document.
For RPM, there isn't such a document AFAIK, but there is the book 'Max RPM' (which unfortunately is somewhat outdated), and Fedora has augmented it with some guidelines and best practices, which they've put on their wiki. Since RHEL is created by forking Fedora and stabilizing that, and since CentOS is based on RHEL, what goes for Fedora goes for CentOS, too.
These methods will create packages manually, which is always the best way and will result in the fewest problems afterwards. However, they take time. If you don't want to spend that time, there are also a few options to generate packages that automate part or all of the job for you. Personally, however, I'm not a fan of these methods and therefore wouldn't recommend them.
Finally, another option is to not create packages, but to use a config management system like puppet to automate the deployment. It's even available in Ubuntu and EPEL.
Edit: I notice you may actually be asking about creating a repository instead. That's a different thing. There are several tools to help you do that; at their core, all they do is run createrepo for RPM packages or dpkg-scanpackages for Debian packages. You can do that yourself, or invest time in a tool like reprepro or aptly or some such.
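At its simplest, generating the indexes over a directory of packages and serving it via HTTP is enough (paths are illustrative):
createrepo /var/www/html/repo/centos                       # RPM: build yum metadata
cd /var/www/html/repo/ubuntu
dpkg-scanpackages . /dev/null | gzip -9c > Packages.gz     # Debian: build the package index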

Should I use a user other than root when installing NGiNX

I heard that it would be better to use a separate user for installing NGiNX. Is it true? I am thinking of using NGiNX to host virtual hosts that my clients could use for their websites, and I don't want them to have too much control over NGiNX...
I am using the Ubuntu Linux distro.
Thanks in advance for any help and/or tips.
How are you planning to install these applications? Since you say you're using Ubuntu, I would assume that you'll be installing apps via either the graphical manager or apt-get or aptitude.
If you're using the graphical program manager, then it should prompt you for your password; this performs a sudo under the hood.
If you're using either apt-get or aptitude or something similar, those programs need to be run as root to install.
In both instances above, the installation scripts for the packages will (should) handle any user-related issues that are necessary for the program you're installing to function properly. For example, when I did an apt-get install jenkins, the installation scripts automatically created a jenkins user for me, and my Jenkins CI server runs as the jenkins user automatically.
Of course, if you're compiling all of these programs by hand, all bets are off and you'll need to figure out how best to do all of this yourself. And if you're compiling these programs by hand just to get them installed, I'd have to question why you're using Ubuntu in the first place; one of the best parts of using a Linux distribution with sane package management is actually USING said package management! (Note: by this statement I mean anything Debian-based for sure; I understand that Red Hat's yum provides very similar capabilities, but I haven't used anything Red Hat since around 2003.)
You don't want a process to have any more access than it needs. So yes, you should use a user besides root -- one that has the minimal privileges required to read the files it needs. Typically this involves creating a new nginx (or www or similar) user specifically for the task.
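A minimal sketch on Ubuntu (the distribution package normally creates such an account for you, typically www-data; the name below is illustrative):
sudo useradd --system --no-create-home --shell /usr/sbin/nologin nginx
# then in nginx.conf, have the worker processes drop to that account:
# user nginx;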

Clone Debian/Ubuntu installation [closed]

Is there an easy way of cloning an entire installed Debian/Ubuntu system?
I want to have an identical installation in terms of installed packages and as much of the settings as possible.
I've looked into the options of aptitude, apt-get, and synaptic but have found nothing.
How to mirror apt-get installs:
Primary system:
dpkg --get-selections > installed-software
scp installed-software $targetsystem:.
Target system:
dpkg --set-selections < installed-software
dselect
Done.
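A non-interactive alternative to the dselect step, once the selections are loaded on the target:
sudo apt-get update
sudo apt-get dselect-upgrade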
+1 to this post.
This guide should answer your direct question.
But I would recommend rsync and simply cloning the entire root filesystem. It's only expensive the first time.
You can also create your own package repository and have all your machines run their daily updates from your repository.
Supposing you want to install Ubuntu on multiple identical systems you could try with the Automatic Install feature.
You can use rsync for that; there is an interesting thread about it on the Ubuntu Forums.
There is rsync, which lets you synchronise files between installations. So you could just rsync your entire distro, or at least the directories that contain the programs and the configuration files.
Also, I don't know if this is what you are asking, but you could turn your existing install into an ISO image; this would allow you to install it elsewhere, thus having a duplicate.
Hope that helps.
If the drives and systems are identical, you might consider using dd to copy the source machine to the target.
The only changes that would need to be made on booting the new machine would be to change the hostname.
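A sketch of that approach (device names are examples; dd is destructive, so double-check them before running):
sudo dd if=/dev/sda of=/dev/sdb bs=4M    # raw copy of the whole disk
# after the clone's first boot, give it a new name:
echo new-hostname | sudo tee /etc/hostname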
Once the machine has been duplicated, go with what the other answers have suggested, and look at rsync. You won't want to rsync everything, though: system log files etc. should be left alone.
Also, depending on how often "changes" are made to either system (from bookmarks to downloaded ISOs), you may need to run rsync in daemon mode, and have it update nearly constantly.
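An illustrative one-shot sync that skips the volatile paths (archive mode plus hard links, ACLs, and xattrs; the exclude list is a minimal example, and the target host is a placeholder):
rsync -aHAX --exclude=/proc --exclude=/sys --exclude=/dev --exclude=/run --exclude=/var/log / root@target:/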
SystemImager
FAI
We have SystemImager working great with RHEL and CentOS. Haven't tried it on Debian.
The trick linked by Luka works great with Debian, though.
Well, it all depends on scale and how often you want to use it. SystemImager is basically rsync on steroids: it has scripts that make creating images easy and lets you manage network settings, etc. It can easily be used where you need to create a farm of web servers or a farm of mail servers with only small differences between installations; you can boot one blank system over the network and have it completely installed. The advantage is that it's almost completely automated: a script learns your partitioning layout and automatically applies it.
When you only need one copy of a system, keep it simple: boot from a live CD, create your partitioning, copy over the network using rsync, install your bootloader, and everything will be fine.
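A rough outline of that manual route (devices and the source host are placeholders; a real run needs a matching partition layout and usually a few more bind mounts before installing the bootloader):
mkfs.ext4 /dev/sda1
mount /dev/sda1 /mnt
rsync -aHX root@source:/ /mnt --exclude=/proc --exclude=/sys --exclude=/dev
mount --bind /dev /mnt/dev
chroot /mnt grub-install /dev/sda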
