I've got several Virtual Hosts set up on my Apache (Linux) server. I'd like each Virtual Host to have its own set of cronjobs so that they can be managed independently without the risk of one Virtual Host conflicting with another. Is there a way to do something like this?
The only way I can see is to create local users on the machine where those virtual hosts reside and give each user the rights to manage a particular virtual host (config, cron, etc.).
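A minimal sketch of that approach, assuming a virtual host served from /var/www/site1 and a user named site1 (both names are examples, as is the nightly.sh script):

    # create a dedicated user for this virtual host
    sudo useradd -m -d /var/www/site1 -s /bin/bash site1
    sudo chown -R site1:site1 /var/www/site1

    # edit that user's own crontab, kept separate from everyone else's
    sudo crontab -u site1 -e

    # or drop a system cron entry that runs as that user
    echo '0 3 * * * site1 /var/www/site1/scripts/nightly.sh' | sudo tee /etc/cron.d/site1

Each virtual host's jobs then live in their own crontab (or /etc/cron.d file) and can be changed without touching any other host's jobs.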
I have an Amazon Linux machine where users log in and connect to other servers (a bastion server). Now I have upgraded my Linux machine to.
How do I move all the users present on Server1 to Server2?
Things I have tried:
created snapshots of Server1
converted them to volumes and attached them to Server2.
Please suggest what else I can do to get all the users from Server1.
You should not create snapshots of the boot disk, since it contains the Operating System.
Instead, you should:
Start with the raw Amazon Linux 2 image
Create the new users in the Operating System. See: Add new user accounts with SSH access to an EC2 Linux instance
Copy the /home/USERNAME directories from the old instance to the new instance
This will preserve the users' settings and SSH keys.
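A rough sketch of the copy step, assuming the home directories live under /home and you can SSH from the old instance to the new one (NEW-INSTANCE is a placeholder):

    # from the old instance: copy every home directory to the new instance
    # (ownership is only preserved if rsync runs as root on the receiving side,
    #  hence --rsync-path="sudo rsync"; this relies on passwordless sudo for ec2-user)
    sudo rsync -aHv --rsync-path="sudo rsync" /home/ ec2-user@NEW-INSTANCE:/home/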
I copied all the main files (passwd, group, and shadow) from /etc and rebooted, and it worked.
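Roughly what that looks like (a sketch only; overwriting these files wholesale can lock you out if system accounts differ between the two servers, so appending just the relevant user lines is safer; alice is a placeholder username):

    # back up Server2's copies first
    sudo cp -a /etc/passwd /etc/passwd.bak
    sudo cp -a /etc/group  /etc/group.bak
    sudo cp -a /etc/shadow /etc/shadow.bak

    # append one user's entries from the files copied off Server1
    grep '^alice:' passwd.server1 | sudo tee -a /etc/passwd
    grep '^alice:' group.server1  | sudo tee -a /etc/group
    grep '^alice:' shadow.server1 | sudo tee -a /etc/shadow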
I installed gitlab on a raspberry pi 4 in my local network and will use it only locally. When I configure in /etc/gitlab/gitlab.rb the external_url 'http://rpi4.local' (and execute sudo gitlab-ctl reconfigure afterwards) it works. I can even configure a different port here.
But the configuration external_url 'http://gitlab.rpi4.local' does not work. Do I need to configure something else, like my /etc/hosts file?
You will need to make the name valid in DNS through some mechanism. There are multiple ways depending on your needs and your options for DNS.
As you mentioned, you can add the name to your /etc/hosts file. This should be done both on the GitLab server and on any workstation that needs access to GitLab (assuming Linux-based machines; the process differs for Mac or Windows).
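For example, assuming the Raspberry Pi's address is 192.168.1.50 (a placeholder), the entry would look like this on both the Pi and each workstation:

    # /etc/hosts
    192.168.1.50    gitlab.rpi4.local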
Alternatively, use a valid DNS name, such as gitlab.<a-domain-you-own>, and add it to your DNS. Many domain registrars offer DNS for free, or you could use a dynamic DNS service if your Raspberry Pi has a dynamic internal address. The advantage of this method is that you won't have to modify any /etc/hosts files, and all workstations will know how to reach your GitLab instance without any changes.
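Whichever route you take, you can check that the name resolves before pointing external_url at it:

    # getent consults /etc/hosts as well as DNS; nslookup queries DNS only
    getent hosts gitlab.rpi4.local
    nslookup gitlab.rpi4.local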
I know there are several questions similar to this, but as far as I can see there's not an answer for the setup that I can get to work, and as far as documentation goes I'm a bit lost.
My goal is to set up a Linux development server on the local network on which I can run multiple docker machines/containers, one for each of our projects.
Ideally, I would create a docker-machine on the dev box and then be able to access it from any of my local network machines. I can run Docker on the Linux box directly and access it by publishing the ports, but I want to run multiple machines with different IP addresses so that we can have multiple VMs running (multiple projects).
I've looked at Docker Swarm and overlay networks and just haven't been able to find a single tutorial or set of instructions to get this sort of setup running.
So I have a dev box at 192.168.0.101 with docker-machine on it. I want to create a new machine, run nginx on it, and then access nginx from another machine on the local network, i.e. http://192.168.99.1/, then set up another and access that too at, say, http://192.168.99.2/.
If anyone has managed to do this, I'd be interested to know how.
One way I've been thinking about doing it is running nginx on the host on the dev box and setting up config rules to proxy to the local machines. I'm unsure how well this would work: it works for web servers, but what if I want to SSH or bash into one of those machines, or connect to a MySQL container running on one of them?
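For the web side, that proxy idea would look roughly like this (a sketch with made-up names and ports; a plain HTTP proxy won't cover SSH or MySQL, which would need direct port mappings or a TCP/stream proxy instead):

    # /etc/nginx/conf.d/project1.conf (hypothetical)
    server {
        listen 80;
        server_name project1.dev.local;

        location / {
            # forward to the port published by project1's container
            proxy_pass http://127.0.0.1:8081;
            proxy_set_header Host $host;
        }
    }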
Have you considered running your docker machines inside LXD containers?
Stéphane Graber's site has a lot of relevant information:
https://stgraber.org/category/lxd/
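A rough sketch of that approach, assuming LXD is already installed (the container name and image are examples); nesting has to be enabled for Docker to run inside the container:

    # one LXD container per project, each getting its own IP on the LXD bridge
    lxc launch ubuntu:22.04 project1 -c security.nesting=true
    lxc exec project1 -- sh -c "apt-get update && apt-get install -y docker.io"
    lxc list    # shows each container's address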
The way that I resolved this is by using a NAT on the Linux box and then assigning a different IP to each machine. I followed the instructions here: http://blog.oddbit.com/2014/08/11/four-ways-to-connect-a-docker/ which finally got me to the point where I could share multiple docker machines using the same port (80) on different IPs.
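One concrete variant of that idea (not necessarily exactly what the linked post does; interface name and addresses are examples) is to give the dev box an extra address per project and bind each container's port 80 to its own address:

    # add extra IPs to the host's interface
    sudo ip addr add 192.168.0.201/24 dev eth0
    sudo ip addr add 192.168.0.202/24 dev eth0

    # bind each project's nginx to its own address, all on port 80
    docker run -d -p 192.168.0.201:80:80 --name project1 nginx
    docker run -d -p 192.168.0.202:80:80 --name project2 nginx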
I'm using Vagrant to deploy my VMs and my current setup looks like this:
server1 = VM1, VM2, VM3 ( main production server )
server2 = VM1, VM2, VM3 ( backup server )
My question is: can I somehow sync the VMs across the different physical servers in case one fails, so I can keep running the VMs on the second one without experiencing any downtime?
I know there is the Synced Folders option within Vagrant, but that is not what I need. I basically need to clone the VMs from server1 to server2 periodically so that, in case of downtime, they can keep running on the backup server until the main one comes back up.
Thanks a bunch.
Vagrant doesn't inherently support this, since its intended audience is really development environments. It seems like you're looking for something more like what VMware vSphere does.
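If you want to stay with Vagrant anyway, the closest approximation is a periodic export-and-copy job rather than live failover. A rough sketch, with placeholder names and paths, assuming a Vagrantfile on server2 that references the vm1 box; the VMs on server2 would start from the last export, not the live state:

    # on server1, run from each VM's Vagrant project directory (e.g. via cron)
    vagrant package --output vm1-backup.box
    rsync -av vm1-backup.box backup-user@server2:/var/backups/vagrant/

    # on server2, if server1 goes down
    vagrant box add vm1 /var/backups/vagrant/vm1-backup.box --force
    vagrant up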
I am deciding between using Puppet or Chef to provision a matching development and production environment. I plan to regularly add virtual hosts to Apache. I have looked through the docs of both and I am not certain about this. If I add a virtual host, does the server need to be re-provisioned entirely (destroyed/rebuilt) for the new virtual host to be active? Or can I simply reboot the machine and the new changes to the Puppet or Chef manifests will be applied?
Nope, provisioning can run over and over again, even without a reboot.
Chef by default runs every 30 minutes.
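For example, with Puppet and the puppetlabs-apache module, adding a virtual host is just another resource in the manifest (names and paths below are placeholders):

    # site.pp (sketch)
    class { 'apache': }

    apache::vhost { 'newproject.example.com':
      port    => 80,
      docroot => '/var/www/newproject',
    }

Re-running the agent on the existing machine (puppet agent -t, or vagrant provision on a Vagrant box), or just waiting for the next scheduled run, creates the vhost and reloads Apache; nothing needs to be destroyed, rebuilt, or rebooted.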