Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 5 years ago.
Our customer has about 800+ computers running Windows XP distributed across the country. Each computer can be accessed using TeamViewer. The goal is to replace XP with a Linux distribution remotely.
Does anybody know if this is possible, and where to start?
Thanks!
PXE is your only realistic hope:
Some on-site assistance is needed to press F12 at the BIOS screen before Windows XP boots:
A) On PC-A, set up a DHCP server that points DHCP clients to a PXE server, which serves the Linux installer/ISO from a web server (all three roles can run on a single Windows machine in the same LAN segment on site).
B) Reboot PC-B on site and press F10–F12 to open the boot options menu.
C) Choose network boot (PXE boot).
Further reading: https://www.vercot.com/~serva/
Guide: https://youtu.be/nnxgFpUr1Og?t=39
Note: make sure you enable proxyDHCP, not the full DHCP server.
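The linked guide uses Serva on Windows; if you would rather run the proxyDHCP/TFTP role from a Linux box, a roughly equivalent dnsmasq sketch looks like this (the subnet, boot file and paths are assumptions, adjust to your environment):

```
# /etc/dnsmasq.conf -- proxyDHCP + TFTP only; the existing DHCP server keeps
# handing out addresses
port=0                          # disable dnsmasq's DNS service
dhcp-range=192.168.1.0,proxy    # act as proxyDHCP on this (assumed) subnet
dhcp-boot=pxelinux.0            # boot file to advertise to PXE clients
pxe-service=x86PC,"Install Linux",pxelinux
enable-tftp
tftp-root=/srv/tftp             # put pxelinux.0 and the installer kernel/initrd here
```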
I would try something like these:
Clonezilla, which works by replicating a previously prepared disk image to one or more computers booted within a network segment
Cobbler, which works as a provisioning server for Linux-based machines
Of those options, I have used Clonezilla with success. As long as the prepared base image doesn't change too frequently, the main time-consuming tasks would be configuring the Clonezilla server and building that seed image.
I did a basic test of Cobbler and it worked fine in my environment; this option would be better suited to dealing with changing requirements.
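For reference, registering an installation source with Cobbler boils down to a few commands along these lines (the distro name and paths are assumptions, not a tested recipe):

```
# Register a kernel/initrd pair as a distro, attach a profile to it,
# then regenerate the PXE/TFTP configuration
cobbler distro add --name=linux-netinstall \
        --kernel=/mnt/iso/vmlinuz --initrd=/mnt/iso/initrd.img
cobbler profile add --name=linux-default --distro=linux-netinstall
cobbler sync
```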
Please also note that both options require some network configuration, like DHCP server settings that work with the PXE protocol.
Also, for your requirement, a human being would be needed on site to execute one or more of these tasks:
Properly enable network booting in the BIOS of each of the 800+ machines, unless it has already been done
Boot the machines to install the new operating system
The network booting option, based on the PXE specification, must be supported by the motherboard of each machine and have a higher boot priority than other devices, such as CD drives or hard drives.
Another thing to consider for the couple of options I'm suggesting is how those 800+ machines are distributed across the country. The more dispersed they are, the more cumbersome this task will be. Conversely, if the machines are concentrated in only a few places, the task becomes much more feasible; for example, you could prepare and test a server or laptop and then carry it to each of those few places to install the new operating system.
Regarding the option of booting over the public Internet to reach a remote deployment server, I don't know how that could be done; in fact, that would be something quite interesting to learn about. If something like that is possible, another variable to consider is the hardware compatibility of the destination machines, because as far as I know, protocols like PXE rely on multicast or broadcast within the local network segment, and I presume those 800+ machines don't have recent motherboards with firmware that supports more modern boot protocols.
That's all for now.
Closed. This question is not about programming or software development. It is not currently accepting answers.
Closed 2 days ago.
I am looking for a solution to create several virtual switches connected by a virtual router on a single Linux PC. I would like to create a test and development environment and use my Linux PC's KVM/QEMU/libvirt for the virtual machines or containers.
My intention is to create a separate virtual network for each domain (test, development, production, etc.) and connect them via a virtual router to my physical LAN. I want to do all of this on a single Linux PC. I do not want to host lots of services like OpenStack Neutron, or dedicate a PC entirely to a platform such as Proxmox or OpenStack.
I understand that Open vSwitch can create virtual switches, as the name suggests, but I have not been able to find any instructions on creating a virtual router. I have seen some posts that route these switches' traffic through the kernel's network configuration, but I am really looking for virtual switches/routers with their own virtual interfaces, without having to touch my host's routing settings frequently and manually.
I will connect my custom-built KVM/QEMU containers or virtual machines to these environments. I do not want to be forced into the kind of image setup that OpenStack, Vagrant or Docker imposes.
My internet searches pointed me to Open Virtual Network (OVN), which claims to provide virtual switches, routers, etc., and seems to use Open vSwitch underneath. However, OVN seems to require a higher level of tooling or services such as OpenStack, and I have not found a proper package for it in Arch Linux either.
To make a long story short:
Can I create several private network switches (e.g. 192.168.100.0/24, 192.168.101.0/24, 192.168.102.0/24 for host, test, development, etc.), connect them to a virtual router, and make those machines accessible from my LAN via Open vSwitch on a regular Linux box?
If not, which toolset can I use to achieve that? I am only interested in the networking stack; I would like to stay free of any other stack or technology such as OpenStack, Proxmox, Vagrant or Docker, which come bundled with their own services, image types, etc.
Any help much appreciated.
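For what it's worth, the topology described above can be sketched with plain Open vSwitch plus a network namespace playing the router; everything below (bridge names, interface names, addresses) is an illustrative assumption rather than a tested recipe:

```
# One OVS bridge per environment
ovs-vsctl add-br br-test
ovs-vsctl add-br br-dev

# A network namespace acts as the virtual router
ip netns add vrouter

# veth pair linking the router to the "test" switch
ip link add rt-test type veth peer name sw-test
ip link set rt-test netns vrouter
ovs-vsctl add-port br-test sw-test
ip link set sw-test up

# veth pair linking the router to the "dev" switch
ip link add rt-dev type veth peer name sw-dev
ip link set rt-dev netns vrouter
ovs-vsctl add-port br-dev sw-dev
ip link set sw-dev up

# Address each router leg and enable forwarding between the subnets
ip netns exec vrouter ip addr add 192.168.100.1/24 dev rt-test
ip netns exec vrouter ip addr add 192.168.101.1/24 dev rt-dev
ip netns exec vrouter ip link set rt-test up
ip netns exec vrouter ip link set rt-dev up
ip netns exec vrouter ip link set lo up
ip netns exec vrouter sysctl -w net.ipv4.ip_forward=1
```

VMs attached to br-test or br-dev (for example via libvirt's <virtualport type='openvswitch'/>) would then use 192.168.100.1 / 192.168.101.1 as their gateways; reaching the physical LAN additionally needs an uplink from the router namespace to the LAN-facing bridge.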
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 8 years ago.
I was searching for benchmarks comparing a Linux server edition with its desktop version on the same hardware. All I could find were comparisons between different kinds of servers, mainly Linux versus Windows.
Anyway, my question is whether the server edition of Linux, Ubuntu 14.04 to be more specific, performs better and consumes less power than the desktop version.
In case it is relevant to the answer, the application I want to deploy is just a simple REST service for my mobile app. I am the only client of the service, so I want to estimate how much my energy bill will grow and compare that to shared hosting.
The machine I will (or will not) deploy my server on is a Samsung NP-R540 with a dual-core Intel® Core™ i3 M 380 @ 2.53 GHz processor and 4 GB of RAM.
In most regards, yes.
A system without a GUI (guessing you are not going to slap an X server on top of it but run it purely from the CLI) generally has lower power consumption than a full-blown desktop machine.
Primarily because you do not need to drive a power-hungry GPU that emits heat.
The same goes for sound.
A general tip for using a normal PC as a server is to remove the graphics card if possible and use only the built-in one. If the box has no built-in graphics, try to get hold of a passively cooled graphics card.
Disable everything you are not going to need in the BIOS.
For example,
Floppy controllers
IDE controllers (if you have both IDE and SATA and will not be using IDE drives)
Built-in sound
PS/2 controllers (if present, and you connect the keyboard via USB)
FireWire (if not used)
Fake RAID (if not used)
Furthermore, remove any unnecessary LED lights (a power LED indicator may be nice, though), and perhaps any extra fans that are mounted solely to cool a gaming graphics card.
Needless to say, a server version of a Linux distribution is smaller, leaving out the packages that make up the GUI environments and their associated services.
This leaves a smaller footprint both on disk and in power consumption, since you won't have 500 packages lying around that get updated needlessly.
I think it largely depends on what you run. The default server and desktop distributions will have different services and even the kernels might be optimised differently. You can use a tool like powertop to analyse what exactly is consuming power and optimise accordingly.
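Either way, measuring on the actual machine beats guessing; powertop reports what is consuming power and can apply tunables (a quick sketch, assuming a Debian/Ubuntu-style system):

```
sudo apt-get install powertop
sudo powertop                       # interactive view of the top power consumers
sudo powertop --html=report.html    # write a one-off HTML report instead
sudo powertop --auto-tune           # apply the suggested power-saving tunables
```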
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 9 years ago.
I want to install OpenShift Origin on my PC running Windows 7.
I am completely new to Linux environments and terminology but wanted to 'look around' the OpenShift product with the hope that I can become familiar with its offerings and features.
So I have started here:
https://www.openshift.com/products/origin
Where the instructions are:
"The easiest way to run OpenShift Origin locally is to download an image suitable for running on a VM. The image will work on KVM , VirtualBox or VMWare . You can also spin up a VirtualBox instance using Vagrant or build your own machine using Puppet".
I have downloaded openshift-origin.latest.tgz and I am assuming the next step is to download and install a 'VM' (something I also have never used)?
I have heard the name VMWare before but when I visit the site there seem to be 15+ different products and I'm not sure which one is required for the above task.
So, is it possible for someone to provide a <ul> of steps required to install and run OpenShift Origin on Windows 7?
A Google search for "how to install openshift origin on windows 7" does not seem to return any immediately obvious results (the first result links to an article that starts with [obsolete]).
There is a video called 'open shift origin setup' here:
http://youtu.be/rzW3N_C5sIE
But it starts with a file called 'openshift_origin.iso' and not the 'openshift-origin.latest.tgz' that I have downloaded and then it gets into some terminal coding that is completely foreign to me.
Any pointers appreciated.
Edit:
In addition to the accepted answer below, since virtual machines may be a bit daunting to newbies, here are some screenshots showing the installation of VirtualBox; it was really pretty easy.
For Windows 7, I downloaded VirtualBox 4.2.16 for Windows hosts x86/amd64 from:
https://www.virtualbox.org/wiki/Downloads
and then ran the installer:
Then you will see a few screens of this type; just click 'Install'.
Unfortunately, when running OpenShift as per the instructions in the accepted answer, I got an error message about hardware acceleration not being available:
And I haven't been able to find a workaround to this yet.
But this error shouldn't occur for those who have hardware acceleration enabled.
VirtualBox
VirtualBox is freely available.
Open VirtualBox from the Start Menu - this opens the VirtualBox Manager.
Open the menu File > Import Appliance or press CTRL+I.
Click Open Appliance...
Browse to the folder you downloaded OpenShift Origin to.
Select the .ovf file.
Press Next.
Press Import.
It'll import the file for a while (roughly 2 minutes on my computer) and show up as a Virtual Machine afterwards. You can just click Start and it'll boot up.
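If you prefer the command line to the GUI, the same import can be done with VBoxManage; the .ovf file name below is an assumption, use whatever file the downloaded archive actually contains:

```
VBoxManage import openshift-origin.ovf --vsys 0 --vmname "OpenShift Origin"
VBoxManage startvm "OpenShift Origin"
```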
VMWare
VMWare Player is free for personal non-commercial use while most other VMWare products are not.
I haven't personally tried this route, but it seems easy enough to just open the .vmx file directly.
Your choices of software to run the ISO (a VM image with Fedora) on Windows are VirtualBox or VMware Workstation. Here's an interesting article that compares the two:
http://www.infoworld.com/d/virtualization/review-vmware-workstation-9-vs-virtualbox-42-203277
Two unrelated things here...
First, if you do not have a 64-bit processor with hardware virtualization (listed as VT-x on Intel chips and AMD-V on AMD processors), then you cannot host an OpenShift Origin VM, which itself spawns VMs and thus not only needs a virtualization-capable processor but also needs its VirtualBox VM enabled for virtualization (a checkbox under System/Acceleration in the settings for the VM).
Second, OpenShift Origin relies on multicast DNS, which is not supported on Windows 7, so it won't work.
If you can install Fedora 20 Alpha (I expect Fedora 19 will work, but I haven't tried it) onto bare metal, then install VirtualBox and the nss-mdns RPM, that should work.
Been there, done that, got the headache.
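As a quick way to verify the first point from a Linux machine, and to flip the VirtualBox setting without the GUI, something like this works (the VM name is an assumption):

```
# Non-empty output means the CPU advertises VT-x (vmx) or AMD-V (svm)
grep -E -o 'vmx|svm' /proc/cpuinfo | sort -u

# Command-line equivalent of the System/Acceleration checkbox
VBoxManage modifyvm "OpenShift Origin" --hwvirtex on
```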
Closed 11 years ago.
This question is for practicing Linux kernel hackers:
Generally, it is best to test and play with Linux kernel changes/hacks in a virtualized environment.
What virtual environment do you use for testing your hacks?
How do you make a minimal filesystem (with basic utilities) to use with the environment? If you are using a ready-made filesystem, which one are you using?
What useful tricks do you apply in your environment (like installing a new kernel, sharing files, etc.)?
Please provide a step-by-step procedure to set up the environment, if possible.
A collection of this info doesn't seem to be available on the web.
Thanks.
Different people use different setups; I don't think there is one true answer.
I currently use VirtualBox as the hypervisor, with a filesystem created with Buildroot.
Apart from other VMs (KVM, QEMU, VMware, etc.), you could also use User-Mode Linux to much the same effect if your hacking is in the more "logical" layers of the kernel.
I'm currently using a Fedora 14 VM running in QEMU/KVM on a Fedora 14 host for my network driver development. I use a fairly standard install with the Software Development packages, plus whatever web or networking tools (e.g. Wireshark) might be useful for the task. I typically set up a serial console on the VM and monitor it with minicom on the host; this helps me catch stack traces when I'm chasing a bug. I usually keep my source and editing environment on the host machine, with the files on an NFS filesystem that the VM mounts, so I don't have to keep copying files to and from the VM. With the host running the same kernel version, I can compile the driver quickly on the multicore host and test it in the VM.
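For what it's worth, a minimal loop in that spirit can be as small as the sketch below; the kernel tree path and Buildroot output file names are assumptions:

```
# Build a tiny root filesystem once with Buildroot
make -C buildroot qemu_x86_64_defconfig
make -C buildroot    # produces output/images/rootfs.ext2, among others

# Boot the freshly built kernel against it, with the console on the terminal
qemu-system-x86_64 -enable-kvm -m 512 \
    -kernel linux/arch/x86/boot/bzImage \
    -drive file=buildroot/output/images/rootfs.ext2,format=raw,if=virtio \
    -append "root=/dev/vda console=ttyS0" \
    -nographic
```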
Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
Closed 3 years ago.
I'm going to get a low-end old (CHEAP!) computer to run non-stop as a little server for Subversion, Mercurial, Trac and maybe a few other things. It's 99% for myself - performance isn't a concern.
It'll probably have a 1 GHz P3/P4/Celeron, 256 MB SDRAM, a 30 GB IDE HDD or something like that, and any video card so I can hook up a monitor.
I could set about installing Windows Server on it, but I feel that's overkill. All I need is to access my code from my laptop, desktop and maybe remotely, and the same goes for a wiki, bug tracker, etc., so I feel that a light Linux distribution will be more than enough.
I want to have a GUI, preferably Xfce, but I don't mind IceWM or any other light GUI - it doesn't have to be pretty, I just don't like the CLI as a Windows user.
However, the advantage of Windows would be that I already have tons of experience setting it up, can use Remote Desktop to get to it directly, and AFAIK I have access to Windows Home Server, which "just works" - unless you can suggest a distro made for home servers.
So the question is: what Linux distribution do you think is best for my needs? Or should I just strap Windows Home Server on it?
I would suggest Ubuntu. Setting up/installing applications is just a breeze with apt-get.
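For example, everything mentioned in the question is a single command away on Ubuntu/Debian (the package names below are the usual ones, but worth double-checking for your release):

```
sudo apt-get update
sudo apt-get install subversion mercurial trac xfce4 vnc4server
```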
Having used Debian for nearly seven years, I think it will suit your task very well. Besides, I find it much more convenient to manage than Red Hat based distributions (such as Scientific Linux, Fedora or CentOS).
EDIT: Ubuntu (which another poster has suggested) is essentially an advanced Debian customization towards desktop use. Ubuntu heavily relies on Python scripting and generally consumes more resources than Debian. I believe that original Debian fits the job you described better.
It doesn't sound like you have demanding requirements at all, so I'd probably go with something easy to set up. I believe Ubuntu is pretty good in this regard.
You might also want to look into VNC, which is a bit like a free, cross-platform Remote Desktop.
CentOS - a free rebuild of Red Hat Enterprise Linux, which is the most common server Linux distribution.
I have been using Debian for very similar purposes. It too has a GUI application install manager (however, not everything I've installed was available through the manager, so I just used the command line).
I've also been using Red Hat at work on my host development machine. I might consider Fedora for a home server, as there appears to be lots of support on the web for Red Hat/Fedora.
BTW, I use Windows for most things and just VNC onto the Linux machine.