Does Linux server consume less power than desktop edition? [closed]

I was searching for benchmarks comparing the Linux server edition with its desktop counterpart. All I could find were comparisons between different kinds of servers, mainly Linux and Windows.
Anyway, my question is whether the Linux server edition, Ubuntu 14.04 to be more specific, performs better and consumes less power than the desktop version.
If it is relevant for the answer, the application I want to deploy is just a simple REST service for my mobile app. I am the only client of the service, so I want to estimate how much my energy bill will grow and compare it to shared hosts.
The machine I will (or will not) deploy my server on is a Samsung NP-R540 with a dual-core Intel® Core™ i3 CPU M 380 @ 2.53 GHz and 4 GB of RAM.
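For a sense of scale, the kind of single-user REST service described above can be tiny; here is a minimal sketch using only the Python standard library, where the /status path and the JSON payload are made-up placeholders:

```python
# Minimal sketch of a single-user REST endpoint; runs fine on a headless
# server install since it needs only the Python standard library.
import json
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        if self.path == "/status":  # hypothetical endpoint for illustration
            body = json.dumps({"ok": True}).encode()
            self.send_response(200)
            self.send_header("Content-Type", "application/json")
            self.send_header("Content-Length", str(len(body)))
            self.end_headers()
            self.wfile.write(body)
        else:
            self.send_error(404)

if __name__ == "__main__":
    HTTPServer(("0.0.0.0", 8080), Handler).serve_forever()
```

A service this small sits idle most of the time, so the OS and hardware configuration dominate the power draw, which is what the answers below focus on.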

In most regards, yes.
A system without a GUI (guessing you are not going to slap an X server on top of it but run it purely from the CLI) generally has a lower power consumption than a full-blown desktop machine.
Primarily because you do not need to drive a power-hungry GPU that emits heat.
The same goes for sound.
A general tip for using a normal PC as a server is to remove any discrete graphics card if possible and use only the built-in one. If the box has no built-in graphics, try to get hold of a passively cooled graphics card.
Disable everything you are not going to need in the BIOS.
For example:
- Floppy controllers
- IDE controllers (if you have both IDE and SATA and will not be using IDE drives)
- Built-in sound
- PS/2 controllers (if present, and you connect the keyboard via USB)
- FireWire (if not used)
- Fake RAID (if not used)
Furthermore, remove any unnecessary LED lights (a power LED indicator may be nice, though), and perhaps extra fans if they are mounted solely to cool down a gaming graphics card.
Needless to say, a server version of a Linux distribution is smaller, leaving out the packages that make up the GUI environments and their associated services.
That leaves a smaller footprint both on disk and in power consumption, since you won't have 500 packages lying around that are updated needlessly.
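If you want to see that package footprint for yourself, a rough sketch like the following (assuming a Debian/Ubuntu system where dpkg-query is available) counts what is installed, so you can run it on a server install and a desktop install and compare:

```python
# Rough sketch: count installed packages on a Debian/Ubuntu system.
# Run it on a server install and a desktop install to compare footprints.
import subprocess

def installed_package_count() -> int:
    result = subprocess.run(
        ["dpkg-query", "-W", "-f", "${Package}\n"],
        check=True, capture_output=True, text=True,
    )
    return len(result.stdout.splitlines())

if __name__ == "__main__":
    print(f"{installed_package_count()} packages installed")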

I think it largely depends on what you run. The default server and desktop distributions will have different services and even the kernels might be optimised differently. You can use a tool like powertop to analyse what exactly is consuming power and optimise accordingly.
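Besides powertop, on a laptop like the one in the question you can sample the battery's reported discharge rate and compare the two installs while idle. A minimal sketch, assuming the battery exposes power_now in microwatts under /sys/class/power_supply/BAT0 (some kernels expose current_now/voltage_now instead) and that the machine is running on battery:

```python
# Samples the battery's reported discharge rate from sysfs and averages it.
# Assumes /sys/class/power_supply/BAT0/power_now exists (microwatts) and
# that the laptop is unplugged, so the reading reflects actual consumption.
import time
from pathlib import Path

POWER_NOW = Path("/sys/class/power_supply/BAT0/power_now")

def average_draw_watts(samples: int = 30, interval: float = 2.0) -> float:
    readings = []
    for _ in range(samples):
        microwatts = int(POWER_NOW.read_text().strip())
        readings.append(microwatts / 1_000_000)  # convert to watts
        time.sleep(interval)
    return sum(readings) / len(readings)

if __name__ == "__main__":
    print(f"Average draw: {average_draw_watts():.2f} W")
```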

Related

Why do I need to choose my processor architecture when downloading an application for Linux? [closed]

Isn't the operating system an abstraction on top of the hardware, making hardware architectures irrelevant for software running on the same operating system?
If so, why do I need to choose my processor architecture (e.g. ARM or amd64) when downloading Node.js, for example?
Different platforms abstract away different things:
Java/WASM abstract away CPU architecture, memory model, device access, terminal output and file access.
Any program can run anywhere.
Linux/Windows abstract away device access, terminal output and file access.
Any program built for that CPU and ABI can run.
DOS abstracts away terminal output and file access.
Any program built for that CPU and ABI that includes drivers for devices can run.
BIOS abstracts away terminal output.
Any program built for that CPU and ABI that includes device drivers and file system drivers to load its own data can run.
You need to account for everything that is not abstracted away, and on Linux that includes the CPU architecture.
It's better than DOS where you additionally needed to make sure your program supported your sound card, but not as convenient as Java where a single Android app can run on both x86 and arm64.
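As a concrete illustration of why the choice matters, here is a minimal sketch that maps the local machine's reported architecture to a plausible Node.js tarball name; the version string, the mapping and the file-name pattern are illustrative assumptions rather than an official API, so check nodejs.org/dist for the real names:

```python
# Minimal sketch: pick a Node.js build name based on the local CPU
# architecture. The version and file-name pattern are assumptions for
# illustration; the real artifact names live under nodejs.org/dist.
import platform

ARCH_MAP = {
    "x86_64": "x64",      # typical desktop/server PCs
    "aarch64": "arm64",   # 64-bit ARM boards and servers
    "armv7l": "armv7l",   # 32-bit ARM boards
}

def node_tarball(version: str = "v18.19.0") -> str:
    machine = platform.machine()
    arch = ARCH_MAP.get(machine)
    if arch is None:
        raise RuntimeError(f"No prebuilt Node.js binary known for {machine}")
    return f"node-{version}-linux-{arch}.tar.xz"

if __name__ == "__main__":
    print(node_tarball())
```

A binary picked for the wrong architecture simply will not run, which is exactly the part the operating system does not abstract away.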
You've probably heard programs can be compiled to "machine code." These are low-level instructions for the hardware, different for every type of machine (and are influenced not only by CPUs but also by peripherals).
The Node.js interpreter is written in C and C++ and compiled to machine code. This compiled code is only valid on a particular type of machine, so you need to download the version of the Node.js interpreter built for yours.
You can write pure JS code to run on Node.js, and it will usually not depend on the machine type; it will be "universal" to an extent. But as soon as the JS code uses native code (C, C++, and others) for performance reasons, as is often the case for specific modules and libraries, that code has to be compiled for a specific machine, and the JS module becomes bound to that machine as well.
The operating system has little to no influence on all of this. It basically says how the machine code will be written into a file (e.g. which file format to use), and abstracts access to hardware (such as the disk drives) in a way that this code can use.
Historically, there have been attempts to create operating systems which would completely abstract the underlying machine in a way which makes programs completely portable. They usually do it by disallowing user-written machine code (i.e. user-compiled programs) to execute, and only allow interpreted code to run.
The operating system installed must support the processor(s), data buses and memory addressing of the hardware.
At a systems level, in kernel code and device drivers it is impossible to ignore details of the hardware architecture. Applications typically sit a level above all this but are still dependent on the abstraction layers below.
Incidentally, Node.js is written in part in C and C++, which take advantage of the performance improvements offered by 64-bit processing. Wringing out optimised performance has been a key objective of Node.js's design; it has been refactored more than once to that end.

Remotely install Linux on Windows xp using TeamViewer [closed]

Our customer has 800+ computers running Windows XP distributed across the country. Each computer can be accessed using TeamViewer. The goal is to replace XP with a Linux distribution remotely.
Does anybody know if this is possible, and where to start?
Thanks!
PXE is your only realistic hope:
Some on-site assistance is needed to press F12 at the BIOS stage, before Windows XP boots:
A) On PC-A, set up a DHCP server that refers DHCP clients to a PXE server, which downloads the Linux ISO from a web server (of course, all three can be the same Windows machine in the same LAN segment on-site).
B) Reboot PC-B on-site and press F10-F12 to choose the boot options.
C) Then choose network boot (PXE boot).
Further reading: https://www.vercot.com/~serva/
Guide: https://youtu.be/nnxgFpUr1Og?t=39
Note: make sure you have enabled proxyDHCP and not the DHCP server mode.
I would try something like these:
Clonezilla, which works by replicating a previously prepared disk image to one or more computers booted inside a network segment
Cobbler, which works like a provisioning server for Linux based machines
Of those options, I have used Clonezilla with success. As long as the prepared base image doesn't change too frequently, the main time-consuming tasks are configuring the Clonezilla server and building that seed image.
I did a basic test of Cobbler and it worked fine in my environment; it is the option better suited to dealing with changing requirements.
Please also note that both options require some network configuration, like DHCP server settings that work with the PXE protocol.
Also, for your requirement, someone, a human being, would be needed to execute one or more of these tasks:
Properly enable network booting in the BIOS of each of the 800+ machines, unless it has already been done before
Boot the machines to install the new operating system
The network booting option, based on the PXE specification, must be supported by the motherboards of those machines and have a higher boot priority than other devices, like CD drives, hard drives, etc.
Another thing to consider for the two options I'm suggesting is how those 800+ machines are distributed across the country. The more dispersed they are, the more cumbersome this task will be. Conversely, if the machines are concentrated in only a few places, the task becomes more feasible; for example, you could prepare and test a server, computer or laptop, carry it to each of those few places, and install the new operating system there.
Regarding the option of booting over the public Internet to reach a remote deployment server, I don't know how that could be done; in fact, it would be something quite interesting to learn about. If something like that is possible, another variable to consider is the hardware compatibility of the destination machines, because as far as I know, protocols like PXE rely on multicast or broadcast in the local network segment, and I presume those 800+ machines don't have recent motherboards with firmware advanced enough to support more modern boot protocols.
That's all for now.

Can a virtual machine be as efficient as a hardware based OS? [closed]

I am forgoing dual booting for the ease of a virtual machine and I have a few questions which I am unable to find answers to online. Could someone answer these, or at least point me in the right direction to find out details about how a virtual machine might be able to utilize full hardware power?
I am going to be running Windows 8 (natively) and using a VM to run a flavor of Linux (probably Ubuntu 12.04.2 if it matters).
(1) Will my virtual machine be able to run my Fortran code in parallel?
- I have an Intel Core i7 2.4 GHz processor with 4 physical cores and hyper-threading up to 8 logical cores. If I run code in the VM using MPI/pthreads/OpenMP, will I be able to utilize the 4 physical cores? What about the 8 hyper-threaded cores?
- Will there be a slowdown from the 2.4 GHz? I assume there will be some, since the cores also need to run Windows, but how badly will it be affected?
(2) I have a dedicated GPU (GeForce GTX 770M); will I be able to use it for CUDA-based (or OpenCL, or any kind of GPGPU) programming?
(3) I am starting out with 4 GB of RAM, but plan on upgrading to 16 GB. I know that the RAM will affect the VM, but will it be the dominant factor in VM performance? Once I upgrade to the full 16 GB of RAM, will I be able to consider any other inefficiencies negligible?
Thanks for the help ahead of time. Again, even pointing me in the right direction for reading will help if full answers cannot be given.
(1) VMWare supports multiple cores and hyperthreading. You can choose how many to assign to the VM. The physical processor isn't slowed down, but obviously your host OS will be using some CPU too, and the virtualization has an overhead (albeit small on modern CPUs).
(2) You'll need to check that out for the particular virtual machine software and version.
(3) RAM works in a pretty obvious way: the host OS uses some, the guest OS some, and VMWare's overhead is an order of magnitude smaller. 4 GB is enough to run Ubuntu in Windows, but if you want multiple VMs or are running processes that use gigabytes, add a corresponding amount of RAM.
I've used VMWare Workstation for years, plus VirtualBox recently. I'd say that for high end or leading edge stuff, VMWare is still the better choice. For easier tasks VirtualBox is nice.
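One quick sanity check, once the VM is set up, is to ask the guest OS how many logical CPUs and how much memory it actually sees. This small sketch is Linux-specific (it reads /proc/meminfo) and assumes it is run inside the guest:

```python
# Run inside the guest: report how many logical CPUs and how much memory
# the VM exposes. Reading /proc/meminfo is Linux-specific.
import os

def guest_resources():
    cpus = os.cpu_count()  # vCPUs assigned to the VM, not the host's total
    with open("/proc/meminfo") as f:
        mem_kib = int(f.readline().split()[1])  # first line is MemTotal, in kB
    return cpus, mem_kib / (1024 * 1024)

if __name__ == "__main__":
    cpus, mem_gib = guest_resources()
    print(f"Guest sees {cpus} logical CPUs and {mem_gib:.1f} GiB of RAM")
```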

Reprogram a device [closed]

Is it possible to take a device, say, a PDA, wipe all software off of it and install your own?
For example, could I take a Mac terminal program and install it onto a PDA (with Wi-Fi) and do SSHing and such?
And what language would/could it be in?
The language this could be in is not really the issue; it is, mostly, a matter of system compatibility.
Software applications do not run in a vacuum: they rely on the underlying operating system or for the least some form of virtual environment or a runtime such as Java, Silverlight etc.
Before one can re-purpose a PDA or other similar device, he/she needs to install some system or host software of sorts on it, and doing so can be rather complicated because of the proprietary and dedicated nature of many of the hardware subsystems therein.
General purpose systems such as Linux or Windows can be installed on various hardware platforms (including appliances) provided that:
- said hardware subsystems (CPU, keyboard/input devices, display device, storage devices...) comply with some specification, and
- the corresponding device drivers are available.
In the case of PDAs, GPS appliances, smartphones and various other hardware platforms (and while many such platforms run on custom versions of Windows, Linux, Android etc.), there are typically enough proprietary differences, custom hardware and other deviations from specifications that installing alternative operating systems or runtimes is a real challenge. Lack of documentation can also be a limiting factor.
Many such devices, however, host some form of runtime atop the system (Java in many cases), and rather than installing a whole alternative operating system, it is possible, in some cases, to install and run applications written in these hosted languages.
Even then, uninstalling existing applications (say, to make room) and installing new ones may be a challenge as well. Difficulties arise because of:
- purposeful "locking in" of the appliances (the manufacturers purposely prevent such re-purposing, using various forms of encryption, undocumented features and the like)
- intrinsic limitations of the runtime (whereby only a subset / sandboxed version of the language features is available).
In short, the specific approach for re-purposing appliances depends on:
the specific appliance/device: make, version etc.
the intended purpose: which particular uses are desired for the new device
the technical expertise and patience of the implementers ;-)
In general this is far from trivial: beginners beware! (*)
(*) BTW, the relative lack of sophistication apparent in the question seems to indicate the OP may not yet have the necessary skills for this kind of "hacking". It can, however, be a very fun and rewarding learning experience.
No, but you can probably find a PDA terminal and do SSH with it.
Macs and PDAs have different architectures (their processors speak different languages).

Linux distribution for a programmer's private server [closed]

I'm going to get a low-end old (CHEAP!) computer to run non-stop as a little server for Subversion, Mercurial, Trac and maybe a little other things. It's 99% for myself - performance isn't a concern.
It'll probably have a 1 GHz P3/P4/Celeron, 256 MB SDRAM, 30 GB IDE HDD or something like that, any video card so I can hook up a monitor.
I could go about setting up Windows Server on it, but I feel that's overkill. All I need is to access my code from my laptop, desktop and maybe remotely, plus a wiki, bug tracker, etc., so I feel that a light Linux distribution will be more than enough.
I want to have a GUI, preferably Xfce, but I don't mind IceWM or any other light GUI; it doesn't have to be pretty, I just don't like the CLI as a Windows user.
However, the advantage of Windows would be that I already have tons of experience setting it up, can use Remote Desktop to get to it directly and, AFAIK, have access to Windows Home Server, which "just works" - unless you can suggest a distro made for home servers.
So the question is: what Linux distribution do you think is best for my needs? Or should I just strap Windows Home Server on it?
I would suggest Ubuntu. Setting up/installing applications is just a breeze with apt-get.
Having used Debian for nearly seven years, I think it will suit your task very well. Besides, I find it much more convenient to manage than Red Hat based distributions (such as Scientific Linux, Fedora or CentOS).
EDIT: Ubuntu (which another poster has suggested) is essentially a Debian derivative customized for desktop use. Ubuntu relies heavily on Python scripting and generally consumes more resources than Debian. I believe that plain Debian fits the job you described better.
It doesn't sound like you have demanding requirements at all, so I'd probably go with something easy to set up. I believe Ubuntu is pretty good in this regard.
You might also want to look into VNC, which is a bit like a free, cross-platform Remote Desktop.
CentOS - a free rebuild of Red Hat Enterprise Linux, which is the most common server Linux distribution.
I have been using Debian for very similar purposes. It too has a GUI application install manager (however, not everything I've installed was available through the manager; in those cases I just used the command line).
I've also been using Red Hat at work on my development machine. I might consider Fedora for a home server, as there appears to be lots of support on the web for Red Hat/Fedora.
BTW, I use Windows for most things and just VNC onto the Linux machine.
