Monitoring Bandwidth on your server [closed] - linux

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
I used to be on a shared host where I could use their standard tools to look at bandwidth graphs.
I now have my sites running on a dedicated server and I have no idea what's going on. :P sigh
I have installed Webmin on my Fedora Core 10 machine and I would like to monitor bandwidth. I was about to set up the bandwidth module and it gave me this warning:
Warning - this module will log ALL network traffic sent or received on the
selected interface. This will consume a large amount of disk space and CPU
time on a fast network connection.
Isn't there anything I can use that is more lightweight and suitable for a noob? 'cough' Free tool 'cough'
Thanks for any help.

vnStat is about as lightweight as they come. (There are plenty of front ends around if the graphs from the command-line tool aren't pretty enough.)
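For what it's worth, a minimal sketch of getting it going (assuming the interface is eth0 and a vnStat 1.x package from your distro's repos):

    yum install vnstat
    vnstat -u -i eth0    # create/update the database for eth0 (1.x syntax)
    vnstat -i eth0 -d    # daily totals
    vnstat -i eth0 -m    # monthly totals
    vnstat -i eth0 -l    # live traffic rate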

I use munin. It makes pretty graphs and can set up alerts if you're so inclined.
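If it helps, a rough sketch of a munin install on a Red Hat-style box (package names and the output path are assumptions; check your repos):

    yum install munin munin-node   # master + per-host agent
    service munin-node start       # the agent that collects the stats
    chkconfig munin-node on
    # the master runs from cron and writes its HTML/graphs
    # under /var/www/html/munin by default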

Unfortunately this is not for *nix, but I have an automated process that analyses my IIS logs: it moves them off the web server and analyses them with Web Log Expert. Provided the appropriate counter is turned on, it gives me the bandwidth consumed by every element of the site.
The free version of their tool won't allow scripting but it does the same analysis. It supports W3C Extended and Apache (Common and Combined) log formats.

Take a look at MRTG. It's fairly easy to set up: a simple cron job collects SNMP stats from your router, and it shows some reasonable, simple graphs. Data is stored in an RRD database (see the MRTG page for details) and can be mined for other uses as well.
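A minimal sketch of that setup, assuming your router speaks SNMP with the community string "public":

    # Walk the router via SNMP and generate a config
    cfgmaker --global 'WorkDir: /var/www/mrtg' \
             --output=/etc/mrtg/mrtg.cfg public@router.example.com
    indexmaker --output=/var/www/mrtg/index.html /etc/mrtg/mrtg.cfg
    # Collect every five minutes from cron:
    # */5 * * * * env LANG=C /usr/bin/mrtg /etc/mrtg/mrtg.cfg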

Related

Aim of using puppet, chef or ansible [closed]

Closed. This question is opinion-based. It is not currently accepting answers.
Closed 5 years ago.
I have read many articles about Configuration Management, but I don't really understand what the configuration is applied to.
Is it the software itself? Like changing hosts in a conf file, etc.?
Or the app's "host"? In that case, what is the aim of using this kind of software, given that we generally use "ready to use" Docker containers?
You spent hours setting up that server, configuring every variable, installing every package, updating config files. You love that server so much that you named it 'Lucy'.
Tomorrow you get run over by a bus. Will your coworkers know every single tiny change you made to that server? Unlikely. They will have to spend hours digging into that server trying to figure out what you've done and why you've done it.
Now let's multiply this by hundreds or even thousands of servers. Doing this manually is infeasible.
That's where config management systems come in.
They give you documentation of your systems' configuration by their very nature. Playbooks/manifests/recipes/'whatever term they use' become the authoritative description of your servers. Unlike a readme.txt, which might not always match the real world, these systems ensure that what you see there is what you actually have on your servers.
It also becomes relatively simple to duplicate a server configuration over and over, to potentially limitless scale (Google, Facebook, Microsoft and every other large company works that way).
You might think of a "golden image" approach, where you configure everything, then take a snapshot and keep replicating it over and over. The problem is that it's difficult to compare two such images: you just have binary blobs. Whereas with most config management systems you can use a traditional VCS and easily diff the various versions.
The same principle applies to containers.
Don't treat your servers as pets, treat them as cattle.
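As a concrete illustration, here is a minimal, hypothetical Ansible playbook (the "webservers" group and the nginx package are made up for the example). It reads as documentation, it is idempotent, and it diffs cleanly in a VCS:

    cat > webserver.yml <<'EOF'
    ---
    # Declares what every host in the "webservers" group must look like
    - hosts: webservers
      become: yes
      tasks:
        - name: nginx is installed
          package:
            name: nginx
            state: present
        - name: nginx is running and enabled at boot
          service:
            name: nginx
            state: started
            enabled: yes
    EOF
    ansible-playbook -i inventory webserver.yml   # safe to re-run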

Tools to collect server information during load testing [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
We don’t allow questions seeking recommendations for books, tools, software libraries, and more.
Closed 7 years ago.
I will be performing a distributed load test using JMeter. I am using the JMeter extras plugin to output some nice graphs but all of these graphs have to do with response times, response latency, throughput, etc. I want to also measure CPU, memory used/free, disk usage/latency, and network utilization, maybe some others.
I will be testing a web application that is running on Ubuntu 14.04.
What tools or commands can I use to gather these stats at various points during the load test and either output the raw data or averages?
Thank you for any information you can provide.
Free and great for high level KPIs. Works within JMeter:
http://jmeter-plugins.org/wiki/PerfMon/
Free / Paid and great for detailed low level analysis (stand alone tool):
http://newrelic.com
We use New Relic ourselves and are very satisfied!
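For the PerfMon route, a sketch of the server side (the ServerAgent version number is an assumption; grab the current bundle from jmeter-plugins.org):

    # On each server under test, start the metrics agent (default port 4444)
    unzip ServerAgent-2.2.1.zip -d serveragent
    cd serveragent && ./startAgent.sh --udp-port 4444 --tcp-port 4444
    # Then add a "jp@gc - PerfMon Metrics Collector" listener in JMeter,
    # point it at host:4444 and select CPU / Memory / Disks I/O / Network I/O.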
I am using Cacti for that; it is relatively easy to install and configure (on CentOS it can be installed with yum from the EPEL repository). It uses SNMP to get network, CPU, memory, load, etc. from the various target servers. To monitor disk I/O there is a great template (https://github.com/markround/Cacti-iostat-templates); if you follow their instructions step by step it will work (at least on CentOS/Red Hat).
What I like about Cacti is that you can also define your own data sources. For example, you can ask Cacti to execute a shell script on your server that parses your access.log (or any other application log file) and returns metrics like throughput (number of requests, number of bytes) or processing time, and then get these plotted side by side with the device utilization metrics.
Setting the whole thing up will probably take you a day; it is not very intuitive how to define your own data sources, for example. Also you have to enable SNMP on the box, which is easy if you strip /etc/snmp/snmpd.conf down to the bare minimum. It is a great tool for capacity management.
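A sketch of such a data-input script, assuming an Apache common/combined-format log where field 10 is the response size; Cacti can graph the two outputs as COUNTER data sources (it plots the rate of change):

    #!/bin/sh
    # Hypothetical Cacti data-input script: cumulative request and byte counts
    LOG=/var/log/httpd/access_log
    awk '{ n++; b += $10 } END { printf "requests:%d bytes:%d\n", n, b }' "$LOG"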

Is there any way of transferring an application running on one system to another system? [closed]

Closed. This question does not meet Stack Overflow guidelines. It is not currently accepting answers.
This question does not appear to be about programming within the scope defined in the help center.
Closed 9 years ago.
I want to know: is there any way of migrating an application (say Firefox) from one system to another?
If so, please let me know...
Example: suppose I have two systems, A and B, and Firefox is running on A. I want to transfer that running application (Firefox) to B as it is.
I'm thinking that we could migrate the process state of the application. Why don't people consider that, instead of simply downvoting a question they don't know the answer to? That's not the way we should respond to a problem asked in our Stack community. Anyhow, people who are good at operating systems: please do consider whether it is possible to transfer a process's state to another machine, so that we get the same image on the other system. If so, please let me know. Thanks in advance.
What you want is called process migration, and is not easily possible on Linux in general.
However, if you design your application carefully and use some application checkpointing mechanism, it might be possible (in some very limited way). Perhaps using the Berkeley Lab Checkpoint/Restart (BLCR) library could help.
Don't expect to migrate processes of applications as complex as Firefox.
Read also about continuation passing style & virtual machines. It is relevant. Reading Queinnec's Lisp in Small Pieces & the famous SICP should also help a lot. See also Continuation-Passing C.
And in practice, you might be able to get process migration for some of your own applications (using few external libraries, or using them "atomically" between checkpoints) if you design your application from the start with process migration in mind.
PS. Feel free to ask me more e.g. by email, by citing this question, explaining the actual application you want to migrate and what you have read and tried. This subject is surprisingly interesting and difficult (you probably could make a PhD on it).
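To give a feel for it, a sketch of BLCR-style checkpointing (this only works for processes started under the BLCR library, and certainly not for something like Firefox):

    cr_run ./myapp &                 # run the app with the BLCR library preloaded
    cr_checkpoint -f myapp.ckpt $!   # snapshot the process state to a file
    # copy myapp.ckpt (plus the binary and files it needs) to machine B, then:
    cr_restart myapp.ckpt            # resume from the snapshot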

How can I load test my website on Azure? [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 9 years ago.
I need to measure how many concurrent users my current Azure subscription will accept, to determine my cost per user. How can I do this?
This is quite a big area within capacity planning of a product/solution, but effectively you need to script up a user scenario, using a tool like JMeter (VS2012 Ultimate has a similar feature), then fire off lots of requests to your site and monitor the results.
Visual Studio can deploy your Azure project in a profiling mode, which is great for detecting the bottlenecks in your code for optimisation. But if you just want to see how many requests per role it takes before something breaks, a tool like JMeter should work.
There are also lots of products on offer, like http://loader.io/, which is great for not having to worry about bandwidth issues, scripting, etc.; it should just work.
If you do roll your own manual load-testing scripts, please be careful to avoid false negatives or false positives. By this I mean that if your internet connection is slow and you send out millions of requests, the bandwidth of your connection may make your site appear VERY slow, when in fact it's not your site at all...
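If you go the JMeter route, a minimal non-GUI run looks something like this (the test plan name is a placeholder):

    # Run headless, ideally from a VM in the same data center as the site
    jmeter -n -t loadtest.jmx -l results.jtl   # -n non-GUI, -t plan, -l results
    # Increase the thread count between runs and watch where response times degrade.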
This has been answered numerous times. I suggest searching [Azure] 'load testing' and start reading. You'll need to decide between installing a tool to a virtual machine or Cloud Service (Visual Studio Test, JMeter, etc.) and subscribing to a service (LoadStorm)... For the latter, if you're focused on maximum app load, you'll probably want to use a service that runs within Azure, and make sure they have load generators in the same data center as your system-under-test.
Announced at TechEd 2013, the Team Foundation Test Service will be available in Preview on June 26 (coincident with the //build conference). This will certainly give you load testing from Azure-based load generators. Read this post for more details.

What are security risks when running an Erlang cluster? [closed]

Closed. This question is off-topic. It is not currently accepting answers.
Closed 10 years ago.
It's more a general question in terms of what one has to look out for when running an Erlang system. For example, I know of atom exhaustion attacks. What are other possible attacks and how to make your system more secure?
Running a cluster means the nodes share a cookie, and anyone who knows the cookie can attach to any of your nodes (assuming they can reach your network) and execute any arbitrary Erlang command or program.
So my thought is that clustered means that there are at least two files (and some number of people) who know what the cookie is (or where to find it).
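For reference, the cookie mechanics look like this (host names and the cookie value are placeholders):

    # The cookie lives in ~/.erlang.cookie on each node; lock it down:
    chmod 400 ~/.erlang.cookie
    # Or pass one explicitly when starting a node:
    erl -name a@host1.example.com -setcookie SuperSecretValue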
I would be afraid of bugs in the applications deployed on your system. A good example from OTP is the SSL app, which was completely rewritten 3 years ago. The next would be the HTTP client (memory leaks); xmerl was never a strong part of the system.
Also, be careful with third-party Erlang apps: new web servers (probably better than inets, but if you do not need all the performance, consider the stable Yaws), ejabberd (a number of techniques hitting the OS directly), and Riak (interaction with the filesystem, ulimit, iostat, etc.).
First of all, you want to have your cluster in a closed VPN (if the nodes are far apart and perhaps communicate over a WAN). Then, you want to run them on top of hardened Unix or Linux. Another strong idea is to restrict epmd/distribution connections to your cluster, even from someone who has the cookie, by using net_kernel:allow(Nodes). One of the main weaknesses of Erlang VMs (I have come to realise) is memory consumption: an Erlang platform providing service to many users that is NOT protected against DoS attacks is left really vulnerable. You need to limit the number of allowed concurrent connections on the web servers so that you can easily block out some script kiddies in the neighbourhood. Another concern is having a distributed/replicated Mnesia database across your cluster: Mnesia replicates data, but I am not sure that data is encrypted. Lastly, ensure that you are the sole administrator of all the machines in your cluster.
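The net_kernel:allow/1 call mentioned above restricts which nodes may connect at all; a sketch (node names are placeholders):

    # Started on node a; afterwards only b@host2 may establish a connection
    erl -name a@host1.example.com -setcookie SECRET \
        -eval "net_kernel:allow(['b@host2.example.com'])"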
