I've been looking for a decent example to answer my question, but I'm not sure if it's possible at this point.
I'm curious whether it's possible to upload an image (or any file) and stream it to a separate server? In my case I would like to stream it to imgur.
I ask this because I want to avoid the bandwidth hit of having all the files come to the actual Node.js server and then uploading them from there. Again, I'm not sure if this is possible or if I'm reaching, but some insight or an example would help a lot.
Took a look at Binary.js, which may do what I'm looking for, but it's IE10+, so no dice with that...
EDIT: based on a comment that this was vague.
When the file is uploaded to the Node server, it takes a bandwidth hit. When Node takes the file and uploads it to the remote server, it takes another hit (if I'm not mistaken). I want to know if it's possible to pipe it to the remote service (imgur in this case) and just use the Node server as a liaison. Again, I'm not sure if this is possible, which is why I'm attempting to articulate the question. I'm attempting to reduce the amount of bandwidth and storage space used.
I want to create a tool with which we can administer the server. There are two questions within this question:
To administer the access/hit rate of a server. That is, to calculate how many times the server has been accessed in a particular time period, and then maybe generate some kind of graph to demonstrate the load at a particular time on a particular day.
However, I don't have any idea how I can gather this information.
A pretty vague idea is to:
Use a watch over the access log (in the case of Apache), then count the number of times the notification occurs and note down the time simultaneously.
Parse the access.log file every time and then generate the output (but the access.log file can be very big, so I'm not sure about this idea).
I am familiar with Apache, and hence the above idea is based on Apache's access log; I don't have any idea about others like nginx etc.
Hence I would like to know if I can use the above procedure, or whether there is another way.
I would also like to know when the server is reaching its limit. The idea is to use top and then show the live results of CPU and RAM usage via C++.
To monitor a web server, the easiest way is probably to use some existing tool like webalizer.
http://www.webalizer.org/
To monitor other things like CPU and memory usage I would suggest snmpd together with some other tool like mrtg. http://oss.oetiker.ch/mrtg/
If you think webalizer's hourly statistics don't sample data often enough, and mrtg's 5-minute sample time would suit you better, it is also possible to provide more data via snmpd by writing an snmpd extension. Such an extension could parse the Apache log file with a rather small amount of code and give you all the graphing functionality for free from mrtg or some other tool that processes SNMP data.
Pardon my ignorance, but I have a few questions that I cannot seem to get answered by searching here or on Google. These questions will seem completely dumb, but I honestly need help with them.
On my Azure website portal there are a few things I am curious about.
How does CPU time apply to my website? I am unaware of how I am using CPU unless this applies to hosting some type of application. I am using this "site" as a form to submit data to my database.
What exactly does "data out" mean? I am allowed 165 MB per day.
What exactly is file system storage? Is this the actual space available on my Azure server to store my project and any other things I might directly host on it?
Last question: how does memory usage apply in this scenario as well? I am allowed 1024 MB per hour.
I know what CPU time and memory usage are in desktop computing, but I am not exactly sure how they apply to my website. I do not know how I will be able to project whether I will go over any of these limits so that I can upgrade my site.
How does CPU time apply to my website? I am unaware of how I am using CPU unless this applies to hosting some type of application. I am using this "site" as a form to submit data to my database.
This is CPU time used by your code. If you use a WebSite project (in ASP.NET) you may want to do PreCompilation for your WebSite project before deploying to Azure Websites (read about PreCompilation here). Compiling your code is one side of things; the rest is executing it. Each web request that goes to a server handler/mapper/controller/aspx page/etc. uses some CPU time, especially writing to the database and so on. All these actions count toward CPU time.
But exactly how the CPU time is measured is not documented.
What exactly does "data out" mean? I am allowed 165 MB per day.
Every single HTTP request to your site generates a response. All the data that goes out from your website is counted as "data out". Basically, any data that leaves the data center where your website is located counts as data out. This also includes any outgoing HTTP/web requests your code might perform against remote sources, and the data that goes out if you are using an Azure SQL Database that is not in the same data center as your website.
What exactly is file system storage? Is this the actual space available on my Azure server to store my project and any other things I might directly host on it?
Exactly - your project + anything you upload to it (if you allow for example file uploads) + server logs.
Last question: how does memory usage apply in this scenario as well? I am allowed 1024 MB per hour.
Memory is measured similarly to CPU cycles, though my guess is that it is much easier to gauge. Your application lives in its own .NET App Domain (check this SO question on AppDomain). It is relatively easy to measure memory usage for an App Domain.
Just got invited to put.io... It's a service that takes a torrent file (or a magnet link) as input and makes a static file available for download from its own server. I've been trying to understand how a service like this works.
It can't be as simple as downloading the torrent and serving it via a CDN... can it? Because the speeds it offers seem insanely fast to me.
Any idea about the bandwidth implications of (or the amount used by) the service?
I believe services like this are typically just running one or more BitTorrent clients on beefy machines with a fast link. You only have to download the torrent the first time someone asks for it; then you can cache it for the next person who requests it.
The bandwidth usage is not unreasonable: since you're caching the files, you actually end up using less bandwidth than if you were to, say, simply proxy downloads for people.
I would imagine that using a CDN would not be very common; there's a certain overhead involved in that. You could possibly promote files out of your cache to a CDN once you're certain that they are, and will stay, popular.
The service I was involved with simply ran 14 instances of libtorrent, each on a separate drive, serving completed files straight off those drives with nginx. Torrents were requested from the web front end and prioritized before being handed over to the downloader. Each instance would download around 70 or so torrents in parallel.
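The "download once, cache for the next person" pattern described above boils down to a memoized fetch. A minimal sketch, using a stand-in fetchTorrent callback in place of a real BitTorrent client:

```javascript
// Cache keyed by the torrent's info-hash: the first request pays the
// download cost, every later request is served from the cache.
const cache = new Map();

function getFile(infoHash, fetchTorrent) {
  if (!cache.has(infoHash)) {
    cache.set(infoHash, fetchTorrent(infoHash)); // download once
  }
  return cache.get(infoHash); // later hits cost no swarm bandwidth
}
```

A real service would cache to disk with an eviction policy rather than in memory, but the bandwidth argument is the same: popular files are fetched from the swarm once, not per user.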
I am in the process of creating a site which enables users to upload audio. I just figured out how to use ffmpeg with PHP to convert audio files (from WAV to MP3) on the fly.
I don't have any real experience with ffmpeg and I wanted to know what's the best way to convert the files. I'm not going to convert them upon page load, I will put the conversions in a queue and process them separately.
I have queries about how best to process the queue. What is a suitable interval to convert these files without overloading the server? Should I process files simultaneously or one by one? How many files should I convert at each interval to allow the server to function efficiently?
Server specs
Core i3 2.93GHz
4GB RAM
CentOS 64-bit
I know these questions are very vague, but if anyone has any experience with a similar concept, I would really love to hear what works for them and what common problems I could face on the road ahead.
Really appreciate all the help!
I suggest you use a work queue like beanstalkd. When there is a new file to convert, simply place a message into the queue (the filename, maybe). A daemon that acts as a beanstalkd client fetches the message and converts the audio file accordingly (the daemon can be written in any language that has a beanstalkd library).
We have a dedicated godaddy server, and it seemed to grind to a halt when we had users downloading only 3MB every 2 seconds (this was over about 20 HTTP requests).
I want to look into database locking etc. to see if that is a problem - but first I'm curious as to what a dedicated server ought to be able to serve.
To help diagnose the problem, host a large file and download it. That will give you the transfer rate that the server and your web server can cope with. If the transfer rate is poor, then you know it's the network, the server, or the web server.
If it's acceptable or good, then you know the problem is in the means you have of generating those 3MB files.
Check, measure, and calculate!
PS. Download the file over a fast link; you don't want the bottleneck to be your 64kbps modem :)
A lot depends on what the 3MB is. Serving up 1.5 MB/s of static data is way, way, way within the bounds of even the weakest server.
Perhaps godaddy does bandwidth throttling? 60MB of downloads every 2 seconds might trigger some sort of bandwidth protection (either to protect their service, or to protect you from being overcharged, or both).
Check netspeed.stanford.edu from the dedicated server and see what your inbound and outbound traffic is like.
Also make sure your ISP is not limiting you to 10 Mbit/s (godaddy by default limits to 10 Mbit/s and will set it to 100 Mbit/s on request).
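That default cap may matter here. Taking the question's numbers at face value (assuming "3MB every 2 seconds" is the aggregate transfer), the observed load already exceeds a 10 Mbit/s port:

```javascript
// Back-of-the-envelope check of the question's numbers against a
// 10 Mbit/s port cap, using 1 MB = 8 Mbit for round numbers.
const mbEvery2s = 3;
const mbitPerSec = (mbEvery2s / 2) * 8; // MB/s converted to Mbit/s
console.log(mbitPerSec); // 12 -- already above a 10 Mbit/s limit
```

So if the server is still on the 10 Mbit/s default, the port itself could be the bottleneck before any database locking comes into play.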