Uploading large files in JSF

I want to upload a file that is >16GB. How can I do this in JSF?

When using HTTP, you'll face two limitations: one on the client side (the web browser) and one on the server side (the web server). The average web browser (IE/FF/Chrome/etc.) has a limit of 2~4GB, depending on the make/version/platform. You cannot control this from the server side; the end user has to change the browser settings themselves (and sometimes this isn't possible at all). The average web server (Tomcat/JBoss/Glassfish/etc.) in turn has a limit of 2GB. You can configure this, but that still won't and can't remove the limitation in the web browser.
Your best bet is FTP. If you want to do this from a web page, consider an applet that uses Apache Commons Net's FTPClient. There are, by the way, several ready-to-use open source and commercial ones.
You do, however, still need to take into account whether the disk file system on the FTP server side supports files that large. FAT32, for example, has a limit of 4GB per file; NTFS and several *nix file systems can go up to 16EB.
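Stripped of the applet plumbing, the upload itself with Commons Net's FTPClient might look roughly like the sketch below. The host, credentials, and file paths are placeholders, and error handling is kept minimal:

    import org.apache.commons.net.ftp.FTP;
    import org.apache.commons.net.ftp.FTPClient;
    import java.io.BufferedInputStream;
    import java.io.FileInputStream;
    import java.io.InputStream;

    public class LargeFileFtpUpload {
        public static void main(String[] args) throws Exception {
            FTPClient ftp = new FTPClient();
            try {
                ftp.connect("ftp.example.com");            // placeholder host
                if (!ftp.login("user", "password")) {      // placeholder credentials
                    throw new IllegalStateException("FTP login failed");
                }
                ftp.enterLocalPassiveMode();               // friendlier to firewalls/NAT
                ftp.setFileType(FTP.BINARY_FILE_TYPE);     // don't mangle binary data as text
                try (InputStream in = new BufferedInputStream(
                        new FileInputStream("/path/to/huge-file.bin"))) {
                    if (!ftp.storeFile("huge-file.bin", in)) {
                        throw new IllegalStateException("Upload failed: " + ftp.getReplyString());
                    }
                }
                ftp.logout();
            } finally {
                if (ftp.isConnected()) {
                    ftp.disconnect();
                }
            }
        }
    }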

Related

Get whole domain content with wget or other commands in linux?

I would like to copy a client project, but I only have FTP access. Normally I'd do it over SSH, but in this case that's not possible. The problem is the size of the project (nearly 3GB).
Is there a way to copy the project to my server with FTP access only?
The size isn't the problem here. Because of the encryption, an SSH upload produces much more overhead than an FTP upload, so the answer is: of course you can use FTP for file transfers, even large ones. FTP was meant for this.
The more important concern is security. If you normally use SSH for file transfers, you presumably have security in mind (otherwise you'd use FTP, which is faster than SSH). If your provider supports SFTP, you could use that as an alternative.
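For what it's worth, if your FTP client struggles to mirror the whole tree, a rough recursive download with Apache Commons Net's FTPClient (the same library mentioned in the answer above) could look like this. Host, credentials, and paths are placeholders, and error handling is omitted:

    import org.apache.commons.net.ftp.FTP;
    import org.apache.commons.net.ftp.FTPClient;
    import org.apache.commons.net.ftp.FTPFile;
    import java.io.BufferedOutputStream;
    import java.io.File;
    import java.io.FileOutputStream;
    import java.io.IOException;
    import java.io.OutputStream;

    public class FtpMirror {
        public static void main(String[] args) throws Exception {
            FTPClient ftp = new FTPClient();
            ftp.connect("ftp.example.com");                 // placeholder host
            ftp.login("user", "password");                  // placeholder credentials
            ftp.enterLocalPassiveMode();
            ftp.setFileType(FTP.BINARY_FILE_TYPE);
            copyTree(ftp, "/httpdocs/project", new File("project-copy"));  // placeholder paths
            ftp.logout();
            ftp.disconnect();
        }

        // Walk the remote tree and download every regular file,
        // recreating the directory structure locally.
        static void copyTree(FTPClient ftp, String remoteDir, File localDir) throws IOException {
            localDir.mkdirs();
            for (FTPFile entry : ftp.listFiles(remoteDir)) {
                String name = entry.getName();
                if (name.equals(".") || name.equals("..")) {
                    continue;
                }
                String remotePath = remoteDir + "/" + name;
                if (entry.isDirectory()) {
                    copyTree(ftp, remotePath, new File(localDir, name));
                } else if (entry.isFile()) {
                    try (OutputStream out = new BufferedOutputStream(
                            new FileOutputStream(new File(localDir, name)))) {
                        ftp.retrieveFile(remotePath, out);
                    }
                }
            }
        }
    }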

What solutions are there to back up millions of image files and sub-directories on a webserver efficiently?

I have a website that I host on a Linux VPS, and it has been growing over the years. One of its primary functions is to store images/photos, and these image files are typically around 20-40kB each. The way the site is organised at the moment is that all images are stored in a root folder ‘photos’, and under that root folder are many subfolders determined by a random filename. For example, an image with the file name abcdef1234.jpg would be stored in the folder photos/ab/cd/ef/. The advantage of this is that there are no directories with excessive numbers of images in them, and accessing files is quick.
However, the entire photos directory is huge and is set to grow. I currently have almost half a million photos in tens of thousands of sub-folders, and while the system works fine, it is fairly cumbersome to back up. I need advice on what I could do to make life easier for back-ups. At the moment, I back up the entire photos directory each time by compressing the folder and downloading it. It takes a while and puts some strain on the server. I do this because every FTP client I use takes ages to sift through all the files and find the most recent ones by date. Also, I would like to be able to restore the entire photo set quickly in the event of a catastrophic webserver failure, so even if I could back up the data recursively, how cumbersome would it be to have to upload each backup stage by stage?
Does anyone have any suggestions, perhaps from experience? I am not a webserver administrator and my experience of Linux is very limited. I have also looked into CDNs and Amazon S3, but these would require a great deal of change to my site in order to make them work – perhaps I'll use something like this in the future.
Since you indicated that you run a VPS, I assume you have shell access which gives you substantially more flexibility (as opposed to a shared webhosting plan where you can only interact with a web frontend and an FTP client). I'm pretty sure that rsync is specifically designed to do what you need to do (sync large numbers of files between machines, and do so efficiently).
This gets into Superuser territory, so you might get more advice over on that forum.
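For example, an incremental pull of the photos tree with rsync over SSH might look roughly like the snippet below. It is wrapped in Java's ProcessBuilder purely for illustration; in practice you would more likely run the rsync command directly from a cron job. The host and paths are placeholders:

    import java.io.IOException;

    public class PhotoBackup {
        public static void main(String[] args) throws IOException, InterruptedException {
            // -a        recurse and preserve permissions/timestamps
            // --delete  remove files from the backup that no longer exist on the server
            // -e ssh    tunnel the transfer over SSH (requires shell access to the VPS)
            ProcessBuilder pb = new ProcessBuilder(
                    "rsync", "-a", "--delete", "-e", "ssh",
                    "user@vps.example.com:/var/www/photos/",   // placeholder source
                    "/backups/photos/");                        // placeholder destination
            pb.inheritIO();                                     // show rsync's output/progress
            int exitCode = pb.start().waitFor();
            System.out.println("rsync exited with code " + exitCode);
        }
    }

Because rsync only transfers files that are new or changed, repeated runs are far cheaper than compressing and downloading the whole tree each time.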

Sending files to remote server in the quickest manner

I have a separate server that processes the media uploaded to my main, web-facing server. For now I upload files to it using FTP, but the problem with this is that to ensure the files are done uploading I have a timeout running, which adds a delay to the overall processing time. I can't seem to get it to wait less than 5 seconds and still reliably pick up the media, and this delay is no longer acceptable. So:
Is there a better way to implement this cleanly? I've considered sticking with FTP and sending another file after the initial upload to indicate it's done, but then there are two uploads for every upload = expensive. Another option I've considered is implementing a custom server that will just get a content-length header, do some authentication, and then receive the file and kick off the processing as soon as it's ready. Socket programming doesn't seem too intimidating, but I have some worries about sending binary files and different formats; is this a valid concern? Also, are there any other protocols out there I could use to do this, rather than reinvent the wheel? Something like FTP but with a little verification.
I'd be glad for any pointers or tips you can share, thanks!
I suggest you use rsync. It runs over SSH, will move entire directories/hierarchies of files, and does incremental copies; in short, everything you could possibly want.
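That said, if you do decide to roll your own receiver along the lines you describe, binary content isn't really a concern: a TCP socket is just a byte stream, so a simple name-plus-length header works for any format. A minimal single-threaded sketch follows, with no authentication, a placeholder port and directory, and the assumption that the sender writes the header with DataOutputStream.writeUTF/writeLong before the raw bytes:

    import java.io.DataInputStream;
    import java.io.EOFException;
    import java.io.File;
    import java.io.FileOutputStream;
    import java.io.OutputStream;
    import java.net.ServerSocket;
    import java.net.Socket;

    public class MediaReceiver {
        public static void main(String[] args) throws Exception {
            new File("incoming").mkdirs();                        // placeholder target directory
            try (ServerSocket server = new ServerSocket(9090)) {  // placeholder port
                while (true) {
                    try (Socket client = server.accept();
                         DataInputStream in = new DataInputStream(client.getInputStream())) {
                        // Header: file name, then payload length in bytes.
                        // Real code should sanitize the name and authenticate the sender.
                        String fileName = in.readUTF();
                        long length = in.readLong();
                        try (OutputStream out = new FileOutputStream("incoming/" + fileName)) {
                            byte[] buffer = new byte[8192];
                            long remaining = length;
                            while (remaining > 0) {
                                int read = in.read(buffer, 0,
                                        (int) Math.min(buffer.length, remaining));
                                if (read < 0) {
                                    throw new EOFException("connection closed mid-transfer");
                                }
                                out.write(buffer, 0, read);
                                remaining -= read;
                            }
                        }
                        // The declared byte count has fully arrived, so processing can start
                        // immediately, with no polling timeout.
                        System.out.println("Received " + fileName + " (" + length + " bytes)");
                    }
                }
            }
        }
    }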

Migrate data from one server to another

I bought a new server and I want to move all the data (directories, subdirectories, users, passwords, etc.) from my old server to it.
Is there a way to do that?
Thanks,
Do you have physical access to both servers? If so, you can use the dd command to make a clone of the disk from the old server onto the disk that is going into the new server.
In order to do this though, both hard drives have to be installed in one of the servers.
You can also use netcat and dd to clone a disk over a network.
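As a rough illustration of that idea in code, here is a Java stand-in for piping dd into netcat. The device paths, host, and port are placeholders; both sides need root, and neither disk should be mounted while it is being copied:

    import java.io.FileInputStream;
    import java.io.FileOutputStream;
    import java.net.ServerSocket;
    import java.net.Socket;

    public class DiskClone {
        // Run receive() on the new server first, then send() on the old one.

        static void send(String device, String host, int port) throws Exception {
            try (FileInputStream disk = new FileInputStream(device);
                 Socket socket = new Socket(host, port)) {
                disk.transferTo(socket.getOutputStream());  // stream the raw device byte for byte
            }
        }

        static void receive(String device, int port) throws Exception {
            try (ServerSocket server = new ServerSocket(port);
                 Socket socket = server.accept();
                 FileOutputStream disk = new FileOutputStream(device)) {
                socket.getInputStream().transferTo(disk);   // write the stream straight to the new disk
            }
        }

        public static void main(String[] args) throws Exception {
            if (args.length > 0 && args[0].equals("send")) {
                send("/dev/sda", "newserver.example.com", 7000);  // placeholder source device and target host
            } else {
                receive("/dev/sdb", 7000);                        // placeholder destination device
            }
        }
    }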
For the directories and files, use an FTP client from your server if it allows you to; if not, just download all the content to your computer and upload it to the new server.
For the users and passwords, I guess they are in a database. Connect to the database using SSH, Telnet, MysqlAdmin, or any other RDBMS client and export a dump file, then log in to the new server's SQL system and import that dump file.
In any case, you should give more details about both servers so we can help you. For example: are they shared hosting or dedicated machines? What kind of access do you have to them? Knowing their operating system would also help people answer you accurately.
In principle, yes.
If the hardware is similar (just more RAM and disk space, but the same CPU architecture and no special graphics card drivers), you might be able to copy every file and then reinstall the boot loader (the boot loader config usually changes when the hard disk size changes).
Or you can create a list of all services that you use, determine which config files each one uses and then just copy those. Ideally, you shouldn't copy them but compare the old and the new versions and merge them.
The most work-intensive way is to use a tool like Puppet. In a nutshell, Puppet lets you create install scripts for services (along with all the configuration that you need). So if you ever need to install a service again (new hardware, a second server), you just tell Puppet to do it. On the plus side, your whole installation will be documented, too. If you ever wonder why something is the way it is, you can look at the Puppet files.
Of course, this approach takes a lot of time and discipline, so it might not be worth it in your case. Apply common sense.

Disk configuration for server running IIS

I've inherited a website from an external company (which has gone bust) and I need to get it deployed to our servers (it's three web sites running together).
However, in testing, although the app runs correctly, performance is poor, and I am pretty certain this is because the app writes files to the local disk. We currently have only a single disk in the server, but as it's virtual we can increase this to two fairly quickly.
Server is Windows 2008 running IIS7 and has two processors already.
Some of the files are 100MB+, but there are also lots of small writes and log file writes as well.
My question is where to put which parts of the application?
Is it best to have the OS on one disk, Web sites and files/log on another?
or sites and OS on one and files on another?
Is there a "standard" point to start from?
If anyone could reply with something like the layout below, but with an explanation so I understand WHY, that would be great!
e.g.
C: OS
C: WebSites
D: Files
D: Logs
Your background sounds like it's from Linux, since that's where people tend to configure new servers with the items you listed in mind. We have a handful of IIS sites, but we mostly run Apache on Linux, so I'm taking a stab at this.
Where we have IIS, we also tend to have MS SQL Server. I would keep the Windows OS on a nice large partition and put everything IIS-related, including the root directory, on a second drive. IIS installs default to C:\, but I believe you can move it to another drive; the names of the utilities and how to do this are best left to those who do it regularly.
In other words, I'd make a gross disk split between OS and IIS, and then tune from there. However, make sure you have lots of disk space and can defragment.
