How do live web edits maintain persistence? - browser

Where are the user's changes stored - on the hard disk or in the browser?
And is the modified page stored, or only the modifications?

Related

Are web page data files stored locally for a time?

When we hit any URL (like Facebook.com) in any browser (like Chrome), it uses many resources for that particular page, like JS files, images, properties files, etc. So, are they stored locally, temporarily?
Yes, it's called the browser cache :-) Websites can also use local storage to store some data on your machine. Additionally, along the way various servers might cache the resources; ISPs do this a lot.

What solutions are there to back up millions of image files and sub-directories on a webserver efficiently?

I have a website that I host on a Linux VPS which has been growing over the years. One of its primary functions is to store images/photos, and these image files are typically around 20-40 kB each. The way the site is organised at the moment, all images are stored in a root folder ‘photos’, and under that root folder are many subfolders determined by the random filename. For example, an image with the file name abcdef1234.jpg would be stored in the folder photos/ab/cd/ef/. The advantage of this is that no directory contains an excessive number of images and accessing files is quick. However, the entire photos directory is huge and is set to grow: I currently have almost half a million photos in tens of thousands of sub-folders, and whilst the system works fine, it is fairly cumbersome to back up.

I need advice on what I could do to make life easier for back-ups. At the moment, I back up the entire photos directory each time, and I do that by compressing the folder and downloading it. It takes a while and puts some strain on the server. I do this because every FTP client I use takes ages to sift through all the files and find the most recent ones by date. I would also like to be able to restore the entire photo set quickly in the event of a catastrophic webserver failure, so even if I could back up the data incrementally, how cumbersome would it be to have to upload it all back stage by stage?
Does anyone have any suggestions, perhaps from experience? I am not a webserver administrator and my experience of Linux is very limited. I have also looked into CDNs and Amazon S3, but these would require a great deal of change to my site in order to make them work; perhaps I'll use something like that in the future.
Since you indicated that you run a VPS, I assume you have shell access which gives you substantially more flexibility (as opposed to a shared webhosting plan where you can only interact with a web frontend and an FTP client). I'm pretty sure that rsync is specifically designed to do what you need to do (sync large numbers of files between machines, and do so efficiently).
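For example, a nightly cron job running something along these lines would only transfer files that have changed since the last run (the paths, user and host below are placeholders for your own setup):

    rsync -az --delete /var/www/photos/ backupuser@backup.example.com:/backups/photos/

Here -a preserves permissions and timestamps, -z compresses data in transit, and --delete removes files from the backup that no longer exist on the server, so the backup stays an exact mirror. Restoring after a failure is the same command with source and destination swapped.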
This gets into Superuser territory, so you might get more advice over on that forum.

iOS7 Core Data and iCloud as Backup. User's point of view

I'm working on a library-type application that uses Core Data.
This application will be available for iPhone only, and I want to use iCloud just as a backup, so that if users change device or delete and reinstall the application they can get their original data back.
Working with the new Core Data-iCloud setup, I see that the configuration is extremely simple: I just added NSPersistentStoreUbiquitousContentNameKey when creating the persistent store, and I listen for the three basic notifications from iCloud.
Now my problem is that when I delete my application and reinstall it, on the first launch of the reinstalled application the data from iCloud takes more than 2-3 minutes to get back to the device.
This is not what users expect... they start using the application and at some point they find their old data. This is extremely strange from the user's point of view. Is there a correct way to reload previously stored data, or do I have to let iCloud decide when to reload it? And in that case, how do you manage this situation and make users aware of this unpredictable update time?
If all you want is a backup, then just put the store in /Documents (the app's Documents directory) and it will be backed up using the normal backup method selected by the user. The Core Data/iCloud integration is intended for synchronisation of data between devices and results in transaction logs being put in iCloud. Those transaction logs are what gets imported after the app is reinstalled (assuming iCloud is available), which is why the old data takes a while to reappear.
If the user has enabled backups to iCloud rather than iTunes then backups should be automatic, no need to plug in and sync with iTunes.

Transfer Liferay files (document library) between two servers

I have built my Liferay website in the development environment and it is now ready to be published. I have also installed two Liferay nodes on two different servers where I want to put my website: server1 is active and server2 is the backup.
The problem is that when I started development, I didn't know I would one day need a two-server structure, so I stored all the documents and images on the file system and not in a database. With this setup, when I make changes on server1, I have to transfer the document library manually to server2, just as I would for the themes.
I tried to change the document library location from the filesystem to the database in the portal-ext.properties, but that didn't help.
So, my questions:
Is there a way to transfer these files to a database now, where they can be shared by both servers? And if not,
Is it possible to somehow transfer the document library from server1 to server2 automatically through some script?
Thanks,
Adia
If server2 is a cold standby backup server, and assuming you have a consistent backup of server1's Liferay data directory and of the database taken at the same moment in time, you can just restore the backup of the Liferay data directory to server2, restore the database to the same moment in time as the data directory backup, and start server2.
In hot standby scenarios and clustered environments things get a little more complicated, as you need a common place to store documents, images, search indexes, etc. The easiest way is to store everything in the database or on a common file system so that multiple nodes are always working on the same data.
If you want to get your current set of documents, which are stored on disk, into the database, the easiest way is to use the Server > Server Administration > Data Migration tab in the Control Panel. It has an option to migrate documents from the existing repository (i.e. the disk) to another one, which would be the JCRStore in your case, as that store can be configured to use the database.
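Once the migration has run, the store implementation is selected in portal-ext.properties. As a rough sketch (the property and class names below assume a Liferay 6.1-style configuration; older versions use dl.hook.impl and different class names, so check the properties reference for your release):

    # Assumes Liferay 6.1 naming; verify against your version's portal.properties
    dl.store.impl=com.liferay.portlet.documentlibrary.store.JCRStore

Both nodes need the same setting and the same database connection so that they read and write a single shared document store, and the change only takes effect after a restart.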

Uploading large files in JSF

I want to upload a file that is >16GB. How can I do this in JSF?
When using HTTP, you'll face two limitations: one on the client side (the web browser) and one on the server side (the web server). The average web browser (IE/FF/Chrome/etc.) has an upload limit of 2-4 GB, depending on the make/version/platform. You cannot control this from the server side; the end user has to change the browser settings themselves (and sometimes this isn't possible at all). The average web server (Tomcat/JBoss/GlassFish/etc.) in turn has a limit of 2 GB. You can configure that, but it still won't and can't remove the limitation in the web browser.
Your best bet is FTP. If you want to do this from a web page, consider an applet which utilizes Apache Commons Net's FTPClient. There are, by the way, several ready-to-use open source and commercial ones.
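To make that concrete, here is a minimal sketch using Apache Commons Net's FTPClient (the host, credentials and file names are placeholders, and the applet/UI plumbing around it is left out):

    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStream;

    import org.apache.commons.net.ftp.FTP;
    import org.apache.commons.net.ftp.FTPClient;

    // Hedged sketch: host, credentials and paths are placeholders.
    public class LargeFileFtpUpload {

        public static void main(String[] args) throws IOException {
            FTPClient ftp = new FTPClient();
            try {
                ftp.connect("ftp.example.com");            // placeholder host
                if (!ftp.login("username", "password")) {  // placeholder credentials
                    throw new IOException("FTP login failed");
                }
                ftp.setFileType(FTP.BINARY_FILE_TYPE);     // don't mangle binary data
                ftp.enterLocalPassiveMode();               // friendlier to client-side firewalls

                try (InputStream in = new FileInputStream("/path/to/huge-file.bin")) {
                    // Streams the file over the FTP data connection.
                    if (!ftp.storeFile("huge-file.bin", in)) {
                        throw new IOException("Upload failed: " + ftp.getReplyString());
                    }
                }
                ftp.logout();
            } finally {
                if (ftp.isConnected()) {
                    ftp.disconnect();
                }
            }
        }
    }

Because storeFile streams the file from an InputStream over the FTP connection, the browser and HTTP server limits described above don't come into play.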
You do, however, still need to take into account whether the disk file system on the FTP server side supports files that large. FAT32, for example, has a limit of 4 GB per file; NTFS and several *nix file systems, by contrast, can go up to 16 EB.
