I could not find a straight answer, so I am sorry if it has already been solved.
I was wondering: as with BitTorrent, when you download something using IPFS, does it automatically 'seed'/host it?
I also know you can stop seeding with torrents. If IPFS automatically hosts a file after downloading it, what would be a way to stop hosting a specific file? All files?
Edit: if it doesn't automatically host files after a download, how would one go about hosting a file on IPFS?
Thanks a ton.
To understand IPFS, the best thing to do is take the time to read the white paper.
The protocol used for data distribution is inspired by BitTorrent and is named BitSwap. To protect against leeches (free-loading nodes that never share), BitSwap uses a credit-like system.
So to answer your questions: yes, when you download some content it's automatically hosted (or at least part of it), and if you try to trick the protocol by not hosting the content, your credit will drop and you will not be able to participate in the network.
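To make that concrete, here is a minimal sketch using the go-ipfs command-line tool (flags and behavior can vary by version). The key idea is that content you add or pin is kept and re-served until you unpin it and run garbage collection:

```sh
# Host a file: adding it to your local node pins it by default
ipfs add myfile.pdf              # prints the file's content identifier (CID)

# List what your node has pinned (i.e., committed to keep hosting)
ipfs pin ls --type=recursive

# Stop hosting a specific file: unpin it, then garbage-collect
ipfs pin rm <CID>
ipfs repo gc                     # removes unpinned blocks from the local store
```

Here `<CID>` is a placeholder for the hash printed by `ipfs add`. In the go-ipfs implementation, content you merely downloaded (rather than added) sits in the local cache and is re-served to peers until `ipfs repo gc` clears it.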
Any suggestions? My goals are the following:
works in as many browsers as possible (so as few dependencies as possible)
allows easily uploading multiple files
secure
Thank you very much. I know a little JavaScript, but the part where I'm faltering a bit is the PHP to actually handle the upload. I've found some scripts, but they all say that security is an issue.
For a secure upload, you'll need SSL. With Dreamhost, you'll have to pay for a unique IP address. Dreamhost also sells a certificate; I'm not sure whether you can provide your own. Check the Dreamhost control panel for details.
If you want to store the files on disk, you'll want to provide encryption. I'm sure there are many options to choose from (for example, TrueCrypt).
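If Dreamhost does allow you to install your own certificate, and if you opt for per-file encryption rather than an encrypted volume like TrueCrypt, the pieces look roughly like this (a hedged sketch; the filenames are placeholders, and Dreamhost's actual process may differ):

```sh
# Generate a private key and a certificate signing request (CSR) to
# submit to a certificate authority, in case your host lets you
# install your own SSL certificate:
openssl req -new -newkey rsa:2048 -nodes \
    -keyout example.com.key -out example.com.csr

# Encrypt an uploaded file at rest with a passphrase (a simple
# per-file alternative to an encrypted volume):
gpg --symmetric --cipher-algo AES256 upload_1234.pdf
```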
I maintain the website for my daughter's school. Historically I used our service provider's webftp interface to make edits in place through their web-based GUI, which worked well for the small edits I typically need to make. They recently disabled webftp for security reasons, leaving FTP as the means to modify web pages and upload documents. Yes, I know FTP isn't very secure itself, but that's another story.
I have used wget to pull down the site and have built a git repository to manage changes. As I make changes, I want to upload the new and modified files to the website. Ideally I would only upload the new and changed files, not the entire site. My question is how do I do this?
I have found wput, which looks promising, but its documentation is not clear about which directory that wget created is the one I should recursively upload, and what the target directory should be. Since we are talking about a live site, I don't want to experiment until I get things right.
This seems like it should be a common use case with a well-known solution, but I have had little luck finding one. I have tried searches on Google and Stack Overflow with terms like "upload changed files ftp linux", but no clear answer pops up like I usually get. Some recommend rsync, but the target is on the service provider's system, so that might not work. Many variants of my question that I have found are Windows-centric and of little help since I work on Linux.
I suppose I could set up a target VM and experiment with that, but that seems excessive for what should be an easily answered question, hence my appeal to Stack Overflow. What do you recommend?
Maybe this answer helps you: https://serverfault.com/questions/24622/how-to-use-rsync-over-ftp/24833#24833
It uses lftp's mirror function to sync up a remote and local directory tree.
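For illustration, an upload-only sync with lftp's mirror function might look like the following (host, credentials, and paths are placeholders):

```sh
# "mirror -R" reverses the mirror, i.e., uploads the local tree to the
# server; --only-newer skips files that have not changed.
lftp -u username,password ftp.example.com \
     -e "mirror -R --only-newer --verbose ./local-site /remote-site; quit"
```

If you're worried about touching the live site, point it at a scratch directory on the server first.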
I also used mc's (Midnight Commander) built-in FTP client quite a lot when maintaining a site.
You should use git with git-ftp. It is generally a good idea to use a VCS for any project...
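A sketch of the git-ftp workflow (URL and credentials are placeholders; check the git-ftp documentation for your setup):

```sh
# One-time setup: tell git-ftp where the site lives
git config git-ftp.url "ftp://ftp.example.com/htdocs"
git config git-ftp.user "username"
git config git-ftp.password "secret"

# Since the server already has the current files (a live site), mark
# the current commit as deployed instead of re-uploading everything
# (for an empty server you would run "git ftp init" instead):
git ftp catchup

# From then on, each push uploads only files changed since the last deploy
git ftp push
```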
I need to copy files from one server to a UNC path on the same network. The ASP.NET app uses .NET 2.0.
Currently we're just using a simple System.IO.File.Copy method, and it works just fine, but we were asked to make sure the files are transferred securely.
I can think of two ways to do this: either write a WCF or ASMX service, install an SSL certificate on the target server, and use that; or explicitly encrypt each file before calling File.Copy and decrypt it once it's copied.
Am I missing an option? Are there better ways to do this? If not, which option would be best for my requirement?
Thanks in advance.
My initial concern was that a person on my LAN could just launch a simple tool and get a copy of the files being copied between servers.
After asking a related question on superuser.com (can a file being copied over my LAN be sniffed?), I learned that even if a regular person is able to launch a popular sniffing tool like Wireshark and configure it to see the stream of the files being copied over the network, it would not be an easy task to convert that stream back into a file. It would take a higher skill level to do that.
However, for safety, I'd go with encrypting the stream (WCF or ASMX service over SSL) so that even if they can see the stream, it'd still be encrypted.
I am creating an application where I want to upload huge files.
Following is a little description of what this application tries to achieve:
Create a vmdk file from the user's physical machine (using the VMware Converter tool; vmdk files can be GBs in size).
Upload this vmdk file to the remote server.
The purpose of having the vmdk file on the remote server is accessibility: a user away from his physical machine can later log in via a web console and instantiate a virtual machine from this vmdk on the remote server.
I think this makes the situation different from normal file uploads (10-20 MB file uploads).
rsync/scp/sftp might help, but...
Would this be possible using a web interface?
If not, then do I need to create a separate client for the end user, to convert and upload his files efficiently?
Any help is appreciated.
Use a file transfer protocol for this, not HTTP. You need a protocol that can restart the transfer in the middle in case the connection breaks.
BTW, I don't mean to use FTP.
I'm not an expert on all the current file transfer protocols (I've been an FTP expert, which is why I recommend against it).
However, in this situation, I think you're off-base in assuming you need transparency. All the users of this system will already have the VMware Converter software on their machine. I see no reason they couldn't also have a small program of yours that will do the actual upload. If there's an API to the Converter software, then your program could automate the entire process - they'd run your program before they go home for the night, your program would convert to the vmdk, then upload it.
Exactly which protocol to use, I don't know. That might take some experimentation. However, if the use of the protocol is embedded within your small application and in the service, then your users will not need to know which protocols you're experimenting with. You'll be able to change them as you learn more, especially if you distribute your small program in a form that allows auto-update.
If you insist on using a web interface for this, the only way to pull it off is with something similar to a signed Java applet (I can't speak to Flash or other similar technologies, but I'm sure they're similarly capable).
Once you've crossed this threshold of going to an applet-like control, then you have far more freedom about what and how you can do things.
There's nothing wrong with HTTP per se for uploading files; it's just that the generic browser is a crummy client for it (no restartability, as mentioned, is but one limitation).
But with an applet you can select any protocol you want, you can throttle uploads so as not to saturate the client's connection, you can restart, send pieces, do checksums, whatever.
You don't need an entire webpage devoted to this; it can be a small component. It can even be an invisible component (fired via JS). But the key factor is that it has to be a SIGNED component. An unsigned component can't interact with the user's file system, so you'll need to get the component signed. It can be your own cert, etc. It follows much the same mechanics as normal web certificates.
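For the signing step specifically, it boils down to something like this (a hedged sketch; the alias, keystore, and jar names are placeholders, and a self-signed key will still show users a warning that a CA-issued code-signing certificate avoids):

```sh
# Generate a code-signing key pair in a local keystore
keytool -genkeypair -alias uploader -keyalg RSA -keystore signing.jks

# Sign the applet jar with that key
jarsigner -keystore signing.jks uploader-applet.jar uploader
```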
Obviously the client browser will need to support your applet tech as well.
Rsync would be ideal if you can find a host that supports it.
It can restart easily, retransfer only the changed parts of a file if that's useful to you, and has built-in options to use ssh, compression, etc.
It can also confirm that the remote copy matches the local file without transferring very much data.
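As a hedged sketch (host and paths are placeholders), a resumable, compressed, ssh-encrypted upload of a disk image might look like:

```sh
# --partial keeps a half-transferred file so an interrupted upload can
# resume where it left off; -z compresses; -e ssh encrypts the transfer.
rsync -avz --partial --progress -e ssh \
      machine-image.vmdk user@server.example.com:/var/uploads/
```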
I would run parallel FTP streams to speed up the process....
My boss has come to me and asked how to ensure that a file uploaded through a web page is safe. He wants people to be able to upload PDFs and TIFF images (and the like), and his real concern is someone embedding a virus in a PDF that is then viewed/altered (and the virus executed). I just read something about a procedure that could be used to destroy steganographic information embedded in images by altering the least significant bits. Could a similar process be used to ensure that a virus isn't implanted? Does anyone know of any programs that can scrub files?
Update:
So the team argued about this a little bit, and one developer found a post about letting the file download to the file system and having the antivirus software that protects the network check the files there. The poster essentially said that it was too difficult to use the API or the command line for a couple of products. This seems a little kludgy to me, because we are planning on storing the files in the DB, but I haven't had to scan files for viruses before. Does anyone have any thoughts or experience with this?
http://www.softwarebyrob.com/2008/05/15/virus-scanning-from-code/
I'd recommend running your uploaded files through antivirus software such as ClamAV. I don't know about scrubbing files to remove viruses, but this will at least allow you to detect and delete infected files before you view them.
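For illustration (paths are placeholders; this assumes ClamAV is installed and freshclam keeps its signature database current), scanning an upload directory might look like:

```sh
# Recursively scan uploads, report only infected files, and move
# anything flagged into quarantine instead of leaving it in place:
clamscan --infected --recursive --move=/var/quarantine /var/www/uploads
```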
Viruses embedded in image files are unlikely to be a major problem for your application. What will be a problem is JAR files. Image files with JAR trailers can be loaded from any page on the Internet as a Java applet, with same-origin bindings (cookies) pointing into your application and your server.
The best way to handle image uploads is to crop, scale, and transform them into a different image format. Images should have different sizes, hashes, and checksums before and after transformation. For instance, Gravatar, which provides the "buddy icons" for Stack Overflow, forces you to crop your image, and then translates it to a PNG.
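One way to do that transformation, sketched here with ImageMagick (filenames and sizes are placeholders): re-encoding to a new format discards any non-image payload such as a JAR trailer, and -strip drops embedded metadata as well.

```sh
# Decode the upload, drop metadata, resize, and re-encode as PNG;
# the result has a different size and checksum from the original.
convert upload.jpg -strip -resize 256x256 reencoded.png
```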
Is it possible to construct a malicious PDF or DOC file that will exploit vulnerabilities in Word or Acrobat? Probably. But ClamAV is not going to do a very good job at stopping those attacks; those aren't "viruses", but rather vulnerabilities in viewer software.
It depends on your company's budget but there are hardware devices and software applications that can sit between your web server and the outside world to perform these functions. Some of these are hardware firewalls with anti-virus software built in. Sometimes they are called application gateways or application proxies.
Here are links to an open-source gateway that uses ClamAV:
http://en.wikipedia.org/wiki/Gateway_Anti-Virus
http://gatewayav.sourceforge.net/faq.html
You'd probably need to chain an actual virus scanner to the upload process (the same way many virus scanners ensure that a file you download in your browser is safe).
In order to do this yourself, you'd have to keep it up to date, which means keeping libraries of virus definitions around, which is likely beyond the scope of your application (and may not even be feasible depending on the size of your organization).
Yes, ClamAV should scan the file regardless of the extension.
Use a reverse proxy setup such as
www <-> HAVP <-> webserver
HAVP (http://www.server-side.de/) is a way to scan HTTP traffic through ClamAV or any other commercial antivirus software. It will prevent users from downloading infected files.
If you need HTTPS or anything else, you can put another reverse proxy or web server in reverse-proxy mode in front of HAVP to handle the SSL.
Nevertheless, it does not work at upload time, so it will not prevent the files from being stored on the server, but it will prevent them from being downloaded and thus propagated. So use it together with regular file scanning (e.g., clamscan).