How can I upload changed files to a website in Linux?

I maintain the website for my daughter's school. Historically I used our service provider's webftp interface to make edits in-place through their web-based GUI, which worked well for the small edits I typically need to make. They recently disabled webftp for security reasons, leaving FTP as the means to modify web pages and upload documents. Yes, I know FTP isn't very secure itself, but that's another story.
I have used wget to pull down the site and have built a git repository to manage changes. As I make changes, I want to upload the new and modified files to the website. Ideally I would only upload the new and changed files, not the entire site. My question is how do I do this?
I have found wput, which looks promising, but its documentation is not clear about which of the directories wget created I should recursively upload, or what the target directory should be. Since we are talking about a live site, I don't want to experiment until I get things right.
This seems like it should be a common use case with a well-known solution, but I have had little luck finding one. I have tried searches on Google and Stack Overflow with terms like "upload changed files ftp linux", but no clear answer pops up like I usually get. Some recommend rsync, but the target is on the service provider's system, so that might not work. Many variants of my question that I have found are Windows-centric and of little help since I work on Linux.
I suppose I could set up a target VM and experiment with that, but that seems excessive for what should be an easily answered question, hence my appeal to Stack Overflow. What do you recommend?

Maybe this answer helps you: https://serverfault.com/questions/24622/how-to-use-rsync-over-ftp/24833#24833
It uses lftp's mirror function to sync up a remote and local directory tree.
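For example, a minimal sketch of that approach; the host, credentials, and paths are placeholders, and --only-newer makes lftp upload only files that are newer than what is already on the server:

    # Upload only new/changed files from the local copy of the site to the FTP server.
    lftp -u myuser,mypassword ftp.example.com \
      -e "mirror --reverse --only-newer --verbose /path/to/local/site /public_html; bye"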
I also used mc's (Midnight Commander) built-in FTP client quite a lot when maintaining a site.

You should use git with git-ftp. It is generally a good idea to use a VCS for any project...
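A rough sketch of how git-ftp is typically wired up, assuming it is installed, with placeholder credentials and paths:

    # Tell git-ftp where the site lives (stored in the repository's git config).
    git config git-ftp.url "ftp://ftp.example.com/public_html"
    git config git-ftp.user "myuser"
    git config git-ftp.password "mypassword"

    # First run: upload everything and record which commit is on the server.
    git ftp init

    # Subsequent runs: upload only the files changed since the last push.
    git ftp push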

Related

Does VSCode remote-ssh extension in remote development store ANY info client side?

This is a security oriented question. Basically I'm reviewing the appdata for vscode and I see a couple of cache files. I'm trying to figure out if any of the file data is being transferred onto the client OS, since that would be a security violation. I don't see a firm answer on this anywhere. Microsoft saying that it's "Sandboxed" isn't good enough for my security concerns; I need to be reasonably certain.
Basically, if vscode-remote is ultimately a renderer like an ssh terminal, it's okay; however, if it does even a small amount of plain-text caching on Windows, that's a no-no, since ultimately I'd be bypassing the security of the server.
Just to be clear my access is secured over ssh and approved, but my viewing on the client side is what's in question.
It appears to be okay (I haven't found any files in violation), but I need something firmer, and of course it needs to be from an official source (or offer direct proof to substantiate the use case as secure).
This is not actually my own answer; one of the VS Code development team (Chuck Lantz) responded to a direct question by email.
Okay, have an update. We don’t currently have the equivalent of an “In-Private” mode in the browser context where all caching is in RAM.
You can, however, run VS Code in portable mode and keep the contents in a more secure location. This keeps all data relative to the application folder so you could put some or all of it in an encrypted virtual hard drive or even on a remote file share (e.g. using SSHFS).
Portable Mode in Visual Studio Code
It defaults to using the system temp location for some content, but you can change that to a sub-folder as well. The location of data folders by OS is also listed in the article.
Thanks Chuck!
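For reference, a rough sketch of the portable-mode setup Chuck describes, assuming the Linux .tar.gz build of VS Code; the archive name and paths are placeholders:

    # Unpack the archive build of VS Code somewhere you control.
    tar xzf code-stable-x64-linux.tar.gz -C ~/apps/

    # Creating a 'data' folder next to the binaries switches on portable mode:
    # settings, extensions, and caches then live inside it instead of ~/.config
    # and ~/.vscode, so the whole folder can sit on an encrypted volume.
    mkdir ~/apps/VSCode-linux-x64/data

    # Optionally keep temporary files inside the portable folder as well.
    mkdir ~/apps/VSCode-linux-x64/data/tmp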

Does IPFS host files you access by default?

I could not find a straight answer, so I am sorry if it has already been solved.
I was wondering, like with BitTorrent, when you download something using IPFS, does it automatically 'seed'/host it?
I also know you can stop seeding with torrents. If it automatically hosts the file after downloading it, what would be a way to stop hosting a specific file? All files?
Edit: if it doesn't automatically host files after a download, how would one go about hosting a file on IPFS?
Thanks a ton.
To understand IPFS, the best thing to do is take the time to read the white paper.
The protocol used for data distribution is inspired by BitTorrent and is named BitSwap. To protect against leeches (free-loading nodes that never share), BitSwap uses a credit-like system.
So to answer your questions: yes, when you download some content it's automatically hosted (or at least part of it), and if you try to trick the protocol by not hosting the content, your credit will drop and you will not be able to participate in the network.
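To make that concrete, a rough sketch with the go-ipfs/Kubo command-line client; the CID and file name are placeholders:

    # Host (publish) a file: adding it also pins it on your node.
    ipfs add ./report.pdf

    # List what your node is pinned to keep hosting.
    ipfs pin ls --type=recursive

    # Stop hosting a specific file: unpin it, then garbage-collect its blocks.
    ipfs pin rm <CID>
    ipfs repo gc

    # Content you merely downloaded (cached but unpinned) is dropped by the
    # same garbage-collection step.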

How can I browse an XSS infected Joomla website for rebuilding purposes without being infected?

Both the web files and the database have been tampered with, pointing to malicious JavaScript. They have tasked me with rebuilding their site, but I would like to be able to view the site if possible to get at the content, since they had a lot of pages. Since I didn't originally build the site, I don't know the structure of the content.
I don't have to repair the site; I just need to rebuild it with the CMS of my choice. I don't know anything about the Joomla database, or whether I can even get access to it to be able to start there.
I originally thought using a virtual machine would be OK for this, but I wasn't sure if I would be risking my host machine as well using this method. I would of course turn off JavaScript, but I was hoping someone else may have already been down this road and might be able to offer some insight.
Couldn't you just FTP to their host, pull it off, and get it working on a machine with no connection, if you were really paranoid? I don't think an XSS-infected site would do too much damage to a properly protected machine anyway.
My paranoid answer:
It's a great idea to turn off Javascript. I would get an extension like NoScript for Firefox or NotScript for Chrome. I use NoScript regularly, and it makes it easy to see what Javascript is coming from where.
Secondly, your idea with a VM is good, but take it a step further and run Linux in that VM. Linux can be infected, but it is rare to see something that will infect Linux.
Regular expressions and HTML parsers can also be your friends. Script something that can scan files looking for things like script tags and especially iframes. That way you can get an idea of files that have been corrupted and what is calling to where.
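For instance, a quick sketch of such a scan; the path is a placeholder for your offline copy of the site:

    # List every file containing a <script> or <iframe> tag, with line numbers,
    # so you can see which files were touched and where they call out to.
    grep -rniE '<script|<iframe' /path/to/site-copy/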
One other less likely gotcha is malicious executables or scripts disguised as something innocent like JPEGs, PDFs, etc. If you download and open files off of that machine, make sure you do it at least inside your VM with no network connectivity.
Get server logs if you can; perhaps your assailant was sloppy and left some clues about their activities. Perhaps run Wireshark on a second machine to look for things calling out to strange domains. This may be excessive, but I find it to be a fun exercise. :)
Also things like Virustotal and Threat Expert can be your friends if you think you have a malicious file or you see malicious activity. Better to be paranoid than compromised.
Cleaning this type of stuff up isn't exactly rocket science. You just need to get a connection to the backing database server and run a couple of queries (like the sketch below) to kill the XSS stuff out of the stored content.
You'd do your client a great service by starting off doing just that.
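A very rough sketch of what such a query might look like; the database name, credentials, and the injected snippet are placeholders, and Joomla's table prefix may differ on your install:

    # Strip a known malicious <script> tag out of stored article bodies.
    mysql -u dbuser -p joomla_db -e \
      "UPDATE jos_content SET introtext = REPLACE(introtext, '<script src=\"http://evil.example.com/bad.js\"></script>', '');"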
The VM idea is a good one. krs1 suggests running Linux, which is an even better idea, as almost all trojans that get downloaded are for Windows. Run Wireshark while you use the site so you can see what the network traffic looks like, what URLs are being requested, etc. If you run it in a Linux VM, though, you'll probably only get half the picture, since any exploit worth the oxygen it took to keep the programmer alive while it was written will check what platform you're on and only download when you're on an exploitable one.
But I digress, you're rebuilding a website, not doing malware analysis (which is more fun IMO). Once you identify and remove the offending content you should be good. See if you can find out what the exploit was that got them and work with their IT guy if they have one so steps can be taken to mitigate it from happening again.

GitHub and Source Code Protection and Control [duplicate]

This question already has answers here: How do you protect your software from illegal distribution? [closed] (22 answers). Closed 5 years ago.
I am working in a small startup organization with approximately 12 - 15 developers. We recently had an issue with one of our servers whereby the entire server was "re-provisioned", i.e. completely wiped of all the code, databases, and information on it. Our hosting company informed us that only someone with access to the server account could have done something like this - and we believe that it may have been a disgruntled employee (we have recently had to downsize). We had local backups of some of the data but are still trying to recover from the data loss.
My question is this - we have recently begun using GitHub to manage source control on some of our other projects, and have more than a few private repositories. Is there any way to ensure that there is some sort of protection of our source code? What I mean by this is that I am aware that you can delete an entire project on GitHub, or even a series of code updates. I would like to prevent this from happening.
What I would like to do is create (perhaps in a separate repository) a complete replica of the project on Git, and ensure that only a single individual has access to this replicated project. That way, if the original project is corrupted or destroyed for any reason, we can restore where we were (with history intact) from the backup repository.
Is this possible? What is the best way to do this? GitHub has recently introduced "Company" accounts... is that the way to go?
Any help on this situation would be greatly appreciated.
Cheers!
Well, if a disgruntled employee leaves, you can easily remove them from all your repositories, especially if you are using Organizations - you just remove them from a team. In the event that someone who still had access for some reason maliciously deletes a repository, we have daily backups of all of the repositories that we will reconstitute if you ask. So you would never lose more than one day of code work at worst. Likely someone on the team will have an update with that code anyhow. If you need more protection than that, then yes, you can set up a cron'd fetch or something that will mirror your code more often.
First, you should really consult GitHub support -- only they can tell you how they do the backup, what options for permission control they have (esp. now that they introduced "organizations"), etc. Also, you have an agreement with them -- do read it.
Second, it's still very easy to do a git fetch by cron, say, once an hour (on your local machine or on your server) -- and you're pretty safe.
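A minimal sketch of that cron'd backup, with a placeholder repository URL and paths:

    # One-time: create a bare mirror of the GitHub repository.
    git clone --mirror git@github.com:your-org/your-project.git /srv/backups/your-project.git

    # crontab entry: refresh the mirror every hour, keeping full history.
    # 0 * * * * cd /srv/backups/your-project.git && git remote update --prune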
Git is a distributed system, so your local copy is the same as your remote copy on GitHub! You should be OK to push it back up there.

Revision control for server-side CGI programming

A friend of mine and I are developing a web server for system administration in Perl, similar to Webmin. We have set up a Linux box with the current version of the server working, along with other open-source web products like webmail, a calendar, an inventory management system, and more.
Currently, the code is not under revision control and we're just doing periodic snapshots.
We would like to put the code under revision control.
My question is: what would be a good way to set this up, and what software solution should we use?
One solution I can think of is to set up the root of the project, which is currently on the Linux box, to be the root of the repository as well. We would then check out the code on our personal machines, work on it, commit, and test the result.
Any other ideas, approaches?
Thanks a lot,
Spasski
Version Control with Subversion covers many fundamental version control concepts in addition to being the authority on Subversion itself. If you read the first chapter, you might get a good idea on how to set things up.
In your case, it sounds like you're doing the actual development on the live system. This doesn't really matter as far as a version control system is concerned. Even so, you can still use Subversion for:
Committing as a means of backing up your code and updating your repository with working changes. Make a habit of committing after testing, so there are as few broken commits as possible.
Tagging as a means of keeping track of what you do. When you've added a feature, make a tag. This way you can easily revert to "before we implemented X" if necessary.
Branching to develop larger chunks of changes. If a feature takes several days to develop, you might want to commit during development, but not to the trunk, since you are then committing something that is only half finished. In this case, you should commit to a branch.
Where you create a repository doesn't really matter, but you should only place working copies where they are actually usable. In your case, it sounds like the live server is the only such place.
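A minimal sketch of that Subversion workflow, assuming a hypothetical repository path and project name on the same Linux box:

    # Create the repository and import the existing project tree.
    svnadmin create /srv/svn/admin-webapp
    svn import /var/www/admin-webapp file:///srv/svn/admin-webapp/trunk -m "Initial import"

    # Check out a working copy on a personal machine, work, test, then commit.
    svn checkout svn+ssh://yourbox/srv/svn/admin-webapp/trunk admin-webapp
    svn commit -m "Describe the tested change"

    # Tag a finished feature, and branch for larger chunks of work.
    svn copy svn+ssh://yourbox/srv/svn/admin-webapp/trunk \
             svn+ssh://yourbox/srv/svn/admin-webapp/tags/feature-x-done -m "Tag: feature X done"
    svn copy svn+ssh://yourbox/srv/svn/admin-webapp/trunk \
             svn+ssh://yourbox/srv/svn/admin-webapp/branches/big-refactor -m "Branch for larger work"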
For a more light-weight solution, with less overhead, where any folder anywhere can be a repository, you might want to use Bazaar instead. Bazaar is a more flexible version control system than Subversion, and might suit your needs better. With Bazaar, you could make a repository of your live system instead of setting up a repository somewhere else, but still follow the 3 guidelines above.
How many webapp instances can you run?
You shouldn't commit untested code, or make commits from a machine that can't run your code. Though you can push to backup clones if you like.
