FireFTP - URL with Login - mozilla

On successful connection to an SFTP site, FireFTP has an option to copy 'URL with Login'; this provides a fully qualified URL containing all credentials used to access the FTP site.
I am assuming that this URL can be used elsewhere: where can I use the URL to navigate to the site automatically?

I think it can be used for multiple purposes.
- You can easily save the credentials to a file
- You can easily send the credentials to a friend
- You can use it to navigate to the site from a browser
- You can use it to store the connection details for later
- And if somebody else uses your PC, they can't automatically access the FTP site, because you don't need to "save" the credentials in the application itself.
So I believe it is more of a 'use it for your own wishes' thing.
Edit: Mozilla says: 'FireFTP will copy the URL (HTTP or FTP) and put it in the clipboard so you can paste it somewhere.'
See:
http://fireftp.mozdev.org/help.html
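For reference, the 'URL with Login' is just a standard URL with the userinfo component filled in. As a sketch (the host and credentials below are made up), Python's standard library can build and take apart such a URL:

```python
from urllib.parse import urlsplit, quote

# Hypothetical credentials and host -- not from a real site.
user, password, host, path = "alice", "s3cret!", "ftp.example.com", "/pub/file.txt"

# Percent-encode the credentials so characters like '!' or '@' stay unambiguous.
url = f"ftp://{quote(user, safe='')}:{quote(password, safe='')}@{host}{path}"
print(url)  # ftp://alice:s3cret%21@ftp.example.com/pub/file.txt

# The same URL can be taken apart again (password stays percent-encoded):
parts = urlsplit(url)
print(parts.username, parts.password, parts.hostname)
```

Keep in mind that anything holding this URL (browser history, logs, chat messages) now also holds the password.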


How to prevent users from browsing certain files of my website

I have recently launched a website on GoDaddy hosting. I keep some images and JavaScript files used by the website in separate folders. I want to prevent users from browsing those images and files by simply appending the folder and file name to the website URL. For example
www.example.com/images/logo.png
If I understand correctly, you want an HTML page with images that shouldn't be accessible on their own? If so, it cannot be done. You can check for the correct HTTP Referer header, but that can easily be faked, and it also makes the files inaccessible to browsers that don't send a referrer or have it disabled for "privacy" reasons.
If you want the files to be accessible only by server-side scripts or ftp/scp, then you can try to use .htaccess (if GoDaddy runs on Apache) with the correct configuration: https://httpd.apache.org/docs/2.2/howto/access.html
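If GoDaddy's plan does run Apache and allows .htaccess overrides, a minimal sketch of that "server-side only" setup is a .htaccess file placed inside the protected folder (Apache 2.2 syntax, matching the linked docs; Apache 2.4 would use `Require all denied` instead):

```apache
# .htaccess inside the /images folder -- denies all direct web access.
Order allow,deny
Deny from all
```

Note the caveat: this blocks the browser from fetching the images even for your own pages, since the browser requests them the same way, so it only suits files that are read by server-side scripts.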
Another way could be hiding those files and creating a one-shot token, like this:
<img src=<?pseudocode GEN_TOKEN("file.jpg") ?> /> with another script serving these hidden files only for a generated token, then deleting the token from the DB. Nevertheless, this will not stop anybody from downloading or accessing the files if they really want to...
But, anyway, try to clarify your question...
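The one-shot token idea can be sketched in Python; the in-memory dict below stands in for the DB, and the names are illustrative:

```python
import secrets

# token -> filename; stands in for the database table.
_tokens = {}

def gen_token(filename):
    """Issue a single-use token that grants access to one file."""
    token = secrets.token_urlsafe(16)
    _tokens[token] = filename
    return token

def serve(token):
    """Redeem a token exactly once; returns the filename or None."""
    return _tokens.pop(token, None)

t = gen_token("file.jpg")
print(serve(t))   # first use succeeds -> file.jpg
print(serve(t))   # second use fails -> None
```

A real version would also expire unredeemed tokens after a short time-to-live.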
If you are keeping images/files in a folder which is open to the public, I guess you kept them in that folder on purpose: you want the public to access those images and files.
How does the public know the image file names? Disable directory listing for your website.
I am not aware which language you are using on the web server, but in ASP.NET you can write a module/middleware which intercepts the incoming request and, based on your logic (e.g. authentication and authorization), restricts access. All modern languages support this kind of functionality.
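As an illustration of the intercept-and-restrict idea (shown here in Python as WSGI middleware rather than ASP.NET; the /images/ prefix and the X-Session header are made-up stand-ins for a real authentication check):

```python
def restrict_images(app):
    """WSGI middleware: reject direct requests to /images/ unless a
    (hypothetical) session header is present on the request."""
    def middleware(environ, start_response):
        path = environ.get("PATH_INFO", "")
        if path.startswith("/images/") and "HTTP_X_SESSION" not in environ:
            start_response("403 Forbidden", [("Content-Type", "text/plain")])
            return [b"Forbidden"]
        return app(environ, start_response)
    return middleware

# Any WSGI app can be wrapped:
def site(environ, start_response):
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"ok"]

protected_site = restrict_images(site)
```

The same pattern (wrap the request pipeline, inspect the path, short-circuit with 403) is what an ASP.NET HTTP module or middleware does.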

Can fiddler access local machine data?

One of our customers has reported that they can see the password being transferred in clear text; they probably used a tool like Fiddler to capture the HTTP request/response. So my question is: is it possible for someone, using Fiddler or any other tool, to monitor the HTTP traffic on that local computer at the moment the user enters the password and clicks to log in?
If the user is accessing the website without using SSL (i.e. by going to "http://" instead of "https://"), then it is possible to see all of the traffic between the website and the browser, and not only on the local computer but also on the network that the computer is connected to.
If the user is accessing the website via HTTPS, Fiddler is able to act as a proxy and decrypt the traffic between the browser and the server by using a special SSL certificate (thanks to #user18044 for clarification in the comments below).
In your case Fiddler is NOT accessing browser memory directly to get to the password in clear text.
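To illustrate the plain-HTTP case above: a typical login POST carries the password as an ordinary form-encoded body, so a proxy such as Fiddler (or anyone on the network path) reads it with no decryption step at all. A minimal sketch with made-up field names:

```python
from urllib.parse import parse_qs

# What a proxy or sniffer sees in the body of a plain-HTTP login POST.
captured_body = "username=alice&password=hunter2"

fields = parse_qs(captured_body)
print(fields["password"][0])  # hunter2 -- readable with no decryption at all
```

This is exactly why the fix is HTTPS on the site, not anything on the customer's machine.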

How does Google verify ownership of a website?

In order to verify that I own a website, google asked me to do the following:
Download this HTML verification file. [googleXXX.html]
Upload the file to http://www.example.com/
Confirm successful upload by visiting http://www.example.com/googleXXX.html in your browser.
Click Verify below.
To stay verified, don't remove the HTML file, even after verification succeeds.
The file provided by google contains a single line:
google-site-verification: googleXXX.html
How does this work? How is that supposed to tell them that I actually own that domain?
It doesn't tell them that you own it, it tells them that you have write permission to it. That's considered enough.
It demonstrates that you have sufficient control of the web server at the domain to be able to add pages to the website. The assumption is that this level of control would only be available to the owner of the domain, or a delegated administrator.
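The verification step itself can be pictured as a simple fetch-and-compare: Google requests http://www.example.com/googleXXX.html and checks that the body matches the expected marker line. A sketch of just the comparison (the actual HTTP fetch is omitted; filenames are placeholders):

```python
def is_verified(filename, body):
    """Mimic the check: the uploaded file must contain its own marker line."""
    expected = f"google-site-verification: {filename}"
    return body.strip() == expected

# In reality `body` would come from an HTTP GET of the uploaded file.
print(is_verified("googleXXX.html", "google-site-verification: googleXXX.html\n"))  # True
print(is_verified("googleXXX.html", "something else"))  # False
```

Since only someone who can write to the web root could make that URL return that exact content, a successful fetch demonstrates write access.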

Is it really possible to hack a forbidden web browser area that throws a 403 error?

I am not asking how. I am asking if. Is it possible to bypass a 403 error on the web?
Let me explain in a bit more detail. On a web server, IIS has been set up with a directory for a project we are working on, such that it is not accessible to the outside. So if you type the path to that directory in a web browser, the web browser will say that it is not accessible and throw a 403 error.
Now, here is the problem. Some files with sensitive information are placed in that directory. A programmer on our team has made a big deal about this and the fact that the files are placed on a server that is accessible to the outside world. On the other hand, I think this is not such a big deal, since if an outside user tried to go to that directory, their web browser would throw the 403 error. But other people on the team say that a hacker can still somehow access it.
So that leads me here and to my question. Is it possible to bypass a 403 error on the web? I say no. Some network guys at work say maybe. I am not asking how to do it. I am only asking if it is really possible.
I gather from your information that there is a web server with a directory set up on the web like so:
http://www.example.com/directory
Now, if you navigate to this URL you get a 403 Forbidden error? However, if you know the name of a file you can go to http://www.example.com/directory/MyImportantDocument.docx and it is possible to view the document at this location?
Unless there is a runnable script on your server that does this, it is not possible to view the directory contents via the web. However, URLs are not considered secure as they are logged in browser history, proxy and server logs and can also be leaked by browsers' referer header. I assume the files are stored here so they can be accessed by a remote application?
File names can be easily brute forced by an attacker. Tools such as dirbuster and dirb do this automatically. Therefore, if the files do not need to be readable remotely, they should be moved to an internal server, not accessible from the internet or DMZ.
If access is needed you should implement some sort of authentication. At the very least activate basic auth on IIS. This will prompt a web browser user for a username and password in order to view files, or the files can be accessed programmatically by setting the appropriate Authorization header, which is an encoded username and password.
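To make the last point concrete: the Basic Authorization header value is simply `user:password` Base64-encoded. Note that this is encoding, not encryption, which is another reason to require HTTPS underneath. A sketch (credentials are made up):

```python
import base64

def basic_auth_header(user, password):
    """Build the value of the HTTP 'Authorization' header for Basic auth."""
    creds = base64.b64encode(f"{user}:{password}".encode()).decode()
    return f"Basic {creds}"

print(basic_auth_header("alice", "s3cret"))  # Basic YWxpY2U6czNjcmV0
```

A client application would send this value in the `Authorization` request header to authenticate against the IIS basic-auth prompt.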
Better would be something with comprehensive session management, like an application pre-built for this purpose. E.g. a CMS which is kept up-to-date and securely configured.
Also you should make sure that the IIS website is only configured to be accessed via HTTPS which will protect against traffic snooping of the credentials, URL path, headers and file contents.
In some cases (e.g. back-end or web server misconfiguration) it's possible to bypass a 403. To understand those methods, read this script:
https://github.com/lobuhi/byp4xx
This script contains well-known methods collected from various bug bounty communities.
So if your back-end server is not vulnerable to anything in this script, it is probably safe.
So basically it is NOT possible if the server software itself doesn't have any bugs. But if other parts of your website are public, and probably use a dynamic scripting language, that may increase your risk if someone is able to find a hole with something like "access file from filesystem".
In general I would recommend that you NOT store any security-relevant files on a public server if they don't need to be there.
If you can avoid it, that is always the better way.
There is a simple exploit to bypass .htaccess restrictions... Try to Google "bypass error 403" and you will find the method. As an auditor, I can confirm that it is not good practice (and if I see it, I will always raise it as an issue) to store credentials (or any other sensitive information) in plain text on a web server.

Have Excel open a URL using a browser -- not via Excel

Is there any way to tell Excel to just pass a URL included in a cell directly to the system browser and don't try and open directly in Excel first?
Here's the detail of the issue:
I have a web app that requires a valid session (a cookie is required). If a request comes in (for a protected page) without a cookie a redirect is issued to the login page.
This web app can generate an Excel spreadsheet with cells that have URLs that point to a resources in the web app. The hope is that one could open the spreadsheet and then click on links in cells to open the pages in their browser (assuming the browser has the cookie).
If I paste one of these URLs into the browser's location bar it works (because the browser will include the cookie). Likewise, if one of these URLs was included in an Email I can click on it, and if currently logged in, will access the resource.
Here's the problem:
But, if I click the link directly in Excel, what happens is that Excel sends the request, not the browser. Since Excel has no idea about the cookies the browser is holding, no cookie is sent.
What happens is that Excel requests the correct URL. The web app sees the request without the cookie and sends a redirect back to Excel with a Location: header pointing to the login page.
Next, Excel makes another request for the login page.
Finally, the browser is opened (if not already opened) and it makes another request to the login URL.
I assume what's happening is that Excel sees the login page is text/html and decides to hand off to the browser, but it gives the browser the URL provided in the Location: header, not the URL that is in the spreadsheet.
Is there a way to configure Excel to not try and open URLs in cells directly?
Running Excel 14.3.5 (Mac 2011).
First off, this is what I use in a Windows copy of Office 2010. I totally realise you're working with Mac Office, so this may require a little bit of alteration...
Public Function OpenSite()
    Dim URL As String
    URL = Worksheets("Sheet1").Range("A1").Value
    Call Shell("C:\Program Files\Internet Explorer\Iexplore.exe " & URL, 1)
End Function
In a Windows environment, this will open the site whose URL is in Sheet1!A1 using Internet Explorer.
Looking elsewhere on StackOverflow shows this as a possible method for calling your Browser with URL in a Mac Environment:
Shell("Macintosh HD:Users:brownj:Documents:" & _
    "rnaseq:IGV_2.0.14:igv.sh", vbNormalFocus)
(It does not appear to need the 'Call' command first.)
Perhaps try replacing the first argument with "{path to your browser}" & URL and see if that'll do the trick. If it works, just put this in the 'ThisWorkbook' object under Private Sub Workbook_Open to automatically execute it when the sheet's opened.
Once again, I'm sorry I can't test this... But hopefully this will at least set you in the right direction!
