How can I test my browser ignoring Location headers?

I want to test a site with Firefox ignoring Location: headers, like this example in PHP.
header('Location: another-page.php');
Is there a plugin available to do this, or any other method?
Would my best bet be surfing the site with Lynx? Does Lynx ignore them?
Thanks

You could try bringing up the pages with cURL.
It is a command line application that is invoked via:
curl http://url
cURL does not follow Location: headers by default.
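For example, with a placeholder URL, you can look at the redirect without following it by asking cURL for just the response headers; -L is the flag that would make it follow the redirect:
# -s hides progress output, -I sends a HEAD request and prints the headers
curl -sI http://example.com/another-page.php
# Only with -L added does cURL actually follow the Location: header
curl -sIL http://example.com/another-page.php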

Related

Wget and quoted URL

Currently I am struggling with mirroring a website using Wget.
Browsing the web, I came up with the following command to mirror a complete website:
wget --mirror --convert-links --adjust-extension --backup-converted --page-requisites -e robots=off http://www.example.com
As expected, after running the command there is a folder called www.example.com containing all downloaded files. However, some background images are missing. Digging through the files and logs I found that wget seems to have a problem with quoted image URLs.
The website uses the following CSS to include a background image:
<div ... style="background-image: url(&quot;/path/to/image&quot;);..." ... />
While collecting the page requisites, wget parses the URL and tries to download the file
http://www.example.com/"/path/to/image"
which obviously fails with an error 404:
--2018-01-08 18:04:00-- https://www.example.com/"/path/to/image"
Reusing existing connection to www.example.com:443.
HTTP request sent, awaiting response... 404 Not Found
2018-01-08 18:04:00 ERROR 404: Not Found
Unfortunately I cannot post the original domain for privacy reasons...
I already tried to find a solution on the web, but I did not manage to find the right keywords to search for, so as a last resort I have to ask you for help.
Is there a way to tell Wget to ignore quotes inside URLs?
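One workaround to sketch (an assumption, not a documented wget option; it presumes the troublesome quotes appear HTML-encoded as &quot; in the inline styles, as in the snippet above) is to pull the quoted paths out of the already-mirrored pages and fetch them into the mirror with a second wget call:
# Extract the quoted url(...) paths from the mirrored HTML (assumes GNU grep with -P)
grep -rhoP 'url\(&quot;\K[^&]+(?=&quot;\))' www.example.com --include='*.html' | sort -u > missing.txt
# Download each missing file into the mirror, resolving the relative paths against the site root
wget -x -nH -P www.example.com --base=http://www.example.com/ -i missing.txt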

Linking in localhost (LAMP)

I started using LAMP and Bootstrap about a month ago.
I developed a website that worked perfectly until I reinstalled LAMP.
Here is my progress:
0. Reinstalled LAMP
1. Moved my backed-up files to my localhost directory
2. Ran "chmod 777 *" on each directory and file
3. When I enter "localhost" in my browser (Firefox), index.html loads
4. When I click the link (say: index)
The browser responds:
http://localhost/undefined
Not Found
The requested URL /undefined was not found on this server.
Apache/2.4.7 (Ubuntu) Server at localhost Port 80
Is there any way to fix this? By the way, the linking works perfectly when I open file:///var/www/html/index.html directly.
The reason I want to use LAMP is to add .php files to handle a form.
Thanks
What happens when you hit http://localhost? What exactly do you see? Have you tried http://localhost/html?
What exactly is your document root as per the Apache conf?
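On an Ubuntu/Apache setup like the one in your error message, you can check which document root Apache is actually using; the paths below are the stock Debian/Ubuntu ones and may differ on your install:
# Show the parsed virtual host configuration, including which files it came from
apache2ctl -S
# Find the DocumentRoot directives in the enabled site configs
grep -Ri "DocumentRoot" /etc/apache2/sites-enabled/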
You might need to check that you are placing your files in the root directory. It should be in the "htdocs" folder.
/opt/lampp/htdocs/
If all else fails, you can try XAMPP, which is another free alternative to LAMP.
I get this a lot: when your browser requests a URL that does not exist on the server, you get a forbidden or not-found error. The /undefined in your requested URL usually means a script is building the link from an undefined JavaScript value. The way to fix this is to make sure the link you click goes to an accessible URL; check whether other links on the page or scripts are overwriting your link.
Finally, check whether you can access it from another browser.

How to enable PHP in NixOS

I'm trying to set up a LAMP environment with NixOS.
I managed to get MySQL and Apache running, but I can't find a way to enable PHP.
At the moment, Apache is serving PHP files as text instead of executing them.
I've seen there is an enablePHP option in the apache-httpd/default.nix file, but it doesn't seem to be visible (it doesn't appear when I run man configuration.nix, and I get an error message if I try to set it to true).
Most likely the version of nixpkgs used to build your system (and the configuration.nix man page) is older than the version of nixpkgs you are looking at. After an update of your system the option should be documented in the configuration.nix man page and work as expected.
I successfully use enablePHP and enableUserDir to render php files in my user's public_html. An .htaccess file with DirectoryIndex index.php further enables php index files.
I'm also in the process of setting up a php stack (using nginx / php-fpm) and I found the following, which might answer your question.
Use the extraModules parameter of the httpd config to enable the php module, like so:
extraModules = [
  { name = "php5"; path = "${pkgs.php}/modules/libphp5.so"; }
];
I found this example here: https://github.com/svanderburg/disnix-stafftracker-php-example/blob/master/deployment/configurations/test-vm1-httpd.nix
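Whichever route you take (enablePHP or extraModules), the change only takes effect after a rebuild, and it is worth verifying that Apache now executes PHP instead of serving the source. A quick smoke test, assuming your document root is /var/www and you are happy to drop a throwaway info.php there:
# Apply the new configuration
sudo nixos-rebuild switch
# Create a trivial PHP file in the (assumed) document root and request it
echo '<?php phpinfo();' | sudo tee /var/www/info.php
curl -s http://localhost/info.php | head
# If PHP is enabled you see rendered phpinfo() HTML, not the literal '<?php' source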

Terminal - How to run the HTTP request 'PUT'

So, what I am trying to do is run an HTTP request, 'PUT', from the terminal in Linux. Not POST, not GET, 'PUT'.
I know that in the terminal you can just type 'GET http://example.com/', but when I did 'PUT http://example.com' (and a bunch of other arguments after that...), the terminal said that PUT is not a command.
Here's what I tried:
:~$ PUT http://example.com
PUT: command not found
Well, is there a substitute for the command 'PUT', or some way of sending that HTTP request from terminal?
I don't want to use any external programs.... I don't want to download or install anything. Any other ways?
I would use curl to achieve this: curl -X PUT http://example.com
curl -X PUT -d arg=val -d arg2=val2 http://sssss.zzzz
will work. Or use Postman (www.getpostman.com) for HTTP requests if the terminal is not your main concern; otherwise, cURL is always there.
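A slightly fuller curl example (the URL and payload are placeholders) that sends a JSON body with the PUT and prints the response headers:
# -X PUT sets the method, -H sets the Content-Type, -d supplies the request body, -i includes response headers
curl -i -X PUT -H "Content-Type: application/json" -d '{"name":"value"}' http://example.com/resource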
You are getting
PUT: command not found
because the request is not being sent over a network connection to something that understands HTTP. bash by itself has limited support for communicating over a network, as discussed in:
Tech Tip: TCP/IP Access Using bash
More on Using Bash's Built-in /dev/tcp File (TCP/IP)
Advanced Bash-Scripting Guide: Example 29-1. Using /dev/tcp for troubleshooting
Besides that, the HTTP specification says of PUT:
The PUT method requests that the enclosed entity be stored under the supplied Request-URI. If the Request-URI refers to an already existing resource, the enclosed entity SHOULD be considered as a modified version of the one residing on the origin server. If the Request-URI does not point to an existing resource, and that URI is capable of being defined as a new resource by the requesting user agent, the origin server can create the resource with that URI.
To clarify: if you are PUTting to an existing URI, you may be able to do this, and the command implicitly needs some data to reflect the modification.
The example in HTTP - Methods (TutorialsPoint) shows a PUT command used to store an HTML body on a URI. Your script has to redirect the data (as well as the initial request) onto the network connection.
You could do all of that using a here-document, or redirecting a file, e.g., (using that example to show how it might be adapted):
cat >/dev/tcp/example.com/80 <<EOF
PUT /hello.htm HTTP/1.1
User-Agent: Mozilla/4.0 (compatible; MSIE5.01; Windows NT)
Host: www.tutorialspoint.com
Accept-Language: en-us
Connection: Keep-Alive
Content-type: text/html
Content-Length: 182

<html>
<body>
<h1>Hello, World!</h1>
</body>
</html>
EOF
But your script should also provide for reading the server's response.
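One way to do both (a sketch that relies on bash's /dev/tcp feature, with a placeholder host and a tiny body) is to open the connection on a numbered file descriptor so you can write the request and then read the reply from the same descriptor:
# Open a read/write TCP connection to the server on file descriptor 3 (bash-only feature)
exec 3<>/dev/tcp/example.com/80
# Send a minimal PUT request; printf supplies the CRLF line endings HTTP expects
printf 'PUT /hello.htm HTTP/1.1\r\nHost: example.com\r\nContent-Type: text/plain\r\nContent-Length: 5\r\nConnection: close\r\n\r\nhello' >&3
# Read and print everything the server sends back, then close the descriptor
cat <&3
exec 3>&-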
Using the -X flag with whatever HTTP verb you want:
curl -X PUT -H "Content-Type: multipart/form-data;" -d arg=val -d arg2=val2 localhost:8080
This example also uses the -d flag to provide arguments with your PUT request.

Download file/folder from SharePoint using curl/wget automatically

I have been trying to use curl and wget to download a file from SharePoint. I plan to make it a script that runs automatically every day and downloads the file from the URL.
I tried using CURL with following command
curl -O --user Myusername:Mypassword https://OurDomain.sharepoint.com/_XXX&file=IPS_cleaned.xlsx&action=default
But it gave me an error about the SSL connection. I read that there is a known bug in cURL 7.35, so I downgraded it to 7.22, but it still gives me the same error.
I also tried using Wget
wget --user=Myusername --password=MyPassword --no-check-certificate https://OurDomain.sharepoint.com/_XXX&file=IPS_cleaned.xlsx&action=default
But it still gives me an error: Unable to establish SSL connection.
Can someone please let me know how I can accomplish my task?
UPDATE
I was able to resolve the error in cURL. Below is the command that I used:
curl -O -L --sslv3 -A "Mozilla/5.0 (Windows; U; Windows NT 5.1; en-US) AppleWebKit/525.13 (KHTML, like Gecko) Chrome/0.A.B.C Safari/525.13" --user Myusername:Mypassword 'https://OurDomain.sharepoint.com/_%7BB21r-9CA2-345DEF%7D&file=IPS_cleaned.xlsx&action=default'
Now what it downloads is a file which, when I open it, shows me the SharePoint login page. It does not download the actual Excel file.
Any reason?
Another potential solution involves taking your SharePoint link and replacing the text after the '?' with download=1:
This:
https://my.sharepoint.com/:u:/g/XXX/XXXX-bunchofRandomText?e=kRlVi
Becomes this:
https://my.sharepoint.com/:u:/g/XXX/XXXX-bunchofRandomText?download=1
Now, you can just:
wget https://my.sharepoint.com/:u:/g/XXX/XXXX-bunchofRandomText?download=1
*Note, this example used a single file and a link where anyone with the link could access the file (no credentials required)
Please use rclone.
Download and install the latest version from https://rclone.org/downloads
First option: Use OneDrive to access SharePoint sites/personal folders. This option will help you upload large files.
1. Create an rclone configuration using the rclone config command
2. Select New remote and give it a name
3. Select cloud storage OneDrive
4. Leave client ID and secret blank
5. Edit advanced config: n
6. Remote config: Use auto-config: y
7. Open the URL in the browser and give rclone access
8. Select the personal/shared site URL option
8a. With the shared site URL option you have to give the site URL, e.g. https://sharepoint.com/sites/SiteName
9. Select the personal/Documents drive. The Documents drive will show if you selected the shared site URL option in step 8
Save the config and quit
And the configuration file contents will look like the following. If you selected the Personal option, the drive type will be personal.
[onedrive]
type = onedrive
token =
drive_id =
drive_type = documentLibrary
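Once the remote is configured you can confirm it works and pull files much like the WebDAV example further down; the remote name onedrive and the paths here are placeholders to adapt:
# List the top-level folders on the remote to confirm the configuration works
rclone lsd onedrive:
# Copy a single file from the remote into a local folder
rclone copy --verbose "onedrive:Documents/IPS_cleaned.xlsx" ./DestFolder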
Second option: With this option, you can upload files up to 2 GB in size.
1. Create an rclone configuration using the rclone config command
2. Select New remote and give it a name
3. Select cloud storage WebDAV
4. Give the site URL, username and password
5. Save and quit
And the configuration file contents will look like the following. The password will be stored in an encrypted format.
vim /root/.config/rclone/rclone.conf
[sharepoint]
type = webdav
url = https://sharepoint.com/sites/SiteName/Documents
vendor = sharepoint
user =
pass =
Download a file from SharePoint.
rclone copy --ignore-times --ignore-size --verbose sharepoint:SourceFolder/file.txt DestFolder
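Since the goal in the question is a download that runs automatically every day, that same command can go into a crontab entry (edit it with crontab -e); the schedule and the absolute paths below are assumptions to adapt:
# Run every day at 06:00; cron jobs should use absolute paths
0 6 * * * /usr/bin/rclone copy --ignore-times --ignore-size sharepoint:SourceFolder/file.txt /home/user/DestFolder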
cliget is a Firefox plugin that captures the link with the session ID etc. and provides a command you can paste into the console for curl or wget.
If anyone has a better suggestion please let me know.
It gives you a curl or wget command with headers, cookies and all, with a copy to clipboard button, right on the download dialogue.
Download URL: https://addons.mozilla.org/en-US/firefox/addon/cliget
Reference: https://superuser.com/questions/27243/how-to-find-out-the-real-download-url-on-download-sites-that-use-redirects/1239026#1239026
I struggled with the same issue myself and ended up with a not-so-automatic but very convenient approach, with a daily log-in:
logged into SharePoint with a browser,
exported the cookie,
ran the following command.
wget --cookies=on --load-cookies cookies.txt --keep-session-cookies --no-check-certificate -m https://yoursharepoint.com
And files were downloaded just fine.
For anyone using cURL to download a file from SharePoint with an "Anyone with the link" download option, below are the steps I had to follow. Essentially you have to use the cookie from the share link, and then download the file from a different download link they don't provide easily for you.
When sending the cURL command for the "share link" it returns a 302 message, a forward link, and a cookie. If we save that cookie and use it to hit a "download" link, we can download the file.
Essentially, Microsoft uses the initial "share link" to send the cookie to the browser and then redirect to its "View File" website. On that website you need the cookie provided (authentication) and you select your next action (on-screen view, print, download, etc). When you click the download button you hit a different link. I was able to find this link by going to the "view page" website for the file/link, turning on developer tools, and watching which link the browser follows when hitting download. You can then replicate that link for each file. If we use that download link along with the cookie, we can download the file.
curl -i -c cookies.txt SHARE LINK
curl -o docsdownloaded.pdf -b cookies.txt DOWNLOAD LINK
Share Link Ex: https://tenant.sharepoint.com/:b:/s/Folder/EdNUf4xAVzFJgBoO0MqkfppR5tgobxLrmCnRqU4LFJQ?e=rOGNSD
Download Link Ex:https://tenant.sharepoint.com/sites/Folder/_layouts/15/download.aspx?SourceUrl=%2Fsites%2FFolder%2FShared%20Documents%2FGeneral%2FBig%2Dfile%2Epdf
Similar to the answer Zyglute gave, using cURL:
You can export your login cookie using the cookies.txt Chrome extension: https://chrome.google.com/webstore/detail/njabckikapfpffapmjgojcnbfjonfjfg
Then use the following code:
curl -b cookie.txt 'https://OurDomain.sharepoint.com/_XXX&file=IPS_cleaned.xlsx&action=default'
At some point your Sharepoint session will expire (not sure how long that takes), and you will need a new cookie file.
EDIT: If a malicious user gets a hold of your cookie.txt, they could get into your SharePoint account, so be sure to keep it safe.
Use wget, adding &download=1 at the end of the link.
wget "<yourlink>&download=1"
It will be downloaded with the <yourlink> string as its name; just mv it to the correct name afterwards.
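If you want to skip the mv step, wget's -O option lets you pick the output file name directly (report.xlsx here is just an example name), and quoting the URL keeps the shell from treating the & as a background operator:
# Save the download straight to a chosen file name
wget -O report.xlsx "<yourlink>&download=1"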
