Creating a script to present grades faster and more efficiently - web

I am a high school student who is reasonably concerned about my grades, and when I check them through a system known as Zangle! Student Connection, it is mildly painful how long it takes.
I was wondering if it would be possible to construct a script, in whichever language is deemed appropriate, to log in for me, based on a pre-entered username and password, and then present my grade percentages in a nice layout, instead of the awkward and messy layout they are presented in now.
I'm guessing this is either way too hard, way out of my reach at the moment, or even impossible, but I just thought it was a decently cool idea and was looking for any suggestions.
Also, I have no idea which language is best for this, so I would definitely need help on that too!

You could try Greasemonkey; it lets you create "user scripts" in Firefox. That way you could use JavaScript to reorganise the interface.

Well, here is my take on it in PHP, utilizing the cURL library:
PHP is by no means the only language "deemed appropriate" for the task. It is, however, fairly easy to set up what you're looking for in it.
<?php
error_reporting(-1);
$ch = curl_init();

/* Some sites block your access if you do not have cookies enabled. In order
   to get the cookies you will need to submit the form manually and, using a
   packet sniffer (or Firebug), capture the cookies that are being sent. */
//$cookies = "CFID=25318504; CFTOKEN=38400766; PERSON_ID=3461047";

/* Again, if you have Firebug, getting the following POST data once you
   submit the form manually is fairly straightforward. This is what cURL
   will send in the POST fields. The action=submit part may also vary; it is
   likewise easily accessible via Firebug (right next to the Params tab). */
$post_data = "username=test&password=test&action=submit";

curl_setopt($ch, CURLOPT_URL, "http://www.sitename.com");
curl_setopt($ch, CURLOPT_AUTOREFERER, 1);
curl_setopt($ch, CURLOPT_HEADER, 1);
// Follow a Location: redirect
curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);
// Return the response as a string instead of printing it straight to the page
curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);

// Send the cookies gathered manually above =(
// Use this only if cookies are a necessity.
//curl_setopt($ch, CURLOPT_COOKIE, $cookies);

// Do a POST request
curl_setopt($ch, CURLOPT_POST, 1);
curl_setopt($ch, CURLOPT_POSTFIELDS, $post_data);

$output = curl_exec($ch);
if ($output === false) {
    // Read the error before closing the handle
    echo "cURL Error: " . curl_error($ch);
}
curl_close($ch);

// You can sort this data using an HTML parser
echo $output;
Once you have successfully connected to the site, you can use one of PHP's many HTML parsers to traverse the data, such as DOMDocument with XPath, or SimpleXML.
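For instance, here is a minimal DOMDocument and XPath sketch. The gradeRow class and cell positions are hypothetical, so inspect the real markup (Firebug again) and adjust the query; it also assumes $output contains just the page HTML (set CURLOPT_HEADER to 0 so the response headers are not mixed in):
<?php
// Minimal sketch: parse the fetched HTML with DOMDocument and XPath.
// The "gradeRow" class and the cell layout are assumptions about the
// grade page's markup, not the real thing.
$dom = new DOMDocument();
libxml_use_internal_errors(true); // real-world HTML is rarely valid XML
$dom->loadHTML($output);
libxml_clear_errors();

$xpath = new DOMXPath($dom);
foreach ($xpath->query('//tr[@class="gradeRow"]') as $row) {
    $cells = $row->getElementsByTagName('td');
    if ($cells->length >= 2) {
        // e.g. first cell = class name, second cell = grade percentage
        echo trim($cells->item(0)->textContent) . ': '
           . trim($cells->item(1)->textContent) . "\n";
    }
}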

Related

API CloudConvert curl request PHP

I would like to test a very simple case with the CloudConvert API using a cURL request.
I want to import the file essaiFichier.txt with a cURL request. I get a JSON response with a status of "waiting", and I have no idea whether the request was done correctly. If someone has faced the same problem, it would be great to have some feedback.
Below is my code.
$authorization ="Authorization: Bearer eyJ0eXAiOiJKV1QiLCJhbGciOi";
$url ="https://api.cloudconvert.com/v2/jobs";
$post = '{
    "tasks": {
        "import-1": {
            "operation": "import/url",
            "url": "http://localhost/biere/essaiFichier.txt",
            "filename": "essaiFichier.txt"
        }
    }
}';
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_POSTFIELDS, $post);
curl_setopt($ch, CURLOPT_HTTPHEADER, array('Content-Type: application/json', $authorization));
curl_setopt($ch, CURLOPT_CUSTOMREQUEST, "POST");
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$response = curl_exec($ch);
$info = curl_getinfo($ch);
curl_close($ch);
I'm also new to cloudconvert, though it looks to me like you aren't following the 'rules' for using the service - at least, not to get anything useful out of it....
You need to do THREE things (at least):
Import (you have that)
Task (like 'convert'...)
Export (get your modified file back)
I find their 'Job Builder' to be a simple way to get the code - at least for starting out. See https://cloudconvert.com/api/v2/jobs/builder
I entered your 'Import' into the Job Builder (note: I think you don't need the 'filename' in there, or else you should break the 'url' apart and put only the file name in the 'filename' section; again, I'm new at it, but that is how I read it) and it still shows me this in the blue box at the top (the 'hints'):
How to build a job
Add a processing task, for example a convert task.
Add an export task. You can use the export/url task to generate a URL for the output file.
That tells me you just need to add the other parts so you have a complete request.
As for the 'waiting' response, yes, that is what you will get on the initial request. Again, see the docs on the Job Builder page - you can either do another request for the 'wait' response (which should get you the link for the 'export' part) or you can do a webhook that will be your trigger to download the file (which would make things more automatic).
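For reference, a complete three-task job built on the code from the question might look roughly like this. It reuses the $authorization header from the question; the task names and the pdf output format are illustrative, so check the exact task options against the CloudConvert v2 docs:
<?php
// Sketch of a complete three-task job: import -> convert -> export.
// Task names and the "pdf" output format are examples, not requirements.
$post = json_encode(array(
    'tasks' => array(
        'import-1' => array(
            'operation' => 'import/url',
            'url'       => 'http://localhost/biere/essaiFichier.txt',
        ),
        'convert-1' => array(
            'operation'     => 'convert',
            'input'         => 'import-1',
            'output_format' => 'pdf',
        ),
        'export-1' => array(
            'operation' => 'export/url',
            'input'     => 'convert-1',
        ),
    ),
));

$ch = curl_init('https://api.cloudconvert.com/v2/jobs');
curl_setopt($ch, CURLOPT_POST, true);
curl_setopt($ch, CURLOPT_POSTFIELDS, $post);
curl_setopt($ch, CURLOPT_HTTPHEADER, array('Content-Type: application/json', $authorization));
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
$response = curl_exec($ch);
curl_close($ch);

// The job starts out "waiting"; poll the job (or use a webhook) until the
// export task reports a URL for the finished file.
echo $response;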
Following your code and the Job Builder, I just finished my first conversion - worked great and now I can move on with my project (yeah!)
Thank you very much for your complete and precise answer. In the end I found an alternative, https://github.com/dompdf/dompdf, which in my opinion is much easier to use and requires no registration. I recommend it.
Thanks

Using webRequest API to intercept script requests, edit them and send them back

As the title says, I'm trying to intercept script requests from the user's page, make a GET request to the script url from the background, add a bit of functionality and send it back to the user.
A few caveats:
I don't want to do this with every script request
I still have to guarantee that the script tags are executed in the original order
So far I have come up with two solutions, neither of which works properly. The basic code:
chrome.webRequest.onBeforeRequest.addListener(
  function handleRequest(request) {
    // First I make the GET request for the script myself SYNCHRONOUSLY,
    // because the webRequest API cannot handle async.
    const syncRequest = new XMLHttpRequest();
    syncRequest.open('GET', request.url, false);
    syncRequest.send(null);
    const code = syncRequest.responseText;
  },
  { urls: ['<all_urls>'] },
  ['blocking'],
);
Now once we have the code, there are two approaches that I've tried to insert it back into the page.
I send the code through a port to a content script, which will add it to the page inside a <script></script> tag. Along with the code, I also send an index to make sure the scripts are inserted back into the page in the correct order. This works fine for my dummy website, but it breaks on bigger apps, like YouTube, where it fails to load the images of most videos. Any tips on why this happens?
I return a redirect to a data url:
if (condition) return { cancel: false }
else return { redirectUrl: 'data:application/javascript; charset=utf-8,'.concat(alteredCode) };
This second option breaks the code formatting, sometimes removing whitespace, sometimes cutting the code short. I'm not sure of the reason behind this behavior; it might have something to do with the data URL spec.
I'm stuck. I've researched pretty much every related answer on this website and couldn't find anything. Any help or information is greatly appreciated!
Thanks for your time!!!

Am I safe (XSS injection, etc) with my website contact form with Google Captcha?

I have suffered injection in my website (from a search box in a KB system). I removed that KB system but have a Contact Form (with Google Captcha) where the user enters his name, email and message and I use PHP mail() to send me the message.
Is it possible that an attacker can get access to my website from a possible attack to that form? Or the worst scenario could just be that he uses it to send Spam?
This is my PHP code around the call to mail():
<?php
$fname = $_POST['contact-f-name'];
$lname = $_POST['contact-l-name'];
$email = $_POST['contact-email'];
$text = $_POST['contact-message'];
$companyname = $_POST['company-name'];
$subject = $_POST['subject'];
$address = "myemail@myemail.com";
$headers = "From: " . strip_tags($email) . "\r\n";
$headers .= "Reply-To: ". strip_tags($email) . "\r\n";
$headers .= "MIME-Version: 1.0\r\n";
$headers .= "Content-type:text/plain; Charset=UTF-8 \r\n";
$message = "Name: ".strip_tags($fname)." ".strip_tags($lname)."\r\n"
."Email: ".strip_tags($email)."\r\n"
."Company Name: ".strip_tags($companyname)."\r\n"
."Subject: ".strip_tags($subject)."\r\n"
."Message: ".strip_tags($text)."\r\n";
if(@mail($address, $subject, $message, $headers)) { echo "true"; }
else { echo "false"; }
exit;
?>
TL;DR: Maybe.
While I do not have the time right now to write a complete and exacting answer to this post, I will point you to some best practices and lots of links to other, more verbose answers to similar questions about making user-inputted data safe.
How to make the inputs safer?
Disable certain dangerous PHP functions. Read the second answer rather than the "ticked" answer.
Use PHP's filter_var() to force inputs to their correct types, especially for emails (see the sketch after this list):
$email = filter_var($_POST['contact-email'], FILTER_SANITIZE_EMAIL);
Use preg_replace() (or str_replace()) to remove unwanted characters from your values, most typically backticks, quotes of any kind, forward slashes, or backslashes.
I recommend replacing mail() in your code with PHPMailer.
strip_tags is ok, but just ok. It has flaws (such as its handling of unclosed tags), so be aware of that.
Your PHP should be suitably jailed, so that if someone can run exec(...) commands (Ohsh1tOhsh1tOhsh1t) you have not (literally) lost your server.
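To make points 2 and 3 concrete, here is a minimal sketch using the question's field names. The clean_field() helper and its whitelist pattern are only examples to adapt, not a complete defence:
<?php
// Validate the email outright; a malformed address is rejected, which also
// blocks header injection via \r\n in the From/Reply-To headers.
$email = filter_var($_POST['contact-email'], FILTER_VALIDATE_EMAIL);
if ($email === false) {
    echo "false"; // mirror the script's existing true/false responses
    exit;
}

// strip_tags first, then drop anything outside a conservative whitelist.
// Newlines are not in the whitelist, which protects the mail() subject too.
function clean_field($value) {
    $value = strip_tags($value);
    return preg_replace('/[^A-Za-z0-9 .,\'@&()-]/', '', $value);
}

$fname   = clean_field($_POST['contact-f-name']);
$subject = clean_field($_POST['subject']);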
What else can I read?
This huge topic on how to deal with forms on PHP
This question about how to "sanitise" user input.
OWASP PHP filters for cleaning inputs.
Disable dangerous functions
PHP filter_var sanitisation filter list
Securing user variables (database related mostly)
Further wise words on data sanitisation.

Drupal - Security check all site paths by role

I'm writing this in the forlorn hope that someone has already done something similar. I would have posted on drupal.org - but that site is about as user-friendly as a kick in the tomatoes.
I don't know about you, but when I develop I leave all my Drupal paths with open access, and then think about locking them down with access permissions at the end.
What would be really useful is a module which parses all the available paths (by basically deconstructing the contents of the menu_router table) and then tries them (cURL?) in turn whilst logged in as a given user with a given set of roles.
The output would be a simple html page saying which paths are accessible and which are not.
I'm almost resigned to doing this myself, but if anyone knows of anything vaguely similar I'd be more than grateful to hear about it.
Cheers
UPDATE
Following a great idea from Yorirou, I knocked together a simple module to provide the output I was looking for.
You can get the code here: http://github.com/hymanroth/Path-Lockdown
My first attempt would be a function like this:
function check_paths($uid) {
  global $user;
  $origuser = $user;
  $user = user_load($uid);
  $paths = array();
  foreach (array_keys(module_invoke_all('menu')) as $path) {
    $result = menu_execute_active_handler($path);
    if ($result != MENU_ACCESS_DENIED && $result != MENU_NOT_FOUND) {
      $paths[$path] = TRUE;
    }
    else {
      $paths[$path] = FALSE;
    }
  }
  $user = $origuser;
  return $paths;
}
This is good for a first attempt, but it can't handle wildcard paths (% in the menu path). Loading all possible values can be an option, but it doesn't work in all cases: if you have %node, for example, then you can use node_load(), but if you have a bare %, then you have no idea what to load. Also, it is a common practice to omit the last argument when it is a variable, in order to correctly handle the case where no argument is given (e.g. to display all elements). One rough workaround is sketched below.
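For illustration, one rough way to substitute sample values for the wildcards before executing the path. The fill_wildcards() helper and its sample IDs are my own invention, so swap in values that actually exist on your site:
<?php
// Illustrative only: replace wildcard placeholders with sample values so
// that menu_execute_active_handler() has something concrete to run.
function fill_wildcards($path) {
  $samples = array(
    '%node' => '1', // an existing node ID
    '%user' => '1', // an existing user ID
    '%'     => '1', // bare wildcard: no loader, just try a numeric argument
  );
  $parts = explode('/', $path);
  foreach ($parts as $i => $part) {
    if (isset($samples[$part])) {
      $parts[$i] = $samples[$part];
    }
  }
  return implode('/', $parts);
}
// Inside check_paths(): $result = menu_execute_active_handler(fill_wildcards($path));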
Also, it might be a good idea to integrate this solution with the Drupal's testing system.
I did a bit of research and wasn't able to find anything, though I'm inclined to think there is a way to check path access through the Drupal API as opposed to cURL. Please keep me updated on your progress, and let me know if you would like help developing. This would be a great addition to the Drupal modules.

Best way to handle security and avoid XSS with user entered URLs

We have a high security application and we want to allow users to enter URLs that other users will see.
This introduces a high risk of XSS hacks - a user could potentially enter javascript that another user ends up executing. Since we hold sensitive data it's essential that this never happens.
What are the best practices in dealing with this? Is any security whitelist or escape pattern alone good enough?
Is there any advice on dealing with redirections (for instance, a "this link goes outside our site" message on a warning page before following the link)?
Is there an argument for not supporting user entered links at all?
Clarification:
Basically our users want to input:
stackoverflow.com
And have it output to another user:
stackoverflow.com
What I really worry about is them using this in an XSS hack, i.e. they input:
javascript:alert('hacked!');
So other users get a link that displays as:
stackoverflow.com
but executes the script when they click it.
My example is just to explain the risk - I'm well aware that javascript and URLs are different things, but by letting them input the latter they may be able to execute the former.
You'd be amazed how many sites you can break with this trick - HTML is even worse. If they know how to deal with links, do they also know how to sanitise <iframe>, <img> and clever CSS references?
I'm working in a high security environment - a single XSS hack could result in very high losses for us. I'm happy that I could produce a Regex (or use one of the excellent suggestions so far) that could exclude everything that I could think of, but would that be enough?
If you think URLs can't contain code, think again!
https://owasp.org/www-community/xss-filter-evasion-cheatsheet
Read that, and weep.
Here's how we do it on Stack Overflow:
/// <summary>
/// returns "safe" URL, stripping anything outside normal charsets for URL
/// </summary>
public static string SanitizeUrl(string url)
{
    return Regex.Replace(url, @"[^-A-Za-z0-9+&@#/%?=~_|!:,.;\(\)]", "");
}
The process of rendering a link "safe" should go through three or four steps (a rough PHP sketch follows the list):
Unescape/re-encode the string you've been given (RSnake has documented a number of tricks at http://ha.ckers.org/xss.html that use escaping and UTF encodings).
Clean the link up: regexes are a good start. Make sure to truncate the string or throw it away if it contains a " (or whatever you use to close the attributes in your output). If you're doing the links only as references to other information, you can also force the protocol at the end of this process: if the portion before the first colon is not 'http' or 'https', append 'http://' to the start. This allows you to create usable links from incomplete input, as a user would type it into a browser, and gives you a last shot at tripping up whatever mischief someone has tried to sneak in.
Check that the result is a well formed URL (protocol://host.domain[:port][/path][/[file]][?queryField=queryValue][#anchor]).
Possibly check the result against a site blacklist or try to fetch it through some sort of malware checker.
If security is a priority I would hope that the users would forgive a bit of paranoia in this process, even if it does end up throwing away some safe links.
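Here is a rough PHP sketch of steps 1 to 3, reusing the character whitelist from the C# regex above. It is illustrative only: it decodes a single layer of escaping and should be read alongside the OWASP evasion cheat sheet, not instead of it:
<?php
// Rough sketch of the cleaning steps. Not a complete defence on its own.
function make_link_safe($url) {
    // Step 1: undo percent-escaping an attacker may have layered on.
    $url = rawurldecode($url);
    // Throw the link away if it could close the href="..." attribute.
    if (strpos($url, '"') !== false) {
        return '';
    }
    // Step 2: strip everything outside a conservative whitelist
    // (the same character set as the C# regex above).
    $url = preg_replace('{[^-A-Za-z0-9+&@#/%?=~_|!:,.;()]}', '', $url);
    // Force the protocol: if the part before the first colon is not
    // http or https, prepend http://.
    $scheme = strtolower((string) strstr($url, ':', true));
    if ($scheme !== 'http' && $scheme !== 'https') {
        $url = 'http://' . $url;
    }
    // Step 3: check the result is still a well-formed URL.
    return filter_var($url, FILTER_VALIDATE_URL) ? $url : '';
}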
Use a library, such as OWASP-ESAPI API:
PHP - http://code.google.com/p/owasp-esapi-php/
Java - http://code.google.com/p/owasp-esapi-java/
.NET - http://code.google.com/p/owasp-esapi-dotnet/
Python - http://code.google.com/p/owasp-esapi-python/
Read the following:
https://www.golemtechnologies.com/articles/prevent-xss#how-to-prevent-cross-site-scripting
https://www.owasp.org/
http://www.secbytes.com/blog/?p=253
For example:
$url = "http://stackoverflow.com"; // e.g., $_GET["user-homepage"];
$esapi = new ESAPI( "/etc/php5/esapi/ESAPI.xml" ); // Modified copy of ESAPI.xml
$sanitizer = ESAPI::getSanitizer();
$sanitized_url = $sanitizer->getSanitizedURL( "user-homepage", $url );
Another example is to use a built-in function. PHP's filter_var function is an example:
$url = "http://stackoverflow.com"; // e.g., $_GET["user-homepage"];
$sanitized_url = filter_var($url, FILTER_SANITIZE_URL);
Note that filter_var() with FILTER_SANITIZE_URL only strips characters that are illegal in a URL; it still lets javascript: calls through and does not restrict the scheme to http or https, so it is not sufficient on its own. Using the OWASP ESAPI Sanitizer is probably the best option.
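If you prefer to validate and reject rather than rewrite, here is a minimal sketch. The is_acceptable_url() helper is my own; FILTER_VALIDATE_URL checks that the URL is well formed, not that it is safe, hence the explicit scheme whitelist:
<?php
// Reject anything that is not a well-formed URL with a whitelisted scheme.
function is_acceptable_url($url) {
    if (filter_var($url, FILTER_VALIDATE_URL) === false) {
        return false;
    }
    $scheme = strtolower((string) parse_url($url, PHP_URL_SCHEME));
    return in_array($scheme, array('http', 'https'), true);
}

var_dump(is_acceptable_url('http://stackoverflow.com')); // bool(true)
var_dump(is_acceptable_url('ftp://example.com/file'));   // bool(false)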
Still another example is the code from WordPress:
http://core.trac.wordpress.org/browser/tags/3.5.1/wp-includes/formatting.php#L2561
Additionally, since there is no way of knowing where the URL leads (i.e., it might be a valid URL, but the content at that URL could be mischievous), Google has a safe browsing API you can call:
https://developers.google.com/safe-browsing/lookup_guide
Rolling your own regex for sanitation is problematic for several reasons:
Unless you are Jon Skeet, the code will have errors.
Existing APIs have many hours of review and testing behind them.
Existing URL-validation APIs consider internationalization.
Existing APIs will be kept up-to-date with emerging standards.
Other issues to consider:
What schemes do you permit (are file:/// and telnet:// acceptable)?
What restrictions do you want to place on the content of the URL (are malware URLs acceptable)?
Just HTMLEncode the links when you output them. Make sure you don't allow javascript: links. (It's best to have a whitelist of protocols that are accepted, e.g., http, https, and mailto.)
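A minimal PHP sketch of that advice, assuming $url holds the user-supplied link (the whitelist mirrors the protocols suggested above):
<?php
// Whitelist the protocol first, then HTML-encode at output time.
$allowed = array('http', 'https', 'mailto');
$scheme  = strtolower((string) parse_url($url, PHP_URL_SCHEME));
$encoded = htmlspecialchars($url, ENT_QUOTES, 'UTF-8');

if (in_array($scheme, $allowed, true)) {
    // ENT_QUOTES stops the value from breaking out of the href attribute.
    echo '<a href="' . $encoded . '">' . $encoded . '</a>';
} else {
    // Anything else is rendered as plain text, never as a link.
    echo $encoded;
}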
You don't specify the language of your application, so I will presume ASP.NET; for that you can use the Microsoft Anti-Cross Site Scripting Library.
It is very easy to use; all you need is an include and that is it :)
While you're on the topic, why not give Design Guidelines for Secure Web Applications a read?
If you use any other language: if there is a library for ASP.NET, something similar has to be available for other languages as well (PHP, Python, RoR, etc.).
For Pythonistas, try Scrapy's w3lib.
OWASP ESAPI pre-dates Python 2.7 and is archived on the now-defunct Google Code.
How about not displaying them as a link? Just use the text.
Combined with a warning to proceed at your own risk, that may be enough.
Addition: see also Should I sanitize HTML markup for a hosted CMS? for a discussion on sanitising user input.
There is a library for javascript that solves this problem
https://github.com/braintree/sanitize-url
Try it =)
In my project written in JavaScript I use this regex as a whitelist:
url.match(/^((https?|ftp):\/\/|\.{0,2}\/)/)
The only limitation is that you need to put ./ in front of files in the same directory, but I think I can live with that.
Using regular expressions to prevent XSS vulnerabilities is complicated and thus hard to maintain over time, and it can leave some vulnerabilities behind. URL validation using a regular expression is helpful in some scenarios, but it is better not mixed with vulnerability checks.
The solution is probably to use a combination of an encoder like AntiXssEncoder.UrlEncode for encoding the query portion of the URL and UriBuilder for the rest:
public sealed class AntiXssUrlEncoder
{
    public string EncodeUri(Uri uri, bool isEncoded = false)
    {
        // Encode the query portion of the URL to prevent XSS attacks,
        // if it is not already encoded. Otherwise let UriBuilder take care of it.
        var encodedQuery = isEncoded
            ? uri.Query.TrimStart('?')
            : AntiXssEncoder.UrlEncode(uri.Query.TrimStart('?'));
        var encodedUri = new UriBuilder
        {
            Scheme = uri.Scheme,
            Host = uri.Host,
            Path = uri.AbsolutePath,
            Query = encodedQuery.Trim(),
            Fragment = uri.Fragment
        };
        if (uri.Port != 80 && uri.Port != 443)
        {
            encodedUri.Port = uri.Port;
        }
        return encodedUri.ToString();
    }

    public static string Encode(string uri)
    {
        var baseUri = new Uri(uri);
        var antiXssUrlEncoder = new AntiXssUrlEncoder();
        return antiXssUrlEncoder.EncodeUri(baseUri);
    }
}
You may need to include whitelisting to exclude some characters from encoding; that can be helpful for particular sites.
HTML-encoding the page that renders the URL is another thing you may need to consider.
BTW, please note that encoding the URL may break web parameter tampering, so the encoded link may appear not to work as expected.
Also, you need to be careful about double encoding.
P.S. AntiXssEncoder.UrlEncode would have been better named AntiXssEncoder.EncodeForUrl, to be more descriptive: it encodes a string for use in a URL; it does not encode a given URL and return a usable URL.
You could use a hex code to convert the entire URL and send it to your server. That way the client would not understand the content at first glance. After reading the content, you could decode the content (URL = ?) and send it to the browser.
Allowing a URL and allowing JavaScript are 2 different things.
