Behavioural part of Cuckoo analysis report is empty - sandbox

I analyzed a malware sample
SHA1 : 0bd0a280eb687c69b697d559d389e34d4fad02b2.
The report generated by Cuckoo doesn't contain any information in the behavioral analysis section. I analyzed the same sample on malwr.com and its report shows the behavioral part correctly, with file accesses, registry keys and mutexes. Link to the malwr.com report: https://malwr.com/analysis/ZjA1OTExOWI5ZWIwNDZjMjkyN2Y5NWRmMzhlNWRhZmY/
I am unable to figure out where the fault is. Any help is greatly appreciated.

One reason might be that the malware has the ability to detect a virtual machine environment. I faced the same problem when analysing: 1fb06a150c91059501b739708627a9c752d906aff211455248cecb755a5a5c6a
Maybe the Cuckoo server behind malwr.com uses hardened VMs for the malware, so the samples can't detect the fake environment. Have a look at: kernelmode

Related

Testing security of untrusted image upload

I need to test how my website will deal with image files with malware embedded. I'm already satisfied with its validation of the file type using header inspection. I'm looking to test it with genuine image files with embedded malware.
Can I get a jpg that should be recognised as malware to test this? Or make one myself?
Or if there is a better way of doing it, how can I make sure that everything is wired up and working correctly?
It will be deployed on Azure with the images saved to Azure Blob storage. I want to use the [new] Azure storage security service to detect malware.
It is typically difficult to use real malware for testing. You need to handle the file in your test environment with care so that it does not infect your systems or escape into the rest of your network.
To facilitate testing, anti-malware vendors provide a number of test files. These are benign files which are nevertheless detected as malicious, so they are safe to use in your environment while still serving as a good test case for your solution.
There is one industry-standard file (EICAR) which is available for testing anti-malware solutions. As you can see from VirusTotal, almost every vendor will detect this file as malicious, and many of them identify it as a test file.
In your particular use case there is one challenge: EICAR is not an image file. However, as you mention in the comments, I believe that turning off the image-type validation during this test is much better than using real malware for testing.
Finally, one word of caution with EICAR. As so many products detect it as a virus, you need to be careful about the anti-malware solution running on any system the file ends up on. For example, if your development environment has an on-access scanner, attempting to run the test will potentially trigger it.
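If it helps to automate this, below is a minimal Python sketch that writes the standard EICAR test file. The EICAR string itself is public and harmless; it is split into two halves here only so that the script's own source file is less likely to be quarantined by an on-access scanner. The file name and the idea of pushing it through your upload form are assumptions about your setup, and whether Azure's storage malware scanning flags EICAR specifically is something to confirm against its documentation.

    # write_eicar.py -- create the standard EICAR anti-malware test file.
    # The string is split in two only so this source file itself is less
    # likely to be quarantined by an on-access scanner.
    EICAR_PART_1 = r"X5O!P%@AP[4\PZX54(P^)7CC)7}$"
    EICAR_PART_2 = r"EICAR-STANDARD-ANTIVIRUS-TEST-FILE!$H+H*"

    def write_eicar(path: str = "eicar.com") -> None:
        """Write the 68-byte EICAR test string as the entire file content."""
        with open(path, "wb") as handle:
            handle.write((EICAR_PART_1 + EICAR_PART_2).encode("ascii"))

    if __name__ == "__main__":
        write_eicar()
        print("Wrote eicar.com - upload it (with image-type validation "
              "temporarily disabled) and confirm the scan flags it.")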

Can a running nodejs application cryptographically prove it is the same as published source code version?

Can a running nodejs program cryptographically prove that it is the same as a published source code version in a way that could not be tampered with?
Said another way, is there a way to ensure that the commands/code executed by a nodejs program are all and only the commands and code specified in a publicly disclosed repository?
The motivation for this question is the following: in an age of highly sophisticated hackers as well as pressure from government agencies for "backdoors" that allow them to snoop on private transactions and exchanges, can we ensure that an application has neither been hacked nor had a backdoor added?
As an example, consider an open source-based nodejs application like lesspass (lesspass/lesspass on github) which is used to manage passwords and available for use here (https://lesspass.com/#/).
Or an alternative program for a similar purpose encryptr (SpiderOak/Encryptr on github) with its downloadable version (https://spideroak.com/solutions/encryptr).
Is there a way to ensure that the versions available on their sites to download/use/install are running exactly the same code as is presented in the open source code?
Even if we have 100% faith in the integrity of the teams behind applications like these, how can we be sure they have not been coerced by anyone into altering the running/downloadable version of their program, for example to create a backdoor?
Thank you for your help with this important issue.
Sadly, no. Simple as that.
The long version:
You are dealing with the outputs of a program and want to ensure that the output was generated by a specific version of one specific program.
Let's check a few things:
Can an attacker predict the outputs of said program?
If we are talking about open source programs, yes: an attacker can predict what you are expecting to see and can even reproduce all underlying crypto checks against the original source code, or against all internal states of the program.
Imagine running the program inside a virtual machine with full debugging support: firing events at certain points in the code, directly reading memory to extract cryptographic keys, and so on. The attacker does not even have to modify the program to be able to keep copies of everything you do in plaintext.
So even if you could cryptographically make sure that the code itself was not tampered with, it would be worth nothing: the environment itself could be designed to do something harmful, and as Maarten Bodewes wrote, in the end you need to trust something.
One could argue that a TPM could solve this, but I'm afraid of the world that leads to: in the end you still have to trust something, such as a manufacturer or, worse, a public office signing keys for TPMs, and as we know, those would never, ever have intentions other than what's good for you. So basically you wouldn't win anything with a centralized, TPM-based infrastructure.
You can do this cryptographically by having a runtime that checks signatures before running any code. Of course, you'd have to trust that runtime environment as well. Unless you have such an environment you're out of luck - that is, unless you do a full code review.
Furthermore, you can sign the build by placing a signature within the build system. The build system and developer access can in turn be audited. This is usually how secure development environments are built. But in the end you need to trust something.
If you're just afraid that a particular download is corrupted, you can test it against an official hash published at one or more trusted locations.
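As a concrete illustration of that last point, here is a minimal Python sketch that compares a downloaded file against a published SHA-256 digest; the file name and expected value are placeholders you would take from the project's release page. Note that this only proves the download matches what the publisher hashed, not that the published source is what actually runs, which is the limitation discussed above.

    # verify_download.py -- compare a downloaded file's SHA-256 against a
    # value published at one or more trusted locations.
    import hashlib
    import sys

    def sha256_of(path: str, chunk_size: int = 1 << 20) -> str:
        """Stream the file so large downloads need not fit in memory."""
        digest = hashlib.sha256()
        with open(path, "rb") as handle:
            for chunk in iter(lambda: handle.read(chunk_size), b""):
                digest.update(chunk)
        return digest.hexdigest()

    if __name__ == "__main__":
        # Usage: python verify_download.py <downloaded-file> <expected-sha256>
        path, expected = sys.argv[1], sys.argv[2].lower()
        actual = sha256_of(path)
        if actual == expected:
            print("OK: hashes match")
        else:
            print(f"MISMATCH: expected {expected}, got {actual}")
            sys.exit(1)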

Security issues and risks concerning debugging production environments

I am doing some research about security vulnerabilities and risks related to debugging production environments, and I would like to get your opinions about the possible risks of such environments.
By debugging I mean not only inspecting software with a debugger but also all kinds of debugging techniques such as logging, testing, reviewing code and especially post-mortem debugging using mini-dumps. I am especially interested in general issues and in issues related to the .NET Framework. I would also like to hear about other risks concerning the bug-management process.
In the answer below I have also posted my current research results.
For further investigation I found these related posts:
What's the risk of deploying debug symbols (pdb file) in a production environment?
Good processes for debugging production environment? Copying data to Dev?
Which are the dangers of remote debugging?
1) The most obvious issue is private data exposure. Using a debugger we have access to all data that was previously loaded into process memory, which means we are bypassing the access-control logic built into the software. In many countries there are also legal issues with exposing private data to unauthorized people.
This is also a concern with logging: we should be careful about what information we log, so that we have enough data to investigate the cause of a bug but do not store sensitive data (financial records, health-care records) in the logs. Another general issue is that the security applied to log files is usually not consistent with the security of the production database.
.NET addresses this issue with the SecureString class, but it does not eliminate the problem, it only minimizes the window of exposure. To process the data we have to obtain the plain string value at some point, so if a memory dump is taken while that processing is taking place, the sensitive information will be exposed in the dump file. Another way to address this issue is to prevent developers from accessing production data by anonymising any data before it is copied to local environments.
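To make the logging point concrete, here is a minimal sketch of a log filter that masks sensitive values before they are written. It is in Python purely for illustration (the same idea applies to .NET logging frameworks), and the field names password, ssn and card_number are made up; adapt the pattern to whatever your application actually handles.

    # redacting_logger.py -- mask sensitive values before they reach log storage.
    import logging
    import re

    # Illustrative field names; extend to match your own data.
    SENSITIVE_PATTERN = re.compile(
        r"(password|ssn|card_number)\s*=\s*\S+", re.IGNORECASE
    )

    class RedactingFilter(logging.Filter):
        def filter(self, record: logging.LogRecord) -> bool:
            record.msg = SENSITIVE_PATTERN.sub(r"\1=***", str(record.msg))
            return True  # never drop the record, only scrub it

    logging.basicConfig(level=logging.INFO)
    logger = logging.getLogger("app")
    logger.addFilter(RedactingFilter())

    logger.info("login failed for user=alice password=hunter2")
    # logs: INFO:app:login failed for user=alice password=***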
2) Another issue is the risk of introducing new defects into the software while investigating and fixing reported bugs. The bug-fixing process tends to be more ad hoc than the normal development process, and for a reason: bugs in production can cost the company money, so there is pressure to fix them quickly.
The solution here is to maintain the same quality procedures that are applied to the development of new features.

How can I see all the rules of Fortify Secure Coding Rules?

I want to see the specific rules of the Fortify Secure Coding Rules (the rules that Fortify uses by default), because I want to write a report about all the rules that Fortify uses.
I have tried to look in C:\Program Files\Fortify Software\HP Fortify v3.60\Core\config\rules, but I have only found .bin files, which I can't read.
I have also opened AuditWorkbench, and in Security Content Management I can't see them either.
Is there any way to see them? Thanks for your help.
Short of becoming a software engineer at HP Fortify, no. The default rules are considered the intellectual property of HP Fortify, and no one outside Engineering has access to them.
What problem are you trying to solve by this report?
As HP/Fortify distributes rule-packs as binary files to protect their intellectual property, you will not be able to see how the individual rules are written.
However, if you're looking to include some information about which rules/rule packs were used, you can navigate to the project summary screen and see which rule packs were used at the time of the scan. You will also have access to information such as each rule pack's version and additional metadata about each pack.
Being able to provide this level of detail in a meta-report might be sufficient to preempt follow-up questions. Just a thought...
The built-in Fortify rules are not available to read or edit, since they are the core intellectual property of the tool.
However, Fortify has published a taxonomy of the vulnerabilities it scans for and their mapping to CWEs. The link is here: https://vulncat.fortify.com/en/weakness

Assembly security

I'm currently offering an assembly compile service for some people. They can enter their assembly code in an online editor and compile it. When they compile it, the code is sent to my server with an AJAX request, gets compiled, and the output of the program is returned.
However, I'm wondering what I can do to prevent any serious damage to the server. I'm quite new to assembly myself, so what is possible when they run their code on my server? Can they delete or move files? Is there any way to prevent these security issues?
Thank you in advance!
Have a look at http://sourceforge.net/projects/libsandbox/. It is designed for doing exactly what you want on a linux server:
This project provides API's in C/C++/Python for testing and profiling simple (single process) programs in a restricted environment, or sandbox. Runtime behaviours of binary executable programs can be captured and blocked according to configurable / programmable policies.
The sandbox libraries were originally designed and utilized as the core security module of a full-fledged online judge system for ACM/ICPC training. They have since then evolved into a general-purpose tool for binary program testing, profiling, and security restriction. The sandbox libraries are currently maintained by the OpenJudge Alliance (http://openjudge.net/) as a standalone, open-source project to facilitate various assignment grading solutions for IT/CS education.
If this is a tutorial service, so the clients just need to test miscellaneous assembly code and do not need to perform operations outside of their program (such as reading or modifying the file system), then another option is to permit only a selected subset of instructions. In particular, do not allow any instructions that can make system calls, and allow only limited control-transfer instructions (e.g., no returns, branches only to labels defined within the user’s code, and so on). You might also provide some limited ways to return output, such as a library call that prints whatever value is in a particular register. Do not allow data declarations in the text (code) section, since arbitrary machine code could be entered as numerical data definitions.
Although I wrote “another option,” this should be in addition to the others that other respondents have suggested, such as sandboxing.
This method is error prone and, if used, should be carefully and thoroughly designed. For example, some assemblers permit multiple instructions on one line. So merely ensuring that the text in the first instruction field of a line was acceptable would miss the remaining instructions on the line.
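For illustration only, here is a rough Python sketch of the whitelist pre-filter described above, with exactly the caveats already mentioned: the mnemonic set is a made-up minimal subset, it assumes one instruction per line, and it does not handle data directives, macros or assembler-specific syntax, so treat it as a starting point rather than a security boundary.

    # asm_whitelist.py -- naive pre-filter for user-submitted assembly.
    import re

    ALLOWED_MNEMONICS = {  # a deliberately tiny, hypothetical subset
        "mov", "add", "sub", "cmp", "jmp", "je", "jne", "inc", "dec", "nop",
    }
    LABEL_DEF = re.compile(r"^[A-Za-z_]\w*:$")

    def check_program(source: str) -> list[str]:
        """Return human-readable rejection reasons (empty list = accepted)."""
        problems = []
        for lineno, raw in enumerate(source.splitlines(), start=1):
            line = raw.split(";", 1)[0].strip()  # drop comments and whitespace
            if not line or LABEL_DEF.match(line):
                continue  # blank lines and label definitions are fine
            mnemonic = line.split(None, 1)[0].lower()
            if mnemonic not in ALLOWED_MNEMONICS:
                problems.append(f"line {lineno}: '{mnemonic}' is not allowed")
        return problems

    if __name__ == "__main__":
        sample = "start:\n    mov eax, 1\n    int 0x80  ; syscall -> rejected\n"
        for reason in check_program(sample):
            print(reason)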
Compiling and running someone else's arbitrary code on your server is exactly that: arbitrary code execution. Arbitrary code execution is the holy grail of every malicious hacker's quest, and someone could probably use this question to find your service and exploit it this second. Stop running the service immediately. If you wish to offer it again, compile and run the submitted programs within a sandbox; until that is implemented, keep the service suspended.
You should run the code in a virtual machine sandbox, because if the code is malicious, the sandbox will prevent it from damaging your actual OS. Some virtual machine options include VirtualBox and Xen. You could also perform some sort of signature detection on the code to search for known malicious functionality, though any form of signature detection can be beaten.
This is a link to VirtualBox's homepage: https://www.virtualbox.org/
This is a link to Xen: http://xen.org/
