I am using PulledPork to download my rules daily. I want to be able to test these rules and make sure everything is working. Is there anything out there that is up to date and working? I know rules2alert exists, but it is vastly unfinished and hasn't been touched in a while; when I run it on my PulledPork rules I get many errors.
It would be interesting to attempt to use a script with Scapy to automatically generate traffic that trips rules. However, there is a service that lets you generate a number of IDS alerts using just command-line tools such as wget and curl, or your browser: testmyids.com (blog post).
Just run wget testmyids.com to trip the "GPL ATTACK_RESPONSE id check returned root" signature. This is the most basic check; the linked site details more complex checks, e.g. for file formats and executables.
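For example, a minimal check from the command line (curl shown as an alternative; check your sensor's alert output afterwards to confirm the rule fired):
# The response body mimics the output of `id` run as root, which is what
# the "GPL ATTACK_RESPONSE id check returned root" rule matches on.
wget -qO- testmyids.com
# The same test with curl:
curl -s testmyids.com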
I know this question looks similar to a lot of Stack Overflow questions, but it is not the same as the others.
Basically, I've got a PowerShell script that uses the "AzSK" module. I use one of its commands in a loop to add multiple properties to my Azure storage, and on every iteration the command asks me to confirm whether I want to continue (Y/N).
Because the loop runs for more than 40 iterations, I have to confirm every single time the command executes.
As many Stack Overflow answers (and the internet in general) suggested, I tried adding -Force or -Confirm to my command to automatically answer yes, but that only applies to commands that have these parameters built in; with Get-Help <command> -Detailed I didn't see any such parameter available. So I was wondering whether it is possible to create this automatic "Y" reply even when the command does not provide a parameter for it.
The command I use is Get-AzSKAzureServicesSecurityStatus, which adds attestation statuses to control IDs inside an Azure blob storage. The command only allows one attestation status to be added at a time, so I wrapped it in a for loop, which makes the constant confirmations even worse.
Please try the format below:
cmd /c echo y | powershell "the command which will prompt"
I did a simple test deleting a directory, and it works.
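For reference, that test could look like this (the directory path is a placeholder; Remove-Item prompts when the directory is non-empty and -Recurse is not given, and the echoed "y" answers that prompt):
cmd /c echo y | powershell "Remove-Item 'C:\temp\testdir'"
The same approach should carry over to your AzSK loop, because the "y" is fed to the powershell process's standard input rather than to any cmdlet parameter.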
This may not be an answer to your exact question ("is it possible to create this auto "Y" reply even if the command does not allow any parameter for it"), but since you are trying it specifically for the attestation feature of the Secure DevOps Kit for Azure (AzSK), this might help:
The reason the confirmation message pops up for each control, and a forced "yes" is not allowed, is:
Utmost discretion is to be used when attesting controls using the
Secure DevOps Kit for Azure(AzSK). In particular, when choosing to not
fix a failing control, you are taking accountability that nothing will
go wrong even though security is not correctly/fully configured.
Ideally, the bulk attestation feature is meant to be used when the same control needs to be attested across multiple resource instances/resource groups, not the other way around. Refer to this for scenarios where the feature can be used (although it is not recommended).
Hope this helps!
Ok, so over the past year I have built some rather complex automation scripts (mostly bash, but with some perl here and there) for some of the more common work we do at my place of business. They rely heavily on ImageMagick, Ghostscript, and PhantomJS to name just a few. They also traverse a huge number of directories across the network and several different file systems and host OSs... Frankly the fact that they work is a bit of a miracle and perhaps a testament to my willingness to keep beating my head against the wall... Also, trust me, this is easier and more effective than trying to corral my resources. Our archives are... organic... and certain high-ranking individuals in the company think of them as belonging to them and do not look out for the interests of the company in their management. They are, at least, relatively well backed-up these days.
In any case, these scripts automate the production of a number of image-based, print-ready products of varying degrees of complexity, up to multi-hundred-page image-heavy books, and as such some of them accept absurdly complex argument structures to do all the things they do. (P.S. embedded JavaScript in SVGs is a MAGICAL thing!) These systems have been in "working beta" for a while now, which basically means I've been hand-entering the commands at a terminal to run them. I want to move them up to production and offer them as a web service, so that those in production who are not friends with the command line can use them, and to potentially integrate them with our new custom-developed order management system.
TL;DR below
So that's the background; the problem is this:
I'm running everything on a headless CentOS 6.4 virtual machine with SELinux disabled.
Apache2 serves up my interface.sh CGI just fine, and the internet has already helped me turn the POST data into shell variables. Now I need to launch the worker scripts that actually direct the heavy lifting and coordinate the binaries, from the CGI:
#get post data from form and make it into variables...
/bin/bash /path/to/script/worker.sh $arg1 $arg2 $arg3 $arg5 $arg6 $opt1 $arg7
Nothing.
httpd log shows permission denied, fair enough!
Ok, googling suggests that the script being called by the CGI must also be owned by the apache user and group, or by root with 755 permissions. Done!
Now the httpd log shows permission denied for things worker.sh is trying to do :/
Google has led me to believe that for security reasons fcgi requires that everything touched by the CGI process chain must have correctly controlled permissions, all the way down to the binaries and source files... Sure, this is smart for security and damage control, but almost impossible in my case. We have very dynamic data and terabytes of resources... :/
The script worker.sh exports its own environment and runs all its commands as root. This is largely to overcome the minefield of permissions disasters I have to contend with, and CentOS's own paranoia about allowing stuff to happen. I had hoped this might be a workaround, but no.
One suggestion I have seen is to simply write out the composed command to a text file and have cron or incron do something with it. Seems like that would work... BUT I'd love to get stdout/stderr back into my web page, as many of these worker scripts produce verbose errors and notifications (though no interaction), and I would like to provide notification of completion as well. Is there any way to do this that doesn't require waging a permissions war?
To run certain commands as another user, you can use sudo.
Set up sudo to allow passwordless access to run your command by the apache user. Then you can have the CGI script call sudo /path/to/script args to run it as root (or -u for another user of your choice).
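A sketch of that setup, assuming the web server runs as the apache user (on Debian-based systems it is www-data) and that worker.sh lives at the path from your question:
# /etc/sudoers.d/worker -- always edit with visudo -f
# Allow the apache user to run exactly this one script as root, no password.
apache ALL=(root) NOPASSWD: /path/to/script/worker.sh
The CGI can then invoke it and capture its output, which also addresses getting the scripts' verbose errors back into the web page:
#!/bin/bash
# interface.sh (CGI) -- run the worker as root and return its output,
# including errors, to the browser.
echo "Content-type: text/plain"
echo
sudo /path/to/script/worker.sh "$arg1" "$arg2" "$arg3" 2>&1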
It's very hard to make this secure, so you should make sure your CGI script is only accessible by trustworthy individuals.
I would really like to measure the connection speed between two specific sites; naturally, one of them is ours. Somehow I need to prove that it is not our internet connection that is flaky, but that the site at the other end is overcrowded.
At our end I have Windows and Linux machines available for this.
I imagine I would run a script at certain times of day which, for example, tries to download an image from that site and measures the download time, then puts the download time into a database and creates a graph from the records. (I know this is really simple and not sophisticated enough, hence my question.)
I need help on the time measurement.
The perceived speed differences are big: sometimes the application works flawlessly, but sometimes we get timeout errors.
Right now I use speedtest to check that our internet connection is OK, but that does not show that the misbehaving site is slow, so I can't provide hard numbers to support my case.
It may be worth mentioning that the application we are trying to use at the other end is Java-based.
Here's how I would do it in Linux:
Use wget to download whatever URL you think best represents your site. Parse the output into a file (sed, awk) and use crontab to trigger the download several times a day.
wget www.google.com
...
2014-02-24 22:03:09 (1.26 MB/s) - 'index.html' saved [11251]
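A minimal sketch of such a measurement script (the URL, log path, and schedule are placeholders to adapt):
#!/bin/sh
# measure.sh -- record how long it takes to fetch a fixed URL.
URL="http://example.com/test-image.jpg"
LOG="$HOME/sitespeed.log"
START=$(date +%s)
wget -q -O /dev/null "$URL"
STATUS=$?
END=$(date +%s)
# One line per run: timestamp, wget exit code, elapsed seconds.
echo "$(date '+%Y-%m-%d %H:%M:%S') exit=$STATUS seconds=$((END - START))" >> "$LOG"
A crontab entry like */15 * * * * /usr/local/bin/measure.sh would give you a data point every 15 minutes to graph against the times when the application feels slow or times out.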
Is there a way to execute commands using directory traversal attacks?
For instance, I can access a server's /etc/passwd file like this:
http://server.com/..%01/..%01/..%01//etc/passwd
Is there a way to run a command instead? Like...
http://server.com/..%01/..%01/..%01//ls
..... and get an output?
To be clear: I've found this vulnerability in our company's server. I'm looking to raise the risk level (or earn bonus points) by proving that it may give an attacker complete access to the system.
chroot on Linux is easily breakable (unlike FreeBSD's jails). A better solution is to switch on SELinux and run Apache in an SELinux sandbox:
run_init /etc/init.d/httpd restart
Make sure you have mod_security installed and properly configured.
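As a rough starting point (directive names from ModSecurity 2.x; a real deployment should use a maintained rule set such as the OWASP Core Rule Set rather than this single hand-written rule):
# Enable the engine and inspect request bodies.
SecRuleEngine On
SecRequestBodyAccess On
# Hypothetical rule: block obvious traversal sequences in the request URI.
SecRule REQUEST_URI "\.\./" "id:100001,phase:1,deny,status:403,log"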
If you are able to view /etc/passwd because the document root or a Directory directive is not correctly configured on the server, then the presence of this vulnerability does not automatically mean you can execute commands of your choice.
On the other hand, if you are able to view entries from /etc/passwd because the web application passes user input (a filename) to calls such as popen, exec, system, shell_exec, or variants without adequate sanitization, then you may be able to execute arbitrary commands.
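As a purely hypothetical illustration of that second case (not code from the question; never deploy anything like this), a CGI that splices user input into a shell command gives an attacker command execution, not just file reads:
#!/bin/sh
# vulnerable.cgi -- hypothetical example of the popen/system class of bug.
# ?file=/etc/passwd dumps the file, but ?file=/etc/passwd;ls also runs ls,
# because the query string reaches the shell unsanitized.
printf "Content-type: text/plain\n\n"
FILE=${QUERY_STRING#file=}
eval "cat $FILE"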
Unless the web server is utterly hideously programmed by someone with no idea what they're doing, trying to access ls using that (assuming it even works) would result in you seeing the contents of the ls binary, and nothing else.
Which is probably not very useful.
Yes, it is possible (your first question) if the application is really, really bad in terms of security.
http://www.owasp.org/index.php/Top_10_2007-Malicious_File_Execution
Edit #2: I have edited out my comments, as they were deemed sarcastic and blunt. Now that more information has come from gAMBOOKa about this, Apache with Fedora (which you should have put in the question), I would suggest:
Post to the Apache forum, highlighting that you're running the latest version of Apache on Fedora, and submit the exploit to them.
Post to Fedora's forum, again highlighting that you're running the latest version of Apache, and submit the exploit to them.
Include your httpd.conf when posting to both forums.
To minimize access to passwd files, look into running Apache in a sandboxed/chrooted environment where files such as passwd are not visible from inside the sandbox. If you have a spare box lying around, experiment there; even better, use VMware to simulate an environment identical to your Apache/Fedora setup, run the httpd server inside the virtual machine, and access it remotely to check whether the exploit is still reachable. Then chroot/sandbox it and re-run the exploit.
Document the steps to reproduce it and include a recommendation until a fix is found; meanwhile, if running the web server in a sandboxed/chrooted environment has minimal impact, push them to do so.
Hope this helps,
Best regards,
Tom.
If you can already view /etc/passwd, then the server must be poorly configured...
If you really want to execute commands, you need to know whether the PHP script running on the server makes any system() call you can reach, so that you can pass commands through the URL,
e.g.: url?command=ls
Also try to view the .htaccess files... they may do the trick.
I've been looking into different web statistics programs for my site, and one promising one is Visitors. Unfortunately, it's a C program and I don't know how to call it from the web server. I've tried using PHP's shell_exec, but my web host (NFSN) has PHP's safe mode on and it's giving me an error message.
Is there a way to execute the program within safe mode? If not, can it work with CGI? If so, how? (I've never used CGI before)
Visitors looks like a log analyzer and report generator. It's probably best set up as a cron job that creates static HTML pages once a day or so.
If you don't have shell access to your hosting account, or some sort of control panel that lets you set up cron jobs, you'll be out of luck.
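For example, a crontab entry along these lines (paths are placeholders; the -A flag, which also appears in the asker's eventual CGI solution below, enables all of visitors' reports):
# Regenerate the static report page every night at 03:00.
0 3 * * * /usr/local/bin/visitors -A /home/logs/access_log > /var/www/html/stats.html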
Is there any reason not to just use Google Analytics? It's free, and you don't have to write it yourself. I use it, and it gives you a lot of information.
Sorry, I know it's not a "programming" answer ;)
I second Jonathan's answer: this is a log analyzer, meaning that you must feed it the web server's logfile as input and it generates a summary of it. Given that you are on a shared host, you probably cannot access that file; and even if you could, it would likely contain entries for all the websites hosted on the machine (setting up separate logging for each VirtualHost is certainly possible with Apache, but I don't know whether it is common practice).
One possible workaround would be to write out a logfile from your own pages. However, this is rather difficult and can have a severe performance impact (for one, you have to serialize writes to the logfile if you don't want garbage in it from time to time). All in all, I would suggest going with an online analytics service like Google Analytics.
As fortune would have it, I do have access to the log file for my site. I've been able to generate the HTML page on the server manually; I've just been looking for a way to make that happen automatically. All I need is to execute a shell command and have its output served as the page.
Sounds like a good job for an intern.
=)
Call your host and see if you can work out a deal for doing a shell execute.
I managed to solve this problem on my own. I put the following lines in a file named visitors.cgi:
#!/bin/sh
printf "Content-type: text/html\n\n"
exec visitors -A /home/logs/access_log
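For anyone reproducing this, the script presumably also needs to be executable and placed wherever your host runs CGI, and the web server user needs read access to the access log:
chmod +x visitors.cgi   # CGI scripts must be executable
# then move it into your host's cgi-bin (exact location depends on the host)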