How to determine the source of an HTTP request in coldfusion-out.log

In the last couple of days it appears some malicious code has been executing on my ColdFusion server. I've found this type of entry in the coldfusion-out.log file:
08/20 08:47:02 Information [jrpp-6006] - Starting HTTP request {URL='http://32435898:80/c4/b.txt', method='get'}
I'm trying to track down the source of these HTTP requests but don't know where to look. I've searched all the code for this text and have found nothing, so I suspect it's being injected somewhere. Are there other log files, or other ways to determine which ColdFusion scripts are executing this HTTP request?
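One thing worth noting: an all-digit host like 32435898 is usually a decimal-encoded IPv4 address, a common trick for hiding the real destination. While hunting for better log data, you can also brute-force the search: scan every file under the webroot for the strings that appear in the log entry. Here is a minimal sketch in Node.js; the webroot path, file extensions, and pattern list are my assumptions, so adjust them for your server.

    // scan.js: recursively search a ColdFusion webroot for suspicious strings.
    const fs = require('fs');
    const path = require('path');

    const webroot = process.argv[2] || '.';
    // Patterns taken from the log entry above, plus a generic
    // "http:// followed by a long run of digits" (decimal-encoded IP).
    const patterns = [/cfhttp/i, /32435898/, /b\.txt/i, /http:\/\/\d{6,}/];

    function scan(dir) {
      for (const name of fs.readdirSync(dir)) {
        const full = path.join(dir, name);
        if (fs.statSync(full).isDirectory()) {
          scan(full);
        } else if (/\.(cfm|cfc|txt)$/i.test(name)) {
          const text = fs.readFileSync(full, 'utf8');
          for (const p of patterns) {
            if (p.test(text)) console.log(full + ': matches ' + p);
          }
        }
      }
    }

    scan(webroot);

Run it as node scan.js /path/to/webroot. A match on cfhttp is not proof of anything by itself, but a cfhttp call sitting in a file you don't recognize is a good lead.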

Related

HTTP logs show URLs with '\" and \"x\"=\"x' appended

I had a recent issue in my server-side code, where an unexpected HTTP query value was coming through. Upon inspection of the logs, I see that my URL has had the following text appended to it:
\" and \"x\"=\"x
Thus the normal URL something.com/foo?test=true has been rewritten to something.com/foo?test=true\" and \"x\"=\"x.
I assume this is some sort of test to see if an exploit exists on my server. This is very difficult to Google due to the characters involved, and the only results I could come up with was a log file that also appeared to have the exact same text appended to a URL.
Anyone have any info on what this is trying to exploit? My service is a US-only service (providing local community information), and the IP address this request came from is 31.177.95.9, which, unsurprisingly, is a Russian IP address.
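For context (my reading, not from the original thread): this is the classic shape of a SQL injection probe. If the server pastes the query value directly into a quoted SQL string, the appended fragment closes the quote and adds an always-true condition, so the page still renders normally and the attacker learns the parameter is injectable. A sketch of the vulnerable pattern, with made-up table and column names:

    // Vulnerable: attacker-controlled input concatenated straight into SQL.
    const test = 'true" and "x"="x'; // the value from the rewritten URL
    const unsafe = 'SELECT * FROM listings WHERE flag = "' + test + '"';
    console.log(unsafe);
    // -> SELECT * FROM listings WHERE flag = "true" and "x"="x"
    // The injected "x"="x is always true, so results look unchanged,
    // which confirms to the attacker that injection is possible.

    // Safe: let the driver escape the value via a placeholder, e.g.
    // db.query('SELECT * FROM listings WHERE flag = ?', [test], callback);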

Benchmark Node.js Ghost with JMeter

I'm trying to benchmark Node.js Ghost with JMeter. I want to create a test plan that just signs in and then creates and publishes a post.
My problem is that I do not get any session cookies, so every request to the backend fails. I have already tried changing the CookieManager settings in the user.properties file.
I tried the following configuration:
CookieManager.check.cookies=false
CookieManager.delete_null_cookies=false
CookieManager.save.cookies=true
jmeter.save.saveservice.url=true
jmeter.save.saveservice.requestHeaders=true
This is the results tree (on the left side you can see my test plan setup):
I don't think Ghost uses cookies at all; the errors you're seeing are likely due to a failed login.
Looking into the response to the first request:
It seems Ghost uses OAuth authentication.
So you need to do the following:
Extract the access_token value from the response to the /ghost/api/v0.1/authentication/token request. You can do it using a JSON Path PostProcessor that stores the value into a JMeter variable (access_token below).
Configure an HTTP Header Manager for the next requests to send an Authorization header with the value Bearer ${access_token}
The whole process of extracting dynamic content from a previous response, converting it into a JMeter variable, and adding it as a parameter to the next request is known as correlation.
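If it helps to see the flow outside of JMeter, here is the same login, extract token, send Bearer header sequence sketched in Node.js. The token endpoint comes from the answer above; the port and the grant_type/username/password body fields are assumptions about Ghost's v0.1 API, so check them against your install.

    const http = require('http');
    const querystring = require('querystring');

    // Assumed password-grant body; adjust to what your Ghost expects.
    const body = querystring.stringify({
      grant_type: 'password',
      username: 'admin@example.com',
      password: 'secret',
    });

    const req = http.request({
      host: 'localhost',
      port: 2368, // Ghost's default port
      path: '/ghost/api/v0.1/authentication/token',
      method: 'POST',
      headers: {
        'Content-Type': 'application/x-www-form-urlencoded',
        'Content-Length': Buffer.byteLength(body),
      },
    }, (res) => {
      let data = '';
      res.on('data', (chunk) => { data += chunk; });
      res.on('end', () => {
        // This is the value the JSON Path PostProcessor extracts.
        const token = JSON.parse(data).access_token;
        // Every later request carries it, as the Header Manager does.
        http.get({
          host: 'localhost',
          port: 2368,
          path: '/ghost/api/v0.1/posts/',
          headers: { Authorization: 'Bearer ' + token },
        }, (r) => console.log('posts request status:', r.statusCode));
      });
    });
    req.end(body);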

What's the easiest way to request a list of web pages from a web server one by one?

Given a list of URLs, how does one implement the following automated task (assuming Windows and Ubuntu are the available OSes)? Are there existing tools that can make implementing this easier, or that do it out of the box?
log in with already-known credentials
for each specified url
request page from server
wait for page to be returned (no specific max time limit)
if request times out, try again (try x times)
if server replies, or x attempts failed, request next url
end for each
// Note: this is intentionally *not* asynchronous to be nice to the web-server.
Background: I'm implementing a worker tool that will request pages from a web server so the data those pages need to crunch through will be cached for later. The worker doesn't care about the resulting pages' contents, although it might care about HTTP status codes. I've considered a phantom/casper/node setup, but am not very familiar with this technology and don't want to reinvent the wheel (even though it would be fun).
You can request pages easily with Node's built-in http module.
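A minimal sketch of that approach, walking the URL list one at a time with retries, as the pseudocode above describes (the URL list, retry count, and login cookie are placeholders):

    const http = require('http');

    const URLS = ['http://example.com/page1', 'http://example.com/page2'];
    const MAX_TRIES = 3;
    const COOKIE = 'session=value-from-your-login-step';

    function fetch(url, tries, done) {
      http.get(url, { headers: { Cookie: COOKIE } }, (res) => {
        console.log(url, '->', res.statusCode); // only the status matters here
        res.resume(); // drain and discard the body
        res.on('end', done);
      }).on('error', () => {
        if (tries > 1) fetch(url, tries - 1, done); // retry on failure
        else { console.log(url, '-> failed after', MAX_TRIES, 'tries'); done(); }
      });
    }

    // Sequential on purpose, to be nice to the web server.
    (function next(i) {
      if (i < URLS.length) fetch(URLS[i], MAX_TRIES, () => next(i + 1));
    })(0);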
Some people prefer the request module, available on npm; see its GitHub page.
If you need more than that, you can use PhantomJS; there are modules on GitHub for bridging Node and PhantomJS.
However, you could also look at simple CLI tools for making requests, such as wget or curl.

Session store get and set on every http request?

I am using Node.js with https://github.com/visionmedia/connect-redis to store session variables in Redis.
I ran redis-cli monitor and noticed that on a single page load, there are three pairs of GET and SETEX commands being executed. The three pairs come from the three HTTP requests made on my page load (favicon.ico, /, and index.css).
My question: is it normal for a Redis GET and SETEX to run on every HTTP request? Each pair contains identical data.
The three HTTP GETs that you are seeing are normal for a web application.
You can set a very long expiration date on your favicon.ico so that the browser only requests it once.
For static assets (i.e. CSS, JS, images) you can do the same, or serve them from a different domain (or subdomain).
Be aware that if you put a very long expiration date on a CSS/JS file, the browser will not request it again, and you might run into weird "issues" where you make a change to a CSS/JS file but the browser never fetches the updated copy. This is one of the reasons a lot of sites "version" their CSS files (e.g. styles-2013-02-17.css): a changed file gets a new name, which forces a fresh request.
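Here is a sketch of the far-future-expiry idea using plain Node (the paths, content types, and one-year max-age are arbitrary choices; in a real app your session middleware would sit in front of the dynamic branch):

    const http = require('http');
    const fs = require('fs');

    http.createServer((req, res) => {
      if (req.url === '/index.css' || req.url === '/favicon.ico') {
        // Far-future expiry: the browser caches these and stops asking,
        // so these requests no longer hit your session store at all.
        res.writeHead(200, {
          'Content-Type': req.url === '/index.css' ? 'text/css' : 'image/x-icon',
          'Cache-Control': 'public, max-age=31536000', // one year
        });
        fs.createReadStream('.' + req.url).pipe(res);
      } else {
        // Dynamic pages: this is where the session middleware runs and
        // where the GET/SETEX pairs you saw in redis-cli monitor come from.
        res.writeHead(200, { 'Content-Type': 'text/html' });
        res.end('<html><body>hello</body></html>');
      }
    }).listen(3000);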

How does the browser know a web page has changed?

This is a deceptively simple thing I feel I should know more about, but I don't, and I can't find much written about it.
The question is: How exactly does a browser know a web page has changed?
Intuitively I would say that F5 refreshes the cache for a given page, that the cache is used for history navigation only, and that it has an expiration date. That leads me to think the browser never knows if a web page has changed and simply reloads the page once the cache is gone, but I am sure this is not always the case.
Any pointers appreciated!
Browsers will usually get this information through HTTP headers sent with the page.
For example, the Last-Modified header tells the browser how old the page is. A browser can send a simple HEAD request to the page to get the Last-Modified value; if it is newer than what the browser has in cache, the browser can reload the page.
There are a bunch of other headers related to caching as well (like Cache-Control). Check out: http://www.w3.org/Protocols/rfc2616/rfc2616-sec14.html
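To make that concrete, here is a small Node.js sketch that issues a HEAD request and reads the Last-Modified header (example.com is just a stand-in, and not every server sends this header):

    const http = require('http');

    const req = http.request({
      host: 'example.com',
      path: '/',
      method: 'HEAD', // headers only, no response body
    }, (res) => {
      console.log('status:', res.statusCode);
      console.log('Last-Modified:', res.headers['last-modified']);
      // A cache could compare this date against its stored copy's date.
    });
    req.end();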
Don't guess; read the docs. Start with a friendly but authoritative introduction to HTTP caching.
Web browsers send HTTP requests and receive HTTP responses, then display the contents of those responses. Typically a response contains HTML, and many HTML elements require further requests to fetch the various parts of the page; each image, for example, is typically another HTTP request.
There are HTTP headers that indicate whether a page is new or not, for example the Last-Modified date. Web browsers typically use a conditional GET (with a conditional header field) or a HEAD request to detect changes. A HEAD request receives only the headers, not the actual resource that was requested.
A conditional GET HTTP request will return a status of 304 Not Modified if there are no changes.
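And the conditional GET variant in the same vein: the If-Modified-Since request header lets a well-behaved server answer 304 Not Modified with no body (again using example.com as a stand-in):

    const http = require('http');

    http.get({
      host: 'example.com',
      path: '/',
      // "Only send the page if it changed after this date."
      headers: { 'If-Modified-Since': new Date().toUTCString() },
    }, (res) => {
      if (res.statusCode === 304) {
        console.log('304 Not Modified: serve the cached copy');
      } else {
        console.log(res.statusCode + ': page changed, download it again');
      }
      res.resume();
    });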
The page can then later change based on:
User input
After user input, changes can happen through JavaScript code running without a postback
After user input, a new request can be sent to the server to fetch a whole new (possibly identical) page.
JavaScript code can run once a page is already loaded and change things at any time; for example, a timer may change something on the page.
Some pages also contain HTML tags that will scroll or blink or have other behavior.
You're on the right track, and as Jonathan mentioned, nothing is better than reading the docs. However, if you only want a bit more information:
There are HTTP response headers that let the server set the cacheability of a page, which matches your intuition about expiration dates. One other important construct, though, is the HTTP HEAD request, which essentially retrieves just the headers (such as the MIME type and Content-Length, if available) for a given page. Browsers can use a HEAD request to validate what is in their caches.
There is definitely more info on the subject, so I would suggest reading the docs.
