How can I get the number of a property with a SWRL rule in Protégé?

I want to write a rule to get the number of assets owned by a network and check if it is greater than 3. So I wrote the SWRL rule as:
Network(?n) ^ (hasAsset > 3)(?n) -> NetworkhasManyAssets(?n)
But Protégé says "Unexpected character '>'".
How should I modify the rule?
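For what it's worth, SWRL cannot count property assertions directly (OWL's open-world semantics make such aggregation impossible in a plain rule), so a common workaround is to maintain a data property holding the count and compare it with a comparison built-in. A sketch, where hasAssetCount is a hypothetical data property you would have to populate yourself:

```
Network(?n) ^ hasAssetCount(?n, ?c) ^ swrlb:greaterThan(?c, 3) -> NetworkhasManyAssets(?n)
```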

Related

How do I create a rule to block all user agents with ModSecurity V3?

I want to add a custom ModSecurity (V3) rule that can block all user agents, and allow me to whitelist certain User Agents from a file.
If this is possible, it would be great if someone could share the rule with me. I cannot seem to figure out how to do this myself.
Thanks!
What you want to do is a bit dangerous, but I'll try to give you some help.
I think CRS rule 913100 would be a good starting point for you.
It's a bit complex if you're new to ModSecurity and SecLang, so in short, this would be a possible solution. Create a rule for your WAF like this:
SecRule REQUEST_HEADERS:User-Agent "!@pmFromFile allowed-user-agents.data" \
"id:9013100,\
phase:1,\
deny,\
t:none,\
msg:'Found User-Agent associated with security scanner',\
logdata:'Matched Data: illegal UA found within %{MATCHED_VAR_NAME}: %{MATCHED_VAR}'"
Please note that you can choose any id you want for your rule, but there is a reservation list for ids:
https://coreruleset.org/docs/rules/ruleid/#id-reservations
It's highly recommended to choose a proper one to avoid collisions with other rules. 9013100 would be a good choice, and it indicates which CRS rule this one is derived from.
Then you have to make a file with the list of your allowed user agents. Note that you have to place that list in the same directory as the rule's conf file. The name of the file must be (as you can see above) allowed-user-agents.data. Put one agent per line. You can also use comments with # at the beginning of a line - just see the CRS's data files.
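A minimal allowed-user-agents.data might look like this (the patterns below are placeholders, not a recommendation):

```
# Allowed user agents, one pattern per line (matched case-insensitively)
curl
my-monitoring-agent
GoodCrawler
```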
How does this rule work?
SecRule is a token which tells the engine that this is a rule. REQUEST_HEADERS is a collection (a special variable) that the engine populates from the HTTP request. The : after the name indicates that you want to inspect only the mentioned header, namely User-Agent.
The next block is the operator. As the documentation says, @pmFromFile "performs a case-insensitive match of the provided phrases against the desired input value" - which is exactly what you need. There is a ! sign before the operator. This inverts the operator's behavior, so it will be TRUE if the User-Agent isn't in the file.
The next section is the action list. id is mandatory; it identifies the rule. phase:1 is optional but strongly recommended; for more information, see the reference. deny is a disruptive action: it terminates the request immediately. msg appends a message to the log in every case. logdata adds detailed information about the rule's result.
Why this is a little dangerous
As you can see in the documentation of the @pmFromFile operator, it uses patterns. This means you do not have to list exact User-Agent names; it's enough to put a pattern like "curl" or "mozilla" - but be careful: a too-loose pattern can match more than you intend, which means - in this case - an attacker can bypass your rule: it's enough to include the pattern to trick it.
Consider that you put the pattern my-user-agent into the data file. Now if someone just uses that pattern as their User-Agent, the rule won't match.
It is generally true that handling whitelists this way (in some special contexts, like this one) is dangerous, because they are easy to bypass.

Htaccess - Rewrite URL with multiple different conditions

I want to make my URLs friendly, with multiple conditions.
I got this: www.example.com/?lang=en&page=test&model=mymodel
I want to have: www.example.com/en/test/mymodel
But I also have this (with other parameters):
www.example.com/?lang=en&otherpage=othertest&othermodel=myothermodel
which must become:
www.example.com/en/othertest/myothermodel
How can I do this for my entire website?
If you're going to use friendly URLs that look like this:
www.example.com/<language>/<value1>/<value2>
then Apache won't be able to distinguish between the first and the second "non-friendly" URLs that you mentioned:
www.example.com/?lang=en&page=test&model=mymodel
www.example.com/?lang=en&otherpage=othertest&othermodel=myothermodel
This is because the parameter names (page and model in the 1st URL, otherpage and othermodel in the 2nd) are not present in, and can't be guessed from, the friendly URL.
A possible workaround depends on how many different scenarios you have, that is, how many different parameters you want to handle.
E.g. if you only have a few scenarios, you can add a part to the friendly URL pattern telling Apache which parameter names to use, like so:
www.example.com/<language>/<parameter_set>/<value1>/<value2>
then, tell Apache to use the first parameter set if <parameter_set> equals e.g. 1, the second set if it equals 2 and so on.
A sample rewrite rule set could be:
RewriteEngine On
RewriteRule ^([\w]+)/1/([\w]+)/([\w]+)$ ./?lang=$1&page=$2&model=$3
RewriteRule ^([\w]+)/2/([\w]+)/([\w]+)$ ./?lang=$1&otherpage=$2&othermodel=$3
Please note that 1 and 2 are completely arbitrary (they could be any other string).
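With those two rules in place, requests would map roughly like this (the 1 and 2 markers select the parameter set):

```apache
# /en/1/test/mymodel           -> /?lang=en&page=test&model=mymodel
# /en/2/othertest/myothermodel -> /?lang=en&otherpage=othertest&othermodel=myothermodel
```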
Naturally, the official docs are there to help.

Rewrite rules for making multiple paths work

I have a requirement to make the following paths work.
Depending on what the URL consists of, they are mapped to different Java classes.
/books/
/books/science/
/books/science/fiction/
/books/science/fiction/kids/
So, I have given the rewrite rules in my configuration file as:
^/books$
^/books/(.*)$
^/books/(.*)/(.*)$
^/books/(.*)/(.*)/(.*)$
but the moment I request a URL like this:
http://localhost/books/science/fiction/kids/12345
instead of being captured by the fourth rewrite rule, it is captured by the second one, which is not what I want.
Can someone please tell me how to achieve this? Thanks in advance
^/books$ /webapp/wcs/stores/servlet/ABCController?resultsFor=allCategories [PT,QSA]
^/books/(.*)$ /webapp/wcs/stores/servlet/XYZController?make=$1&resultsFor=category [PT,QSA]
^/books/(.*)/(.*)$ /webapp/wcs/stores/servlet/ABCDController?format=$1-$2&resultsFor=subCategory [PT,QSA]
^/books/(.*)/(.*)/(.*)$ /webapp/wcs/stores/servlet/ASDFController?resultsFor=product [PT,QSA]
instead of getting captured by the fourth rewrite rule, it is captured by the second one
That's because the dot matches any character, including slashes.
Replacing it with a character class that allows anything but a slash (and requiring at least one character from that class, so + instead of *) should fix that: ([^/]+)
Another way would be to reverse the order of your rules - you should always try to write them in order from most to least specific anyway.
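Applying that fix to the rules above might look like this (a sketch, assuming Apache mod_rewrite in server context; the controller targets are copied from the question). Note that with [^/]+ each pattern matches exactly that number of segments, so an extra trailing segment such as the 12345 ID would need its own capture:

```apache
RewriteRule ^/books$ /webapp/wcs/stores/servlet/ABCController?resultsFor=allCategories [PT,QSA]
RewriteRule ^/books/([^/]+)$ /webapp/wcs/stores/servlet/XYZController?make=$1&resultsFor=category [PT,QSA]
RewriteRule ^/books/([^/]+)/([^/]+)$ /webapp/wcs/stores/servlet/ABCDController?format=$1-$2&resultsFor=subCategory [PT,QSA]
RewriteRule ^/books/([^/]+)/([^/]+)/([^/]+)$ /webapp/wcs/stores/servlet/ASDFController?resultsFor=product [PT,QSA]
```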

IIS Rewrite Global Rule

Recently the site I was working on added language handling in this way: content tailored toward users of a certain language has URLs that start with baseUrl/[a-z]{2}-[a-z]{2}/... (I may need to be more explicit, since this pattern will catch a lot of things that will not work for my use case). The default language does not. The current default language has a lot of rewrite rules, and I want a way to globally set up the input used by the rewriter to be the URL path after '/', with an optional ([a-z]{2}-[a-z]{2}/)? prefix stripped. The only thing I have thought of that seems to work is to convert each rule to a regex and add ([a-z]{2}-[a-z]{2}/)? to the beginning of every rule. Basically, I want to apply the rewrite rules I have to subdomains of this site.
If you do not stop processing after each rewrite rule, you could add a rule to "stuff" the language into a server variable, which is essentially an HTTP header (look up allowed server variables). You can then process your rules as normal and rewrite the language back in if the header value is detected.
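A sketch of what that could look like in web.config with the IIS URL Rewrite module, assuming HTTP_X_LANGUAGE has been added to the module's allowed server variables list (the rule and variable names here are hypothetical):

```xml
<rewrite>
  <rules>
    <!-- Runs first; does not stop processing, so later rules see the stripped path -->
    <rule name="CaptureLanguage" stopProcessing="false">
      <match url="^([a-z]{2}-[a-z]{2})/(.*)$" />
      <serverVariables>
        <set name="HTTP_X_LANGUAGE" value="{R:1}" />
      </serverVariables>
      <action type="Rewrite" url="{R:2}" />
    </rule>
    <!-- ...existing rules unchanged... -->
  </rules>
</rewrite>
```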

Is it possible with canonical URL for this pattern in htaccess: /a/*/id/uniqueid?

A big problem is that I am not a programmer….! So I need to solve this with means within my own competence… I would be very happy for help!
I have an issue with a lot of duplicated URLs in the Google index and there are strong signs that it is causing SEO problems.
I don't have duplicate links on the site itself, but as it was once set up, for certain pages the system allows all sorts of variations in the URL. As long as it has a specific article id, the same content will be presented under an infinite number of URLs.
I guess the duplicates in Google's index have been growing over a long time and are due to links gone wrong from other sites that link to mine. The problem is that the system has accepted the variations.
Here are examples of variations that exists in the Google index:
site.com/a/Cow_Cat/id/5272
site.com/a/cow_cat/id/5272
site.com/a/cow…cat/id/5272
site.com/a/cowcat/id/5272
site.com/a/bird/id/5272
The first URL, with mixed case, is the one used site-wide, and for now I have to live with it; it would take too long to change everything to lower case. I cannot make a manual effort via htaccess as there is a total of 300,000 articles. I believe tens of thousands of them have one or more duplicates.
My question is this:
Is it possible to create rules for canonical URLs in htaccess in order to make the above URLs be handled as one, as well as the rest of the 300,000?
I.e., is there a way to say that all URLs matching
/a/*/id/uniqueid
should be seen as one, based only on the unique ID, giving no regard to the text expressed with the "*"?
My hope is that it would be possible to say that a certain pattern like above should only be differentiated by the last unique segment.
If it is not possible in htaccess, how would it be done with link rel="canonical" on each page? Can the code include wildcards?
I should add that the majority of the duplicates are caused by incoming links being lower case where the site itself uses a mix. Would it be OK to assign a canonical URL only with lower case, although the site itself basically always uses a mix of lower/upper case?
If this is possible, I would be very happy to be helped with how to do it!!!!
Jonas
Hi Michael! I am not an expert but this is how I think it could be done:
1) My problem is that the URLs have mixed cases and I cannot change that now.
2) If it is OK for the search engines, it would be fine for me to make the canonical URL identical to the actual URL, with the difference that it is all lower case; that would solve approx. 90% of the duplicates. I.e., this would be the used URL: site.com/a/Cow_Cat/id/5272 and this would be the canonical: site.com/a/cow_cat/id/5272. As I understand it, that would be good SEO... or...?
My idea was NOT to change the address in the browser address bar (i.e. using a 301 redirect) but rather just to tell the search engines which URLs are duplicates; as I understand it, that can be done by defining a canonical URL either in htaccess (as a pattern, I hope) or as a tag on each page.
3) IF it would be possible to find a wildcard solution... I am not sure if this is possible at all, but that would mean it was possible to NOT assign a specific canonical URL but rather a "group pattern", i.e. "Please, search engine, see all URLs with this pattern - having the unique identifier at the end - as if they are one and the same URL; you, SE, decide which one you prefer": /a/*/id/uniqueid
Would that work? It will only work in htaccess if canonical URLs can be defined as a group where the group is defined as a pattern with a defined part as the unique id.
Is it possible, when adding a tag to each page, to say that "all URLs containing this unique id should be treated the same"? If that would work, it would look something similar to this:
link rel="canonical" /a/*/id/5272
I don't know if this syntax with wildcards exists, but it would be nice : )
My advice would be to use 301 redirects, with URL rewriting. Ask your webmaster to place this in your apache config or virtual host config:
RewriteMap lc int:tolower
Then inside your .htaccess file you can use the map ${lc:$1} to convert matches to lower case. Here, the $1 part is a match (a backreference to brackets in a regex in the RewriteRule) and the ${lc: } part is just how you apply the lc (lowercase) function set up earlier. Here is an example of what you might want in your .htaccess file:
# This condition matches a URL containing any uppercase characters
RewriteCond %{REQUEST_URI} [A-Z]
# This rule rewrites it to all lowercase
RewriteRule (.*) /${lc:$1} [L,R=301]
As for matching the IDs, presuming your examples mean "always end with the ID" you could use a regex like:
^(.+/)(\d+)$
The first match (brackets) gets everything up to and including the forward slash before the ID, and the second part grabs the ID. We can then use it to point to a single, specific URL (like canonical, but with a 301).
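Putting the two together for the /a/*/id/uniqueid pattern might look like this (a sketch for .htaccess context, assuming the lc RewriteMap defined above):

```apache
# Redirect any /a/<text>/id/<id> URL containing uppercase to its
# all-lowercase form, preserving the numeric id
RewriteCond %{REQUEST_URI} [A-Z]
RewriteRule ^a/([^/]+)/id/(\d+)/?$ /a/${lc:$1}/id/$2 [L,R=301]
```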
If you do just want to use canonical tags, then you'll have to say what you're using code-wise, but an example I use in PHP (so as not to add tags to hundreds of individual pages) would be:
if (!empty($_SERVER["REDIRECT_URL"])) {
$canonicalUrl = $_SERVER["SERVER_NAME"] . $_SERVER["REDIRECT_URL"];
} else if (!empty($_SERVER["REQUEST_URI"])) {
$canonicalUrl = $_SERVER["SERVER_NAME"] . preg_replace('/^([^?]+)\?.*$/', "$1", $_SERVER['REQUEST_URI']);
}
// Emit the tag from the value computed above (scheme assumed)
echo '<link rel="canonical" href="https://' . $canonicalUrl . '" />';
Here, the redirect URL is used if it's available, and if not, the request URI is used. This code strips off the query string (the ?something=true bit in http://www.mysite.com/a/blah/12345/?something=true). Of course you can add to this code to specify a custom path, not just strip the query string, by playing with the regex.