I am running an app on port 799. I would like to prevent people from accessing it directly via a URL like:
http://www.website.com:799/folder/1
However, I need that port to still listen and operate otherwise. For example, certain pages direct the user to
http://www.website.com:799/folder/1
after checking to see whether they are members. The link is never actually revealed as a URL to the user. Is there any way, via .htaccess, to block the content when the user has requested it directly by URL, but otherwise let it be served (i.e. when it is requested via PHP)?
I would like to block a specific IP address from accessing my website written in NodeJS, using embedded JS.
Here is my file tree (the important part is chat.ejs and server.js, ignore the other files)
How would I use ejs to block certain IPs from accessing the site? Or even just sending them to another "blocked" page if they visit from a blocked IP, not allowing them to access the main site. It's a small project and only a few people use the site at the moment so I'm not concerned about dynamic IPs or people being blocked with the same IP.
Thanks!
Colin
As mentioned, this is more of a 'server' question and can be solved in many ways by incorporating an ACL (access control list), depending on where you are hosting (AWS has security groups where you can add CIDR blocks for IP ranges, etc.) or on the web server itself (Apache, NGINX).
This can also be done in code if you wish, by adding middleware or a condition in the GET handler itself. The basic idea is to obtain the client IP from the request object. For example, let's say you have a predefined array of allowed IPs:
const ips = ['123.23.232.23', '98.76.54.32'] // example allowed addresses
const reqIP = req.connection.remoteAddress // gets the IP of the requester
if (ips.indexOf(reqIP) !== -1) {
  // continue to process the request and return results
} else {
  // IP not in the list, redirect them to a new page
}
This is not tested code, but the concept should work for blocking specific requests by IP. Note that it is only reliable if you know for sure the IPs involved are not dynamic.
That said, this is generally better done on the web server; otherwise you're putting unneeded load on the Node server handling requests that could have been blocked at the web-server layer, or (in the AWS case) before they even hit the web server.
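To make the middleware idea concrete, here is a framework-free sketch of an Express-style IP blocker (the blocked address and the '/blocked' redirect target are examples, and the stub request/response objects at the end exist only to exercise it without a running server):

```javascript
// Sketch of IP-blocking middleware in the Express style. The question
// asks for a block list, so this diverts listed IPs to a "blocked" page
// and lets everyone else continue.
const blockedIPs = ['123.23.232.23']; // example blocked address

function ipBlocker(req, res, next) {
  // With Express behind no proxy, req.ip mirrors
  // req.connection.remoteAddress.
  const ip = req.connection.remoteAddress;
  if (blockedIPs.indexOf(ip) !== -1) {
    res.redirect('/blocked'); // divert blocked visitors
    return;
  }
  next(); // everyone else continues to the real routes
}

// Exercise it with stub request/response objects:
let redirectedTo = null;
let passed = false;
ipBlocker(
  { connection: { remoteAddress: '123.23.232.23' } },
  { redirect: (url) => { redirectedTo = url; } },
  () => { passed = true; }
);
// redirectedTo is now '/blocked' and passed is still false
```

In a real app you would register it with `app.use(ipBlocker)` before your routes, so it runs for every request.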
Say we have an internal URL https://my.internal.url (in our case a Liferay Portal) and from a web application firewall an external URL https://my.external.url pointing to this internal URL.
The internet user is using the external URL.
PrimeFaces renders attributes like, for example,
onclick="...;window.open('https://my.internal.url'..."
This leads to CORS problems.
The HTTP header Access-Control-Allow-Origin is not an option, since the internal URL is internal.
We'll talk with the WAF people about URL replacement, but I'd like to know whether or not we can tell PrimeFaces to use the external URL (or maybe relative URLs, in case that would work).
The portal doesn't know about the external URL but of course we could implement this as a configuration option.
(Looking at the source code, there are more occurrences of the internal URL outside of the JSF/PrimeFaces portlet, so I am adding the liferay tag too.)
Update
The question is obsolete: the WAF has to handle this correctly (an old SSL environment did; the new WAF environment doesn't).
You say
The portal doesn't know about the external URL
however, any properly configured reverse proxy (or WAF) should forward the actual host name used to request the current page.
On Apache httpd's mod_proxy_http, this is done with the option ProxyPreserveHost On. When forwarding with AJP, the host is automatically forwarded. Other WAF/Proxy configurations - of course - differ. But the proper way to generate the URL is to let the generating server know what URLs it should generate.
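For Apache httpd's mod_proxy_http, a minimal sketch of such a forwarding block might look like this (host names taken from the question; the rest of the vhost configuration is omitted):

```apache
# Reverse-proxy the external site to the internal portal, but keep the
# Host header the client actually sent so Liferay generates matching URLs.
ProxyPreserveHost On
ProxyPass        / https://my.internal.url/
ProxyPassReverse / https://my.internal.url/
```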
If you need to worry about the proper host name, you'll need to do so by request: Liferay is well able to use Virtual Host names to distinguish between different sites - and if they're completely different, you might be signed in to one of them, but not to the other. This has a repercussion on the permissions.
Have the infrastructure handle it for you. Don't write code (or application configuration) for it.
I have a website protected by an OpenAM server, and accessing pages after authentication works fine. But when I try to redirect to a page and pass information with the GET method, I get a "forbidden access" message.
Is there a way to pass my information from the source page to my destination page with the GET method (or any other method), or is there some configuration to apply to the web agent to stop the OpenSSO server from denying my access?
I am currently using IIS 7.0 and the latest web agent, version 3.0.4.
Thanks a million for any incoming answers
I think you are asking why you get denied when passing parameters in a GET request. You need a policy that covers mysite.com/* as well as one covering URLs with query strings (typically written mysite.com/*?*).
The policy engine allows restricting incoming URLs when they carry arguments.
Is there any way I can download all first visits to a webpage to my local box, and have all subsequent visits retrieve the data from the local box rather than the internet? That is, something like a service running on a port, so that if I access that port instead of the HTTP port, I get the data from the local box.
I need this for parsing webpages whose contents might change between requests, so that I always get the same content to work with.
You can use a caching proxy such as Squid.
The Squid service stores the webpages locally, and subsequent requests return the stored file.
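The read-through behaviour Squid provides can be sketched in a few lines of Node (the fetcher here is a stand-in for a real HTTP request, purely to illustrate the first-visit/subsequent-visit distinction):

```javascript
// Minimal sketch of a read-through cache: the first lookup for a URL
// calls fetchRemote() and stores the result; later lookups return the
// stored copy, so repeated parses always see the same content.
const cache = new Map();

function cachedFetch(url, fetchRemote) {
  if (cache.has(url)) {
    return cache.get(url); // served from the local store
  }
  const body = fetchRemote(url); // first visit: go to the network
  cache.set(url, body);
  return body;
}

// Stand-in fetcher whose result changes on every call, like a page
// whose contents change each time it is requested:
let counter = 0;
const fakeFetch = () => `page version ${++counter}`;

const first = cachedFetch('http://example.com/', fakeFetch);
const second = cachedFetch('http://example.com/', fakeFetch);
// first and second are identical, even though fakeFetch would now
// return new content
```

A real caching proxy adds HTTP plumbing, cache expiry, and disk storage on top of this idea, which is why Squid is the better choice in practice.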
Sounds like you're talking about a proxy server
I need to use this service for parsing webpages whose contents might change
Have a look for a spidering engine, e.g. pavuk.
Background context: ASP.NET / IIS (not sure if it matters)
I have a web app at example.com, and a user of my app gets his own content page at an address like example.com/abc-trinkets. I would like to offer the user the ability to point his own registered domain at my web app so his content is accessed at abctrinkets.com. Initially looking on the order of 10-100 users with custom domains.
Ideally, I would like my user to just have a single hostname or IP address that he needs to know to configure properly with his registrar, and if I change the setup of my servers (different host, change addresses, load balancing, etc.) the user will not have to change his settings.
I have no trouble handling the requests once they hit my web app, but I am looking for input on the best way to set the routing up so requests actually come to my app/server. I would like a "catch-all" type of behavior that does not require me to individually configure anything for each domain a user might point to me.
I assume I will need some kind of layer between the address I give my user and my actual server ... is this like a managed DNS service or some other type of nameserver thing I would set up with my host? Is this something simple that should already be handled by a few simple settings on my webserver? I worry that I am making this more complicated than it needs to be.
Write a script that examines the Host header in the request. In your example, if it's abctrinkets.com, then you'd either redirect or forward the request to /abc-trinkets. You'd still need a database or something for mapping the domain names to the URLs; if you're going to allow arbitrary domain names for each user account, then there's no possible way to avoid that.
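The mapping step can be sketched like this (plain JavaScript rather than ASP.NET, with a hard-coded table standing in for the database lookup; the domain names are the question's examples):

```javascript
// Map an incoming Host header to the internal content path for that
// user's site. In production this table would live in a database.
const domainMap = {
  'abctrinkets.com': '/abc-trinkets',
  'www.abctrinkets.com': '/abc-trinkets',
};

// Resolve a Host header to the internal path, or null for unknown
// domains (which you might send to a landing or error page).
function resolveHost(hostHeader) {
  const host = hostHeader.split(':')[0].toLowerCase(); // strip any port
  return domainMap[host] || null;
}
```

Whether you then rewrite the request internally or issue a redirect is a separate choice: an internal rewrite keeps the custom domain in the visitor's address bar, while a redirect exposes your canonical URL.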