Struts Action without session (Liferay)

I would like to create a simple Struts action within Liferay, which would be a publicly accessible path to get some data. This is all perfectly fine, and I am able to create as many of those as I need. However, the data comes with several extra HTTP headers that are not needed and, more importantly, with a cookie and a session.
What I would like to do is simply let the client obtain its data and go; no sessions are required. Is there a way to achieve this? I know we can disable sessions for the entire system, but I would like to do it just for this one path.
The code is pretty standard:
@Component(
    immediate = true,
    property = {
        "path=" + AuthPublicPath.ASSET_BRIDGE_URL,
        "service.ranking:Integer=" + Integer.MAX_VALUE
    },
    service = StrutsAction.class
)


How do I manage groups/rooms with node WebSockets?

TL;DR below.
I am currently developing a React/Redux SPA that is driven by real-time data. I've decided to use ws instead of socket.io, since socket.io feels a bit too high-level for what I'm doing; I'd rather manage the sockets myself.
That said, I'm struggling to find a way to manage the separation of updates/messages per view/route. Since I'm using client-side routing, doing it per Express route won't really work...
Messages between the server and client via WebSockets are JSON, with actions like GET_ITEMS, a response of GET_ITEMS_SUCCESS containing an array of items, and ..._ERROR for errors. This is all fine, since it's just a 1-to-1 transaction. The problem arises when broadcasting (1-to-all) to all relevant clients when the server receives an update.
So, I assume it's best practice to limit these broadcasts to the clients that are viewing/want the data. When viewing, for example, the Item page, there is no point in broadcasting updates to the User data, since that is only used on the User page.
I haven't been able to find any common practices for dealing with this sort of situation, just a few small, outdated/barely used wrappers for ws that add a few basic join/leave functions but don't offer much flexibility with implementation.
What I think MIGHT work is to have an object/array for each 'group'/'room', which stores the clients that are currently listening for updates from a given section. So a user would send an INIT_LISTEN action with a category param, e.g. ITEM, for updates and other actions related to items.
TL;DR
What my question really boils down to is: how do I store a reference to a single socket (the ws client object? a ws client ID?), and can I then store it in an object/array to iterate through, like below?
const ClientRooms = {
  Items: {
    ...ws
    /* ...rest of the client */
  }
}
or
const ClientRooms = {
  Items: ["xyz"] /* array of ws client ids */
}
I have a "ping--pong" heartbeat function to keep clients active and prevent silent connection failures/disconnections. I can't find if ws.terminate() still fires the ws close event so I can iterate 'group'/'room' the object/array to find and remove instances of that client.

sails.js Use session param in model

This is an extension of this question.
In my models, every one requires a companyId to be set on creation, and every query needs to be filtered by the same session-held companyId.
With sails.js, I have read and understand that the session is not available in the model unless I inject it from the controller; however, that would require me to write something very, very repetitive into all my controllers/actions. Unfortunate.
I like sails.js and want to make the switch, but can anyone describe to me a better way? I'm hoping I have just missed something.
So, if I understand you correctly, you want to avoid lots of code like this in your controllers:
SomeModel.create({companyId: req.session.companyId, ...})
SomeModel.find({companyId: req.session.companyId, ...})
Fair enough. Maybe you're concerned that companyId will be renamed in the future, or need to be further processed. The simplest solution if you're using custom controller actions would be to make class methods for your models that accept the request as an argument:
SomeModel.doCreate(req, ...);
SomeModel.doFind(req, ...);
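A minimal sketch of what that could look like (doFind/doCreate are made-up names; assumes Sails v0.10, where any top-level function in a model definition becomes a class method):
// api/models/SomeModel.js
module.exports = {

  attributes: {
    companyId: 'string'
    // ...the rest of your attributes
  },

  // Made-up helper: always scope queries to the session's companyId
  doFind: function (req, criteria, cb) {
    criteria = criteria || {};
    criteria.companyId = req.session.companyId;
    return this.find(criteria).exec(cb);
  },

  // Made-up helper: always stamp new records with the session's companyId
  doCreate: function (req, values, cb) {
    values.companyId = req.session.companyId;
    return this.create(values).exec(cb);
  }
};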
On the other hand, if you're on v0.10.x and you can use blueprints for some CRUD actions, you will benefit from the ability to override the blueprints with your own code, so that all of your creates and finds automatically use the companyId from the session.
If you're coming from a non-Node background, this might all induce some head-scratching. "Why can't you just make the session available everywhere?" you might ask. "LIKE THEY DO IN PHP!"
The reason is that PHP is stateless--every request that comes in gets essentially a fresh copy of the app, with nothing in memory being shared between requests. This means that any global variables will be valid for the life of a single request only. That wonderful $_SESSION hash is yours and yours alone, and once the request is processed, it disappears.
Contrast this with Node apps, which essentially run in a single process. Any global variables you set would be shared between every request that comes in, and since requests are handled asynchronously, there's no guarantee that one request will finish before another starts. So a scenario like this could easily occur:
Request A comes in.
Sails acquires the session for Request A and stores it in the global $_SESSION object.
Request A calls SomeModel.find(), which calls out to a database asynchronously
While the database does its magic, Request A surrenders its control of the Node thread
Request B comes in.
Sails acquires the session for Request B and stores it in the global $_SESSION object.
Request B surrenders its control of the thread to do some other asynchronous call.
Request A comes back with the result of its database call, and reads something from the $_SESSION object.
You can see the issue here--Request A now has the wrong session data. This is the reason why the session object lives inside the request object, and why it needs to be passed around to any code that wants to use it. Trying too hard to circumvent this will inevitably lead to trouble.
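A toy illustration of that interleaving (plain Node, not Sails; $_SESSION here is just a made-up global):
let $_SESSION = null;

function handle(requestId, sessionData) {
  $_SESSION = sessionData;                 // "acquire the session" into a global
  setTimeout(() => {                       // simulate the async database call
    // by the time this runs, another request may have overwritten the global
    console.log(requestId, 'sees companyId', $_SESSION.companyId);
  }, 50);
}

handle('A', { companyId: 1 });
handle('B', { companyId: 2 });             // logs "A sees companyId 2": the wrong session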
The best option I can think of is to take advantage of JS and make some globally accessible functions.
But it's gonna have a code smell :(
I prefer to make a policy that adds the companyId to the request body, like this:
// Needs to be logged in
module.exports = function (req, res, next) {
  sails.log.verbose('[Policy.insertCompanyId() called] ' + __filename);

  if (req.session && req.session.companyId) {
    req.body.companyId = req.session.companyId;
    // or something like AuthService.getCompanyId(req.session);
    return next();
  }

  sails.log.warn('Missing companyId');
  return res.redirect(307, '/');
};
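The policy then has to be enabled in config/policies.js (assuming the file above is saved as api/policies/insertCompanyId.js), for example:
module.exports.policies = {
  '*': ['insertCompanyId']
};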

Masking part of URL in jetty request log

I would like to log all HTTP requests in Jetty, which is well documented, but I can't find any resources on how to mask some of the arguments.
E.g.:
json/users/detail?id=dsgrw543
should be logged as:
json/users/detail?id=********
or similar.
The main motivation is that I could hand those logs over for analytics without worrying that the privacy of our users could be compromised. Ideally this would happen online, without batch processing or other scripts.
Please note that I use other authentication mechanisms (cookies, all write methods are POST, etc.) and I can't change the existing URLs.
So far my only idea is to implement it as a class on top of NCSARequestLog:
http://download.eclipse.org/jetty/stable-7/apidocs/org/eclipse/jetty/server/NCSARequestLog.html
What are the better ways of doing that?
You could write a simple util like this and use it wherever you are logging:
// Build a copy of the query string with parameter names kept and values masked
Enumeration<String> names = request.getParameterNames();
StringBuilder q = new StringBuilder();
while (names.hasMoreElements()) {
    if (q.length() > 0) {
        q.append('&');
    }
    q.append(names.nextElement()).append("=***");
}
String url = request.getServletPath() + (q.length() > 0 ? "?" + q : "");
or maybe request.getRequestURI() in place of getServletPath().
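If you go the NCSARequestLog-subclass route from the question instead, the masking itself can be a small string utility (maskQuery is a made-up name) applied to the logged URI; it keeps the parameter names and hides the values:
// Keeps parameter names, masks their values:
// "json/users/detail?id=dsgrw543" -> "json/users/detail?id=********"
public static String maskQuery(String uri) {
    int idx = uri.indexOf('?');
    if (idx < 0) {
        return uri; // no query string, nothing to mask
    }
    String path = uri.substring(0, idx);
    String query = uri.substring(idx + 1);
    return path + "?" + query.replaceAll("=[^&]*", "=********");
}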

Request.Filter in an IIS Managed Module

My goal is to create an IIS Managed Module that looks at the Request and filters out content from the POST (XSS attacks, SQL injection, etc).
I'm hung up right now, however, on the process of actually filtering the Request. Here's what I've got so far:
In the Module's Init, I set HttpApplication.BeginRequest to a local event handler. In that event handler, I have the following lines set up:
if (application.Context.Request.HttpMethod == "POST")
{
    application.Context.Request.Filter = new HttpRequestFilter(application.Context.Request.Filter);
}
I also set up an HttpResponseFilter on the application.Context.Response.Filter
HttpRequestFilter and HttpResponseFilter are implementations of Stream.
In the response filter, I have the following set up (an override of Stream.Write):
public override void Write(byte[] buffer, int offset, int count)
{
    var Content = UTF8Encoding.UTF8.GetString(buffer);
    Content = ResponseFilter.Filter(Content);
    _responseStream.Write(UTF8Encoding.UTF8.GetBytes(Content), offset, UTF8Encoding.UTF8.GetByteCount(Content));
}
ResponseFilter.Filter is a simple String.Replace, and it does, in fact, replace text correctly.
In the request filter, however, there are 2 issues.
The code I have currently in the RequestFilter (an override of Stream.Read):
public override int Read(byte[] buffer, int offset, int count)
{
    var Content = UTF8Encoding.UTF8.GetString(buffer);
    Content = RequestFilter.Filter(Content);
    if (buffer[0] != 0)
    {
        return _requestStream.Read(UTF8Encoding.UTF8.GetBytes(Content), offset, UTF8Encoding.UTF8.GetByteCount(Content));
    }
    return _requestStream.Read(buffer, offset, count);
}
There are 2 issues with this. First, the filter is called twice, not once, and one of the requests is basically just a stream of '\0's (null bytes). (The if check on buffer[0] filters this out currently, but I think I'm setting something up wrong.)
Second, even though I am correctly grabbing content with .GetString in Read, and then altering it in RequestFilter.Filter (a glorified string.Replace()), when I return the byte-encoded Content inside the if statement, the input is unmodified.
Here's what I'm trying to figure out:
1) Is there something I can check prior to the filter to ensure that what I'm checking is only the POST and not the other time it is being called? Am I not setting the Application.Context.Request.Filter up correctly?
2) I'm really confused as to why rewriting things to the _requestStream (the HttpApplication.Context.Request.Filter that I sent to the class) isn't showing up. Any input as to something I'm doing wrong would be really appreciated.
Also, is there any difference between HttpApplication.Request and HttpApplication.Context.Request?
Edit: for more information, I'm testing this on a simple .aspx page that has a text box, a button and a label, and on button click assigns the text box text to the label's text. Ideally, if I put content in the textbox that should be filtered, my understanding is that by intercepting and rewriting the POST, I can make the modified values hit the server. I've run tests with breakpoints in the module and in the page code, though, and the module completes before the code-behind on the .aspx page is hit. The .aspx page gets the values as passed from the form and ignores any filtering I attempted to do.
There are a few issues going on here, but for future reference, what explains the page receiving the unfiltered POST, as well as the filter being evaluated twice, is that you are likely accessing the request object in some way PRIOR to setting Request.Filter. This can cause it to evaluate the input stream, running the currently set filter chain as-is and returning that stream.
For example, simply accessing Request.Form["something"] causes the input stream to be evaluated, running the entire filter chain, at that point in time. Any modification to Request.Filter after this point has no effect, and it appears that your filter is being ignored.
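In other words, the ordering matters. A minimal sketch of the module wiring (reusing the HttpRequestFilter from the question; RequestFilterModule is a made-up name) that installs the filter before anything touches the posted form:
using System.Web;

public class RequestFilterModule : IHttpModule
{
    public void Init(HttpApplication application)
    {
        application.BeginRequest += (sender, e) =>
        {
            var context = ((HttpApplication)sender).Context;
            if (context.Request.HttpMethod == "POST")
            {
                // Install the filter before anything reads Request.Form or
                // Request.InputStream; otherwise the unfiltered body has already
                // been consumed through the old filter chain and cached.
                context.Request.Filter = new HttpRequestFilter(context.Request.Filter);
            }
        };
    }

    public void Dispose() { }
}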
What you wanted to do is possible, but ASP.NET also provides Request Validation to address some of these issues (XSS). SQL injection, however, is usually averted by never constructing queries through string concatenation, not via input sanitizing, though defense-in-depth is usually a good idea.
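For reference, request validation is on by default; in .NET 4 it is controlled from web.config, roughly like this (a sketch, not a complete configuration):
<!-- requestValidationMode="4.0" (the .NET 4 default) applies validation
     to all requests, not only to .aspx pages -->
<system.web>
  <httpRuntime requestValidationMode="4.0" />
  <pages validateRequest="true" />
</system.web>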

Fire Off an asynchronous thread and save data in cache

I have an ASP.NET MVC 3 (.NET 4) web application.
This app fetches data from an Oracle database and mixes some information with another SQL database.
Many tables are joined together and a lot of database reading is involved.
I have already optimized the fetching side as best I could, and I don't have problems with that.
I've used caching to save information I don't need to fetch over and over.
Now I would like to build a responsive interface, and my goal is to present the users with the filtered order headers and load the order lines in the background.
I want to do that because I need to manage all the order lines as a whole, due to some calculations.
What I have done so far is using jQuery to make an Ajax call to my action where I fetch the order headers and save them in a cache (System.Web.Caching.Cache).
When the Ajax call has succeeded I fire off another Ajax call to fetch the lines (and, once again, save the result in a cache).
It works quite well.
Now I was trying to figure out if I can move some of this logic from the client to the server.
When my action is called, I want to fetch the order headers, start a new thread responsible for fetching the order lines, and return the result to the client.
In a test app I tried both ThreadPool.QueueUserWorkItem and Task.Factory but I want the generated thread to access my cache.
I've put together a test app and done something like this:
TEST 1
[HttpPost]
public JsonResult RunTasks01()
{
    var myCache = System.Web.HttpContext.Current.Cache;
    myCache.Remove("KEY1");
    ThreadPool.QueueUserWorkItem(o => MyFunc(1, 5000000, myCache));
    return Json(true, JsonRequestBehavior.DenyGet);
}
TEST 2
[HttpPost]
public JsonResult RunTasks02()
{
    var myCache = System.Web.HttpContext.Current.Cache;
    myCache.Remove("KEY1");
    Task.Factory.StartNew(() =>
    {
        MyFunc(1, 5000000, myCache);
    });
    return Json(true, JsonRequestBehavior.DenyGet);
}
MyFunc creates a list of items and saves the result in the cache; pretty silly, but it's just a test.
I would like to know whether someone has a better solution, or knows of any implications of accessing the cache from a separate thread.
Is there anything I need to be aware of, should avoid, or could improve?
Thanks for your help.
One possible issue I can see with your approach is that System.Web.HttpContext.Current might not be available in a separate thread, as that thread could run later, once the request has finished. I would recommend using the classes in the System.Runtime.Caching namespace that was introduced in .NET 4.0 instead of the old HttpContext.Cache.
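A minimal sketch of that suggestion (OrdersController, the cache key and the timeout are made up; MyFunc stands in for the poster's worker method):
using System;
using System.Collections.Generic;
using System.Runtime.Caching;
using System.Threading.Tasks;
using System.Web.Mvc;

public class OrdersController : Controller
{
    [HttpPost]
    public JsonResult RunTasks03()
    {
        // MemoryCache.Default is process-wide and does not depend on HttpContext
        var cache = MemoryCache.Default;
        cache.Remove("KEY1");

        Task.Factory.StartNew(() =>
        {
            var items = MyFunc(1, 5000000);   // long-running work off the request thread
            cache.Set("KEY1", items, DateTimeOffset.Now.AddMinutes(10));
        });

        return Json(true, JsonRequestBehavior.DenyGet);
    }

    // Stand-in for the poster's MyFunc
    private static List<int> MyFunc(int from, int count)
    {
        var list = new List<int>();
        for (var i = from; i < from + count; i++)
        {
            list.Add(i);
        }
        return list;
    }
}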
