Spring-Content 1.2.7: Identifier for the ContentStore - spring-content-community-project

What I intend to do: I'd like to use multiple ContentStores in the same system: one for freshly uploaded files (filesystem), one for (long term) archiving (AWS S3 or maybe GCS).
What I tried (and what actually does work):
Extended the File class with another attribute: private String contentStoreName;
Created two ContentStores as described here: Spring-Content: Moving files from content store to another content store
Extended gettingstarted.FileContentController.setContent(Long, MultipartFile) to set an identifier for the ContentStore being used: f.get().setContentStoreName("regular");
Got the content depending on the stored contentStoreName:
InputStream input;
if (Objects.equals(f.get().getContentStoreName(), "archive")) {
    input = archiveContentStore.getContent(f.get());
} else {
    input = regularContentStore.getContent(f.get());
}
Changed the contentStoreName when moving content from one ContentStore to another:
Resource resource = regularContentStore.getResource(fileEntity.get());
archiveContentStore.setContent(fileEntity.get(), resource);
fileEntity.get().setContentStoreName("archive");
filesRepo.save(fileEntity.get());
The smell about this: although this code works, I suspect it's not the intended way, because Spring Content usually does a lot with annotations and some magic in the background. But I can't find an annotation for an identifier/name for the ContentStore.
Question: Is there a more intended way of doing this in Spring-Content?

Beyond supporting multiple storage modules in a single application (via the FilesystemContentStore, et al. annotations), Spring Content does not currently provide any logic for supporting classes of storage. That would be a layer that you need to create on top of Spring Content, as you are starting to do.
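For reference, wiring two storage modules into one application looks roughly like this. This is a sketch under the assumption of the filesystem and S3 modules; the module-specific beans (the filesystem root, the S3 client) are omitted, and the interface names are illustrative:

import org.springframework.content.commons.repository.ContentStore;
import org.springframework.content.fs.config.EnableFilesystemStores;
import org.springframework.content.s3.config.EnableS3Stores;
import org.springframework.context.annotation.Configuration;

@Configuration
@EnableFilesystemStores   // spring-content-fs: the "regular" store
@EnableS3Stores           // spring-content-s3: the "archive" store
public class StoreConfig {
    // module-specific beans (FileSystemResourceLoader, S3 client) go here;
    // in practice each Enable* annotation usually points at its own package
    // via basePackages so each store interface binds to the right module
}

// One store interface per class of storage (in separate files):
public interface RegularContentStore extends ContentStore<File, String> {}
public interface ArchiveContentStore extends ContentStore<File, String> {}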
In terms of Spring Content annotations, it might be helpful for you to understand which modules manage which annotations.
Spring Content storage modules (FS, S3, etc.) all implement ContentStore and in doing so all provide management of the @ContentId and @ContentLength attributes (in addition to managing the actual content operations). It looks like you are using the ContentStore API (getContent/setContent), so your entity's content id and length attributes will be managed for you.
Spring Content REST then provides management of the @MimeType and @OriginalFileName attributes (in addition to providing comprehensive REST endpoints). My guess is that you are not using this and are instead providing your own custom REST controller that uses the contentStoreName attribute to decide which store to put/get the content from. This approach seems fine.
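To make that concrete, here is a minimal sketch of what the asker's entity might look like with the managed annotations in place. The class shape is an assumption; only contentStoreName comes from the question, and that field is managed by nothing but your own code:

import javax.persistence.Entity;
import javax.persistence.GeneratedValue;
import javax.persistence.Id;

import org.springframework.content.commons.annotations.ContentId;
import org.springframework.content.commons.annotations.ContentLength;
import org.springframework.content.commons.annotations.MimeType;
import org.springframework.content.commons.annotations.OriginalFileName;

@Entity
public class File {

    @Id @GeneratedValue
    private Long id;

    @ContentId              // managed by the storage module (FS, S3, ...)
    private String contentId;

    @ContentLength          // managed by the storage module
    private Long contentLength;

    @MimeType               // managed by Spring Content REST
    private String mimeType;

    @OriginalFileName       // managed by Spring Content REST
    private String originalFileName;

    // the custom discriminator from the question; managed only by your own code
    private String contentStoreName;

    // getters and setters omitted for brevity
}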
That is all to say that a slightly more elegant approach, perhaps, one that would allow you to keep using Spring Content REST, might be to implement your own custom archiving storage module and encapsulate the "switching" logic you have above in its setContent/getContent/unsetContent methods. Note, this is actually quite easy (just 4 or 5 classes, and I would point you at the GCP and Azure modules for inspiration). Note too that the REST API (for ContentStore) only calls those three methods, so those are the only ones you would need to implement. This would mean you get to use Spring Content REST and all the features it provides (the set of REST endpoints, byte-range support and so on) while encapsulating your "archiving" logic nicely.
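To sketch what that encapsulation could look like, here is only the core delegation idea, not a complete storage module; a real module also needs the factory and configuration classes the GCP and Azure modules demonstrate, and the class name here is made up:

import java.io.InputStream;

import org.springframework.content.commons.repository.ContentStore;

// Hypothetical store that hides the regular/archive switch behind the
// three methods Spring Content REST actually calls.
public class ArchivingContentStore {

    private final ContentStore<File, String> regularStore;
    private final ContentStore<File, String> archiveStore;

    public ArchivingContentStore(ContentStore<File, String> regularStore,
                                 ContentStore<File, String> archiveStore) {
        this.regularStore = regularStore;
        this.archiveStore = archiveStore;
    }

    public InputStream getContent(File entity) {
        return storeFor(entity).getContent(entity);
    }

    public void setContent(File entity, InputStream content) {
        storeFor(entity).setContent(entity, content);
    }

    public void unsetContent(File entity) {
        storeFor(entity).unsetContent(entity);
    }

    // Route by the entity's contentStoreName attribute, as in the question.
    private ContentStore<File, String> storeFor(File entity) {
        return "archive".equals(entity.getContentStoreName())
                ? archiveStore : regularStore;
    }
}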

Related

Liferay 7 - Multiple resourceCommands in single class?

I'm about to move from Liferay 6.2 to 7. I've been using Spring in Liferay 6.2, but apparently using Spring on 7 doesn't give the benefits of component-specific configuration via classes.
It seems to me that every single Liferay 7 Ajax endpoint needs to be configured as a single command class, leading to dozens of files per logical model/controller.
On LR 6.2 with Spring I had a single controller that wraps every resource endpoint in a single file. Is this possible on LR 7 with components? If LR 7 enforces a single class file per command, why is this enforced instead of supporting a single class with multiple methods (design-wise)?
I assume that you're talking about the serveResource phase of a portlet when you talk about Ajax endpoints.
If you go the ResourceCommand route: yes, you'll need a single resource command for every named resource handler. However, you do not have to go this way; you can still implement everything in a single portlet class.
The one difference:
You're in control of your own portlet, which means that you can easily change and update it should you need a different behavior. Thus it's not a problem to go with a single (potentially larger) portlet class.
On the other hand Liferay's built-in components sometimes need to be updated by others (e.g. you), so separating them out into many smaller services is a great thing for Liferay users who intend to modify tiny aspects: They only need to override the single ResourceCommand they have in mind for their change.
Thus, you'll see excessive use of the ResourceCommand pattern all over Liferay. But you can totally ignore this for your own code and continue with individual named resource handlers in a single class.
That being said, some pseudocode (only written here, never compiled and tested) for such a portlet:
@Component(...)
public class MyPortlet extends GenericPortlet {
    @Override
    public void serveResource(ResourceRequest req, ResourceResponse res)
            throws PortletException, IOException {
        String name = req.getParameter("name"); // or req.getResourceID()
        // handle request for the named resource
    }
}
(Edit of the code: apologies, I completely mixed up the Action and Resource phases and originally suggested "Action" logic rather than "Resource" logic.)
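For comparison, here is an equally untested sketch of the ResourceCommand route described above, one such class per named resource; the portlet name and command name are placeholders:

import javax.portlet.ResourceRequest;
import javax.portlet.ResourceResponse;

import org.osgi.service.component.annotations.Component;

import com.liferay.portal.kernel.portlet.bridges.mvc.BaseMVCResourceCommand;
import com.liferay.portal.kernel.portlet.bridges.mvc.MVCResourceCommand;

@Component(
    property = {
        "javax.portlet.name=my_portlet",      // placeholder portlet name
        "mvc.command.name=/my/named/resource" // one command name per handler
    },
    service = MVCResourceCommand.class
)
public class MyResourceCommand extends BaseMVCResourceCommand {

    @Override
    protected void doServeResource(ResourceRequest request, ResourceResponse response)
            throws Exception {
        // handle exactly one named resource here
        response.getWriter().write("...");
    }
}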
And yet another alternative is to ditch the portlet implementation and just implement a Web Service - either through Liferay's Service Builder, or through REST. There are plenty of samples available for these cases as well, but your question appeared as if you were going the portlet route.

Kohana 3.3 - how to correctly move controller function to model

Again, I've got a question about Kohana and how I am supposed to use model functions.
I want to move parts of a controller function into a more appropriate model to be able to access this function from additional controllers. (From what I have read so far I conclude that calling the controller function from a different controller is considered bad architecture).
My problem is that depending on several circumstances (i.e. model parameters) this controller function creates a log entry in a different database table and sends an email to some users.
How am I supposed to create this log entry and send the mails if the main functionality resides inside the model? Should I instantiate the second model from within the first, call the log function and afterwards send the mails exactly as I did from my controller?
Thanks in advance.
This is a question that doesn't have 1 correct answer, unfortunately. A lot of it comes down to how you prefer to implement the MVC pattern.
At its base MVC uses: Models, Views and Controllers
Models
These classes should represent entities in your database
Example:
Model_User maps to an entity in your Users table
$user = new Model_User;
$user->first_name = 'Joe';
$user->last_name = 'Smith';
$user->save();
Views
These files store presentation data/templates (usually mostly HTML)
Example:
index.tpl
<h1>HELLO, WORLD!</h1>
<h2><?=$some_variable_from_controller?></h2>
Controllers
These files handle incoming requests and process data to be injected into views
Example:
Controller_Home handles request to the home page
class Controller_Home extends Controller {
    public function action_index() {
        $view = View::factory('index');
        // send the rendered view as the response body
        $this->response->body($view->render());
    }
}
Now that you get the basics, it's time to understand a specific problem this limited structure promotes: controllers get kind of fat and messy. This leads us to libraries, or a service-oriented architecture.
These libraries allow us to move large groups of logic into a portable service layer that can easily be used across all controllers, models and other libraries. They also break our code up into smaller, more concise chunks that actually make sense.
Example:
In my controller, instead of writing a bunch of logic that logs a user in via Facebook, I can simply create a Social_Login_Service and use it as follows.
Social_Login_Service::facebook($user_email);
Now you would simply see that one clean line in your login controller instead of a whole messy slew of logic that would otherwise pile up in your controller until it melts your brain to look at.
This is a very basic overview of a possible direction (and one that I prefer).
It is very useful to break up your sites into smaller components, and if you're using Kohana, I recommend doing this with Modules (http://kohanaframework.org/3.1/guide/kohana/modules); they're great.
I hope this little snippet helped.

how can I access endpoint properties set in faces-config.xml programmatically?

I am using the IBM Social Business Toolkit. I have defined a connection for my Notes app via endpoints in the faces-config.xml file. I wonder how I can access this file programmatically, since I could not find a service that returns the base URL of IBM Connections.
It's useful to remember that an endpoint definition is really just creating a managed bean. The managed bean has a variable name you refer to it by: the managed-bean-name property. You can access this directly from SSJS or via ExtLibUtil.resolveVariable() in Java. The definition also tells you the Java class that's being used, e.g. com.ibm.sbt.services.endpoints.ConnectionsBasicEndpoint. That really gives you all the information you need to get or set the properties.
So from SSJS you can just cast it to the class name, e.g.
var myService:com.ibm.sbt.services.endpoints.ConnectionsBasicEndpoint = connections
So the bit after the colon will be the managed-bean-class value and the bit after the equals sign will be the managed-bean-name. In Java, you can use
ConnectionsBasicEndpoint myService = (ConnectionsBasicEndpoint) ExtLibUtil.resolveVariable(ExtLibUtil.getXspContext().getFacesContext(), "connections");
You'll then have access to all the methods of the class, so you should be able to retrieve what you need.
The properties are part of the Java class, which is referenced in the faces-config.xml. So get the class by its fully qualified name or by bean name, and set or get the properties.
I think the best route will most likely be what Paul is suggesting: resolve the variable by its name and use the getters to get the effective properties that way.
Sven's suggestion is a good one to keep in mind for other situations. By accessing the faces-config.xml file as a resource, you could load it into an XML parser and find the values using XPath. I'm using much the same technique in the next version of the OpenNTF Domino API, which will have a set of methods for manipulating the Faces config. However, one key aspect there is that reading the XML file directly will just get you the string values, which may be EL expressions, whereas going the resolveVariable route will get you the real current properties.
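As an illustration of that parsing approach, here is a small sketch using plain JAXP. The class, method, and arguments are hypothetical, and as noted above, the returned value is the raw string from the XML, possibly an EL expression rather than the resolved property:

import java.io.InputStream;

import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathFactory;

import org.w3c.dom.Document;

public class FacesConfigReader {

    // Reads one managed property of one managed bean out of faces-config.xml.
    public static String getManagedProperty(InputStream facesConfig,
            String beanName, String propertyName) throws Exception {

        Document doc = DocumentBuilderFactory.newInstance()
            .newDocumentBuilder().parse(facesConfig);

        XPath xpath = XPathFactory.newInstance().newXPath();
        // Find the named bean, then the value of the named property.
        String expr = "//managed-bean[managed-bean-name='" + beanName + "']"
            + "/managed-property[property-name='" + propertyName + "']/value";

        return xpath.evaluate(expr, doc);
    }
}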

IOneWay SOAP Parameters and Void Return

I'm having a couple of issues with getting a correct WSDL generated for a ServiceStack SOAP+REST service.
The main issue is that when I use Add Service Reference, the generated IOneWay interface is populated and the methods all return void. Looking at the provided SOAP examples it looks like ISyncReply should be populated (currently it has no operations at all) and the response type should be one of the fields in the DTO + Response object. What am I doing wrong?
The types and operations are distributed across multiple namespaces
but they are all referenced with a single ContractNamespace in AssemblyInfo
The DTO + Response naming convention has been followed
I thought it might be an issue with inherited Request/Response types, but even cutting them out completely doesn't change the situation (though apparently you can't inherit from a concrete type, even from within a named namespace, and have the operation parameters generated properly)
Using explicit [DataContract]/[DataMember] annotations does not make a difference
REST calls seem to be working as expected
I'm using the latest ServiceStack binaries
The WSDL does not have wsdl:output elements at all
This applies to both the SOAP 1.1 and SOAP 1.2 WSDLs
And just to complicate the situation still further, the metadata produced at http://*/metadata for each operation seems to be completely accurate!
Let me know if there is any specific further information I can provide.

Preventing StackOverflowException while serializing Entity Framework object graph into Json

I want to serialize an Entity Framework Self-Tracking Entities full object graph (parent + children in one to many relationships) into Json.
For serializing I use ServiceStack.JsonSerializer.
This is what my database looks like (for simplicity, I dropped all irrelevant fields):
I fetch a full profile graph in this way:
public Profile GetUserProfile(Guid userId)
{
    using (var db = new AcmeEntities())
    {
        return db.Profiles.Include("ProfileImages").Single(p => p.UserId == userId);
    }
}
The problem is that attempting to serialize it:
Profile profile = GetUserProfile(userId);
ServiceStack.JsonSerializer.SerializeToString(profile);
produces a StackOverflowException.
I believe that this is because EF provides an infinite model that screws the serializer up. That is, I can technically call: profile.ProfileImages[0].Profile.ProfileImages[0].Profile ... and so on.
How can I "flatten" my EF object graph or otherwise prevent ServiceStack.JsonSerializer from running into stack overflow situation?
Note: I don't want to project my object into an anonymous type (like these suggestions) because that would introduce a very long and hard-to-maintain fragment of code.
You have conflicting concerns: the EF model is optimized for storing your data model in an RDBMS, not for serialization, which is the role separate DTOs would play. Otherwise your clients will be bound to your database, where every change to your data model has the potential to break your existing service clients.
With that said, the right thing to do would be to maintain separate DTOs that you map to, which define the desired shape (aka wire format) that you want the models to have for the outside world.
ServiceStack.Common includes built-in mapping functions (i.e. TranslateTo/PopulateFrom) that simplifies mapping entities to DTOs and vice-versa. Here's an example showing this:
https://groups.google.com/d/msg/servicestack/BF-egdVm3M8/0DXLIeDoVJEJ
The alternative is to decorate the fields you want to serialize on your data model with [DataContract] / [DataMember] attributes. Any properties not attributed with [DataMember] won't be serialized, so you would use this to hide the cyclical references which are causing the StackOverflowException.
For the sake of my fellow StackOverflowers that get into this question, I'll explain what I eventually did:
In the case I described, you have to use the standard .NET serializer (rather than ServiceStack's): System.Web.Script.Serialization.JavaScriptSerializer. The reason is that you can decorate navigation properties you don't want the serializer to handle with a [ScriptIgnore] attribute.
By the way, you can still use ServiceStack.JsonSerializer for deserializing - it's faster than .NET's and you don't have the StackOverflowException issues I asked this question about.
The other problem is how to get the Self-Tracking Entities to decorate relevant navigation properties with [ScriptIgnore].
Explanation: without [ScriptIgnore], serializing (using the .NET JavaScript serializer) will also raise an exception, about circular references (similar to the issue that raises StackOverflowException in ServiceStack). We need to eliminate the circularity, and this is done using [ScriptIgnore].
So I edited the .TT file that came with the ADO.NET Self-Tracking Entity Generator Template and set it to contain [ScriptIgnore] in the relevant places (if someone wants the code diff, write me a comment). Some say it's bad practice to edit these "external", not-meant-to-be-edited files, but heck, it solves the problem, and it's the only way that doesn't force me to re-architect my whole application (use POCOs instead of STEs, use DTOs for everything, etc.).
@mythz: I don't entirely agree with your argument about using DTOs; see my comments on your answer. I really appreciate your enormous efforts building ServiceStack (all of the modules!) and making it free to use and open source. I just encourage you to either respect the [ScriptIgnore] attribute in your text serializers or come up with an attribute of your own. Otherwise, even if one actually can use DTOs, they can't add navigation properties from a child object back to a parent one, because they'll get a StackOverflowException.
I do mark your answer as "accepted" because after all, it helped me finding my way in this issue.
Be sure to detach the entity from the ObjectContext before serializing it.
I also used the Newtonsoft Json serializer (Json.NET):
JsonConvert.SerializeObject(EntityObject, Formatting.Indented, new JsonSerializerSettings { PreserveReferencesHandling = PreserveReferencesHandling.Objects });
