Which of the following utils is suitable for creating an organization using the Liferay API?
i) OrganizationUtil
ii) OrganizationServiceUtil
iii) OrganizationLocalServiceUtil
Basically, I want to know the difference between these three.
i) OrganizationUtil: com.liferay.portal.service.persistence.OrganizationUtil
The classes from the persistence layer talk directly to the database, so they should only be used from the service layer, or when you care about transactions.
Following are the words from the documentation:
The persistence utility for the organization service. This utility wraps OrganizationPersistenceImpl and provides direct access to the database for CRUD operations. This utility should only be used by the service layer, as it must operate within a transaction. Never access this utility in a JSP, controller, model, or other front-end class.
ii) OrganizationServiceUtil: com.liferay.portal.service.OrganizationServiceUtil
It can be called from any layer. This class also performs permission checks (based on the permissions configured in Liferay), which may be useful in some cases. It can also be consumed as a web service.
Well, let's see what Liferay's documentation has to say:
The utility for the organization remote service. This utility wraps com.liferay.portal.service.impl.OrganizationServiceImpl and is the primary access point for service operations in application layer code running on a remote server.
This is a remote service. Methods of this service are expected to have security checks based on the propagated JAAS credentials because this service can be accessed remotely.
iii) OrganizationLocalServiceUtil: com.liferay.portal.service.OrganizationLocalServiceUtil
This can also be used if you don't want any permission checks; OrganizationServiceUtil ultimately delegates to the local service layer.
Liferay's Documentation:
The utility for the organization local service. This utility wraps com.liferay.portal.service.impl.OrganizationLocalServiceImpl and is the primary access point for service operations in application layer code running on the local server.
This is a local service. Methods of this service will not have security checks based on the propagated JAAS credentials because this service can only be accessed from within the same VM.
Conclusion
Use OrganizationUtil if you care about transactions, i.e. you have to update multiple tables within a single transaction.
Use OrganizationServiceUtil if you are creating an Organization from outside Liferay or if you need permission checks, and you don't care about the transaction (i.e. sharing a transaction with your custom code).
Use OrganizationLocalServiceUtil if you are not using a web service and you don't care about transactions or permission checks.
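For illustration, creating an organization through the local service might look like the sketch below. Note that the exact addOrganization signature varies between Liferay versions; this assumes the Liferay 6.2 convenience overload, and older releases take additional parameters (type, regionId, countryId, etc.):

import com.liferay.portal.model.Organization;
import com.liferay.portal.model.OrganizationConstants;
import com.liferay.portal.service.OrganizationLocalServiceUtil;

public class CreateOrganizationSample {
    public static Organization create(long creatorUserId) throws Exception {
        // Minimal sketch, assuming the Liferay 6.2 convenience overload:
        // addOrganization(long userId, long parentOrganizationId, String name, boolean site).
        // Check the OrganizationLocalServiceUtil Javadoc for your release.
        return OrganizationLocalServiceUtil.addOrganization(
                creatorUserId,
                OrganizationConstants.DEFAULT_PARENT_ORGANIZATION_ID, // top-level organization
                "My Organization",
                false); // site = false: don't create an associated site
    }
}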
Hope this gives you a fair idea. Let me know if it is still unclear.
I keep reading about how Azure Managed Identities are the way to go to secure access to Azure resources, and I totally get the convenience and level of security they offer. But I often worry that they also leave open the possibility that any vulnerable application running within my resource can leverage that identity to do things I may not want it to do, not just the application I intended to give access to that resource.
This method of securing things, while convenient, has always felt awkward. It's like needing to give a friend access to my apartment to watch my dog while I'm on vacation; instead of handing my friend my keys, I just slip the keys through the mail slot, and they have four other roommates. (Let's pretend these keys are soul-bound to everyone who lives there and cannot be stolen.)
Is it possible to combine Managed Identities with traditional credentials to consume resources?
A specific example: I have a Java Spring-based application that consumes Azure Database for MySQL, deployed into a Kubernetes environment. I am using a sidecar container with NGINX to provide external HTTP access to that application. Using a pod-managed identity here implies that both the Java application and NGINX will have access to the database, when I only want my application to have access. Certainly there are other architectural approaches I could take in this example, but I am mainly trying to outline my concerns with managed identities alone.
In a hobby side project I am creating an online game where the user can play a card game by implementing strategies. The user can submit their code and it will play against other users' strategies. Once the user has submitted their code, it needs to run on the server side.
I decided that I want to isolate code execution of user submitted code into an AWS lambda function. I want to prevent the code from stealing my AWS credentials, mining cryptocurrency and doing other harmful activity.
My plan is to do the following:
Limit code execution time
Prevent any communication to the internet & internal services (except through the return value).
Have a review process in place that prevents execution of user-submitted code before it is considered harmless.
Now I need your advice on how to achieve the best isolation:
How do I configure my function, so that it has no internet access?
How do I configure my function, so that it has no access to my internal services?
Do you see any other possible attack vector?
How do I configure my function, so that it has no internet access?
Launch the function into an isolated private subnet within a VPC.
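If you manage the function programmatically, attaching it to an isolated subnet might look like the following sketch using the AWS SDK for Java v2; the function name, subnet ID and security group ID are hypothetical placeholders, and the subnet is assumed to have no NAT gateway or other internet route:

import software.amazon.awssdk.services.lambda.LambdaClient;
import software.amazon.awssdk.services.lambda.model.UpdateFunctionConfigurationRequest;
import software.amazon.awssdk.services.lambda.model.VpcConfig;

public class AttachLambdaToVpc {
    public static void main(String[] args) {
        try (LambdaClient lambda = LambdaClient.create()) {
            // Attach the function to an isolated private subnet. With no NAT gateway
            // or VPC endpoints routed from this subnet, the function has no internet access.
            lambda.updateFunctionConfiguration(UpdateFunctionConfigurationRequest.builder()
                    .functionName("user-code-sandbox")       // hypothetical function name
                    .vpcConfig(VpcConfig.builder()
                            .subnetIds("subnet-0example")    // hypothetical isolated subnet
                            .securityGroupIds("sg-0example") // hypothetical locked-down security group
                            .build())
                    .build());
        }
    }
}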
How do I configure my function, so that it has no access to my internal services?
By launching the function inside the isolated private subnet, you can control which services it has access to via security groups, the route table attached to that subnet, and AWS network ACLs.
Do you see any other possible attack vector?
There could be multiple attack vectors here:
I will try to answer from the perspective of securing the AWS services involved. The most important step is to set up AWS billing alerts, so that if there is some trouble you will at least get notified and can take the necessary action (a sketch of such an alarm follows the list below). I am assuming you already have MFA set up for your logins.
Make sure you configure your Lambda with a least-privilege IAM role.
Create a completely separate subnet dedicated to launching the Lambda function.
Create a security group for the Lambda and use it to control the Lambda's access to the other services in your solution.
Have a separate route table for the subnet, allowing only the selected services, or be very specific with the corresponding IP addresses.
Use network ACLs to restrict outgoing traffic from the subnet, as an added layer of defense.
Enable VPC Flow Logs, have the necessary Athena queries and analysis in place, and add alerts using Amazon CloudWatch.
The list can be very long when you want to fully secure this deployment in AWS. I have added just a few items.
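As an illustration of the billing-alert suggestion above, a hedged sketch with the AWS SDK for Java v2 might look like this; the alarm name, threshold and SNS topic ARN are hypothetical, and note that billing metrics are only published in us-east-1:

import software.amazon.awssdk.regions.Region;
import software.amazon.awssdk.services.cloudwatch.CloudWatchClient;
import software.amazon.awssdk.services.cloudwatch.model.ComparisonOperator;
import software.amazon.awssdk.services.cloudwatch.model.Dimension;
import software.amazon.awssdk.services.cloudwatch.model.PutMetricAlarmRequest;
import software.amazon.awssdk.services.cloudwatch.model.Statistic;

public class BillingAlarm {
    public static void main(String[] args) {
        // Billing metrics live only in us-east-1, regardless of where you deploy.
        try (CloudWatchClient cw = CloudWatchClient.builder().region(Region.US_EAST_1).build()) {
            cw.putMetricAlarm(PutMetricAlarmRequest.builder()
                    .alarmName("billing-over-20-usd")   // hypothetical name and threshold
                    .namespace("AWS/Billing")
                    .metricName("EstimatedCharges")
                    .dimensions(Dimension.builder().name("Currency").value("USD").build())
                    .statistic(Statistic.MAXIMUM)
                    .period(21600)                      // evaluate over 6-hour windows
                    .evaluationPeriods(1)
                    .threshold(20.0)
                    .comparisonOperator(ComparisonOperator.GREATER_THAN_THRESHOLD)
                    // Hypothetical SNS topic that notifies you when the alarm fires.
                    .alarmActions("arn:aws:sns:us-east-1:123456789012:billing-alerts")
                    .build());
        }
    }
}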
I'd start by saying this is very risky, and allowing people to run their own code in your infrastructure can be very dangerous. However, that said, here are a few things:
Limiting Code Execution Time
This is already built into Lambda. Functions have a configurable execution timeout, which you can set easily through IaC, the AWS Console, or the CLI.
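Programmatically, it might look like this sketch with the AWS SDK for Java v2 (the function name and timeout value are hypothetical placeholders):

import software.amazon.awssdk.services.lambda.LambdaClient;
import software.amazon.awssdk.services.lambda.model.UpdateFunctionConfigurationRequest;

public class SetSandboxTimeout {
    public static void main(String[] args) {
        try (LambdaClient lambda = LambdaClient.create()) {
            // Cap how long user-submitted code may run; leave headroom for cold starts.
            lambda.updateFunctionConfiguration(UpdateFunctionConfigurationRequest.builder()
                    .functionName("user-code-sandbox") // hypothetical function name
                    .timeout(10)                       // seconds
                    .build());
        }
    }
}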
Restricting Internet Access
By default, Lambda functions can be thought of as existing outside the constraints of a VPC, and they therefore have internet access. You could put your Lambda function inside a private subnet in a VPC and then configure the networking to disallow outbound connections except to the locations you want.
Restricting Access to Other Services
Assuming that you are referring to AWS services here, Lambda functions are bound by IAM roles with respect to the other AWS services they can access. As long as you don't grant the Lambda function access to something in its IAM role, it won't be able to access those services, unless a potentially malicious user provides credentials via some other means, such as putting them in plain text in code, where they could be picked up by an AWS SDK implementation.
If you are referring to other internal services such as EC2 instances or ECS services then you can restrict access using the correct network configuration and putting your function in a VPC.
Do you see any other possible attack vector?
It's hard to say for sure. I'd really advise against this completely without taking some professional (and likely paid and insured) advice. There are new attack vectors that can open up or be discovered daily and therefore any advice now may completely change tomorrow if a new vulnerability is discovered.
I think your best bets are:
Restrict the function timeout to be as low as physically possible (allowing for cold starts).
Minimise the IAM policy for the function as far as humanly possible. Be careful with logging: I assume you'll want some logs, but you don't want someone pumping GBs of data into your CloudWatch logs.
Restrict the language used so you are using one language that you're very confident in and that you can audit easily.
Run the Lambda in a private subnet in a VPC. You'll likely want a separate routing table, and you will need to audit your security groups and network ACLs closely.
Add alerts and VPC logs so you can be sure that a) if something does happen that shouldn't then it's logged and traceable and b) you are able to automatically get alerted on the problem and rectify it as soon as possible.
Consider who will be reviewing the code. Are they experienced and trained to spot attack vectors?
Seek paid, professional advice so you don't end up with security problems or very large bills from AWS.
This is a bit descriptive so please bear with me. :)
In the application that I'm trying to build, there are distinct functionalities of the product. Users can choose to opt in for functionality A, B, and D but not C. The way I'm building this is that each distinct functionality is a Service (stateless; I'm thinking of storing the data in Azure SQL DBs and exposing REST APIs from each service). All the services bundled together form an ApplicationType. For each customer tenant (consider this a shared account for a group of users) that is created, I'm thinking of creating a new concrete instance of the registered ApplicationType using a TenantManagementService and calling client.ApplicationManager.CreateApplicationAsync() on a FabricClient instance, so that I have a dedicated application instance running on my nodes for that tenant. However, as I mentioned, a tenant can opt in only for specific functionality, which maps to a subset of services. If a tenant chooses only service A of my application, the service instances corresponding to features B, C, and D shouldn't be idly running on the nodes.
I thought of creating actors for each service, but the services I'm creating are stateless, and I'd like to have multiple instances actively running on multiple nodes for load balancing, rather than having idle replicas of stateful services.
Similar to what I'm doing with application types, i.e., spawning application types as a new tenant registers, can I spawn/delete services as and when a tenant wants to opt-in/out of product features?
Here's what I've tried:
I tried setting InstanceCount to 0 for the services when packaging my application, in my ApplicationParameters XML files:
<Parameters>
    <Parameter Name="FeatureAService_InstanceCount" Value="0" />
    <Parameter Name="FeatureBService_InstanceCount" Value="0" />
</Parameters>
However, Service Fabric Explorer complains when instantiating an application from such an application type. The error is this:
But on the other hand, when a service is deployed on the fabric, it gives me an option to delete it specifically, so this scenario should be valid.
Any suggestions are welcome!
EDIT: My requirement is similar to the approach mentioned by anderso here - https://stackoverflow.com/a/35248349/1842699. However, the problem I'm specifically trying to solve is to create an application instance with one or more packaged services having a zero instance count!
@uplnCloud
I hope I understand everything right.
Your situation is the following:
Each customer should have a separate Application (created from the same ApplicationType).
Each customer should have only a subset of the Services (defined in the ApplicationType).
If I get it right then this is supported out of the box.
First of all, you should remove the <DefaultServices /> section from the ApplicationManifest.xml. This instructs Service Fabric not to create the services automatically when the application is created.
Now the algorithm is the following:
Create the application using FabricClient.ApplicationManager.CreateApplicationAsync().
For each required feature, create a corresponding Service using FabricClient.ServiceManager.CreateServiceAsync() (you need to specify the application name of the newly created application).
Also note that CreateServiceAsync() accepts a ServiceDescription through which you can configure all service-related parameters, from the partitioning scheme to the instance count.
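As a rough sketch with the .NET FabricClient (the application name, service names, type names and version below are hypothetical placeholders):

using System;
using System.Fabric;
using System.Fabric.Description;
using System.Threading.Tasks;

public static class TenantProvisioner
{
    // Sketch: create a per-tenant application, then only the opted-in services.
    public static async Task ProvisionAsync(FabricClient client)
    {
        var appName = new Uri("fabric:/Tenant42");
        await client.ApplicationManager.CreateApplicationAsync(
            new ApplicationDescription(appName, "MyAppType", "1.0.0"));

        // Create only the services for the features this tenant opted into (e.g. feature A).
        await client.ServiceManager.CreateServiceAsync(new StatelessServiceDescription
        {
            ApplicationName = appName,
            ServiceName = new Uri("fabric:/Tenant42/FeatureAService"),
            ServiceTypeName = "FeatureAServiceType",
            InstanceCount = -1, // -1 = one instance on every node; use a positive count otherwise
            PartitionSchemeDescription = new SingletonPartitionSchemeDescription()
        });
    }
}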
Unfortunately, you can't have zero-instance services; Service Fabric's model is that a named service always exists (is running). When you define a service (i.e. give a name to a ServiceType instance), it will have at least one instance running; if it does not need to be running, you shouldn't even have the definition of that service in your application.
But what you can have is the ServiceType definition; that means you have the binaries deployed, but you only create the service when required.
I assume you are being limited by the default services, where you declare the application and service structure upfront (before deployment of any application instance). Instead, you should use dynamic service creation via FabricClient as you described, or via PowerShell using New-ServiceFabricApplication and New-ServiceFabricService.
This link will guide you through how to do it using FabricClient.
I'll just add this as a new answer instead of commenting on another answer.
As others have mentioned, remove DefaultServices from your ApplicationManifest. That way, every new instance of the ApplicationType you create will come online without services, and you'll have to create those manually depending on what functionality your customer has selected.
Also, going with the services-per-customer approach, make sure you have enough nodes to handle the load as you bring customers online. You'll end up with a lot of processes, since application instances run their own processes for their services, and if you have few nodes hosting a lot of these, reboots of your cluster nodes can take a while to stabilise, since the cluster can potentially have many services to relocate. Although running stateless services will alleviate a good part of this.
I have been assigned to design a layered microservices architecture for Azure Service Fabric. But since my experience has mostly been with monolithic architectures, I can't come up with a specific solution.
What I have thought of so far is...
Data Layer - This is where all the Code First entities reside, along with the DbContext.
Business Layer - This is where all the Service Managers would be performing and enforcing the Business Logic i.e. UserManager (IUserManager), OrderManager (IOrderManager), InvoiceManager (IInvoiceManager) etc.
WebAPI (Self-Hosted Inside Service Fabric) - Although this WebAPI is inside Service Fabric, it does nothing except receive requests and call the respective services under Service Fabric. The WebAPI layer would also do any authentication and authorization (ASP.NET Identity) before passing the call on to the other services.
Service Fabric Services - UserService, OrderService, InvoiceService. These services are invoked from the WebAPI layer and have the Business Layer (IUserManager, IOrderManager, IInvoiceManager) injected to perform their operations.
Do you think this is okay to proceed with?
One theoretical issue, though: while reading several microservices architecture resources, I found that all of them suggest keeping the business logic inside the service, so that the specific service can be scaled independently. So I believe I'm violating a basic principle of microservices.
I'm doing this because the customer requirement is to use this Business Layer across several projects, such as batch jobs (Azure WebJobs) and a backend dashboard for internal employees (ASP.NET MVC). If I don't keep the Business Layer shared, I have to write the same business logic again for the WebJobs and the backend dashboard, which I feel is not a good idea, as a simple change in the business logic would then require code changes in several places.
One more concern: in that case, I have to go with service-to-service communication for ACID transactions. For example, while creating an Order, both an Order and an Invoice must be created. So I thought of using event-driven programming, i.e. the Order Service emits an event which the Invoice Service subscribes to, creating the Invoice on creation of the Order. But the complication is that if the Invoice Service fails to create the invoice, it can either keep retrying indefinitely (which I think is a bad idea), or emit another event which the Order Service subscribes to in order to roll back the order. There can be a lot of confusion with this; a sketch of the compensation flow I have in mind follows below.
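For concreteness, this is roughly the flow I mean, as a hypothetical sketch; IEventBus and the event types are invented for illustration and are not from any specific messaging library:

using System;
using System.Threading.Tasks;

// Invented abstraction standing in for whatever messaging you end up using.
public interface IEventBus { Task PublishAsync<T>(T evt); }

public record OrderCreated(Guid OrderId);
public record InvoiceFailed(Guid OrderId, string Reason);

public class InvoiceEventHandler
{
    private readonly IEventBus _bus;
    public InvoiceEventHandler(IEventBus bus) => _bus = bus;

    public async Task HandleAsync(OrderCreated evt)
    {
        try
        {
            await CreateInvoiceAsync(evt.OrderId); // may throw
        }
        catch (Exception ex)
        {
            // Rather than retrying forever, publish a compensating event so the
            // Order Service can cancel (roll back) the order.
            await _bus.PublishAsync(new InvoiceFailed(evt.OrderId, ex.Message));
        }
    }

    private Task CreateInvoiceAsync(Guid orderId) => Task.CompletedTask; // stub
}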
Also, I must mention that, we are using a Single Database as of now.
So my questions are...
What issue do you see with my approach? Is it okay?
If not, please suggest me a better approach. You can guide me to some resources for implementation details or conceptual details too.
NOTE: The client's requirement is that they can scale specific modules as needed. For example, UserService might not be used much, as there won't be many signups or user profile changes daily, but OrderService may need to scale out, as there can be lots of Orders coming in daily.
I'll be glad to learn. As this is my first chance of getting my hands on designing a microservices architecture.
First of all, why does the customer want to use Service Fabric and a microservices architecture when, at the same time, it sounds like there are other parts of the solution (WebJobs etc.) that will not be part of that architecture but rather live in their own ecosystem (yet share logic)? I think it would be good for you to first understand the underlying requirements that should guide the architecture. What is most important?
Scalability? Flexibility?
Development and deployment? Maintainability?
Modularity in ability to compose new solutions based on autonomous microservices?
The list could go on. Until you figure this out there is really no point in designing further as you don't know what you are designing for...
As for sharing business logic with WebJobs, there is nothing preventing you from sharing code packages containing the same BL; it doesn't have to be a shared service, and it doesn't mean that it has to be packaged the same way in relation to its interface or persistence. Another thing to consider is: why do you want to run WebJobs when you can build similar functionality in SF services?
Is it possible to create a Liferay Service Builder service without configuring any database tables in the service.xml file?
The purpose here is to create a service layer using Liferay Service Builder, with no direct database interaction in this service layer.
Yes, and it's quite simple. While you still need an entity (which provides the name for your service) you can leave this entity definition empty.
This will create the service (local or remote, as configured in the entity) but no model, no persistence and no database table.
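For illustration, such an entity-less definition in service.xml might look like this minimal sketch (the package path, namespace and entity name are placeholders):

<service-builder package-path="com.example.sample">
    <namespace>Sample</namespace>
    <!-- No <column> elements: Service Builder generates the service classes,
         but no model, no persistence and no database table. -->
    <entity name="Sample" local-service="true" remote-service="false" />
</service-builder>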
One of the situations where this comes in really handy is when you want to add another method to an existing service (which you can't) - you just create a new service with your custom methods and delegate to the original service.
I agree with @Olaf Kock's answer, which says that it is possible to have an empty model with Service Builder. Furthermore, with an empty entity you benefit from having the same transactional context as your portal, from cluster management, and from complete integration with Liferay Portal.
If you have the same transactional environment as the portal, you can imagine creating a service that aggregates native Liferay services, with the assurance that the transactional context is the same as the portal's.
I hope this reflection adds value.
It's highly recommended that if you're creating a service.xml, at least one entity should be there. Otherwise there is no need to add that configuration.
It is possible to create a Service Builder service without real entities.
As shown in the link, it is possible to create Service Builder services without entities.
This is also discussed in more detail in this forum.
Hope it helps someone. Thanks