Is there a way to implement platform-independent transactional I/O operations using .NET Core?
In the pure .NET world we can use Transactional NTFS, but that is not an option here.
Use a transient transaction via the System.Transactions namespace in .NET. In .NET Core 2.1, the SQL client and EF Core fully support enlisting in ambient transactions; see EF Core's System.Transactions documentation.
You can then use enlistments to add custom logic that supports commit and rollback functionality. See the Transaction.EnlistVolatile documentation and a live code sample used in the dotnet install tool logic.
There is a package on NuGet (TxFileManager) that implements common file operations using System.Transactions, but its site on CodePlex is now down. However, there are forks on GitHub, like this one, containing the source code, which should help in implementing these actions.
I have a Core project where I need to do some cryptographic operations, e.g. verification of a SHA256 hash. What can I do, given that it's the Core project and therefore shouldn't depend on anything? Do I have to write my own cryptographic functions that are resistant to e.g. side-channel attacks? That would cause security problems.
So what should I do? Can my Core project depend on a NuGet package if I use Clean Architecture?
The guideline regarding dependencies is to keep the core project as simple as possible so that most of its logic is about solving the business problem.
By keeping it simple, it's much easier to express which part of the business domain the classes solve. It's also easy to write focused tests that prove that the code can solve the correct part of the business problem.
To me, preventing attacks is not a part of that. It's something that should be done on inbound API calls before the domain is called. I would put that logic in application services. Those services can, of course, live in the Core project but not in any of the bounded contexts.
In Clean Architecture we try to keep the domain and application logic as independent from external libraries and frameworks as possible so that we do not depend on their future development.
Nevertheless, the application logic will have to interact with external libraries, services and other I/O, which is achieved via "dependency inversion": the application logic defines an interface which is implemented by the outer layers (infrastructure).
This way the application logic remains "clean" and can focus on decision-making while you can still reuse external libraries and services.
You can find a more detailed discussion of this topic here: http://www.plainionist.net/Implementing-Clean-Architecture-Frameworks/
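To make the dependency-inversion idea concrete, here is a minimal sketch, written in TypeScript purely for illustration; the same shape applies in a .NET Core solution. The names HashVerifier, DocumentImportService and NodeCryptoHashVerifier are hypothetical: the core defines a small port for hash verification, the infrastructure layer implements it with the platform's crypto library, and the two are wired together at the composition root.

```typescript
import { createHash, timingSafeEqual } from "node:crypto";

// --- Core layer: defines the port, knows nothing about crypto libraries ---
interface HashVerifier {
  verifySha256(payload: Uint8Array, expectedHex: string): boolean;
}

class DocumentImportService {
  constructor(private readonly verifier: HashVerifier) {}

  import(payload: Uint8Array, expectedHex: string): string {
    if (!this.verifier.verifySha256(payload, expectedHex)) {
      throw new Error("Checksum mismatch: document rejected");
    }
    // ...business logic continues here...
    return "imported";
  }
}

// --- Infrastructure layer: implements the port with the platform's crypto ---
class NodeCryptoHashVerifier implements HashVerifier {
  verifySha256(payload: Uint8Array, expectedHex: string): boolean {
    const actual = createHash("sha256").update(payload).digest();
    const expected = Buffer.from(expectedHex, "hex");
    // Constant-time comparison; lengths must match or timingSafeEqual throws.
    return actual.length === expected.length && timingSafeEqual(actual, expected);
  }
}

// --- Composition root: wires infrastructure into the core ---
const service = new DocumentImportService(new NodeCryptoHashVerifier());
const data = Buffer.from("hello");
const hex = createHash("sha256").update(data).digest("hex");
console.log(service.import(data, hex)); // "imported"
```

The service in the core never references the crypto library directly; swapping the verifier for a hardware-backed or FIPS-certified implementation only touches the infrastructure layer.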
I want to implement the claim-check workflow with Azure Service Bus (the claim check pattern).
Is it possible to do that using Node.js?
I have only found .NET examples, and I can't see anything in the @azure/service-bus Node package that seems to refer to claim check.
Has anyone had the same problem?
A claim-check pattern can be implemented in any language as long as there is support for hooking into the send/receive pipeline. For a long time, this pattern was not easy, or even possible, to implement with .NET either, as the previous .NET SDK did not provide a way to hook into sending/receiving. The new .NET SDK had that consideration from the beginning. Almost. Once it was implemented, the claim-check pattern was a no-brainer.
If you're looking to implement the claim-check pattern as a plugin, the Node.js SDK would need support for a pipeline concept. You can raise an issue against the Service Bus library to request it, or contribute it yourself.
Another alternative is to abstract the sending and receiving operations behind your own implementation that parks large payloads in some kind of external storage; a sketch of this approach follows.
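A minimal sketch of that alternative, assuming the current @azure/service-bus (v7) and @azure/storage-blob packages; the topic, subscription and container names and the connection-string environment variables are placeholders, and error handling is omitted:

```typescript
import { ServiceBusClient } from "@azure/service-bus";
import { BlobServiceClient } from "@azure/storage-blob";
import { randomUUID } from "node:crypto";

const sbClient = new ServiceBusClient(process.env.SERVICEBUS_CONNECTION_STRING!);
// Assumes the "claim-checks" container already exists in the storage account.
const blobContainer = BlobServiceClient
  .fromConnectionString(process.env.STORAGE_CONNECTION_STRING!)
  .getContainerClient("claim-checks");

// Sender: park the large payload in blob storage and send only a reference.
async function sendWithClaimCheck(topic: string, payload: Buffer): Promise<void> {
  const blobName = randomUUID();
  await blobContainer.getBlockBlobClient(blobName).upload(payload, payload.length);

  const sender = sbClient.createSender(topic);
  await sender.sendMessages({ body: { claimCheck: blobName } });
  await sender.close();
}

// Receiver: resolve the reference back into the original payload.
async function receiveWithClaimCheck(
  topic: string,
  subscription: string
): Promise<Buffer | undefined> {
  const receiver = sbClient.createReceiver(topic, subscription);
  const [message] = await receiver.receiveMessages(1, { maxWaitTimeInMs: 10_000 });
  if (!message) return undefined;

  const payload = await blobContainer
    .getBlockBlobClient(message.body.claimCheck)
    .downloadToBuffer();

  await receiver.completeMessage(message);
  await receiver.close();
  return payload;
}
```

Depending on your retention requirements, the receiver could also delete the blob once the message has been completed.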
I want to create a Node.js Azure Service Bus client to read from a topic, and I want my client to automatically register its subscription.
So @azure/service-bus lets me work with Service Bus normally, yet does not allow me to create a subscription. azure-sb does allow creating one and also operating on the bus.
Do I need to install both in my Node.js app? Both seem to be official packages from Microsoft; however, I wonder what the intended usage for both is, and the intended future (will one replace the other?)
To put things simply, azure-sb is the old SDK for managing Service Bus while @azure/service-bus is the new one.
There are a few differences in the two packages:
Protocol: azure-sb is a wrapper over the Service Bus REST API (HTTP), while @azure/service-bus uses the AMQP protocol to communicate with Service Bus.
Features: azure-sb enables you to work with entities (queues, topics and subscriptions) by allowing you to perform CRUD operations on them. These features were removed from @azure/service-bus, as Microsoft is moving these CRUD operations to the control plane by enforcing RBAC on them; for those operations you will need to use @azure/arm-servicebus (see the sketch after this list). The @azure/service-bus package is geared more towards sending and receiving messages. It also supports WebSockets in case you need that.
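For the control-plane side, here is a rough sketch of creating a subscription with @azure/arm-servicebus, assuming the current track-2 package and Azure AD credentials via @azure/identity; the resource group, namespace, topic and subscription names are placeholders:

```typescript
import { ServiceBusManagementClient } from "@azure/arm-servicebus";
import { DefaultAzureCredential } from "@azure/identity";

async function ensureSubscription(): Promise<void> {
  // Control-plane (ARM) client: authenticates with Azure AD, not a connection string.
  const client = new ServiceBusManagementClient(
    new DefaultAzureCredential(),
    process.env.AZURE_SUBSCRIPTION_ID!
  );

  // Creates the subscription if it does not exist, updates it otherwise.
  await client.subscriptions.createOrUpdate(
    "my-resource-group",   // placeholder resource group
    "my-sb-namespace",     // placeholder Service Bus namespace
    "my-topic",
    "my-subscription",
    { defaultMessageTimeToLive: "P14D" } // optional, ISO 8601 duration
  );
}

ensureSubscription().catch(console.error);
```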
"Do I need to install both in my Node.js app? Both seem to be official packages from Microsoft; however, I wonder what the intended usage for both is, and the intended future (will one replace the other?)"
Considering your requirement, my answer is yes: you would need both of these packages. I'm not sure if that will cause any problems. In our project, we ended up calling the REST API directly (instead of using the azure-sb package) for CRUD operations on entities and used @azure/service-bus for working with messages.
Regarding whether @azure/service-bus will eventually replace the azure-sb package, I would be purely speculating, but I don't think that will happen anytime soon. For that, the Service Bus team would have to remove the REST API first, which seems highly unlikely. However, Microsoft is pushing hard for RBAC at the entity level, so I would recommend using @azure/arm-servicebus along with @azure/service-bus if you're starting new (see the receiving sketch below).
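On the data plane, a sketch of reading from the topic with @azure/service-bus once the subscription exists; the connection string, topic and subscription names are placeholders:

```typescript
import { ServiceBusClient } from "@azure/service-bus";

async function main(): Promise<void> {
  const sbClient = new ServiceBusClient(process.env.SERVICEBUS_CONNECTION_STRING!);

  // Assumes the subscription was already created, e.g. via @azure/arm-servicebus
  // as sketched earlier, or through the portal / REST API.
  const receiver = sbClient.createReceiver("my-topic", "my-subscription");

  receiver.subscribe({
    processMessage: async (message) => {
      console.log("received:", message.body);
    },
    processError: async (args) => {
      console.error("error from", args.errorSource, args.error);
    },
  });
}

main().catch(console.error);
```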
I am developing an application that is rolled out in stages. For each sprint there are database changes, so Core Data migration has been implemented. So far we have had 3 staged releases. Whenever the upgrade is done one version at a time, the application runs fine. But whenever I try to upgrade from version 1 to version 3, an 'unable to add persistent store' error occurs. Can someone help me with the issue?
Core Data migration does not have a concept of versions as you would expect them. As far as Core Data is concerned there are only two versions, the version of the NSPersistentStore and the version you are currently using.
To use lightweight migration, you must test every version of your store and make sure that it will migrate to the current version directly. If it does not, then you cannot use lightweight migration for that specific case, and you either need to develop a migration model or come up with another solution.
Personally, on iOS, I avoid heavy migration as it is very expensive in terms of memory and time. If I cannot use a lightweight migration, I most often explore export/import solutions (exporting to JSON, for example, and importing into the new model) or look at refreshing the data from the server.
My problem was that I was trying to change an attribute's data type during automatic lightweight migration, and automatic lightweight Core Data migration does not support data type changes. I resolved the issue by resetting the data type to the older one.
I am new to the enterprise integration area.
We have a requirement to develop a solution where multiple OSSs (operations support systems) should talk to multiple EMSs (element management systems) and network devices (different transports and protocols have to be supported), and the solution should run in WebLogic.
Queries
Which will be the best fit for this situation: an ESB, Apache ServiceMix or Spring Integration?
If we use open-source ESBs (like WSO2 and Talend ESBs), I think we need to maintain two servers (an ESB server and a WebLogic server), so won't ESB/WebLogic integration be an issue?
Can Apache ServiceMix or Spring Integration be deployed/run inside WebLogic?
Is Apache ServiceMix still supported, as I can see most updates happening in Fuse ESB only?
You need to analyze your scenario and then decide. If you need only transformation, or transformation alongside simple routing, you can use a framework like Smooks or Camel.
If you need to transform and there are still a lot of systems involved that need those transformed messages, then you could use an ESB.
Then comes selecting the ESB product, which also depends on your application ecosystem. All of the products are capable, and each fits better than the others within its own application ecosystem.
First you need to know a few things about Camel / Fuse ESB / ServiceMix.
All of the above revolve around the same thing; each of them is a project where the Camel integration framework is the de facto way of coding.
1. Camel -- the integration framework and the de facto way of coding (sophisticated in its own way and very flexible).
2. ServiceMix -- a container for deploying your integration code (Camel integration code).
3. Fuse ESB -- an enterprise feather in the cap of ServiceMix; it provides a studio for coding, a list of components, and wrappers like clustering and other facilities around ServiceMix.
I would also like you to consider Mule ESB, which would be a good contender on your list as well.
Some answers to your questions:
1. You can deploy Camel code or Spring Integration code into whatever container you like (it all comes down to the Maven and JAR management you need to do).
2. ServiceMix is Apache-licensed and completely open source; if you need commercial support, I suggest you choose Fuse ESB, which is now part of the JBoss family and backed by Red Hat.
Please follow the links below for more detailed discussions from other stackoverflow.com users; use them for your analysis:
Apache Camel and other ESB products
What is an ESB and what is it good for?
Messaging, Queues and ESB's - I know where I want to be but not how to get there
JMS and ESB - how they are related?