I want to create a workflow automation where an activity comes in and a user can set up a multilevel workflow.
For the frontend I am using https://reactflow.dev
How should I structure things in the Node.js backend: the database, handling control-flow steps, and steps that require crons?
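For illustration only, one possible shape for such a backend is to store the React Flow graph as plain nodes and edges (for example as JSON in the database) and let a cron scheduler pick up the time-based steps. The sketch below is an assumption, not an existing implementation; the node-cron dependency and every name in it (Workflow, runNode, scheduleWorkflow) are made up:

    // Sketch only: persist the React Flow graph as data and schedule time-based steps.
    import cron from "node-cron";

    interface WorkflowNode {
      id: string;
      type: "trigger" | "condition" | "action";   // control flow is stored as data, not code
      data: { cronExpression?: string; config?: Record<string, unknown> };
    }

    interface WorkflowEdge { id: string; source: string; target: string; }

    interface Workflow {
      id: string;
      nodes: WorkflowNode[];                       // saved exactly as React Flow emits them
      edges: WorkflowEdge[];
    }

    // Execute a single node; dispatch on node.type (call an API, evaluate a condition, ...).
    async function runNode(node: WorkflowNode): Promise<boolean> {
      console.log("running node", node.id, node.type);
      return true;                                 // returning false would stop this branch
    }

    // Walk the graph from a node, following its outgoing edges.
    async function runFrom(workflow: Workflow, nodeId: string): Promise<void> {
      const node = workflow.nodes.find((n) => n.id === nodeId);
      if (!node || !(await runNode(node))) return;
      for (const edge of workflow.edges.filter((e) => e.source === nodeId)) {
        await runFrom(workflow, edge.target);
      }
    }

    // Register a cron job for every trigger node that declares a cron expression.
    function scheduleWorkflow(workflow: Workflow): void {
      for (const node of workflow.nodes) {
        if (node.type === "trigger" && node.data.cronExpression) {
          cron.schedule(node.data.cronExpression, () => void runFrom(workflow, node.id));
        }
      }
    }

With this shape the persisted document is exactly what React Flow renders, and supporting a new step type only means handling another node.type inside runNode.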
You may also want to have a look at Node-RED.
It's an open-source product that does exactly that.
There's a set of built-in nodes.
You can develop your own nodes or import third-party ones, which are distributed via npm.
You can also just create a node with JavaScript or TypeScript code in it, on the fly.
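For example, the body of a Function node is plain JavaScript that receives and returns the msg object; a minimal sketch (the payload fields here are made up):

    // Inside a Node-RED Function node: msg is provided by the runtime.
    // The payload fields (amount, route) are made-up examples.
    if (msg.payload && msg.payload.amount > 1000) {
        msg.payload.route = "manager-approval";   // send large requests down another branch
    } else {
        msg.payload.route = "auto-approve";
    }
    return msg;                                   // pass the message on to the next node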
You should also check out Flume: https://flume.dev/
https://flume.dev/docs/quick-start/
Also have a look at this CodeSandbox example; try to read the code
and all of its dependencies: https://codesandbox.io/s/node-based-code-generation-test-forked-ll9flz?file=/src/App.tsx
I hope you find this helpful.
I would like to know how to post multiple records to SAP using "BatchRequestBuilder" along with a ChangeSet. I am using a custom OData service call (ODataCreateRequestBuilder), not the VDM model. I didn't find any blog or documentation to start with.
Can you please help me in this regard?
Updated:
Below is what I am trying to post to SAP
[{"purchaseSchAgrmntNo":"","customerMaterialNumber":"","plant":"","vendorNo":""},{"purchaseSchAgrmntNo":"","customerMaterialNumber":"","plant":"","vendorNo":""}]
SAP Cloud SDK version: 3.9.0
I have added the code below with only one create request.
ChangeSet changeSet = new ChangeSetBuilder()
        .addCreateRequest(ODataCreateRequestBuilder
                .withEntity(sapConfig.getServiceUrlRepriceList(), sapConfig.getEntityRepriceList())
                .withBodyAsMap(responseBody)
                .build())
        .build();
BatchResult batchResult = BatchRequestBuilder.withService("URL?")
        .addChangeSet(changeSet)
        .build()
        .execute(httpClient);
Can you let me know if this is correct? Also, what do I have to pass as the service? Is it the service URL?
Thanks,
Arun Pai
The BatchRequestBuilder is actually not directly part of the SAP Cloud SDK but a dependency that the SDK internally uses to execute batch requests. That is why on the SDK level there is no documentation on how to use it.
Roughly, a batch request comprises multiple change sets, which in turn group together multiple operations. The ChangeSetBuilder allows you to build up change sets which you can then pass to a BatchRequestBuilder.
So if you want to run create requests in batch mode, you would want to leverage public ChangeSetBuilder addCreateRequest(ODataCreateRequest oDataCreateRequest).
You can take a look at how the SAP Cloud SDK uses these classes to build up batch requests to get an idea of how it works in detail. As a starting point, look towards BatchFluentHelperBasic. However, unless the service you want to query is unknown at compile time, I recommend that you leverage the generator to generate this code so that you can use the VDM instead, which simplifies this.
If you extend your question to hold more specific information on what you actually want to achieve I can expand my answer to give a more concrete example. Also please include the SDK version you are using.
I have made three different applications in JHipster with the monolithic pattern. I need to merge these applications. I know that using microservices my task would be easy, but the current requirement is to do the merging with the monolithic pattern only.
I need to merge two applications into another, main application. I am using MySQL as the database. I don't know where I need to make changes and how. Please help me out; I am a newbie in this scenario.
I tried to create a Java file for the getter/setter methods and a DAO file for the three databases. Now, in the main class file, I am trying to take every DAO as an array of objects, integrate them, and put the result into the third DB. Is this possible? I wanted to show the code, but since I am new I am not able to maintain the coding standards well enough to show it.
In this way I tried to involve three databases in a single scenario, and I want to complete my query through the CRUD model.
As you already pointed out, the proper way of merging here would be the microservice option, which you cannot take, as you are forced to use a monolithic architecture...
Almost automatic merge
If you did not change anything in your code after generating the entities, you can just put the contents of your applications' .jhipster directories into one and run yo jhipster --with-entities to regenerate the entities in one application. Keep in mind that you will have to take a look at your main/resources/config/liquibase folder to set the migration ids properly.
Manual merge
For this you should be more experienced in the underlying technologies, as you will have to:
recreate your entity classes
recreate your DAOs/repositories
(maybe) recreate your services, or service implementations
recreate your REST controllers
do a proper liquibase migration
provide some tests
migrate the frontend code by adding states, components, templates, etc.
For most of these things you can just copy and paste already generated code.
For more information, you should ask more precisely about what is not working and what you have already tried...
I'm trying to set up a nodejs-express boilerplate for my new project, and this time I want to try a doc-driven flow. I've checked a couple of packages like swagger-node, swaggerize-express, etc. They all provide great functionality.
However, I don't see anything that supports incremental scaffolding when the Swagger file is updated. That means when the spec changes I have to manually check the diff and manually add/modify the new parts. That doesn't sound cool.
Could anyone share something that is more reasonable? Thanks!
Edit:
After trying some frameworks, I decided to use swagger-express-middleware. This framework offers a convenient way to automatically validate the routes/parameters of your service against the spec.
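If it helps, wiring it up looks roughly like the sketch below (based on the project's README; the swagger.yaml path, the port, and the chosen middleware are placeholders to adapt):

    // Rough sketch of swagger-express-middleware wiring; adjust paths/port to your setup.
    import express from "express";
    import createMiddleware from "swagger-express-middleware";

    const app = express();

    createMiddleware("swagger.yaml", app, (err, middleware) => {
      if (err) throw err;
      app.use(
        middleware.metadata(),
        middleware.CORS(),
        middleware.parseRequest(),      // parse params/bodies as declared in the spec
        middleware.validateRequest()    // reject requests that don't match the Swagger file
      );
      app.listen(8000, () => console.log("API listening on port 8000"));
    });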
You can use tools like swagger-maven-plugin to incrementally rebuild your server code, which means reading from your API definition and updating/building code as necessary. There are SaaS products like SwaggerHub which enable this as well, by merging code and pushing to git.
I'm a newbie in JHipster and I'm trying to figure this out: when I create a new entity, JHipster generates several files (Angular, HTML, and Java classes). Now, if I want common code across all this generated code, do I have to edit it each time I use the Yeoman generator? What I want is:
A custom index template and pages. Is it safe to edit them?
Customize the entity tables and entity forms using Angular, maybe by extending the Yeoman generators.
Customize the generated Java classes, maybe using AOP.
So do I need to edit each piece of generated code every time? Is what I want good practice? To clarify: I want to use a custom Bootstrap/Angular dashboard template like Minovate. I see how to customize Bootstrap in the documentation, but nothing about what I'm asking for. Thanks.
JHipster is just a code generator; once generated, the code is yours.
For Angular screens I would say do as much as you can in CSS/SASS.
But it's very likely that you will need to build some screens mixing several entities and change the structure of the entity screens.
So you should rather consider them as a starting point and do your own stuff in another folder so that it does not get overwritten by the next re-generations.
This way you can still update your entity definitions in the .jhipster folder and re-run yo jhipster:entity <entity name> on the entities you modified.
Customising Java entities is usually much simpler, and you can easily achieve this by merging the generated code with git and defining your own service classes.
AOP seems overkill here.
Extending a Yeoman generator is a lot of work.
I suggest using some VCS (git, Subversion, or whatever you like): have a branch dedicated to the plain JHipster-generated code and another one where you make your customizations.
Then regenerate on the JHipster branch and merge it back into yours.
This should at least reduce manual intervention.
I'm a complete Node noob, so I apologize if this question has an obvious answer.
I'm looking to create a web app that will run plugins from untrusted sources (i.e. community submissions). So I need to lock down those plugins into a sandbox where only certain access is allowed (can't write to disk, etc.). Ideally, the plugin would only be able to use certain approved node packages and APIs.
Is this possible in Node? If so, can you point me toward a package or documentation that will get me started?
Here is a small list of projects that can help you:
https://github.com/gf3/sandbox
https://github.com/hflw/node-sandbox
https://github.com/bcoe/sandcastle
https://github.com/wearefractal/boxy
I suggest the first one (sandbox) since it's more mature.
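Usage of that one is roughly as follows (a minimal sketch following its README; the expression is just an example):

    // Minimal sketch using gf3/sandbox: runs untrusted code in a separate process.
    import Sandbox from "sandbox";

    const s = new Sandbox();
    s.run("1 + 1 + ' apples'", (output) => {
      // output.result is the stringified result of the sandboxed code
      console.log(output.result);
    });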
I would also contribute to the list with my library: https://github.com/asvd/jailed. In addition to sandboxing the untrusted code (in a restricted subprocess), it gives you the opportunity to export any set of functions into the sandbox, thus defining a custom API for the sandboxed code.
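A minimal sketch of that export mechanism, adapted from the jailed README (the log function and the inline code string are just examples):

    // Host side: only the functions listed in `api` are reachable from the sandbox.
    import jailed from "jailed";

    const api = {
      log: (message: string) => console.log("plugin says:", message),
    };

    // The untrusted code runs in a restricted subprocess and sees `api` as application.remote.
    const untrustedCode = "application.remote.log('Hello from the sandbox!');";

    const plugin = new jailed.DynamicPlugin(untrustedCode, api);
    plugin.whenConnected(() => console.log("plugin connected"));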