I am trying to add a feature that archives data without using the paranoid option.
The system I am working on is complicated (two services and one micro-service connected together).
Archived instances must still be retrievable whenever needed (by calling their ID), but the user must not be allowed to perform any action on them, such as deleting, updating, changing their related data, or even uploading new photos for an archived instance.
I tried using the defaultScope in the model definition (Sequelize) as follows:
defaultScope: {
  where: { isArchived: false },
},
This works in that it hides all instances with isArchived: true, so the user cannot reach them to delete or update them, but it also prevents the user from retrieving archived instances to view their details or their related info.
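For what it's worth, Sequelize also supports named scopes alongside defaultScope, so a read-only "view archived by ID" path could opt out of the filter while every write path keeps it. A minimal sketch (the model and scope names are hypothetical):

```javascript
// Sketch: the defaultScope hides archived rows from normal queries,
// while a named scope (or Model.unscoped()) lets the read-by-ID path
// see them. Write endpoints keep using the default-scoped model, so
// archived instances stay untouchable there.
const Item = sequelize.define('Item', {
  name: DataTypes.STRING,
  isArchived: { type: DataTypes.BOOLEAN, defaultValue: false },
}, {
  defaultScope: { where: { isArchived: false } }, // hides archived rows
  scopes: { withArchived: {} },                   // no filter at all
});

// Read-only path: allowed to return archived instances by ID.
const instance = await Item.scope('withArchived').findByPk(id);
```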
I thought of customizing some functions in the repo folder, but that adds more complexity and lines of code, so I would like to see other suggestions that cover this feature.
Note: we don't want to use paranoid for archiving because it is already used for the delete feature, and we don't want to cause confusion.
Thanks
We are trying to set up a workflow for delivering content model changes to our other environments (stage & prod).
Right now, our approach is this:
1. Create a new Contentful field as a migration script using the Contentful CLI.
2. Run the script in local dev to make sure the result is as desired, using contentful space migration migrations/2023-01-12-add-field.ts.
3. Add the script to Git in the folder migrations/[date]-[description].js.
4. When releasing to prod, run all scripts in the migrations folder, in order, as part of the build process.
5. When the folder contains "too many" scripts, and we are certain all changes have been applied to all envs, manually remove the scripts from Git and start over with an empty folder.
Where it fails
But between points 4 and 5 there will be cases where a script has already been run in an earlier release, and that throws an error:
I would like the scripts to continue more gracefully without throwing an error, but I can't find support for this in the space migration docs. I have tried wrapping the code in try/catch without any luck.
Contentful recommends using the Content Migration API in favour of the Content Management API since it is faster. I guess we could use the Content Management API, but at the same time we want to follow "best practice".
Are we missing something here?
I would like to do something like:
module.exports = function (migration) {
  // Create a new category field in the blog post content type.
  const blogPost = migration.editContentType('blogPost')
  if (blogPost.fieldExists('testField')) {
    console.log('Field already exists')
  } else {
    blogPost.createField('testField').name('Test field').type('Symbol')
  }
}
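For reference, a hedged sketch of that idea: contentful-migration passes a context object as the second argument to the migration function, and in recent versions it includes a makeRequest helper (verify this against your CLI version; treat its availability and the exact URL shape as assumptions). With it, the existence check could look like:

```javascript
module.exports = async function (migration, { makeRequest }) {
  // Fetch the current content type definition via the Management API.
  const contentType = await makeRequest({
    method: 'GET',
    url: '/content_types/blogPost',
  });
  const hasField = contentType.fields.some((f) => f.id === 'testField');
  if (!hasField) {
    migration.editContentType('blogPost')
      .createField('testField')
      .name('Test field')
      .type('Symbol');
  }
};
```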
I'm new to the frontend world. I have some knowledge of React and GraphQL, which is why I've decided to try to implement a test blog with Gatsby, as it seems pretty popular and easy to use.
I also wanted to get my hands on Material UI, so I'm using this Gatsby starter: https://www.gatsbyjs.org/starters/Vagr9K/gatsby-material-starter
This starter includes an integration with Netlify CMS, which I want to replace with Strapi CMS so I can keep the content there.
Any idea on how to do this?
There's a lot of stuff in your question; I'll try to answer it step by step. If you need more details on how to create pages, etc., let me know and I will update my answer.
If you want to change your source from Netlify to Strapi, you need to set it up in your gatsby-config.js, replacing the gatsby-plugin-netlify-cms plugin with something like this:
{
  resolve: `gatsby-source-strapi`,
  options: {
    apiURL: `http://localhost:1337`,
    queryLimit: 1000, // Defaults to 100
    contentTypes: [`article`, `user`],
    // If using single types, place them in this array.
    singleTypes: [`home-page`, `contact`],
    // Possibility to log in with a Strapi user, when content types are not publicly available (optional).
    loginData: {
      identifier: "",
      password: "",
    },
  },
},
Note that when using starters you'll have to install the plugins you need and remove the unnecessary ones in order to reduce bundle size and improve performance.
The next step is to create pages from your source CMS (articles, posts, pages, etc.) using GraphQL. Maybe this blog helps you. But as a short summary, you need to create queries in your gatsby-node.js to retrieve data from Strapi CMS and create pages using Gatsby's API.
The idea is the same as in your starter; however, instead of using gatsby-source-filesystem and allMarkdownRemark in your page creation, you will use the objects provided by Strapi CMS. You can check the queries and the available objects by running gatsby develop and visiting localhost:8000/___graphql.
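As a short, hedged sketch of that gatsby-node.js step (the article content type, the slug field, and the template path are assumptions based on a typical gatsby-source-strapi setup):

```javascript
// gatsby-node.js — create one page per Strapi article.
const path = require('path');

exports.createPages = async ({ graphql, actions }) => {
  const { createPage } = actions;
  const result = await graphql(`
    {
      allStrapiArticle {
        nodes {
          slug
        }
      }
    }
  `);
  result.data.allStrapiArticle.nodes.forEach((node) => {
    createPage({
      path: `/articles/${node.slug}`,
      component: path.resolve('./src/templates/article.js'),
      context: { slug: node.slug }, // available to the template's page query
    });
  });
};
```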
Keep in mind that you will always query static data (i.e. pre-downloaded data) from your multiple sources, so when you run the develop command, the data is downloaded and made accessible via GraphQL.
You can check for further information in its starter repository.
The project I'm working on uses the Feathers JS framework server side. Many of the services have hooks (or middleware) that make other calls and attach data before sending back to the client. I have a new feature that needs to query a database, but only for a few specific things, so I don't want to use the already built-out "find" method for this query: that "find" method has many unneeded hooks and calls to other databases to fetch data I don't need for this new feature.
My two solutions so far:
I could use the standard "find" query and write if statements in all hooks that check for a specific string parameter passed in from the client side, so that these hooks are deactivated on this specific call. But that seems tedious, especially if I find the same need in several other services that have already been built out.
I could initialize a second service below my main service. So if my main service is:
app.use('/comments', new JHService(options));
right underneath I write:
app.use('/comments/allParticipants', new JHService(options));
And then attach a whole new set of hooks for that service. Basically it's a whole new service whose only relation to the original is that the first part of its name is 'comments'. Since I'm new to Feathers, I'm not sure whether that is a performant or optimal solution.
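Sketched out, option 2 would look something like this (the hook name is a placeholder for whatever this feature actually needs):

```javascript
// A sibling service reusing the same service class but registering its
// own, much lighter hook chain.
app.use('/comments/allParticipants', new JHService(options));

app.service('comments/allParticipants').after({
  find: [attachParticipantsOnly], // hypothetical lightweight hook
});
```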
Is there a better solution than those options? Or is option 1 or option 2 the correct way to solve my current issue?
You can always wrap the population hooks into a conditional hook:
const hooks = require('feathers-hooks-common');
app.service('myservice').after({
  create: hooks.iff(hook => hook.params.populate !== false, populateEntries)
});
Now population will only run if params.populate is not false.
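To illustrate what iff is doing, here is a minimal plain-JavaScript sketch of the pattern (illustrative only, not the actual feathers-hooks-common implementation):

```javascript
// Run a hook function only when a predicate over the context is true;
// otherwise pass the context through unchanged.
function iff(predicate, hookFn) {
  return (context) => (predicate(context) ? hookFn(context) : context);
}

// A stand-in for populateEntries: marks the context as populated.
const populateEntries = (context) => ({ ...context, populated: true });

const conditional = iff(
  (ctx) => ctx.params.populate !== false,
  populateEntries
);

conditional({ params: {} });                  // populateEntries runs
conditional({ params: { populate: false } }); // populateEntries is skipped
```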
I'm trying to implement a permission framework in Node.js, using Sequelize as an ORM (with Postgres). After hours of research, the closest thing I can find to do this with existing npm modules is using acl with acl-sequelize to support my stack.
The problem is that the acl module assigns a role a set of permissions covering all instances of a specific resource. However, I need to do permissioning per instance, based on the user's existing relationships.
As an example, consider a permissioning system for a simple forum. It gives these permissions for each role:
// allow guests to view posts
acl.allow("guest", "post", "view");
// allow registered users to view and create posts
acl.allow("registered users", "post", ["view", "create"]);
// allow administrators to perform any action on posts
acl.allow("administrator", "post", "*");
Suppose I also want to add the ability for registered users to edit their own posts, where the user has a relationship to all the posts they've created.
Is there any way for this module to do this, or any other module that can support this kind of behavior at the database/ORM level?
If not, and I have to implement a custom one, what would be the best approach to creating something like this?
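To make the requirement concrete, here is a minimal, library-free sketch of an instance-level check in plain JavaScript (all role, subject, and field names are hypothetical):

```javascript
// Rules grant an action on a subject type for a role, optionally guarded
// by a predicate over the concrete instance and the current user.
const rules = [
  { role: 'guest',      action: 'view',   subject: 'post' },
  { role: 'registered', action: 'view',   subject: 'post' },
  { role: 'registered', action: 'create', subject: 'post' },
  {
    role: 'registered',
    action: 'edit',
    subject: 'post',
    // instance-level condition: only the author may edit
    condition: (user, post) => post.authorId === user.id,
  },
];

function can(user, action, subject, instance) {
  return rules.some(
    (r) =>
      r.role === user.role &&
      r.action === action &&
      r.subject === subject &&
      (!r.condition || (instance !== undefined && r.condition(user, instance)))
  );
}
```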
There is a relatively new library called CASL (I'm the author of this library), and it's possible to implement your use case quite easily:
const { AbilityBuilder } = require('casl')

const ability = AbilityBuilder.define((can, cannot) => {
  can('read', 'all')
  can(['update', 'delete'], 'Article', { author_id: loggedInUser.id })
})
The code above basically says:
- anyone can read everything
- anyone can update and delete articles whose author_id equals the logged-in user's id
Later you can do:
ability.can('delete', 'Article')
// or
ability.can('update', article)
// where the article variable is an instance of your Article model
Also, there is an article which explains how to integrate CASL with MongoDB and Express for exactly your use case.
When we create a new table in Azure Mobile Services Data, it creates a [__deleted] column along with others like [__createdAt]. This is good: if I have to soft delete a record, I set __deleted = true instead of permanently deleting it.
My question is: when we query a Mobile Services table, say from the client side or in server scripts using table.read or mssql.query, do I need to specify __deleted = false explicitly in each read/query, or is there an app-level config/setting in Mobile Services so that records with __deleted = true are not returned by default?
By default, queries going through the standard path (formed via the client or server table.read) will filter out deleted records. (Essentially, a __deleted = false clause is added for you.)
To get deleted records from the client, you can send the __includeDeleted querystring parameter; on the server you can use table.read({ includeDeleted: true, ... }). This disables that default clause.
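For example, in a Node backend table script this would look like the following (assuming the classic Mobile Services read-script shape):

```javascript
// Server-side table script: include soft-deleted rows in this read.
function read(query, user, request) {
  request.execute({ includeDeleted: true });
}
```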