What are the risks involved in using custom decorators as validation pipes in NestJS?

Background
A data structure in my database consists of "sections": lists of custom objects. The number of sections may grow in the future, so to keep my code as DRY as possible, I wanted the section to add/update/delete an item from to be defined dynamically as a parameter.
I quickly realised that doing something like @Body() section: SectionA | SectionB | SectionC... disables validation, so I needed a single Section DTO that could encompass all sections. To do that, I need to decide dynamically which validators to apply, as I have several @IsNotEmpty constraints.
So I came across this post, whose accepted answer recommends using validation groups.
This posed the following challenges:
I now have to write a custom validation pipe. I relied heavily on this.
I wanted to override the global validation pipe I already had running and use my custom one for just that method. Outcome: it didn't work, and I had to start defining the pipe on every controller method, a tradeoff I am willing to accept (see the sketch after this list). There seems to be no simple alternative.
However, I'm now faced with the final problem: how to use the parameters of the request to define these groups in the validator; another brick wall, with no simple solution.
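For reference, a minimal sketch of that method-level binding using Nest's built-in ValidationPipe; the route, the 'sectionA' group name, and the './section.dto' path are placeholders. The limitation shows up immediately: the groups are fixed when the pipe is constructed, so they cannot be derived from the request.
import { Body, Controller, Post, UsePipes, ValidationPipe } from '@nestjs/common';
import { SectionDto } from './section.dto'; // the single DTO covering all sections

@Controller('profile')
export class ProfileController {
  // Bound to this handler only; the global pipe keeps applying elsewhere.
  @Post('section-a')
  @UsePipes(new ValidationPipe({ groups: ['sectionA'] }))
  addSectionAItem(@Body() section: SectionDto) {
    return section;
  }
}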
Solution
This question has been asked here but no satisfactory solution was actually given.
Option one recommended redefining the scope of the pipe to "request" level but didn't explain how, and solutions found online didn't work.
The second solution, using a custom decorator to perform the validation instead, did work, and very well in fact. Here is a simplified version of the code:
import { BadRequestException, createParamDecorator, ExecutionContext } from '@nestjs/common';
import { plainToInstance } from 'class-transformer';
import { validate } from 'class-validator';

export const ProfileSectionData = createParamDecorator(
  async (data: unknown, ctx: ExecutionContext) => {
    const request = ctx.switchToHttp().getRequest();
    // I don't need to access the metatype from the request because I know what type I need,
    // but I'm sure I could if need be.
    const object = plainToInstance(SectionDto, request.body);
    // The validation groups come straight from the route parameter.
    const groups = [request.params.profileSection];
    const validatorOptions = { groups, ...defaultOptions };
    const errors = await validate(object, validatorOptions);
    if (errors.length > 0) {
      throw new BadRequestException();
    }
    return request.body;
  },
);
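For completeness, here is a simplified sketch of how a groups-based DTO and a controller method could sit around this decorator; the field names, the route, and the 'sectionA'/'sectionB' group names are invented for illustration, and ProfileSectionData is the decorator defined above.
import { Controller, Post } from '@nestjs/common';
import { IsNotEmpty } from 'class-validator';

export class SectionDto {
  // Enforced only when validating with the 'sectionA' group.
  @IsNotEmpty({ groups: ['sectionA'] })
  title?: string;

  // Enforced only when validating with the 'sectionB' group.
  @IsNotEmpty({ groups: ['sectionB'] })
  description?: string;
}

@Controller('profile')
export class ProfileController {
  // The :profileSection route param becomes the validation group inside the decorator.
  @Post(':profileSection')
  addItem(@ProfileSectionData() section: SectionDto) {
    return section;
  }
}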
Implications?
Here's my question. When Jay McDoniel recommended using a custom decorator, they warn: "Do note, that this could impact how the ValidationPipe is functioning if that is bound globally, at the class, or method level."
What does this mean?
Are there any vulnerabilities or performance drawbacks associated with this solution?
Obviously, one drawback is that you are performing validation outside a validation pipe, which is not ideal from the point of view of ordering and single responsibility, but I can't think of tangible inconveniences beyond aesthetics and maintainability.
Knowing the background, would you have approached the problem in a completely different way?

Related

Best practice Core Data same view for creating and editing

I'm trying to use Core Data to store persistent data on an iOS device. I've got user flows to create and edit domain objects with a few related and deeply nested objects.
Those user flows are very similar, so I would like to use the same views for both tasks, deciding on appear whether the view was passed an existing domain object or needs to create a new one.
After testing different approaches, nothing seems to fit this context, so I wonder if there is a recommended way to handle this situation.
The following options got tested:
initializing the Core Data object in init() results in a "Modifying state during view update, this will cause undefined behavior." warning
initializing the Core Data object in .onAppear requires the @ObservedObject var domainObjectPassed: DomainObject to be optional, which is not quite what I'm looking for either
Any suggestions?
I already did that: I extracted the shared logic into one view and have two distinct wrapper views that should handle this problem. But I've got the same situation one level higher.
struct CreateView: View {
    @ObservedObject private var domainObject: DomainObject

    init(moc: NSManagedObjectContext) {
        domainObject = DomainObject(context: moc)
        domainObject.id = UUID()
        try? moc.save()
    }

    var body: some View {
        CustomizeView(domainObject: domainObject)
    }
}
-> results in warning from the first option
struct CreateView: View {
    @Environment(\.managedObjectContext) var moc
    @ObservedObject private var domainObject: DomainObject? = nil

    var body: some View {
        CustomizeView(domainObject: domainObject)
            .onAppear {
                ...
            }
    }
}
-> requires domainObject in CustomizeView to be optional, not what I'm looking for
I am using the same concept (one view for editing and creating), but in my case I am comfortable using an optional for the Core Data object. If you can live with an optional, this could work.
In this way, you just need to check whether the object passed in is nil: if it is, create the object; otherwise, use the one passed in.
In a very schematic way, the view could be:
struct EditOrCreate: View {
    var coreDataObject: MyCoreDataEntity

    // Optional object at the initializer
    init(objectPassed: MyCoreDataEntity? = nil) {
        if objectPassed == nil {
            coreDataObject = MyCoreDataEntity(context: thePersistentContainerContext)
        } else {
            coreDataObject = objectPassed!
        }
    }

    var body: some View {
        // All the text fields and save
    }
}
When creating, just call EditOrCreate().
When editing, just pass the object: EditOrCreate(objectPassed: theObjectBeingShown).
You can also make coreDataObject an optional variable (which is what I did in my code), depending on how you want to handle your logic. For example, to allow cancelling the creation before saving, you would check for nil only after the user has confirmed the creation of the new object.
Some programmers believe that "premature optimization is the root of all evil". What I've seen happen when creating generic view code for a small number of cases that haven't been fully thought through is that, after building the generic version, you realise you need to customise one of the cases, so you end up overriding the default behaviour here and there, which gets really messy; you would have been better off with separate versions in the first place.
In SwiftUI, View structs are lightweight and there is no issue with creating lots of them. Try to break your View structs up to be as small as possible; then you can compose them together to form your different use cases.
The time you save could be spent on learning more about how SwiftUI works and on fixing the issues in the code you posted. For example, we don't init objects in the View struct's init, or in body: those need to run fast because they are called quite frequently, and creating objects is comparatively heavy work. Also, any objects created there are discarded as soon as SwiftUI has finished building the View struct hierarchy, diffed it against the previous one, and updated the actual UIKit views on screen. I highly recommend watching every WWDC SwiftUI video; there is a lot to learn and a lot of magic going on under the hood.
Another thing worth learning is Swift generics and protocols. They are powerful ways to build reusable code with value types, instead of the class inheritance we would typically reach for as ObjC/UIKit developers, which tends to be buggy. You can read more about it here: Choosing Between Structures and Classes (Apple Developer)

Build a validation parameter decorator in TypeScript

I read the docs; been there, done that. I still have no clue how to write the decorator I need in a way that makes common sense.
In brief: I've got an interceptor that executes before the validation layer. That simply means that invalid data can get into the interceptor and break the app. To avoid that, I would like to use a decorator on some methods, or, more precisely, on the parameters of those methods.
public async getUserById(@IsIntNumber() userId: number): Promise<UserEntity>
{
    // method logic
}
Here @IsIntNumber() is a custom decorator that validates the userId parameter.
In fact, I'd like to have a little library of my own in the application holding a bunch of these kinds of validation decorators that I could apply to different parameters.
Is there some legal method to do this without shedding too much blood and too many tears?
I know it's a difficult question.
In the docs they sort of say:
The @required decorator adds a metadata entry that marks the parameter
as required. The @validate decorator then wraps the existing greet
method in a function that validates the arguments before invoking the
original method.
Meaning I've got to pack all my validation logic into that validate function or what? Really?
Does it mean that we don't have adequate parameter decorators in TS? Cos if I understand this right, these ones are absolutely, totally unusable.
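For what it's worth, here is a minimal sketch of the pattern the docs describe, assuming reflect-metadata is installed and experimentalDecorators is enabled; the names IsIntNumber and ValidateParams are made up for illustration, and yes, the validation logic ends up living in the method-decorator wrapper:
import 'reflect-metadata';

const INT_PARAMS_KEY = Symbol('intParams');

// Parameter decorator: records which parameter indices must be integers.
function IsIntNumber() {
  return (target: object, propertyKey: string | symbol | undefined, parameterIndex: number): void => {
    if (propertyKey === undefined) return; // constructor parameters are not handled in this sketch
    const indices: number[] = Reflect.getOwnMetadata(INT_PARAMS_KEY, target, propertyKey) ?? [];
    indices.push(parameterIndex);
    Reflect.defineMetadata(INT_PARAMS_KEY, indices, target, propertyKey);
  };
}

// Method decorator: wraps the original method and validates the recorded parameters.
function ValidateParams() {
  return (target: object, propertyKey: string | symbol, descriptor: PropertyDescriptor): void => {
    const original = descriptor.value;
    descriptor.value = function (...args: unknown[]) {
      const indices: number[] = Reflect.getOwnMetadata(INT_PARAMS_KEY, target, propertyKey) ?? [];
      for (const index of indices) {
        if (!Number.isInteger(args[index])) {
          throw new Error(`Argument ${index} of ${String(propertyKey)} must be an integer`);
        }
      }
      return original.apply(this, args);
    };
  };
}

class UserService {
  @ValidateParams()
  getUserById(@IsIntNumber() userId: number) {
    return { id: userId };
  }
}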

Writing ENV variables to configure an npm module

I currently have a project in a loose ES6 module format, and my database connection is hard-coded. I want to turn this into an npm module and am now facing the question of how best to let the end user configure the code. My first attempt was to rewrite it as classes to be instantiated, but that makes using the code more convoluted than before, so I am looking at alternatives. It looks like writing to process.env would be the way, but I am pondering potential issues, no-nos and other options I have not considered.
Is having the user write config to process.env an acceptable way of configuring an npm module? It's effectively a global write, so for one thing I'm dealing with namespace considerations. I have also considered using package.json, but that's not going to work for things like credentials. Likewise, an rc file is cumbersome. I have not found any docs on the proper methodology, if there is one.
process.env['MY_COOL_MODULE_DB'] = ...
There are basically 5ish options as I see it:
hardcode - not an option
create a configured scope such as classes - what I have now, and bleh
use a config package such as node-config - not really a user-friendly option for an npm module
store as globals/env - as suggested in a comment, I can wrap that process in an exported function and thereby ensure a complex, non-colliding namespace while abstracting it from the end user (see the sketch after this list)
ask the user to create some .rc file - I would if I were big time like AWS, but not in this case
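For illustration, that env wrapper could look something like this; MY_COOL_MODULE_DB is the key from above, and configureDb/getDbConnectionString are made-up names:
// config.js - the only place that touches process.env
const ENV_KEY = 'MY_COOL_MODULE_DB';

// Called once by the consumer before using the module.
export function configureDb(connectionString: string): void {
  process.env[ENV_KEY] = connectionString;
}

// Used internally by the module wherever a connection is needed.
export function getDbConnectionString(): string {
  const value = process.env[ENV_KEY];
  if (!value) {
    throw new Error(`${ENV_KEY} is not set; call configureDb() first`);
  }
  return value;
}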
I mention this npm use case, but it really applies to the general challenge of configuring code that is exported as functions. I have use cases for classes, but when the only need is creating a configured scope, at the expense (in my case) of more complex code, I am not sure it's worth it.
Update: I realize this is a bit of a discussion question, but it's helped me wrap my brain around the options. I think something like this:
// options.js
let options = {}
export function setOptions(o) { options = o }
export function getOptions() { return options }
Then have the user call setOptions() and call getOptions() internally. Since Node only evaluates the module once, the options object stays configured as I pass it around.
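To make that concrete, a small sketch of both sides, assuming a hypothetical connectionString option, a db.js file inside the module, and a my-cool-module package that re-exports these functions:
// db.js (inside the module) - reads whatever the consumer configured
import { getOptions } from './options.js'

export function connect() {
  const { connectionString } = getOptions()
  if (!connectionString) {
    throw new Error('call setOptions({ connectionString }) before connect()')
  }
  // ...open the real database connection with connectionString here
  return { connectionString }
}

// consumer code
import { setOptions, connect } from 'my-cool-module'

setOptions({ connectionString: 'postgres://localhost/mydb' })
const db = connect()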
NPM modules should IMO be agnostic as to where configuration is stored. That should be left up to the developer, and they may pick their favorite method (env vars, rc files, JSON files, whatever).
The configuration can be passed to your module in various ways. A common way is to export a function that takes an options object:
export default options => {
  let db = database.connect(options.database);
  ...
}
From there, it really depends on what exactly your module provides. If it's just a bunch of loosely coupled functions, you can just return an object:
export default options => {
  let db = database.connect(options.database);
  return {
    getUsers() { return db.getUsers() }
  }
}
If you want to allow multiple versions of that object to exist simultaneously, you can use classes:
class MyClass {
  constructor(options) {
    ...
  }
  ...
}

export default options => {
  return new MyClass(options)
}
Or export the entire class itself.
If the number of configuration options is limited (say 3 or less), you can also allow them to be passed as separate arguments, instead of passing an object.
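A small sketch of that variant, reusing the database.connect() placeholder from the snippets above:
export default (host, user, password) => {
  let db = database.connect({ host, user, password });
  return {
    getUsers() { return db.getUsers() }
  };
}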

Problems de-serializing System.Security.Claims.Claim

I'm implementing an OAuth server and need to store refresh tokens; to do this I have (at the moment) chosen to serialize the tokens into JSON.
While I can see that the JSON includes everything that would be needed to rehydrate the token, when I de-serialize with token.FromJson() the embedded claims are not being reconstructed correctly.
So far I've considered inheriting from JsonConverter to create a claims converter, but I don't see a way of adjusting the global JsConfig to utilise it :(
Can anyone point me in a good direction?
So...
Walking away from the code and returning did the trick!
Instead of using a JsonConverter, you need to utilise the generic version of JsConfig when changing/overriding ServiceStack's behaviour for a specific class. Just stick the following in your service's start-up code, for example:
JsConfig<Claim>.SerializeFn = claim => string.Format("{0}|{1}", claim.Type, claim.Value);
JsConfig<Claim>.DeSerializeFn = claimDetails =>
{
    var values = claimDetails.Split('|');
    return new Claim(values[0], values[1]);
};

Ignore certain TypeScript compile errors?

I am wondering if there is a way to ignore certain TypeScript errors upon compilation?
I basically have the same issues most people with large projects have around using the this keyword, and I don't want to put all my class methods into the constructor.
So I have got an example like so:
TypeScript Example
This seems to create perfectly valid JS and allows me to get around the this keyword issue; however, as you can see in the example, the TypeScript compiler tells me that I cannot compile that code, as the keyword this is not valid within that scope. I don't see why it is an error when it produces okay code.
So is there a way to tell it to ignore certain errors? I am sure given time there will be a nice way to manage the this keyword, but currently I find it pretty dire.
== Edit ==
(Do not read unless you care about context of this question and partial rant)
Just to add some context to all this to show that I'm not just some nut-job (I am sure a lot of you will still think I am) and that I have some good reasons why I want to be able to allow these errors to go through.
Here are some previous questions I have asked which highlight some major problems (imo) with TypeScript's current this implementation.
Using lawnchair with Typescript
Issue with child scoping of this in Typescript
https://typescript.codeplex.com/discussions/429350 (And some comments I make down the bottom)
The underlying problem I have is that I need to guarantee that all logic runs within a consistent scope: I need to be able to access things within knockout, jQuery etc. as well as the local instance of a class. I used to do this with var self = this; inside the class declaration in JavaScript, and it worked great. As mentioned in some of these previous questions, I cannot do that now, so the only way I can guarantee the scope is to use lambda methods, and the only way I can define one of those as a method within a class is within the constructor. This part is HEAVILY down to personal preference, but I find it horrific that people seem to think that using that syntax is a recommended pattern and not just a workaround.
I know TypeScript is in its alpha phase and a lot will change, and I HOPE so much that we get a nicer way to deal with this. But currently I either make everything a huge mess just to get TypeScript working (and this is across hundreds of files which I'm migrating over to TypeScript), or I make the call that I know better than the compiler in this case (VERY DANGEROUS, I KNOW) so I can keep my code nice, and hopefully when a better pattern comes out for handling this I can migrate to it then.
Also, just as a side note, I know a lot of people love the fact that TypeScript embraces and tries to stay as close as possible to new JavaScript features and known syntax, which is great; but TypeScript is NOT the next version of JavaScript, so I don't see a problem with adding some syntactic sugar to the language, as people who want to use the latest and greatest official JavaScript implementation can still do so.
The author's specific issue with this seems to have been solved, but the question asked is about ignoring errors, so for those who end up here looking for how to ignore errors:
If properly fixing the error or using more decent workarounds like those already suggested here is not an option, then as of TypeScript 2.6 (released on Oct 31, 2017) there is a way to ignore all errors from a specific line using // @ts-ignore comments before the target line.
The mentioned documentation is succinct enough, but to recap:
// @ts-ignore
const s: string = false
disables error reporting for this line.
However, this should only be used as a last resort when fixing the error or using hacks like (x as any) is much more trouble than losing all type checking for a line.
As for specifying certain errors, the current (mid-2018) state is discussed here, in Design Meeting Notes (2/16/2018) and further comments, which is basically
"no conclusion yet"
and strong opposition to introducing this fine tuning.
I think your question as posed is an XY problem. What you're really asking is: how can I ensure that some of my class methods are guaranteed to have a correct this context?
For that problem, I would propose this solution:
class LambdaMethods {
    constructor(private message: string) {
        this.DoSomething = this.DoSomething.bind(this);
    }

    public DoSomething() {
        alert(this.message);
    }
}
This has several benefits.
First, you're being explicit about what's going on. Most programmers are probably not going to understand the subtle semantics of how the member and method syntaxes differ in terms of codegen.
Second, it makes it very clear, from looking at the constructor, which methods are going to have a guaranteed this context. Critically, from a performance perspective, you don't want to write all your methods this way, just the ones that absolutely need it.
Finally, it preserves the OOP semantics of the class. You'll actually be able to use super.DoSomething from a derived class implementation of DoSomething.
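As a quick illustration of that guarantee (the setTimeout call here is just an example): once the method has been bound in the constructor, it can be passed around as a bare function reference without losing its context.
const lambdaMethods = new LambdaMethods('hello');

// Without the bind in the constructor, `this` would be undefined inside
// DoSomething and this call would fail; with it, the alert shows 'hello'.
setTimeout(lambdaMethods.DoSomething, 100);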
I'm sure you're aware of the standard form of defining a function without the arrow notation. There's another TypeScript expression that generates the exact same code but without the compile error:
class LambdaMethods {
    private message: string;

    public DoSomething: () => void;

    constructor(message: string) {
        this.message = message;
        this.DoSomething = () => { alert(this.message); };
    }
}
So why is this legal while the other one isn't? Well, according to the spec, an arrow function expression preserves the this of its enclosing context, so it preserves the meaning of this from the scope in which it was declared. But at class declaration level, this doesn't actually have a meaning.
Here's an example that's wrong for the exact same reason that might be more clear:
class LambdaMethods {
    private message: string;

    constructor(message: string) {
        this.message = message;
    }

    var a = this.message; // can't do this
}
The way that initializer works by being combined with the constructor is an implementation detail that can't be relied upon. It could change.
I am sure given time there will be a nice way to manage the this keyword, but currently I find it pretty dire.
One of the high-level goals (that I love) in TypeScript is to extend the JavaScript language and work with it, not fight it. How this operates is tricky but worth learning.
