In particular, I'm trying to access the doc_count (of the requested db), but I'd also like to know how to access as much of the rest of the server as possible.
You can get the database's document count from the req.info.doc_count field. See the Request object structure for more info. To easily inspect your own requests you can use a dummy update function:
function(doc, req){
  return [null, {"json": req}];
}
However, this is the only server data that you can access from the update functions.
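For example, a minimal update function that returns just the document count of the requested database might look like the following sketch, based on the req.info field described above:

function(doc, req) {
  // req.info is the info object of the requested database; doc_count is its document count
  var count = req.info.doc_count;
  return [null, {"json": {"doc_count": count}}];
}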
I have a validate endpoint which takes in a JWT that's POSTed to it, and that works fine. It's currently set up like this in the application setup:
let server = HttpServer::new(|| {
    App::new()
        .wrap(Logger::default())
        .route("/ping", web::get().to(health_check))
        .route("/validate", web::post().to(validate))
})
I'm now looking to provide some JWKs, which I've done via a provider-style setup, where the calling code can just call get_key and the provider should handle caching and refreshing that cache automatically every X minutes so that I don't have to call the endpoint that provides the JWKs on each request. However, obviously that will only work if I can maintain the same instance of the provider object.
What would be the best way of doing this? Could I create an instance of the provider at the same level as the server creation code and pass in the results of provider.get_key through the app_data method that actix provides? Or perhaps do the same via middleware somehow?
I've tried passing the entire provider instance through the app_data method but can't get this to work (I think because my struct can't implement Copy, since it contains a Vec), so I'm trying to find alternative ways of doing it!
I am wondering about the security of apps script libraries. If a user imports a library, is there any way for them to retrieve the code within the library?
I ask because I am writing a library that connects many sheets to a single sheet which acts like a database. Users of the many sheets should not be able to find the database sheet.
I have tested console logging the functions, and they just return [Function] and not the actual function definition. However I still don't know if this is a safe implementation or not. Would love to hear your thoughts.
For other users to use your library, you have to give them access by sharing the script.
Authorized users can view the function code either by printing the function or by opening the script directly via its link:
https://script.google.com/d/(Script ID Here)/edit
In your post, you want to hide any data that would lead users to the database sheet.
I suggest creating a temporary function in your library script that sets a property containing the sheet ID. This can be done using the Properties Service, which lets you store strings as key-value pairs scoped to a single script.
Example:
function setProperty(){
  PropertiesService.getScriptProperties().setProperty("Sheet_ID", "123456");
}
Usage:
function myFunction() {
  var databaseID = PropertiesService.getScriptProperties().getProperty("Sheet_ID");
  // open the hidden database spreadsheet by its stored ID
  var database = SpreadsheetApp.openById(databaseID);
}
Note: Before deploying your library script, run the setProperty() function and then delete it in the script editor. This prevents users from viewing the source code of the setProperty() function. Also, make sure that the users you authorize to access your library have the Viewer role only, to prevent them from editing your script and printing the property value.
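As a purely hypothetical illustration of how this keeps the database hidden, a consumer script that has added the library (here under the made-up identifier DbLib, with a made-up saveRecord function) only ever calls the library's public API and never sees the stored ID:

function onFormSubmit(e) {
  // The consumer never sees the database spreadsheet ID; it lives in the
  // library's script properties and is only read inside the library code.
  DbLib.saveRecord({ name: e.values[0], email: e.values[1] });
}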
Reference:
Properties Service
Apollo Server 2.0 has the ability to receive file uploads as described in this blog post.
However, all the tutorials and blog posts I found only showed how to upload a file. Nobody demonstrated how to actually retrieve the file back to display it onscreen.
Does anybody know how to properly query the file contents for display onscreen?
Or is there perhaps no way of querying a file, so that you have to build a separate REST endpoint to retrieve the contents?
Some thoughts:
I imagine the query to be something like
query {
  fetchImage(id: "someid")
}
with the respective server-side definition
type Query {
  fetchImage(id: ID!): Upload # maybe also a custom type, but how do I include the actual file contents?
}
Hint: Upload is a scalar type that apollo-server automatically adds to your type definitions. It is used for the upload, so I imagine it could also be usable for the download/query. Please read the blog post mentioned above for more information.
The response from a GraphQL service is always serialized as a JSON object. Technically, a format other than JSON could be used in serialization but in practice only JSON is used because it meets the serialization requirements in the spec. So, the only way to send a file through GraphQL would be to convert the file into some format that's JSON-compatible. For example, you could convert a Buffer to a byte array and send that as an array of integers. You would also have to send the appropriate mime type. It would be up to the client to convert the byte array back into a usable format on receiving the response.
If you go this route, you'd have to use your own scalar or object type -- the Upload scalar does not support serialization so it will throw if you try to use it as an output type (and it's not really suitable for this sort of thing anyway).
However, while doing this is technically possible, it's also inadvisable. Serializing a larger file could cause you to run out of memory since there's no way to stream data through GraphQL (the entire response has to be in memory before it can be sent). It's much better to serve the file statically (ideally using nginx instead of Node). If your API needs to refer to the file, it can then just return the file's path.
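To make the byte-array idea concrete, here is a minimal sketch of what it could look like with apollo-server; the FileContents type, the fetchImage lookup, and the ./uploads path are illustrative assumptions, not part of the original post:

const { gql } = require('apollo-server');
const fs = require('fs');

const typeDefs = gql`
  type FileContents {
    mimeType: String!
    data: [Int!]!   # the file's bytes as an array of integers
  }

  type Query {
    fetchImage(id: ID!): FileContents
  }
`;

const resolvers = {
  Query: {
    fetchImage: (_, { id }) => {
      // Look up the stored file however you persist uploads; here we simply
      // read it from disk for illustration.
      const buffer = fs.readFileSync(`./uploads/${id}.jpg`);
      return { mimeType: 'image/jpeg', data: Array.from(buffer) };
    },
  },
};

Again, as noted above, for anything beyond small files it is better to serve the file statically and have the API return just its path or URL.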
You can do this by using Express with Apollo Server:
apollo-server-express
Install the above package and instantiate an Express app with Apollo Server as explained in the package docs.
Then set up the static folder using Express like this:
app.use("/uploads", express.static("uploads")); //Server Static files over Http
Here uploads is my static folder, and /uploads will serve GET requests from that path.
Now I can access static files like this:
http://localhost:4000/uploads/test.jpg
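For context, a minimal sketch of that wiring could look like the following; the port, folder name, and placeholder schema are just examples:

const express = require('express');
const { ApolloServer, gql } = require('apollo-server-express');

// Minimal placeholder schema so the sketch is self-contained
const typeDefs = gql`
  type Query {
    ping: String
  }
`;
const resolvers = { Query: { ping: () => 'pong' } };

const server = new ApolloServer({ typeDefs, resolvers });
const app = express();

// Serve uploaded files statically from the local "uploads" directory
app.use('/uploads', express.static('uploads'));

server.applyMiddleware({ app });

app.listen({ port: 4000 }, () =>
  console.log(`Server ready at http://localhost:4000${server.graphqlPath}`)
);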
As per the title, can a Node.js package require a database connection?
For example, I have written a specific piece of middleware functionality that I plan to publish via NPM; however, it requires a connection to a NoSQL database. The functionality in its current state uses Mongoose to save data in a specific format and returns a boolean value.
Is this considered bad practice?
It is not bad practice as long as you require the database you need and also state the dependency clearly in your README.md file. It only becomes bad practice when you provide no comments in your code, or no README.md, to guide anyone else going through it.
Example:
// require your NoSQL database, e.g. MongoDB
const mongoose = require('mongoose');
// connect to the database; "boy" is the database name
mongoose.connect('mongodb://localhost/boy', function(err) {
  if (err) {
    console.log(err);
  } else {
    console.log("Success");
  }
});
You generally have two choices when your module needs a database and wants to remain as independently useful as possible:
You can load a preferred database in your code and use it.
You can provide the developer using your module a means of passing in a database that meets your specification to be used by your module. The usual way would be for the developer using your module to pass the database to your module in a constructor function.
In the first case, you may need to allow the developer to specify a disk store path to be used. In the second case, you have to be very specific in your documentation about what kind of database interface is required.
There's also a hybrid option where you offer the developer the option of configuring and passing you a database, but if not provided, then you load your own.
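A rough sketch of that hybrid option might look like this; the createMiddleware name, the options.db parameter, and the connection string are illustrative, not from the original question:

const mongoose = require('mongoose');

// Hybrid option: use an injected database if the caller provides one,
// otherwise fall back to the module's own Mongoose connection.
function createMiddleware(options = {}) {
  const db = options.db || mongoose.createConnection('mongodb://localhost/mydb');

  return function middleware(req, res, next) {
    // ... use `db` here to save data in the module's format, then continue
    next();
  };
}

module.exports = createMiddleware;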
The functionality in its current state uses Mongoose to save data in a specific format and returns a boolean value. Is this considered bad practice?
No, it's not a bad practice. This would be an implementation of option number 1 above. As long as your customers (the developers using your module) don't mind you loading and using Mongoose, then this is perfectly fine.
I'm new to Angular and developing my first 'real' application. I'm trying to build a calendar/scheduling app (the source code can all be seen on GitHub) and I want to be able to change the content if there is a user logged in (i.e. display details relevant to them), but here's the catch:
I don't want the app to be dependent on having a logged-in user (it needs to be something that can be configured to work publicly, privately or both)
I don't want to implement the user/login within this app if it can be avoided (I want to eventually include my app in another app where this might be implemented, but isn't necessarily implemented using any particular security frameworks or limited to any)
I had an idea of creating some global user variable that could be referenced throughout my application, or, if I had to implement a system to do it all in this app, doing so in some abstract way so that different options could be injected in.
Some of my ideas or understanding of what I should be doing may be completely wrong and ignorant of fundamentals, but I genuinely do not know what approach I should take to do this.
In case it is relevant: I currently don't have any back-end, but I eventually hope to use MongoDB for storage and Node.js for services. I also want to keep it open-ended so that others can use different storage/backends such as SQL and PHP.
Is there a way to have a global user variable/service that I could inject/populate from another (parent?) app?
If so, what would be the best approach to do so?
If not, why, and what approach should I take instead?
Update
I believe, from comments online and some suggestions made to me, that a service would be the best option, BUT how would I go about injecting from a parent application into this application's service?
If your (single) page is rendered dynamically by the server and the server knows whether you are logged in or not, then you could do the following:
Dynamically render a script tag that produces:
<script>
window.user = { id: 1234, name: 'User A', isLoggedIn: true };
</script>
For non logged-in users:
<script>
window.user = { isLoggedIn: false };
</script>
For convenience, copy user to a value inside Angular's IoC container:
angular.module('myApp').value('user', window.user);
Then, you can use it in DI:
angular.module('myApp').factory('myService', function(user) {
  return {
    doSomething: function() {
      if (user.isLoggedIn) {
        ...
      } else {
        ...
      }
    }
  };
});
Something tricky (which you should think twice before doing [SEE COMMENTS]) is extending the $scope:
angular.module('myApp').config(function($provide) {
  $provide.decorator('$controller', function($delegate, user) {
    return function(constructor, locals) {
      locals.$scope._user = user;
      return $delegate(constructor, locals);
    };
  });
});
This piece of code decorates the $controller service (responsible for constructing controllers) and basically says that $scope objects, prior to being passed to controllers, will be enhanced with the _user property.
Having it automatically on the $scope means that you can use it directly in any view, anywhere:
<div ng-if="_user.isLoggedIn">Content only for logged-in users</div>
This is somewhat risky, since you may end up running into naming conflicts with the original $scope API or with properties that you add in your controllers.
It goes without saying that all of this runs solely in the client and can easily be tampered with. Your server-side code should always check the user and return the correct data subset or accept only the right actions.
Yes, you can do it with $rootScope. However, I believe it's better practice to put it inside a service. Services are singletons, meaning they maintain the same state throughout the application, and as such are perfect for storing things like a user object. Using a "user" service instead of $rootScope is just better organization in my opinion. Although technically you can achieve the same results, generally speaking you don't want to over-populate your $rootScope with functionality.
You can have a global user object inside the $rootScope and have it injected into all your controllers by simply putting it into the arguments of the controller, just as you do with $scope. Then you can gate functionality behind a simple check: if ($rootScope.user). This allows you to model the user object any way you want, and wherever you want, acting as a global variable inside Angular's domain while following good DI practice.
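A minimal sketch of that $rootScope approach (assuming a window.user object like the one from the earlier answer, or anything similar injected by a parent app) could look like:

angular.module('myApp').run(function($rootScope) {
  // Populate the global user once at application startup
  $rootScope.user = window.user;
});

angular.module('myApp').controller('myCtrl', function($scope, $rootScope) {
  if ($rootScope.user && $rootScope.user.isLoggedIn) {
    // show content relevant to the logged-in user
  }
});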
Just to add to my comment and your edit, here is what the code would look like if you wanted to be able to re-use your user service and insert it into other apps.
angular.module('user', []).service('userService', [function() {
  // declare your user properties and methods
}]);

angular.module('myApp', ['user'])
  .controller('myCtrl', ['userService', '$scope', function(userService, $scope) {
    // you can access userService from here
  }]);
Not sure if that's what you wanted, but likewise you could have your "user" module depend on another "parent" module and access that module's data the same way.
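For instance, a hypothetical sketch of that direction (the module and value names are made up for illustration) might be:

// A "parent" app module exposes its own user data as an injectable value
angular.module('parentApp', []).value('parentUser', { name: 'User A', isLoggedIn: true });

// The reusable "user" module depends on the parent module and wraps its data
angular.module('user', ['parentApp']).service('userService', ['parentUser', function(parentUser) {
  this.current = parentUser;
  this.isLoggedIn = function() { return !!parentUser.isLoggedIn; };
}]);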