Select directories on client and upload info to node - node.js

I need to allow users to select directories locally (similar to the HTML file selector), with the path info uploaded to Node.
I appreciate that sounds pointless, but it's to create a config file on the server that will match the client's requirements. So the user can select, say, 'c:\documents' and 'd:\data\stuff' and have those paths passed as strings via HTTP POST to the Node (Express) process.
How might I achieve this?
EDIT: More info:
The data is for a client app that works with the SaaS I'm working on. So the user (of which there will be many - I hope!) can make all their changes in one place, rather than having to configure settings in the portal and then settings in the client GUI.
When the client app runs, it will pull the config (i.e. the directories it needs to process) from the server.
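One thing worth noting first: for security reasons a browser's file/directory picker won't hand you absolute local paths, so in practice the user would type or paste the paths into an ordinary form field (or a desktop client would send them). Once the strings reach the server, the Express side is small. Below is a minimal sketch; the route name, userId field and per-user config files are placeholders, not anything from the question.

// Server-side sketch, assuming Express 4.16+ for the built-in JSON body parser.
const express = require('express');
const fs = require('fs');
const app = express();

app.use(express.json()); // parse JSON request bodies

// Receive an array of directory path strings chosen/typed by the user in the browser
app.post('/api/config/directories', (req, res) => {
  const { userId, directories } = req.body; // e.g. { userId: 'abc', directories: ['c:\\documents', 'd:\\data\\stuff'] }
  if (!Array.isArray(directories) || !directories.every((d) => typeof d === 'string')) {
    return res.status(400).json({ error: 'directories must be an array of strings' });
  }
  // Persist as the per-user config the client app will later pull from the server
  // (note: validate userId before using it in a file path)
  fs.writeFileSync(`./configs/${userId}.json`, JSON.stringify({ directories }, null, 2));
  res.json({ saved: directories.length });
});

app.listen(3000);

The client app can then fetch that per-user config from a matching GET endpoint when it starts up, which is the workflow described in the edit above.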

Related

How to safely deploy nodejs script

I have a NodeJS file that processes IoT data. It runs on a domain, and I have a different DOCROOT for this domain, pointing to /public.
Ideally I would store my main.js file outside the public folder so the JavaScript source code remains protected. But how would a user be able to interact with the server? Would I create a public/main.js that includes the main.js file from the public folder's parent? Is this correct?
So my guess is that the folder structure would be like this:
/node_modules
/public
/public/main.js
/.env
/.gitignore
/main.js
/package-lock.json
/package.json
/README.md
Is this how one would safely deploy a NodeJS server script? Or should it be done differently? I'm sorry if this is an inappropriate question, but I'm new to NodeJS and would appreciate some guidance.
Node.js is usually server-side and not client-side, meaning you would need a server/host to run the code (just like you need to run "node main.js"). There are many options out there, but as an example you could use heroku.com.
When the scripts are deployed on a server/hosting platform, the users are not able to read the source files (as long as you don't make them available within the code).
In order for the users to be able to interact with the data/server you would need to create an endpoint that allows the users to connect to the server. You could, for example, use the npm library called "express". This would allow you to create different endpoints on the server that the users can access.
Not sure how much this helped or if I understood this correctly, but let me know :)
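To make the folder layout above concrete, here is a minimal sketch assuming Express: main.js stays at the project root (outside /public), serves only the contents of /public as static files, and exposes endpoints for everything else. The /api/data route and its response are placeholders.

// main.js at the project root - never reachable from the browser
const path = require('path');
const express = require('express');
const app = express();

// Only the contents of /public are served as static files
app.use(express.static(path.join(__dirname, 'public')));

// Users interact with the server through endpoints like this one
app.get('/api/data', (req, res) => {
  res.json({ ok: true }); // placeholder response
});

app.listen(process.env.PORT || 3000);

With this layout there is no need for a public/main.js that includes the outer one: the entry point stays outside the docroot, and clients only ever receive the static files in /public plus whatever the endpoints return.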

Recommended practices when developing full-stack applications based on Node.js and AWS

I've been working on the front-end so far; now I'm going to create my first full-stack application. I want to use Node.js, Express and AWS for this.
At the design stage, I already encountered a few problems. Therefore, I have a few questions and I am asking you for help:
Can I send a message (simple JSON or database value) from the server to all clients who have already opened my home page in a simple and cheap way?
I'm not talking about logged in users, but all who downloaded the main page (GET, '/')?
Using the admin panel ('www.xxxxxxxxx/admin'), I want to send a message to the server once a day. Then I want to change the HTML to display this message. I was thinking to use EJS for this and download this message from the database.
Can I make it better? If someone visits my home page (GET, '/'), EJS will download the message from the database each time! Even though its value is the same for 24 hours. Can I get the value once and then use it until the value is changed? How to store the message? As a JSON on the server? Or maybe in the .env file?
If the user refreshes the page, do I have to pay for calling all AWS functions to build the page each time? Even if nothing has changed in the files?
How to check if the page has new content and then send it to the user, instead of sending the unchanged page files: .html, .js, .css, etc.?
Can I send the user only the changed, dynamically created html file, and not send again unchanged .js and .css files?
Does every user who opens the home page (GET, '/') create a new connection to the server using WebSocket / socket.io?
I will try to answer some of your questions:
Can I send a message (simple JSON or database value) from the server to all clients who have already opened my home page in a simple and cheap way? I'm not talking about logged in users, but all who downloaded the main page (GET, '/')?
I guess you mean sending push notifications from the server to the user. This can be done with different services depending on what you are trying to build.
If you are planning to use GraphQL, you already have GraphQL subscriptions out of the box. If you are using AWS, go for AppSync, which is the AWS service for GraphQL.
If you are using REST and a WebApp (not a mobile app), go for AWS IoT using lambdas. Here is a good resource using Serverless Framework (API Gateway + lambdas + IoT) for unauthenticated users: https://www.serverless.com/blog/serverless-notifications-on-aws
If you are planning to use notifications on a mobile app, you can go for SNS, the "de facto" service for push notifications in AWS world.
Using the admin panel ('www.xxxxxxxxx/admin'), I want to send a message to the server once a day. Then I want to change the HTML to display this message. I was thinking to use EJS for this and download this message from the database. Can I make it better? If someone visits my home page (GET, '/'), EJS will download the message from the database each time! Even though its value is the same for 24 hours. Can I get the value once and then use it until the value is changed? How to store the message? As a JSON on the server? Or maybe in the .env file?
Yes, this is the way it's expected to work. The HTML is changed dynamically using frontend code in JavaScript, which makes calls (using Axios, for example) to the backend every time you enter, e.g., the "/" path. You can store this data in frontend variables, or even use state management in the frontend with Redux, Vuex, etc. Remember the frontend code will always run in the browser of your users, not on your servers!
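To make "store this data in frontend variables" concrete, here is a small sketch using the browser's fetch() rather than Axios; the /api/daily-message endpoint and the #daily-message element are made up for the example.

// Frontend sketch: fetch the daily message once and reuse it for the rest of the session
let cachedMessage = null;
let fetchedAt = 0;
const ONE_DAY_MS = 24 * 60 * 60 * 1000;

async function getDailyMessage() {
  // Only hit the backend if nothing is cached yet or the cached value is older than a day
  if (cachedMessage === null || Date.now() - fetchedAt > ONE_DAY_MS) {
    const res = await fetch('/api/daily-message');
    cachedMessage = (await res.json()).message;
    fetchedAt = Date.now();
  }
  return cachedMessage;
}

getDailyMessage().then((msg) => {
  document.querySelector('#daily-message').textContent = msg;
});

The same trick works on the server: the route handler can keep the value in a module-level variable and only query the database when the cached copy is older than 24 hours, so a page refresh doesn't force a database read.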
If the user refreshes the page, do I have to pay for calling all AWS functions to build the page each time? Even if nothing has changed in the files?
What you can do is store all your HTML, CSS and JavaScript in an S3 bucket and serve from there (this is super cheap, even free up to a certain limit). If you want to use Server Side Rendering (SSR), then yes, you'll need to serve your users every time they make a GET request, for example. If you use Lambda, the first million requests per month are free. If you have an EC2 instance to serve your content, then a t2.micro is also free. If you need more than that, you'll need to pay.
How to check if the page has new content and then send it to the user, instead of sending the unchanged page files: .html, .js, .css, etc.?
I think you need to understand how JS (or frameworks like React, Vue or Angular) does this. Basically you download the JS code on the client, and that JS contains all the functionality to update the backend and frontend accordingly. To connect frontend with backend, use Axios, for example.
Can I send the user only the changed, dynamically created html file, and not send again unchanged .js and .css files?
See the answer above. Using frameworks like React or Vue will help you a lot.
Does every user who opens the home page (GET, '/') create a new connection to the server using WebSocket / socket.io?
Depends on what you code. But by default the user will just make a new GET request every time they access your domain, and that's it. (No connection is kept open unless you tell the code to establish one.)
Hope this helps!! Happy coding!

custom formats to hide threejs software backend working

To render in three.js, we need some images (jpg/png) and JSONs (UV data). All these files are stored in respective folders, and the files are visible for clients to look at.
I use Django/Python to start a local server; the Python code is compiled to .pyc and the JS code is obfuscated. But the folder structure is accessible to casual users. In three.js, we use tex_loader and json_loader functions to which the file paths are given as inputs. I was looking at ways of securing the behind-the-scenes work.
I happened to read about custom binary formats, but that felt like a lot of work.
Or could access to the files be granted only to certain processes started through Django / the web browser?
Are there any available, easy-to-deploy solutions to protect our IP?
An option would be to only serve the files to authenticated users. This could be achieved by having an endpoint on your backend like:
api/assets/data.json
and the controller in the backend would receive the file name (data.json). The code could check whether the user requesting the endpoint is authenticated and, if so, read the file from the file system (my-private-folder/assets/data.json) and return it to the browser as a file with the correct MIME type.
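A minimal sketch of that pattern, written with Express here since most of this page is Node (in Django a view would do the same thing); the folder, route and auth check below are placeholders.

// Assets live outside the web root and are only streamed to authenticated users
const path = require('path');
const express = require('express');
const app = express();

const PRIVATE_ASSETS = path.join(__dirname, 'my-private-folder', 'assets');

// Hypothetical auth check - replace with your real session or token logic
function requireAuth(req, res, next) {
  if (req.session && req.session.userId) return next();
  res.status(401).send('Not authenticated');
}

app.get('/api/assets/:name', requireAuth, (req, res) => {
  // basename() strips any "../" so the request cannot escape the assets folder
  const safeName = path.basename(req.params.name);
  res.sendFile(path.join(PRIVATE_ASSETS, safeName)); // sendFile infers the MIME type from the extension
});

app.listen(3000);

The three.js loaders would then be pointed at /api/assets/... URLs instead of direct file paths, so unauthenticated visitors browsing the folder structure get nothing back.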

Is there a way to "escape" linux commands?

I'm making a service that will allow users to post files to my web server, which will then copy that file (after a few checks) to the image server. The main way of communicating between my web server and my image server will be scp. However, I also want to maintain user filenames, so it would look like this:
User posts their file to web server
Web server checks if the file is supported
Web server checks if file is under file size limit
Web server says OK and tries to send to image server
Web server runs ("scp " + filepath + " root@imageServer:~/images")
Image server receives the file and is ready to send the file to user on request (the folder is public and will be served by nginx)
The dangerous part here is the scp command. I'm not an expert on security, but is there a way this command could get hijacked, the same way a database can suffer SQL injection? What if somebody gave their file a malicious name? Is there a way to safely join the filename into the command? To safely "escape" the command?
I'm using Express (Node.js) for the web server. Is there another way to send files from the web server to a simple Ubuntu install without Unix commands or writing up a REST API for the image server? If there is, then I might not need to "escape" at all.
Btw, the reason why I'm choosing to have the image server and the web server separate is because I want to scale the application in the future. For example, if there were 10 web servers and no central image server, then it would be impossible to retrieve files if the file isn't on the web server you request from.
You can run an external command without a shell (and therefore without issues with shell metacharacters) using child_process.spawn (or other methods in child_process). (Obviously, you must not specify the shell option as anything other than the default false.)
That lets you not worry about metacharacters in the filepath, but it seems to me that there are plenty of other issues with letting the user provide a filepath name to be used as such on a live filesystem. Personally, I'd autogenerate safe, short names and keep the correspondence from user name to filesystem name in a database somewhere.
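A minimal sketch of both suggestions together, assuming the upload has already been written to a local temporary path; the host and folder come from the question's example, and the database lookup for the name mapping is left out.

// Run scp via spawn (no shell), so the local path is passed as a literal argument,
// and use an autogenerated remote name instead of the user-supplied filename.
const { spawn } = require('child_process');
const crypto = require('crypto');
const path = require('path');

function copyToImageServer(localPath, originalName) {
  // Autogenerate a safe remote name; store the originalName -> safeName mapping in a database
  const ext = path.extname(originalName).replace(/[^.a-zA-Z0-9]/g, ''); // keep only a harmless extension
  const safeName = crypto.randomBytes(16).toString('hex') + ext;

  return new Promise((resolve, reject) => {
    // No shell is involved, so metacharacters in localPath are not interpreted locally
    const scp = spawn('scp', [localPath, 'root@imageServer:~/images/' + safeName]);
    scp.on('error', reject); // e.g. the scp binary is missing
    scp.on('close', (code) =>
      code === 0 ? resolve(safeName) : reject(new Error('scp exited with code ' + code))
    );
  });
}

Note that the remote part of an scp target is still expanded on the remote side, which is one more reason to generate the remote name yourself instead of forwarding the user's filename.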

How can I perform HTTP request caching in a node.js CLI application that persists once the application has exited?

I'm writing a node.js CLI application that uses the GitHub API. Instead of making several HTTP requests to the GitHub API every time the app is invoked, I'd like to cache responses and use the ETag and Last-Modified headers to determine if I need to make a request.
How can I persist these requests on a user's machine without resorting to reading from and writing to flat files in the user's home directory?
If you're expecting it to persist when the app isn't running, then storing to a file is your simplest option. You could do it with a Redis cache or a local SQL database, but then your user has to set that up too.
You could use something like memory-cache to have it cache the response while the app is running though, and just rely on a single new HTTP request each time the app starts up.
Node.js has a variable called __dirname which points to the directory the current running script resides in. If a package is installed via NPM, this corresponds to the global node_modules folder, which scripts have read/write access to. I ended up using the trivialdb package to store a JSON file in this directory.
One caveat: the folder will be overwritten if the user updates the package, so if you want to store data that persists through multiple installs of your package, storing in the user's home directory is probably the best bet.
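If you do end up using the home directory (as the last paragraph suggests), the whole thing is only a few lines. A sketch, assuming Node 18+ for the global fetch(); the cache file name and User-Agent string are made up.

// Persist ETags (and cached bodies) in a JSON file in the user's home directory,
// and send conditional requests so GitHub answers 304 when nothing has changed.
const fs = require('fs');
const os = require('os');
const path = require('path');

const CACHE_FILE = path.join(os.homedir(), '.my-cli-cache.json');

function loadCache() {
  try { return JSON.parse(fs.readFileSync(CACHE_FILE, 'utf8')); } catch { return {}; }
}

async function cachedGet(url) {
  const cache = loadCache();
  const entry = cache[url];
  const headers = { 'User-Agent': 'my-cli' }; // the GitHub API requires a User-Agent header
  if (entry) headers['If-None-Match'] = entry.etag;

  const res = await fetch(url, { headers });
  if (res.status === 304) return entry.body; // unchanged since the last run, no body transferred

  const body = await res.json();
  cache[url] = { etag: res.headers.get('etag'), body };
  fs.writeFileSync(CACHE_FILE, JSON.stringify(cache));
  return body;
}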
