If I remove the node.js log file, how do I create a new one?

I removed the node.js log file like this:
cd /var/log
rm node.log
Now I have created a new file named "node.log", but node.js does not write to it.
What should I do? Thanks!!

Node will continue to write to the old log file: it has been unlinked but not yet deleted, because the process still has it open.
The easiest way to fix this is to restart node so it opens the new file.
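If you control how the app opens its log (for example with fs.createWriteStream rather than shell redirection of stdout), an alternative to restarting is to reopen the stream on a signal. A minimal sketch, assuming the /var/log/node.log path from the question:
var fs = require('fs');
var logPath = '/var/log/node.log';
var log = fs.createWriteStream(logPath, { flags: 'a' });

// On SIGUSR2, close the old (possibly unlinked) file and open a fresh one,
// which creates it if it does not exist.
process.on('SIGUSR2', function () {
    log.end();
    log = fs.createWriteStream(logPath, { flags: 'a' });
});

// elsewhere in the app: log.write('message\n');
After removing the old file you would then send the signal to the node process, e.g. kill -USR2 <pid>.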

Related

How to deploy a Heroku application and ignore a file?

I am building a web application for an online "build your own" card game. In the application, I have a cards.json file that holds custom card data. This file is changed with fs whenever a user creates a card. Whenever I push local changes, the cards.json file gets overwritten on deploy, so all the remote data is lost on every deploy. How can I keep a cards.json file remotely but not change it whenever I push changes using git push heroku master?
EDIT: For clarification, I have tried using a .gitignore as well as removing the file from the staging area. I'm not entirely sure, but I think the issue is that the file is overwritten when the application is deployed.
So I just found out that the data created during runtime will always be deleted/reset.
https://devcenter.heroku.com/articles/dynos#ephemeral-filesystem
I guess the best fixes for anyone else who has this same issue are:
a) Look into databases and Heroku add-ons (see the sketch at the end of this answer), or
b) This is very much a workaround, and there might be better ways to do it, but:
# In a new directory, run
$ heroku ps:copy <FILENAME> --app <APPNAME>
# Then copy and paste the data from this file into your main repo.
# Each time you do this, make sure you delete that file from the extra directory
# you created, as ps:copy only works when the file doesn't exist locally.
I think git fetch doesn't work in this instance, as it only pulls that unchanged file, rather than the changed one from the dyno.
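For option a), a rough sketch of what the database route could look like using the Heroku Postgres add-on and the pg package. It assumes the add-on is attached (which sets DATABASE_URL) and that a cards table with a JSON data column already exists; the names are illustrative, not from the original question:
const { Pool } = require('pg');
const pool = new Pool({ connectionString: process.env.DATABASE_URL });

// Persist each card in the database instead of rewriting cards.json with fs,
// so the data survives deploys and dyno restarts.
async function saveCard(card) {
    await pool.query('INSERT INTO cards (data) VALUES ($1)', [JSON.stringify(card)]);
}

async function loadCards() {
    const { rows } = await pool.query('SELECT data FROM cards');
    return rows.map(function (row) { return row.data; });
}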
Look up the .gitignore file in git; it seems to me that's exactly what you're looking for.
If it doesn't recognize .gitignore properly at first:
git add [uncommitted changes you want to keep] && git commit
git rm -r --cached .
git add .
git commit -m "fixed untracked files"
In .gitignore, add cards.json along with its path,
e.g. src/test/resources/testdata/cards.json

Node-RED docker problem reading directory contents

I have a Node-RED app running in a docker container, with the aim to periodically read contents of a directory where .csv files are constantly updated and new .csv files are sometimes added. The point is to read new entries periodically, parse data, and send it onward.
I have not utilized the numerous 'contrib' nodes, as I have enabled the NodeJS 'fs' module and played with it. Additionally, the built-in 'file' and 'file in' Node-RED nodes are useful when reading the .csv files' contents, so that is not an issue.
The problem comes with new .csv files being added to the directory where all the .csv files are. I want to be able to read all the file names and subsequently read all the .csv files.
I have mounted the .csv file directory into the docker container, and when testing whether I'm able to read the file names, weird things happen. Even though the files are visible in the container (viewed using docker exec -it CONTAINER /bin/bash), a piece of code containing fs.readdir does not list the files. When I use fs.readdir to see the contents of the /data directory, which is mounted into the container, it lists the contents only about 10% of the time (injecting a timestamp into the node to run it).
As you can see from the image, the contents of the directory in question are not listed on every execution of the node. The contents of the mounted directory containing the .csv files are never listed when running this node with the correct path as the parameter.
The operating system is CentOS 7, where I am not a sudoer. I have managed to make it so that none of the mounted files or directories are owned by root; they are owned by the user node-red within the container. I managed to get this directory listing to work on my Ubuntu machine, where I am a sudoer, but as none of the files are root-owned there either, I am not sure if that is the problem. I have a feeling this might be an operating-system-related thing.
Notes:
All relevant files and directories have permissions rwxr-xr-x
I have tried mounting the directory containing the .csv files under the /data directory, and as its own directory directly under root as /files
I am able to read the file contents with the Node-RED file nodes, just not the directories. Reading static file names is not enough as the directory contents keep changing
I have enabled NodeJS 'fs' module from the settings.js file which is mounted into the container
The Node-RED node (in image) does not output any errors (I tried this by adding an error return to the function in the image)
I have tried to run the Node-RED container as root user and without defining the user
I am running the Node-RED container using docker-compose
I hope this was not too much text or too unclear, I just wanted to make sure at least most of the stuff I have tried would be written here. If someone has some insight on the workings of Node-RED under docker and using the NodeJS fs module, it would be most appreciated :)
The core Watch node should do all of this for you, no need to write function nodes.
If you want to walk subdirectories, make sure you tick the right box in the config.
From the Sidebar docs for the watch node:
The full filename of the file that actually changed is put into msg.payload and msg.filename, while a stringified version of the watch list is returned in msg.topic.
msg.file contains just the short filename of the file that changed.
msg.type has the type of thing changed, usually file or directory, while msg.size holds the file size in bytes.
To answer my own question of why Node-RED was unable to read directory contents most of the time: it was because I was using the asynchronous fs.readdir function. When I switched to the synchronous version, fs.readdirSync, Node-RED was able to read the directory contents without problems.
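For reference, a minimal sketch of that fix inside a function node. It assumes fs is exposed to function nodes via functionGlobalContext in settings.js (for example functionGlobalContext: { fs: require('fs') }) and that the mounted directory is /data as in the question:
// List the mounted directory synchronously so msg.payload is set before the node returns.
var fs = global.get('fs');
msg.payload = fs.readdirSync('/data').filter(function (name) {
    return name.endsWith('.csv');
});
return msg;
With the asynchronous fs.readdir, the callback usually fires after return msg has already run, so the payload looks empty unless node.send() is called from inside the callback.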

Saving a file in a Snap (Snapcraft) with NodeJS

I am having an issue with creating a new file while my snap is running; example:
1) Snap starts and checks for the config.json file at ./config/config.json
2) If that file is not found (it never is the first time the application runs), it creates it with fs.writeFile('./config/config.json', 'My Data', 'utf8', (err) => {....})
3) I then look for that file later to use it.
I am able to run my node app and all works as expected when using node index.js
I am also able to run using snap try prime/ --devmode and all works.
When running snap try prime/ I get this error in the syslog
Error: ENOENT: no such file or directory, open './config/config.json'
It is erroring at the point of creation.
Any help with this would be awesome!! Thanks in advance.
I was able to solve this by NOT creating and checking for the config files in NodeJS and moving all of that logic to an install hook (https://docs.snapcraft.io/build-snaps/hooks).
So now my install hook checks for the config file and creates it if it's not there; then I let NodeJS write to that file later, so I can still make all the HTTP requests in NodeJS and not in Bash. Below is my install hook; don't forget to make it executable.
This file is located at snap/hooks/install
#!/bin/sh
set -e

CONFIG_FILE="$SNAP_COMMON/config.json"

if [ ! -f "$CONFIG_FILE" ]; then
    # File not found, create it
    echo '{}' > "$CONFIG_FILE"
fi
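On the Node side, the app can then read and update that file through the SNAP_COMMON environment variable instead of the read-only ./config/ path. A minimal sketch, assuming the install hook above has already created the file:
var fs = require('fs');
var path = require('path');
var configFile = path.join(process.env.SNAP_COMMON, 'config.json');

// Read the config created by the install hook...
var config = JSON.parse(fs.readFileSync(configFile, 'utf8'));

// ...and write any updates back to the same writable location.
config.lastRun = new Date().toISOString();
fs.writeFileSync(configFile, JSON.stringify(config, null, 2), 'utf8');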
Hope this helps someone!

Openshift data dir access in NodeJS

I've been trying to make my user-uploaded data on OpenShift accessible publicly. However, I run into the issue that I can't seem to make it work in any way.
I'm using NodeJS to upload the files to process.env.OPENSHIFT_DATA_DIR via express4 and fs.
The files upload just fine. However, I've read plenty of messages saying that I should link the folders together using "ln -sf ../route/to/app-root/data/folder linked_folder", which I've done, but I still cannot access them publicly.
I honestly don't know what else I should do. Do the files automatically sync? Because that doesn't seem to be the case. Or should I be uploading to my repo folder, with OpenShift then automatically linking it to the data dir folder?
My current exact setup when doing "ln" is:
cd app-root/repo/public/
ln -sf ../../data/user-files user-files
Doing this to link the user-files folder in repo/public with the openshift data/user-files folder.
So the thing is that I can't access the files in the front end by doing "ln" at all. No clue where to go from here.
All you need is:
1. store all your files in the OPENSHIFT_DATA_DIR directory
2. write a script that runs before server.js or app.js; its function is to copy all data from OPENSHIFT_DATA_DIR to your desired directory inside your repo, like the public directory or wherever you want.
SAMPLE: initDataBeforeRun.js
var fs = require('fs'), path = require('path'), src = process.env.OPENSHIFT_DATA_DIR;
fs.readdirSync(src).forEach(function (file) {
    fs.writeFileSync(path.join('./public', file), fs.readFileSync(path.join(src, file)));
});
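One way to make sure the copy runs before the server starts, assuming an npm start script in package.json, is to chain the two commands: "start": "node initDataBeforeRun.js && node server.js". Note that this simple sketch assumes the data directory contains only regular files; subdirectories would need extra handling.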

Run executable from local storage using Azure Web Role

I'm trying to run a simple executable using an Azure Web Role.
The executable is stored in the Web Role's local storage.
The executable produces a log.txt file once it has been run.
This is the method I am using to run the executable:
public void RunExecutable(string path)
{
    Process.Start(path);
}
Where path is localStorage.RootPath + "Application.exe"
The problem I am facing is that when I open the local storage folder, the executable is there; however, there is no log.txt file.
I have tested the executable, it works if I manually run it, it produces the log.txt file.
Can anyone see the problem?
Try setting an explicit WorkingDirectory for the process... I wonder if log.txt is being created, just not where you expect. (Or perhaps the app is trying to create log.txt but failing because of the permissions on the directory it's trying to create it in.)
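A minimal sketch of that suggestion, assuming localStorage is the LocalResource from the question and the rest of the names are as in the original method:
public void RunExecutable(string path)
{
    // Point the working directory at local storage so log.txt is created there.
    var startInfo = new ProcessStartInfo
    {
        FileName = path,
        WorkingDirectory = localStorage.RootPath,
        UseShellExecute = false
    };
    Process.Start(startInfo);
}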
If you remote desktop into the instance, can't you find the file created in the E:\approot\ folder? As Steve said, using a WorkingDirectory for the process will fix the issue.
You can use Environment.GetEnvironmentVariable("RoleRoot") to construct the path to your application root.
