I am having issues attempting to use pkg (the Node.js module) properly.
I am building a standalone file manager (well, it swaps video/audio files to and from preselected directories; the intent is to let a machine, without any internet connection itself, remove and add files to a syncing folder like OneDrive/Dropbox/Google Drive/etc. using a text file).
The issue I am having is that after I package it into a binary, I do not understand how to allow/force it to create/read the text file outside the compiled binary.
-- I would love for it to be within the same folder as the executable.
I am attempting to find a way to store data without having to share the source code, or require Node to be installed on other machines.
-- I intend to require as few permissions as possible, aside from reading/writing the config & 'database' [which is simply a text file listing what files are in the local storage, and what files are & are not in the remote storage].
What am I missing about pkg? And if it can store data internally somehow, how do I get it to read an external file?
-- Though I would greatly prefer to have the txt files outside the binary, in plain text, easy to read.
As a side question, I am not understanding how to pass an argument through & use it inside the program after it's compiled. [Hell, I'm having a heck of a time properly understanding the readme for the pkg module.]
Use the fs module to load your config object, as in these few lines of code:

const fs = require('fs');

const filename = './config.json';
let rawdata = fs.readFileSync(filename);
let config = JSON.parse(rawdata);
config.json must be in the same directory as the pkg executable.
If you need to change the path of config.json, you can specify the full path of the file using command-line arguments.
These can be read at runtime using the process.argv variable, as explained here.
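Putting the two together, a minimal sketch (the fallback of looking next to the compiled executable via process.execPath is an assumption about where you want the file to live; pkg does not do this for you):

const fs = require('fs');
const path = require('path');

// process.argv[0] is the node/pkg binary and argv[1] is the script,
// so the first user-supplied argument is argv[2].
// If a path is passed on the command line, use it; otherwise fall back
// to a config.json sitting next to the compiled executable.
const filename = process.argv[2]
  || path.join(path.dirname(process.execPath), 'config.json');

const config = JSON.parse(fs.readFileSync(filename, 'utf8'));
console.log(config);

Because the file is read with fs at runtime (rather than being require'd), pkg leaves it out of the snapshot, and it stays a plain, editable text file on disk.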
I am creating a build system for development purposes for the FreeCAD application. Repo is here if you want to get a better scope of what I'm talking about.
Essentially the folder structure is:
(Main)
  (Linux)
    (Ubuntu)
      ubuntu.sh
      ubuntu.Dockerfile
    (Fedora)
      fedora.sh
      fedora.Dockerfile
  (Windows)
  (Mac)
  .env
What I want to do is use the env variables in .env as a central source of truth for all the build scripts in the tree. But I don't want to have to explicitly define the path of the .env inside the files, as an absolute or relative path, because I'm still iterating and I don't want to update all the files if I rearrange the tree. Nor do I want to put independent .env files in all the child dirs, for the same reason (unless they auto-update somehow).
My question is as follows:
How do I define the "local" path of .env in each script, Dockerfile, etc., but only have to modify one top-level .env file to auto-update an evolving tree, in a cross-platform way?
Some things I thought through:
Windows uses "hard links", which are equivalent to but not compatible with POSIX hard links. I thought about creating a windows.env and a posix.env in each child dir that both point to the same main .env, but most config files can only take one .env path argument.
I thought about writing a script that will update all the .env's when run (I would rather not have to); alternatively, I will accept an answer that uses some dotenv tooling to accomplish the same goal, as long as it's cross-platform and runs locally. I'm just not super familiar with those toolings. I would prefer the tooling or script run as a service rather than have to be run every time in order to update the files.
IF I'm using Git AND only referring to shell scripts, then a command at the top of the script such as . "$(git rev-parse --show-toplevel)/.env" works well, but it has major limitations for use with Dockerfiles and other YAML-based file types.
I currently use a run.sh file at the top-level dir that sources the .env and then calls the other files within it (sketched below). This seems to be the most common pattern I see in other repos. But it means I need two files, run.sh and run.pwsh, which just seems extraneous and hacky: adding extra files that are basically one-liners.
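For reference, a minimal sketch of that run.sh pattern (the dispatch convention shown here is hypothetical):

#!/usr/bin/env sh
# run.sh -- lives next to .env at the repo root, so a relative path is safe.
set -a                     # auto-export every variable the .env defines
. "$(dirname "$0")/.env"
set +a

# Dispatch to the build script passed as the first argument,
# e.g.  ./run.sh Linux/Ubuntu/ubuntu.sh
"$(dirname "$0")/$1"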
I have created a package and am now creating my tests within the package. For one test my inputs are a set of files and my outputs will be a different set of files created within the test.
I am saving the input files in the test directory of my package and would like to save the output files there too. Since others may run this test, I do not want to specify the input/output file location using my own path eg /home/myname/.julia/v4.0/MyPackage/test/MyInputFile.txt
How do I specify that the input location is within the package's test folder?
So basically, how do I tell Julia to look in the package's folder under the test directory, and not have to worry about specifying the entire path, including user name etc.?
For example, currently I have to say
readtable("/home/myname/.julia/v4.0/MyPackage/test/MyInputFile.txt", separator = '\t', header = false)
But I'd like to just be able to say
readtable("/MyPackage/test/MyInputFile.txt", separator = '\t', header = false)
so that no matter who the user of the package is and where they may store the package, they can still run the test?
I know that LOAD_PATH gives the paths Julia searches for packages, but I can't find any information on where it looks when importing files.
joinpath(Pkg.dir("MyPackage"), "test") is what you need.
As @GnimucK mentioned in a comment, a better solution is
dirname(@__FILE__)
Why is this better? A package could be installed and used from somewhere else (not the standard package directory). Pkg.dir is "stupid" and does not know better. This is rare, of course, and in most cases it won't matter.
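For example, in test/runtests.jl you could resolve the input file relative to the script itself (a minimal sketch; assuming readtable comes from DataFrames, as in the question):

using DataFrames

# Resolve the input file relative to this script's own location, so the
# test works no matter where the package happens to be installed.
testdir = dirname(@__FILE__)
infile  = joinpath(testdir, "MyInputFile.txt")
data    = readtable(infile, separator = '\t', header = false)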
I've been going slightly crazy trying to figure this out. I have some certs that I need to pass through to an authentication client from my API; however, the application keeps throwing ENOENT exceptions even though the file clearly exists in the same directory (I've fiddled with this to make sure). I'm using readFileSync, effectively doing the following:
key: fs.readFileSync('./privateKey.pem'),
Strangely, if I run this on a standalone Node server, not as part of an API, the file is found without a problem. Is there some consideration I'm not aware of when trying to use readFileSync in such a scenario?
Thanks!
In Node you need to be very careful with relative file paths. The only place where I'd ever really use them is in require('./_____') statements, where ./ means "relative to this file". However, require is kind of a special case, because it is a function that Node automatically creates per-file, so it knows the path of the current file.
In general, standard functions have no way of knowing the directory containing the script that happened to call them, so in almost all cases ./ means relative to the current working directory (the directory you were in when you ran node <scriptname>.js). The only time that is not the case is if your script, or a module you use, explicitly calls process.chdir to set the working directory to something else. The correct way to reference files relative to the current script file is to build an absolute path using __dirname + '/file.js'.
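Applied to the question's privateKey.pem, that looks like this (a minimal sketch; path.join does the same thing as the string concatenation while handling separators for you):

const fs = require('fs');
const path = require('path');

// Resolve the key relative to this source file rather than the process's
// current working directory, so it is found no matter where the API
// server was started from.
const key = fs.readFileSync(path.join(__dirname, 'privateKey.pem'));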
I know Puppet modules always have a files directory, and I know where it's supposed to be, and I have used the source => syntax effectively from my own handwritten modules, but now I need to learn how to deploy files using Hiera.
I'm starting with the saz-sudo module and I've read the docs, but I can't see anything about where to put the sudoers file; the one I want to distribute.
I'm not sure whether I need to set up a site-wide files dir in /etc/puppetlabs/puppet and then make subdirs in there for every module, or what. And does Hiera know to look in /etc/puppetlabs/puppet/files/sudo if I say source => "puppet:///files/etc/sudoers"? Do I need to add a pathname in /etc/hiera.yaml? Add a line - files?
Thanks for any clues.
My cursory view of the puppet module, given their example of using hiera:
sudo::configs:
  'web':
    'source'   : 'puppet:///files/etc/sudoers.d/web'
  'admins':
    'content'  : "%admins ALL=(ALL) NOPASSWD: ALL"
    'priority' : 10
  'joe':
    'priority' : 60
    'source'   : 'puppet:///files/etc/sudoers.d/users/joe'
suggests it assumes you have a "files" Puppet module. So under your Puppet modules directory:
mkdir -p files/files/etc/sudoers.d/
Drop your files in there.
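That gives a module layout like this (the web and joe files match the Hiera example above):

modules/
  files/
    files/
      etc/
        sudoers.d/
          web
          users/
            joe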
Explanation:
The URL 'puppet:///files/etc/sudoers.d/users/joe' breaks down thus:
puppet: the protocol
///: three slashes indicate the source of the file is on the Puppet file server
files: the name of the module
etc/sudoers.d/users/joe: the full path to the file within the module's "files" directory
(Note: in current Puppet releases, files served from a module need the modules/ prefix, i.e. puppet:///modules/files/etc/sudoers.d/users/joe; a bare files/ prefix instead refers to a [files] mount point defined in fileserver.conf.)
You don't.
The idea of a module (Hiera-backed or not) is to relieve you of the need to manage the whole sudoers file yourself. Instead, you can manage each individual entry in the sudoers file.
I recommend reviewing the documentation carefully. You should definitely not have a file { "/etc/sudoers": } resource in your manifest.
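For instance, saz-sudo lets you declare one entry at a time (a minimal sketch based on the module's documented sudo::conf defined type; check the README of the version you have installed):

class { 'sudo': }

# Manage a single drop-in under /etc/sudoers.d/ rather than the whole file.
sudo::conf { 'admins':
  priority => 10,
  content  => '%admins ALL=(ALL) NOPASSWD: ALL',
}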
Hiera has nothing to do with files.
Hiera is like a variables database: it serves you values based on the hierarchy you have defined.
The files inside Puppet are usually accessed via source =>, and they follow a basic directory structure. In most cases you serve either a file or a template.
A template can serve your needs here, by building a sudoers file for you automatically. There are also modules that support modifying sudoers. It is up to you what to do.
In this case, saz stores the location of the file in Hiera, but the real location can be a file inside your Puppet tree (a module file or something similar), which is a completely separate concern.
Read about the Puppet file server.
If you have questions, just ask.
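For illustration, a minimal sketch of the template approach mentioned above (the module name mymodule and the @admins variable are hypothetical):

# manifests/init.pp
file { '/etc/sudoers.d/90-managed':
  ensure       => file,
  mode         => '0440',
  content      => template('mymodule/sudoers.erb'),
  validate_cmd => '/usr/sbin/visudo -c -f %',
}

# templates/sudoers.erb
<% @admins.each do |user| -%>
<%= user %> ALL=(ALL) NOPASSWD: ALL
<% end -%>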
I'm trying to wrap code that requires two *.db4o data files for easy use. I've added the data files to my eclipse .classpath by placing the files in ${project_dir}/res/ and adding the line:
<classpathentry kind="src" path="res"/>
to my .classpath.
I then defined a default constructor for my wrapper class that takes no arguments but goes and finds the paths to the *.db4o files (the paths are required by the compiled code I'm using to set things up). My approach for getting the paths is:
String datapath = ClassLoader.getSystemResource("resource_name").getPath();
This works great when I debug/run my code in Eclipse. However, when I export it as a jar, I can see that the *.db4o files are in the jar, as well as my compiled code, but the path returned to "datapath" is of the form:
datapath = ${pwd}/file:${absolute_path_to_jar}!/{resource_name}
Is there something about the resource being inside the jar that prevents an absolute path from working? Also, why is the behavior different simply because the code and resources live in a jar file? One last note: while my application is intended for wider use (from Pig, Python, etc.), I'm testing it from MATLAB, which is where I'm getting the odd value assigned to "datapath".
Thanks in advance for any responses.
getSystemResource() returns a URL to the resource. If your resource is zipped inside a jar file, then the URL will point into it (with the "!" notation). getPath() returns the "path" part of the URL, which is not always an actual file path; a URL can be one of many things, not just a file.
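If the code you are wrapping can accept a stream, read the resource directly with getResourceAsStream(), which works whether or not it is packed in a jar. If it insists on a real file path, the usual workaround is to copy the resource out to a temporary file first (a minimal sketch; the resource name is a placeholder):

import java.io.IOException;
import java.io.InputStream;
import java.nio.file.Files;
import java.nio.file.Path;
import java.nio.file.StandardCopyOption;

public class ResourceLoader {
    // Extracts a classpath resource to a temp file and returns its path,
    // so code that requires a plain file path can use it even when the
    // resource lives inside a jar.
    public static Path extractToTempFile(String resourceName) throws IOException {
        try (InputStream in = ResourceLoader.class.getClassLoader()
                .getResourceAsStream(resourceName)) {
            if (in == null) {
                throw new IOException("Resource not found: " + resourceName);
            }
            Path tmp = Files.createTempFile("resource-", ".db4o");
            Files.copy(in, tmp, StandardCopyOption.REPLACE_EXISTING);
            return tmp;
        }
    }
}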