dotenv doesn't seem to load any env variable - node.js

I'm developing a standard Node+Express web app. Everything else is working fine, but I can't get the .env file to populate process.env.
At first, I thought it was a path problem, since app.js (where dotenv is called from) lives in the src subfolder while .env is in the root. But using Node's tools, and confirming with a package called find-config, I have the correct absolute path; I've never gotten an ENOENT for a file not found.
I tried everything, from dotenv's debug option described in the docs to my own debugging, making sure everything is in place. This is my latest attempt:
const fs = require('fs');
const dotenv = require('dotenv');
const realpath = require('find-config')('.env');
console.log(dotenv.parse(fs.readFileSync(realpath)));
I've paused execution and confirmed that realpath is indeed the absolute path to .env.
And here's .env:
NODE_ENV=development
NODE_HOST=localhost
NODE_PORT=8080
SESSION_SECRET=eX&frsz9M*3XqFKUrK6
The console.log outputs {}, which is consistent with every avenue I've tried: never an error, but never a parsed object either. Just nothingness.
Doing this:
const results = JSON.stringify(dotenv.config({ path: '/100%/correct/path/.env' }));
It gives back {"parsed":{}}.
I became so suspicious that I downloaded, installed, and ran the mega Hackathon Starter repo (29k stars), which uses the same method.
Initially it doesn't work because the author used a relative path; with an absolute path, it works.
A bit more info in case it helps:
/* =========== Dotenv troubleshooting =========== */
const realPath = path.join(__dirname, '../.env');
const buffer = fs.readFileSync(realPath);
const envConfig = dotenv.parse(buffer, { debug: true });
l(realPath);  // l() is my logging shorthand
l(buffer);
l(envConfig);
/* end of Dotenv troubleshooting ------------------ */
This logs the following:
> node server.js
SESSION_SECRET=blabla value when parsing line 1: NODE_ENV=development
19:06:11 info: /100%/correct/path/.env
19:06:11 info: <Buffer 4e 4f 44 45 5f 45 4e 56 3d 64 65 76 65 6c 6f 70 6d 65 6e 74 0d 4e 4f 44 45 5f 48 4f 53 54 3d 6c 6f 63 61 6c 68 6f 73 74 0d 4e 4f 44 45 5f 50 4f 52 54 ... 41 more bytes>
19:06:11 info: {}
And as you can tell, that buffer is indeed the file:
/100%/correct/path $ xxd .env
00000000: 4e4f 4445 5f45 4e56 3d64 6576 656c 6f70 NODE_ENV=develop
00000010: 6d65 6e74 0d4e 4f44 455f 484f 5354 3d6c ment.NODE_HOST=l

After too much time scrutinizing my code, I finally discovered the issue.
Depending on the operating system, and down to individual files (and the text editors and IDEs that produced them), a line can end with a Line Feed ("LF") character (0x0A, \n), a Carriage Return ("CR") character (0x0D, \r), or the two-byte CRLF sequence (0x0D 0x0A, \r\n).
Unfortunately, dotenv only understands \n as the end of a line, so this was a parsing failure caused by the simplicity of that check. As far as I know, though, \n is more or less the standard.
For some reason, my .env file was using a bare \r as the line separator, so it was a quick fix: convert the file to LF line endings.
In my case, using a JetBrains product, you can configure the settings to always use the same line ending.
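If converting the file isn't an option, you can also normalize the buffer before it reaches the parser. A minimal sketch (the CR-separated string mirrors the hex dump above; passing the result to dotenv.parse is then straightforward):

```javascript
// Replace lone CR and CRLF line endings with LF, so a parser that
// only splits on \n sees one key=value pair per line.
function normalizeLineEndings(text) {
    return text.replace(/\r\n?/g, '\n');
}

// Same content as the .env above, but CR-separated, as in the hex
// dump (0d between entries).
const raw = 'NODE_ENV=development\rNODE_HOST=localhost\rNODE_PORT=8080';
console.log(normalizeLineEndings(raw).split('\n'));
// [ 'NODE_ENV=development', 'NODE_HOST=localhost', 'NODE_PORT=8080' ]
```

You would then hand normalizeLineEndings(buffer.toString()) to dotenv.parse instead of the raw buffer.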

Related

Handling data packets in Node Js

Let's say I'm reading a TCP or UDP stream in Node Js. This question basically applies to any language or platform, but how do I go about creating a header for my data layer?
I suppose I need
A magic set of characters to identify a header
A number that says the length of the packet
???
I would like to future proof it and follow any "typical" data packet header structures (maybe they usually include version? protocol?) but cannot for the life of me find any great information online.
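For what it's worth, a minimal header along those lines (magic bytes, version, payload length) might be sketched like this; the field layout and the MAGIC value are entirely hypothetical, not any standard:

```javascript
// Hypothetical 8-byte header: 2-byte magic, 1-byte version,
// 1-byte reserved, 4-byte big-endian payload length.
const MAGIC = 0xcafe; // made-up identifier

function encodeFrame(payload, version = 1) {
    const header = Buffer.alloc(8);
    header.writeUInt16BE(MAGIC, 0);
    header.writeUInt8(version, 2);
    header.writeUInt32BE(payload.length, 4);
    return Buffer.concat([header, payload]);
}

function decodeFrame(buf) {
    if (buf.readUInt16BE(0) !== MAGIC) throw new Error('bad magic');
    const length = buf.readUInt32BE(4);
    return { version: buf.readUInt8(2), payload: buf.subarray(8, 8 + length) };
}

const frame = encodeFrame(Buffer.from('hello'));
console.log(decodeFrame(frame).payload.toString()); // 'hello'
```

The length prefix lets the reader know where one packet ends and the next begins in a TCP stream; the reserved byte leaves room for future flags.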
Use the pcapng format. The spec should have everything you need if you want to look at header bytes at a deeper level. Pcap is the older format, but has limitations.
There's already a pcapng parser, pcap-ng-parser, available via npm.
If you want a general protocol analyzer, you should look at Wireshark.
Generate a pcapng file
In order to work with pcapng, we need a pcapng file. Fortunately, tshark (part of Wireshark) makes this easy. We can use tshark to capture 10 packets (-c 10) and save them in the pcapng format (-F pcapng):
tshark -w myfile.pcapng -F pcapng -c 10
JS pcapng libraries
pcap-ng-parser
We can use the sample js file on the about page:
// temp.js
const PCAPNGParser = require('pcap-ng-parser')
const pcapNgParser = new PCAPNGParser()
const myFileStream = require('fs').createReadStream('./myfile.pcapng')
myFileStream.pipe(pcapNgParser)
    .on('data', parsedPacket => {
        console.log(parsedPacket)
    })
    .on('interface', interfaceInfo => {
        console.log(interfaceInfo)
    })
Getting info from pcapng file
Running sample JS
Running it on my system, we see link and interface information.
$ node temp.js
{
  linkType: 1,
  snapLen: 524288,
  name: 'en0\u0003\u0005Wi-Fi\t\u0001\u0006',
  code_12: 'Mac OS X 10.14.6, build 18G103 (Darwin 18.7.0)\u0000\u0000\u0000\u0000\u0000\u0000h\u0000\u0000\u0000'
}
{
  interfaceId: 0,
  timestampHigh: 367043,
  timestampLow: 1954977647,
  data: <Buffer a8 bd 27 c8 f2 fe 6c 96 cf d8 7f e7 08 00 45 00 00 28 87 c3 00 00 40 06 e4 ba ac 1f 63 c6 8c 52 72 1a fc 3c 01 bb 6c 24 4d 01 54 03 1b 06 50 10 08 00 ... 4 more bytes>
}
... <output truncated>
Vs tshark
Depending on your use case, tshark may make more sense anyway:
tshark -r myfile.pcapng -c 1 -T json
[
  {
    "_index": "packets-2019-12-15",
    "_type": "pcap_file",
    "_score": null,
    "_source": {
      "layers": {
        "frame": {
          "frame.interface_id": "0",
          "frame.interface_id_tree": {
            "frame.interface_name": "en0",
            "frame.interface_description": "Wi-Fi"
          },
          "frame.encap_type": "1",
          "frame.time": "Dec 15, 2019 12:04:14.932076000 PST",
          "frame.offset_shift": "0.000000000",
          "frame.time_epoch": "1576440254.932076000",
          "frame.time_delta": "0.000000000",
          "frame.time_delta_displayed": "0.000000000",
          "frame.time_relative": "0.000000000",
          "frame.number": "1",
          "frame.len": "175",
          "frame.cap_len": "175",
          "frame.marked": "0",
          "frame.ignored": "0",
          "frame.protocols": "eth:ethertype:ip:udp:db-lsp-disc:json",
          "frame.coloring_rule.name": "UDP",
          "frame.coloring_rule.string": "udp"
        },
        "eth": {
          "eth.dst": "ff:ff:ff:ff:ff:ff",
          "eth.dst_tree": {
            ...

How to add tags like Author,Title and Thumbnail to an .m4a audio file using node.js?

I'm using Node.js to download music files in .m4a format.
Issue: I cannot find a way to add tags and the Cover Art (thumbnail) to .m4a files.
I had done this before using another program: MediaHuman's YouTube-to-MP3 downloader (even though it downloads as m4a, my desired format) https://ufile.io/yzyzt
(P.S. I'm open to using another language, as long as it can be linked to Node, but it's definitely very much preferred if this could be done purely in Node.js.)
Any clues on this subject are very appreciated.
One way to do it is to use node-taglib2, a Node.js C++ addon based on TagLib and available in the npm registry.
This module makes editing MPEG metadata trivial:
const fs = require('fs');
const taglib = require('taglib2');

let props = {
  artist: 'Howlin\' Wolf',
  title: 'Evil is goin\' on',
  pictures: [
    {
      "mimetype": 'image/jpeg',
      "picture": fs.readFileSync('./thumbnail.jpg')
    }
  ]
}

taglib.writeTagsSync('./file.m4a', props);
Now we can check that the metadata has been updated:
let tags = taglib.readTagsSync('./file.m4a')
console.log(tags.artist, '-', tags.title) // Howlin' Wolf - Evil is goin' on
console.log(tags.pictures) // [ { mimetype: 'image/jpeg', picture: <Buffer ff d8 ff e0 00 10 4a 46 49 46 00 01 01 00 00 01 00 01 00 00 ff db 00 43 00 03 02 02 03 02 02 03 03 03 03 04 03 03 04 05 08 05 05 04 04 05 0a 07 07 06 ... > } ]
But of course there are other options to do the same thing, and I'm sure you could also use ffmpeg, as mentioned by Brad in his comment. I'd be interested in your feedback if you try it.
I have finally solved my issue by using ffmpeg from Node!
https://www.ffmpeg.org/
https://www.npmjs.com/package/ffmpeg
The issue was that \node_modules\ffmpeg\lib\video.js skips duplicate commands, so my requests consisting of multiple instances of the same command were never passed to ffmpeg properly. However, with a quick mod to the video.js file, I was able to make it work! I have successfully written both the tags and a thumbnail onto my .m4a.
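For reference, if you go the ffmpeg route, tags and cover art can be set in a single invocation. Here's a sketch that just builds the argument list (the flags are standard ffmpeg options; the file names and tag values are placeholders):

```javascript
// Build ffmpeg arguments that copy the audio stream, attach an image
// as cover art, and write the given metadata tags to the output file.
function buildFfmpegArgs(input, cover, output, tags) {
    const args = [
        '-i', input,
        '-i', cover,
        '-map', '0:a', '-map', '1',
        '-c', 'copy',
        '-disposition:v:0', 'attached_pic',
    ];
    for (const [key, value] of Object.entries(tags)) {
        args.push('-metadata', `${key}=${value}`);
    }
    args.push(output);
    return args;
}

// You would then run: require('child_process').execFile('ffmpeg', args, cb)
const args = buildFfmpegArgs('in.m4a', 'cover.jpg', 'out.m4a',
    { artist: 'Some Artist', title: 'Some Title' });
console.log(args.join(' '));
```

Using execFile with an argument array (rather than building one shell string) also avoids quoting problems with tag values containing spaces.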

Line endings in child_process.exec() stdout

I'm building a custom wrapper around wmic using Node.js, and I'm having problems parsing its output. Not a blocker, but I just don't know why this happens.
I thought line endings were either \n or \r\n, so I decided to look into it.
Here's a demo piece of code:
const { exec } = require('child_process');

exec('wmic process', { maxBuffer: 1000 * 1024 }, function (err, stdout, stderr) {
    let location = stdout.indexOf('\n') - 6;
    let region = stdout.substr(location, 10);
    console.log(region);
    let buf = Buffer.from(stdout);  // Buffer.from instead of the deprecated new Buffer()
    let buf2 = Buffer.alloc(30);
    buf.copy(buf2, 0, location, location + 30);
    console.log(buf2);
});
I looked at region and saw this:
For some reason there are two \rs instead of one!
Output of console.log(buf2) also shows this:
<Buffer 6f 75 6e 74 20 20 0d 0d 0a 53 79 73 ...
                          ^^ ^^ ^^
I was ready to split lines by \r\r\n when I decided to pipe the output to a text file and look at it in a hex editor:
wmic process > wmic.txt
What? Where are the two \rs?
I have two questions:
Where does Node get the second \r from? I tried to find some info, but Google struggles with queries like \r\r\n; this is the closest I could get, and I'm not sure it's related.
Why does the hex editor show a 16-bit encoding while Node doesn't? I suppose I should have specified an encoding in the exec() options, but I don't know which one. chcp shows codepage 437, but I'd like it to work with other codepages as well.
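Whatever the source of the extra \r turns out to be, splitting with a regex that tolerates any run of carriage returns before the newline sidesteps the problem. A sketch (the sample string imitates the \r\r\n seen in the buffer above):

```javascript
// Split console output into lines regardless of whether they end in
// \n, \r\n, or the \r\r\n that wmic appears to produce; also strip
// any stray trailing \r left on the last, newline-less line.
function splitLines(text) {
    return text.split(/\r*\n/).map(line => line.replace(/\r+$/, ''));
}

console.log(splitLines('Count  \r\r\nSystem\r\nIdle\n'));
// [ 'Count  ', 'System', 'Idle', '' ]  (trailing '' from the final \n)
```

This treats all three ending styles uniformly, so the wrapper keeps working even if a future Windows or wmic version changes the behavior.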

Kohana app database not connecting

ErrorException [ Fatal Error ]: Class 'Database_Mysqli' not found
MODPATH/database/classes/kohana/database.php [ 78 ]
73
74 // Set the driver class name
75 $driver = 'Database_'.ucfirst($config['type']);
76
77 // Create the database connection instance
78 new $driver($name, $config);
79 }
80
81 return Database::$instances[$name];
82 }
83
{PHP internal call} » Kohana_Core::shutdown_handler()
Environment
I tried everything and it doesn't work. Can anyone help? I can't get it configured; I can give remote access to see what's wrong.
As mentioned in the comments, download the linked driver module.
In config/database.php, use 'type' => 'MySQLi'.
Note: it looks like you typed it as Mysqli; from Kohana 3.3 onward, class names are case sensitive.

Convert OpenISO8583.Net into different formats

I'm trying to send an ISO 8583 message to a financial institution. They, however, have a Web Service that I call, and I load the ISO 8583 payload into the appropriate field of the WCF service.
I have created an ISO 8583 message this way:
var isoMessage = new OpenIso8583.Net.Iso8583();
isoMessage.MessageType = Iso8583.MsgType._0100_AUTH_REQ;
isoMessage.TransactionAmount = (long) 123.00;
isoMessage[Iso8583.Bit._002_PAN] = "4111111111111111";
// More after this.
I can't seem to figure out how to convert the isoMessage into an ASCII, human-readable format so I can pass it through to the web service.
Anyone have any idea how this can be done with this library? Or am I using this library the wrong way?
Thanks.
UPDATED:
I have figured out how to do this doing:
var asciiFormatter = new AsciiFormatter();
var asciiValue = asciiFormatter.GetString(isoMessage.ToMsg());
However, now I am trying to take the isoMessage and pass the entire thing as a hex string, ideally easily with OpenIso8583.Net, as follows:
var isoMessage = new OpenIso8583Net.Iso8583();
isoMessage.MessageType = Iso8583.MsgType._0800_NWRK_MNG_REQ;
isoMessage[Iso8583.Bit._003_PROC_CODE] = "000000";
isoMessage[Iso8583.Bit._011_SYS_TRACE_AUDIT_NUM] = "000001";
isoMessage[Iso8583.Bit._041_CARD_ACCEPTOR_TERMINAL_ID] = "29110001";
I know this is tricky, because some fields are BCD, alphanumeric, numeric, etc.; however, this should be relatively easy (or so I would think) using OpenIso8583.Net. The result I'd like to get is:
Msg Bitmap (3, 11, 41) ProcCode Audit Terminal ID
----- ----------------------- -------- -------- -----------------------
08 00 20 20 00 00 00 80 00 00 00 00 00 00 00 01 32 39 31 31 30 30 30 31
Any help would be greatly appreciated!
Essentially, you need to extend Iso8583 and initialise it with your own Template. In the Template you can set the formatters for each field so that BCD and binary packing are not used. Have a look at the source code of Iso8583 to see how it works.
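As a sanity check on the expected output above, the primary bitmap for fields 3, 11 and 41 can be computed directly. A quick sketch (in JavaScript for illustration only; OpenIso8583.Net builds the bitmap for you):

```javascript
// Compute the 8-byte ISO 8583 primary bitmap: bit N (1-based,
// MSB-first) is set when field N is present in the message.
function primaryBitmap(fields) {
    const bitmap = Buffer.alloc(8);
    for (const field of fields) {
        bitmap[Math.floor((field - 1) / 8)] |= 0x80 >> ((field - 1) % 8);
    }
    return bitmap.toString('hex');
}

console.log(primaryBitmap([3, 11, 41])); // '2020000000800000'
```

That matches the "20 20 00 00 00 80 00 00" bytes in the desired result, so the remaining work is packing each field per its own format (BCD for the numeric fields, ASCII for the terminal ID).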
