Let us say I have this very simple python command that wraps a call to "pm2 ls":
python3 -c "import subprocess; subprocess.run(['pm2', 'ls'])"
Now, let us run this same command, but within a pm2 start:
pm2 start -n "test" "python3 -c \"import subprocess; subprocess.run(['pm2', 'ls'])\""
If I check the pm2 logs, I can see it fails with the message below (output of pm2 logs test):
0|test | child_process.js:159
0|test | p.open(fd);
0|test | ^
0|test |
0|test | Error: ENOTTY: inappropriate ioctl for device, uv_pipe_open
0|test | at Object._forkChild (child_process.js:159:5)
0|test | at setupChildProcessIpcChannel (internal/bootstrap/pre_execution.js:356:30)
0|test | at prepareMainThreadExecution (internal/bootstrap/pre_execution.js:53:3)
0|test | at internal/main/run_main_module.js:7:1 {
0|test | errno: -25,
0|test | code: 'ENOTTY',
0|test | syscall: 'uv_pipe_open'
0|test | }
Now, if I replace subprocess.run() with os.spawnlp() as below:
pm2 start -n test2 "python -c \"import os; os.spawnlp(os.P_WAIT, 'pm2', 'pm2', 'ls')\""
It runs fine.
My question is:
what is the difference between subprocess.run() and os.spawnlp() in this context?
is there a way to make this work with subprocess.run()?
By the way I use:
python 3.9.10
pm2 5.1.1
node 14.17.5
macOS 13.1
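In case it helps, a workaround I have been experimenting with is stripping node's IPC environment variable before spawning. This is an unconfirmed assumption on my part: pm2 seems to start its children with an IPC channel advertised via NODE_CHANNEL_FD, and Python's subprocess machinery closes inherited fds by default, so the child node process dies trying to reopen a fd that is gone.

```python
import os
import subprocess

def run_without_node_ipc(cmd):
    """Run cmd with node's IPC variable stripped from the environment.

    Assumption (unverified): pm2 advertises its IPC pipe through
    NODE_CHANNEL_FD; subprocess.run() closes inherited fds by default,
    so a child node process that trusts the variable fails with ENOTTY.
    """
    env = {k: v for k, v in os.environ.items() if k != "NODE_CHANNEL_FD"}
    return subprocess.run(cmd, env=env)

# Inside the pm2-managed script this would be:
# run_without_node_ipc(["pm2", "ls"])
```

This would also explain the difference from os.spawnlp(), which leaves the inherited fds open.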
Basically I want to set up a website with LXC. So I installed LXD, created a container called app1, and installed apache2 in it. Everything is running, but when I use the container's IP in the browser it gives "This site can’t be reached". I disabled ufw and even removed it, but nothing changed.
Here are the commands that I did to test with their results:
$ sudo lxc list
| app1 | RUNNING | 10.221.72.14 (eth0) | fd42:969c:2638:6357:216:3eff:fe59:efd7 (eth0) | CONTAINER | 0
$ sudo lxc network ls
+--------+----------+---------+-------------+---------+
|  NAME  |   TYPE   | MANAGED | DESCRIPTION | USED BY |
+--------+----------+---------+-------------+---------+
| br0    | bridge   | NO      |             | 0       |
+--------+----------+---------+-------------+---------+
| ens3   | physical | NO      |             | 0       |
+--------+----------+---------+-------------+---------+
| lxdbr0 | bridge   | YES     |             | 2       |
+--------+----------+---------+-------------+---------+
| virbr0 | bridge   | NO      |             | 0       |
+--------+----------+---------+-------------+---------+
$ sudo lxc network show lxdbr0
config:
ipv4.address: 10.221.72.1/24
ipv4.nat: "true"
ipv6.address: fd42:969c:2638:6357::1/64
ipv6.nat: "true"
description: ""
name: lxdbr0
type: bridge
used_by:
- /1.0/instances/app1
- /1.0/profiles/default
managed: true
status: Created
locations:
- none
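One thing worth noting (my suggestion, not something established above): with ipv4.nat: "true" on lxdbr0, the container's 10.221.72.14 address is normally reachable only from the LXD host itself, not from other machines on the network. A common way to expose the container's web server is an LXD proxy device; a sketch, where the device name and ports are examples:

```shell
# Forward port 80 on the host to port 80 inside the app1 container.
lxc config device add app1 web80 proxy \
    listen=tcp:0.0.0.0:80 \
    connect=tcp:127.0.0.1:80
```

After that, browsing to the host's own IP on port 80 should reach apache2 inside app1.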
[Question posted by a user on YugabyteDB Community Slack]
I'm running a yb-cluster of 3 logical nodes on 1 VM. I am trying an SSL-mode-enabled cluster. Below is the config file I am using to start the cluster with SSL mode ON:
./bin/yugabyted start --config /data/ybd1/config_1.config
./bin/yugabyted start --base_dir=/data/ybd2 --listen=127.0.0.2 --join=192.168.56.12
./bin/yugabyted start --base_dir=/data/ybd3 --listen=127.0.0.3 --join=192.168.56.12
my config file:
{
  "base_dir": "/data/ybd1",
  "listen": "192.168.56.12",
  "certs_dir": "/root/192.168.56.12/",
  "allow_insecure_connections": "false",
  "use_node_to_node_encryption": "true",
  "use_client_to_server_encryption": "true"
}
I am able to connect using:
bin/ysqlsh -h 127.0.0.3 -U yugabyte -d yugabyte
ysqlsh (11.2-YB-2.11.1.0-b0)
Type "help" for help.
yugabyte=# \l
List of databases
Name | Owner | Encoding | Collate | Ctype | Access privileges
-----------------+----------+----------+---------+-------------+-----------------------
postgres | postgres | UTF8 | C | en_US.UTF-8 |
system_platform | postgres | UTF8 | C | en_US.UTF-8 |
template0 | postgres | UTF8 | C | en_US.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
template1 | postgres | UTF8 | C | en_US.UTF-8 | =c/postgres +
| | | | | postgres=CTc/postgres
yugabyte | postgres | UTF8 | C | en_US.UTF-8 |
But when I try to connect to my yb-cluster from the psql client, I get the errors below.
psql -h 192.168.56.12 -p 5433
psql: error: connection to server at "192.168.56.12", port 5433 failed: FATAL: Timed out: OpenTable RPC (request call id 2) to 192.168.56.12:9100 timed out after 120.000s
postgres#acff2570dfbc:~$
And in the yb-tserver logs I see the errors below:
I0228 05:00:21.248733 21631 async_initializer.cc:90] Successfully built ybclient
2022-02-28 05:02:21.248 UTC [21624] FATAL: Timed out: OpenTable RPC (request call id 2) to 192.168.56.12:9100 timed out after 120.000s
I0228 05:02:21.251086 21627 poller.cc:66] Poll stopped: Service unavailable (yb/rpc/scheduler.cc:80): Scheduler is shutting down (system error 108)
2022-02-28 05:54:20.987 UTC [23729] LOG: invalid length of startup packet
Any help in this regard is really appreciated.
You’re setting your config incorrectly for the yugabyted tool. You want to use --master_flags and --tserver_flags, as explained in the docs: https://docs.yugabyte.com/latest/reference/configuration/yugabyted/#flags.
An example:
bin/yugabyted start --base_dir=/data/ybd1 --listen=192.168.56.12 --tserver_flags=use_client_to_server_encryption=true,ysql_enable_auth=true,use_cassandra_authentication=true,certs_for_client_dir=/root/192.168.56.12/
Sending the parameters this way should work on your cluster.
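Once the flags are in place, connecting from a stock psql client should look something like this (a sketch; host and user are taken from the question, and the sslmode/sslrootcert settings are the usual libpq options rather than anything YugabyteDB-specific):

```shell
# Require TLS from a plain psql client; add sslrootcert=<ca.crt> if the
# node certificates are signed by a private CA.
psql "host=192.168.56.12 port=5433 user=yugabyte dbname=yugabyte sslmode=require"
```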
I have a node.js api app on Azure. I use bunyan to log every request to stdout. How can I save and read the log files? I enabled BLOB logging. The only thing that shows up in my storage is a bunch of csv files. Here is an example:
| date | level | applicationName | instanceId | eventId | pid | tid | message
_______________________________________________________________________________________________________________________________________________________________
| 2017-05-17T14:21:15 | Verbose | myApp | tae9d6 | 636306276755847146 | 13192 | -1 | SnapshotHelper::RestoreSnapshotInternal SUCCESS - File.Copy
| 2017-05-17T14:21:15 | Verbose | myApp | tae9d6 | 636306276756784690 | 13192 | -1 | SnapshotHelper::RestoreSnapshotInternal SUCCESS - process
Where are my logs, that I printed to stdout?
1) Create the file iisnode.yml in your root folder (D:\home\site\wwwroot) if it does not exist.
2) Add the following lines to it.
loggingEnabled: true
logDirectory: iisnode
After that is done, you can find the logs in D:\home\site\wwwroot\iisnode.
For more info, please refer to https://learn.microsoft.com/en-us/azure/app-service-web/web-sites-nodejs-debug#enable-logging.
Note that with the settings above in iisnode.yml, the logs you see in D:\home\site\wwwroot\iisnode may come from BLOB storage or the file system, depending on your diagnostics configuration.
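As a side note (my suggestion, not part of the iisnode setup above), bunyan can also write directly to a file stream in addition to stdout, which gives you a readable log file regardless of how iisnode captures output. The path below is an example:

```javascript
var bunyan = require('bunyan');

// One stream to stdout (picked up by iisnode), one straight to a file.
var log = bunyan.createLogger({
    name: 'myApp',
    streams: [
        { stream: process.stdout },
        { path: 'D:\\home\\LogFiles\\myapp.log' }
    ]
});

log.info('request received');
```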
The Problem
Connecting directly through redis-cli to my twemproxy will correctly proxy me over to redis without any issues/disconnects. However, when I use node-redis to connect to twemproxy I get the following error:
[Error: Redis connection gone from end event.]
Trace is as follows:
Error: Ready check failed: Redis connection gone from end event.
at RedisClient.on_info_cmd (/home/vagrant/tests/write-tests/node_modules/redis/index.js:368:35)
at Command.callback (/home/vagrant/tests/write-tests/node_modules/redis/index.js:418:14)
at RedisClient.flush_and_error (/home/vagrant/tests/write-tests/node_modules/redis/index.js:160:29)
at RedisClient.connection_gone (/home/vagrant/tests/write-tests/node_modules/redis/index.js:474:10)
at Socket.<anonymous> (/home/vagrant/tests/write-tests/node_modules/redis/index.js:103:14)
at Socket.EventEmitter.emit (events.js:117:20)
at _stream_readable.js:919:16
at process._tickCallback (node.js:419:13)
This error occurs whether or not the redis-server is even running, so I am pretty sure it has to do with how node-redis and twemproxy are interacting. Or not interacting, as the case may be.
My Question
Just what the heck is happening?
Background Information
I've got a simple test setup that is as follows:
+------------------+
| +----+----+ |
| | r1 + r2 + |
| +----+----+ |
| | | |
| +---------+ |
| |twemproxy| |
| +---------+ |
| / | \ |
| +----+----+----+ |
| | aw | aw | aw | |
| +----+----+----+ |
+------------------+
aw = api worker
r1/r2 = redis instance
twemproxy = twemproxy
the aw's are currently nodejs clustered on the same host
r1/r2 are redis instances, again on the same host
node version 0.10.x
all three machines are running from a very sparse Vagrantfile. Static IPs are assigned to each one for now, on a private network. Each machine is reachable from every other machine on the specified ports.
After a bit of poking, I realized it is because node_redis attempts to call the INFO command on connection by default, and twemproxy does not support that command, so the ready check tears the connection down.
Simply modifying the connection options to include no_ready_check: true solves this issue and forces the connection through twemproxy.
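Concretely, the client setup ends up looking something like this (22121 is twemproxy's default listen port; adjust to whatever your nutcracker.yml uses):

```javascript
var redis = require('redis');

// Skip node_redis's INFO-based ready check so the connection
// survives twemproxy.
var client = redis.createClient(22121, '127.0.0.1', {
    no_ready_check: true
});
```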
My assumption is that any module tested using Intern will automatically be covered by Istanbul's code coverage. For reasons unknown to me, my module is not being included.
I am:
running Intern 1.6.2 (installed with npm locally)
testing NodeJS code
using callbacks, not promises
using CommonJS modules, not AMD modules
Directory Structure (only showing relevant files):
plister
|
|--libraries
| |--file-type-support.js
|
|--tests
| |--intern.js
| |--unit
| |--file-type-support.js
|
|--node_modules
|--intern
plister/tests/intern.js
define({
    useLoader: {
        'host-node': 'dojo/dojo'
    },
    loader: {
        packages: [
            { name: 'libraries', location: 'libraries' }
        ]
    },
    reporters: ['console'],
    suites: ['tests/unit/file-type-support'],
    functionalSuites: [],
    excludeInstrumentation: /^(tests|node_modules)\//
});
plister/tests/unit/file-type-support.js
define([
    'intern!bdd',
    'intern/chai!expect',
    'intern/dojo/node!fs',
    'intern/dojo/node!path',
    'intern/dojo/node!stream-equal',
    'intern/dojo/node!../../libraries/file-type-support'
], function (bdd, expect, fs, path, streamEqual, fileTypeSupport) {
    'use strict';

    bdd.describe('file-type-support', function doTest() {
        bdd.it('should show that the example output.plist matches the ' +
                'temp.plist generated by the module', function () {
            var deferred = this.async(),
                input = path.normalize('tests/resources/input.plist'),
                output = path.normalize('tests/resources/output.plist'),
                temporary = path.normalize('tests/resources/temp.plist');

            // Test the deactivate function by checking the output produced
            // by the function against the expected test output.
            fileTypeSupport.deactivate(fs.createReadStream(input),
                fs.createWriteStream(temporary),
                deferred.rejectOnError(function onFinish() {
                    streamEqual(fs.createReadStream(output),
                        fs.createReadStream(temporary),
                        deferred.callback(function checkEqual(error, equal) {
                            expect(equal).to.be.true;
                        }));
                }));
        });
    });
});
Output:
PASS: main - file-type-support - should show that the example output.plist matches the temp.plist generated by the module (29ms)
1/1 tests passed
1/1 tests passed
Output (on failure):
FAIL: main - file-type-support - should show that the example output.plist matches the temp.plist generated by the module (30ms)
AssertionError: expected true to be false
AssertionError: expected true to be false
0/1 tests passed
0/1 tests passed
npm ERR! Test failed. See above for more details.
npm ERR! not ok code 0
Output (after removing excludeInstrumentation):
PASS: main - file-type-support - should show that the example output.plist matches the temp.plist generated by the module (25ms)
1/1 tests passed
1/1 tests passed
------------------------------------------+-----------+-----------+-----------+-----------+
File | % Stmts |% Branches | % Funcs | % Lines |
------------------------------------------+-----------+-----------+-----------+-----------+
node_modules/intern/ | 70 | 50 | 100 | 70 |
chai.js | 70 | 50 | 100 | 70 |
node_modules/intern/lib/ | 79.71 | 42.86 | 72.22 | 79.71 |
Test.js | 79.71 | 42.86 | 72.22 | 79.71 |
node_modules/intern/lib/interfaces/ | 80 | 50 | 63.64 | 80 |
bdd.js | 100 | 100 | 100 | 100 |
tdd.js | 76.19 | 50 | 55.56 | 76.19 |
node_modules/intern/lib/reporters/ | 56.52 | 35 | 57.14 | 56.52 |
console.js | 56.52 | 35 | 57.14 | 56.52 |
node_modules/intern/node_modules/chai/ | 37.9 | 8.73 | 26.38 | 39.34 |
chai.js | 37.9 | 8.73 | 26.38 | 39.34 |
tests/unit/ | 100 | 100 | 100 | 100 |
file-type-support.js | 100 | 100 | 100 | 100 |
------------------------------------------+-----------+-----------+-----------+-----------+
All files | 42.14 | 11.35 | 33.45 | 43.63 |
------------------------------------------+-----------+-----------+-----------+-----------+
My module passes the test and I can make it fail, too. It just will not show up in the code coverage. I completed the tutorial hosted on GitHub without any problems.
I tried dissecting the Istanbul and Intern dependencies. I placed a console.log at the point where the files to be covered seem to pass through, but my module never goes through it. I have tried every variation of deferred.callback and deferred.rejectOnError with no difference to the code coverage.
Also, any feedback on my use of deferred.callback and deferred.rejectOnError will be greatly appreciated. I am still a little uncertain about their usage.
Thanks!
As of Intern 1.6, only require('vm').runInThisContext is hooked to add code coverage data, not require, so modules pulled in through plain CommonJS require are never instrumented. Instrumentation of require was added in Intern 2.0.
The use of callback/rejectOnError in the above code is correct.