Unable to install YugabyteDB on macOS 12.0.1 - yugabytedb

Following the standard Yugabyte install instructions for macOS, it fails to run on 12.0.1; it does run on macOS 11.
marcsair-m:logs Marc$ cat yugabyted.log
[yugabyted start] 2021-11-03 23:07:27,388 INFO: | 0.0s | cmd = start using config file: /Users/Marc/var/conf/yugabyted.conf (args.config=None)
[yugabyted start] 2021-11-03 23:07:27,389 INFO: | 0.0s | Found directory /Users/Marc/yugabyte-2.9.1.0/yugabyte-2.9.1.0/bin for file yb-admin
[yugabyted start] 2021-11-03 23:07:27,389 INFO: | 0.0s | Starting yugabyted...
[yugabyted start] 2021-11-03 23:07:27,391 INFO: | 0.0s | Daemon grandchild process begins execution.
[yugabyted start] 2021-11-03 23:07:27,392 INFO: | 0.0s | yugabyted started running with PID 21931.
[yugabyted start] 2021-11-03 23:07:27,392 INFO: | 0.0s | Changed RLIMIT_NOFILE from 2560 to 64000
[yugabyted start] 2021-11-03 23:07:27,393 INFO: | 0.0s | Found directory /Users/Marc/yugabyte-2.9.1.0/yugabyte-2.9.1.0/bin for file yb-master
[yugabyted start] 2021-11-03 23:07:27,393 INFO: | 0.0s | Found directory /Users/Marc/yugabyte-2.9.1.0/yugabyte-2.9.1.0/bin for file yb-tserver
[yugabyted start] 2021-11-03 23:07:27,394 INFO: | 0.0s | About to start master with cmd /Users/Marc/yugabyte-2.9.1.0/yugabyte-2.9.1.0/bin/yb-master --stop_on_parent_termination --undefok=stop_on_parent_termination --fs_data_dirs=/Users/Marc/var/data --webserver_interface=0.0.0.0 --metrics_snapshotter_tserver_metrics_whitelist=handler_latency_yb_tserver_TabletServerService_Read_count,handler_latency_yb_tserver_TabletServerService_Write_count,handler_latency_yb_tserver_TabletServerService_Read_sum,handler_latency_yb_tserver_TabletServerService_Write_sum,disk_usage,cpu_usage,node_up --yb_num_shards_per_tserver=1 --ysql_num_shards_per_tserver=1 --cluster_uuid=98ea4dc5-9cd3-4372-8229-3eacd9f44475 --rpc_bind_addresses=127.0.0.1:7100 --server_broadcast_addresses=127.0.0.1:7100 --replication_factor=1 --use_initial_sys_catalog_snapshot --server_dump_info_path=/Users/Marc/var/data/master-info --master_enable_metrics_snapshotter=true --webserver_port=7000 --default_memory_limit_to_ram_ratio=0.35 --instance_uuid_override=00150954eb114f8a82269730e122f288 --master_addresses=127.0.0.1:7100
[yugabyted start] 2021-11-03 23:07:27,396 INFO: | 0.0s | master started running with PID 21932.
[yugabyted start] 2021-11-03 23:07:27,397 ERROR: | 0.0s | Failed to create symlink from /Users/Marc/var/data/yb-data/master/logs to /Users/Marc/var/logs/master
[yugabyted start] 2021-11-03 23:07:27,398 INFO: | 0.0s | Waiting for master
[yugabyted start] 2021-11-03 23:07:27,398 INFO: | 0.0s | run_process: cmd: [u'/Users/Marc/yugabyte-2.9.1.0/yugabyte-2.9.1.0/bin/yb-admin', u'--init_master_addrs', u'127.0.0.1:7100', u'list_all_masters']
[yugabyted start] 2021-11-03 23:07:27,899 INFO: | 0.5s | run_process returned -11:
OUT >>
<< ERR >>
<<
[yugabyted start] 2021-11-03 23:07:28,400 ERROR: | 1.0s | Failed waiting for yb-master... process died.
[yugabyted start] 2021-11-03 23:07:28,402 ERROR: | 1.0s | Failed to setup master. Exception: Traceback (most recent call last):
File "./bin/yugabyted", line 972, in setup_master
if retry_op(self.wait_master, timeout):
File "./bin/yugabyted", line 2654, in retry_op
return func()
File "./bin/yugabyted", line 929, in wait_master
raise RuntimeError("process died unexpectedly.")
RuntimeError: process died unexpectedly.
For more information, check the logs in /Users/Marc/var/logs
[yugabyted start] 2021-11-03 23:07:28,403 INFO: | 1.0s | Shutting down...
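The key clue in the log above is `run_process returned -11`: in Python's subprocess handling, a negative return code means the child was killed by a signal, and signal 11 is SIGSEGV, so yb-admin is crashing with a segmentation fault rather than returning an error. A minimal sketch of that mapping (the crashing child here is a throwaway stand-in, not the real yb-admin):

```python
import signal
import subprocess

# In subprocess, a negative returncode of -N means the child was killed by
# signal N. This throwaway child raises SIGSEGV against itself, standing in
# for a crashing binary like yb-admin.
child = "import os, signal; os.kill(os.getpid(), signal.SIGSEGV)"
proc = subprocess.run(["python3", "-c", child])

print(proc.returncode)                      # -11 on POSIX systems
print(proc.returncode == -signal.SIGSEGV)   # True: signal 11, a segfault
```

So the yugabyted wrapper is working; the native yb-admin binary itself is segfaulting on macOS 12, which is why no stdout or stderr appears between the OUT/ERR markers.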

Related

ENOTTY error when executing pm2 from a pm2 process

Let us say I have this very simple Python command that wraps a call to "pm2 ls":
python3 -c "import subprocess; subprocess.run(['pm2', 'ls'])"
Now, let us run this same command, but within a pm2 start:
pm2 start -n "test" "python3 -c \"import subprocess; subprocess.run(['pm2', 'ls'])\""
If I check the pm2 logs, I can see it fails with the message below (output of pm2 log test):
0|test | child_process.js:159
0|test | p.open(fd);
0|test | ^
0|test |
0|test | Error: ENOTTY: inappropriate ioctl for device, uv_pipe_open
0|test | at Object._forkChild (child_process.js:159:5)
0|test | at setupChildProcessIpcChannel (internal/bootstrap/pre_execution.js:356:30)
0|test | at prepareMainThreadExecution (internal/bootstrap/pre_execution.js:53:3)
0|test | at internal/main/run_main_module.js:7:1 {
0|test | errno: -25,
0|test | code: 'ENOTTY',
0|test | syscall: 'uv_pipe_open'
0|test | }
Now, if I replace subprocess.run() with os.spawnlp() as below:
pm2 start -n test2 "python -c \"import os; os.spawnlp(os.P_WAIT, 'pm2', 'pm2', 'ls')\""
It runs fine.
My questions are:
1. What is the difference between subprocess.run() and os.spawnlp() in this context?
2. Is there a way to make this work with subprocess.run()?
By the way I use:
python 3.9.10
pm2 5.1.1
node 14.17.5
macOS 13.1
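One plausible explanation (an assumption, not verified against pm2's internals): subprocess.run() closes all inherited file descriptors above 2 by default (close_fds=True), while os.spawnlp() leaves them open. pm2 hands its child an IPC pipe and advertises the fd number via the NODE_CHANNEL_FD environment variable; if the grandchild no longer has that fd open, node's uv_pipe_open fails with ENOTTY. The fd-inheritance difference can be demonstrated without pm2 at all:

```python
import os
import subprocess

# A pipe fd made inheritable, standing in for pm2's IPC channel.
r, w = os.pipe()
os.set_inheritable(r, True)

# The child simply checks whether fd `r` is still open.
probe = f"import os, sys; os.fstat({r}); sys.exit(0)"

# Default close_fds=True: the fd is closed in the child, os.fstat raises.
closed = subprocess.run(["python3", "-c", probe]).returncode

# close_fds=False behaves like os.spawnlp(): the inheritable fd survives.
kept = subprocess.run(["python3", "-c", probe], close_fds=False).returncode

print(closed != 0, kept == 0)  # True True
```

If that is the cause, two possible workarounds (again assumptions to verify): pass close_fds=False to subprocess.run(), or spawn pm2 with an env dict that omits NODE_CHANNEL_FD so node never attempts to open the channel.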

How to fix "RuntimeError: CUDA error: out of memory"

I have successfully trained on one GPU, but it does not work with multiple GPUs. I checked the code; it just sets some values in a map and then carries out multi-GPU training via calls such as torch.distributed.barrier.
I ran with the following settings, but it failed even with batch size = 1.
docker exec -e NVIDIA_VISIBLE_DEVICES=0,1,2,3 -it jy /bin/bash
os.environ["CUDA_VISIBLE_DEVICES"] = '0,1,2,3'
GPU usage:
|===============================+======================+======================|
| 0 GeForce RTX 208... On | 00000000:3D:00.0 Off | N/A |
| 24% 26C P8 21W / 250W | 8MiB / 11019MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 1 GeForce RTX 208... On | 00000000:3E:00.0 Off | N/A |
| 25% 27C P8 2W / 250W | 8MiB / 11019MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 2 GeForce RTX 208... On | 00000000:40:00.0 Off | N/A |
| 25% 25C P8 20W / 250W | 8MiB / 11019MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
| 3 GeForce RTX 208... On | 00000000:41:00.0 Off | N/A |
| 26% 25C P8 15W / 250W | 8MiB / 11019MiB | 0% Default |
| | | N/A |
+-------------------------------+----------------------+----------------------+
The error output:
/root/anaconda3/envs/pytorch171/lib/python3.7/site-packages/torch/distributed/launch.py:186: FutureWarning: The module torch.distributed.launch is deprecated
and will be removed in future. Use torchrun.
Note that --use_env is set by default in torchrun.
If your script expects `--local_rank` argument to be set, please
change it to read from `os.environ['LOCAL_RANK']` instead. See
https://pytorch.org/docs/stable/distributed.html#launch-utility for
further instructions
FutureWarning,
WARNING:torch.distributed.run:
*****************************************
Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
*****************************************
| distributed init (rank 2): env://
| distributed init (rank 1): env://
| distributed init (rank 3): env://
| distributed init (rank 0): env://
Traceback (most recent call last):
File "main_track.py", line 398, in <module>
main(args)
File "main_track.py", line 159, in main
utils.init_distributed_mode(args)
File "/jy/TransTrack/util/misc.py", line 459, in init_distributed_mode
torch.distributed.barrier()
File "/root/anaconda3/envs/pytorch171/lib/python3.7/site-packages/torch/distributed/distributed_c10d.py", line 2709, in barrier
work = default_pg.barrier(opts=opts)
RuntimeError: CUDA error: out of memory
CUDA kernel errors might be asynchronously reported at some other API call,so the stacktrace below might be incorrect.
For debugging consider passing CUDA_LAUNCH_BLOCKING=1.
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 355 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 357 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 358 closing signal SIGTERM
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: 1) local_rank: 1 (pid: 356) of binary: /root/anaconda3/envs/pytorch171/bin/python3
Traceback (most recent call last):
File "/root/anaconda3/envs/pytorch171/lib/python3.7/runpy.py", line 193, in _run_module_as_main
"__main__", mod_spec)
File "/root/anaconda3/envs/pytorch171/lib/python3.7/runpy.py", line 85, in _run_code
exec(code, run_globals)
File "/root/anaconda3/envs/pytorch171/lib/python3.7/site-packages/torch/distributed/launch.py", line 193, in <module>
main()
File "/root/anaconda3/envs/pytorch171/lib/python3.7/site-packages/torch/distributed/launch.py", line 189, in main
launch(args)
File "/root/anaconda3/envs/pytorch171/lib/python3.7/site-packages/torch/distributed/launch.py", line 174, in launch
run(args)
File "/root/anaconda3/envs/pytorch171/lib/python3.7/site-packages/torch/distributed/run.py", line 713, in run
)(*cmd_args)
File "/root/anaconda3/envs/pytorch171/lib/python3.7/site-packages/torch/distributed/launcher/api.py", line 131, in __call__
return launch_agent(self._config, self._entrypoint, list(args))
File "/root/anaconda3/envs/pytorch171/lib/python3.7/site-packages/torch/distributed/launcher/api.py", line 261, in launch_agent
failures=result.failures,
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
============================================================
main_track.py FAILED
------------------------------------------------------------
Failures:
<NO_OTHER_FAILURES>
------------------------------------------------------------
Root Cause (first observed failure):
[0]:
time : 2022-10-13_08:54:25
host : 2f923a848f88
rank : 1 (local_rank: 1)
exitcode : 1 (pid: 356)
error_file: <N/A>
traceback : To enable traceback see: https://pytorch.org/docs/stable/elastic/errors.html
============================================================
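One thing worth checking (an assumption based on the snippet above, since the full script is not shown): CUDA_VISIBLE_DEVICES only takes effect if it is set before CUDA is first initialized in the process, and with torch.distributed.launch/torchrun each worker should pin itself to its own GPU from LOCAL_RANK before calling barrier(); otherwise every rank can land on GPU 0 and exhaust its memory regardless of batch size. A minimal sketch of the intended ordering (the torch calls are left as comments so the sketch stays runnable without a GPU):

```python
import os

# Must happen before the first CUDA call anywhere in the process,
# ideally before `import torch` is executed at all.
os.environ.setdefault("CUDA_VISIBLE_DEVICES", "0,1,2,3")

# torchrun / torch.distributed.launch export LOCAL_RANK per worker process.
local_rank = int(os.environ.get("LOCAL_RANK", "0"))

# With a CUDA build of torch available, each rank would then run:
#   import torch
#   torch.cuda.set_device(local_rank)            # pin this process to one GPU
#   torch.distributed.init_process_group("nccl")
#   torch.distributed.barrier()                  # allocates on the pinned device
print(local_rank)
```

The deprecation warning in the log also suggests reading the rank from os.environ['LOCAL_RANK'] rather than a --local_rank argument when moving to torchrun.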

Is the following synthesizable?

Hi, I am trying to create a Verilog register that drives its value onto the bus only when the write signal is high; otherwise its output is high impedance. Is the following synthesizable?
module R(data_from_bus,data_to_bus,clk,read,write);
input [7:0]data_from_bus;
input clk,read,write;
output reg[7:0] data_to_bus;
reg[7:0] r_reg;
always @(posedge clk)
begin
if (read==1)
r_reg<=data_from_bus;
end
always @(write)
begin
if (write==1)
data_to_bus=r_reg;
else
data_to_bus=8'bz;
end
endmodule
Yes, it is synthesizable, but it is not necessarily doing what you want because of the questionable coding style.
Here's a better (safer) version:
module R(data_from_bus, data_to_bus, clk, read, write);
  input [7:0] data_from_bus;
  input clk, read, write;
  output [7:0] data_to_bus;
  reg [7:0] r_reg;

  always @(posedge clk) begin
    if (read)
      r_reg <= data_from_bus;
    else
      r_reg <= r_reg;
  end

  wire [7:0] r_reg_wire;
  assign r_reg_wire = r_reg;
  assign data_to_bus = write ? r_reg_wire : 8'bz;
endmodule
The main problem with the version you posted is that there is no else branch for the first non-blocking assignment (if (read == 1)).
This might result in inferring a latch (though tools are usually smart enough to handle it implicitly), which behaves the same as a flip-flop in simulation but will mess with timing in a real-life deployment.
A really good approach is to use always_ff for register assignments, always_comb for combinational logic, and always_latch for an intended latch (which is rarely used apart from really fishy timing cases such as clock gating); but these keywords are only supported in SystemVerilog.
Yes.
Here is the result of synthesizing the posted code in the free online tools available at the EDA Playground website, using Mentor Precision.
Please add r_reg to the sensitivity list for the combinational logic to ensure the simulation and synthesis results agree. Use always @(*) to accomplish the same thing with a wildcard-style approach.
Synthesis ran and produced no errors.
The log is shown below.
The last part of the log is a post synthesis Verilog netlist.
Note the tool used the FDRE primitive to implement the register bits.
To repeat this process, see the reference design at:
https://www.edaplayground.com/x/2BmJ
Copy the reference design to your EDA Playground account (assuming you have one; you should, it's free and helpful) using the copy button.
Paste the design you want to synthesize into the design.v tab.
Run it by clicking the run button.
Log file
[2022-05-08 23:57:07 UTC] precision -shell -file run.do -fileargs "design.sv" && sed 's-$-<br>-g' precision.v > tmp.html && echo '<!DOCTYPE html> <html> <head> <style> body {font-family: monospace;} </style> </head> <body>' > tmp2.html && echo '</body> </html> ' > tmp3.html && cat tmp2.html tmp.html tmp3.html > precision.html
precision: Setting MGC_HOME to /usr/share/precision/Mgc_home ...
precision: Executing on platform: Derived from Red Hat Enterprise Linux 7.1 (Source) -- 5.4.0-107-generic -- x86_64
// Precision RTL Synthesis 64-bit 2021.1.0.4 (Production Release) Tue Jul 20 01:22:31 PDT 2021
//
// Copyright (c) Mentor Graphics Corporation, 1996-2021, All Rights Reserved.
// Portions copyright 1991-2008 Compuware Corporation
// UNPUBLISHED, LICENSED SOFTWARE.
// CONFIDENTIAL AND PROPRIETARY INFORMATION WHICH IS THE
// PROPERTY OF MENTOR GRAPHICS CORPORATION OR ITS LICENSORS
//
// Running on Linux runner@eaa22c631d4a #121-Ubuntu SMP Thu Mar 24 16:04:27 UTC 2022 5.4.0-107-generic x86_64
//
// Start time Sun May 8 19:57:09 2022
# -------------------------------------------------
# Info: [9569]: Logging session transcript to file /home/runner/precision.log
# Warning: [9508]: Results directory is not set. Use new_project, open_project, or set_results_dir.
# Info: [9577]: Input directory: /home/runner
# Info: [9572]: Moving session transcript to file /home/runner/precision.log
# Info: [9558]: Created project /home/runner/project_1.psp in folder /home/runner.
# Info: [9531]: Created directory: /home/runner/impl_1.
# Info: [9557]: Created implementation impl_1 in project /home/runner/project_1.psp.
# Info: [9578]: The Results Directory has been set to: /home/runner/impl_1/
# Info: [9569]: Logging project transcript to file /home/runner/impl_1/precision.log
# Info: [9569]: Logging suppressed messages transcript to file /home/runner/impl_1/precision.log.suppressed
# Info: [9552]: Activated implementation impl_1 in project /home/runner/project_1.psp.
# Info: [20026]: MultiProc: Precision will use a maximum of 8 logical processors.
# Info: [15302]: Setting up the design to use synthesis library "xca7.syn"
# Info: [585]: The global max fanout is currently set to 10000 for Xilinx - ARTIX-7.
# Info: [15328]: Setting Part to: "7A100TCSG324".
# Info: [15329]: Setting Process to: "1".
# Info: [7513]: The default input to Vivado place and route has been set to "Verilog".
# Info: [7512]: The place and route tool for current technology is Vivado.
# Info: [3052]: Decompressing file : /usr/share/precision/Mgc_home/pkgs/psr/techlibs/xca7.syn in /home/runner/impl_1/synlib.
# Info: [3022]: Reading file: /home/runner/impl_1/synlib/xca7.syn.
# Info: [645]: Loading library initialization file /usr/share/precision/Mgc_home/pkgs/psr/userware/xilinx_rename.tcl
# Info: [40000]: hdl-analyze, Release RTLC-Precision 2021a.12
# Info: [42003]: Starting analysis of files in library "work"
# Info: [41002]: Analyzing input file "/home/runner/design.sv" ...
# Info: [670]: Top module of the design is set to: R.
# Info: [668]: Current working directory: /home/runner/impl_1.
# Info: [40000]: RTLC-Driver, Release RTLC-Precision 2021a.12
# Info: [40000]: Last compiled on Jul 2 2021 08:23:33
# Info: [44512]: Initializing...
# Info: [44504]: Partitioning design ....
# Info: [40000]: RTLCompiler, Release RTLC-Precision 2021a.12
# Info: [40000]: Last compiled on Jul 2 2021 08:49:53
# Info: [44512]: Initializing...
# Info: [44522]: Root Module R: Pre-processing...
# Info: [44523]: Root Module R: Compiling...
# Warning: [45784]: "/home/runner/design.sv", line 11: Module R, Net(s) r_reg[7:0]: Although this signal is not part of the sensitivity list of this block, it is being read. This may lead to simulation mismatch.
# Info: [44842]: Compilation successfully completed.
# Info: [44856]: Total lines of RTL compiled: 17.
# Info: [44835]: Total CPU time for compilation: 0.0 secs.
# Info: [44513]: Overall running time for compilation: 1.0 secs.
# Info: [668]: Current working directory: /home/runner/impl_1.
# Info: [15334]: Doing rtl optimizations.
# Info: [671]: Finished compiling design.
# Info: [668]: Current working directory: /home/runner/impl_1.
# Info: [20026]: MultiProc: Precision will use a maximum of 8 logical processors.
# Info: [15002]: Optimizing design view:.work.R.INTERFACE
# Info: [15002]: Optimizing design view:.work.R.INTERFACE
# Info: [8010]: Gated clock transformations: Begin...
# Info: [8010]: Gated clock transformations: End...
# Info: [8053]: Added global buffer BUFGP for Port port:clk
# Info: [3027]: Writing file: /home/runner/impl_1/R.edf.
# Info: [3027]: Writing file: /home/runner/impl_1/R.xdc.
# Info: -- Writing file /home/runner/impl_1/R.tcl
# Info: [3027]: Writing file: /home/runner/impl_1/R.v.
# Info: -- Writing file /home/runner/impl_1/R.tcl
# Info: [671]: Finished synthesizing design.
# Info: [11019]: Total CPU time for synthesis: 0.8 s secs.
# Info: [11020]: Overall running time for synthesis: 1.0 s secs.
# Info: /home/runner/impl_1/precision_tech.sdc
# Info: [3027]: Writing file: /home/runner/precision.v.
# Info: [3027]: Writing file: /home/runner/precision.xdc.
# Info: -- Writing file /home/runner/impl_1/R.tcl
# Info: Info, Command 'auto_write' finished successfully
# Info: Num File Type Path
# Info: --------------------------------------------------------
# Info: 0 /home/runner/impl_1/R_area.rep
# Info: 1 /home/runner/impl_1/R_con_rep.sdc
# Info: 2 /home/runner/impl_1/R_tech_con_rep.sdc
# Info: 3 /home/runner/impl_1/R_fsm.rep
# Info: 4 /home/runner/impl_1/R_dsp_modes.rep
# Info: 5 /home/runner/impl_1/R_ram_modes.rep
# Info: 6 /home/runner/impl_1/R_env.htm
# Info: 7 /home/runner/impl_1/R.edf
# Info: 8 /home/runner/impl_1/R.v
# Info: 9 /home/runner/impl_1/R.xdc
# Info: 10 /home/runner/impl_1/R.tcl
# Info: ***************************************************************
# Info: Device Utilization for 7A100TCSG324
# Info: ***************************************************************
# Info: Resource Used Avail Utilization
# Info: ---------------------------------------------------------------
# Info: IOs 19 210 9.05%
# Info: Global Buffers 1 32 3.12%
# Info: LUTs 1 63400 0.00%
# Info: CLB Slices 1 15850 0.01%
# Info: Dffs or Latches 8 126800 0.01%
# Info: Block RAMs 0 135 0.00%
# Info: DSP48E1s 0 240 0.00%
# Info: ---------------------------------------------------------------
# Info: *****************************************************
# Info: Library: work Cell: R View: INTERFACE
# Info: *****************************************************
# Info: Number of ports : 19
# Info: Number of nets : 40
# Info: Number of instances : 29
# Info: Number of references to this view : 0
# Info: Total accumulated area :
# Info: Number of Dffs or Latches : 8
# Info: Number of LUTs : 1
# Info: Number of Primitive LUTs : 1
# Info: Number of accumulated instances : 29
# Info: *****************************
# Info: IO Register Mapping Report
# Info: *****************************
# Info: Design: work.R.INTERFACE
# Info: +---------------------+-----------+----------+----------+----------+
# Info: | Port | Direction | INFF | OUTFF | TRIFF |
# Info: +---------------------+-----------+----------+----------+----------+
# Info: | data_from_bus(7) | Input | | | |
# Info: +---------------------+-----------+----------+----------+----------+
# Info: | data_from_bus(6) | Input | | | |
# Info: +---------------------+-----------+----------+----------+----------+
# Info: | data_from_bus(5) | Input | | | |
# Info: +---------------------+-----------+----------+----------+----------+
# Info: | data_from_bus(4) | Input | | | |
# Info: +---------------------+-----------+----------+----------+----------+
# Info: | data_from_bus(3) | Input | | | |
# Info: +---------------------+-----------+----------+----------+----------+
# Info: | data_from_bus(2) | Input | | | |
# Info: +---------------------+-----------+----------+----------+----------+
# Info: | data_from_bus(1) | Input | | | |
# Info: +---------------------+-----------+----------+----------+----------+
# Info: | data_from_bus(0) | Input | | | |
# Info: +---------------------+-----------+----------+----------+----------+
# Info: | data_to_bus(7) | Output | | | |
# Info: +---------------------+-----------+----------+----------+----------+
# Info: | data_to_bus(6) | Output | | | |
# Info: +---------------------+-----------+----------+----------+----------+
# Info: | data_to_bus(5) | Output | | | |
# Info: +---------------------+-----------+----------+----------+----------+
# Info: | data_to_bus(4) | Output | | | |
# Info: +---------------------+-----------+----------+----------+----------+
# Info: | data_to_bus(3) | Output | | | |
# Info: +---------------------+-----------+----------+----------+----------+
# Info: | data_to_bus(2) | Output | | | |
# Info: +---------------------+-----------+----------+----------+----------+
# Info: | data_to_bus(1) | Output | | | |
# Info: +---------------------+-----------+----------+----------+----------+
# Info: | data_to_bus(0) | Output | | | |
# Info: +---------------------+-----------+----------+----------+----------+
# Info: | clk | Input | | | |
# Info: +---------------------+-----------+----------+----------+----------+
# Info: | read | Input | | | |
# Info: +---------------------+-----------+----------+----------+----------+
# Info: | write | Input | | | |
# Info: +---------------------+-----------+----------+----------+----------+
# Info: Total registers mapped: 0
# Info: [12022]: Design has no timing constraint and no timing information.
# Info: //
# Info: // Verilog description for cell R,
# Info: // Sun May 8 19:57:18 2022
# Info: //
# Info: // Precision RTL Synthesis, 64-bit 2021.1.0.4//
# Info: module R ( data_from_bus, data_to_bus, clk, read, write ) ;
# Info: input [7:0]data_from_bus ;
# Info: output [7:0]data_to_bus ;
# Info: input clk ;
# Info: input read ;
# Info: input write ;
# Info: wire [7:0]data_from_bus_int;
# Info: wire clk_int;
# Info: wire read_int, write_int, nx57998z1, nx198;
# Info: wire [7:0]r_reg;
# Info: OBUFT \data_to_bus_triBus1(0) (.O (data_to_bus[0]), .I (r_reg[0]), .T (
# Info: nx57998z1)) ;
# Info: OBUFT \data_to_bus_triBus1(1) (.O (data_to_bus[1]), .I (r_reg[1]), .T (
# Info: nx57998z1)) ;
# Info: OBUFT \data_to_bus_triBus1(2) (.O (data_to_bus[2]), .I (r_reg[2]), .T (
# Info: nx57998z1)) ;
# Info: OBUFT \data_to_bus_triBus1(3) (.O (data_to_bus[3]), .I (r_reg[3]), .T (
# Info: nx57998z1)) ;
# Info: OBUFT \data_to_bus_triBus1(4) (.O (data_to_bus[4]), .I (r_reg[4]), .T (
# Info: nx57998z1)) ;
# Info: OBUFT \data_to_bus_triBus1(5) (.O (data_to_bus[5]), .I (r_reg[5]), .T (
# Info: nx57998z1)) ;
# Info: OBUFT \data_to_bus_triBus1(6) (.O (data_to_bus[6]), .I (r_reg[6]), .T (
# Info: nx57998z1)) ;
# Info: OBUFT \data_to_bus_triBus1(7) (.O (data_to_bus[7]), .I (r_reg[7]), .T (
# Info: nx57998z1)) ;
# Info: IBUF write_ibuf (.O (write_int), .I (write)) ;
# Info: IBUF read_ibuf (.O (read_int), .I (read)) ;
# Info: IBUF \data_from_bus_ibuf(0) (.O (data_from_bus_int[0]), .I (
# Info: data_from_bus[0])) ;
# Info: IBUF \data_from_bus_ibuf(1) (.O (data_from_bus_int[1]), .I (
# Info: data_from_bus[1])) ;
# Info: IBUF \data_from_bus_ibuf(2) (.O (data_from_bus_int[2]), .I (
# Info: data_from_bus[2])) ;
# Info: IBUF \data_from_bus_ibuf(3) (.O (data_from_bus_int[3]), .I (
# Info: data_from_bus[3])) ;
# Info: IBUF \data_from_bus_ibuf(4) (.O (data_from_bus_int[4]), .I (
# Info: data_from_bus[4])) ;
# Info: IBUF \data_from_bus_ibuf(5) (.O (data_from_bus_int[5]), .I (
# Info: data_from_bus[5])) ;
# Info: IBUF \data_from_bus_ibuf(6) (.O (data_from_bus_int[6]), .I (
# Info: data_from_bus[6])) ;
# Info: IBUF \data_from_bus_ibuf(7) (.O (data_from_bus_int[7]), .I (
# Info: data_from_bus[7])) ;
# Info: INV ix57998z1315 (.O (nx57998z1), .I (write_int)) ;
# Info: BUFGP clk_ibuf (.O (clk_int), .I (clk)) ;
# Info: GND ps_gnd (.G (nx198)) ;
# Info: FDRE \reg_r_reg(7) (.Q (r_reg[7]), .C (clk_int), .CE (read_int), .D (
# Info: data_from_bus_int[7]), .R (nx198)) ;
# Info: FDRE \reg_r_reg(6) (.Q (r_reg[6]), .C (clk_int), .CE (read_int), .D (
# Info: data_from_bus_int[6]), .R (nx198)) ;
# Info: FDRE \reg_r_reg(5) (.Q (r_reg[5]), .C (clk_int), .CE (read_int), .D (
# Info: data_from_bus_int[5]), .R (nx198)) ;
# Info: FDRE \reg_r_reg(4) (.Q (r_reg[4]), .C (clk_int), .CE (read_int), .D (
# Info: data_from_bus_int[4]), .R (nx198)) ;
# Info: FDRE \reg_r_reg(3) (.Q (r_reg[3]), .C (clk_int), .CE (read_int), .D (
# Info: data_from_bus_int[3]), .R (nx198)) ;
# Info: FDRE \reg_r_reg(2) (.Q (r_reg[2]), .C (clk_int), .CE (read_int), .D (
# Info: data_from_bus_int[2]), .R (nx198)) ;
# Info: FDRE \reg_r_reg(1) (.Q (r_reg[1]), .C (clk_int), .CE (read_int), .D (
# Info: data_from_bus_int[1]), .R (nx198)) ;
# Info: FDRE \reg_r_reg(0) (.Q (r_reg[0]), .C (clk_int), .CE (read_int), .D (
# Info: data_from_bus_int[0]), .R (nx198)) ;
# Info: endmodule

How to solve the "Operation not permitted: '/var/lib/pgadmin'" error in Laradock on Windows Subsystem for Linux?

I am using Laradock in my Laravel project for dockerizing with Nginx, Postgres, and pgAdmin. All the containers are running well except pgAdmin. Here is my error log:
pgadmin_1 | WARNING: Failed to set ACL on the directory containing the configuration database: [Errno 1] Operation not permitted: '/var/lib/pgadmin'
pgadmin_1 | Traceback (most recent call last):
pgadmin_1 | File "run_pgadmin.py", line 4, in <module>
pgadmin_1 | from pgAdmin4 import app
pgadmin_1 | File "/pgadmin4/pgAdmin4.py", line 92, in <module>
pgadmin_1 | app = create_app()
pgadmin_1 | File "/pgadmin4/pgadmin/__init__.py", line 241, in create_app
pgadmin_1 | create_app_data_directory(config)
pgadmin_1 | File "/pgadmin4/pgadmin/setup/data_directory.py", line 40, in create_app_data_directory
pgadmin_1 | _create_directory_if_not_exists(config.SESSION_DB_PATH)
pgadmin_1 | File "/pgadmin4/pgadmin/setup/data_directory.py", line 16, in _create_directory_if_not_exists
pgadmin_1 | os.mkdir(_path)
pgadmin_1 | PermissionError: [Errno 13] Permission denied: '/var/lib/pgadmin/sessions'
pgadmin_1 | sudo: setrlimit(RLIMIT_CORE): Operation not permitted
pgadmin_1 | [2020-06-07 11:48:43 +0000] [1] [INFO] Starting gunicorn 19.9.0
pgadmin_1 | [2020-06-07 11:48:43 +0000] [1] [INFO] Listening at: http://[::]:80 (1)
pgadmin_1 | [2020-06-07 11:48:43 +0000] [1] [INFO] Using worker: threads
pgadmin_1 | /usr/local/lib/python3.8/os.py:1023: RuntimeWarning: line buffering (buffering=1) isn't supported in binary mode, the default buffer size will be used
pgadmin_1 | return io.open(fd, *args, **kwargs)
pgadmin_1 | [2020-06-07 11:48:43 +0000] [83] [INFO] Booting worker with pid: 83
pgadmin_1 | [2020-06-07 11:48:44 +0000] [83] [ERROR] Exception in worker process
pgadmin_1 | Traceback (most recent call last):
pgadmin_1 | File "/usr/local/lib/python3.8/site-packages/gunicorn/arbiter.py", line 583, in spawn_worker
pgadmin_1 | worker.init_process()
pgadmin_1 | File "/usr/local/lib/python3.8/site-packages/gunicorn/workers/gthread.py", line 104, in init_process
pgadmin_1 | super(ThreadWorker, self).init_process()
pgadmin_1 | File "/usr/local/lib/python3.8/site-packages/gunicorn/workers/base.py", line 129, in init_process
pgadmin_1 | self.load_wsgi()
pgadmin_1 | File "/usr/local/lib/python3.8/site-packages/gunicorn/workers/base.py", line 138, in load_wsgi
pgadmin_1 | self.wsgi = self.app.wsgi()
pgadmin_1 | File "/usr/local/lib/python3.8/site-packages/gunicorn/app/base.py", line 67, in wsgi
pgadmin_1 | self.callable = self.load()
pgadmin_1 | File "/usr/local/lib/python3.8/site-packages/gunicorn/app/wsgiapp.py", line 52, in load
pgadmin_1 | return self.load_wsgiapp()
pgadmin_1 | File "/usr/local/lib/python3.8/site-packages/gunicorn/app/wsgiapp.py", line 41, in load_wsgiapp
pgadmin_1 | return util.import_app(self.app_uri)
pgadmin_1 | File "/usr/local/lib/python3.8/site-packages/gunicorn/util.py", line 350, in import_app
pgadmin_1 | __import__(module)
pgadmin_1 | File "/pgadmin4/run_pgadmin.py", line 4, in <module>
pgadmin_1 | from pgAdmin4 import app
pgadmin_1 | File "/pgadmin4/pgAdmin4.py", line 92, in <module>
pgadmin_1 | app = create_app()
pgadmin_1 | File "/pgadmin4/pgadmin/__init__.py", line 241, in create_app
pgadmin_1 | create_app_data_directory(config)
pgadmin_1 | File "/pgadmin4/pgadmin/setup/data_directory.py", line 40, in create_app_data_directory
pgadmin_1 | _create_directory_if_not_exists(config.SESSION_DB_PATH)
pgadmin_1 | File "/pgadmin4/pgadmin/setup/data_directory.py", line 16, in _create_directory_if_not_exists
pgadmin_1 | os.mkdir(_path)
pgadmin_1 | PermissionError: [Errno 13] Permission denied: '/var/lib/pgadmin/sessions'
pgadmin_1 | [2020-06-07 11:48:44 +0000] [83] [INFO] Worker exiting (pid: 83)
pgadmin_1 | WARNING: Failed to set ACL on the directory containing the configuration database: [Errno 1] Operation not permitted: '/var/lib/pgadmin'
pgadmin_1 | /usr/local/lib/python3.8/os.py:1023: RuntimeWarning: line buffering (buffering=1) isn't supported in binary mode, the default buffer size will be used
pgadmin_1 | return io.open(fd, *args, **kwargs)
pgadmin_1 | [2020-06-07 11:48:44 +0000] [1] [INFO] Shutting down: Master
pgadmin_1 | [2020-06-07 11:48:44 +0000] [1] [INFO] Reason: Worker failed to boot.
I have tried many ways to solve this problem, such as:
OSError: [Errno 13] Permission denied: '/var/lib/pgadmin'
https://www.pgadmin.org/docs/pgadmin4/latest/container_deployment.html
and some other GitHub issues and their solutions. I also ran the sudo chmod -R 777 ~/.laradock/data/pgadmin and sudo chmod -R 777 /var/lib/pgadmin commands to get the permissions, but I still get the same error log. Can you help me with this? I think others are also getting this error on their local machines.
Thanks 🙂
You may try this:
sudo chown -R 5050:5050 ~/.laradock/data/pgadmin
Then restart the container. This is because inside the container pgAdmin runs as:
uid=5050(pgadmin) gid=5050(pgadmin)
and the data directory is owned accordingly:
drwx------ 4 pgadmin pgadmin 56 Jan 27 08:25 pgadmin
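To see why the chown matters: the host directory is bind-mounted into the container, and Linux checks the numeric uid/gid, so the host folder must be owned by 5050:5050 even if no such user exists on the host. A small helper to verify ownership (the laradock path is taken from the question and may differ in your setup):

```python
import os

# uid/gid of the pgadmin user inside the dpage/pgadmin4 image, as shown
# by `id` in the container: uid=5050(pgadmin) gid=5050(pgadmin).
PGADMIN_UID = 5050
PGADMIN_GID = 5050

def owned_by_pgadmin(path: str) -> bool:
    """True if `path` exists and is owned by the container's pgadmin user."""
    try:
        st = os.stat(path)
    except FileNotFoundError:
        return False
    return st.st_uid == PGADMIN_UID and st.st_gid == PGADMIN_GID

# After `sudo chown -R 5050:5050 ~/.laradock/data/pgadmin`, this should
# print True for the bind-mounted directory (path from the question).
print(owned_by_pgadmin(os.path.expanduser("~/.laradock/data/pgadmin")))
```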
As others have noted above, I found that the Permission denied: '/var/lib/pgadmin/sessions' error in Docker comes down to the persistent local folder not having the correct user permissions.
After running sudo chown -R 5050:5050 ~/.laradock/data/pgadmin and restarting the container, the error below is no longer in my log:
PermissionError: [Errno 13] Permission denied:
A similar error happens when using Kubernetes and the pgadmin4 helm chart from https://github.com/rowanruseler/helm-charts.
The solution is to set:
VolumePermissions:
  enabled: true
even when persistence is not enabled. This way the /var/lib/pgadmin folder in the container is also assigned the correct permissions, and the pgadmin4.db database can be created correctly.
Assuming you already have a folder containing pgadmin4.db in your git repo, owned by a user other than pgadmin, you can do this:
postgres_interface:
  image: dpage/pgadmin4
  environment:
    - PGADMIN_DEFAULT_EMAIL=user@domain.com
    - PGADMIN_DEFAULT_PASSWORD=postgres
  ports:
    - "5050:80"
  user: root
  volumes:
    - ./env/local/pgadmin/pgadmin4.db:/pgadmin4.db
  entrypoint: /bin/sh -c "cp /pgadmin4.db /var/lib/pgadmin/pgadmin4.db && cd /pgadmin4 && /entrypoint.sh"
The only solution I can provide is to log in to the container with
docker-compose exec --user root pgadmin sh
and then
chmod 0777 /var/lib/pgadmin -R
It may be better to create your own Dockerfile from dpage/pgadmin4 and run these commands in advance.

Messages are getting stuck in Apache ActiveMQ master-slave configuration

I am facing problem of messages getting struck in activemq queue (with master slave configuration) and consumer is not able to consume messages when I pump around ~100 messages to different queues.
I have configured the ActiveMQ (v5.9.0) message broker on two separate Linux instances in master-slave mode. For the persistenceAdapter configuration, I have mounted the same NAS storage on both instances where the ActiveMQ server is running, at mount point '/mnt/nas'. The size of the NAS storage is 20 GB.
So my persistenceAdapter configuration looks like this:
<persistenceAdapter>
  <kahaDB directory="/mnt/nas" ignoreMissingJournalfiles="true" checkForCorruptJournalFiles="true" checksumJournalFiles="true"/>
</persistenceAdapter>
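One thing worth double-checking with this layout: the shared-file-system master/slave pattern relies on the mounted file system providing reliable exclusive file locking so the instances can elect a master, which NFSv3 and CIFS/SMB mounts often do not guarantee. A sketch of an /etc/fstab entry for an NFSv4 mount (server, export path, and options are illustrative, not from the thread):

```
nas151.example.com:/export/activemq  /mnt/nas  nfs4  rw,hard,timeo=600,retrans=2  0 0
```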
The systemUsage configuration on both ActiveMQ servers looks like this:
<systemUsage>
  <systemUsage>
    <memoryUsage>
      <memoryUsage percentOfJvmHeap="70"/>
    </memoryUsage>
    <storeUsage>
      <storeUsage limit="15 gb"/>
    </storeUsage>
    <tempUsage>
      <tempUsage limit="7 gb"/>
    </tempUsage>
  </systemUsage>
</systemUsage>
and I have enabled only the 'tcp' transport connector
<transportConnector name="openwire" uri="tcp://0.0.0.0:61616?maximumConnections=1000&amp;wireFormat.maxFrameSize=104857600"/>
When I look at the partition details for the mount point '/mnt/nas' on both ActiveMQ instances, the command df -k shows the following:
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/xvda2 102953264 5840280 91883264 6% /
tmpfs 967652 0 967652 0% /dev/shm
/dev/xvda1 253871 52511 188253 22% /boot
//nas151.service.softlayer.com/IBM278684-16
139328579072 56369051136 82959527936 41% /mnt/nas
Hence I see 41% of /mnt/nas is used
The problem is that when I start the ActiveMQ server (on both instances), I see the following messages in activemq.log:
**************** START ***************
2014-06-05 12:48:40,350 | INFO | PListStore:[/var/lib/apache-activemq-5.9.0/data/localhost/tmp_storage] started | org.apache.activemq.store.kahadb.plist.PListStoreImpl | main
2014-06-05 12:48:40,454 | INFO | JMX consoles can connect to service:jmx:rmi:///jndi/rmi://localhost:1099/jmxrmi | org.apache.activemq.broker.jmx.ManagementContext | JMX connector
2014-06-05 12:48:40,457 | INFO | Using Persistence Adapter: KahaDBPersistenceAdapter[/mnt/nas] | org.apache.activemq.broker.BrokerService | main
2014-06-05 12:48:40,612 | INFO | Corrupt journal records found in '/mnt/nas/db-1.log' between offsets: 8163..8209 | org.apache.activemq.store.kahadb.disk.journal.Journal | main
2014-06-05 12:48:40,613 | INFO | Corrupt journal records found in '/mnt/nas/db-1.log' between offsets: 12256..12327 | org.apache.activemq.store.kahadb.disk.journal.Journal | main
2014-06-05 12:48:40,649 | INFO | Corrupt journal records found in '/mnt/nas/db-1.log' between offsets: 20420..20585 | org.apache.activemq.store.kahadb.disk.journal.Journal | main
2014-06-05 12:48:40,650 | INFO | Corrupt journal records found in '/mnt/nas/db-1.log' between offsets: 28559..28749 | org.apache.activemq.store.kahadb.disk.journal.Journal | main
2014-06-05 12:48:40,651 | INFO | Corrupt journal records found in '/mnt/nas/db-1.log' between offsets: 32677..32842 | org.apache.activemq.store.kahadb.disk.journal.Journal | main
2014-06-05 12:48:40,652 | INFO | Corrupt journal records found in '/mnt/nas/db-1.log' between offsets: 36770..36960 | org.apache.activemq.store.kahadb.disk.journal.Journal | main
2014-06-05 12:48:40,655 | INFO | Corrupt journal records found in '/mnt/nas/db-1.log' between offsets: 49099..49264 | org.apache.activemq.store.kahadb.disk.journal.Journal | main
2014-06-05 12:48:40,657 | INFO | Corrupt journal records found in '/mnt/nas/db-1.log' between offsets: 61403..61474 | org.apache.activemq.store.kahadb.disk.journal.Journal | main
2014-06-05 12:48:40,658 | INFO | Corrupt journal records found in '/mnt/nas/db-1.log' between offsets: 65521..65567 | org.apache.activemq.store.kahadb.disk.journal.Journal | main
2014-06-05 12:48:40,659 | INFO | Corrupt journal records found in '/mnt/nas/db-1.log' between offsets: 69614..69685 | org.apache.activemq.store.kahadb.disk.journal.Journal | main
2014-06-05 12:48:40,660 | INFO | Corrupt journal records found in '/mnt/nas/db-1.log' between offsets: 77778..77824 | org.apache.activemq.store.kahadb.disk.journal.Journal | main
2014-06-05 12:48:41,543 | INFO | KahaDB is version 5 | org.apache.activemq.store.kahadb.MessageDatabase | main
2014-06-05 12:48:41,592 | INFO | Recovering from the journal ... | org.apache.activemq.store.kahadb.MessageDatabase | main
2014-06-05 12:48:41,604 | INFO | Recovery replayed 66 operations from the journal in 0.028 seconds. | org.apache.activemq.store.kahadb.MessageDatabase | main
2014-06-05 12:48:41,772 | INFO | Apache ActiveMQ 5.9.0 (localhost, ID:10.106.99.101-60576-1401972521638-0:1) is starting | org.apache.activemq.broker.BrokerService | main
2014-06-05 12:48:41,892 | INFO | Listening for connections at: tcp://10.106.99.101:61616?maximumConnections=1000&wireFormat.maxFrameSize=104857600 | org.apache.activemq.transport.TransportServerThreadSupport | main
2014-06-05 12:48:41,893 | INFO | Connector openwire started | org.apache.activemq.broker.TransportConnector | main
2014-06-05 12:48:41,893 | INFO | Apache ActiveMQ 5.9.0 (localhost, ID:10.106.99.101-60576-1401972521638-0:1) started | org.apache.activemq.broker.BrokerService | main
2014-06-05 12:48:41,893 | INFO | For help or more information please see: http://activemq.apache.org | org.apache.activemq.broker.BrokerService | main
2014-06-05 12:48:41,897 | WARN | Store limit is 2048 mb, whilst the data directory: /mnt/nas only has 0 mb of usable space - resetting to maximum available disk space: 0 mb | org.apache.activemq.broker.BrokerService | main
2014-06-05 12:48:41,897 | ERROR | Store limit is 0 mb, whilst the max journal file size for the store is: 32 mb, the store will not accept any data when used. | org.apache.activemq.broker.BrokerService | main
******************** END **************
I see 'Corrupt journal records found in /mnt/nas/db-1.log'. This appears on every restart, even if I delete that file before restarting.
I have set the recovery flags, but this log entry still shows up on every restart.
Another problem: even though my NAS storage is 20 GB, the log says '/mnt/nas only has 0 mb of usable space'. This is really weird; I am not sure why 0 MB is reported as available to ActiveMQ.
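For what it's worth, that warning is derived from the usable space the OS reports for the data directory, and network mounts (your df output shows a //server/share CIFS-style mount) are reportedly prone to misreporting it. A quick way to check what the mount actually reports (the path argument is illustrative; run it against /mnt/nas on your instances):

```shell
#!/bin/sh
# Print the usable space the OS reports for a directory --
# the same figure the broker's store-limit check depends on.
dir="${1:-.}"
avail_kb=$(df -Pk "$dir" | awk 'NR==2 {print $4}')
echo "usable space on $dir: ${avail_kb} KB"
```

If this prints 0 for /mnt/nas while df -k shows plenty of free space, the mount (rather than the broker configuration) is the thing to investigate.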
I would appreciate suggestions on why this is happening, and on better configurations to avoid messages getting stuck in the queue.
Thanks
Raagu
