I've created a simple nbd-server instance that shares a single 1GB file, which I created with:
dd if=/dev/zero of=nbd.file bs=1048576 count=1024
The nbd.conf file looks like this:
[generic]
[export1]
exportname = /Users/michael/Downloads/nbd-3.21/nbd.file
I start the server on my Mac as follows:
nbd-server -d -C /Users/michael/Downloads/nbd-3.21/nbd.conf
But when I try to connect the Linux client I get an error:
$ nbd-client -p -b 4096 nbd-server.local -N export1 /dev/nbd0
Negotiation: ..size = 1024MB
Error: Failed to setup device, check dmesg
Exiting.
There is nothing in dmesg and I can't find any documentation on exactly what went wrong. The server output looks like this, showing no obvious errors:
** Message: 20:05:55.820: virtstyle ipliteral
** Message: 20:05:55.820: connect from 192.168.1.105, assigned file is /Users/michael/Downloads/nbd-3.21/nbd.file
** Message: 20:05:55.820: No authorization file, granting access.
** Message: 20:05:55.820: Size of exported file/device is 1073741824
** Message: 20:05:55.821: Starting to serve
Error: Connection dropped: Connection reset by peer
Exiting.
All of these error messages lead me to believe the issue is on the client: it doesn't like something, so it terminates the connection. If I daemonize the server it happily lets the client try to reconnect.
I thought perhaps I should have more lines in my config file, but I don't see any obvious optional config items that would help. I thought perhaps there was some minimum file size, so I bumped it up from 16MB to 1GB.
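For illustration, the kind of extra lines I mean would look something like this; the optional keys (readonly, authfile) are ones I've seen mentioned for nbd-server and the authfile path is made up, so treat this as a sketch rather than verified syntax:
[generic]
[export1]
exportname = /Users/michael/Downloads/nbd-3.21/nbd.file
readonly = false
authfile = /Users/michael/Downloads/nbd-3.21/allowed-clients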
What does the error "Failed to setup device" mean? How can I troubleshoot what is going wrong or fix it?
Try to run the client as root: sudo nbd-client ...
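With the same options as the failing command above, that would be something like:
sudo nbd-client -p -b 4096 nbd-server.local -N export1 /dev/nbd0
If /dev/nbd0 doesn't exist at all, the nbd kernel module may also need loading first (sudo modprobe nbd); that part is a guess about the client machine, since "Failed to setup device" generally means the client couldn't configure /dev/nbd0.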
I'm using Debian and connecting to it with Bitvise SSH; I configured everything in /etc/ssh/sshd_config, but when I try to connect using root and its password I get this error:
The SSH session has terminated with error. Reason: FlowSocketReader: Error receiving bytes. Windows error 10054: An existing connection was forcibly closed by the remote host.
Is there any way to fix this? I've been searching all night; please help.
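For context, the sshd_config directives that normally govern root password logins look like this (I'm not certain these exact lines are what I have in my file):
PermitRootLogin yes
PasswordAuthentication yes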
NWJS Version: v0.32.1 (also tested on several 0.31 versions and on 0.32 beta)
Operating System: Linux only. Tested on Ubuntu 16.04 LTS x64 and Elementary OS 0.4.1 Loki x64
Expected behaviour
The program must respond.
Actual behaviour
The program stops responding, and if I interrupt it (Ctrl+C) I see these messages in the console:
[14399:14410:0802/142733.750943:ERROR:broker_posix.cc(104)] Error sending sync broker message: Broken pipe (32)
[14399:14410:0802/142733.761023:ERROR:broker_posix.cc(104)] Error sending sync broker message: Broken pipe (32)
[14399:14409:0802/142733.761203:ERROR:broker_posix.cc(104)] Error sending sync broker message: Broken pipe (32)
[14399:14409:0802/142733.761402:ERROR:broker_posix.cc(104)] Error sending sync broker message: Broken pipe (32)
[14399:14409:0802/142733.761801:ERROR:broker_posix.cc(104)] Error sending sync broker message: Broken pipe (32)
[14399:14410:0802/142733.761896:ERROR:broker_posix.cc(104)] Error sending sync broker message: Broken pipe (32)
But if I use nwjs-builder-phoenix and run the SDK build, it works correctly.
How to reproduce
Prepare
Download and extract the fake data to parse: https://github.com/trofivan/myq-jobs-archive-parser/releases/download/v0.1.0/fake-data-big.zip
git clone https://github.com/trofivan/myq-jobs-archive-parser.git
cd myq-jobs-archive-parser
npm i
Works correctly (only with the nwjs-builder-phoenix SDK)
npm start
In the opened window click Select folder to parse
Select folder with fake data
Wait some time while the data is parsed.
Use filters
Close the program
Does not work correctly
npm run build.dist
cd dist/myq-jobs-archive-parser-0.1.3-linux-x64/
./myq-jobs-archive-parser
Click Select folder to parse and wait some time.
The program does not respond.
Close program (Ctrl + C or [x] on the window) and see console output:
Gtk-Message: GtkDialog mapped without a transient parent. This is discouraged.
[16012:16023:0802/145405.677499:ERROR:broker_posix.cc(104)] Error sending sync broker message: Broken pipe (32)
[16012:16023:0802/145405.678176:ERROR:broker_posix.cc(104)] Error sending sync broker message: Broken pipe (32)
[16012:16012:0802/145405.679403:ERROR:broker_posix.cc(104)] Error sending sync broker message: Broken pipe (32)
[16012:16012:0802/145405.679587:ERROR:command_buffer_proxy_impl.cc(100)] ContextResult::kFatalFailure: AllocateAndMapSharedMemory failed
Run killall on the executable to stop the process.
PS:
If I download NW.js from the official website, it does not work correctly with the SDK version either.
On Linux, the program stops responding when it gets to the Web Worker: https://github.com/trofivan/myq-jobs-archive-parser/blob/master/src/middlewares/fetchJobs.js
While the program is not responding I cannot open Chrome DevTools to get more info.
I'm facing the following error:
[root# main]# p4 submit -c 758557
Submitting change 758557.
Locking 19 files ...
edit x.sh#88
edit y.Linux#8
edit z.Linux#8
open for write: /tmp/tmp.27775.40: Read-only file system
SSL receive failed.
read: Connection reset by peer: Connection reset by peer
Perforce client error:
SSL receive failed.
read: Connection reset by peer: Connection reset by peer
Expanding on Bryan's comment: your server's /tmp directory is read-only, and that's where the server writes spec files by default, so nothing that involves editing a spec (including submit) will work.
Your server admin should either make /tmp writable to the p4d process (maybe p4d got restarted as the wrong user and lost its access?) or set TMP to a writable directory:
https://www.perforce.com/perforce/r15.1/manuals/cmdref/TMP.html
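For example, on the server host that could look something like the following (the directory, the perforce account name, and the p4d flags are placeholders, not taken from your setup):
mkdir -p /var/p4tmp
chown perforce:perforce /var/p4tmp
export TMP=/var/p4tmp
p4d -r /perforce/root -p ssl:1666 -d
The important part is that TMP ends up in the environment of the p4d process itself (or its startup script), not just in your client shell.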
I am fairly new to Linux (and brand new to Chef) and have run into an issue when setting up my Chef server. I am trying to create an admin user with the command
sudo chef-server-ctl user-create admin Admin Ladmin admin@example.com examplepass -f admin.pem
but I keep getting this error:
ERROR: Connection refused connecting...
ERROR: Connection refused connecting to https://127.0.0.1/users/, retry 5/5
ERROR: Network Error: Connection refused - Connection refused connecting to https://..., giving up
Check your knife configuration and network settings
I also noticed that when I ran chef-server-ctl I got this output:
[2016-12-21T13:24:59-05:00] ERROR: Running exception handlers
Running handlers complete
[2016-12-21T13:24:59-05:00] ERROR: Exception handlers complete
Chef Client failed. 0 resources updated in 01 seconds
[2016-12-21T13:24:59-05:00] FATAL: Stacktrace dumped to /var/opt/opscode/local-mode-cache/chef-stacktrace.out
[2016-12-21T13:24:59-05:00] FATAL: Please provide the contents of the stacktrace.out file if you file a bug report
[2016-12-21T13:24:59-05:00] FATAL: Chef::Exceptions::CannotDetermineNodeName: Unable to determine node name: configure node_name or configure the system's hostname and fqdn
I read that this error is due to a prerequisite mistake, but I'm uncertain what it means or how to fix it, so any input would be greatly appreciated.
Your server does not have a valid FQDN (aka full host name). You'll have to fix this before installing Chef server.
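For example (chef.example.com and the address are placeholders), on a systemd-based system something along these lines sets the FQDN, after which chef-server-ctl reconfigure can be re-run:
sudo hostnamectl set-hostname chef.example.com
echo '203.0.113.10 chef.example.com chef' | sudo tee -a /etc/hosts
hostname -f    # should now print chef.example.com
sudo chef-server-ctl reconfigure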
My laptop is running Ubuntu 14.04 LTS. I have a WD MyCloud that I back up to with rsync. The rsync calls usually end with an error.
Some things I have observed: the MyCloud machine has a REST API, and I can see that someone has tried to hack it. As far as I can tell, the attempt did not succeed.
My backup does this from my laptop, to the MyCloud:
/usr/bin/rsync -a -z -v /home/me/ root@192.168.1.82:/shares/me
Usually I get:
rsync: connection unexpectedly closed (93239 bytes received so far) [sender]
rsync error: error in rsync protocol data stream (code 12) at io.c(226) [sender=3.1.0]
and the rsync process is returning 12.
Oddly, if I use a second -v, the process usually succeeds. Why would that be? Is it going too fast without the second -v?
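For reference, the variant that usually succeeds is just the same command with the extra -v:
/usr/bin/rsync -a -z -v -v /home/me/ root@192.168.1.82:/shares/me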
Something I saw worries me. Again, this is the output on my laptop, running rsync to copy to the MyCloud:
*** stack smashing detected ***: <unknown> terminated
rsync: connection unexpectedly closed (11387 bytes received so far) [sender]
rsync error: error in rsync protocol data stream (code 12) at io.c(226) [sender=3.1.0]
and it returns 12. Ouch!
So, does this mean my laptop is infected with something, presumably Heartbleed or something like it?
On my laptop:
$ rsync -h
rsync version 3.1.0 protocol version 31
...
On the MyCloud:
# rsync -h
rsync version 3.0.9 protocol version 30
...
Am I just screwed? Do I just need to update the stuff on the MyCloud? I have updated my Ubuntu laptop several times. Did that not prevent an infection on this machine?
Open to any suggestions.