curl error 23 using vagrant on OSX 10.10.5 - linux

I'm taking a Udacity class on Linux shell commands. I'm on OSX 10.10.5, and I installed Ubuntu in VirtualBox (VirtualBox 5.0.20 for OS X hosts amd64 from https://www.virtualbox.org/wiki/Downloads, as instructed).
The class uses this Ubuntu VM, together with Vagrant (from https://releases.hashicorp.com/vagrant/1.8.1/vagrant_1.8.1.dmg), to connect my terminal to the VM.
The VM is used for file consistency, since commands build on each other throughout the class.
One task (which is minor...and not graded) is to run the following command
curl http://udacity.github.io/ud595-shell/stuff.zip -o things.zip
This command should hit the internet and download a zip file named "things.zip". It fails for me with the output below:
vagrant@vagrant-ubuntu-trusty-64:/$ curl http://udacity.github.io/ud595-shell/stuff.zip -o things.zip
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0Warning: Failed to create the file things.zip: Permission denied
0 144k 0 796 0 0 3241 0 0:00:45 --:--:-- 0:00:45 3235
curl: (23) Failed writing body (0 != 796)
vagrant@vagrant-ubuntu-trusty-64:/$
So I get error 23 and am not sure why. (Googling has failed to answer this.) I'm guessing there is a permission error, but I'm not sure where to start.

You're missing write permission on the directory you're in when downloading the file. You can check this by changing to a directory like /tmp and trying it there.
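For example (a minimal sketch; it assumes the same prompt and URL as the session above), you can confirm the problem and then retry from a writable directory:
vagrant@vagrant-ubuntu-trusty-64:/$ ls -ld .    # / is owned by root, so the vagrant user cannot write here
vagrant@vagrant-ubuntu-trusty-64:/$ cd /tmp     # or cd ~ for the home directory
vagrant@vagrant-ubuntu-trusty-64:/tmp$ curl http://udacity.github.io/ud595-shell/stuff.zip -o things.zip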

Related

HyperLedger Fabric Samples - Downloading Platform specific fabric binaries on windows 10

I am installing the fabric-samples from https://hyperledger-fabric.readthedocs.io/en/release-2.0/install.html on Windows 10.
When I run the command curl -sSL https://raw.githubusercontent.com/hyperledger/fabric/master/scripts/bootstrap.sh | bash -s, I get an error while downloading the binaries. Please find the terminal dump below. I am running this from the fabric-samples folder where the repository was cloned.
Clone hyperledger/fabric-samples repo
===> Checking out v2.0.0 of hyperledger/fabric-samples
error: pathspec 'v2.0.0' did not match any file(s) known to git
Pull Hyperledger Fabric binaries
===> Downloading version 2.0.0 platform specific fabric binaries
===> Downloading: https://github.com/hyperledger/fabric/releases/download/v2.0.0/hyperledger-fabric-msys_nt-10.0-18362-amd64-2.0.0.tar.gz
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0tar.exe: Error opening archive: Failed to open '\\.\tape0'
100 9 100 9 0 0 9 0 0:00:01 0:00:01 --:--:-- 4
(23) Failed writing body
==> There was an error downloading the binary file.
------> 2.0.0 platform specific fabric binary is not available to download <----
But when I run this in Git-Cmd (as suggested in HyperLedger-downloading Platform-specific Binaries on Windows 10), I get the following:
Clone hyperledger/fabric-samples repo
===> Checking out v2.0.0 of hyperledger/fabric-samples
error: pathspec 'v2.0.0' did not match any file(s) known to git
Pull Hyperledger Fabric binaries
===> Downloading version 2.0.0 platform specific fabric binaries
===> Downloading: https://github.com/hyperledger/fabric/releases/download/v2.0.0/hyperledger-fabric-windows-amd64-2.0.0.tar.gz
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0tar.exe: Error opening archive: Failed to open '\\.\tape0'
100 656 100 656 0 0 328 0 0:00:02 0:00:02 --:--:-- 244
0 64.3M 0 16943 0 0 2420 0 7:44:40 0:00:07 7:44:33 3929
curl: (23) Failed writing body (0 != 16384)
==> There was an error downloading the binary file.
------> 2.0.0 platform specific fabric binary is not available to download <----
I created the /bin and /config folders in the fabric-samples folder. Please let me know what I am doing wrong here.
Thanks in advance.
Try specifying the latest Fabric version explicitly:
curl -sSL https://raw.githubusercontent.com/hyperledger/fabric/master/scripts/bootstrap.sh | bash -s 2.1.0
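If that still fails, the bootstrap script has historically also accepted the Fabric CA version as a second positional argument. This is a hedged sketch (2.1.0 and 1.4.6 are example versions; check the script's usage output for the exact argument order):
curl -sSL https://raw.githubusercontent.com/hyperledger/fabric/master/scripts/bootstrap.sh | bash -s -- 2.1.0 1.4.6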

Installing sdkman on cygwin

I'm having trouble installing sdkman on cygwin. The instructions say to run the command:
curl "https://get.sdkman.io" | bash
When I run this command in cygwin I get this:
$ curl "https://get.sdkman.io" | bash
% Total % Received % Xferd Average Speed Time Time Time
Dload Upload Total Spent Left
0 0 0 0 0 0 0 0 --:--:-- 0:03:56 --:--:--
Nothing is downloaded and the connection eventually times out.
Any ideas why this is happening? Maybe related to the firewall or something not being properly installed? Any solution ideas would be helpful, thanks.
UPDATE:
I tried wget as well and got this:
$ wget https://get.sdkman.io
--2018-02-09 13:29:47-- https://get.sdkman.io/
Resolving get.sdkman.io (get.sdkman.io)... 162.243.83.58
Connecting to get.sdkman.io (get.sdkman.io)|162.243.83.58|:443... failed:
Connection timed out.
Retrying.
--2018-02-09 13:30:09-- (try: 2) https://get.sdkman.io/
Connecting to get.sdkman.io (get.sdkman.io)|162.243.83.58|:443... failed:
Connection timed out.
Retrying.
--2018-02-09 13:30:32-- (try: 3) https://get.sdkman.io/
Connecting to get.sdkman.io (get.sdkman.io)|162.243.83.58|:443... failed:
Connection timed out.
Retrying.
--2018-02-09 13:30:56-- (try: 4) https://get.sdkman.io/
Connecting to get.sdkman.io (get.sdkman.io)|162.243.83.58|:443...
Thank you, it ended up being a network issue. My company's firewall is very selective, and I was able to circumvent it with a wireless modem stick.
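If switching networks is not an option, curl can also be pointed at a corporate proxy. A minimal sketch, assuming a hypothetical proxy at proxy.example.com port 8080; exporting https_proxy also covers the downloads the SDKMAN installer performs itself:
export https_proxy=http://proxy.example.com:8080
curl -x http://proxy.example.com:8080 "https://get.sdkman.io" | bash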

How do I access a USB drive on a OSX host from inside a docker container?

I have an application that I eventually want to run on a cloud computing service (e.g., AWS or Google Cloud), packaged inside a docker image. The application needs to run in the cloud because it is designed to process large data files, but before I actually deploy, I'd like to test it first on a local laptop, using a single large data file that I've stored (for test and development purposes) on an external USB drive.
My development machine is an OSX laptop, and I'm using a recent version of docker:
stachyra> uname -a
Darwin Andrews-MacBook-Pro-76.local 14.5.0 Darwin Kernel Version 14.5.0: Tue Sep 1 21:23:09 PDT 2015; root:xnu-2782.50.1~1/RELEASE_X86_64 x86_64
stachyra> docker --version
Docker version 1.10.2, build c3959b1
OSX has mounted my external USB drive, device /dev/disk2s2, as /Volumes/MGR DATA:
stachyra> df
Filesystem 512-blocks Used Available Capacity iused ifree %iused Mounted on
/dev/disk1 974770480 435721376 538537104 45% 54529170 67317138 45% /
devfs 375 375 0 100% 650 0 100% /dev
map -hosts 0 0 0 100% 0 0 100% /net
map auto_home 0 0 0 100% 0 0 100% /home
/dev/disk2s2 3906291632 3869523640 36767992 100% 483690453 4595999 99% /Volumes/MGR DATA
/dev/disk3s1 196608 193160 3448 99% 24143 431 98% /Volumes/VirtualBox
stachyra> diskutil list
/dev/disk0
#: TYPE NAME SIZE IDENTIFIER
0: GUID_partition_scheme *500.3 GB disk0
1: EFI EFI 209.7 MB disk0s1
2: Apple_CoreStorage 499.4 GB disk0s2
3: Apple_Boot Recovery HD 650.0 MB disk0s3
/dev/disk1
#: TYPE NAME SIZE IDENTIFIER
0: Apple_HFS Macintosh HD *499.1 GB disk1
Logical Volume on disk0s2
DB70B91A-3B57-4C82-A758-C4BDEA4160FD
Unlocked Encrypted
/dev/disk2
#: TYPE NAME SIZE IDENTIFIER
0: GUID_partition_scheme *2.0 TB disk2
1: EFI EFI 209.7 MB disk2s1
2: Apple_HFS MGR DATA 2.0 TB disk2s2
/dev/disk3
#: TYPE NAME SIZE IDENTIFIER
0: GUID_partition_scheme *100.7 MB disk3
1: Apple_HFS VirtualBox 100.7 MB disk3s1
and it should also be noted, the drive has several directories and data which are visible inside it, at least when viewed directly through OSX:
stachyra> ls -l /Volumes/MGR\ DATA
total 0
drwxr-xr-x 6 stachyra staff 204 Apr 14 2015 1000genomes
drwxr-xr-x 5 stachyra staff 170 Oct 12 17:41 GIAB
drwxr-xr-x 4 stachyra staff 136 Apr 28 2015 genome_browser_tracks
drwxr-xr-x 24 stachyra staff 816 Oct 6 14:00 mitty
I have tried to follow the advice from this question, which describes how to mount a USB drive in docker when docker is running within a linux host. But my local laptop is OSX, not linux, so it doesn't seem to work.
Explicitly, when attempting to follow the advice of the accepted answer, I obtain the following result:
stachyra> docker run -i -t --privileged -v /dev/disk2s2:/dev/foo ubuntu bash
root@8da7b492a707:/# uname -a
Linux 8da7b492a707 4.1.18-boot2docker #1 SMP Sat Feb 20 08:24:27 UTC 2016 x86_64 x86_64 x86_64 GNU/Linux
root@8da7b492a707:/# ls -l /dev/foo
total 0
root@8da7b492a707:/#
Based upon the output, one can see that docker does indeed launch a Linux container correctly, and it also creates a volume /dev/foo inside the container as requested, but the actual contents of the USB drive are not accessible at that location: the ls -l command shows no files or directories there.
I also tried the second method described in an alternate response to the same question, and that fails even worse:
stachyra> docker run -i -t --device=/dev/disk2s2 ubuntu bash
docker: Error response from daemon: error gathering device information while adding custom device "/dev/disk2s2": not a device node.
stachyra>
I have found another discussion thread on Stack Overflow which suggests that raw USB access is handled quite differently in OSX than in Linux, which I suspect is the reason why both of the above attempts at USB access are failing.
But, what should I actually do about it? That is to say, what is the correct sequence of actions or commands to allow docker to access a USB device mounted on an OSX host, rather than linux?
I was finally able to access my USB drive at /var/media inside my container by using the machine-diskutil.sh script mentioned in warmoverflow's comment, like so:
machine-diskutil.sh mount my-machine-name /Volumes/my-usb-drive
and then starting the container like so:
docker run -v /Volumes/my-usb-drive:/var/media -it my/image:latest bash
Because I had previously tried to add /Volumes/my-usb-drive as a shared folder manually in VirtualBox, I first got this error:
Error: The shared folder /Volumes/Seagate already exists on the
docker machine, please unmount it first.
So I removed it manually and re-ran the machine-diskutil.sh mount command without any problems. Great stuff!
As per @pgayvallet's comment on GitHub:
As the daemon runs inside a VM in Docker Desktop, it is not possible to actually share a mac host device with the container inside the VM, and this will most definitely never be possible.
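That limitation applies to raw device access. Since OSX has already mounted the drive as a filesystem, file-level access usually works with an ordinary bind mount of the mounted volume. A minimal sketch using the volume name from the question (on current Docker Desktop, the /Volumes path may first need to be listed under Preferences > Resources > File Sharing):
docker run -it -v "/Volumes/MGR DATA":/data ubuntu bash
# inside the container, the drive's directories (1000genomes, GIAB, ...) should then be visible:
ls -l /data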

Incomplete downloads over curl-sftp and curl-scp with curl 7.46.0 package

I am trying to do an SCP/SFTP download with curl.
My version of curl is 7.46.0 and libssh2 is 1.6.0.
I am getting this error:
/tmp # curl -kv scp://10.12.16.16//var/lib/tftpboot/lokesh/sw_data.img -u root:root123 -o /tmp/sw_data.img
Trying 10.12.16.16...
% Total % Received % Xferd Average Speed Time Time Time Current
Dload Upload Total Spent Left Speed
0 0 0 0 0 0 0 0 --:--:-- --:--:-- --:--:-- 0
Connected to 10.12.16.16 (10.12.16.16) port 22 (#0)
SSH MD5 fingerprint: 72b2890c38059d8d3509dfacbd69c5cb
SSH authentication methods available: publickey,gssapi-keyex,gssapi-with-mic,password
Using SSH public key file '(nil)'
Using SSH private key file ''
SSH public key authentication failed: Unable to extract public key from private key file: Unable to open private key file
Initialized password authentication
Authentication complete
SSH CONNECT phase done
{ [16384 bytes data]
42 7680k 42 3232k 0 0 5900 0 0:22:12 0:09:20 0:12:52 0
Only a partial amount of the file is downloaded, as the speed drops to zero.
How do I resolve this problem?
Do I need to add some options while cross-compiling curl?
I tried with curl over SFTP too, and the same partial-download problem is observed.
With the older curl 7.37.1, this issue is not seen.
Please help me in this regard.
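One hedged diagnostic suggestion, not a confirmed fix: compare how the two builds were configured, since the behaviour differs between 7.37.1 and 7.46.0, and try resuming the stalled transfer. curl -V prints the libssh2 version and the protocol list each binary was compiled with, and -C - asks curl to continue a partially downloaded file:
curl -V    # check the libssh2 version and that scp/sftp appear under Protocols
curl -u root:root123 -C - -o /tmp/sw_data.img sftp://10.12.16.16//var/lib/tftpboot/lokesh/sw_data.img    # resume over SFTP; SCP transfers generally cannot be resumed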

Unable to open ZFS pool because of incorrect pool configuration on Linux

After rebooting, ZFS was unable to open my main pool. The exact error that I'm getting is:
"The pool metadata is corrupted and the pool cannot be opened"
When I checked the pool configuration using zpool status (from the recovery console), the configuration it displayed was all wrong; it listed several drives that I had just moved to other drives.
Currently the output of zpool status looks like this:
pool: pool
state: FAULTED
status: The pool metadata is corrupted and the pool cannot be opened.
action: Destroy and re-create the pool from
a backup source.
see: http://zfsonlinux.org/msg/ZFS-8000-72
scan: resilvered 511G in 12h39m with 0 errors on Sat Mar 14 06:14:34 2015
config:
NAME STATE READ WRITE CKSUM
pool FAULTED 0 0 1 corrupted data
raidz1-0 ONLINE 0 0 8
wwn-0x50014ee05943ce36-part4 ONLINE 0 0 0
wwn-0x50014ee05943ce36-part5 ONLINE 0 0 1
wwn-0x50014ee05943ce36-part6 ONLINE 0 0 0
wwn-0x50014ee05943ce36-part7 ONLINE 0 0 0
wwn-0x50014ee05943ce36-part8 ONLINE 0 0 1
My question is: why did the configuration suddenly revert to an old state after rebooting? (I checked with zpool status before rebooting, and everything was fine with no errors reported.) And how can I tell ZFS to correct the configuration so that I can open the pool and get my data back?
I'm running Fedora 20, kernel 3.18.7-100
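A hedged sketch of what is commonly tried in this situation (assuming the pool is named pool, as in the status output above): re-import the pool using stable by-id device paths, and, if the metadata is damaged, attempt a rewind import with -F; the -n flag first performs a dry run without modifying anything:
zpool export pool                            # only needed if the pool is currently (partially) imported
zpool import -d /dev/disk/by-id              # rescan devices and show the configuration ZFS finds on disk
zpool import -d /dev/disk/by-id -Fn pool     # dry run: would discarding the last few transactions make it importable?
zpool import -d /dev/disk/by-id -F pool      # attempt the actual rewind import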
