Installing on a Seagate NAS - Linux

I recently purchased a 4 TB Seagate Central NAS. On a whim, I tried to SSH into the drive just to see what would happen. It worked. I did a little digging and found that it is running MontaVista Linux.
I thought I would install screen and a few other helpful small programs.
When I tried to install screen, it complained that there was no C compiler in $PATH. I suspect there is no C compiler on the drive at all.
I'm wondering if this is something that I can address and how I would do it. I'm also wondering if there's a way to make it easier to install things on this embedded version of Linux.

If you SSH into your Seagate and type
uname -m
you will see that the processor is armv6 or armv7, which means it will only run a Linux distro or program built for that architecture.
I am not desperate enough to test installing a Raspberry Pi distro, but I believe it should work, since the Raspberry Pi is also ARM-based.
The reason I don't think it is worth it at the moment is that I have no replacement for this storage, and I don't want to risk "bricking" it.
Also, I see no advantage: this is a basic ARM processor, and loading a full distro would just bog the system down.
The basic MontaVista embedded system is enough to do the work I expect from this NAS.
If you want to run something like a Plex server on your NAS, forget about an ARM processor and look for something more powerful.
Here are the limits of my Seagate Central 4 TB:
uname -a
Linux Seagate-3F0580 2.6.35.13-cavm1.whitney-econa.whitney-econa #1 Wed Sep 16 15:47:59 PDT 2015 armv6l GNU/Linux
free
256 MB RAM
1 GB swap
df -h
Filesystem Size Used Avail Use% Mounted on
rootfs 1008M 461M 497M 49% /
/dev/root 1008M 461M 497M 49% /
devtmpfs 125M 125M 0 100% /dev
/dev/sda5 1008M 159M 799M 17% /usr/config
none 125M 125M 0 100% /dev
/dev/sda7 1008M 282M 676M 30% /Update
/dev/mapper/vg1-lv1 3.7T 1.7T 2.0T 46% /Data
/dev/mapper/vg1-lv1 3.7T 1.7T 2.0T 46% /shares/Public
/dev/mapper/vg1-lv1 3.7T 1.7T 2.0T 46% /shares/mauricio
/dev/mapper/vg1-lv1 3.7T 1.7T 2.0T 46% /shares/mauricio.tm
/dev/mapper/vg1-lv1 3.7T 1.7T 2.0T 46% /shares/audrey
/dev/mapper/vg1-lv1 3.7T 1.7T 2.0T 46% /shares/audrey.tm
tmpfs 125M 11M 114M 9% /var/volatile
tmpfs 125M 0 125M 0% /dev/shm
tmpfs 125M 0 125M 0% /media/ram
/dev/mapper/vg1-lv1 3.7T 1.7T 2.0T 46% /Data/anonftp/Public
/dev/sdb1 932G 876G 57G 94% /shares/usb1-1share1
I do have a 1 TB USB hard drive attached to this Seagate Central.
You can see that the root file system has almost 500 MB used out of roughly 1 GB, so the distro is really small. (If DSL crosses your mind, forget it; there is no ARM build of that distro, unless you install it on a PC and build an ARM kernel for it... again, a wasted effort.)
A second partition holds the configs (/dev/sda5, mounted on /usr/config).
A third partition holds updates (/dev/sda7, mounted on /Update).
And the shares are LVM volumes.
To install applications, you should use the compiler on your Linux computer to cross-compile them for the ARM architecture, copy them to the Seagate over SSH, debug them on the Seagate, and then, once they are completely debugged and ready to use, install them permanently on the system.
Nobody said it is an easy task :)
https://support.mvista.com/DocViewer/pro_5_1intro.html
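To make that workflow concrete, here is a rough, hedged sketch for a Debian/Ubuntu desktop using the gcc-arm-linux-gnueabi package; the screen version/URL and the NAS address 192.168.1.50 are placeholders, and a real build may also need cross-compiled dependencies (e.g. ncurses) and a toolchain matched to the NAS's old glibc:
# on the desktop: install a soft-float ARM cross-toolchain
sudo apt-get install gcc-arm-linux-gnueabi
# fetch and unpack the program you want to build
wget https://ftp.gnu.org/gnu/screen/screen-4.9.1.tar.gz
tar xzf screen-4.9.1.tar.gz
cd screen-4.9.1
# tell configure to cross-compile for ARM
./configure --host=arm-linux-gnueabi CC=arm-linux-gnueabi-gcc
make
# copy the binary to the NAS over SSH and test it there before installing it permanently
scp screen root@192.168.1.50:/Data/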

Related

Arch Linux, Docker "No space left on device."

All of the similar questions I see are resolved by cleaning up images, containers, or orphaned volumes, but I am not having any of those problems. I even completely deleted /var/lib/docker, and still nothing.
Relevant output:
[N] ⋊> ~/W/W/cocagne on master ⨯ docker run --rm -v /var/run/docker.sock:/var/run/docker.sock:ro -v /var/lib/docker:/var/lib/docker martin/docker-cleanup-volumes
docker: Error response from daemon: Container command '/usr/local/bin/docker-cleanup-volumes.sh' not found or does not exist..
[N] ⋊> ~/W/W/cocagne on master ⨯ docker-compose build 11:56:23
mysql uses an image, skipping
Building vitess
Traceback (most recent call last):
File "/usr/bin/docker-compose", line 9, in <module>
load_entry_point('docker-compose==1.7.1', 'console_scripts', 'docker-compose')()
File "/usr/lib/python3.5/site-packages/compose/cli/main.py", line 58, in main
command()
File "/usr/lib/python3.5/site-packages/compose/cli/main.py", line 109, in perform_command
handler(command, command_options)
File "/usr/lib/python3.5/site-packages/compose/cli/main.py", line 213, in build
force_rm=bool(options.get('--force-rm', False)))
File "/usr/lib/python3.5/site-packages/compose/project.py", line 300, in build
service.build(no_cache, pull, force_rm)
File "/usr/lib/python3.5/site-packages/compose/service.py", line 718, in build
buildargs=build_opts.get('args', None),
File "/usr/lib/python3.5/site-packages/docker/api/build.py", line 54, in build
path, exclude=exclude, dockerfile=dockerfile, gzip=gzip
File "/usr/lib/python3.5/site-packages/docker/utils/utils.py", line 103, in tar
t.add(os.path.join(root, path), arcname=path, recursive=False)
File "/usr/lib/python3.5/tarfile.py", line 1938, in add
self.addfile(tarinfo, f)
File "/usr/lib/python3.5/tarfile.py", line 1966, in addfile
copyfileobj(fileobj, self.fileobj, tarinfo.size)
File "/usr/lib/python3.5/tarfile.py", line 244, in copyfileobj
dst.write(buf)
File "/usr/lib/python3.5/tempfile.py", line 483, in func_wrapper
return func(*args, **kwargs)
OSError: [Errno 28] No space left on device
[I] ⋊> ~/W/W/cocagne on master ⨯ docker ps -a 11:56:30
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
[I] ⋊> ~/W/W/cocagne on master ⨯ docker ps -q 11:57:25
[I] ⋊> ~/W/W/cocagne on master ⨯ docker image -q 11:57:28
docker: 'image' is not a docker command.
See 'docker --help'.
[I] ⋊> ~/W/W/cocagne on master ⨯ docker images -a 11:57:39
REPOSITORY TAG IMAGE ID CREATED SIZE
martin/docker-cleanup-volumes latest 8c41df286c03 12 weeks ago 22.12 MB
[I] ⋊> ~/W/W/cocagne on master ⨯ df -h 11:57:41
Filesystem Size Used Avail Use% Mounted on
dev 3.9G 0 3.9G 0% /dev
run 3.9G 832K 3.9G 1% /run
/dev/sda4 27G 9.1G 17G 36% /
tmpfs 3.9G 64M 3.8G 2% /dev/shm
tmpfs 3.9G 0 3.9G 0% /sys/fs/cgroup
tmpfs 3.9G 32K 3.9G 1% /tmp
/dev/sda1 42G 16G 25G 39% /home
/dev/sda2 42G 9.4G 30G 24% /var
/dev/sda5 1.3G 32M 1.3G 3% /boot
tmpfs 790M 12K 790M 1% /run/user/1000
[I] ⋊> ~/W/W/cocagne on master ⨯ 11:57:54
docker info
[I] ⋊> ~/W/W/cocagne on master ⨯ docker info 12:01:55
Containers: 0
Running: 0
Paused: 0
Stopped: 0
Images: 0
Server Version: 1.11.2
Storage Driver: devicemapper
Pool Name: docker-8:2-2359321-pool
Pool Blocksize: 65.54 kB
Base Device Size: 10.74 GB
Backing Filesystem: xfs
Data file: /dev/loop0
Metadata file: /dev/loop1
Data Space Used: 11.8 MB
Data Space Total: 107.4 GB
Data Space Available: 34.57 GB
Metadata Space Used: 581.6 kB
Metadata Space Total: 2.147 GB
Metadata Space Available: 2.147 GB
Udev Sync Supported: true
Deferred Removal Enabled: false
Deferred Deletion Enabled: false
Deferred Deleted Device Count: 0
Data loop file: /var/lib/docker/devicemapper/devicemapper/data
WARNING: Usage of loopback devices is strongly discouraged for production use. Either use `--storage-opt dm.thinpooldev` or use `--storage-opt dm.no_warn_on_loop_devices=true` to suppress this warning.
Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
Library Version: 1.02.131 (2016-07-15)
Logging Driver: json-file
Cgroup Driver: cgroupfs
Plugins:
Volume: local
Network: null host bridge
Kernel Version: 4.6.4-1-ARCH
Operating System: Arch Linux
OSType: linux
Architecture: x86_64
CPUs: 4
Total Memory: 7.706 GiB
Name: crockford
ID: HO2U:ELWR:LDB3:PMEY:5YOJ:D7YJ:2HJA:PVYG:45K2:J6KI:D6WO:4RUE
Docker Root Dir: /var/lib/docker
Debug mode (client): false
Debug mode (server): false
Registry: https://index.docker.io/v1/
One thing that makes my issue a little different (and where I think the root of the issue comes from):
Before I created a separate partition for /var, it was on my root partition, which eventually maxed out. Once it maxed out, I shrunk my home partition, created a /var partition, copied my root's /var to my new /var, and removed my old /var. But for some reason, Docker still thinks it's maxed out? I have no idea.
I also tried to reinstall Docker with sudo pacman -S docker, but nothing changed.
Edit: I just tried it with a normal docker build . and that works fine. Somehow docker-compose thinks it's out of memory though?
The Python stack trace from docker-compose indicates that it cannot create a temporary file, which suggests there is no space left in /tmp.
In the comments, the OP mentioned that his RAM is completely consumed when he runs docker-compose. Given that, and the fact that /tmp is mounted on tmpfs, it makes sense that there is no space left for Python/docker-compose to create temporary files in /tmp.
The possible solutions are:
Temporarily switch the default location for temporary files by setting one of the environment variables TMPDIR, TEMP, or TMP (ref: the Python tempfile docs); a sketch of this follows below.
Change /tmp to not use tmpfs and use disk instead.
Increase the amount of RAM/swap space on your machine (you can increase swap without messing with your partitions). tmpfs is backed by volatile storage, which means both RAM and swap should theoretically work.
Note that most of these options will slow your application down, especially if the docker build process is I/O heavy.
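A minimal sketch of the first option above, assuming a scratch directory under your home is acceptable (the ~/tmp path is just an illustrative choice):
# create a temp directory on disk rather than under the tmpfs-backed /tmp
mkdir -p ~/tmp
# Python's tempfile module honours TMPDIR, so docker-compose will use it for this run
TMPDIR=~/tmp docker-compose build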
Try this:
mount -o remount,size=4G,noatime /tmp

docker with node and mongo using massive amounts of disk space

I have a Docker container set up with a MEAN stack, and my disk usage is increasing really quickly. I have a 30 GB droplet on DigitalOcean and am at 93% disk usage, up from 67% three days ago, and I have not installed anything since then; I have just loaded a few thousand database records.
I probably have 20k or 30k documents in my database, and they are not very large, yet my disk usage increases by about 5% every day. A much larger data set storing the same data lived in Postgres before this and I never had issues with storage space; I was on a 20 GB droplet before being forced to upsize after deploying my Mongo application.
I deleted most of my old images and non-running containers.
running docker ps -s yields the following:
My main web container shows 8.456 kB (virtual 817.4 MB)
My mongo container shows 0 B (virtual 314.4 MB)
Image sizes reported by docker images:
VIRTUAL SIZE
848 MB
643 MB
743.6 MB
317 MB
636.7 MB
Filesystem use with df:
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/vda1 30830588 26957740 2283712 93% /
Docker Compose command to start mongo:
command: mongod --dbpath /data/db --smallfiles --quiet --logpath=/dev/null
I ran: sudo du -h / | grep -P '^[0-9\.]+G'
2.1G /var/lib/docker/aufs/diff
1.3G /var/lib/docker/aufs/mnt
3.4G /var/lib/docker/aufs
16G /var/lib/docker/containers/9fab4a607a0155bd61b2b73...5399e
16G /var/lib/docker/containers
20G /var/lib/docker
21G /var/lib
21G /var
Is mongo really this big of a data hog, or am I (hopefully) missing something?
I figured out my issue: I had a 20 GB log file from the Docker container!
I guess I will be exploring options to limit this based on the docs: https://docs.docker.com/engine/admin/logging/overview/
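For reference, a hedged sketch of one way to cap a container's json-file log via --log-opt (the 10m/3 values are illustrative, not something from the question; recent Compose file formats accept the same options under a logging: key):
# keep at most three rotated log files of 10 MB each for this container
docker run -d --log-opt max-size=10m --log-opt max-file=3 \
    mongo mongod --dbpath /data/db --smallfiles --quiet --logpath=/dev/null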

Installing Node on Arduino Yun

I have an 8 GB microSD card and I want to install Node on an Arduino Yun using opkg, but I receive the following message:
root@Arduino:~# opkg update
Downloading http://downloads.arduino.cc/openwrtyun/1/packages/Packages.gz.
Updated list of available packages in /var/opkg-lists/attitude_adjustment.
Downloading http://downloads.arduino.cc/openwrtyun/1/packages/Packages.sig.
Signature check passed.
root@Arduino:~# opkg install node
Installing node (v0.10.33-1) to root...
Collected errors:
* verify_pkg_installable: Only have 2040kb available on filesystem /overlay, pkg node needs 3016
* opkg_install_cmd: Cannot install package node.
root@Arduino:~# df -h
Filesystem Size Used Available Use% Mounted on
rootfs 6.9M 4.9M 2.0M 71% /
/dev/root 7.5M 7.5M 0 100% /rom
tmpfs 29.8M 480.0K 29.4M 2% /tmp
tmpfs 512.0K 0 512.0K 0% /dev
/dev/mtdblock3 6.9M 4.9M 2.0M 71% /overlay
overlayfs:/overlay 6.9M 4.9M 2.0M 71% /
/dev/sda1 7.3G 46.8M 7.2G 1% /mnt/sda1
Is there a way to install it?
Try this
opkg -d /dev/sda1 install node
It seems the installation is being attempted on /overlay, which is only 6.9 MB in size.
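Note that on stock opkg, the -d/--dest flag expects a destination name declared in /etc/opkg.conf rather than a raw device path, so if the command above does not work as-is, here is a sketch of the configuration it may need (the destination name and the /mnt/sda1 mount point are assumptions based on the df output above):
# add to /etc/opkg.conf: declare the SD card as an install destination
dest sda1 /mnt/sda1
# then install to it by destination name
opkg -d sda1 install node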
You must first expand your rootfs with 'overlay-only -i ', as described here:
http://www.arduino.org/learning/tutorials/advanced-guides/how-to-enable-the-auto-expanding-of-your-file-system-using-pivot-overlay

Docker run, no space left on device

[root@host ~]# docker run 9e7de9390856
Timestamp: 2015-06-15 22:20:58.8367035 +1000 AEST
Code: System error
Message: [/usr/bin/tar -xf /var/lib/docker/tmp/cde0f3a199597ac2e18e7efc7744c84a6c134adef31fb88b6982a8732f45efa5090033894/_tmp.tar -C /var/lib/docker/devicemapper/mnt/cde0f3a199597ac2e18e7efc7744c84a6c134adef31fb88b6982a8732f45efa5/rootfs/tmp .] failed: /usr/bin/tar: ./was/fixPack/7.0.0-WS-WASSDK-LinuxX64-FP0000027.pak: Wrote only 4608 of 10240 bytes
/usr/bin/tar: ./was/fixPack/wasFixPackInstallResponseFile: Cannot write: No space left on device
.
.
Cannot write: No spaFATA[0141] Error response from daemon: : exit status 2
df -h:
Filesystem Size Used Avail Use% Mounted on
/dev/xvda2 6.0G 3.2G 2.9G 52% /
devtmpfs 1.9G 0 1.9G 0% /dev
tmpfs 1.8G 0 1.8G 0% /dev/shm
tmpfs 1.8G 17M 1.8G 1% /run
tmpfs 1.8G 0 1.8G 0% /sys/fs/cgroup
/dev/xvdb1 99G 28G 67G 30% /var/lib/docker
docker info:
Containers: 2
Images: 34
Storage Driver: devicemapper
Pool Name: docker-202:17-2621441-pool
Pool Blocksize: 65.54 kB
Backing Filesystem: extfs
Data file: /dev/loop0
Metadata file: /dev/loop1
Data Space Used: 15.89 GB
Data Space Total: 107.4 GB
Data Space Available: 76.3 GB
Metadata Space Used: 10.27 MB
Metadata Space Total: 2.147 GB
Metadata Space Available: 2.137 GB
Udev Sync Supported: true
Data loop file: /var/lib/docker/devicemapper/devicemapper/data
Metadata loop file: /var/lib/docker/devicemapper/devicemapper/metadata
Library Version: 1.02.93-RHEL7 (2015-01-28)
Execution Driver: native-0.2
Kernel Version: 3.10.0-229.el7.x86_64
Operating System: Red Hat Enterprise Linux Server 7.1 (Maipo)
CPUs: 2
Total Memory: 3.452 GiB
Name: ip-10-100-128-182.localdomain
ID: 4ZZZ:BSQD:GBKL:4Y3N:J6BL:47QE:3HMQ:GLMY:FPUK:CEPM:3EBP:ZU7G
Debug mode (server): true
Debug mode (client): false
Fds: 13
Goroutines: 18
System Time: Mon Jun 15 22:48:24 AEST 2015
EventsListeners: 0
Init SHA1: 836be3a369bfc6bd4cbd3ade1eedbafcc1ea05d0
Init Path: /usr/libexec/docker/dockerinit
Docker Root Dir: /var/lib/docker
uname -a:
Linux ip-10-100-128-182.localdomain 3.10.0-229.el7.x86_64 #1 SMP Thu Jan 29 18:37:38 EST 2015 x86_64 x86_64 x86_64 GNU/Linux
Can anyone help me? I'm not sure this information is enough, but I tried a couple of solutions and nothing worked.
docker version:
Client version: 1.6.0
Client API version: 1.18
Go version (client): go1.4.2
Git commit (client): 8aae715/1.6.0
OS/Arch (client): linux/amd64
Server version: 1.6.0
Server API version: 1.18
Go version (server): go1.4.2
Git commit (server): 8aae715/1.6.0
OS/Arch (server): linux/amd64
[root@host ~]# service docker status -l
Redirecting to /bin/systemctl status -l docker.service
docker.service - Docker Application Container Engine
Loaded: loaded (/usr/lib/systemd/system/docker.service; disabled)
Active: active (running) since Tue 2015-06-16 00:31:46 AEST; 2min 2s ago
Docs: http://docs.docker.com
Main PID: 3306 (docker)
CGroup: /system.slice/docker.service
└─3306 /usr/bin/docker -d --storage-opt dm.basesize=30G --storage-opt dm.loopmetadatasize=4G
It sounds like you're trying to start a container from a 14GB image.
A Docker container, when using the devicemapper storage driver, only has 10GB of space available by default. You appear to be using the devicemapper driver, so this is probably the source of your problem.
This article discusses in detail the process you need to use to increase the amount of space available for container filesystems.
Filesystem-based drivers (like the overlay driver) do not have this same limitation (but they may of course suffer from other limitations).
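As a hedged sketch of the devicemapper route: the unit file above already passes dm.basesize=30G, but on Docker versions of this era that setting typically only takes effect for a freshly created pool, which generally means wiping /var/lib/docker and losing all existing images and containers:
service docker stop
# WARNING: this removes all existing images and containers in the pool
rm -rf /var/lib/docker
# the daemon will recreate the pool using the dm.basesize from the unit file
service docker start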

libpcap performance and behavior differences between Ubuntu 14.04 and CentOS 6.5

I have been running a tcpdump-based script on Ubuntu for some time, and recently I was asked to run it on CentOS 6.5, and I'm noticing some very interesting differences.
I'm running tcpdump 4.6.2 and libpcap 1.6.2 on both setups; both are actually running on the same hardware (dual-booted).
I'm running the same command on both OSes:
sudo /usr/sbin/tcpdump -s 0 -nei eth9 -w /mnt/tmpfs/eth9_rx.pcap -B 2000000
From "free -k", I see about 2G allocated on Ubuntu
Before:
free -k
total used free shared buffers cached
Mem: 65928188 1337008 64591180 1164 26556 68596
-/+ buffers/cache: 1241856 64686332
Swap: 67063804 0 67063804
After:
free -k
total used free shared buffers cached
Mem: 65928188 3341680 62586508 1160 26572 68592
-/+ buffers/cache: 3246516 62681672
Swap: 67063804 0 67063804
expr 3341680 - 1337184
2004496
On CentOS, I see twice as much memory (4 GB) being allocated by the same command.
Before:
free -k
total used free shared buffers cached
Mem: 16225932 394000 15831932 0 15308 85384
-/+ buffers/cache: 293308 15932624
Swap: 8183804 0 8183804
After:
free -k
total used free shared buffers cached
Mem: 16225932 4401652 11824280 0 14896 84884
-/+ buffers/cache: 4301872 11924060
Swap: 8183804 0 8183804
expr 4401652 - 394000
4007652
With this command, I'm listening on an interface and dumping into a RAM disk.
On Ubuntu, I can capture packets at line rate for large packets (10G, 1024-byte frames).
But on CentOS, I can only capture packets at 60% of line rate (10G, 1024-byte frames).
Also, both OSes are running the same version of the NIC drivers with the same driver configuration.
My goal is to achieve the same performance on CentOS as I have on Ubuntu.
I googled around, and there seems to be some magic in how libpcap behaves with different kernels. I'm curious whether there are any kernel-side options I have to tweak on the CentOS side to achieve the same performance as on Ubuntu.
This has been answered. According to Guy Harris of tcpdump/libpcap, the difference is due to CentOS 6.5 running a 2.6.x kernel. Below is his response:
"
3.2 introduced the TPACKET_V3 version of the "T(urbo)PACKET" memory-mapped packet capture mechanism for PF_PACKET sockets; newer versions of libpcap (1.5 and later) support TPACKET_V3 and will use it if the kernel supports it. TPACKET_V3 makes much more efficient use of the capture buffer; in at least one test, it dropped fewer packets. It also might impose less overhead, so that asking for a 2GB buffer takes less kernel memory."
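As a quick, hedged check of whether TPACKET_V3 can be in play on a given box (both conditions from the quote above must hold):
uname -r            # kernel 3.2 or newer is needed for TPACKET_V3
tcpdump --version   # also prints the libpcap version; 1.5 or later is needed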
