Filebeat crawl fails on AWS EFS files - logstash

I am trying to send logs stored on an Amazon Elastic File System (EFS) mounted volume to Logstash using Filebeat running on an EC2 instance, but I am getting the error "stale NFS file handle".
ERROR log/log.go:170 Unexpected error reading from /var/efs/logs/my-app.log; error: stat /var/efs/logs/my-app.log: stale NFS file handle.
ERROR log/harvester.go:330 Read line error: stat /var/efs/logs/my-app.log: stale NFS file handle; File: /var/efs/logs/my-app.log
I saw this section on the Filebeat common problems page. Please let me know if there is any workaround for this issue.
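For reference, a minimal filebeat.yml sketch of the options usually suggested when files on a volatile mount disappear and reappear, assuming Filebeat 6.x-style input syntax; the path is taken from the error above, the timeout value is only illustrative, and since Filebeat is not tested on network volumes this is a mitigation rather than a confirmed fix:
filebeat.inputs:
  - type: log
    paths:
      - /var/efs/logs/*.log
    # give up on a harvester quickly when the NFS handle goes stale (illustrative value)
    close_timeout: 5m
    # drop state for files that vanish so they are picked up cleanly if they reappear
    close_removed: true
    clean_removed: true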

Related

stale NFS file handle on git after hard reboot on Google VM

After a hard reboot of my Google VM:
git pull error:
fatal: protocol error: unexpected 'Error running git: fork/exec /usr/bin/git-upload-pack: stale NFS file handle'
git push error:
fatal: protocol error: unexpected 'Error running git: fork/exec /usr/bin/git-receive-pack: stale NFS file handle'
What can I do to fix this?
You will need to unmount the NFS file system and remount it. After a hard reboot you might also need a file system check (fsck) of the VM's file systems.
Note: the umount will probably report an error/warning, because the system thinks the NFS file system is still mounted when in practice it is not.
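A rough sketch of that sequence, assuming the NFS export is mounted at /mnt/nfs (adjust the mount point and device names to your setup):
# force (or lazily) unmount the stale mount; expect a warning here
sudo umount -f /mnt/nfs || sudo umount -l /mnt/nfs
# remount everything listed in /etc/fstab, or mount the export explicitly
sudo mount -a
# if the hard reboot left a local file system dirty, check it while it is unmounted (placeholder device)
sudo fsck -f /dev/sdb1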

Write dd image to NFS (AWS EFS)

I have a disk image (from dd).
Is it possible to save it to NFS (AWS EFS)?
Of course I can loop-mount it, but it contains over 1.5 TB of small files and cp or rsync work very slowly.
I also tried AWS File Sync, but unfortunately I get an error: Input/output error.
Hosts:
HOST A: mounted dd image + NFS server
HOST B: host with AWS File Sync
Yes, you can. Use EFS File Sync for the best performance:
https://aws.amazon.com/blogs/aws/efs-file-sync-faster-file-transfer-to-amazon-efs-file-systems/
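If the goal is simply to store the raw image as a single file on EFS (rather than syncing the millions of small files inside it), here is a minimal sketch of writing it straight to a mounted EFS target over NFS; the file system DNS name, region, and image path are placeholders, and the mount options are the generally recommended NFSv4.1 ones:
# mount the EFS file system (placeholder file system ID and region)
sudo mount -t nfs4 -o nfsvers=4.1,rsize=1048576,wsize=1048576,hard,timeo=600,retrans=2 \
    fs-12345678.efs.eu-west-1.amazonaws.com:/ /mnt/efs
# stream the image across as one large file
dd if=/data/disk.img of=/mnt/efs/disk.img bs=1M status=progress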

Kubernetes unable to mount NFS FS on Google Container Engine

I am following the basic NFS server tutorial here; however, when I try to create the test busybox replication controller I get an error indicating that the mount has failed.
Can someone point out what I am doing wrong?
MountVolume.SetUp failed for volume "kubernetes.io/nfs/4e247b33-a82d-11e6-bd41-42010a840113-nfs" (spec.Name: "nfs") pod "4e247b33-a82d-11e6-bd41-42010a840113" (UID: "4e247b33-a82d-11e6-bd41-42010a840113") with: mount failed: exit status 32
Mounting arguments: 10.63.243.192:/exports /var/lib/kubelet/pods/4e247b33-a82d-11e6-bd41-42010a840113/volumes/kubernetes.io~nfs/nfs nfs []
Output: mount: wrong fs type, bad option, bad superblock on 10.63.243.192:/exports, missing codepage or helper program, or other error (for several filesystems (e.g. nfs, cifs) you might need a /sbin/mount.<type> helper program)
In some cases useful info is found in syslog - try dmesg | tail or so
I have tried using an Ubuntu VM as well, just to see if I could mitigate a possibly missing /sbin/mount.nfs dependency by running apt-get install nfs-common, but that too fails with the same error.
Which container image are you using? On 18 October Google announced a new container image, which doesn't support NFS yet. Since Kubernetes 1.4 this image (called gci) is the default. See also https://cloud.google.com/container-engine/docs/node-image-migration#known_limitations
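A rough sketch of checking the node image and moving back to the older image type until gci gains NFS support; the cluster name, zone, and node-pool name are placeholders, and it assumes the container_vm image type is still offered in your GKE version:
# see which image type the existing nodes use (placeholder cluster and zone)
gcloud container clusters describe my-cluster --zone us-central1-a | grep -i imagetype
# add a node pool on the older container_vm image, which does not have the NFS limitation
gcloud container node-pools create nfs-pool --cluster my-cluster --zone us-central1-a --image-type=container_vm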

Logstash service not reading config at custom directory

I'm trying to set up Logstash as a service on an Ubuntu 14.04 server and want it to read my configuration from a series of files in a directory other than the default /etc/logstash/conf.d. I've edited /etc/init.d/logstash so that LS_CONF_DIR points to the directory I want instead of the default. When I run "service logstash configtest" I get a reply saying the configuration is OK, but when I try to start the service it doesn't start, and I get the following errors in debug mode:
{:timestamp=>"2016-06-08T16:35:05.729000-0400", :message=>"translation missing: en.logstash.runner.configuration.file-not-found", :level=>:error, :file=>"logstash/agent.rb", :line=>"383", :method=>"create_pipeline"}
{:timestamp=>"2016-06-08T16:35:05.738000-0400", :message=>"starting agent", :level=>:info, :file=>"logstash/agent.rb", :line=>"207", :method=>"execute"}
I've checked permissions on the files and all users have read permission. When I run Logstash from the command line with -f pointing to the directory, it runs fine.
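One thing worth checking, as a guess: the Debian/Ubuntu Logstash packages of that era have /etc/init.d/logstash source /etc/default/logstash, and values set there override edits made in the init script itself, which can leave configtest and start reading different paths. A minimal sketch, with a placeholder directory:
# put the override where the init script actually reads it (placeholder path)
echo 'LS_CONF_DIR=/opt/myapp/logstash-conf' | sudo tee -a /etc/default/logstash
sudo service logstash restart
# confirm which value each script sees
grep LS_CONF_DIR /etc/default/logstash /etc/init.d/logstash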

Trying to copy a JAR file from local machine to Linux server

I am trying to copy a jar file from my local machine to a Linux (CentOS) server, but I am getting an error.
This is my command:
pscp Watch.jar dev#10.10.40.74:/home/dev/Documents
Watch.jar is the name of my jar and dev is my user on the server. I am trying to copy this file to a particular location (/home/dev/Documents).
The error I got is:
Watch.jar: Network Error occurred
Screenshot: http://i.stack.imgur.com/aVBIj.png
And when I run java -jar Watch.jar on my Linux server, it says
Error: Invalid or corrupt file.
Any help would be appreciated.
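For what it's worth, a sketch of the usual syntax, assuming the server listens on the default SSH port 22; pscp (and scp) expect user@host, and a transfer that aborts with a network error can leave a truncated jar on the server, which would also explain the "Invalid or corrupt file" message:
pscp -P 22 Watch.jar dev@10.10.40.74:/home/dev/Documents/
# or, from a Linux/macOS shell:
scp -P 22 Watch.jar dev@10.10.40.74:/home/dev/Documents/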
