Issue saving a csv file to a USB drive using Node on a Raspberry Pi - Linux

This one is really doing my head in! Please help!
I have a little Raspberry Pi 2 running Debian and Node 0.12.6.
The Node script listens to a home automation bus for events and saves them to a JSON list. A cron job then saves this list to CSV hourly (every minute for testing) on a USB drive that is mounted at boot.
The issue is that every time fs.writeFile is called, it throws an error.
This only happens when saving to the USB drive; saving to local folders works fine.
I have altered the /etc/fstab file to include the following
/dev/sda1 /home/pi/knx/usb vfat user,auto,nodev,gid=pi,uid=pi,fmask=0111,dmask=0000 0 0
This mounts the USB drive at boot, and as the pi user I can create directories and files from the console with no problem.
The permissions of the folder (with the USB drive mounted at usb) look like this:
pi@raspberrypi ~/knx $ ls -l
total 24
-rw-r--r-- 1 pi pi 965 Nov 20 16:40 app.js
drwxr-xr-x 2 pi pi 4096 Nov 20 16:00 data_files
drwxr-xr-x 2 pi pi 4096 Nov 19 12:27 install_scripts
drwxr-xr-x 7 pi pi 4096 Nov 20 16:21 node_modules
drwxrwxrwx 8 pi pi 8192 Jan 1 1970 usb
The drive looks to be mounted OK:
pi@raspberrypi ~/knx $ lsblk
NAME MAJ:MIN RM SIZE RO TYPE MOUNTPOINT
sda 8:0 1 15G 0 disk
└─sda1 8:1 1 15G 0 part /home/pi/knx/usb
The Node script is:
//set up cron job to save the list
var job = new CronJob({
    cronTime: '00 0-59 * * * *',
    onTick: function() {
        console.log('saving file');
        json2csv({ data: listToSave, fields: fields }, function(err, csv) {
            if (err) console.log(err);
            var tempfilename = new Date();
            var fname = './usb/data_files/' + tempfilename.toString() + '.csv';
            console.log(fname);
            fs.writeFile(fname, csv, function(err) {
                if (err) throw err;
                console.log('file saved');
                listToSave = [];
            });
        });
    },
    start: true,
});
The fname looks like this:
'./usb/data_files/Fri Nov 20 2015 16:42:00 GMT+0000 (UTC).csv'
If I had any hair - I would have well and truly pulled it out by now!
Any ideas on a solution?

It looks like the filename you are creating with new Date() is in an invalid format for the filesystem: the default Date string contains colons, which FAT (vfat) filesystems do not allow in file names. That is why writing to local (ext4) folders works while the USB drive fails.
I'm able to make your code work by formatting the raw date into a friendlier file name:
var utils = require('util')
Above adds the util module for string formatting.
var today = new Date();
var fname = utils.format('./usb/%s-%s-%s.csv', today.getMonth() + 1, today.getDate(), today.getHours());
Note that getMonth() is zero-based (hence the + 1), and getDay() returns the day of the week, so getDate() is the call you want for the day of the month. This writes an "M-D-H.csv" style filename, which works on my Debian machine. You can extend this by adding minutes, the year, etc.
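If you want to keep the full timestamp without losing sortability, one variant (a sketch, not part of the original answer) is to take the ISO-8601 string and swap out the colons, which are the characters vfat rejects:

```javascript
// Build a FAT-safe, sortable timestamp such as "2015-11-20T16-42-00.csv":
// toISOString() yields "2015-11-20T16:42:00.000Z"; replace the colons
// (illegal on vfat) and drop the milliseconds/zone suffix.
function csvName(date) {
  return date.toISOString().replace(/:/g, '-').replace(/\..+$/, '') + '.csv';
}

console.log(csvName(new Date(Date.UTC(2015, 10, 20, 16, 42, 0))));
// -> 2015-11-20T16-42-00.csv
```

Because the fields run from year down to seconds, plain alphabetical order of the filenames matches chronological order, which is handy when listing the data_files directory.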

Related

OpenCV how to select which camera to connect to?

I am writing an OpenCV 3.4 app on Ubuntu 16.04.
I am running into a problem where no matter what deviceId I pass to VideoCapture(), I always get the built-in webcam stream, but I need the stream from my USB camera. By the way, my USB camera is a Logitech C920 and my computer is a ThinkPad P51 (not sure if that matters).
Here is my code:
#include <opencv2/highgui.hpp>
#include <opencv2/opencv.hpp>
#include <opencv2/aruco.hpp>
#include <iostream>

int main(int, char**) {
    cv::Mat inputImage;
    std::vector<int> markerIds;
    std::vector<std::vector<cv::Point2f> > markerCorners, rejectedCandidates;
    cv::aruco::DetectorParameters parameters;
    cv::VideoCapture inputVideo(1); // I have tried 0, 1, 2...20 - all the same stream
    cv::Ptr<cv::aruco::Dictionary> dictionary = cv::aruco::getPredefinedDictionary(cv::aruco::DICT_4X4_50);
    while (inputVideo.grab()) {
        cv::Mat image, imageCopy;
        inputVideo.retrieve(image);
        image.copyTo(imageCopy);
        std::vector<int> ids;
        std::vector<std::vector<cv::Point2f> > corners;
        cv::aruco::detectMarkers(image, dictionary, corners, ids);
        if (ids.size() > 0) cv::aruco::drawDetectedMarkers(imageCopy, corners, ids);
        cv::imshow("out", imageCopy);
        if (cv::waitKey(30) >= 0) break;
    }
}
Thanks!
EDIT:
This is the result I get with ls -lR /dev/v4l:
/dev/v4l:
total 0
drwxr-xr-x 2 root root 80 Feb 23 10:53 by-id
drwxr-xr-x 2 root root 80 Feb 23 10:53 by-path
/dev/v4l/by-id:
total 0
lrwxrwxrwx 1 root root 12 Feb 23 10:53 usb-046d_HD_Pro_Webcam_C920_25CCF0FF-video-index0 -> ../../video1
lrwxrwxrwx 1 root root 12 Feb 23 2019 usb-8SSC20F27019L1GZ83C07ZY_Integrated_Camera-video-index0 -> ../../video0
/dev/v4l/by-path:
total 0
lrwxrwxrwx 1 root root 12 Feb 23 10:53 pci-0000:00:14.0-usb-0:6:1.0-video-index0 -> ../../video1
lrwxrwxrwx 1 root root 12 Feb 23 2019 pci-0000:00:14.0-usb-0:8:1.0-video-index0 -> ../../video0
And this is what I get with lsusb:
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 005: ID 138a:0097 Validity Sensors, Inc.
Bus 001 Device 004: ID 04ca:7066 Lite-On Technology Corp.
Bus 001 Device 008: ID 046d:082d Logitech, Inc. HD Pro Webcam C920
Bus 001 Device 003: ID 046d:c539 Logitech, Inc.
Bus 001 Device 002: ID 04d9:0169 Holtek Semiconductor, Inc.
Bus 001 Device 006: ID 8087:0a2b Intel Corp.
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
Not sure how to match them?

Device driver fn is not being called when a system call is made from userspace

I am re-writing a scull device driver.
I have written the open/read/write functions in the driver code.
echo "hello" > /dev/myscull0 shows the data being written successfully to my device driver, and open() -> write() -> release() were called in the driver successfully.
Similarly, other standard bash commands succeed when operated on the driver's device file, e.g. cat /dev/myscull0 executes the driver's read() call.
Now I am writing a user-space program to operate on my device file.
void
scull_open() {
    char device_name[DEV_NAME_LENGTH];
    int fd = 0;

    memset(device_name, 0, DEV_NAME_LENGTH);
    if (fgets(device_name, DEV_NAME_LENGTH - 1, stdin) == NULL) {
        printf("error in reading from stdin\n");
        exit(EXIT_SUCCESS);
    }
    device_name[DEV_NAME_LENGTH - 1] = '\0';
    if ((fd = open(device_name, O_RDWR)) == -1) {
        perror("open failed");
        exit(EXIT_SUCCESS);
    }
    printf("%s() : Success\n", __FUNCTION__);
}
But I am seeing that the driver's open() call is not being executed, confirmed from dmesg. I am running the program with sudo privileges, yet no success. I supply /dev/myscull0 as the input.
In fact, after executing the user program, I am seeing two entries in the /dev dir:
vm@vm:/dev$ ls -l | grep scull
crw-r--r-- 1 root root 247, 1 Feb 27 14:38 myscull0
---Sr-S--- 1 root root 0 Feb 27 14:38 myscull0
vm@vm:/dev$
The first entry was created by me using the mknod command; the second entry, with its strange set of permissions, appeared after executing the user program.
Output :
/dev/myscull0
scull_open() : Success
Can anyone please help me figure out what I am doing wrong here?
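One pitfall that fits these symptoms (an educated guess, not confirmed in the thread): fgets() keeps the trailing newline, so the string passed to open() is actually "/dev/myscull0\n" — a different name that ls renders almost identically to the real node. That would explain both the mysterious second /dev entry and why open() reports success without the driver ever being reached: the program is opening a plain file, not the device node. In C the fix is device_name[strcspn(device_name, "\n")] = '\0'; the same guard, sketched in Node just to make the string handling visible:

```javascript
// Line readers keep the trailing '\n'. "/dev/myscull0\n" names a
// different file than "/dev/myscull0", even though ls shows them alike.
function chomp(name) {
  return name.replace(/\r?\n$/, '');
}

console.log(JSON.stringify(chomp('/dev/myscull0\n'))); // "/dev/myscull0"
```

A quick way to confirm the diagnosis on the box itself is ls -l /dev | cat -A, which makes an embedded newline in a filename visible.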

How to use alsa sound and/or snd_pcm_open in docker?

I am running an Ubuntu 12.04 Docker container on an Ubuntu 16.04 host. Some test code I have exercises snd_pcm_open/snd_pcm_close with the SND_PCM_STREAM_PLAYBACK and SND_PCM_STREAM_CAPTURE stream types.
I do not need any actual sound/audio capability; just getting snd_pcm_open to return 0 with a valid handle, and then snd_pcm_close to return 0 on the same handle, would be good enough for my purposes. I do not want to modify the code, as it already has some not-so-nice platform-dependent switches and I am not the maintainer.
I am using this simple code, compiled with g++ alsa_test.cpp -lasound:
#include <stdio.h>
#include <alsa/asoundlib.h>

int main() {
    snd_pcm_t* handle;
    snd_pcm_stream_t stream_type[] = {SND_PCM_STREAM_PLAYBACK, SND_PCM_STREAM_CAPTURE};

    printf("\nstarting\n");
    for (unsigned char i = 0; i < sizeof(stream_type) / sizeof(stream_type[0]); ++i) {
        printf(">>>>>>>>\n\n");
        int deviceResult = snd_pcm_open(&handle, "default", stream_type[i], 0);
        printf("\n%d open: %d\n", stream_type[i], deviceResult);
        if (deviceResult >= 0) {
            printf("attempting to close %d\n", stream_type[i]);
            snd_pcm_drain(handle);
            deviceResult = snd_pcm_close(handle);
            printf("%d close: %d\n\n", stream_type[i], deviceResult);
        }
        printf("<<<<<<<<\n\n");
    }
    return 0;
}
It works just fine on the host but, despite all the different things I have tried, snd_pcm_open returns -2 (ENOENT) for both stream types in the container.
I tried installing libasound2-dev, but modinfo soundcore is empty and /dev/snd does not exist.
I also tried running the container with the options below, even though it feels like massive overkill for such a simple purpose:
--privileged --cap-add=ALL -v /dev:/dev -v /lib/modules:/lib/modules
With these extra parameters, the following commands generate the same output in both the host and the container.
root@31142791f82d:/export# modinfo soundcore
filename: /lib/modules/4.4.0-59-generic/kernel/sound/soundcore.ko
alias: char-major-14-*
license: GPL
author: Alan Cox
description: Core sound module
srcversion: C941364F5CD0B525693B243
depends:
intree: Y
vermagic: 4.4.0-59-generic SMP mod_unload modversions
parm: preclaim_oss:int
root@31142791f82d:/export# ls -l /dev/snd/
total 0
drwxr-xr-x 2 root root 100 Feb 2 21:10 by-path
crw-rw----+ 1 root audio 116, 2 Feb 2 07:42 controlC0
crw-rw----+ 1 root audio 116, 7 Feb 2 07:42 controlC1
crw-rw----+ 1 root audio 116, 12 Feb 2 21:10 controlC2
crw-rw----+ 1 root audio 116, 6 Feb 2 07:42 hwC0D0
crw-rw----+ 1 root audio 116, 11 Feb 2 07:42 hwC1D0
crw-rw----+ 1 root audio 116, 3 Feb 2 07:42 pcmC0D3p
crw-rw----+ 1 root audio 116, 4 Feb 2 07:42 pcmC0D7p
crw-rw----+ 1 root audio 116, 5 Feb 2 07:42 pcmC0D8p
crw-rw----+ 1 root audio 116, 9 Feb 2 10:44 pcmC1D0c
crw-rw----+ 1 root audio 116, 8 Feb 2 07:42 pcmC1D0p
crw-rw----+ 1 root audio 116, 10 Feb 2 21:30 pcmC1D1p
crw-rw----+ 1 root audio 116, 14 Feb 2 21:10 pcmC2D0c
crw-rw----+ 1 root audio 116, 13 Feb 2 21:10 pcmC2D0p
crw-rw----+ 1 root audio 116, 1 Feb 2 07:42 seq
crw-rw----+ 1 root audio 116, 33 Feb 2 07:42 timer
The container only has the root user, by the way, so access rights shouldn't be an issue either.
What would be the easiest and least hacky way to get this working? I'd rather get rid of the privileged mode and the /dev and /lib/modules mappings; however, these containers are not accessed from the outside world and are only created/destroyed for short-lived tasks, so safety isn't exactly a massive concern.
Thanks in advance.
If you don't actually need the device to work correctly, use the null device instead of default.
To make the null plugin the default one, put this into the container's /etc/asound.conf, or into the user's ~/.asoundrc:
pcm.!default = null;
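For reference, I believe the equivalent explicit form of that alias (standard ALSA configuration syntax; the null PCM plugin ships with alsa-lib, so no sound hardware or /dev/snd is needed) is:

```
pcm.!default {
    type null
}
```

With either spelling, snd_pcm_open(&handle, "default", ...) should return 0 for both playback and capture, and writes to the handle are simply discarded, which matches the "don't need actual audio" requirement above.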

Groovy File() not reporting correct size / length

I have a Jenkins post build Groovy script running out of the "Post build task plugin". From the same plugin, immediately before running the Groovy script, I check for the existence of the file and its size. The log shows:
09:14:53 -rw-r--r-- 1 aaa users 978243 Nov 4 08:53 /jk/workspace/xxxx/output/delta.txt
09:14:53 cppcheck.groovy: Checking build result: SUCCESS
09:14:53 cppcheck.groovy: workspace = /jk/workspace/xxxx
09:14:53 cppcheck.groovy: delta = /jk/workspace/xxxx/output/delta.txt
09:14:53 cppcheck.groovy: delta.txt length = 0
The groovy script is as follows:
import hudson.model.*
def build = Thread.currentThread().executable
def result = build.getResult()
println("cppcheck.groovy: Checking build result: " + result.toString())
if (result.isBetterOrEqualTo(hudson.model.Result.SUCCESS)) {
def workspace = build.getEnvVars()["WORKSPACE"]
def delta = workspace + "/output/delta.txt"
println("cppcheck.groovy: workspace = " + workspace)
println("cppcheck.groovy: delta = " + delta)
def f = new File(delta)
println("cppcheck.groovy: delta.txt length = " + f.length())
if (f.length() > 0) {
build.setResult(hudson.model.Result.UNSTABLE)
}
}
What am I doing wrong here?
Update: There seems to be some scepticism that the file exists, and that there is some sort of race condition. To put your minds at rest, let's rule that out. I have modified the build to execute the same ls -l command after it runs the Groovy script, to prove that the file does exist and that the problem is ultimately Groovy not being able to open it. I also added an exists() check to the above Groovy script, which, as I suspected it would, reports that the file doesn't exist. I don't dispute that Groovy thinks the file doesn't exist; what I am trying to work out is why.
10:31:39 [xxxx] $ /bin/sh -xe /tmp/hudson8964729240493636268.sh
10:31:39 + ls -l /jk/workspace/xxxx/output/delta.txt
10:31:39 -rw-r--r-- 1 aaa users 978243 Nov 4 08:53 /jk/workspace/xxxx/output/delta.txt
10:31:40 cppcheck.groovy: Checking build result: SUCCESS
10:31:40 cppcheck.groovy: workspace = /jk/workspace/xxxx
10:31:40 cppcheck.groovy: delta = /jk/workspace/xxxx/output/delta.txt
10:31:40 cppcheck.groovy: delta.txt length = 0
10:31:40 cppcheck.groovy: delta.txt exists = false
10:31:40 [xxxx] $ /bin/sh -xe /tmp/hudson8007562636561507409.sh
10:31:40 + ls -l /jk/workspace/xxxx/output/delta.txt
10:31:40 -rw-r--r-- 1 aaa users 978243 Nov 4 08:53 /jk/workspace/xxxx/output/delta.txt
Also, notice that the timestamp on said file is still 08:53, when it was created.
I suspected that the Groovy script was running on the build master, as opposed to the build node this particular build was running on. I added some debug output to print the hostname the Groovy script was running on, and sure enough it was not the same host the shell variant of the script was running on.

linux ftp server log file to include additional information

I am implementing an FTP server on a Linux box (Fedora 11, vsftpd). Everything works well so far, but I need the FTP server log files to contain transfer-rate information.
At the moment, when I use the get or put command from the client end, I get a message like the following on the FTP client:
ftp: 18 bytes received in 0.00seconds 18000.00kbytes/sec.
Is there any way I can get the same message on the FTP server side?
Below is a sample of my xferlog file:
Tue Oct 23 01:28:52 2012 1 10.65.112.55 1 /home/test/testfile b _ o r test ftp 0 * c
Tue Oct 23 01:32:46 2012 1 10.65.112.55 18 /home/test/uploadServer b _ i r test ftp 0 * c
Tue Oct 23 01:50:23 2012 1 192.168.10.27 1 /home/test/testfile a _ o r test ftp 0 * c
Tue Oct 23 01:50:36 2012 1 192.168.10.27 19 /home/test/test a _ i r test ftp 0 * c
I really appreciate everyone's help here.
Well, I have solved it.
I added
dual_log_enable=YES
to the vsftpd.conf file, and as a result it created a new log file, /var/log/vsftpd.log, which contains all the information I need.
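Incidentally, the xferlog lines above already carry enough to derive a rate server-side: after the five timestamp tokens come the transfer time in seconds and the remote host, then the byte count (per the standard xferlog format). A small parsing sketch (a hypothetical helper, shown in Node only for illustration; note vsftpd rounds the transfer time to whole seconds, so the resolution is coarse):

```javascript
// Compute bytes/sec from a standard xferlog line. Fields after the
// 5-token timestamp are: transfer-time (s), remote-host, file-size, ...
function xferRate(line) {
  const f = line.trim().split(/\s+/);
  const seconds = Number(f[5]); // transfer-time in whole seconds
  const bytes = Number(f[7]);   // file-size in bytes
  return bytes / Math.max(seconds, 1); // guard against a logged 0s transfer
}

const line = 'Tue Oct 23 01:32:46 2012 1 10.65.112.55 18 ' +
             '/home/test/uploadServer b _ i r test ftp 0 * c';
console.log(xferRate(line)); // -> 18 (bytes/sec)
```

For the 18-byte upload above this gives 18 bytes/sec, the server-side counterpart of the client's "18 bytes received" message.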
