Copy file from Linux to remote Windows machine using STAF - linux

I'm assuming the following error occurs because the same path on the local system does not exist on the remote system.
Is there a way to specify an alternative path to save to on the remote system?
[root@linuxsystem mydir]# staf local FS COPY FILE /root/scripts/tests/script.bat TOMACHINE remoteVM
Error submitting request, RC: 17
Additional info
---------------
\root\scripts\tests\script.bat
Error information:
17 File Open Error
This indicates that there was an error opening the requested file.
Some possible explanations are that the file/path does not exist, contains invalid characters, or is locked.
Note: Additional information regarding which file could not be opened may be provided in the result passed back from the submit call.

Found the correct command:
staf local FS COPY FILE /root/scripts/tests/script.bat TODIRECTORY "C:\" TOMACHINE remoteVM
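To drop the file into a specific Windows folder rather than the root of C:\, the same TODIRECTORY option accepts any directory that already exists on the remote machine; STAF's FS service also documents a TOFILE option for renaming the file at the destination. The C:\scripts path below is only an illustration:
staf local FS COPY FILE /root/scripts/tests/script.bat TODIRECTORY "C:\scripts" TOMACHINE remoteVM
staf local FS COPY FILE /root/scripts/tests/script.bat TOFILE "C:\scripts\script.bat" TOMACHINE remoteVM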

Related

Missing copied file in destination folder using exec node in node-red flow

I'm trying to copy a file from one folder to another using Node-RED. The flow executes, but it throws an error and also shows a syntax error in the console.
My flow:
Node Configuration:
Exec command used:
copy C:\Users\Karthikeyan.Anbalaga\Downloads\ID_NewOnePulse_results.csv C:\Users\Karthikeyan.Anbalaga\Downloads\Documents\
At the same time, if I execute the file copy command in the terminal, it works fine:
>copy C:\Users\Karthikeyan.Anbalaga\Downloads\ID_NewOnePulse_results.csv C:\Users\Karthikeyan.Anbalaga\Downloads\Documents\
1 file(s) copied.
I made a mistake in the configuration; after removing msg.payload it works fine, and I can see the files in the destination folder.
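For anyone hitting the same thing: when the exec node's "Append msg.payload" option is left enabled (my assumption about what the configuration mistake was), the payload gets appended to the command, so what actually ran was roughly:
copy C:\Users\Karthikeyan.Anbalaga\Downloads\ID_NewOnePulse_results.csv C:\Users\Karthikeyan.Anbalaga\Downloads\Documents\ <contents of msg.payload>
and that extra trailing argument is what cmd.exe reports as a syntax error.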

OpenMPI: ORTE was unable to reliably start one or more daemons

I've been at it for days but could not solve my problem.
I am running:
mpiexec -hostfile ~/machines -nolocal -pernode mkdir -p $dstpath
where $dstpath points to the current directory and "machines" is a file containing:
node01
node02
node03
node04
This is the error output:
Failed to parse XML input with the minimalistic parser. If it was not
generated by hwloc, try enabling full XML support with libxml2.
[node01:06177] [[6421,0],0] ORTE_ERROR_LOG: Error in file base/plm_base_launch_support.c at line 891
--------------------------------------------------------------------------
ORTE was unable to reliably start one or more daemons.
This usually is caused by:
* not finding the required libraries and/or binaries on
one or more nodes. Please check your PATH and LD_LIBRARY_PATH
settings, or configure OMPI with --enable-orterun-prefix-by-default
* lack of authority to execute on one or more specified nodes.
Please verify your allocation and authorities.
* the inability to write startup files into /tmp (--tmpdir/orte_tmpdir_base).
Please check with your sys admin to determine the correct location to use.
* compilation of the orted with dynamic libraries when static are required
(e.g., on Cray). Please check your configure cmd line and consider using
one of the contrib/platform definitions for your system type.
* an inability to create a connection back to mpirun due to a
lack of common network interfaces and/or no route found between
them. Please check network connectivity (including firewalls
and network routing requirements).
--------------------------------------------------------------------------
[node01:06177] 1 more process has sent help message help-errmgr-base.txt / failed-daemon-launch
[node01:06177] Set MCA parameter "orte_base_help_aggregate" to 0 to see all help / error messages
Failed to parse XML input with the minimalistic parser. If it was not
generated by hwloc, try enabling full XML support with libxml2.
[node01:06181] [[6417,0],0] ORTE_ERROR_LOG: Error in file base/plm_base_launch_support.c at line 891
I have 4 machines, node01 to node04. In order to log into these 4 nodes, I have to first log in to node00. I am trying to run some distributed graph functions. The graph software is installed in node01 and is supposed to be synchronised to the other nodes using mpiexec.
What I've done:
Made sure all passwordless logins are set up; every machine can SSH to any other machine with no issues.
Have a hostfile in the home directory.
echo $PATH gives /home/myhome/bin:/home/myhome/.local/bin:/usr/include/openmpi:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/usr/games:/usr/local/games:/snap/bin
echo $LD_LIBRARY_PATH gives
/usr/lib/openmpi/lib
This has worked before, but it suddenly started giving these errors. I got my administrator to install fresh machines, but it still gave the same errors. I've tried doing it one node at a time, but I got the same errors. I'm not very familiar with the command line, so please give me some suggestions. I've tried reinstalling OpenMPI from source and with sudo apt-get install openmpi-bin. I'm on Ubuntu 16.04 LTS.
You should focus on fixing:
Failed to parse XML input with the minimalistic parser. If it was not
generated by hwloc, try enabling full XML support with libxml2.
[node01:06177] [[6421,0],0] ORTE_ERROR_LOG: Error in file base/plm_base_launch_support.c at line 891
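That message usually points at the hwloc topology XML not being parseable, typically because hwloc on one or more nodes was built without libxml2 support or the Open MPI/hwloc versions differ between nodes (an educated guess from the message, not something visible in the logs above). A quick way to compare the nodes, assuming the hostfile names and passwordless SSH described in the question:
for n in node01 node02 node03 node04; do
  echo "== $n =="
  ssh "$n" 'which orted mpirun; lstopo --version 2>/dev/null; ldconfig -p | grep -E "libhwloc|libxml2"'
done
If lstopo is missing, or the libhwloc/libxml2 lines differ from node to node, aligning the installations (or rebuilding hwloc with libxml2) is the place to start.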

Oracle Unable to Read Globally Accessible (777) Dump File

I'm trying to import an Oracle dump file, and despite granting global rwx permissions on the files, I'm still getting permission errors when running the import.
Here's the whole process I've run through:
# Create the dump directory with the dump file, and grant 777 permissions
mkidr -p /home/vagrant/dump
mv /home/vagrant/data.dmp /home/vagrant/dump
chmod -R 777 /home/vagrant/dump
# Check the file permissions
# drwsrwsrwx. vagrant vagrant dump
# -rwxrwxrwx. vagrant vagrant dump/data.dmp
# Add the directory to Oracle
sqlplus system/vagrant
CREATE DIRECTORY DUMP_DIR AS '/home/vagrant/dump';
exit
# Try importing the data
impdp system/vagrant dumpfile=data.dmp directory=DUMP_DIR nologfile=y
And let the keyboard smashing begin...
Connected to: Oracle Database 11g Express Edition Release 11.2.0.2.0 - 64bit Production
ORA-39001: invalid argument value
ORA-39000: bad dump file specification
ORA-31640: unable to open dump file "/home/vagrant/dump/data.dmp" for read
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 13: Permission denied
Additional information: 3
Note: I'm entirely aware that these permissions and passwords are terrible for security, but since I'm just trying to run some experimental analysis on a publicly available data set, I don't really care.
I think the problem is that your script says mkidr instead of mkdir.
This way, you don't create the directory; when you move the file into the supposed directory, you only rename the file, so /home/vagrant/dump ends up as a file (not a directory) with the right permissions (except the d at the beginning), and of course you cannot search it for files, since it is a file rather than a directory. This also prevents Oracle from successfully executing CREATE DIRECTORY DUMP_DIR AS '/home/vagrant/dump'; because a file with that name is already there.
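In other words, after the failed mkidr, /home/vagrant/dump is the dump file itself under another name. A sketch of the repair, assuming nothing else was changed in between:
mv /home/vagrant/dump /home/vagrant/data.dmp   # the earlier mv only renamed the dump file; give it its name back
mkdir -p /home/vagrant/dump
mv /home/vagrant/data.dmp /home/vagrant/dump/
chmod -R 777 /home/vagrant/dump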
By the way, to access a file you need not only read permission on the file's inode, but also execute permission (x) on every directory along the path (in this case /home, /home/vagrant and /home/vagrant/dump ---this last one is a file, not a directory---). And it is the user the Oracle database runs as whose permissions must be checked.
I suggest you impersonate that user and try to read the file; if that doesn't work, try it from the same directory the database runs in, using the same path it uses to open the file.
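Something along these lines should confirm both points (oracle as the account name is an assumption; substitute whatever user your database actually runs as):
ls -ld /home /home/vagrant /home/vagrant/dump        # every component needs at least x for the database user
sudo -u oracle head -c 16 /home/vagrant/dump/data.dmp > /dev/null && echo readable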

Error opening zip file or JAR manifest missing : jrebel.jar

When configuring JRebel on my remote server (JBoss on Linux) I have configured the JVM arg as
-javaagent:/home/user/jrebel.jar -Drebel.remoting_plugin=true
The jrebel.jar is absolutely definitely in that location, yet the server fails to start with the error:
Error opening zip file or JAR manifest missing : /home/user/jrebel.jar
Error occurred during initialization of VM
agent library failed to init: instrument
So the arg is obviously being passed to the JVM correctly, but for the life of me I can't work out why it can't find the jar. I've been through every ZeroTurnaround article I can find and looked at the solutions that have resolved it for other people, but no luck. Any ideas?
Turned out to be a permissions problem - the JBoss user didn't have permission to access the directory that I had placed jrebel.jar into.
It would have been nice to have a more meaningful error - e.g. 'permission denied'. Shows my lack of Linux knowledge, though, I guess.
After the jar was moved to a directory within the JBoss installation, its owner was changed to the JBoss user, and read/write/execute permissions were added, all was well.
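Roughly what that fix looks like in shell form (a sketch; the jboss user/group name and the /opt/jboss/jrebel directory are assumptions based on the description above):
mkdir -p /opt/jboss/jrebel
cp /home/user/jrebel.jar /opt/jboss/jrebel/
chown jboss:jboss /opt/jboss/jrebel/jrebel.jar
chmod 755 /opt/jboss/jrebel/jrebel.jar
Remember to update the -javaagent path in the JVM args to match the new location.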
Yes, permissions were also the reason this error happened to me when I tried to open PhpStorm; the error was:
Error opening zip file or JAR manifest missing : ${JetbrainsIdesCrackPath}
Error occurred during initialization of VM
agent library failed to init: instrument
So before running PhpStorm I had to run sudo -i to get root permissions to run the program.

Mount a directory from one server to a set-top box Linux environment

I run my build stored in a Linux server path, e.g. user/aaditya/builds/build1/bin/. I have written an API which takes a Linux mount path as input, for example:
API - takeimage,..//xyz.bmp
When I run it, the image is captured and stored in the build1 folder, but when I execute takeimage,..//..//build2/, no image is stored in the build2 folder, where the path to build2 is user/aaditya/builds/build2/.
How do I mount user/aaditya/builds/build2/? The API uses Linux system calls to write the file.
In your two examples you are running your API like this:
takeimage ..//zyx.bmp
takeimage ..//..//build2/
I am guessing you run this from user/aaditya/builds/build1/bin/
In your first example you are giving your API the name of a file to save the image in. In your second example you are only providing a directory path, with no file name. If the build2 directory does not exist, or your API expects a file name, that could be producing the error.
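A quick way to test that, assuming you are indeed running from user/aaditya/builds/build1/bin/ and reusing the example names from the question (xyz.bmp is just a placeholder):
mkdir -p ../../build2        # make sure the destination directory really exists
takeimage,..//..//build2//xyz.bmp
If the image then appears in build2, the API simply needs a full file path rather than a bare directory.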
