CentOS 6.5 mkdir invalid argument - linux

The server is running CentOS 6.5 with an ext4 file system.
We have a problem creating a new directory in a specific folder (e.g. /var/www/html/img/). Whether we use mkdir, cp, or scp from another server, create it with WinSCP, or create it from a PHP script, we get an error like "mkdir: cannot create directory 'name': Invalid argument".
When we try to create something one level up (e.g. in /var/www/html), everything is created correctly.
Does anybody have an idea how to fix this problem?

Related

Capistrano deployment with Amazon EFS (bind mounted folders) failing when using with :linked_dirs

I am having an issue trying to use Capistrano to deploy an application that requires several Amazon EFS bind mounts inside the deployment (current) folder.
I have a directory called /webroot in the root of the web server. Inside it is all of our code, along with about 7 folders (bind mounts) that are shared across three nodes.
Inside my deploy.rb I have the line set :deploy_to, "/webroot/testingCap", and Capistrano deploys the code into the symlinked current folder. This works, but when it gets to the step of symlinking the bind-mount directories, for example /webroot/uploads, it throws an error:
rm -rf /webroot/uploads
rm: cannot remove '/webroot/uploads'
Device or resource busy
I am not sure why it is trying to forcefully remove that directory. I thought it was supposed to just symlink to it.
My linked_dirs part looks like this inside of deploy.rb:
append :linked_dirs, "/webroot/uploads"
What am I doing wrong?
:linked_dirs only works with relative paths and always uses Capistrano's shared directory.
When you add e.g. "foo" to :linked_dirs, Capistrano will create a symlink within your deployed app. If anything already exists there, it will delete it first (that is why you are seeing the rm -rf).
The destination of that link will always be to the same name in Capistrano's shared directory. So the chain of events will be like this:
rm -rf /webroot/testingCap/current/foo
ln -s /webroot/testingCap/shared/foo /webroot/testingCap/current/foo
Thus if you look inside current, you will see a link that points
foo -> /webroot/testingCap/shared/foo
Notice that the path relative to current is identical to the path relative to shared. This is how :linked_dirs works and you can't change it.
For example, if your app expects to store uploads in public/uploads, you will need the exact same relative path to exist inside shared in order for the link to be established. In other words, the link will point like this:
/webroot/testingCap/current/public/uploads -> /webroot/testingCap/shared/public/uploads
In your case, I suspect you can get this to work, but you'll need to make sure that your mount points are located exactly where Capistrano expects them to be.
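As a rough sketch of what that can look like (only the /webroot paths come from the question; the bind-mount commands and the "uploads" link name are illustrative assumptions), you would expose each mount under shared/ and give :linked_dirs a relative path:
# deploy.rb: use a path relative to the release, not an absolute one
#   append :linked_dirs, "uploads"
# On each node, make the EFS-backed directory available where Capistrano expects it:
mkdir -p /webroot/testingCap/shared/uploads
mount --bind /webroot/uploads /webroot/testingCap/shared/uploads
# On the next deploy, Capistrano then creates the link itself, roughly:
#   rm -rf /webroot/testingCap/current/uploads
#   ln -s /webroot/testingCap/shared/uploads /webroot/testingCap/current/uploads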

Oracle Unable to Read Globally Accessible (777) Dump File

I'm trying to import an Oracle dump file, and despite granting global rwx permissions on the files, I'm still getting permission errors when running the import.
Here's the whole process I've run through:
# Create the dump directory with the dump file, and grant 777 permissions
mkidr -p /home/vagrant/dump
mv /home/vagrant/data.dmp /home/vagrant/dump
chmod -R 777 /home/vagrant/dump
# Check the file permissions
# drwsrwsrwx. vagrant vagrant dump
# -rwxrwxrwx. vagrant vagrant dump/data.dmp
# Add the directory to Oracle
sqlplus system/vagrant
CREATE DIRECTORY DUMP_DIR AS '/home/vagrant/dump';
exit
# Try importing the data
impdp system/vagrant dumpfile=data.dmp directory=DUMP_DIR nologfile=y
And let the keyboard smashing begin...
Connected to: Oracle Database 11g Express Edition Release 11.2.0.2.0 - 64bit Production
ORA-39001: invalid argument value
ORA-39000: bad dump file specification
ORA-31640: unable to open dump file "/home/vagrant/dump/data.dmp" for read
ORA-27037: unable to obtain file status
Linux-x86_64 Error: 13: Permission denied
Additional information: 3
Note: I'm entirely aware that these permissions and passwords are terrible for security, but since I'm just trying to run some experimental analysis on a publicly available data set, I don't really care.
I think the problem is that your script says mkidr instead of mkdir.
Because of that, the directory is never created; when you then move the file to the supposed directory, mv simply renames the file, so /home/vagrant/dump appears with the right permissions (except the d at the beginning) but as a file, not a directory, and of course you cannot look inside it for files. It will also prevent Oracle from successfully executing CREATE DIRECTORY DUMP_DIR AS '/home/vagrant/dump'; since a file with that name is already in the way.
By the way, to access a file you not only need read permission on the file's inode, but also execute (x) permission on every directory along the path (in this case /home, /home/vagrant and /home/vagrant/dump, though this last one is a file, not a directory). And it is the user Oracle runs as (e.g. ora) whose permissions must be checked.
I suggest you impersonate that user and try to read the file; if that doesn't work, try again from the same directory the database runs in, using the same path it uses to open the file.
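A minimal sketch of how to check all of this (it assumes the database software runs as an OS user named oracle and that sudo is available; substitute the actual account, e.g. ora):
# if the mkidr typo left the dump renamed to /home/vagrant/dump, put it back first
mv /home/vagrant/dump /home/vagrant/data.dmp
mkdir -p /home/vagrant/dump
mv /home/vagrant/data.dmp /home/vagrant/dump/
chmod -R 777 /home/vagrant/dump
# every directory on the path needs x for the Oracle OS user; the file itself needs r
sudo -u oracle ls -ld /home /home/vagrant /home/vagrant/dump
sudo -u oracle cat /home/vagrant/dump/data.dmp > /dev/null && echo readable
# then retry the import from the question
impdp system/vagrant dumpfile=data.dmp directory=DUMP_DIR nologfile=y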

Mount a Directory from one Server to a Set-Top Box Linux Environment

I run my build from a Linux server path, e.g. user/aaditya/builds/build1/bin/. I have written an API which takes a Linux mount path as input,
for example: takeimage,..//xyz.bmp
When I run that, it captures the image and stores it in the build1 folder, but when I execute takeimage,..//..//build2/, no image is stored in the build2 folder,
where the path to build2 is user/aaditya/builds/build2/. How can I mount user/aaditya/builds/build2/,
given that the API uses Linux system calls to write the file?
In your two examples you are running your API like this:
takeimage ..//xyz.bmp
takeimage ..//..//build2/
I am guessing you run this from user/aaditya/builds/build1/bin/
In your first example you are providing your API the name of a file to save your image in. In your second example you are only providing a directory path. If the build2 directory does not exist, that alone could be producing an error.
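A quick way to test that (a sketch; the takeimage invocation and the xyz.bmp name are taken from the question, the rest is illustrative):
# run from user/aaditya/builds/build1/bin/
# ..//..//build2/ resolves to user/aaditya/builds/build2/ -- make sure it exists
mkdir -p ../../build2
# and give the API a file name to write to, not just a directory
takeimage ..//..//build2//xyz.bmp
ls -l ../../build2/xyz.bmp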

Is the 'path to war' that I am giving wrong? If yes, how do I do a rollback?

I have been trying to use the command to roll back the last deployment of the website, which was interrupted by a network failure.
The generic command that I am using while inside the bin directory of the server's SDK (on Linux) is:
./appcfg.sh rollback /path_to_the_war_directory_that_has_appengine-web.xml
Is this the way we do a rollback? If not, please tell me the method.
_(I was asked to make a war directory in the project directory and place the WEB-INF folder in it, with appengine-web.xml inside that. It may be wrong.)_
I am fully convinced that I am making a mistake while giving the path to my app.
(Screenshot showing where my .war file is located.)
Now the command that I am using (while inside the bin directory of the server's SDK) is:
./appcfg.sh rollback /home/non-admin/NetbeansProjects/'Personal Site'/web/war
(Screenshot showing the path to the war directory.)
Where am I wrong? How should I run this command so that I am able to deploy my project once again?
On running the above command I get this message:
Unable to find the webapp directory /home/non-admin/NetbeansProjects/Personal Site/web/war
usage: AppCfg [options] <action> [<app-dir>] [<argument>]
NOTE: I have duplicated the WEB-INF folder. There is still a folder named WEB-INF inside the web directory that contains all the other XML files.
The error tells you that the folder /home/non-admin/NetbeansProjects/Personal Site/web/war does not exist. If you look carefully, the name of the folder is NetBeansProjects (the file system in Linux is case-sensitive).
So, you should run instead the command:
./appcfg.sh rollback /home/non-admin/NetBeansProjects/'Personal Site'/web/war
and just to make sure that the directory exists, first run
ls /home/non-admin/NetBeansProjects/'Personal Site'/web/war

Run executable from local storage using Azure Web Role

I'm trying to run a simple executable using an Azure Web Role.
The executable is stored in the Web Role's local storage.
The executable produces a log.txt file once it has been run.
This is the method I am using to run the executable:
public void RunExecutable(string path)
{
Process.Start(path);
}
Where path is localStorage.RootPath + "Application.exe"
The problem I am facing is that when I open the local storage folder, the executable is there; however, there is no log.txt file.
I have tested the executable: if I run it manually, it produces the log.txt file.
Can anyone see the problem?
Try setting an explicit WorkingDirectory for the process... I wonder if log.txt is being created, just not where you expect. (Or perhaps the app is trying to create log.txt but failing because of the permissions on the directory it's trying to create it in.)
If you remote desktop into the instance, can you check whether the file was created in the E:\approot\ folder? As Steve said, setting a WorkingDirectory for the process will fix the issue.
You can use Environment.GetEnvironmentVariable("RoleRoot") to construct the path to your application root.
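For example, a minimal sketch of the WorkingDirectory suggestion (assuming localStorage is the LocalResource from the question and that the executable writes log.txt relative to its working directory):
// Start the process with an explicit working directory so that log.txt
// ends up next to Application.exe instead of in the role's default directory.
public void RunExecutable(string path, string workingDirectory)
{
    var startInfo = new System.Diagnostics.ProcessStartInfo
    {
        FileName = path,
        WorkingDirectory = workingDirectory,
        UseShellExecute = false
    };
    using (var process = System.Diagnostics.Process.Start(startInfo))
    {
        process.WaitForExit();
    }
}
// called as: RunExecutable(localStorage.RootPath + "Application.exe", localStorage.RootPath);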
