VS 2013 not finding a string - search

I'm using Visual Studio 2013 Express and searching for a string with Ctrl+Shift+F (Find in Files), but it is not finding the string. Here is the result of the search:
Find all "tsolb", Keep modified files open, Find Results 1, "C:\DokanTestDirectory\2220", "*.s"
No files were found to look in.
But if I use a CMD method the string is found:
C:\>findstr /s /i "tsolb" "C:\DokanTestDirectory\2220\*.s"
C:\DokanTestDirectory\2220\TSO.S:*MACLIB TSOLB
C:\>
To try to isolate the problem, I copied TSO.S to another directory and tried VS again ... That works:
Find all "tsolb", Keep modified files open, Find Results 1, "c:\temp\2220", "*.s"
C:\temp\2220\TSO.S(1):*MACLIB TSOLB
Matching lines: 1 Matching files: 1 Total files searched: 1
Here is the content of both directories:
C:\>dir c:\DokanTestDirectory\2220 /a
Volume in drive C has no label.
Volume Serial Number is ECBC-051A
Directory of c:\DokanTestDirectory\2220
12/02/2014 07:55 PM <DIR> .
12/02/2014 07:55 PM <DIR> ..
11/28/2014 06:06 PM 951,692 TSO.S
1 File(s) 951,692 bytes
2 Dir(s) 166,707,027,968 bytes free
C:\>dir c:\temp\2220 /a
Volume in drive C has no label.
Volume Serial Number is ECBC-051A
Directory of c:\temp\2220
12/02/2014 07:57 PM <DIR> .
12/02/2014 07:57 PM <DIR> ..
11/28/2014 06:06 PM 951,692 TSO.S
1 File(s) 951,692 bytes
2 Dir(s) 166,707,548,160 bytes free
C:\>
Does anyone know what could be going on?

I found the problem. The folder c:\DokanTestDirectory\2220 had the S (System) attribute, but when I made the folder c:\temp\2220, the folder 2220 did not get the S attribute. So it seems VS 2013 Express will not find files within a folder that has the S attribute unless you name the file specifically.
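If the System attribute really is what blocks the search, the attrib command can confirm it and, if you like, clear it from the folder (a workaround sketch using the same path as above):
C:\>attrib c:\DokanTestDirectory\2220
C:\>attrib -s c:\DokanTestDirectory\2220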

Related

Run script on specific file in all subdirs

I've written a script (foo) which makes a simple sed replacement on text in the input file. I have a directory (a) containing a large number of subdirectories (a/b1, a/b2 etc) which all have the same subdirs (c, etc) and contain a file with the same name (d). So the rough structure is:
a/
-b1/
--c/
---d
-b2/
--c/
---d
-b3/
--c/
---d
I want to run my script on every file (d) in the tree. Unfortunately the following doesn't work:
sudo sh foo a/*/c/d
How do I use wildcards in a bash command like this? Do I have to use find with specific -maxdepth and -mindepth options, or is there a more elegant solution?
The wildcard expansion in your example should work, and no find should be needed. I assume a, b and c are just generic names to simplify the question. Do any of your folders/files contain spaces?
If you do:
ls -l a/*/c/d
are you getting the files you need listed? If so, then the problem is how you handle the arguments ($* or "$@") in your script file. Mind sharing it with us?
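In case it helps while you share it, here is a rough sketch of how foo could process every path it receives rather than just the first one (the sed expression is only a placeholder for your real replacement, and -i assumes GNU sed):
#!/bin/sh
# Apply the same replacement to every file passed on the command line
for f in "$@"
do
    sed -i 's/old/new/g' "$f"    # placeholder sed expression
done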
As you can see, wildcard expansion works:
$ ls -l a/*/c/d
-rw-r--r-- 1 user wheel 0 15 Apr 08:05 a/b1/c/d
-rw-r--r-- 1 user wheel 0 15 Apr 08:05 a/b2/c/d
-rw-r--r-- 1 user wheel 0 15 Apr 08:05 a/b3/c/d

How can I list files with their absolute path, group and user in linux?

I want to generate recursive file listings with their full information: absolute path, group, user, creation time, etc.
But if I use a find command like find /, I only get the path names. I would like output like -rw-r--r-- 1 root root 669319168 Mar 11 17:10 /root/valhalla-i386-disc2.iso
find has a specific action that essentially does what you want:
-ls True; list current file in ls -dils format on standard output.
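For example, to get that format for everything under /root (just an illustration; use whatever starting directory you want, and note that an absolute starting path gives absolute paths in the output):
find /root -ls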

linux find on multiple patterns

I need to do a find on roughly 1500 file names and was wondering if there is a way to run multiple find commands simultaneously.
Right now I do something like
for fil in $(cat my_file)
do
    find . -name "$fil" >> outputfile
done
Is there a way to spawn multiple instances of find to speed up the process? Right now it takes about 7 hours to run this loop, one file at a time.
Given the 7-hour runtime you mention, I presume the file system has some millions of files in it, so that OS disk buffers loaded by one query are reused for other data before the next query begins. You can test this hypothesis by timing the same find a few times, as in the following example.
tini ~ > time find . -name IMG_0772.JPG -ls
25430459 9504 lrwxrwxrwx 1 omg omg 9732338 Aug 1 01:33 ./pix/rainbow/IMG_0772.JPG
20341373 5024 -rwxr-xr-x 1 omg omg 5144339 Apr 22 2009 ./pc/2009-04/IMG_0772.JPG
22678808 2848 -rwxr-xr-x 1 omg omg 2916237 Jul 21 21:03 ./pc/2012-07/IMG_0772.JPG
real 0m15.823s
user 0m0.908s
sys 0m1.608s
tini ~ > time find . -name IMG_0772.JPG -ls
25430459 9504 lrwxrwxrwx 1 omg omg 9732338 Aug 1 01:33 ./pix/rainbow/IMG_0772.JPG
20341373 5024 -rwxr-xr-x 1 omg omg 5144339 Apr 22 2009 ./pc/2009-04/IMG_0772.JPG
22678808 2848 -rwxr-xr-x 1 omg omg 2916237 Jul 21 21:03 ./pc/2012-07/IMG_0772.JPG
real 0m0.715s
user 0m0.340s
sys 0m0.368s
In the example, the second find ran much faster because the OS still had buffers in RAM from the first find. [On my small Linux 3.2.0-32 system, according to top, at the moment 2.5 GB of RAM is in buffers, 0.3 GB is free, and 3.8 GB is in use (i.e., about 1.3 GB for programs and the OS).]
Anyhow, to speed up processing, you need to find a way to make better use of OS disk buffering. For example, double or quadruple your system memory. As an alternative, try the locate command. The query
time locate IMG_0772.JPG
consistently takes under a second on my system. You may wish to run updatedb just before starting the job that finds the 1500 file names; see man updatedb. If the directory . in your find covers only a small part of the overall file system, so that the locate database includes numerous irrelevant files, use the various prune options when you run updatedb to minimize the size of the database that locate has to scan; afterwards, run a plain updatedb to restore the other filenames to the locate database. Using locate, you probably can cut the run time to 20 minutes.
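For example, the original loop could be rewritten around locate roughly like this (a sketch: it assumes my_file still holds one name per line, and -b is mlocate's option to match only the base name of each entry):
while read -r fil
do
    locate -b "$fil" >> outputfile
done < my_file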
This solution calls find and fgrep only once:
find . | fgrep -f my_file > outputfile
I assume that my_file has a list of files you are looking for, with each name on a separate line.
Explanation
The find command finds all the files (including directories) under the current directory. Its output is a list of files/directories, one per line.
The fgrep command searches the output of the find command, but instead of taking the search terms on the command line, it reads them from my_file; that's what the -f flag is for.
The output of the fgrep command, which is the list of files you are looking for, is redirected into outputfile.
Maybe something like:
find . \( -name file1 -o -name file2 -o ... \) >outputfile
You could build lines of this kind, depending on the number of names in my_file:
find . \( $(xargs <my_file printf "-name %s -o " | sed 's/-o $//') \) >outputfile
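To illustrate, if my_file contained the (hypothetical) names f1, f2 and f3, the generated command would come out roughly as:
find . \( -name f1 -o -name f2 -o -name f3 \) >outputfile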
"is there a way to spawn multiple instances of find to speed up the process?"
This is not how you want to solve the problem, since find is I/O- and FS-limited.
Either use multiple -name arguments grouped together with -o in order to use one find command to look for multiple filenames at once, or find all files once and use a tool such as grep to search the resultant list of files for the filenames of interest.

Delete file with odd character in filename

I cannot delete a file that is a copy of a backup of a backup... I don't remember all the filesystem character sets it has passed through.
Anyway, here's the file today:
nas# ls -al
ls: cannot access Sécurité: No such file or directory
total 32
drwx------ 4 sambacam sambacam 20480 Jun 5 01:38 .
drwxr-xr-x 3 sambacam sambacam 12288 Jun 5 01:38 ..
d????????? ? ? ? ? ? S??curit??
nas# cd S*
cd: 13: can't cd to Sécurité
nas# rm "Sécurité"
rm: cannot remove `S\303\251curit\303\251': No such file or directory
nas# rm S*
rm: cannot remove `S\303\251curit\303\251': No such file or directory
nas#
I even tried it from Python, without success:
nas# python
Python 2.5.2 (r252:60911, Jan 24 2010, 20:48:41)
[GCC 4.3.2] on linux2
Type "help", "copyright", "credits" or "license" for more information.
>>> import os
>>> d=os.listdir('.')
>>> d
['S\xc3\xa9curit\xc3\xa9']
>>> d[0]
'S\xc3\xa9curit\xc3\xa9'
>>> os.remove(d[0])
Traceback (most recent call last):
File "<stdin>", line 1, in <module>
OSError: [Errno 2] No such file or directory: 'S\xc3\xa9curit\xc3\xa9'
>>>
Any idea?
I already ran fsck to check for inconsistencies.
I think you've got worse problems:
d????????? ? ? ? ? ? S??curit??
This means that ls(1) was unable to find permissions, link count, owner, group, size, or mtime of your file. All it has is a filename.
This could happen if the directory structure points to a file, but the inode for that file has gone missing. I would hope an fsck would find it and clean up the directory entry, but if that hasn't happened, you might never be able to empty this directory on this filesystem. (You could move it wherever you wanted, even into /lost+found, and not be bothered by it again...)
Perhaps the debugfs(8) tool would be useful in learning more?
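For example, something along these lines would dump what the filesystem has recorded for that directory entry (a sketch only: /dev/sdXN and path/to are placeholders, the quoted path is relative to that filesystem's root, and debugfs opens the device read-only by default):
debugfs -R 'stat "path/to/Sécurité"' /dev/sdXN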
Have you tried with the inode number trick? Do:
ls -ilb
The first number in that list is the inode number. The -b switch makes ls not try to print non-printable characters. Once you have the inode number of the file, try:
find . -inum the_number_from_above -exec rm -i {} \;
(BTW: that's UTF-8 encoding.)
I'm not sure it will work though. The fact that ls isn't finding the file's metadata (timestamps and permission bits) looks like filesystem corruption.

Linux: Unzip an archive containing files with the same name

I was sent a zip file containing 40 files with the same name.
I wanted to extract each of these files to a separate folder OR extract each file with a different name (file1, file2, etc.).
Is there a way to do this automatically with standard linux tools? A check of man unzip revealed nothing that could help me. zipsplit also does not seem to allow an arbitrary splitting of zip files (I was trying to split the zip into 40 archives, each containing one file).
At the moment I am (r)enaming my files individually. This is not so much of a problem with a 40 file archive, but is obviously unscalable.
Anyone have a nice, simple way of doing this? More curious than anything else.
Thanks.
Assuming that no such tool currently exists, it should be quite easy to write one in Python; the zipfile module should be sufficient.
Something like this (maybe, untested):
#!/usr/bin/env python
import os
import sys
import zipfile

# Extract each member of the archive into its own numbered directory
# (0, 1, 2, ...) so files with identical names cannot clobber each other.
count = 0
z = zipfile.ZipFile(sys.argv[1], "r")
for info in z.infolist():
    directory = str(count)
    os.makedirs(directory)
    z.extract(info, directory)
    count += 1
z.close()
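Saved under an arbitrary name such as unzip_each.py, it would be run against the archive like this, with each member landing in its own numbered directory (0, 1, 2, ...):
python unzip_each.py file.zip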
I know this is a couple of years old, but the answers above did not solve my particular problem, so I thought I should go ahead and post a solution that worked for me.
Without scripting, you can just use command-line input to interact with the unzip tool's text interface. That is, when you type this at the command line:
unzip file.zip
and the archive contains files with the same name, it will prompt you with:
replace sameName.txt? [y]es, [n]o, [A]ll, [N]one, [r]ename:
If you wanted to do this by hand, you would type "r", and then at the next prompt:
new name:
you would just type the new file name.
To automate this, simply create a text file with the responses to these prompts and use it as the input to unzip, as follows:
r
sameName_1.txt
r
sameName_2.txt
...
That is generated pretty easily using your favorite scripting language. Save it as unzip_input.txt and then use it as input to unzip like this:
unzip < unzip_input.txt
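For instance, a short bash loop could generate those responses (a sketch that reuses the sameName.txt name from the prompt above; adjust the count to however many prompts unzip actually shows for your archive):
for i in $(seq 1 39)
do
    printf 'r\nsameName_%d.txt\n' "$i"
done > unzip_input.txt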
For me, this was less of a headache than trying to get the Perl or Python extraction modules working the way I needed. Hope this helps someone...
Here is a Linux shell-script version.
In this case, 834733991_T_ONTIME.csv is the name of the file that is the same inside every zip file, and the .csv after "$count" simply has to be swapped for the file extension you want.
#!/bin/bash
count=0
for a in *.zip
do
    unzip -q "$a"
    mv 834733991_T_ONTIME.csv "$count".csv
    count=$(($count+1))
done
This thread is old, but there is still room for improvement. Personally, I prefer the following one-liner in bash:
unzipd ()
{
    unzip -d "${1%.*}" "$1"
}
A nice, clean, and simple way to remove the extension and use the archive's name as the extraction directory.
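With a (hypothetical) archive reports.zip, calling unzipd reports.zip extracts its contents into ./reports/, since unzip -d creates the target directory if it does not exist.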
Using unzip -B file.zip did the trick for me. It creates backup files suffixed with ~ (then ~1, ~2, ...) when a file by that name already exists.
For example:
$ rm *.xml
$ unzip -B bogus.zip
Archive: bogus.zip
inflating: foo.xml
inflating: foo.xml
inflating: foo.xml
inflating: foo.xml
inflating: foo.xml
$ ls -l
-rw-rw-r-- 1 user user 1161 Dec 20 20:03 bogus.zip
-rw-rw-r-- 1 user user 1501 Dec 16 14:34 foo.xml
-rw-rw-r-- 1 user user 1520 Dec 16 14:45 foo.xml~
-rw-rw-r-- 1 user user 1501 Dec 16 14:47 foo.xml~1
-rw-rw-r-- 1 user user 1520 Dec 16 14:53 foo.xml~2
-rw-rw-r-- 1 user user 1520 Dec 16 14:54 foo.xml~3
Note: the -B option does not show up in unzip --help, but is mentioned in the man pages: https://manpages.org/unzip#options
