macOS directory structure to text converter

I'm looking for a tool that would convert a given directory into a text-based directory tree of the form:

./directory
|
+-- subdirectory1/
|   |
|   +-- fileA.md
|
+-- subdirectory2/
    |
    +-- fileA.md
    +-- ...

I'm working with macOS; maybe there is a browser-based tool for this?

There is a Unix tool that will do this, called tree. It outputs the directory tree structure of a given folder. It is a command-line tool, which means you will have to use the terminal to get your results. Typing tree -d ~ will, for example, output the tree structure of your home directory.
Although it is not included by default on macOS, you can install it yourself. You can download and compile the source from its homepage (link) or use a package manager like Homebrew to install it (brew install tree).
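If installing extra software is not an option, a rough approximation of tree's output can be sketched with Python's standard library (a stand-in, not the real tool):

```python
import os

def tree_lines(root, prefix=""):
    """Yield one text line per entry under `root`, depth-first."""
    for name in sorted(os.listdir(root)):
        path = os.path.join(root, name)
        if os.path.isdir(path):
            yield f"{prefix}+-- {name}/"
            # Recurse with a deeper prefix so children line up under the dir.
            yield from tree_lines(path, prefix + "|   ")
        else:
            yield f"{prefix}+-- {name}"

# for line in tree_lines("."):
#     print(line)
```

It lacks tree's options (-d, depth limits, colors), but it needs nothing beyond a stock Python install.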

Related

Combining all of the node_modules packages' LICENSE files into one file

In my node_modules folder, let's say that I have 5 folders:
Folder1
Folder2
Folder3
Folder4
Folder5
Each folder contains a "LICENSE" file or "LICENSE.md" file. I want to run a script of some kind, whether Python or JS, that takes the contents of each LICENSE file, along with the name of the package, and adds them to one single file, "Destination.txt". Is there an easier way to do this? I know that NPM License Checker allows me to list licenses and output the results into a text file, but I also want the contents of each LICENSE file collected into a single file.
So sample output for Destination.txt should be like:
Folder1
text from LICENSE file
Folder2
text from LICENSE file
Folder3
text from LICENSE file
Folder4
text from LICENSE file
Folder5
text from LICENSE file
What I'm thinking about is possibly a Python script that traverses each folder in node_modules and tries to find a LICENSE file. When it goes into a folder and finds a LICENSE, it copies / pastes the contents into Destination.txt and then moves on to the next LICENSE file. I'm not sure how to implement this and would appreciate any tips. Doesn't have to be Python. I'm on a Windows computer so if there's a creative way to use the CLI commands to generate what I'm looking for, that'd be great.
TLDR; I want to list the names of my packages as well as the contents of their associated LICENSE files all in one file. So if package1 is listed, the contents of the package1 LICENSE file will be listed below it. Same with package2 and so on.
The closest thing I've been able to find that doesn't require a custom script is Yarn Licenses.
I used command yarn licenses generate-disclaimer > output.txt
It worked even though I have never installed anything via Yarn before. Only NPM.
Most languages have a package that already does this, e.g.:
nodejs:
legally
license-checker
python:
pip-licenses
ruby:
bundle-audit
Each of these can produce a JSON file, which you can then parse and combine as you wish.
If you're specifically interested in npm packages, then just use npm view, like so:
$ npm view eslint --json license
"MIT"
And if you want this for all your npm packages, you can combine it with npm ls, like so:
for PKG in $(npm ls --json | jq -r '.dependencies | keys | .[]')
do
  npm view "$PKG" name license --json
done
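If you do want the custom-script route the question describes, here is a hedged Python sketch: it walks node_modules, and for each package directory containing a LICENSE or LICENSE.md, appends the package name and the license text to Destination.txt. The paths and file-name candidates are assumptions; adjust them to your layout.

```python
import os

def combine_licenses(node_modules, destination):
    """Append each package's name and LICENSE text to one output file."""
    with open(destination, "w", encoding="utf-8") as out:
        for pkg in sorted(os.listdir(node_modules)):
            pkg_dir = os.path.join(node_modules, pkg)
            if not os.path.isdir(pkg_dir):
                continue
            # Try the two common license file names, in order.
            for candidate in ("LICENSE", "LICENSE.md"):
                lic_path = os.path.join(pkg_dir, candidate)
                if os.path.isfile(lic_path):
                    out.write(pkg + "\n")
                    with open(lic_path, encoding="utf-8") as lic:
                        out.write(lic.read().rstrip() + "\n\n")
                    break

# combine_licenses("node_modules", "Destination.txt")
```

Note that scoped packages (directories like @babel/) hold their package folders one level deeper; this sketch does not descend into them.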

Using your own venv on another system

I am having 3 small problems, which are inter-related.
1. I recently created my own virtual environment. I want to export that environment to my friend's system so that he can run my environment's main program in one tap.
2. Also, where should I put the main driver Python code file in the venv so that it can easily be executed on another system?
3. I used open() to read a text file, but I am not sure what its directory should be so that it will work on any other system; I am currently storing it within my venv.
What I tried:
1. I copied my_venv directly and pasted it on the other system (but I am not sure which folder to select so that it can be operated on another Windows machine).
2. I stored it as my_venv/main.py.
3. I tried open(r'.vmy_env/text.txt','r').
You cannot actually move your virtual environment from one system to your friend's system. What you can do instead is:
Create a src folder inside the virtual environment folder and keep all the code files and other files related to the project inside this source folder.
Use the pip freeze command to obtain all the installation details. Store all these details inside a file like requirements.txt, either by manually copy-pasting or by using output redirection.
Now that we have the requirements and basic structure down, make a virtual environment on the other (your friend's) system. Ask him to place the src folder in the exact same place as you did, then ask him to install all the dependencies using pip (you can follow this link).
Then he should be good to go with the project execution. Another helpful link is this one, which shows how to use the above-mentioned steps.
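The pip freeze step can be run by hand, or scripted; a minimal sketch, assuming pip is available in the active environment:

```python
import subprocess
import sys

# Ask the current interpreter's pip for the installed packages, and
# write them to requirements.txt (same effect as `pip freeze > requirements.txt`).
result = subprocess.run(
    [sys.executable, "-m", "pip", "freeze"],
    capture_output=True, text=True, check=True,
)
with open("requirements.txt", "w", encoding="utf-8") as f:
    f.write(result.stdout)
```

Your friend then recreates the environment with python -m venv my_venv, activates it, and runs pip install -r requirements.txt.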
A suggested folder structure can be something like this :
.
|
+-- src
    |
    +-- main.py
    |
    +-- data
    |   |
    |   +-- dataset1.csv
    |   +-- dataset2.csv
    |
    +-- utils
    |   |
    |   +-- helper.py
    |
    +-- requirements.txt
Here the main.py is the driver code and all other directories will contain helper/utility functions and classes.
Some good practices for managing a project's folder structure:
Keep all the code and data files inside the source folder (here src).
Use relative paths instead of absolute paths; the os module can help with this. It eliminates the need to modify the code every time you run it on a different machine or operating system.
Never copy the venv folder. It's only the src folder we need.
Using a version control system is a big plus when it comes to effective project management and collaboration, so try looking into git.
If you could share your current folder structure then I could help you out more precisely.
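To illustrate the relative-path advice for problem 3, here is a small sketch (the file name text.txt mirrors the question; everything else is an assumption):

```python
from pathlib import Path

try:
    # Resolve paths relative to this script file, not the current
    # working directory, so the code behaves the same on any machine.
    BASE_DIR = Path(__file__).resolve().parent
except NameError:  # e.g. when pasted into an interactive session
    BASE_DIR = Path.cwd()

def read_text_file(name, base_dir=BASE_DIR):
    """Read a text file located relative to the project, not the cwd."""
    return (Path(base_dir) / name).read_text(encoding="utf-8")

# text = read_text_file("text.txt")
```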

Build an RPM dependency chain without installing dependencies on build host

I'm trying to set up an RPM builder that will compile all the dependent binaries and executables of my project.
The dependency looks a bit like this:
MainProject.rpm, which depends on:
|
+-- subProject1.rpm, which depends on:
|   |
|   +-- subProject2.rpm
|
+-- subProject2.rpm
I'm generating all these RPMs in this order:
- build the rpm for subProject2
- install the subProject2 RPM on the system
- build the rpm for subProject1
- install the subProject1 RPM on the system
- build the rpm for subProject3
- install the subProject3 RPM on the system
- build the rpm for MainProject
All my spec files produce suitable RPMs, at the cost of my having to install subProject2.rpm on my machine before attempting to rpmbuild subProject1.
The same goes for MainProject.rpm: if I want to build it, I have to install all the RPMs it depends on.
I feel like this way of doing things is very bad, because I'm installing these RPMs into my builder's filesystem.
Is there an rpmbuild option to, say, deploy the RPM dependencies in a chroot-like environment to build another one? I think if such a thing exists, it also needs to take the RPATH into account.
This is only a bad thing if your build environment is not controlled, as far as I'm concerned. Assuming you have a controlled, repeatable build process, doing things this way should be fine, I believe.
That being said, the answer to your actual question is to do what Fedora etc. are doing and use Koji, or at least the underlying chroot-related piece called Mock.
You could also consider using the '--root' option. Use it with both the 'rpm' command and for the 'rpmbuild' command. With this option, all the rpm constraints and actions will be in relation to this 'chroot-like' environment. It must be a fully qualified path.
Ex:
rpmbuild --root /home/user/master-project/rpmroot
There are at least three major implications for this:
1) You must initialize an rpm database in this area before you can use it for other commands:
rpm --initdb --root /home/user/master-project/rpmroot
2) All dependencies in the alternate root must be met by some other package in the alternate root. This can get difficult if, in your case for example, 'MainProject' depends on standard libraries.
3) As you alluded to, compilers/linkers must also know about the alternate root.
Ex:
LIBRARY_PATH=/home/user/master-project/rpmroot/usr/lib
C_INCLUDE_PATH=/home/user/master-project/rpmroot/usr/include
Hope this helps.

Determine list of non-OS packages installed on a RedHat Linux machine

I have been tasked with identifying new (non-operating system) software installed on several Red Hat Enterprise Linux (RHEL) machines. Can anyone suggest an efficient way to do this? The way I was doing it is manually comparing the list of installed software with the list on Red Hat's FTP site for the relevant operating system:
ftp://ftp.redhat.com/pub/redhat/linux/enterprise/6Server/en/os/SRPMS/
The problems I am encountering with this method are that it is tedious and time-consuming, and that only the source packages are listed (e.g. I can't tell if avahi-glib is installed as part of the avahi package). If anyone can suggest a more efficient way to identify the software that doesn't come with the operating system on a RHEL machine, it would be greatly appreciated!
Here is what I have come up with so far as a more efficient method (though I still haven't figured out the last part, and there may be more efficient methods). If anyone can help me with the last step of this method, or can share a better method, it would be greatly appreciated!
New method (work in progress):
1. Copy the list of packages from Red Hat's FTP site into a text file (OSPackages.txt).
2. To fix the problem of only source RPMs being listed, also copy the list of files from the relevant corresponding version on http://vault.centos.org into a text file, and merge this data with OSPackages.txt.
3. Do rpm -qa > list1, yum -y list installed > list2, ls /usr/bin > list3, ls /usr/share > list4, ls /usr/lib > list5.
4. Use cat to merge all the listX files together into InstalledPackages.txt.
5. Use sort to pick out the unique entries, perhaps like: sort -u -k 1 InstalledPackages.txt > SortedInstalledPackages.txt
6. Do a diff between SortedInstalledPackages.txt and OSPackages.txt, using a regular expression (-I regexp) to identify the package names (and eliminate the version numbers). I would also need a "one way diff", i.e. ignore the extra OS packages in OSPackages.txt that do not appear in the installed packages file.
Note: I asked the following question to help me with this part, and believe I am now fairly close to a solution:
How do I do a one way diff in Linux?
If diff (or another command) can perform the last step, it should produce a list of packages that don't come on the OS. This is the step I am stuck on and would appreciate further help. What command would I use to perform step 6?
rpm -qa --last | less
This will list recently installed rpms with the installed date.
yum provides some useful information about when and from where a package was installed. If you have the system installation date, then you can pull out packages that were installed after that, as well as packages that were installed from different sources and locations.
Coming at it from the other direction, you can query rpm to find out which package provides each of the binaries in /sbin, /lib, etc. Any package that doesn't provide a "system" binary or library is part of your initial set for consideration.
Get a list of configured repository ids:
yum repolist | tail -n +3 | grep -v 'repolist:' | cut -f1 -d' '
Now identify which are the valid Red Hat repositories. Once you do that, you can list all the packages from each repository. For example, if I were to do this for the official Fedora repositories, I would list the package names like so:
yum list installed --disablerepo="*" --enablerepo="fedora*"
From this list you get the packages you have installed. Then, for each package, list its files:
for p in $PACKAGES; do rpmls $p; done
Or like this:
yum list installed --disablerepo="*" --enablerepo="fedora*" \
| cut -f1 -d' ' \
| ( while read p; do rpmls $p; done ) \
| cut -c13-
Now you have a list of files which are supposed to come from the official repositories.
Now you can list all the installed files using rpm:
rpm -qal
With these two lists, it is easy to compare the two outputs.
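That comparison is the "one way diff" the question asks about, which is just a set difference; a toy Python sketch, with stand-in data in place of the real lists:

```python
def one_way_diff(installed, official):
    """Return entries of `installed` that are absent from `official`."""
    official_set = set(official)
    return [item for item in installed if item not in official_set]

# Stand-in data; in practice these would be the two file lists built above.
installed_files = ["/usr/bin/bash", "/opt/custom/bin/app", "/usr/bin/vim"]
official_files = ["/usr/bin/bash", "/usr/bin/vim"]
print(one_way_diff(installed_files, official_files))  # -> ['/opt/custom/bin/app']
```

On the command line, comm -23 file1 file2 on two sorted files performs the same operation: it prints only the lines unique to file1.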
If Red Hat has an equivalent of /var/log/installer/initial-status.gz on Ubuntu systems, then you could cat that to a tmpfile, list the installed packages, and grep -v against the tmpfile.
One of the first scripts I wrote to learn Linux did this exact same thing on Ubuntu:
https://gist.github.com/sysadmiral/d58388e315a6c6384053aa6b0af66c5f
This works on Ubuntu and may work on other Debian-based systems or systems that use the aptitude package manager. It doesn't work on Red Hat/CentOS, but I added it here as a starting point, I guess.
Disclaimer: it will not pick up manually compiled things, i.e. your package manager needs to know about a package for this script to show it.
Personal disclaimer: please forgive the non-use of tee. I was still learning the ropes when I wrote this and have never updated the code, for nostalgia's sake.

where to copy samtools binary to some directories

I am installing cufflinks on my Mac OS X, and here is the instruction:
http://cufflinks.cbcb.umd.edu/tutorial.html
Under "Installing the SAM tools" I followed the instructions below:
1. Download the SAM tools.
2. Unpack the SAM tools tarball and cd to the SAM tools source directory.
3. Build the SAM tools by typing make at the command line.
4. Choose a directory into which you wish to copy the SAM tools binary, the included library libbam.a, and the library headers. A common choice is /usr/local/.
5. Copy libbam.a to the lib/ directory in the folder you've chosen above (e.g. /usr/local/lib/).
6. Create a directory called "bam" in the include/ directory (e.g. /usr/local/include/bam).
7. Copy the headers (files ending in .h) to the include/bam directory you've created above (e.g. /usr/local/include/bam).
8. Copy the samtools binary to some directory in your PATH.
I've done the first 7 steps, but I am not sure how to proceed with the last step (#8): should I use the command:
sudo cp -a samtools-0.1.18 /usr/local/
or copy it into some other directory? What does the PATH in step 8 refer to? Thanks!
To answer your question, I will go over some basic Linux knowledge that has helped me understand binaries and their locations.
In linux, you can run a binary by typing in a complete path to the binary and the binary will run. For example, if I have a binary named foo in /usr/local/bin, I would run the command /usr/local/bin/foo and the foo binary would be run.
The purpose of PATH is to act as a shortcut, so that you don't need to type the complete path to a binary, just its name. PATH is a variable that contains all of the directories you want the shortcut to apply to. So, referring to the previous example, if /usr/local/bin is in my PATH variable, then I can just run foo.
So, to answer your question: you can tell which directories are in your PATH by running the command echo $PATH, and if one of those directories is where your samtools binaries are, you're good! If not, you can move your samtools binaries into one of those directories so that you don't have to type the full path every time you want to run them.
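The same check, sketched in Python (`ls` is used as a stand-in, since samtools may not be on the PATH yet):

```python
import os
import shutil

# The directories searched when you type a bare command name, in order:
search_dirs = os.environ.get("PATH", "").split(os.pathsep)
print(search_dirs)

# Where (if anywhere) a given program is found on the PATH; returns
# None when the program is not on the PATH.
print(shutil.which("ls"))
```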
