Linux: how to get the deepest child folders?

My current directory contents are:
$ tree
├── README.md
├── deploy.sh
├── grizzly
│   ├── configs
│   │   ├── nginx-conf.yml
│   │   └── proxy-conf.yml
│   ├── deployments
│   │   ├── api.yml
│   │   ├── celery.yml
│   │   └── proxy.yml
│   ├── secrets
│   └── services
│       ├── api.yml
│       └── proxy.yml
├── ingress.yml
└── shared
    ├── configs
    │   └── rabbitmq.yml
    └── env
        └── variables.yml
I plan to create a script that will run $ kubectl apply for all files in this tree.
My thinking is to collect all the child directories (which are expected to contain the yml files) and then run $ kubectl apply in each of them so my resources get created.

This is an instance of the XY problem. What you actually want is to apply all YAML files that live anywhere within the directory structure of the current directory.
Just run:
kubectl apply -f . --recursive
If you want to filter the files based on certain conditions, you can use a construct like:
find . -type f | grep 'api.yml' | xargs -n 1 kubectl apply -f
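For example, a slightly more defensive variant (a sketch; the *.yml pattern is an assumption about your file naming) that only picks up YAML files and copes with paths containing spaces:
# Apply every *.yml below the current directory, one file at a time.
# -print0 / -0 keep paths with spaces intact.
find . -type f -name '*.yml' -print0 | xargs -0 -n 1 kubectl apply -f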

Related

How to unzip recursively in place?

I have a directory with subdirectories. I want to unzip every .zip file in this directory structure recursively and locally. (Unzip next to the .zip file)
For example:
.
├── A
│   ├── A2
│   │   └── content1.zip
│   ├── content2.zip
│   └── content3.zip
├── B
│   └── content4.zip
└── content5.zip
After unzip:
.
├── A
│   ├── A2
│   │   ├── content1.txt
│   │   └── content1.zip
│   ├── content2.txt
│   ├── content2.zip
│   ├── content3.txt
│   └── content3.zip
├── B
│   ├── content4.txt
│   └── content4.zip
├── content5.txt
└── content5.zip
I'd prefer a solution that works on both Linux and Windows.
With Python 3:
import os
import sys
import zipfile

for dir in os.walk(sys.argv[1]):
    # dir[0] - current directory
    # dir[1] - directories in current directory
    # dir[2] - files in current directory
    for file in dir[2]:
        if file.endswith(".zip"):
            with zipfile.ZipFile(dir[0] + "/" + file, 'r') as zip_ref:
                zip_ref.extractall(dir[0])
Run in current directory:
python3 unzip.py .
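If a shell-only approach is acceptable on the Linux side, find's -execdir runs a command from the directory containing each match, which gives the same "unzip next to the archive" behaviour (a sketch, not part of the original answer):
# Unzip each archive in the directory where it was found; -o overwrites
# previously extracted files instead of prompting.
find . -type f -name '*.zip' -execdir unzip -o {} \;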

Archive & delete files older than x days, maintaining the directory structure

We have log files generated by different sources and stored in directories and sub-directories. The directory structure is something like the one below.
I want to tar and compress the files older than 30 days while maintaining the directory structure, and delete the archived files afterwards.
Can someone help me with this? How can this be achieved?
dataload
├── apiConnectorApp
│   ├── csv
│   │   ├── 20210216231308
│   │   ├── batch1
│   │   ├── batch2
│   │   └── batch3
│   ├── day1Load
│   ├── logs
│   └── sql
├── configs
│   ├── eSite
│   │   ├── CKB2B
│   │   │   ├── CatalogEntryAssociations
│   │   │   ├── CatalogGroup
│   │   │   └── CatalogGroupCatalogEntryRelation
│   │   └── THB2B
│   │       ├── CatalogEntryAssociations
│   │       ├── CatalogGroup
│   │       └── CatalogGroupCatalogEntryRelation
find dataload -type f -mtime +30 -exec tar -rf archive.tar {} + -exec rm -f {} +
Run find with dataload as the parent directory. Process files that have been modified more than 30 days ago. Append as many files as possible (+) to an archive file "archive.tar" by using the tar -r flag. Then remove the files.
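The question also asks for compression. Since tar -r cannot append to a compressed archive, a common follow-up (a sketch) is to compress the tar once everything has been appended:
# Compress only after all appends are done; gzip replaces archive.tar with archive.tar.gz.
gzip archive.tar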

How to pass a token environment variable into a provider module?

I have made a Terraform script that can successfully spin up 2 DigitalOcean droplets as nodes and install a Kubernetes master on one and a worker on the other.
For this, it uses a bash shell environment variable that is defined as:
export DO_ACCESS_TOKEN="..."
export TF_VAR_DO_ACCESS_TOKEN=$DO_ACCESS_TOKEN
It can then be used in the script:
provider "digitalocean" {
version = "~> 1.0"
token = "${var.DO_ACCESS_TOKEN}"
}
Now, having all these files in one directory is getting a bit messy, so I'm trying to implement this as modules.
I thus have a provider module offering access to my DigitalOcean account, a droplet module spinning up a droplet with a given name, a Kubernetes master module and a Kubernetes worker module.
I can run the terraform init command.
But when running the terraform plan command, it asks me for the provider token (which it rightfully did not do before I implemented modules):
$ terraform plan
provider.digitalocean.token
The token key for API operations.
Enter a value:
It seems that it cannot find the token defined in the bash shell environment.
I have the following modules:
.
├── digitalocean
│   ├── droplet
│   │   ├── create-ssh-key-certificate.sh
│   │   ├── main.tf
│   │   ├── outputs.tf
│   │   └── vars.tf
│   └── provider
│       ├── main.tf
│       └── vars.tf
└── kubernetes
    ├── master
    │   ├── configure-cluster.sh
    │   ├── configure-user.sh
    │   ├── create-namespace.sh
    │   ├── create-role-binding-deployment-manager.yml
    │   ├── create-role-deployment-manager.yml
    │   ├── kubernetes-bootstrap.sh
    │   ├── main.tf
    │   ├── outputs.tf
    │   └── vars.tf
    └── worker
        ├── kubernetes-bootstrap.sh
        ├── main.tf
        ├── outputs.tf
        └── vars.tf
In my project directory, I have a vars.tf file:
$ cat vars.tf
variable "DO_ACCESS_TOKEN" {}
variable "SSH_PUBLIC_KEY" {}
variable "SSH_PRIVATE_KEY" {}
variable "SSH_FINGERPRINT" {}
and I have a provider.tf file:
$ cat provider.tf
module "digitalocean" {
source = "/home/stephane/dev/terraform/modules/digitalocean/provider"
DO_ACCESS_TOKEN = "${var.DO_ACCESS_TOKEN}"
}
And it calls the digitalocean provider module defined as:
$ cat digitalocean/provider/vars.tf
variable "DO_ACCESS_TOKEN" {}
$ cat digitalocean/provider/main.tf
provider "digitalocean" {
version = "~> 1.0"
token = "${var.DO_ACCESS_TOKEN}"
}
UPDATE: The provided solution led me to organize my project like:
.
├── env
│   ├── dev
│   │   ├── backend.tf -> /home/stephane/dev/terraform/utils/backend.tf
│   │   ├── digital-ocean.tf -> /home/stephane/dev/terraform/providers/digital-ocean.tf
│   │   ├── kubernetes-master.tf -> /home/stephane/dev/terraform/stacks/kubernetes-master.tf
│   │   ├── kubernetes-worker-1.tf -> /home/stephane/dev/terraform/stacks/kubernetes-worker-1.tf
│   │   ├── outputs.tf -> /home/stephane/dev/terraform/stacks/outputs.tf
│   │   ├── terraform.tfplan
│   │   ├── terraform.tfstate
│   │   ├── terraform.tfstate.backup
│   │   ├── terraform.tfvars
│   │   └── vars.tf -> /home/stephane/dev/terraform/utils/vars.tf
│   ├── production
│   └── staging
└── README.md
With a custom library of providers, stacks and modules, layered like:
.
├── modules
│   ├── digitalocean
│   │   └── droplet
│   │       ├── main.tf
│   │       ├── outputs.tf
│   │       ├── scripts
│   │       │   └── create-ssh-key-and-csr.sh
│   │       └── vars.tf
│   └── kubernetes
│       ├── master
│       │   ├── main.tf
│       │   ├── outputs.tf
│       │   ├── scripts
│       │   │   ├── configure-cluster.sh
│       │   │   ├── configure-user.sh
│       │   │   ├── create-namespace.sh
│       │   │   ├── create-role-binding-deployment-manager.yml
│       │   │   ├── create-role-deployment-manager.yml
│       │   │   ├── kubernetes-bootstrap.sh
│       │   │   └── sign-ssh-csr.sh
│       │   └── vars.tf
│       └── worker
│           ├── main.tf
│           ├── outputs.tf
│           ├── scripts
│           │   └── kubernetes-bootstrap.sh -> /home/stephane/dev/terraform/modules/kubernetes/master/scripts/kubernetes-bootstrap.sh
│           └── vars.tf
├── providers
│   └── digital-ocean.tf
├── stacks
│   ├── kubernetes-master.tf
│   ├── kubernetes-worker-1.tf
│   └── outputs.tf
└── utils
    ├── backend.tf
    └── vars.tf
The simplest option here is to not define the provider at all and just use the DIGITALOCEAN_TOKEN environment variable, as mentioned in the DigitalOcean provider docs.
This will always use the latest version of the Digital Ocean provider but otherwise will be functionally the same as what you're currently doing.
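In shell terms that looks something like this (a sketch; the token value is yours to fill in):
# The DigitalOcean provider reads this variable directly, so no provider
# "token" argument or TF_VAR_ indirection is needed.
export DIGITALOCEAN_TOKEN="..."
terraform plan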
However, if you do want to define the provider block, so that you can pin the version of the provider used, define a partial state configuration, or set the required Terraform version, then you just need to make sure that the provider-defining files are in the same directory you are applying or in a sourced module. (If you're doing partial state configuration, they must be in the directory itself, not in a module, as state configuration happens before module fetching.)
I normally achieve this by simply symlinking my provider file everywhere that I want to apply my Terraform code (so everywhere that isn't just a module).
As an example you might have a directory structure that looks something like this:
.
├── modules
│   └── kubernetes
│       ├── master
│       │   ├── main.tf
│       │   ├── output.tf
│       │   └── variables.tf
│       └── worker
│           ├── main.tf
│           ├── output.tf
│           └── variables.tf
├── production
│   ├── digital-ocean.tf -> ../providers/digital-ocean.tf
│   ├── kubernetes-master.tf -> ../stacks/kubernetes-master.tf
│   ├── kubernetes-worker.tf -> ../stacks/kubernetes-worker.tf
│   └── terraform.tfvars
├── providers
│   └── digital-ocean.tf
├── stacks
│   ├── kubernetes-master.tf
│   └── kubernetes-worker.tf
└── staging
    ├── digital-ocean.tf -> ../providers/digital-ocean.tf
    ├── kubernetes-master.tf -> ../stacks/kubernetes-master.tf
    ├── kubernetes-worker.tf -> ../stacks/kubernetes-worker.tf
    └── terraform.tfvars
This layout has two "locations" where you would perform Terraform actions (e.g. plan/apply): staging and production (given as an example of keeping things as similar as possible, with slight variations between environments). These directories contain only symlinked files, apart from the terraform.tfvars file, which lets you vary only a few constrained things, keeping your staging and production environments the same.
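Those symlinks can be created once per environment, for example (paths taken from the example layout above):
# Link the shared provider and stack definitions into an environment directory.
ln -s ../providers/digital-ocean.tf production/digital-ocean.tf
ln -s ../stacks/kubernetes-master.tf production/kubernetes-master.tf
ln -s ../stacks/kubernetes-worker.tf production/kubernetes-worker.tf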
The provider file that is symlinked would contain any provider-specific configuration (in the case of AWS this would normally include the region things should be created in; with DigitalOcean it is probably just pinning the version of the provider to use), but it could also contain a partial Terraform state configuration to minimise what you need to pass when running terraform init, or even just set the required Terraform version. An example might look something like this:
provider "digitalocean" {
version = "~> 1.0"
}
terraform {
required_version = "=0.11.10"
backend "s3" {
region = "eu-west-1"
encrypt = true
kms_key_id = "alias/terraform-state"
dynamodb_table = "terraform-locks"
}
}
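With a partial backend configuration like the one above, the remaining settings are supplied per environment at init time; the bucket and key values below are placeholders, not values from the original answer:
# Finish the partial "s3" backend configuration when initialising.
terraform init \
  -backend-config="bucket=my-terraform-state" \
  -backend-config="key=staging/terraform.tfstate"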

ctrlp still searches the ignored directory

I tried to put an ignore setting in my .vimrc,
but when I used ctrlp to search under my Rails app folder,
it still searched the vendor folder, so it took a lot of time.
Yet when the search was done, I couldn't find anything under vendor.
It was so strange! How do I fix it?
Here is my .vimrc setting file:
http://d.pr/i/yMtK
http://d.pr/i/Hy4u
" Sane Ignore For ctrlp
let g:ctrlp_custom_ignore = {
\ 'dir': '\.git$|vendor\|\.hg$\|\.svn$\|\.yardoc\|public\/images\|public\/system\|data\|log\|tmp$',
\ 'file': '\.exe$\|\.so$\|\.dat$'
\ }
When I appended the following code to the end of my .vimrc:
let g:NERDTreeIgnore=['\~$', 'vendor']
set wildignore+=*\\vendor\\**
it worked the first time I used ctrlp to search under the Rails app folder,
but it no longer worked on subsequent searches.
I guess maybe there is some setting that disables the ignore settings?
Here is the structure of my folder:
.
├── Gemfile
├── Gemfile.lock
├── README.rdoc
├── Rakefile
├── app
│   ├── assets
│   ├── controllers
│   ├── helpers
│   ├── mailers
│   ├── models
│   ├── uploaders
│   ├── views
│   └── workers
├── auto.sh
├── config
│   ├── application.rb
│   ├── application.yml
│   ├── boot.rb
│   ├── database.yml
│   ├── environment.rb
│   ├── environments
│   ├── initializers
│   ├── locales
│   ├── macbookair_whenever_schedule.rb
│   ├── menu_navigation.rb
│   ├── navigation.rb
│   ├── resque.god
│   ├── resque_schedule.yml
│   ├── routes.rb
│   ├── schedule.rb -> ubuntu_whenever_schedule.rb
│   ├── tinymce.yml
│   └── ubuntu_whenever_schedule.rb
├── config.ru
├── db
│   ├── development.sqlite3
│   ├── migrate
│   ├── migrate_should_be_skip
│   ├── production.sqlite3
│   ├── schema.rb
│   └── seeds.rb
├── doc
│   └── README_FOR_APP
├── lib
│   ├── assets
│   ├── auto_tools
│   ├── tasks
│   └── url_automation_module.rb
├── log
│   ├── apalog
│   ├── development.log
│   ├── passenger.80.log
│   ├── production.log
│   └── prodution.log
├── output_name
├── public
│   ├── 404.html
│   ├── 422.html
│   ├── 500.html
│   ├── exports
│   ├── favicon.ico
│   ├── results.zip
│   ├── robots.txt
│   ├── sandbox
│   └── uploads
├── script
│   ├── delayed_job
│   └── rails
├── test
│   ├── fixtures
│   ├── functional
│   ├── integration
│   ├── performance
│   ├── test_helper.rb
│   └── unit
├── test.sh
├── tmp
│   ├── cache
│   ├── pids
│   ├── restart.txt
│   ├── sessions
│   └── sockets
├── tmplog
└── vendor
    └── bundle
If you type :help ctrlp-options and read a bit, you will find:
Note #1: by default, wildignore and g:ctrlp_custom_ignore only
apply when globpath() is used to scan for files, thus these options
do not apply when a command defined with g:ctrlp_user_command is
being used.
Thus, you may need to unlet g:ctrlp_user_command (possibly set to a default command) to actually use wildignore as advised by #TomCammann. For instance, in your ~/.vimrc, add:
if exists("g:ctrlp_user_command")
  unlet g:ctrlp_user_command
endif
set wildignore+=*\\vendor\\**
After that, you need to refresh your ctrlp cache: in Vim, press F5 in ctrlp mode, or run :CtrlPClearAllCaches, or remove the cache directory directly in your shell:
rm -r ~/.cache/ctrlp/ # On Linux
Part of my .vimrc file; perhaps it will help:
set wildignore+=*/.git/*,*/.hg/*,*/.svn/*,*/.idea/*,*/.DS_Store,*/vendor
You can use the wildignore vim setting which CtrlP will pick up on.
set wildignore+=*\\vendor\\**
Check if you are using some specific search command, like:
let g:ctrlp_user_command = 'find %s -type f' " MacOSX/Linux
let g:ctrlp_user_command = 'dir %s /-n /b /s /a-d' " Windows
This kind of configuration ignores the g:ctrlp_custom_ignore option.
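If you do keep a custom g:ctrlp_user_command, the filtering has to happen inside that command itself, since the ignore options are bypassed. For instance, the find-based command could exclude vendor directly (a sketch, with . standing in for ctrlp's %s placeholder):
# The shell command ctrlp would run; skip anything under a vendor directory.
find . -type f -not -path '*/vendor/*'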
wildignore may be used by other commands as well. The reason g:ctrlp_custom_ignore fails is g:ctrlp_user_command; for example, here is mine:
if executable('rg')
  let g:ctrlp_user_command = 'rg %s --files --hidden --color=never --glob ""'
endif
In this case, rg has its own ignore mechanism: just add .git to .gitignore, and rg will not search any files matched by .gitignore.

Copy every file of entire directory structure into base path of another

I have a directory tree with a lot of files in it. I'd like to copy all of those files into one new directory, with every file placed directly at the base of that folder.
So I have something like this:
images
├── avatar.png
├── bg.jpg
├── checkbox.png
├── cross.png
├── datatables
│   ├── back_disabled.png
│   ├── back_enabled.png
│   ├── forward_disabled.png
│   ├── forward_enabled.png
│   ├── sort_asc.png
│   ├── sort_asc_disabled.png
│   ├── sort_both.png
│   ├── sort_desc.png
│   └── sort_desc_disabled.png
├── exclamation.png
├── forms
│   ├── btn_left.gif
│   ├── btn_right.gif
│   ├── checkbox.gif
│   ├── input
│   │   ├── input_left-focus.gif
│   │   ├── input_left-hover.gif
│   │   ├── input_left.gif
│   │   ├── input_right-focus.gif
│   │   ├── input_right-hover.gif
│   │   ├── input_right.gif
│   │   ├── input_text_left.gif
│   │   └── input_text_right.gif
│   ├── radio.gif
│   ├── select_left.gif
│   ├── select_right.gif
And I'd like something like this:
new_folder
├── avatar.png
├── bg.jpg
├── checkbox.png
├── cross.png
├── back_disabled.png
├── back_enabled.png
├── forward_disabled.png
├── forward_enabled.png
├── sort_asc.png
├── sort_asc_disabled.png
├── sort_both.png
├── sort_desc.png
├── sort_desc_disabled.png
├── exclamation.png
├── btn_left.gif
├── btn_right.gif
├── checkbox.gif
├── input_left-focus.gif
├── input_left-hover.gif
├── input_left.gif
├── input_right-focus.gif
├── input_right-hover.gif
├── input_right.gif
├── input_text_left.gif
├── input_text_right.gif
├── radio.gif
├── select_left.gif
├── select_right.gif
I'm pretty sure there is a bash command for that, but I haven't found it yet. Do you have any ideas?
find /source-tree -type f -exec cp {} /target-dir \;
You are looking for a way to flatten the directory tree.
find /images -iname '*.jpg' -exec cp --target-directory /newfolder/ {} \;
find finds all files; -iname matches the name case-insensitively.
cp copies each match to the --target-directory /newfolder/.
{} expands to the paths found by find, in the form /dir/file.jpg /dir/dir2/bla.jpg.
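One caveat when flattening: files from different subdirectories can share a name and silently overwrite each other. With GNU cp, the --backup option is one way to guard against that (a sketch; new_folder is the target from the question):
# Keep numbered backups (name.~1~, ...) instead of overwriting duplicate names.
find images -type f -exec cp --backup=numbered -t new_folder {} +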
On zsh:
cp /source/**/* /destination
$ cd images && cp ** new_folder/
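Note that a bare ** glob also matches directories, which a plain cp refuses to copy; zsh's (.) glob qualifier restricts the match to regular files:
# zsh only: (.) limits the glob to regular files, so cp never sees directories.
cp /source/**/*(.) /destination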
