Tar Jenkins jobs build logs and send to S3 - linux

I have a lot of Jenkins job `builds` folders at various places in the hierarchy, like this.
I need to collect all sub-folders within the builds folder from each job and tar them, keeping the path to these jobs intact (i.e. when I tar it and extract it, the contents will land back on jobs/test1/builds etc.).
What is the best way to approach this?
/var/lib/jenkins/folder1
├── config.xml
└── jobs
    └── test1
        └── builds
            └── 1
/var/lib/jenkins/folder2
├── config.xml
└── jobs
    └── awsbuild
        └── jobs
            └── developers
                └── jobs
                    └── billing
                        └── branches
                            └── testbranch
                                └── builds
                                    └── 1

How should global resources be declared in Terragrunt?

I need to create a single Azure Container Registry. I'm using Terragrunt to manage several environments in Azure. My simplified layout looks similar to this:
/modules
/environments
    /development
    /staging
    /production
The registry does not really fit in any of these environments since it is shared. Is there a "best practices" way to create one-off global resources such as a container registry? I could not come up with a way that I liked that didn't feel wrong.
You can create a directory structure similar to the following:
account
├── _global
│   └── acr
├── dev
├── prod
└── stage
For each Azure account/subscription you have, create a top level account directory. Within that directory, at the top level, set up a _global folder for resources that are global to that account. Also create subdirectories for each environment.
Gruntwork's example repo is a good reference.
Not Azure specific, but having gone through a similar exercise: I assume there are global objects that are associated with your account and not with an environment. This is the case in AWS with VPCs, instances, etc., though items like IAM (user, role, and role-policy management) are owned by the account. So, after a bit of trial and error, I came up with the following, at the same root as dev, staging, and prod. There are subdirectories of global, mainly to keep down the change scope.
All of dev, stage, and prod use the global directory through terraform_remote_state (actually a dependency block, since I'm using Terragrunt, but it's analogous).
HTH
tree global
global
├── cloud_watch_alarm.tf
├── dynamo_db
│   └── terragrunt.hcl
├── iam
│   ├── iam_groups.tf
│   ├── iam_instance_profile.tf
│   ├── iam_policies.tf
│   ├── iam_policy_attachment.tf
│   ├── iam_role_policies.tf
│   ├── iam_roles.tf
│   ├── iam_user_group_membership.tf
│   ├── iam_users.tf
│   ├── main.tf
│   ├── provider.tf
│   ├── terragrunt.hcl
│   └── variables.tf
├── main.tf
├── s3
│   ├── main.tf
│   ├── provider.tf
│   ├── s3-ohio.tf
│   ├── s3.tf
│   └── terragrunt.hcl
├── sns_topic_subscription.tf
├── sns_topic.tf
├── sqs.tf
├── terragrunt.hcl
└── variables.tf
EDIT: Alas, I can't claim my practice is best; I asked the same question and got a similar answer.

How to centralize a database models folder if it is used in multiple projects?

We have 3 different REST APIs, namely A, B, and C, and all access the same MySQL database.
Our stack is NodeJS + ExpressJS + SequelizeJS ORM + MySQL.
Right now, if I want to create a new DB model, I need to create it in API A and
ask the other developers, who are working on APIs B and C, to copy and paste the new model into their projects' models folders.
This is very inefficient and error-prone.
So instead of doing this manually, can we automate this task with a new repo in Bitbucket?
The idea is to create a repo in Bitbucket and somehow refer to that models folder in all 3 projects, instead of keeping a models folder in each and every project.
How do I achieve this using NodeJS, ExpressJS and Bitbucket?
I am assuming that by API A, B and C you are referring to completely different projects here. If that's the case, then I can suggest using Git submodules. But having used submodules extensively, I would suggest going this route only if it is unavoidable.
The project structure I usually work on:
.(Git Root)
├── logs
├── resources
├── schema
│   └── <different-entities>
├── src
│   ├── config
│   ├── controllers
│   ├── jobs
│   │   ├── emails
│   │   └── notifications
│   ├── locales
│   ├── middlewares
│   ├── migrations
│   ├── models (You need to have a git submodule here)
│   ├── public
│   ├── seeders
│   ├── services
│   │   ├── entities
│   │   └── factories
│   ├── transformers
│   ├── types
│   ├── types-override
│   ├── util
│   └── validators
│       ├── keywords
│       └── <different-entities>
├── storage
│   └── <date>
├── stubs
└── temp_files
This sounds easy but keep these things in mind:
If your existing project has the models directory in its Git history, you cannot create a submodule on that directory (at least not in an easy way; the way I did it was to rename models to shared-models).
Now there will be 2 Git repositories:
A. the Git repository containing all the model files (you will never open this repo in your IDE);
B. the main Git repository of your project.
So there will be unnecessary merge conflicts much of the time you merge branches in your main repository, because the main repository only keeps track of the commit hash the models repository should be on at that moment. Any new commit (irrespective of whether it's even a fast-forward) can be treated as a merge conflict.
Third, and the drawback that's most tiring: suppose you have to make a change in just a model file and nothing else in the project changes (yes, that's infrequent but possible, e.g. adding another enum value to a status key). To achieve this you will have to make two commits: first one in the models repository, which stores the actual changes, then one in the main repository, which stores the new commit hash, and push the two commits to two different repositories.
If I have not lost you already and you feel this is the right approach for your use case (I can't think of anything better, though):
1. Copy your complete models directory to a new place (let's say your desktop).
2. Run git init inside ~/Desktop/models.
3. Push it to a separate Bitbucket repo (I usually name it <project>-models-<lang>, e.g. facebook-models-node).
4. Come back to your main project A.
5. Remove the models directory there.
6. Run: git submodule add <HTTPS/SSH URL of the Bitbucket repo> src/shared-models
7. Replace imports of models from src/models to src/shared-models.
8. Repeat steps 5-7 for the other projects too.
Official Git submodule docs: https://git-scm.com/book/en/v2/Git-Tools-Submodules
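The migration steps above can be sketched as a shell session (the paths, repo names, and Bitbucket URL are all hypothetical):

```shell
# Steps 1-3: extract the models into their own repository
cp -r ~/projectA/src/models ~/Desktop/models
cd ~/Desktop/models
git init
git add -A
git commit -m "Extract shared Sequelize models"
git remote add origin git@bitbucket.org:myteam/facebook-models-node.git  # example URL
git push -u origin master

# Steps 4-7: replace the in-tree models with a submodule in the main project
cd ~/projectA
git rm -r src/models
git commit -m "Remove in-tree models"
git submodule add git@bitbucket.org:myteam/facebook-models-node.git src/shared-models
git commit -m "Track shared models as a submodule"
# ...then update imports: src/models -> src/shared-models
```

After this, collaborators clone with `git clone --recurse-submodules`, and the main repo pins the exact models commit each project builds against.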

git - Merge only core functionality of two different branches

So here's the thing:
I have a webapp project I write in Node using Express as a server (in the master branch), and I also have a version of the same app built as a node-webkit desktop app (in the nwjs-sdk branch).
The difference between the two branches is only a handful of files, mainly in the root of the directory.
So here's a rough idea of the contents of each branch:
The master branch:
├── package.json
├── node_modules/
├── public
│   ├── css
│   │   ├── cassette.css
│   │   └── style.css
│   ├── data
│   │   └── metadata.json
│   ├── index.html
│   ├── js
│   │   ├── cookie-monster.js
│   │   ├── jquery-3.2.1.min.js
│   │   └── mixtape.js
│   └── tracks
├── README.md
├── readTracks.js
└── server.js
And the nwjs-sdk branch:
├── app
│   ├── css
│   │   ├── cassette.css
│   │   └── style.css
│   ├── data
│   │   ├── metadata.json
│   │   └── tracks.json
│   ├── index.html
│   ├── js
│   │   ├── jquery-3.2.1.min.js
│   │   └── mixtape.js
│   ├── main.js
│   ├── package.json
│   ├── tracks
│   └── uploads
├── package.json
├── readTracks.js
└── writeID3.js
Basically, the main difference is that the Express server is gone and the public/ dir is renamed to app/.
The core functionality of my app is in public/js/mixtape.js and index.html (app/js/mixtape.js and app/index.html in the nwjs-sdk branch).
What I want to do is work on the master branch tweaking the core functionality and when everything is ready copy that functionality to the nwjs-sdk branch without breaking the app for node-webkit.
Any ideas on how to use git for this?
You want to perform a merge, but Git is really dumb: if you ask it to merge two histories, one without a public folder and one with a public folder, then the merged history will contain a public folder. The same argument can be made about the app folder, so after the merge you should expect to see both folders. Instead, you'll want to direct Git:
git checkout master;
git merge --no-commit nwjs-sdk;
This will cause Git to pause right before it makes the merge commit. At this point you should move your files around however you would like them (Git is not going to be able to figure this out for you). When you're happy with how your files look, you just need to make a regular commit. Since you're creating new history, you can always go back in time; you don't need to worry about losing something in the following instructions.
# Un-stage the changes git was preparing to commit
git reset
# Use git rm to stage the removal of old files (you may want to read up on this command, perhaps try --dry-run)
git rm -r foo ...
# Use git add to stage the new files
git add app ...
# Package everything in the stage into your merge commit
git commit
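Put together, the sequence might look like this (which folder survives is purely an example here - swap public and app around depending on which layout the branch you're on should keep):

```shell
git checkout master
git merge --no-commit nwjs-sdk   # pause before the merge commit is made
git reset                        # un-stage everything git prepared

# Example choice: keep app/ and drop public/ (reverse this if the
# web layout should survive on this branch)
git rm -r public
git add app

git commit -m "Merge nwjs-sdk, keeping the app/ layout"
```

The resulting commit is a real merge commit with both branches as parents, so future merges between master and nwjs-sdk stay tractable.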

Node.js location of uploaded file

I'm very new to Node.js. I am trying to implement an online file-converting service, but I have trouble knowing where I should store the converted files, because I don't want anyone but the user to be able to access them. Is the "public" folder the right choice to store the files? Thanks.
Here is my structure.
WebApp
├── bin
├── node_modules
│   ├── body-parser
│   ├── cookie-parser
│   ├── debug
│   ├── express
│   ├── jade
│   ├── morgan
│   ├── multer
│   ├── node-uuid
│   ├── remove
│   ├── serve-favicon
│   ├── sqlite3
│   └── stripe
├── public
│   ├── bower_components
│   ├── converted
│   ├── fonts
│   ├── images
│   ├── javascripts
│   ├── previews
│   ├── stylesheets
│   └── uploads
├── routes
├── ssl
└── views
    └── partials
Serving private files from such a public folder (it looks like your public folder is the place static files are served from) is not a good idea - anyone who has the link can download them.
You should create a custom download route handler instead, which is responsible for verifying that it is really the owner who is trying to download particular file.
Assuming this is a standard generated Express app.
Absolutely do not store them in the public directory; it's open access for anyone.
You have multer in your packages, so I'm assuming you already have your file-upload modules. Now just use some sort of authentication middleware package, like PassportJS, and use it to authenticate in a routing file or a controller file.

Couchapp directory structure, updates?

When generating a new couchapp, I get this structure:
appname
├── _attachments
│   └── style
├── evently
│   ├── items
│   │   └── _changes
│   └── profile
│       └── profileReady
│           └── selectors
│               └── form
├── lists
├── shows
├── updates
├── vendor
│   └── couchapp
│       ├── _attachments
│       ├── evently
│       │   ├── account
│       │   │   ├── adminParty
│       │   │   ├── loggedIn
│       │   │   ├── loggedOut
│       │   │   ├── loginForm
│       │   │   │   └── selectors
│       │   │   │       └── form
│       │   │   └── signupForm
│       │   │       └── selectors
│       │   │           └── form
│       │   └── profile
│       │       ├── loggedOut
│       │       ├── noProfile
│       │       │   └── selectors
│       │       │       └── form
│       │       └── profileReady
│       └── lib
└── views
    └── recent-items
Now, since this structure is meant to reflect the JSON structure of a CouchDB _design document, I figured this out:
[_attachments] Attachments are stored as binary. JavaScript, CSS, and HTML files are stored here.
[evently] ???
[lists] Lists are JavaScript functions that are executed to render HTML or Atom feeds from view results.
[shows] Show functions are the analogue of list functions, but render content by transforming a document into other formats (such as HTML, XML, CSV, PNG).
[updates] ???
[vendor] Home of external libraries.
[views] Views contain MapReduce functions that can later be queried through the HTTP API.
Apart from me hopefully not being completely wrong with the filled-out descriptions, how would I describe the updates directory? Is it hosting validation functions?
The second question would be how you would describe the evently directory...
If there is a summary for this already existing, please point me to it!
Kind Regards!
The generate command builds the backbone document format that CouchDB needs; and it also builds a web app development framework, Evently. I don't know Evently very well; but basically it gives a developer tools and suggestions to make the UI and the couch interact.
Personally, I never use the couchapp generate command. I just create the _id file from scratch (echo -n _design/whatever > _id), then create folders and files as I need them.
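A minimal from-scratch skeleton in that spirit might look like this (the app name, view name, and map function are illustrative examples, not from the answer):

```shell
# Hand-rolled couchapp skeleton; names are examples
mkdir -p myapp/views/recent-items myapp/_attachments
cd myapp

# The design document's _id lives in a plain file, no trailing newline
printf '_design/myapp' > _id

# One view folder holds a map.js (and, optionally, a reduce.js)
cat > views/recent-items/map.js <<'EOF'
function (doc) {
  // emit by creation date so the view can be queried for recent items
  if (doc.created_at) emit(doc.created_at, doc);
}
EOF
```

Running `couchapp push` from this directory would then assemble the folders into the design document JSON that CouchDB stores.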
List functions (one per file) receive _view output to produce any HTTP response (e.g. XML RSS).
Show functions (one per file) receive one document to produce any HTTP response.
Update functions (one per file) receive one HTTP query and output one prepared document to be stored by couch. (For example, receiving a form submission and building a JSON document.)
View functions (one map.js and one reduce.js in a folder) are CouchDB views and provide for the querying and stuff.
I'm not sure about evently and vendor; they aren't relevant to the CouchDB server.
I have been using couchapp for about a week or two now. It took me more than a while to grasp how CouchDB works and how couchapp fits in. In fact, I was having the very questions that you are having, and I'm sure every newbie to couchapp will have these questions lingering in their mind. To save them some time at least, I'm posting some of the links that helped me get better at answering the questions you have asked:
http://couchapp.org/page/filesystem-mapping
http://couchapp.org/page/couchapp-usage
http://couchapp.org/page/evently-do-it-yourself
http://www.ibm.com/developerworks/opensource/tutorials/os-couchapp/?ca=drs-
Hope they help.
Update functions are documented in the CouchDB wiki. Quoting it:
[...] you should think about an _update handler as complementary to _show functions, not to validate_doc_update functions.
Evently is documented on the CouchApp site. The documentation is weak; I am using it in a project and have found only a short blog post with useful info. Luckily, the source code is easy to understand. Also look at the Pages app for sample usage. Anyway, it is not clear to me how widely used it is.
