dbt crontab copies old logs and doesn't run - cron

My coworker set up a virtual machine that is running Linux and dbt. The dbt run is scheduled with crontab like this:
0 3 * * * . /home/user/copydbtrel_and_run.sh
The script itself is really simple:
cd
cd .dbt
cd folder1
dbt run --target dev
cd
cd .dbt
cd folder2
dbt run --target dev
The problem is that the cron job fires on schedule, but it doesn't do what it's supposed to. I'm not sure whether it actually starts the dbt run at all, but this is what definitely happens:
all files in .dbt/folder1/logs get deleted
old log files from a week ago are copied from somewhere into the log folder
the same or something similar happens in .dbt/folder1/target: the files there refer to that same week-old run, as if nothing had been run in between
the actual dbt job doesn't do what it's supposed to: no database tables get loaded
If I just run the script manually, it does what it's supposed to, i.e. it runs the job and appends results to the log files.
So, what's going on here? I haven't used Linux in a long time and dbt isn't familiar to me, so I don't know where to start debugging. Also, my coworker is on vacation, so he can't help.
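One low-effort way to see what cron is actually doing here is to capture the run's output explicitly. This is only a debugging sketch: the directories come from the script above, while the log location and the use of absolute paths are assumptions.
#!/bin/sh
# hypothetical debugging wrapper: append everything the cron run prints to one log file
exec >> /tmp/dbt_cron_debug.log 2>&1
echo "=== run started $(date) ==="
cd /home/user/.dbt/folder1 && dbt run --target dev
cd /home/user/.dbt/folder2 && dbt run --target dev
echo "=== run finished $(date) ==="
Pointing the crontab entry at a wrapper like this should at least show whether dbt starts at all under cron and what it complains about, since cron jobs run with a minimal environment and a different working directory than an interactive shell.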

Related

From inside a yarn workspace subpackage, run a root-level script

I'm wondering: if your terminal's current working directory is inside a yarn workspace, is there a way to run a yarn script that's defined at the project root without changing the current directory to be outside of a workspace?
For instance, you can run a command for a particular workspace by running yarn workspace workspace-name script-name, but is it possible to use that yarn workspace command to target not a subpackage but the root package itself?
I couldn't find a way to do it with yarn workspace, but you can do it by specifying the current working directory (cwd) when running the root command. Assuming you're running your command from ~/packages/subpackage, you'll need to go up two levels with ../..:
yarn --cwd="../.." my-root-script
Scripts that contain a : in their name can be run from anywhere!
For example, your root script called "root:something" can be called from within any workspace by running yarn root:something.
Note that this even works if the : script is not a root script, but a workspace script. See yarn docs.
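As a quick illustration (the script name root:something is taken from the answer above; the workspace layout is an assumption), both approaches look like this from inside a workspace:
# run a root script from ~/project/packages/subpackage (hypothetical layout)
yarn --cwd="../.." root:something    # explicit: point yarn at the workspace root
yarn root:something                  # shortcut: works because the script name contains ":"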

Automating build installation process

I work on a SaaS-based product of my company, which is hosted on a private cloud. Every time a fresh BOM package is made available by the DEV team in the common share folder, we, the testing team, install the build on our application servers (3 multi-node servers, with one being primary and the other two secondary).
The build installation is done entirely manually on the three app servers (Linux machines), and the steps we follow are as below:
Stop all the app servers
Copy the latest build from a code repository server (the .zip build file)
Unzip the contents onto a folder on the app server (using the unzip command)
Run a backup of the existing running build on all three servers (the command is something like ant -f primaryBackup.xml, ant -f secondaryBackup.xml)
Then run the install on all three servers (the command is something like ant -f primaryInstall.xml, ant -f secondaryInstall.xml)
Then restart all the servers and check whether the latest build has been applied successfully.
Question: I want to automate this entire process, such that I only have to give the latest build number to be installed and the script takes care of the whole installation.
At present I don't understand how this can be done. Where should I start? Is this feasible? Would a shell script of the entire process be the solution?
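For reference, the manual steps above map roughly onto a shell script like the following sketch. The server names, paths, and the stop/start scripts are assumptions; only the ant invocations come from the steps above.
#!/bin/sh
# Hypothetical wrapper: ./install_build.sh <build_number>
BUILD=$1
SHARE=/mnt/build_share                    # assumed path of the common share folder
APP_HOME=/opt/appserver                   # assumed install location on each app server
SERVERS="primary secondary1 secondary2"   # assumed host names

for HOST in $SERVERS; do
    ssh "$HOST" "$APP_HOME/bin/stopserver.sh"                          # stop the app server (assumed script)
    scp "$SHARE/build-$BUILD.zip" "$HOST:/tmp/build-$BUILD.zip"        # copy the .zip build file
    ssh "$HOST" "unzip -o /tmp/build-$BUILD.zip -d $APP_HOME/staging"  # unzip onto the app server
done

# back up the running build, then install the new one (ant files as in the steps above)
ssh primary    "cd $APP_HOME/staging && ant -f primaryBackup.xml && ant -f primaryInstall.xml"
ssh secondary1 "cd $APP_HOME/staging && ant -f secondaryBackup.xml && ant -f secondaryInstall.xml"
ssh secondary2 "cd $APP_HOME/staging && ant -f secondaryBackup.xml && ant -f secondaryInstall.xml"

for HOST in $SERVERS; do
    ssh "$HOST" "$APP_HOME/bin/startserver.sh"                         # restart; verify the build manually
done
Each manual step maps onto one command here; the answer below makes the case for doing the same thing with a configuration-management tool rather than a bare script.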
There are many build automation/continuous deployment tools out there that would help you automate your deployment pipeline. Some of the more popular configuration automation tools are Puppet, Chef, Ansible, and SaltStack. I only have experience with Ansible and Chef, but my impression has been that Chef is the more "user-friendly" option. I would start there... (Chef uses the Ruby language and Ansible uses Python).
I can answer specific questions about this, but your original question is really open-ended and broad.
Free tutorials: https://learn.chef.io/
EDIT: I do not suggest provisioning your servers/deployments using bash scripts... that is generally messy, and as your automation grows (which it likely will), your code will gradually become unmanageable. Using something like Chef, you could set up periodic checks for new code in your repositories and deploy when new code is detected (or when certain conditions are met). You could write straight bash code within a Ruby block that will remotely stop/start a service like this (example):
bash 'start_app_services' do
  cwd 'current/working/directory'
  user 'user_name'
  code <<-EOH
    # start the service(s) in the background and give them time to come up
    nohup ./startservice.sh &
    sleep 2m
    nohup ./startservice.sh &
    sleep 3m
  EOH
end
To copy code from git, for example... I am assuming GitHub in this example, as I do not know where your code resides:
git "/opt/mysources/couch" do
  repository "git://git.apache.org/couchdb.git"
  reference "master"
  action :sync
  ssh_wrapper "/some/path/git_wrapper.sh"
end
Let's say that your code is anywhere else... Bamboo or Jenkins, for example... there is a Ruby/Chef resource for it, or some way to call it using straight Ruby code.
This is something that "you" and your team will have to figure out a strategy for.
You could untar a file with a tar resource like so:
tar_package 'http://pgfoundry.org/frs/download.php/1446/pgpool-3.4.1.tar.gz' do
  prefix '/usr/local'
  creates '/usr/local/bin/pgpool'
end
or use the generic Linux command like so:
execute 'extract_some_tar' do
  command 'tar xzvf somefile.tar.gz'
  cwd '/directory/of/tar/here'
  not_if { File.exist?('/file/contained/in/tar/here') }
end
You can start up the servers the way I wrote the first block of code (assuming they are services); if you need to restart the actual machines, you can just run init 6 or something.
This is just an example of the flexibility these utilities offer.

Jenkins npm build fails to unzip a package on Windows slave

I have added a Windows slave to do the npm build; in my package.json I have a step that performs "unzip pack.zip".
When I run npm build directly on the box it does everything successfully, but when it is run from the Jenkins job, it fails to unzip the file, i.e. pack.zip.
The file even gets extracted properly using tools like unzip, WinRAR, 7-Zip, etc.
I wrote a bat file to do the npm build. When I ran it from cmd it worked without any issue, but when I executed the same bat file from Jenkins, it failed at the same extraction step.
Added log below:
inflating: saui-client/node_modules/sig-quote/node_modules/sig-core/node_modules/underscore/underscore-min.map
error: expected central file header signature not found (file #73741).
(please check that you have transferred or created the zipfile in the
appropriate BINARY mode and that you have compiled UnZip properly)
inflating: saui-client/node_modules/sig-quote/node_modules/sig-core/node_modules/underscore/underscore.js
D:\jenkins\workspace\BUILD>exit 3
Build step 'Execute Windows batch command' marked build as failure
Finished: FAILURE
The only thing that comes to mind without the log is that you set up your slave environment after connecting it to Jenkins - maybe even installed NodeJS after connecting it - meaning the session you have on your computer is not the same as the one that was loaded when Jenkins connected to the slave. The simple solution is to disconnect the slave and reconnect it with the new environment, so the session has everything it needs to run. Once you have the log I will update my answer.
Good luck!
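One way to check for an environment mismatch like this is to dump the environment from the Jenkins job and compare it with a local cmd session. This is only a sketch; the tools listed are assumptions about what the build step depends on.
rem run this both as an "Execute Windows batch command" step and from a local cmd window
echo %PATH%
where node
where npm
where unzip
set > env_from_this_session.txt
If the two dumps differ (a different unzip on the PATH, missing entries, and so on), reconnecting the slave after the environment is set up, as the answer suggests, is the first thing to try.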

Autorun Bash Script after Login to Sync Git Repos

I already have a gitupdate.sh script that I found on the internet.
My problem now is how to make the script run automatically every time I log into my Ubuntu 14.04 computer.
I have tried adding this line to .bashrc
sh '/path/to/git/repo/gitupdate.sh'
The problem here is that the script is not executed in the context or path of the repo, so it runs in a folder that is not initialized with git. (I don't actually know which folder .bashrc runs in.)
What I want is for the update script to be run by Ubuntu in the context of the repo path so it will not fail, and also for the running script to be shown in a Terminal window that does not close automatically.
The ultimate goal is for the cloned git repos to be synced with the public repos automatically upon login.
Try
(cd repo && bash /path/to/gitupdate.sh)
Just open gitupdate.sh and add, at the beginning of the file, the command to go to the desired directory. For example:
cd /var/test
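A minimal sketch of what the top of gitupdate.sh could then look like (the repo path matches the one in the .bashrc line above; the git pull line is an assumption about what the downloaded script does):
#!/bin/bash
# move into the repo before any git commands run, and stop if the path is wrong
cd /path/to/git/repo || exit 1
git pull origin master
With the cd at the top, the line in .bashrc works no matter which directory the shell starts in.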

Linux to Windows copying network script

I need to improve my method, or even change it completely, for copying files over a private network from multiple Windows machines to a central Linux machine. How this works is that I run the script below as a cron job every 5 minutes to copy data from, say, 10 Windows machines, all with a shared folder, to the central Linux machine, where it is collected each day. So in theory the Linux machine should, at the end of the day, have all the data that has changed on the Windows machines.
#!/bin/sh
# file with one "ip username" pair per line
USER='/home/user/Documents/user.ip'
while read IPADDY USERNAME; do
    # create the mount point and the local destination
    mkdir /mnt/$USERNAME
    mkdir /home/user/Documents/$USERNAME
    # mount the Windows share, then pull across only the wanted files
    smbmount //$IPADDY/$USERNAME /mnt/$USERNAME -o username=usera,password=password,rw,uid=user
    rsync -zrv --progress --include='*.pdf' --include='*.txt' --include='issues' --exclude='*' /mnt/$USERNAME/ /home/user/Documents/$USERNAME/
done < $USER
The script runs fine, but it doesn't seem to be the best method: a lot of the time data is not copied across, or not all of it is copied correctly.
Do you think this is the best approach, or can someone point me to a better solution?
How about a git repository? Wouldn't that be easier? You could also easily track the changes.
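Separately from the git suggestion, a sketch of the original loop with two common guards might look like this; the lock-file path is an assumption, and the idea is only to skip machines whose share did not mount and to stop overlapping 5-minute cron runs:
#!/bin/sh
# take an exclusive lock so a slow run and the next cron run don't overlap (assumed lock path)
exec 9>/var/lock/windows_copy.lock
flock -n 9 || exit 0

USER='/home/user/Documents/user.ip'
while read IPADDY USERNAME; do
    mkdir -p /mnt/$USERNAME /home/user/Documents/$USERNAME
    # only rsync when the share actually mounted; otherwise skip this machine
    if smbmount //$IPADDY/$USERNAME /mnt/$USERNAME -o username=usera,password=password,rw,uid=user; then
        rsync -zr --include='*.pdf' --include='*.txt' --include='issues' --exclude='*' /mnt/$USERNAME/ /home/user/Documents/$USERNAME/
    fi
done < $USER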
