Custom remote task is not executed with capistrano 3 - node.js

I've run into a weird issue with Capistrano 3 and Brunch.
I want to execute brunch on the remote server, but nothing happens. My custom remote task looks like this:
namespace :brunch do
  desc "Building assets with brunch.io"
  task :build do
    on roles(:web) do
      within "#{release_path}" do
        execute "node #{release_path}/node_modules/brunch/bin/brunch build --env=#{fetch(:stage)} #{release_path}"
      end
    end
  end
end
When I run "cap staging deploy", I can see command is executed:
INFO [a246858c] Running node /releases/20160303145521/node_modules/brunch/bin/brunch build --env=staging /releases/20160303145521 as web
INFO [a246858c] Finished in 0.159 seconds with exit status 0 (successful).
But my assets are not built; nothing happens.
If I connect to the server and run the command by hand, everything works fine.
I don't understand this behaviour; is anyone aware of it?
Thanks a lot for your help.
I'm using Capistrano Version: 3.4.0 (Rake Version: 10.5.0)
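One known SSHKit quirk worth ruling out here: when execute is given a single string containing spaces, SSHKit passes it through verbatim and skips the within/with/as wrappers, so the cd into the release path never happens. A sketch of the symbol-plus-arguments form, which keeps those directives in effect (assuming brunch accepts the same arguments):

namespace :brunch do
  desc "Building assets with brunch.io"
  task :build do
    on roles(:web) do
      within release_path do
        # Passing the command as a symbol plus separate arguments lets
        # SSHKit apply `within` (and the command map) before running it
        execute :node, "#{release_path}/node_modules/brunch/bin/brunch",
                "build", "--env=#{fetch(:stage)}", release_path
      end
    end
  end
end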

Related

gitlab-ci Job failed: exit status 1 with no error

I'm trying to run njsscan to perform SAST on my code in gitlab-ci, but the job always fails even though there are no errors, as shown in the image below.
If I run the command manually on my server, it runs without any problems (second image below).
Is this a bug in gitlab-ci, or is there something I can do about it? Thank you.
I have the same issue using gitlab-runner 15.3.0 with the Docker executor (Docker version 20.10.17):
The job fails with RC=1 while running the before_script part.
Restarting the job (without any changes to the code or pipeline definitions) succeeds in most cases.
We are using a dozen runners, but even if a job is restarted on the same runner it succeeds, although it had just failed there.

Gitlab-CI succeeds on non-zero exit

Gitlab-CI seems to allow the build to succeed even though the script returns a non-zero exit code. I have the following minimal .gitlab-ci.yml:
# Run linter
lint:
  stage: build
  script:
    - exit 1
Producing the following result:
Running with gitlab-runner 11.1.0 (081978aa)
on gitlab-runner 72348d01
Using Shell executor...
Running on [hostname]
Fetching changes...
HEAD is now at 9f6f309 Still having problems with gitlab-runner
From https://[repo]
9f6f309..96fc77b dev -> origin/dev
Checking out 96fc77bb as dev...
Skipping Git submodules setup
$ exit 1
Job succeeded
Running on GitLab Community Edition 9.5.5 with gitlab-runner version 11.1.0. The closest post doesn't propose a resolution, nor does this issue. A related question shows this setup should fail.
What are the conditions of failing a job? Isn't it a non-zero return code?
The cause of the problem was that su was wrapped to call ksu, as the shared machines are authenticated using Kerberos. In that case the wrapping ksu succeeds even though the wrapped script command might fail, so the job is reported as succeeded. This affected gitlab-runner because the shell executor runs su to execute as the indicated user.
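In other words, the runner only sees the exit status of the outermost process. A minimal Ruby sketch (hypothetical, purely to illustrate the mechanism) of how a wrapper that discards its child's exit status masks the failure:

# wrapper.rb -- stands in for the ksu wrapper (hypothetical)
ok = system("sh", "-c", "exit 1")  # the wrapped script fails, so ok == false
# The wrapper exits 0 regardless of the child's status,
# so the runner reports "Job succeeded"
exit 0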

Capistrano 3: How to correctly write a local task?

In my nodejs/angular2 project, I am trying to run the AOT build process locally as a local task before deploying, but I cannot get it right. How can I set it up?
task :build_production_aot do
  run_locally do
    set :local_app_path, Dir.pwd
    set :local_client_path, "#{fetch(:local_app_path)}/client"
    sh 'npm run build:prod-aot'
  end
end
thanks for feedback
UPDATE
I succeeded in running the following task, but is there a better way to write it?
task :build_production_aot do
  run_locally do
    local_client_path = Dir.pwd + "/client"
    puts "--> Running build: '#{local_client_path}', please wait ..."
    execute "cd #{local_client_path} && npm run build:prod-aot"
  end
end
thanks
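For the local variant, a slightly more idiomatic sketch would let SSHKit's within handle the directory change instead of concatenating a cd ... && string (assuming the same client directory layout):

task :build_production_aot do
  run_locally do
    local_client_path = File.join(Dir.pwd, "client")
    puts "--> Running build: '#{local_client_path}', please wait ..."
    within local_client_path do
      # Symbol + arguments form, so `within` applies and the
      # arguments are escaped properly
      execute :npm, "run", "build:prod-aot"
    end
  end
end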
Even if it runs locally with such a modified script, I guess it's better to build the production dist directly on the remote server.

Deploy NodeJS app with pm2 and Capistrano

I'm developing a project based on NodeJS, pm2, and Capistrano 3.
I'm facing a downtime problem while deploying the Node app with Capistrano.
deploy.rb:
set :linked_dirs, ['node_modules', 'logs']
set :linked_files, ['ecosystem.json']
set :npm_flags, '--silent --no-spin'
before 'deploy:updated', 'assets:upload'
after 'deploy:updated', 'assets:webpack'
after 'deploy:publishing', 'pm2:restart'
assets:upload - builds the js and css files and uploads them to a CDN. The build is performed with Webpack, so it creates webpack-assets.json.
assets:webpack - uploads webpack-assets.json to the prod servers. webpack-assets.json is used by Node to get the exact asset name, because it contains a hash:
task :webpack do
  run_locally do
    roles(:web).each do |host|
      execute :rsync, '-rvzu', "themes-assets.json", "#{host.user}@#{host.hostname}:#{fetch(:release_path)}"
      execute :rsync, '-rvzu', "webpack-assets.json", "#{host.user}@#{host.hostname}:#{fetch(:release_path)}"
    end
  end
end
pm2:restart - should perform a zero-downtime reload, but in fact I'm getting one second of downtime. If I run this task independently, there is no downtime.
def restart_app
  within current_path do
    execute :pm2, :startOrRestart, fetch(:deploy_to) + '/shared/ecosystem.json'
  end
end
pm2 logs show the following error
Process with pid 123169 still not killed, retrying...
Instead of
    pm2 startOrRestart <app|conf>
you have to use
    pm2 startOrReload <app|conf>
If you still see a downtime while using "startOrReload", have a look at: http://pm2.keymetrics.io/docs/usage/signals-clean-restart/#graceful-start
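Applied to the task from the question, the fix is a one-word swap; a sketch:

def restart_app
  within current_path do
    # startOrReload keeps the old workers serving requests until the
    # new ones are ready, unlike startOrRestart, which kills them first
    execute :pm2, :startOrReload, fetch(:deploy_to) + '/shared/ecosystem.json'
  end
end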

TeamCity ".Net Process Runner" hangs

We have started migrating one of our several projects to TeamCity as part of CI. Below is how we have set up the TeamCity build. We are trying to deploy a website.
1) Build Step 1 (Package installation)
Using the "command line" runner type, install the required packages.
2) Build Step 2 (Build)
Using runner type "Visual Studio (sln)" (Visual Studio 2010), build the website.
3) Build Step 3 (Deploy Web Site)
Using the ".Net Process Runner", deployer.exe (x86, built with .NET Framework 4) deploys the site.
Deployer.exe reads a config file that contains "BuildId", "Environment", and the "Servers" to which we want the build to be pushed.
<buildType id="bt52">
<env name="Debug">
<server path="SERVER1" />
</env>
<env name="QA">
<server path="SERVER2" />
<server path="SERVER3" />
</env>
<env name="UAT">
<server path="SERVER4" />
<server path="SERVER5" />
</env>
</buildType>
Deployer.exe is called with the required parameters as below; it reads the config and deploys the site to SERVER2 and SERVER3.
Deployer.exe "bt52" "QA" "siteQA" "E:\BuildAgent\work\2483052e33e5e1e8\src\diy\" msdeploy.exe
The problem area is step #3.
When we run deployer.exe using the .Net Process Runner as part of TeamCity, it hangs and does not respond, sometimes for as long as 45 minutes. When we execute the same deployer.exe from the build server's command line, the script finishes within a couple of seconds:
E:\TeamCity_custom_applications\deployer>Deployer.exe farm1-1 QA siteQA E:\BuildAgent\work\2483052e33e5e1e8\src\diy\ msdeploy.exe
Info : Processing batch run ...
Info : Processing command ...msdeploy.exe -verb:sync -source:contentPath="E:\BuildAgent\work\2483052e33e5e1e8\src\diy\" -dest:contentPath="siteQA",wmsvc="SERVER2",userName="*****",password="******",authType="Basic" -skip:objectName=filePath,absolutePath=web.config -skip:objectName=dirPath,absolutePath="bin" -enableRule:DoNotDeleteRule -allowUntrusted
Info : output >> Total changes: 0 (0 added, 0 deleted, 0 updated, 0 parameters changed, 0 bytes copied)
Info : error >> (none)
Info : ExitCode >> 0
Info : Processing command ...msdeploy.exe -verb:sync -source:contentPath="E:\BuildAgent\work\2483052e33e5e1e8\src\diy\" -dest:contentPath="siteQA",wmsvc="SERVER3",userName="******",password="******",authType="Basic" -skip:objectName=filePath,absolutePath=web.config -skip:objectName=dirPath,absolutePath="bin" -enableRule:DoNotDeleteRule -allowUntrusted
Info : output >> Total changes: 0 (0 added, 0 deleted, 0 updated, 0 parameters changed, 0 bytes copied)
Info : error >> (none)
Info : ExitCode >> 0
Info : Deploy Script Complete.
One more thing we observed: when running deployer.exe through TeamCity, the site content gets copied, but only for one server, and the TeamCity build status stays in "Running" mode. I am wondering if someone can give a little insight into how I can investigate this issue.
Update 1:
Thanks for your time looking into it! What we ended up doing: instead of running "msdeploy.exe" from "cmd.exe", we added the "msdeploy.exe" location as an environment variable and executed "msdeploy.exe" in a loop over the servers. This resolved the hanging issue. Now I am just curious why it behaves this way: executing "msdeploy.exe" via "cmd.exe" hangs, while running "msdeploy.exe" directly executes successfully. Any insight would be greatly appreciated.
Update 2:
I have added an image which explains the behavior using Process Explorer. If we kill msdeploy.exe from Process Explorer, then subsequent deployments to that server no longer hang. Please see the image below.
To be honest, it sounds like you're running into issues with redirecting input/output streams. TeamCity is running your application in a totally headless environment, and you, in turn, are attempting to redirect and parse the output of msdeploy.exe.
If that's the case, I'd recommend looking into using the MSDeploy API instead of msdeploy.exe. The latter is just a command line wrapper for the former, so all the functionality is available to you. There's a sample deployment application available on the IIS blog if you need help getting started.
It seems you have an NUnit build step configured in TeamCity and invoke cmd.exe from your test. This looks like an issue with the test code, then. Most probably it will reproduce without TeamCity if you run the test in question with NUnit directly.
As Richard noted, the root cause of the issue is most probably related to stdin/stdout processing.
If you want to fix it in your code, you can experiment by explicitly closing stdin, or the other way around: try writing something into it, etc.
The workaround we used: we observed that msdeploy doesn't take more than 3-5 seconds to execute and deploy (even for our biggest project, which is an almost 300 MB website), so we set a timeout of 20 seconds. So far, over the last week, we have not seen any issue with it, and hopefully it will not cause more trouble, but we are still not sure why it behaves this way.
