Rename a folder with Grunt or Node.js

Is it possible to rename a folder without the whole process of making a new directory, copying the files into it, and then removing the old directory?
That process takes several minutes to complete, so I am currently forced to use a batch script to rename the folders; I'd prefer it all to be handled by Grunt. Looking through the Node docs, it appears there is no way to rename folders the way the 'mv' or 'rename' commands do.
The use case is a faster deployment workflow with Grunt on an intranet site. I'd like minimal downtime; two minutes of downtime to copy files is not ideal.
I stage my website on the server in www/test.
I then rename www/prod to www/archived
Then I rename www/test to www/prod, making the new site live.

Using grunt-shell solved my problem, though you have to warn future developers which platform the shell commands are written for (Windows, in my case). Note the doubled backslashes: in a JavaScript string, a literal backslash must be escaped.
shell: {
    options: {
        stderr: false
    },
    archiveToDelete: {
        command: 'rename <%= yeoman.winserver %>\\archived delete-this'
    },
    liveToArchive: {
        command: 'rename <%= yeoman.winserver %>\\prod archived'
    },
    deployToLive: {
        command: 'rename <%= yeoman.winserver %>\\test prod'
    }
}
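To run the three renames in sequence as one deploy step, they could be chained with a task alias (a sketch; the 'deploy' name is illustrative, not from the original config):
grunt.registerTask('deploy', [
    'shell:archiveToDelete',
    'shell:liveToArchive',
    'shell:deployToLive'
]);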

I think you are looking for fs.rename: https://nodejs.org/api/fs.html#fs_fs_rename_oldpath_newpath_callback
It is the equivalent of the 'mv' command, and it works on directories as well as files.
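A minimal sketch using the directory names from the question; fs.rename is near-instant because it renames in place rather than copying (both paths must be on the same filesystem):
const fs = require('fs');

// Archive the current prod, then promote the staged build.
fs.rename('www/prod', 'www/archived', (err) => {
    if (err) throw err;
    fs.rename('www/test', 'www/prod', (err) => {
        if (err) throw err;
        console.log('www/test is now live as www/prod');
    });
});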

Not a direct answer, but for this exact task I prefer symbolic links to renames:
I upload my files as www/20150602-214412 (the exact timestamp of the build).
I remove the existing www/prod symlink and create a new one pointing at my timestamped build.
That way I keep as many archives as I want, and I know exactly when the files were deployed.
In Grunt I use grunt-contrib-symlink and grunt-contrib-clean; a sketch of the configuration is below.
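A rough sketch of what that configuration could look like; the target names and the timestamp are illustrative, and the src/dest mapping follows the standard Grunt files format rather than anything from the original answer:
clean: {
    prod: ['www/prod'] // drop the old symlink; the timestamped builds stay
},
symlink: {
    prod: {
        src: 'www/20150602-214412', // the build to promote
        dest: 'www/prod'
    }
}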


Gradle Copy Task causes file permission issue

Background
I have a JetBrains plugin that I am developing. Recently, I moved from a Windows to an Ubuntu system and am trying to set everything up as it was before. Note: I am fairly new to Linux.
Issue
I am seeing an apparent file permission issue (shown in the Error section of this question) when I run the following Gradle script. Note: this script is called automatically whenever I build the project, and it worked correctly on Windows.
If I comment out the copy {...} closure, everything works correctly; I just have to copy the required file over manually.
tasks.create(name: "copyJar_v${project['version']}") {
    group GROUP_CHROMATERIAL
    def mostCurrentJarFile = "ChroMATERIAL-${project['version']}.jar"

    // comment this out and there are no errors, but then I need to do the copy manually
    copy {
        into '/' // Copy into project's root folder
        from 'build/libs', {
            include mostCurrentJarFile
            rename mostCurrentJarFile, 'ChroMATERIAL.jar'
        }
    }
}
Error
FAILURE: Build failed with an exception.
* Where:
Build file
'/home/ciscorucinski/IdeaProjects/ChroMATERIAL/ChroMATERIAL/build.gradle' line: 100
* What went wrong:
A problem occurred evaluating project ':ChroMATERIAL'.
> Could not copy file '/home/ciscorucinski/IdeaProjects/ChroMATERIAL/ChroMATERIAL/build/libs/ChroMATERIAL-2.5.1.jar' to '/ChroMATERIAL.jar'.
* Try:
Run with --stacktrace option to get the stack trace. Run with --info or --debug option to get more log output.
BUILD FAILED
Total time: 0.139 secs
/ChroMATERIAL.jar (Permission denied)
12:20:39 PM: Task execution finished 'copyJar_v2.5.1'.
What I tried
I used the UI to modify the file permissions of the plugin project folder within IdeaProjects:
I gave everyone Create and delete files permission, then chose Change Permissions for Enclosing Files...
Everyone got Read and write access to files and Create and delete files access to folders.
Clicked Change.
Ran the Gradle script again ... same error message.
Thoughts
It appears that Linux is not letting Gradle modify these files. I can comment out the code and do everything myself, but I need to give Gradle more control, and I don't know how.
I noticed that when I go back to Change Permissions for Enclosing Files..., the permissions shown are not what I selected: Others has Read-only access to files and Access files access to folders. I don't know whether this is normal Ubuntu behavior, a bug, or something else.
It would be nice to know how to fix this while keeping permissions as restrictive as possible.
Well, it is true -- you cannot (and should not) copy files into the root folder on a Linux box. You could if you ran the script with sudo, but that is a bad idea.
Edited
Since you want to copy to the root of the project, you can use ${projectDir} or ${rootDir}.
Also, you should be able to do this without the hassle of a closure -- and it makes your script cleaner, IMHO -- by using the built-in Copy task:
task copyClientLoc(type: Copy) {
    from "build/libs/"
    into "${rootDir}"
    include "ChroMATERIAL-${project['version']}.jar"
    fileMode = 0644
}
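If the copied jar should still end up named ChroMATERIAL.jar, as in the original closure, the Copy task type supports the same rename call (a sketch extending the snippet above):
task copyClientLoc(type: Copy) {
    from "build/libs/"
    into "${rootDir}"
    include "ChroMATERIAL-${project['version']}.jar"
    // keep the rename behaviour of the original copy { } closure
    rename "ChroMATERIAL-${project['version']}.jar", 'ChroMATERIAL.jar'
    fileMode = 0644
}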

Puppet : Copy files only IF the package needs to be installed to the latest

I'm a Puppet beginner, so bear with me :)
I'm trying to write a module that does the following :
Check whether a package is installed at the latest version available in the repos.
If the package needs to be installed, copy its config files from the Puppet source location to the client, then install the package.
Once the files are copied and the package installed, run the script that uses the config files on the client to apply the necessary settings.
Once all of this is done, remove the copied files from the client.
I've come up with the following :
class somepackage (
  $package_files_base = '/var/tmp',
  $package_setup      = '/var/tmp/package-setup.sh',
  $ndc_file           = '/var/tmp/somefile.ndc',
  $osd_file           = '/var/tmp/somefile.osd',
  $nds_file           = '/var/tmp/somefile.nds',
  $configini_file     = '/var/tmp/somefile.ini',
  $required_files     = [$package_setup, $ndc_file, $osd_file, $nds_file, $configini_file],
) {
  package { 'some package':
    ensure => 'latest',
    notify => Exec['Package Setup'],
  }

  file { 'Package Setup Files':
    ensure  => directory,
    path    => $package_files_base,
    replace => false,
    recurse => true,
    source  => "puppet:///modules/somepackage/${::domain}",
    mode    => '0755',
  }

  exec { 'Package Setup':
    command     => $package_setup,
    logoutput   => true,
    timeout     => 1800,
    require     => File['Package Setup Files'],
    refreshonly => true,
    notify      => Exec['Remove config files'],
  }

  exec { 'Remove config files':
    path        => ['/usr/bin', '/usr/sbin', '/bin', '/sbin'],
    command     => "rm \"${package_setup}\" \"${ndc_file}\" \"${osd_file}\" \"${nds_file}\" \"${configini_file}\"",
    refreshonly => true,
  }
}
While this achieves most of what I want, I noticed that on rerunning puppet apply, the files, although they had been removed, were copied again.
I can understand why this happens, but I don't know how to write it so that the files are copied ONLY if the package actually gets updated or installed (e.g. it wasn't installed before, or it was old). Otherwise the files will be copied over and over again on every Puppet run, which happens every 30 minutes (the default setup) on the client. I tried using replace => false to prevent this, but that just means the files won't ever be removed from /var/tmp after the first run of the class, because (from my testing) it only stops subsequent runs from re-copying them. That does prevent the redundant, repetitive copying, but I want the files to be gone after the first time!
Is this possible? Head hurts :(
Thanks in advance! We're running Puppet version 3.8.6 on EL7.3.
EDIT: To be clear, this is the bit I'm struggling with: the file { 'Package Setup Files': } resource keeps copying files even though the package isn't updated or installed. How do I prevent this from happening?
Here are some suggestions.
1) Recommendation for a short-term solution
Stop trying to clean up those files if you do not need to. Put them in /opt and forget about them. Better still, have Puppet place a README file in there with them that will explain to your future self and to your fellow admins what they are and why they are there.
While I completely understand the desire to clean up, you need to weigh the cost of having a few old files in a directory somewhere against the cost of having complicated logic in the Puppet code that will not make any sense to anyone in a few months.
This is what I would do and in my experience it is also what most Puppet module authors do with these sorts of set up files.
2) Consider an orchestration framework
That said, it appears to me that you are trying to use Puppet for operational tasks, and while it can do operational tasks after a fashion (via features like ensure => latest), it is really intended to be a configuration management tool.
I recommend using Puppet to ensure => installed for packages (make sure Puppet can install the app properly if you ever need to fully rebuild the node), and delegating version upgrades, hotfixes, and the like to something outside of Puppet; a sketch of the rewritten package resource follows the reasons below.
There are a few reasons for this.
Puppet is a declarative configuration management system; your Puppet code should define an end-state. Puppet is not like a shell script, where instead of an end-state, you define steps that change the state of a server imperatively, "one step at a time".
The first problem with ensure => latest is philosophical.
latest does not define a single end-state. The behaviour of your code at time X is different from the behaviour at time Y. So your code is not idempotent.
The second problem is practical. You can never solve the problem of RPM updates in a general way using Puppet, because Puppet can never know about all of the RPMs and their dependencies in your system. So, one way or another, you still need a specialised tool for managing the version updates.
So, since you will need a specialised tool for managing the version updates anyway, it is cleaner to draw a clear boundary between the two tools' roles: always use Puppet to manage the configuration and the initial installation; and then always use the other tool to manage the updates.
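Concretely, the package resource from the question would be pinned to installed rather than latest (a sketch; the notify is dropped here because upgrades would no longer be driven by Puppet):
package { 'some package':
  ensure => installed,
}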
Ok, great. I see in your comments that you already have a Red Hat Satellite server, and you have written:
...some hosts within the Satellite have got an older version of the
software within yum. But we don't update this software very
often.....maybe once every year.
So, it sounds like you are using Puppet here to work around a problem in the way you are using Satellite. Is it possible to address this by fixing the way you use Satellite? If so, I think that will be cleaner.
Of course, sometimes the right thing to do is use a work-around, and that's why I provided some other options.
3) If you really really want Puppet to clean up those files
Perhaps move the logic inside a shell script. Something like:
class somepackage {
  $shell = '#!/bin/bash
# maybe use wget instead of puppet to get the files
wget http://a.b/c.tgz
tar zxf c.tgz
# install stuff
# clean up stuff
'

  file { '/usr/local/bin/installer.sh':
    ensure  => file,
    mode    => '0755',
    content => $shell,
  }

  package { 'some package':
    ensure => latest,
    notify => Exec['installer'],
  }

  exec { 'installer':
    command     => '/usr/local/bin/installer.sh',
    refreshonly => true,
    require     => File['/usr/local/bin/installer.sh'],
  }
}

puppet - How to debug and test to see if your module is working properly

I wrote a simple module to install a package (BioPerl) on an Ubuntu VM. The whole init.pp file is here:
https://gist.github.com/anonymous/17b4c31bf7309aff14dfdcd378e44f40
The problem is it doesn't work, and it gives me no feedback to let me know why. There are three simple steps in the module, and I checked that it didn't do any of them. Here are the first two:
Step 1: Download an archive and save it to /usr/local/lib
exec { 'bioperl-download':
  command => "sudo /usr/bin/wget --no-check-certificate -O ${archive_path} ${package_uri}",
  require => Package['wget'],
}
Step 2: Extract the archive
exec { 'bioperl-extract':
  command => "sudo /usr/bin/tar zxvf ${archive_path} --directory ${install_path}; sudo rm ${archive_path}",
  require => Exec['bioperl-download'],
}
Pretty simple. But I have no idea where the problem is, because I can't see what it's doing. The provisioner is set to verbose mode, and here are the output lines for my module:
==> default: Notice: /Stage[main]/Bioperl/Exec[bioperl-download]/returns: executed successfully
==> default: Notice: /Stage[main]/Bioperl/Exec[bioperl-extract]/returns: executed successfully
==> default: Notice: /Stage[main]/Bioperl/Exec[bioperl-path]/returns: executed successfully
So all I know is that it executed these three steps successfully; it doesn't tell me whether the steps actually did their jobs. I know that it didn't download the archive to /usr/local/lib, and that it didn't add an environment variable file to /usr/profile.d. Maybe the variables containing the directories are wrong. Maybe the variable containing the archive's download URI is wrong. How can I find these things out?
UPDATE:
It turns out the module does work. But to improve it (since I want to upload it to forge.puppetlabs.com), I tried implementing the changes suggested by Matt. Here's the new code:
file { 'bioperl-download':
  ensure => present,
  path   => $archive_path,
  source => "http://cpan.metacpan.org/authors/id/C/CJ/CJFIELDS/${archive_name}",
}

exec { 'bioperl-extract':
  command => "sudo /bin/tar zxvf ${archive_name}",
  cwd     => $bioperl_target_dir,
  require => File['bioperl-download'],
}
A problem: it gives me an error telling me that the source cannot be http://. I see in the docs that the file resource does indeed allow http:// sources. Maybe I'm using an older version of Puppet?
I want to try out the puppet-archive module, but I'm not sure how to make it a required dependency, that is, how to make sure it is installed before my module runs. Do I need my module to download it from GitHub and save it to the modules directory, or is there a way to let Puppet install it automatically? I added it as a dependency in the metadata.json file, but that doesn't install it. I know I could just have my module download the package, but I was wondering what the best practice is.
The initial problem you describe is acceptance testing. Verifying that the Puppet resources and code you wrote actually resulted in the desired end state is normally accomplished with Serverspec: http://serverspec.org/. For example, you can write a Puppet module to deploy an application, but you only know that Puppet did what you told it to, not that the application actually deployed successfully. Note that Serverspec is also what people generally use to solve this problem for Ansible and Chef.
You can write a Serverspec test similar to the following to help test your module's end state:
describe file('/usr/local/lib/bioperl.tar.gz') do
  it { expect(subject).to be_file }
end

describe file('/usr/profile.d/env_file') do
  it { expect(subject).to be_file }
  its(:content) { is_expected.to match(/env stuff/) }
end
However, your problem also seems to deal with debugging why your acceptance tests failed. For that, you need unit testing. This is normally solved with RSpec-Puppet: http://rspec-puppet.com/. I would show you how to write some tests for your situation, but I don't think you should be writing your Puppet module the way that you did, so it would render the unit tests irrelevant.
Instead, consider using a file resource with the source attribute and an HTTP URI to grab the tarball, rather than an exec with wget: https://docs.puppet.com/puppet/latest/type.html#file-attribute-source. Also, you might want to consider using the Puppet archive module to assist you: https://forge.puppet.com/puppet/archive.
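A rough sketch of the archive-module approach; the paths and the BioPerl release name here are illustrative, not taken from the gist:
include archive

archive { '/usr/local/lib/bioperl.tar.gz':
  ensure       => present,
  # hypothetical release; substitute the version the module actually needs
  source       => 'http://cpan.metacpan.org/authors/id/C/CJ/CJFIELDS/BioPerl-1.6.924.tar.gz',
  extract      => true,
  extract_path => '/usr/local/lib',
  creates      => '/usr/local/lib/BioPerl-1.6.924', # skip when already extracted
  cleanup      => true, # remove the tarball after extraction
}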
If you have questions on how to use these tools to provide unit and acceptance testing, or on how to refactor your module, then don't hesitate to write follow-up questions on StackOverflow and we can help you.

puppet: Could not back up <file>: Got passed new contents for sum

I have a question I'm hoping someone might have an answer to. Essentially, I'm trying to ensure I'm always using a fixed, slightly older version of phpunit, which I've placed in my module's file resources.
The manifest:
file { '/usr/bin/phpunit':
  ensure => file,
  owner  => 'root',
  group  => 'root',
  mode   => '0755',
  source => 'puppet:///modules/php/phpunit',
}
Preparation: I download the current ('wrong') version of phpunit and place it in /usr/bin.
So the first Puppet run succeeds:
Notice: Compiled catalog for <hostname> in environment production in 3.06 seconds
Notice: /Stage[main]/Php/File[/usr/bin/phpunit]/content: content changed '{md5}9f61f732829f4f9e3d31e56613f1a93a' to '{md5}38789acbf53196e20e9b89e065cbed94'
Notice: /Stage[main]/Httpd/Service[httpd]: Triggered 'refresh' from 1 events
Notice: Finished catalog run in 15.86 seconds
Then I download the current (still 'wrong') version of phpunit and place it in /usr/bin again.
This time the puppet run fails.
Notice: Compiled catalog for <hostname> in environment production in 2.96 seconds
Error: Could not back up /usr/bin/phpunit: Got passed new contents for sum {md5}9f61f732829f4f9e3d31e56613f1a93a
Error: Could not back up /usr/bin/phpunit: Got passed new contents for sum {md5}9f61f732829f4f9e3d31e56613f1a93a
Error: /Stage[main]/Php/File[/usr/bin/phpunit]/content: change from {md5}9f61f732829f4f9e3d31e56613f1a93a to {md5}38789acbf53196e20e9b89e065cbed94 failed: Could not back up /usr/bin/phpunit: Got passed new contents for sum {md5}9f61f732829f4f9e3d31e56613f1a93a
What gives? If I delete the file ( /var/lib/puppet/clientbucket/9/f/6/1/f/7/3/2/9f61f732829f4f9e3d31e56613f1a93a/ ) from my filebucket it will work again... for the next run, but not the one after that.
What am I doing wrong?
I'd appreciate any input and thanks in advance.
I've been having this error as well, and I solved it with a combination of two previous answers.
First I had to delete /var/lib/puppet/clientbucket on the client node by running:
sudo rm -r /var/lib/puppet/clientbucket
Doing only this will let it run once more.
Then I had to set backup => false to stop it recreating the file; skipping either step failed to solve it for me. The accepted answer is incorrect in saying there is
"no solution other than upgrading".
I was able to fix the same problem by removing /var/lib/puppet/clientbucket on the client node.
The node had been running out of disk space, so Puppet had probably stored empty files there incorrectly.
As a workaround, you can set backup => false in the file resource. This is a little unsafe, of course, since the replaced file is no longer backed up.
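Applied to the file resource from the question, that looks something like this (a sketch; only the backup line is new):
file { '/usr/bin/phpunit':
  ensure => file,
  backup => false, # skip the filebucket and its broken checksum path
  owner  => 'root',
  group  => 'root',
  mode   => '0755',
  source => 'puppet:///modules/php/phpunit',
}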
This has no solution other than to upgrade, since there is a bug in certain versions of Puppet where files containing a mix of UTF-8 and binary content are handled incorrectly, which results in this error message:
https://tickets.puppetlabs.com/browse/PUP-1038
The ridiculously overcomplicated workaround I used is to put a .tar file in the file resource and have it notify an exec that untars it and places the actual executable in the correct directory, making sure the latter's timestamp is newer than the former's.
It's far from ideal, but it works in cases like mine, where upgrading Puppet to the most current version isn't an attractive option.

Compile less files in node.js project on Windows Azure

I have a node.js project that compiles LESS files to CSS when I start the app. I do this by modifying the start script in package.json like so:
{
  // omitted for brevity
  "scripts": {
    "start": "lessc public/stylesheets/styles.less > public/stylesheets/styles.css; node app.js"
  }
}
This works nicely locally, but not at all on my Windows Azure instance. Either because less needs to be installed globally on the machine for this to work, or because Azure doesn't run npm start. Or both. Either way, I need another solution!
I thought custom deployment was the answer (I'm using git remote deployment), and I tried modifying the deploy.cmd to include
call "lessc public/stylesheets/styles.less > public/stylesheets/styles.css;"
No joy. I even tried
call "%SITE_ROOT%/node_modules/less/bin/lessc %SITE_ROOT%/public/stylesheets/styles.less > %SITE_ROOT%/public/stylesheets/styles.css;
Am I coming at this the wrong way? How can I keep the compiled css files out of my source control and compile them on the server after deployment to Azure?
Thanks!
OK, I finally have this going, I think.
For some reason, even though the physical file is on disk (I can see it with my FTP client), Azure is not letting me run the lessc in the \node_modules\less\bin folder, but it does let me run the version in the \node_modules\.bin folder.
In the end, I added the following lines to my deploy.cmd file, and it worked!
IF NOT DEFINED LESS_COMPILER (
  SET LESS_COMPILER=%DEPLOYMENT_TARGET%\node_modules\.bin\lessc
)
call %LESS_COMPILER% %DEPLOYMENT_TARGET%\public\stylesheets\styles.less > %DEPLOYMENT_TARGET%\public\stylesheets\styles.css
