Puppet agent re-copying same file after every run

I'm writing a Puppet module that isn't working as expected.
Ultimately, I want to ensure that an application is installed and running on Windows servers. Puppet copies SomeApp.exe to a non-temporary location on the server's C: drive. If the app needs installing, Puppet can install it using that exe file. Pretty straightforward.
It works, except that every time the Puppet agent runs, it re-copies SomeApp.exe, resulting in a corrective action. I'm puzzled by this behavior since SomeApp.exe is already present.
Here is the code:
file { 'SomeApp.exe':
  path   => 'C:\Post\SomeApp.exe',
  ensure => 'present',
  source => 'puppet:///modules/app_test/SomeApp.exe',
}

service { 'SomeApp':
  name   => 'SomeApp',
  ensure => 'running',
  enable => true,
}

package { 'SomeApp.exe':
  ensure   => 'installed',
  provider => 'windows',
  source   => 'C:\Post\SomeApp.exe',
}
It all works, except that it insists on re-copying SomeApp.exe every time. The original SomeApp.exe has not changed or been deleted.
What am I missing here?
Update: It looks like it's not actually re-copying the binary, but it's still reporting a corrective action:
Notice: /Stage[main]/app_test/Package[SomeApp.exe]/ensure: created (corrective)
Thank you!

Thanks, all! Turns out you were all correct. In Windows, the name of the executable is not necessarily the name of the package. I'm not a Windows person, so I assumed that "SomeApp.exe" was the name of the package when, in fact, the name is "Some App".
It's working now, thank you!
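For anyone hitting the same thing: the title of a package resource with the windows provider has to match the display name Windows records for the installed program (what appears in Programs and Features), not the installer's file name. A sketch of the corrected resource, assuming "Some App" is that display name:

package { 'Some App':
  ensure   => 'installed',
  provider => 'windows',
  source   => 'C:\Post\SomeApp.exe',
}

Once the title matches, Puppet can see the package is already installed and stops reporting the corrective action.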

ssh-key metadata doesn't work after image capture/boot on centos 8

I have a program that creates instances from upstream CentOS images, using the ssh-keys metadata to log in. This works, so long as I'm booting from an upstream image, like centos-cloud/global/images/centos-8-v20210217:
metadata: {
  "items": [
    {
      "key": "block-project-ssh-keys",
      "value": "true"
    },
    {
      "key": "ssh-keys",
      "value": "centos:" + ssh_key + " centos\n"
    },
  ]
},
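(For reference, GCE expects each ssh-keys entry in USERNAME:KEY form, so the concatenation above is assumed to render to something like centos:ssh-rsa AAAAB3Nza... centos, with the key body elided here.)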
The problem comes when I shut down this instance, capture an image, and start a new instance with new keys in the ssh-keys metadata. The new keys do not seem to get written at all, and furthermore, my attempts to use the in-browser SSH (which would create a user with my name) do not work either (error 15). That feature has also worked for me in the past when booting from the upstream image.
The only way I've been able to get in is with the old keys. Other than that, the instance is normal.
The problem persists even as I manipulate the keys in the browser, and they are accepted by https://console.cloud.google.com/
I have tried a few things to rectify this, including truncating the /home/centos/.ssh/authorized_keys file, removing the /home/centos/.ssh directory, and even removing the whole user with userdel --remove --selinux-user --force centos, under the theory that attempting to reuse the same user might confuse some Google software.
I also took a look at the logs of the google_osconfig_agent service: nada.
Am I missing a trick somewhere? At least on Azure, one is theoretically obliged to "generalize" an instance (a tool is provided to do that; on Linux it mostly deletes the user and scrubs keys). Amazon has no equivalent; it just runs an arbitrary short shell script of your choice, and it's up to you to get whatever you want done there.
Thanks.
This was caused by this image build using syslog-ng. It so happens that the google-compute-engine RPM takes a hard dependency on rsyslog, so removing rsyslog in favor of syslog-ng drags the guest tooling (including the agent that writes SSH keys from metadata) out with it and likewise breaks a bunch of other stuff.
I don't have a solution for continuing to use syslog-ng quite yet; I will probably take the package apart to see why it has a hard dependency on rsyslog and whether that dependency can be safely ignored.
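To confirm the conflict on an affected host, the dependency can be inspected with rpm directly; a quick check along these lines (output will vary by image and package version):

rpm -q --requires google-compute-engine | grep -i syslog
rpm -q --whatrequires rsyslog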
This has been filed here: https://github.com/GoogleCloudPlatform/compute-image-packages/issues/897
I added a more thorough writeup here: https://github.com/GoogleCloudPlatform/guest-configs/issues/20

Electron Node.js node localstorage osx mkdir permission denied

I am working with Electron and Node.js. We have developed an application that works fine on Windows, and as a requirement we had to package it for macOS. I packaged the application using electron-packager; the packaging process completes and the package is generated. Double-clicking it throws a permission-denied error for mkdir, as I am using node-localstorage to maintain some settings on the user's local machine. Apparently macOS doesn't allow node-localstorage to create its folder in the root of the application. Any help in this matter would be great. Thanks
First off, is the code in question in the main process or in a renderer process? If it is the latter, you don't need node-localstorage at all, because you can use the renderer's native LocalStorage. If you are in the main process, then you need to provide your own storage strategy, so node-localstorage is a viable option.
In any case, you need to carefully consider where to store the data. For starters, let's look at where Electron's renderer processes store their LocalStorage data: this differs based on the OS, but you can get and set the paths using the app module. The path in question is userData, which on OS X defaults to ~/Library/Application Support/<App Name>. Electron uses that folder to persist cookies, caches, LocalStorage etc., so I would suggest using it as well. (Otherwise, refer to the XDG defaults for sensible locations.)
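A minimal sketch of that approach in the main process, assuming node-localstorage is installed (the 'storage' subfolder name is arbitrary):

const { app } = require('electron');
const { LocalStorage } = require('node-localstorage');
const path = require('path');

app.whenReady().then(() => {
  // getPath('userData') resolves to e.g. ~/Library/Application Support/<App Name> on macOS
  const storagePath = path.join(app.getPath('userData'), 'storage');
  const localStorage = new LocalStorage(storagePath);
  localStorage.setItem('someSetting', 'someValue');
});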
What your example above was trying to do is store your errorLogDb in the current working directory, which can depend on your OS, where your app is installed, how you executed it, etc.
Finally, it's a good idea to differentiate between your production app and your app during development and testing, because you might not want to use the same storage folders for every environment. In any case, just writing to './errorLogDb' is likely to cause lots of headaches, so I'd be thankful for the permission-denied error.
This strategy worked for me:

const { LocalStorage } = require('node-localstorage');
let ls;

// mb is the app's menubar/window wrapper, created elsewhere
mb.on('ready', () => {
  // keep prefs under the per-user userData directory, not the app root
  let prefsPath = mb.app.getPath('userData') + '/prefs';
  ls = new LocalStorage(prefsPath);
  loadPrefs();
});

mb.on('after-create-window', () => { /* ls... */ });

exports.togglePref = () => { /* ls... */ };

Could not retrieve information from environment production source(s) file:///

I've got a puppet class defined like this:
class etchostfile
(
  $hostfile
)
{
  file { $hostfile:
    ensure => file,
    source => "file:///var/www/cobbler/pub/hosts-${hostfile}.txt",
    path   => '/root/hosts',
  }
}
Then I've got a node defined:
node 'hostname.fqdn.com'
{
  class { 'etchostfile':
    hostfile => foo,
  }
}
I want it to take the file /var/www/cobbler/pub/hosts-foo.txt and install it to /root/hosts. But I'm getting this error:
err: /Stage[main]/Etchostfile/File[foo]: Could not evaluate: Could not
retrieve information from environment production source(s)
file:///var/www/cobbler/pub/hosts-foo.txt
The file exists, is readable, and every directory leading to it is at least r-x.
I saw a number of explanations of this error for an incorrect puppet:/// source, but I'm using a file:/// source. I also tried disabling SELinux on both agent and master. No luck.
It worked correctly on my test host, so I presumed a firewall issue. But the agent can reach the master on port 8140, I already have a signed certificate, and I appear to be getting a catalog, so I don't understand why I can't get a file too.
Looks like you are trying to source the file from your puppet master? In that case, you need to use a puppet:// URI, not a file:// URI.
Also, ensure that the file server setup on your master is working:
https://docs.puppetlabs.com/guides/file_serving.html#file-server-configuration
[EDIT]
From the above linked doc first paragraph:
If a file resource declaration contains a puppet: URI in its source
attribute, nodes will retrieve that file from the master’s file server
There is also more documentation on the file type's source attribute:
https://docs.puppetlabs.com/references/latest/type.html#file-attribute-source
If you are trying to reference a local file (local to the node where the agent is running) you can remove the protocol part and just use:
file { $hostfile:
  ensure => file,
  source => "/var/www/cobbler/pub/hosts-${hostfile}.txt",
  path   => '/root/hosts',
}
If you are trying to access a file on the puppet master, you need to use the puppet:/// protocol (not file://). This brings some additional restrictions: usually you do not configure puppet file serving to expose arbitrary files on the master (which would be a security issue), but only files that are part of puppet modules.
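For example, if the hosts files were moved into the module's files/ directory (the module name my_module below is a placeholder), the resource would look something like this:

file { $hostfile:
  ensure => file,
  source => "puppet:///modules/my_module/hosts-${hostfile}.txt",
  path   => '/root/hosts',
}

That serves my_module/files/hosts-foo.txt through the master's file server without any extra fileserver configuration.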
Or you could use a template with no logic instead of the file:
file { $hostfile:
  ensure  => file,
  content => template("my_module/hosts-${hostfile}.txt.erb"),
  path    => '/root/hosts',
}
It is counterintuitive, but using templates with no logic tends to perform better than using Puppet for file serving.

How to check if files exist on different drive with Nodejs

I am working on a Node.js powered system that runs within a local network, and I need to check whether files exist on a different local drive of the computer the Node.js app runs on.
I have tried using the fs.exists function, but that doesn't work.
Is this possible? I am guessing there are security risks involved, but because the system runs 100% on a local network, is there any workaround to achieve this?
The reason I need to check that the files exist is that the file name holds the version number, and I need to get the latest version (highest number).
This is what I tried:
// the example looks for example#1.wav in the V:\public folder
var filename = "example";
var versionCount = 1;
if (fs.existsSync("V:\public\"+filename+"#"+versionCount+".wav")) {
    console.log("V:\public\"+filename+"#"+versionCount+".wav Found!");
} else {
    console.log("V:\public\"+filename+"#"+versionCount+".wav does not exists");
}
I am running Nodejs on Windows.
Any suggestions would be greatly appreciated! TIA!
Posting an answer in case anyone runs into the same problem in the future.
I resolved this problem by using forward slashes (/) instead of backslashes (\). In a JavaScript string literal a backslash starts an escape sequence, so a path written with single backslashes does not contain the characters you think it does.
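A minimal corrected version of the check from the question (escaping the backslashes, e.g. "V:\\public\\", would work as well):

const fs = require('fs');

const filename = "example";
const versionCount = 1;
// forward slashes are valid in Windows paths and avoid escape-sequence surprises
const filePath = "V:/public/" + filename + "#" + versionCount + ".wav";

if (fs.existsSync(filePath)) {
    console.log(filePath + " Found!");
} else {
    console.log(filePath + " does not exist");
}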

Update deployment via linux script in weblogic

What is the script to update a deployment in Linux (from the GUI, we can do this update by unlock & save changes)? Is it possible to do this? If not, what is the script to redeploy?
As Kevin pointed out, WLST is the way to go. You should probably craft a script (named wlDeploy.py, for instance), with content like follows (import clauses were omitted for the sake of simplicity):
current_app_name = '[your current deployed app name]'
new_app_name = '[your new app name]'
target_name = '[WL managed server name (or AdminServer)]'

connect([username], [pwd], 't3://[admin server hostname/IP address]:[PORT]')

stopApplication(current_app_name)
undeploy(current_app_name, timeout=60000)

war_path = '[path to war file]'
deploy(appName=new_app_name, path=war_path, targets=target_name)
And call it via something like:
./wlst.sh wlDeploy.py
Of course you can add parameters to your script, and a lot of logic which is relevant to your deployment. This is entirely up to you. The example above, though, should help you getting started.
In WebLogic you can use WLST to perform administrative tasks like managing deployments. If you google "weblogic wlst", you will find tons of information. WLST scripts are written in Python (Jython).
Assuming you are using WebLogic 10, you can also "Record" your actions in the admin console. This saves the actions into a Python script which you can "replay" (execute) later.
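If the goal is just to refresh an application in place rather than remove it and deploy a new one, WLST also has a redeploy command; a minimal sketch, using the same placeholder style as the script above:

connect('[username]', '[pwd]', 't3://[admin server hostname/IP address]:[PORT]')
redeploy('[your deployed app name]')
disconnect()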
