Uninitialized Constant on custom Puppet Function - puppet

I've got a function that I'm trying to run on my puppetmaster on each client run. It runs just fine on the puppetmaster itself, but it causes the agent runs to fail on the nodes because of the following error:
Error: Could not retrieve catalog from remote server: Error 400 on SERVER: uninitialized constant Puppet::Parser::Functions::<my_module>
I'm not really sure why. I enabled debug logging on the master via config.ru, but I see the same error in the logs with no more useful messages.
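(For anyone hitting the same thing: on a Passenger/Rack master, that debug logging is typically enabled by adding the flag near the top of config.ru, roughly like this:)
# excerpt from the master's config.ru under Passenger/Rack: pushing --debug
# onto ARGV before Puppet starts turns on debug logging for compilations
$0 = "master"
ARGV << "--debug"
ARGV << "--rack"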
What other steps can I take to debug this?
Update:
Adding some extra details.
Puppet Community, with Foreman connected Puppetmaster running on Apache2 with Passenger / Rack
Both client and master are running Puppet 3.7.5
Both client and master are Ubuntu 14.04
Both client and master are using Ruby 1.9.3p484 (2013-11-22 revision 43786) [x86_64-linux]
Pluginsync is enabled
The custom function works fine on the puppetmaster when run as part of the puppetmaster's own manifest (it's a client of itself) or via puppet apply directly on the server.
The function is present on the clients, and when I update it for debugging purposes I do see the updated file appear on the client side.
I cannot paste the function here unfortunately because it is proprietary code. It does rely on the aws-sdk, but I have verified that the Ruby gem is present on both the client and the master, at the same version in both places. I've even tried surrounding the entire function with:
begin
  # ... entire (proprietary) function body here ...
rescue LoadError
end
and have the same result.

This is embarrassingly stupid, but it turns out I had somehow never noticed that I hadn't actually included this line in my function:
require 'aws-sdk'
And so the error I was receiving:
uninitialized constant Puppet::Parser::Functions::Aws
was actually referring to the missing AWS SDK, and not to a problem with the Puppet module itself (which was, confusingly, also named aws), which is how I had been interpreting it. Basically, I banged my head against the wall for several days over a painfully silly mistake. Apologies to all who tried to help me :)
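For reference, a minimal sketch of what the corrected skeleton looks like under the Puppet 3.x parser-function API; the function name my_aws_lookup and its body are placeholders, not the original proprietary code:
# lib/puppet/parser/functions/my_aws_lookup.rb -- hypothetical name and body
begin
  require 'aws-sdk'   # the line that was missing
rescue LoadError => e
  raise Puppet::Error, "the aws-sdk gem is not available: #{e.message}"
end

module Puppet::Parser::Functions
  newfunction(:my_aws_lookup, :type => :rvalue,
              :doc => "Example rvalue function that talks to AWS") do |args|
    # ... the proprietary lookup logic using the AWS SDK would go here ...
    args.first
  end
end
With the require missing, any reference to Aws inside the function body is resolved lexically against the Puppet::Parser::Functions module, which is exactly where the uninitialized constant Puppet::Parser::Functions::Aws message comes from.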

Related

Python 3.7 Windows 10 Service Not Working: Error starting service: The service did not respond to the start or control request in a timely fashion

I have written a simple Python 3.7 Windows service and installed it successfully. Now I am facing this error:
"Error starting service: The service did not respond to the start or control request in a timely fashion."
Please help me fix this error.
Thanks
One of the most common errors from Windows when starting your service is Error 1053: The service did not respond to the start or control request in a timely fashion. This can occur for multiple reasons, but there are a couple of things to check when it happens:
Make sure your service can actually stop: note that the main method has an infinite loop. A typical service template (see the sketch after this list) breaks out of that loop when the stop event occurs, but that only happens if you call win32event.WaitForSingleObject somewhere inside the loop and assign its return value to rc.
Make sure your service actually starts: same as the first point; if your service starts but never settles into the infinite loop, it will simply exit, terminating the service.
Check that your system and user PATH contain the necessary routes: the DLL path is extremely important for your Python service, as it is how the script interfaces with Windows to operate as a service. Likewise, if the service is unable to load Python, you are also out of luck. Check by typing echo %PATH% in both a regular console and a console running with Administrator privileges to make sure all of your paths have been loaded.
Give the service another restart: changes to your PATH variable may not kick in immediately; it's a Windows thing.
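The list above refers to a service template; since it is not included here, the following is a minimal pywin32 skeleton illustrating the stop-event / WaitForSingleObject pattern (class and service names are placeholders, and it assumes the pywin32 package is installed):
import socket

import servicemanager
import win32event
import win32service
import win32serviceutil


class ExampleService(win32serviceutil.ServiceFramework):
    # Placeholder names -- use your real service name here.
    _svc_name_ = "ExampleService"
    _svc_display_name_ = "Example Python Service"

    def __init__(self, args):
        win32serviceutil.ServiceFramework.__init__(self, args)
        # Event handle that SvcStop signals so the main loop can exit cleanly.
        self.hWaitStop = win32event.CreateEvent(None, 0, 0, None)
        socket.setdefaulttimeout(60)

    def SvcStop(self):
        self.ReportServiceStatus(win32service.SERVICE_STOP_PENDING)
        win32event.SetEvent(self.hWaitStop)

    def SvcDoRun(self):
        servicemanager.LogMsg(servicemanager.EVENTLOG_INFORMATION_TYPE,
                              servicemanager.PYS_SERVICE_STARTED,
                              (self._svc_name_, ''))
        self.main()

    def main(self):
        rc = None
        # Re-check the stop event every 5 seconds; without this
        # WaitForSingleObject call (and assigning its result to rc)
        # the service can never respond to a stop request.
        while rc != win32event.WAIT_OBJECT_0:
            # ... the actual work of the service goes here ...
            rc = win32event.WaitForSingleObject(self.hWaitStop, 5000)


if __name__ == '__main__':
    win32serviceutil.HandleCommandLine(ExampleService)
Install and start it with python your_service.py install followed by python your_service.py start; HandleCommandLine wires up those subcommands.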

docker pull from Artifactory results in "net/http: request canceled" inconsistently

We are running Artifactory 5.11.0 (we just updated to 6.0.2 today and haven't seen this since) in a Docker container, and when our automation executes a docker pull from Artifactory, 9 times out of 10 it is successful. Sometimes, even when running the docker pull from the machine hosting Artifactory, the pull fails with:
Pulling 'docker.{artifactory url}/staging:latest'...
Error response from daemon: Get http://docker.{artifactory url}/v2/staging/manifests/latest: Get http://docker.{artifactory url}:80/artifactory/api/docker/docker/v2/token?account=admin&scope=repository%3Astaging%3Apull&service=docker.{artifactory url}%3A80:
net/http: request canceled (Client.Timeout exceeded while awaiting
headers)
Like I said, most of the time this works perfectly, but that 1 time in 10 (probably less) we get the above error during our automated builds. I tried running the docker pull in a while loop overnight until it hit a failure, and there was no failure. I also ran ping overnight and no packets were lost.
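(The overnight reproduction attempt was nothing fancier than a shell loop along these lines, using the same registry host as in the error above:)
# keep pulling until a failure occurs; sleep briefly between attempts
while docker pull docker.{artifactory url}/staging:latest; do
  sleep 30
done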
OS: Debian 9 x64
Docker version 17.09.0-ce, build afdb6d4. The failure seems to happen more frequently with Docker version 18.03.1~ce-0~debian, but I have no direct evidence that the client is at fault.
Here is what JFrog provided me to try to resolve this issue. (Note: we were on an older version of Artifactory at the time and they did recommend that we update it to the latest as there were several updates that could help).
The RAM value -Xmx2g was the default value provided by Artifactory. We can increase it by going into the Docker container (docker exec -it artifactory bash) and editing $ARTIFACTORY_HOME/bin/artifactory.default (usually /opt/jfrog/artifactory/bin/artifactory.default), where the RAM value can be raised accordingly.
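For illustration, the memory setting in artifactory.default is a line roughly like the one below; the exact variable name and flags vary by Artifactory version, so treat this as a sketch rather than the canonical file contents:
# /opt/jfrog/artifactory/bin/artifactory.default (excerpt)
# raising -Xmx from the 2g default to give the JVM more headroom
export JAVA_OPTIONS="-server -Xms512m -Xmx4g -Xss256k -XX:+UseG1GC"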
We should also change the Access service's max thread count; we can do that by going to $ARTIFACTORY_HOME/tomcat/conf/server.xml and changing the connector to:
<Connector port="8040" sendReasonPhrase="true" maxThreads="200"/>
Also add below line in /var/opt/jfrog/artifactory/etc/artifactory.system.properties
artifactory.access.client.max.connections=200
To deal with heavy loads we need to append the line below to /var/opt/jfrog/artifactory/etc/db.properties:
pool.max.active=200
Also, they told me to make sure we were using the API key when authenticating the Docker client with Artifactory instead of the user/pass login, since the latter goes through our LDAP authentication and the former does not:
One thing to try would be to use an API Key instead of the plain text password, as using an API key will not reach out to the LDAP server.
We were already doing this, so this had no impact on the issue.
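(For clarity, using the API key here just means passing it as the password when the Docker client logs in; the registry hostname and username below are placeholders:)
# authenticate the Docker client with an Artifactory API key instead of the LDAP password
docker login docker.{artifactory url} --username admin --password <api-key>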
Also posted here: https://www.jfrog.com/jira/browse/RTFACT-17919?focusedCommentId=66063&page=com.atlassian.jira.plugin.system.issuetabpanels:comment-tabpanel#comment-66063
I hope this helps as it helped us.

GitHub sharing process failed

Before answering my question, please take into consideration that I'm new to GitHub and how it works. I'm having trouble sharing my project to GitHub. I'm receiving the errors:
The remote end hung up unexpectedly RPC failed; result=18, HTTP code = 200
The sharing process takes very long and stops with this error. The project gets created on GitHub with no code. Other forums about this issue talk about Git Bash; note that this is done through Android Studio.
As mentioned here, the relevant part is the 'result=18' portion: this is the error code coming from libcurl, the underlying library used for HTTP communication with Git. From the libcurl documentation, a result code of 18 means:
CURLE_PARTIAL_FILE (18)
A file transfer was shorter or larger than expected. This happens when the server first reports an expected transfer size, and then delivers data that doesn't match the previously given size.
You could try increasing the HTTP post buffer:
git config --global http.postBuffer 524288000
But first, make sure your local repo doesn't include large binaries.
To be sure, check your .gitignore.
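If you want to check whether large binaries have already been committed (a .gitignore only prevents new ones), one way, assuming a Unix-like shell or Git Bash, is to list the biggest objects in the history:
# list the 20 largest objects in the repository history, with their paths
git rev-list --objects --all \
  | git cat-file --batch-check='%(objecttype) %(objectname) %(objectsize) %(rest)' \
  | sort -k3 -n -r \
  | head -20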

Why can't I access Jenkins URLs from within a Jenkins groovy script?

I'm using the Jenkins Dynamic Parameter Plugin and the Jenkins SSH Credentials Plugin, and would like to use them together so that I can have Groovy code that auto-populates a choice parameter with the SSH systems I have configured (or, more likely, a subset of them), letting me pick which system I want to run a deployment to. I've found a URL that provides the list of SSH hosts and have this basic code already written:
def myURL="http://myJenkins/job/myJob/descriptorByName/org.jvnet.hudson.plugins.SSHBuilder/fillSiteNameItems"
def allText=new URL(myURL).getText()
I've verified the URL does return JSON with the list of connections when I hit it from anywhere outside of Jenkins (a REST client, wget, and even groovysh), but when I try to call it from inside the dynamic parameter Groovy code, I keep getting:
java.net.ConnectException: Connection refused
at java.net.AbstractPlainSocketImpl.doConnect(AbstractPlainSocketImpl.java:369)
at java.net.AbstractPlainSocketImpl.connectToAddress(AbstractPlainSocketImpl.java:230
...
So I'm wondering if this is some sort of threading issue (perhaps the thread running this code is the same thread that would respond to the HTTP request), but my knowledge of Jenkins at a programming level is somewhat limited. If anyone can point me to how to get what I'm after (maybe even in a simpler way), I'd really appreciate it.

Open MPI, contact information is unknown

I am working on Mac OSX and using bash as my shell. I have been working for the past few hours trying to get the simplest of code to run using Open MPI on multiple computers. After fiddling with configuring Open MPI, I believe I am on the verge of getting the thing to work. However, I have hit a dead end.
The code runs fine when I don't ask other computers over the network to run it (meaning I can run it with Open MPI on my own desktop), but when I supply a hostfile and ask a remote host to run the code, I get an error. I think I am connecting to the host fine otherwise; I can ssh to it and do whatever I want. It's only when I run the code that things break.
To produce the following error I run: mpirun -n 4 -hostfile /path/hostfile.txt ./mpi_hello_world. It then asks for the password on the host I am accessing; I enter it and then receive the following:
[MyComputer] [[62774,0],0] ORTE_ERROR_LOG: A message is attempting to be sent to
a process whose contact information is unknown in file /opt/local/var/macports/
build/_opt_mports_dports_science_openmpi/openmpi/work/openmpi-1.7.1/orte/mca/rml/
oob/rml_oob_send.c at line 362
[MyComputer] [[62774,0],0] attempted to send to [[62774,0],1]: tag 15
[MyComputer] [[62774,0],0] ORTE_ERROR_LOG: A message is attempting to be sent to
a process whose contact information is unknown in file /opt/local/var/macports/
build/_opt_mports_dports_science_openmpi/openmpi/work/openmpi-1.7.1/orte/mca/
grpcomm/base/grpcomm_base_xcast.c at line 166
Would anyone be able to give me an idea of what is going wrong here? Thanks for any insight you can offer.
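For reference, the file passed to -hostfile is just a plain-text list of hosts, optionally with slot counts; something like this (the hostnames and slot counts here are placeholders):
# /path/hostfile.txt -- one host per line, slots = processes allowed on that host
localhost slots=2
remote-mac.local slots=2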
