puppetdb facts endpoint not showing custom facts

I am trying to extract a report using the PuppetDB facts endpoint. I want to know how many servers have a specific setting enabled, which is calculated by a custom fact that returns true or false. The custom fact is added in a module, and the module is available on the nodes.
I can see the custom fact is available on the nodes and returning the expected results, but I am unable to collect the output via a PuppetDB API query. Do I need to do anything specific for a custom fact to be retrievable via the PuppetDB facts endpoint?
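For reference, custom facts are stored alongside core facts once the agents have submitted them, so they are normally queryable from the same endpoint. A minimal sketch of the kind of query involved, assuming PuppetDB's v4 API on localhost:8080 and a hypothetical fact name my_custom_fact:

```typescript
// Count nodes whose custom fact is true, via PuppetDB's v4 facts endpoint.
// Assumes PuppetDB on localhost:8080; "my_custom_fact" is a hypothetical name.
async function countNodesWithFact(): Promise<number> {
  const query = JSON.stringify([
    "and",
    ["=", "name", "my_custom_fact"],
    ["=", "value", true],
  ]);
  const res = await fetch(
    `http://localhost:8080/pdb/query/v4/facts?query=${encodeURIComponent(query)}`
  );
  // Each entry looks like { certname, environment, name, value }.
  const facts: Array<{ certname: string }> = await res.json();
  return facts.length;
}

countNodesWithFact().then((n) =>
  console.log(`${n} nodes report my_custom_fact = true`)
);
```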

Related

How to disable pagination for a single request (request all items under resource)?

I have an Eve instance running and pagination enabled. In some cases I want to request all items under a resource. This is done together with a projection to get a full list of ids.
This is very similar to another question, but here I am concerned with external requests rather than internal calls.
I have tried setting max_results to 0 and -1 but both yield a single result. Is there a way to request all items without disabling pagination globally?
Edit: My current solution to circumvent this is a custom Flask endpoint that just accesses the database directly. The issue with this approach is that I would like to add various projections and make use of Eve's database optimizations, all of which I would have to reimplement manually.
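Note that max_results is capped by Eve's PAGINATION_LIMIT setting (50 by default), which is likely why 0 and -1 don't behave as hoped. One client-side workaround, sketched below under the assumption of an Eve instance on localhost:5000 with a hypothetical people resource, is to keep pagination enabled and walk the _links.next references Eve includes in each response, collecting the projected ids as you go:

```typescript
// Collect every item under an Eve resource by following the pagination
// links Eve returns in each response. The resource name and port are
// hypothetical; the projection limits each document to its _id field.
interface EvePage {
  _items: Array<{ _id: string }>;
  _links: { next?: { href: string } };
}

async function fetchAllIds(resource: string): Promise<string[]> {
  const base = "http://localhost:5000";
  const projection = encodeURIComponent('{"_id": 1}');
  let url: string | undefined =
    `${base}/${resource}?projection=${projection}&max_results=50`;
  const ids: string[] = [];

  while (url) {
    const page: EvePage = await (await fetch(url)).json();
    ids.push(...page._items.map((item) => item._id));
    // Eve's HATEOAS hrefs are relative to the API root.
    url = page._links.next ? `${base}/${page._links.next.href}` : undefined;
  }
  return ids;
}
```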

What is the best method for Backend and Frontend communication (incl. SessionID)? [closed]

I have a small project running and I am wondering how best to structure it. I host on two servers: one for the backend and one for the frontend. The backend server runs Node.js with MongoDB as the database, and I already have my own small API to communicate with it. On the frontend server I am using React. My question: what is the best way to create a session ID on the frontend and send it to the backend server via the frontend server?
For example, SessionID3 should see a bike on their page and SessionID2 a car (both pieces of information are stored in the database on the backend server).
I look forward to any replies, thanks!
Most Jamstack setups have an API server that serves content as raw JSON (like GitHub's developer API). JavaScript (and therefore Node.js) has built-in support for converting JSON text to and from objects via JSON.parse and JSON.stringify; you can read about these on MDN.
[Diagram of a Jamstack setup with React & Vue]
Since it is actually the client sending the request and doing most of the work, not the frontend server, you can use universal-cookie to set or get a persistent session ID, serialize it to JSON, and send it with the POST request to the backend API. You could also generate the ID with Math.random() (which takes no arguments and returns a float in [0, 1)) if you just want a random string of digits to be sent.
Axios is asynchronous, so you want to use async/await or .then() with it so that it works correctly. You can view the official docs at https://axios-http.com/docs/intro. Axios resolves to a response object whose data property holds the already-parsed body and whose headers property holds the returned headers.
Another alternative, for a session ID scoped to the browser session (erased after the browser is closed), is a module called react-session-hook.
In my opinion, for what you are trying to do, adding another server that relays the message is not optimal: it adds another sequential connection that makes requests take twice as long as they otherwise would. In other words, it is not necessary. The only time you relay through your own server is when the keys used to access the API are sensitive and must be kept hidden.
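A minimal sketch of the cookie-plus-POST flow described above; the backend URL, the /api/session endpoint, and the cookie name are hypothetical:

```typescript
import axios from "axios";
import Cookies from "universal-cookie";

const cookies = new Cookies();

// Reuse an existing session ID from the cookie, or mint a new one.
// crypto.randomUUID() is preferable to Math.random() for uniqueness.
let sessionId: string = cookies.get("sessionId");
if (!sessionId) {
  sessionId = crypto.randomUUID();
  cookies.set("sessionId", sessionId, { path: "/" });
}

// POST the ID straight to the backend API and await the parsed response.
async function loadItem() {
  const res = await axios.post("https://backend.example.com/api/session", {
    sessionId,
  });
  return res.data; // axios has already parsed the JSON body
}
```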

Creating custom metric descriptors continually results in HTTP 500

I think I've broken my project's custom metrics.
Earlier yesterday, I was playing around with the Cloud Monitoring API: I created a metric descriptor and added some time series data to it using the create_time_series call from the latest python3 cloud monitoring library. Satisfied with the results, I deleted the descriptor using the library, which threw an error because I had passed in the descriptor's name incorrectly. I called it again with the correct name and it succeeded, but now every call to create_time_series on this project fails with an HTTP 500. The error message simply says to "Try again in a few seconds," which I have, to no avail.
I have verified that I can create time series data on other projects of mine, and it works as expected. The API Explorer in Google's API documentation for metrics also gets an HTTP 500 back on calls to this project, but works fine on others. cURLing the requests yields the same results.
My suspicion is that I erroneously deleted the custom.googleapis.com endpoint in its entirety, and that is why I am unable to create new metric descriptors/time series data. Is there a way to view the state of this endpoint, or recreate it?
It is not possible to delete the time series data stored in your Google Cloud project, but deleting the metric descriptor renders the data inaccessible. The data is then removed when it expires under the data retention policy.
To delete your custom metric descriptor, call the metricDescriptors.delete method. You can follow the steps in this guide.
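For reference, the deletion call looks roughly like this, sketched with the Node client for consistency with the other examples on this page (the Python call is analogous; the project and metric type are hypothetical):

```typescript
import * as monitoring from "@google-cloud/monitoring";

const client = new monitoring.MetricServiceClient();

async function main(): Promise<void> {
  // The descriptor's resource name embeds the full custom metric type.
  await client.deleteMetricDescriptor({
    name: "projects/my-project/metricDescriptors/custom.googleapis.com/my_metric",
  });
}

main().catch(console.error);
```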
You are calling CreateMetricDescriptor every time you call CreateTimeSeries. Some or all of those calls specify no metric labels, and they therefore overwrite the metric descriptor with one that has no labels. The calls to CreateTimeSeries, on the other hand, do specify metric labels, causing the labels to be auto-added to the descriptor again.
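A sketch of that separation, again with the Node client (the Python library is analogous; project, metric type, and label are hypothetical): declare the descriptor once with its labels, then only ever write points afterwards.

```typescript
import * as monitoring from "@google-cloud/monitoring";

const client = new monitoring.MetricServiceClient();
const projectId = "my-project"; // hypothetical project ID
const name = client.projectPath(projectId);
const metricType = "custom.googleapis.com/my_metric"; // hypothetical metric

async function main(): Promise<void> {
  // Create the descriptor once, with its labels declared up front...
  await client.createMetricDescriptor({
    name,
    metricDescriptor: {
      type: metricType,
      metricKind: "GAUGE",
      valueType: "DOUBLE",
      labels: [{ key: "environment", valueType: "STRING" }],
    },
  });

  // ...then only call createTimeSeries when writing points, so the
  // descriptor (and its labels) is never overwritten.
  await client.createTimeSeries({
    name,
    timeSeries: [
      {
        metric: { type: metricType, labels: { environment: "prod" } },
        resource: { type: "global", labels: { project_id: projectId } },
        points: [
          {
            interval: { endTime: { seconds: Math.floor(Date.now() / 1000) } },
            value: { doubleValue: 3.14 },
          },
        ],
      },
    ],
  });
}

main().catch(console.error);
```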
Custom metric names typically begin with custom.googleapis.com/, which differs from the built-in metrics.
When you create a custom metric, you define a string identifier that represents the metric type. This string must be unique among the custom metrics in your Google Cloud project and it must use a prefix that marks the metric as a user-defined metric. For Monitoring, the allowable prefixes are custom.googleapis.com/ and external.googleapis.com/prometheus. The prefix is followed by a name that describes what you are collecting. For details on the recommended way to name a custom metric, see Naming conventions.

How to force Node.js to use a predefined IP for a hostname?

We are developing a command line tool that relies on a third-party Cloud Object Storage SDK, so we cannot modify the request address directly (unless we remade our own SDK based on the official one, which is wrong on so many levels). We noticed that the IPs resolved by our clients can often be problematic due to their geolocation or DNS provider. So we want to create a "custom hosts" file, so to speak, and embed it into the command line tool (an npm global module). We looked through the DNS documentation and couldn't find anything relevant: https://nodejs.org/api/dns.html
So basically, what we are trying to do: one person may resolve google.com to 8.7.198.46 and another person to 74.125.200.139; instead of letting DNS do the job, we want to force the hostname google.com to resolve to a specific IP (say 88.88.88.88) for everyone. How can I do this with Node.js (or perhaps with the help of an additional shell script)?
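One common approach is to patch dns.lookup before the SDK is loaded, since Node's http/https agents resolve hostnames through it. A simplified sketch (it ignores the { all: true } lookup variant and IPv6; the override table and addresses are illustrative):

```typescript
import dns from "node:dns";

// Hypothetical embedded "custom hosts" table shipped with the CLI tool.
const hostOverrides: Record<string, string> = {
  "google.com": "88.88.88.88",
};

const originalLookup = dns.lookup;

// Most Node HTTP clients (and SDKs built on them) resolve names via
// dns.lookup, so patching it pins those hostnames to fixed IPs.
(dns as any).lookup = (hostname: string, ...args: any[]) => {
  const callback = args[args.length - 1];
  const ip = hostOverrides[hostname];
  if (ip && typeof callback === "function") {
    // Callback signature is (err, address, family); 4 marks IPv4.
    return process.nextTick(() => callback(null, ip, 4));
  }
  return (originalLookup as any).call(dns, hostname, ...args);
};
```

An alternative that avoids the global patch is to pass a custom lookup function in the socket/agent options of the underlying HTTP client, if the SDK exposes them.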

Liferay model listener ordering

Following is my requirement:
Whenever a site is created, we add some custom attributes to it with the help of a GroupListener.
So assume you are creating a "Liferay Test" site; it will then have a fixed custom attribute "XYZ" whose value is set in the GroupListener's onAfterCreate method.
I can see this value under custom fields in the site settings.
Based on these values, we then create groups in another system (outside Liferay, using web services).
So far so good.
Whenever we delete the site, we need to remove the equivalent groups from the other system via a web service.
But while deleting the site, we are not able to retrieve the custom attributes in the GroupListener.
On further debugging, by adding an expando listener, I observed that the expando listeners are called first, and only then the delete method of GroupLocalService/GroupListener.
Hence we are not able to delete the groups present in the other system.
So I was wondering whether an ordering can be defined for listeners.
Note: since we were not getting the custom attributes in the listeners, we overrode GroupLocalServiceImpl; with this we do get the custom attributes in the delete method in our local environment, but not in our stage environment, which uses clustering.
You shouldn't use ModelListeners for this kind of change; rather, create ServiceWrappers, e.g. wrap the interesting methods in GroupLocalService (for creation as well as deletion).
This will also enable you to react to failures when creating records in your external system, etc.
