How to check that Foreman is collecting facts from all hosts - Puppet

I want to write an automation script to verify that Foreman is collecting facts from all nodes.
How can I ensure Foreman has facts from every node?

A fact is a key/value data pair that represents some aspect of node state, such as its IP address, uptime, operating system, or whether it's a virtual machine.
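For illustration, here is a hypothetical sample of facts, with made-up values, as Facter might report them:

```python
# Hypothetical sample of facts: key/value pairs describing node state.
facts = {
    "ipaddress": "192.168.1.10",
    "uptime_days": 3,
    "operatingsystem": "CentOS",
    "is_virtual": False,
}

for key, value in facts.items():
    print(f"{key} => {value}")
```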
1. Manual process:
a. Log in to the Foreman UI and click Monitor -> Facts
b. Run facter -p on each host
2. Automation:
I have written the script below to check the facts for each host:
#!/usr/bin/python3
import requests
import json

foreman_url = "https://foreman_ip/api/hosts"
username = "admin"
password = "changeme"

headers = {
    'Content-Type': 'application/json',
    'Accept': 'application/json',
}

def retrieve_host_facts():
    response = requests.get(foreman_url, headers=headers, verify=False,
                            auth=(username, password))
    hosts = json.loads(response.content)
    for s in hosts:
        host_name = s['host']['name']
        print(host_name)
        # check facts for each host (note the '/' between the base URL and the host name)
        url = foreman_url + '/' + host_name + '/facts'
        print(url)
        response = requests.get(url, headers=headers, verify=False,
                                auth=(username, password))
        respobj = json.loads(response.content)
        print(respobj['total'])  # display the total number of facts found

retrieve_host_facts()
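Note that the script above assumes the older API response shape. On newer Foreman versions (API v2), GET /api/hosts returns a paginated JSON object whose host list sits under a "results" key; a hedged sketch of extracting the names in that case, assuming that shape:

```python
def extract_host_names(page_json):
    """Pull host names out of a Foreman API v2 hosts page.

    v2 responses look like {"total": ..., "page": ..., "results": [{"name": ...}, ...]};
    this is a sketch assuming that shape.
    """
    return [host["name"] for host in page_json.get("results", [])]

# Example with a fabricated v2-style payload:
page = {"total": 2, "results": [{"name": "node1.puppet.com"},
                                {"name": "node2.puppet.com"}]}
print(extract_host_names(page))  # ['node1.puppet.com', 'node2.puppet.com']
```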

Related

Empty token with Python Requests, but multiple tokens seen in chrome dev tools

I'm trying to use requests to log in to a site, navigate to a page, and scrape some data. This question is about the first step (logging in).
I cannot fetch the token from the site:
import requests

URL = 'https://coderbyte.com/sl'
with requests.Session() as s:
    response = s.get(URL)
    print([response.cookies])
The result is empty:
[<RequestsCookieJar[]>]
This makes sense given the response I'm seeing in Chrome's dev tools. After I log in with my username and password, I see four tokens, three of them deleted, but one valid.
How can I fetch the valid token?
You can use the POST method on the URL you want in order to fetch the token (to pass the login first). For example:
import json
import requests

url = "url-goes-here"
url_login = "login-url-goes-here"

with requests.Session() as s:
    # get the link first
    s.get(url)
    payload = json.dumps({
        "email": "your-email",
        "password": "your-password"
    })
    headers = {
        'Content-Type': 'application/json'
    }
    response = s.post(url=url_login, data=payload, headers=headers)
    print(response.text)
Based on your question, I assume that if you only need a username and password to log in, you can use HTTPBasicAuth(), which is provided by the requests package.
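A minimal sketch of the HTTPBasicAuth approach mentioned above (the URL and credentials are placeholders):

```python
import requests
from requests.auth import HTTPBasicAuth

# Placeholder credentials; HTTPBasicAuth adds the
# "Authorization: Basic ..." header to any request it is attached to.
auth = HTTPBasicAuth("your-username", "your-password")

# Usage (hypothetical URL):
# response = requests.get("https://example.com/protected", auth=auth)
```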

How can we use a unix socket with python3 urllib or with requests

How can I do the following in Python 3 with urllib or with requests?
curl --unix-socket /var/run/docker.sock http://localhost/images/json
If this is possible, can somebody help me?
Here is an example using asyncio from the standard library.
Obviously, the main advantage is that it does not require any dependency.
It sends an HTTP request over the UNIX socket opened by Docker to retrieve the container list as JSON.
import asyncio
import json

async def get_containers():
    reader, writer = await asyncio.open_unix_connection("/var/run/docker.sock")
    query = (
        "GET /containers/json HTTP/1.0\r\n"
        "\r\n"
    )
    writer.write(query.encode('utf-8'))
    await writer.drain()
    writer.write_eof()
    # skip the response headers
    headers = True
    while headers:
        line = await reader.readline()
        if line == b"\r\n":
            headers = False
        elif not line:
            break
    containers = []
    if not headers:
        # read the whole body (the JSON may span several lines)
        data = await reader.read()
        containers = json.loads(data.decode("utf-8"))
    writer.close()
    await writer.wait_closed()
    return containers

c = asyncio.run(get_containers())
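If you'd rather stay synchronous, the same request can be made with the standard library's http.client by overriding how the connection is opened. This is a sketch; the Docker socket path is an assumption and the usage lines are commented out because they require a running daemon:

```python
import http.client
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTPConnection that talks over a UNIX domain socket instead of TCP."""

    def __init__(self, socket_path):
        super().__init__("localhost")  # the host is only used for the Host header
        self.socket_path = socket_path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.connect(self.socket_path)
        self.sock = sock

# Usage (requires a running Docker daemon):
# conn = UnixHTTPConnection("/var/run/docker.sock")
# conn.request("GET", "/images/json")
# print(conn.getresponse().read())
```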

How to create Wiki Subpages in Azure Devops thru Python?

I have 4 pandas dataframes, and I need to create 4 subpages in the Azure DevOps wiki, one from each dataframe: say, Sub1 from the first dataframe, Sub2 from the second, and so on. Each result should be rendered as a table.
Is it possible to create subpages through the API?
I have referenced the following docs, but I am unable to make sense of them. Any inputs will be helpful. Thanks.
https://github.com/microsoft/azure-devops-python-samples/blob/main/API%20Samples.ipynb
https://learn.microsoft.com/en-us/rest/api/azure/devops/wiki/pages/create%20or%20update?view=azure-devops-rest-6.0
To create a wiki subpage, you should use the Pages - Create Or Update API and specify the path as pagename/subpagename. Regarding how to use the API in Python, you could use the Azure DevOps Python API and refer to the sample below:
def create_or_update_page(self, parameters, project, wiki_identifier, path, version, comment=None, version_descriptor=None):
    """CreateOrUpdatePage.
    [Preview API] Creates or edits a wiki page.
    :param :class:`<WikiPageCreateOrUpdateParameters> <azure.devops.v6_0.wiki.models.WikiPageCreateOrUpdateParameters>` parameters: Wiki create or update operation parameters.
    :param str project: Project ID or project name
    :param str wiki_identifier: Wiki ID or wiki name.
    :param str path: Wiki page path.
    :param String version: Version of the page on which the change is to be made. Mandatory for `Edit` scenario. To be populated in the If-Match header of the request.
    :param str comment: Comment to be associated with the page operation.
    :param :class:`<GitVersionDescriptor> <azure.devops.v6_0.wiki.models.GitVersionDescriptor>` version_descriptor: GitVersionDescriptor for the page. (Optional in case of ProjectWiki).
    :rtype: :class:`<WikiPageResponse> <azure.devops.v6_0.wiki.models.WikiPageResponse>`
    """
    route_values = {}
    if project is not None:
        route_values['project'] = self._serialize.url('project', project, 'str')
    if wiki_identifier is not None:
        route_values['wikiIdentifier'] = self._serialize.url('wiki_identifier', wiki_identifier, 'str')
    query_parameters = {}
    if path is not None:
        query_parameters['path'] = self._serialize.query('path', path, 'str')
    if comment is not None:
        query_parameters['comment'] = self._serialize.query('comment', comment, 'str')
    if version_descriptor is not None:
        if version_descriptor.version_type is not None:
            query_parameters['versionDescriptor.versionType'] = version_descriptor.version_type
        if version_descriptor.version is not None:
            query_parameters['versionDescriptor.version'] = version_descriptor.version
        if version_descriptor.version_options is not None:
            query_parameters['versionDescriptor.versionOptions'] = version_descriptor.version_options
    additional_headers = {}
    if version is not None:
        additional_headers['If-Match'] = version
    content = self._serialize.body(parameters, 'WikiPageCreateOrUpdateParameters')
    response = self._send(http_method='PUT',
                          location_id='25d3fbc7-fe3d-46cb-b5a5-0b6f79caf27b',
                          version='6.0-preview.1',
                          route_values=route_values,
                          query_parameters=query_parameters,
                          additional_headers=additional_headers,
                          content=content)
    response_object = models.WikiPageResponse()
    response_object.page = self._deserialize('WikiPage', response)
    response_object.eTag = response.headers.get('ETag')
    return response_object
For more details, you can refer to the link below:
https://github.com/microsoft/azure-devops-python-api/blob/451cade4c475482792cbe9e522c1fee32393139e/azure-devops/azure/devops/v6_0/wiki/wiki_client.py#L107
I was able to achieve this with the REST API:
import requests
import base64
import pandas as pd

pat = 'TO BE FILLED BY YOU' # CONFIDENTIAL
authorization = str(base64.b64encode(bytes(':'+pat, 'ascii')), 'ascii')
headers = {
    'Accept': 'application/json',
    'Authorization': 'Basic '+authorization
}

df = pd.read_csv('sf_metadata.csv') # METADATA OF 3 TABLES
df.set_index('TABLE_NAME', inplace=True, drop=True)
df_test1 = df.loc['CURRENCY']
x1 = df_test1.to_html() # CONVERTING TO HTML TO PRESERVE THE TABULAR STRUCTURE

# JSON FOR PUT REQUEST
SamplePage1 = {
    "content": x1
}

# API CALLS TO AZURE DEVOPS WIKI
response = requests.put(
    url="https://dev.azure.com/xxx/yyy/_apis/wiki/wikis/yyy.wiki/pages?path=SamplePag2&api-version=6.0",
    headers=headers, json=SamplePage1)
print(response.text)
Based on #usr_lal123's answer, here is a function that can update a wiki page, or create it if it doesn't exist:
import requests
import base64

pat = '' # Personal Access Token to be created by you
authorization = str(base64.b64encode(bytes(':'+pat, 'ascii')), 'ascii')

def update_or_create_wiki_page(organization, project, wikiIdentifier, path):
    # Check if the page exists by performing a GET
    headers = {
        'Accept': 'application/json',
        'Authorization': 'Basic '+authorization
    }
    response = requests.get(url=f"https://dev.azure.com/{organization}/{project}/_apis/wiki/wikis/{wikiIdentifier}/pages?path={path}&api-version=6.0", headers=headers)
    # An existing page returns an ETag in its response, which is required when updating the page
    version = ''
    if response.ok:
        version = response.headers['ETag']
    # Modify the headers
    headers['If-Match'] = version
    pageContent = {
        "content": "[[_TOC_]] \n ## Section 1 \n normal text"
        + "\n ## Section 2 \n [ADO link](https://azure.microsoft.com/en-us/products/devops/)"
    }
    response = requests.put(
        url=f"https://dev.azure.com/{organization}/{project}/_apis/wiki/wikis/{wikiIdentifier}/pages?path={path}&api-version=6.0", headers=headers, json=pageContent)
    print("response.text: ", response.text)

requests.post makes a get request

I call requests.post but it ends up making a GET request.
post_body = """
{
    ...
}
"""
headers = {'Content-type': 'application/json', 'Accept': 'application/json'}
post_response = requests.post("https://...", data=post_body, headers=headers)
print(post_response.request.method)
The last print statement prints "GET". I expected to see "POST".
To debug this further, I changed the code like so:
req = requests.Request('POST', "https://...", data=post_body, headers=headers)
prepared = req.prepare()
print(prepared.method)  # "POST"
s = requests.Session()
post_response = s.send(prepared)
print(post_response.request.method)  # "GET"
The print statements print "POST" and "GET". What am I doing wrong?
PS:
$ python3 -V
Python 3.7.0
As stated in the comments, the issue was a redirect. The call was initially made to http://... and then redirected to https://..., hence the final request's method was GET.
Once the initial call was made to https://... directly, the issue was resolved.
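To see this kind of redirect for yourself, inspect response.history. A small helper (assuming the requests library) that lists each hop's status, method, and URL:

```python
import requests

def describe_hops(response):
    """Return (status, method, url) for every redirect hop plus the final request."""
    hops = [(r.status_code, r.request.method, r.url) for r in response.history]
    hops.append((response.status_code, response.request.method, response.url))
    return hops

# A POST to an http:// URL that gets 301-redirected to https:// will
# typically show the final hop with method "GET", which is exactly the
# behavior described above.
```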

Access Locust Host attribute - Locust 1.0.0+

I had previously asked and solved the problem of dumping stats using an older version of Locust, but the setup and teardown methods were removed in Locust 1.0.0, and now I'm unable to get the host (base URL).
I'm looking to print out some information about requests after they've run. Following the docs at https://docs.locust.io/en/stable/extending-locust.html, I have a request_success listener inside my sequential task set; some rough sample code below:
class SearchSequentialTest(SequentialTaskSet):
    @task
    def search(self):
        path = '/search/tomatoes'
        headers = {"Content-Type": "application/json"}
        unique_identifier = uuid.uuid4()
        data = {
            "name": f"Performance-{unique_identifier}",
        }
        with self.client.post(
            path,
            data=json.dumps(data),
            headers=headers,
            catch_response=True,
        ) as response:
            json_response = json.loads(response.text)
            self.items = json_response['result']['payload'][0]['uuid']
            print(json_response)

    @events.request_success.add_listener
    def my_success_handler(request_type, name, response_time, response_length, **kw):
        print(f"Successfully made a request to: {self.host}/{name}")
But I cannot access self.host, and if I remove it I only get a relative URL.
How do I access the base_url inside a TaskSet's event hooks?
How do I access the base_url inside a TaskSet's event hooks?
You can do it by accessing the class variable directly in your request handler:
print(f"Successfully made a request to: {YourUser.host}/{name}")
Or you can use absolute URLs in your test (task) like this:
with self.client.post(
    self.user.host + path,
    ...
Then you'll get the full URL in your request listener.
