I'm trying to create a Perforce custom type, devtrack, but am stuck in the prefetch stage. There I am trying to use my instances class method to find the correct provider:
def self.prefetch(resources)
  instances.each do |prov|
    if resource = resources[prov.name]
      resource.provider = prov
    end
  end
end
and in the instances class method I try to find all clients on the current host by using the command
p4 workspaces -u <USERNAME>
using the below code
def self.get_list_of_workspaces_on_host(host)
  ws_strs = p4(['workspaces', '-u', <USERNAME>]).split("\n")
  ws_strs.select { |str| str.include?(host) }.map { |ws| ws.split[1] }
end
def self.get_workspace_properties(ws)
  md = /^(\w*)_.*_(main|\d{2})_managed$/.match(ws)
  ws_props = {}
  ws_props[:ensure] = :present
  ...
  ws_props
end
def self.instances
  host = `hostname`.strip
  get_list_of_workspaces_on_host(host).collect do |ws|
    ws_props = get_workspace_properties(ws)
    new(ws_props)
  end
end
and the p4 command is defined like this:
has_command(:p4, "/usr/bin/p4") do
  environment :P4PORT => <PERFORCE SERVER>, :P4USER => <USERNAME>
end
The problem I have is that for any p4 command to work I need access to the server, which is specified in the type:
devtrack { '36': source => '<PERFORCE SERVER>'}
but how can I access this value from prefetch? The problem being that prefetch is a class method and thus cannot access the @property_hash or the resource hash. Is there a way to get around this? Am I designing this completely wrong?
I'm new to OpenStack and I'm trying to create a tool so that I can launch any number of instances in an OpenStack cloud. This was easily done using the novaclient module of the OpenStack SDK.
Now the problem is that I want to make the instances execute a bash script as they are created, by adding it as a userdata file, but it doesn't execute. This is confusing because I don't get any error or warning message. Does anyone know what it could be?
Important parts of the code
The most important parts of the Python program are the function that gets the cloud info, the one that creates the instances, and the main function. I'll post them here as @Corey suggested.
"""
Function that allow us to log at cloud with all the credentials needed.
Username and password are not read from env.
"""
def get_nova_credentials_v2():
d = {}
user = ""
password = ""
print("Logging in...")
user = input("Username: ")
password = getpass.getpass(prompt="Password: ", stream=None)
while (user == "" or password == ""):
print("User or password field is empty")
user = input("Username: ")
password = getpass.getpass(prompt="Password: ", stream=None)
d['version'] = '2.65'
d['username'] = user
d['password'] = password
d['project_id'] = os.environ['OS_PROJECT_ID']
d['auth_url'] = os.environ['OS_AUTH_URL']
d['user_domain_name'] = os.environ['OS_USER_DOMAIN_NAME']
return d
Then we have the create_server function:
"""
This function creates a server using the info we got from JSON file
"""
def create_server(server):
s = {}
print("Creating "+server['compulsory']['name']+"...")
s['name'] = server['compulsory']['name']
s['image'] = server['compulsory']['os']
s['flavor'] = server['compulsory']['flavor']
s['min_count'] = server['compulsory']['copyNumber']
s['max_count'] = server['compulsory']['copyNumber']
s['userdata'] = server['file']
s['key_name'] = server['compulsory']['keyName']
s['availability_zone'] = server['compulsory']['availabilityZone']
s['nics'] = server['compulsory']['network']
print(s['userdata'])
if(exists("instalacion_k8s_docker.sh")):
print("Exists")
s['userdata'] = server['file']
nova.servers.create(**s)
And now the main function:
"""
Main process: First we create a connection to Openstack using our credentials.
Once connected we cal get_serverdata function to get all instance objects we want to be created.
We check that it is not empty and that we are not trying to create more instances than we are allowed.
Lastly we create the instances and the program finishes.
"""
credentials = get_nova_credentials_v2()
nova = client.Client(**credentials)
instances = get_serverdata()
current_instances = len(nova.servers.list())
if not instances:
print("No instance was writen. Check instances.json file.")
exit(3)
num = 0
for i in instances:
create_server(i)
exit(0)
For the rest of the code, you can access this public repo on GitHub.
Thanks a lot!
Problem solved
The problem was the content of server['file'], as @Corey said. It cannot be the path to the file where you wrote the data; it must be the contents of the file or a file-type object. In the case of the OpenStack SDK it must be base64-encoded, but that is not the case with novaclient.
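In other words, something along these lines should work (a minimal sketch, reusing the s dict and the script name from the code above):
# Minimal sketch: pass the script's contents (or the open file object), not its path.
# novaclient takes the plain text; the OpenStack SDK would expect it base64-encoded.
with open("instalacion_k8s_docker.sh") as f:
    s['userdata'] = f.read()
nova.servers.create(**s)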
Thanks a lot to @Corey for all the help! :)
I want to launch 5 VMs, and as soon as each one launches, save its IP in a file.
That is the high-level idea of what I want to do: launch 5 instances and save all their IPs on a single VM.
I think template_file will work here, but I am not sure how to implement this scenario.
I tried:
#!/bin/bash
touch myip.txt
private_ip=$(google_compute_instance.default.network_interface.0.network_ip)
echo "$private_ip" >> /tmp/ip.sh
resource "null_resource" "coderunner" {
provisioner "file" {
source = "autoo.sh"
destination = "/tmp/autoo.sh"
connection {
host = google_compute_address.static.address
type = "ssh"
user = var.user
private_key = file(var.privatekeypath)
}
}
connection {
host = google_compute_address.static.address
type = "ssh"
user = var.user
private_key = file(var.privatekeypath)
}
provisioner "remote-exec" {
inline = [
"chmod +x /tmp/autoo.sh",
"sh /tmp/autoo.sh",
]
}
depends_on = ["google_compute_instance.default"]
}
but it is not working; as soon as the script runs, it throws an error:
null_resource.coderunner (remote-exec): /tmp/autoo.sh: line 3: google_compute_instance.default.network_interface.0.network_ip: command not found
There are two kinds of template files. One is template_file, which is a data resource, and the other is templatefile, which is a function.
template_file is used when you have a file you want to transfer from your machine to the provisioned instance and change some parameters according to that machine. For example:
data "template_file" "temp_file" {
template = file("template.yaml")
vars = {
"local_ip" = "my_local_ip"
}
}
(If you want a more detailed explanation of what I did in this example, just ask in the comments, but I don't think it fits your use case.)
This is useful because you can change this file for each instance you have, for example if you iterate over it with count.
As you can see, this template doesn't do what you want; it's a completely different thing.
To do what you want, it's best to use two provisioners:
First, the file provisioner to copy the script (which runs, for example, ip a, along with some other commands to cut and filter only the data you need),
and second, the remote-exec provisioner, which will execute that script.
Instead of using depends_on, if you use the remote-exec provisioner it's best to use the sleep command. sleep holds on for a given amount of time and lets your instance start properly. You need to choose the right sleep time depending on the size and speed of your instance, but I usually use 30 seconds.
I hope I understood your question correctly and that this helps.
I went through the official Google Cloud docs, but I don't know how to use these to list the resources of a specific organization by providing the organization ID.
organizations = CloudResourceManager.Organizations.Search()
projects = emptyList()
parentsToList = queueOf(organizations)
while (parent = parentsToList.pop()) {
  // NOTE: Don't forget to iterate over paginated results.
  // TODO: handle PERMISSION_DENIED appropriately.
  projects.addAll(CloudResourceManager.Projects.List(
      "parent.type:" + parent.type + " parent.id:" + parent.id))
  parentsToList.addAll(CloudResourceManager.Folders.List(parent))
}
You can use Cloud Asset Inventory for this. I wrote this code for performing a sink in BigQuery.
import os
from google.cloud import asset_v1
from google.cloud.asset_v1.proto import asset_service_pb2

def asset_to_bq(request):
    client = asset_v1.AssetServiceClient()
    parent = 'organizations/{}'.format(os.getenv('ORGANIZATION_ID'))
    output_config = asset_service_pb2.OutputConfig()
    output_config.bigquery_destination.dataset = 'projects/{}/datasets/{}'.format(os.getenv('PROJECT_ID'),
                                                                                  os.getenv('DATASET'))
    output_config.bigquery_destination.table = 'asset_export'
    output_config.bigquery_destination.force = True
    response = client.export_assets(parent, output_config)
    # For waiting for the export to finish:
    # response.result()
    # Do stuff after the export
    return "done", 200

if __name__ == "__main__":
    asset_to_bq('')
Be careful if you use it: the sink must be done into an empty/non-existing table, or you must set force to true.
In my case, a few minutes after the Cloud Scheduler job that triggers my function and extracts the data to BigQuery, I have a scheduled query in BigQuery that copies the data to another table, to keep the history.
Note: It's also possible to configure an extract in Cloud Storage if you prefer.
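For example, a minimal sketch of that variant, reusing the client and parent from the function above (the bucket name and object path are just placeholders):
# Hypothetical sketch: export the assets to Cloud Storage instead of BigQuery.
output_config = asset_service_pb2.OutputConfig()
output_config.gcs_destination.uri = 'gs://my-asset-export-bucket/assets.json'  # placeholder bucket/object
response = client.export_assets(parent, output_config)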
I hope this is a starting point for you towards achieving what you want to do.
I am able to list the projects, but I also want to list the folders and the resources under each folder, along with folder.name and tags, and I also want to specify the organization ID so that I get resource information from a specific organization.
import os
from google.cloud import resource_manager

def export_resource(organizations):
    client = resource_manager.Client()
    for project in client.list_projects():
        print("%s, %s" % (project.project_id, project.status))
I'm trying to create a new custom type/provider that is not ensurable.
I've already checked the exec and augeas types, but I couldn't clearly figure out how exactly the integration between type and provider works when we don't define the ensurable mode.
Type:
Puppet::Type.newtype(:ptemplates) do
  newparam(:name) do
    desc ""
    isnamevar
  end

  newproperty(:run) do
    defaultto 'now'

    # Actually execute the command.
    def sync
      provider.run
    end
  end
end
Provider:
require 'logger'

Puppet::Type.type(:ptemplates).provide(:ptemplates) do
  desc ""

  def run
    log = Logger.new(STDOUT)
    log.level = Logger::INFO
    log.info("x.....................................")
  end
end
But I don't know why the provider is being executed twice
root@puppet:/# puppet apply -e "ptemplates { '/tmp': }" --environment=production
Notice: Compiled catalog for puppet.localhost in environment production in 0.12 seconds
I, [2017-07-30T11:00:15.827103 #800] INFO -- : x.....................................
I, [2017-07-30T11:00:15.827492 #800] INFO -- : x.....................................
Notice: /Stage[main]/Main/Ptemplates[/tmp]/run: run changed 'true' to 'now'
Notice: Applied catalog in 4.84 seconds
Also, I had to define the defaultto to force the execution of the provider.run method.
What am I missing?
Best Regards.
First, you should spend some time reading this blog post http://garylarizza.com/blog/2013/11/25/fun-with-providers/ and the two that follow it by Gary Larizza. They give a very good introduction to Puppet types/providers.
Your log statement is executed twice: first because of your def sync in the type, which calls the provider's run method, and second when Puppet tries to determine the current value of your run property.
In order to write a type/provider that is not ensurable you need to do something like:
Type:
Puppet::Type.newtype(:ptemplates) do
  @doc = ""

  newparam(:name, :namevar => true) do
    desc ""
  end

  newproperty(:run) do
    desc ""
    newvalues(:now, :notnow)
    defaultto :now
  end
end
Provider:
Puppet::Type.type(:ptemplates).provide(:ruby) do
  desc ""

  def run
    # Determine the current value of run (:now or :notnow) and return it
  end

  def run=(value)
    # Set the value of run
  end
end
Note that all type providers must be able to determine the value of the property and be able to set it. The difference between an ensurable and a non-ensurable type/provider is that the ensurable type/provider is able to create and destroy the resource, e.g. add or remove a user. A type/provider that is not ensurable cannot create or destroy the resource, e.g. SELinux: you can set its value, but you cannot remove SELinux.
I intend to transfer issues from Redmine to GitLab using this script
https://github.com/sdslabs/redmine-to-gitlab/blob/master/issue-tranfer.py
It works, but I would like to keep the issue IDs during the transition. By default, GitLab just starts from #1 and counts up. I tried adding "newissue['iid'] = issue['id']" and variations to the parameters, but apparently GitLab simply does not permit assigning an ID. Does anyone know if there's a way?
"issue" is the data acquired from Redmine:
newissue = {}
newissue['id'] = pro['id']
newissue['title'] = issue['subject']
newissue['description'] = issue["description"]
if 'assigned_to' in issue:
    auser = con.finduserbyname(issue['assigned_to']['name'])
    if auser:
        newissue['assignee_id'] = auser['id']
print newissue
if 'fixed_version' in issue:
    newissue['milestone_id'] = issue['fixed_version']['id']
newiss = post('/projects/' + str(pro['id']) + '/issues', newissue)
and this is the "post" function
def post(url, load={}):
    load['private_token'] = conf.token
    r = requests.post(conf.base_url + url, params=load, verify=conf.sslverify)
    return r.json()
The API does not allow you to specify an issue ID at creation time. The ID is intended to be sequential. The only way you could potentially accomplish this task is to interact with the database directly. If you choose this route I caution you to be extremely careful, and have backups.