In Puppet master, is there any way to store the facts of each client both in a database and in a YAML file? Currently it works with the YAML file only, but I want to store them in both places. How can I do it? Any help would be appreciated.
storeconfigs should work alongside the existing YAML cache; check out:
http://projects.puppetlabs.com/projects/1/wiki/using_stored_configuration
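Something along these lines in the master's puppet.conf should do it. This is only a sketch of the legacy ActiveRecord-style stored configs; the database settings below are placeholders, and on recent Puppet versions the backend is PuppetDB instead:
#!/bin/bash
# Hypothetical sketch: enable legacy ActiveRecord storeconfigs on the master.
# Merge these settings into an existing [master] section if you already have one;
# database name/credentials are placeholders.
cat >> /etc/puppet/puppet.conf <<'EOF'
[master]
    storeconfigs = true
    dbadapter    = mysql
    dbname       = puppet
    dbuser       = puppet
    dbpassword   = secret
    dbserver     = localhost
EOF
# The YAML fact cache under $yamldir keeps being written regardless, so facts
# end up in both the database and the YAML files.
# On newer Puppet, set storeconfigs = true together with
# storeconfigs_backend = puppetdb and let PuppetDB act as the database.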
I am trying to understand a few things that arose from writing a COPY command in a SQL (.sql) file generated by Prisma. I understand that the file path given to COPY is absolute, or relative to where the database server is running.
COPY table(value,other) FROM 'how/do/i/get/the/right/path' DELIMITER ',' CSV HEADER
Can someone explain, when we have a hosted server and database (I believe they are usually on separate machines), how we would COPY from a CSV file? The file is part of my typical Git repo. After reading, my understanding is that the file would actually need to be on the machine where the database is hosted; is that correct? I am not sure I have a full grasp of this. Do hosted DB servers also store files? I thought they would hold just the database (I understand it's a machine that could technically have anything on it, but is that ever done?).
What would be the best way to access this file from the DB? Would I use psql to connect to my database and then SSH into the server? I think there are different solutions, like running a script that uses psql and the client-side \COPY variation.
What I wanted ideally was to have the file as part of my repo and, in a Prisma migration file, copy its contents into a database table. If the above is incorrect, could someone clarify how I would get the correct path into the command? I want it to be in a .sql file and use the host variable (assuming that would work, depending on clarity on the above points regarding where the files live).
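For reference, this is the client-side variation I am considering; the connection string, table, and CSV path below are placeholders. As I understand it, \copy runs inside psql on my machine and streams the local CSV over the connection, so the file would not have to live on the database host:
#!/bin/bash
# Placeholder connection string, table, and CSV path; \copy reads the file on
# the client machine and sends the rows over the existing connection.
psql "postgresql://app_user@db.example.com:5432/app_db" \
  -c "\copy my_table(value, other) FROM 'data/seed.csv' DELIMITER ',' CSV HEADER"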
Thanks!
Sources used:
https://www.postgresql.org/docs/current/sql-copy.html
https://www.cybertec-postgresql.com/en/copy-in-postgresql-moving-data-between-servers/
I have a .dtsx file that contains two connection strings:
One points to an Azure Storage account file share.
The other points to a Data Source connection.
e.g.
Connection string = Dev.file.core.windows.net\xxx\xxx\
Data Source = xx.xx.xx.xxx(IpAddress),User Id=XXX;Password=XXXX;Initial Catalog=XXXX;
The IP address looks like 15.10.52.192.
The current connection strings contain the Development environment values and need to be transformed for the Test, QA, and Prod environments using Azure DevOps.
I have used File Transformation, but it is not working in the Azure DevOps pipeline.
Please suggest a solution for transforming these values during SSIS deployment.
Add those connection strings as project parameters in your SSIS packages.
Refer to the documentation below on how to do it:
https://learn.microsoft.com/en-us/sql/integration-services/integration-services-ssis-package-and-project-parameters?view=sql-server-ver15
Once you have these project parameters configured, you should be able to change them in the Azure DevOps pipeline before deploying to Production, as those project-level parameters will be available to you there.
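As a rough sketch of that last step (the folder, project, and parameter names below are placeholders), a pipeline step can override a project parameter in SSISDB after the .ispac has been deployed, for example with sqlcmd:
#!/bin/bash
# Hypothetical Azure DevOps pipeline step: override an SSIS project parameter
# per environment after deployment. Server/credentials come from pipeline
# variables; folder, project, and parameter names are placeholders.
sqlcmd -S "$SSIS_SERVER" -d SSISDB -U "$SQL_USER" -P "$SQL_PASSWORD" -Q "
EXEC catalog.set_object_parameter_value
    @object_type     = 20,                        -- 20 = project-level parameter
    @folder_name     = N'MyFolder',
    @project_name    = N'MyProject',
    @parameter_name  = N'FileShareConnectionString',
    @parameter_value = N'$FILESHARE_CONNECTION';  -- value supplied by the pipeline
"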
I am building an automation script that builds infrastructure in Azure and then installs Confluent Kafka on top of it. Confluent already has an Ansible playbook that I want to use: https://github.com/confluentinc/cp-ansible
My playbook builds out the Azure infrastructure (SSH tokens included), clones the Confluent Git repo, and then generates a new hosts.yml file with the data from the newly created Azure infrastructure. I can then call the Confluent playbook with the new inventory file and all is well.
My question is: can I do everything in one playbook? Since I don't have control over the Confluent playbook, I will need to maintain the vars from my well-formed hosts.yml file. The problem with creating a global hosts.yml file that works for both playbooks is that much of the data the Confluent playbook needs isn't available until the infrastructure has been built.
My thoughts are, I can do one of the following:
Execute the ansible playbook with a new shell command ansible-playbook -i cp-ansible/hosts.yml cp-ansible/all.yml
I'm assuming that I will lose all the console output if I do this
Load the playbook and do a lot of set_fact: tasks
Something creative that I can't think of
My progress is over here: https://github.com/joecoolish/kafka-infrastructure-ansible
Rather than a single file, it might be easier to keep your inventory in a directory. Either way, you want meta: refresh_inventory:
https://docs.ansible.com/ansible/latest/modules/meta_module.html
You can also use dynamic inventory from Azure.
https://learn.microsoft.com/en-us/azure/developer/ansible/dynamic-inventory-configure
https://docs.ansible.com/ansible/latest/plugins/inventory/azure_rm.html
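As a rough sketch (the inventory file name, resource group, and tag below are assumptions), the azure_rm dynamic inventory can replace the generated hosts.yml entirely, and the same -i argument can then be passed to both your own playbook and the Confluent one:
#!/bin/bash
# Hypothetical: write an azure_rm dynamic-inventory config (the file name must
# end in azure_rm.yml) and run both playbooks against it, so the Confluent
# playbook sees the freshly built Azure hosts without a hand-generated hosts.yml.
cat > inventory.azure_rm.yml <<'EOF'
plugin: azure_rm                 # azure.azcollection.azure_rm on newer Ansible
auth_source: auto
include_vm_resource_groups:
  - my-kafka-rg                  # assumed resource group created by the infra playbook
keyed_groups:
  - key: tags.role               # group VMs by an assumed "role" tag (broker, zookeeper, ...)
EOF

ansible-playbook build-infra.yml                              # assumed infra playbook
ansible-playbook -i inventory.azure_rm.yml cp-ansible/all.yml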
Is it possible to use a node-local config file (Hiera?) that is used by the Puppet master to compile the update list during a Puppet run?
My use case is that Puppet will make changes to users' .bashrc files and to the users' home directories, but I would like to be able to control which users are affected with a file on the actual node itself, not in the site.pp manifest.
"Is it possible to use a node-local config file (Hiera?) that is used by the Puppet master to compile the update list during a Puppet run?"
Sure, there are various ways to do this.
"My use case is that Puppet will make changes to users' .bashrc files and to the users' home directories, but I would like to be able to control which users are affected with a file on the actual node itself, not in the site.pp manifest."
All information the master has about the current state of the target node comes in the form of node facts, provided to it by the node in its catalog request. A local file under local control, whose contents should be used to influence the contents of the node's own catalog, would fall into that category. Puppet supports structured facts (facts whose values have arbitrarily-nested list and/or hash structure), which should be sufficient for communicating the needed data to the master.
There are two different ways to add your own facts to those that Puppet will collect by default:
Write a Ruby plugin for Facter, and let Puppet distribute it automatically to nodes, or
Write an external fact program or script in the language of your choice, and distribute it to nodes as an ordinary file resource
Either variety could read your data file and emit a corresponding fact (or facts) in appropriate form. The Facter documentation contains details about how to write facts of both kinds; "custom facts" (Facter plugins written in Ruby) integrate a bit more cleanly, but "external facts" work almost as well and are easier for people who are unfamiliar with Ruby.
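For illustration, here is a minimal external fact of the second kind; the data file path and fact name are only examples. The executable goes into Facter's facts.d directory (for example /etc/facter/facts.d/, or /opt/puppetlabs/facter/facts.d/ on newer agents) and must be marked executable:
#!/bin/bash
# Example external fact script, e.g. /etc/facter/facts.d/managed_users.sh
# Reads a node-local list of users (one per line) and exposes it as the
# fact "managed_users"; the file path and fact name are only examples.
USERS_FILE="/etc/puppet/managed_users.txt"

if [ -r "$USERS_FILE" ]; then
  # Executable external facts emit key=value pairs, one per line.
  echo "managed_users=$(paste -sd, "$USERS_FILE")"
else
  echo "managed_users="
fi
The master can then split that fact's value in your manifests to decide which users to manage on that node.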
In principle, you could also write a full-blown custom type and accompanying provider, and let the provider, which runs on the target node, take care of reading the appropriate local files. This would be a lot more work, and it would require structuring the solution a bit differently than you described. I do not recommend it for your problem, but I mention it for completeness.
I want to make some dynamic configuration choices on the Puppet master side before it makes a deployment to the Puppet agent. So I want to send a significant amount of configuration detail along with the agent's request to the master. Is there a proper way to do this in Puppet?
Regards,
Malintha Adiakri
Yes! There is Facter. This is how I use it and what I find most robust, but there are other ways to define new facts.
For example, if you want to add the role of the server, you can do:
export FACTER_ROLE=jenkins
Now the command facter role will print jenkins. Yay!
When the Puppet agent runs, all facts known to the system will be passed to the Puppet master. Be aware that the puppet service will not know the fact you just defined, because it runs in a different scope and does not see your shell's environment variables.
I put my facts in a .facts file and source it before the run.
This is my script that runs from cron:
#!/bin/bash
# Source the FACTER_* variables so the agent run picks them up as facts.
source /root/.facts
puppet agent -t --server puppetmaster.example.com --pluginsync
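The .facts file itself is just a series of exports, something like this (the values are only examples):
# /root/.facts -- sourced before the agent run
export FACTER_ROLE=jenkins
export FACTER_DATACENTER=eu-west        # example extra fact
export FACTER_APP_SETTINGS="a,b,c"      # example comma-separated value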
While the previous answer is correct, I'm opening this as a new one because it's significant. Defining FACTER_factname variables in the agent's environment is a nice and quick way to override some facts. If you wish to rely on your own facts for production purposes, you should look to custom facts instead.
In its basic form, you use it by deploying Ruby code snippets to your boxen. For an easier approach, take special note of external facts. Those are probably the best solution for your problem.
Also note that, as of Facter 2, your facts can contain complex data structures, so you don't have to serialize everything into strings. If the amount of data coming from the agent is large, as you emphasize, that may be helpful.
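For example (the file name and keys are just an illustration), a structured external fact can be a plain JSON or YAML file dropped into Facter's facts.d directory, with no Ruby required:
#!/bin/bash
# Hypothetical: install a structured external fact as a static JSON file.
# Facter exposes the top-level key as a fact whose value is the nested structure.
mkdir -p /etc/facter/facts.d
cat > /etc/facter/facts.d/app_config.json <<'EOF'
{
  "app_config": {
    "users": ["alice", "bob"],
    "ports": { "http": 8080, "admin": 8443 }
  }
}
EOF
facter app_config   # verify on the node; the fact name here is an example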