puppetserver as client of another puppetserver - puppet

I have an open-source Puppet 6 install that maintains 'internal' server hosts. I've recently started migrating 'external' servers to a separate puppet domain. I'd like to include the 'external' domain server as a host in the 'internal' domain so I can use the base configuration that I have for other 'internal' hosts.
Is this practical / possible?
If so, how do I specify the 'agent' ssldir to keep it separated from the 'server' ssldir? Looks like there's only one ssldir setting in the [main] section of puppet.conf.
Why have different hosts? There are network / security reasons to separate the two onto different VMs.

I'd like to include the 'external' domain server as a host in the 'internal' domain so I can use the base configuration that I have for other 'internal' hosts.
Is this practical / possible?
It's definitely possible. You'll have to judge how practical it is.
If so, how do I specify the 'agent' ssldir to keep it separated from the 'server' ssldir? Looks like there's only one ssldir setting in the [main] section of puppet.conf.
Most Puppet settings can be specified in multiple sections, so one thing you could try is specifying ssldir in the [agent] section. For the agent only, that should override the default and/or an explicit setting in the [main] section.
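For illustration only, a minimal puppet.conf sketch of that idea (the ssl-internal path is an invented example, not a Puppet default):
# puppet.conf on the 'external' puppetserver
[main]
ssldir = /etc/puppetlabs/puppet/ssl
[agent]
# used only when this box runs as an agent of the 'internal' master
ssldir = /etc/puppetlabs/puppet/ssl-internal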
Alternatively, you can override the config file via command-line options. Specifically, you could use the --ssldir option (or, more broadly, the --vardir option) when you launch Puppet.
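Equivalently, as a sketch on the command line (the server name and directory are placeholders):
puppet agent --test \
  --server puppet.internal.example.com \
  --ssldir /etc/puppetlabs/puppet/ssl-internal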
But do you really need a separate set of SSL certificates? If you set up a central CA instead of having each server provide CA services for its own clients, you ought to be able to give each machine a single cert that it uses to identify itself to all other machines at your site.
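If you go that route, a rough sketch of the relevant settings (the hostname is a placeholder; ca_server is a standard Puppet setting, but check the CA docs for your exact version):
# puppet.conf on agents and on any puppetserver that should not act as a CA
[main]
ca_server = puppet-ca.internal.example.com
# On the non-CA puppetservers you would also disable the built-in CA service,
# which in Puppet Server 6 is done in /etc/puppetlabs/puppetserver/services.d/ca.cfg.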

Related

SSL Certs for single IP - two ports, same URL website

We have a project that is to go live very soon and we ran into this issue when dealing with developers. There are two JD Edwards (ERP) websites hosted on a single IBM WebSphere web server, currently using one FQDN with different port assignments for DEV and TEST users. The websites are:
DEV
https://jdeweb01dev.corporate.company.com:100/jde/owhtml/
TEST
https://jdeweb01dev.corporate.company.com:101/jde/owhtml/
There is only one IP configured for the above server FQDN, but we will eventually assign common names like JdeDev.company.com and JdeTest.company.com.
We want to implement SSL certs for our Test/Dev environments, but how would we implement this in IIS or IBM WebSphere, as well as at the DNS level, since the only difference between the URLs is the port number and each leads to a different website? I'm open to suggestions on how we can improve the design, or on how to make the current design work.
Another important thing to consider: the two websites will be accessed from two different domain forests which have a transitive trust. This is a JD Edwards project.
Appreciate any help on this!
To configure an HTTPS binding in IIS, add a certificate to the site binding in the IIS site bindings module.
https://learn.microsoft.com/en-us/dotnet/framework/wcf/feature-details/how-to-configure-an-iis-hosted-wcf-service-with-ssl
Also, this could be accomplished by the Netsh http command.
netsh http add sslcert ipport=0.0.0.0:8000
certhash=0000000000003ed9cd0c315bbb6dc1c08da5e6
appid={00112233-4455-6677-8899-AABBCCDDEEFF}
https://learn.microsoft.com/en-us/dotnet/framework/wcf/feature-details/how-to-configure-a-port-with-an-ssl-certificate
After you have set up the FQDN in DNS, you can fill in the Host name field of the binding so that the service is accessed via the server's fully qualified domain name.
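Once the friendlier names exist in DNS (A or CNAME records for JdeDev.company.com and JdeTest.company.com pointing at the same IP), one way to keep both sites on port 443 is host-name bindings with SNI. A rough appcmd sketch, assuming IIS 8 or later and site names that are only placeholders:
appcmd set site /site.name:"JDE-DEV" /+bindings.[protocol='https',bindingInformation='*:443:jdedev.company.com']
appcmd set site /site.name:"JDE-TEST" /+bindings.[protocol='https',bindingInformation='*:443:jdetest.company.com']
In the IIS binding dialog this corresponds to filling in the Host name field and ticking "Require Server Name Indication", with a certificate (wildcard or SAN) that covers both names. WebSphere can achieve the equivalent with separate virtual hosts, as described below.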
Feel free to let me know if there is anything I can help with.
WebSphere supports multiple virtual hosts, each with its own alias(es), which can be a combination of DNS name and port. The built-in default_host will typically have an alias for the server/node name and the * wildcard for all ports. You then assign a specific virtual host to an application when you deploy it.

How does the Fabric CLI get the IP of a peer/orderer in the byfn example?

Could anybody tell me how the CLI knows the IPs of the other peers and orderers just from the Host values in configtx.yaml?
When is the DNS information generated?
Can anybody also tell me some more about the configuration below: "CORE_VM_ENDPOINT=unix:///host/var/run/docker.sock"?
When you run a Fabric example, it always refers to default credentials or an already-configured Fabric configuration.
For example, if you use the basic Fabric example, you will run [your directory]/fabric-dev-servers/startFabric.sh
This file refers to already-configured information. One piece of that is the connection profile. If you look at the createPeerAdmin.sh file, you can find DevServer_connection.json. This file contains the connection information for the Fabric network.
Since you are using byfn.sh, you can add the host IP addresses using "extra_hosts" in the docker-compose.yaml file.
If nothing is defined there, it will use localhost by default.
https://medium.com/1950labs/setup-hyperledger-fabric-in-multiple-physical-machines-d8f3710ed9b4
Like this:
extra_hosts:
- "peer0.org1.example.com:192.168.1.10"
- "ca.org1.example.com:192.168.1.15"
- "peer0.org2.example.com:192.168.1.20"
- "ca.org2.example.com:192.168.1.25"

How do I put WSO2 Identity Server on my site? Remove localhost and make it public

I want to make my identity server public so that all users who visit it can access the identity server but right now only I can access it since it's hosted locally. How can I deploy this so that it runs on my IIS? Will copying and pasting the WSO2 IS folder into my inetpub\wwwroot folder work? (And after configuring the .xml files so that it shares my public domain)
I tried reading the WSO2 IS documentation but it's not very clear to me how I can make it public. I was hoping for a systematic tutorial/way to do this but it chains from one step to multiple.
https://docs.wso2.com/display/IS530/Deployment+Guidelines+in+Production
https://docs.wso2.com/display/IS550/Changing+the+hostname
I believe there are a few misconceptions (no, copying the installation into inetpub\wwwroot won't work; it's not PHP).
chains from one step to multiple
Well, the documentation covers only the product; it assumes some knowledge of the network and systems it runs on.
1 - You should run WSO2 IS as a service (this Windows guide may be helpful, and this is how to run WSO2 IS as a service on Linux).
2 - change the repository/conf/carbon.xml
(this step is optional, but increases security)
HostName - to the public hostname
MgtHostName - to the internal hostname, so the administrative console is not accessible from the internet
3 - The best practice for exposing WSO2 IS would be a reverse proxy (depending on whether you are using IIS, nginx or httpd), so you don't expose the default port 9443 to the outside directly (I assume you want to use your own SSL certificate on 443 and terminate TLS in the web server).
For the default WSO2 IS applications you need to create a reverse proxy from HTTPS:443 -> HTTP:9763:
update /repository/conf/tomcat/catalina-server.xml and, on the Connector listening on 9763, add the attribute proxyPort="443".
(Note: I am not sure that will work; what will work for sure is TLS bridging HTTPS:443 -> HTTPS:9443, which means adding proxyPort="443" to the Connector for port 9443 instead.)
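For illustration, a minimal nginx sketch of the TLS-bridging variant (HTTPS:443 -> HTTPS:9443); the hostname and certificate paths are placeholders:
server {
    listen 443 ssl;
    server_name login.example.com;            # the public HostName from carbon.xml
    ssl_certificate     /etc/ssl/certs/login.example.com.crt;
    ssl_certificate_key /etc/ssl/private/login.example.com.key;
    location / {
        proxy_pass https://localhost:9443;    # WSO2 IS default HTTPS port
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto https;
    }
}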
Every WSO2 product already ships with an embedded Tomcat application server.
This means you do not need to, nor should you, place the application files on a separate application server. Use what's in the product.
From your description it seems you do not have much familiarity with infrastructure, servers and so on, so I will try to help and clarify some points.
As I mentioned above, you should use the Tomcat that already comes with the product and put it on a VM (server) that has internet access, that is, with ports 80, 443 and also ports 9443 and 8243 (the default product ports) open for access beyond the internal network (LAN).
If you take the public IP of the VM where WSO2 Identity Server is running and access it from outside your local area network (LAN), the service should work.
The analogy with a website is the same concept. When you want to publish a website to the internet - putting the files inside Apache's WWW folder, as you said - that Apache has to have a public IP so that people outside your local network can reach it. It's the same concept here, except WSO2 already has its own "Apache" (Tomcat) built in; you just need to expose it on your public IP.

RabbitMQ Cluster on EC2: Hostname Issues

I want to set up a 3-node Rabbit cluster on EC2 (Amazon Linux). We'd like to have recovery implemented so that if we lose a server it can be replaced by a new server automagically. We can easily set the cluster up manually using the default hostname (ip-xx-xx-xx-xx), so that the broker id is rabbit@ip-xx-xx-xx-xx. This works because the hostname is resolvable over the network.
The problem is: this hostname will change if we lose/reboot a server, invalidating the cluster. We haven't had luck setting a custom static hostname because custom hostnames are not resolvable by the other machines in the cluster; that's the only part of that article that doesn't make sense.
Has anyone accomplished a RabbitMQ Cluster on EC2 with a recovery implementation? Any advice is appreciated.
You could create three A records in an external DNS service for the three boxes and use them in the config, e.g. rabbit1.alph486.com, rabbit2.alph486.com and rabbit3.alph486.com. These could even point at the EC2 private IP addresses; if all of the boxes are in the same region it'll be faster and cheaper. If you lose a box, just update the DNS record.
Additionally, you could assign elastic IPs to the three boxes. Then, when you lose a box, all you'd need to do is assign the elastic IP to its replacement.
Of course, if you have a small number of clients, you could just add entries into the /etc/hosts file on each box and update as needed.
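For the /etc/hosts route, a sketch of what each box would carry (the private IPs and short names are invented for illustration):
# /etc/hosts on every node (and on any client that needs to reach them)
10.0.1.11  rabbit1
10.0.1.12  rabbit2
10.0.1.13  rabbit3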
From:
http://www.rabbitmq.com/ec2.html
Issues with hostname
RabbitMQ names the database directory using the current hostname of the system. If the hostname changes, a new empty database is created. To avoid data loss it's crucial to set up a fixed and resolvable hostname. For example:
sudo -s # become root
echo "rabbit" > /etc/hostname
echo "127.0.0.1 rabbit" >> /etc/hosts
hostname -F /etc/hostname
@Chrskly gave good answers that are the general consensus of the Rabbit community:
Init scripts that handle DNS or identification of other servers are mainly what I hear.
We could not get elastic IPs to work without the aid of DNS or hostname aliases, because the internal IP/DNS on Amazon still rotates, and the public IP/DNS names that stay static cannot be used as the hostname for Rabbit unless aliased properly.
Hosts-file manipulation via a script is also an option. It needs to be accompanied by a script that can identify the DNS names of the other servers at launch, so it doesn't save much work in terms of making the configuration more "solid state".
What I'm doing:
Due to some limitations on the DNS front, I am opting to use bootstrap scripts to initialize the machine and cluster it with any other available machines, using the default internal DNS names assigned at launch. If we lose a machine, a new one will come up, prepare Rabbit, and look up the DNS names of machines to cluster with. It will then remove the dead node from the cluster for housekeeping.
I'm using some homebrew init scripts in Python. However, this could easily be done with something like Chef/Puppet.
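As a stripped-down shell sketch of what such a bootstrap does (node names are placeholders, and it assumes the Erlang cookie is already shared across the boxes):
# on the freshly launched node, after rabbitmq-server is running
rabbitmqctl stop_app
rabbitmqctl join_cluster rabbit@rabbit1       # any live node found via DNS lookup
rabbitmqctl start_app
# housekeeping: drop the node this instance replaced
rabbitmqctl forget_cluster_node rabbit@ip-10-0-1-99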

In node.js, using SSL with more than one domain name on one IP address

I'm used to Apache on Linux, where each domain name using SSL requires its own IP address.
Is this still true when using node.js and not using Apache at all?
The same limitations apply in node.js as in Apache: they have nothing to do with the particular server software you're using; they're inherent in the HTTP and TLS/SSL protocols.
Having said that, there are two ways to run SSL for multiple domains from a single IP address. I don't know the status of node.js support for either of these, but it shouldn't matter for the first alternative.
First, you can get a single SSL certificate that covers all of the domain names you want to use -- either a wildcard if they're all subdomains of the same domain or one that uses Subject Alternative Names (SAN) if they're not. Note that SAN is not supported by some older web browsers, especially on some smartphones.
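For the SAN route, a sketch of generating a key and CSR that covers several names (the domains are placeholders; -addext needs a reasonably recent OpenSSL, otherwise the names go in an OpenSSL config file instead):
openssl req -new -newkey rsa:2048 -nodes \
  -keyout site.key -out site.csr \
  -subj "/CN=example.com" \
  -addext "subjectAltName=DNS:example.com,DNS:www.example.com,DNS:shop.example.com"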
Second, you can use Server Name Indication (SNI) to configure multiple SSL certificates; it extends the SSL protocol to make the hostname available to the server before the key exchange is done. Browser support for SNI is not as good as for SAN, and in particular it doesn't work with any Internet Explorer version on Windows XP.
This link shows how to do it with nginx using the SNI method.
https://www.digitalocean.com/community/articles/how-to-set-up-multiple-ssl-certificates-on-one-ip-with-nginx-on-ubuntu-12-04
You might as well let nginx do the https and file serving and have it reverse proxy into node.js for the api work, as shown here:
https://stackoverflow.com/a/15008873/151312
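As a rough sketch of that arrangement (two names and certificates on one IP, with SNI picking the certificate and nginx proxying to the node.js apps; all names, paths and ports are invented):
server {
    listen 443 ssl;
    server_name site-a.example.com;
    ssl_certificate     /etc/ssl/site-a.crt;
    ssl_certificate_key /etc/ssl/site-a.key;
    location / { proxy_pass http://127.0.0.1:3000; }   # node.js app A
}
server {
    listen 443 ssl;
    server_name site-b.example.com;
    ssl_certificate     /etc/ssl/site-b.crt;
    ssl_certificate_key /etc/ssl/site-b.key;
    location / { proxy_pass http://127.0.0.1:3001; }   # node.js app B
}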
