Swarm mode token - Puppet module

The documentation of a swarm mode setup seems to be missing something important.
It looks like to manage a swarm with Puppet I need to provide a join token.
But to get the token I need to go to the manager node, run docker swarm join-token -q worker, copy the output and paste it into Puppet?
Am I missing something, or is there some automated way to do that?
What I would expect is this:
if (host_has_label("my-swarm-manager")) {
  docker::swarm { 'cluster_manager':
    init           => true,
    advertise_addr => current_host_ip(),
    listen_addr    => current_host_ip(),
    swarm_name     => 'my-swarm',
  }
} else if (host_has_label("my-swarm-worker")) {
  docker::swarm { 'cluster_worker':
    join           => true,
    advertise_addr => current_host_ip(),
    listen_addr    => current_host_ip(),
    manager_ip     => get_ip_by_swarm_name('my-swarm'),
    token          => get_token_by_swarm_name('my-swarm'),
  }
}

How to use AWS SSO temporary credentials in AWS SDK for local, but Instance Role for prod?

I understand how to use AWS SSO temporary credentials in my SDK, from this question and this question and this documentation. It's pretty simple, really:
import { fromSSO } from "@aws-sdk/credential-providers";
const client = new FooClient({ credentials: fromSSO({ profile: "my-sso-profile" }) });
And this code will work perfectly fine on my local computer and on my teammates' computers (as long as they are logged in to AWS SSO with the AWS CLI). However, it's not at all clear to me how to modify this so that it works both on our local computers with AWS SSO and on an EC2 instance with an Instance Role. It's obvious that the EC2 instance should not have access to AWS SSO; rather, it should get its permissions from the IAM policies attached to its associated Instance Role. But what would my code look like, to account for both scenarios?
I'm taking a wild stab at this:
import { fromInstanceMetadata, fromSSO } from "@aws-sdk/credential-providers";
const isEc2Instance = "I HAVE NO IDEA WHAT TO DO HERE";

let credentials;
if (isEc2Instance) {
  credentials = fromInstanceMetadata({
    // Optional. The connection timeout (in milliseconds) to apply to any remote requests.
    // If not specified, a default value of `1000` (one second) is used.
    timeout: 1000,
    // Optional. The maximum number of times any HTTP connections should be retried. If not
    // specified, a default value of `0` will be used.
    maxRetries: 0,
  });
} else {
  credentials = fromSSO({ profile: "my-sso-profile" });
}

const client = new FooClient({ credentials: credentials });
The code for getting the credentials from instance metadata is from here, so it's probably correct. But as line 2 says, I have no idea how to determine whether I should be using AWS SSO or instance metadata or something else (perhaps for other platforms: what if this code is deployed to EC2 for the dev env and ECS/EKS for the prod env?).
Maybe this if statement is not the right approach. I would gladly consider another option.
So, what would be the correct way to write code that gets the AWS credentials from the correct source depending on the platform where it's running?
And yes, since these credentials will be the same for any AWS SDK Client anywhere in the app, the code that gets the credentials should be abstracted away from this, and 5 if statements in a CredentialsHelper doesn't sound so bad, but I didn't want to overcomplicate this question.
The code is JavaScript and I'm looking for something that works in Node.js, but I think the logic would be the same in any language.
You need to have the DefaultAWSCredentialsProviderChain added in your code.
It will pick up your local credentials when running locally and the instance credentials when deployed on EC2 or EKS.
For reference, here is a snippet (Scala, using the AWS SDK for Java) that selects a credentials provider based on configuration:
private def getProvider(awsConfig: AWSConfig): Either[Throwable, AWSCredentialsProvider] = {

  def isDefault(key: String): Boolean = key == "default"
  def isIam(key: String): Boolean = key == "iam"
  def isEnv(key: String): Boolean = key == "env"

  ((awsConfig.accessKey, awsConfig.secretKey) match {
    case (a, s) if isDefault(a) && isDefault(s) =>
      new DefaultAWSCredentialsProviderChain().asRight
    case (a, s) if isDefault(a) || isDefault(s) =>
      "accessKey and secretKey must both be set to 'default' or neither".asLeft
    case (a, s) if isIam(a) && isIam(s) =>
      InstanceProfileCredentialsProvider.getInstance().asRight
    case (a, s) if isIam(a) || isIam(s) =>
      "accessKey and secretKey must both be set to 'iam' or neither".asLeft
    case (a, s) if isEnv(a) && isEnv(s) =>
      new EnvironmentVariableCredentialsProvider().asRight
    case (a, s) if isEnv(a) || isEnv(s) =>
      "accessKey and secretKey must both be set to 'env' or neither".asLeft
    case _ =>
      new AWSStaticCredentialsProvider(
        new BasicAWSCredentials(awsConfig.accessKey, awsConfig.secretKey)
      ).asRight
  }).leftMap(new IllegalArgumentException(_))
}
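For the Node.js SDK v3 that the question uses, the closest equivalent of DefaultAWSCredentialsProviderChain is the default Node provider chain, exposed as fromNodeProviderChain in @aws-sdk/credential-providers. A minimal sketch under that assumption (FooClient and the profile name are the question's placeholders, not real exports):
import { fromNodeProviderChain } from "@aws-sdk/credential-providers";

// The default chain tries environment variables, shared config/SSO profiles,
// and finally container/EC2 instance metadata, so the same code works locally
// with an SSO profile and on EC2/ECS/EKS with a role, without any if statement.
const credentials = fromNodeProviderChain({
  // Optional: only consulted when a shared config / SSO profile exists locally.
  profile: "my-sso-profile",
});

const client = new FooClient({ credentials });
Note that constructing the client with no credentials option at all makes SDK v3 run this same default chain, so the explicit call is only needed if you want to pin a profile or tweak the chain's options.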

Hyperledger Composer issue identity but missing business card

I am working on a proof of concept with a Node.js application and the 'composer-client' npm module.
I have tried different commands such as adding a participant, adding an asset and performing a transaction and everything seems to work correctly.
However, when I try to issue a new identity I do not get the results that I expect. I execute my Node.js application with the following code:
var businessNetwork = new BusinessNetworkConnection();
return businessNetwork.connect('admin@tutorial-network')
    .then(() => {
        return businessNetwork.issueIdentity('org.acme.biznet.Trader#Trader_001', 'usr001');
    })
    .then((result) => {
        console.log(`userID = ${result.userID}`);
        console.log(`userSecret = ${result.userSecret}`);
    })
    .catch((error) => {
        console.error(error);
    });
Then userID and userSecret are displayed in the console log. After that, I try to ping the business network:
var businessNetwork = new BusinessNetworkConnection();
return businessNetwork.connect('usr001@tutorial-network')
    .then(() => {
        return businessNetwork.ping();
    })
    .then((result) => {
        console.log(`participant = ${result.participant ? result.participant : '<no participant found>'}`);
    })
    .catch((error) => {
        console.error(error);
    });
However, I get the following error message:
{ Error: Card not found: usr001@tutorial-network
    at IdCard.fromDirectory.catch.cause (/home/user.name/git_repositories/nodejs/first.blockchain.test/node_modules/composer-common/lib/cardstore/filesystemcardstore.js:73:27)
    at <anonymous>
  cause:
   { Error: Unable to read card directory: /home/user.name/.composer/cards/user001@tutorial-network
If I execute the command composer identity list -c admin@tutorial-network, I get the following output:
$class: org.hyperledger.composer.system.Identity
identityId: 9b49f67c262c0ae23e1e0c4a8dc61c4a12b5119df2b6a49fa2e02fa56b8818c3
name: usr001
issuer: 27c582d674ddf0f230854814b7cfd04553f3d0eac55e37d915386c614a5a1de9
certificate:
state: ISSUED
participant: resource:org.acme.biznet.Trader#Trader_001
But, I am not able to find the business card.
It works for me. I'm using composer 0.15.1.
var businessNetwork = new BusinessNetworkConnection();
return businessNetwork.connect('admin@carauction-network')
    .then(() => {
        return businessNetwork.ping();
    })
    .then((result) => {
        console.log(`participant = ${result.participant ? result.participant : '<no participant found>'}`);
    })
    .catch((error) => {
        console.error(error);
    });
The output is like this:
linux1#fabric:~/eventclient$ node event.js
participant = org.hyperledger.composer.system.NetworkAdmin#admin
You may need to import the ID card into the wallet:
composer card import --file networkadmin.card
I had a similar issue late last week. Part of the upgrade instructions from v0.14 to v0.15 states that we have to delete the ~/.composer, ~/.composer-connection-profiles and ~/.composer-credentials folders, if they exist. I skipped that step on my first upgrade to v0.15 and encountered the error you are seeing. I went back and deleted those three folders, reinstalled the binaries and rechecked the Docker image status. The error went away and has not reappeared.
You named the new card user001, not user001@tutorial-network. Try connecting with just user001 as your connection card name.
I used the following command to create a card for the participant, using the enrollment secret obtained from the JavaScript:
composer card create -u usr001 -s <enrollment_secret> -f usr001.card -n tutorial-network -p connection.json
You probably created the connection.json needed in an earlier step of the tutorial you are following. If this file is not available explicitly, you may get it from the composer wallet. In the current version, it can be found in /home/your_user/.composer/cards/. If you are only following the tutorial, any connection.json in this directory will do.
After that, you must import the created card into the wallet using:
composer card import -f usr001.card
Your code for testing the issued identity is correct.

How to connect my LDAP with an existing ldap system in Laravel 5.4

I have to connect my LDAP to an existing LDAP system with the following conditions:
The domain used is 12.14.4.38.
The username is 0000001 and the password is 123456.
I've opened this link, but I still don't understand how to use it. This is my adldap.php code:
<?php

return [
    'connections' => [
        'default' => [
            'auto_connect' => true,
            'connection' => Adldap\Connections\Ldap::class,
            'schema' => Adldap\Schemas\ActiveDirectory::class,
            'connection_settings' => [
                'account_prefix' => env('ADLDAP_ACCOUNT_PREFIX', ''),
                'account_suffix' => env('ADLDAP_ACCOUNT_SUFFIX', ''),
                'domain_controllers' => explode(' ', env('ADLDAP_CONTROLLERS', '12.14.4.38')),
                'port' => env('ADLDAP_PORT', 389),
                'timeout' => env('ADLDAP_TIMEOUT', 5),
                'base_dn' => env('ADLDAP_BASEDN', 'dc=12.14.4.38'),
                'admin_account_suffix' => env('ADLDAP_ADMIN_ACCOUNT_SUFFIX', ''),
                'admin_username' => env('ADLDAP_ADMIN_USERNAME', '0000001'),
                'admin_password' => env('ADLDAP_ADMIN_PASSWORD', '123456'),
                'follow_referrals' => false,
                'use_ssl' => false,
                'use_tls' => false,
            ],
        ],
    ],
];
// Create a new Adldap Provider instance ($connections is the 'connections' array from the config above).
$provider = new \Adldap\Connections\Provider($connections);
$ad = new \Adldap\Adldap($connections);

try {
    // Connect to the provider you specified in your configuration.
    $provider = $ad->connect('default');

    // Connection was successful.
    // We can now perform operations on the connection.
    $user = $provider->search()->users()->find('0000001');
} catch (\Adldap\Auth\BindException $e) {
    die("Can't connect / bind to the LDAP server! Error: $e");
}
You didn't specify a DN/path for the base, nor did you enter a path for the admin.
This is how it normally looks:
Search host: 12.14.4.38
basedn: "ou=Users,dc=DRiski,dc=com" <- I use your username as an example
admin: "cn=admin,ou=admins,dc=DRiski,dc=com"
password: just the regular password
What is with the weird cn, dc, ou stuff? That is like a path/folder in which the server needs to look to find the user (or users/groups in the case of the base dn).
The base dn specifies where to look for the users, in this case: in the folder Users, on the server DRiski.com.
That is also how you specify your admin (you tell the server where to find the thing).
Solved?
If not, try connecting to your LDAP using ldapadmin (or another administrative tool) so that you can see how it works and what path to enter.

GitLab set up to send invite Email to new members

I'm new to setting up GitLab.
I have used the VSHN Puppet module to install GitLab in AWS.
The GitLab server is running fine, but the email invites are not working for anyone.
I have used the following configuration in the site.pp file:
node 'client-ip-address' {
  class { 'gitlab':
    external_url => 'http://client-ip-address',
  }
}
Could anyone please tell me what configuration is required to set up email notification?
Depending on your method of sending email, it is configured with the gitlab_rails options of the module, documented here: https://github.com/vshn/puppet-gitlab/blob/master/README.md#usage
Examples for configuring various email providers are documented here: https://docs.gitlab.com/omnibus/settings/smtp.html#example-configuration
For example, the most basic one:
class { 'gitlab':
  external_url => 'http://gitlab.mydomain.tld',
  gitlab_rails => {
    'smtp_enable' => true,
  },
}
For Gmail:
class { 'gitlab':
  external_url => 'http://gitlab.mydomain.tld',
  gitlab_rails => {
    'smtp_enable'               => true,
    'smtp_address'              => "smtp.gmail.com",
    'smtp_port'                 => 587,
    'smtp_user_name'            => "my.email@gmail.com",
    'smtp_password'             => "my-gmail-password",
    'smtp_domain'               => "smtp.gmail.com",
    'smtp_authentication'       => "login",
    'smtp_enable_starttls_auto' => true,
    'smtp_tls'                  => false,
    'smtp_openssl_verify_mode'  => 'peer',
  },
}

How to provision rds instance inside vpc via puppet

I'm trying to provision an RDS instance inside an existing VPC using the puppetlabs/aws module. I'm able to provision an RDS instance in non-VPC mode using the following resource declaration:
rds_instance { $instance_name:
  ensure               => present,
  allocated_storage    => $rds::params::allocated_storage,
  db_instance_class    => $rds::params::db_instance_class,
  db_name              => $db_name,
  engine               => $db_data['engine'],
  license_model        => $db_data['license_model'],
  db_security_groups   => $db_security_groups,
  master_username      => $master_username,
  master_user_password => $master_user_password,
  region               => $region,
  skip_final_snapshot  => $rds::params::skip_final_snapshot,
  storage_type         => $rds::params::storage_type,
}
However, when I try to add an additional attribute called db_subnet, the following error happens when running puppet apply:
Error: Could not set 'present' on ensure: unexpected value at params[:subnet_group_name]
I'm aware this error comes from the aws-sdk rather than from the Puppet module itself.
If I'm correct, I need to pass a subnet group name for the db_subnet attribute, and I've done that, but it results in the issue above. Any idea what I'm doing wrong?
Thanks in advance
