I used the following commands to replace a previously deployed RDS instance with a manually configured RDS instance:
./terraform destroy -target aws_db_instance.my_db
./terraform import aws_db_instance.my_db my-rds-instance
(Had to destroy the old instance before I could use import.)
When I now run ./terraform plan, terraform wants to destroy and re-create the RDS db:
-/+ aws_db_instance.my_db (new resource required)
id: "my-rds-instance" => <computed> (forces new resource)
address: "my-rds-instance.path.rds.amazonaws.com" => <computed>
allocated_storage: "100" => "100"
allow_major_version_upgrade: "false" => "false"
apply_immediately: "false" => "false"
arn: "arn:aws:rds:eu-central-1:123456789123:db:my-rds-instance" => <computed>
auto_minor_version_upgrade: "false" => "false"
availability_zone: "eu-central-1b" => <computed>
backup_retention_period: "7" => "7"
backup_window: "09:46-10:16" => "09:46-10:16"
ca_cert_identifier: "rds-ca-2015" => <computed>
character_set_name: "" => <computed>
copy_tags_to_snapshot: "false" => "false"
db_subnet_group_name: "bintu-ct6" => "bintu-ct6"
endpoint: "my-rds-db-manually.path.rds.amazonaws.com:5432" => <computed>
engine: "postgres" => "postgres"
engine_version: "10.6" => "10.6"
final_snapshot_identifier: "" => "my-rds-DbFinal"
hosted_zone_id: "Z1RLNUO7B9Q6NB" => <computed>
identifier: "my-rds-db-manually" => "my-rds-db-manually"
identifier_prefix: "my-rds-db-" => <computed>
instance_class: "db.m5.large" => "db.m5.xlarge"
kms_key_id: "arn:aws:kms:eu-central-1:123456789123:key/d123d45d-b678-9123-a1e9-c456d40d7be7" => <computed>
license_model: "postgresql-license" => <computed>
maintenance_window: "wed:00:53-wed:01:23" => "mon:00:00-mon:03:00"
monitoring_interval: "60" => "60"
monitoring_role_arn: "arn:aws:iam::123456789123:role/myRdsMonitoring" => "arn:aws:iam::123456789123:role/myRdsMonitoring"
multi_az: "true" => "true"
name: "mydb" => "mydb"
option_group_name: "default:postgres-10" => <computed>
parameter_group_name: "rds-my-group" => "rds-my-group"
password: <sensitive> => <sensitive> (attribute changed)
port: "5432" => <computed>
publicly_accessible: "false" => "false"
replicas.#: "0" => <computed>
resource_id: "db-ABCDEFGHIJKLMNOPQRSTUVW12" => <computed>
skip_final_snapshot: "true" => "false"
status: "available" => <computed>
storage_encrypted: "true" => "false" (forces new resource)
storage_type: "gp2" => "gp2"
tags.%: "1" => "0"
tags.workload-type: "production" => ""
timezone: "" => <computed>
username: "user" => "user"
vpc_security_group_ids.#: "1" => "1"
vpc_security_group_ids.1234563899: "sg-011d2e33a4464eb65" => "sg-011d2e33a4464eb65"
I expected that the "import" command would add the manually created RDS instance to the config/state file, so it can be used without re-deploying a new RDS instance.
How can I prevent the destruction of the imported RDS instance when using terraform plan/apply?
Here is the resource config:
resource "aws_db_instance" "my_db" {
#identifier = "my-rds-db-manually"
identifier_prefix = "${var.db_instance_identifier_prefix}"
vpc_security_group_ids = ["${aws_security_group.my_db.id}"]
allocated_storage = "${var.db_allocated_storage}"
storage_type = "gp2"
engine = "postgres"
engine_version = "10.6"
instance_class = "${var.db_instance_type}"
monitoring_interval = "60"
monitoring_role_arn = "${aws_iam_role.my_rds_monitoring.arn}"
name = "${var.bintu_db_name}"
username = "${var.DB_USER}"
password = "${var.DB_PASS}"
allow_major_version_upgrade = false
apply_immediately = false
auto_minor_version_upgrade = false
backup_window = "${var.db_backup_window}"
maintenance_window = "${var.db_maintenance_window}"
db_subnet_group_name = "${aws_db_subnet_group.my_db.name}"
final_snapshot_identifier = "${var.db_final_snapshot_identifier}"
parameter_group_name = "${aws_db_parameter_group.my_db.name}"
multi_az = true
backup_retention_period = 7
lifecycle {
prevent_destroy = false
}
}
Note that prevent_destroy = false is set; otherwise the plan would fail outright.
As you probably noticed, import only populates the state; you have to figure out the configuration that matches the imported resource yourself.
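To see exactly what import recorded, terraform state show is helpful (assuming your Terraform version ships this subcommand), invoked like the other commands in your question:
./terraform state show aws_db_instance.my_db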
The provided output contains one important piece of information:
storage_encrypted: "true" => "false" (forces new resource)
This means that your code wants to set up an RDS instance with storage_encrypted = false (your config doesn't set the attribute, and it defaults to false), while the state, and reality, have it set to true. Change this in your code and your plan will be non-destructive.
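A minimal sketch of the change (only the new line is shown; all of your existing arguments stay as they are):
resource "aws_db_instance" "my_db" {
  # ... existing arguments unchanged ...
  storage_encrypted = true
}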
I haven't checked whether the rest of the diff matches. If it doesn't, the plan will tell you exactly which settings are contrary to the current state.
I want to be able to get notified when a server is down.
Puppet: sensu/sensu-puppet v5.9.0
Based on https://github.com/sensu/sensu-go/issues/1960, I tried this code without success.
Since there is a special static handler called "keepalive", I created a set handler "keepalive" and included my telegram handler (telegram_ops) in it.
BACKEND Code
class { 'sensu':
  api_host                     => 'sensu3.mydomain.com',
  password                     => '****',
  agent_password               => '****',
  agent_entity_config_password => '****',
  ssl_ca_source                => 'puppet:///modules/common/ssl/ca.crt',
}

include sensu::cli

class { 'sensu::backend':
  ssl_cert_source => 'puppet:///modules/common/ssl/my.crt',
  ssl_key_source  => 'puppet:///modules/common/ssl/my.key',
  config_hash     => {
    'deregistration-handler' => 'deregistration',
    'event-log-file'         => '/var/log/sensu/events.log',
  },
}

sensu_bonsai_asset { 'sensu/check-cpu-usage':
  ensure  => 'present',
  version => 'latest',
}

sensu_check { 'check-cpu':
  ensure         => 'present',
  labels         => { 'contacts' => 'ops' },
  handlers       => ['telegram_ops'],
  command        => 'check-cpu-usage -w 75 -c 85',
  interval       => 60,
  subscriptions  => 'linux',
  publish        => true,
  runtime_assets => ['sensu/check-cpu-usage'],
}

sensu_bonsai_asset { 'sensu/sensu-go-has-contact-filter':
  ensure  => 'present',
  version => '0.2.0',
}

sensu_filter { 'contact_ops':
  ensure         => 'present',
  action         => 'allow',
  runtime_assets => ['sensu/sensu-go-has-contact-filter'],
  expressions    => ['has_contact(event, "ops")'],
}

sensu_filter { 'first_occurrence':
  ensure      => 'present',
  action      => 'allow',
  expressions => ['event.check.occurrences == 1'],
}

sensu_bonsai_asset { 'Thor77/sensu-telegram-handler':
  ensure => 'present',
}

sensu_handler { 'telegram_ops':
  ensure         => 'present',
  type           => 'pipe',
  command        => 'sensu-telegram-handler --api-token **** --chatid -****',
  timeout        => 10,
  runtime_assets => ['Thor77/sensu-telegram-handler'],
  filters        => [
    'is_incident',
    'not_silenced',
    'contact_ops',
    'first_occurrence',
  ],
}

sensu_handler { 'keepalive':
  ensure   => 'present',
  type     => 'set',
  handlers => ['telegram_ops'],
}
AGENT Code (very simple):
class { 'sensu::agent':
  subscriptions => ['default', 'linux', $hostname, 'nuc'],
}
It does not work: if I suddenly shut down a server, nothing happens.
What is the proper way to do this?
Is any other approach possible?
A long time ago there was another solution: the sensu class had a client_keepalive parameter, but it is not available anymore.
Thanks.
I am creating a checkout session for a subscription and I sometimes have a coupon ID and sometimes not. I was wondering about the value to which I should set the $coupon_id variable when there is no coupon.
Should it be set to 'none' or to an empty string ''?
if (isset($ID)) { // condition assumed; the original snippet began mid-if
    $coupon_id = $ID;
} else {
    $coupon_id = ''; // or 'none'?
}
$session = \Stripe\Checkout\Session::create([
    'payment_method_types' => ['card'],
    'line_items' => [[
        'price' => $plan_id,
        'quantity' => 1,
    ]],
    'mode' => 'subscription',
    'discounts' => [[
        'coupon' => $coupon_id,
    ]],
    'success_url' => 'https://example.com/success',
    'cancel_url' => 'https://example.com/cancel',
]);
Are you allowing your users to enter the promotion code in the Checkout page? If so, set the allow_promotion_codes parameter when creating the Session.
Otherwise, omit the discounts parameter entirely if there's no coupon to apply.
https://stripe.com/docs/api/checkout/sessions/create#create_checkout_session-allow_promotion_codes
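If you go the promotion-code route, it's a one-line addition to your original call (a sketch; allow_promotion_codes is the parameter from the linked docs, everything else is taken from your question):
$session = \Stripe\Checkout\Session::create([
    'payment_method_types' => ['card'],
    'line_items' => [[
        'price' => $plan_id,
        'quantity' => 1,
    ]],
    'mode' => 'subscription',
    // Checkout will render its own promotion-code field:
    'allow_promotion_codes' => true,
    'success_url' => 'https://example.com/success',
    'cancel_url' => 'https://example.com/cancel',
]);
If instead you apply a known coupon server-side, build the parameters conditionally and only include discounts when you actually have one: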
$params = [
    'payment_method_types' => ['card'],
    'line_items' => [[
        'price' => $plan_id,
        'quantity' => 1,
    ]],
    'mode' => 'subscription',
    'success_url' => 'https://example.com/success',
    'cancel_url' => 'https://example.com/cancel',
];

if ($coupon_id) {
    $params['discounts'] = [[
        'coupon' => $coupon_id
    ]];
}

$session = \Stripe\Checkout\Session::create($params);
Hello, I have this configuration for a Logstash instance running on my computer:
input {
  exec {
    command  => "powershell -executionpolicy unrestricted -f scripts/windows/process.ps1 command logstash"
    interval => 30
    type     => "process_data"
    codec    => line
    tags     => [ "logstash" ]
  }
}

output {
  if "sometype-logs" in [tags] {
    elasticsearch {
      action             => "index"
      doc_as_upsert      => true
      index              => "sometype-logs-%{+YYYY.MM.dd}"
      hosts              => "localhost:9200"
      template_overwrite => true
    }
  } else {
    elasticsearch {
      action             => "index"
      doc_as_upsert      => true
      index              => "%{type}"
      hosts              => "localhost:9200"
      template_overwrite => true
    }
  }
}
When displaying the indexes, I see an index literally named "%type". Why is the index name "%type" and not "process_data"?
It's probably just a syntax issue. To reference a field of the event, you must use this syntax:
%{[somefield]}
(see the examples on the documentation page)
So, in your case, try this:
"%{[type]}"
in place of
"%{type}"
Oracle DB: 11.2.0.4
OJDBC version: ojdbc6.jar
JDK: openjdk 1.8
LogStash version: 6.3.2-1
I am receiving the following error in the Logstash error log: [ERROR][logstash.inputs.jdbc ] Unable to connect to database. Tried 1 times {:error_message=>"Java::JavaSql::SQLException: ORA-00604: error occurred at recursive SQL level 1\nORA-01882: timezone region not found\n"}
Logstash code:
input {
  jdbc {
    # jdbc_default_timezone => "Asia/Kolkata"
    jdbc_driver_library      => "/var/lib/logstash/OJDBC-Full/ojdbc6.jar"
    jdbc_driver_class        => "Java::oracle.jdbc.driver.OracleDriver"
    jdbc_connection_string   => "jdbc:oracle:thin:@xxxx:port:sid"
    jdbc_user                => "xxxx"
    jdbc_password            => "xxxx"
    jdbc_validate_connection => true
    statement                => "select count(*) from apps.icx_sessions icx join apps.fnd_user usr on usr.user_id=icx.user_id left join apps.fnd_responsibility resp on resp.responsibility_id=icx.responsibility_id where last_connect>sysdate-nvl(FND_PROFILE.VALUE('ICX_SESSION_TIMEOUT'),30)/60/24 and disabled_flag != 'Y' and pseudo_flag = 'N' and USER_NAME <> 'GUEST'"
    type                     => "xxx_RPT_DB_Session_query"
    schedule                 => "*/2 * * * *"
  }
}

filter {
}

output {
  file {
    path => "/var/log/logstash/sample-JDBC-%{+YYYY-MM-dd}.txt"
  }
  elasticsearch {
    hosts => ["xxxxxxxxx:7778"]
    index => "q_session"
  }
  http {
    format      => "json"
    http_method => "post"
    url         => "https://api.telegram.org/bot629711229:AAFDebywi4NDiSdqqHhmxTFlUH7cMUJwwvE/sendMessage"
    mapping     => {
      "chat_id"    => "xxxxx"
      "parse_mode" => "html"
      "text"       => "❗ Current Session Count 😱"
    }
  }
}
I had the same problem and solved it by adding a line to the Logstash jvm.options file:
-Duser.timezone="+01:00"
Of course, you have to replace +01:00 with your own timezone.
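On package installs the file typically lives at /etc/logstash/jvm.options (a common default; adjust the path for your setup). The addition would look like:
# /etc/logstash/jvm.options
# Pin the JVM default timezone so the Oracle JDBC driver stops raising ORA-01882
-Duser.timezone="+01:00"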
We are storing each of our data types in its own database in CouchDB. What format does the config file need in order to import data from multiple databases? Or do I need multiple config files, one for importing data from each database into an index? I will appreciate any help.
Thanks.
We use a single config file for multiple databases.
It's not perfect, but functional for now.
It currently looks like this:
input {
  couchdb_changes {
    sequence_path => "db1.seq"
    db            => "db1"
    host          => "xxx.xxx.xxx.xxx"
    username      => "xxx"
    password      => "xxx"
    add_field     => {
      "organization" => "db1"
    }
  }
  couchdb_changes {
    sequence_path => "db2.seq"
    db            => "db2"
    host          => "xxx.xxx.xxx.xxx"
    username      => "xxx"
    password      => "xxx"
    add_field     => {
      "organization" => "db2"
    }
  }
}

filter {
  mutate {
    remove_field => [ "_attachments" ]
  }
}

output {
  stdout { codec => rubydebug }
  elasticsearch {
    cluster     => "cluster0"
    host        => ["xxx.xxx.xxx.xxx", "xxx.xxx.xxx.xxx", "xxx.xxx.xxx.xxx"]
    protocol    => "http"
    index       => "%{[organization]}"
    document_id => "%{[@metadata][_id]}"
  }
}