Puppet resource ordering not working

I have a large manifest that sets up an OpenStack controller node that also has HAProxy, Galera, and RabbitMQ co-located on the same host. I am running into problems because the HAProxy service always seems to be the last thing to start. This is a problem because I am supposed to connect to the Galera DB cluster through HAProxy's VIP, so all the attempts by the various OpenStack services to set up their relational database tables fail. This would not be a problem if the HAProxy service started first. I have tried all kinds of things to force my Puppet manifest to apply my HAProxy profile before anything else. Here are a couple of things I have tried:
#Service <|name=='haproxy' or name=='keepalived'|> -> Exec <| |>
#Exec<| title == 'nova-db-sync' |> ~> Service<| title == $nova_title |>
#Service<| title == 'haproxy' |> ~> Exec<| title == 'nova-db-sync' |>
#Service<| title == 'haproxy' |> ~> Exec<| title == 'Exec[apic-ml2-db-manage --config-file /etc/neutron/neutron.conf upgrade head]' |>
Service['haproxy'] -> Exec['keystone-manage db_sync']
#Class ['wraphaproxy'] -> Class ['wrapgalera'] -> Class['wraprabbitmq'] -> Class['keystone']
# -> Class['wraprabbitmq'] -> Class['keystone']
#
# set up HAProxy
notify { "This host ${ipaddress} is in ${haproxy_ipaddress}": }
if ( $ipaddress in $haproxy_ipaddress ) {
  notify { "This host is an haproxy host: ${ipaddress} is in ${haproxy_ipaddress}": }
  require wraphaproxy
  # class { 'wraphaproxy':
  #   before => [
  #     Class['wrapgalera'],
  #     Class['wraprabbitmq'],
  #     Class['keystone'],
  #     Class['glance'],
  #     Class['cinder'],
  #     Class['neutron'],
  #     Class['nova'],
  #     Class['ceilometer'],
  #     Class['horizon'],
  #     Class['heat'],
  #   ]
  # }
}
The class wraphaproxy is the class that configures and starts the HAProxy service. It seems that no matter what I do, the OpenStack Puppet modules attempt their db syncs before the HAProxy service is ready.

OK. It turns out that I need to use Haproxy::Service['haproxy'] instead of just Service['haproxy']. So I have this in my code:
Haproxy::Service['haproxy'] -> Exec['keystone-manage db_sync']
Haproxy::Service['haproxy'] -> Exec['glance-manage db_sync']
Haproxy::Service['haproxy'] -> Exec['nova-db-sync']
Haproxy::Service['haproxy'] -> Exec['neutron-db-sync']
Haproxy::Service['haproxy'] -> Exec['heat-dbsync']
Haproxy::Service['haproxy'] -> Exec['ceilometer-dbsync']
Haproxy::Service['haproxy'] -> Exec['cinder-manage db_sync']
If someone out there knows of a better way, maybe using anchors or resource collectors, please reply.
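For what it's worth, the per-Exec chaining above can be collapsed with resource collectors. This is an untested sketch: the Exec titles are the ones from the question, and the class-tag form relies on Puppet auto-tagging every resource with the name of the class that declared it.

```puppet
# Collector form: order the HAProxy service before every Exec whose title
# matches one of the known db-sync commands. Adjust the titles to whatever
# your modules actually declare.
Haproxy::Service['haproxy'] -> Exec <| title == 'keystone-manage db_sync'
                                    or title == 'nova-db-sync'
                                    or title == 'neutron-db-sync'
                                    or title == 'heat-dbsync'
                                    or title == 'ceilometer-dbsync'
                                    or title == 'cinder-manage db_sync' |>

# Or at class granularity, since every resource is auto-tagged with the
# name of the class that declared it:
Haproxy::Service['haproxy'] -> Exec <| tag == 'keystone' or tag == 'nova' |>
```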


Spring Integration, Google Pubsub Auto ack not working

Following is a Spring IntegrationFlow I am trying to make work using Spring Integration 6 and Google PubSub 1.123. However, auto acknowledgment is not working, the reason being that routeToRecipients is a one-way message handler. Is routeToRecipients a one-way MessageHandler, or am I doing something wrong? Is there a way to pass the message back to the main flow?
public IntegrationFlow processEvent() {
    return IntegrationFlow.from(Function.class, gateway -> gateway.beanName("onMessage"))
            .transform(Transformers.fromJson(Alert.class))
            .enrichHeaders(h -> h
                    .headerFunction(POSTGRES, t -> POSTGRES.equalsIgnoreCase(store) || ALL.equalsIgnoreCase(store))
                    .headerFunction(BIGTABLE, t -> BIGTABLE.equalsIgnoreCase(store) || ALL.equalsIgnoreCase(store))
                    .headerFunction(NOSUPPORT,
                            t -> !BIGTABLE.equalsIgnoreCase(store) && !ALL.equalsIgnoreCase(store)
                                    && !POSTGRES.equalsIgnoreCase(store)))
            .log(LoggingHandler.Level.DEBUG, "Message Routed to DB store", t -> t.toString())
            .routeToRecipients(r -> r
                    // note: this selector checks BIGTABLE but routes to the
                    // postgres channel -- it probably should check POSTGRES
                    .recipientMessageSelectorFlow(m -> m.getHeaders().get(BIGTABLE, Boolean.class),
                            c -> c.channel(postgresRouteChannel()))
                    .recipientMessageSelectorFlow(m -> m.getHeaders().get(BIGTABLE, Boolean.class),
                            c -> c.channel(bigtableRouteChannel()))
                    .recipientMessageSelectorFlow(m -> m.getHeaders().get(NOSUPPORT, Boolean.class),
                            c -> c.channel(noSupportRouteChannel())))
            .get();
}
I need a way to handle the acknowledgement of PubSub messages.
Why is that a Function? What do you produce from your flow? It really looks like your logic is one-way distribution. Wouldn't a Consumer work for you instead?
That's probably indeed the reason why auto-ack is not happening: the Function waits for a reply, but you don't produce one. We may support that as well, but we need to understand what you are going to produce as a reply.
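To make the suggestion in these comments concrete, here is a sketch (not a verified fix) of registering the gateway from Consumer.class instead of Function.class, so the endpoint is one-way and no reply is awaited; the rest of the flow is assumed to stay as posted.

```java
// Hypothetical sketch: same flow, but started from a Consumer so the
// gateway does not wait for a reply and auto-ack can complete.
@Bean
public IntegrationFlow processEvent() {
    return IntegrationFlow.from(Consumer.class, gateway -> gateway.beanName("onMessage"))
            .transform(Transformers.fromJson(Alert.class))
            // ... header enrichment and routeToRecipients unchanged ...
            .get();
}
```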

Terraform - Azure - Virtual Network - Subnets output shuffle

We are using the Terraform CLI on Azure. We have code for each subscription which creates default storage accounts, Key Vault, a VNet with subnets, etc. Then we get the output of these services, which includes info about the VNet subnets. We had fewer than 10 subnets before, and we created VMs by referencing that subnet output. After adding more subnets to the same code, we observed that the output of the VNet subnets has been readjusted/shuffled, as shown below:
Changes to Outputs:
~ msdn_eastus_aks01 = "/subscriptions/xxxxxxxx-xxxxxxxx-xxx-xxxxxxxxxxxf/resourceGroups/msdn-eastus-test-subdefault/providers/Microsoft.Network/virtualNetworks/vnet-eastus-test-msdn/subnets/msdn_eastus_aks01" -> "/subscriptions/xxxxxxxx-xxxxxxxx-xxx-xxxxxxxxxxxf/resourceGroups/msdn-eastus-test-subdefault/providers/Microsoft.Network/virtualNetworks/vnet-eastus-test-msdn/subnets/msdn_eastus_webapp"
~ msdn_eastus_apimgt = "/subscriptions/xxxxxxxx-xxxxxxxx-xxx-xxxxxxxxxxxf/resourceGroups/msdn-eastus-test-subdefault/providers/Microsoft.Network/virtualNetworks/vnet-eastus-test-msdn/subnets/msdn_eastus_apimgt" -> "/subscriptions/xxxxxxxx-xxxxxxxx-xxx-xxxxxxxxxxxf/resourceGroups/msdn-eastus-test-subdefault/providers/Microsoft.Network/virtualNetworks/vnet-eastus-test-msdn/subnets/msdn_eastus_utility"
~ msdn_eastus_app = "/subscriptions/xxxxxxxx-xxxxxxxx-xxx-xxxxxxxxxxxf/resourceGroups/msdn-eastus-test-subdefault/providers/Microsoft.Network/virtualNetworks/vnet-eastus-test-msdn/subnets/msdn_eastus_app" -> "/subscriptions/xxxxxxxx-xxxxxxxx-xxx-xxxxxxxxxxxf/resourceGroups/msdn-eastus-test-subdefault/providers/Microsoft.Network/virtualNetworks/vnet-eastus-test-msdn/subnets/msdn_eastus_int_web"
~ msdn_eastus_appgw = "/subscriptions/xxxxxxxx-xxxxxxxx-xxx-xxxxxxxxxxxf/resourceGroups/msdn-eastus-test-subdefault/providers/Microsoft.Network/virtualNetworks/vnet-eastus-test-msdn/subnets/msdn_eastus_appgw" -> "/subscriptions/xxxxxxxx-xxxxxxxx-xxx-xxxxxxxxxxxf/resourceGroups/msdn-eastus-test-subdefault/providers/Microsoft.Network/virtualNetworks/vnet-eastus-test-msdn/subnets/msdn_eastus_apimgt"
~ msdn_eastus_db = "/subscriptions/xxxxxxxx-xxxxxxxx-xxx-xxxxxxxxxxxf/resourceGroups/msdn-eastus-test-subdefault/providers/Microsoft.Network/virtualNetworks/vnet-eastus-test-msdn/subnets/msdn_eastus_db" -> "/subscriptions/xxxxxxxx-xxxxxxxx-xxx-xxxxxxxxxxxf/resourceGroups/msdn-eastus-test-subdefault/providers/Microsoft.Network/virtualNetworks/vnet-eastus-test-msdn/subnets/msdn_eastus_app"
~ msdn_eastus_int_web = "/subscriptions/xxxxxxxx-xxxxxxxx-xxx-xxxxxxxxxxxf/resourceGroups/msdn-eastus-test-subdefault/providers/Microsoft.Network/virtualNetworks/vnet-eastus-test-msdn/subnets/msdn_eastus_int_web" -> (known after apply)
~ msdn_eastus_utility = "/subscriptions/xxxxxxxx-xxxxxxxx-xxx-xxxxxxxxxxxf/resourceGroups/msdn-eastus-test-subdefault/providers/Microsoft.Network/virtualNetworks/vnet-eastus-test-msdn/subnets/msdn_eastus_utility" -> "/subscriptions/xxxxxxxx-xxxxxxxx-xxx-xxxxxxxxxxxf/resourceGroups/msdn-eastus-test-subdefault/providers/Microsoft.Network/virtualNetworks/vnet-eastus-test-msdn/subnets/msdn_eastus_db"
~ msdn_eastus_webapp = "/subscriptions/xxxxxxxx-xxxxxxxx-xxx-xxxxxxxxxxxf/resourceGroups/msdn-eastus-test-subdefault/providers/Microsoft.Network/virtualNetworks/vnet-eastus-test-msdn/subnets/msdn_eastus_webapp" -> "/subscriptions/xxxxxxxx-xxxxxxxx-xxx-xxxxxxxxxxxf/resourceGroups/msdn-eastus-test-subdefault/providers/Microsoft.Network/virtualNetworks/vnet-eastus-test-msdn/subnets/msdn_eastus_appgw"
So if we rerun the code for the same VMs, the referenced subnet has changed, and the same VM gets a new IP from a different subnet, as shown above.
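The usual cause of this kind of shuffle is subnets created with count (or indexed by position in a list), where inserting a subnet renumbers everything after it. A sketch of the for_each alternative, which keys each subnet's state address by name so additions don't move existing subnets; the resource names and the variable here are illustrative, not your actual code:

```hcl
variable "subnets" {
  type = map(string) # subnet name -> address prefix
}

resource "azurerm_subnet" "this" {
  for_each             = var.subnets
  name                 = each.key
  resource_group_name  = azurerm_resource_group.this.name
  virtual_network_name = azurerm_virtual_network.this.name
  address_prefixes     = [each.value]
}

output "subnet_ids" {
  # a stable map keyed by subnet name, safe to reference from the VM code
  value = { for k, s in azurerm_subnet.this : k => s.id }
}
```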

actionFinally not running cmd invoked inside handler

I have the following code fragment:
withContainer :: String -> (String -> Action a) -> Action a
cid <- cmd "docker" [ "run", "--rm", "-d", my_image ]
...
actionFinally (action containerName) $ do
cmd_ "docker" [ "rm", "-f", filter (/= '\n') cid ]
which is supposed to kill the container whether or not the action succeeds. However, I noticed containers are still left up and running when the action fails, which is annoying. What am I doing wrong?
This sample of code looks correct, and testing variations of it in isolation does indeed work. A bug has been raised to figure out the precise reason it doesn't work in your case at https://github.com/ndmitchell/shake/issues/731.
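As a sanity check on the pattern itself, here is a standalone sketch (plain IO exceptions, not Shake) of the guarantee actionFinally is meant to provide: the cleanup attached to the action runs even when the action throws.

```haskell
import Control.Exception (SomeException, finally, throwIO, try)
import Data.IORef (newIORef, readIORef, writeIORef)

main :: IO ()
main = do
  cleaned <- newIORef False
  -- the "action" fails, but the cleanup attached with `finally` still runs
  _ <- try (throwIO (userError "action failed") `finally` writeIORef cleaned True)
         :: IO (Either SomeException ())
  readIORef cleaned >>= print  -- prints True
```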

Is there a sbt-native-packager option equivalent to sbt-pack packMain?

I have a Scala application library with several main classes.
When running sbt stage, sbt correctly creates a bash script per main class, but with pre-defined names (derived from each class name).
I would like to control the name of each bash script and the JVM opts passed to it.
For example, given two main classes FooBar and BarFoo,
I get bin/foo-bar and bin/bar-foo respectively.
I would like to somehow pass a map like
mainClasses := Map(
  "newFooBar" -> "com.example.FooBar",
  "newBarFoo" -> "com.example.BarFoo"
)

mainClassesJVM := Map(
  "newFooBar" -> "-Xmx512m",
  "newBarFoo" -> "-Xmx2g"
)
I found an sbt plugin called sbt-pack that does what I'm trying to achieve but I was wondering if I could achieve the same only with the sbt-native-packager plugin.
Example using the sbt-pack plugin:
// [Optional] Specify mappings from program name -> main class (full package
// path). If no value is set, it will find main classes automatically.
packMain := Map(
  "newFooBar" -> "com.example.FooBar",
  "newBarFoo" -> "com.example.BarFoo"
)

// [Optional] JVM options of scripts (program name -> Seq(JVM option, ...))
packJvmOpts := Map(
  "newFooBar" -> Seq("-Xmx512m"),
  "newBarFoo" -> Seq("-Xmx2g")
)
Does anyone know if there is an option to achieve the above using only sbt-native-packager?
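I don't know of a direct packMain equivalent, but the launcher scripts sbt-native-packager generates are ordinary entries in Universal / mappings, so one workaround is to rename them there. This is a sketch, not a verified recipe: the old and new names below are illustrative, and it does not cover per-script JVM opts.

```scala
// Rename the generated launcher scripts in the staged/packaged layout.
Universal / mappings := (Universal / mappings).value.map {
  case (file, "bin/foo-bar") => (file, "bin/newFooBar")
  case (file, "bin/bar-foo") => (file, "bin/newBarFoo")
  case other                 => other
}
```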

Where is the breaking change?

I wrote a CRUD application to interface with JIRA. I ended up upgrading my Haskell environment, because cabal-dev doesn't solve everything. As a result, I've got some breakage, with this error any time I try to use any code that interfaces with JIRA.
Spike: HandshakeFailed (Error_Misc "user error (unexpected type received. expecting
handshake and got: Alert [(AlertLevel_Warning,UnrecognizedName)])")
After a little googling, I think this either has to do with tls or http-conduit which uses tls.
I'm currently using tls-1.1.2 and http-conduit-1.8.7.1
Previously I was using tls-0.9.11 and http-conduit >= 1.5 && < 1.7 (not sure exactly which; the old install is gone).
This is where I believe the break is happening:
manSettings :: ManagerSettings
manSettings = def { managerCheckCerts = \_ _ _ -> return CertificateUsageAccept }
this is what it used to look like
manSettings :: ManagerSettings
manSettings = def { managerCheckCerts = \ _ _ -> return CertificateUsageAccept }
Here's the code that uses it:
initialRequest :: forall (m :: * -> *). URI -> IO (Request m, Manager)
initialRequest uri = do
    -- let the server tell you what the request header should look like
    initReq <- parseUrl uri
    -- a Manager manages HTTP connections; we modify the settings to handle
    -- the SSL cert (see manSettings above)
    manager <- newManager manSettings
    return (modReq initReq, manager)
  where
    modReq initReq = applyBasicAuth username password initReq
Let me know if I've left something out. I'm not sure at this point what broke between then and now.
It's a good guess about the error source, but very unlikely: managerCheckCerts simply uses the certificate package to inspect certificates for validity. The error message you're seeing seems to be coming from tls itself and indicates a failure in the data transport. It's probably a good idea to file a bug report with tls, preferably first by narrowing down the issue to a single HTTPS call that fails (or even better, using tls alone and demonstrating the same failure).
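For the narrowing-down step, something like this minimal http-conduit call against the same host may be enough to reproduce the handshake failure in isolation; the URL is a placeholder for your JIRA instance.

```haskell
import Network.HTTP.Conduit (simpleHttp)

main :: IO ()
main = simpleHttp "https://your-jira-host/rest/api/2/serverInfo" >>= print
```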
