Where can I find Chef's core resources?

I'm very new to Chef, and the version I'm using is:
[root@localhost ~]# chef --version
Chef Development Kit Version: 1.3.43
chef-client version: 12.19.36
delivery version: master (dd319aa632c2f550c92a2172b9d1226478fea997)
berks version: 5.6.4
kitchen version: 1.16.0
When I create a cookbook using
knife cookbook create COOKBOOK  # I'm able to see a libraries directory
whereas if I create a cookbook using
chef generate cookbook COOKBOOK  # I'm unable to find a libraries directory
Now to my question: I want to create a custom resource; where do I have to store it?
Can I store it in Chef's core resources directory? If so, how do I find out where that directory is?
Thanks

Custom resources go in the resources/ folder of your cookbook. You cannot edit the core resources, as those live in the Chef code base, not in your cookbook.
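Note that the ChefDK generator you're already using can scaffold the file for you; run it from inside the cookbook directory (the resource name here is just an example):

chef generate resource app_service  # creates resources/app_service.rb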

Generally, custom resources live under the resources directory.
However, if you need to write helpers, the libraries directory is still used for those.
Chef docs for custom resources: https://docs.chef.io/custom_resources.html
Further in-depth information can be found here: https://docs.chef.io/dsl_custom_resource.html
An example of custom resource usage, complete with integration tests, can be found here: https://github.com/sous-chefs/samba
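For illustration, a minimal custom resource sketch is below. The resource name, the property, and the package/service it manages are all hypothetical; see the docs above for the full DSL.

# resources/app_service.rb - installs a package and keeps its service running
resource_name :app_service

property :app_name, String, name_property: true

action :install do
  package new_resource.app_name

  service new_resource.app_name do
    action [:enable, :start]
  end
end

A recipe in the same cookbook (or in any cookbook that depends on it) could then simply call app_service 'nginx'.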

Related

Airflow 2.0 Docker setup

I've recently been trying to learn Airflow, but the majority of online resources depended on this repo, https://github.com/puckel/docker-airflow, which unfortunately has been removed.
I am not familiar with Docker, so I'm just trying to set it up locally and play around with Airflow. I'm on a Windows setup and have already gotten Docker working on my computer. Does Airflow have a quick-set-up file for docker-compose? Or are there any other resources I can look at? Thanks.
It's a duplicate question.
Use the official docker-compose.yml; see here.
I recently added a quick start guide to the official Apache Airflow documentation. Unfortunately, this guide has not been released yet; it will be released in Airflow 2.0.1.
For now, you can use the development version, and when a stable version is released it will be very easy for you to migrate. I don't expect any major changes to our docker-compose.yaml files.
http://apache-airflow-docs.s3-website.eu-central-1.amazonaws.com/docs/apache-airflow/latest/start/docker.html
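In practice, the quick start in that guide boils down to a few commands once you have downloaded its docker-compose.yaml; the exact steps may shift slightly between releases, so treat this as a sketch:

# from the directory containing the downloaded docker-compose.yaml
docker-compose up airflow-init   # one-time metadata database initialization
docker-compose up                # start the webserver, scheduler, and workers

After the containers are up, the Airflow web UI should be reachable on localhost (port 8080 by default in that compose file).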

Building out custom images (not just pulling a prebuilt image) in a cloud-agnostic manner

I need to be able to build my own images using {some_tool} alongside Terraform. I had been looking into using Packer for this, but it seems to me that it just pulls a prebuilt AMI and configures it.
Basically, I need to build a Windows or Linux OS image that I can build and then deploy with Terraform on any cloud (AWS, VMware, OCI, Google, wherever).
I'm looking for a tool to use this way. Also, I'm not sure how Packer is necessary alongside Terraform, since it seems to me that Terraform has the same exact built-in functionality.
Thanks all :)
HashiCorp's Packer is the perfect tool for this. We build various machine images and deploy them to AWS. Basically, Packer boots an instance (using the provided base image) in the selected provider, installs the dependencies/requirements as described in your provisioner, and creates the final image out of it.
So to start the instance, it needs a base image to begin with. The issue that you mentioned is not an issue at all; it is the way the tool works. Hope it helps.
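To make that concrete, a minimal Packer template in HCL might look like the following. The region, base-image filter, and provisioner commands are placeholders, not a recommendation:

# example.pkr.hcl - bakes a custom AMI from an Ubuntu base image
locals {
  timestamp = regex_replace(timestamp(), "[- TZ:]", "")
}

source "amazon-ebs" "example" {
  region        = "us-east-1"
  instance_type = "t3.micro"
  ami_name      = "my-custom-ami-${local.timestamp}"
  ssh_username  = "ubuntu"

  source_ami_filter {
    owners      = ["099720109477"] # Canonical
    most_recent = true
    filters = {
      name                = "ubuntu/images/*ubuntu-focal-20.04-amd64-server-*"
      root-device-type    = "ebs"
      virtualization-type = "hvm"
    }
  }
}

build {
  sources = ["source.amazon-ebs.example"]

  provisioner "shell" {
    inline = ["sudo apt-get update", "sudo apt-get install -y nginx"]
  }
}

Running packer build . boots a temporary instance from the base AMI, runs the provisioner, and registers the result as a new AMI whose ID you can then feed into Terraform.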
it seems to me that it just pulls a prebuilt AMI and configures it.
You can build AMIs from scratch with the amazon-ebssurrogate or amazon-chroot builders, or use any of the local builders and the amazon-import post-processor, but all these options require a lot of understanding of the prerequisites of running the OS on AWS and of how to install it automatically from scratch.
Basically, I need to build a Windows or Linux OS image that I can build and then deploy with Terraform on any cloud (AWS, VMware, OCI, Google, wherever).
There is no such thing as a cloud-agnostic image. Each cloud requires the correct kernel options, drivers, and tools installed to operate optimally, or even at all.
I'm not sure how Packer is necessary alongside Terraform, since it seems to me that Terraform has the same exact built-in functionality.
A big difference is that Terraform doesn't handle the lifecycle of creating an AMI. Terraform is not a good tool for creating images from source code; that isn't what it was built for. HashiCorp created these two tools to complement each other.

Bringing other Chef cookbooks into a custom cookbook

I'm in the process of learning Chef so I can deploy projects built with Python.
I have my own Cookbook where I am writing my own custom recipes. I've also downloaded the poise-python cookbook. Both sit in the same "cookbooks" path in my app.
What I am trying to figure out is how do I include the methods from poise-python so I can use them in my custom cookbook?
Thanks,
RB
You need to define your dependency in your metadata.rb file for your cookbook. Like this:
depends 'poise-python'
For this particular dependency, this is enough to use the custom resources it provides. You should review any dependency's README.md for guidance on using it; you can find poise-python's here. You should also review its dependencies to be sure you have all of them available (uploaded to your Chef server, or in the cookbooks directory for Chef Solo).
Familiarizing yourself with Policyfiles is recommended for dealing with dependencies at a greater scale.
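Once the dependency is declared, the resources that poise-python ships can be called directly from your recipes. A minimal sketch, assuming the defaults from its README (the package name is just an example):

# recipes/default.rb
python_runtime '3'            # install a Python 3 runtime

python_package 'requests' do  # install a pip package into that runtime
  python '3'
end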

Hazelcast 3.8 module and configuration possibility for wildfly 10.1?

I'd like to prototype a JEE environment with Wildfly 10.1 and Hazelcast 3.8. Until now I only have experience with the ancient JBoss 4.2.3.GA.
I already found an existing resource adapter implementation based on the older Hazelcast 3.6 under https://github.com/hazelcast/hazelcast-ra. Unfortunately, I couldn't deploy it as-is on Wildfly 10.1, since IronJacamar complained about missing equals/hashCode methods (which isn't true, since they are explicitly overridden in the source code; deploying a self-built snapshot of git master had the same issue).
I ended up migrating the ra.xml configuration code to proper javax.resource.spi annotations (@Connector, @ConfigProperty, @ConnectionDefinition) and adding a javax.resource.Referenceable interface implementation (I don't know whether this is necessary). The step to Hazelcast 3.8 was much easier: just adding missing interface methods to HazelcastConnectionImpl.
I still struggle with deployment/configuration, so here are my questions:
How should the deployment structure for a JCA adapter look? I tested the following approaches:
All-in-one: RAR file containing all of cache-api-1.0.0.jar, hazelcast-3.8.jar, hazelcast-client-3.8.jar, my-hazelcast-ra-impl.jar, and deployment descriptors.
By-Library: Added new modules javax.cache.api (cache-api-1.0.0.jar) and com.hazelcast.hazelcast (hazelcast-3.8.jar, hazelcast-client-3.8.jar) to ${WILDFLY_HOME}/modules/ and declared appropriate module dependencies in jboss-deployment-structure.xml (see the sketch after this list). RAR file contains my-hazelcast-ra-impl.jar, hazelcast.xml, and deployment descriptors.
By-Adapter: Added a new module my.hazelcast.ra (cache-api-1.0.0.jar, my-hazelcast-ra-impl.jar) to ${WILDFLY_HOME}/modules/ and declared the appropriate module dependency in jboss-deployment-structure.xml. RAR file contains hazelcast-3.8.jar, hazelcast-client-3.8.jar, hazelcast.xml, and deployment descriptors.
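For reference, the jboss-deployment-structure.xml for the By-Library variant might look roughly like this (module names as described above; treat it as a sketch, not a verified descriptor):

<jboss-deployment-structure>
  <deployment>
    <dependencies>
      <module name="javax.cache.api"/>
      <module name="com.hazelcast.hazelcast"/>
    </dependencies>
  </deployment>
</jboss-deployment-structure>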
Where is the proper place to deploy a hazelcast.xml configuration file in Wildfly 10.1? It seems that I need to pack it next to the ResourceAdapterImpl class (in my-hazelcast-ra-impl.jar) so that the class loader finds it and prefers it over hazelcast-default.xml. It contains only global configuration options like group/network, and no cache definitions, since caches should be configured/created on demand via CDI.
Is there something like a conf folder where I can deploy the hazelcast.xml file separately from the binary RAR contents? It would be nice if it could be hot-deployed (for prototyping), but that is not mandatory.
Should it be placed somehow inside subsystem configurations within standalone.xml? I found similar cache-container configurations for the infinispan subsystem but don't know how to adapt this to Hazelcast (since it's not a subsystem of its own).
In the Wildfly Management web interface I can find the deployed RAR under Deployments and in the JNDI view, but it is not listed under Configuration -> Subsystems -> Resource Adapters. I can create a new entry there but don't see any advantage. What is the meaning of this configuration option?
Thank you in advance

How would I host deb packages?

I'm currently working on a GitHub project, written in Java, mainly focused on Windows users. Install4j allows for easy .deb/.rpm etc. package conversion...
We could just distribute the .deb on the download side, but when looking at GitLab a while ago, I saw that GitLab uses packagecloud.io as a hosting service for their packages (using their own domain), so they can be updated using apt-get.
My question is whether there is a free service working just like packagecloud.io (not Launchpad or similar, with Bazaar and all that advanced stuff) which can be hosted either on our own server or on a public server. Or whether there is even a downloadable version of packagecloud.io which we could use on our own server.
You can configure Travis CI to run extra commands when the build succeeds. You can put in some conditions so that the deploy stage will only run if the commit happens to have a tag name. See the deployment documentation to get going.
A number of providers are officially supported, among them packagecloud.io.
You might find the dpl utility useful, as it assists with writing and testing deployment settings.
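A deploy section in .travis.yml along these lines would push tagged builds to packagecloud; the repository, username, and distribution values are placeholders, so check the Travis deployment docs for the exact options:

deploy:
  provider: packagecloud
  repository: myrepo            # placeholder: your packagecloud repo
  username: myuser              # placeholder: your packagecloud user
  token: $PACKAGECLOUD_TOKEN    # API token kept in a secure env variable
  dist: ubuntu/xenial           # target distribution for the .deb
  package_glob: "**/*.deb"
  on:
    tags: true                  # only deploy tagged commits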
Check out OpenRepo: https://github.com/openkilt/openrepo
I think this is what you're asking for. It is a package hosting server that can make packages available for both Debian (APT) and Red Hat (RPM) systems.
