BDD with Cucumber to guide Chef development

I like Cucumber a lot, and I find it a very useful tool for solving problems with an outside-in approach, so I would like to use it as part of Chef projects too. I have successfully integrated it into the project I'm working on, but when it comes to writing the business goal of the features I have some doubts.
Who is the end user here?
Depending on the answer, the features will be more service-oriented or not. For example:
If the features face the architecture, I could write a MongoDB feature which describes that I need a MongoDB service up and running and that the application is linked to it (see the sketch below).
On the other hand, I could just write application features, forget about the infrastructure behind them, and assume that if the Cucumber tests pass for the application then the infrastructure is fine too. (I don't like this approach.)
Which of the two approaches is better? I like the first one most, but I'm just a newbie in these lands. Please share your considerations.
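For illustration, the first (infrastructure-facing) approach might produce a feature like this rough Gherkin sketch; the service name, port, and health-check wording are my own assumptions, not an established convention:

    Feature: MongoDB service
      In order to persist application data
      As the team operating the platform
      I want MongoDB up and accepting connections

      Scenario: MongoDB is running and reachable
        Given the "mongod" service is running
        When I connect to the database host on port 27017
        Then the connection should succeed

      Scenario: The application is linked to its database
        Given the application is provisioned
        Then the application's health check should report the database as connected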

Is Pulumi that magical when compared to using Azure .NET SDK?

I'm facing a dilemma here about which SE site to ask this question on, so please help me out if it should go somewhere else.
I've been looking into Infrastructure as Code solutions.
I didn't like Terraform too much. The lack of IntelliSense makes discoverability harder than programmers have come to expect.
I've been considering ARM templates. I like that the templates are made available as we create resources in the portal, but they seem far less readable and harder to maintain afterwards.
Then I found Pulumi and love their idea compared to Terraform. The way I see it, their approach is also declarative like the options above, but we can use proper programming languages to get the job done.
For loops alone are a must.
Cool, I like that! But since we like using C# (or other alternatives), why don't we use SDKs to manage our infrastructure as code?
Pulumi has compared itself with cloud SDKs, positioning its solution as much safer and advocating that, if we just use a cloud SDK ourselves, our solution won't be as reliable.
To what extent is this really true, I wonder?
Last year, I wrote some libraries that used Azure Service Bus queues/topics. There were several integration tests that would run in parallel, and I needed to isolate them by creating new queues/topics, using Microsoft.Azure.ServiceBus.Management.ManagementClient to do this.
It really didn't seem like I had to learn anything at all.
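Roughly, the pattern had the shape below. This is only a sketch, written here in TypeScript against the @azure/service-bus admin client rather than the .NET ManagementClient I actually used, and the connection-string variable and queue-naming scheme are made up for illustration:

    import { ServiceBusAdministrationClient } from "@azure/service-bus";

    // Admin client, analogous in spirit to the .NET ManagementClient.
    const admin = new ServiceBusAdministrationClient(process.env.SB_CONNECTION!);

    // Give each parallel test run its own queue, then clean up after it.
    async function withIsolatedQueue(test: (queue: string) => Promise<void>) {
      const queue = `it-${Date.now()}-${Math.random().toString(36).slice(2, 8)}`;
      await admin.createQueue(queue);
      try {
        await test(queue);
      } finally {
        await admin.deleteQueue(queue); // remove the queue even if the test fails
      }
    }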
Getting to the point now, and not dismissing Pulumi's innovation, which I think is great:
Will Pulumi really add that much benefit compared to using the Azure SDKs?
What's been your experience with it?
I'm a Pulumi developer, so I'm definitely biased. I suspect the SO community may find that your question violates some of the guidelines, but I hope my answer survives :)
One upside of using Pulumi is that you get access to multiple providers with consistent developer experience. You may be using exclusively Azure, but you might at some point start combining it with things like building and publishing Docker images, deploying Kubernetes applications, or Datadog dashboards. All can be done from the same program or solution.
Now, the biggest difference with imperative SDKs is the notion of desired-state configuration. A Pulumi program describes the graph of resources and dependencies between them (what), not the steps to provision them (how). When you have an environment that lives for months and years, there's a big difference between evolving a single definition with baby steps and applying incremental changes (Pulumi) and writing a bunch of update scripts/programs to bring each environment to the new state (SDK).
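As a minimal sketch of what "describing the what" looks like (TypeScript here, though the same model applies in C#; the resource names are made up):

    import * as resources from "@pulumi/azure-native/resources";
    import * as storage from "@pulumi/azure-native/storage";

    // Declare what should exist. Pulumi diffs this graph against the
    // stack's recorded state and works out the provisioning steps itself.
    const rg = new resources.ResourceGroup("app-rg");

    const account = new storage.StorageAccount("appstorage", {
        resourceGroupName: rg.name, // the dependency is captured automatically
        sku: { name: storage.SkuName.Standard_LRS },
        kind: storage.Kind.StorageV2,
    });

    export const accountName = account.name;

Re-running pulumi up after editing the program applies only the delta; with a raw SDK you would have to script each update step yourself.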
- How do you maintain multiple environments that are similar but still different (production vs staging vs test vs dev)?
- How do you make sure that the short-lived infra you create for nightly tests reflects the reality of production?
- What happens when an SDK program fails in the middle - can you safely run it again, or will it create duplicate resources or fail with another error?
- How do you get a simple overview of changes over time in git? Concurrency control? Change history?
All the things above are baked into Pulumi and require manual consideration with a cloud SDK.

How to create a Node.js program that runs the same on all supported platforms

I want to create a Node.js application that runs on Windows, Mac, and most Linux distributions. Is that easy? Are there any good examples of one? What do I need to take into account to do it? I understand the file-path separator is one important issue. Are there others?
I'd like to hear if anybody has actual experiences and "gotchas" they've encountered when creating a cross-platform Node.js application. Thanks.
I agree with estus (https://stackoverflow.com/users/3731501/estus); the question is a bit broad with regard to what functionality you'd like to have in your application.
With that said, it may very well be impossible to create any application that executes the same across all platforms, but you should be able to achieve near functional parity with a bit of understanding and effort.
The main issues you'll encounter are around file systems. The Node.js team has created a great guide on working with different filesystems, which is a good start for understanding some of the best practices and approaches to handling the differences and using the fs module on different platforms.
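For instance, a few of the usual hedges look like this sketch (the app folder name is made up):

    import * as path from "path";
    import * as os from "os";

    // Build paths from segments instead of hard-coding "/" or "\".
    const configPath = path.join(os.homedir(), ".myapp", "config.json");

    // Normalize line endings before comparing text ("\r\n" on Windows).
    const normalize = (s: string) => s.replace(/\r\n/g, "\n");

    // Branch explicitly where behavior genuinely differs per platform.
    if (process.platform === "win32") {
      // e.g. POSIX file modes don't apply; fs.chmod is mostly a no-op here
    }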
Whatever other intricacies and considerations around platform-dependent operations you have will inevitably be tied to what your application is trying to do. Once that's determined, you'll need to address the differences by reviewing whatever module you're using for the expected functionality and coding for the deviations. The documentation for the APIs in the Node.js core library is very good at exposing behavioral or functional differences across operating systems, so if you use those, you should at the very least know how the modules and their methods behave on the host systems. Hope that helps.

Choosing between JHipster and Spring boot plus angular separately

As a junior developer, I am struggling to decide which approach I should use for a prototype. Building two separate apps (Java Spring Boot and Angular) would let me learn many things from scratch. On the other hand, JHipster provides a skeleton and a lot of already-working components to copy and paste.
So what do you recommend for a junior? Should I jump into JHipster, or should I build everything myself in order to master the basics? Would it be possible to master the basics with a top-down approach (JHipster)?
Note: I understand that JHipster allows easy integration of other components, e.g. it provides easy Docker support. But still, I am not sure whether this approach is better for someone still on the learning curve.
It depends on how much time you have for the prototype and how much the result really matters.
Solution 1: if you have a lot of time to learn, you can start from scratch and try to build everything on your own. Then you can use JHipster, compare what you did with what it generates, and use JHipster as a model (as we try to follow best practices). You can take some of its code and integrate it into your project, but it's not certain that will work easily. And you will see that some parts are really hard to code yourself, as they impact your whole project (e.g. security).
Solution 2: use JHipster directly and focus on your use cases, using JHipster's generated code as an example. You will learn from a good base of code, and you have a good community on Stack Overflow and Gitter to help you.
As a JHipster team member, I would suggest solution 2, of course :-)

How to test Socket + API

I have an API written in PHP that works with Node.js sockets. I want to test this project with PHPUnit or Codeception. How do I do it? What is the best way? I haven't found any documentation.
PHPUnit is a unit testing framework; it doesn't really care what you are testing with it as long as it's independent blocks of PHP code, i.e., whether it's an API or not, PHPUnit doesn't care. Documentation and a getting-started guide are available here. Besides the official docs and tutorials there are gazillions of other great resources, like this one. Codeception has a dedicated section in its docs on how to test APIs.
If this is the beginning of your testing life, you will find it much easier to work through the general documentation first and then look at the more complex stuff, like testing APIs. Be prepared: it is likely to take some weeks before you get your head around both frameworks.

How can I still use DDD, TDD in BizTalk?

I just started getting into BizTalk at work and would love to keep using everything I've learned about DDD, TDD, etc. Is this even possible, or am I always going to have to use the Visio-like editors when creating things like pipelines and orchestrations?
You can certainly apply a lot of the concepts of TDD and DDD to BizTalk development.
You can design and develop around the concept of domain objects (although in BizTalk and integration development I often find interface objects or contract-first design to be a more useful way of thinking: what messages get passed around at my interfaces). And you can also follow the "build the simplest possible thing that will work" and "only build things that make tests pass" philosophies of TDD.
However, your question sounds like you are asking more about the code-centric sides of these design and development approaches.
Am I right that you would like to follow the test-driven development approach of first writing a unit test that exercises a requirement and fails, then writing a method that fulfils the requirement and makes the test pass - all within a traditional programming language like C#?
For that, unfortunately, the answer is no. The majority of BizTalk artifacts (pipelines, maps, orchestrations...) can only really be built using the Visual Studio BizTalk plugins. There are ways of viewing the underlying C# code, but one would never want to try to develop that code directly.
There are two tools, BizUnit and BizUnit Extensions, that give you some ability to control the execution of BizTalk applications and test them, but this really only gets you to the point of performing more controlled, more test-driven integration tests.
The shapes that you drag onto the Orchestration design surface will largely just do their thing as one opaque unit of execution. And Orchestrations, pipelines, maps etc... all these things are largely intended to be executed (and tested) within an entire BizTalk solution.
Good design practices (taking pointers from approaches like TDD) will lead to breaking BizTalk solutions into smaller, more modular and testable chunks, and there are ways of testing things like pipelines in isolation.
But the detailed specifics of TDD and DDD in code sadly don't translate.
For some related discussion that may be useful see this question:
Mocking WebService consumed by a Biztalk Request-Response port
If you often make use of pipelines and custom pipeline components in BizTalk, you might find my own PipelineTesting library useful. It allows you to use NUnit (or whatever other testing framework you prefer) to create automated tests for complete pipelines, specific pipeline components or even schemas (such as flat file schemas).
It's pretty useful if you use this kind of functionality, if I may say so myself (I make heavy use of it on my own projects).
You can find an introduction to the library here, and the full code on GitHub. There's also some more detailed documentation on its wiki.
I agree with the comments by CKarras. Many people have cited that as their reason for not liking the BizUnit framework. But take a look at BizUnit 3.0. It has an object model that allows you to write the entire test step in C#/VB instead of XML. BizUnitExtensions is being upgraded to the new object model as well.
The advantage of the XML-based system is that it is easier to generate test steps and there is no need to recompile when you update them. In my own Extensions library, I found the XmlPokeStep (inspired by NAnt) to be very useful: my team could update test-step XML on the fly. For example, let's say we had to call a web service that created a customer record and then check a database for that same record. If the web service returned a dynamically generated ID, we could update the test step for the next step on the fly (not in the same XML file, of course) and then use that to check the database.
From a coding perspective, the IntelliSense issue should be addressed now in BizUnit 3.0. The lack of an XSD did make things difficult in the past, and I'm hoping to get an XSD out that will aid IntelliSense. There were some snippets as well for an old version of BizUnit, but those haven't been updated; maybe if there's time I'll give that a go.
But coming back to the TDD issue: if you take some of the intent behind TDD - the specification- or behavior-driven element - then you can apply it to some extent to BizTalk development as well, because BizTalk is based heavily on contract-driven development. So you can specify your interfaces first, create stub orchestrations etc. to handle them, and then build out the core. You could write the BizUnit tests at that time. I wish there were tools that could automate this process, but right now there aren't.
Using frameworks such as the ESB Guidance can also help give you a base platform to work from so you can implement the major use cases through your system iteratively.
Just a few thoughts. Hope this helps. I think it's worth blogging about more extensively.
This is a good topic to discuss. Do ping me if you have any questions, or we can always discuss more over here.
Rgds
Benjy
You could use BizUnit to create and reuse generic test cases both in code and in Excel (for functional scenarios):
http://www.codeplex.com/bizunit
BizTalk Server 2009 is expected to have more IDE-integrated testability.
Cheers
Hemil.
BizUnit is really a pain to use because all the tests are written in XML instead of a programming language.
In our projects, we have "ported" parts of BizUnit to a plain old C# test framework. This allows us to use BizUnit's library of steps directly in C# NUnit/MSTest code, which makes the tests easier to write (using VS IntelliSense), more flexible, and, most importantly, easier to debug when a test fails. The main drawback of this approach is that we have forked from the main BizUnit source.
Another interesting option I would consider for future projects is BooUnit, a Boo wrapper on top of BizUnit. It has benefits similar to our BizUnit "port", but with the advantage of still building on BizUnit rather than forking it.
