Quick question: currently spring-integration-kafka doesn't appear to be part of the BOM for spring-integration, so when specifying dependencies in Gradle it's the odd one out in having to declare a version (when using the Spring dependency-management plugin).
Is there a BOM somewhere I should be importing? No big deal if not.
plugins {
    id 'org.springframework.boot' version '2.0.1.RELEASE'
    id 'io.spring.dependency-management' version '1.0.5.RELEASE'
}

dependencyManagement {
    imports {
        // Need something here for spring-integration-kafka?
    }
}

dependencies {
    // Spring
    compile "org.springframework.boot:spring-boot-starter-web"
    compile "org.springframework.boot:spring-boot-starter-data-jpa"
    compile "org.springframework.boot:spring-boot-starter-integration"
    compile "org.springframework.integration:spring-integration-jms"
    compile "org.springframework.integration:spring-integration-kafka:3.0.1.RELEASE"
}
No, it is not included in any existing BOM.
We are still undecided about whether to merge it into the Core project: https://jira.spring.io/browse/INT-3966.
However, that might be a good argument to include it in the Spring Boot dependency management. Feel free to raise an issue against Spring Boot.
One of the reasons might be that there is a strict compatibility matrix between Spring Integration Kafka, Spring Kafka, and Apache Kafka per se. See the table at the end of the page: https://projects.spring.io/spring-kafka/
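In the meantime, if you just want the version declared in one place, the io.spring.dependency-management plugin you are already applying can pin it even without a BOM. A minimal sketch (using the version from your snippet):

dependencyManagement {
    dependencies {
        // declare the version once here...
        dependency 'org.springframework.integration:spring-integration-kafka:3.0.1.RELEASE'
    }
}

dependencies {
    // ...and omit it at the declaration site, like the other modules
    compile 'org.springframework.integration:spring-integration-kafka'
}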
I am using cucumber-jvm to perform some functional tests in Kotlin.
I have the standard empty runner class:
@RunWith(Cucumber::class)
@CucumberOptions(features=[foo],
        glue=[bar],
        plugin=[baz],
        strict=true,
        monochrome=true)
class Whatever
The actual steps are defined in another class with the @ContextConfiguration Spring annotation.
This class also uses other Spring features like @Autowired or @Qualifier.
@ContextConfiguration(locations=["x/y/z/config.xml"])
class MyClass {
    ...
    @Before
    ...
    @Given("some feature file stuff")
    ...
    // etc
}
This all works fine in Cucumber version 4.2.0; however, upgrading to version 6.3.0 breaks things. After updating the imports to match the new Cucumber project layout, the tests now fail with this error:
io.cucumber.core.backend.CucumberBackendException: Please annotate a glue class with some context configuration.
It provides examples of what it means...
For example:
@CucumberContextConfiguration
@SpringBootTest(classes = TestConfig.class)
public class CucumberSpringConfiguration {}
Or:
@CucumberContextConfiguration
@ContextConfiguration( ... )
public class CucumberSpringConfiguration {}
It looks like it's telling me I can just add @CucumberContextConfiguration to MyClass.
But why?
I get the point of @CucumberContextConfiguration; it's explained well here. But why do I need it now with version 6 when version 4 got on fine without it? I can't see any feature that was deprecated and replaced by this.
Any help would be appreciated :)
The error matches exactly the one I was getting when running Cucumber tests with Spring Boot, so I am sharing my fix.
One probable reason is that Cucumber can't find the CucumberSpringConfiguration class in the glue path.
Solution 1:
Move the CucumberSpringConfiguration class inside the glue path (which in my case was inside the steps package).
Solution 2:
Add the CucumberSpringConfiguration package path to the glue path.
In my project structure, the CucumberSpringConfig class was under a configurations package, so I got the error when I tried to run the feature file from the command prompt (mvn clean test):
"Please annotate a glue class with some context configuration."
So I applied solution 2, i.e. added the configurations package to the glue path in my runner class annotation.
The CucumberSpringConfiguration class itself looked like the @CucumberContextConfiguration examples shown in the error message above.
Just an extra piece of info: to run the tests from the command prompt, the test-runner plugin (typically maven-surefire-plugin) needs to be included in pom.xml.
https://github.com/cucumber/cucumber-jvm/pull/1959 removed the context configuration auto-discovery. The author concluded that it hid user errors and that removing it would provide more clarity and reduce complexity. The PR also lists the scenarios where the context configuration auto-discovery used to apply.
Note that it was introduced after https://github.com/cucumber/cucumber-jvm/pull/1911, which you had mentioned.
Had the same error, but while running Cucumber tests from a jar with Gradle.
The solution was to add a rule to the jar task to merge all files named "META-INF/services/io.cucumber.core.backend.BackendProviderService" (there can be multiple of them across the different Cucumber libs: cucumber-java, cucumber-spring).
For Gradle it is:
// AppendingTransformer needs to be imported from the shadow plugin:
import com.github.jengelman.gradle.plugins.shadow.transformers.AppendingTransformer

shadowJar {
    ....
    transform(AppendingTransformer) {
        resource = 'META-INF/services/io.cucumber.core.backend.BackendProviderService'
    }
}
For Maven (inside the maven-shade-plugin configuration) something like this:
<transformers>
    <transformer implementation="org.apache.maven.plugins.shade.resource.AppendingTransformer">
        <resource>META-INF/services/io.cucumber.core.backend.BackendProviderService</resource>
    </transformer>
</transformers>
A bit more explanation can be found in this answer.
I'm trying to upgrade to Hazelcast 4.0 in our Spring Boot 2.2.1 application.
We use the #EnableHazelcastHttpSession annotation, which pulls in HazelcastHttpSessionConfiguration, which pulls in HazelcastIndexedSessionRepository from the spring-session-hazelcast jar.
However, this class won't compile because it imports Hazelcast's IMap which has moved to a different package in Hz 4.0.
Is there any way to fix this so that Spring Session works with Hazelcast 4?
I just copied the HazelcastIndexedSessionRepository into my own source code, changed the import from com.hazelcast.core.IMap to com.hazelcast.map.IMap, and swapped the sessionListenerId from String to UUID. If I keep it in the same package, then it loads my class instead of the one in the jar, and everything compiles and works fine.
Edit: We no longer get the SessionExpiredEvent, so something's not quite right, but manual testing shows us that our sessions do time out and force the user to log in again, even across multiple servers.
I found the cause of the error. The session repository is created by HazelcastHttpSessionConfiguration, which checks which version of Hazelcast is on the classpath; when hazelcast4 is true, it instantiates Hazelcast4IndexedSessionRepository, which doesn't use com.hazelcast.core.IMap, so you don't get the class-not-found exception.
Code from the HazelcastHttpSessionConfiguration class:
@Bean
public FindByIndexNameSessionRepository<?> sessionRepository() {
    return (FindByIndexNameSessionRepository)(hazelcast4 ? this.createHazelcast4IndexedSessionRepository() : this.createHazelcastIndexedSessionRepository());
}
Remove the usage of HazelcastIndexedSessionRepository and replace it with Hazelcast4IndexedSessionRepository, or remove the code and let Spring auto-configuration do the job via HazelcastHttpSessionConfiguration.
I have two libraries with the same classes defined in each one; however, they have some different contents (methods/constants).
For example:
Library 1:
package com.test.package;

class A {
    // only method signatures
    public void methodA() {
    }

    public void methodB() {
    }
}
Library 2:
package com.test.package;

class A {
    public void methodA() {
        // some logic that MUST be executed to provide backward compatibility
    }
}
My application uses Library 1 and Library 2 and runs on devices that have com.test.package.ClassA, but com.test.package.ClassA.methodB() will only exist in newer framework releases. That said, I need Library 1 to be used to compile my application and Library 2 to execute its different implementation of methodA().
I have tried to do this in Android Studio using .jar and .aar libraries format, but none of them worked for me.
Is it possible to set this configuration in an Android Studio project?
I am building both Library 1 and 2, and I cannot add methodB() in Library 2.
For a simple Java application, you can do this by unlinking the compile and runtime configurations. I set up an example repository here.
The idea is shown in this commit, but can be described as manually resetting the runtime configuration so that it doesn't include the contents of the compile configuration. After doing so, you can just include your runtime library variation in the runtime configuration.
The application's build.gradle becomes something like:
apply plugin: 'application'

mainClassName = 'my.package.MyAppClass'

configurations.runtime.extendsFrom = [] // Reset runtime configuration

dependencies {
    compile 'my.group:my.artifact:2.0' // Library 1, with the new method
    runtime 'my.group:my.artifact:1.0' // Library 2, without the method
}
For Android, this can be a little more complicated. The problem is that there's no runtime configuration for Android (because you don't execute it on your computer, unless you're using Robolectric or something similar).
I think there are a few workarounds you could use, but one initial suggestion would be to create a wrapper library that abstracts away the dependency on the other libraries. You can compile this wrapper library against the newest library version (Library 1, with the new method). You could then include the wrapper library in the Android app, setting it as a non-transitive dependency, and include the other library version:
dependencies {
    compile('my.group:my.wrapped.artifact:0.1') {
        // Don't include dependencies of the wrapper,
        // i.e., don't include version 2.0 of the lib.
        transitive = false
    }
    compile 'my.group:my.artifact:1.0'
}
This should work because, by setting the dependency as non-transitive, Gradle doesn't recursively include the dependencies of the wrapper library, so the version used to compile the wrapper isn't (in theory) included in the APK. You can therefore add the old version without causing a conflict.
An example is set up in the same repository, under the Android branch. First, two Java libraries are created. Then an Android library is created to wrap around the compile-time library. An example activity shows how the wrapper library uses the compile-time library. Finally, the latest commit shows how the app is configured to use the wrapper library (which compiles against the newest library) but forces the old library to be included at runtime instead.
Hope this helps =)
I don't understand the Gradle plugins block:
apply plugin: 'someplugin1'
apply plugin: 'maven'
and the other one:
plugins {
    id 'org.hidetake.ssh' version '1.1.2'
}
In the first block we have a plugin name; in the second one, a package and a version. I don't understand where I should use the first block and where the second one.
The plugins block is the newer method of applying plugins, and they must be available in the Gradle plugin repository. The apply approach is the older, yet more flexible method of adding a plugin to your build.
The new plugins method does not work in multi-project configurations (subprojects, allprojects), but will work on the build configuration for each child project.
I would think that as functionality progresses, the plugins configuration method will overtake the older approach, but at this point both can be and are used concurrently.
As already mentioned by @cjstehno, apply plugin: is the legacy method that you should avoid.
With the introduction of the plugins DSL, users should have little reason to use the legacy method of applying plugins. It is documented here in case a build author cannot use the plugins DSL due to restrictions in how it currently works.
With the new plugins block method, you can add a plugin and control when to apply it using an optional parameter apply:
plugins {
    id «plugin id» version «plugin version» [apply «false»]
}
You would still use the legacy method in situations where you want to apply an already added, but not yet applied, plugin from your plugins block. E.g., in the master project a plugin xyz is added but not applied, and it should be applied only in a subproject subPro:
plugins {
    id "xyz" version "1.0.0" apply false
}

subprojects { subproject ->
    if (subproject.name == "subPro") {
        apply plugin: 'xyz'
    }
}
Notice that you don't need the version anymore. The version is required in the plugins block unless you are using one of the Core Gradle plugins, such as java, scala, ...
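For example (the shadow plugin coordinates below are purely illustrative):

plugins {
    id 'java'                                             // core Gradle plugin: no version needed
    id 'com.github.johnrengelman.shadow' version '7.1.2' // community plugin: version required
}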
I spent some time understanding the difference while trying to create a Spring Boot application, and that's why I am answering this again after a while. The following example of using the Spring Boot plugin helped me a lot:
What should currently be used:
plugins {
    id "org.springframework.boot" version "2.0.1.RELEASE"
}
What had been used before Gradle 2.1:
buildscript {
    repositories {
        maven {
            url "https://plugins.gradle.org/m2/"
        }
    }
    dependencies {
        classpath "org.springframework.boot:spring-boot-gradle-plugin:2.0.1.RELEASE"
    }
}
apply plugin: "org.springframework.boot"
These are two different ways to use a Gradle plugin.
The apply plugin way: first resolve the plugin you need in the root build.gradle, like:
buildscript {
    repositories {
        // other repositories...
        mavenCentral()
    }
    dependencies {
        // other plugins...
        classpath 'com.google.dagger:hilt-android-gradle-plugin:2.44'
    }
}
Then in the build.gradle of your Android Gradle modules apply the plugin:
apply plugin: 'com.android.application'
apply plugin: 'com.google.dagger.hilt.android'
The plugins way: combine resolve and apply in your root build.gradle, like:
plugins {
    // other plugins...
    id 'com.google.dagger.hilt.android' version '2.44' apply false
}
Then in the build.gradle of your Android Gradle modules apply the plugin:
plugins {
    // other plugins...
    id 'com.android.application'
    id 'com.google.dagger.hilt.android'
}

android {
    // ...
}
Now (in Gradle 6) you can specify custom repositories for plugins without using the buildscript block.
In settings.gradle, we can add a pluginManagement block:
pluginManagement {
    repositories {
        maven {
            url '../maven-repo'
        }
        gradlePluginPortal()
        ivy {
            url '../ivy-repo'
        }
    }
}
Reference: https://docs.gradle.org/current/userguide/plugins.html#sec:custom_plugin_repositories
I would like to point out, though, that it is not required for a plugin to be published remotely to be able to use it!
It can also be an UNPUBLISHED, locally available plugin (be it a convention plugin or otherwise) just as well.
In case one wishes to refer to such an unpublished, locally available plugin, you'll have to include its so-called "build" within the desired component/build (identified via the settings.gradle(.kts) file) like so:
pluginManagement {
    includeBuild '<path-to-the-plugin-dir-containing-the-settings-file>'
}
After that is done, one may use the local plugin within the plugins {} DSL block via its plugin id.
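For example (with a hypothetical plugin id coming from the included build):

plugins {
    id 'my.convention.plugin' // resolved from the included build, no version needed
}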
If the plugin needs a version, it's safer to put the version number in the pluginManagement block in your settings.gradle file rather than in the plugins block.
By safer I mean that you won't encounter an error like "plugin request for plugin already on the classpath must not include a version". That can happen if you includeFlat a project into another project that uses the same plugin while your plugin versions are in the plugins block.
So rather than:
plugins {
    id 'pl.allegro.tech.build.axion-release' version '1.10.3'
}
Do:
plugins {
    id 'pl.allegro.tech.build.axion-release'
}
and then in your settings.gradle file:
pluginManagement {
    plugins {
        id 'pl.allegro.tech.build.axion-release' version '1.10.3'
    }
}
I'm going to add a little twist to what's been said. Gradle introduced the concept of a plugins block as a technique to speed up and optimize the build process. Here's what Gradle's documentation says:
This way of adding plugins to a project is much more than a more convenient syntax. The plugins DSL is processed in a way which allows Gradle to determine the plugins in use very early and very quickly. This allows Gradle to do smart things such as:
Optimize the loading and reuse of plugin classes.
Provide editors detailed information about the potential properties and values in the buildscript for editing assistance.
This requires that plugins be specified in a way that Gradle can easily and quickly extract, before executing the rest of the build script. It also requires that the definition of plugins to use be somewhat static.
It's not just a newer way of dealing with plugins; it's a way of improving the build process and/or the user's editing experience.
In order for it to work, the plugins block needs to be specified at the top of the build script, but after the buildscript block if one is included. Why is that? Because the code in build scripts is evaluated in the order it's written. The buildscript block must be evaluated before the plugins block is evaluated. Remember, the buildscript block is about setting up the plugin environment. Hence the rule that the plugins block must be specified after the buildscript block.
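Schematically:

// evaluated first: sets up the plugin environment (classpath, repositories)
buildscript {
    repositories { mavenCentral() }
    dependencies { /* classpath "..." entries */ }
}

// evaluated next: now Gradle can resolve and apply the plugins
plugins {
    id 'java'
}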
The new plugins block not only specifies the plugins the project is using, but also whether each plugin is applied. By default, all plugins in the plugins block are automatically applied, unless specifically declared otherwise (i.e., by adding apply false after the plugin declaration in the plugins block).
So why would you declare a plugin and not apply it? There are two main reasons I can think of:
1.) So you can declare the version of the plugin you want used. After you've declared a plugin, it is on the "classpath"; once a plugin is on the classpath, you no longer need to specify its version when you apply it later. In multi-project builds, that makes maintaining build scripts a little easier (i.e., you have only one place where the plugin version is specified).
2.) Sometimes you may have a plugin that requires certain things to be defined before it is applied. In that case, you can declare the plugin in the plugins block and defer applying it until after you define the things the plugin requires as input. For example, I have a custom plugin that looks for a configuration named "mavenResource". In the dependencies block I'll add a dependency like mavenResource(maven_coordinate). The plugin finds all the dependencies contained in the mavenResource configuration and copies the associated Maven artifacts to the project's src/main/resources directory. As you can see, I don't want to apply that plugin until after the mavenResource configuration has been added to the project and the mavenResource dependencies are defined. Hence, I declare my custom plugin in the plugins block and apply it after the project dependencies have been defined, as sketched below. So the idea that applying a plugin is old style and wrong is a misconception.
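A sketch of that pattern (the plugin id, configuration name, and coordinates here are illustrative, not the real ones):

plugins {
    // resolved and put on the classpath, but not applied yet
    id 'com.example.maven-resource' version '1.0.0' apply false
}

configurations {
    mavenResource // the configuration the plugin will look for
}

dependencies {
    // hypothetical artifact to be copied into src/main/resources
    mavenResource 'com.example:some-resource:1.2.3'
}

// apply only after the inputs the plugin needs have been defined
apply plugin: 'com.example.maven-resource'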
Some of you might wonder what it means to apply a plugin. It's pretty straightforward: it means that you call the plugin's apply function, passing it the Gradle Project object for the project where the plugin is being applied. What the plugin does from there on is totally at the discretion of the plugin. Most commonly, the apply function creates some Gradle tasks and adds them to the Gradle build task dependency graph. When Gradle starts its execution phase, those tasks get executed at the appropriate time in the build process. The plugin apply function can also do things like deferring some of its work until afterEvaluate; that's a way to allow other things in the build script to be set up even though they are defined later in the build script. So you might ask why I didn't do that trick in my custom plugin. What I've observed is that the next subproject starts processing after the root project finishes being evaluated. In my case, I needed the resource added before the next subproject began. So there was a race condition, which I avoided by not using the afterEvaluate technique and instead applying the plugin explicitly once the things I needed set up were completed.
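To make that concrete, here is a minimal sketch of a plugin class written inline in a build.gradle (the class and task names are made up); applying it just means its apply function gets called with the Project:

class MyPlugin implements Plugin<Project> {
    void apply(Project project) {
        // a typical apply(): register a task on the project
        project.tasks.register('hello') {
            doLast { println 'Hello from MyPlugin' }
        }
    }
}

apply plugin: MyPlugin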
I'm totally new to Gradle, TeamCity and Groovy.
I'm trying to write a plugin which will get the value from svninfo. If the developer wants to override the value (in build.gradle), they can override it something like this:
globalVariables {
    virtualRepo = "virtualRepo"
    baseName = "baseName"
    version = "version"
    group = "group"
}
Here I provide the extension called globalVariables.
Now, the jars to be produced should be named according to the values from build.gradle.
How do I get the values from build.gradle in the plugin in order to name the jar?
Not sure I understand the question. It's the plugin that installs the extension object, and it's the plugin that needs to do something with it.
Note that the plugin has to defer reading information from the extension object because the latter might only get populated after the plugin has run. (A plugin runs when apply plugin: is called in a build script. globalVariables {} might come after that.) There are several techniques for deferring configuration. In your particular case, if I wanted to use the information provided by the extension object to configure a Jar task, I might use jar.doFirst { ... } or gradle.projectsEvaluated { jar. ... }.
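To illustrate, a minimal sketch of such a plugin (the class names mirror the question; the deferral here uses afterEvaluate, one of the techniques mentioned above, and assumes the java plugin's jar task exists):

class GlobalVariables {
    String virtualRepo
    String baseName
    String version
    String group
}

class GlobalVariablesPlugin implements Plugin<Project> {
    void apply(Project project) {
        def vars = project.extensions.create('globalVariables', GlobalVariables)
        // defer reading the extension until the build script has populated it
        project.afterEvaluate {
            project.tasks.named('jar') {
                // name the jar from the extension values
                archiveBaseName = vars.baseName
            }
        }
    }
}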
Before you go about writing plugins, make sure to study the Writing Custom Plugins chapter in the Gradle user guide. A search on Stack Overflow or on http://forums.gradle.org should turn up more information on techniques for deferring configuration. Another valuable source of information is the Gradle codebase (e.g. the plugins in the code-quality subproject).