I just bought jOOQ Express. I can see where I can download a .zip file containing a bunch of .jar files, but I use Maven for my build. I also have GitHub Actions running builds, which normally download everything from Maven.
Do I have to dump all of these files into my source repo / Git? Or can I somehow reference my jOOQ Express artifacts via some Maven repository?
As of jOOQ 3.17, there are two main options:
You set up an artifact repository, deploy the jar files there, and make it available to your GitHub Actions builds
You check the jar files into some source repository (the same one, or a different one if you prefer not to keep large binaries in your main repository)
There's a pending feature request to offer a public repository for commercial jOOQ artifacts: https://github.com/jOOQ/jOOQ/issues/9906. Alternatively, these might be published to Maven Central, as various vendors have started doing (e.g. for JDBC drivers), despite the artifacts not being open source.
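If you go with the first option, the consumer side is just an ordinary Maven setup. A minimal sketch of the relevant pom.xml pieces (the repository URL is a placeholder, and the groupId/version shown here are assumptions — use whatever the poms bundled in the jOOQ download declare):

<!-- pom.xml (sketch): resolve the commercial jOOQ jars from your own repository -->
<repositories>
    <repository>
        <id>internal-releases</id>
        <url>https://repo.example.com/releases</url> <!-- placeholder: your Nexus/Artifactory/etc. -->
    </repository>
</repositories>

<dependencies>
    <dependency>
        <groupId>org.jooq.pro</groupId> <!-- assumption: match the bundled poms -->
        <artifactId>jooq</artifactId>
        <version>3.17.4</version>
    </dependency>
</dependencies>

The same repository then needs credentials in settings.xml (or in your GitHub Actions secrets) so that CI builds can resolve it.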
Related
I'm a newbie in Hybris. I want to add a Maven dependency in Hybris using external-dependencies.xml, but I can't see any of those jars appear. Is it possible to pull in jars using external-dependencies.xml? If yes, how?
The platform build is coupled to Ant, but you can use Maven dependency management (it is disabled by default because all necessary libraries are shipped with Hybris).
In order to activate dependency management you have to follow these steps:
1) Make sure you have Maven installed
2) Open the extensioninfo.xml of your extension
2.1) Include usemaven="true" in the extension tag (see the snippet after this list)
3) Manage your dependencies inside the "external-dependencies.xml" file (its content is a regular Maven pom.xml)
4) Build your project (ant all). Hybris fetches the required libraries into \lib and \web\webroot\WEB-INF\lib (bear in mind that there are two "external-dependencies.xml" files, one for the core module and one for the web module)
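A minimal sketch of both files (the extension name and the dependency are placeholders, shown only to illustrate the structure):

<!-- extensioninfo.xml (sketch): note the usemaven attribute on the extension tag -->
<extensioninfo>
    <extension name="myextension" usemaven="true">
        <!-- ... -->
    </extension>
</extensioninfo>

<!-- external-dependencies.xml (sketch): a regular Maven pom -->
<project xmlns="http://maven.apache.org/POM/4.0.0">
    <modelVersion>4.0.0</modelVersion>
    <groupId>de.hybris.platform</groupId>
    <artifactId>myextension</artifactId>
    <version>1.0</version>
    <dependencies>
        <dependency>
            <groupId>org.apache.commons</groupId>
            <artifactId>commons-lang3</artifactId>
            <version>3.12.0</version>
        </dependency>
    </dependencies>
</project>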
Besides, if you look at the Ant targets you will see there is one called "updateMavenDependencies". This task deletes all jars in the lib folder and replaces them with the defined Maven dependencies. In case you don't want Maven to manage a few libraries, you can handle this by creating a file called "unmanaged-dependencies.txt" in the root of your extension. In this file you list all libraries that Maven is not going to manage (and that the Ant target is therefore not going to delete).
My official answer: add usemaven="true" to the extension tag in your extensioninfo.xml.
I'm a newbie to Hybris too, but what I know is that whenever you need a dependency in a Hybris extension, you need to add the name of that dependency to hybris/config/localextensions.xml and to the extensioninfo.xml of the extension where you want to use it.
As for the Maven dependency, I'm not sure how to do that, because I mostly use the out-of-the-box build system, which is based on Ant.
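For reference, a rough sketch of a localextensions.xml entry (the extension name is a placeholder):

<!-- hybris/config/localextensions.xml (sketch) -->
<hybrisconfig>
    <extensions>
        <!-- ... -->
        <extension name="myextension"/>
    </extensions>
</hybrisconfig>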
We are maintaining a Java project that consumes a lot of different web services. Service definitions change regularly and new services are added very often, so we need to automate the generation of all the Java clients.
We have a batch script that downloads (curl) all the needed WSDLs and all the dependent schemas, then generates all the corresponding Java clients (wsimport), and finally builds a jar that includes all the clients plus all the WSDLs and XSDs. We deploy this jar to our Artifactory and use it in our project. We need to include the WSDLs and XSDs in the jar to avoid JAX-WS fetching the WSDLs at runtime.
The script has become a monster, because we consume very different web services. Every WSDL has its own schemas located at different URLs, so we have to identify all the files that need to be downloaded and place the XSDs at the correct paths on disk.
Our goal would be to have a script that, given a list of WSDL URLs, downloads all the WSDLs and dependent XSDs into a folder so that we can run wsimport against them.
SoapUI's "export definition" tool exports the WSDL and the dependent schemas into a folder, automatically rewriting the import paths in the WSDL. Is there any way to invoke this tool from the command line?
Is there any other tool that would help us improve this process?
I am looking to add Groovy support to an existing Java project so that I can seamlessly compile mixed Java and Groovy code using invokedynamic, giving me Java-like execution speed without wasting excessive amounts of time on verbose Java syntax.
After reading that the GMaven plugin no longer supports compilation, and that the Groovy Eclipse compiler plugin doesn't yet support invokedynamic, I asked myself: why would I want to continue using Maven if it compiles Groovy code that is needlessly slow?
Consequently, I decided to try scrapping Maven for Gradle, so that I could get faster code while also porting some Python deployment scripts to Gradle tasks and thus keep a single codebase.
I have some libraries stored in a simple password-protected S3 Maven repository (to avoid enterprise overkill like Artifactory). After doing some basic research, I have found that Gradle has no built-in support for custom dependency sources, as suggested by this Stack Overflow question and this support forums post.
I did manage to find an S3 plugin for Gradle, but it doesn't deal with dependency management.
If the whole point of Gradle is to be more flexible than Maven, and if the core purpose of a dependency management / build system is to effectively manage dependencies from a variety of sources, then the lack of support for custom repositories appears to be a fairly significant design flaw that makes any issues I have encountered with Maven thus far pale in comparison.
However, it is quite possible that I am missing something, and I have already invested several hours learning Gradle, so I figured I would see if there is some reasonable way to emulate dependency management for these S3 dependencies until the Gradle developers fix this issue. Otherwise I will have to conclude that I am better off just using Maven and tolerating slower Groovy code until the compiler plugin supports invokedynamic.
Basically I need a solution that does the following:
Downloads dependencies and transitive dependencies to the Gradle cache
Doesn't require me to hardcode the path to the Gradle cache, so that my build script stays platform independent.
Doesn't download the dependencies again if they are already in the cache.
Works with a multi-module project.
However, I cannot find anything in the documentation that would even give me a clue as to where to begin.
Gradle 2.4 has native support for S3 repositories, both for downloading dependencies and for publishing artifacts.
To download with IAM credentials (paraphrased from the link above):
repositories {
    maven {
        url "s3://someS3Bucket/path/to/repo/root"
        credentials(AwsCredentials) {
            accessKey 'access key'
            secretKey 'secret key'
        }
    }
}
Then specify your dependencies as usual.
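Publishing to the same bucket works along the same lines; here is a minimal sketch using the maven-publish plugin (bucket path and credentials are placeholders, as above):

apply plugin: 'java'
apply plugin: 'maven-publish'

publishing {
    publications {
        // publish this project's jar
        mavenJava(MavenPublication) {
            from components.java
        }
    }
    repositories {
        maven {
            url "s3://someS3Bucket/path/to/repo/root"
            credentials(AwsCredentials) {
                accessKey 'access key'
                secretKey 'secret key'
            }
        }
    }
}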
You don't need any custom repository support to make this work. Just declare a maven repository with the correct URL. If the repository works when used from Maven, it will also work with Gradle. (Uploading may be a different matter.)
You can also access S3 over plain HTTP(S):
repositories {
    mavenCentral()
    ivy {
        url "https://s3-eu-west-1.amazonaws.com/my-bucket"
        layout "pattern", {
            artifact "[artifact]-[revision].[ext]"
            m2compatible = true
        }
    }
}
Name the jar in S3 as name-revision.jar (e.g. joda-time-3.2.jar) in my-bucket.
Also upload a pom file.
And in S3, grant everyone permission to download the jar and the pom.
In a large web application, I'm using RequireJS AMD modules so that the scripts themselves are modular and maintainable. I have the following directory structure:
web
|-src
  |-main
    |-java
    |-resources
    |-webapp
      |-static
        |-scripts
        |-styles
        |-images
      |-static-built   // output from r.js, not checked into git
      |-WEB-INF
During the build, JS and CSS are optimized by r.js into the static-built folder. Gradle is the build tool.
Now the problem: the JSPs refer to the scripts in the static/scripts folder, and this is how I want it when working locally. However, when building the war, I want the static files to be served from the static-built folder. The important thing is that the source JSPs should not have to change in order to serve the optimized files from static-built.
Two options I have are: a) the Gradle build, while making the war, includes static-built instead of static; b) include static-built in addition to static and use Tuckey UrlRewriteFilter to pick the resource from static-built rather than static.
What best practices is the community following in similar scenarios?
We've set up the server with a runtime profile (dev, qa, prod, etc.) read from a system property, which determines some settings. When running with the production profile we serve the optimized files from the WAR. In development we serve the non-minified, non-concatenated files directly from the filesystem, outside the application context.
Files are structured according to the official multipage example.
How files are served depends on your chosen backend solution; here's an example for Spring.
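A rough sketch of that idea with Spring MVC Java config (the profile property name, paths and class name are placeholders, not taken from the original setup):

import org.springframework.beans.factory.annotation.Value;
import org.springframework.context.annotation.Configuration;
import org.springframework.web.servlet.config.annotation.EnableWebMvc;
import org.springframework.web.servlet.config.annotation.ResourceHandlerRegistry;
import org.springframework.web.servlet.config.annotation.WebMvcConfigurerAdapter;

// Chooses where /static/** is served from, based on a runtime profile system property
@Configuration
@EnableWebMvc
public class StaticResourceConfig extends WebMvcConfigurerAdapter {

    // e.g. the server is started with -Dapp.profile=prod
    @Value("#{systemProperties['app.profile'] ?: 'dev'}")
    private String profile;

    @Override
    public void addResourceHandlers(ResourceHandlerRegistry registry) {
        if ("prod".equals(profile)) {
            // production: serve the r.js-optimized files packaged inside the war
            registry.addResourceHandler("/static/**").addResourceLocations("/static-built/");
        } else {
            // development: serve the raw sources straight from the filesystem
            registry.addResourceHandler("/static/**").addResourceLocations("file:src/main/webapp/static/");
        }
    }
}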
Alternatively, r.js can generate source maps and those will help with development as well.
Not sure if this question is outdated already, but I had a kind of similar problem.
I had a similar project structure, but with one difference: I split the project into two modules:
one of them (let's call it service) was a Java module for the back end
the second one (let's call it ui) contained only the JS and other front-end assets.
Then, in the Gradle build, the 'assemble' task of service depends on the 'assemble' task of ui AND on another custom task called 'pre-assemble'. This 'pre-assemble' task copies the optimized JS files to the place where I want them to be.
So, basically, I just added another task responsible for placing all the optimized JS files in the proper location.
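A rough sketch of that wiring in Gradle (module names, paths and the task name are placeholders rather than the exact build described above):

// in the service module's build.gradle
task preAssemble(type: Copy) {
    dependsOn ':ui:assemble'
    // copy the r.js output produced by the ui module into the war's resource tree
    from project(':ui').file('build/static-built')
    into "$buildDir/static-built"
}

assemble.dependsOn preAssemble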
I am in the process of introducing NuGet into our software development process, both for external binaries (e.g. Moq, NUnit) and for internal library projects containing shared functionality.
TeamCity is producing NuGet packages from our internal library projects, and publishing them to a local repository. My modified solution files use the local repository for accessing the NuGet packages.
Consider the following source code solutions:
Company.Interfaces.sln builds Company.Interfaces.1.2.3.7654.nupkg.
Company.Common.sln contains a reference to Company.Interfaces via its NuGet package, and builds Company.Common.1.1.1.7655.nupkg, with Company.Interfaces.1.2.3.7654 included as a dependency.
Company.DataAccess.sln uses the Company.Common nupkg to add Company.Interfaces and Company.Common as references. It builds Company.DataAccess.1.0.8.7660.nupkg, including Company.Common.1.1.1.7655 as a dependent component.
Company.Product.A is a website solution that contains references to all three library projects (added by selecting the Company.DataAccess NuGet package).
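For reference, the dependency wiring described above would look roughly like this in Company.Common's .nuspec (a sketch; only the ids and versions come from the description above, the rest is placeholder):

<?xml version="1.0"?>
<package>
    <metadata>
        <id>Company.Common</id>
        <version>1.1.1.7655</version>
        <authors>Company</authors>
        <description>Shared functionality</description>
        <dependencies>
            <dependency id="Company.Interfaces" version="1.2.3.7654" />
        </dependencies>
    </metadata>
</package>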
Questions:
If there is a source code change to Company.Interfaces, do I always need to renumber and rebuild the intermediate packages (Company.Common and Company.DataAccess) and update the packages in Company.Product.A?
Or does that depend on whether the source code change was
a bug fix, or
a new feature, or
a breaking change?
In reality, I have 8 levels of dependent library packages. Is there tooling support for updating an entire tree of packages, should that be necessary?
I know about Semantic Versioning.
We are using VS2012, C#4.0, TeamCity 7.1.5.
It is a good idea to update everything on each check-in, in order to test it early.
What you're describing can be easily managed using artifact dependencies (http://confluence.jetbrains.com/display/TCD7/Artifact+Dependencies) and "Finish Build" build triggers (or even solely the "NuGet Dependency Trigger").
We wrote our own build configuration for the base project (which would be Company.Interfaces.sln in this case) that builds and updates the whole tree in one go. It checks in updated packages.config files and .nuspec files along the way. I can't overstate how much of a time-saver this ended up being for us, even if it might sound like overkill at the beginning.
One thing to watch out for: the script we wrote checks in the files even if the chain fails somewhere in between, to give us the chance to fix it on our local machine, check in the fix, and restart the publishing.