How can I set up a common Maven repository for Windows and Linux using Dropbox?

I use both Windows and Ubuntu for my Java development work, and I manage a common workspace for them using Dropbox. On Ubuntu, my Dropbox folder resides in my home directory, while on Windows it resides on a separate partition.
I want a common .m2 folder for both Windows and Linux through Dropbox. I understand that I can achieve this by modifying the following line in settings.xml:
<localRepository>${user.home}/dropbox/.m2/repository</localRepository>
While this works when Dropbox sits in the home directory on both Ubuntu and Windows, it doesn't work for me because I prefer to keep my Dropbox on a completely different partition on Windows.
Is there any way I can define a new system property similar to user.home, say user.dropbox.home, on both Windows and Ubuntu to achieve this?

I was finally able to do it by setting a custom environment variable on each OS, which passes a system property to every JVM:
Windows:
_JAVA_OPTIONS=-Duser.dropbox.maven=E:\Dropbox\maven
Linux:
_JAVA_OPTIONS=-Duser.dropbox.maven=/home/creationk/Dropbox/maven
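A minimal sketch of how those variables can be set persistently, assuming a Bourne-style shell on Linux and cmd's setx on Windows (the Dropbox paths are the ones from this answer; adjust them to your own location):
# Linux: append to ~/.profile so every JVM launched afterwards picks it up
export _JAVA_OPTIONS='-Duser.dropbox.maven=/home/creationk/Dropbox/maven'
:: Windows (cmd): persist the variable for the current user; new shells pick it up
setx _JAVA_OPTIONS "-Duser.dropbox.maven=E:\Dropbox\maven"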
And settings.xml was modified as below:
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0
https://maven.apache.org/xsd/settings-1.0.0.xsd">
<localRepository>${user.dropbox.maven}/.m2/repository</localRepository>
</settings>

I am curious as to why you would want a common .m2 folder. A key purpose of this folder is to maintain a local repository that eliminates unnecessary network traffic.
I would caution against making your local repository not so local. Chances are you will run into file corruption and concurrency issues; Jenkins users can attest to that, albeit for different reasons. Dropbox's update protocols will just further get in the way. Rather than thinking of .m2 as a repository, think of it as a cache.
If it is a common repository that you are seeking, I suggest looking into:
Sonatype Nexus Repository Manager
JFrog Artifactory
Apache Archiva
Edit:
Given that the intent of sharing .m2 is to create what the OP calls a universal repository, the following demonstrates how to configure a file-based repository via Dropbox. Similar techniques can be applied to other shared-filesystem mechanisms (e.g. CIFS, NFS) to deploy and retrieve artifacts.
First, create a private folder in your Dropbox folder named repo.
Next, add the following <distributionManagement> configuration to your project's POM, to a parent POM shared by all projects, or, better yet, to a profile (but that is another question).
<distributionManagement>
<repository>
<id>db-repo</id>
<url>file:///C:/Users/user/Dropbox/repo</url>
</repository>
</distributionManagement>
Having done this, whenever you run mvn deploy the resulting artifacts will be added to, or updated in, your common repository. The file path to the repository will vary between systems, but as long as these configurations are set globally on each system they only have to be set once. One way to keep the POM itself portable is sketched below.
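A minimal sketch, assuming you reuse the user.dropbox.maven system property from the accepted answer above so each machine resolves its own Dropbox path (the repo folder name follows this example; this is a sketch under that assumption, not a verified portable setup):
<distributionManagement>
<repository>
<id>db-repo</id>
<!-- Resolves per machine via -Duser.dropbox.maven set in _JAVA_OPTIONS -->
<url>file://${user.dropbox.maven}/repo</url>
</repository>
</distributionManagement>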
To let the same project, and other projects, consume artifacts deployed there, add a <repository> configuration for the common repository.
...
<repositories>
...
<repository>
<id>db-repo</id>
<url>file:///C:/Users/user/Dropbox/repo</url>
</repository>
</repositories>
A public Dropbox-based repository can be implemented in a similar fashion by creating the repository folder in Dropbox's Public folder. Once created, log in to the Dropbox website and select the repository folder. Use the Share button to retrieve its public URL, and use that URL in the <repository> configuration. For example,
<repository>
<id>db-repo</id>
<url>https://www.dropbox.com/whatever/dropbox/says/it/should/be</url>
</repository>

Related

How to restrict exposing project settings file to all in mavenExecute step of cloud-s4-sdk pipeline?

We are working on the s4sdk pipeline implementation for delivery of SAP Cloud Foundry applications (Spring Boot microservices) using the SAP Cloud SDK for Java.
We have multiple developers working on multiple microservices, but all these microservices share some common dependencies.
We want to control the versions of all the common dependencies from a central location.
For this we have created a Maven BOM (Bill of Materials) dependency and added it as the parent in the pom.xml of each microservice.
The aforementioned BOM is housed in a Nexus repository, and every pom.xml (of the microservices) can access the parent using a repository tag like the one below.
<repository>
<id>my-repo</id>
<name>nexus-repo</name>
<url>http://some/url</url>
</repository>
The credentials for the above nexus repository are placed in the settings.xml file.
We want to run the above model in the cloud-s4-sdk pipeline. Although it works fine, the problem is that we need to expose the nexus repo access credentials in the settings.xml file.
Per the documentation at https://github.com/SAP/cloud-s4-sdk-pipeline/blob/master/configuration.md#mavenexecute, the settings.xml for Maven builds needs to be placed relative to the project root. This is a security concern for us: the project repository is on GitHub, so the projectSettingsFile can be accessed by the developers.
We don't want these credentials to be exposed to the developers. It should be limited to only the admin team.
Is there a way we can achieve this using the cloud-s4-sdk pipeline?
Although Nexus facilitates user tokens for the Maven settings.xml, that does not work here, as GUI login is still possible using the token values.
I think you could consider the following options:
Allow anonymous read access for artifacts
The developers need a way to build the artifacts locally anyway; how could they build your service without access to its dependencies? Allowing anonymous read access would enable that.
Commit credentials to git but make git repository private
If you don't want to allow this for all employees (I guess only employees have access to your Nexus), you can commit the credentials together with the settings.xml but make the repository private, so as not to share these details.
Inject credentials as environment variable
You can inject the credentials as environment variables into your settings.xml file. See also: How to pass Maven settings via environmental vars
To set up the environment variables, you can surround the full pipeline in your Jenkinsfile with the withCredentials step. For details see: https://jenkins.io/doc/pipeline/steps/credentials-binding/
String pipelineVersion = "master"
node {
deleteDir()
sh "git clone --depth 1 https://github.com/SAP/cloud-s4-sdk-pipeline.git -b ${pipelineVersion} pipelines"
withCredentials([usernamePassword(credentialsId: 'nexus', usernameVariable: 'NEXUS_USERNAME', passwordVariable: 'NEXUS_PASSWORD')]) {
load './pipelines/s4sdk-pipeline.groovy'
}
}
and a settings.xml like:
<username>${env.NEXUS_USERNAME}</username>
<password>${env.NEXUS_PASSWORD}</password>
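For completeness, those two lines normally sit inside a <server> entry whose id matches the repository id used in the POM; a minimal sketch, assuming the my-repo id from the question (the rest is standard settings.xml structure):
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0">
<servers>
<server>
<!-- Must match the <id> of the repository declared in the POM -->
<id>my-repo</id>
<username>${env.NEXUS_USERNAME}</username>
<password>${env.NEXUS_PASSWORD}</password>
</server>
</servers>
</settings>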

Clone maven repository (Archiva manager)

I have a Maven repository (managed with Archiva) in my environment that I use for application development. Since it is a mirror of the central repository, it has all the dependencies I need for my application.
If I want to give the application sources to someone who doesn't have an external connection, how can I clone my Archiva repository into their environment? Is there a way to do that?
Thank you.
Simply copy the on-disk content of the repositories directory.
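A minimal sketch of that copy, assuming a standalone Archiva installation under /opt/archiva (the actual path depends on your installation; Archiva keeps its repository data in its repositories directory):
# Pack up the on-disk repository content on the connected machine
tar czf archiva-repositories.tar.gz -C /opt/archiva repositories
# Unpack it into the same location in the offline machine's Archiva installation
tar xzf archiva-repositories.tar.gz -C /opt/archiva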

SVN to deployable folder

I have an SVN repository on my server (Windows 7) and my application runs on Tomcat.
On the server, every time I check out my application (which was committed from different machines), I manually place the files in the webapps folder to deploy the app onto Tomcat.
Is there a way to link SVN with the webapps folder, so that whenever users commit their code it is deployed directly to the Tomcat webapps folder?
This question comes up often enough that it's in the FAQ.
Aside from doing it strictly via a hook script (which requires admin-level access to the repository, on the repository server), you can set up a Continuous Integration server to monitor the repository and perform your deployments that way. A sketch of the hook-script route follows.
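A minimal sketch of the hook-script route, written for a Unix-like server for brevity (on Windows the hook would be hooks\post-commit.bat with equivalent commands); the working-copy path and app name are illustrative:
#!/bin/sh
# <repository>/hooks/post-commit: Subversion runs this after every commit
REPOS="$1"
REV="$2"
# Assumed deployment working copy checked out inside Tomcat's webapps
svn update /var/lib/tomcat/webapps/myapp --non-interactive -q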

Building projects with Maven offline without using M2 repo but using system jars

I have a requirement, for a project in an open-source operating system, to use Maven completely offline in a *nix environment, i.e. it should use the dependencies available on the system (probably at /usr/share/ plus a few other places?). Maven should not download any dependency from the internet.
Is there a solution to achieve this? Creating the m2 repo on the system is not a viable option, because the file system is read-only. We can only work in a temporary folder (/tmp, for example) with write access, but maintaining a repo at a temporary location is bad design, isn't it?
Put another way, the to-be-installed Maven project should use the existing packages on the system, if available. If a package does not exist, it should get installed separately (via the package manager) and should not be copied to the m2 repo.
Is there any known way to do this?
Thanks for the help!
PS: Please note that I'm not asking about the -o option to take Maven offline!
You can create your own "mirror" repository (a mirror of the actual Maven repository, on /tmp) and ask Maven to use that instead of the remote repository.
Example :
http://maven.apache.org/guides/mini/guide-mirror-settings.html
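A minimal sketch of such a mirror entry in settings.xml, following the guide linked above (the id, name, and file path are illustrative):
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0">
<mirrors>
<mirror>
<id>local-mirror</id>
<name>Local file-based mirror</name>
<!-- Assumed location of the mirror tree on the writable partition -->
<url>file:///tmp/maven-mirror</url>
<mirrorOf>central</mirrorOf>
</mirror>
</mirrors>
</settings>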
I think you can create a local Maven repository (with the right folder tree and POM files) but use symbolic links to the jars in your read-only system directories. This solution is not easier than Shamit Verma's approach, and you must write some code.
The approach we used is the following.
We took the decision to store the m2 repo at /usr/share/maven-2/maven_home/m2_repo/. This isn't a temporary folder, and writing there needs admin rights, but that isn't a problem since installing packages also needs admin privileges.
We symlinked the jars in the repo to point to the system-level jars, i.e. we maintained the Maven repository structure, but each jar is a symlink to its system counterpart. This means there is no unnecessary duplication or waste of space. We still keep the POM files in the repo; they were rewritten by a Python script to match our needs.
Further, we referenced dependencies with the system scope. For example,
<dependency>
<groupId>groupId</groupId>
<artifactId>artifactId</artifactId>
<version>666</version>
<scope>system</scope>
<systemPath>/usr/share/maven-core/lib/maven-core.jar</systemPath>
</dependency>
With the system scope, it doesn't matter what the groupId:artifactId:version combination is; Maven just picks up the jar it finds at <systemPath>.

Subversion Repository on Linux Dev

What's the best practice for setting up a Subversion repository on a Linux development machine? External users need to be able to access a specific repository, but nothing else on the machine. I know one answer is to set up a dedicated machine, but I'm looking for a single-machine solution: location of repositories, accounts, backup procedures.
One of the popular access methods to Subversion is via an Apache module. You can set different rights at the directory level to control access. See Choosing a Server Configuration and httpd, the Apache HTTP Server. For authentication, I recommend using an external authentication source, like Microsoft AD via mod_auth_sspi.
If you need to mix and match rights, see my answer to How can I make only some folders show up for certain developers with SVN.
I work for an IT operations infrastructure automation company; we do this all the time.
Location of repository: we use /srv/svn by default to store all SVN repositories, unless a customer has a specific requirement; for example, an existing repository might be stored on a ReadyNAS shared filesystem.
Accounts: all our customers use LDAP, usually an OpenLDAP server running on a master host, but sometimes Active Directory, since some customers have a Windows domain in their office (which we can configure as well). Developers get access to the SCM group (usually svn, git or devel) and the deploy group. These groups only have permissions to log in and perform SCM-related activities (i.e. write commits to the repo based on group ownership) or do application deployments to production.
Backup procedures: We use svnadmin hotcopy unless the customer already has something in place (usually svnadmin dump, heh).
svnadmin hotcopy /srv/svn /srv/svn_backups/$(date +%Y%m%d)
For access to the repo, it's usually simple svn+ssh. Some customers already have an Apache setup, but not many. I recommend SSH. Developers push their public ssh keys out and all is well. There's little to no maintenance with LDAP user management (the only way to go).
I'd recommend looking at the chapter on server configuration in the subversion book. It makes suggestions about which configuration is more appropriate for your use.
For what it's worth, setting up a repository using the standalone svn daemon is very straightforward. The annoying thing is managing user rights.
I have a blog posting that describes the steps necessary to set up and start a Linux-based Subversion server in order to maintain code repositories, etc.
Basically, the steps are as follows (a condensed command sketch appears after the list):
Download the Subversion tarball.
Unzip and install Subversion.
Deal with any installation problems that arise when running ./configure, if any.
Create the Subversion repository using svnadmin create.
Edit the repository configuration file using your text editor of choice.
Ditto the password file.
Import your code, projects, etc. into the repository using svn import.
Start the server as a daemon, e.g. svnserve -d. It is also possible to have this happen automatically upon reboot.
Start using it via standard Subversion commands to, e.g., check out, check in, back up, etc.
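A condensed sketch of those steps as shell commands (the repository location and project name are illustrative; the configuration and password files live under the repository's conf directory):
# Create the repository
svnadmin create /srv/svn/myrepo
# Edit access control and the password file referenced by svnserve.conf
vi /srv/svn/myrepo/conf/svnserve.conf /srv/svn/myrepo/conf/passwd
# Import an existing project tree
svn import ./myproject file:///srv/svn/myrepo/myproject -m "Initial import"
# Start the standalone daemon, serving everything under /srv/svn
svnserve -d -r /srv/svn
# Check out over the svn:// protocol
svn checkout svn://localhost/myrepo/myproject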
