How to restrict exposing project settings file to all in mavenExecute step of cloud-s4-sdk pipeline? - sap-cloud-sdk

We are working on the s4sdk pipeline implementation for delivery of SAP CloudFoundry applications (spring-boot micro-services) using the SAP Cloud SDK for Java.
We have multiple developers working on multiple micro-services, but all of these micro-services share some common dependencies.
We want to control the versions of all the common dependencies from a central location.
For this we have created a Maven BOM (Bill of Materials) and added it as the parent in the pom.xml of every micro-service.
The aforementioned BOM is hosted in a Nexus repository, and each micro-service's pom.xml can resolve the parent via a repository entry like the one below.
<repository>
    <id>my-repo</id>
    <name>nexus-repo</name>
    <url>http://some/url</url>
</repository>
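For completeness, the parent reference in each micro-service's pom.xml then looks roughly like the sketch below (the coordinates are placeholders, not the actual BOM):
<!-- illustrative only: replace with the real BOM coordinates -->
<parent>
    <groupId>com.example.platform</groupId>
    <artifactId>common-dependencies-bom</artifactId>
    <version>1.0.0</version>
    <relativePath/> <!-- resolve the parent from the repository, not the filesystem -->
</parent>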
The credentials for the above nexus repository are placed in the settings.xml file.
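Concretely, the credentials sit in a <server> entry whose id matches the repository id, along these lines (the values shown are placeholders):
<settings>
    <servers>
        <server>
            <id>my-repo</id>
            <username>nexus-user</username>
            <password>nexus-password</password>
        </server>
    </servers>
</settings>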
We want to run the above setup in the cloud-s4-sdk pipeline. It works fine, but the problem is that we have to expose the Nexus repository credentials in the settings.xml file.
Per the documentation at https://github.com/SAP/cloud-s4-sdk-pipeline/blob/master/configuration.md#mavenexecute, the settings.xml for Maven builds needs to be placed relative to the project root. This is a security concern for us: the project repository is on GitHub, so the projectSettingsFile can be read by the developers.
We don't want these credentials to be exposed to the developers. It should be limited to only the admin team.
Is there a way we can achieve this using the cloud-s4-sdk pipeline?
Nexus does offer user tokens for the Maven settings.xml, but that does not help here, as GUI login is still possible using the token values.

I think you could consider the following options:
Allow anonymous read access for artifacts
The developers need a way to build the artifacts locally anyway; how could they build your service without access to its dependencies? Allowing anonymous read access would enable that.
Commit credentials to git but make git repository private
If you don't want to allow access for all employees (I assume only employees can reach your Nexus), you can commit the credentials together with the settings.xml but make the repository private, so these details are not shared.
Inject credentials as environment variable
You can inject the credentials as environment variables into your settings.xml file. See also: How to pass Maven settings via environment variables
To set up the environment variables, you can wrap the full pipeline in your Jenkinsfile in the withCredentials step. For details see: https://jenkins.io/doc/pipeline/steps/credentials-binding/
String pipelineVersion = "master"
node {
    deleteDir()
    sh "git clone --depth 1 https://github.com/SAP/cloud-s4-sdk-pipeline.git -b ${pipelineVersion} pipelines"
    withCredentials([usernamePassword(credentialsId: 'nexus', usernameVariable: 'NEXUS_USERNAME', passwordVariable: 'NEXUS_PASSWORD')]) {
        load './pipelines/s4sdk-pipeline.groovy'
    }
}
and a settings.xml like:
<username>${env.NEXUS_USERNAME}</username>
<password>${env.NEXUS_PASSWORD}</password>
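Putting it together, a minimal settings.xml for this approach could look like the sketch below (the server id is an assumption and must match the repository id used in the pom.xml):
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0">
    <servers>
        <server>
            <!-- id must match the <repository> id in the pom.xml -->
            <id>my-repo</id>
            <!-- Maven resolves ${env.*} from the environment at build time -->
            <username>${env.NEXUS_USERNAME}</username>
            <password>${env.NEXUS_PASSWORD}</password>
        </server>
    </servers>
</settings>
This way only the Jenkins credentials store holds the actual values, and the committed settings.xml contains no secrets.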

Related

How can developers run pipelines without access to .gitlab-ci.yml?

The .gitlab-ci.yml configuration file should not be exposed to any user with the "developer" role, since it might grant unwanted access to variables and infrastructure, enable various kinds of exploits, or simply make unwanted changes possible.
Therefore, following https://gitlab.com/secure-ci-config-poc/ci-configs, I set up my projects and pipelines accordingly; but if the user who pushes does not have Reporter, Developer, or higher permissions on the ci-configs project, which contains all the configuration, the pipeline fails:
Found errors in your .gitlab-ci.yml:
Project `root/ci-configs` not found or access denied!
How can I fix this error so that developers can run pipelines but cannot access the configuration files and .gitlab-ci.yml?
Thanks All
You cannot stop users from reading the configuration. A user triggering a pipeline must have at least read access to the CI yaml file. However, secrets should never be stored in the YAML file, so read access should generally not be problematic.
You can prevent write access, but users triggering pipelines must be able to read all the configuration files. That is really the primary goal in securing your CI configurations -- preventing execution of malicious changes to the CI configuration.
if the user who pushes don't have reporter or developer or other high permissions on ci-configs project which contains all configurations, pipeline fails
The configuration project should have either public or internal visibility to avoid this problem, as described in the GitLab documentation:
If the configuration file is in a separate project, you can set more granular permissions. For example:
Create a public project to host the configuration file.
Give write permissions on the project only to users who are allowed to edit the file.
Then other users and projects can access the configuration file without being able to edit it.
(emphasis added)
If you absolutely needed the project to be set to private visibility, you might consider granting developer access, but creating protected branch rules that require maintainer access or higher to push changes.
Additional considerations
Even if you prevent access to writing changes to the CI configuration file, if the CI configuration executes any code written in the repository (say, for example running unit tests) then you really haven't solved any problems. Consider, for example, that malicious code can be embedded in test code!
It is possible to have a CI configuration that does not execute user code, but it's something you need to consider. If you need CI configurations to execute user-provided code (like running tests) then it's likely not very advantageous to protect your CI configuration in this way as a matter of securing your environment/variables.

Is injecting environment variables into CI/CD scripts really best security practice?

TL;DR
I'm trying to understand how to set up a Jenkins or GitLab installation "properly", security-wise.
We'd like jobs to be able to pull from and push to maven repositories and docker registries, but do this in a safe way, so that even malicious Jenkinsfile or .gitlab-ci.yml files can't get direct access to the credentials to either print them on-screen or send them with e.g. curl somewhere.
It seems the straightforward and documented way to do it for both Jenkins and GitLab is to create "Credentials" in Jenkins and "Variables" in GitLab CI/CD. These are then made available as environment variables for the Jenkinsfile or .gitlab-ci.yml to access and use in their scripts. Which is super handy!
BUT!
That means that anybody who can create a job in Jenkins/GitLab, or has write access to any repository with an existing job, can get access to the raw credentials if they're malicious. Is that really the best one can hope for? That we trust every single person with a login to a Jenkins/GitLab installation with the keys to the kingdom?
Sure we can limit credentials so they're only accessible to certain jobs, but all jobs need access to maven repos and docker registries...
In these post-SolarWinds times, surely we can and must do better than that when securing our build pipeline...
Details
I was hoping for something like the ability for, e.g., a Jenkinsfile to declare up front that it wants to use these X Docker images and these Y Java Maven dependencies, so the dependencies are downloaded before any script runs and the credentials used to pull them stay hidden from the scripts. Likewise, a build could declare its artifacts, so that after the script has concluded, "hidden" credentials are used to push the artifacts to e.g. a Nexus repository and/or Docker registry.
But the Jenkins documentation entry for Using Docker with Pipeline describes how to use a registry with:
docker.withRegistry('https://registry.example.com', 'credentials-id') {
    // bla bla bla
}
And that looks all safe and good, but if I put this in the body:
sh 'cat $DOCKER_CONFIG/config.json | base64'
then it is game over. I have direct access to the credentials. (The primitive security of string matching for credentials in script output is easily defeated with base64.)
GitLab doesn't even try to hide how easy this is in its docs:
before_script:
  - docker login -u $CI_REGISTRY_USER -p $CI_REGISTRY_PASSWORD $CI_REGISTRY
Could be replaced with
before_script:
  - "echo $CI_REGISTRY_USER:$CI_REGISTRY_PASSWORD | base64"
Likewise, game over.
Is there no general way to have credentials that are safely protected from the Jenkinsfile or .gitlab-ci.yml scripts?
These two articles describe the situation perfectly:
Accessing and dumping Jenkins credentials | Codurance "The answers you seek, Jenkins shall leak."
Storing Secrets In Jenkins - Security Blogs - this last article even describes how to print Jenkins' own encrypted /var/lib/jenkins/credentials.xml and then use Jenkins itself to decrypt them. Saves the hacker the trouble.

How can I setup a common maven repository for windows and linux using dropbox?

I use both windows and ubuntu for my java development work. I manage a common workspace for them using dropbox. In ubuntu, my dropbox folder resides in home directory, while in windows it resides in a separate partition.
I want to have a common .m2 folder for both windows and linux through dropbox. I understand that by modifying the below line in settings.xml I can achieve it:
<localRepository>${user.home}/dropbox/.m2/repository</localRepository>
While this works when Dropbox sits in the home directory on both Ubuntu and Windows, it doesn't work for me, as I prefer to have Dropbox on a completely different partition in Windows.
Is there any way I can define a new system property similar to user.home, say user.dropbox.home, on both Windows and Ubuntu to achieve this?
I was finally able to do it by defining a custom system property through the _JAVA_OPTIONS environment variable on each OS:
Windows:
_JAVA_OPTIONS=-Duser.dropbox.maven=E:\Dropbox\maven
Linux:
_JAVA_OPTIONS=-Duser.dropbox.maven=/home/creationk/Dropbox/maven
And settings.xml was modified as below:
<settings xmlns="http://maven.apache.org/SETTINGS/1.0.0"
          xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
          xsi:schemaLocation="http://maven.apache.org/SETTINGS/1.0.0
                              https://maven.apache.org/xsd/settings-1.0.0.xsd">
    <localRepository>${user.dropbox.maven}/.m2/repository</localRepository>
</settings>
I am curious why you would want to have a common .m2 folder? A key purpose of this folder is to maintain a local repository to eliminate unnecessary network traffic.
I would caution against making your local repository not so local. Chances are that you will run into file corruptions and concurrency issues. Jenkins users can attest to that, albeit for different reasons. Dropbox's update protocols will just further get in the way. Rather than thinking of .m2 as a repository, think of it as a cache.
If it is a common repository that you are seeking, I suggest looking into:
Sonatype Nexus Repository Manager
JFrog Artifactory
Apache's Archiva
Edit:
Given that the intent of sharing .m2 is to create what the OP calls a universal repository, the following demonstrates how to configure a file-based repository via Dropbox. Similar techniques can be applied to other shared filesystem mechanisms (e.g. CIFS, NFS) to deploy and retrieve artifacts.
First, create a private folder in your Dropbox folder named repo.
Next, add the following <distributionManagement> configuration to your project's POM, to a parent POM shared by all projects, or better yet to a profile (a sketch of the profile approach follows below).
<distributionManagement>
    <repository>
        <id>db-repo</id>
        <url>file:///C:/Users/user/Dropbox/repo</url>
    </repository>
</distributionManagement>
Having done this, whenever you run mvn deploy, the resulting artifacts will be added to or updated in your common repository. The filepath to the repository will vary on different systems. As long as these configurations are set globally in each system, they only have to be set once.
To enable the same and other projects to use artifacts deployed thereunder, add a <repository> configuration for the common repository.
...
<repositories>
    ...
    <repository>
        <id>db-repo</id>
        <url>file:///C:/Users/user/Dropbox/repo</url>
    </repository>
</repositories>
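As a minimal sketch of the profile approach mentioned earlier, the consuming repository can instead be declared once per machine in settings.xml rather than in every POM (the profile id is an assumption, and the file path will differ per machine):
<settings>
    <profiles>
        <profile>
            <id>dropbox-repo</id>
            <repositories>
                <repository>
                    <!-- same id as used above -->
                    <id>db-repo</id>
                    <url>file:///C:/Users/user/Dropbox/repo</url>
                </repository>
            </repositories>
        </profile>
    </profiles>
    <activeProfiles>
        <activeProfile>dropbox-repo</activeProfile>
    </activeProfiles>
</settings>
This keeps the machine-specific path out of the shared POMs; the <distributionManagement> section for deploying still lives in the POM, but its URL could similarly be parameterized with a property.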
A public Dropbox-based repository can be implemented in a similar fashion by creating the repository folder in Dropbox's Public folder. Once created, log in to the Dropbox website and select the repository folder. Use the Share button to retrieve its public URL, and use that URL in the <repository> configuration. For example,
<repository>
    <id>db-repo</id>
    <url>https://www.dropbox.com/whatever/dropbox/says/it/should/be</url>
</repository>

Bitbucket Pipelines access other node repository

I have enabled Bitbucket Pipelines in one of my node.js repositories to have it run the build on every commit. My repository depends on another node.js repository. For development I've linked the one to the other using npm link.
I've tried a git clone of that repository, as specified in the bitbucket-pipelines.yml file, but the build gets stuck on that command. I guess it's because git asks for authentication at that point.
Is there a way to allow the container to access other repositories in the same team? Or is there a better way altogether on how to solve this? I'd also be fine with switching to another CI tool if Bitbucket Pipelines aren't capable of this – the only requirement is that it's free for teams < 5 people.
Btw. I'd like to avoid paying for npm private packages if possible.
Thanks!
You can set up access to the other repository via an SSH key, as described in the official docs: https://confluence.atlassian.com/bitbucket/access-remote-hosts-via-ssh-847452940.html

How to set permissions in Git?

I am new to Git, and after a lot of searching I found that I must set Linux permissions on my Git server.
But I want to know, is it possible to set permissions in Git?
I am working on a team of about six people, and for security reasons I don't want everyone on the team to be able to access the whole project.
For example, if somebody on my team works on the UI of the Store section, I want them to have their own branch, but when they pull the project with Git they should only get access to the files and folders I allow.
I should add that I have my own Git server on a local network running Debian Linux, and I'm using SourceTree as my Git GUI. I have little experience with the Git command line, so I need to do this from the GUI if possible.
Edited:
Does GitLab support permissions like this? I have a repository that uses the Laravel framework, and I'd like to set permissions so that UI developers can only access the views, and PHP developers can access some controllers but not all of the controllers in the project.
You can check out GitLab (https://about.gitlab.com/) for this. Out of the box, Git does not support what you need/want.
No, Git doesn't manage this directly. Anyone with authentication credentials to the repository has access to the entire repository.
Traditionally, this is managed with third-party solutions, such as Gitolite, GitHub private repositories, and other systems.
In addition to the other answers: if you want only certain parts of the project to be accessible to each developer, you can use git submodules.
This is also preferable if the project has logically and functionally separate parts (like front-end and back-end).
