Currently all of our Jenkins servers, both the master and the nodes, have wide-open internet access. Our security team is trying to narrow down the internet access on these servers by asking the Jenkins admin team to provide the DNS names/IPs that Jenkins accesses. The biggest problem here is that we don't know which public IPs it accesses while building the code, and whenever we set up a new build job it goes out to fetch build-time dependencies, so the build fails at the firewall. Any idea what the best solution to tackle this issue would be?
As the Top 10 CI/CD Security Risks SEC-04 states:
Ensure that pipelines running unreviewed code are executed on isolated nodes, not exposed to secrets and sensitive environments.
The above statement seems especially true when the code (or the pipeline code itself) is in a pull request that has not yet been seen/approved/merged, but from a developer's perspective you still want to know whether it builds successfully in the first place. Running code that nobody has laid eyes on while it has access to build secrets is definitely a security risk.
I'm wondering whether this kind of isolation is achievable with Jenkins build nodes, as I cannot find any specific options for it.
My assumption is that dynamically provisioned containerized agents are best suited for isolated environments; I'm just not sure how to prevent their access to secrets from the Jenkins controller.
We have our software hosted in Git on Azure DevOps and built using a build pipeline (which primarily uses a Cake script). We are now looking to deploy this software using the Azure DevOps release pipeline. However, all of our application servers are behind our firewall, inside our network, and don't have any port open except 80 and 443 for the web applications. We have dev, staging, and production servers for our apps (including some for load balancing). All I really need is to copy the artifact, back up the current code to a separate folder on the server, deploy and unzip the artifact file in the root deployment folder, and restart IIS on those servers.
My company is rather large and bureaucratic so there are some hoops we have to jump through for due diligence before we even attempt this new process. In that spirit, I am trying to find the best solution. If you can offer your advice, and in particular, offer any other solution we did not think of, that would be helpful:
The obvious solution would be to stand up servers on Azure cloud and move completely to the cloud. I know this is a solution, and this may be where we go, but my request is for non-cloud solution options so I can present this properly and make a recommendation.
Use a Hyper VPN tunnel to securely transfer the files and restart IIS. Probably the easiest and simplest method with regard to our already-built build process on AzDO. Technically, this is the one I am least comfortable with.
Use build agents inside the network, connect to them from AzDO, have them build the software, and then have them (or other agents) deploy it. Lots of work to set up, but so far the least intrusive to our security. I'm also not a fan because I wanted AzDO to handle builds and deployments.
Open the SFTP and SSH ports for each server and transfer the files that way. Maybe the least secure way but very simple?
If you have a better solution for this problem, or a more common one, let me know. If you think I should use one of the four solutions above, let me know. If you can expand on any of the options above, please do.
ADO agents only require outbound connectivity: they talk to ADO, not the other way around. So you only need port 443 outbound to a couple of ADO URLs.
Reading: https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/agents?view=azure-devops#communication
https://learn.microsoft.com/en-us/azure/devops/pipelines/agents/v2-windows?view=azure-devops#im-running-a-firewall-and-my-code-is-in-azure-repos-what-urls-does-the-agent-need-to-communicate-with
You could use Environments. Create an Environment for each VM (which includes registering an agent on that machine) and then reference it via the environment parameter in a YAML pipeline deployment job. The deployment job can then do whatever you need (deploy the web app, move files, back up, etc.) on your target machine, regardless of whether it's on a private network; see the sketch after the links below.
More reading - Azure DevOps Environments: https://learn.microsoft.com/en-us/azure/devops/pipelines/process/environments?view=azure-devops
Using deployment job: https://learn.microsoft.com/en-us/azure/devops/pipelines/process/deployment-jobs?view=azure-devops
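For illustration, here is a minimal sketch of such a deployment job. The environment name, artifact name, and paths are hypothetical placeholders; it assumes the Environment already has the target VM registered as a resource and that an earlier stage published the zipped site as an artifact named drop.

```yaml
stages:
- stage: DeployStaging
  jobs:
  - deployment: DeployWeb
    displayName: Deploy to staging servers
    environment:
      name: staging-web            # Environment with the target VM registered as a resource
      resourceType: VirtualMachine
    strategy:
      runOnce:
        deploy:
          steps:
          - download: current
            artifact: drop
          - powershell: |
              # Back up the current site, unzip the new build, restart IIS
              Copy-Item 'C:\inetpub\wwwroot\app' "C:\backups\app-$(Build.BuildId)" -Recurse
              Expand-Archive "$(Pipeline.Workspace)\drop\app.zip" 'C:\inetpub\wwwroot\app' -Force
              iisreset
            displayName: Deploy and restart IIS
```

Because the steps run on the agent registered with the Environment, no inbound ports need to be opened on the application servers.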
We're in the process of adding Jenkins CI to a project with 100 developers.
Do you think they all need read-only access to Jenkins, or should it be accessed only by five developers who take charge of builds and continuous testing?
What could be reasons to let all developers login to Jenkins?
It's about security, auditing, Jenkins pipelines, etc.
Technically (if relevant), it can be integrated with LDAP.
If all 100 developers are admins of Jenkins, I think that after several weeks someone will mess it up.
You should have a few admins for it, to verify plugins before installation.
I used the role plugin and defined several roles for Jenkins:
admins
builders
configurers
readers
team leaders (with more permissions than configurers)
There might be an easy technical solution to your problem: Jenkinsfile + pipelines
Then only one or two people need admin access for adding nodes and perhaps a password or two and some initial setup. Configuring the builds is done solely through a Jenkinsfile per repository.
That way, every developer with push access to the repo can configure the Jenkins job for that repo. It's all in version control, so everyone will behave themselves.
LDAP/Active Directory integration is possible. In my setup, as an example, I've made login mandatory. Everyone who is logged in (i.e. the developers) can stop/restart jobs. Only I can do the rest of the maintenance. A very simple, clear, and long-term-clean setup.
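As an illustration, here is a minimal declarative Jenkinsfile of the kind each repository could carry. The agent label, build commands, and result paths are placeholders for whatever your projects actually use:

```groovy
// Minimal declarative Jenkinsfile kept in the repository root.
// Label, commands and paths below are placeholders for your project.
pipeline {
    agent { label 'linux' }          // run on any node carrying this label

    stages {
        stage('Build') {
            steps {
                sh './gradlew assemble'   // replace with your build command
            }
        }
        stage('Test') {
            steps {
                sh './gradlew test'
            }
        }
    }

    post {
        always {
            junit 'build/test-results/**/*.xml'   // publish JUnit test results
        }
    }
}
```

Anyone who can push to the repository can change this file, and the change is reviewed and versioned like any other code; nothing about the job needs to be edited in the Jenkins UI.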
Our continuous integration system uses Ansible playbooks to deploy our repo (including node modules) to a list of servers as specified in our Ansible host file. We have numerous environments, each configured in an individual host file. Jenkins is our build server and is used to kick off each Ansible run on a schedule. One of the tasks in the Ansible playbook is npm install.
Our problem arises because only one of the environments is offline, so when the playbook performs the npm install task on this particular environment (read "host file"), it fails due to lack of internet connectivity.
I've seen lots of answers on how to get around this manually, but the whole point of our continuous integration system is to run automatically and consistently (from environment to environment). So I don't want to introduce a bunch of workarounds in the playbook and/or repo to bundle up the node modules, etc., just to get around this particular host's offline issue.
Since this is a lower environment, I am willing to do something specific in advance, as a one-time step, on this server in order to bypass the issue. But since the playbook tasks tar up all existing files under the user account into a rollback zip file prior to installing the new code, anything I introduce on the server under this user account will essentially be removed (with the exception of dotfiles and dot-directories).
So, how do we use CI to run npm install on a single offline server without manual intervention?
I'm not familiar with Ansible, but the problem you're experiencing isn't uncommon, and there are definitely automated ways of solving it, e.g.:
Set up a local NPM registry, e.g. Sinopia, and configure it as your default registry. Nodes with no internet access will receive the latest cached version of each package (presuming it was previously downloaded by a node).
Run npm install once, package the node modules up, and share the artifact across the environments (see the playbook sketch below).
Personally, I prefer and advocate the second solution because:
Faster CI runs (only one download)
Package versions are guaranteed to match across all environments (although strict versioning would also solve this)
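A minimal sketch of the second option as Ansible tasks, assuming the pipeline has already run npm install once on a connected host and made node_modules.tar.gz available to the playbook; the variable names and paths are placeholders:

```yaml
# Placeholder tasks: ship a pre-built node_modules archive produced by an
# earlier, internet-connected build and unpack it on the offline host.
- name: Copy the pre-built node_modules archive to the target host
  copy:
    src: "{{ artifact_dir }}/node_modules.tar.gz"
    dest: /tmp/node_modules.tar.gz

- name: Unpack node_modules into the application directory
  unarchive:
    src: /tmp/node_modules.tar.gz
    dest: "{{ app_dir }}"
    remote_src: yes
```

The same two tasks run unchanged in every environment, so the offline host is not a special case in the playbook.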
An alternative would be to set up an internal NPM registry so that your offline Jenkins server can talk to your internal NPM; however, keeping this in sync with the public NPM registry would likely be a nightmare and presents a number of problems.
You need to allow "outbound only" internet access to this Jenkins server in order for this to work effectively without workarounds.
Our company uses ANT to automate build scripts.
Now somebody raised the question of how to secure such build scripts against (accidental or intentional) threats.
Example 1: someone checks in a build script that deletes everything under Windows drive T:\ because that is where the Apache deployment directory is mounted for a particular development machine. Months later, someone else might run the build script and erase everything on T:\ which is a shared drive on this machine.
Example 2: an intruder modifies the default build target in a single project to scan the entire local hard disk. The Continuous Integration machine (e.g. Jenkins) is configured to execute the default build target and will therefore send its entire local directory structure to the intruder, even for projects that the intruder should not have access to.
Any suggestions how to prevent such scenarios (besides "development policies" or "do not mount shared drives")?
My only idea is to use chroot environments for builds?!
The issues you describe are the same for any code that you execute on the build machine - you could do the same thing using a unit test.
In this case the best solution may be to place your build scripts under source control and have a code review prior to check-in.
At my company, the build scripts (usually a build folder) are an svn:external to another Subversion repository that is controlled only by build/release engineers. Developers can control variables such as which servers it can deploy to, but not what those functions do. This same code is reused amongst multiple projects in flight, and only a few DevOps folks can alter it, not the entire development staff; see the example externals definition below.
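For example, a project could pin the shared build folder with an svn:externals property along these lines (the repository URL and folder name are hypothetical):

```
# Pull the shared, build-team-controlled scripts into ./build of this project.
svn propset svn:externals 'https://svn.example.com/buildtools/trunk/build build' .
svn commit -m "Reference shared build scripts via svn:externals"
```

Only the build/release team can commit to that buildtools repository, so a regular developer cannot silently change what the shared targets do.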
Addition: when accessing shared resources, we use a system account that has only read access to those resources. Further, Jenkins, the development projects, and the build/deploy code are written to handle complete loss of the Jenkins project workspace and the deploy environments. This is basic build/deploy automation that leads to infrastructure automation.
Basic rule: Murphy's law is going to happen. You should write scripts that are robust and handle cold start scenarios and not worry about wild intruder theories.