How to get Selenium working with Jenkins 2 in GCP (Linux)

I'm trying to get Selenium Grid and Jenkins working together in GKE.
I found the Selenium plugin (https://plugins.jenkins.io/selenium) for Jenkins, but I'm not sure it can be used to get what I want.
I stood Jenkins up by following the steps here:
https://github.com/GoogleCloudPlatform/kube-jenkins-imager
(I changed the image for the Jenkins node to use Jenkins 2.86.)
This creates an instance of Jenkins running in kubernetes that spawns slaves into the cluster as needed.
But I don't believe that this is compatible with the Selenium plug-in. What's the best way to take what I have and get it working with this instance of Jenkins?
I was also able to get an instance of Selenium up and going in the same cluster using this:
https://gist.github.com/elsonrodriguez/261e746cf369a60a5e2d
(I dropped the version 2.x from the instances to pull in the latest containers.)
I had to bump the k8s nodes up to n1-standard-2 (2 vCPUs, 7.5 GB memory) to get those containers to run.
For this proof of concept, the SE nodes don't need to be ephemeral. But I'm unsure what kind of permanent node container image I can deploy in k8s that would have the necessary SE drivers.
On the other hand, maybe it would be easier to just use the stand-alone SE containers that I found. If so, how do I use them with Jenkins 2?
Has anyone else gone down this path?
Edit: I'm not interested in third-party selenium services at this time.

SauceLabs is a Selenium grid in the cloud.
I wrote Saucery to make integrating with it from C# or Java, with NUnit 2, NUnit 3, or JUnit 4, easy.
You can see the source code here, here, and here, or take a look at the GitHub Pages site here for more information.

Here is what I figured out.
I saw many indications that it was a hassle to run your own instance of Selenium grid. Enough time may have passed for this to be a little easier than it used to be. There seem to be a few ways to do it.
Jenkins itself has a plugin that is supposed to turn your Jenkins cluster into a Selenium 3 grid: https://plugins.jenkins.io/selenium . The problem I had with this is that I'm planning on hosting these instances in the cloud, and I wanted the Jenkins slaves to be ephemeral. I was unable to figure out how to get the plugin to work with ephemeral slaves.
I was trying to get this done as quickly as I could, so I only spent three days total on this project.
These are the forked repos that I'm using for the Jenkins solution:
https://github.com/jnorment-q2/kube-jenkins-imager
which basically implements this:
https://github.com/jnorment-q2/continuous-deployment-on-kubernetes
I'm pointing to my own repos to give a reference to exactly what I used in late October 2017 to get this working. Those repos are forked from the main repos, and it should be easy to compare the differences.
I had contacted Google support with a question, and they responded that this link might actually be a bit clearer:
https://cloud.google.com/solutions/jenkins-on-container-engine-tutorial
From what I can tell, this is a manual version of the more automated scripts I referenced.
To stand up Selenium, I used this:
https://github.com/jnorment-q2/selenium-on-k8s
This is a project I built from a gist referenced in the Readme, which references a project maintained by SeleniumHQ.
The main trick here is that Selenium is resource hungry. I had to use the second tier of Google Compute Engine machine types in order for it to deploy in Kubernetes. I adapted the script I used to stand up Jenkins to deploy Selenium Grid in a similar fashion.
Also of note, there appear to be only Firefox and Chrome options in the project from SeleniumHQ. I have yet to determine if it is even possible to run an instance of Safari.
For now, this is what we're going to go with.
The remaining piece is how to make a call to the Selenium grid from Jenkins. It turns out that Selenium can be pip-installed into the ephemeral slaves, and webdriver.Remote can be used to make the call.
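A minimal sketch of that call, assuming the Selenium 3 Python bindings that were current at the time (the grid URL and browser choice below are placeholders, not the values from the demo):

    # pip install selenium  (all the ephemeral slave needs)
    import os

    from selenium import webdriver

    # SE_GRID_SERVER is the Jenkins build parameter described below,
    # e.g. "http://<grid-host>:4444/wd/hub" -- the default here is a placeholder.
    grid_url = os.environ.get("SE_GRID_SERVER", "http://localhost:4444/wd/hub")

    # Selenium 3 style; newer bindings replace desired_capabilities with options=.
    driver = webdriver.Remote(
        command_executor=grid_url,
        desired_capabilities={"browserName": "chrome"},
    )
    try:
        driver.get("https://example.com")
        print(driver.title)  # quick sanity check that the grid ran the browser
    finally:
        driver.quit()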
Here is the demo script that I wrote to prove that everything works:
https://github.com/jnorment-q2/demo-se-webdriver-pytest/blob/master/test/testmod.py
It has a Jenkinsfile, so it should work with a fresh instance of Jenkins. Just create a new pipeline, set the definition to 'Pipeline script from SCM' with Git and the repo URL https://github.com/jnorment-q2/demo-se-webdriver-pytest, then scroll up, click 'run with parameters', and add the parameter SE_GRID_SERVER with the full URL (including port) of the SE grid server.
It should run three tests and fail on the third. (The third test requires the additional parameters TEST_URL and TEST_URL_TITLE.)

Related

Is it possible to stream Cloud Build logs with the Node.js library?

Some context: Our Cloud Build process relies on manual triggers and about 8 substitutions to customize deploys to various firebase projects, hosting sites, and preview channels. Previously we used a bash script and gcloud to automate the selection of these substitution options, the "updating" of the trigger (via gcloud beta builds triggers import: our needs require us to use a single trigger, it's a long story), and the "running" of the trigger.
This bash script was hard to work with and improve, and through the import-run shenanigans actually led to some faulty deploys that caused all kinds of chaos: not great.
However, recently I found a way to pass substitution variables as part of a manual trigger operation using the Node.js library for Cloud Build (runTrigger with subs passed as part of the request)!
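For illustration, here is the shape of that triggered run with substitutions, sketched with the Python Cloud Build client rather than the Node.js one (project, trigger, and substitution names are placeholders):

    # pip install google-cloud-build
    from google.cloud.devtools import cloudbuild_v1

    client = cloudbuild_v1.CloudBuildClient()

    # Substitutions ride along on the RepoSource passed to the run-trigger call.
    source = cloudbuild_v1.RepoSource(
        branch_name="main",
        substitutions={
            "_FIREBASE_PROJECT": "my-project",  # hypothetical substitution keys
            "_HOSTING_SITE": "my-site",
        },
    )

    operation = client.run_build_trigger(
        project_id="my-gcp-project",  # placeholder project ID
        trigger_id="my-trigger-id",   # placeholder trigger ID
        source=source,
    )

    # The call returns a long-running operation; it does not stream the build logs.
    build = operation.result()
    print(build.status)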
Problem: So I'm converting our build utility to Node, which is great, but as far as I can tell there isn't a native way to stream build logs from a running build in the console (except maybe with exec, but that feels hacky).
Am I missing something? Or should I be looking at one of the logging libraries?
I've tried my best scanning Google's docs and APIs (Cloud Build REST, the Node client library, etc.) but to no avail.

I have a self-hosted GitLab instance running, but how does the frontend work?

I set up a self-hosted GitLab instance and it's working fine; my problem right now is that I don't really understand how the frontend works, mostly because I've been focusing on the backend and because I couldn't find documentation about it either. I want to understand how I can comment out things I don't want to show to the user or in the overall design, change aspects and text, and generally have control of the frontend.
I'm running on Debian 9, and the setup was done with Bitnami on a Google Cloud VM. As far as I understand, I have to manually change the files I want, but I really don't understand the structure of this type of frontend.
What language do I need to know here, where should I find the documentation, how do I find the correct directories and files, etc.?
While GitLab doesn't officially support any type of "custom frontend", what you can do is:
Fork GitLab
Use the GitLab Development Kit to implement your changes
Run a Source Install of your fork
The frontend is mostly written in HAML (for the server-side bits) and Vue.js (for the client-side bits).
Note: even an Omnibus install copies raw Ruby and JavaScript files somewhere, and since they're physically on the system, they can be manually manipulated and hot-patched, but that's not really a sustainable way of introducing changes to the codebase.

Running Selenium in Azure Function

I want to periodically scrape a website with Selenium and a headless PhantomJS driver.
My boss wants me to run it "in the cloud" for reasons, and a serverless Azure Function looks like it could be a useful way to do it, instead of having to run a VM or something.
I've got my VS.NET code to do the scraping mostly done, but I just realized that I'm not sure if I can actually deploy it as a function, since it looks like it wants me to include phantomjs.exe in my project in order to run, which may not work in an Azure Function...
Can I do what I wanted to do, or should I explore other options?
PhantomJS is a known unsupported framework in App Service, which is the same environment Azure Functions runs on.
You can find more information here: https://github.com/projectkudu/kudu/wiki/Azure-Web-App-sandbox#unsupported-frameworks

Creating a Web UI for StrongLoop build & deploy processes?

I want to build a web UI for StrongLoop that would let a user run the build and deploy process from that UI, like StrongLoop Arc does.
There are simple Node applications (web services) that were not created with StrongLoop tools. I need to deploy these applications via the web UI. The solution I have in mind is some server-side processing, with the steps listed below (a rough sketch follows the list):
Upload a zip archive (the Node application) to the server
Extract the zip and build it into a tar.gz with a shell command (slc build) through the Node.js child_process API
Deploy the tar.gz file to the relevant StrongLoop host with a shell command (slc deploy) through the API mentioned in the previous step
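Below is a rough sketch of that flow, written in Python purely for illustration (the question plans to use Node's child_process, and the exact slc arguments shown are assumptions, not a verified CLI reference):

    # Illustration only: extract the uploaded app, build it, and deploy it.
    import subprocess
    import zipfile

    def build_and_deploy(zip_path: str, work_dir: str, strongloop_host: str) -> None:
        # 1. Extract the uploaded zip (the Node application) into a working directory.
        with zipfile.ZipFile(zip_path) as archive:
            archive.extractall(work_dir)

        # 2. Build the application into a deployable package with `slc build`.
        subprocess.run(["slc", "build"], cwd=work_dir, check=True)

        # 3. Deploy to the target StrongLoop host; passing the host URL as an
        #    argument to `slc deploy` is an assumption about the CLI.
        subprocess.run(["slc", "deploy", strongloop_host], cwd=work_dir, check=True)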
I wonder: is there an alternative way to deploy a Node application (not created with StrongLoop tools) to a StrongLoop host via a web UI, using some StrongLoop API?
I have looked through the API but could not find a specific solution.
What you require is a CDP (continuous delivery pipeline) setup. There seem to be many ways in which you can achieve this (the easiest is using Codeship or similar platforms), but if you want to know how it works, it requires a few orchestration tools to help you. To describe the steps, I'll be using the following tools:
Docker (what is docker?)
Ansible (Use Cases and How it works?)
Jenkins (What is it and Why to use it?)
"There are many other combination of tools that you can look at, but this should give you an idea"
Now that we have the tools, I'll try to describe the deployment pipeline with a very basic use-case.
Step I "Ideally" - Creating a docker image for your nodejs application.
What generally everyone suggests is that you create a docker image of your application. Then save this image on docker-hub. How this will help you is that, now your nodejs application is contained inside a docker image which makes it independent of the Host and can be deployed anywhere you want.
To create this image all you need to do is create a Dockerfile, which is described in the in the link I've shared.
Step II "Ideally" - Creating an Ansible playbook to mimic the setup steps of your application.
Ansible playbooks are basically used to automate every manual process that you would need to do in order to setup-deploy-run your application. This decreases the need to run even trivial tasks like "slc build".
Step III "Ideally" - This is where we get to the UI stuff
By using Jenkins, you are given a UI which helps you configure tasks that can be combined with GitHub hooks to trigger the deployment as soon as you make a commit. This is explained in more detail in the link shared.
So to summarize, this is what goes on in the background, to some extent, in order to automate the build and deployment of your application using a UI. I hope this serves as a good starting point to achieve your requirements; and in case you want to skip these steps at the start, you could always go with Codeship or similar tools to help you with the steps that you've mentioned.

How to get cucumber to run the same steps against Selenium and a headless browser

I've been doing some work testing web applications with Cucumber and I currently have a number of steps set up to run with Culerity. This works well, but there are times when it would be nice to run the exact same stories in Selenium.
I see two possible approaches that may work:
Writing each step so that it performs the step appropriately depending on the value of some global variable.
Having separate step definition files and somehow selectively including the correct one.
What is the preferred method for accomplishing this?
Third option: See if Culerity implements the Webrat API. Its README file says: "Culerity lets you (...) reuse existing Webrat-Style step definitions". Couldn't find much more than that though. Ideally, you would be able to switch backends with a config option or command-line argument without having to touch the step definitions.
Of course this would only work if you're not testing Javascript, which Culerity supports, but Webrat doesn't.
Hi, have you looked at Capybara? It will allow you to use a variety of web drivers, and will allow you to test JavaScript-related features as well.
I think this is the one you are looking for. http://robots.thoughtbot.com/post/1658763359/thoughtbot-and-the-holy-grail
You can schedule the tests to run in Jenkins. Jenkins is open-source software that you can run on a local machine. You can get the Cucumber plugin in Jenkins so that you can add reporting to your project on top of the continuous test runs.

Resources