How to encode the password in Gatling? - base64

I have a requirement in my project to encode the password being sent in the login request. What is the best way to do it?

You can store the credentials in a file as key-value pairs, read the file into a collection, and decode the value. The decoded value can be stored in the session and used in your Gatling request as below:
import io.gatling.core.Predef._
import io.gatling.core.structure.ChainBuilder
import io.gatling.http.Predef._
import io.gatling.core.feeder._
import java.util.Base64
import java.nio.charset.StandardCharsets
object TestClass {

  // Read the credentials file and group its records by environment.
  val recordsByEnv: Map[String, Seq[Record[Any]]] =
    csv("/User/sandy/data/userdetails.csv").readRecords.groupBy { record =>
      record("env").toString
    }

  // Keep only the password column for each environment.
  val passwordByEnv: Map[String, Seq[Any]] = recordsByEnv.mapValues { records =>
    records.map(record => record("password"))
  }

  // Take the Base64-encoded password for the environment under test
  // and decode it back to plain text.
  val passwordEncoded: String = passwordByEnv("env_name").head.toString
  val credentials: String =
    new String(Base64.getDecoder.decode(passwordEncoded), StandardCharsets.UTF_8)

  // Store the decoded value in a session variable and use it in the request.
  val login: ChainBuilder =
    exec(session => session.set("password", credentials))
      .exec(
        http("Authorization")
          .post("https://XXXX.xxxxxx.com")
          .header("Content-Type", "application/x-www-form-urlencoded")
          .formParam("password", "${password}")
          .formParam("username", "USERNAME")
      )

  // The user details file looks like:
  // env,password
  // env_name,encoded_password
}
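For completeness, the encoded value stored in the CSV can be produced with the same java.util.Base64 API. Below is a minimal sketch (the plainPassword value is a placeholder, not a real credential); keep in mind that Base64 is a reversible encoding, not encryption, so it only keeps the password out of plain sight rather than securing it.
import java.util.Base64
import java.nio.charset.StandardCharsets

// Minimal sketch: produce the Base64 string to store in userdetails.csv.
val plainPassword = "s3cret" // placeholder value
val encoded = Base64.getEncoder.encodeToString(plainPassword.getBytes(StandardCharsets.UTF_8))
println(encoded) // paste this value into the password column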

Related

List a SharePoint folder containing more than 5000 files using python office365

from office365.runtime.auth.client_credential import ClientCredential
from office365.runtime.client_request_exception import ClientRequestException
from office365.sharepoint.client_context import ClientContext
from office365.sharepoint.files.file import File
import io
import datetime
import pandas as pd
app_principal = {
    'client_id': 'xxxxxxxxxxxxxxxxxxxxxxx',
    'client_secret': 'xxxxxxxxxxxxxxxxxxxxxxx'
}
sp_site = 'https://<domain>.sharepoint.com/sites/TSM839/'
relative_url = "/sites/TSM839/Shared Documents"
client_credentials = ClientCredential(app_principal['client_id'], app_principal['client_secret'])
ctx = ClientContext(sp_site).with_credentials(client_credentials)
libraryRoot = ctx.web.get_folder_by_server_relative_path(relative_url)
ctx.load(libraryRoot)
ctx.execute_query()
files = libraryRoot.files
ctx.load(files)
ctx.execute_query()
len(files)
Use case: fetch the names of all the files in a SharePoint folder.
This program works perfectly when the number of files in the SharePoint folder is < 5000; when it is > 5000, I get the below exception.

Scriptrunner/ Copying Project while logged-in as another user

I want my script to be executed not by me but by another user (ServiceUser) in the Jira instance.
This is my working code, but I do not know how to make another user execute it.
import com.atlassian.jira.project.Project
import com.atlassian.jira.project.ProjectManager
import com.atlassian.jira.project.AssigneeTypes
import com.atlassian.jira.component.ComponentAccessor
import com.atlassian.jira.user.ApplicationUser
import com.onresolve.scriptrunner.canned.jira.admin.CopyProject
import org.apache.log4j.Logger
import com.atlassian.jira.bc.projectroles.ProjectRoleService
import com.atlassian.jira.security.roles.ProjectRole
import com.atlassian.jira.util.SimpleErrorCollection
import com.atlassian.jira.security.roles.ProjectRoleManager
import com.atlassian.jira.security.roles.ProjectRoleActor
import com.atlassian.jira.bc.project.ProjectCreationData
import com.atlassian.jira.bc.project.ProjectService
import com.atlassian.jira.project.type.ProjectTypeKey
// the key for the new project
def projectKey = "EXA987"
def projectName = "EXA987"
def log = Logger.getLogger("com.onresolve.jira.groovy.MyScript")
Thread executorThread = new Thread(new Runnable() {
    void run() {
        def copyProject = new CopyProject()
        def inputs = [
            (CopyProject.FIELD_SOURCE_PROJECT)        : "SWTEMP",
            (CopyProject.FIELD_TARGET_PROJECT)        : projectKey,
            (CopyProject.FIELD_TARGET_PROJECT_NAME)   : projectName,
            (CopyProject.FIELD_COPY_VERSIONS)         : false,
            (CopyProject.FIELD_COPY_COMPONENTS)       : false,
            (CopyProject.FIELD_COPY_ISSUES)           : false,
            (CopyProject.FIELD_COPY_DASH_AND_FILTERS) : false,
        ]
        def errorCollection = copyProject.doValidate(inputs, false)
        if (errorCollection.hasAnyErrors()) {
            log.warn("Couldn't create project: $errorCollection")
        }
        else {
            def util = ComponentAccessor.getUserUtil()
            def adminsGroup = util.getGroupObject("jira-administrators")
            assert adminsGroup // must have the jira-administrators group defined
            def admins = util.getAllUsersInGroups([adminsGroup])
            assert admins // must have at least one admin
            ComponentAccessor.getJiraAuthenticationContext().setLoggedInUser(util.getUserByName(admins.first().name))
            copyProject.doScript(inputs)
        }
    }
})
executorThread.start()
I stumbled upon other snippets, using things like
def oldLoggedInUser = jiraAuthenticationContext.getLoggedInUser()
jiraAuthenticationContext.setLoggedInUser(serviceUser)
jiraAuthenticationContext.setLoggedInUser(oldLoggedInUser)
but they were not successful for me.
I have used the following solution to change the user during script execution:
def authContext = ComponentAccessor.getJiraAuthenticationContext();
def currentUser = authContext.getLoggedInUser();
def superuser = ComponentAccessor.getUserManager().getUserByKey("ANOTHER_USER_ACCOUNT")
authContext.setLoggedInUser(superuser);
// < do the needed work>
authContext.setLoggedInUser(currentUser);
At first I had issues because the other account I used did not have the needed Jira access rights (and I got funny "user must be logged in" errors).

Unable to read Kinesis stream from Spark Streaming

import org.apache.spark.SparkConf
import org.apache.spark.storage.StorageLevel
import org.apache.spark.streaming.Milliseconds
import org.apache.spark.streaming.StreamingContext
import org.apache.spark.streaming.dstream.DStream.toPairDStreamFunctions
import com.amazonaws.auth.AWSCredentials
import com.amazonaws.auth.DefaultAWSCredentialsProviderChain
import com.amazonaws.auth.SystemPropertiesCredentialsProvider
import com.amazonaws.services.kinesis.AmazonKinesisClient
import com.amazonaws.services.kinesis.clientlibrary.lib.worker.InitialPositionInStream
import org.apache.spark.streaming.kinesis.KinesisInputDStream
import org.apache.spark.streaming.kinesis.KinesisInitialPositions.Latest
import org.apache.spark.streaming.kinesis.KinesisInitialPositions.TrimHorizon
import java.util.Date
val tStream = KinesisInputDStream.builder
  .streamingContext(ssc)
  .streamName(streamName)
  .endpointUrl(endpointUrl)
  .regionName(regionName)
  .initialPosition(new TrimHorizon())
  .checkpointAppName(appName)
  .checkpointInterval(kinesisCheckpointInterval)
  .storageLevel(StorageLevel.MEMORY_AND_DISK_2)
  .build()

tStream.foreachRDD(rdd => if (rdd.count() > 0) rdd.saveAsTextFile("/user/hdfs/test/") else println("No record to read"))
Here, even though I can see data arriving in the stream, the Spark job above isn't getting any records. I am sure that I am connecting to the right stream with the correct credentials.
Please help me out.
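One thing worth checking, assuming the snippet above is the complete job: the StreamingContext is never started, and DStream output operations such as foreachRDD only execute once it is. A minimal sketch of the missing step:
ssc.start()            // actually begins pulling records from Kinesis
ssc.awaitTermination() // keep the job alive so batches keep arriving
Another possible cause, offered as an assumption here, is a stale KCL checkpoint: checkpointAppName maps to a DynamoDB table, and if that application name was used before, the consumer resumes from the old checkpoint instead of TrimHorizon.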

Zeppelin twitter streaming example, unable to

When following the Zeppelin tutorial for streaming tweets and querying them using SparkSQL, I am running into an error where the 'tweets' temp table is not found. The exact code being used and the links referred to are as follows.
Ref: https://zeppelin.apache.org/docs/0.6.2/quickstart/tutorial.html
import scala.collection.mutable.HashMap
import org.apache.spark.streaming._
import org.apache.spark.streaming.twitter._
import org.apache.spark.storage.StorageLevel
import scala.io.Source
import java.io.File
import org.apache.log4j.Logger
import org.apache.log4j.Level
import sys.process.stringSeqToProcess
/** Configures the Oauth Credentials for accessing Twitter */
def configureTwitterCredentials(apiKey: String, apiSecret: String, accessToken: String, accessTokenSecret: String) {
  val configs = new HashMap[String, String] ++= Seq(
    "apiKey" -> apiKey, "apiSecret" -> apiSecret, "accessToken" -> accessToken, "accessTokenSecret" -> accessTokenSecret)
  println("Configuring Twitter OAuth")
  configs.foreach { case (key, value) =>
    if (value.trim.isEmpty) {
      throw new Exception("Error setting authentication - value for " + key + " not set")
    }
    val fullKey = "twitter4j.oauth." + key.replace("api", "consumer")
    System.setProperty(fullKey, value.trim)
    println("\tProperty " + fullKey + " set as [" + value.trim + "]")
  }
  println()
}
// Configure Twitter credentials
val apiKey = "xxx"
val apiSecret = "xxx"
val accessToken = "xx-xxx"
val accessTokenSecret = "xxx"
configureTwitterCredentials(apiKey, apiSecret, accessToken, accessTokenSecret)
import org.apache.spark.streaming._
import org.apache.spark.streaming.twitter._
@transient val ssc = new StreamingContext(sc, Seconds(2))
@transient val tweets = TwitterUtils.createStream(ssc, None)
@transient val twt = tweets.window(Seconds(60), Seconds(2))
val sqlContext= new org.apache.spark.sql.SQLContext(sc)
import sqlContext.implicits._
case class Tweet(createdAt:Long, text:String)
twt.map(status =>
  Tweet(status.getCreatedAt().getTime() / 1000, status.getText())
).foreachRDD(rdd =>
  // The line below works in Spark 1.3.0 and later.
  // For Spark 1.1.x and Spark 1.2.x, use rdd.registerTempTable("tweets") instead.
  rdd.toDF().registerTempTable("tweets")
)
ssc.start()
In the next paragraph, I have the SQL select statement
%sql select createdAt, count(1) from tweets group by createdAt order by createdAt
Which throws the following exception
org.apache.spark.sql.AnalysisException: Table not found: tweets;
at org.apache.spark.sql.catalyst.analysis.package$AnalysisErrorAt.failAnalysis(package.scala:42)
at org.apache.spark.sql.catalyst.analysis.Analyzer$ResolveRelations$.getTable(Analyzer.scala:305)
I was able to get the above example running by making the following edits. I am not sure if this change was needed due to the version upgrade of Spark (v1.6.3) or some other underlying architecture nuance I might be missing, but either way:
REF: SparkSQL error Table Not Found
In the second paragraph, instead of invoking the SQL syntax directly, try using the sqlContext as follows:
val my_df = sqlContext.sql("SELECT * from tweets LIMIT 5")
my_df.collect().foreach(println)
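One more detail, offered as an assumption rather than a confirmed cause: registerTempTable runs inside foreachRDD, so the tweets table only exists after the first streaming batch has completed. Querying before that point also produces "Table not found". A minimal sketch that waits before querying:
// Give the stream time to process at least one batch so that
// registerTempTable("tweets") has actually run.
Thread.sleep(10000)
val my_df = sqlContext.sql("SELECT createdAt, count(1) FROM tweets GROUP BY createdAt ORDER BY createdAt")
my_df.collect().foreach(println)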

Connection request failed while connecting to SharePoint Online

I am trying to connect to SharePoint Online. I used the following code, but the response shows that the request failed:
@Grab(group='org.codehaus.groovy.modules.http-builder', module='http-builder', version='0.5.0-RC2')
import groovyx.net.http.*
import static groovyx.net.http.ContentType.*
import static groovyx.net.http.Method.*
import groovyx.net.http.HTTPBuilder
import groovyx.net.http.RESTClient
import groovy.util.slurpersupport.GPathResult
import groovyx.net.http.URIBuilder
def authSite = new HTTPBuilder('https://xyzsharepoint.com')
authSite.auth.basic 'username', 'password'
authSite.request(GET) { req ->
    response.success = { resp ->
        println 'request was successful'
        assert resp.status < 400
    }
    response.failure = { resp ->
        println 'request failed'
        assert resp.status >= 400
    }
}
Please suggest where I went wrong.
