Can Sonar Scanner be run on multiple cores? - multithreading

SonarQube 6.7.1. Sonar Scanner 3.0.3.778. Sonar Scanner plugin 2.6.1 for Jenkins. Postgres 9.6.6 database. Everything is running on one Solaris 11 Unix box. The project language is OpenEdge ABL. I found no mention of running the scanner on multiple cores anywhere in the documentation.
A question on here from 2012 seems to indicate it could not be run on multiple cores.
Sonar Multicore
Another question, circa 2015, determined that the scanner cannot take advantage of Maven's multithreaded builds.
Maven Sonarqube Plugin: Multithreading
It seems that one plugin (SonarCFamily) offers a way to scan projects on multiple cores, though our application is not written in C. https://docs.sonarqube.org/display/PLUG/Multithreaded+Code+Scan
My interest is in speeding up the Sonar Scanner analysis. Analysis of our (admittedly large, ~1 million line) codebase took a full 24 hours. The only plugin installed is the base OpenEdge language plugin, and only the default OpenEdge rules are active.

Related

Is it bad practice to disable Antivirus software on a build server for more performance?

I'm trying to optimize our software builds (Azure DevOps Server build agents) as far as I can and was curious whether disabling Windows Defender on our Windows Servers would be good or bad practice.
In our case, MsMpEng.exe takes about 10-15% CPU utilization while a build is running.
If I exclude the agent processes (AgentService.exe, Agent.Worker.exe, Agent.Listener.exe) and the work directory, I'm sure MsMpEng.exe would get down to 5% or less, which should result in faster compile times.
I think it's up to you:
If you use isolated build agents and build only internal solutions, you may disable the antivirus or exclude the build agent's work folder.
If you download external packages, you have to keep the antivirus. Additionally, consider using Snyk or WhiteSource.

Kafka, Linux/Docker and IntelliJ on Windows 10

I'm going down the 100DaysKafka path. It appears that the Confluent platform only runs on Linux via Docker. I do my Java development using IntelliJ and Windows 10. Is this a dead-end waste of time or can IntelliJ hook into the running Linux Kafka instance? Thanks!
via Docker
This is false. Confluent Platform doesn't support Windows, in the sense that some of its tools, such as Schema Registry, KSQL, and Control Center, don't offer startup scripts for Windows.
That doesn't mean it's not possible to run Kafka or Zookeeper themselves, which sounds like all you want - see How to install Kafka on Windows?
You don't need IntelliJ to "hook into" anything. That's also not really the proper terminology unless you are planning on actually contributing to the Kafka source code. If you're just writing a client, it makes a TCP connection, which works fine over localhost.
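To make that concrete, here is a minimal sketch using the kafka-python package (an assumption on my part - the question mentions Java, where the standard Kafka clients work the same way). The topic name is a placeholder, and it assumes the Docker container publishes port 9092 with advertised.listeners pointing at localhost:

from kafka import KafkaProducer, KafkaConsumer

# A client only needs a TCP connection to the broker; here the Dockerized
# broker is assumed to be reachable from Windows on localhost:9092.
producer = KafkaProducer(bootstrap_servers="localhost:9092")
producer.send("test-topic", b"hello from Windows")
producer.flush()

consumer = KafkaConsumer(
    "test-topic",
    bootstrap_servers="localhost:9092",
    auto_offset_reset="earliest",
    consumer_timeout_ms=5000,  # stop waiting if nothing arrives within 5 s
)
for message in consumer:
    print(message.value)
    break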

Inferencing with a tf.lite model on 32-bit Linux

So I am 400 hours into a project at work building an automated image classification pipeline. I have overcome many hurdles and am about finished with the first alpha. Everything runs in Docker containers on my workstation. The only thing left is to build the inference service. So I set up one more Docker container, pull in my libraries, set up the Flask endpoints, and copy the .tflite file to the shared volume; everything seems to be in order - I can hit the API with Chrome and I get the right responses.
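For reference, a hypothetical sketch of what such an endpoint might look like (the model path, route name, and input handling here are assumptions, not the actual service):

import numpy as np
import tensorflow as tf
from flask import Flask, jsonify, request

app = Flask(__name__)

# Load the TFLite model once at startup from the shared volume.
interpreter = tf.lite.Interpreter(model_path="/shared/model.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

@app.route("/classify", methods=["POST"])
def classify():
    # Expect a JSON array that can be reshaped to the model's input shape.
    data = np.array(request.get_json(), dtype=np.float32)
    data = data.reshape(input_details[0]["shape"])
    interpreter.set_tensor(input_details[0]["index"], data)
    interpreter.invoke()
    output = interpreter.get_tensor(output_details[0]["index"])
    return jsonify(output.tolist())

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5000)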
So I very happily report that the project is ready for testing 5 weeks early! I explain that all we have to do is install Docker, build and run the Dockerfile, and we are ready to go. To this my coworker responds: "the target machines are 32-bit! No Docker!"
Upgrading to 64-bit is off the table.
I tried to compile TensorFlow for 32-bit...
I want to add a single-board PC (x64) to the machine network and run the Docker container from there, but management wants a solution that does not require retrofitting.
The target machines have very unstable internet connections managed by other companies in just about every country on earth, so a cloud solution is not going to work (plus I need sub-50 ms latency).
Does anyone have an idea of how to tackle this challenge? At this point I think I am stuck recompiling TF for 32-bit, but I don't know how!
The target machines are running a custom in-house distro of Debian 6, 32-bit.
The target machines are old and have outdated software but were very high end at the time they were built.
It's not clear which 32-bit architecture you want to use. I guess it's ARM32.
If that's the case, you can build TF or TFLite for ARM32.
Check the following links.
https://www.tensorflow.org/install/source_rpi
https://www.tensorflow.org/lite/guide/build_rpi
Though they're about the Raspberry Pi, they should give you some idea of how to build it for ARM32.
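If a build does succeed, note that inference itself only needs the small standalone interpreter package rather than full TensorFlow. A minimal sketch, assuming you end up with a tflite_runtime wheel for the 32-bit target (the model path is a placeholder):

import numpy as np
import tflite_runtime.interpreter as tflite

# Load the model and allocate input/output tensors.
interpreter = tflite.Interpreter(model_path="model.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

# Feed dummy data shaped and typed to whatever the model expects.
dummy = np.zeros(input_details[0]["shape"], dtype=input_details[0]["dtype"])
interpreter.set_tensor(input_details[0]["index"], dummy)
interpreter.invoke()
print(interpreter.get_tensor(output_details[0]["index"]))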

Two Redmine 1.4 instances on same machine with very different performance on WEBrick

I installed Redmine 1.4 on Windows Server 2003 with MySQL. As the instance came to be used by more people over time, I needed another one for testing (i.e. a development environment). As I also wanted to be able to test plugins without the risk of destroying the production Redmine instance, I copied the original Redmine folder (say redmine_prod) to another one (say redmine_devel). I created a new, empty Redmine database for redmine_devel. I defined the production environment in the first one and development in the second. Both instances run on WEBrick started as a Windows service, on different ports. Yet there is a big difference in performance between these two instances - the old production one runs very fast, whereas development runs slowly (several seconds to bring up pages, which doesn't improve over time).
I also tested running redmine_devel on the Thin server, which doesn't improve the performance one bit.
What can be the reason? They both run under literally the same conditions.
Any hints appreciated.
OK, it's the default log level for environments other than production in Redmine: production uses :info as the default, whereas other environments use :debug. This can be altered by editing config\environments\[your_env].rb and adding:
config.logger.level = Logger::INFO
to the chosen environment. There are also other options available, naturally.

Linux development environment for a small team

Approach (A)
In my experience, a small team has a dedicated server with all development tools (e.g. compiler, debugger, editor, etc.) installed on it. Testing is done on a dedicated per-developer machine.
Approach (B)
At my new place there's a team utilizing a different approach. Each developer has a dedicated PC which is used both as a development and a testing server. For testing, an in-house platform is installed on the PC to run the application on top of it. The platform executes several modules in kernel space and several processes in user space.
Problem
Now two additional small teams (~6 developers in total) are joining to work on exactly the same OS and development environment. These teams don't use the aforementioned platform and can run the application on plain Linux, so there's no need for a dedicated testing machine. We'd like to adopt approach (A) for all 3 teams, but the server must be stable, and installing the in-house platform described above on it is highly undesirable.
What would you advise?
What is the practice for development environments at your place - one server per team, or a dedicated PC/server per developer?
Thanks
Dima
We've started developing on VMs that run on the individual developers' computers, with a common subversion repository.
Benefits:
Developers work on multiple projects simultaneously; one VM per project.
It's easy to create a snapshot (or simply to copy the VM) at any time, particularly before those "what happens if I try something clever" moments. A few clicks will restore the VM to its previous (working) state. For you, this means you needn't worry about kernel-space bugs "blowing up" a machine.
Similarly, it's trivial to duplicate one developer's environment so, for example, a temporary consultant can help troubleshoot. Best-practices warning: It's tempting to simply copy the VM each time you need a new development machine. Be sure you can reproduce the environment from your repository!
It doesn't really matter where the VMs run, so you can host them either locally or on a common server; the developers can still either collaborate or work independently.
Good luck — and enjoy the luxury of 6 additional developers!
