We have integrated an Azure Pipeline with the Synopsys Black Duck task, and Black Duck limits a project to 10 versions. Every pipeline run creates a new version, so the pipeline only succeeds for the first 10 runs; the 11th run fails because of the version limit in Black Duck. We can delete the older versions manually in Black Duck, but instead of doing that by hand, is it possible to do it automatically from the ADO pipeline by adding a task?
In short, can we use a PowerShell or other task in the pipeline that automatically deletes versions when the count reaches 10?
Thanks..
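One way to approach this is a small script task that calls the Black Duck REST API before the scan and deletes the oldest versions. The sketch below is a rough shape only, not a tested integration: the endpoint paths, the bearer-token auth, and the "createdAt" and "_meta.href" field names are assumptions you should check against your Hub's REST API documentation.

```python
# Sketch: trim old Black Duck project versions before a scan runs.
# The REST endpoints and response field names here are assumptions;
# verify them against your Black Duck Hub's API docs.
import json
import urllib.request

def versions_to_delete(versions, keep=9):
    """Given version dicts carrying a sortable 'createdAt' timestamp,
    return the oldest ones beyond the newest `keep` (keeping 9 leaves
    room for the version the current pipeline run will create)."""
    ordered = sorted(versions, key=lambda v: v["createdAt"])
    excess = len(ordered) - keep
    return ordered[:excess] if excess > 0 else []

def trim_versions(base_url, token, project_id, keep=9):
    def call(url, method="GET"):
        req = urllib.request.Request(
            url, method=method,
            headers={"Authorization": "Bearer " + token})
        with urllib.request.urlopen(req) as resp:
            return json.load(resp) if method == "GET" else None

    # Assumed endpoint: list versions for a project.
    listing = call(f"{base_url}/api/projects/{project_id}/versions")
    for version in versions_to_delete(listing["items"], keep):
        # Assumption: each version's own URL is in _meta.href.
        call(version["_meta"]["href"], method="DELETE")
```

Run from a PowerShell or Python task in the pipeline, before the Black Duck scan step, with the Hub URL and an API token supplied as secret pipeline variables.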
I have a Haskell project with 300+ files (mostly auto-generated). I can build it in a few minutes on my 4-year-old processor (by specifying ghc-options: $everything: -j in stack.yaml), but on Travis things become really slow. It seems that modules are processed sequentially, and even a single module's compilation takes much longer (about one second on my machine versus tens of seconds on Travis). Eventually I hit the Travis timeout (50 minutes for a single job). Is there any way to speed up the Travis build, or to split the compilation across multiple jobs? I would accept a paid plan from Travis; I need a solution that just works without complex setup.
This configuration uses stages: https://github.com/google/codeworld/blob/f20020ca78fee51afdf6a5ef13eacc6d15c15724/.travis.yml
However, there are unpredictable problems with the cache, or perhaps problems with the Travis config: https://travis-ci.org/google/codeworld/builds/626216910 Also, I am not sure how Travis utilizes the cache(s) for simultaneous builds.
https://github.com/google/codeworld/blob/f20020ca78fee51afdf6a5ef13eacc6d15c15724/.travis.yml#L52-L63 , https://github.com/google/codeworld/blob/f20020ca78fee51afdf6a5ef13eacc6d15c15724/.travis.yml#L74 , and the redundant calls to stack upgrade --binary-only are attempts to work around these issues.
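For reference, the overall shape of a staged .travis.yml looks like this; the stage names, build targets, and cache paths below are placeholders, not taken from the codeworld config:

```yaml
# Illustrative sketch: split a long stack build across Travis stages so
# no single job hits the 50-minute limit. Each stage starts from the
# cache saved by earlier jobs, which is exactly where cache flakiness
# can bite.
language: generic
cache:
  directories:
    - $HOME/.stack
    - .stack-work
jobs:
  include:
    - stage: deps
      script: stack build --only-dependencies
    - stage: build            # jobs within one stage run in parallel
      script: stack build component-a   # placeholder target
    - stage: build
      script: stack build component-b   # placeholder target
    - stage: test
      script: stack test
```

Stages run sequentially, while jobs inside a stage run in parallel, so the dependency-building work is paid once and the expensive module compilation can be split.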
I'm a bio major who only recently started doing serious coding for research. To support research, our campus has an on-campus supercomputer for researchers' use. I work on it remotely, accessing it through a Linux shell and submitting jobs. I'm writing a job-submission script to align a number of genomes using a program installed on the machine called Mauve. I've run a Mauve job successfully before and adapted the script from that job for this one, but this time I keep getting this error:
Storing raw sequence at
/scratch/addiseg/Elizabethkingia_clonalframe/rawseq16360.000
Sequence loaded successfully.
GCA_000689515.1_E27107v1_PRJEB5243_genomic.fna 4032057 base pairs.
Storing raw sequence at
/scratch/addiseg/Elizabethkingia_clonalframe/rawseq16360.001
Sequence loaded successfully.
e.anophelisGCA_000496055.1_NUH11_genomic.fna 4091484 base pairs.
Caught signal 11
Cleaning up and exiting!
Temporary files deleted.
So I've got no idea how to troubleshoot this. I'm sorry if this is super basic, but I don't know how to troubleshoot at a remote site; all the possible solutions I've seen so far require access to the hardware or software, neither of which I control.
My current submission script is this:
module load mauve
progressiveMauve --output=8elizabethkingia-alignment.etc.xmfa --output-guide-tree=8.elizabethkingia-alignment.etc.tree --backbone-output=8.elizabethkingia-alignment.etc.backbone --island-gap-size=100 e.anophelisGCA_000331815.1_ASM33181v1_genomicR26.fna GCA_000689515.1_E27107v1_PRJEB5243_genomic.fna e.anophelisGCA_000496055.1_NUH11_genomic.fna GCA_001596175.1_ASM159617v1_genomicsrr3240400.fna e.meningoseptica502GCA_000447375.1_C874_spades_genomic.fna e.meningoGCA_000367325.1_ASM36732v1_genomicatcc13253.fna e.anophelisGCA_001050935.1_ASM105093v1_genomicPW2809.fna e.anophelisGCA_000495935.1_NUHP1_genomic.fna
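Signal 11 is a segmentation fault inside progressiveMauve itself, so there is usually nothing wrong with the submission script as such. One low-effort check that needs no hardware access is making sure the input FASTA files are intact, since a truncated or malformed input is a common (though not certain) cause of crashes in alignment tools. A small stdlib-only sketch of such a check:

```python
# Sketch: sanity-check a FASTA file's text before submitting the job.
# This rules out one common crash cause (bad input); it is not a
# diagnosis of the segfault itself.
def fasta_problems(text):
    """Return a list of human-readable problems found in FASTA text."""
    problems = []
    lines = [l for l in text.splitlines() if l.strip()]
    if not lines:
        return ["file is empty"]
    if not lines[0].startswith(">"):
        problems.append("does not start with a '>' header line")
    if not any(not l.startswith(">") for l in lines):
        problems.append("contains headers but no sequence lines")
    # IUPAC nucleotide codes plus gap characters
    allowed = set("ACGTUNRYSWKMBDHV-")
    bad = [l for l in lines
           if not l.startswith(">") and not set(l.upper()) <= allowed]
    if bad:
        problems.append("has non-nucleotide characters, e.g. %r" % bad[0][:40])
    return problems
```

Run it over each .fna file on the cluster's login node; if all files come back clean, the next things to look at are the job's memory limit and asking the cluster admins whether the mauve module is known to crash on large inputs.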
I think I’ve run into a bit of a tricky problem to solve. I need to get a count of our projects that are using code analysis. This is what I've done so far:
First, I installed AstroGrep, a lightweight grep utility for Windows.
Then I ran AstroGrep against my local C:\DevTfs2010\Apps. It appears that 272 out of 354 .csproj files contain this text: <RunCodeAnalysis>true</RunCodeAnalysis>. The problem with this approach is that I'm only searching what I have on my laptop; there is much more in TFS.
So I remoted into the build server, thinking I could just run AstroGrep there. The problem with that approach is that I would count the same projects many times: once for the Main branch and once for each version that has been released.
How can I get a count of projects using code analysis without including all of the released versions?
I'll share how I was able to make this work. If anyone has a better way, please share.
On our build server, run AstroGrep to search the .csproj files for code analysis being set to true.
Copy the results to Excel and use a formula to display a 1 if the path contains "main".
Note: The reason I used "main" is because all of our main trunks have the word "main" in the folder structure. This eliminates counting all the release versions.
Formula: =IF(ISNUMBER(SEARCH("main",A1)), 1, 0)
Count the total for Core and for Apps (our two main team projects), and there’s your number.
I recently started deploying my test code onto an actual device and ran some of the sample code provided by Xamarin introducing their various technologies. I then hit an issue with their garbage collector while testing sensors: with the latest version, the GC runs when a certain threshold is reached, and that makes the device unresponsive. Using the code from http://docs.xamarin.com/android/recipes/OS%2f%2fDevice_Resources/Accelerometer/Get_Accelerometer_Readings, but changing it to add two more sensors (a gyroscope and a gravity sensor), the project lasts about 30 seconds before the GC begins to run. I noticed that every time you reference the e.Values list in the OnSensorChanged function, more references are created. Is there a way to release those references? The app I'm working on requires all three sensors and needs to run for about 4 to 5 minutes (it's just one section of the app, but a really important one). Thanks in advance for any help you can give me.
The following link explains how the issue arises, as well as a fix that resolves it completely:
https://bugzilla.xamarin.com/show_bug.cgi?id=1084#c6
I'm after a method of converting a single program to run on multiple computers on a network (think "grid computing").
I'm using MSVC 2007 and C++ (non-.NET).
The program I've written is ideally suited to parallelisation (it does analysis of scientific data), so the more computers the better.
The classic answer for this is MPI (Message Passing Interface). It takes a bit of work to restructure your program around message passing, but the end result is that you can easily launch your executable across a cluster of machines running an MPI daemon.
There are several implementations. I've worked with MPICH, but I might consider doing this with Boost MPI (which didn't exist last time I was in the neighborhood).
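To give a feel for the programming model, here is the scatter/compute/gather shape that MPI programs take, sketched with Python's stdlib multiprocessing instead of real MPI so it is runnable on one machine. With MPICH or Boost.MPI the same roles are played by MPI_Scatter and MPI_Gather across machines; the analysis function here is a placeholder.

```python
# Sketch of the master/worker pattern MPI imposes: split the dataset,
# farm chunks out to workers, gather and reduce the partial results.
# multiprocessing stands in for MPI and only spans one machine.
from multiprocessing import Pool

def analyze(chunk):
    # Placeholder for the real per-chunk scientific analysis.
    return sum(x * x for x in chunk)

def run_analysis(data, workers=4):
    chunks = [data[i::workers] for i in range(workers)]  # scatter
    with Pool(workers) as pool:
        partials = pool.map(analyze, chunks)             # compute
    return sum(partials)                                 # gather/reduce
```

The key design property is that workers never share memory: each gets its own chunk and sends back only its partial result, which is exactly what lets the same structure scale from one machine to a cluster under MPI.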
Firstly, this topic is covered here:
https://stackoverflow.com/questions/2258332/distributed-computing-in-c
Secondly, a search for "C++ grid computing library", "grid computing for visual studio" and "C++ distributed computing library" returned the following:
OpenMP + OpenMPI. OpenMP handles running a single C++ program on multiple CPU cores within the same machine; OpenMPI handles the messaging between multiple machines. OpenMP + OpenMPI = grid computing.
POP-C++, see http://gridgroup.hefr.ch/popc/.
Xoreax Grid Engine, see http://www.xoreax.com/high_performance_grid_computing.htm. Xoreax focuses on speeding up Visual Studio builds, but the Xoreax Grid Engine can also be applied to generic applications. The page at http://www.xoreax.com/xge_xoreax_grid_engine.htm includes the quote: "Once a task-set (a set of tasks for distribution along with their dependency definitions) is defined through one of the interfaces described below, it can be executed on any machine running an IncrediBuild Agent." See also the accompanying CodeProject article at http://www.codeproject.com/KB/showcase/Xoreax-Grid.aspx.
Alchemi, see http://www.codeproject.com/KB/threads/alchemi.aspx.
RightScale, see http://www.rightscale.com/pdf/Grid-Whitepaper-Technical.pdf. A quote from the examples section of this paper: "Pharmaceutical protein analysis: Several million protein compound comparisons were performed in less than a day – a task that would have taken over a week on the customer’s internal resources ..."