I have two Maven projects: one of them is fairly small and the other one is gigantic.
I am experiencing performance issues when saving and packaging JSP resources (Ctrl + Shift + F9) into the target directory. For the small project it usually takes around 1 second; for the bigger one it takes around 6 seconds.
How can I see the details of the compilation process? My primary goal is to copy the resources directly into the target directory without any processing.
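For what it's worth, what I'm after is roughly this kind of pom.xml resource configuration — a minimal sketch with filtering explicitly off so files are copied verbatim (the directory name here is just an example):

    <build>
      <resources>
        <resource>
          <!-- copy everything under this directory as-is; no token filtering -->
          <directory>src/main/webapp</directory>
          <filtering>false</filtering>
        </resource>
      </resources>
    </build>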
I have already filed an issue, but I have some doubts and suspect this might be my own fault.
I am building a simple Linux kernel with Buildroot and adding a small driver I wrote myself. I created the Config.in file and drivername.mk, and I can successfully select the driver in make menuconfig.
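For context, the two files follow roughly the usual pattern for an out-of-tree kernel-module package (a minimal sketch using Buildroot's kernel-module infrastructure; names and paths are placeholders):

    # package/mydriver/Config.in
    config BR2_PACKAGE_MYDRIVER
    	bool "mydriver"
    	depends on BR2_LINUX_KERNEL
    	help
    	  Small out-of-tree kernel driver (placeholder).

    # package/mydriver/mydriver.mk
    MYDRIVER_VERSION = 1.0
    MYDRIVER_SITE = $(TOPDIR)/../mydriver
    MYDRIVER_SITE_METHOD = local

    $(eval $(kernel-module))
    $(eval $(generic-package))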
When executing make to build the image, compilation proceeds correctly until my driver starts to compile. It appears to compile and create the image fine, but I get lots of warnings saying that various files in ./lib/gcc/arm-buildroot-linux-uclibcgnueabihf/ are touched by more than one package: [u'host-gcc-initial', u'host-gcc-final'].
Can anyone explain a bit about this issue and what is causing it? Do you need any more info to figure out what is happening? Is it safe to ignore these warnings?
Thanks in advance.
Actually, doing a search on 'touched by more than one package', I found http://lists.busybox.net/pipermail/buildroot/2017-October/205602.html, which says that this warning can safely be ignored if you're not doing a parallel build and aren't a kernel maintainer.
That said, if you're submitting code for inclusion in the Linux kernel, please be a good citizen and make sure you identify everything your code depends on. (I'm not actually an active kernel hacker, so I don't know what method they're using for this right now.)
The basic idea is that compiling involves a bunch of steps that need to happen in a logical order. In a small project, we simply specify the dependencies we know to put in, because we also wrote the code that created them. But with a project the size of the kernel, you can guarantee that not everyone does this. Some people only specify a dependency when the build breaks without it; if the default build order happens to work, a missing dependency can go unnoticed for years, until someone tries to update just the one thing that should have been declared as a dependency, and the code that depends on it doesn't get rebuilt as a result.
When you're building in parallel, on the other hand, things get a lot more complicated. Now every dependency really does need to be specified, because there is no longer any inherent, dependable order. Some people still build serially, others use two jobs, I'll use 8, and I've worked in groups inclined to use 30 because they're on a 32-processor machine that isn't busy during off hours. Suddenly the file you need from a directory that used to be processed 30 directories before yours is being generated at the same time as your file that needs it, because the dependency was never listed, and the build is free to run anything whose declared prerequisites are satisfied.
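A tiny hypothetical Makefile illustrates the failure mode (file names invented; recipe lines must start with a tab):

    # gen.h is generated, and parser.o genuinely needs it,
    # but the rule below never says so.
    all: gen.h parser.o

    gen.h:
    	./generate-header.sh > gen.h

    parser.o: parser.c          # missing prerequisite: gen.h
    	cc -c parser.c -o parser.o

A serial make happens to build gen.h first because it comes first in the all list; make -j2 may compile parser.c before gen.h exists and fail, or worse, silently pick up a stale copy. Declaring the prerequisite (parser.o: parser.c gen.h) makes any -j level build in a correct order.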
I have a class that parses two large CSV files (~90K rows and 11 columns in the first, ~20K rows and 5 columns in the second). According to the specification I'm working with, the CSV files can be changed externally (rows removed or added; the columns, as well as the paths, remain constant). Such updates can happen at any time (though it's highly unlikely that updates will arrive less than a couple of minutes apart), and an update to either of the two files has to terminate the current processing of all that data (CSV, XML from an HTTP GET request, UDP telegrams), followed by re-parsing the content of both files (or just one, if only one has changed).
I keep the CSV data in memory (considerably reduced, since I apply multiple filters to remove unwanted entries) to speed up working with it and to avoid unnecessary I/O operations (opening, reading, and closing the files).
Right now I'm looking into QFileSystemWatcher, which seems to be exactly what I need. However, I'm unable to find any information on how it actually works internally.
Since all I need is to monitor two files for changes, the number of files shouldn't be an issue. Do I need to run the watcher in a separate thread (it is part of the same class where the CSV parsing happens), or is it safe to say that it can run without too much fuss, i.e. that it works asynchronously, like QNetworkAccessManager? My dev environment for now is a 64-bit Ubuntu VM (VirtualBox) on a relatively powerful host (an HP Z240 workstation), but the target system is an embedded one. While parsing the CSV files takes 2-3 seconds at most, I don't know what the performance impact will be once the application is deployed, so additional overhead is a concern of mine.
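For reference, the wiring I have in mind is roughly this minimal sketch (Qt 5, invented paths), with the watcher living on the thread that runs the event loop:

    // QFileSystemWatcher delivers its signals through the event loop,
    // so no separate thread is needed for the watching itself.
    #include <QCoreApplication>
    #include <QFileSystemWatcher>
    #include <QDebug>

    int main(int argc, char *argv[])
    {
        QCoreApplication app(argc, argv);

        QFileSystemWatcher watcher;
        watcher.addPath("/data/first.csv");    // hypothetical paths
        watcher.addPath("/data/second.csv");

        QObject::connect(&watcher, &QFileSystemWatcher::fileChanged,
                         [&watcher](const QString &path) {
            qDebug() << "changed:" << path;
            // Some writers replace a file on save (delete + recreate),
            // which can drop it from the watch list; re-add it if so.
            if (!watcher.files().contains(path))
                watcher.addPath(path);
            // ...abort current processing and trigger re-parsing here...
        });

        return app.exec();
    }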
We have a set of utility programs that read an .xlsx file for input data and generate reports; Apache POI is used for this purpose. The Excel file has 8 sheets, each with an average of 50 rows and 20 columns of data. Everything was working fine on an ordinary Windows 7 box (read: the developers' machines), where reading the file finished in a few seconds.
Recently we moved these jobs to a Windows Server 2012 R2 box and noticed that the last sheet in the Excel file takes a very long time to finish reading. To confirm this is not a data issue, I duplicated the last sheet and re-ran the job: the second-to-last sheet (which was the last one in the previous run) finished reading in milliseconds, while the new last sheet (the duplicate) was again stuck for 15 minutes. My best guess is that the time taken to close the file is getting too high, but that is just a guess with no concrete evidence, and even if it is the case I'm not sure why. The only difference between the working and non-working boxes is the OS; all other configurations are similar. I have analyzed the heap and thread dumps and found no issues.
Are there any known compatibility issues between POI and Windows Server boxes? Or is it something in our code? We are using the POI-XSSF implementation.
OK, we finally found the problem: the issue is with the VM itself. Disk I/O was constantly at 100%, and file reads/writes were taking a very long time to complete, which caused the program to get stuck. However, we couldn't identify why the disk I/O was so high; we tried suggestions from some blogs, but nothing worked, so we downgraded the OS to Windows Server 2008 and it worked well.
Note that this had nothing to do with POI or our code; it was purely a VM/OS issue.
Evening all,
I am playing with Orchard CMS and I have a quick question. I keep my source code on a 10GB partition on my PC; I downloaded the Orchard source (~40MB) and placed it on that drive.
I started Visual Studio, opened the solution, and kicked off a build. I realised quite quickly that it was going to take some time, so I went off and got a drink, and came back to find that the build had errored out and that the last 3GB of disk space on my dev drive had been filled. This can't be normal, can it?
Does anyone know how much free disk space I'll need to build Orchard from source? I am limited by the size of the SSD in my laptop, and I'm not going to upgrade it just so I can use Orchard!
The problem is that the vanilla source projects don't disable "Copy Local" (Private) on their references. Therefore every project in the solution copies all of its references into its own bin folder. That obviously isn't necessary here and multiplies the size enormously, since these references are shipped together anyway and would be better included just once.
You have 2 options:
(Recommended) Don't compile the source. I've been writing modules on top of the precompiled version and have never needed to change the core source; doing so may do more harm than good. But if you really need to compile:
Force references to not copy locally, either manually for every single reference in every single project, or via a macro or some VS trick that enforces it globally; the manual edit is sketched below.
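In each .csproj, "Copy Local" is persisted as the Private element on the reference (assembly name invented here):

    <Reference Include="SomeSharedAssembly">
      <!-- Copy Local = false: don't duplicate this DLL into bin\ -->
      <Private>False</Private>
    </Reference>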
I think I’ve run into a bit of a tricky problem to solve. I need to get a count of our projects that are using code analysis. This is what I've done so far:
First, I installed AstroGrep, a lightweight grep utility for Windows.

Then I ran AstroGrep and pointed it at my local C:\DevTfs2010\Apps. It appears that 272 out of 354 .csproj files contain this text: <RunCodeAnalysis>true</RunCodeAnalysis>. The problem with this approach is that I'm only running it against what I have on my laptop; there is much more in TFS.

So I remoted into the build server, because I thought I could just run AstroGrep there. The problem with this approach is that I would be counting the same projects many times: once for the Main branch and once more for each version that has been released.
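For reference, the flag I'm grepping for lives in a PropertyGroup in each project file, something like this minimal sketch:

    <PropertyGroup>
      <!-- enables Code Analysis on build for this configuration -->
      <RunCodeAnalysis>true</RunCodeAnalysis>
    </PropertyGroup>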
How can I get a count of projects using code analysis without including all of the released versions?
I'll share how I was able to make this work. If anyone has a better way, please share.
On our build server, run AstroGrep to search the .csproj files for code analysis being set to true.
Copy to Excel and use a formula to display a 1 if the path contains “main.”
Note: The reason I used "main" is because all of our main trunks have the word "main" in the folder structure. This eliminates counting all the release versions.
Formula: =IF(ISNUMBER(SEARCH("main",A1)), 1, 0)
Count the total for Core and for Apps (our two main team projects), and there’s your number.
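To total each team project, a plain sum over the helper column works, e.g. (assuming the 1/0 formula is filled down column B; column letters here are just an example): =SUM(B:B)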