We develop a Chrome plug-in/extension in three versions: dev, QA, and production. The dev and QA versions are published to testers only; they are our development versions, and we sometimes publish multiple versions in the same day as bug fixes or features are released. There should be minimal review required, as these are private plug-ins.
New versions of dev and QA used to be published within a few minutes of being uploaded. Recently, however, it has begun to take upwards of two days to publish new versions (they get stuck in pending review). Because these are development versions and we are releasing fixes all the time, this is unworkable.
Has anybody else encountered this? How can we restore the < 5 minute publish times that we used to have?
Is there a chart or statistic showing which versions of Shopware 6 are actually used by online shops?
Background: when developing custom plugins, it's hard to cover all versions from 6.1 to the current release (6.4).
I would suggest supporting the releases since the latest major release, 6.4, which was first released one and a half years ago. That's ample time for users to have updated to one of the minor releases since then. When you offer plugins in the community store, you get an overview of which Shopware versions your plugin is used with. Looking at the data for my plugins, I can tell you that the vast majority are now on 6.4. Without breaking changes, it should not be a problem to support all 6.4 releases, and with the 6.5 major release coming next year, you should be able to cover a significant user base by supporting both the upcoming and the current major release.
Is there an explanation why the latest NodeJS version is 6.2.1 but the LTS is 4.4.5? It might seem odd, but shouldn't they have stabilized version 5 first before working on or releasing version 6?
https://nodejs.org/en/
The reasoning behind this is that Node works to an 'LTS schedule':
New major version releases (i.e. the x in x.y.z) are created from the master branch every six months - even-numbered releases in April, odd-numbered releases in October.
Whenever a new odd-numbered release comes out, the previous even-numbered release moves into LTS, meaning there should be no breaking changes to that version from that point on.
The LTS version will be supported for 18 months, after which it will go into maintenance mode for 12 months, meaning it will only receive critical/security-related updates. Because of this, there will never be more than two LTS versions active at the same time.
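The cadence above can be sketched in a few lines. This is a minimal model of the schedule as described; the function names are my own, not part of any Node tooling:

```python
# Minimal sketch of the Node.js release cadence described above.
# Even-numbered majors ship in April and are LTS-eligible;
# odd-numbered majors ship in October and never become LTS.

def release_month(major: int) -> int:
    """Month a major version is cut from master: April (4) for even,
    October (10) for odd."""
    return 4 if major % 2 == 0 else 10

def is_lts_eligible(major: int) -> bool:
    """Only even-numbered majors are promoted to LTS, which happens
    when the next odd-numbered release comes out."""
    return major % 2 == 0

# An LTS line gets 18 months of active support, then 12 months of
# maintenance mode (critical/security fixes only).
ACTIVE_LTS_MONTHS = 18
MAINTENANCE_MONTHS = 12
```

So in the question's example, v4 (even) is in LTS while v6 (even) is the new cutting-edge release; v5 (odd) was never meant to be stabilized into an LTS at all.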
If it helps you visualize it, there's a diagram of the schedule in the Node LTS GitHub repository.
This allows there to be a predictable release schedule and migration path for those who have to support their Node infrastructure long-term, while keeping the pace of development moving for those who want to stay on the cutting-edge of new features. Bear in mind that under SemVer (the versioning scheme that Node uses), breaking changes are only allowed in major-version releases - having a regular schedule for these coming out ensures that these changes can be tested before they get added to an LTS version further down the line.
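That SemVer rule is easy to state in code. A minimal sketch, with hypothetical helper names of my own:

```python
# Under SemVer, breaking changes are only allowed when the major
# component (the x in x.y.z) increases.

def major(version: str) -> int:
    """Extract the major component from an x.y.z version string."""
    return int(version.split(".")[0])

def may_break(old: str, new: str) -> bool:
    """True if the upgrade from old to new is permitted to contain
    breaking changes under SemVer."""
    return major(new) > major(old)
```

For example, moving from 4.4.5 to 6.2.1 may include breaking changes, while moving from 6.2.0 to 6.2.1 may not.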
For more info, I recommend taking a look at the Node LTS GitHub - this is where I got all this information from, and it's a pretty helpful resource.
Does Android Studio have a release schedule? Chrome releases every 6 weeks and I was wondering if Android Studio has something similar.
You can build from source any time you want: http://tools.android.com/build#TOC-Building-Android-Studio
We try to do a build that goes out on the canary channel every week, usually late in the week, but there may be issues that prevent release on any given week. When that happens, we sometimes release it early the next week, and sometimes we punt until the next week's release. It depends on a number of factors.
During development, if a canary channel build looks good and doesn't cause severe problems for some period of time, we will promote it to dev channel. The idea is that canary channel gets close to the bleeding edge, and dev channel updates are less frequent and more stable.
We're working up to the full 1.0 release (sorry I'm being vague about the timing, but I'm not announcing a release date here), so the build schedule has been a little topsy-turvy of late. I expect that after 1.0 ships and we get past any post-release maneuvers, we'll fall back to that rhythm.
In the beta period running up to the release, the beta channel is a bit like the dev channel -- it gets less frequent updates that are better-vetted.
I have a very strange problem, and I suspect my last hope for help is this community.
I have a build system that compiles three different elements of a Wise installer package. The symptom I am seeing is that build times for these projects specifically degrade over time on the virtual machines. This is occurring on multiple virtual machines and has been happening since November 2013. Fortunately, I had a habit of cloning the virtual machines, and I have a clone from early December when the symptoms were in their early stages.
For instance, a normal build should finish in 48 to 50 minutes. Times degraded so slowly that by the time I noticed the problem, they had reached about 1 hr 45 min. I do not normally monitor the performance of the system, only the results of the build, so I never knew. Restoring from the cloned machine brings the build back to approximately 1 hr 12 min.
Analyzing the build times, all of the extra time is being spent in the Wise installer. I have tried uninstalling and reinstalling the application, cleaning the temp directory, running chkdsk, and other normal debugging steps.
One of the Wise installer projects is a merge module that requires a recompile because it updates a database file. This should take only 8 minutes to compile; it now takes over half an hour.
Can anyone think of what I could look for to diagnose this problem? What could degrade the system so that, over the course of a month, the Wise installer compile can lose up to 45 minutes of build time?
Build machine OS: Windows XP SP3
HDD: SSD
Other builds run on the same host machine and can run concurrently, but prior to November 2013 this had no impact on performance.
If the Wise installer is causing the issue, you could try running Process Monitor (ProcMon) to see what is causing the hang. Windows Performance Recorder is another useful tool, as explained recently by Alois Kraus: http://geekswithblogs.net/akraus1/archive/2014/04/30/156156.aspx
The problem is actually with our Visual Studio build.
For each build we increment the version number of the binaries. We have a process that runs and updates the value in all of the proj files.
When Visual Studio compiles, it adds assembly information to the registry for every .NET file. Since the version number changed, the registry entry changes too. The registry is never cleaned up, so we had accumulated a year or more of registry entries.
When the Wise installer runs, it scans through the registry; I am not yet certain why. Using ProcMon, we can see the process running and reading the registry. Because the registry has bloated so much, it slows down the build.
Now the big question is how to prevent this problem on a new build machine. How do I clean up all the CLSID entries for our build?
On a pristine Windows 7 build system, the entire build completes in under 20 minutes!
Also, I made one more change to the Wise studio settings for the project: instead of high compression, I switched to MSZIP compression. Our output file is 50 MB larger, but the build time is far quicker, even on a new machine.
SOLUTION:
Add a clean solution task BEFORE changing the version number. This one change has resulted in a very consistent build time over several months.
The problem was that the company had changed the process for updating the assembly version and file version values. Little did we know that a rebuild was leaving assembly information in the registry on every build. The task list now looks like:
Clean .NET solutions
Increment the version for file version and assembly version.
Build .NET solutions
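As a rough sketch of that task order, assuming an MSBuild-style build; the `msbuild` invocations and the four-part version regex are my assumptions, not the original build scripts:

```python
import re
import subprocess

# Four-part version number of the form 1.2.3.4, as used in
# AssemblyVersion/AssemblyFileVersion attributes.
VERSION_RE = re.compile(r"\d+\.\d+\.\d+\.\d+")

def bump_version(text: str, new_version: str) -> str:
    """Rewrite every four-part version number in a project or
    AssemblyInfo file's text."""
    return VERSION_RE.sub(new_version, text)

def build(solution: str) -> None:
    # 1. Clean FIRST, while the project files still carry the OLD
    #    version number, so the clean step matches what the previous
    #    build registered.
    subprocess.run(["msbuild", solution, "/t:Clean"], check=True)
    # 2. Increment the file version and assembly version here
    #    (run bump_version over each project file on disk).
    # 3. Build with the new version number.
    subprocess.run(["msbuild", solution, "/t:Build"], check=True)
```

The key point is only the ordering: cleaning before the version bump is what kept the registry from accumulating stale entries.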
Hopefully this helps anyone else who encounters the same problem.
When using 3rd party libraries/components in production projects, are you rigorous about using only released versions of said libraries?
When do you consider using a pre-release or beta version of a library (in dev? in production, under certain circumstances)?
If you come across a bug or shortcoming of the library and you're already committed to using it, do you apply a patch to the library or create a workaround in your code?
I am a big fan of not writing code when someone else already has a version that I could not produce in a reasonable amount of time, or that would require me to become an expert in something that won't matter in the long run.
There are several open-source components and libraries I have used in our production environment, such as Quartz.NET, Log4Net, NLog, SharpFTPLibrary (heavily modified), and more. Quartz.NET was in beta when I first released an application using it into production. It was a very stable beta, and since I had the source code I could debug any issues (and there were a few). When I encountered a bug or an error, I would fix it and post the issue to the bug tracker or the author. I feel very comfortable using a beta product if the source is available for me to debug any issues, or if there is a strong following of developers hammering out issues.
I've used beta libraries in commercial projects before but mostly during development and when the vendor is likely to release a final version before I finish the product.
For example, I developed a small desktop application using Visual Studio 2005 Beta 2 because I knew that the RTM version would be available before the final release of my app. Also I used a beta version of FirebirdSQL ADO.NET Driver during development of another project.
For bugs I try to post complete bug reports whenever there's a way to reproduce it but most of the time you have to find a workaround to release the application ASAP.
Yes. Unless there's a feature we really need in a beta version.
There's no point using a beta version in dev if you aren't certain you'll use it in production. That just seems like a wasted exercise
I'll use the patch. Why write code for something you've paid for?
In reply to "There's no point using a beta version in dev if you aren't certain you'll use it in production": good point. I was also considering the scenario of evaluating the pre-release version in dev, but I suppose that taints the dev -> test/QA -> prod path.
In reply to "I'll use the patch": what if it's not a commercial library but an open-source one? What if the patch to be applied is not from the releasing entity (e.g. it's your own patch)?
I use:
Infragistics (.NET WinForms controls)
LeadTools (video capture)
Xtreme ToolkitPro (MFC controls)
National Instruments Measurement Studio (computational libraries, plotting, and DAQ)
I've found significant bugs in every one of these, so I try to limit their use as much as possible. Infragistics is pretty good for what it is, and National Instruments is by far the best, although quite limited. I would avoid LeadTools at all costs.