mvn install error for some Broadleaf repositories - broadleaf-commerce

I'm starting work on a project that uses Broadleaf Enterprise.
I got the source code with git, and I have to run mvn install to pull down all the dependencies. Unfortunately, something always goes wrong.
[WARNING] Failure to transfer com.broadleafcommerce:broadleaf-avalara:1.1.0-SNAPSHOT/maven-metadata.xml from http://nexus.broadleafcommerce.org/nexus/content/groups/enterprise-source-snapshots/ was cached in the local repository, resolution will not be reattempted until the update interval of Broadleaf Enterprise Source Snapshots has elapsed or updates are forced. Original error: Could not transfer metadata com.broadleafcommerce:broadleaf-avalara:1.1.0-SNAPSHOT/maven-metadata.xml from/to Broadleaf Enterprise Source Snapshots (http://nexus.broadleafcommerce.org/nexus/content/groups/enterprise-source-snapshots/): Not authorized , ReasonPhrase:Unauthorized.
I receive this warning for a lot of dependencies (this is just one example), all of them authorization failures against the Broadleaf Enterprise repository.
What do I have to do to fix it?
Maybe register somewhere? Or speak with someone in the company who can authorize me?
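If registration is indeed the answer, my understanding is that Maven would then need the credentials in a <server> entry in ~/.m2/settings.xml, and that the -U flag would force it to retry the failures it has already cached. This is what I'd try once I have an account; the id "broadleaf-enterprise" is my guess and must match the <repository><id> declared in the project's pom.xml:

# Back up any existing Maven settings, then register the Nexus credentials.
# "broadleaf-enterprise" is an assumed id -- check the project's pom.xml for the real one.
cp ~/.m2/settings.xml ~/.m2/settings.xml.bak 2>/dev/null
cat > ~/.m2/settings.xml <<'EOF'
<settings>
  <servers>
    <server>
      <id>broadleaf-enterprise</id>
      <username>YOUR_BROADLEAF_USERNAME</username>
      <password>YOUR_BROADLEAF_PASSWORD</password>
    </server>
  </servers>
</settings>
EOF
# -U forces Maven to retry dependencies it cached as "not authorized"
mvn -U clean install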

Related

Azure DevOps extension cache wrong node_modules

General: I develop an Azure DevOps extension with tasks and pipeline decorators, testing on a local Azure DevOps Server instance. The extension is loaded through Manage Extensions from the local hard drive. Let's say I first installed the extension at version 1.0.0, with a node_modules dependency "3rdPartyDep" at version 2.0.0 that has transitive dependencies with vulnerabilities.
Scenario:
Upgrade "3rdPartyDep" to version 3.0.0 with fixed vulnerabilities. Build new version of my extension, say 1.0.1. Create the .vsix, update the extension in the Azure DevOps Server.
Run a pipeline, which fails because I did not check the "3rdPartyDep" changes and there are breaking changes and the extension fails to run.
Rollback the "3rdPartyDep" library to 2.0.0 because I have no time now to check what is broken in there right now as I have other things to debug and implement, repackage the extension, increase version to 1.0.2, update extension in Azure DevOps Server.
Run the pipeline. It fails with the same exception, as if I didn't rollback. I look into the agent taks folder and I see that the node_modules with the "3rdPartyDep" library is pointing to 3.0.0, which is wrong because I rolled back the version.
I open the generated .vsix archive and check that the node_modules inside contains the correct 2.0.0 version, so there are no packaging or build problems on my side.
I conclude that Azure DevOps stores a cached version of the extension somewhere, with a node_modules containing the wrong version of "3rdPartyDep". I search the internet for that cache folder, and I also search my whole machine with a search tool, including words inside files. Nowhere to be found: there is no location on my machine with a node_modules containing the 3.0.0 version. Might it be stored in some encrypted DB?
I uninstall the extension completely and install it back. I see that Azure DevOps keeps a history for the extension, and the cache is not cleared. Every pipeline fails, even though my .vsix no longer contains this dependency.
I'm stuck.
Questions:
Where are extensions actually cached inside Azure DevOps Server?
Why does updating, uninstalling, and reinstalling not fix the problem?
Is there any way to fix this? What can I do? I do not want to reinstall the server completely. Moreover, this raises concerns about how node_modules are managed and cached, and what happens on clients and in the cloud.
You could try the following items:
Try cleaning the browser cache, and check whether you have increased the version number in the task.json (see the sketch after this list).
Try the Delete task -- Save definition -- Add task again process.
Delete the Azure DevOps Server cache, following the steps in this link.
Uninstall the extension from Collection Settings and remove it from the local Manage Extensions. Then upload the extension again and install it into the collection.
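On the first point: the version the agent caches against is the one in each task's own task.json, so every changed task needs its bump. A small sketch of checking and bumping it with jq -- the jq usage is my addition, not part of the original answer, and it assumes the version components are numbers as in the standard task.json layout:

# Show the task version the agent will cache against
jq '.version' task.json
# Bump the Patch component in place
jq '.version.Patch += 1' task.json > task.json.tmp && mv task.json.tmp task.json

After repackaging the .vsix with the bumped task version, the agent should download the new task folder instead of reusing the cached one.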

GitLab CE: How to restore or repair repos with issues / merge requests that are suddenly missing?

I started running GitLab CE inside of an x86 Debian VM locally about two years ago, and last year I decided to migrate the GitLab CE instance to a dedicated Intel NUC server. Everything appeared to go well with no issues, and my GitLab CE instance is up-to-date as of today (running 13.4.2).
I discovered recently, though, that some repos that were moved give a "NO REPOSITORY!" error when visiting their project pages, and any issue boards, merge requests, etc. they had are also gone. You wouldn't suspect it, since the broken repos still appear in the repo lists along with the working repos that I use all the time.
If I had to find a common thread among these broken repos, it would be that their last activity was over a year ago: either no pushes were ever made to them beyond an initial push, or any changes, issues, or merge requests date from literally over a year ago.
Some of these broken repos are rather large with a lot of history, whereas others are super tiny (literally just tracking changes to a shell script), so I don't think repo size itself has anything to do with it.
If I run the GitLab diagnostic check sudo gitlab-rake gitlab:check, everything looks good except for "hashed storage":
All projects are in hashed storage? ... no
Try fixing it:
Please migrate all projects to hashed storage
But then running sudo gitlab-rake gitlab:storage:migrate_to_hashed doesn't appear to complete (with something like six failed jobs in the dashboard), and running the "gitlab:check" again still indicates this "hashed storage" problem. I've also tried running sudo gitlab-rake gitlab:git:fsck and sudo gitlab-rake cache:clear but these commands don't seem to make a difference.
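For reference, the storage rake tasks can count and list the projects that are still on legacy storage (task names as I understand them from the GitLab docs for my version):

# Count, then list, the projects not yet migrated to hashed storage
sudo gitlab-rake gitlab:storage:legacy_projects
sudo gitlab-rake gitlab:storage:list_legacy_projects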
Luckily I have the latest versions of all the missing repos on my machine, and in fact, I still have the original VM running GitLab CE 12.8.5 (with slightly out-of-date copies of the repos).
So my questions are:
Is it possible to "repair" the broken repos on my current instance? I suspect I could just "re-push" my local copies of these repos back up to my server, but I really don't want to lose any metadata like issues / merge requests and such.
Is there any way to resolve the "not all projects are in hashed storage" issue? (Again the migrate_to_hashed task fails to complete.)
Would I be able to do something like "backup", "inspect / tweak backup", "restore backup" kind of thing to fix the broken repos, or at least the metadata?
Thanks in advance.
Okay, so I think I figured out what happened.
I found this thread on the GitLab User Forums.
Apparently the scenario here is:
Have a GitLab instance that has repos not in "hashed storage"
Backup your repo
Restore your repo (either to the same server or migrating to another server)
Either automatically or manually, attempt to update your repos to "hashed storage"
You'll now find that any repo with a CI runner (continuous integration runner) is listed as "NO REPOSITORY!" and is completely unavailable, because the "hashed storage" migration process fails for it
The fix is to:
Reset runner registration tokens as listed in this article in the GitLab documentation
Re-run the sudo gitlab-rake gitlab:storage:migrate_to_hashed process
Once the background jobs are completed, run sudo gitlab-rake gitlab:check to ensure the output contains the message:
All projects are in hashed storage? ... yes
If successful, the projects that stated "NO REPOSITORY!" should now be fully restored.
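In command form, the tail end of the fix is just the two rake invocations already mentioned, run after the runner token reset from the linked doc:

# After resetting the runner registration tokens, re-run the migration
sudo gitlab-rake gitlab:storage:migrate_to_hashed
# Once the background jobs have drained, verify the check passes
sudo gitlab-rake gitlab:check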
A key sign that you need to run this process is if you:
Log in to your GitLab CE instance as an admin
Go to the Admin Area
Look under Monitoring->Background Jobs->Dead
and see a job with the name
hashed_storage:hashed_storage_project_migrate
with the error
OpenSSL::Cipher::CipherError:

Azure Artifacts Feed is much slower than maven central

I'm working on a project in Azure DevOps and, as recommended in the docs, I created an Artifacts Feed with Maven Central as an upstream source to store all my dependencies (I don't really need to publish artifacts for now).
So I configured my local Maven to fetch all dependencies from my feed instead of Maven Central, and it all works fine, except that it's very slow compared to Maven Central.
When I start from an empty .m2 on my local machine, it takes 1 min 15 secs to build my project when downloading the dependencies from Maven Central, but over 8 minutes to do the same when downloading them from the feed (which already contains all the dependencies).
I could live with that, since the download of everything happens only on the first build.
But the issue is that it's also slower when building my project from Azure Pipelines, which I really didn't expect since it's a connection from Azure to Azure within the same organization. In this case, it takes at least twice as long when using the feed rather than Maven Central. And this will be true every time, since Azure Pipelines gives you a fresh VM for each build (I'm using a hosted agent), so there's no dependency caching in this case.
It's really annoying since my project is just a HelloWorld so far, so it will only get worse over time.
Using a repository manager/feed is the best practice according to both Maven and Azure, but at this point I'm seriously considering the bad practice of getting everything from Maven Central instead of my feed, at least in my pipeline, to improve performance.
Am I the only one having this issue? What are your thoughts about this?
Finally, after diving into the documentation for Azure Pipelines recently, I found out there is a way to cache the Maven repository between runs, which partially solves my issue since the full download of the dependencies happens only once.
Here is the doc in question for those who are interested.
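The gist of it, as I applied it: point Maven's local repository at a folder inside the pipeline workspace so the Cache task can save and restore it between runs. The folder name below is my own choice, not anything official:

# In a pipeline script step: put the local repo somewhere the Cache task covers.
# $PIPELINE_WORKSPACE is the agent's environment-variable form of Pipeline.Workspace.
MAVEN_CACHE_FOLDER="$PIPELINE_WORKSPACE/.m2/repository"
mvn -B clean install -Dmaven.repo.local="$MAVEN_CACHE_FOLDER"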

Is there a way to download compilation artifacts from a travis-ci build?

I'm working on an open source project and I made a pull request. This project has travis-ci set up to check all incoming pull requests, and mine failed. The error it failed with is fairly cryptic and tells me about a bug in the rustc compiler; no wonder it filled me with curiosity and a wish to investigate.
This CI account belongs to the project's author (not me), so I tried to reproduce the build on my own account. The very same commit passed. Not to mention it also passes on my local laptop.
The only thing I can think about is some kind of caching of build artifacts travis does.
So here we are: I have a link to the failing build and I'd like to download the build artifacts it produced, so I can dig into it or at least report this bug to the rustc team.
Is there any way to do it?
You can download rust artifacts from the ci server (https://s3-us-west-1.amazonaws.com/rust-lang-ci2), but only for 167 days.
An example for a build artifact would be
https://s3-us-west-1.amazonaws.com/rust-lang-ci2/rustc-builds-alt/003382e4150984cb476047b3925edf8d75df2d59/rust-nightly-x86_64-unknown-linux-gnu.tar.gz
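So for the failing build, fetching and unpacking the toolchain looks like this (the curl/tar incantation is mine; the URL is the one above):

# Download the alt build for the commit in question and unpack it locally
curl -LO https://s3-us-west-1.amazonaws.com/rust-lang-ci2/rustc-builds-alt/003382e4150984cb476047b3925edf8d75df2d59/rust-nightly-x86_64-unknown-linux-gnu.tar.gz
tar -xzf rust-nightly-x86_64-unknown-linux-gnu.tar.gz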
There is also the cargo-bisect-rustc tool, which can help you bisect the problem down to a specific nightly or commit.
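Typical usage looks roughly like this; the date range is a placeholder, so pick dates around when the failure first appeared:

# Install the bisection tool, then search nightlies for the regression;
# by default it runs `cargo build` in the current crate for each candidate toolchain
cargo install cargo-bisect-rustc
cargo bisect-rustc --start=2019-07-01 --end=2019-08-01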
As a note: your problem is most likely an incremental compiler bug already covered in https://github.com/rust-lang/rust/issues/63161

Preview Sync and Sync From Repository options not available on fresh install

I have set up a fresh Crafter CMS 3.0.2 installation following the instructions here. When I log in as the admin user I don't get the Preview Sync and Sync From Repository options in the Site Config section, as shown on the page here. How can I add those options for the admin user? I could not find instructions for this in the documentation.
Syncing is now automatic starting with version 3.0.2; see the release notes: http://docs.craftercms.org/en/3.0/release-notes/index.html
You can make updates to the underlying git repo, and the system will automatically pick those up.
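For example, committing straight to the site's sandbox repository should get picked up without any manual sync. The path below is an assumption based on a default install layout and a site named "mysite" -- adjust both for your setup:

# Edit content in the site's underlying git repo; Crafter detects the commit automatically
cd $CRAFTER_HOME/data/repos/sites/mysite/sandbox
git add .
git commit -m "Update content outside Studio"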
