Azure Artifacts not showing NuGet package from upstream organization feed source

I have a transitive dependency on a NuGet package. This package comes from another organization's feed (which in turn gets it from a second organization's feed).
Below is what it looks like.
I am not able to upgrade the version of the package (the higher version doesn't show up in my feed on search).
Question: why is package version 1.1 not showing in my feed, and why am I not able to add it (from Visual Studio, when I try to upgrade the package version)?
My project Feed:

| package | visibility | source |
| --- | --- | --- |
| my.package (version 1.0) | local | orgFeed1 |

orgFeed1 Feed:

| package | visibility | source |
| --- | --- | --- |
| my.package (version 1.1) (selected as current) | local | orgFeed2 |
| my.package (version 1.0) | local | orgFeed2 |

orgFeed2 Feed:

| package | visibility | source |
| --- | --- | --- |
| my.package (version 1.1) (selected as current) | local | Nuget |
| my.package (version 1.0) | local | Nuget |

Firstly, please make sure the user you are using has been assigned the correct role (Contributor or Owner) to download/cache the upstream feeds' packages into your current feed.
Secondly, you have to allow packages that have previously been fetched from a private repo to also be fetched from a public repo. See Allow external versions and Configure upstream behavior for details.
You can also call the REST API to Set Upstreaming Behavior. Below is a PowerShell snippet, for your reference, that allows a package to be fetched from a public repo:
$env:PATVAR = "PAT_HERE"
$token = [Convert]::ToBase64String([Text.Encoding]::ASCII.GetBytes("username:$($env:PATVAR)"))
$headers = @{
    Authorization = "Basic $token"
}
$url = "https://pkgs.dev.azure.com/{OrganizationName}/{ProjectName}/_apis/packaging/feeds/{FeedName}/{Protocol}/packages/{PackageName}/upstreaming?api-version=6.1-preview.1"
$body = '{"versionsFromExternalUpstreams": "AllowExternalVersions"}'
Invoke-RestMethod -Uri $url -Headers $headers -Body $body -Method Patch -ContentType "application/json"
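For reference, the Basic token used in the snippet above can also be produced in a plain shell, e.g. when calling the same endpoint with curl (a sketch; the PAT value is a placeholder, and Azure DevOps accepts any username alongside the PAT):

```shell
# Build the Basic auth header value from a PAT (placeholder value shown)
PAT="PAT_HERE"
TOKEN=$(printf 'username:%s' "$PAT" | base64)
echo "Authorization: Basic $TOKEN"
```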

Related

Using node-adodb with .mdb Access Files

I am attempting to write a containerized Node application on Windows that takes in Microsoft Access databases and accesses the data within. I wish to use the npm package node-adodb to interact with Access.
My application works perfectly fine with .accdb Access files. When I try to connect to .mdb Access files I get this error: Spawn C:\Windows\SysWOW64\cscript.exe error, Provider cannot be found. It may not be properly installed. My code works on my local Windows computer, so I'm guessing that it's something to do with how my container environment is set up.
I set up the base Dockerfile like this:
# Get base Windows OS image
FROM mcr.microsoft.com/windows/servercore:ltsc2019
# Set environment variables
ENV NPM_CONFIG_LOGLEVEL info
ENV NODEJS_VERSION 12.9.1
# Download & Install 2010 Access Driver
RUN powershell -Command "wget -Uri https://download.microsoft.com/download/2/4/3/24375141-E08D-4803-AB0E-10F2E3A07AAA/AccessDatabaseEngine.exe -OutFile AccessDatabaseEngine.exe -UseBasicParsing"
RUN powershell -Command "Start-Process -NoNewWindow -FilePath \"AccessDatabaseEngine.exe\""
# Download & Install 2016 Access Driver
RUN powershell -Command "wget -Uri https://download.microsoft.com/download/3/5/C/35C84C36-661A-44E6-9324-8786B8DBE231/accessdatabaseengine_X64.exe -OutFile accessdatabaseengine_X64.exe -UseBasicParsing"
RUN powershell -Command "Start-Process -NoNewWindow -FilePath \"accessdatabaseengine_X64.exe\""
# Download and install Node.js
RUN powershell -Command "wget -Uri https://nodejs.org/dist/v%NODEJS_VERSION%/node-v%NODEJS_VERSION%-x64.msi -OutFile node.msi -UseBasicParsing"
RUN msiexec.exe /q /i node.msi
# Run node
CMD [ "node" ]
I establish the Access connection like so. How I instantiate the connection differs depending on whether I'm in my local environment or online. It also differs for .accdb vs .mdb:
// Define connection string & initialize the connection depending on whether it's a .accdb or .mdb file
let access;
if ((file.path).substring(Math.max(0, (file.path).toString().length - 5)) === 'accdb') {
    const connStr = `Provider=Microsoft.ACE.OLEDB.12.0;Data Source=${file.path};Persist Security Info=False;`;
    access = ADODB.open(connStr); // This works
} else {
    const connStr = `Provider=Microsoft.Jet.OLEDB.4.0;Data Source=${file.path};`;
    access = ADODB.open(connStr); // This fails
}
Is there another software package that I need to install in order to work with .mdb files? Do I need to connect in a different way? Any help would be very much appreciated.
With the error message coming from "C:\Windows\SysWOW64\cscript.exe" (the 32-bit script host), please make sure that you have the 32-bit version of the "Microsoft Access Database Engine" distribution package installed.
If you have the 64-bit engine package installed, open the database connection like this:
access = ADODB.open(connStr, true);
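Back in the Dockerfile, note that Start-Process without -Wait can let the build move on before the installer finishes. A hedged sketch of a silent, blocking install of the engine (the /quiet switch is an assumption about this installer; verify against its documented switches):

```dockerfile
# Sketch: run the Access Database Engine installer silently and wait for it to complete
# (/quiet is assumed to be supported by this installer)
RUN powershell -Command "Start-Process -FilePath AccessDatabaseEngine.exe -ArgumentList '/quiet' -Wait -NoNewWindow"
```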

Error of Assembly with same name is already loaded doing Import-Module in Azure Function

In an Azure Function I am trying to load a PowerShell module but am getting the error Assembly with same name is already loaded.
Code Sample
Import-Module "D:\home\site\wwwroot\HelloWorld\modules\MsrcSecurityUpdates\1.7.2\MsrcSecurityUpdates.psd1"
Error Message
Import-Module : Assembly with same name is already loaded
At C:\home\site\wwwroot\HelloWorld\run.ps1:25 char:5
+ Import-Module "D:\home\site\wwwroot\HelloWorld\modules\MsrcSecuri ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidOperation: (:) [Import-Module], FileLoadException
+ FullyQualifiedErrorId : FormatXmlUpdateException,Microsoft.PowerShell.Commands.ImportModuleCommand
Some additional background:
This code was working yesterday. I have made a lot of edits since, so I cannot say for certain that the exact code which was working yesterday is now failing.
I am editing the code directly via the browser.
I have restarted the web app, to potentially flush out any assemblies loaded by my code. It did not make a difference.
I checked whether the module is available with the following, which reports that MsrcSecurityUpdates is NOT installed.
if (-not (Get-Module -Name "MsrcSecurityUpdates"))
{
    Write-Output "MsrcSecurityUpdates NOT installed";
}
else
{
    Write-Output "MsrcSecurityUpdates YES installed";
}
I downloaded the module with
Save-Module -Name MsrcSecurityUpdates -Path "C:\TEMP" -Force
and subsequently uploaded it to the Azure Function file share using the Kudu console, as per the steps outlined in this Stack Overflow question.
This module seems to conflict with other modules in your app, or with assemblies loaded explicitly from your code. It is also possible that the module content is corrupted.
First of all, I would recommend relying on the Managed Dependencies feature instead of uploading the module via Kudu. Just include a reference to your module into the requirements.psd1 file at the root of your app:
@{
    ...
    'MsrcSecurityUpdates' = '1.*'
}
If you edit this file in the Portal, you may need to restart your app. The next time you invoke any function, the latest version of this module will be automatically installed from the PowerShell Gallery and will be available on PSModulePath, so you can import it without specifying any path:
Import-Module MsrcSecurityUpdates
Try this on a brand new app without any other modules: MsrcSecurityUpdates will be loaded. However, if you are still getting the same error, this means MsrcSecurityUpdates is in conflict with other modules your app is using. You can narrow it down by removing other modules from your app (including cleaning up the modules uploaded via Kudu) and reducing your code.
[UPDATE] Potential workarounds:
Try to import (Import-Module) the modules in a certain fixed order, to make sure the more recent assembly versions are loaded first. This may or may not help, depending on the design of the modules.
Try executing commands from one of the modules in a separate process (using PowerShell jobs or sessions, or even invoking pwsh.exe).

Multiple provider versions with Terraform

Does anyone know if it is possible to have a Terraform script that uses multiple provider versions?
For example azurerm version 2.0.0 to create one resource, and 1.4.0 for another?
I tried specifying the providers, as documented here: https://www.terraform.io/docs/configuration/providers.html
However, it doesn't seem to work, as Terraform tries to resolve a single provider that fulfills both 1.4.0 and 2.0.0.
It errors like:
No provider "azurerm" plugins meet the constraint "=1.4.0,=2.0.0".
I'm asking this because we have a large Terraform codebase and I would like to migrate bit by bit if doable.
A similar question was raised here: Terraform: How to install multiple versions of provider plugins?
But it got no valid answer.
How to use multiple versions of the same Terraform provider
This allowed us a smooth transition from helm2 to helm3, while enabling new deployments to use helm3 right away, therefore reducing the accumulation of tech debt.
Of course you can do the same for most providers.
How we've solved this
The idea is to download a specific version of our provider (helm 0.10.6 in my case) and move it to one of the filesystem mirrors Terraform uses by default. The key part is the renaming of the plugin binary: in the zip we find terraform-provider-helm_v0.10.6, but we rename it to terraform-provider-helm2_v0.10.6.
PLUGIN_PATH=/usr/share/terraform/plugins/registry.terraform.io/hashicorp/helm2/0.10.6/linux_amd64
mkdir -p $PLUGIN_PATH
curl -sLo _ 'https://releases.hashicorp.com/terraform-provider-helm/0.10.6/terraform-provider-helm_0.10.6_linux_amd64.zip'
unzip -p _ 'terraform-provider-helm*' > ${PLUGIN_PATH}/terraform-provider-helm2_v0.10.6
rm _
chmod 755 ${PLUGIN_PATH}/terraform-provider-helm2_v0.10.6
Then we declare our two provider plugins.
We can use the hashicorp/helm2 plugin from the filesystem mirror, and let Terraform directly download the latest hashicorp/helm provider, which uses helm3:
terraform {
  required_providers {
    helm2 = {
      source = "hashicorp/helm2"
    }
    helm = {
      source  = "hashicorp/helm"
      version = ">= 2.0.0"
    }
  }
}

# you will find the doc here: https://registry.terraform.io/providers/hashicorp/helm/0.10.6/docs
provider "helm2" {
  install_tiller = false
  namespace      = "kube-system"
  kubernetes {
    ...
  }
}

# you will find the doc at the latest version: https://registry.terraform.io/providers/hashicorp/helm/latest/docs
provider "helm" {
  kubernetes {
    ...
  }
}
When initializing Terraform, you will see:
- Finding latest version of hashicorp/helm...
- Finding latest version of hashicorp/helm2...
- Installing hashicorp/helm v2.0.2...
- Installed hashicorp/helm v2.0.2 (signed by HashiCorp)
- Installing hashicorp/helm2 v0.10.6...
- Installed hashicorp/helm2 v0.10.6 (unauthenticated)
Using it
It's pretty straightforward from this point. By default, helm resources will pick our updated helm provider at v2.0.2. You must explicitly use provider = helm2 for old resources (helm_repository and helm_release in our case). Once migrated, you can remove it to use the default helm provider.
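For example, a pinned legacy resource might look like this (a sketch; resource names and chart values are illustrative only):

```hcl
# Old resource: explicitly pinned to the renamed helm2 provider
resource "helm_release" "legacy_app" {
  provider = helm2
  name     = "legacy-app"
  chart    = "stable/legacy-app"
}

# New resource: no provider argument, so it uses the default hashicorp/helm (helm3) provider
resource "helm_release" "new_app" {
  name  = "new-app"
  chart = "myrepo/new-app"
}
```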
No, you cannot do what you want. Terraform expects your constraint to match one plugin version, as alluded to in:
Plugin Names and Versions
If multiple versions of a plugin are installed, Terraform will use the
newest version that meets the configuration's version constraints.
So your constraint cannot be parsed to match any one plugin, hence the error.

Unable to restore NuGet Package: "Failed to find api location for area: nuget id: 9D3A4E8E"

I am unable to publish new packages to (or restore packages from) my own NuGet feed in Azure DevOps. It started a few days ago, and since then I have created a brand new feed (out of curiosity). Still, it fails with the same error:
Failed to find api location for area: nuget id:
9D3A4E8E-2F8F-4AE1-ABC2-B461A51CB3B3
2019-05-27T22:09:46.8693807Z ##[section]Starting: dotnet push (Common)
2019-05-27T22:09:46.8802338Z ==============================================================================
2019-05-27T22:09:46.8802573Z Task : .NET Core
2019-05-27T22:09:46.8802607Z Description : Build, test, package, or publish a dotnet application, or run a custom dotnet command. For package commands, supports NuGet.org and authenticated feeds like Package Management and MyGet.
2019-05-27T22:09:46.8802660Z Version : 2.151.1
2019-05-27T22:09:46.8802692Z Author : Microsoft Corporation
2019-05-27T22:09:46.8802724Z Help : [More Information](https://go.microsoft.com/fwlink/?linkid=832194)
2019-05-27T22:09:46.8802931Z ==============================================================================
2019-05-27T22:09:47.4837790Z [command]C:\windows\system32\chcp.com 65001
2019-05-27T22:09:47.4924880Z Active code page: 65001
2019-05-27T22:09:47.4964761Z SYSTEMVSSCONNECTION exists true
2019-05-27T22:09:47.5770818Z SYSTEMVSSCONNECTION exists true
2019-05-27T22:09:50.7225430Z SYSTEMVSSCONNECTION exists true
2019-05-27T22:09:50.7673768Z ##[warning]Can\'t find loc string for key: Warning_SessionCreationFailed
2019-05-27T22:09:50.7683040Z ##[warning]Warning_SessionCreationFailed {}
2019-05-27T22:09:52.3193091Z ##[error]Error: Error: Failed to find api location for area: nuget id: 9D3A4E8E-2F8F-4AE1-ABC2-B461A51CB3B3
2019-05-27T22:09:52.3194182Z ##[error]Packages failed to publish
2019-05-27T22:09:52.3294148Z ##[section]Finishing: dotnet push (Common)
How can I fix this error?
Unable to restore NuGet Package: "Failed to find api location for area: nuget id: 9D3A4E8E"
The reason for the above error is a wrong specification of the URL of the service to use.
So, to resolve this issue, please double-check that the source of the NuGet feed is correct.
If the source of the NuGet feed is correct and this issue also occurs on another NuGet feed (one which worked before), you can try to delete the brand new feed, then check whether the issue is resolved.
If this issue occurs only on the brand new feed, please check that your feed is configured correctly and check whether the agent has any proxy settings.
Check the ticket VSTS - Cannot Push Nuget Package to VSTS Package Feed from Private Agent Behind Web Proxy
Hope this helps.

Chef WebPI cookbook fails install in Azure

I set up a new Win2012 VM in Azure with the Chef plugin and have it connected to manage.chef.io. I added a cookbook which uses the WebPi cookbook to install Service Bus and its dependencies. The install fails with the following error:
“Error opening installation log file. Verify that the specified log file location exists and is writable.”
After some searching it looks like this is not new in Azure based on this 2013 blog post - https://nemetht.wordpress.com/2013/02/27/web-platform-installer-in-windows-azure-startup-tasks/
It offers a hack to disable security on the folder temporarily, but I'm looking for a better solution.
Any ideas?
More of the log output -
Started installing: 'Microsoft Windows Fabric V1 RTM'
.
Install completed (Failure): 'Microsoft Windows Fabric V1 RTM'
.
WindowsFabric_1_0_960_0 : Failed.
Error opening installation log file. Verify that the specified log file location exists and is writable.
DependencyFailed: Microsoft Windows Fabric V1 CU1
DependencyFailed: Windows Azure Pack: Service Bus 1.1
.
..
Verifying successful installation...
Microsoft Visual C++ 2012 SP1 Redistributable Package (x64) True
Microsoft Windows Fabric V1 RTM False
Log Location: C:\Windows\system32\config\systemprofile\AppData\Local\Microsoft\Web Platform Installer\logs\install\2015-05-11T14.15.51\WindowsFabric.txt
Microsoft Windows Fabric V1 CU1 False
Windows Azure Pack: Service Bus 1.1 False
Install of Products: FAILURE
STDERR:
---- End output of "WebpiCmd.exe" /Install /products:ServiceBus_1_1 /suppressreboot /accepteula /Log:c:/chef/cache/WebPI.log ----
Ran "WebpiCmd.exe" /Install /products:ServiceBus_1_1 /suppressreboot /accepteula /Log:c:/chef/cache/WebPI.log returned -1
A Chef contact (thanks Bryan!) helped me understand this issue better. Some WebPI packages do not respect the explicit log path provided to WebPIcmd.exe. The author should fix the package to use the provided log path when it is set. So the options became:
1. Have the author fix the package
2. Run Chef in a new scheduled task as a different user which has access to the AppData folder
3. Edit the cookbook to perform/unperform a registry edit to temporarily move the AppData folder to a location that the System user has access to, either in my custom cookbook or in a fork of the WebPI cookbook
Obviously, waiting on the author (Microsoft in this case) to fix the package would not happen quickly.
Changing how the Azure VM runs Chef doesn't make sense, considering the whole idea is to provide the configuration at provisioning time and have it just work. Plus, changing the default setup may have unintended consequences and puts us in a non-standard environment.
In the short term, I decided to alter the registry in my custom cookbook.
registry_key 'HKEY_USERS\.DEFAULT\Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders' do
  values [{
    :name => "Local AppData",
    :type => :expand_string,
    :data => "%~dp0appdata"
  }]
  action :create
end
webpi_product 'ServiceBus_1_1' do
  accept_eula true
  action :install
end
webpi_product 'ServiceBus_1_1_CU1' do
  accept_eula true
  action :install
end
registry_key 'HKEY_USERS\.DEFAULT\Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders' do
  values [{
    :name => "Local AppData",
    :type => :expand_string,
    :data => '%%USERPROFILE%%\AppData\Local'
  }]
end
This change could also be made in the WebPI cookbook to fix the issue for all dependent cookbooks. I decided not to pursue that until the WebPI team responds to a feature request for the framework to verify that packages respect the log path instead.
http://forums.iis.net/t/1225061.aspx?WebPI+Feature+Request+Validate+product+package+log+path+usage
Please go and reply to this thread to try to get the team to help protect against this common package issue.
Here is the solution with PowerShell
I had the same error while installing the "Service Fabric SDK" during VMSS VM creation. The SYSTEM user was also being used.
Issue: when I connected with RDP as my "admin" user and ran the same install, it worked.
Solution: change the registry entry as above, install, and reset it back.
Here is my solution using PowerShell. I placed 2 .reg files into the %TEMP% folder. Their content is the old and the new exported key/value:
plugin-sf-SDK-temp.reg
Windows Registry Editor Version 5.00
[HKEY_USERS\.DEFAULT\Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders]
"Local AppData"=hex(2):25,00,54,00,45,00,4d,00,50,00,25,00,00,00
plugin-sf-SDK-orig.reg
Windows Registry Editor Version 5.00
[HKEY_USERS\.DEFAULT\Software\Microsoft\Windows\CurrentVersion\Explorer\User Shell Folders]
"Local AppData"=hex(2):25,00,55,00,53,00,45,00,52,00,50,00,52,00,4f,00,46,00,\
49,00,4c,00,45,00,25,00,5c,00,41,00,70,00,70,00,44,00,61,00,74,00,61,00,5c,\
00,4c,00,6f,00,63,00,61,00,6c,00,00,00
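As a sanity check, the hex(2) data in a .reg file is a UTF-16LE string with a trailing NUL; decoding the value from plugin-sf-SDK-temp.reg above confirms it expands to %TEMP%:

```python
# Bytes copied from the "Local AppData" value in plugin-sf-SDK-temp.reg
data = bytes([0x25, 0x00, 0x54, 0x00, 0x45, 0x00, 0x4D, 0x00,
              0x50, 0x00, 0x25, 0x00, 0x00, 0x00])
value = data.decode('utf-16-le').rstrip('\x00')
print(value)  # %TEMP%
```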
Integrate the following code into your custom PowerShell script:
Write-Output "Reset LocalApp Folder to TEMP"
Start-Process "$($env:windir)\regedit.exe" `
-ArgumentList "/s", "$($env:TEMP)\plugin-sf-SDK-temp.reg"
## replace the following lines with your installation - here my SF SDK installation via WebPICmd
Write-Output "Installing /Products:MicrosoftAzure-ServiceFabric-CoreSDK"
Start-Process "$($env:programfiles)\microsoft\web platform installer\WebPICMD.exe" `
-ArgumentList '/Install', `
'/Products:"MicrosoftAzure-ServiceFabric-CoreSDK"', `
'/AcceptEULA', "/Log:$($env:TEMP)\WebPICMD-install-service-fabric-sdk.log" `
-NoNewWindow -Wait `
-RedirectStandardOutput "$($env:TEMP)\WebPICMD.log" `
-RedirectStandardError "$($env:TEMP)\WebPICMD.error.log"
Write-Output "Reset LocalApp Folder to ORIG"
Start-Process "$($env:windir)\regedit.exe" `
-ArgumentList "/s", "$($env:TEMP)\plugin-sf-SDK-orig.reg"
