Using Maven coordinates to load assemblies into a Synapse Spark workspace - apache-spark

I would like to declaratively define the jars (package dependencies) that are used by the Synapse workspace. Ideally, I could do this using Maven coordinates. Is this possible? Currently the only way I've been able to add dependencies is by manually uploading them under the "workspace packages" configuration section of Synapse Studio.
Alternatively, I wouldn't mind retrieving packages at the session level, perhaps initiated by the driver at runtime if that is possible.
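For the session-level option, Spark itself exposes the spark.jars.packages setting, which accepts Maven coordinates and resolves them when the session starts. Whether a Synapse pool honors it depends on the runtime and on outbound access to the repository, so the following is only a sketch of the mechanism, not a confirmed Synapse feature; the coordinate shown is just an example.

```python
from pyspark.sql import SparkSession

# Sketch: ask Spark to resolve a Maven coordinate at session startup.
# The coordinate below is only an example. In Synapse this setting would
# have to be applied before the session is created (e.g. via session or
# pool configuration), since notebooks attach to an already-provisioned pool.
spark = (
    SparkSession.builder
    .appName("maven-coordinates-example")
    .config(
        "spark.jars.packages",
        "com.microsoft.azure:azure-eventhubs-spark_2.12:2.3.22",
    )
    .getOrCreate()
)

# Confirm the setting was picked up by the session.
print(spark.sparkContext.getConf().get("spark.jars.packages"))
```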

Related

Azure DevOps extension cache wrong node_modules

General: I develop an Azure DevOps extension with tasks and pipeline decorators, testing on a local Azure DevOps Server instance. The extension is loaded through Manage Extensions from the local hard drive. Let's say that I installed the extension the first time with version 1.0.0 and a node_modules dependency "3rdPartyDep" at version 2.0.0, which has transitive dependencies with vulnerabilities.
Scenario:
Upgrade "3rdPartyDep" to version 3.0.0 with fixed vulnerabilities. Build new version of my extension, say 1.0.1. Create the .vsix, update the extension in the Azure DevOps Server.
Run a pipeline, which fails because I did not check the "3rdPartyDep" changes and there are breaking changes and the extension fails to run.
Rollback the "3rdPartyDep" library to 2.0.0 because I have no time now to check what is broken in there right now as I have other things to debug and implement, repackage the extension, increase version to 1.0.2, update extension in Azure DevOps Server.
Run the pipeline. It fails with the same exception, as if I didn't rollback. I look into the agent taks folder and I see that the node_modules with the "3rdPartyDep" library is pointing to 3.0.0, which is wrong because I rolled back the version.
I open the generated .vsix archive and check that the node_modules inside contains the correct 2.0.0 version, so no problems of packaging or building from my side.
I make a conclusion that Azure DevOps stores somewhere a cached version of the extension with the node_modules including the wrong version of the "3rdPartyDep". I search that cache folder over internet to find out where it is, and I also search with a search tool all my machine, including words in file. Nowhere to be found. There is no location on my machine with such node_modules containing the 3.0.0 version. It might be stored in some encrypted DB?
I uninstall completely the extension, and install it back. I see that Azure DevOps has a history for the extension, and the cache is not cleared. Any pipeline fails, even if my .vsix does not contain this dependency.
I'm stuck.
Questions:
Where are extensions actually cached inside Azure DevOps Server?
Why does updating, uninstalling, and reinstalling not fix the problem?
Is there any way to fix this? What can I do? I do not want to reinstall the server completely. Moreover, this raises concerns about how node_modules are managed and cached, and what happens on clients and in the cloud.
You could try the following items:
Try to clear the browser cache, and check whether you have increased the version number in the task.json.
Try the process: delete the task -- save the definition -- add the task again.
Delete the Azure DevOps Server cache, which can be done by following this link.
Uninstall the extension from Collection Settings and remove it from the local Manage Extensions. Then upload the extension again and install it in the collection.
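Since the question already observed the stale copy in the agent's tasks folder, one more thing worth trying is clearing the build agent's local task cache so the task (and its node_modules) is re-downloaded on the next run. A minimal sketch, assuming a self-hosted agent installed at C:\agent whose cached tasks live under _work\_tasks; both paths are assumptions and should be adjusted to your agent install, with the agent stopped while you run it:

```python
import shutil
from pathlib import Path

# Assumed agent install location and work-folder layout; adjust as needed.
agent_root = Path(r"C:\agent")
tasks_cache = agent_root / "_work" / "_tasks"

if tasks_cache.exists():
    for cached_task in tasks_cache.iterdir():
        # Each subfolder is a cached task version, including its node_modules.
        if cached_task.is_dir():
            print(f"Removing cached task: {cached_task.name}")
            shutil.rmtree(cached_task)
```

On the next pipeline run the agent downloads the task again from the server, so this only helps if the server itself is serving the correct .vsix contents.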

Can I install an application directly in a VDA and use it in the catalog (XenApp Essentials)?

Can I install any applications in a VDA in XenApp Essentials and use them in the catalog? I am talking about defining a catalog and using the same catalog in Azure to install applications; would those applications be reflected in the catalog?
This is not a recommended approach. It is always better to update your master image with the application and then update the catalog with the new master image.
Here is a video that walks you through catalog image update workflow:
https://www.youtube.com/watch?v=vAMOoYLhMTw

Using EventHub from WebJob

I'm trying to use EventHub from a WebJob as described at https://github.com/Azure/azure-webjobs-sdk/wiki/EventHub-support and in "Any Example of WebJob using EventHub?".
The problem is that the SO question refers to the NuGet package Microsoft.Azure.WebJobs.ServiceBus.1.2.0-alpha-10291 and, as far as I can see, that package isn't available anymore.
Does anybody know if there's a new release in the pipe or any other workaround?
BR,
Max.
That library is available from a MyGet repository and is still updated from time to time. To use it, you need to add that repository as a NuGet package source:
Package Manager Settings => create a new package source => use https://www.myget.org/F/azure-appservice/ as the repository URL.
After that, the library and its dependencies should be available.

Deploy the same Azure binaries to multiple subscriptions

We are trying to work out a good continuous deployment setup using TFS, Visual Studio and Azure. At our company, each developer has their own Azure subscription that we use for testing, as well as shared QA1/QA2/PROD subscriptions that we can deploy to. We have matching TFS XAML build definitions for each of these, running PowerShell scripts with parameters and PublishSettings files.
This all gives us a set of .cspkg and .cscfg files, and in theory we can deploy the right cspkg with the correct cscfg to any Azure system.
The problem we are encountering now, though, is that we want to start using the Redis Cache service. Installing the NuGet package writes subscription-specific settings into the web.config, to point at the cache. This means that the cspkg is now compiled specifically for the Azure subscription.
We could use SlowCheetah to merge web.config files on build, but this means that we would have to compile the package for each build definition, and as the number of developers increases this is obviously going to become unsustainable.
I am looking for a way to keep our old generic packages and still use the Redis Cache. We can connect to the cache in code during app_start, but then we can't use it to store IIS session state. I understand that the Azure Load Balancer is meant to keep users on the same server, but I'm unsure how that will work as we swap servers in/out.
It feels like we are approaching the problem wrong and there should be a simple solution that we are overlooking.
We are using Azure Tools 2.6, Visual Studio 2013, TFS 2015r2.
I think there are generally three ways of doing this.
1st is config during build, which means building one package per environment as you described; this is not desirable in most scenarios.
2nd is config during deployment, which means you open the cspkg file, change the config, then repack it before upload, without recompiling.
3rd is config after deployment: have a configuration management tool adjust the config file for you on the fly.
We use Octopus Deploy to achieve #2 above: our CI tool feeds Octopus the cspkg and cscfg, and Octopus handles the rest. I would definitely not go for #1, but #3 is a valid option too.
As of today we store all our connection settings in .cscfg files. For security reasons, we avoid storing any production connection strings in source control, only QA ones. We have CI for QA, but not for production. This works well for us; we just maintain a different .cscfg for each environment (subscription).
However, in the near future I think we will move to Key Vault for this.
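Whether you repack per approach #2 or maintain a .cscfg per environment, the mechanical step is rewriting one Setting value in the .cscfg before upload. A minimal sketch, assuming the Redis endpoint lives in a .cscfg Setting named "RedisConnectionString" for a role named "WebRole1"; both names, and the file name, are assumptions rather than part of the original setup:

```python
import xml.etree.ElementTree as ET

# Namespace used by classic Azure cloud service .cscfg files.
NS = "http://schemas.microsoft.com/ServiceHosting/2008/10/ServiceConfiguration"
ET.register_namespace("", NS)

def set_cscfg_setting(cscfg_path, role_name, setting_name, value):
    """Rewrite one <Setting> value in a .cscfg before deployment."""
    tree = ET.parse(cscfg_path)
    for role in tree.getroot().findall(f"{{{NS}}}Role"):
        if role.get("name") != role_name:
            continue
        settings = role.find(f"{{{NS}}}ConfigurationSettings")
        if settings is None:
            continue
        for setting in settings.findall(f"{{{NS}}}Setting"):
            if setting.get("name") == setting_name:
                setting.set("value", value)
    tree.write(cscfg_path, xml_declaration=True, encoding="utf-8")

# Example: point the QA1 configuration at that subscription's Redis cache.
set_cscfg_setting(
    "ServiceConfiguration.QA1.cscfg",
    role_name="WebRole1",
    setting_name="RedisConnectionString",
    value="qa1-cache.redis.cache.windows.net:6380,password=...,ssl=True",
)
```

The same generic cspkg can then be deployed everywhere, with only the environment's cscfg differing per subscription.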

Is there a way to force the Artifactory Jenkins plugin to store files in a hierarchical way?

I have a Jenkins server and an Artifactory connection. I want to use the Artifactory Jenkins plugin to send out files in a hierarchical way.
E.g. an rpm might go as sys/arch/os/packages/rpm. Is there a way to force the Artifactory Jenkins plugin to use this hierarchy globally for every job, as soon as it sees that an rpm is being uploaded to Artifactory, or to make a custom structure?
@Tim's answer in the comments is accurate. You can configure the deployment of artifacts after the build by using the Artifactory Generic (Freestyle) Integration.
It is configured at the job level.
You can take a look at the many job templating and generating plugins and tools for job creation automation; the Artifactory deployment configuration is just a piece of XML in the job config file.
If you want to verify that the files are uploaded in the correct layout, you can use a user plugin.
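If you would rather enforce the layout from a script than from the plugin configuration, Artifactory also accepts plain HTTP PUT deployments, so a post-build step can place each rpm at the desired hierarchical path itself. A minimal sketch, assuming a base URL of https://artifactory.example.com/artifactory, a repository named "packages-local", and API-key auth; all of these names and the path components are assumptions:

```python
from pathlib import Path
import requests

ARTIFACTORY_URL = "https://artifactory.example.com/artifactory"  # assumed base URL
REPO = "packages-local"  # assumed repository name

def deploy_rpm(rpm_path, sys_name, arch, os_name, user, api_key):
    """Upload an rpm to <repo>/<sys>/<arch>/<os>/packages/<file> via HTTP PUT."""
    rpm = Path(rpm_path)
    target = f"{ARTIFACTORY_URL}/{REPO}/{sys_name}/{arch}/{os_name}/packages/{rpm.name}"
    with rpm.open("rb") as fh:
        response = requests.put(target, data=fh, auth=(user, api_key))
    response.raise_for_status()
    return target

# Hypothetical usage after the Jenkins build produces the rpm:
# deploy_rpm("dist/myapp-1.0-1.x86_64.rpm", "sys", "x86_64", "el7", "jenkins", "<api key>")
```

This keeps the hierarchy logic in one script that every job can call, at the cost of bypassing the plugin's build-info capture.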
