We have been asked to build some SPFx extensions for SharePoint Online. Using Yeoman, some essential files are generated, such as:
elements.xml,
ClientSideInstance.xml (for tenant-wide deployment in SharePoint),
some .json files in the config folder for debugging/packaging the solution,
the files inside the src/extensions folder with the manifest.json and the .ts file.
files outside src such as gulpfile.js, package.json, .yo-rc.json and tsconfig.json
The structure resembles the following:
When I packaged the solution for uploading to SharePoint, I decided to make several packages from the same solution, using a suffix such as spo-extension-test.sppkg and spo-extension-dev.sppkg, to use in different sites (for different environments) within the same tenant.
In the app catalog site, however, I received the following warning,
The product Id is in package-solution.json:
{
"$schema": "https://developer.microsoft.com/json-schemas/spfx-build/package-solution.schema.json",
"solution": {
"name": "spfx-extension",
"id": "2475dj2f-733b-41e5-8k39-d78ba3ce2fme",
"version": "1.0.0.0",
"includeClientSideAssets": true,
.... //more settings
and in the .yo-rc.json file as the library Id:
{
"#microsoft/generator-sharepoint": {
"plusBeta": false,
"isCreatingSolution": true,
"environment": "spo",
"version": "1.13.1",
"libraryName": "spfx-extension",
"libraryId": "2475dj2f-733b-41e5-8k39-d78ba3ce2fme",
"packageManager": "npm",
"isDomainIsolated": false,
"componentType": "extension",
"extensionType": "ApplicationCustomizer"
}
}
As I understand it, this GUID is also automatically generated by Yeoman, but I was wondering whether it can be changed manually for the different environment packages to bypass this warning. What are the caveats of doing something like this? Is it considered bad practice?
And in the end, if it is, how can I create different extension packages with a suffix to match the environments in my tenant?
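For illustration, this is roughly what I mean by "changing it manually": a dev variant of package-solution.json with a different name, a freshly generated id (the GUID below is just a placeholder), and a different packaged file name, built with gulp package-solution as usual:
{
  "$schema": "https://developer.microsoft.com/json-schemas/spfx-build/package-solution.schema.json",
  "solution": {
    "name": "spfx-extension-dev",
    "id": "00000000-0000-0000-0000-000000000000",
    "version": "1.0.0.0",
    "includeClientSideAssets": true
  },
  "paths": {
    "zippedPackage": "solution/spfx-extension-dev.sppkg"
  }
}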
Goal:
I'm trying to build a UI-Kit that can be consumed by web applications and React Native applications. I have a specific need to be able to do this without react-native-web.
My Solution:
Create a UI Kit that has a shared interface layer. I have a project that exports two nested folders: @abc/ui-kit/web and @abc/ui-kit/native.
My Problem:
When I import the UI Kit package, TypeScript complains about a missing module definition.
import { Button } from '@abc/ui-kit';
import { Button } from '@abc/ui-kit/web';
Cannot find module '@abc/ui-kit' or its corresponding type declarations
Cannot find module '@abc/ui-kit/web' or its corresponding type declarations
My package.json for the UI-Kit has the following exports:
"exports": {
"web": {
"types": "./dist/web/index.d.ts",
"default": "./dist/web/index.ts"
},
"native": {
"types": "./dist/native/index.d.ts",
"default": "./dist/native/index.ts"
}
}
You can find the code here on GitHub. It's easy to recreate; I tried keeping it minimal. Is what I'm trying to do possible? Or do I need to completely restructure my application?
https://github.com/Spidy88/multi-platform-example/tree/master
Note: If you look under the dev branch, I have a v1 UI Kit and App which shows how a regular library and consuming app work; no issues with it. The v2 UI Kit and App are my next iteration, where I try to turn a single package into a multi-platform package (essentially two packages in one, using package.json exports). This v2 version is what doesn't seem to work.
Subpaths must start with ./ (answered by a colleague); see the Subpath Docs:
"exports": {
"./web": {
"types": "./dist/web/index.d.ts",
"default": "./dist/web/index.ts"
},
"./native": {
"types": "./dist/native/index.d.ts",
"default": "./dist/native/index.ts"
}
}
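One caveat worth adding (an assumption about your setup, not something from the original answer): TypeScript only honors the package.json "exports" field when the consuming project's moduleResolution is node16/nodenext (TypeScript 4.7+) or bundler (TypeScript 5.0+); with the classic node resolution the subpaths will still not be found. A minimal consumer tsconfig.json for this setup might look like:
{
  "compilerOptions": {
    "module": "node16",
    "moduleResolution": "node16"
  }
}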
I want to include some functionality via an Azure Function along with my managed app, and was hoping I could include it in the .zip file you specify, which contains your mainTemplate.json and createUiDefinition.json files.
If so, where do you put them, and how are they configured to run? What identity do they run as, what is their URL, etc.? Can they just use the "consumption" model (i.e. CPU allocated as needed instead of a dedicated server pool)? These are all low-frequency, high-latency operations I want to provide.
I've found documentation that says you can put additional resources in the .zip file but nowhere that talks about actually doing it or how to use them.
Or if I'm totally off base, how does one provide specific behaviors along with a managed app to a customer? I'd really rather not host the functions myself, but if that's how you have to do it...
Thank you.
This guide here has a section on deployment artifacts, which includes the information you need:
https://github.com/Azure/azure-quickstart-templates/blob/master/1-CONTRIBUTION-GUIDE/best-practices.md
To summarise that page: you can include your scripts anywhere in the zip, but best practice is not to put them at the top level.
To get the URI in the main template, you need to add the following parameters:
"parameters": {
"_artifactsLocation": {
"type": "string",
"metadata": {
"description": "The base URI where artifacts required by this template are located including a trailing '/'"
},
"defaultValue": "[deployment().properties.templateLink.uri]"
},
"_artifactsLocationSasToken": {
"type": "securestring",
"metadata": {
"description": "The sasToken required to access _artifactsLocation if needed."
},
"defaultValue": ""
}
},
Then get the full path to your resource using the URI function, like in this example:
"variables": {
"scriptFileUri": "[uri(parameters('_artifactsLocation'), concat('scripts/configuration.sh', parameters('_artifactsLocationSasToken')))]",
"nestedtemplateUri": "[uri(parameters('_artifactsLocation'), concat('nestedtemplates/jumpbox.json', parameters('_artifactsLocationSasToken')))]"
},
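As a rough sketch of how those variables are then consumed (the apiVersion and resource name here are illustrative, not from the guide), a linked template deployment could reference nestedtemplateUri like this:
{
  "type": "Microsoft.Resources/deployments",
  "apiVersion": "2021-04-01",
  "name": "jumpboxDeployment",
  "properties": {
    "mode": "Incremental",
    "templateLink": {
      "uri": "[variables('nestedtemplateUri')]"
    }
  }
}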
Simply put, how should I implement switching between dev and prod in an lb4 app?
I have my datasource added and everything works fine, but it's a .json file. How can I switch to another configuration saved in a .env file that could be read through the dotenv package?
I have tried creating the datasource object manually, but I get errors so most likely my approach is wrong. Any suggestion would be appreciated.
Hello from the LoopBack team 👋
LoopBack does not support dotenv out of the box.
If you want to keep your environment-specific configuration in files, then you should put your production configuration in server/datasources.production.json and set the environment variable NODE_ENV to production. LoopBack reads server/datasources.${NODE_ENV}.json at startup and applies any overrides from that file on top of the default configuration specified in server/datasources.json.
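For example, a minimal server/datasources.production.json might contain just the properties you want to override (the values below are purely illustrative):
{
  "db": {
    "host": "prod-db.example.com",
    "database": "proddb",
    "username": "produser",
    "password": "prod-password"
  }
}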
Having said that, we think it's better to follow https://12factor.net and provide production configuration via environment variables. I guess that's what you are trying to achieve too?
Maybe you are looking for a way to configure datasources from environment variables? LoopBack supports variable substitution in certain configuration files, and datasources.json is amongst the supported ones. For example:
{
"db": {
"connector": "mysql",
"database": "${MYSQL_DB}",
"username": "${MYSQL_USER},
"password": "${MYSQL_PASSWORD}"
}
When it comes to datasources in particular, we are recommending a slightly different approach:
Connection settings used for local development are specified in the server/datasources.json file using datasource configuration options like host, database, username, etc.
When running in production (staging, etc.), the connection settings should be configured via a single connection string/URL. The datasource file should contain an entry that sets the url property to the value provided by the environment.
Under the hood, LoopBack and all connectors provide specific behavior to support this feature: the url property overrides all other settings, but is silently ignored when it's not present.
An example datasource configuration:
{
"db": {
"connector": "mysql",
"database": "demo",
"username": "demo",
"password": "L00pBack"
"url": "${MYSQL_URL}"
}
When running in production, you can set the connection string as follows:
export MYSQL_URL=mysql://prod:strong-password@localhost/realdb
Please refer to our older blog post https://strongloop.com/strongblog/managing-loopback-configurations-the-twelve-factor-way/ to learn more details.
I've been having some issues deploying a dscExtension to an Azure virtual machine scale set (VMSS) using a deployment template.
Here's how I've added it to my template:
{
"name": "dscExtension",
"properties": {
"publisher": "Microsoft.Powershell",
"type": "DSC",
"typeHandlerVersion": "2.9",
"autoUpgradeMinorVersion": true,
"settings": {
"ModulesUrl": "[concat(parameters('_artifactsLocation'), '/', 'MyDscPackage.zip', parameters('_artifactsLocationSasToken'))]",
"ConfigurationFunction": "CmvmProcessor.ps1\\CmvmProcessor",
"Properties": [
{
"Name": "ServiceCredentials",
"Value": {
"UserName": "parameters('administratorLogin')",
"Password": "parameters('administratorLoginPassword')"
},
"TypeName": "System.Management.Automation.PSCredential"
}
]
}
}
}
The VMSS itself deploys successfully, but when I browse the InstanceView of the individual VMs, the dscExtension shows a failed status with an error message.
The problems I'm having are as follows:
The ARM deployment does not try to update the dscExtension on redeploy. I am used to MSDeploy web app extensions, where the artifacts are updated and the code is redeployed on each new deployment. I do not know how to force it to update the dscExtension with new binaries. In fact, it only seems to give an error on the first deploy of the VMSS; after that it won't even try again.
The error I'm getting is for old code that doesn't exist anymore.
I previously had a bug in a custom DSC PowerShell script where I tried to use the -replace operator, which is supposed to create a $Matches variable, but it was saying $Matches didn't exist.
In any case, I've since refactored the code, deleted the entire resource group, and redeployed. The dscExtension is still giving the same error. I've verified that the blob storage account where my DSC .zip is located no longer has the code capable of producing this error message. Azure must be caching the dscExtension somewhere. I can't get it to use the new blob .zip that I upload before each deployment.
Any insight into the DSC Extension and how to force it to update on deploy?
It sounds like you may be running into multiple things here, so let's try the simple one first. In order to get a VM extension to run on a subsequent deployment you have to "seed" it (and you're right, this is different from the rest of AzureRM). Take a look at this template:
https://github.com/bmoore-msft/AzureRM-Samples/blob/master/VMDSCInstallFile/azuredeploy.json
There is a property on the DSC extension called:
"forceUpdateTag" : "changeThisToEnsureScriptRuns-maxlength=50",
The property value must be different if you ever want the extension to run again. So for example, if you wanted it to run every time you'd seed it with a random number or a guid. You could also use version numbers if you wanted to version it somehow. The point is, if the value in the template is the same as the one you're passing in, the extension won't run again.
That sample uses a VM, but the VMSS syntax should be the same. That property also applies to other extensions (e.g. custom script).
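As a rough sketch of where the property sits in your extension definition (the surrounding properties are taken from your snippet; the parameter name here is made up, and you could just as well pass a GUID or a version string):
{
  "name": "dscExtension",
  "properties": {
    "publisher": "Microsoft.Powershell",
    "type": "DSC",
    "typeHandlerVersion": "2.9",
    "autoUpgradeMinorVersion": true,
    "forceUpdateTag": "[parameters('dscForceUpdateTag')]",
    "settings": {
      "ModulesUrl": "..."
    }
  }
}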
The part that seems odd is that you said you deleted the entire RG and couldn't get it to accept the new package... That sounds bad (i.e. like a bug). If the above doesn't fix it, we may need to dig deeper into the template and script. LMK...
I have posted another question regarding my Chrome extension here.
But I have one more question about extensions themselves. I only need a content script for the modification of the Tumblr dashboard, not a background page or anything else, right?
Here is the manifest.json file:
{
"name": "Tumblr - Tiled Dashboard",
"version": "0.0.54",
"manifest_version": 2,
"description": "This extension modifies the look of your Tumblr dashboard.",
"icons": {
"16": "images/icon_16.png",
"48": "images/icon_48.png",
"128": "images/icon_128.png"
},
"content_scripts": [
{
"matches": [ "*://*.tumblr.com/dashboard" ],
"css": [ "styles.css" ],
"js": [ "jquery-2.1.3.min.js", "masonry.min.js", "code.js" ]
}
],
"homepage_url": "mypage",
"author": "myname"
}
To start, I ask: is this alright? I have read a lot about the manifest.json file and everything seems to work fine when I try out the extension locally. But when I pack the extension and upload it, there are two problems:
I cannot find the extension when I search for it
When I use the link to find the extension and I want to install it (tried that on 2 different PCs), I get an error telling me that the jquery-2.1.3.min.js file could not be loaded. I therefore changed the order of my JavaScript files to test whether it was a problem related to the jQuery file, but having masonry.min.js as the first file in the array resulted in the same error.
Why does this happen? Is the manifest.json file ok? Do I need some special permissions?
Edit:
This is a screenshot of when I try to install the extension from the Chrome Web Store (where I also can't find it by search).
I took a look inside your extension's ZIP file before downloading it, and the result was the following:
*Inspected using Chrome extension source viewer by Rob Wu
The problem here is that you've uploaded a packed CRX file inside your ZIP file, instead of your extension's source code. You should instead upload a ZIP file containing your extension's root. Since you're including the manifest.json file, the Web Store doesn't notice anything wrong until you try to install the extension, because the manifest is well written; but when Chrome tries to access the declared files, it fails and returns an error, because those files do not exist.
Quoting from the upload page of the Chrome Web Store Developer Dashboard:
Uploading an item:
Upload a ZIP file of your item directory, not a packaged CRX file.
Include a well-designed product icon in your manifest (more info).
Read the documentation about creating and packaging apps.
Need more help? Check out the Chrome Web Store developer documentation.
So, you should create a ZIP file of your extension's root directory, containing all the files of your extension. Your ZIP file should then look like the following:
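Based on your manifest, the root of the ZIP would presumably contain something like:
manifest.json
styles.css
code.js
jquery-2.1.3.min.js
masonry.min.js
images/icon_16.png
images/icon_48.png
images/icon_128.png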
I had the same "Could not load javascript file" error; then I noticed that my my_custom_script.js file was not built into the dist directory (I'm using npm). You can move it manually and reload the plugin to check whether this is the problem.
The solution for me was adding a new entry to webpack.config.js, like this:
entry: {
'my_custom_script': './my_custom_script.js',
'background': './background.js',
'popup/popup': './popup/popup.js',
'options/options': './options/options.js',
},