Unable to generate certificates for a device in group enrollments - Azure

I am following this article to create a group enrollment and add devices to it: https://learn.microsoft.com/en-us/azure/iot-dps/tutorial-group-enrollments
I have completed the first two steps, i.e.:
Prepare the environment
Create a device enrollment entry
While doing the "simulate the device" step, the tutorial says:
"{deviceName}-public.pem file and include this value as your Client Cert. Open your {deviceName}-all.pem file ".
I am not able to find these two .pem files. Where can I find them, or how do I generate them?
Can somebody please help me solve this issue?
I am getting the following error although I have set the path in the System variables:

In the "prepare the environment" part, there is a step 4:
Use the following Certificate Overview to create your test certificates.
There you will create all the certificates you will need. The device certificate is created in:
Step 4 - Create a new device
If you use PowerShell and, for example, use "x509devicetogroup" as the device name in the following command:
New-CACertsDevice x509devicetogroup
you will get several certificate files in your working folder. The x509devicetogroup-public.pem and x509devicetogroup-all.pem files are the two you need.
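Under the hood, New-CACertsDevice shells out to openssl to produce those files from the device's PFX. A minimal sketch of the same idea (file names and the throwaway password are made up for illustration; the real script issues the device certificate from your test CA rather than self-signing):

```shell
# Create a throwaway self-signed device key and certificate (illustration only;
# the real ca-certs.ps1 script chains the device cert to your test CA instead).
openssl req -x509 -newkey rsa:2048 -nodes -days 1 \
  -subj "/CN=x509devicetogroup" \
  -keyout device.key -out device.crt

# The "-public.pem" file is just the device certificate in PEM form.
cp device.crt x509devicetogroup-public.pem

# Bundle key + cert into a PFX, then unpack it into the "-all.pem" file.
# This mirrors the "openssl pkcs12" call visible in the ca-certs.ps1 error above.
openssl pkcs12 -export -out device.pfx -inkey device.key -in device.crt -passout pass:1234
openssl pkcs12 -in device.pfx -out x509devicetogroup-all.pem -nodes -passin pass:1234
```

The "-all.pem" file ends up containing both the certificate and the private key, which is why the tutorial asks for it as the client key material.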
Update:
If you get the following error when using PowerShell to create certificates:
openssl : The term 'openssl' is not recognized as the name of a cmdlet, function, script file, or operable program.
Check the spelling of the name, or if a path was included, verify that the path is correct and try again.
At D:\Sample\azure-iot-sdk-c\tools\CACertificates\ca-certs.ps1:367 char:5
+ openssl pkcs12 -in $newDevicePfxFileName -out $newDevicePemAllFil ...
+ ~~~~~~~
+ CategoryInfo : ObjectNotFound: (openssl:String) [], ParentContainsErrorRecordException
+ FullyQualifiedErrorId : CommandNotFoundException
Add a variable named "OPENSSL_CONF" to the system environment variables, pointing at your OpenSSL configuration file, and add a new entry to the system Path variable that points to the OpenSSL bin directory so that openssl.exe can be found.
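Both variables can also be set from an elevated PowerShell session; the install path below is only an assumption, so substitute wherever OpenSSL actually lives on your machine:

```powershell
# Paths are examples - adjust to your own OpenSSL installation.
# "Machine" scope requires an elevated (administrator) PowerShell session.
[Environment]::SetEnvironmentVariable('OPENSSL_CONF', 'C:\OpenSSL-Win64\bin\openssl.cfg', 'Machine')
$machinePath = [Environment]::GetEnvironmentVariable('Path', 'Machine')
[Environment]::SetEnvironmentVariable('Path', "$machinePath;C:\OpenSSL-Win64\bin", 'Machine')
```

Open a new PowerShell window afterwards so the updated variables are picked up before re-running the certificate script.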

Related

multipass: launch failed: Failed to resize instance image - error executing powershell command

I always get the following error when I launch a virtual machine:
launch failed: Failed to resize instance image - error executing powershell command. Detail: Resize-VHD : Cannot resize the virtual hard disk.
The system failed to change the size of "C:\WINDOWS\system32\config\systemprofile\AppData\Roaming\multipassd\vault\instances\krun\ubuntu-20.04-server-cloudimg-amd64.vhdx".
Cannot resize the virtual hard disk.
The system failed to change the size of "C:\WINDOWS\system32\config\systemprofile\AppData\Roaming\multipassd\vault\instances\krun\ubuntu-20.04-server-cloudimg-amd64.vhdx": The process cannot access the file because it is being used by another process. (0x80070020).
At line:1 char:1
Resize-VHD -Path C:/WINDOWS/system32/config/systemprofile/AppData/Roa ...
+ CategoryInfo : ResourceBusy: (:) [Resize-VHD], VirtualizationException
+ FullyQualifiedErrorId : ObjectInUse,Microsoft.Vhd.PowerShell.Cmdlets.ResizeVhd
Background
In my case, I changed the location of my multipass instance locations by following this post here. I then created a folder on my drive where I wanted the instances to be stored.
After doing so, multipass launch failed with your exact same error. After trying reboots, uninstall/reinstall multipass, etc. - I finally tried renaming the folder where I wanted to store my multipass instances, and this worked.
Apparently, if your destination multipass folder includes a space, some part of the script fails.
Workaround / Fix
When specifying a multipass instance destination via the MULTIPASS_STORAGE environment variable, make sure there are NO SPACES in the "<path>" you provide.
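For reference, multipass's Windows documentation sets this variable machine-wide through the registry; a sketch (the D:\MultipassStorage path is an example, chosen so it contains no spaces):

```powershell
# Example destination path - note there are no spaces anywhere in it.
Set-ItemProperty -Path "HKLM:System\CurrentControlSet\Control\Session Manager\Environment" `
    -Name MULTIPASS_STORAGE -Value "D:\MultipassStorage"
# Restart the multipass daemon service so it picks up the new location.
Restart-Service multipass
```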
Hope this helps.
You have to add the Hyper-V module to PowerShell in "Turn Windows features on or off"; if you don't check this checkbox, Resize-VHD won't work and the multipass launch will fail.
The direct command is:
Enable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V-Management-PowerShell

PowerShell Unable to find type [Pester.OutputTypes] on VSCode Linux

I'm working on a powershell port of Lesspass using Visual Studio Code on Linux Mint.
Tests were working nicely from the IDE until today.
From VSCode
Now when I'm on a test file and hit F5 to run the test, I get:
PS ~/projects/Lesspass/Lesspass> ~/projects/Lesspass/Lesspass/src/Password.tests.ps1
Unable to find type [Pester.OutputTypes].
At ~/.local/share/powershell/Modules/Pester/4.6.0/Functions/PesterState.ps1:8 char:9
+ [Pester.OutputTypes]$Show = 'All',
+ ~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : InvalidOperation: (Pester.OutputTypes:TypeName) [], RuntimeException
+ FullyQualifiedErrorId : TypeNotFound
The Describe command may only be used from a Pester test script.
At ~/.local/share/powershell/Modules/Pester/4.6.0/Functions/Describe.ps1:234 char:9
+ throw "The $CommandName command may only be used from a Peste ...
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : OperationStopped: (The Describe comman…Pester test script.:String) [], RuntimeException
+ FullyQualifiedErrorId : The Describe command may only be used from a Pester test script.
From makefile
However when running my test with make test it works. The task is:
.PHONY: test
test:
pwsh -Command 'Invoke-Pester -EnableExit (Get-childItem -Recurse *.tests.ps1).fullname'
I think your issue is likely that you are attempting to call a Pester test script on its own rather than via the Invoke-Pester command.
If you change your call to Invoke-Pester -Script ~/projects/Lesspass/Lesspass/src/Password.tests.ps1, the error should go away.
The reason is that *.tests.ps1 files do not, on their own, know how to set up all of the background plumbing required to handle a test run. Invoke-Pester does a lot of set up before test files are run, and calling a test script directly with F5 skips that setup.
If you want to be able to press F5 to kick off a test run, what many PowerShellers do in VSCode is create a debug_entry.ps1 file on the local system, and in that file put the command Invoke-Pester -Script ~/projects/Lesspass/Lesspass/src/Password.tests.ps1. Then when you want to start a run, you switch tabs to debug_entry.ps1 and hit F5, and the debug script makes the correct call for you. It has the side benefit that any breakpoints you have set, either in the test file or in the code under test, are respected as well.
I should also point out that in your make test script, you are using Get-ChildItem to explicitly collect all of the test file paths and pass them to Invoke-Pester. This is not necessary: by default, Invoke-Pester recursively searches the current working directory, or any path you give it, for test files.
For instance, the output of Get-Help Invoke-Pester includes the following snippet:
By default, Invoke-Pester runs all *.Tests.ps1 files in the current directory
and all subdirectories recursively. You can use its parameters to select tests
by file name, test name, or tag.
This snippet from the output of Get-Help Invoke-Pester -Examples demonstrates Invoke-Pester's ability to search the subdirectories of a given directory (not necessarily the current working directory) for tests to run:
-------------------------- EXAMPLE 11 --------------------------
PS > Invoke-Pester -Script C:\Tests -Tag UnitTest, Newest -ExcludeTag Bug
This command runs *.Tests.ps1 files in C:\Tests and its subdirectories. In those
files, it runs only tests that have UnitTest or Newest tags, unless the test
also has a Bug tag.
So in your case it would probably be easier and cleaner to change your make call to:
pwsh -Command 'Invoke-Pester -EnableExit'
That assumes your build system sets the current working directory to the root folder of your project.
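Put together, the simplified target might look like this (assuming make runs from the project root, so Invoke-Pester's recursive default search finds the *.tests.ps1 files on its own):

```makefile
.PHONY: test
test:
	pwsh -Command 'Invoke-Pester -EnableExit'
```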

The 'Run PowerShell' artifact failed to install with CommandNotFoundException

I'm trying to download and run a PowerShell script (from blob storage) using the Run Powershell artifact on an existing VM in Azure DevTest labs.
I get the following error and I assume I am doing something stupid.
& : The term './script.ps1' is not recognized as the name of a cmdlet,
function, script file, or operable program. Check the spelling of the name, or
if a path was included, verify that the path is correct and try again.
At line:1 char:3
+ & ./script.ps1
+ ~~~~~~~~~~~~
+ CategoryInfo : ObjectNotFound: (./script.ps1:String) [], Comman
dNotFoundException
+ FullyQualifiedErrorId : CommandNotFoundException
Here is my setup...
I have also tried the JSON array syntax, which gave the same result, and an invalid URL which gave a 404 error so it seems as if it is downloading my script but then failing to find it.
Below is info I wrote a while back.
A few items to note:
Folder structures are not supported as of this writing, so the script needs to be at the root of the container.
Ensure your blob is public.
First you will need your file in Azure storage. Once uploaded in your container, click the file to get to its properties and copy the URL field.
As an example, I have created the following Run.ps1 script file and uploaded it to storage as a blob:
param (
    [string]$drive = "c:\",
    [string]$folderName = "DefaultFolderName"
)
New-Item -Path $drive -Name $folderName -ItemType "directory"
Now, while adding the 'Run PowerShell' artifact to the VM, we provide the following information:
File URI(s): URL field copied from earlier step. (eg. https://myblob.blob.core.windows.net/mycontainer/Run.ps1)
Script to Run: Name of the PS1 script, (eg. Run.ps1)
Script Arguments: Arguments as you would write them at the end of your command (eg. -drive "d:\" -folderName "MyFolder")

How to start a BITS download as the SYSTEM account? Current error: "user has not logged on to the network" (0x800704DD)

I'm trying to launch a BITS download in a GPO Startup Script. Startup Scripts run as the local SYSTEM account, which should work well for background downloads per Microsoft's documentation: https://msdn.microsoft.com/en-us/library/windows/desktop/aa363152(v=vs.85).aspx
Sadly, when I try to start a download (regardless of the source or destination), I get the following error:
Start-BitsTransfer : The operation being requested was not performed because
the user has not logged on to the network. The specified service does not
exist. (Exception from HRESULT: 0x800704DD)
At line:1 char:1
+ Start-BitsTransfer -Source localhost -Destination c:\temp
+ ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
+ CategoryInfo : NotSpecified: (:) [Start-BitsTransfer], COMException
+ FullyQualifiedErrorId : System.Runtime.InteropServices.COMException,Microsoft.BackgroundIntelligentTransfer.Management.NewBitsTransferCommand
This is just a test; the actual BITS transfer is started from a C# application launched by the GPO Startup Script. Further tests with a process created manually through Sysinternals PsExec yield the same error.
Additional checks of the security principals in whoami /all look fine:
User Name SID
=================== ========
nt authority\system S-1-5-18
GROUP INFORMATION
-----------------
...
CONSOLE LOGON Well-known group S-1-2-1
...
LOCAL Well-known group S-1-2-0
BUILTIN\Administrators Alias S-1-5-32-544
I checked for services BITS and SENS - all running.
To summarize:
How can I successfully launch a BITS download as SYSTEM in a Startup Script?
How does the error "user has not logged on to the network" make sense, considering the SYSTEM account is always logged on? And what is the meaning of "The specified service does not exist." - what service?

Setting up a workspace using Team Explorer Everywhere on Linux

I'm having trouble creating a workspace and downloading the files from a Team Foundation Server using the Team Explorer Everywhere command-line client (TEE-CLC-10.0.0). I've gotten as far as creating the workspace:
$ ../tfs/TEE-CLC-10.0.0/tf -login:secretUsername,secretPassword -server:http://secretHost:8080 workspace -new KOLOBI
Workspace 'KOLOBI2' created.
Then I want to download files from the server to my workspace:
$ ../tfs/TEE-CLC-10.0.0/tf -login:secretUsername,secretPassword -server:http://secretHost:8080 get -recursive -all -force .
An argument error occurred: Items must reside in a workspace that has been previously used on this computer.
I guess I'm missing a step, such as adding local directories to the workspace, but I can't figure out how to do it in order to download the files.
You'll need to create working folder mappings between your local folder and the server items you wish it to correspond to.
For example:
tf workfold -map -login:secretUsername,secretPassword -server:http://secretHost:8080 -workspace:KOLOBI '$/TeamProject/Project' '/home/me/project'
Then from the /home/me/project directory (or whatever you picked), you can just execute tf get .
