I encountered a problem when using the SWUpdate image built by Yocto.
Software Update started !
[network_initializer] : Software update started
[extract_file_to_tmp] : Found file
[extract_file_to_tmp] : filename sw-description
[extract_file_to_tmp] : size 303
[get_common_fields] : Version 0.1.0
[get_common_fields] : Description Firmware update for XXXXX Project
[parse_hw_compatibility] : Accepted Hw Revision : 1.0
[parse_hw_compatibility] : Accepted Hw Revision : 1.2
[parse_hw_compatibility] : Accepted Hw Revision : 1.3
[_parse_images] : Found Image: rootfs.ext4.gz in device : /dev/mmcblk2p4 for handler raw
[check_hw_compatibility] : Hardware myir Revision: 1.0
[check_hw_compatibility] : Hardware compatibility verified
[extract_files] : Found file
[extract_files] : filename rootfs.ext4.gz
[extract_files] : size 373258053 required
ERROR : Not enough free space to extract rootfs.ext4.gz (needed 373258053, got 223219712)
Image invalid or corrupted. Not installing ...
[network_initializer] : Main thread sleep again !
Waiting for requests...
ERROR : Writing to IPC fails due to Broken pipe
As shown in the log above, it indicates that there is not enough free space, so I used resize2fs /dev/mmcblk2p4 to expand the partition. It now has 1 GB of space, but I still get the same message. Please let me know what you think.
Try using the installed-directly flag in the sw-description file. With installed-directly set, SWUpdate streams the artifact straight to the target instead of extracting it to a temporary location first, so the free-space check that is failing here no longer applies.
files: (
    {
        filename = "rootfs.ext4.gz";
        sha256 = "bc57b9c737033d0d6826db51618d596da7ecf3fdc0cb48dc9986a6094f529413";
        type = "archive";
        path = "/path/to/extract";
        preserve-attributes = true;
        installed-directly = true; <---------- this option
        properties: {
            create-destination = "true";
        }
    }
);
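For reference, if you keep the raw handler from the original log instead of switching to an archive, a minimal sketch of the images entry could look like the one below. Only the filename and the device come from the log; the version, the compression attribute and everything else are assumptions to adapt to your setup.
software =
{
    version = "0.1.0";
    images: (
        {
            filename = "rootfs.ext4.gz";
            device = "/dev/mmcblk2p4";
            type = "raw";
            # assumption: the .gz must be decompressed while streaming;
            # older SWUpdate releases use the boolean form `compressed = true;`
            compressed = "zlib";
            # stream straight to the device, no temporary copy in /tmp
            installed-directly = true;
        }
    );
}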
I am currently working on Azure Device Update using the layers meta-azure-device-update and meta-swupdate. I want to run a post-install script. I have followed the sources mentioned below:
1. sw-description.rst
https://git.rigado.com/vesta/swupdate/-/blob/acf50e361a8752db48e69ffe3c20a167c402d35f/doc/source/sw-description.rst#board-specific-settings
2. adu-swupdate.sh
https://github.com/Azure/iot-hub-device-update/blob/main/src/adu-shell/scripts/adu-swupdate.sh
The image was built successfully and I was able to locate adu-swupdate.sh in the .swu file which I provided to Azure Device Update. The install failed with the error below:
Sep 21 07:21:30 rpi AducIotAgent[281]: -> 07:21:29 PUBLISH | IS_DUP: false | RETAIN: 0 | QOS: DELIVER_AT_MOST_ONCE | TOPIC_NAME: $iothub/twin2021-09-21T07:21:30.2396Z [E] Install failed, extendedResultCode = 1 [Install]
Sep 21 07:21:30 rpi AducIotAgent[281]: 2021-09-21T07:21:30.2398Z [E] Install failed. error 0, 1 - Expecting service to send Cancel action [ADUC_Workflow_WorkCompletionCallback]
The SWUpdate log is given below:
Swupdate v2021.04.0
Licensed under GPLv2. See source distribution for detailed copyright notices.
[INFO ] : SWUPDATE running : [main] : Running on raspberrypi4 Revision 1.0
[INFO ] : SWUPDATE running : [print_registered_handlers] : Registered handlers:
[INFO ] : SWUPDATE running : [print_registered_handlers] : dummy
[INFO ] : SWUPDATE running : [print_registered_handlers] : archive
[INFO ] : SWUPDATE running : [print_registered_handlers] : tar
[INFO ] : SWUPDATE running : [print_registered_handlers] : uboot
[INFO ] : SWUPDATE running : [print_registered_handlers] : bootloader
[INFO ] : SWUPDATE running : [print_registered_handlers] : raw
[INFO ] : SWUPDATE running : [print_registered_handlers] : rawfile
[INFO ] : SWUPDATE running : [print_registered_handlers] : rawcopy
[INFO ] : SWUPDATE running : [main] : software set: stable mode: copy2
[TRACE] : SWUPDATE running : [listener_create] : creating socket at /tmp/swupdateprog
[TRACE] : SWUPDATE running : [network_initializer] : Main loop daemon
[TRACE] : SWUPDATE running : [listener_create] : creating socket at /tmp/sockinstctrl
[TRACE] : SWUPDATE running : [network_thread] : Incoming network request: processing...
[INFO ] : SWUPDATE started : Software Update started !
[TRACE] : SWUPDATE running : [network_initializer] : Software update started
[WARN ] : SWUPDATE running : [scan_mtd_devices] : MTD is not present on the target
[TRACE] : SWUPDATE running : [extract_file_to_tmp] : Found file
[TRACE] : SWUPDATE running : [extract_file_to_tmp] : filename sw-description
[TRACE] : SWUPDATE running : [extract_file_to_tmp] : size 1144
[TRACE] : SWUPDATE running : [extract_file_to_tmp] : Found file
[TRACE] : SWUPDATE running : [extract_file_to_tmp] : filename sw-description.sig
[TRACE] : SWUPDATE running : [extract_file_to_tmp] : size 256
[TRACE] : SWUPDATE running : [swupdate_verify_file] : Verify signed image: Read 1144 bytes
[TRACE] : SWUPDATE running : [swupdate_verify_file] : Verified OK
[TRACE] : SWUPDATE running : [get_common_fields] : Version 0.1.0.1
[TRACE] : SWUPDATE running : [parse_hw_compatibility] : Accepted Hw Revision : 1.0
[TRACE] : SWUPDATE running : [_parse_images] : Found compressed Image: core-image-base-raspberrypi4.ext4.gz in device : /dev/mmcblk0p3 for handler raw
[TRACE] : SWUPDATE running : [_parse_scripts] : Found Script: adu-swupdate.sh
[ERROR] : SWUPDATE failed [0] ERROR : feature 'postinstall' required for 'adu-swupdate.sh' in sw-description is absent!
[ERROR] : SWUPDATE failed [0] ERROR : Compatible SW not found
[ERROR] : SWUPDATE failed [1] Image invalid or corrupted. Not installing ...
[TRACE] : SWUPDATE running : [network_initializer] : Main thread sleep again !
[INFO ] : No SWUPDATE running : Waiting for requests...
[INFO ] : SWUPDATE running : [endupdate] : Swupdate *failed* !
So, after hours of exploration and reading every page of the SWUpdate documentation, I figured out that there are handlers for each function which must be enabled before they can be used.
You can read more about them here: https://sbabic.github.io/swupdate/handlers.html
These handlers are configured in meta-swupdate/recipes-support/swupdate/defconfig:
#
# Automatically generated file; DO NOT EDIT.
# Swupdate Configuration
#
CONFIG_HAVE_DOT_CONFIG=y
#
# Swupdate Settings
#
#
# General Configuration
#
# CONFIG_CURL is not set
# CONFIG_CURL_SSL is not set
# CONFIG_SYSTEMD is not set
CONFIG_DEFAULT_CONFIG_FILE="/etc/swupdate.cfg"
CONFIG_SCRIPTS=y
CONFIG_HW_COMPATIBILITY=y
CONFIG_HW_COMPATIBILITY_FILE="/etc/hwrevision"
CONFIG_SW_VERSIONS_FILE="/etc/sw-versions"
#
# Socket Paths
#
CONFIG_SOCKET_CTRL_PATH=""
CONFIG_SOCKET_PROGRESS_PATH=""
CONFIG_SOCKET_REMOTE_HANDLER_DIRECTORY="/tmp/"
CONFIG_MTD=y
CONFIG_LUA=y
CONFIG_LUAPKG="lua"
# CONFIG_FEATURE_SYSLOG is not set
#
# Build Options
#
CONFIG_CROSS_COMPILE=""
CONFIG_SYSROOT=""
CONFIG_EXTRA_LDLIBS=""
#
# Debugging Options
#
# CONFIG_DEBUG is not set
# CONFIG_WERROR is not set
# CONFIG_NOCLEANUP is not set
# CONFIG_BOOTLOADER_EBG is not set
CONFIG_UBOOT=y
# CONFIG_BOOTLOADER_NONE is not set
# CONFIG_BOOTLOADER_GRUB is not set
CONFIG_UBOOT_FWENV="/etc/fw_env.config"
CONFIG_UBOOT_DEFAULTENV="/etc/u-boot-initial-env"
# CONFIG_SSL_IMPL_NONE is not set
CONFIG_SSL_IMPL_OPENSSL=y
# CONFIG_SSL_IMPL_MBEDTLS is not set
# CONFIG_DOWNLOAD is not set
# CONFIG_HASH_VERIFY is not set
# CONFIG_SIGNED_IMAGES is not set
# CONFIG_ENCRYPTED_IMAGES is not set
# CONFIG_SURICATTA is not set
CONFIG_WEBSERVER=y
CONFIG_MONGOOSE=y
CONFIG_MONGOOSEIPV6=y
CONFIG_MONGOOSESSL=y
CONFIG_GUNZIP=y
# CONFIG_ZSTD is not set
#
# Parser Features
#
CONFIG_LIBCONFIG=y
CONFIG_PARSERROOT=""
# CONFIG_JSON is not set
# CONFIG_LUAEXTERNAL is not set
# CONFIG_SETSWDESCRIPTION is not set
#
# Image Handlers
#
# CONFIG_UBIVOL is not set
CONFIG_CFI=y
# CONFIG_CFIHAMMING1 is not set
# CONFIG_DISKPART is not set
CONFIG_RAW=y
# CONFIG_RDIFFHANDLER is not set
CONFIG_LUASCRIPTHANDLER=y
CONFIG_SHELLSCRIPTHANDLER=y
# CONFIG_HANDLER_IN_LUA is not set
# CONFIG_ARCHIVE is not set
# CONFIG_REMOTE_HANDLER is not set
# CONFIG_SWUFORWARDER_HANDLER is not set
# CONFIG_BOOTLOADERHANDLER is not set
# CONFIG_SSBLSWITCH is not set
# CONFIG_UCFWHANDLER is not set
So, to enable the pre- and post-install script feature, you should edit this defconfig (in a Yocto build, typically by supplying your own defconfig through a swupdate bbappend) so that it sets
CONFIG_SHELLSCRIPTHANDLER=y
This enables the shell script handler and, with it, pre- and post-install scripts in SWUpdate.
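For completeness, the matching scripts section in sw-description for a shell script looks roughly like the sketch below. Only the filename comes from the log above; the shellscript handler runs the script with a "preinst" argument before the images are installed and with "postinst" afterwards.
scripts: (
    {
        # script shipped inside the .swu, executed by the shellscript handler
        filename = "adu-swupdate.sh";
        type = "shellscript";
    }
);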
After working smoothly for more than 10 months, I suddenly started getting this error in production while doing simple search queries.
{
"error" : {
"root_cause" : [
{
"type" : "circuit_breaking_exception",
"reason" : "[parent] Data too large, data for [<http_request>] would be [745522124/710.9mb], which is larger than the limit of [745517875/710.9mb]",
"bytes_wanted" : 745522124,
"bytes_limit" : 745517875
}
],
"type" : "circuit_breaking_exception",
"reason" : "[parent] Data too large, data for [<http_request>] would be [745522124/710.9mb], which is larger than the limit of [745517875/710.9mb]",
"bytes_wanted" : 745522124,
"bytes_limit" : 745517875
},
"status" : 503
}
Initially, I was getting this circuit_breaking_exception error on simple term queries. To debug it, I ran the _cat/health query on the Elasticsearch cluster, but I got the same error; even the simplest request to localhost:9200 returns it. I am not sure what happened to the cluster so suddenly.
Here is my circuit breaker status:
"breakers" : {
"request" : {
"limit_size_in_bytes" : 639015321,
"limit_size" : "609.4mb",
"estimated_size_in_bytes" : 0,
"estimated_size" : "0b",
"overhead" : 1.0,
"tripped" : 0
},
"fielddata" : {
"limit_size_in_bytes" : 639015321,
"limit_size" : "609.4mb",
"estimated_size_in_bytes" : 406826332,
"estimated_size" : "387.9mb",
"overhead" : 1.03,
"tripped" : 0
},
"in_flight_requests" : {
"limit_size_in_bytes" : 1065025536,
"limit_size" : "1015.6mb",
"estimated_size_in_bytes" : 560,
"estimated_size" : "560b",
"overhead" : 1.0,
"tripped" : 0
},
"accounting" : {
"limit_size_in_bytes" : 1065025536,
"limit_size" : "1015.6mb",
"estimated_size_in_bytes" : 146387859,
"estimated_size" : "139.6mb",
"overhead" : 1.0,
"tripped" : 0
},
"parent" : {
"limit_size_in_bytes" : 745517875,
"limit_size" : "710.9mb",
"estimated_size_in_bytes" : 553214751,
"estimated_size" : "527.5mb",
"overhead" : 1.0,
"tripped" : 0
}
}
I found a similar GitHub issue that suggests increasing the circuit breaker memory limit or disabling it altogether, but I am not sure which to choose. Please help!
Elasticsearch Version 6.3
After some more research, I finally found a solution:
We should not disable the circuit breaker, as that might result in an OOM error and eventually crash Elasticsearch.
Dynamically increasing the circuit breaker memory percentage helps, but it is only a temporary solution, because the increased limit can eventually fill up as well.
Finally, there is a third option: increase the overall JVM heap size, which is 1 GB by default. On production the recommendation is to size it up to around 30-32 GB at most, and to keep it below 50% of the total available memory.
For more information on good JVM memory configuration for Elasticsearch in production, see Heap: Sizing and Swapping.
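For illustration, the temporary workaround is a dynamic cluster-settings change, and the proper fix is raising the heap in jvm.options. The 80% and 4g values below are placeholders to be sized to your machine, not recommendations.
PUT _cluster/settings
{
  "transient": {
    "indices.breaker.total.limit": "80%"
  }
}
# config/jvm.options -- keep -Xms and -Xmx equal and below 50% of RAM
-Xms4g
-Xmx4g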
In my case I have an index with large documents: each document is ~30 KB and has more than 130 fields (nested objects, arrays, dates and ids).
I was searching all fields using this DSL query:
query_string: {
query: term,
analyze_wildcard: true,
fields: ['*'], // search all fields
fuzziness: 'AUTO'
}
Full-text searches are expensive, and searching through multiple fields at once is even more expensive, in terms of computing power rather than storage.
Therefore:
The more fields a query_string or multi_match query targets, the
slower it is. A common technique to improve search speed over multiple
fields is to copy their values into a single field at index time, and
then use this field at search time.
Please refer to the Elasticsearch docs, which recommend searching as few fields as possible, with the help of the copy_to directive.
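As an illustration only (the index, type and field names below are made up, not taken from my real mapping), a copy_to mapping on Elasticsearch 6.x looks roughly like this; the query_string query then only needs to target search_field:
PUT my_index
{
  "mappings": {
    "_doc": {
      "properties": {
        "first_name":   { "type": "text", "copy_to": "search_field" },
        "last_name":    { "type": "text", "copy_to": "search_field" },
        "search_field": { "type": "text" }
      }
    }
  }
}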
After I changed my query to search one field:
query_string: {
query: term,
analyze_wildcard: true,
fields: ['search_field'] // search in one field
}
everything worked like a charm.
I got this error with my Docker container, so I increased ES_JAVA_OPTS to 1 GB and now it works without any error.
Here is the docker-compose.yml:
version: '3'
services:
  elasticsearch-cont:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.9.2
    container_name: elasticsearch
    environment:
      - "ES_JAVA_OPTS=-Xms1024m -Xmx1024m"
      - discovery.type=single-node
    ulimits:
      memlock:
        soft: -1
        hard: -1
    ports:
      - 9200:9200
      - 9300:9300
    networks:
      - elastic

networks:
  elastic:
    driver: bridge
In my case, I also have an index with large documents that store system logs, and I was searching the index across all fields. I use the Java client API, like this:
TermQueryBuilder termQueryBuilder = QueryBuilders.termQuery("uid", uid);
searchSourceBuilder.query(termQueryBuilder);
When I changed my code like this:
TermQueryBuilder termQueryBuilder = QueryBuilders.termQuery("uid", uid);
searchSourceBuilder.fetchField("uid");
searchSourceBuilder.fetchSource(false);
searchSourceBuilder.query(termQueryBuilder);
the error disappeared, presumably because the large _source documents are no longer fetched for every hit.
I am using video-length to get the duration of a video in Node.js. The code worked fine on CentOS.
On Ubuntu 16.x it shows this error:
SyntaxError: Unexpected token G in JSON at position 0
at JSON.parse (<anonymous>)
at VideoLength (/.../node_modules/video-length/video-length.js:14:24)
I have used the code below to get the duration:
const VideoLength = require('video-length');
VideoLength(video, {
bin: '/usr/bin/mediainfo',
extended: true
})
.then(data => {
console.log("data: %j", data)
duration = data['duration']
console.log("duration: " + duration)
})
.catch(err => {
console.log(err);
})
I have installed mediainfo as well. Please guide me if I am doing anything wrong.
After debugging video-length.js, I found the following difference in the stdout obtained on CentOS and on Ubuntu.
In Centos:
{
"media": {
"#ref": "/var/kurento/tmp/30/27122019/1514kurento-recording.webm",
"track": [
{
"#type": "General",
"VideoCount": "1",
"FileExtension": "webm",
"Format": "WebM",
"Format_Version": "2",
"FileSize": "157078809",
"FrameRate": "25.028",
"IsStreamable": "Yes",
"Encoded_Date": "UTC 2019-12-27 09:44:33",
"File_Modified_Date": "UTC 2019-12-27 10:56:31",
"File_Modified_Date_Local": "2019-12-27 16:26:31",
"Encoded_Application": "GStreamer Matroska muxer",
"Encoded_Library": "GStreamer matroskamux version 1.8.1.1",
"extra": {
"IsTruncated": "Yes"
}
},
{
"#type": "Video",
"StreamOrder": "0",
"ID": "1",
"UniqueID": "2934041311738436184",
"Format": "VP8",
"CodecID": "V_VP8",
"Width": "1920",
"Height": "1080",
"PixelAspectRatio": "1.000",
"DisplayAspectRatio": "1.778",
"FrameRate_Mode": "CFR",
"FrameRate": "25.028",
"Compression_Mode": "Lossy",
"Delay": "0.000",
"Title": "Video",
"Language": "en",
"Default": "Yes",
"Forced": "No"
}
]
}
}
In Ubuntu:
General
Count : 308
Count of stream of this kind : 1
Kind of stream : General
Kind of stream : General
Stream identifier : 0
Count of video streams : 1
Video_Format_List : VP8
Video_Format_WithHint_List : VP8
Codecs Video : V_VP8
Video_Language_List : English
Complete name : /var/kurento/tmp/27/23012020/1355kurento-recording.webm
Folder name : /var/kurento/tmp/27/23012020
File name : 1355kurento-recording
File extension : webm
Format : WebM
Format : WebM
Format/Url : http://www.webmproject.org/
Format/Extensions usually used : webm
Commercial name : WebM
Format version : Version 2
Internet media type : video/webm
Codec : WebM
Codec : WebM
Codec/Url : http://www.webmproject.org/
Codec/Extensions usually used : webm
File size : 1760787
File size : 1.68 MiB
File size : 2 MiB
File size : 1.7 MiB
File size : 1.68 MiB
File size : 1.679 MiB
Duration : 45037
Duration : 45s 37ms
Duration : 45s 37ms
Duration : 45s 37ms
Duration : 00:00:45.037
Duration : 00:00:45.037
Overall bit rate : 312772
Overall bit rate : 313 Kbps
Encoded date : UTC 2020-01-23 08:25:19
File last modification date : UTC 2020-01-23 08:26:04
File last modification date (local) : 2020-01-23 13:56:04
Writing application : GStreamer Matroska muxer
Writing application : GStreamer Matroska muxer
Writing library : GStreamer matroskamux version 1.8.1.1
Writing library : GStreamer matroskamux version 1.8.1.1
Video
Count : 311
Count of stream of this kind : 1
Kind of stream : Video
Kind of stream : Video
Stream identifier : 0
StreamOrder : 0
ID : 1
ID : 1
Unique ID : 3811409054155698551
Format : VP8
Format/Url : http://www.webmproject.org/
Commercial name : VP8
Codec ID : V_VP8
Codec ID/Url : http://www.webmproject.org/
Codec : V_VP8
Codec : V_VP8
Bit rate : 293486
Bit rate : 293 Kbps
Width : 1920
Width : 1 920 pixels
Height : 1080
Height : 1 080 pixels
Pixel aspect ratio : 1.000
Display aspect ratio : 1.778
Display aspect ratio : 16:9
Frame rate mode : VFR
Frame rate mode : Variable
Compression mode : Lossy
Compression mode : Lossy
Delay : 0
Delay : 00:00:00.000
Delay, origin : Container
Delay, origin : Container
Title : Video
Language : en
Language : English
Language : English
Language : en
Language : eng
Language : en
Default : Yes
Default : Yes
Forced : No
Forced : No
I don't know the reason for the difference in structure and am still debugging to find the cause. Any help will be highly appreciated.
It is good that you found the issue yourself. As you can see, the stdout on Ubuntu is not a valid JSON string (most likely the mediainfo build shipped with Ubuntu 16.x is too old to produce JSON output), and the library does JSON.parse on it to turn it into an object, hence the error.
I would suggest not using that library, because it is not well tested itself. If you have the option, I would suggest going with this instead: https://github.com/caffco/get-video-duration
Install
$ npm install --save get-video-duration
Usage
const { getVideoDurationInSeconds } = require('get-video-duration')
// From a local path...
getVideoDurationInSeconds('video.mov').then((duration) => {
console.log(duration)
})
// From a URL...
getVideoDurationInSeconds('http://clips.vorwaerts-gmbh.de/big_buck_bunny.mp4').then((duration) => {
console.log(duration)
})
// From a readable stream...
const fs = require('fs')
const stream = fs.createReadStream('video.mov')
getVideoDurationInSeconds(stream).then((duration) => {
console.log(duration)
})
In Hiera I have the node variable solr_enabled = true. On this node I also have a list of fstab mount points like:
fstab_homes:
'/home1':
device: 'UUID=ac2ca97e-8bce-4774-92d7-051482253089'
'/home2':
device: 'UUID=d9daaeed-4e4e-40e9-aa6b-73632795e661'
'/home3':
device: 'UUID=21a358cf-2579-48cb-b89d-4ff43e4dd104'
'/home4':
device: 'UUID=c68041de-542a-4f72-9488-337048c41947'
'/home16':
device: 'UUID=d55eff53-3087-449b-9667-aeff49c556e7'
In solr.pp I want to get the first mounted home disk, create a folder there, and make a symbolic link to it from /home/cpanelsolr.
For this I wrote the following code in /etc/puppet/environments/testing/modules/cpanel/manifests/solr.pp:
# Install SOLR - dovecot full text search plugin
class cpanel::solr(
$solr_enable = hiera('solr_enabled',false),
$homes = hiera_hash('fstab_homes', false),
$homesKeys = keys($homes),
)
{
if $solr_enable == true {
notify{"Starting Solr Installation ${homesKeys[0]}":}
if $homes != false and $homesKeys[0] != '/home' {
file { "Create Solr home symlink to ${homesKeys[0]}":
path => '/home/cpanelsolr',
ensure => 'link',
target => "${homesKeys[0]}/cpanelsolr",
}
}
exec { 'cpanel-dovecot-solr':
command => "/bin/bash -c
'/usr/local/cpanel/scripts/install_dovecot_fts'",
}
}
}
But when I run this on the dev node I get an error:
root@webcloud2 [/home1]# puppet agent -t --no-use_srv_records --server=puppet.development.internal --environment=testing --tags=cpanel::solr
Info: Retrieving pluginfacts
Info: Retrieving plugin
Info: Loading facts
2018-08-03 6:04:54 140004666824672 [Note] libgovernor.so found
2018-08-03 6:04:54 140004666824672 [Note] All governors functions found too
2018-08-03 6:04:54 140004666824672 [Note] Governor connected
2018-08-03 6:04:54 140004666824672 [Note] All governors lve functions found too
Error: Could not retrieve catalog from remote server: Error 400 on SERVER: keys(): Requires hash to work with at
/etc/puppet/environments/testing/modules/cpanel/manifests/solr.pp:6 on node webcloud2.development.internal
Warning: Not using cache on failed catalog
Error: Could not retrieve catalog; skipping run
What's wrong?
You have at least two problems there.
The first problem is that $homes won't be set at all in that context, so you need to compute the keys inside the class body instead:
class cpanel::solr(
$solr_enable = hiera('solr_enabled',false),
$homes = hiera_hash('fstab_homes', false),
)
{
$homes_keys = keys($homes)
...
}
The second problem is that your YAML isn't correctly indented, so fstab_homes will not actually return a Hash. It should be:
fstab_homes:
  '/home1':
    device: 'UUID=ac2ca97e-8bce-4774-92d7-051482253089'
  '/home2':
    device: 'UUID=d9daaeed-4e4e-40e9-aa6b-73632795e661'
  '/home3':
    device: 'UUID=21a358cf-2579-48cb-b89d-4ff43e4dd104'
  '/home4':
    device: 'UUID=c68041de-542a-4f72-9488-337048c41947'
  '/home16':
    device: 'UUID=d55eff53-3087-449b-9667-aeff49c556e7'
Finally, be aware that the use of camelCase in variable names in Puppet can cause issues in some contexts, so it is best to use snake_case.
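Putting the two fixes together (and switching to snake_case), a sketch of the whole class could look like this. The resources are taken from your manifest; the notify moves inside the $homes check because it references the first key, and keys() is only called once $homes is known to be a hash:
# Install SOLR - dovecot full text search plugin
class cpanel::solr(
  $solr_enable = hiera('solr_enabled', false),
  $homes       = hiera_hash('fstab_homes', false),
) {
  if $solr_enable == true {
    if $homes != false {
      # compute derived values in the class body, not in the parameter list
      $homes_keys = keys($homes)

      notify { "Starting Solr Installation ${homes_keys[0]}": }

      if $homes_keys[0] != '/home' {
        file { "Create Solr home symlink to ${homes_keys[0]}":
          ensure => 'link',
          path   => '/home/cpanelsolr',
          target => "${homes_keys[0]}/cpanelsolr",
        }
      }
    }

    exec { 'cpanel-dovecot-solr':
      command => "/bin/bash -c '/usr/local/cpanel/scripts/install_dovecot_fts'",
    }
  }
}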
I was testing out DSC with Linux.
Here are the details of the Linux server I am using:
PS C:\Windows\system32> Get-CimInstance -CimSession $Session -namespace root/omi -ClassName omi_identify
InstanceID : 2FDB5542-5896-45D5-9BE9-DC04430AAABE
SystemName : CENTOSTESTDSC
ProductName : OMI
ProductVendor : Microsoft
ProductVersionMajor : 1
ProductVersionMinor : 0
ProductVersionRevision : 8
ProductVersionString : 1.0.8
Platform : LINUX_X86_64_GNU
OperatingSystem : LINUX
Architecture : X86_64
Compiler : GNU
ConfigPrefix : GNU
ConfigLibDir : /opt/omi-1.0.8/lib
ConfigBinDir : /opt/omi-1.0.8/bin
ConfigIncludeDir : /opt/omi-1.0.8/include
ConfigDataDir : /opt/omi-1.0.8/share
ConfigLocalStateDir : /opt/omi-1.0.8/var
ConfigSysConfDir : /opt/omi-1.0.8/etc
ConfigProviderDir : /opt/omi-1.0.8/etc
ConfigLogFile : /opt/omi-1.0.8/var/log/omiserver.log
ConfigPIDFile : /opt/omi-1.0.8/var/run/omiserver.pid
ConfigRegisterDir : /opt/omi-1.0.8/etc/omiregister
ConfigSchemaDir : /opt/omi-1.0.8/share/omischema
ConfigNameSpaces : {root-check, interop, root-omi, root-cimv2}
PSComputerName : 123.112.123.156
When I try to invoke a DSC configuration against the Linux VM, I get an error as below.
I am using the latest PSDSC tar file from the location below:
wget https://github.com/Microsoft/PowerShell-DSC-for-Linux/releases/download/V1.1.0-466/PSDSC.tar.gz
Can someone help, please?
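For context, this is roughly the pattern I am following to create the CIM session and push the configuration; the credentials, MOF path and session options below are placeholders rather than my exact values (the IP is the one from the omi_identify output above):
# Build a CIM session to the Linux node over WSMan/HTTPS (port 5986)
$cred = Get-Credential -UserName 'root' -Message 'Linux node credentials'
$opt  = New-CimSessionOption -UseSsl -SkipCACheck -SkipCNCheck -SkipRevocationCheck
$Session = New-CimSession -ComputerName '123.112.123.156' -Credential $cred `
                          -Authentication Basic -Port 5986 -SessionOption $opt

# Push the compiled MOF in .\LinuxConfig to the node over that session
Start-DscConfiguration -Path .\LinuxConfig -CimSession $Session -Verbose -Wait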