Is the snapshot feature available with OpenEBS cStor?

I am using the OpenEBS 0.7.2 cStor version and I am unable to create a cStor volume snapshot. It shows the following errors.
E1129 18:49:42.581577 1 processor.go:82] failed to create snapshot for volume :pvc-845b429c-f34e-11e8-bea9-021288b303ec, err: Get http://:9501/v1/volumes: dial tcp :9501: connect: connection refused
W1129 18:49:42.581601 1 snapshotter.go:269] failed to snapshot &v1.PersistentVolumeSpec{Capacity:v1.ResourceList{"storage":resource.Quantity{i:resource.int64Amount{value:5368709120, scale:0}, d:resource.infDecAmount{Dec:(*inf.Dec)(nil)}, s:"5Gi", Format:"BinarySI"}}, PersistentVolumeSource:v1.PersistentVolumeSource{GCEPersistentDisk:(*v1.GCEPersistentDiskVolumeSource)(nil), AWSElasticBlockStore:(*v1.AWSElasticBlockStoreVolumeSource)(nil), HostPath:(*v1.HostPathVolumeSource)(nil), Glusterfs:(*v1.GlusterfsVolumeSource)(nil), NFS:(*v1.NFSVolumeSource)(nil), RBD:(*v1.RBDPersistentVolumeSource)(nil), ISCSI:(*v1.ISCSIPersistentVolumeSource)(0xc42051de00), Cinder:(*v1.CinderPersistentVolumeSource)(nil), CephFS:(*v1.CephFSPersistentVolumeSource)(nil), FC:(*v1.FCVolumeSource)(nil), Flocker:(*v1.FlockerVolumeSource)(nil), FlexVolume:(*v1.FlexPersistentVolumeSource)(nil), AzureFile:(*v1.AzureFilePersistentVolumeSource)(nil), VsphereVolume:(*v1.VsphereVirtualDiskVolumeSource)(nil), Quobyte:(*v1.QuobyteVolumeSource)(nil), AzureDisk:(*v1.AzureDiskVolumeSource)(nil), PhotonPersistentDisk:(*v1.PhotonPersistentDiskVolumeSource)(nil), PortworxVolume:(*v1.PortworxVolumeSource)(nil), ScaleIO:(*v1.ScaleIOPersistentVolumeSource)(nil), Local:(*v1.LocalVolumeSource)(nil), StorageOS:(*v1.StorageOSPersistentVolumeSource)(nil), CSI:(*v1.CSIPersistentVolumeSource)(nil)}, AccessModes:[]v1.PersistentVolumeAccessMode{"ReadWriteOnce"}, ClaimRef:(*v1.ObjectReference)(0xc420486f50), PersistentVolumeReclaimPolicy:"Delete", StorageClassName:"openebs-cstor-disk", MountOptions:[]string(nil), VolumeMode:(*v1.PersistentVolumeMode)(nil), NodeAffinity:(*v1.VolumeNodeAffinity)(nil)}, err: Get http://:9501/v1/volumes: dial tcp :9501: connect: connection refused
E1129 18:49:42.581694 1 goroutinemap.go:150] Operation for "createdefault/snapshot-cstor-postgres-data-a9defa6b-f406-11e8-bea9-021288b303ecpostgres-data" failed. No retries permitted until 2018-11-29 18:51:44.581673306 +0000 UTC m=+97194.100277390 (durationBeforeRetry 2m2s). Error: "Failed to take snapshot of the volume pvc-845b429c-f34e-11e8-bea9-021288b303ec: %!q(<nil>)"
Are snapshots supported with cStor?

OpenEBS 0.7 had only a preview version of snapshots for the cStor engine; the issue mentioned above has been fixed in the 0.8 release.
Note: the snapshot feature has been enhanced and a clone feature has been introduced in OpenEBS 0.8.
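As a rough sketch of the 0.8 flow (resource names such as snapshot-demo, demo-vol1-claim and demo-snap-vol-claim are placeholders, and the exact manifests may differ between releases), a snapshot is requested through the external-storage VolumeSnapshot CRD and a clone is requested through a new PVC that references that snapshot:
apiVersion: volumesnapshot.external-storage.k8s.io/v1
kind: VolumeSnapshot
metadata:
  name: snapshot-demo
  namespace: default
spec:
  # PVC backed by the cStor storage class, e.g. openebs-cstor-disk
  persistentVolumeClaimName: demo-vol1-claim
---
# Clone: a new PVC provisioned from the snapshot via the snapshot-promoter class
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: demo-snap-vol-claim
  namespace: default
  annotations:
    snapshot.alpha.kubernetes.io/snapshot: snapshot-demo
spec:
  storageClassName: openebs-snapshot-promoter
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 5Gi
This assumes the snapshot-controller and snapshot-provisioner that ship with the 0.8 operator are already running in the cluster.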

Related

Error installing postgis for YugabyteDB in Ubuntu 22.04

[Question posted by a user on YugabyteDB Community Slack]
I tried to install the postgis plugin by following the directions in the docs [YB version 2.14.0.0, Ubuntu 22.04]. What am I missing?
yugabyte=# CREATE EXTENSION postgis;
WARNING: AbortSubTransaction while in DEFAULT state
WARNING: AbortSubTransaction while in ABORT state
WARNING: AbortSubTransaction while in ABORT state
WARNING: AbortSubTransaction while in ABORT state
ERROR: Illegal state: Set active sub transaction 2, when not transaction is running
ERROR: Illegal state: Rollback sub transaction 2, when not transaction is running
ERROR: Illegal state: Rollback sub transaction 2, when not transaction is running
ERROR: Illegal state: Rollback sub transaction 2, when not transaction is running
ERROR: Illegal state: Rollback sub transaction 2, when not transaction is running
PANIC: ERRORDATA_STACK_SIZE exceeded
server closed the connection unexpectedly
This probably means the server terminated abnormally
before or while processing the request.
The connection to the server was lost. Attempting reset: Succeeded.
Found a relevant github issue here https://github.com/yugabyte/yugabyte-db/issues/13359
Here are the steps that were successful for another user:
1. Disabled pgaudit by removing the check from the extension SQL: sed -ie '/-- Check that pgaudit is disabled or not installed/,+18d' /home/yugabyte/postgres/share/extension/postgis--3.2.1.sql
2. Downgraded Ubuntu from 22.04 to 20.04 and then to 18.04, and repeated step 1.
3. Ubuntu 18.04 provides GLIBC_2.27.
4. Reran postinstall.sh; otherwise it does not work.
It works now and was tested with a couple of GIS data types.
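After those steps, a quick sanity check from ysqlsh might look like the following (the coordinates are arbitrary example values):
CREATE EXTENSION postgis;
SELECT postgis_full_version();                    -- should report the PostGIS version without crashing the backend
SELECT ST_AsText(ST_MakePoint(-71.06, 42.36));    -- expect POINT(-71.06 42.36)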

K8s Error: ImagePullBackOff || read: connection refused

Can you please assist: when deploying we are getting ImagePullBackOff for our pods.
Running kubectl get pod <pod-name> -n <namespace> -o yaml I get the error below.
containerStatuses:
- image: mycontainer-registry.io/company/my-app:1.0.0-integration-62c7e30532bd430477731a01a962372166fd5644
imageID: ""
lastState: {}
name: dmd-base
ready: false
restartCount: 0
started: false
state:
waiting:
message: Back-off pulling image "mycontainer-registry.io/company/my-app:1.0.0-integration-62c7e30532bd430477731a01a962372166fd5644"
reason: ImagePullBackOff
hostIP: x.x.x.53
phase: Pending
podIP: x.x.x.237
And running kubectl describe pod <pod-name> -n <namespace> I get the error information below.
Normal Scheduled 85m default-scheduler Successfully assigned dmd-int/app-app-base-5b4b75756c-lrcp6 to aks-agentpool-35064155-vmss00000a
Warning Failed 85m kubelet Failed to pull image "mycontainer-registry.io/company/my-app:1.0.0-integration-62c7e30532bd430477731a01a962372166fd5644":
[rpc error: code = Unknown desc = failed to pull and unpack image "mycontainer-registry.io/company/my-app:1.0.0-integration-62c7e30532bd430477731a01a962372166fd5644":
failed to resolve reference "mycontainer-registry.io/commpany/my-app:1.0.0-integration-62c7e30532bd430477731a01a962372166fd5644":
failed to do request: Head "https://mycontainer-registry.azurecr.io/v2/company/my-app/manifests/1.0.0-integration-62c7e30532bd430477731a01a962372166fd5644":
dial tcp: lookup mycontainer-registry.azurecr.io on [::1]:53: read udp [::1]:56109->[::1]:53: read: connection refused,
rpc error: code = Unknown desc = failed to pull and unpack image "mycontainer-registry.io/company/my-app:1.0.0-integration-62c7e30532bd430477731a01a962372166fd5644":
failed to resolve reference "mycontainer-registry.io/company/my-app:1.0.0-integration-62c7e30532bd430477731a01a962372166fd5644":
failed to do request: Head "https://mycontainer-registry.io/v2/company/my-app/manifests/1.0.0-integration-62c7e30532bd430477731a01a962372166fd5644":
dial tcp: lookup mycontainer-registry.io on [::1]:53: read udp [::1]:60759->[::1]:53: read: connection refused]
From the describe output I can see the issue is connectivity, but I can't tell where the problem lies. We run our apps in a Kubernetes cluster on Azure.
If anyone has come across this issue, please assist; the application has been running successfully for the past months and we only hit this issue this morning.
There is a known Azure outage affecting multiple regions today: a DNS issue that also affects image pulls.
See https://status.azure.com/en-us/status
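If you want to confirm that the failure is DNS resolution on the node (as the [::1]:53 lookups in the log suggest) rather than a registry or credential problem, one rough check is to resolve the registry name from a debug pod attached to the affected node; the node and registry names below are taken from the logs above and the busybox image tag is just an example:
kubectl debug node/aks-agentpool-35064155-vmss00000a -it --image=busybox:1.36 -- nslookup mycontainer-registry.azurecr.io
If the lookup fails there too, the problem is on the node/Azure DNS side rather than with the image or pull secrets.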

Cassandra: nodetool reporting error: Failed to connect to IP address [duplicate]

Cassandra nodetool throws an error after updating OpenJDK
nodetool status
nodetool: Failed to connect to '127.0.0.1:7199' - URISyntaxException: 'Malformed IPv6 address at index 7: rmi://[127.0.0.1]:7199'.
This also affects the current official Docker Hub image (https://hub.docker.com/_/cassandra), version 3.11.12.
How can I fix this error?
There seems to be an issue with the "improved" IPv6 address parsing in the latest JDK update.
One workaround is to use the IPv6 notation of localhost:
nodetool -h ::FFFF:127.0.0.1 status
You can upgrade to Apache Cassandra 3.11.13 or use this command:
nodetool -Dcom.sun.jndi.rmiURLParsing=legacy status
Another way is to add -Dcom.sun.jndi.rmiURLParsing=legacy to the JAVA_TOOL_OPTIONS environment variable.
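For the official Docker image mentioned above, one way to apply that last workaround is to set the variable on the container so that nodetool invoked inside it picks it up; a minimal sketch (the container name is arbitrary):
docker run -d --name cassandra \
  -e JAVA_TOOL_OPTIONS="-Dcom.sun.jndi.rmiURLParsing=legacy" \
  cassandra:3.11.12
docker exec cassandra nodetool status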

AKS nodes failed provisioning

So I have an AKS cluster in a DEV environment which was working fine. Today I noticed that some pods being removed/uninstalled via helm were stuck in the Terminating state.
I found out that none of the 3 nodes were ready. When I stopped the cluster and started it again, the VMs failed to create in the VMSS with the associated message:
VM has reported a failure when processing extension 'vmssCSE'. Error message: "Enable failed: failed to execute command: command terminated with exit status=50
From what I have found, it looks like the VMs in the scale set are missing outbound internet connectivity; however, the associated NSG has only the default rules.
When inspecting the VMSS status, it says the following:
VM has reported a failure when processing extension 'vmssCSE'. Error message: "Enable failed: failed to execute command: command terminated with exit status=50 [stdout] [stderr] nc: connect to mcr.microsoft.com port 443 (tcp) failed: Connection timed out Command exited with non-zero status 1 0.00user 0.00system 2:10.07elapsed 0%CPU (0avgtext+0avgdata 2360maxresident)k 0inputs+8outputs (0major+113minor)pagefaults 0swaps " More information on troubleshooting is available at https://aka.ms/VMExtensionCSELinuxTroubleshoot
This troubleshooting doesn't seem to be helpful as it states:
When restricting egress traffic from an AKS cluster, there are required and optional recommended outbound ports / network rules and FQDN / application rules for AKS. If your settings are in conflict with any of these rules, certain kubectl commands won't work correctly. You may also see errors when creating an AKS cluster.
Verify that your settings aren't conflicting with any of the required or optional recommended outbound ports / network rules and FQDN / application rules.
But the default rules have not changed, so I'm lost at this point.
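One way to confirm the suspected egress problem is to run the same connectivity test the vmssCSE extension is failing on directly on one of the scale-set instances, for example with run-command; the resource group, VMSS name and instance id below are placeholders:
az vmss run-command invoke \
  --resource-group <node-resource-group> \
  --name <vmss-name> \
  --instance-id 0 \
  --command-id RunShellScript \
  --scripts "nc -vz -w 5 mcr.microsoft.com 443"
If this times out as well, the blockage is on the outbound path (route table, firewall, or Azure outbound configuration) rather than in the NSG defaults.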

opscenter Disks stats not available

Hi everybody, I'm using OpsCenter 5.2.0 and the agent is connected to OpsCenter,
but there are no disk stats in the Dashboard. I have this error in the datastax-agent log:
ERROR [os-metrics-7] 2015-08-23 21:00:37,095 Short os-stats collector failed: Process failed: df --print-type --no-sync --block-size=1G --local
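A first step, assuming shell access to the node running the agent, is to run the same command the collector uses and see whether the local df accepts those GNU coreutils options (the failing command is taken verbatim from the log above):
df --print-type --no-sync --block-size=1G --local
If df rejects one of the flags or is not the GNU version, that explains why the short os-stats collector fails and the Dashboard shows no disk data.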