XenServer error after reboot (xenopsd internal error) - linux

How do I fix this? The VM won't boot anymore.
Jan 24, 2014 10:03:29 AM Error: Starting VM 'CentOS 6 (64-bit)' -
Internal error: xenopsd internal error:
VM = 182361af-d10a-d97b-3a65-346d9cec1bcb; domid = 133;
Bootloader.Bad_error Traceback (most recent call last):
File "/usr/bin/pygrub", line 895, in ?
part_offs = get_partition_offsets(file)
File "/usr/bin/pygrub",
line 105, in get_partition_offsets
image_type = identify_disk_image(file)
File "/usr/bin/pygrub", line 49, in identify_disk_image
fd = os.open(file, os.O_RDONLY)
OSError: [Errno 2] No such file or directory:
'/dev/sm/backend/94b422b6-3e31-88fb-bc55-99b33de9d89a/36bce863-ba6d-4792-b29d-dc6211bd5e8c'

Your server was probably shut down improperly, so the partition was mounted read-only. You have to unplug your PBD, run a check, and plug it back in read-write.
That did it for me.
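For reference, a minimal xe CLI sketch of that unplug/check/plug cycle (the SR UUID comes from the /dev/sm/backend/... path in the error above; the PBD UUID is whatever pbd-list returns for your host):
# the first UUID in the /dev/sm/backend/... path is the SR UUID
xe pbd-list sr-uuid=94b422b6-3e31-88fb-bc55-99b33de9d89a
# unplug, run your filesystem check on the underlying device, then plug back in
xe pbd-unplug uuid=<pbd-uuid>
xe pbd-plug uuid=<pbd-uuid>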

Looks like the VDI of the VM is either corrupted or deleted. In XenCenter, click on the VM and go to the respective storage (local or shared) to check whether the VDI still exists. You may have to re-create the disk.
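You can also check this from the host CLI; a quick sketch, using the VDI UUID from the error message (the second UUID in the /dev/sm/backend/... path):
# if this prints nothing, the VDI record is gone and the disk must be re-created
xe vdi-list uuid=36bce863-ba6d-4792-b29d-dc6211bd5e8c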

I solved this same problem, where I was unable to get my VDIs to mount in any VM, and booting the VMs failed with the "No such file or directory: /dev/sm/backend" error you're getting.
What fixed it was taking a snapshot of each VM, creating a new VM from the snapshot, and then deleting the old VM.
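Roughly, the xe equivalent would be (a sketch; names are illustrative, and the new-VM step is easiest in XenCenter's New VM wizard with the snapshot as the template):
# take a snapshot of the broken VM
xe vm-snapshot uuid=<vm-uuid> new-name-label=rescue-snapshot
# create a new VM from the snapshot (e.g. in XenCenter), verify it boots,
# then remove the old VM
xe vm-uninstall uuid=<old-vm-uuid> force=true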

Related

Azure bastion failing to connect to VM

I am trying to connect to a Linux VM with a network bastion in Azure. I am running the following command.
az network bastion ssh --name "<bastion-host>" --resource-group "<resource-group>" --target-resource-id "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Compute/virtualMachines/<vm-name>" --auth-type password --username azureuser
And I get the following error in the Azure CLI:
Exception in thread Thread-1 (_start_tunnel):
Traceback (most recent call last):
File "threading.py", line 1009, in _bootstrap_inner
File "threading.py", line 946, in run
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/command_modules/network/custom.py", line 8482, in _start_tunnel
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/command_modules/network/tunnel.py", line 184, in start_server
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/command_modules/network/tunnel.py", line 117, in _listen
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/command_modules/network/tunnel.py", line 104, in _get_auth_token
msrestazure.azure_exceptions.CloudError: Unexpected internal error
Terminate batch job (Y/N)? y
I contacted the Microsoft support team about this issue; it seems the network bastion feature is still in preview and this is an internal error. The response from the Microsoft team was:
"Due to an improper cleanup of closed connections, this caused newer connections to fail"

Problem running odoo-bin on MacOS (ValueError: current limit exceeds maximum limit)

I am unable to get Odoo 14 running on my macOS machine. Some research into the following error suggests that I can manually configure the memory limits, which may resolve the issue, but I cannot find the relevant config files on my machine.
I've checked and reinstalled all of the requirements and I can't find much information to point me in the right direction.
(venv) kilgow@wmbp odoo-dev % python3 odoo/odoo-bin
2021-09-18 15:56:53,295 1931 INFO ? odoo: Odoo version 14.0
2021-09-18 15:56:53,295 1931 INFO ? odoo: addons paths: ['/Users/kilgow/Desktop/odoo-dev/odoo/odoo/addons', '/Users/kilgow/Library/Application Support/Odoo/addons/14.0', '/Users/kilgow/Desktop/odoo-dev/odoo/addons']
2021-09-18 15:56:53,295 1931 INFO ? odoo: database: default@default:default
2021-09-18 15:56:53,351 1931 INFO ? odoo.addons.base.models.ir_actions_report: You need Wkhtmltopdf to print a pdf version of the reports.
Traceback (most recent call last):
File "/Users/kilgow/Desktop/odoo-dev/odoo/odoo-bin", line 8, in <module>
odoo.cli.main()
File "/Users/kilgow/Desktop/odoo-dev/odoo/odoo/cli/command.py", line 61, in main
o.run(args)
File "/Users/kilgow/Desktop/odoo-dev/odoo/odoo/cli/server.py", line 178, in run
main(args)
File "/Users/kilgow/Desktop/odoo-dev/odoo/odoo/cli/server.py", line 172, in main
rc = odoo.service.server.start(preload=preload, stop=stop)
File "/Users/kilgow/Desktop/odoo-dev/odoo/odoo/service/server.py", line 1298, in start
rc = server.run(preload, stop)
File "/Users/kilgow/Desktop/odoo-dev/odoo/odoo/service/server.py", line 510, in run
self.start(stop=stop)
File "/Users/kilgow/Desktop/odoo-dev/odoo/odoo/service/server.py", line 452, in start
set_limit_memory_hard()
File "/Users/kilgow/Desktop/odoo-dev/odoo/odoo/service/server.py", line 83, in set_limit_memory_hard
resource.setrlimit(rlimit, (config['limit_memory_hard'], hard))
ValueError: current limit exceeds maximum limit
(venv) kilgow@wmbp odoo-dev % python3 odoo/odoo-bin
You should check out the documentation for more info; the easy way is to add an extra argument to your run command, like below.
python3 odoo-bin --addons-path=addons -d mydb --limit-memory-hard 0
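If you don't want to pass the flag every time, the same setting should also work from an Odoo config file (a sketch; odoo.conf is an illustrative filename, passed with -c, and 0 disables the hard limit):
[options]
limit_memory_hard = 0
Then run it as:
python3 odoo/odoo-bin -c odoo.conf --addons-path=addons -d mydb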
I believe this could be due to the machine using an M1 chip. Manually increasing the memory limits did not resolve the problem.
I’ve managed to work around the issue by running Odoo and Postgres in Docker containers instead.
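For reference, a minimal sketch of that Docker setup, following the pattern documented for the official odoo image (container names and credentials are illustrative):
docker run -d --name db -e POSTGRES_USER=odoo -e POSTGRES_PASSWORD=odoo -e POSTGRES_DB=postgres postgres:13
docker run -d --name odoo14 -p 8069:8069 --link db:db odoo:14
Odoo then serves on http://localhost:8069, and the container's memory limits sidestep the macOS rlimit issue.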

apache spark "Py4JError: Answer from Java side is empty"

I get this error every time.
I am using Sparkling Water.
My conf file:
spark.driver.memory 65g
spark.python.worker.memory 65g
spark.master local[*]
The amount of data is about 5 GB.
There is no other information about this error.
Does anybody know why it happens? Thank you!
***"ERROR:py4j.java_gateway:Error while sending or receiving.
Traceback (most recent call last):
File "/data/analytics/Spark1.6.1/python/lib/py4j-0.9-src.zip/py4j/java_gateway.py", line 746, in send_command
raise Py4JError("Answer from Java side is empty")
Py4JError: Answer from Java side is empty
ERROR:py4j.java_gateway:An error occurred while trying to connect to the Java server
Traceback (most recent call last):
File "/data/analytics/Spark1.6.1/python/lib/py4j-0.9-src.zip/py4j/java_gateway.py", line 690, in start
self.socket.connect((self.address, self.port))
File "/usr/local/anaconda/lib/python2.7/socket.py", line 228, in meth
return getattr(self._sock,name)(*args)
error: [Errno 111] Connection refused
ERROR:py4j.java_gateway:An error occurred while trying to connect to the Java server
Traceback (most recent call last):
File "/data/analytics/Spark1.6.1/python/lib/py4j-0.9-src.zip/py4j/java_gateway.py", line 690, in start
self.socket.connect((self.address, self.port))
File "/usr/local/anaconda/lib/python2.7/socket.py", line 228, in meth
return getattr(self._sock,name)(*args)
error: [Errno 111] Connection refused
ERROR:py4j.java_gateway:An error occurred while trying to connect to the Java server
Traceback (most recent call last):
File "/data/analytics/Spark1.6.1/python/lib/py4j-0.9-src.zip/py4j/java_gateway.py", line 690, in start
self.socket.connect((self.address, self.port))
File "/usr/local/anaconda/lib/python2.7/socket.py", line 228, in meth
return getattr(self._sock,name)(*args)
error: [Errno 111] Connection refused
Have you tried setting spark.executor.memory and spark.driver.memory in your Spark configuration file?
See https://stackoverflow.com/a/22742982/5453184 for more info.
Usually, you'll see this error when the Java process gets silently killed by the OOM Killer.
The OOM Killer (Out of Memory Killer) is a Linux process that kicks in when the system becomes critically low on memory. It selects a process based on its "badness" score and kills it to reclaim memory.
Read more on OOM Killer here.
Increasing spark.executor.memory and/or spark.driver.memory will only make things worse in this case, i.e. you may want to do the opposite (see the sketch after the list below)!
Other options would be to:
increase the number of partitions if you're working with very big data sources;
increase the number of worker nodes;
add more physical memory to worker/driver nodes;
Or, if you're running your driver/workers using docker:
increase docker memory limit;
set --oom-kill-disable on your containers, but make sure you understand possible consequences!
Read more on --oom-kill-disable and other docker memory settings here.
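As a concrete sketch of "doing the opposite" with the conf above (values are illustrative; size them to leave the OS enough headroom instead of claiming 65g on one box, and your_app.py stands in for your script):
spark-submit --master "local[*]" --driver-memory 8g --conf spark.python.worker.memory=1g your_app.py
Note that in local[*] mode the driver and executors share one JVM, so --driver-memory is the setting that matters.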
Another point to note if you are on WSL 2 using PySpark: make sure your WSL 2 config file allows enough memory.
# Settings apply across all Linux distros running on WSL 2
[wsl2]
# Limits VM memory; this can be set as a whole number using GB or MB
memory=12GB # This was originally set to 3GB, which caused failures because spark.executor.memory and spark.driver.memory could not exceed 3GB no matter how high I set them.
# Sets the VM to use eight virtual processors
processors=8
For reference, your .wslconfig file should be located in C:\Users\USERNAME.

OpsCenter Installation error: DNSLookupError: DNS lookup failed

I downloaded the tar.gz file and I'm trying to install OpsCenter. I am getting the following error.
ERROR:
Trying to download https://opscenter.datastax.com:443/definitions/5.0.1/version.md5
resulted in following error:
Traceback (most recent call last):
File "build/lib/python2.6/site-packages/opscenterd/Definitions.py", line 133, in getNewHash
DNSLookupError: DNS lookup failed: address 'opscenter.datastax.com' not found: [Errno -2] Name or service not known.
Do I need to have internet access to install OpsCenter?
This error should not prevent OpsCenter from working correctly; you just won't have update information (for cases when you're running outdated versions of Cassandra or OpsCenter).
OpsCenter does not require an internet connection.
You should be able to configure OpsCenter not to fetch the updated definition files. Refer to the OpsCenter configuration properties and look for the [definitions] auto_update property, which you can set to False.
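For example, in opscenterd.conf (a sketch; for a tarball install the file is typically under the conf/ directory):
[definitions]
auto_update = False
Restart opscenterd after changing the file so it picks up the new setting.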

Cqlsh error on cassandra 2.0.1

We recently upgraded to Cassandra 2.0.1 with cqlsh 4.0.1. I am seeing timeout/broken-pipe errors while using the cqlsh client. Please see the error trace below. I have verified that the cluster is up using nodetool, and I am able to read/write using MapReduce. Please advise.
Thanks,
Prateek
Traceback (most recent call last):
File "./bin/cqlsh", line 897, in perform_statement_untraced
self.cursor.execute(statement, decoder=decoder)
File "./bin/../lib/cql-internal-only-1.4.0.zip/cql-1.4.0/cql/cursor.py", line 80, in execute
response = self.get_response(prepared_q, cl)
File "./bin/../lib/cql-internal-only-1.4.0.zip/cql-1.4.0/cql/thrifteries.py", line 77, in get_response
return self.handle_cql_execution_errors(doquery, compressed_q, compress, cl)
File "./bin/../lib/cql-internal-only-1.4.0.zip/cql-1.4.0/cql/thrifteries.py", line 96, in handle_cql_execution_errors
return executor(*args, **kwargs)
File "./bin/../lib/cql-internal-only-1.4.0.zip/cql-1.4.0/cql/cassandra/Cassandra.py", line 1782, in execute_cql3_query
self.send_execute_cql3_query(query, compression, consistency)
File "./bin/../lib/cql-internal-only-1.4.0.zip/cql-1.4.0/cql/cassandra/Cassandra.py", line 1793, in send_execute_cql3_query
self._oprot.trans.flush()
File "./bin/../lib/thrift-python-internal-only-0.9.1.zip/thrift/transport/TTransport.py", line 292, in flush
self.__trans.write(buf)
File "./bin/../lib/thrift-python-internal-only-0.9.1.zip/thrift/transport/TSocket.py", line 128, in write
plus = self.handle.send(buff)
error: [Errno 32] Broken pipe
If you have an open cqlsh session, it will always give you Errno 32 if the Cassandra instance that it connected to was stopped or even just restarted. You will have to restart cqlsh in order to re-establish a connection to the server.
If you see this problem without having stopped or restarted a Cassandra server, then please supply additional details about the conditions that led up to this error.
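For completeness, a quick sketch of re-establishing the session once the node is back (<host> is your node's address; cqlsh 4.0.1 still talks Thrift, which listens on 9160 by default):
# confirm the node is up and listening, then start a fresh cqlsh session
nodetool status
cqlsh <host> 9160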
