FIO io_queue_init: Resource temporarily unavailable when numjobs is greater than 64

I have an issue running FIO commands when numjobs is greater than 64. The error is "io_queue_init Resource temporarily unavailable". Is there a limitation on numjobs?
Thanks
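fio itself does not cap numjobs at 64. If you are using the libaio ioengine, each job calls io_queue_init()/io_setup(), and the kernel limits the total number of reservable async I/O slots system-wide via the fs.aio-max-nr sysctl (commonly 65536 by default), so many jobs times a large iodepth can exhaust it and io_setup() fails with EAGAIN ("Resource temporarily unavailable"). A minimal sketch of checking and raising the limit, assuming a Linux host and that this sysctl really is the bottleneck; 1048576 is only an example value, size it to roughly numjobs * iodepth:
cat /proc/sys/fs/aio-max-nr                                            # current system-wide AIO limit
sudo sysctl -w fs.aio-max-nr=1048576                                   # raise it for the running system
echo "fs.aio-max-nr = 1048576" | sudo tee /etc/sysctl.d/90-aio.conf    # persist across reboots
sudo sysctl --system                                                   # reload sysctl settings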

Related

Terraform mongodbatlas disk size scale down unexpected

I have backend "remote" configured with the mongodb/mongodbatlas provider. The MongoDB Atlas cluster is configured with auto_scaling_disk_gb_enabled = true, and during the first deploy disk_size_gb = 15 was specified. Since auto scaling is enabled, after a while the cluster grew to a bigger size, 21 GB. The next application of the Terraform script via app.terraform.io showed a plan for this cluster that included changing disk_size_gb from 21 back to 15. Terraform version used: 1.1.7
Since I'm quite new to Terraform, it's not clear to me whether it is expected behavior to attempt scaling the disk size back down. I noticed this step took a long time (maybe 40 minutes), although it completed successfully. Is it possible to skip scaling the disk size down while still keeping disk_size_gb specified?
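Not an authoritative answer, but the usual way to stop Terraform from planning that revert is to tell it to ignore drift on that attribute with a lifecycle block. A minimal sketch, assuming a cluster resource roughly like the one below (the resource name "this" is just a placeholder) and that you are happy for Atlas auto-scaling to own disk_size_gb from now on:
resource "mongodbatlas_cluster" "this" {
  # ... your existing cluster settings ...
  auto_scaling_disk_gb_enabled = true
  disk_size_gb                 = 15

  lifecycle {
    # Don't plan a change when Atlas auto-scaling has already grown the disk
    ignore_changes = [disk_size_gb]
  }
}
Another option may simply be to drop disk_size_gb from the configuration once auto scaling is enabled, so Terraform never tries to reconcile it with the value Atlas chose.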

How to let Ubuntu see that I increased the disk size in the Azure portal

I have an Ubuntu 18 VM on Azure with one 30 GB SSD disk. Unfortunately, the free disk space reached zero and the mysql service went down. I stopped the VM and increased the disk size in the Azure portal to 60 GB, but when I start the VM again, Ubuntu keeps showing 100% use of 30 GB; it did not notice the new space. Is there any command I have to run on the Ubuntu server in order to see the new size?
I just found that cfdisk can solve the whole issue. However, if you get errors while running the cfdisk command, run parted and type print; you will get an option to Fix or Ignore the disk space. Type Fix and then run cfdisk again and it will work perfectly. Using cfdisk, you can resize, delete, and create partitions as required.
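For completeness, growing the partition with cfdisk/parted is only half of it; the filesystem usually has to be grown as well. A sketch of the common path, assuming the root filesystem is ext4 on /dev/sda1 (check yours with lsblk -f first; the device names are assumptions):
sudo apt-get install -y cloud-guest-utils   # provides growpart
sudo growpart /dev/sda 1                    # grow partition 1 to fill the resized disk
sudo resize2fs /dev/sda1                    # grow the ext4 filesystem into the new space
df -h /                                     # verify the new size is visible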

Azure Windows VM Monitoring "% Committed Bytes In Use" issue

Has anyone configured percentage memory usage monitoring for an Azure Windows VM? I have tried some of the solutions and configured "% Committed Bytes In Use", but this is not the actual RAM usage of the VM; it is the ratio of \Memory\Committed Bytes to \Memory\Commit Limit. This is not what I need, because even when my VM memory (RAM) is at 90%, "% Committed Bytes In Use" shows only around 60 to 65%, and I set an alert rule at 70%.
Can anyone tell me how to get the actual memory (RAM) usage, so that I can be alerted when my VM memory goes above 70%, rather than alerting on "% Committed Bytes In Use"?
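"% Committed Bytes In Use" is indeed committed bytes over the commit limit (which includes the pagefile), so it will never track physical RAM. One option is to alert on the VM's "Available Memory Bytes" platform metric instead, if it is available for your VM and CLI version. A sketch with the Azure CLI, where MyResourceGroup and MyVm are placeholders and the 1 GiB threshold is just an example you would derive from your 70% rule:
# Confirm the metric name first: az monitor metrics list-definitions --resource "$VM_ID"
VM_ID=$(az vm show -g MyResourceGroup -n MyVm --query id -o tsv)
az monitor metrics alert create \
  --name vm-low-memory \
  --resource-group MyResourceGroup \
  --scopes "$VM_ID" \
  --condition "avg Available Memory Bytes < 1073741824" \
  --description "Fire when available RAM drops below about 1 GiB"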

Why can't I install Cassandra 3.0?

I'm trying to install Cassandra 3.0, but when I try to install it on my PC, it gives me this error:
[screenshot: Cassandra 3.0 install error]
It says:
WARNING! Powershell script execution unavailable.
Please use 'powershell Set-ExecutionPolicy Unrestricted'
on this user-account to run cassandra with fully featured
functionality on this platform.
Starting with legacy startup options
Starting Cassandra Server
Error occurred during initialization of VM
Could not reserve enough space for 2097152KB object heap
Can anyone help me with this error? Thanks!
For security reasons, not all scripts are allowed to run in PowerShell. You should allow PowerShell to execute that script by running this command:
Set-ExecutionPolicy Unrestricted
Could not reserve enough space for 2097152KB object heap
This means the Java VM for Cassandra tried to allocate around 2 GB of heap memory from the operating system, but your operating system was not able to provide that much.
You either have to run C* with a lower heap size setting, free more RAM on the machine, add RAM hardware, or enable/increase swap. In the latter case, beware that C* will perform badly if the OS has to swap memory to disk.
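If you just want Cassandra to start on a small machine, the simplest of those options is usually pinning a smaller heap. A sketch, assuming a stock Cassandra 3.0 layout where heap sizing lives in conf/cassandra-env.sh (conf/cassandra-env.ps1 on Windows) and that 1 GB is enough for local testing; both values need to be set together:
# conf/cassandra-env.sh -- uncomment and set BOTH lines:
MAX_HEAP_SIZE="1G"
HEAP_NEWSIZE="256M"
# On Windows the equivalent lines are in conf/cassandra-env.ps1:
#   $env:MAX_HEAP_SIZE="1G"
#   $env:HEAP_NEWSIZE="256M"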
I had exactly the same problem on my Windows 10 machine; I solved it by running Cassandra in an admin command prompt.

Many Spark workers exit when reading data from Cassandra 3.7

My Spark is running on Java 1.7, but my Cassandra is running on Java 1.8. When Spark reads data from Cassandra, at the beginning a lot of workers exit with the following error message:
OpenJDK 64-Bit Server VM warning: INFO: os::commit_memory(0x00007f338d000000, 21474836480, 0) failed; error='Cannot allocate memory' (errno=12)
#
# There is insufficient memory for the Java Runtime Environment to continue.
# Native memory allocation (malloc) failed to allocate 21474836480 bytes for committing reserved memory.
# An error report file with more information is saved as:
# /tmp/jvm-18047/hs_error.log
But the remaining workers were still running well, and in the end the job finished successfully. So I'm wondering whether I should use the same JDK version for both of them; however, since they only communicate over sockets, it should not be a JDK version problem.
This looks much more like you are just causing the Spark Executor JVM to overload. It's trying to get 21 GB but the OS says there isn't that much RAM left. You could always try reducing the allowed heap for executors?
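Building on that suggestion, the executor heap is usually capped at submit time. A sketch, where 8g, com.example.MyJob and my-job.jar are placeholders to be sized and named for your own job and the free RAM on your workers:
# Cap each executor's JVM heap so it fits in the workers' free RAM
spark-submit --executor-memory 8g --class com.example.MyJob my-job.jar
# Or set it once in conf/spark-defaults.conf:
#   spark.executor.memory  8g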
