I am trying to connect to a Linux VM through an Azure Bastion host. I am running the following command:
az network bastion ssh --name "<bastion-host>" --resource-group "<resource-group>" --target-resource-id "/subscriptions/<subscription-id>/resourceGroups/<resource-group>/providers/Microsoft.Compute/virtualMachines/<vm-name>" --auth-type password --username azureuser
And I am getting the following error in the Azure CLI:
Exception in thread Thread-1 (_start_tunnel):
Traceback (most recent call last):
File "threading.py", line 1009, in _bootstrap_inner
File "threading.py", line 946, in run
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/command_modules/network/custom.py", line 8482, in _start_tunnel
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/command_modules/network/tunnel.py", line 184, in start_server
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/command_modules/network/tunnel.py", line 117, in _listen
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/command_modules/network/tunnel.py", line 104, in _get_auth_token
msrestazure.azure_exceptions.CloudError: Unexpected internal error
Terminate batch job (Y/N)? y
I have contacted the Microsoft support team about this issue, and it seems the network Bastion feature is still in preview and this is an internal error. The response from the Microsoft team was:
"Due to an improper cleanup of closed connections, this caused newer connections to fail"
Related
I have set up LibreTranslate on my local system (Ubuntu Focal Fossa) by following the steps described at https://github.com/LibreTranslate/LibreTranslate, and scaled the app with Gunicorn and Nginx as described in the same tutorial. I have created LibreTranslate as a systemd service unit; below is the ExecStart command from my service file.
ExecStart=/home/support/LibreTranslate/env/bin/gunicorn --workers 3 --log-level 'error' --error-logfile /home/support/LibreTranslate/Logs/gunicorn_nohup.log --bind unix:libretranslate.sock -m 007 wsgi:app
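For reference, a minimal unit file around that ExecStart might look like the following (the Description, User, WorkingDirectory, and Restart values here are illustrative placeholders, not copied from the actual file; only the ExecStart line above is verbatim):
[Unit]
Description=LibreTranslate served by Gunicorn
After=network.target

[Service]
# Placeholder user and paths; adjust to match the actual install
User=support
WorkingDirectory=/home/support/LibreTranslate
ExecStart=/home/support/LibreTranslate/env/bin/gunicorn --workers 3 --log-level 'error' --error-logfile /home/support/LibreTranslate/Logs/gunicorn_nohup.log --bind unix:libretranslate.sock -m 007 wsgi:app
Restart=on-failure

[Install]
WantedBy=multi-user.target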
I started Gunicorn with 3 workers. However, after running for some time, it started returning 500 Internal Server Error. Below is the log generated by Gunicorn:
[2022-05-10 13:44:03 +0100] [306482] [ERROR] Error handling request /detect
Traceback (most recent call last):
File "/home/support/LibreTranslate/env/lib/python3.8/site-packages/gunicorn/workers/sync.py", line 136, in handle
self.handle_request(listener, req, client, addr)
File "/home/support/LibreTranslate/env/lib/python3.8/site-packages/gunicorn/workers/sync.py", line 179, in handle_request
respiter = self.wsgi(environ, resp.start_response)
File "/home/support/LibreTranslate/wsgi.py", line 14, in app
instance = main()
File "/home/support/LibreTranslate/app/main.py", line 121, in main
app = create_app(args)
File "/home/support/LibreTranslate/app/app.py", line 113, in create_app
remove_translated_files.setup(get_upload_dir())
File "/home/support/LibreTranslate/app/remove_translated_files.py", line 23, in setup
scheduler.start()
File "/home/support/LibreTranslate/env/lib/python3.8/site-packages/apscheduler/schedulers/background.py", line 38, in start
self._thread.start()
File "/usr/lib/python3.8/threading.py", line 852, in start
_start_new_thread(self._bootstrap, ())
RuntimeError: can't start new thread
Does anyone know why this is happening? And is there any other way to achieve the same thing without facing this issue?
I have raised an issue on the LibreTranslate community forum: https://community.libretranslate.com/t/python-library-of-libretranslate-run-with-gunicorn-and-nginx-not-freeing-up-threads/221
and a related GitHub issue: https://github.com/argosopentech/LibreTranslate-init/issues/10
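For anyone hitting the same thing: "RuntimeError: can't start new thread" means the process could not create another OS thread, typically because a thread/process limit was reached or because threads are leaking and accumulating over time. Before anything else it can be worth checking which limits apply to the service; this is only a diagnostic sketch, and the service name libretranslate is assumed from the setup above:
# Show the systemd task (thread + process) limit and current count for the service
systemctl show -p TasksCurrent -p TasksMax libretranslate
# Count threads per Gunicorn worker (NLWP = number of lightweight processes, i.e. threads)
ps -o pid,nlwp,cmd -C gunicorn
# Per-process limits of one worker (replace <pid> with a worker PID from the previous command)
cat /proc/<pid>/limits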
I ran the following "az container delete" command successfully on an older CLI version. However, when I upgraded to 2.28.0, it failed with an error.
az container delete --subscription xx --resource-group xx --name xx
Errors:
The command failed with an unexpected error. Here is the traceback:
'ContainerGroupsOperations' object has no attribute 'delete'
Traceback (most recent call last):
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\knack/cli.py", line 231, in invoke
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/core/commands/__init__.py", line 657, in execute
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/core/commands/__init__.py", line 720, in _run_jobs_serially
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/core/commands/__init__.py", line 691, in _run_job
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/core/commands/__init__.py", line 328, in __call__
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/core/commands/command_operation.py", line 121, in handler
File "D:\a\1\s\build_scripts\windows\artifacts\cli\Lib\site-packages\azure/cli/command_modules/container/custom.py", line 66, in delete_container
AttributeError: 'ContainerGroupsOperations' object has no attribute 'delete
This is a known issue introduced by Microsoft's update of the Azure CLI to version 2.28.0.
There is no solution other than downgrading at the moment, but Microsoft has acknowledged the issue.
You can track their responses in the official repo:
https://github.com/Azure/azure-cli/issues/19475
Still, there are some workarounds to address it.
Using the REST API:
RESOURCE_GROUP=my-rg
ACI_CONTAINER_NAME=containername
SUBSCRIPTION_ID="xxxxxxxxxxxxxxxxxxxxxxxx"
az rest --method delete \
--uri "/subscriptions/{subscriptionId}/resourceGroups/${RESOURCE_GROUP}/providers/Microsoft.ContainerInstance/containerGroups/${ACI_CONTAINER_NAME}?api-version=2019-12-01" \
--subscription ${SUBSCRIPTION_ID}
Using PowerShell (the ${var.*} references below look like Terraform-style placeholders; substitute your own container group name and resource group):
Remove-AzContainerGroup -Name ${var.resourceName} -ResourceGroupName ${var.resourceGroup} -Confirm:$False
I got the same error in DevOps jobs; a week ago everything was successful. I didn't find any way to change the az version from the UI.
https://developercommunity.visualstudio.com/t/set-up-fixed-az-cli-version-in-my-pipeline/960733
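One hedged workaround for pipelines, assuming the agent is Ubuntu-based, has Python available, and lets you install packages, is to install a pre-2.28.0 CLI into an isolated virtual environment via pip and use that az binary for the affected jobs (the exact version number below is just an example of a pre-2.28.0 release):
# Install a pinned, pre-2.28.0 Azure CLI into its own virtualenv
python3 -m venv azcli-pinned
./azcli-pinned/bin/pip install "azure-cli==2.27.2"
# Use the pinned az from the venv for the affected command
./azcli-pinned/bin/az container delete --subscription xx --resource-group xx --name xx --yes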
I'm trying to write an API, publish it to an Azure Function App, and run it there. In the function I need to call kubectl.
Calling kubectl and reading the configuration works fine on localhost.
But when I publish to the Azure Function App, it returns the error message: "Exception: OSError: [Errno 8] Exec format error: './kubectl'".
I'm creating an HTTP-triggered function in Azure using Python on a Mac, and the Azure service plan is LinuxDynamicPlan. The kubectl I'm using is a Mac binary.
The code that calls kubectl:
deployments = subprocess.check_output(["./kubectl", "get", "deployments", cluster_config_name])
I can successfully run the script on localhost, but not in the Azure Function App.
The error message I get in Azure:
2019-07-09T07:37:38.168 [Error] Executed 'Functions.nc6v3_usage' (Failed, Id=71d76d36-95ab-4bd6-9656-5578141c4c3f)
Result: Failure
Exception: OSError: [Errno 8] Exec format error: './kubectl'
Stack: File "/usr/local/lib/python3.6/site-packages/azure/functions_worker/dispatcher.py", line 300, in _handle__invocation_request
self.__run_sync_func, invocation_id, fi.func, args)
File "/usr/local/lib/python3.6/concurrent/futures/thread.py", line 56, in run
result = self.fn(*self.args, **self.kwargs)
File "/usr/local/lib/python3.6/site-packages/azure/functions_worker/dispatcher.py", line 389, in __run_sync_func
return func(**params)
File "/home/site/wwwroot/nc6v3_usage/__init__.py", line 18, in main
deployments = subprocess.check_output(["./kubectl", "get", "deployments", cluster_config_name])
File "/usr/local/lib/python3.6/subprocess.py", line 356, in check_output
**kwargs).stdout
File "/usr/local/lib/python3.6/subprocess.py", line 423, in run
with Popen(*popenargs, **kwargs) as process:
File "/usr/local/lib/python3.6/subprocess.py", line 729, in __init__
restore_signals, start_new_session)
File "/usr/local/lib/python3.6/subprocess.py", line 1364, in _execute_child
raise child_exception_type(errno_num, err_msg, err_filename)
"The kubectl I'm using is a mac binary."
Please correct me if I didn't understand you properly: are you using a Mac binary on a Linux system in your Azure instance? If so, that simply cannot work; Mac binaries do not run on Linux.
Try installing kubectl on your Azure instance following the instructions in the official Kubernetes documentation, and then provide the full path to the kubectl binary installed for your system in your script. If you follow those instructions it will be /usr/local/bin/kubectl.
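In practice that means using a Linux (amd64) kubectl build rather than the Mac one. A rough sketch of fetching the Linux binary and calling it by absolute path (the download URL follows the official Kubernetes install docs; the install directory is an example):
# Download the latest stable Linux amd64 kubectl, per the official install docs
curl -LO "https://dl.k8s.io/release/$(curl -L -s https://dl.k8s.io/release/stable.txt)/bin/linux/amd64/kubectl"
chmod +x kubectl
sudo mv kubectl /usr/local/bin/kubectl
# Then call it by absolute path from the function code, e.g.:
#   subprocess.check_output(["/usr/local/bin/kubectl", "get", "deployments", cluster_config_name])
/usr/local/bin/kubectl version --client
Note that on a consumption-plan Function App you typically cannot install system-wide; in that case you would ship the Linux binary alongside the function code and call it by that path instead.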
I have a few subscriptions in Azure, with at least 35 resource groups and at least 100 virtual machines in each subscription.
So that is 35 resource groups and 100 VMs per subscription, and I want to delete an Azure extension on every VM.
Currently I am using this script:
#!/bin/bash
now=$(date +"%T")
USER="user"
RESOURCEGROUPLIST="/home/$USER/resourcegroupsdev"
VMLIST="/home/$USER/vmlistdev"
echo "################## DELETE EXTENSION ##################"
echo "Current time : $now"
# For every resource group, delete the extension on every VM in the list
cat "$RESOURCEGROUPLIST" | while read -r LINER
do
    cat "$VMLIST" | while read -r LINE
    do
        az vm extension delete -g "$LINER" --vm-name "$LINE" -n LinuxDiagnostic --verbose
    done
    # Refresh the timestamp so the end-of-group time is actually current
    now=$(date +"%T")
    echo "Current time : $now"
done
Frequently I get this error:
VM 'dev-vm-test-001' has not reported status for VM agent or extensions. Please verify the VM has a running VM agent, and can establish outbound connections to Azure storage.
and sometimes this error:
Error occurred in request., SSLError: ("bad handshake: Error([('SSL routines', 'SSL3_GET_SERVER_CERTIFICATE', 'certificate verify failed')],)",)
Traceback (most recent call last):
File "/usr/bin/azure-cli/lib/python2.7/site-packages/azure/cli/main.py", line 36, in main
cmd_result = APPLICATION.execute(args)
File "/usr/bin/azure-cli/lib/python2.7/site-packages/azure/cli/core/application.py", line 210, in execute
result = expanded_arg.func(params)
File "/usr/bin/azure-cli/lib/python2.7/site-packages/azure/cli/core/commands/__init__.py", line 289, in __call__
return self.handler(*args, **kwargs)
File "/usr/bin/azure-cli/lib/python2.7/site-packages/azure/cli/core/commands/__init__.py", line 498, in _execute_command
raise client_exception
ClientRequestError: Error occurred in request., SSLError: ("bad handshake: Error([('SSL routines', 'SSL3_GET_SERVER_CERTIFICATE', 'certificate verify failed')],)",)
The deletion process takes forever - I mean, there are a lot of errors and no clear output showing on which VMs the extension was deleted.
Does anyone have an idea how to speed up the process of deleting the extension?
For your own routine: since you are doing the looping, not the CLI, it looks like it is your responsibility to print out and keep a list of which VM extension operations errored out and which completed.
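As a sketch of what that bookkeeping could look like inside the same loop as in the question (the log file names here are just examples), record each VM in a success or failure list based on the CLI exit code:
# Log each deletion attempt to a success or failure list based on the az exit code
if az vm extension delete -g "$LINER" --vm-name "$LINE" -n LinuxDiagnostic --verbose; then
    echo "$LINER/$LINE" >> /home/$USER/extension_delete_success.log
else
    echo "$LINER/$LINE" >> /home/$USER/extension_delete_failed.log
fi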
For the SSL handshake I have no ideas. For the storage error: do you have Network Security Groups (or iptables, or something similar) blocking outbound connections? They can interfere with the VM extensions so that they cannot report status, and that essentially leads to this error. You can easily verify this by logging into the portal and checking the VM in question; under the extension properties it should tell you something like "vm agent failed to report status bla-bla-bla".
I would suggest raising this at the Azure CLI 2.0 repo. I don't think there's anything SO users can help you with here.
How can I fix this? I can't boot the VM anymore.
Jan 24, 2014 10:03:29 AM Error: Starting VM 'CentOS 6 (64-bit)' -
Internal error: xenopsd internal error:
VM = 182361af-d10a-d97b-3a65-346d9cec1bcb; domid = 133;
Bootloader.Bad_error Traceback (most recent call last):
File "/usr/bin/pygrub", line 895, in ?
part_offs = get_partition_offsets(file)
File "/usr/bin/pygrub",
line 105, in get_partition_offsets
image_type = identify_disk_image(file)
File "/usr/bin/pygrub", line 49, in identify_disk_image
fd = os.open(file, os.O_RDONLY)
OSError: [Errno 2] No such file or directory:
'/dev/sm/backend/94b422b6-3e31-88fb-bc55-99b33de9d89a/36bce863-ba6d-4792-b29d-dc6211bd5e8c'
Probably your server was shut down improperly and, as a result, the partition was mounted read-only. You have to unplug your PBD, check it, and plug it back in read-write.
That did it for me.
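In case it helps, the unplug/check/re-plug sequence can be done with the xe CLI roughly like this (a sketch only; the SR/PBD UUIDs are placeholders and the filesystem check step depends on your storage type):
# Find the PBD that attaches the affected SR to this host
xe pbd-list sr-uuid=<sr-uuid>
# Detach the SR from the host
xe pbd-unplug uuid=<pbd-uuid>
# Check/repair the underlying storage here (e.g. fsck on the SR's device for a local ext-based SR)
# Re-attach the SR read-write
xe pbd-plug uuid=<pbd-uuid>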
It looks like the VDI of the VM is either corrupted or deleted. From XenCenter, click on the VM and go to the respective storage (local or shared) to check whether the VDI exists. I guess you will have to re-create the disk.
I solved this same problem, where I was unable to get my VDIs to mount in any VM, and booting the VMs failed with the "No such file or directory: /dev/sm/backend" error you're getting.
What worked to fix it was to take a snapshot of each VM, create a new VM from the snapshot, and then delete the old VM.