I have a much larger script, but I'm trying to diagnose why the DNS lookups I do via Resolve-DnsName come back blank. I tried running the basic command:
`Resolve-DnsName -Name www.bing.com -Type CNAME`
Output:

```
www.bing.com : Not enough storage is available to complete this operation
```
I can run the same command locally and get the expected output:

```
Name          Type   TTL   Section  NameHost
www.bing.com  CNAME  7369  Answer   www-www.bing.com.trafficmanager.net
```
Is there something I am missing about having to allocate storage to run this in Azure Automation?
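In case it helps narrow things down, here is a minimal fallback sketch that avoids Resolve-DnsName entirely, assuming the plain .NET DNS APIs behave in the Automation sandbox (an assumption, not a confirmed fix; it returns the canonical name rather than a raw CNAME record):

```powershell
# Fallback sketch: [System.Net.Dns] instead of Resolve-DnsName. Assumes the
# .NET resolver works in the sandbox; returns canonical name, not CNAME data.
$entry = [System.Net.Dns]::GetHostEntry('www.bing.com')
$entry.HostName      # often the CNAME target, e.g. the trafficmanager.net name
$entry.AddressList   # the resolved IP addresses
```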
I've deployed a blank Private DNS Zone ARM template with the standard SOA record set. I now need to add additional record sets to this private DNS zone using Azure CLI commands.
Running the following command:

```
az network dns record-set a add-record -g "MyResourceGroup" -z "MyPrivateDNSZoneName.net" -n "MyRecordSetName" -a "AnIP"
```

fails with this error message:

```
Can not perform requested operation on nested resource. parent resource '' not found.
```
It isn't an access issue, because I can add the record set in the portal. I also get the same error using the PowerShell commands. Is there something wrong with the way these strings are being passed by the CLI?
According to the details of the error, the parent resource (the zone) can't be found. The possible reasons are:
You set the wrong subscription
You set the wrong resource group
You set the wrong DNS zone name
So you need to check the options above to make sure they're all correct.
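One more possibility worth checking (an assumption beyond the list above): if the ARM template deployed the zone as a Microsoft.Network/privateDnsZones resource, the `az network dns` group only operates on public zones, so the parent lookup comes back empty; private zones have their own command group. A sketch with the same placeholder names (`10.0.0.4` is just an example address standing in for "AnIP"):

```
az network private-dns record-set a add-record -g "MyResourceGroup" -z "MyPrivateDNSZoneName.net" -n "MyRecordSetName" -a "10.0.0.4"
```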
I am trying to set the home directory for an FTP server created on an Azure VM (Linux). I have tried many approaches with the help of Google, but I keep hitting errors.

If I use $USER in local_root while modifying the /etc/vsftpd.conf file:

```
ftp> mkdir blob
550 Create directory operation failed.
```

If I don't use $USER:

```
500 OOPS: cannot change directory:/home/ftp
```
I have set the below configuration in the /etc/vsftpd.conf file:

```
listen=NO
listen_ipv6=YES
anonymous_enable=NO
local_enable=YES
write_enable=YES
anon_upload_enable=YES
anon_mkdir_write_enable=YES
dirmessage_enable=YES
use_localtime=YES
xferlog_enable=YES
connect_from_port_20=YES
chroot_local_user=YES
user_sub_token=$USER
local_root=/home/$USER/ftp
allow_writeable_chroot=YES
pasv_enable=YES
pasv_min_port=10000
pasv_max_port=10010
chroot_list_file=/etc/vsftpd.chroot_list
secure_chroot_dir=/var/run/vsftpd/empty
pam_service_name=vsftpd
rsa_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
rsa_private_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
ssl_enable=NO
```
Kindly help me fix this issue with setting the home directory for the FTP server (Azure VM).
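For what it's worth, those two errors often point at the local_root path itself under a stock chroot setup (an assumption about the environment, separate from the accepted fix below): the 500 OOPS usually means the directory doesn't exist, and the 550 usually means the user can't write where they land. A sketch of the layout vsftpd commonly expects, with "azureuser" as a placeholder user:

```
# Sketch only: create the chroot with a writable subdirectory inside it.
# "azureuser" is a placeholder for the real FTP user.
sudo mkdir -p /home/azureuser/ftp/files
sudo chown nobody:nogroup /home/azureuser/ftp             # chroot root: not user-writable
sudo chown azureuser:azureuser /home/azureuser/ftp/files  # writable area for uploads
sudo systemctl restart vsftpd
```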
I got my answer. There are two ways to sort out this issue:
`ftpClient.DataConnectionType = FtpDataConnectionType.AutoActive;` — this works, but it is not recommended, since active mode effectively requires the Azure network configuration to leave traffic wide open.
`ftpClient.DataConnectionType = FtpDataConnectionType.AutoPassive;` — this is the recommended setting for an Azure VM.
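For context, a minimal client-side sketch, assuming the FluentFTP library (the host and credentials are placeholders):

```csharp
using FluentFTP;

// Sketch only: placeholder host and credentials.
var ftpClient = new FtpClient("my-vm-public-ip", "azureuser", "password");
ftpClient.DataConnectionType = FtpDataConnectionType.AutoPassive; // passive mode for Azure
ftpClient.Connect();
ftpClient.CreateDirectory("blob");
ftpClient.Disconnect();
```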
And to make that work, we need to enable pasv_enable in the vsftpd configuration on the Azure VM (via the command line) and also open the passive port range on the Azure VM through the portal:
```
pasv_enable=YES
pasv_min_port=10000
pasv_max_port=10010
```
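If you prefer the CLI to the portal for opening the port range, a sketch of the NSG rule (the resource group and NSG names are placeholders):

```
az network nsg rule create \
    -g "MyResourceGroup" \
    --nsg-name "MyVmNsg" \
    -n "Allow-FTP-Passive" \
    --priority 1001 \
    --direction Inbound \
    --access Allow \
    --protocol Tcp \
    --destination-port-ranges 21 10000-10010
```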
Here is what I found as an explanation in the Microsoft documentation:
A multiregion deployment can use an active-active or active-passive configuration. An active-active configuration distributes requests across multiple active regions. An active-passive configuration keeps warm instances in the secondary region, but doesn't send traffic there unless the primary region fails.
For more explanation, please refer to the link Azure VM Active Passive.
Please suggest a better approach if there is one.
We've got a rogue process running somewhere that executes queries against a test database we have hosted on Azure SQL. I'm trying to find this process so I can kill it. There are a number of app servers and development PCs where it could be hiding, and I haven't been able to track it down by looking at the processes running on these machines by hand.
I can use the Azure Data Studio Profiler extension to get some Extended Events logging from the database. From there, I can see the text of the queries being run, the Application Name, and the ClientProcessID.
Sample data from profiler
I can't seem to use any of this data to find the host name or IP address of the machine where these queries originate. Can I determine this using the data available in the Azure Data Studio Profiler? Or is there some other way to work backward to find it? Since this is hosted on Azure, I can't use the SQL Server Management Studio Profiler, which I think would give me the hostname right away.
Azure SQL Auditing should provide you with the application name, login name, and client IP address that executed the query. Please read this article to enable auditing, and look for the event type BATCH_COMPLETED.
```powershell
Set-AzureRmSqlDatabaseAuditing `
    -State Enabled `
    -ResourceGroupName "rgYourResourceGroup" `
    -ServerName "yourservername" `
    -StorageAccountName "StorageAccountForAuditing" `
    -DatabaseName "YourDatabaseName"
```
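Once auditing has collected some events, one way to pull the client IPs back out is to query the audit files directly from T-SQL; a sketch, assuming the storage container URL matches the auditing configuration above:

```sql
-- Sketch only: the container URL is a placeholder for the storage account
-- configured for auditing above.
SELECT event_time, client_ip, application_name, server_principal_name, statement
FROM sys.fn_get_audit_file(
    'https://storageaccountforauditing.blob.core.windows.net/sqldbauditlogs/',
    DEFAULT, DEFAULT);
```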
I have used the following PowerShell command to delete a database (after first setting the variables):

```powershell
Remove-AzureRmSqlDatabase -ServerName $server -ResourceGroupName $rgname -DatabaseName $dbname
```

and I have also tried using the Azure Portal.
The portal indicates the deletion succeeded, as do the activity logs; however, the resource is not actually being deleted.
Screenshot of activity log:
The deletes (performed on a number of occasions after the database comes back) show as successful, but there is an audit policy that seems to be doing something.
There are no locks on the resource group.
UPDATE:
I have deleted the database from SSMS, and it is no longer showing there or in the portal. (I will wait to see if it comes back, as it did when deleting via the portal and PowerShell.)
UPDATE 2:
The database is now back, so it has now been deleted three ways: via the portal, PowerShell, and SSMS.
It turns out the web application uses EF migrations, which are recreating the database.

Note: The bigger issue is that the database is recreated on a much higher, and much more expensive, tier.
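For anyone hitting the same thing, a minimal sketch of switching that behavior off, assuming EF6 and a placeholder context name (MyAppDbContext); a null initializer stops EF from creating or migrating the database on startup:

```csharp
using System.Data.Entity;

// Placeholder context standing in for the app's real EF6 context.
public class MyAppDbContext : DbContext { }

public class MvcApplication : System.Web.HttpApplication
{
    protected void Application_Start()
    {
        // A null initializer stops EF6 from creating or migrating the
        // database automatically on first use.
        Database.SetInitializer<MyAppDbContext>(null);
    }
}
```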
Do you happen to have a rogue policy somewhere? It seems something is running a policy with a deployIfNotExists effect on the resource. Without access to your environment, there's not much more I can recommend.
Check the documentation here: https://learn.microsoft.com/en-us/azure/governance/policy/concepts/effects#audit
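A quick way to check for that, sketched with the AzureRm cmdlets used elsewhere in this thread (the scope string is a placeholder):

```powershell
# Sketch only: list policy assignments at the resource group scope, then look
# for a definition that uses a deployIfNotExists effect.
Get-AzureRmPolicyAssignment -Scope "/subscriptions/<sub-id>/resourceGroups/rgYourResourceGroup"
```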
I am attempting to create a SharePoint Server Farm using the option in the New menu in Azure. I've tried about five times now, and each time it fails after about an hour.
The error is so cryptic that I cannot figure out exactly what is going on.
The operation name is: Microsoft.ClassicCompute/virtualMachines/extensions/write
The status is: Failed
And then another,
The operation name is: Update deployment
The status is: Failed.
Thoughts?
I've had a similar experience using the "new" preview portal when deploying databases. This portal uses Resource Manager to deploy its resources, which is a new way of doing things. Anyhow, I used the PowerShell command:
```powershell
Get-AzureResourceGroupLog -Name ResourceGroupName
```
This provided me the detail to understand what was happening with the deployment. I had to switch to a different datacenter because the one I was trying to use was running hot.
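If the summary output isn't enough, that era of the module also had a -DetailedOutput switch (an assumption worth verifying against your installed version) that expands the status messages; a sketch with a placeholder resource group name:

```powershell
# Sketch only: "-DetailedOutput" expands the status message, which typically
# contains the underlying provisioning error. The group name is a placeholder.
Get-AzureResourceGroupLog -Name "MySharePointFarmRG" -DetailedOutput
```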
I talk a little more about the other options this command has here:
http://devian.co/?p=1181