I need some help with creating network interfaces in Azure with Terraform. I am getting an error while provisioning them. Please tell me if I am doing anything wrong.
Error:
azurerm_network_interface.revproxy-1: network.InterfacesClient#CreateOrUpdate: Failure sending request: StatusCode=400 -- Original Error: Code="IpConfigDeleteNotSupported" Message="IP Configuration dataname-1-configuration cannot be deleted. Deletion and renaming of primary IP Configuration is not supported" Details=[]
* azurerm_network_interface.core-1: 1 error(s) occurred:
It would help if we had additional information in the form of:
The full Terraform script used to deploy the infrastructure (not just the network interface part, since it depends on resources previously created but not present in the script provided in the comments).
The steps leading to this error.
It sounds like there was a multi-step process leading to this error. The error references 'dataname-1-configuration', yet that name is not present in the script provided, which suggests the problem lies somewhere in that multi-step process: as the message itself states, Azure will not delete or rename the primary IP configuration of an existing network interface, so changing the ip_configuration name in the Terraform code after the NIC has been created produces exactly this failure. We could certainly be wrong, however, and the information above would help in understanding this.
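For illustration, here is a minimal sketch of the resource involved (every name and variable below is a hypothetical placeholder, not taken from the original script, and exact attribute names can vary with the azurerm provider version). The key point is that the ip_configuration name of a NIC that already exists must stay stable between applies:

```hcl
# Minimal sketch; all names and variables are hypothetical placeholders.
resource "azurerm_network_interface" "revproxy-1" {
  name                = "revproxy-1-nic"
  location            = "${var.location}"
  resource_group_name = "${var.resource_group_name}"

  ip_configuration {
    # This is the primary IP configuration. If the NIC was originally created
    # with a different name here (e.g. "dataname-1-configuration"), changing it
    # asks Azure to delete and recreate the primary IP configuration, which
    # fails with IpConfigDeleteNotSupported.
    name                          = "revproxy-1-configuration"
    subnet_id                     = "${var.subnet_id}"
    private_ip_address_allocation = "Dynamic"
  }
}
```

If the configuration name really does need to change, the NIC generally has to be replaced (destroyed and recreated) rather than updated in place, since Azure will not rename the primary IP configuration.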
Related
I'm new to the Security Command Center (SCC) and Data Loss Prevention (DLP). I'm trying to create a job in DLP to check if there is any PII in a BigQuery table, but I'm getting the following error upon creation:
Request violates constraint constraints/gcp.resourceLocations on the project resource.
Learn more https://cloud.google.com/resource-manager/docs/organization-policy/defining-locations.
On the organization level, there's an organization policy (inherited in my project) allowing resources to be created in Europe only due to GDPR. I suspect that maybe DLP runs in some other region (US maybe) and that's why I'm getting this error.
I can't seem to be able to choose where the job runs in the options when creating the job, and I can't find anything about this in the documentation. Any idea why I am getting this error and how to fix it?
The answer is copied from here.
Your organization policy is not allowing DLP to run the job: at the moment the DLP API is blocked by the "constraints/gcp.resourceLocations" constraint and there is no workaround. However, there is a feature request to allow setting a specific location rather than "global", which is what is in fact causing this issue.
I am trying to Queue an existing build that worked fine up until a few days ago. The message I am getting is the following:
The pipeline is not valid. Job Phase_1: Step NuGetInstaller_1 input
externalEndpoints references service connection
{guid} which could not be found. The
service connection does not exist or has not been authorized for use.
For authorization details, refer to https://aka.ms/yamlauthz.
From these links, Troubleshooting authorization for a YAML pipeline and Build Error: The pipeline is not valid, I understand there is a chance that someone created a duplicate endpoint that somehow prevents the build ("A duplicate endpoint (which points to a KeyVault) was created by someone not on our team and this duplicated endpoint was preventing any builds that referenced the endpoint."), but I can't find exactly what to look for, and the suggestion in the first link didn't work.
Can anyone suggest what to look for or what might be the problem in my case?
Many thanks.
I am trying to follow the tutorial below:
Azure Tutorial
As noted at the bottom, there appear to have been changes since it was created.
When I get to the part where I create an input for my Stream Analytics job, I cannot select an event hub, even though there is one in my subscription.
So I tried to provide the information manually, and I get an error stating "invalid token".
Has anyone got any ideas how to resolve this, or can anyone point me to a better/more recent tutorial?
I am looking to stream data in real time.
Paul
Thanks for the help here, I ended up using the secondary key and that worked fine!
Change to using the secondary connection string, or use a different shared access policy altogether. You can use the primary key of the new shared access policy.
PS: It is a weird error; sometimes removing the last ";" worked.
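For reference, a shared access policy's connection string (placeholder values, not taken from the question) generally has the shape below; the trailing ";" mentioned above would be an extra semicolon at the very end of that string. Policies defined on the event hub itself also append ";EntityPath=<event-hub-name>".

```
Endpoint=sb://<namespace>.servicebus.windows.net/;SharedAccessKeyName=<policy-name>;SharedAccessKey=<key>
```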
I have followed all steps shown in the MSDN documentation to Copy File from FTP.
So far, the datasets are created, the linked services are created, and the pipeline is created. The diagram for the pipeline shows the logical flow. However, when I schedule the ADF pipeline to do the work for me, it fails. The input dataset passes, but when executing the output dataset, I am presented with the following error.
Copy activity encountered a user error at Source side:
ErrorCode=UserErrorFileNotFound,'Type=Microsoft.DataTransfer.Common.Shared.HybridDeliveryException,Message=Cannot
find the file specified. Folder path: 'Test/', File filter:
'Testfile.text'.,Source=Microsoft.DataTransfer.ClientLibrary,''Type=System.Net.WebException,Message=The
remote server returned an error: (500) Syntax error, command
unrecognized.,Source=System,'.
I can navigate to the folder myself and see the file, but when using ADF I am having issues. The firewall is set to allow the connection, yet I still get this error. As the logging is very minimal, I am unable to nail down the issue. Could someone help me out here?
PS: Cross Posted at MSDN
I encountered the same error and I was able to solve it by adding "enableSsl": true and "enableServerCertificateValidation": true to the FTP linked service definition.
I've got an Azure app up and running, but various requests generate a 500 error. There are no other details that come back from the server to let me know exactly what the problem is: no stack trace, no error message. The only thing I get back from the server is the HTTP headers indicating I've got an error.
I've done a little looking around but can't seem to find a way to retrieve the error details that I'm looking for. I've seen some articles that suggest that I enable logging, but I'm not sure 1) how to do that, 2) where those log files would go and 3) how to access said log files. I've seen posts that say to add a whole bunch of code to my application to enable logging, but all I'm looking for is an error message and a stack trace from a 500 error. Do I really have to add a bunch of code to my app to see that information? If not, how can I get at it?
Thanks!
Chris
The best long-term solution is to enable Azure Diagnostics, which I think is what you're referring to. If you want a quick-and-dirty solution, you can log errors out to a file and then RDP into the role instances to view them. This is very similar to what you would do on a server in your own datacenter.
You can create the logs however you like. I've used log4net and RollingFileAppenders with some success. Setting the logfile path to something like "\logs\mylog.txt" will place the logs in the E: drive of the VM. Note you'll still need code somewhere in your app to capture the error and write it to the log - typically the global error handler in Global.asax is a good place for that.
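As a minimal sketch of that approach (assuming log4net is referenced and its RollingFileAppender is configured in web.config; class and logger names are arbitrary), the handler in Global.asax could look something like this:

```csharp
// Global.asax.cs
using System;
using System.Web;
using log4net;

public class Global : HttpApplication
{
    private static readonly ILog Log = LogManager.GetLogger(typeof(Global));

    protected void Application_Start(object sender, EventArgs e)
    {
        // Reads the log4net configuration (including the RollingFileAppender
        // and its "\logs\mylog.txt" path) from web.config.
        log4net.Config.XmlConfigurator.Configure();
    }

    protected void Application_Error(object sender, EventArgs e)
    {
        // Capture the unhandled exception behind the 500 response and write it,
        // with its stack trace, to the log file.
        Exception ex = Server.GetLastError();
        Log.Error("Unhandled exception", ex);
    }
}
```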
You'll also have to enable RDP access to your role instances. There are many articles detailing how to do that. Here's one.
This is not a generally recommended approach because the logs may disappear when the role recycles or is recreated. It's also a pain in the butt to log in to all those different servers to keep an eye on them.
One other warning: it's possible that the 500 error is due to some failure in your web.config. If that is the case, all the application-level error logging in the world isn't going to help you. So be sure that your web.config is valid, and also check the Windows Event Logs while you're RDP'd into the server.
A 500 Internal Server Error is most often caused by a problem on the server: it could not process the incoming request, or there is a problem in the configuration. So try to run the app locally and see whether the problem reproduces there. You can record errors in a database from your catch blocks or from Application_Error, and you can also use tracing. Believe me, they are very helpful and worth a few extra lines of code.
For tracing, have a look here: http://msdn.microsoft.com/en-us/magazine/ff714589.aspx