I am trying to create a DNS service (automation of various DNS operations) to serve our existing private cloud. I am looking for options and ideas to do this. Is there any existing Java API to do this? Please suggest.
I did some research on possible solutions and found DNSJava to be a good fit, but I could not find much documentation or many examples. Here are some questions which, when answered, would solve my current problems:
How to add NS or A records to zone files?
How to print out the contents of a zone file?
I have created a local DNS server for testing purposes. It would be really helpful if the examples were given with respect to localhost.
Thank you!
After a lot of research, I found a way to modify zone files with DNSJava. BIND9 must be set up on the server, and the required zone files must be created with their basic information. Adding and deleting a record in a zone file is straightforward once this setup is in place. Please refer to this page to generate a TSIG key for BIND9. The code that actually adds a record is given below.
import org.xbill.DNS.*;

Name zoneName = null;
String domain = "your.domain";
String host = "hostname";
long ttl = 600;

// Look up the zone so we know which zone the update targets
Lookup lookup = new Lookup(Name.fromString(domain));
Record[] records = lookup.run();
if (records != null) {
    zoneName = records[0].getName();
}

if (zoneName != null) {
    // Fully qualified host name, relative to the zone
    Name hostName = Name.fromString(host, zoneName);

    // Dynamic update (RFC 2136) that adds an A record
    Update update = new Update(zoneName);
    update.add(hostName, Type.A, ttl, "192.168.2.50");

    // Send the signed update to the BIND9 server (localhost in the test setup)
    Resolver resolver = new SimpleResolver("127.0.0.1");
    resolver.setTCP(true);
    resolver.setTSIGKey(new TSIG("your.domain.", "z0pll56C4cwLXYd2HG6WsQ=="));

    Message response = resolver.send(update);
    System.out.println(response.getHeader());
}
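To also cover printing the contents of a zone (the second question above), here is a minimal sketch using a zone transfer (AXFR). It assumes dnsjava 2.x, where ZoneTransferIn.run() returns the record list, and that BIND9 allows transfers from this client (allow-transfer in named.conf); pass your TSIG instead of null if transfers are signed.
import java.util.List;
import org.xbill.DNS.*;

// Request a full zone transfer (AXFR) from the local BIND9 server
ZoneTransferIn xfr = ZoneTransferIn.newAXFR(new Name("your.domain."), "127.0.0.1", null);

// In dnsjava 2.x, run() returns the zone's records as a list
List<?> zoneRecords = xfr.run();
for (Object record : zoneRecords) {
    System.out.println(record); // each record prints as a standard zone-file line
}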
I have 3 environments for my infrastructure. They are all identical apart from their size. I understand this is a good use case for Terraform workspaces, and it does indeed work well in that regard, but please correct me if this is not the right way to go.
Now my only issue is with managing the DNS within the workspaces. I use the Google provider and that works by having 2 types of resources: a google_dns_managed_zone which represents the zone, and a google_dns_record_set type for each DNS record.
Note that the record set type needs to have a reference to the managed zone type.
With that in mind, I need to manage the DNS zone from the production environment. I can't share that resource in the other workspaces because I should be able to destroy the dev or staging workspace without destroying the DNS zone.
I try to solve that issue with count, using it as a boolean as shown in the code below. It feels pretty hackish, but that's what I have found in the Terraform community. Any improvement is welcome.
That allows me to have the zone and the production records (like the MX record shown below as an example) present only in the prod workspace.
But then I am stuck when it comes to managing record sets in one specific workspace only. I need that, for example, when creating an nginx instance in the dev workspace and automatically creating a DNS record set for it, like dev.example.com.
For that I need to access the managed zone resource. As shown below, I use terraform_remote_state to access the resource from the prod workspace. To the extent of my understanding, that works through an output, which you can see below. When I select the prod workspace, I can indeed output the managed zone, and when I select another workspace, the remote state retrieves the managed zone from prod successfully. My issue is that Terraform then fails on the output line, because the zone resource only exists in the prod workspace; in any other workspace there is nothing to output.
So this feels like a dead end, and I don't understand whether there is a better way to achieve this. I did a fair bit of research and asked the community, but could not find an answer. It seems to me that managing DNS is common to all infrastructures and should be well covered. What am I doing wrong, and how should this be done?
locals {
  environment = "${terraform.workspace}"

  dns_zone_managers = {
    "dev"     = "0"
    "staging" = "0"
    "prod"    = "1"
  }

  dns_zone_manager = "${lookup(local.dns_zone_managers, local.environment)}"
}
resource "google_dns_managed_zone" "base_zone" {
name = "base_zone"
dns_name = "example.com."
count = "${local.dns_zone_manager}"
}
resource "google_dns_record_set" "mx" {
name = "${google_dns_managed_zone.base_zone.dns_name}"
managed_zone = "${google_dns_managed_zone.base_zone.name}"
type = "MX"
ttl = 300
rrdatas = [
"10 spool.mail.example.com.",
"50 fb.mail.example.com."
]
count = "${local.dns_zone_manager}"
}
data "terraform_remote_state" "dns" {
backend = "local"
workspace = "prod"
}
output "dns_zone_name" {
value = "${google_dns_managed_zone.base_zone.*.name[0]}"
}
Then I can introduce record sets in a specific workspace only, using count again and referring to the managed zone through the remote state like so:
resource "google_dns_record_set" "a" {
name = "dev"
managed_zone = "${data.terraform_remote_state.dns.dns_zone_name}"
type = "A"
ttl = 300
rrdatas = ["1.2.3.4"]
}
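For reference, here is a minimal sketch of the kind of workaround I have come across for the failing output (assuming Terraform 0.11 syntax): make the output defined in every workspace by falling back to an empty string when base_zone is not created there. I am not sure it is the right approach.
output "dns_zone_name" {
  # base_zone.*.name is an empty list when count = 0, so append "" and take
  # the first element; the output then exists in every workspace.
  value = "${element(concat(google_dns_managed_zone.base_zone.*.name, list("")), 0)}"
}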
I have a multi-domain active directory environment and need to find a user based on DOMAIN\username.
The following code works great for finding a user by SID.
DirectorySearcher directorySearcher = new DirectorySearcher(new DirectoryEntry(
    "GC://" + Forest.GetCurrentForest().Name));

directorySearcher.Filter =
    "(&" +
    "(&(objectCategory=person)(objectClass=user))" +
    "(objectSid=" + this.SID + "))";

var result = directorySearcher.FindOne();
But now I'm in a situation where all I have is DOMAIN\username.
What goes in the filter for this?
One approach I considered is connecting to the specific domain rather than the global catalog and searching by the unqualified SAMAccountName. But my problem there is I don't know how to get from DOMAIN to DC=Domain,DC=Org or domain.org.
When I'm in Active Directory Users and Computers, there seems to be no problem searching the entire directory by DOMAIN\username. What is happening there behind the scenes?
This was the missing piece: translate DOMAIN\username into a SID, then reuse the SID-based search above.
using System.Security.Principal;

var sid = (SecurityIdentifier)new NTAccount(userName)
    .Translate(typeof(SecurityIdentifier));
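Putting the two pieces together, a minimal sketch, reusing the global catalog search and SID-based filter from the question; the helper name is mine, and userName is the DOMAIN\username string.
using System.DirectoryServices;
using System.DirectoryServices.ActiveDirectory;
using System.Security.Principal;

// Hypothetical helper: DOMAIN\username -> SID -> global catalog search
static SearchResult FindUserByDomainQualifiedName(string userName)
{
    // Translate DOMAIN\username into its SID (the missing piece above)
    var sid = (SecurityIdentifier)new NTAccount(userName)
        .Translate(typeof(SecurityIdentifier));

    using (var entry = new DirectoryEntry("GC://" + Forest.GetCurrentForest().Name))
    using (var searcher = new DirectorySearcher(entry))
    {
        // Same SID-based filter that already worked in the question
        searcher.Filter =
            "(&(&(objectCategory=person)(objectClass=user))" +
            "(objectSid=" + sid.Value + "))";

        return searcher.FindOne();
    }
}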
Is there any solution for creating records in other databases from CRM 2011 records? When a record such as "cost" is created in CRM 2011, we want a corresponding record to be created in our Oracle DB. Can this be done through a plugin, or should a separate service be created for this?
Could you please provide references or solutions for this?
Any help would be greatly appreciated.
We had a similar request from a customer a while ago. They claimed that CRM's database wasn't to be trusted and wanted to securely store a copy of the records created in - guess what - SQL Server too. (Yes, we do understand the irony. They didn't.)
The way we resolved it was to create a plugin. However, bear in mind that simply reacting to the Create message won't really do. You need to set up a listener for three of the CRUD operations (retrieval doesn't affect the external database, so it's rather the C_UD operations, then).
Here's the skeleton of the main Execute method.
public void Execute(IServiceProvider serviceProvider)
{
    Context = GetContextFromProvider(serviceProvider);
    Service = GetServiceFromProvider(serviceProvider);

    switch (Context.MessageName)
    {
        case "Create": ExecuteCreate(); break;
        case "Update": ExecuteUpdate(); break;
        case "Delete": ExecuteDelete(); break;
    }
}
After this dispatcher, you can implement the actual calls to the other database. There are three gotchas I'd like to give you a heads-up on.
Remember to provide a suitable value to the outer DB when CRM doesn't offer you one (see the sketch after this list).
Register the plugin as asynchronous since you'll be talking to an external resource.
Consider the problem with entity references, whether to store them recursively as well.
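To illustrate the first gotcha, here is a minimal sketch (the attribute name is a placeholder): on Update, the Target entity only contains the attributes that actually changed, so fall back to a registered pre-image, or to a default, when the value is missing.
using Microsoft.Xrm.Sdk;

// Hypothetical helper; "new_costvalue" is a placeholder attribute name
private decimal GetCostValue(Entity target, Entity preImage)
{
    // The Update target only carries changed attributes
    if (target.Contains("new_costvalue"))
        return target.GetAttributeValue<decimal>("new_costvalue");

    // Fall back to the pre-image registered on the plugin step
    if (preImage != null && preImage.Contains("new_costvalue"))
        return preImage.GetAttributeValue<decimal>("new_costvalue");

    // Last resort: a default the outer DB can accept
    return 0m;
}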
Walk-through for plugin construction
Link to CRM SDK if you haven't got that
Information on registering the plugin
And besides that, I've got a walk-through (including code and structure) on the subject in my blog. The URL to it, you'll have to figure out yourself - I'm not going to self-promote but it's got to do with my name and WP. Google is your friend. :)
You could use a plugin to create a record in another system, although you would need to think about syncing and ensuring you don't get duplicates, but it certainly can be done.
Tutorial on plugins can be found here.
You need to write a plugin that runs on Create and uses the information on the created Cost entity to create a record in your Oracle DB.
As an example:
using System;
using System.Data;
using System.Data.OracleClient;
using Microsoft.Xrm.Sdk;

public void Execute(IServiceProvider serviceProvider)
{
    var context = (IPluginExecutionContext)serviceProvider.GetService(typeof(IPluginExecutionContext));

    // Get the created entity from CRM
    var theCreatedEntity = context.InputParameters["Target"] as Entity;

    // Build up a stored procedure call
    using (OracleConnection objConn = new OracleConnection("connection string"))
    {
        objConn.Open();

        var cmd = new OracleCommand();
        cmd.Connection = objConn;
        cmd.CommandText = "stored procedure name";
        cmd.CommandType = CommandType.StoredProcedure;
        cmd.Parameters.Add("param1", OracleType.Number).Value = theCreatedEntity.GetAttributeValue<int>("Attribute1");
        cmd.Parameters.Add("param2", OracleType.Number).Value = theCreatedEntity.GetAttributeValue<int>("Attribute2");
        // etc.
        cmd.ExecuteNonQuery();
    }
}
That should give you enough to get going.
The following is the code I use to fetch data from an Azure table:
CloudStorageAccount storageAccount = CloudStorageAccount.Parse(ConnectionString);
CloudTableClient tableClient = storageAccount.CreateCloudTableClient();
TableServiceContext serviceContext = tableClient.GetDataServiceContext();

CloudTableQuery<Customer> partitionQuery =
    (from e in serviceContext.CreateQuery<Customer>("Customer")
     select e).AsTableServiceQuery<Customer>();

// Loop through the results, displaying information about the entity
foreach (Customer entity in partitionQuery)
{
    CustomerList.Add(entity);
}
Everything was working fine. Suddenly I observed that the above call no longer returns anything.
It is still able to get the ServiceContext, and I tried to troubleshoot with the Fiddler tool: the data does come back from the cloud table (I can see it in the query response in Fiddler), but when I iterate over the "partitionQuery" object it yields no data. The same code works on other machines, and those machines use the same Azure SDK. Can anyone help with this? Thanks.
Update: I am now using the new version of the Azure SDK (October 2012). Could this be the problem?
I've experienced the same problem. When I specify a partition key (or any other query), it works just fine.
var rangeQuery = new TableQuery<PhotoEvent>().Where(
    TableQuery.GenerateFilterConditionForBool("active", QueryComparisons.Equal, true));
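For completeness, a minimal sketch of building and executing such a query with the October 2012 SDK, assuming Customer derives from TableEntity and reusing ConnectionString and CustomerList from the question; the partition key value is a placeholder.
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Table;

CloudStorageAccount storageAccount = CloudStorageAccount.Parse(ConnectionString);
CloudTableClient tableClient = storageAccount.CreateCloudTableClient();
CloudTable table = tableClient.GetTableReference("Customer");

// Filter on the partition key instead of scanning the whole table
TableQuery<Customer> query = new TableQuery<Customer>().Where(
    TableQuery.GenerateFilterCondition("PartitionKey", QueryComparisons.Equal, "partition1"));

foreach (Customer entity in table.ExecuteQuery(query))
{
    CustomerList.Add(entity);
}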
If you are using the October 2012 Azure SDK, you want to change:
TableServiceContext serviceContext = tableClient.GetDataServiceContext();

CloudTableQuery<Customer> partitionQuery =
    (from e in serviceContext.CreateQuery<Customer>("Customer")
     select e).AsTableServiceQuery<Customer>();
to
TableServiceContext serviceContext = tableClient.GetTableServiceContext();

TableServiceQuery<Customer> partitionQuery =
    (from e in serviceContext.CreateQuery<Customer>("Customer")
     select e).AsTableServiceQuery(serviceContext);
Note: see a more detailed list of breaking changes.
Also, please make sure you have included the Microsoft.WindowsAzure.Storage.Table.DataServices namespace.
If you are still observing unexpected behavior, please provide a Fiddler trace (through email) so that we can investigate this further.
The Problem
We upload (large amounts of) files to SharePoint using FrontPage RPC (put documents call). As far as we've been able to find out, setting the value of taxonomy fields through this protocol requires their WssId.
The problem is that unless terms have been explicitly used before on a list item, they don't seem to have a WssId. This causes uploading documents with previously unused metadata terms to fail.
The Code
The call to TaxonomyField.GetWssIdsOfTerm in the code snippet below simply doesn't return an ID for those terms.
SPSite site = new SPSite("http://some.site.com/foo/bar");
SPWeb web = site.OpenWeb();

TaxonomySession session = new TaxonomySession(site);
TermStore termStore = session.TermStores[new Guid("3ead46e7-6bb2-4a54-8cf5-497fc7229697")];
TermSet termSet = termStore.GetTermSet(new Guid("f21ac592-5e51-49d0-88a8-50be7682de55"));
Guid termId = new Guid("a40d53ed-a017-4fcd-a2f3-4c709272eee4");

int[] wssIds = TaxonomyField.GetWssIdsOfTerm(site, termStore.Id, termSet.Id, termId, false, 1);

foreach (int wssId in wssIds)
{
    Console.WriteLine(wssId);
}
We also tried querying the taxonomy hidden list directly, with similar results.
The Cry For Help
Both confirmation and advice on how to tackle this would be appreciated. I see three possible routes to a solution:
Change the way we are uploading, either by uploading the terms in a different way, or by switching to a different protocol.
Query for the metadata WssIds in a different way. One that works for unused terms.
Write/find a tool to preresolve WssIds for all terms. Suggestions on how to do this elegantly are most welcome.
Setting the WssId value to -1 should help you. I had a similar problem (copying documents containing metadata fields) between two different web applications and spent many hours solving strange metadata issues. In the end, setting the value to -1 solved all of them. Even when GetWssIdsOfTerm returns a value, I use -1 and it works correctly.
There is probably some background logic that takes care of the WssId.
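A minimal sketch of what that looks like when setting the field server-side; it assumes web is an open SPWeb as in the question's snippet, the list name, field name, and label are placeholders, and the term GUID is the one from the question.
using Microsoft.SharePoint;
using Microsoft.SharePoint.Taxonomy;

// Placeholder list and field names; -1 lets SharePoint resolve the WssId itself
SPListItem item = web.Lists["Documents"].Items[0];
TaxonomyField field = (TaxonomyField)item.Fields["My Managed Metadata Field"];

TaxonomyFieldValue value = new TaxonomyFieldValue(field);
value.TermGuid = "a40d53ed-a017-4fcd-a2f3-4c709272eee4";
value.Label = "My Term";
value.WssId = -1; // previously unused term: no WssId exists yet

item[field.Id] = value;
item.Update();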
Radek