Duplicate record name in DNS

Currently I have a TXT record with record name example.com for Amazon SES. I will be adding another TXT record for DMARC with the same record name. Will the TXT record in Table 1 be overwritten by Table 2?
Table 1
|Record Name|Record Type|Record Value|
|:----------|:----------|:-----------|
|example.com|TXT |amazonses:xxxxxxxxxxxxxxxx/xxxxxxxxxxxxxxxxxxxxxx=|
Table 2
|Record Name|Record Type|Record Value|
|:----------|:----------|:-----------|
|example.com|TXT |"v=DMARC1;p=reject;pct=100;rua=mailto:dmarcreports#example.com;ruf=mailto:dmarcreports#example.com;adkim=s"|

Each TXT record is a TXT record on its own; they are separate. The same goes for any other type: if a name has two A records, they are not "overwritten". DNS handles record sets, so a single name can resolve to multiple values for each type (with some exceptions, such as CNAME, which can appear only once).
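Once both records are published, a lookup returns them together as one record set with two values. A minimal sketch of checking this with the dnspython library (2.x), assuming both TXT records are already in the zone:
import dns.resolver

# Both TXT values come back in a single answer (record set) for example.com.
answers = dns.resolver.resolve('example.com', 'TXT')
for rdata in answers:
    print(rdata.to_text())
# Expected: the amazonses verification string and the v=DMARC1 policy,
# each as its own TXT record.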

Related

How do I get the URL of a customer record in SuiteTalk by passing recordType and internalId

I am getting the customer records by passing the saved search script id and some filters in CustomerSearchAdvanced, but I also want the URL of each record.
Example: recordType is customer and internalId is 3645.
The URL is https://------.app.netsuite.com/app/common/entity/custjob.nl?id=3645
My organization chose to just hard-code the URL for each record type and then append the internal id. Fortunately, all record types in a family share a single URL that resolves to the specific record type's URL based on the internal id. Here is an example of what I mean:
"/app/accounting/transactions/transaction.nl?id=1234567"
This URL format can be used for all transactions. Assuming that record 1234567 is a sales order, this resolves to:
"/app/accounting/transactions/salesord.nl?id=1234567"
If record 1234567 is an estimate, then this will resolve to:
"/app/accounting/transactions/estimate.nl?id=1234567"
Therefore, you will not have to hard-code URLs for each record type, just each record family. Here are some other family URLs:
"/app/common/entity/entity.nl?id="
"/app/common/item/item.nl?id="

How to rename a column in a Cassandra table

I have a question about Cassandra. I want to rename a column, but it is showing a syntax error because my column name contains a space. How can I change the column name, e.g. sample column into samplecolumn?
You can use ALTER TABLE to rename a column, but there are a lot of restrictions on it. Since SSTables are immutable, changing what is already on disk would mean rewriting everything.
The main purpose of RENAME is to change the names of CQL-generated primary key and column names that are missing from a legacy table. The following restrictions apply to the RENAME operation:
You can only rename clustering columns, which are part of the primary key.
You cannot rename the partition key.
You can index a renamed column.
You cannot rename a column if an index has been created on it.
You cannot rename a static column (since you cannot use a static column in the table's primary key).
https://docs.datastax.com/en/cql/3.1/cql/cql_reference/alter_table_r.html
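A minimal sketch of the statement with the Python driver, assuming the keyspace and table names, and assuming "sample column" is a clustering column (per the restrictions above); a name containing a space must be double-quoted:
from cassandra.cluster import Cluster

# Keyspace and table names are assumed for illustration.
session = Cluster(['127.0.0.1']).connect('my_keyspace')

# Double quotes preserve the space in the original column name;
# RENAME only works on clustering columns.
session.execute('ALTER TABLE my_table RENAME "sample column" TO samplecolumn')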

Using Boto3, get put_item to replace an object in DynamoDB if certain attributes exist on the object?

I'm having issues getting Boto3's put_item to replace an object in DynamoDB if certain attributes (some of which are the primary keys of the DynamoDB table, and some of which are not) match up with the new object that I'm inserting into the table.
For instance, if a DynamoDB table has within it:
|user (Partition Key)|ts (Sort Key)|height|weight|
|:-------------------|:------------|:-----|:-----|
|'fred'|01-01-2017|5'10''|190|
|'george'|01-02-2017|5'08''|200|
and I'm trying to add a new row to this table:
|user|ts|height|weight|
|:---|:--|:-----|:-----|
|'fred'|01-01-2018|5'10''|200|
Based on the user fred and the height 5'10'' matching another user already in the table, I'd like to substitute the new entry for the old one. The docs are a bit unclear for boto3 and AWS put_item -- how do I do so?
For reference, this is what I have currently:
tracking_result = tracking_dbs[drivetype].put_item(
    Item=track,
    # insert fields to remove old entry that matches
    # up with certain attributes from new entry here
)
The already existing entry in DynamoDB will be replaced by the new entry only if the partition key and the sort key of the new entry match those of the existing entry. Otherwise a new entry will be created.
In your case you have to delete the existing entry first -- find it by querying on the partition key with a filter on the non-key attributes (you don't know the old item's sort key), or by scanning, then delete it by its full key -- and then put the new entry.
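A minimal sketch of that flow with boto3, reusing the question's tracking_dbs, drivetype, and track names; the match criteria (user plus height) are assumptions based on the example above:
from boto3.dynamodb.conditions import Key, Attr

table = tracking_dbs[drivetype]

# Find existing items for the same user with the same height (a non-key attribute).
existing = table.query(
    KeyConditionExpression=Key('user').eq(track['user']),
    FilterExpression=Attr('height').eq(track['height'])
)

# Delete each match by its full primary key, then write the new item.
for item in existing.get('Items', []):
    table.delete_item(Key={'user': item['user'], 'ts': item['ts']})

tracking_result = table.put_item(Item=track)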

Identify unique record from duplicates in Talend

http://postimg.org/image/89yglfakx/
Refer to the image at the link above for reference.
I have an Excel file which gets updated on a daily basis, i.e. the data is different every time.
I am pulling the data from the Excel sheet into a table using Talend. I have a primary key, Company_ID, defined in the table.
The problem I am facing is that the Excel sheet has a few duplicate Company_ID values.
It will also pick up more duplicates in the future, since the Excel file is updated daily, so the duplicate values in the Company_ID field will keep changing.
For Company_ID 1, I want to choose the unique record, i.e. the one that doesn't have nulls in the rest of the columns.
For Company_ID 3, there is a null value in some columns, which is OK since it is the only record for that Company_ID.
How do I choose, in Talend, the unique row that has the maximum number of column values present, e.g. in the case of Company_ID 1?
I tried using tUniqRow, but it just keeps the first record among the duplicates, so if the first record for a duplicate Company_ID has null values it won't work.
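Not a Talend job, but a minimal sketch (in Python/pandas, with an assumed file name) of the selection rule being asked for: for each Company_ID, keep the row with the most non-null columns.
import pandas as pd

df = pd.read_excel('companies.xlsx')  # file name assumed

# Count non-null values per row, then keep the richest row per Company_ID.
df['non_null'] = df.notna().sum(axis=1)
best = (df.sort_values('non_null', ascending=False)
          .drop_duplicates('Company_ID')
          .drop(columns='non_null'))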

How to optimize Cassandra model while still supporting querying by contents of lists

I just switched from Oracle to using Cassandra 2.0 with the Datastax driver and I'm having difficulty structuring my model for this big data approach. I have a Persons table with UUID and serialized Persons. These Persons have lists of addresses, names, identifications, and DOBs. For each of these lists I have an additional table with a compound key on each value in the respective list and an additional person_UUID column. This model feels too relational to me, but I don't know how else to structure it so that I have an index on (i.e. am able to search by) address, name, identification, and DOB. If Cassandra supported indexes on lists I would have just the one Persons table containing indexed lists for each of these.
In my application we receive transactions, which can contain within them 0 or more of each of those address, name, identification, and DOB. The persons are scored based on which person matched which criteria. A single person with the highest score is matched to a transaction. Any additional address, name, identification, and DOB data from the transaction that was matched is then added to that person.
The problem I'm having is that this matching is taking too long and the processing is falling far behind. This is caused by having to loop through result sets performing additional queries, since I can't make complex queries in Cassandra and I don't have sufficient memory to just do a huge select-all and filter in Java. For instance, I would like to select all Persons having at least two names in common with the transaction (names can have their order scrambled, so there is no first, middle, last; that would just be three names), but this would require a 'group by', which Cassandra does not support, and if I just selected all having any of the names in common in order to filter in Java, the result set is too large and I run out of memory.
I'm currently searching by only Identifications and Addresses, which yield a smaller result set (although it could still be hundreds) and for each one in this result set I query to see if it also matches on names and/or DOB. Besides still being slow this does not meet the project's requirements as a match on Name and DOB alone would be sufficient to link a transaction to a person if no higher score is found.
I know in Cassandra you should model your tables by the queries you do, not by the relationships of the entities, but I don't know how to apply this while maintaining the ability to query individually by address, name, identification, and DOB.
Any help or advice would be greatly appreciated. I'm very impressed by Cassandra but I haven't quite figured out how to make it work for me.
Tables:
Persons
[UUID | serialized_Person]
addresses
[address | person_UUID]
names
[name | person_UUID]
identifications
[identification | person_UUID]
DOBs
[DOB | person_UUID]
I did a lot more reading, and I'm now thinking I should change these tables around to the following:
Persons
[UUID | serialized_Person]
addresses
[address | Set of person_UUID]
names
[name | Set of person_UUID]
identifications
[identification | Set of person_UUID]
DOBs
[DOB | Set of person_UUID]
But I'm afraid of going beyond the max storage for a set (65,536 UUIDs) for some names and DOBs. Instead I think I'll have to do a dynamic column family with the column names as the person_UUIDs, or is a row with over 65k columns very problematic as well? Thoughts?
It looks like you can't have these dynamic column families in the new version of Cassandra; you have to alter the table to insert the new column with a specific name. I don't know how to store more than 64k values for a row then. With a perfect distribution I will run out of space for DOBs at 23 million persons, and I'm expecting to have over 200 million persons. Maybe I just have to have multiple set columns?
DOBs
[DOB | Set of person_UUID_A | Set of person_UUID_B | Set of person_UUID_C]
and I just check the size and alter the table when a set reaches 64k? Anything better I can do?
I guess it's just CQL3 that enforces this, and if I really wanted to I could still do dynamic columns with Cassandra 2.0?
Ugh, this page from the DataStax docs seems to say I had it right the first way...:
When to use a collection
This answer is not very specific, but I'll come back and add to it when I get a chance.
First thing - don't serialize your Persons into a single column. This complicates searching for and updating any person info. OTOH, there are people who know what they're talking about who disagree with this view. ;)
Next, don't normalize your data. Disk space is cheap, so don't be afraid to write the same data to two places. Your code will need to make sure that the right thing is done.
Those items feed into this: If you want queries to be fast, consider what you need to make that query fast. That is, create a table just for that query. That may mean writing data to multiple tables for multiple queries. Pick a query, and build a table that holds exactly what you need for that query, indexed on whatever you have available for the lookup, such as an id.
So, if you need to query by address, build a table (really, a column family) indexed on address. If you need to support another query based on identification, index on that. Each table may contain duplicate data. This means that when you add a new user, you may be writing the same data to more than one table. This seems unnatural if relational databases are the only kind you've ever used, but you get benefits in return - namely, horizontal scalability thanks to the CAP Theorem.
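A minimal sketch of that write path with the Python driver; the keyspace and the table names (persons_by_name, persons_by_address) are hypothetical, and each lookup table is assumed to use the value as the partition key with person_uuid as a clustering column, so one name or address can map to any number of persons without hitting collection size limits:
import uuid
from cassandra.cluster import Cluster
from cassandra.query import BatchStatement

# Hypothetical lookup tables, e.g.:
#   CREATE TABLE persons_by_name (name text, person_uuid uuid, PRIMARY KEY (name, person_uuid));
#   CREATE TABLE persons_by_address (address text, person_uuid uuid, PRIMARY KEY (address, person_uuid));
session = Cluster(['127.0.0.1']).connect('people')
person_id = uuid.uuid4()

by_name = session.prepare("INSERT INTO persons_by_name (name, person_uuid) VALUES (?, ?)")
by_address = session.prepare("INSERT INTO persons_by_address (address, person_uuid) VALUES (?, ?)")

# Write the same person into every table that a query needs.
batch = BatchStatement()
for name in ['JOHN', 'SMITH']:
    batch.add(by_name, (name, person_id))
batch.add(by_address, ('123 MAIN ST', person_id))
session.execute(batch)
Each lookup then becomes a single-partition read, e.g. SELECT person_uuid FROM persons_by_name WHERE name = ?.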
Edit:
The two column families in that last example could just hold identifiers into another table. So, voilà, you have made an index. OTOH, that means each query takes two reads. But it will still be a performance improvement in many cases.
Edit:
Attempting to explain the previous edit:
Say you have a users table/column family:
CREATE TABLE users (
    id uuid PRIMARY KEY,
    display_name text,
    avatar text
);
And you want to find a user's avatar given a display name (a contrived example). Searching users will be slow. So, you could create a table/CF that serves as an index, let's call it users_by_name:
CREATE TABLE users_by_name (
    display_name text PRIMARY KEY,
    user_id uuid
);
The search on display_name is now done against users_by_name, and that gives you the user_id, which you use to issue a second query against users. In this case, user_id in users_by_name has the value of the primary key id in users. Both queries are fast.
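A sketch of those two reads with the Python driver, assuming a connected session and the tables above:
# First read: the index table gives the user_id for a display name.
row = session.execute(
    "SELECT user_id FROM users_by_name WHERE display_name = %s", ['fred']).one()

# Second read: fetch the avatar from the main table by primary key.
if row is not None:
    user = session.execute(
        "SELECT avatar FROM users WHERE id = %s", [row.user_id]).one()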
Or, you could put avatar in users_by_name, and accomplish the same thing with one query by using more disk space.
CREATE TABLE users_by_name (
    display_name text PRIMARY KEY,
    avatar text
);
