_deleted_conflicts in CouchDB?

Using CouchDB 1.0.1.
I have DELETEd some documents, then I PUT some other documents with the same _id as the deleted ones.
Now these new docs have the _deleted_conflicts field:
"_deleted_conflicts":["2-667c9e8e75f8ee51a4ab79ed534622dd"]
It looks like the _rev field of the deleted doc (can't be sure though).
The CouchDB wiki just says "Information about conflicts".
Is this a problem?
Why does CouchDB save this information?
Am I supposed to do something about it?
Thanks,
Giacomo

I'm not sure whether this is actually a problem for you, but it can come up during replication: deleted revisions are kept as tombstones so other replicas can learn about the deletions.
If you want to prevent it from coming up, look into the /db/_purge command. It removes all references to the given deleted documents, and you can specify a single document ID to affect.
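As a sketch of what a purge request looks like (the database URL and the document ID "some_doc" here are hypothetical; the revision is the one from the question), the payload for POST /db/_purge maps each _id to the list of revisions to purge:

```python
import json

def build_purge_request(db_url, doc_revs):
    """Build the URL and JSON body for CouchDB's POST /db/_purge.

    doc_revs maps each document _id to the list of revisions to purge,
    e.g. the revs listed in its _deleted_conflicts field.
    """
    url = db_url.rstrip("/") + "/_purge"
    body = json.dumps(doc_revs)
    return url, body

# Purge the tombstone revision from the question (hypothetical doc id):
url, body = build_purge_request(
    "http://localhost:5984/mydb",
    {"some_doc": ["2-667c9e8e75f8ee51a4ab79ed534622dd"]},
)
# POST `body` to `url` with Content-Type: application/json,
# e.g. via urllib.request or curl.
```

Note that purge is a local operation: it does not replicate, so you would need to run it against each replica.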

Deleted conflicts should be purged regularly, otherwise a very long history will slow CouchDB down significantly (at least in the earlier versions of CouchDB I used).

Related

It seems that Liferay document indexing is not completed

I have a problem with the Elasticsearch indexing of documents.
Many documents are not retrieved by search (in portlets and on administration pages).
When I reindex from the Administration page, I see the missing documents in the logs.
If I modify a document manually, it is then retrieved by search.
What is the problem?
Samuel
(Sorry for my bad English ;) )
The problem was due to inconsistencies in the DB.
The ddm_content table was not correctly populated (data_ column).
Best regards

What to do after the TYPO3 security update from 13.09.2016?

I don't understand the security patch from last week: https://typo3.org/teams/security/security-bulletins/typo3-core/typo3-core-sa-2016-022/ . I have an old TYPO3 6.2 installation. I truncated all cf_* tables and opened the pages with UID 2-6, without a cHash. As a result I see 13 cf_cache_hash entries.
Now I have opened a detail page from a listing page in the frontend. I see some parameters in the URL like action, controller, the UID of the currently displayed record and, of course, a cHash.
Then I copied these parameters (excluding id=x) into the URLs of my pages 2-6. cf_cache_hash still has 13 records, so there is no cache flooding.
Or how should I interpret this quote:
Links with a valid cHash argument lead to newly generated page cache
entries. Because the cHash is not bound to a specific page, attackers
could use valid cHash arguments for multiple pages, leading to
additional useless page cache entries.
Next problem:
If extensions like realurl are used, it is required to flush their
caches (and TYPO3 caches as well)
Can you please tell me WHICH tables I/we should clear?
tx_realurl_urldecodecache
tx_realurl_urlencodecache
are maybe OK. But what about tx_realurl_pathcache? Of course I can clear that, but what about older entries from an earlier realurl configuration? If I truncate that table, those old entries are no longer valid and will not be built again, so old search-engine results become invalid.
Question from one of our customers: is it enough to clear the system cache in the backend, or should he click on "Clear all cache" in the Install Tool? Nice. IMO, that is not enough and the tables have to be truncated directly in the DB. Right?
Next one:
This means if such URLs are indexed by a search engine, visitors from
this search engine will end up on a not properly working page.
Hey, cool. And now? What is the solution? Keep it as it is? IMO it depends on an Install Tool setting called pageNotFoundOnCHashError. Right?
Please tell us what to do and please add some more details how to handle that.
Stefan
For me it boils down to (after installing the updated TYPO3 version):
If you don't use realurl: enable
$GLOBALS['TYPO3_CONF_VARS']['FE']['cHashIncludePageId'] = true;
and you are probably "done". Of course all old Google hits will be broken, but on a "public" site it's quite probable you never cared about Google anyway if you didn't run realurl (or similar)
If you use realurl 1.X on a 6.2
Don't enable the config (there'll probably never be a proper patch)
Two options:
take the risk of a DDOS
use the 1.x version from https://github.com/mogic-le/typo3-realurl
If I understand it correctly, it will set TYPO3 to no_cache mode if there is no hit in the caching table. While that is a performance issue, it prevents cache-table entries from being created (as a side effect)
If you run 7.6+ and realurl 2
Wait for realurl 2.1 (and take the risk?)
Change the caching framework to something like memcached (it's somewhat suggested between the lines: if you have a caching backend that cannot be used for a DDoS, you don't really have to care)
Use the fork from helhum (though I think that won't help you one bit regarding old links)
Realurl >= 2.1.0 supports this core option, but it is recommended to update to at least 2.1.4, which fixes various other cHash issues.

How do I clear out an Akavache database?

I am using Akavache for a cache of local objects. I'd like to be able to delete everything in the database (so it is as if it was the first time the program was run). I've seen the Vacuum method, but that only removes old items that have expired. What is the easy way to clean everything up?
@SmartyP and @fenix2222,
I had to do the following to permanently remove the data:
BlobCache.LocalMachine.InvalidateAll();
BlobCache.LocalMachine.Vacuum();
It appears as though InvalidateAll() essentially marks everything as expired, but you still must use Vacuum() to remove the expired items.
Turns out it is right there, I just couldn't see it!
BlobCache.UserAccount.InvalidateAll();
Does the trick!

Microsoft Dynamics CRM 2011 best solution to change entity's field data type

I'm having an issue on changing data type of a field in Dynamics CRM 2011 On Premise deployment.
In my managed solution, call it "Solution 1", I have a custom field on the contact entity, "new_usernumber", of type number (int). I want to change it to string per a new client requirement (for new users they want to add a prefix to it).
I can uninstall the solution and deploy a new "fixed" managed solution, but this requires me to delete the values in my custom fields. Is there a better solution for this?
TIA
There is no easy way to do this. If you don't already have data deployed in the instance using the managed solution I recommend deleting it and importing a corrected managed solution file.
There is no supported or unsupported process for changing the data type (or logical name) of a field without data loss. What you will need to do is add the new field and then write a quick update utility to copy the data from the old field to the new field.
Here is a great article on exactly how to pull off deleting a field in a managed solution. Note, if you are trying to preserve data you'll need to run the update after the step "Import devkeydetDeleteExample_1_1_HOLDING.zip"
Have fun...this is a pain, but certainly doable!
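The copy step above can be sketched generically. This is not CRM SDK code: records are modeled as plain dicts, and the field names are the ones from the question plus a hypothetical "new_usernumberstring" target field.

```python
def copy_field(records, old_field, new_field):
    """Copy each record's old int field into a new string field,
    leaving the old value in place until the migration is verified."""
    for rec in records:
        if rec.get(old_field) is not None:
            rec[new_field] = str(rec[old_field])
    return records

# Hypothetical contact records; in CRM you would page through the
# entity with the SDK and issue an update per record instead.
contacts = [{"new_usernumber": 42}, {"new_usernumber": None}]
copy_field(contacts, "new_usernumber", "new_usernumberstring")
# -> [{"new_usernumber": 42, "new_usernumberstring": "42"},
#     {"new_usernumber": None}]
```

Keeping the old field populated until the new one is verified gives you a rollback path.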
A few months ago I recreated fields in a solution (from double to int). That was a huge mistake. I'm still not sure where things went wrong but they did go wrong. Not only did I lose the date. I managed to introduce errors in the meta-layer so our MVP had to sit dear help me get it running again. He wasn't happy. I wasn't happy. The customer wasn't happy (ex-customer today, mostly because of that).
So, my humble advice - don't do that. Declare a new field instead. If you have usernumber, keep it but start using userNumberString (or userString, userName etc.). My guess is you'll keep your hair longer that way.
And if you manage to succeed, please do tell. :)

Google Drive SDK docid changed - How to relate documents stored by old docid to the new one?

It appears that the docids changed for all Google Drive documents on our Google Apps domain...
Will it change again?
Why was it changed? (My google/yahoo/bing searches on this subject turn up nothing useful - is no one else experiencing this? For me it seems to have happened around Jan 16/17th.)
And most importantly for now:
Is there a way to cross-reference all of the old docids to the corresponding new docids?
Found out some more info. The root cause is a migration from the old presentation editor to the new one. The new editor has been the default for a while, but to complete the switchover all the older presentations needed to be converted. For various technical reasons, it wasn't possible to do this without creating new file entries for each presentation. This happened once before, about 6 months ago, when the same thing was done for documents.
It is possible to remap the IDs by checking the change feed. The delete & create will appear as separate events, but if you look for a delete followed by a create of a file with the same title, you can build up a mapping of the file IDs. Not entirely foolproof, but it's a one-time operation.
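The pairing heuristic can be sketched with plain data. The change entries below are simplified, hypothetical records (event/id/title dicts), not the actual Drive API change-feed format:

```python
def remap_ids(changes):
    """Pair each delete with a later create of a file with the same
    title, yielding {old_id: new_id}. `changes` is a chronological
    list of dicts with keys: event ('delete'|'create'), id, title.
    Heuristic only: duplicate titles can produce wrong pairings.
    """
    pending = {}   # title -> old id of the most recent delete
    mapping = {}
    for ch in changes:
        if ch["event"] == "delete":
            pending[ch["title"]] = ch["id"]
        elif ch["event"] == "create" and ch["title"] in pending:
            mapping[pending.pop(ch["title"])] = ch["id"]
    return mapping

changes = [
    {"event": "delete", "id": "old123", "title": "Q3 deck"},
    {"event": "create", "id": "new456", "title": "Q3 deck"},
]
# remap_ids(changes) -> {"old123": "new456"}
```

A real implementation would walk the Drive change feed in order and extract these three fields from each change entry.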
So it turns out IDs aren't quite as immutable as they were made out to be...
