Delete Entities in Azure Table Storage

I'm developing an application that allows using dictionaries (e.g. English - German or Country - Capital). There are just two very plain tables:
1) Dictionary:
int Id, string Title // PartitionKey="SomeConstString", RowKey=Id.ToString()
2) Article:
int DictionaryId, string Word, string Meaning // PartitionKey="D"+DictionaryId, RowKey=Word
I can add articles, but when I try to delete them I run into the following problem: in every dictionary, one or two articles are not deleted. Instead I get a ResourceNotFoundException. There is absolutely nothing special about those articles (e.g. Russia - Moscow). When I try to add articles with the same PartitionKey and RowKey, I get an EntityAlreadyExistsException. I installed "Cloud Storage Studio" and found out that those entities really are still in the table. I tried to delete them manually but got the same ResourceNotFoundException in Storage Studio that I was getting in code. So if I add 100 articles and then try to delete them (in code, or in the studio via Ctrl+A -> Delete), 99 (or sometimes 98) are deleted and the others are not. I'm using the development storage emulator. Here is how I remove articles (I tried different approaches; the result is still the same):
public void DeleteAllArticlesFromDictionary(int dictionaryID)
{
    TableServiceContext tableServiceContext = tableClient.GetDataServiceContext();
    string partitionKey = "D" + dictionaryID;

    // Fetch every article in the dictionary's partition.
    Article[] articles = tableServiceContext
        .CreateQuery<Article>(articleTableName)
        .Where(a => a.PartitionKey == partitionKey)
        .ToArray();

    // Mark each entity for deletion, then send the delete requests.
    for (int i = 0; i < articles.Length; ++i)
        tableServiceContext.DeleteObject(articles[i]);

    tableServiceContext.SaveChanges();
}
Can anyone tell me what can possibly be wrong with this?
UPD: Works fine in the cloud.

I think I found the problem: there was a space character (0x20, ' ') at the end of the word used as the RowKey, so the key was, say, 'RowKey1 '. I removed the space and now everything works in dev storage too (it was not a problem in the real environment). However, it is rather confusing that spaces are handled correctly inside a key (say, 'Row Key 1' can be used without errors) but cause this behaviour as trailing (or maybe also leading) characters. I've read about 'Characters Disallowed in Key Fields', but spaces were not mentioned there. I guess I should Trim() my strings before using them as keys.
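As a guard, keys can be normalized before use. A minimal sketch of the idea (JavaScript here just to sketch it; toKey is a hypothetical helper, and the character list is the one from the 'Characters Disallowed in Key Fields' documentation, plus a trim for the whitespace issue above):

// Hypothetical helper: trim surrounding whitespace and reject the characters
// documented as disallowed in key fields (/, \, #, ? and control characters).
function toKey(s) {
    const key = s.trim();
    if (/[\/\\#?\u0000-\u001F\u007F-\u009F]/.test(key)) {
        throw new Error("Invalid key: " + s);
    }
    return key;
}

console.log(toKey("Moscow ") === "Moscow"); // true; the trailing 0x20 is gone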

Related

Encoding and decoding a unique ID between two systems in nodejs

I am writing a script which synchronizes users between two systems. Let's call them the source and target systems.
To help with synchronising them I was hoping to store the user ID from the source system as an ID in the target system.
Unfortunately the target system has a max character length for the property I can store this in.
If possible, I would like to avoid creating a new table to persist the relationship.
I can't truncate it as I need to be able to refer back to the user in the source system from the target system.
Is there a way of encoding and decoding the source User ID?
Edit:
The ID in the source system will always follow the structure of:
/^[a-zA-Z0-9]{6}-[a-zA-Z0-9]{4}-[a-zA-Z0-9]{3}-[a-zA-Z0-9]{6}$/
For example: as0092-banc-mdn-da1023
There are no restrictions on what can be stored in the target system apart from it having a max character length of 20.
When you originally posted your question, it seemed like you needed a massive conversion, i.e. squishing a 20-character-long ID into something much shorter; that would have been quite a problem. But after your edit it seems quite easy:
You have 20 characters available in the target system, but the ID in the source system only has 19 variable characters plus 3 dashes at fixed positions.
So when converting from source to target, just remove the dashes. This gives you a string of length 19, which fits perfectly into your 20-character-wide field on the target system. And as the dashes are at fixed positions, they don't carry any semantics, i.e. removing them won't affect the uniqueness of your ID.
And when converting from target back to source, you know where the dashes have to go: after the 6th, the 10th and the 13th character of your squished ID. So just put them back in at the right places, and you have your source system's ID format back ...
let
    original = 'as0092-banc-mdn-da1023', // this is the key from your example
    target = original.replace(/-/g, ""), // this removes all dashes
    source = target.slice(0, 6) + "-"    // this takes fixed-width portions of your squished ID
           + target.slice(6, 10) + "-"   // and inserts dashes between them
           + target.slice(10, 13) + "-"
           + target.slice(13);

console.log("original ", original, " length", original.length);
// the mapped ID fits perfectly into your target system's length restriction
console.log("target   ", target, " length", target.length);
// the reversed mapping is equal to your original ID
console.log("source   ", source, " length", source.length);
console.log("test eq  ", source === original);
Of course there are other possibilities for removing and re-adding the dashes, but I think you get the idea behind the code.
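For example, a single regular expression can re-insert the dashes in one step. A sketch (the restore name is mine; it assumes the squished ID always matches the pattern from your edit):

// Re-insert dashes after the 6th, 10th and 13th character in one pass.
const restore = squished => squished.replace(
    /^([a-zA-Z0-9]{6})([a-zA-Z0-9]{4})([a-zA-Z0-9]{3})([a-zA-Z0-9]{6})$/,
    "$1-$2-$3-$4");

console.log(restore("as0092bancmdnda1023")); // "as0092-banc-mdn-da1023"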

IBM Domino xpage - parse iCalendar summary with new lines manually/ical4j

So far I have been parsing the NotesCalendarEntry ics manually and overwriting certain properties, and it worked fine. Today I stumbled upon a problem: when the appointment has a long summary name, it gets split into multiple lines, and my parsing goes wrong; it replaces only the part up to the first line break, and the old remainder is still there.
Here's how I do this "parsing":
NotesCalendarEntry calEntry = cal.getEntryByUNID(apptuid);
String iCalE = calEntry.read();
StringBuilder sb = new StringBuilder(iCalE);

int startIndex = iCalE.indexOf("BEGIN:VEVENT"); // care only about the VEVENT part
int tmpIndex = sb.indexOf("SUMMARY:") + 8;      // position right after "SUMMARY:"
int lineBreakIndex = sb.indexOf(Character.toString('\n'), tmpIndex);
if (sb.charAt(lineBreakIndex - 1) == '\r')      // take \r\n into account if it exists
    lineBreakIndex--;

sb.delete(tmpIndex, lineBreakIndex); // delete the old content
sb.insert(tmpIndex, subject);        // put in my new content
It works when the line breaks are where they are supposed to be, but somehow with a long summary name, line breaks are inserted into the summary (not literal \r\n characters, but real line breaks).
I split the iCalE string by \r\n and got this (only a part, obviously):
SEQUENCE:6
ATTENDEE;ROLE=CHAIR;PARTSTAT=ACCEPTED;CN="test/Test";RSVP=FALSE:
mailto:test#test.test
ATTENDEE;CUTYPE=ROOM;ROLE=REQ-PARTICIPANT;PARTSTAT=ACCEPTED
;CN="Room 2/Test";RSVP=TRUE:mailto:room2#test.test
CLASS:PUBLIC
DESCRIPTION:Test description\n
SUMMARY:Very long name asdjkasjdklsjlasdjlasjljraoisjroiasjroiasjoriasoiruasoiruoai Mee
ting long name
LOCATION:Room 2/Test
ORGANIZER;CN="test/Test":mailto:test#test.test
Each line is one array element from iCalE.split("\\r\\n");. As you can see, the SUMMARY field got split across two lines, and a space was added after the line break.
Now I have no idea how to parse this correctly. I thought about finding the index of the next ':' instead of the next line break, and then finding the first line break before that ':' character, but that wouldn't work if the summary also contained a ':' after the injected line break, and it also wouldn't work on fields like ORGANIZER;CN=, which use ';' rather than ':'.
I tried importing an external ical4j jar into my XPage to overcome this problem, and while everything is recognized in Domino Designer, it resulted in lots of NoClassDefFound exceptions when my XPage service was called, despite the jars being in the build path and all:
java.lang.NoClassDefFoundError: net.fortuna.ical4j.data.CalendarBuilder
How can I safely parse this manually, or how can I properly import the ical4j jar into my XPage? I only want to modify 3 fields: DTSTART, DTEND and SUMMARY. With the dates I have had no problems so far. Fields like DESCRIPTION use the literal \n string to mark new lines, and it should be the same in other fields...
Update
So I have read more about iCalendar, and it turns out there is a standard for this called line folding: a long line is broken with a CRLF followed by a space. I made a while loop that keeps rejoining lines until no line break is followed by a space, and it works great so far. I will use this unless there's a better solution (ical4j is one, but I can't get it working with Domino).
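For reference, RFC 5545 defines a fold as a CRLF immediately followed by a single space or tab, so unfolding can also be done in one pass instead of a loop. A sketch of the idea (shown in JavaScript for brevity; Java's String.replaceAll accepts the same pattern):

// RFC 5545 line unfolding: delete every CRLF that is immediately
// followed by a space or tab, rejoining the logical line.
const unfold = ics => ics.replace(/\r\n[ \t]/g, "");

const folded = "SUMMARY:Very long name Mee\r\n ting long name\r\nLOCATION:Room 2/Test";
console.log(unfold(folded));
// "SUMMARY:Very long name Meeting long name\r\nLOCATION:Room 2/Test"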

Adding json data to json column is adding escape characters

I am using a postgres database. I have a table called names which has a column named 'info' of type json. I am adding:
{ "info": "security" , description : "Sednit update: Analysis of Zebrocy: The Sednit group \u2013 also known as APT28, Fancy Bear, Sofacy or STRONTIUM \u2013 is a group of attackers operating since 2004, if not earlier, and whose main objective is to steal confidential information from specific targets.\n\nToward the end of 2015, we started seeing a new component deployed by the group; a downloader for the main Sednit backdoor, Xagent. Kaspersky mentioned this component for the first time in 2017 in their APT trend report and recently wrote an article where they quickly described it under the name Zebrocy.\n\nThis new component is a family of malware, comprising downloaders and backdoors written in Delphi and AutoIt. These components play the same role in the Sednit ecosystem as Seduploader; that of first-stage malware."}
Here I am using node.js with sequelize as the ORM. When I save it in the table, I see "\\n" for "\n" and "\\u" for "\u". Can anyone help me avoid the escaped characters when saving to the table?
"I see \\n for \n and \\u for \u."
In your JSON, description is of type string, so the newline/enter will be converted to \n. That is the default behavior; otherwise you would not get the newline/enter back when you fetch the data again.
And \u is for Unicode: you might be saving a smiley or a special character, and that will be converted to such an escape sequence.
So there is no bug; this is how it works.
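A quick way to see this behavior, independent of sequelize (plain JSON.stringify in node shows the same escaping):

// The description holds a real newline (a single character).
const row = { description: "line one\nline two" };

// Serializing produces the two-character escape sequence \n in the JSON text...
const json = JSON.stringify(row);
console.log(json); // {"description":"line one\nline two"}

// ...and parsing restores the real newline, so nothing is lost.
console.log(JSON.parse(json).description === "line one\nline two"); // true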

Arduino String.replace() not working after file name change

I have encountered a strange failure when this code runs on my NodeMCU 0.9 board. It basically gets HTML code from an API in the following format:
<abbr title="klokken">kl</abbr> 11–12
In this case I want to isolate the 11 and 12 by first removing the first 42 characters, which works perfectly fine, and then replacing the – with --. When I open a sketch and paste this program into it, it runs perfectly fine and returns 11--12, but when I save this program under a random name, it is rebuilt and for some reason no longer replaces the characters properly; it then returns 11–12.
I have tried replacing different parts of the string after it was rebuilt, which worked fine, but for some reason I can't seem to either find the index of nor replace the three strange characters.
http.begin(URL_time);
int httpCode = http.GET();
String timerange;

if (httpCode > 0) {
    timerange = http.getString();
    timerange.remove(0, 42);      // drop the leading markup
    timerange.replace("–", "--"); // swap the en dash for a double hyphen
    Serial.println(timerange);
}
Thus my question is whether anyone knows how to work around this issue, apart from not saving my code. Feel free to ask me to elaborate on my question if needed.
The string you're receiving is encoded in Windows Latin 1 (ISO 8859-1), and you're (probably) using UTF-8. What you need is to re-encode the string properly.
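One way to see why the literal – can stop matching (sketched in node rather than on the board, purely as an illustration): the en dash is not a single byte in UTF-8, so if the IDE re-encodes the source file on save, the bytes in the string literal no longer match the bytes in the response:

// UTF-8 encodes the en dash (U+2013) as three bytes...
console.log(Buffer.from("–", "utf8")); // <Buffer e2 80 93>

// ...while Windows-1252 encodes the same character as the single byte 0x96,
// so a byte-for-byte String.replace() match fails when the encodings differ.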

Using indexed types for ElasticSearch in Titan

I currently have a VM running Titan over a local Cassandra backend and would like the ability to use ElasticSearch to index strings using CONTAINS matches and regular expressions. Here's what I have so far:
After titan.sh is run, a Groovy script is used to load in the data from separate vertex and edge files. The first stage of this script loads the graph from Titan and sets up the ES properties:
config.setProperty("storage.backend","cassandra")
config.setProperty("storage.hostname","127.0.0.1")
config.setProperty("storage.index.elastic.backend","elasticsearch")
config.setProperty("storage.index.elastic.directory","db/es")
config.setProperty("storage.index.elastic.client-only","false")
config.setProperty("storage.index.elastic.local-mode","true")
The second part of the script sets up the indexed types:
g.makeKey("property").dataType(String.class).indexed("elastic",Edge.class).make();
The third part loads the data from the CSV files; this has been tested and works fine.
My problem is that I don't seem to be able to use the ElasticSearch functions when I run a Gremlin query. For example:
g.E.has("property",CONTAINS,"test")
returns 0 results, even though I know this field contains the string "test" for that property at least once. Weirder still, when I change CONTAINS to something that isn't recognised by ElasticSearch, I get a "no such property" error. I can also perform exact string matches and any numerical comparisons, including greater than or less than; however, I expect the default indexing method is being used instead of ElasticSearch in these cases.
Given the lack of errors when I try to run a more advanced ES query, I am at a loss as to what is causing the problem here. Is there anything I may have missed?
Thanks,
Adam
I'm not quite sure what's going wrong in your code. From your description everything looks fine. Can you try the following script (just paste it into your Gremlin REPL):
config = new BaseConfiguration()
config.setProperty("storage.backend","inmemory")
config.setProperty("storage.index.elastic.backend","elasticsearch")
config.setProperty("storage.index.elastic.directory","/tmp/es-so")
config.setProperty("storage.index.elastic.client-only","false")
config.setProperty("storage.index.elastic.local-mode","true")
g = TitanFactory.open(config)
g.makeKey("name").dataType(String.class).make()
g.makeKey("property").dataType(String.class).indexed("elastic",Edge.class).make()
g.makeLabel("knows").make()
g.commit()
alice = g.addVertex(["name":"alice"])
bob = g.addVertex(["name":"bob"])
alice.addEdge("knows", bob, ["property":"foo test bar"])
g.commit()
// test queries
g.E.has("property",CONTAINS,"test")
g.query().has("property",CONTAINS,"test").edges()
The last 2 lines should return something like e[1t-4-1w][4-knows-8]. If that works and you still can't figure out what's wrong in your code, it would be good if you could share your full code (e.g. on GitHub or in a Gist).
Cheers,
Daniel
