How to find and replace with different UUIDs

I want to replace the following UUIDs with newly generated ones which I have saved in a file.
Current UUIDs
'uuid': '10000000-0000-0000-0000-0000000000',
'uuid': '20000000-0000-0000-0000-0000000000',
'uuid': '30000000-0000-0000-0000-0000000000',
'uuid': '40000000-0000-0000-0000-0000000000',
'uuid': '50000000-0000-0000-0000-0000000000',
New UUIDs I want to use
61c1345a-15ef-4286-a97c-a4ade5858eee
6c548dcf-6ede-4342-8735-7cce300a3148
bfbb27df-1a26-49db-8408-85aaa676c4be
e6d2e99e-a4da-41d5-a4e0-7ce2e56a5258
50a2a6c6-57b8-4329-b306-9e361a66d8f7
Is there a way of replacing each of my current UUIDs with the new ones without manually doing every single one? (I have about 50 to do.)
Many thanks.

If you want to do this programmatically, you should look into regular expressions. To match one of your UUIDs you'll want to use one like this:
\d{8}-(?:\d{4}-){3}\d{10}
You can use this to find where that pattern occurs in the first file and replace each match with the next new UUID from the second file.
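Building on that idea, here is a minimal Python sketch, assuming the old and new UUIDs are saved one per line, in matching order, in old_uuids.txt and new_uuids.txt, and that the file to rewrite is called config.py (all three file names are placeholders). Since the old UUIDs are known exactly, a plain string replacement is enough; the regex above is only needed if you first have to locate UUID-shaped strings.
# Read the old and new UUIDs; they are assumed to be in matching order.
with open("old_uuids.txt") as f:
    old_uuids = [line.strip() for line in f if line.strip()]
with open("new_uuids.txt") as f:
    new_uuids = [line.strip() for line in f if line.strip()]

# Read the file that contains the UUIDs to be replaced.
with open("config.py") as f:
    text = f.read()

# Replace each old UUID with the new one at the same position in the list.
for old, new in zip(old_uuids, new_uuids):
    text = text.replace(old, new)

with open("config.py", "w") as f:
    f.write(text)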

Related

Dynamic test tag pattern execution in karate [duplicate]

I'm wondering if you can use wildcard characters with tags to get all tagged scenarios/features that match a certain pattern.
For example, I've used 17 unique tags on many scenarios throughout many of my feature files. The pattern is "#jira=CIS-" followed by 4 numbers, like #jira=CIS-1234 and #jira=CIS-5678.
I'm hoping I can use a wildcard character or something that will find all of the matches for me.
I want to be able to exclude them from being run, when I run all of my features/scenarios.
I've tried the following:
--tags ~#jira
--tags ~#jira*
--tags ~#jira=*
--tags ~#jira=
Unfortunately, none have given me the results I wanted. I was only able to exclude them when I used the exact tag, e.g. ~#jira=CIS-1234. It's not a good solution to have to add each one (of the 17 different tags) to the command line. These tags can change frequently, with new ones being added and old ones being removed, plus it would make for one really long command.
Yes. First read this: there is an undocumented expression language (based on JS) for advanced tag selection based on the #key=val1,val2 form: https://stackoverflow.com/a/67219165/143475
So you should be able to do this:
valuesFor('#jira').isPresent
And even this (here s will be a string, on which you can also use JS regex if you know how):
valuesFor('#jira').isEach(s => s.startsWith('CIS-'))
Would be great to get your confirmation and then this thread itself can help others and we can add it to the docs at some point.

Compare two text columns to find partial match and new rows

I have two lists of messages. The first is a list of short messages; the second is a master file of longer texts that contain the short messages from the first list, but also many new messages. I want to find the new entries in the master file (the second list), i.e. the rows that have no partial match.
Something like the above; NO then means they are new errors.
I tried =IF(ISERROR(VLOOKUP("*"&A2&"*",C:C,1,0)),"No","Yes") but it works the other way around: it finds the short messages within the master file of big messages. I want to check the big messages that contain the short messages, compare them against the list of short messages, and if there is no (partial) match, label them as new.
This should work, though I can't test it at the moment:
=IF(SUMPRODUCT(--ISNUMBER(SEARCH($A$2:$A$8,B2)))>0,"YES","NO")
Try:
=IF(OR(ISNUMBER(FIND(" "&$A$2:$A$8&" "," "&B2& " "))),"YES","NO")
Note the use of spaces; otherwise aaa would be found in kkaaa.
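For reference, the same partial-match logic as a small Python sketch, assuming the short messages and the master messages are held in two plain lists (the example values here are hypothetical):
# Column A: the known short messages (hypothetical examples)
short_messages = ["disk full", "timeout", "access denied"]

# Column B: the master list of longer messages (hypothetical examples)
master_messages = [
    "ERROR 17: disk full on /var",
    "new unexpected failure in module X",
    "connection timeout after 30s",
]

# A master message is new ("NO") if none of the short messages appear inside it.
for msg in master_messages:
    is_known = any(short in msg for short in short_messages)
    print("YES" if is_known else "NO", "-", msg)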

Problems working out the re.compile mask/cypher

I have working code using re.compile that searches for a given key and extracts specified bytes from that line.
Working cypher
S011=re.compile(r"S0\w*\W*11\b")
Searches for 'S0' at the start and '11' further in (the intervening alphanumeric changes with each file)
S012PA041 11 1001650953.34N 72627.05E 426930.97227906.7 285.3227033224
I am trying to use the same method for a different input file but I can't work out the correct mask/cypher. There are several lines starting with 'P1', so that alone is not exclusive enough; the 'P1....,V0' combination is the exclusive key. Again, the numbers between the keys change with each event and file.
P1,0,01169-72-063,,1001,,1,2020:07:31:12:48:01.7,1,V01,2,,436389.57,7196330.69,,64.88354429,7.65691702,,64.88327349,7.65520631,,0.00,0.00,0.00,0.00,,248.04
I have tried these but with no success:
V0=re.compile(r"^P1\w*\W*V0")
V0=re.compile(r"^P1\w*\W*V0\w*\W*")
V0=re.compile(r"^P1\w*V0\w*")
After running more combinations than a safe-cracker on Red Bull, I've finally got the right sequence for the regex.
Line to be identified using 'P1' and further in 'V01' as search keys
P1,0,01169-72-063,,1001,,1,2020:07:31:12:48:01.7,1,V01,2,,436389.57,7196330.69,,64.88354429,7.65691702,,64.88327349,7.65520631,,0.00,0.00,0.00,0.00,,248.04
re.compile code to identify it.
V0=re.compile(r"^P1\s*,*:*\S*V01\s*,*:*\S*\b")
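For what it's worth, the earlier attempts failed because \w*\W* only allows a single run of word characters followed by a single run of non-word characters, which cannot skip across all of the comma-separated fields between P1 and V01. A simpler pattern should also work here, since the line starts with 'P1,' and 'V01' appears as its own comma-separated field (a sketch, not tested beyond the sample line):
import re

# Match lines that start with 'P1,' and contain ',V01,' somewhere later on.
V0 = re.compile(r"^P1,.*?,V01,")

line = "P1,0,01169-72-063,,1001,,1,2020:07:31:12:48:01.7,1,V01,2,,436389.57,7196330.69,,64.88354429,7.65691702,,64.88327349,7.65520631,,0.00,0.00,0.00,0.00,,248.04"
print(bool(V0.search(line)))  # True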

Combining phrases from list of words Python3

I'm doing my best to grab information out of a lot of PDF files. I have them in a dictionary format where the key is a given date and the values are a list of occupations.
It looks like this when proper:
'12/29/2014': [['COUNSELING',
'NURSING',
'NURSING',
'NURSING',
'NURSING',
'NURSING']]
However, occasionally there are occupations with several words which cannot be reliably understood in single-word form, such as this:
'11/03/2014': [['DENTISTRY',
'OSTEOPATHIC',
'MEDICINE',
'SURGERY',
'SOCIAL',
'SPEECH-LANGUAGE',
'PATHOLOGY']]
Notice that "osteopathic medicine & surgery" and "speech-language pathology" are the full text for two of these entries. This gets hairier when we also have examples of just "osteopathic medicine" or even "medicine."
So my question is this - How should I go about testing combinations of these words to see if they match more complex occupational titles? I can use the same order of the words, as I have maintained that from the source.
Thanks!
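One possible approach, as a sketch: keep a reference list of the known multi-word titles (the list below is hypothetical and would need to come from your own data), then walk each day's word list in order and greedily take the longest run of consecutive words that forms a known title, falling back to the single word otherwise.
# Hypothetical reference set of known multi-word titles (as word tuples).
KNOWN_TITLES = {
    ("OSTEOPATHIC", "MEDICINE", "SURGERY"),
    ("OSTEOPATHIC", "MEDICINE"),
    ("SPEECH-LANGUAGE", "PATHOLOGY"),
}
MAX_TITLE_LEN = max(len(t) for t in KNOWN_TITLES)

def combine(words):
    """Greedily merge consecutive words that form a known multi-word title."""
    result, i = [], 0
    while i < len(words):
        # Try the longest possible run first, down to two words.
        for size in range(min(MAX_TITLE_LEN, len(words) - i), 1, -1):
            if tuple(words[i:i + size]) in KNOWN_TITLES:
                result.append(" ".join(words[i:i + size]))
                i += size
                break
        else:
            result.append(words[i])
            i += 1
    return result

print(combine(['DENTISTRY', 'OSTEOPATHIC', 'MEDICINE', 'SURGERY',
               'SOCIAL', 'SPEECH-LANGUAGE', 'PATHOLOGY']))
# ['DENTISTRY', 'OSTEOPATHIC MEDICINE SURGERY', 'SOCIAL', 'SPEECH-LANGUAGE PATHOLOGY']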

Start loop at specific line of text file in groovy

I am using Groovy and I am trying to have a text file be altered at a specific line, without looping through all of the previous lines. Is there a way to state the line of a text file that you wish to alter?
For instance
Text file is:
1
2
3
4
5
6
I would like to say
Line(3) = p
and have it change the text file to:
1
2
p
4
5
6
I DO NOT want to have to do a loop to iterate through the lines to change the value, aka I do not want to use a .eachLine { line -> ... } method.
Thank you in advance, I really appreciate it!
I don't think you can skip lines and traverse like this. You could do the skip by using RandomAccessFile in Java, but instead of lines you would be specifying the number of bytes.
Try using readLines() on the file text. It will store all your lines in a list. To change the content at line n, change the content at index n-1 in the list and then join the list items.
Something like this will do
// We can call this the DefaultFileHandler
// 'file' is a script binding variable, so it is visible inside the method below
// (the path is just an example)
file = new File('data.txt')
lineNumberToModify = 3
textToInsert = "p"

line(lineNumberToModify, textToInsert)

// Replace line 'num' (1-based) with 'text' and write the whole file back
def line(num, text) {
    def list = file.readLines()
    list[num - 1] = text
    file.setText(list.join("\n"))
}
EDIT: For extremely large files, it is better to have a custom implementation, maybe something along the lines of what Tim Yates suggested in the comment on your question.
The above readLines() can easily process up to 100,000 lines of text in less than a second, so you can do something like this:
if (file size < 10 MB)
    use DefaultFileHandler()
else
    use CustomFileHandler()

// CustomFileHandler
- Split the large file into buckets of acceptable size.
- Ex: Bucket 1 (lines 1-100000), Bucket 2 (lines 100001-200000), etc.
- if (lineNumberToModify falls in a bucket's range)
      insert into that line in the bucket
There is no hard and fast rule for how you implement your CustomFileHandler, as it completely depends on the use-case scenario. If you need to do the above operation multiple times on the same file, you can choose to do the complete bucket split first, store the buckets in memory and use them for the following operations. Or if it is a one-time operation, you can avoid manipulating all the buckets up front and deal only with the one you need, processing the others later on an on-demand basis.
And even within the buckets you can define your own intelligence to speed up the job. Say you want to modify line 99999 of a bucket with lines 1-100000; you can exploit Groovy's methods and closures to their fullest:
file.readLines()[-2] = "some text"  // a negative index counts from the end; you would still need to join the list and write it back
