My question is:
I'm trying to save property values to a HashMap. Every time I click, a new set of variables is saved to the HashMap, but when I go to output the saved variables, only the newly saved variables are displayed, even though I loop through the HashMap. Why is this?
I have
Map<Boolean, Integer> property = new HashMap<Boolean, Integer>();
As a global variable
This is how I save the variables to my HashMap:
property.put(m.turn, tempBoard.current.position);
This happens every time I click a certain image after a series of events happen, though they don't affect the HashMap.
I then return to the same class that the HashMap is created in, but in a different procedure. The code I use to loop through the HashMap is:
for (Map.Entry<Boolean, Integer> entry : property.entrySet()) {
System.out.println("Key = " + entry.getKey() + ", Value = " + entry.getValue());
}
This only outputs the newly saved variables, not any of the previously saved ones.
To be honest with you, I've been searching Google, trying to find out why it won't start at the beginning of the saved variables. I can't find anything that resembles my problem, because in every other example the variables are saved to the HashMap all at the same time; never at different times or in between events.
Any help is appreciated, and sorry for how weird this post is worded. I'm not sure how to explain it any better :)
How are you supposed to store more than two values in a HashMap?
I think you need to understand how HashMap works. In a HashMap you store a VALUE against a UNIQUE key, and the put method will always replace the old value for a key with the newly passed value if that key has already been added; otherwise it adds the value against the given key. Since your key here is a Boolean (m.turn), the map can hold at most two entries, so each new click simply overwrites the position stored for that key.
Now you just need to figure out how you will store all the values and which type of key you want to use.
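For example, one common option (just a sketch, assuming you want to keep every clicked position per turn) is to map each key to a list of values:

import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

// Keep every value recorded for a key instead of overwriting it.
Map<Boolean, List<Integer>> property = new HashMap<Boolean, List<Integer>>();

// On each click, append the new position to the list for that key
// (computeIfAbsent creates the list the first time the key is seen).
property.computeIfAbsent(m.turn, k -> new ArrayList<Integer>())
        .add(tempBoard.current.position);

// Later, print every saved value for every key.
for (Map.Entry<Boolean, List<Integer>> entry : property.entrySet()) {
    System.out.println("Key = " + entry.getKey() + ", Values = " + entry.getValue());
}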
Have a look at this link for details of HashMap
https://docs.oracle.com/javase/8/docs/api/java/util/HashMap.html
I have a simple Core Data entity Story that occasionally I update with the latest data from a network call. This network call sometimes updates many, many stories instances, so I run an NSBatchInsertRequest, shown below. (The other reason I'm using a batch insert is that many stories might need to be added to the persistent store.)
The problem is a user can have already marked a Story as a favorite. When they do that, I set story.isFavorite = true on the main thread and save viewContext.
However, when the batch insert occurs it overwrites story.isFavorite, setting it back to false, even though I'm using NSMergeByPropertyObjectTrumpMergePolicy on both the batch insert and view contexts. I am not touching story.isFavorite in the batch insert handler either so I don't expect that property to be overwritten.
I thought the benefit of a batch insert with this merge policy was to avoid first fetching + then manually updating changed properties + finally saving. What is the right way to avoid changing property values in an NSBatchInsertRequest?
Story
@objc(Story)
public class Story: NSManagedObject {
    @NSManaged public var title: String?
    @NSManaged public var storyURL: URL?
    @NSManaged public var updatedTime: Date?
    @NSManaged public var isFavorite: Bool // <- the problem property
}
Batch insert
container.viewContext.mergePolicy = NSMergeByPropertyObjectTrumpMergePolicy
container.viewContext.automaticallyMergesChangesFromParent = false
let context = NSManagedObjectContext(concurrencyType: .privateQueueConcurrencyType)
context.parent = container.viewContext
context.mergePolicy = NSMergeByPropertyObjectTrumpMergePolicy
context.perform {
    let batchInsert = NSBatchInsertRequest(entity: Story.entity(), managedObjectHandler: { managedObject in
        let story = managedObject as! Story
        let storyResponse = downloadedStories[I]
        // Update story with latest response data BUT don't modify story.isFavorite.
        story.title = storyResponse.title
        story.storyURL = storyResponse.storyURL
        story.updatedTime = storyResponse.updatedTime
        // ...
    })
    let result = try context.execute(batchInsert) as? NSBatchInsertResult
    if let insertedIDs = result?.result as? [NSManagedObjectID] {
        // Merge changes into parent context. Skip save() because not needed for batch insert.
        NSManagedObjectContext.mergeChanges(fromRemoteContextSave: [NSInsertedObjectsKey: insertedIDs], into: [container.viewContext])
    }
}
Edit
The Story entity does have a unique value constraint using attribute storyURL.
Update after Michael Tsai's answer
By making the Story entity attribute isFavorite a non-Optional Boolean without a default value (it was marked as Optional before, though I'm not sure it makes a difference here) and keeping the Use Scalar Type box checked, I can confirm that existing objects in the store will not be modified (at all) with this configuration of the batch insert context.
context.persistentStoreCoordinator = container.persistentStoreCoordinator
// HOWEVER, observe that regardless of the merge policy below,
// setting `context.parent = container.viewContext` will also
// overwrite the store data!
context.mergePolicy = NSMergeByPropertyStoreTrumpMergePolicy
// NSMergeByPropertyObjectTrumpMergePolicy ignores objects in the store
// (which have the same unique constraint value, here equal `storyURL`)
// and overwrites all properties.
// To confirm that the batch insert operation does not modify
// existing Story instances (at all), first delete all instances
// where isFavorite == false. Then load all the story data again and
// execute the NSBatchInsertRequest with this change to managedObjectHandler:
story.title = storyResponse.title + " (modified)"
You will see the missing stories get inserted back, this time with their titles having a suffix " (modified)"; but previously favorited stories
do not get modified (basically, with this setup, the batch insert won't re-insert objects).
So the isFavorite property does not get overwritten BUT neither do any properties that should be changed (because they received a new title, for example).
Therefore, if you don't want your objects to get updated, but you want completely new objects to be inserted, you can use this approach.
However, if you are expecting your objects to require updates, here are some alternatives:
you may opt to run a separate update operation, maybe an NSBatchUpdateRequest, after you run your batch insert in this way,
or, after the batch insert, you can update certain properties in a simple loop in a (possibly background/child) context without a batch operation, which could be fine if there isn't a ton of data (see the sketch after this list);
lastly, you might be able to first batch insert the new data into a temporary store, then somehow manually merge your choice of properties with the new store, and finally delete the temporary store.
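As a rough sketch of the second alternative (the plain loop), assuming the storyURL unique attribute and the response objects from the question, with storyURL non-optional on the response:

// After the batch insert: update changed properties with a plain fetch + save,
// leaving isFavorite untouched.
let updateContext = container.newBackgroundContext()
updateContext.perform {
    for storyResponse in downloadedStories {
        let request = NSFetchRequest<Story>(entityName: "Story")
        request.predicate = NSPredicate(format: "storyURL == %@", storyResponse.storyURL as NSURL)
        request.fetchLimit = 1
        if let story = try? updateContext.fetch(request).first {
            // Touch only the properties that should change.
            story.title = storyResponse.title
            story.updatedTime = storyResponse.updatedTime
        }
    }
    try? updateContext.save()
}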
A simpler approach: you could fetch all the properties you want to keep unchanged before you execute the batch insert (storing them in a dictionary keyed by your object's uniqueness constraint value), and then set the property again during the batch insert.
For this approach, you will want to use a different merge policy, such as NSMergeByPropertyObjectTrumpMergePolicy, so that the updated object gets re-inserted into the store (make sure to fetch all the properties that you don't want to lose in advance of the batch insert).
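A minimal sketch of that idea, assuming storyURL is the unique constraint, the response's storyURL is non-optional, and this runs inside context.perform as in the question:

// 1. Before the batch insert, remember which storyURLs are currently favorites.
let favoritesRequest = NSFetchRequest<Story>(entityName: "Story")
favoritesRequest.predicate = NSPredicate(format: "isFavorite == YES")
let favorites = (try? context.fetch(favoritesRequest)) ?? []
let favoriteURLs = Set(favorites.compactMap { $0.storyURL })

// 2. During the batch insert, re-apply the preserved flag so the
//    object-trumps-store overwrite does not lose it.
var index = 0
let batchInsert = NSBatchInsertRequest(entity: Story.entity(), managedObjectHandler: { managedObject in
    guard index < downloadedStories.count else { return true } // true = finished
    let story = managedObject as! Story
    let storyResponse = downloadedStories[index]
    story.title = storyResponse.title
    story.storyURL = storyResponse.storyURL
    story.updatedTime = storyResponse.updatedTime
    story.isFavorite = favoriteURLs.contains(storyResponse.storyURL)
    index += 1
    return false // false = provide another object
})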
random idea: How to Save Data When Using One ManagedObjectContext and PersistentStoreCoordinator with Two Stores
I don't think it is actually possible to do a partial update with a batch insert request. It's hard to know for sure because I don't think any of this is documented except in WWDC sessions. When I first watched the 2019 session, I was excited because the presenter said:
Attributes that are optional or configured with default values can be omitted from the dictionary as well.
In the case of updating an object with unique constraint, the existing values will not be changed.
I took this to mean that:
You can omit values for new objects, and you'll get the defaults or NULL. That makes sense.
If there's an existing object and you omit a value, that value will not be changed. So you can purposely omit values to do a partial update, i.e. update other values while leaving your isFavorite alone.
But, after writing code to test this and looking at the output from com.apple.CoreData.SQLDebug, what actually seems to happen with NSMergeByPropertyObjectTrumpMergePolicy is:
If you omit a value that's required you get a validation error.
If you omit a value that's optional, it updates the row to NULL. For a Bool property in Swift, this will become false.
If you omit a value with a default value, it updates the row to the default.
This is a shame because it seems like partial updates could be implemented by having the ON CONFLICT clause only specify DO UPDATE SET for the attributes that you actually set. But (as of macOS 11) Core Data seems to always generate SQL to set all of the columns.
In summary, with batch inserts, NSMergeByPropertyObjectTrumpMergePolicy does not actually merge by property based on what's changed (like with a regular Core Data save). Rather, it either inserts a new row (if the object is absent) or overwrites all the columns but preserves the objectID (if the object was present).
NSMergeByPropertyStoreTrumpMergePolicy also doesn't merge by property. It just means to leave the stored object alone if it's already present.
Update (2021-06-24): I heard from DTS that Apple considers the current (iOS 14/macOS 11) behavior described above a bug, and that it should let you batch insert without changing omitted properties. The Radar number is 79747419.
I need to find a way to generate a random number each time the REST call is executed.
I have the following GET call:
exec(http("Random execution")
.get("/randomApi")
.queryParam("id", getRandomId()))
}
Obviously it doesn't work, as the random number is only generated once and I end up with the same number whenever this call is executed. I can't use the feeder option as my feeder is already huge and is generated by a 3rd party for each test.
.queryParam takes Expressions as its arguments, and since Expression is an alias for a session function, you can just do...
.queryParam("id", session => getRandomId())
You could also define a second feeder that uses a function to generate the values - no need to update your existing feeder or add another csv file. This would be useful if you had more complicated logic for getting / generating an Id
val idFeeder = Iterator.continually(Map("id" -> Random.nextInt(999999)))
//in your scenario...
.feed(idFeeder)
.exec(http("Random execution")
.get("/randomApi")
.queryParam("id", "${id}")
)
In the spirit of having options, another option you have is to store an object in the session that overrides toString to generate whatever you need. It's a nifty trick that you can use for all kinds of things.
object RANDOM_ID {
  override def toString: String = getRandomId().toString
}
...
exec( _.set( "RANDOM_ID", RANDOM_ID ) )
...
.exec(
http("Random execution")
.get("/randomApi")
.queryParam( "id", "${RANDOM_ID}" )
)
You can apply the same principle to generating random names, addresses, telephone numbers, you name it.
So, which is the better solution? The feeder, or the object in session?
Most of the time, it'll be the feeder, because you control when it is updated. The object in session will be different every time it is resolved, whereas with the feeder solution you control when the value updates, and you can then reference it multiple times before you change it.
But there may be instances where the stored object solution results in easier to read code, provided you are good with the value changing every time it is accessed. So it's good to know that it is an option.
For several years I have been using the CGridCellCombo class. It is designed to be used with the CGridCtrl.
Several years ago I did make a request in the comments section for an enhancement but I got no replies.
The basic concept of the CGridCellCombo is that it works with the text value of the cell. Thus, when you present the drop list it will have that value selected. Under normal circumstances this is fine.
But I have places where I am using the combo as a droplist. In some situations it is perfectly fine to continue to use the text value as the go-between.
But in some situations it would be ideal to know the actual selected index of the combo. When I have a droplist that is translated into 30 languages and I need to know the index, I have no choice but to load the possible options for that translation, examine the cell value, and work out the index from where that value appears in the array.
It works, but it is not very elegant. I did spend a bit of time trying to keep track of the selected index by adding a variable to CInPlaceList and setting it, and I then added a wrapper method to CGridCellCombo to return that value. But it didn't work.
I wondered if anyone here has a good understanding of the CGridCellCombo class and might be able to advise me in exposing the CComboCell::GetCurSel value.
I know that the CGridCtrl is very old, but I am not aware of another flexible grid control that is designed for MFC.
The value that is transferred back to the CGridCtrl is chosen in CInPlaceList::EndEdit. The internal message GVN_ENDLABELEDIT is used, and this message always uses text to set the value in the grid.
The value is taken here via GetWindowText from the control. Feel free to override this behaviour.
The handler CGridCtrl::OnEndInPlaceEdit in turn calls OnEndEditCell. All of them take the string sent with GVN_ENDLABELEDIT.
If you want to distinguish between the internal value and the selected value, you have to manage this yourself by rewriting the drawing and selection code. The value in the grid would then be the GetCurSel value and you would have to display something different... There isn't much handling of this in the current code that would need to change.
More information
The key is CInPlaceList::EndEdit(). There is a call to GetWindowText (CInPlaceList is derived from CComboBox); just get the index here. Also, in CGridCellCombo::EndEdit you have access to m_pEditWnd, which is the CInPlaceList object and is derived from CComboBox, so you have access there too.
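A rough sketch of that idea (assuming you add an int member, say m_iLastSel initialised to CB_ERR, to CGridCellCombo, keep the rest of the existing EndEdit body intact, and add a hypothetical accessor GetLastSelectedIndex):

// Sketch only: capture the selection before the in-place edit window goes away.
void CGridCellCombo::EndEdit()
{
    // m_pEditWnd is the CInPlaceList, which derives from CComboBox.
    if (m_pEditWnd != NULL)
        m_iLastSel = ((CComboBox*)m_pEditWnd)->GetCurSel(); // CB_ERR if nothing selected

    // ... original EndEdit body continues here unchanged ...
}

int CGridCellCombo::GetLastSelectedIndex() const
{
    return m_iLastSel;
}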
I have found this to be the simplest solution:
int CGridCellCombo::GetSelectedIndex()
{
int iSelectedIndex = CB_ERR;
CString strText = GetText();
for (int iOption = 0; iOption < m_Strings.GetSize(); iOption++)
{
if (strText.CollateNoCase(m_Strings[iOption]) == 0) // Match
{
iSelectedIndex = iOption;
break;
}
}
return iSelectedIndex;
}
Is there any way to check, when you type into a field, whether there are already any documents saved with that value in that field? For example, if you type a projectno I want to check if any other document already has that projectno. Any suggestions on how I can validate that?
Regards
You need a view in the database that is sorted in the first column by the field that you are using. I will assume it is a hidden view, called "(lookupUnique)". Build it and test it to make sure it is showing the field that you want in the first column, and that the values are sorted.
Now you need a way to do a lookup into this view. Ideally, you're wanting the lookup to fail -- because there is no document with the same value, in which case you allow the save to continue. But there's one other case where you might want to allow the save to continue. That's the case where the lookup succeeds because the lookup found the document that you are working on right now, which was previously saved and therefore is found in the view, and a user is now editing it again.
The @DbLookup function with the [RETURNDOCUMENTUNIQUEID] and [FAILSILENT] arguments is the IBM-recommended solution for this. I.e.,
foundId := @DbLookup("Notes":"NoCache";"":"";"(lookupUnique)";theUniqueFieldNameGoesHereWithoutQuotes;1;[RETURNDOCUMENTUNIQUEID]);
If this formula returns "", then no match was found, therefore your code should return @Success to let the save continue. If it returns anything else, then compare the result with @DocumentUniqueID. If they match, then your code should return @Success to let the save continue. If they do not match, then you have found another document with the same value in the field, so your code should return @Failure with an appropriate error message.
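Put together, the field's input-validation formula might look roughly like this (a sketch only; "(lookupUnique)" is the hidden view described above and projectNo stands in for your field name):

foundId := @DbLookup("Notes":"NoCache"; "":""; "(lookupUnique)"; projectNo; 1; [RETURNDOCUMENTUNIQUEID]:[FAILSILENT]);
@If(foundId = ""; @Success;
@Text(foundId) = @Text(@DocumentUniqueID); @Success;
@Failure("Another document already uses this value."))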
Now here's the caveat: there have been known problems with [RETURNDOCUMENTUNIQUEID] in some versions of Domino, including a bug that caused Domino 6 servers to crash if an agent called ComputeWithForm on a document based on a form that used this feature. There's also a bug that causes it to return only the unid of the first match out of many matches, and so if you have duplicates this strategy in your code will allow users to re-save old documents that are already non-unique instead of forcing them to change them to make them unique, and that may or may not be what you want.
If either of those known issues might create a problem for you, then you would be better off not using [RETURNDOCUMENTUNIQUEID], and instead just do what Notes and Domino programmers did before IBM added the [RETURNDOCUMENTUNIQUEID] option in the first place: add another column to your (lookupUnique) view, and set the column value to @Text(@DocumentUniqueID). Change the 1 in the above @DbLookup formula to the number of the column that you added, and write your validation code to anticipate the possibility that you might get back an empty string, a single value, or a list of values.
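With that extra column (say it becomes column 2 of the view), a sketch of one possible policy that tolerates all three cases could be:

unids := @DbLookup("Notes":"NoCache"; "":""; "(lookupUnique)"; projectNo; 2; [FAILSILENT]);
@If(unids = ""; @Success;
@Elements(unids) = 1 & @Text(unids) = @Text(@DocumentUniqueID); @Success;
@Failure("Another document already uses this value."))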
If I type 45678, a value is returned because there is already a document with that value. I don't understand how I should validate it.
var dbname = session.getServerName() + "!!" + "proj\\webno.nsf";
getFieldValue = getComponent("oNo").getValue();
tmp = @DbLookup(dbname, "(webNo)", getFieldValue, "obNo");
if (tmp == getFieldValue)
{
// Here I will do the validation, if the value returned is the same as getFieldValue,
// and when tmp or just getFieldValue is empty.
}
else
{
// Here it is OK
}
Taking your code and modifying it: assuming we're in the database we're creating the document in, just use @DbName() instead of trying to build the name from the session and some hard-coding. When using validation, the value of the control should be accessible simply with value. Then just get all the values in the column and see if your value is in there.
I think the following should work.
<xp:inputText id="projectNumber" value="#{doc.ProjectNumber}">
    <xp:this.validators>
        <xp:validateExpression message="Value already in use">
            <xp:this.expression><![CDATA[#{javascript:var usedValues = @DbColumn(@DbName(), "(webNo)", 1);
if (@IsMember(value, usedValues)) { return false; }
return true;}]]></xp:this.expression>
        </xp:validateExpression>
    </xp:this.validators>
</xp:inputText>
Why don't you just generate a value for them? The simplest would be to use @Unique, but there are plenty of other ways besides making them create one.
On the project which I am currently working on, I have to read an Excel file (with over 1000 rows), extract all of them and insert/update them into a database table.
In terms of performance, is it better to add all the records to a Doctrine_Collection and insert/update them afterwards using the fromArray() method? One other possible approach is to create a new object for each row (an Excel row would be an object) and then save it, but I think that's worse in terms of performance.
Every time the Excel file is uploaded, it is necessary to compare its rows to the existing objects in the database. If the row does not exist as an object, it should be inserted, otherwise updated. My first approach was to turn both objects and rows into arrays (or Doctrine_Collections), then compare both arrays before performing the needed operations.
Can anyone suggest me any other possible approach?
We did a bit of this in a project recently, with CSV data. It was fairly painless. There's a symfony plugin, tmCsvPlugin, but we have extended this quite a bit since, so the version in the plugin repo is pretty out of date. Must add that to the #TODO list :)
Question 1:
I don't explicitly know about performance, but I would guess that adding the records to a Doctrine_Collection and then calling Doctrine_Collection::save() would be the neatest approach. I'm sure it would also be handy if an exception was thrown somewhere and you had to roll back your last save.
Question 2:
If you could use a row field as a unique identifier (let's assume a username), then you could search for an existing record. If you find a record, and assuming that your imported row is an array, use Doctrine_Record::synchronizeWithArray() to update this record; then add it to a Doctrine_Collection. When complete, just call Doctrine_Collection::save().
A fairly rough 'n' ready implementation:
// set up a new collection
$collection = new Doctrine_Collection('User');
// assuming $row is an associative
// array representing one imported row.
foreach ($importedRows as $row) {
// try to find an existing record
// based on a unique identifier.
$user = Doctrine_Core::getTable('User')
->findOneByUsername($row['username']);
// create a new user record if
// no existing record is found.
if (!$user instanceof User) {
$user = new User();
}
// sync record with current data.
$user->synchronizeWithArray($row);
// add to collection.
$collection->add($user);
}
// done. save collection.
$collection->save();
Pretty rough but something like this worked well for me. This is assuming that you can use your imported row data in some way to serve as a unique identifier.
NOTE: be wary of synchronizeWithArray() if you're using sf1.2/doctrine 1.0 - if I remember correctly it was not implemented correctly. It works fine in doctrine 1.2 though.
I have never worked with Doctrine_Collections, but I can answer in terms of database queries and code logic in a broader sense. I would apply the following logic:
1. Fetch all the existing rows from the database table in a single query and store them in an array $storedSheet.
2. Create a single array of all the rows of the uploaded Excel sheet, call it $uploadedSheet. I guess the structures of $storedSheet and $uploadedSheet will be similar (both two-dimensional - rows and cells can be identified and compared).
3. Run foreach loops on $uploadedSheet as follows and only identify the rows which need to be inserted and which need to be updated (do the actual queries later):
$rowsToBeUpdated =array();
$rowsToBeInserted=array();
foreach($uploadedSheet as $row=>$eachRow)
{
if(is_array($storedSheet[$row]))
{
foreach($eachRow as $column=>$value)
{
if($value != $storedSheet[$row][$column])
{//This is a representation of comparison
$rowsToBeUpdated[$row]=true;
break; //No need to check this row anymore - one difference detected.
}
}
}
else
{
$rowsToBeInserted[$row] = true;
}
}
4. This way you have two arrays. Now perform 2 database queries -
bulk insert all those rows of $uploadedSheet whose numbers are stored in $rowsToBeInserted array.
bulk update all the rows of $uploadedSheet whose numbers are stored in $rowsToBeUpdated array.
These bulk queries are the key to faster performance.
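As a rough illustration of the insert half, using the underlying Doctrine connection (a sketch only; the table and column names are made up, and $uploadedSheet rows are assumed to be associative arrays keyed by column name):

// Build one multi-row INSERT from the rows flagged in $rowsToBeInserted.
$conn = Doctrine_Manager::getInstance()->getCurrentConnection();
$columns = array('projectno', 'title', 'amount'); // hypothetical column names
$placeholders = array();
$params = array();
foreach (array_keys($rowsToBeInserted) as $rowIndex) {
    $placeholders[] = '(' . implode(', ', array_fill(0, count($columns), '?')) . ')';
    foreach ($columns as $column) {
        $params[] = $uploadedSheet[$rowIndex][$column];
    }
}
if (!empty($placeholders)) {
    $conn->execute(
        'INSERT INTO project (' . implode(', ', $columns) . ') VALUES ' . implode(', ', $placeholders),
        $params
    );
}
// The updates could be handled similarly, e.g. one UPDATE per changed row inside a transaction.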
Let me know if this helped, or you wanted to know something else.