MongoEngine: How to add EmbeddedDocument and its container list at same time? - mongoengine

I have a working db that needs a list of EmbeddedDocuments added to a model.
class Alert(medb.EmbeddedDocument):
    # embedded in Sensor
    name = medb.StringField(default='new alert')
    destination_list = medb.ListField(medb.EmbeddedDocumentField(AlarmDestination))
but, when I try to add the first AlarmDestination with:
dest = models.AlarmDestination()
dest.notifydestination_id = ObjectId(post['id'])
alertdata.destination_list.append(dest)
thesite.save()
I get the following error:
Updating the path 'sensordict.4.alert_list.1.destination_list.0.notifydestination_id' would create a conflict at 'sensordict.4.alert_list.1.destination_list'
A little fumbling around revealed that the problem appears to be that, since this is a change to the model, destination_list does not exist yet when trying to save the first new AlarmDestination.
If I manually add an empty destination_list (array) with MongoDB Compass and run my code again, it works fine.
How can I ensure my code will always work even if the container for my new EmbeddedField does not yet exist?
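One way to guard against the missing container is to assign the whole list when appending, so the write targets the field itself rather than a position inside an array that may not exist yet (in MongoEngine terms, assigning the full list marks the field as changed and issues a $set on the container). The sketch below is plain Python over a dict stand-in for the document, since a live MongoDB connection isn't available here; the field names follow the question, and treating an absent destination_list as falsy is an assumption:

```python
# Hypothetical sketch: "assign the whole list instead of appending into it".
# `alert` stands in for the embedded Alert document from the question.

def add_destination(alert, dest):
    """Replace the container wholesale so legacy documents that were
    saved before the schema change (and so have no array at all) still
    get a valid write."""
    current = alert.get("destination_list") or []
    alert["destination_list"] = current + [dest]
    return alert

# Usage: a legacy alert with no destination_list key at all.
legacy_alert = {"name": "new alert"}
add_destination(legacy_alert, {"notifydestination_id": "abc123"})
```

The same guard applied to the real model would replace the bare `alertdata.destination_list.append(dest)` before calling `thesite.save()`.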

Related

migration issue, django.core.exceptions.FieldDoesNotExist: lightspeed.inventoryimagehistory has no field named 'imageID'

I'm having issues with this Django project; when I try to run migrations I get the following message:
django.core.exceptions.FieldDoesNotExist: InventoryImageHistory has no field named 'InventoryImageID'
this is the class for InventoryImageHistory
class InventoryImageHistory(models.Model):
    ImageID = models.IntegerField(db_index=True, unique=True, null=False, primary_key=True)
    history = ListField(DictField())
    objects = models.DjongoManager()
these are the migration files:
migration file number 40:
class Migration(migrations.Migration):
    operations = [
        migrations.RenameField(
            model_name='inventoryimagehistory',
            old_name='InventoryImageID',
            new_name='imageID',
        ),
    ]
The model definition is InventoryImageHistory, but for some reason it keeps returning an error.
Well, a few minutes after I posted my comment I seem to have fixed it. It was because I had initially put the RenameField in the previous migration (not yet committed to the repo, but already applied), and even though that one had been applied, Django seems to parse all of the dependency migrations logically and decided that the rename had already happened.
This message might have to do with the fact you faked/manipulated some migrations in that app recently.
What I do when this pops up is:
1. Create another change in that models.py file (e.g. add an extra field to the model).
2. Run the migration.
3. Make the change I wanted in the first place (the one that triggered the error) and run the migration (hopefully successfully this time).
4. Undo the change added in step 1 and run migrations again.

Unable to insert data through EntityRepository.create or EntityRepository.persist #MikroOrm #NestJS

I am trying to test my Entity operations using the code in the file.
I am creating a userRepository object as follows (screenshot omitted).
When I console.log find({}) from the repository, it fetches the previously stored records (screenshot omitted).
I create a dummy object using faker and it works fine, but as soon as I try to create it in the DB or persist it, it does not seem to work (screenshot omitted).
I also tried orm.em.persist. Let me know if more details are required.
Just for future readers: this has been asked and answered on GitHub.
https://github.com/mikro-orm/mikro-orm/discussions/1571

Is there a way to specify a "master" or "index" migration?

I'm working on an existing Django 2.2 application comprising a custom app in conjunction with a Wagtail CMS, where I'm iteratively adding new wagtail page-types in separate user stories over time.
I want to be able to create a "master" or "index" migration that pre-builds each page-type in the database automatically when migrations are run (ours are performed in an Ansible task upon deployment). As far as I can tell, what I need requires:
The auto-built migration that modifies the DB schema for each page
A further migration that is always run last and which contains a dependencies attr - able to be updated with a single list-entry representing the new page's migration name, each time one is added.
I can already auto-build page-types using the following logic in a create() method called from migrations.RunPython(), but at the moment this same page-build logic needs to exist in each page's migration. I'd prefer it to live in a single migration (or an alternative procedure, if one exists in Django) that can always be run.
Ideally, the page_types list below could be replaced by just iterating over BasePage.__subclasses__() (where all page-types inherit from BasePage), meaning this "master" migration need never be altered again.
Note: if it helps any, the project is still in development, so any solution that is slightly controversial or strictly "dev-only" is acceptable - assuming it can be made acceptable and therefore less controversial by merging migrations later.
...
...
# Fetch the pre-created root Page
root_page = BasePage.objects.all().first()
page_types = [
    ManageAccountPage,
    EditUserDetailPage,
]
path_init = int('000100020003')  # The last value for `path` from 0007_initialise_site_ttm.py
# Create, then add all child pages
for page_type in page_types:
    title_raw = page_type.__name__.replace('Page', '')
    page = page_type(
        title=utils.convert_camel_to_human(title_raw),
        slug=title_raw.lower(),
        show_in_menus='t',
        content_type=ContentType.objects.get_for_model(page_type),
        path=path_init + 1,
        depth=2,
    )
    try:
        root_page.add_child(instance=page)
    except exceptions.ValidationError:
        continue
...
...
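The `__subclasses__()` idea mentioned above can be sketched in plain Python; `BasePage` and the page classes here are minimal stand-ins for the question's Wagtail models, not the real ones. One caveat worth noting: `__subclasses__()` only returns direct subclasses, so a deeper page-type hierarchy would need a recursive walk.

```python
class BasePage:
    """Stand-in for the question's BasePage model."""
    pass

class ManageAccountPage(BasePage):
    pass

class EditUserDetailPage(BasePage):
    pass

# Replaces the hand-maintained page_types list: any new page-type that
# subclasses BasePage is picked up automatically, so the "master"
# migration's loop never needs editing when a page-type is added.
page_types = BasePage.__subclasses__()
titles = [cls.__name__.replace('Page', '') for cls in page_types]
```

The loop body from the migration (building each page and calling `root_page.add_child(...)`) would then iterate over `page_types` exactly as it does today.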
What's the problem?
(See "What I've tried" below)
What I've tried:
A custom pin_curr_migration() method called from migrations.RunPython() that deletes the "master" migration's own record in django_migrations, allowing it to be re-run. This, however, results in errors where Django complains about previously built pages already existing.

Maximo automation script to change the status of a work order

I have created a non-persistent attribute in my WoActivity table named VDS_COMPLETE. It is a boolean that gets changed by a checkbox in one of my applications.
I am trying to write an automation script in Python that, when I save a work order, changes the status of every one of its tasks that has been checked.
I don't know why it isn't working, but I'm pretty sure I'm close to the answer...
Do you have any idea why it isn't working? I know that I have code in comments; I have done a few experiments...
from psdi.mbo import MboConstants
from psdi.server import MXServer

mxServer = MXServer.getMXServer()
userInfo = mxServer.getUserInfo(user)
mboSet = mxServer.getMboSet("WORKORDER")
#where1 = "wonum = :wonum"
#mboSet.setWhere(where1)
#mboSet.reset()
workorderSet = mboSet.getMbo(0).getMboSet("WOACTIVITY", "STATUS NOT IN ('FERME', 'ANNULE', 'COMPLETE', 'ATTDOC')")
#where2 = "STATUS NOT IN ('FERME', 'ANNULE', 'COMPLETE', 'ATTDOC')"
#workorderSet.setWhere(where2)
if workorderSet.count() > 0:
    for x in range(0, workorderSet.count()):
        if workorderSet.getString("VDS_COMPLETE") == 1:
            workorder = workorderSet.getMbo(x)
            workorder.changeStatus("COMPLETE", MXServer.getMXServer().getDate(), u"Script d'automatisation", MboConstants.NOACCESSCHECK)
workorderSet.save()
workorderSet.close()
It looks like your two biggest mistakes here are: (1) trying to get your boolean field (VDS_COMPLETE) off the set (the collection of records, i.e. the whole table) instead of off the MBO (an actual record, one entry in the table); and (2) getting your data set fresh from the database (via that MXServer call), which means working with previously saved data instead of getting your data set from the screen, where the pending changes have actually been made (and remember that non-persistent fields do not get saved to the database).
There are some other problems with this script too. Calling count() inside your for loop (or even more than once at all) is an expensive operation. And, though this may be a result of your debugging, you are currently not filtering the work order set before grabbing the first work order (meaning you get an arbitrary work order from the table) and then doing a dynamic relationship off that record, instead of using a normal relationship, or skipping the relationship altogether and using just a "where" clause, even though that relationship likely already exists.
Here is a Stack Overflow answer describing relationships and "where" clauses in Maximo in more detail: Describe relationship in maximo 7.5
This question also has some more information about getting data from the screen versus new from the database: Adding a new row to another table using java in Maximo
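The two fixes above (read VDS_COMPLETE from each MBO rather than from the set, and don't re-call the expensive count() on every iteration) can be sketched with a tiny fake MboSet, since a real psdi environment isn't available here. The FakeMbo/FakeMboSet classes are stand-ins whose method names simply mirror the question's usage, and changeStatus is given a simplified one-argument signature for the sketch:

```python
class FakeMbo:
    """Stand-in for one WOACTIVITY record (an MBO)."""
    def __init__(self, complete):
        self._complete = complete
        self.status = "APPR"
    def getString(self, name):
        # Non-persistent checkbox value lives on the record, not the set.
        return "1" if self._complete else "0"
    def changeStatus(self, status):
        # Simplified signature; the real call takes date/memo/flags too.
        self.status = status

class FakeMboSet:
    """Stand-in for the collection of records."""
    def __init__(self, mbos):
        self._mbos = mbos
    def count(self):
        return len(self._mbos)
    def getMbo(self, i):
        return self._mbos[i]

def complete_checked_tasks(task_set):
    # Cache count() once instead of evaluating it on every iteration,
    # and read VDS_COMPLETE off each MBO, not off the set.
    n = task_set.count()
    for i in range(n):
        task = task_set.getMbo(i)
        if task.getString("VDS_COMPLETE") == "1":
            task.changeStatus("COMPLETE")

tasks = FakeMboSet([FakeMbo(True), FakeMbo(False)])
complete_checked_tasks(tasks)
```

In the real script the set would also come from the screen (the launch-point MBO) rather than fresh from MXServer, since the pending checkbox changes only exist there.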

Drupal 6 - is node_submit() needed when saving node?

I'm trying to fix a problem in some legacy code which generates nodes of the custom content type "show", but only if a node of the same type and with the same title doesn't already exist. The code looks like:
$program = node_load(array('title' => $xml_node->program_title, 'type' => 'show'));
if (!$program) {
  $program = new stdClass();
  $program->type = 'show';
  ...
  node_submit($program);
  node_save($program);
}
So, the script first tries to load a node of the 'show' content type with a specific title and, if that fails, creates one.
The problem is that when it's called multiple times in a short period of time (inside a loop), it creates duplicate nodes: two shows with the same title, created in the same second?!
What could be the problem here?
I was looking at examples of how to save a node in Drupal 6. Some of them don't even call node_submit(). Is that call needed? If so, do I maybe have to pass what node_submit() returns to node_save()? Or maybe node_load() fails to load the existing node for some reason? Maybe some cache has to be cleared or something?
As far as I know, having used node_save() to create nodes programmatically, there is no need for the node_submit() call.
The reason double nodes are created is that node_load() fires before the node_load() cache has been updated. Try adding:
node_load(FALSE, NULL, TRUE);
after node_save($program).
This will clear the node_load() cache.
see:
https://api.drupal.org/comment/12084#comment-12084

Resources