Brightway2: Modifying/deleting exchanges from activity without using activity as dict - brightway

I would like to modify an activity's exchanges and save the activity back to the database.
It is possible to change other aspects of the activity, like its name:
some_act['name'] = "some new name"
and then save the activity with:
some_act.save()
It is also possible to modify exchanges the same way:
some_exc['scale'] = 0.5
and then save the exchange with:
some_exc.save()
However, the only way I have found to add/delete exchanges from a specific activity is to go through the dictionary version of the activity:
some_act_dataset = some_act._data
some_act_dataset['exchanges'] = [{exchange1}, {exchange2}] # exc must be valid exchange dict
The problem is that I don't know how to save the new activity (as dict) back to the database.
some_act_dataset.save() doesn't work, since dictionaries don't have a save method.
Database("my_database").write(some_act_dataset) overwrites all the other data in the database.
I could work in the loaded database:
loaded_db = Database("my_database").load()
and make the changes I need in the resulting dictionary, and then write the whole database, but when the databases are big, this seems like a costly operation.
So, the question is: is there a way to modify an activity's exchanges and save the activity back to the database without needing to overwrite the entire database?

Activities and exchanges are stored in separate tables in the SQLite database, and each has its own object. Several translation layers are used on the journey to and from the database.
However, we almost always work with Activity or Exchange objects. The key point here is that because activities and exchanges are two separate tables, they have to be treated separately.
To create a new exchange, use Activity.new_exchange():
In [1]: from brightway2 import *
In [2]: act = Database("something").random()
In [3]: exc = act.new_exchange()
In [4]: type(exc)
Out[4]: bw2data.backends.peewee.proxies.Exchange
You can also specify data attributes in the new_exchange method call:
In [5]: exc = act.new_exchange(amount=1)
In [6]: exc['amount']
Out[6]: 1
To delete an Exchange, call Exchange.delete(). If you are doing a lot of data manipulation, you can either execute SQL directly against the database, or write peewee queries with ActivityDataset or ExchangeDataset (see e.g. the queries built in the construction of an Exchanges object).
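Putting the two operations together, replacing an activity's exchanges might look like the following sketch. `replace_exchanges` is a hypothetical helper name (not part of the Brightway2 API), and the exchange fields in the usage example are illustrative:

```python
# Hypothetical helper: swap out an activity's exchanges row by row,
# so only the affected exchange-table rows are written -- the rest of
# the database is untouched.
def replace_exchanges(act, new_exchange_data):
    # delete each existing exchange individually (Exchange.delete())
    for exc in act.exchanges():
        exc.delete()
    # create and save the replacements (Activity.new_exchange())
    for data in new_exchange_data:
        exc = act.new_exchange(**data)
        exc.save()

# e.g., assuming `some_act` is an Activity proxy:
# replace_exchanges(some_act, [{"input": some_act.key, "amount": 1.0,
#                               "type": "production"}])
```

Because each delete and save hits only one row of the exchange table, there is no need to call `Database.write()` on the whole dataset.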

How to get Salesforce REST API to paginate?

I'm using the simple_salesforce python wrapper for the Salesforce REST API. We have hundreds of thousands of records, and I'd like to split up the pull of the salesforce data so all records are not pulled at the same time.
I've tried passing a query like:
results = salesforce_connection.query_all("SELECT my_field FROM my_model limit 2000 offset 50000")
to see records 50K through 52K but receive an error that offset can only be used for the first 2000 records. How can I use pagination so I don't need to pull all records at once?
You're looking to use salesforce_connection.query(query=SOQL) and then .query_more(nextRecordsUrl, True).
Since .query() only returns 2000 records, you need to use .query_more to get the next page of results.
From the simple-salesforce docs
SOQL queries are done via:
sf.query("SELECT Id, Email FROM Contact WHERE LastName = 'Jones'")
If, due to an especially large result, Salesforce adds a nextRecordsUrl to your query result, such as "nextRecordsUrl" : "/services/data/v26.0/query/01gD0000002HU6KIAW-2000", you can pull the additional results with either the ID or the full URL (if using the full URL, you must pass ‘True’ as your second argument)
sf.query_more("01gD0000002HU6KIAW-2000")
sf.query_more("/services/data/v26.0/query/01gD0000002HU6KIAW-2000", True)
Here is an example of using this
data = []  # list to hold all the records
SOQL = "SELECT my_field FROM my_model"
results = sf.query(query=SOQL)  # initial api call

## loop through the results and add the records
for rec in results['records']:
    rec.pop('attributes', None)  # remove extra data
    data.append(rec)  # add the record to the list

## check the 'done' attribute in the response to see if there are more records;
## while 'done' is False (more records to fetch) get the next page of records
while not results['done']:
    ## the 'nextRecordsUrl' attribute holds the url to the next page of records
    results = sf.query_more(results['nextRecordsUrl'], True)
    ## repeat the loop of adding the records
    for rec in results['records']:
        rec.pop('attributes', None)
        data.append(rec)
Looping through the records and using the data:
## loop through the records and get their attribute values
for rec in data:
    # the attribute name will always be the same as the salesforce api name for that value
    print(rec['my_field'])
Like the other answer says, though, this can start to use up a lot of resources, but it's what you're looking for if you want to achieve pagination.
Maybe create a more focused SOQL statement to get only the records needed for your use case at that specific moment.
LIMIT and OFFSET aren't really meant to be used like that: what if somebody inserts or deletes a record at an earlier position (not to mention you don't have an ORDER BY in there)? SF will open a proper cursor for you; use it.
https://pypi.org/project/simple-salesforce/ docs for "Queries" say that you can either call query and then query_more or you can go query_all. query_all will loop and keep calling query_more until you exhaust the cursor - but this can easily eat your RAM.
Alternatively look into the bulk query stuff, there's some magic in the API but I don't know if it fits your use case. It'd be asynchronous calls and might not be implemented in the library. It's called PK Chunking. I wouldn't bother unless you have millions of records.
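A middle ground between query_all (which holds everything in RAM) and manual paging is a small generator that yields one record at a time while following nextRecordsUrl under the hood. `iter_records` is a hypothetical helper name, sketched against the simple_salesforce API described above:

```python
# Hypothetical generator: stream records page by page instead of
# materializing the full result set, following nextRecordsUrl.
def iter_records(sf, soql):
    results = sf.query(soql)
    while True:
        for rec in results['records']:
            rec.pop('attributes', None)  # drop Salesforce metadata
            yield rec
        if results['done']:
            break
        # pass True because nextRecordsUrl is a full URL
        results = sf.query_more(results['nextRecordsUrl'], True)

# usage (assuming an authenticated simple_salesforce connection `sf`):
# for rec in iter_records(sf, "SELECT my_field FROM my_model"):
#     print(rec['my_field'])
```

Only one page of results is held in memory at a time, which matters once the cursor spans hundreds of thousands of records.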

How can I update expiration of a document in Couchbase using Python 3?

We have a lot of docs in Couchbase with expiration = 0, which means that documents stay in Couchbase forever. I am aware that INSERT/UPDATE/DELETE isn't supported by N1QL.
We have 500,000,000 such docs and I would like to do this in parallel using chunks/bulks. How can I update the expiration field using Python 3?
I am trying this:
bucket.touch_multi(('000c4894abc23031eed1e8dda9e3b120', '000f311ea801638b5aba8c8405faea47'), ttl=10)
However I am getting an error like:
_NotFoundError_0xD (generated, catch NotFoundError): <Key=u'000c4894abc23031eed1e8dda9e3b120'
I just tried this:
from couchbase.cluster import Cluster
from couchbase.cluster import PasswordAuthenticator
cluster = Cluster('couchbase://localhost')
authenticator = PasswordAuthenticator('Administrator', 'password')
cluster.authenticate(authenticator)
cb = cluster.open_bucket('default')
keys = []
for i in range(10):
keys.append("key_{}".format(i))
for key in keys:
cb.upsert(key, {"some":"thing"})
print(cb.touch_multi(keys, ttl=5))
and I get no errors, just a dictionary of keys and OperationResults. And they do in fact expire soon thereafter. I'd guess some of your keys are not there.
However, maybe you'd really rather set a bucket expiry? That will make all the documents expire in that time, regardless of the expiry on the individual documents. In addition to the above answer that mentions this, see the Couchbase documentation on bucket expiry for more details.
You can use the Couchbase Python (or any) SDK Bucket.touch() method, described here: https://docs.couchbase.com/python-sdk/current/document-operations.html#modifying-expiraton
If you don't know the document keys, you can use a N1QL covered index to get the document keys asynchronously in your Python SDK, then use the above Bucket.touch() API to set the expiration from your Python SDK.
CREATE INDEX ix1 ON bucket(META().id) WHERE META().expiration = 0;
SELECT RAW META().id
FROM bucket WHERE META().expiration = 0 AND META().id LIKE "a%";
You can issue different SELECT's for different ranges and do in parallel.
For the update operation, you need to write your own. As you get each key, call bucket.touch() (instead of an update), which only updates the document expiration without modifying the actual document. That saves getting/putting the whole document (https://docs.couchbase.com/python-sdk/current/core-operations.html#setting-document-expiration).
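One way to combine these pieces, sketched under the assumption of the 2.x Python SDK shown earlier (a connected Bucket and its touch_multi method): fetch keys with the covered-index query, then touch them in fixed-size batches. `chunks` and `touch_in_chunks` are hypothetical helper names:

```python
# Hypothetical helpers: apply a TTL to many documents in batches,
# so a 500M-key workload can be split into manageable calls.
def chunks(seq, size):
    """Yield successive fixed-size slices of a list of keys."""
    for i in range(0, len(seq), size):
        yield seq[i:i + size]

def touch_in_chunks(bucket, keys, ttl=30 * 24 * 3600, chunk_size=1000):
    """touch_multi each batch; only the expiration is updated,
    the document bodies are never fetched or rewritten."""
    for batch in chunks(keys, chunk_size):
        bucket.touch_multi(batch, ttl=ttl)
```

Different key ranges from the SELECT (e.g. `LIKE "a%"`, `LIKE "b%"`) can then be handed to separate workers running this loop in parallel.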

Maximo automation script to change the status of a work order

I have created a non-persistent attribute in my WoActivity table named VDS_COMPLETE. It is a boolean that gets changed by a checkbox in one of my applications.
I am trying to write an automation script in Python that, when I save a work order, changes the status of every one of its tasks that has been checked.
I don't know why it isn't working, but I'm pretty sure I'm close to the answer...
Do you have an idea why it isn't working? I know that I have code in comments; I have done a few experimentations...
from psdi.mbo import MboConstants
from psdi.server import MXServer

mxServer = MXServer.getMXServer()
userInfo = mxServer.getUserInfo(user)
mboSet = mxServer.getMboSet("WORKORDER")
#where1 = "wonum = :wonum"
#mboSet.setWhere(where1)
#mboSet.reset()
workorderSet = mboSet.getMbo(0).getMboSet("WOACTIVITY", "STATUS NOT IN ('FERME', 'ANNULE', 'COMPLETE', 'ATTDOC')")
#where2 = "STATUS NOT IN ('FERME', 'ANNULE', 'COMPLETE', 'ATTDOC')"
#workorderSet.setWhere(where2)

if workorderSet.count() > 0:
    for x in range(0, workorderSet.count()):
        if workorderSet.getString("VDS_COMPLETE") == 1:
            workorder = workorderSet.getMbo(x)
            workorder.changeStatus("COMPLETE", MXServer.getMXServer().getDate(), u"Script d'automatisation", MboConstants.NOACCESSCHECK)
workorderSet.save()
workorderSet.close()
It looks like your two biggest mistakes here are:
1. Trying to get your boolean field (VDS_COMPLETE) off the set (meaning off of the collection of records, like the whole table) instead of off of the MBO (meaning an actual record, one entry in the table).
2. Getting your set of data fresh from the database (via that MXServer call), which means using the previously saved data instead of getting your data set from the screen, where the pending changes have actually been made (and remember that non-persistent fields do not get saved to the database).
There are some other problems with this script too, like your use of "count()" in your for loop (or even more than once at all) which is an expensive operation, and the way you are currently (though this may be a result of your debugging) not filtering the work order set before grabbing the first work order (meaning you get a random work order from the table) and then doing a dynamic relationship off of that record (instead of using a normal relationship or skipping the relationship altogether and using just a "where" clause), even though that relationship likely already exists.
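Put together, a corrected version might look like the following sketch. It is written as a hypothetical helper taking the on-screen work order (in a Maximo automation script you would pass the implicit `mbo` variable, plus MXServer and MboConstants); the relationship and field names come from the question:

```python
# Hypothetical helper reflecting the fixes above: read VDS_COMPLETE off
# each individual task record (not the set), and walk the on-screen
# dataset so pending, unsaved changes to the non-persistent flag are
# visible.
def complete_checked_tasks(workorder, server, constants):
    taskSet = workorder.getMboSet("WOACTIVITY")  # relationship off the screen Mbo
    task = taskSet.moveFirst()
    completed = 0
    while task is not None:
        if task.getBoolean("VDS_COMPLETE"):  # per-record boolean read
            task.changeStatus("COMPLETE", server.getDate(),
                              u"Script d'automatisation",
                              constants.NOACCESSCHECK)
            completed += 1
        task = taskSet.moveNext()
    return completed

# in a Maximo automation script this would be called roughly as:
# complete_checked_tasks(mbo, MXServer.getMXServer(), MboConstants)
```

The moveFirst()/moveNext() walk also avoids the repeated count() calls flagged above.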
Here is a Stack Overflow answer describing in more detail relationships and "where" clauses in Maximo: Describe relationship in maximo 7.5
This question also has some more information about getting data from the screen versus new from the database: Adding a new row to another table using java in Maximo

Unique identifiers of new activities in Brightway

I want to create a simple activity to add to my ecoinvent database on Brightway2. How can I create a unique identifier to act as the "code" field?
The only way I know to create an activity from scratch is:
bw.Database('database_name').new_activity('code')
but I need to specify a code, and I would rather have it automatically generated (as when we do a copy of an existing activity). Is there a way to do it?
In the docs, one can read:
Brightway2 identifies an activity or flow with the MD5 hash of a few attributes: For ecoinvent 2, the name, location, unit, and categories. For ecoinvent 3, the activity and reference product names.
When diving into the bw2io code, though (specifically the utils), we see this is not actually exact: Brightway generates a unique code as the MD5 hash of the ecoinvent UUIDs for the activity and the reference flow:
In [1]: import brightway2 as bw
   ...: import hashlib
   ...: act = bw.Database('ecoinvent 3.3 cutoff').random()
   ...: act['code']
Out[1]: '965e4a277c353bd2ed8250b49c0e24ef'
In [2]: act['activity'], act['flow']
Out[2]: ('ff086b85-84bf-4e44-b60e-194c0ac7f7cf',
 '45fbbc41-7ae9-46cc-bb31-abfa11e69de0')
In [3]: string = u"".join((act['activity'].lower(), act['flow'].lower()))
   ...: string
Out[3]: 'ff086b85-84bf-4e44-b60e-194c0ac7f7cf45fbbc41-7ae9-46cc-bb31-abfa11e69de0'
In [4]: str(hashlib.md5(string.encode('utf-8')).hexdigest())
Out[4]: '965e4a277c353bd2ed8250b49c0e24ef'
In [5]: act['code'] == str(hashlib.md5(string.encode('utf-8')).hexdigest())
Out[5]: True
Note that this implies you have populated the activity and flow fields of your activity dataset. You can generate these using the uuid library. You could also decide to use other fields in your MD5 hash (e.g. the name of the activity and of the reference flow, as the docs imply).
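Putting this together, here is a small sketch of generating your own code the same way; `make_code` is a hypothetical helper name, and the UUIDs for a brand-new activity come from the uuid library:

```python
import hashlib
import uuid

# Hypothetical helper: MD5 of the lowercased, concatenated activity and
# flow UUIDs, mirroring what bw2io does for ecoinvent 3 datasets.
def make_code(activity_uuid, flow_uuid):
    string = u"".join((activity_uuid.lower(), flow_uuid.lower()))
    return str(hashlib.md5(string.encode('utf-8')).hexdigest())

# for a brand-new activity, generate fresh identifiers:
activity_id = str(uuid.uuid4())
flow_id = str(uuid.uuid4())
code = make_code(activity_id, flow_id)  # a 32-character hex string
```

Setting `activity`, `flow`, and the resulting `code` on the new activity then matches the convention used for imported ecoinvent data.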

Create a Couchbase Document without Specifying an ID

Is it possible to insert a new document into a Couchbase bucket without specifying the document's ID? I would like use Couchbase's Java SDK create a document and have Couchbase determine the document's UUID with Groovy code similar to the following:
import com.couchbase.client.java.CouchbaseCluster
import com.couchbase.client.java.Cluster
import com.couchbase.client.java.Bucket
import com.couchbase.client.java.document.JsonDocument
import com.couchbase.client.java.document.json.JsonObject
// Connect to localhost
CouchbaseCluster myCluster = CouchbaseCluster.create()
// Connect to a specific bucket
Bucket myBucket = myCluster.openBucket("default")
// Build the document
JsonObject person = JsonObject.empty()
    .put("firstname", "Stephen")
    .put("lastname", "Curry")
    .put("twitterHandle", "#StephenCurry30")
    .put("title", "First Unanimous NBA MVP")
// Create the document
JsonDocument stored = myBucket.upsert(JsonDocument.create(person));
No, Couchbase documents have to have a key, that's the whole point of a key-value store, after all. However, if you don't care what the key is, for example, because you retrieve documents through queries rather than by key, you can just use a uuid or any other unique value when creating the document.
It seems there is no way to have Couchbase generate the document IDs for me. At the suggestion of another developer, I am using UUID.randomUUID() to generate the document IDs in my application. The approach is working well for me so far.
Reference: https://forums.couchbase.com/t/create-a-couchbase-document-without-specifying-an-id/8243/4
As you already found out, generating a UUID is one approach.
If you want to generate a more meaningful ID, for instance a "foo" prefix followed by a sequence number, you can make use of atomic counters in Couchbase.
The atomic counter is a document that contains a long, on which the SDK relies to guarantee a unique, incremented value each time you call bucket.counter("counterKey", 1, 2). This code would take the value of the counter document "counterKey", increment it by 1 atomically and return the incremented value. If the counter doesn't exist, it is created with the initial value 2, which is the value returned.
This is not automatic, but a Couchbase way of creating sequences / IDs.
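In Python (the 2.x SDK's Bucket.counter() has the same semantics as the Java call above), a sketch might look like this; `next_key`, the "foo" prefix, and the "counterKey" document name are illustrative:

```python
# Hypothetical helper: build "foo::<n>" keys from an atomic counter
# document stored in the bucket.
def next_key(bucket, prefix="foo"):
    # atomically increment the counter document; if it does not exist,
    # it is created with the initial value, which is then returned
    result = bucket.counter("counterKey", delta=1, initial=2)
    return "{0}::{1}".format(prefix, result.value)
```

Each call yields a distinct key even under concurrent writers, because the increment happens server-side rather than in the application.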
