I have something from my client that I need to update, although this is my first time seeing this code.
CREATE NEW FIELD IN or_en_listingsdb TABLE CALLED status
Reference chart of values
--------------------------
0 = current
1 = to be removed
--------------------------
STEP ZERO ---------------------------------------
Run through the database: if the record still exists in the new IDX file, leave its status at 0; otherwise update it to 1.
DELETE ---------------------------------------
if (status == 1)
>> pull or_en_listingsimages table to find out all images associated with listingID and remove them from listing_photos dir
>> remove entry in or_classlistingsdb table
>> remove entries in or_en_listingsdbelements table
>> remove entry in or_en_listingsdb table
UPDATE ----------------------------------------
if (status == 0)
>> leave entry alone as it is still valid
ADD -------------------------------------------
Check to see if entry exists in database
If entry exists (leave alone and move on to next one)
If entry does not exist (import new data and photos)
2 NEW FILES TO BE CREATED
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
marker.php : compares current db with new idx file and updates status
rem_old.php : removes expired db entries and images
2 FILES TO BE UPDATED
-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=-=
dataprocess_all.php : update to check against db to see if record needs to be added (Everyone else)
dataprocess_tng.php : update to check against db to see if record needs to be added (TNG only)
Can someone explain to me a little how I should go about implementing this MLS update, or an alternative way of doing it with cron jobs?
Thanks.
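For what it's worth, the mark-and-sweep flow the spec describes could be sketched roughly as follows. This is only an illustration in Python (the real work would live in marker.php and rem_old.php, presumably run from cron); the connection details, the listing_id and file_name column names, and the new_idx_ids helper are assumptions, not part of the client's schema.

import os
import mysql.connector

# Minimal sketch only: connection details and column names are illustrative.
conn = mysql.connector.connect(user="user", password="pass", database="listings")
cur = conn.cursor()

def mark(new_idx_ids):
    # marker.php equivalent: flag every listing, then clear the flag for
    # listings that are still present in the new IDX file.
    cur.execute("UPDATE or_en_listingsdb SET status = 1")
    for listing_id in new_idx_ids:
        cur.execute("UPDATE or_en_listingsdb SET status = 0 WHERE listing_id = %s", (listing_id,))
    conn.commit()

def rem_old(photo_dir="listing_photos"):
    # rem_old.php equivalent: delete photos and rows for listings marked 1.
    cur.execute("SELECT listing_id FROM or_en_listingsdb WHERE status = 1")
    for (listing_id,) in cur.fetchall():
        cur.execute("SELECT file_name FROM or_en_listingsimages WHERE listing_id = %s", (listing_id,))
        for (file_name,) in cur.fetchall():
            path = os.path.join(photo_dir, file_name)
            if os.path.exists(path):
                os.remove(path)
        cur.execute("DELETE FROM or_en_listingsimages WHERE listing_id = %s", (listing_id,))
        cur.execute("DELETE FROM or_classlistingsdb WHERE listing_id = %s", (listing_id,))
        cur.execute("DELETE FROM or_en_listingsdbelements WHERE listing_id = %s", (listing_id,))
        cur.execute("DELETE FROM or_en_listingsdb WHERE listing_id = %s", (listing_id,))
    conn.commit()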
[Question posted by a user on YugabyteDB Community Slack]
I am running YugabyteDB 2.12 single node and would like to know if it is possible to create a temporary table such that it is automatically dropped upon committing the transaction in which it was created.
In “vanilla” PostgreSQL it is possible to specify ON COMMIT DROP option when creating a temporary table. In the YugabyteDB documentation for CREATE TABLE no such option is mentioned, however, when I tried it from ysqlsh it did not complain about the syntax. Here is what I tried from within ysqlsh:
yugabyte=# begin;
BEGIN
yugabyte=# create temp table foo (x int) on commit drop;
CREATE TABLE
yugabyte=# insert into foo (x) values (1);
INSERT 0 1
yugabyte=# select * from foo;
x
---
1
(1 row)
yugabyte=# commit;
ERROR: Illegal state: Transaction for catalog table write operation 'pg_type' not found
The CREATE TABLE documentation for YugabyteDB mentions the following for temporary tables:
Temporary tables are only visible in the current client session or transaction in which they are created and are automatically dropped at the end of the session or transaction.
When I create a temporary table (without the ON COMMIT DROP option), indeed the table is automatically dropped at the end of the session, but it is not automatically dropped upon commit of the transaction. Is there any way that this can be accomplished (apart from manually dropping the table just before the transaction is committed)?
Your input is greatly appreciated.
Thank you
See these two GitHub issues:
#12221: The create table doc section doesn’t mention the ON COMMIT clause for a temp table
and
#7926 CREATE TEMP … ON COMMIT DROP writes data into catalog table outside the DDL transaction
You cannot (yet, through YB-2.13.0.1) use the ON COMMIT DROP feature. But why not use ON COMMIT DELETE ROWS and simply let the temp table remain in place until the session ends?
Saying this raises a question: how do you create the temp table in the first place? Your stated goal implies that you’d need to create it before every use. But why? You could, instead, have dedicated initialization code to create the ON COMMIT DELETE ROWS temp table that you call from the client for this purpose at (but only at) the start of a session.
If you don’t want to have this, then (back to a variant of your present thinking) you could just do this before every intended use of the table:
drop table if exists t;
create temp table t(k int) on commit delete rows;
After all, how else (without dedicated initialization code) would you know whether or not the temp table exists yet?
If you prefer, you could use this logic instead:
do $body$
begin
  if not
    (
      select exists
      (
        select 1 from information_schema.tables
        where
          table_type='LOCAL TEMPORARY' and
          table_name='t'
      )
    )
  then
    create temp table t(k int) on commit delete rows;
  end if;
end;
$body$;
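Client-side, the "dedicated initialization code" idea mentioned above could look roughly like the following. This is only a sketch, assuming a Python client using psycopg2 against the default local YSQL endpoint; the table name t and the helper functions are illustrative, not from the original answer.

import psycopg2

conn = psycopg2.connect(host="localhost", port=5433, dbname="yugabyte", user="yugabyte")

def init_session(conn):
    # Run once, right after opening the session: the temp table then stays in
    # place until the session ends, and its rows are cleared at every commit.
    with conn, conn.cursor() as cur:
        cur.execute("create temp table if not exists t(k int) on commit delete rows")

def use_temp_table(conn, values):
    # Each transaction starts with an empty t and loses its rows at commit.
    with conn, conn.cursor() as cur:
        cur.executemany("insert into t(k) values (%s)", [(v,) for v in values])
        cur.execute("select count(*) from t")
        return cur.fetchone()[0]

init_session(conn)
print(use_temp_table(conn, [1, 2, 3]))  # 3: rows visible inside the transaction
print(use_temp_table(conn, [4]))        # 1: the earlier rows were deleted at commit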
When I try the following code given in the Firebase documentation:
doc_ref = db.collection(u'users')
def on_snapshot(doc_snapshot, changes, read_time):
for change in changes:
print(u'new doc:{}'.format(change.document.id))
doc_watch = doc_ref.on_snapshot(on_snapshot)
It prints all the entries, even those that already existed before I invoked the listener. I only want to listen to the changes that take place after the listener is invoked and ignore any entries that already existed before that.
Example: if I already had 3 documents in my users collection (user1, user2, and user3), then run my program and add another document, user4, I want my program to print user4 and not user1, user2, or user3.
The SDK doesn't provide a way to query for "everything that doesn't already exist". You should come up with your own query that satisfies what you want. In your case, perhaps you need a timestamp in each one of the documents that indicates when the document was created, and query only for documents with a creation date greater than the current time.
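A minimal sketch of that approach with the Python client, assuming each document stores a created timestamp field when it is written (the field name created, and db being a firestore.Client(), are assumptions here, not part of the original code):

import datetime
from google.cloud import firestore

db = firestore.Client()

# Only listen for documents whose 'created' field is later than "now", so
# documents that existed before the listener started are filtered out.
start_time = datetime.datetime.now(tz=datetime.timezone.utc)
query = db.collection(u'users').where(u'created', u'>', start_time)

def on_snapshot(doc_snapshot, changes, read_time):
    for change in changes:
        print(u'new doc: {}'.format(change.document.id))

query_watch = query.on_snapshot(on_snapshot)

# When adding documents, write the creation time so the filter can see it:
db.collection(u'users').add({u'name': u'user4', u'created': firestore.SERVER_TIMESTAMP})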
I have a requirement like this:
Maximum 500 records.
I have to insert records into a table. However, before inserting them I have to check whether that same record or its parents are already inserted.
What I want to achieve: how can I notify the user, at the same time, once the records are inserted in Node.js?
Example: if I am uploading 400 records and 5 records are inserted, the user should be notified that 5 records were inserted; if any failed, the failed record count should be reported as well.
Any help would be really appreciated.
Igor already told you how to write a good question, so you should go through what he wrote.
Now, answering your question: you basically need to control the insertion and use two counters, for example let inserted and let notInserted (or var inserted and var notInserted).
For each insertion, check whether the record already exists; if yes, increment notInserted by 1, otherwise increment inserted by 1.
At the end, return the result to the user with something like res.json({ message: "Inserted: " + inserted + ", not inserted: " + notInserted });
Something like this!
I have created a non-persistent attribute in my WoActivity table named VDS_COMPLETE. It is a boolean that gets changed by a checkbox in one of my applications.
I am trying to write an automation script in Python that, when I save the work order, changes the status of every task of that work order that has been checked.
I don't know why it isn't working, but I'm pretty sure I'm close to the answer...
Do you have an idea why it isn't working? I know that I have code in comments; I have done a few experiments...
from psdi.mbo import MboConstants
from psdi.server import MXServer
mxServer = MXServer.getMXServer()
userInfo = mxServer.getUserInfo(user)
mboSet = mxServer.getMboSet("WORKORDER")
#where1 = "wonum = :wonum"
#mboSet .setWhere(where1)
#mboSet.reset()
workorderSet = mboSet.getMbo(0).getMboSet("WOACTIVITY", "STATUS NOT IN ('FERME' , 'ANNULE' , 'COMPLETE' , 'ATTDOC')")
#where2 = "STATUS NOT IN ('FERME' , 'ANNULE' , 'COMPLETE' , 'ATTDOC')"
#workorderSet.setWhere(where2)
if workorderSet.count() > 0:
    for x in range(0, workorderSet.count()):
        if workorderSet.getString("VDS_COMPLETE") == 1:
            workorder = workorderSet.getMbo(x)
            workorder.changeStatus("COMPLETE", MXServer.getMXServer().getDate(), u"Script d'automatisation", MboConstants.NOACCESSCHECK)
    workorderSet.save()
workorderSet.close()
It looks like your two biggest mistakes here are: 1. trying to read your boolean field (VDS_COMPLETE) off the set (meaning off the collection of records, like the whole table) instead of off the Mbo (meaning an actual record, one entry in the table), and 2. getting your set of data fresh from the database (via that MXServer call), which means you are working with the previously saved data instead of the data set from the screen, where the pending changes have actually been made (and remember that non-persistent fields do not get saved to the database).
There are some other problems with this script too, such as calling count() inside your for loop (or even calling it more than once at all), which is an expensive operation, and the way you are currently (though this may be a result of your debugging) not filtering the work order set before grabbing the first work order (meaning you get an arbitrary work order from the table) and then using a dynamic relationship off of that record, instead of using a normal relationship or skipping the relationship altogether and using just a "where" clause, even though that relationship likely already exists.
Here is a Stack Overflow question that describes relationships and "where" clauses in Maximo in more detail: Describe relationship in maximo 7.5
This question also has some more information about getting data from the screen versus fresh from the database: Adding a new row to another table using java in Maximo
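Putting those points together, the corrected script might look roughly like the sketch below. It assumes the script runs as an object launch point on WORKORDER save (so the implicit mbo variable is the work order from the screen) and that a WOACTIVITY relationship exists on WORKORDER; both are assumptions about your configuration, not something taken from the original script.

from psdi.mbo import MboConstants
from psdi.server import MXServer

# Use the relationship off the on-screen work order (implicit mbo) instead of
# fetching a fresh WORKORDER set from the database, so the non-persistent
# VDS_COMPLETE values entered on the screen are still visible.
taskSet = mbo.getMboSet("WOACTIVITY")

task = taskSet.moveFirst()
while task is not None:
    # Read the flag off the individual record (the Mbo), not off the set,
    # and skip tasks whose status cannot be completed.
    if task.getBoolean("VDS_COMPLETE") and \
            task.getString("STATUS") not in ("FERME", "ANNULE", "COMPLETE", "ATTDOC"):
        task.changeStatus("COMPLETE", MXServer.getMXServer().getDate(),
                          u"Script d'automatisation", MboConstants.NOACCESSCHECK)
    task = taskSet.moveNext()
# No explicit save()/close() here: the framework saves the related set as part
# of the work order save this script is attached to.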
USE CASE: create an AtomicSequence using the last saved id in the DB (not starting from zero) and generate ids after the last saved id in the DB.
First we check whether the AtomicSequence instance exists; if not, we create the AtomicSequence from the last saved id (if the entry is in the DB).
In HazelcastAtomicSequenceManager, the getSequenceGenerator method is a two-step process:
Step 1: getHzInstance().getAtomicLong(key) // gets the atomic long if present, otherwise creates a new one with an initial value of 0.
Step 2: this.sequence.compareAndSet(0, startVal); // sets the value if the current value is still zero.
Now consider Thread 1: it checks, sees that the AtomicSequence for the given key is not present, and executes step 1, but has not yet executed step 2.
Thread 2 comes along, sees that the AtomicSequence has been created (since step 1 was executed by Thread 1), and goes ahead and increments it to 1, since the value is still zero because step 2 has not been executed yet.
Now Thread 1 tries to execute step 2 but cannot, because the current value has become 1 (or in any case is no longer zero). So the AtomicSequence will generate ids starting from 1 instead of starting from the last saved id, which is why our test case is failing.
Is there any way to fix this issue?
You need to get and try compareAndSet in a loop, until it succeeds:
long current;
do {
current = atomicLong.get();
} while (!atomicLong.compareAndSet(current, startVal));