Validate Line function and Scheduled Script function in NetSuite?

1. Validate Line Function
Does this script fire at the field level of line items, or at the record level?
2. Scheduled Script
How do you test the performance of a script? Would the debugger help, or are there other logs to check for bottlenecks (delays)?

Validate Line fires at the line level, i.e. not until you click "Done" or move to the next line. Your event handler function can return false to stop the line addition/update from happening, or true to continue normally.
As far as I know, the debugger can currently only be used on User Event scripts. You could do some simple time measurements and log timings throughout your script. Your most likely bottlenecks will be anything that goes back to the database (e.g. creating/loading/submitting records, complex searches, etc.).
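The timing pattern itself is simple: capture a timestamp before and after each suspect operation and log the delta. Here is a minimal sketch in Python purely for illustration (SuiteScript is JavaScript, where you would use new Date().getTime() and nlapiLogExecution instead); the run_search helper in the usage comment is hypothetical:

    import logging
    import time

    logging.basicConfig(level=logging.INFO)

    def timed(label, fn, *args, **kwargs):
        # wrap any expensive step (record load/submit, search) and log its duration
        start = time.time()
        result = fn(*args, **kwargs)
        logging.info("%s took %.3f s", label, time.time() - start)
        return result

    # usage: results = timed("customer search", run_search, "customer", filters)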

Related

What is the best way to run a background process in a Dash app?

I have a Dash application that queries an API based on a user's search query, performs some calculations on the response, and then displays the final results to the user. In order to provide a quick response, I am trying to set up a quick-result callback and a full-result long_callback.
The quick result will grab limited results from the API and display them to the user within 10-15 seconds, while the full search will run in the background, collecting all results (which can take up to 2 minutes), then update the page with the full results when they are available.
I am curious what the best way to perform this action is, as I have run into forking issues with my current attempt.
My current attempt: using diskcache.Cache() as the backing store for the DiskcacheLongCallbackManager, and a text-file database to track the availability of results.
The database text file stores a dictionary whose keys are the search queries, with the fields quick_results: bool, full_results: bool, file_path: str, timestamp: dt (as str).
When a search query is entered and submit is pressed, a callback loads the database file as a variable and then checks the dictionary keys for the presence of this search query.
If it finds the query in the keys of the database, it loads the saved feather file from the provided file_path and returns it to the Dash app for generation of the page content.
If it does not find the query in the database keys, it requests limited data from the API, runs calculations, saves the DataFrame as a feather file on disk, then creates an entry in the database with the search query (as the key), the file path of the saved feather file, the current timestamp, and the quick_results value set to True.
It then loads this feather file from the file_path just created and returns it to the Dash app for generation of the page content.
A long_callback is triggered at the same time as the callback above, with a 20-second sleep to prevent overlap with the quick search. This callback also loads the database file and checks whether the query is present in the database keys.
If found, it then checks whether the full_results value is True and whether the timestamp is more than 0 days old.
If the full results are unavailable or more than 0 days old, the long_callback requests full results from the API, performs the same calculations, then updates the existing entry in the database, setting full_results to True and the timestamp to the completion time of the full search.
It then loads the feather file from the file_path and returns it to the Dash app for generation of the page content.
If the results are available and less than 1 day old, the long_callback simply loads the feather file from the provided file_path and returns it to the Dash app for generation of the page content.
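For reference, a stripped-down sketch of the scaffolding described above (dash 2.x long_callback API; run_quick_search and run_full_search are placeholders standing in for the API-plus-feather logic):

    import time

    import diskcache
    from dash import Dash, Input, Output, State, dcc, html
    from dash.long_callback import DiskcacheLongCallbackManager

    cache = diskcache.Cache("./cache")
    long_callback_manager = DiskcacheLongCallbackManager(cache)

    app = Dash(__name__, long_callback_manager=long_callback_manager)

    app.layout = html.Div([
        dcc.Input(id="query"),
        html.Button("Submit", id="submit"),
        html.Div(id="quick-results"),
        html.Div(id="full-results"),
    ])

    def run_quick_search(query):
        # placeholder: limited API pull, calculations, feather save, database update
        return f"quick results for {query!r}"

    def run_full_search(query):
        # placeholder: full API pull (up to ~2 min), calculations, database update
        return f"full results for {query!r}"

    @app.callback(
        Output("quick-results", "children"),
        Input("submit", "n_clicks"),
        State("query", "value"),
        prevent_initial_call=True,
    )
    def quick_callback(n_clicks, query):
        return run_quick_search(query)

    @app.long_callback(
        Output("full-results", "children"),
        Input("submit", "n_clicks"),
        State("query", "value"),
        prevent_initial_call=True,
    )
    def full_callback(n_clicks, query):
        time.sleep(20)  # keep the full search from overlapping the quick one
        return run_full_search(query)

    if __name__ == "__main__":
        app.run_server()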
The problem I am currently facing is a weird forking error in the long_callback under only one of the conditions for a full search. I currently have the long_callback set up to perform a full search only if the full_results flag is False or the results are more than 0 days old. When the full_results flag is False, the callback runs as expected, updates the database, and returns the full results. However, when the results are available but more than 0 days old, the callback hits a forking error and is unable to complete.
The process has forked and you cannot use this CoreFoundation functionality safely. You MUST exec(). Break on __THE_PROCESS_HAS_FORKED_AND_YOU_CANNOT_USE_THIS_COREFOUNDATION_FUNCTIONALITY___YOU_MUST_EXEC__() to debug.
I am at a loss as to why the function runs without error under one condition but hits a forking error under the other, since the process that runs after both conditions is exactly the same.
Using print statements, I have noticed that the forking error triggers when the function calls requests.get() on the API.
If this issue is related to how I have set up the background process functionality, I would greatly appreciate suggestions or assistance on how to do this properly, so that I will not face this forking error.
If there is any information I have left out that will be helpful, please let me know and I will try to provide it.
Thank you for any help you can provide.

Send line from file each time Lambda function is triggered [python]

I have searched for this issue and couldn't find anything that would help me.
EDITED
The main idea is that each time the Lambda function is triggered by CloudWatch (every day), it chooses the next line from a text file that I get from an S3 bucket; that line will be attached to an e-mail.
The next time the Lambda is triggered, the same will happen, but with the next line in the text file, and so on.
I have a rough idea using a for loop; my problem is how to select the next line in the text file each time the function is triggered.
If you need to read lines from your file in sequence, with one Lambda execution a day consuming one line, then you have to keep track of which line you are on. If it's only once a day, you could use SSM Parameter Store for that: each time your Lambda executes, it would query SSM Parameter Store for the line number that was read previously.
Similarly, after successful dispatch of a line, the Lambda function would update the parameter in SSM Parameter Store.
The exact details depend on how big the file is, as this process can get progressively slower over time.
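For a small file, a sketch of that approach could look like the following (the bucket, key, and parameter names are made up; the e-mail dispatch itself is elided):

    import boto3

    S3_BUCKET = "my-bucket"                       # hypothetical
    S3_KEY = "lines.txt"                          # hypothetical
    PARAM_NAME = "/email-lambda/next-line-index"  # hypothetical

    s3 = boto3.client("s3")
    ssm = boto3.client("ssm")

    def lambda_handler(event, context):
        # fetch the line index persisted by the previous run (default to 0)
        try:
            index = int(ssm.get_parameter(Name=PARAM_NAME)["Parameter"]["Value"])
        except ssm.exceptions.ParameterNotFound:
            index = 0

        body = s3.get_object(Bucket=S3_BUCKET, Key=S3_KEY)["Body"].read()
        lines = body.decode("utf-8").splitlines()
        if index >= len(lines):
            return {"status": "done"}  # nothing left to send

        line = lines[index]
        # ... attach `line` to the e-mail and send it here ...

        # only advance the pointer once the send has succeeded
        ssm.put_parameter(Name=PARAM_NAME, Value=str(index + 1),
                          Type="String", Overwrite=True)
        return {"status": "sent", "line_index": index}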

tailLines and sinceTime in the logging API: both don't work simultaneously

I am using Container Engine, and my pods are hosted there.
I am trying to fetch logs using the log API:
http://localhost:8000/api/v1/namespaces/app-test/pods/designer-0/log?tailLines=100&sinceTime=2017-09-17T10:47:58Z
If I use either of the query params separately, it works and shows the proper result, but if I use them simultaneously, only the top 100 log lines are returned and the sinceTime param gets ignored.
My scenario is that I need logs from a specific time onward, in chunks of, say, 100 lines at a time.
I am not sure whether this is a bug or just not implemented.
I found this in the API reference manual:
https://kubernetes.io/docs/api-reference/v1.6/
tailLines - If set, the number of lines from the end of the logs to show. If not specified, logs are shown from the creation of the container or sinceSeconds or sinceTime
So that means if you specify tailLines, it starts from the end. I don't see any other option explicitly mentioned apart from limitBytes, but you will have to play around with that, as it does not guarantee a number of lines.
tailLines=X tells the server to start that many lines from the end
sinceTime tells the server to start from the specified time
the options are mutually exclusive
Thanks all,
I later recognized that it is not ignoring sinceTime; tailLines's intended functionality is to return the lines from the end.
So if I set sinceTime to 10 PM yesterday, it will return the records from that time onward, and if tailLines is also given, it will return the most recent lines from that chunk.
So it was working as expected. I need to play with limitBytes to get the logs in chunks from that time, instead of the full logs.
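A rough sketch of that chunking idea, using limitBytes together with an advancing sinceTime cursor (timestamps=true is needed to know where the previous chunk ended; note that sinceTime only has one-second granularity, so adjacent chunks can overlap and may need de-duplication):

    import requests

    LOG_URL = ("http://localhost:8000/api/v1/namespaces/app-test"
               "/pods/designer-0/log")

    def fetch_chunks(since_time, chunk_bytes=16384):
        # yield the pod's log in limitBytes-sized chunks, starting at since_time
        while True:
            resp = requests.get(LOG_URL, params={
                "sinceTime": since_time,
                "timestamps": "true",  # prefix each line with its RFC3339 timestamp
                "limitBytes": chunk_bytes,
            })
            resp.raise_for_status()
            chunk = resp.text
            if not chunk:
                return
            yield chunk
            # advance the cursor to the timestamp of the last line received
            last_ts = chunk.strip().splitlines()[-1].split(" ", 1)[0]
            if last_ts == since_time:
                return  # no forward progress; end of log reached
            since_time = last_ts

    for chunk in fetch_chunks("2017-09-17T10:47:58Z"):
        print(chunk)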

How to debug plperl script in postgres-8.4 trigger

I am writing a plperl function for my trigger. When an INSERT/UPDATE happens, my plperl script runs, and in it I dynamically build a query based on the event I receive. I want to print that query to the terminal when I do an insert/update, but it does not happen. How can I print it?
Use the elog function to raise notices, e.g. elog(NOTICE, $query). You can also use it to raise full errors with elog(ERROR, ...).

How to see output of TextOutW(...) after each call?

On writing to the display with:
::TextOutW( pDC->m_hDC, x, y, &Out, 1 );
It only shows on the screen after every 15 calls (15 characters).
For debugging purposes only, I would like to see the new character on the display after each call. I have tried ::flushall() and a few other things but no change.
TIA
GDI function calls are accumulated and called in batches for performance reasons.
You can call GdiFlush after the TextOut call to perform the drawing immediately. Alternatively, call GdiSetBatchLimit(1) before outputting the text to disable batching completely.
::flushall() is for iostreams, so it won't affect Windows screen output at all. I've never tried it, but based on the docs, I believe GdiFlush() might be what you want. You should also be able to use GdiSetBatchLimit(1); to force each call to run immediately.
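The batching behavior is independent of the language making the GDI calls; here is a minimal Python/ctypes sketch (Windows only, drawing on the screen DC purely for demonstration) of the GdiSetBatchLimit approach:

    import ctypes
    from ctypes import wintypes

    user32 = ctypes.windll.user32
    gdi32 = ctypes.windll.gdi32
    user32.GetDC.restype = wintypes.HDC
    gdi32.TextOutW.argtypes = [wintypes.HDC, ctypes.c_int, ctypes.c_int,
                               wintypes.LPCWSTR, ctypes.c_int]

    gdi32.GdiSetBatchLimit(1)  # disable batching: every GDI call draws immediately

    hdc = user32.GetDC(None)   # screen DC, for demonstration only
    for i, ch in enumerate("Hello"):
        gdi32.TextOutW(hdc, 10 + 12 * i, 10, ch, 1)
        # with batching left enabled, you would call gdi32.GdiFlush() here instead
    user32.ReleaseDC(None, hdc)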
