Acumatica ERP currently provides import scenarios for importing data from other systems, but to import a closed invoice you have to import it and then process it.
We don't want to process the invoice; we want to import it as an already-closed invoice.
How can I do this? Will I have to write the data directly to the database?
I would try to avoid writing directly to the database. What if you add a customization to the page you are importing into that allows the record to go directly to Closed?
You can use the graph's IsImport property to detect that the screen is running as part of an import and perform special functionality for import scenarios only. I am not sure, but keep in mind that the screen-based API might also set/use IsImport. There is also an IsContractBasedAPI property, but I don't think that applies to your scenario.
For example, maybe you enable the status field when the graph is running with IsImport set and adjust your import scenario to set the status, or add a custom button, available only during import, that performs a direct close.
We are trying to implement push notifications with a webhook. We know that a GI created on top of a SQL view will not support this, but will it support a projection DAC?
Yes, it supports a projection DAC.
The Acumatica Framework monitors its cache, and if changes go through the Acumatica cache, push notifications will work.
But if you create a SQL view and then build a PXProjection over it, the Acumatica cache is still not used, so the GI will not be able to trigger notifications.
IMHO, instead of one GI over the SQL view, I'd suggest creating multiple GIs over the underlying tables and collecting the information from those tables through those GIs.
BTW, the same is true of PXDatabase.Update, PXDatabase.Insert, and PXDatabase.Delete: if any of these is used anywhere in the code, it bypasses the Acumatica cache and therefore affects both cache refreshes and push notifications, since both depend on the cache being used.
I am new to Python programming and I need help. I have 10 GB of data, and I have written Python code in Spyder to process it.
The code works fine on a small sample of the data. However, my laptop cannot handle the full 10 GB, so I need to use Google Cloud. How can I upload the data and run the code there? Part of the code is provided below:
import os
import pandas as pd
import pickle
import glob
import numpy as np
df = pd.read_pickle(r'C:\user\mydata.pkl')
i = 2018
while i >= 1995:
    # Keep only rows whose OverlapYearStart is no later than the current year
    df = df[df.OverlapYearStart <= i]
    # Save the filtered frame for this year
    df.to_pickle(r'C:\user\done\{}.pkl'.format(i))
    i = i - 1
I agree with the previous answer. Just to complement it, you can take a look at AI Platform Notebooks, a managed service that offers an integrated JupyterLab environment, can pull your data from BigQuery, and lets you scale your application on demand.
On the other hand, I don't know how you have stored your 10 GB of data: in CSV files? In a database? As mentioned in the first answer, Cloud Storage lets you create buckets to store your data. Once the data is in Cloud Storage, you can load it into BigQuery tables and work with it from App Engine or from the AI Platform Notebooks suggested earlier; which one fits will depend on your solution.
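As a rough sketch, and assuming the data has been loaded into a BigQuery table (the project, dataset, and table names below are placeholders), pulling it into pandas with the google-cloud-bigquery client looks something like this:

from google.cloud import bigquery

client = bigquery.Client()  # uses the credentials of the environment it runs in

# Placeholder table name; replace with your own project.dataset.table
query = """
    SELECT *
    FROM `my-project.my_dataset.my_table`
    WHERE OverlapYearStart <= 2018
"""
df = client.query(query).to_dataframe()  # requires pandas (and pyarrow) to be installed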
Probably the easiest thing to start digging into is going to be using App Engine to run the code itself:
https://cloud.google.com/appengine/docs/python/
And use Google Cloud Storage to hold your data objects:
https://cloud.google.com/storage/docs/reference/libraries#client-libraries-install-python
I don't know what the output of your application is, so depending on what you want to do with the output, Google Compute Engine may be the right answer if AppEngine doesn't quite fit what you're doing.
https://cloud.google.com/compute/
The first two links take you to the documentation on how to get going with Python for AppEngine and Google Cloud Storage.
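For the storage part, a minimal sketch with the Python client for Cloud Storage (the bucket and object names are placeholders) would look roughly like this:

from google.cloud import storage

client = storage.Client()                 # picks up credentials from the environment
bucket = client.bucket("my-data-bucket")  # placeholder bucket name
blob = bucket.blob("data/mydata.pkl")     # placeholder object path

# Run locally to upload the source file...
blob.upload_from_filename(r"C:\user\mydata.pkl")

# ...and on the cloud side to pull it down before processing.
blob.download_to_filename("/tmp/mydata.pkl")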
Edit to add from the comments: you'll also need to manage the memory footprint of your app. If you're really doing everything in one giant while loop, then no matter where you run the application you'll have memory problems, because all 10 GB of your data will likely get loaded into memory at once. Definitely still shift it into the cloud, IMO, but that data will need to be broken up somehow and handled in smaller chunks.
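One way to do that, assuming the pickle were first re-saved as CSV (the file name, chunk size, and output layout below are illustrative, not part of the original code), is to stream the file through pandas in chunks:

import os
import pandas as pd

CHUNK_ROWS = 1_000_000  # tune to whatever fits comfortably in memory

os.makedirs("done", exist_ok=True)

# Assumes the original pickle was exported once with df.to_csv("mydata.csv", index=False)
for chunk in pd.read_csv("mydata.csv", chunksize=CHUNK_ROWS):
    for year in range(2018, 1994, -1):
        subset = chunk[chunk.OverlapYearStart <= year]
        out_path = "done/{}.csv".format(year)
        # Append this chunk's slice to the per-year file, writing the header only once
        subset.to_csv(out_path, mode="a", index=False,
                      header=not os.path.exists(out_path))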
Since I cannot modify the built-in models (entities, intents, etc.) provided by LUIS.ai, how can I import them into my own model in a way that lets me modify them further for my specific scenario(s)?
Some of the contextual information can be found here: https://github.com/Microsoft/BotBuilder/issues/1694#issuecomment-305531910
I am using Azure Bot Service with Node.js
If you are using the new prebuilt domains, once you add them to your model, you should be able to tweak them.
If you are using the Cortana prebuilt app, I don't think you will be able to update it; however, the documentation contains some information if you want to "mimic" it.
If you explain exactly what your scenarios are, we might be able to come up with other alternatives.
I can't think of a straightforward way to go about doing this, but you could take the .csv logs from LUIS and incorporate them into your model; at the least, the response column data is in JSON format.
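Purely as an illustration (the file name and column name here are assumptions about what the exported log contains, not something LUIS guarantees), parsing that JSON column in a script might look like:

import json
import pandas as pd

# Hypothetical export; adjust the file and column names to your actual log
logs = pd.read_csv("luis_log.csv")
logs["response_json"] = logs["Response"].apply(json.loads)

# The parsed objects can then be inspected for intents, entities, scores, etc.
print(logs["response_json"].iloc[0])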
I've recently come across some code on GitHub for a multi-threaded Inventory Item import using web services:
https://github.com/Acumatica/InventoryItemImportMultiThreaded
I'd like to know if this is faster or better than using an import scenario for mass import, and what may be the advantages of one over the other.
We have used the multi-threaded inventory import, and it is MUCH faster than an import scenario. We were importing 250K stock items: doing it via an import scenario, which only uses one thread, took 26 hours, while the tool on GitHub brought that down to 6 hours. Be aware, though, that Acumatica limits your processing power to the licensed number of cores; the license we used was an enterprise license, so we could use a lot of cores. Here is a great article that talks about bulk loading: Optimizing Large Import
I'm building some custom content types to capture customer data on a website. Admins will enter the data, users will be able to view it, but I also need to be able to bolt on some statistics and infographics to the data.
The problem I have is that I can't see any simple way of doing this within Drupal. Are there modules which can produce simple stats on selected node types, or will I have to write a complete custom module using the data abstraction layer?
Thanks for any insights!
Yeah, it turns out that if you want truly custom stats, the simplest thing is to build whatever you need in PHP using the data abstraction layer.
Plug into the database via Drupal and do whatever you need to do...