Getting FAGLL03H report using pyrfc - python-3.x

This is a mixed question between SAP and the usage of the pyrfc module. I need to use the FAGLL03H transaction code (tcode) to replicate a G/L report into a database on a daily basis. Now, the thing is that FAGLL03H is not a table per se, but a G/L Account Line Item Browser (G/L View), so I need to access that Tcode and pass a series of parameters in order to get the information we need.
1. How can I use the RFC protocol to access that tcode and generate a report?
2. Is it possible to do (1) through pyrfc?
This is the code I use to query tables:
import pyrfc
from pprint import PrettyPrinter

conn = pyrfc.Connection(ashost=...)  # connection parameters elided

# Read TCURR in batches of 10 rows via the generic table reader
options = [{'TEXT': "FCURR = 'USD'"}]
pp = PrettyPrinter(indent=4)
ROWS_AT_A_TIME = 10
rowskips = 0

while True:
    print(u"----Begin of Batch---")
    result = conn.call('RFC_READ_TABLE',
                       QUERY_TABLE='TCURR',
                       OPTIONS=options,
                       ROWSKIPS=rowskips,
                       ROWCOUNT=ROWS_AT_A_TIME)
    pp.pprint(result['DATA'])
    rowskips += ROWS_AT_A_TIME
    if len(result['DATA']) < ROWS_AT_A_TIME:
        break

No way
No
The main point you need to understand is the difference between an SAP transaction (tcode) and an SAP RFC. The difference is huge and makes it impossible to use them in a similar manner. You are trying to call the FAGLL03H report like a table via RFC_READ_TABLE, but it is not a table; it is much more, it is a transaction.
An SAP tcode is nothing more than a shortcut in SAP that points to some program, usually a GUI program, which can contain hundreds of modules, including RFC-enabled ones. Some of these modules are internal and have no RFC equivalent, so it is impossible to call them remotely; and, last but not least, you would need to know how to call them (in what order) and which parameters to pass.
An SAP RFC is like a container for ABAP code (but also a protocol for calling this code) which implements some functionality, either a small piece like converting character case or converting measure units, or a huge one, for example posting financial documents or creating enterprise hierarchy objects like work centers, cost centers, sales organizations, etc. RFC modules can be likened to Python modules or Java methods: they are usually implemented for one single task and are usually used not standalone but in combination with other modules.
The above-mentioned transaction is huge, is intended for outputting G/L account line items, and cannot be called via PyRFC. PyRFC is limited to calling only the RFC-enabled modules that FAGLL03H consists of.
The only thing you can do here is find equivalent function modules which return the same items as FAGLL03H. Possible candidates:
BAPI_GLX_GETDOCITEMS
FAGL_GET_OPEN_ITEMS_GL
FAGL_GET_OPEN_ITEMS_KU
FAGL_GET_OPEN_ITEMS_LI
FAGL_GET_OPEN_ITEMS
FKK_GL_LINE_ITEMS_SELECT
BAPI_AP_ACC_GETBALANCEDITEMS
BAPI_AR_ACC_GETBALANCEDITEMS
BAPI_AP_ACC_GETOPENITEMS
BAPI_AR_ACC_GETOPENITEMS
You should try each of them and compare the output with your tcode to check whether it is identical. Only then can you use PyRFC to call them.
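For illustration, here is a rough sketch of how you could inspect one of these candidates with PyRFC before committing to it; the module name comes from the list above, but its parameters are not documented here, so the sketch only prints the interface rather than guessing parameter names (check them in transaction SE37 as well):

import pyrfc
from pprint import PrettyPrinter

pp = PrettyPrinter(indent=4)
conn = pyrfc.Connection(ashost=...)  # connection parameters elided, as in the question

# If the module is not remote-enabled, this call will fail, which already
# tells you it cannot be used via RFC at all.
fm_name = 'FAGL_GET_OPEN_ITEMS'  # try each candidate from the list above
desc = conn.get_function_description(fm_name)
pp.pprint(desc.parameters)  # importing/exporting/tables parameters of the module

# Once you know the interface, call it the same way as RFC_READ_TABLE above:
# result = conn.call(fm_name, <parameters discovered above>)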

Check this in order to get all the specific Tables:
https://www.recercat.cat/bitstream/handle/2072/5419/PFCLopezRuizAnnex3.pdf?sequence=4
You can then either build from there, or create a query (transaction SQ01) and execute it through RSAQ_REMOTE_QUERY_CALL.
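For the SQ01 route, a rough sketch of what the PyRFC call could look like; the query name ZGL_LINE_ITEMS and user group ZFI are placeholders for a query you would create yourself, and the parameter names should be verified against RSAQ_REMOTE_QUERY_CALL in your system:

import pyrfc
from pprint import PrettyPrinter

pp = PrettyPrinter(indent=4)
conn = pyrfc.Connection(ashost=...)  # connection parameters elided

# Placeholder query/user group created beforehand in SQ01
result = conn.call('RSAQ_REMOTE_QUERY_CALL',
                   QUERY='ZGL_LINE_ITEMS',
                   USERGROUP='ZFI',
                   SKIP_SELSCREEN='X',
                   DATA_TO_MEMORY='X')

# The flattened list lines typically come back in the LDATA table
pp.pprint(result.get('LDATA', result))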
Your business requirements should decide your code, not the opposite.

Related

CQRS Read Model Projections: How complex is too complex a data transformation

I want to sanity-check myself on a view projection, with regard to whether an intermediary concept can exist purely in the read model while providing a bridge between commands.
Let me use a contrived example to explain.
We place an order which raises an OrderPlaced event. The workflow then involves generating a picking slip, which is used to prepare a shipment.
A picking slip can be generated from an order (or group of orders) without any additional information being supplied from any external source or user. Is it acceptable then that the picking slip can be represented purely as a read model?
So:
PlaceOrderCommand -> OrderPlacedEvent
OrderPlacedEvent -> PickingSlipView
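For what it's worth, a minimal sketch of such a projection (class and field names are invented for illustration, not taken from any framework):

from dataclasses import dataclass, field

@dataclass
class OrderPlacedEvent:
    order_id: str
    lines: list  # e.g. [{"sku": "XXXX", "qty": 2}, ...]

@dataclass
class PickingSlipView:
    order_id: str
    open_lines: list = field(default_factory=list)

class PickingSlipProjection:
    """Read-model projector: builds picking slips purely from order events."""
    def __init__(self):
        self.slips = {}  # order_id -> PickingSlipView

    def on_order_placed(self, event: OrderPlacedEvent):
        self.slips[event.order_id] = PickingSlipView(event.order_id, list(event.lines))

    def on_shipment_prepared(self, order_id: str, shipped_skus: set):
        slip = self.slips[order_id]
        slip.open_lines = [l for l in slip.open_lines if l["sku"] not in shipped_skus]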
The warehouse manager can then view a picking slip, select the lines they would like to ship, and then perform a PrepareShipment command. A ShipmentPrepared event will then update the original order, and remove the relevant lines from the PickingSlipView.
I know it's a toy example, but I have a conceptually similar use case where a colleague believes the PickingSlip should be a domain entity/aggregate in its own right, as it's conceptually different to order. So you have PlaceOrder, GeneratePickingSlip, and PrepareShipment commands.
The GeneratePickingSlip command however simply takes an order number (identifier), transforms the order data into a picking slip entity, and persists the entity. You can't modify or remove a picking slip or perform any action on it, apart from using it to prepare a shipment.
This feels like introducing unnecessary overhead on the write model, for what is ultimately just a transformation of existing information to enable another command.
So (and without delving deeply into the problem space of warehouses and shipping)...
Is what I'm proposing a legitimate use case for a read model?
Acting as an intermediary between two commands, via transformation of some data into a different view. Or, as my colleague proposes, should every concept be represented in the write model in all cases?
I feel my approach is simpler, and avoiding unneeded complexity, but I'm new to CQRS and so perhaps missing something.
Edit - Alternative Example
Providing another example to explore:
We have a book of record for categories, where each record is information about products and their location. The book of record is populated by an external system, and contains SKU numbers, mapped to available locations:
Book of Record (Electronics)
SKU#   Location1   Location2   Location3   ...   Location10
XXXX   Introduce   Remove      Introduce   ...   N/A
YYYY   N/A         Introduce   Introduce   ...   Remove
Each book of record is an entity, and each line is a value object.
The book of record is used to generate different Tasks (which are grouped in a TaskPlan to be assigned to a person). The plan may only cover a subset of locations.
There are different types of Tasks. One TaskPlan is for the individual who is at a location to add or remove stock from shelves; call this an AllocateStock task. Another type of Task exists for a regional supervisor managing multiple locations, to check that shelving properly follows store guidelines, say a CheckDisplay task. For allocating stock we are interested in both Introduced and Removed SKUs; for checking the displays we are only interested in newly Introduced SKUs, etc.
We are exploring two options:
Option 1
The person creating the tasks has a View (read model) that allows them to select Book of Records. Say they select Electronics and Fashion. They then select one or more locations. They could then submit a command like:
GenerateCheckDisplayTasks(TaskPlanId, List<BookOfRecordId>, List<Locations>)
The commands would then orchestrate going through the records, filtering out locations we don't need, processing only the 'Introduced' items, and creating the corresponding CheckDisplayTasks for each SKU in the TaskPlan.
Option 2
The other option is to shift the filtering to the read model before generating the tasks.
When a book of record is added a view model for each type of task is maintained. The data might be transposed, and would only include relevant info. ie. the CheckDisplayScopeView might project the book of record to:
Category                       SKU    Location
Electronics (BookOfRecordId)   XXXX   Location1
Electronics (BookOfRecordId)   XXXX   Location3
Electronics (BookOfRecordId)   YYYY   Location2
Electronics (BookOfRecordId)   YYYY   Location3
Fashion (BookOfRecordId)       ...    ... etc
When generating tasks, the view enables the user to select the category and locations they want to generate the tasks for. Perhaps they select the Electronics category and Location 1 and 3.
The command is now:
GenerateCheckDisplayTasks(TaskPlanId, List<BookOfRecordId, SKU, Location>)
Where the command now no longer is responsible for the logic needed to filter out the locations, the Removed and N/A items, etc.
So the command for the first option just submits the ID of the entity that is being converted to tasks, along with the filter options, and does all the work internally, likely utilizing domain services.
The second option offloads the filtering aspect to the view model, and now the command submits values that will generate the tasks.
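To make the difference concrete, the two command shapes might look roughly like this (the names mirror the ones in the question and are purely illustrative):

from dataclasses import dataclass
from typing import List, Tuple

# Option 1: the command carries only identifiers and filter options;
# the write side does the filtering internally.
@dataclass
class GenerateCheckDisplayTasksByFilter:
    task_plan_id: str
    book_of_record_ids: List[str]
    locations: List[str]

# Option 2: the read model has already done the filtering; the command
# carries the pre-resolved (book of record, SKU, location) values.
@dataclass
class GenerateCheckDisplayTasksFromSelection:
    task_plan_id: str
    selections: List[Tuple[str, str, str]]  # (book_of_record_id, sku, location)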
Note: In terms of the guidance that Aggregates shouldn't appear out of thin air, the Task Plan aggregate will create the Tasks.
I'm trying to determine if option 2 is pushing too much responsibility onto the read model, or whether this filtering behavior is more applicable there.
Sorry, I attempted to use the PickingSlip example as I thought it would be a more recognizable problem space, but realize now that there are connotations that go along with the concept that may have muddied the waters.
The answer to your question, in my opinion, very much depends on how you design your domain, not how you implement CQRS. The way you present it, it seems that all these operations and aggregates are in the same Bounded Context but at first glance, I would think that there are 3 (naming is difficult!):
Order Management or Sales, where orders are placed
Warehouse Operations, where goods are packaged to be shipped
Shipments, where packages are put in trucks and leave
When an Order is Placed in Order Management, Warehouse reacts and starts the Packaging workflow. At this point, Warehouse should have all the data required to perform its logic, without needing the Order anymore.
The warehouse manager can then view a picking slip, select the lines they would like to ship, and then perform a PrepareShipment command.
To me, this clearly indicates the need for an aggregate that will ensure the invariants are respected. You cannot select items not present in the picking slip, you cannot select more items than the quantities specified, you cannot select items that have already been packaged in a previous package and so on.
A ShipmentPrepared event will then update the original order, and remove the relevant lines from the PickingSlipView.
I don't understand why you would modify the original order. Also, removing lines from a view is not a safe operation per se. You want to guarantee that concurrency doesn't cause a single item to be placed in multiple packages, for example. You guarantee that using an aggregate that contains all the items, generates the packaging instructions, and marks the items of each package safely and transactionally.
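A minimal sketch of such an aggregate enforcing those invariants (names and structure are invented for illustration):

class PickingSlip:
    """Aggregate: guards which items may be packaged, and how many, exactly once."""
    def __init__(self, slip_id, picked_quantities):
        self.slip_id = slip_id
        self.picked = dict(picked_quantities)  # sku -> picked quantity
        self.packaged = {}                     # sku -> already packaged quantity

    def prepare_shipment(self, requested):     # requested: {sku: qty}
        for sku, qty in requested.items():
            if sku not in self.picked:
                raise ValueError(f"{sku} is not on this picking slip")
            if self.packaged.get(sku, 0) + qty > self.picked[sku]:
                raise ValueError(f"cannot package more of {sku} than was picked")
        for sku, qty in requested.items():
            self.packaged[sku] = self.packaged.get(sku, 0) + qty
        return {"event": "ShipmentPrepared", "slip_id": self.slip_id, "items": requested}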
Acting as an intermediary between two commands
Aggregates execute the commands, they are not in between.
Viewing it from another angle, an indication that you need that aggregate is that the PrepareShippingCommand needs to create an aggregate (Shipping), and according to Udi Dahan, you should not create aggregate roots (out of thin air). Instead, other aggregate roots create them. So, it seems fair to say that there needs to be some aggregate, which ensures that the policies to create shippings are applied.
As a final note, domain design is difficult and you need to know the domain very well, so it is very likely that my proposed solution is not correct, but I hope the considerations I made on each step are helpful to you to come up with the right solution.
UPDATE after question update
I read the updated question a couple of times and updated my answer several times, but every time I ended up with answers very specific to your example again, and I'm most likely missing a lot of details needed to actually be helpful (I'd be happy to discuss it on another channel though). Therefore, I want to go back to the first sentence of your question to add an important comment that I missed:
an intermediary concept can purely exist in the read model, while providing a bridge between commands.
In my opinion, read models are disposable. They are not a single source of truth. They are a representation of the data to easily fulfil the current query needs. When these query needs change, old read models are deleted and new ones are created based on the data from the write models.
So, based on this alone, I would recommend not preparing a read model to facilitate your command operations.
I think that your solution is here:
When a book of record is added a view model for each type of task is maintained. The data might be transposed, and would only include relevant info.
If I understand it correctly, what you should do here is not create a view model, but create an Aggregate (or several). This aggregate can then receive the commands, apply the business rules and mutate its state. So, instead of having a domain service reading data from "clever" read models and putting it all together, you have an aggregate which encapsulates the data it needs and the business logic.
I hope it makes sense. It's a broad topic and we could talk about it for hours probably.

Getting data fields AND associated entities into a Word file

NB. This question might be a bit similar to another one but still result in a different answer, hence the division in two.
I'd like to print out the contents of an entity, consisting of a subset of the fields and including some rudimentary information on its associated entities.
E.g. I'd like to print a lead by displaying its name and phone number but also a list of associated instances of type new_somesome (their new_name and new_size, for example) connected to it.
How can I do that?
I've tried cheating by using a mail template, but that doesn't include the related entities. I'm not sure how to edit the default print view to include the associated entities either.
Is it acceptable for you to use reports?
For this purpose you can try the free MS Report Builder. It allows you to build reports without using such a large tool as MS Business Intelligence. These custom reports appear to be more flexible than the system ones. Another way is to try the Mail Merge functionality.

Dynamics CRM 2011 Import Data Duplication Rules

I have a requirement in which I need to import data from excel (CSV) to Dynamics CRM regularly.
Instead of using some simple Data Duplication Rules, I need to implement a point system to determine whether a record is considered a duplicate or not.
Let me give an example. For example these are the particular rules for Import:
First Name, exact match, 10 pts
Last Name, exact match, 15 pts
Email, exact match, 20 pts
Mobile Phone, exact match, 5 pts
And then the Threshold value => 19 pts
Now, if a record has both First Name and Last Name matching an old record in the entity, the points will be 25 pts, which is higher than the threshold (19 pts); therefore the record is considered a Duplicate.
If, for example, a particular record only has the same First Name and Mobile Phone, the points will be 15 pts, which is lower than the threshold, so it is considered a Non-Duplicate.
What is the best approach to achieve this requirement? Is it possible to utilize the default functionality of Import Data in the MS CRM? Is there any 3rd party Add-on that answer my requirement above?
Thank you for all the help.
Updated
Hi Konrad, thank you for your suggestions, let me elaborate here:
Excel. You could filter out the data using Excel and then, once you've obtained a unique list, import it.
Nice one, but I don't think it is really workable in my case; the data will be coming in regularly from clients in moderate numbers (hundreds to thousands). Typically the client won't check the data for duplication.
Workflow. Run a process removing any instance calculated as a duplicate.
Workflow is a good idea; however, since it is processed asynchronously, my concern is that in some cases the user may already have made updates/changes to the inserted data before the workflow finishes working, therefore creating some data inconsistency or, at the very least, a confusing user experience.
Plugin. On every creation of a new record, you'd check if it's to be regarded as duplicate-ish and cancel it's creation (or mark for removal).
I like this approach. So I just import as usual (for example, into the contact entity), but I already have a plugin in place that gets triggered every time a record is created; the plugin will check whether the record is duplicate-ish or not and take the necessary action.
I haven't fiddled a lot with duplicate detection, but looking at your criteria you might be able to make rules that match them: pretty much three rules to cover your cases, a full name match, a last name and mobile phone match, and an email match.
If you want to do the points system I haven't seen any out of the box components that solve this, however CRM Extensions have a product called Import Manager that might have that kind of duplicate detection. They claim to have customized duplicate checking. Might be worth asking them about this.
Otherwise it's custom coding that will solve this problem.
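If you do end up custom coding it (e.g. in a plugin), the point system itself is simple to express. A sketch of the scoring logic in Python, just to illustrate the rule from the question (a real CRM plugin would be C#, and the field names here are placeholders):

WEIGHTS = {"firstname": 10, "lastname": 15, "email": 20, "mobilephone": 5}
THRESHOLD = 19

def duplicate_score(new_record, existing_record):
    """Sum the weights of all fields that match exactly (case-insensitive)."""
    score = 0
    for field, points in WEIGHTS.items():
        a, b = new_record.get(field), existing_record.get(field)
        if a and b and a.strip().lower() == b.strip().lower():
            score += points
    return score

def is_duplicate(new_record, existing_records):
    return any(duplicate_score(new_record, old) >= THRESHOLD for old in existing_records)

# First Name + Last Name match -> 25 >= 19 -> duplicate
# First Name + Mobile Phone match -> 15 < 19 -> not a duplicate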
I can think of the following approaches to the task (depending on the number of records, the repetitiveness of the import, the automation requirement, etc.); they may all be good in some way. Would you care to elaborate on the current conditions?
Excel. You could filter out the data using Excel and then, once you've obtained a unique list, import it.
Plugin. On every creation of a new record, you'd check if it's to be regarded as duplicate-ish and cancel it's creation (or mark for removal).
Workflow. Run a process removing any instance calculated as a duplicate.
You also need to consider the implications of such elimination of data. There's a mathematical issue. Suppose that the uniqueness radius (i.e. the threshold in this 1D case) is 3. Consider the following set of numbers (it is listed twice, just in a different order).
1 3 5 7 -> 1 _ 5 _
3 1 5 7 -> _ 3 _ 7
Are you sure that's the intended result? Under some circumstances, you can even end up with sets of records of different sizes (depending only on the order). I'm a bit curious about why and how this setup came up.
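A quick way to see the order dependence is to run the greedy elimination yourself (radius 3, as above):

def greedy_dedupe(values, radius=3):
    """Keep a value only if it is at least `radius` away from every kept value."""
    kept = []
    for v in values:
        if all(abs(v - k) >= radius for k in kept):
            kept.append(v)
    return kept

print(greedy_dedupe([1, 3, 5, 7]))  # [1, 5]
print(greedy_dedupe([3, 1, 5, 7]))  # [3, 7] -- same data, different survivors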
Personally, I'd go with the plugin, if the above is OK with you. If you need to make sure that some of the unique-ish elements never get omitted, you'd probably be best off applying a test algorithm to a backup of the data. However, that may defeat its purpose.
In fact, it sounds so interesting that I might create the solution for you (just to show it can be done) and blog about it. What's the deadline?

Integrating with 500+ applications

Our customers use 500+ applications, and we would like to integrate these applications with ours. What is the best way to do that? These applications are time registration applications, and common to most of them is that they can export to CSV or similar; some of them are actually home-brewed Excel sheets where time is registered.
The best idea so far is to create our own Excel sheet, which can be used to integrate with all these applications. The integrations could be in the form of cells containing something like ='[c:\export.csv]rawdata'!$A$3, where export.csv is the CSV file exported from the time registration application. Can you see a better way to integrate with all these applications? It should be mentioned that almost all our customers have Microsoft Office.
Edit: Answers to the excellent questions from Pontus Gagge:
How similar are the data in the different applications?
I assume that, since they are time registration applications, they will have some similarities, but I assume that some will register how long one has worked in total for a whole month, while others will specify it for each day. If Excel is chosen, I believe that many of the differences could be ironed out using basic formulas.
What quality is the data?
The quality of the data can vary, so basic validation must be undertaken. A good approach is also to make it transparent to the customers how our application understands their input, so that they are responsible for it.
How large amounts of data are you talking about?
There will be information about the time worked for up to 50 employees.
Is the integration one-way only?
Yes
With what frequency should information be transferred?
Once per month (when they need to pay salaries).
How often do the applications themselves change, and how often does your product change?
If their application is a home-brewed Excel sheet, then I assume it will change about once a year (due, for example, to a mistake someone made). If it is a proper, standard time registration application, then I do not believe they are updated more often than every five years or so, as it is a very stable concept.
Should the integration be fully automatic or can your end users trigger a data transfer?
They can surely trigger the data transfer. The users are often dedicated to the process, so they can be trained to do it, which means that they could make up to, say, 30 mouse clicks in order to integrate each month.
Will the customers have somebody to monitor the integrations?
As we have many customers, many of them should be able to undertake the integration themselves. We will, though, be able to assist them over the telephone. We cannot, however, undertake the integration ourselves, because we would then be responsible for any errors due to user mistakes, etc.
Does the phrase 'integration spaghetti' mean anything to you...?
I am looking for ideas from the best chefs to cook a nice large portion of that.
You need to come up with a common data format, and a way to translate the individual data formats to the common format. There's really no way around this - any solution you come up with will have to do this in one way or the other. It's the essential complexity of what you're doing.
The bigger issue is actually variances within the source data, in terms of how things like dates are stored, missing columns, etc. Doing a generic conversion for CSV to move columns around is comparatively easy.
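A hedged sketch of that translation step: a per-source column mapping into one common record layout (the application names and column headers are made up; the real mappings would come from each application's export):

import csv

# One mapping per source application: source column header -> common field name
SOURCE_MAPPINGS = {
    "app_a": {"Employee": "employee", "Date": "date", "Hours": "hours"},
    "app_b": {"Worker name": "employee", "Day": "date", "Time worked": "hours"},
}

def to_common_format(csv_path, source):
    mapping = SOURCE_MAPPINGS[source]
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            yield {common: row.get(src, "").strip() for src, common in mapping.items()}

# for record in to_common_format("export.csv", "app_a"):
#     validate_and_store(record)  # hypothetical downstream step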
I would also look at CSV and then use an OLEDB connection against the CSV file for importing.
If you try to make something that can interface to any data structure in the universe (and 500 is plenty close enough), it is guaranteed to be a maintenance nightmare. Instead I would approach this from multiple angles:
Devise an interface into which a human can enter this data already in the proper format. With 500+ clients, I'd make this a small, raw but functional browser-based site that users can use to enter this information manually. This is the fall-back. At the end of the day, a human can re-key the information into the site and solve the import issue. Ideally, everyone would use this instead of their own format. Data entry people are cheap.
Similar to above, but expanded, I would develop a standard application or standardize on an off-the-shelf application that can be used to replace their existing format. This might take more time than #1. The goal would be to only do one-time imports of these varying data schemas into the application and be done with them for good.
The nice thing about spreadsheets is that you can do anything anywhere. The bad thing about spreadsheets is that you can do anything anywhere. With CSV or a spreadsheet there is simply no way to enforce data integrity and thus consistency (which is the primary goal) on the data. If the source data is already in a database, then that is obviously simpler.
I would be inclined to use a database format into which each of these files needs to be converted, rather than a spreadsheet (e.g. something like Jet (MDB)). If you have non-Windows users then that will make it harder and you might have to use a spreadsheet. The problem is that it is too easy for the user to change their source structure, break their upload and come crying to you. If a given end user has a resident expert, they can find a way of importing the data into that database format. If you are that expert, then I would, on a case-by-case basis, write something that would import into that database format. XML would be the other choice, but that will likely take more coding than an import/export into a database format.
Standardization of the apps (even having all the sources in a database format instead of a spreadsheet would help) and control over the data schema is the ultimate goal, rather than permitting a gazillion formats. There really is no nice answer other than standardization. Otherwise, you will be writing a converter for every Tom, Dick and Harry format, and again whenever someone changes the source format.
With a multitude of data sources, mapping each one correctly to an intermediate format is not trivial. Regular expressions are good with a finite set of known data formats. Multiple passes can help when data is ambiguous without context (e.g. month and day fields when you have several days of data), and can also help defeat data entry errors. But since this data is connected to salaries, there needs to be a good, reliable transfer.
An import configuring trick
Get the customer to make a set of training data in the application. It should have a "predefined unique date", and each subsequent data field should contain a number corresponding to the target data field in your application. On importing, your application needs to recognise the predefined date, determine the unique translation required, display/save this "mapping key", and stop the import. E.g. if you expect "Duration hours" in field two, then get the user to enter 2 in the relevant field, which might be "Attendance hours".
On subsequent runs, and with the mapping definition key, import becomes a fairly easy process of translation (see the sketch after the notes below).
Note on terms
"predefined date" - must be historical, say founding date of your company?, might need to be in PC clock settable range.
"mapping key" - could be string of hex digits and nybble based so tractable to workout
The entered code can be extended to signify required conversions ie customer's application has durations in days and your application expects it in hours.
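A rough sketch of how the training-file trick could work on the import side; the predefined date, field numbers and file name are illustrative only:

import csv

PREDEFINED_DATE = "1901-01-01"  # illustrative "historical" marker date

# Target fields in your application, indexed by the number the customer types
# into the corresponding column of the training row.
TARGET_FIELDS = {1: "employee", 2: "duration_hours", 3: "project"}

def detect_mapping(rows):
    """Find the training row and map source column index -> target field name."""
    for row in rows:
        if row and row[0] == PREDEFINED_DATE:
            return {col: TARGET_FIELDS[int(code)]
                    for col, code in enumerate(row[1:], start=1) if code.strip()}
    raise ValueError("no training row with the predefined date found")

with open("customer_export.csv", newline="") as f:  # hypothetical export file
    rows = list(csv.reader(f))
mapping = detect_mapping(rows)
# On subsequent runs, apply `mapping` to translate each data row.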
Interfacing with Windows programs (in order of increasing fragility)
Ye Olde saving as CSV file
Print to an operating system printer that is set up as a text file/PDF, then scavenge the data out of that
Extract data via the application's interface control, typically ActiveX for several Windows programs, e.g. like Matlab's Spreadsheet Link
Read the native xls file format, e.g. like Matlab's xlsread
Add an additional intermediate spreadsheet sheet that has extended cell references, e.g. ='[filename]rawdata'!$A$3
Have a look at Teiid by JBoss: http://jboss.org/teiid
Also consider using SOA - e.g., if you're on Java, try JBoss SOA platform: http://www.jboss.com/resources/soa/?intcmp=1004
Use a simple XML format. A non-technical person can easily understand a simple XML format (and could even identify basic problems with XML documents that are not well-formed).
Maybe use a DTD (or, even better, an XML schema) to do very basic validation, and then supplement this with an XSL stylesheet to do more validation with better error reporting. (An XSL stylesheet simply converts from XML to something else, and so can generate readable error messages.)
The advantage of this approach is that web browsers such as Internet Explorer can apply the XSL stylesheets. A customer need only spend at most a day enhancing their applications or writing Excel macros to generate the XML data in the format that you specify.
Recent versions of Excel have support for converting spreadsheet data to XML, and can even validate against schemas.
Once the data passes the XSL validation checks, you have validated XML data.
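For the schema-validation step itself, a small sketch using lxml; the schema and document file names are placeholders:

from lxml import etree

schema = etree.XMLSchema(etree.parse("timesheet.xsd"))  # your published schema
doc = etree.parse("customer_timesheet.xml")             # the customer's upload

if schema.validate(doc):
    print("XML is valid, continue with the import")
else:
    # error_log gives line numbers and messages you can send back to the customer
    for error in schema.error_log:
        print(error.line, error.message)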
If you have heaps of data and heaps of money, you could look at existing data management and cleansing tools:
http://www-01.ibm.com/software/data/infosphere/datastage
http://www-01.ibm.com/software/data/infosphere/qualitystage
But even then, you'll likely need to follow kyoryu's suggestion, assuming you have 500+ data formats. The problem isn't on your side. You need them to standardize their output formats if you have no control over their apps. CSV is likely the easiest. You could even send them an Excel template to help them along.

Can an excel worksheet be used as UDF?

I'm building a network business model in excel. A similar model is that of Gawker Media.
In my model I have a number of properties that have some overlap in audience. Each property attracts users, which in turn affords cross-promotional opportunities. In the case of Gawker, they have a series of blogs whose audience will likely read several of the blogs in their network.
If Gawker launched a new blog, they're able to direct traffic to it from their blog network.
Creating a model for a single blog is fairly simple - although the initial assumptions are harder. The next step is to model the network effect.
Excel provides a scenarios manager that allows me to vary the key assumptions in the basic model. This is almost perfect, I can model the launch of 10 properties, each with different launch assumptions and see the summary.
Where I need help is figuring out how I can vary the initial number of users for the launch of each property. In other words, once the network is established, it's possible to drive people to any new property launched on the network.
I don't believe the scenario manager will do what I need.
So, I'm wondering if it's possible to use the model worksheet as a UDF? The UDF would need to spit out the monthly revenue and unique users given a number of input assumptions.
I would then be able to create my own summary sheet for the 10 properties and using the total uniques for each property get a summary for the network. This network summary would be used to determine how many people could be driven to the launch of a new property.
In effect, the only difference to the scenario manager is that I need one of my input variables (initial users) to be programmatically generated as a function of the number of people in the network at the time of launch.
I'm hoping it's possible to achieve something along these lines in Excel. I could drop down and create the whole model in Java, but then it's much harder to share with business colleagues!
Thanks - Matt.
You could try Data Table.
It only allows you to analyse the effect of varying 2 input parameters, but you can create several data tables, and each parameter can take hundreds of different values.
It's little known, but efficient, and has been available since Excel 3.0.
There is a product that I have researched but never used - search for calc4web. It takes a sheet of formulas and generates code (C++) that can be compiled into an XLL add-in. Then you can call a function that does what your sheet does. But of course then you have an XLL to distribute, and a build step every time you change your logic, which defeats much of the point of using a spreadsheet.
In my case, I wound up writing some very simple VBA code to vary my sheet "inputs" using the scenario manager, and capture my "outputs". This works if you have a batch of inputs that you can just point your macro at and step through.
EDIT:
See here for a VBA-only example of doing this:
using a sheet in an excel user defined function
