Despite checking everything character by character, Amazon constantly complains that my EDI file "did not conform" to their specifications.
The EDI file matches their own example as closely as possible, without changing prices/product IDs...
For me it turned out to be the prices... specifically the TDS segment. The TDS segment is used
To specify the total invoice discounts and amounts
...
Description:
Monetary amount
External Information:
This field will be the total invoice amount including any applicable charges,
allowances and tax.
...
1. TDS01 is the total amount of invoice (including charges, less allowances)
before terms discount (if discount is applicable).
The only problem is that Amazon's X12 specs fail to mention that the TDS amount is recorded in CENTS!!! $10.23 has to be recorded as 1023.
TDS*1023
instead of
TDS*10.23
I never would have expected this; the only way I noticed was by using EDINotepad.
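To make that concrete, here's a minimal Python sketch of the implied-decimal formatting (the helper name is mine, not anything from Amazon's spec or an EDI library):

from decimal import Decimal

def tds_amount(total_invoice: Decimal) -> str:
    # TDS01 carries an implied two decimal places, so the value is
    # written in cents with no decimal point.
    cents = (total_invoice * 100).quantize(Decimal("1"))
    return f"TDS*{cents}"

print(tds_amount(Decimal("10.23")))  # -> TDS*1023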
Out of curiosity I was inspecting the official registry of 16-bit UUIDs:
https://btprodspecificationrefs.blob.core.windows.net/assigned-values/16-bit%20UUID%20Numbers%20Document.pdf
Is it me, or is the number space reserved for companies running out? What I mean is that even though 16 bits are enough for 65,536 numbers, in the document we see that company IDs start from 0xFCDC (64732) and go up.
The greatest company ID seems to be 0xFEFF (= 65279), held by 'GN Netcom'. So there seem to be only around 244 IDs left to be bought.
If those 244 IDs are exhausted what will the Bluetooth SIG do? Expand to 32bit or try to recycle 16bit IDs that have already been used by companies which have vanished?
You are mistaken. This list grows downwards, and the 16-bit UUID for Members 0xFEFF (GN Netcom) was the first one registered. The latest entry is 0xFCDC (Amazon.com Services, LLC), so I assume there is still plenty of room for years to come. And last but not least, a fee of USD 3,000 is charged for registering a 16-bit UUID.
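As a quick sanity check on that arithmetic (assuming the member block really is allocated contiguously downward from 0xFEFF, as the document suggests):

first_registered = 0xFEFF    # GN Netcom, the earliest member entry
latest_registered = 0xFCDC   # Amazon.com Services, LLC, the latest entry
print(first_registered - latest_registered + 1)  # 548 values in that span so far, allocated downward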
Is there a way to assign a score for mail sent between certain hours? I find a lot of spam is sent in the middle of the night, so I would like to give anything between, say, 2am and 5am a score of 2 or 3.
You can use SpamAssassin to penalize mail received within certain hours, but it's messy.
Before we start, verify that SA's primary defenses are properly set up:
DNS Blocklists, including DNSBLs & URIBLs, are a necessity; set them up before all else
Bayes in SpamAssassin is another must-have, though it requires training
Use Razor for fuzzy matching (see also Installing Razor) if the license works for you.
If that's insufficient, then you can address the sort of issue you're keying on. Try:
The RelayCountry plugin to penalize countries you never converse with
The TextCat plugin to discriminate against the languages you never converse in
If all that doesn't help enough, then (and only then) you can consider what you proposed. Read on.
Don't forget about time zones; this is why you can't use the Date header (it reflects the sender's clock and time zone). This type of rule is not safe for deployments whose conversations span too many time zones, and you must ensure your MX servers are all consistent and in the same time zone. Be aware that daylight saving time (aka “summer time”) can be annoying here.
Identify a relay that your receiving infrastructure adds but is added before SpamAssassin runs (so SA can see it). This will manifest as a Received header near the top of your email. Again, make sure it's actually visible to SpamAssassin; the Received header added by your IMAP server will not be visible.
It is possible that you have SpamAssassin configured to run before any internal relay is stamped into the message. If this is the case, do not proceed further as you cannot reliably determine the local time.
Okay, all caveats aside, here's an example Received header:
Received: from external-host.example.com
(external-host.example.com [198.51.100.25])
by mx.mydomain with ESMTPS id ABC123DEF456;
Fri, 13 Mar 2020 12:34:56 -0400 (EDT)
This must be a header one of your systems adds or else it could have a different time zone, clock skew, or even a forged timestamp.
Match that in a rule that clearly denotes you as the author (by convention, start it with your initials):
header CL_RCVD_WEE_HOURS Received =~ /\sby\smx\.mydomain\swith\s[^:]{9,64}+:(?<=[0 ][2-4]:)[0-9:]{5}\s-0[45]00\s/
describe CL_RCVD_WEE_HOURS Received by our mx.mydomain relay between 2a and 5a EST/EDT
score CL_RCVD_WEE_HOURS 0.500
A walk through that regex (see also an interactive explanation at Regex101):
First, you need to verify that it's your relay, matched by name: by mx.mydomain with
Then, skip ahead 9-64 non-colon characters (quickly, with no backtracking, thus the + sign). You'll need to verify your server doesn't have any colons here
The real meat is in a look-behind (since we actually skipped over the hour for speed purposes), which seeks the leading zero (or else a space) and then the 2, 3, or 4 (not 5 since we don't want to match a time like 05:59:59)
Finally, there's a sanity check to ensure we're looking at the right time zone. I assumed you're in the US on the east coast, which is -0400 or -0500 depending on whether daylight savings is in effect
So you'll need to change the server name, review whether the colon trick works with your relay, and possibly adjust the time zone regex.
I also gave this a lower score than you desired. Start low and slowly raise it as needed. 3.000 is a really high value.
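If you want to sanity-check the pattern outside SpamAssassin before deploying it, here's a small Python sketch (the possessive {9,64}+ is written as a plain greedy {9,64} because Python's re module only accepts possessive quantifiers from 3.11 onward; the host name and header are the example from above, with the time moved into the 2a-5a window):

import re

RULE = re.compile(
    r"\sby\smx\.mydomain\swith\s[^:]{9,64}:(?<=[0 ][2-4]:)[0-9:]{5}\s-0[45]00\s"
)

# The header unfolded onto one line, as SpamAssassin would see it.
received = ("Received: from external-host.example.com"
            " (external-host.example.com [198.51.100.25])"
            " by mx.mydomain with ESMTPS id ABC123DEF456;"
            " Fri, 13 Mar 2020 02:34:56 -0400 (EDT)")

print(bool(RULE.search(received)))                        # True: 02:34 is in the window
print(bool(RULE.search(received.replace("02:", "12:"))))  # False: 12:34 is not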
Everyone knows that MRTG needs at least one value passed on its input.
Among its per-target options, MRTG has 'gauge', 'absolute', and a default (no option) behaviour for what to do with incoming data, i.e. how to count it.
Let's look at an elementary yet popular example:
We pass cumulative data from network interface statistics, i.e. how many packets were received by the interface.
We take it from '/proc/net/dev' or look at the 'ifconfig' output for a certain network interface. The number of received bytes keeps increasing; it's cumulative.
As I imagine it, there could be two types of possible statistics:
1. How fast the value changes over a time interval; in other words, activity.
2. A simple, ever-growing graph that just draws every new value each minute (or any other time interval).
The first graph will be jumpy (activity). The second will just keep growing.
I have read rrdtool's and MRTG's docs twice and still can't work out which of the options above counts what.
I suppose (I am not sure) that 'gauge' draws values as-is, without any differentiation (good for measuring how much memory or CPU is used every 5 minutes), while the default and 'absolute' behaviours try to calculate the rate between neighbouring measurements; but what's the difference between those last two?
Can you explain, in a simple manner, which behaviour corresponds to which of the three options?
Thanks in advance.
MRTG assumes that everything is being measured as a rate (even if it isn't a rate).
Type 'gauge' assumes that you have already calculated the rate; thus, the provided value is stored as-is (after Data Normalisation). This is appropriate for things like CPU usage.
Type 'absolute' assumes the value passed is the count since the last update. Thus, the value is divided by the number of seconds since the last update to get a rate in thingies per second. This is rarely used, and only for certain unusual data sources that reset their value on being read - eg, a script that counts the number of lines in a log file, then truncates the log file.
Type 'counter' (the default) assumes the value passed is a constantly growing count, one that may wrap around at 16 or 64 bits. The difference between the value and its previous value is divided by the number of seconds since the last update to get a rate in thingies per second. If it sees the value decrease, it will assume a counter wraparound at 16 or 64 bits. This is appropriate for something like network traffic counters, which is why it is the default behaviour (MRTG was originally written for network traffic graphs).
Type 'derive' is like 'counter', but will allow the counter to decrease (resulting in a negative rate). This is not possible directly in MRTG but you can manually create the necessary RRD if you want.
All types subsequently perform Data Normalisation to adjust the timestamp to a multiple of the Interval. This will be more noticeable for Gauge types where the value is small than for counter types where the value is large.
For information on this, see Alex van der Bogaerdt's excellent tutorial.
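If it helps to see the four types side by side, here is a rough Python sketch of the rate each one would store from a pair of samples (my own simplification; it ignores Data Normalisation, heartbeats and unknown handling):

def stored_rate(dtype, value, prev_value, seconds, wrap=2**64):
    if dtype == "gauge":
        # Value is already a rate; stored as-is.
        return value
    if dtype == "absolute":
        # Value is the count since the last read (source resets on read).
        return value / seconds
    if dtype == "counter":
        # Ever-growing count; a decrease is treated as a wraparound
        # (64 bits assumed here for simplicity).
        delta = value - prev_value
        if delta < 0:
            delta += wrap
        return delta / seconds
    if dtype == "derive":
        # Like counter, but decreases are allowed (negative rates).
        return (value - prev_value) / seconds
    raise ValueError(dtype)

# An interface byte counter sampled 300 seconds apart:
print(stored_rate("counter", 1_500_000, 900_000, 300))  # 2000.0 bytes per second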
When defining a dataset to be created, the JCL DCB parameter has a sub-parameter RECFM, with possible values of F, FB, V, VB, etc. What are the advantages/disadvantages of RECFM=FB over RECFM=F, or RECFM=VB over RECFM=V? And in which cases should which RECFM format be used?
RECFM is short for record format.
F represents fixed length records, unblocked. FB represents fixed length records, blocked. Blocking stores multiple records in a disk block, while the unblocked format stores one record in a disk block. At one time, disk drives were so slow that the unblocked format provided relative speed, while the blocked format provided better disk usage. Today, with modern disk drives, there's no advantage to using the unblocked format.
V represents variable length records, unblocked. VB represents variable length records, blocked. You would use these formats if you have variable length records, rather than fixed length records. You need to add 4 to the maximum record length in the LRECL to account for the record length field.
There's an additional attribute character, A. Used with fixed blocked (FBA) or variable blocked (VBA), this tells the system that the first byte of your record is a printer control character.
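As a rough back-of-the-envelope illustration of why blocking matters, here is a small Python sketch; the record count and the 27,920-byte block size (349 records of 80 bytes) are my own example numbers, not anything from the question:

records = 10_000
lrecl = 80          # RECFM=F / FB record length
blksize = 27_920    # 349 * 80; a commonly chosen FB block size

unblocked_blocks = records                          # RECFM=F: one record per block
blocked_blocks = -(-records // (blksize // lrecl))  # RECFM=FB: 349 records per block

print(unblocked_blocks, blocked_blocks)  # 10000 vs 29 blocks to read/write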
I would like to know if it is possible to do a full statement (between a date range) through ISO 8583. I have seen ATMs which do full statements and was wondering what method they use. I know balance inquiry and mini-statements are possible on a POS device over 8583.
If it is possible, does anyone have any information on the structure of the message, ideally for Flexcube?
We did something similar to that back in 1999 in one of the banks, where we would send the statement data in one of the generic private-use fields, which allows the format ANS 999.
That means you either have to restrict the data to 999 characters or less, or split the data across multiple messages and have a multi-legged transaction.
You would have the following flow:
Customer requests a statement at the ATM
ATM sends an NDC/D912 message to the ATM switch
ATM switch looks up the account number after authenticating the card and forwards the request to the core banking application
Core banking application generates the statement, formats it according to a predesigned template, and sends the statement data in a generic field (say, 72)
ATM switch collects the data and formats it into NDC or D912, where the statement data is tagged to the statement printer (in NDC this is a field called 'q' and the value should be '8' - print on statement printer only)
and the preformatted data is placed in field 'r'
However, it is not good practice to do so, since there are faster ways to generate a statement and deliver it by email or internet banking; but this is the bank's preference anyway.
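A minimal sketch of that splitting step in Python (my own illustration; the helper name is made up and this is not Flexcube's or any switch's actual API):

def split_statement(statement_text, max_len=999):
    # Each chunk has to fit the ANS 999 private-use field, so the statement
    # is carried across as many message legs as needed.
    return [statement_text[i:i + max_len]
            for i in range(0, len(statement_text), max_len)]

legs = split_statement("LINE 1 ...\n" * 400)   # some preformatted statement data
print(len(legs), [len(chunk) for chunk in legs])  # 5 legs: 999, 999, 999, 999, 404 characters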
It depends upon the implementation.
I implemented an NCR central switch, where I incorporated the initial checks in the central application itself rather than passing everything to the auth host.
My implementation:
The ATM sends the transaction request (NDC), based on the state machine set up in the ATM, to the central application.
The central application does basic checks such as the validity of the BIN (the initial 6 digits of the card number) and also checks whether the requested amount of cash is available in the ATM, etc.
Then the central app sends the packet (ISO 8583/BASE24) to the acquirer for further processing.
The acquirer sends it to the CA, and then it goes to the issuer for approval.
Hope this helps.
The mini-statement is not part of ISO 8583 (or MVA). It is usually implemented as a proprietary extension. Hence you need to go to an ATM owned by your bank, or one that is part of a consortium of banks sharing an ATM infrastructure with your bank.
We implemented mini-statements in our ISO 8583 specification using a $0.00 0200 message (DE003 = 91xxxx), with the statement data coming back from the host in DE125, on both Connex and Base24, and then modified our stateful loads to print the data at the ATM.
Full statements fell out of use years ago, though, so we cut this back to mini-statements only, using the receipt printer rather than full-page statements. There is a limited number of entries, and not all hosts support it, but it is used today on NCR & Diebold ATMs. I've personally participated in the testing to get it working on Base24 and Postilion.
The mini-statement data we do print is 40 characters per line and covers about 10 transactions, I believe.
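For a rough picture of what that exchange might look like, here is a Python sketch with made-up values; the processing code, amount, response code and DE125 formatting are assumptions for illustration, not the actual Connex or Base24 layout:

# Hypothetical 0200 request / 0210 response pair for a mini-statement.
request = {
    "MTI": "0200",
    "DE003": "910000",        # processing code in the 91xxxx range, per the answer
    "DE004": "000000000000",  # $0.00 transaction amount
}

# Host reply carrying preformatted statement text in DE125:
# 40 characters per line, roughly 10 transactions.
lines = [f"{'2020-03-13':<10} {'POS PURCHASE':<18} {'-12.34':>10}" for _ in range(10)]
response = {
    "MTI": "0210",
    "DE039": "00",            # approved
    "DE125": "".join(f"{line:<40}" for line in lines),
}

print(len(response["DE125"]))  # 400 characters: 10 lines x 40 each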