How does source MME resolve target MME to send forward relocation request message? - telecommunication

I am a bit confused here. Can someone please share their thoughts on this query?

Once the handover is decided, the source MME sends a Forward Relocation Request to the target MME.
The method to determine the target MME is based on MME selection functionality in the source MME.
The MME selection function selects an available MME for serving a UE and is based on network topology.
When the source MME selects a target MME, the selection function performs simple load balancing between the possible target MMEs. In networks that deploy dedicated MMEs, e.g. for UEs configured for low access priority, the possible target MMEs selected by the source MME are typically restricted to MMEs with the same dedication.
Refer to the 'MME Selection Function' clause in 3GPP TS 23.401.
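For what it's worth, in deployed EPC networks this resolution is commonly realized as a DNS lookup on an FQDN built from the target TAI (see 3GPP TS 29.303). The rough Python sketch below only illustrates the idea; the FQDN encoding, the PLMN/TAC values and the random pick for load balancing are simplified assumptions, not the normative procedure:

# Illustrative sketch only: resolve candidate target MMEs from a target TAI via DNS
# (TS 29.303 style FQDN) and pick one as a crude form of load balancing.
import random
import socket

def tai_fqdn(mcc, mnc, tac):
    # Assumed encoding: TAC low/high bytes in hex, MNC/MCC zero-padded to 3 digits.
    return ("tac-lb{:02x}.tac-hb{:02x}.tac.epc.mnc{:03d}.mcc{:03d}"
            ".3gppnetwork.org").format(tac & 0xFF, (tac >> 8) & 0xFF, mnc, mcc)

def select_target_mme(mcc, mnc, tac):
    fqdn = tai_fqdn(mcc, mnc, tac)
    # Collect all resolved addresses and pick one candidate target MME.
    candidates = {info[4][0] for info in socket.getaddrinfo(fqdn, None)}
    return random.choice(sorted(candidates))

# Hypothetical example: PLMN 001/01, TAC 0x1234.
# print(select_target_mme(1, 1, 0x1234))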

Related

Trace page table access of a Linux process

I am writing to inquire about the feasibility of tracing the data-page accesses (in terms of an "index" for each page accessed) of an ordinary Linux user application. Basically, what I am doing is reproducing the exploit described in this research article (https://www.ieee-security.org/TC/SP2015/papers-archived/6949a640.pdf). In particular, the data-page accesses need to be recorded in order to infer program secrets.
I understand that on a 64-bit x86 Linux system the page size is 4 KB. I have used Pin (https://software.intel.com/en-us/articles/pin-a-dynamic-binary-instrumentation-tool) to log a trace of addresses for all virtual memory accesses. So can I simply calculate the "index" of each data-page access with the following translation rule?
index = address >> 15
Since 4KB = 2 ^ 15. Is it correct? Thank you in advance for any suggestions or comments.
Also, one thing I want to point out is that, conceptually, I don't need a "precise" identifier for each data page, just a number (an "index") to distinguish accesses to different data pages. This should provide conceptually the same amount of information as in their attack.
OK, so you don't really need an "index", just some unique identifier to distinguish different pages in the virtual address space of a process.
In that case you can just do address >> PAGE_SHIFT. On x86 with 4 KB pages PAGE_SHIFT is 12, so you can do:
page_id = address >> 12
Then if address1 and address2 correspond to the same page the page_id will be the same for both addresses.
Alternatively, to achieve the same result, you could do address & PAGE_MASK, where PAGE_MASK is just 0xfffffffffffff000 (that is ~((1UL << PAGE_SHIFT) - 1)).
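For example, here is a minimal sketch of post-processing a Pin address trace this way; the trace file name and its one-hex-address-per-line format are assumptions about your pintool output:

# Minimal sketch: turn a trace of virtual addresses (one hex address per line,
# as a hypothetical pintool might log them) into per-access page identifiers.
PAGE_SHIFT = 12                       # 4 KB pages on x86-64
PAGE_MASK = ~((1 << PAGE_SHIFT) - 1)  # ...fffff000

with open("mem_trace.txt") as trace:          # hypothetical trace file
    for line in trace:
        address = int(line.strip(), 16)
        page_id = address >> PAGE_SHIFT       # unique per page
        page_base = address & PAGE_MASK       # equivalent identifier: page base address
        print(hex(address), page_id, hex(page_base))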

External source for sample rate of Redhawk system

We are using Redhawk for an FM modulator. It reads an audio modulating signal from a file, performs the modulation, then sends the modulated data from Redhawk to an external program via TCP/IP for DAC and up-conversion to RF.
The data flows through the following components: rh.FileReader, rh.DataConverter, rh.fastfilter, an FM modulator, rh.DataConverter, and rh.sinksocket. The FM modulator is a custom component.
The rh.sinksocket sends data to an external server program that sends the samples from Redhawk to an FPGA and DAC.
At present the sample rate appears to be controlled via the rh.FileReader component. However, we would like the external DAC to set the sample rate of the system, not the rh.FileReader component of Redhawk, for example via TCP/IP flow control.
Is it possible to use an external DAC as the clock source for a Redhawk waveform?
The property on FileReader dictating the sample rate simply tells it what the sample rate of the provided file is. This is used for the Signal Related Information (SRI) passed to downstream components, and it determines the output rate if you do not block or throttle. That is, FileReader does not do any resampling of the given file to meet the given sample rate.
If you want to resample to a given rate you can try the ArbitraryRateResampler component.
Regarding setting these properties via some external mechanism (TCP/IP) you would want to write a specific component or REDHAWK service that listens for this external event and then makes a configure call to set the property you'd like changed.
If these events are global and can apply to many applications on your domain, then a service is the right pattern; if these events are specific to a single application, then a component might make more sense.
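A very rough sketch of that pattern, using a plain TCP listener and a placeholder for the actual configure call (the port, message format and apply_sample_rate function are all hypothetical, not REDHAWK API):

# Rough sketch: listen for an externally supplied sample rate over TCP and
# apply it to a component property. apply_sample_rate() is a placeholder for
# whatever configure call your component or service actually makes.
import socketserver

def apply_sample_rate(rate_hz):
    # Placeholder: in a real component/service this would be a configure call
    # on the property you want changed (e.g. on rh.FileReader or a resampler).
    print("would configure sample rate to", rate_hz, "Hz")

class RateHandler(socketserver.StreamRequestHandler):
    def handle(self):
        # Hypothetical message format: one ASCII number (samples per second) per line.
        for line in self.rfile:
            text = line.decode(errors="ignore").strip()
            if text:
                try:
                    apply_sample_rate(float(text))
                except ValueError:
                    pass  # ignore malformed messages

if __name__ == "__main__":
    with socketserver.TCPServer(("0.0.0.0", 5000), RateHandler) as server:
        server.serve_forever()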

Live with Predictable Network Interface Name

I'm dealing for the first time with the new naming scheme for network interfaces: Predictable Network Interface Names.
My question is NOT about whether this scheme is better or worse... I'm just trying to understand how to use it correctly.
Here I read:
When changing the interface naming scheme, do not forget to update all network-related configuration files and custom systemd unit files to reflect the change.
So I have to write the actual interface name in all the configuration files. In the previous scheme it was e.g. eth0, which simply meant the first Ethernet card, with the known caveats when there are multiple interfaces.
Now, instead, I have to write the predictable name, which is composed of some easily predictable parts (e.g. the type of the interface) and other unpredictable ones like the MAC address. As far as I understand, each card will have a different name.
I admit my question might appear foolish, but I don't understand how to prepare a configuration file. Let's look at an example, /etc/dhcpcd.conf:
profile static_eth0
static ip_address=192.168.1.23/24
static routers=192.168.1.1
static domain_name_servers=192.168.1.1
interface eth0
fallback static_eth0
What should I put instead of eth0 in the OS image?
Only when I run the target machine can I retrieve the actual name of the Ethernet interface.
100% of my systems are headless, and I never connect a keyboard and display to them. Furthermore, if I have to send out a spare SBC as a replacement, do I need to reconfigure everything?
Would you please help me understand the correct usage?
P.S. I know I can revert back to the old naming scheme... but that's not the point of my question.
See https://www.freedesktop.org/wiki/Software/systemd/PredictableNetworkInterfaceNames/; it explains how the names are assigned:
Names incorporating Firmware/BIOS provided index numbers for on-board devices (example: eno1)
Names incorporating Firmware/BIOS provided PCI Express hotplug slot index numbers (example: ens1)
Names incorporating physical/geographical location of the connector of the hardware (example: enp2s0)
Names incorporating the interface's MAC address (example: enx78e7d1ea46da)
Classic, unpredictable kernel-native ethX naming (example: eth0)
By default, systemd v197 will now name interfaces following policy 1) if that information from the firmware is applicable and available, falling back to 2) if that information from the firmware is applicable and available, falling back to 3) if applicable, falling back to 5) in all other cases. Policy 4) is not used by default, but is available if the user chooses so.
So you could opt for a different approach; in your setup it is likely easiest to go by the MAC: just boot the board once with an image that does PXE/DHCP requests and note down the MAC address it sends.
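If you then opt into the MAC-based scheme (policy 4 above), note that the name is simply 'enx' followed by the MAC address with the separators stripped, so you can derive it ahead of time once you know the MAC; a tiny sketch, using the wiki's example MAC:

# Sketch: derive the predictable MAC-based interface name (policy 4) from a MAC address.
def enx_name(mac):
    return "enx" + mac.lower().replace(":", "")

print(enx_name("78:e7:d1:ea:46:da"))  # -> enx78e7d1ea46da (matches the wiki example)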
Another way, that may work, depending on your setup, would be interface groupings.
From "man interfaces"
auto /eth*
If the kernel knows about the interfaces with names lo, eth0 and eth1, then the above line is then interpreted as:
auto eth0 eth1
Note that there must still be valid "iface" stanzas for each matching interface. However, it is possible to combine a pattern with a mapping to a logical interface, like so:
auto /eth*=eth
iface eth inet dhcp
So if you only have one interface but can't tell what name it will be assigned, you could maybe write "auto /e*=eth" to catch all interfaces whose names start with e and address the interface inside the configuration file as "eth".

How to implement GetTargetLuns in a VSS hardware provider?

I am implementing a VSS hardware provider for a ZFS-based iSCSI target. We have implemented AreLunsSupported, PreCommitSnapshots, CommitSnapshots, etc., and up to this point it is working fine. But after that it fails with a VSS_E_NO_SNAPSHOTS_IMPORTED error in the LocateLuns method, and I think we are not filling in the target LUN information properly.
My questions are:
How do I find the serial number of the target LUN? Do I need to mount the newly created snapshot and then get the serial number?
Do we need to fill in the interconnect and storage identifier information as well, or can I just pass NULL for these?
Q: How do I find the serial number of the target LUN? Do I need to mount the newly created snapshot and then get the serial number?
No, you should not mount the snapshot at this point. You should use an out-of-band mechanism to directly communicate with your storage (I'm assuming your 'ZFS based iSCSI target' is coming from a NAS box), probably a REST API call, to figure out the serial number of the snapshot.
Let me elaborate some more on the serial number of the snapshot:
VSS expects the 'shadow copy' to be a concrete, real volume, similar to the primary volume (in your case, an iSCSI target).
Since you are using ZFS snapshots, without delving much into your exact implementation, you have two options to obtain the serial number for a concrete LUN:
a. If your storage allows exposing a ZFS snapshot directory as an iSCSI target, then create that iSCSI target and use its page 83 identifier.
b. If not, create a ZFS clone from the ZFS snapshot, expose that as an iSCSI target, and use its page 83 identifier.
Q: Do we need to fill in the interconnect and storage identifier information as well, or can I just pass NULL for these?
For all practical purposes, it usually suffices to simply copy the VDS_LUN_INFORMATION for the original source LUN and only edit the m_szSerialNumber field with that of the target LUN (assuming that the product ID, vendor ID etc. all will remain the same)
This link explains in detail what is expected out of a VSS Hardware Provider implementation: https://msdn.microsoft.com/en-us/library/windows/desktop/aa384600(v=vs.85).aspx
Unique Page 83 Information
Both the original LUN and the newly created shadow copy LUN must have at least one unique storage identifier in the page 83 data. At least one STORAGE_IDENTIFIER with a type of 1, 2, 3, or 8, and an association of 0 must be unique on the original LUN and the newly created shadow copy LUN.
Bonus chatter (Answer ends at this point):
Now, option (b) above might raise eyebrows, since you are creating a clone ahead of time and it is not yet being used. The reason for this is that the above steps need to be performed in IVssHardwareSnapshotProvider::FillInLunInfo, and these same VDS_LUN_INFORMATION contents are passed later to IVssHardwareSnapshotProvider::LocateLuns (VSS is trying to tell you to locate the LUNs that you earlier told it were the shadow copy LUNs). Hence, regardless of whether you will be using the clone or not in the future, you must have the concrete LUN (iSCSI target) created upfront.
A silver lining to this is: if you are sure that the workflow of the VSS Requestor will never mount the shadow copy, then you can get away with this by faking some (valid) info in VDS_LUN_INFORMATION during IVssHardwareSnapshotProvider::FillInLunInfo. For this to work, you will have to create a 'transportable' shadow copy (the VSS requestor uses the VSS_CTX_FILE_SHARE_BACKUP | VSS_VOLSNAP_ATTR_TRANSPORTABLE flags). The only use-case for such a shadow copy would be to perform a hardware-resync on it, in which the VSS Hardware Provider implements the IVssHardwareSnapshotProvider::ResyncLuns method and performs a ZFS snapshot rollback in it.

Full statement from ISO 8583

I would like to know if it is possible to do a full statement (between a date range) through ISO 8583. I have seen ATMs which do full statements and was wondering what method they used. I know balance inquiry and mini-statements are possible on a POS device over 8583.
If it is possible, does anyone have any information on the structure of the message, ideally for Flexcube?
We did something similar to that back in 1999 in one of the banks, where we would send the statement data in one of the generic private-use fields, which allows the format ANS 999.
That means you either have to restrict the data to fewer than 999 characters, or split the data across multiple messages and have a multi-legged transaction (see the sketch after the flow below).
You would have the following flow:
The customer requests a statement at the ATM.
The ATM sends an NDC/D912 message to the ATM switch.
The ATM switch looks up the account number after authenticating the card and forwards the request to the core banking application.
The core banking application generates the statement, formats it according to a predesigned template, and sends the statement data in a generic field (say 72).
The ATM switch collects the data and formats it into NDC or D912, where the statement data is tagged to the statement printer (in NDC it is a field called 'q' and the value should be '8', print on statement printer only), and in field 'r' it places the preformatted data.
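A small sketch of that splitting, assuming a generic ANS 999 private-use field (field 72 is just the example from the flow above, and the statement text is dummy data):

# Sketch: split a preformatted statement into <= 999-character chunks so each
# chunk fits an ANS 999 private-use field (e.g. field 72), one chunk per message leg.
MAX_FIELD_LEN = 999

def split_statement(statement_text, max_len=MAX_FIELD_LEN):
    return [statement_text[i:i + max_len]
            for i in range(0, len(statement_text), max_len)]

# Hypothetical usage: one ISO 8583 response per chunk, with a "more data follows" flag.
statement = "2024-01-02  ATM WITHDRAWAL        -100.00\n" * 60  # dummy data
chunks = split_statement(statement)
for leg, chunk in enumerate(chunks, start=1):
    more = leg < len(chunks)
    print("leg", leg, "field-72 length", len(chunk), "more-to-follow", more)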
However, it is not good practice to do so, since we now have faster means to generate a statement and send it via email or internet banking; but this is the bank's preference anyway.
It depends upon the implementation.
I implemented an NCR central switch, where I incorporated the initial checks in the central application itself rather than passing everything to the auth host.
My implementation:
The ATM sends the transaction request (NDC), based on the state machine set up in the ATM, to the central application.
The central application does basic checks, such as the validity of the BIN (the initial 6 digits of the card number) and whether the requested amount of cash is available in the ATM, etc.
Then the central application sends the packet (ISO 8583/BASE24) to the acquirer for further processing.
The acquirer sends it to the CA, and then it goes to the issuer for approval.
Hope this helps.
The mini-statement is not part of ISO 8583 (or MVA). It is usually implemented as a proprietary extension. Hence you need to go to an ATM owned by your bank, or one that is part of a consortium of banks sharing an ATM infrastructure with your bank.
We implemented mini-statements in our ISO 8583 specification using a $0.00 0200 message (DE003 = 91xxxx), with the statement data coming back from the host in DE125, on both Connex and Base24, and then modified our stateful loads to print the data at the ATM.
Full statements fell out of use years ago, though, so we reduced it to just mini-statements, using the receipt printer rather than full-page statements. There is a limited number of entries and not all hosts support it, but it is used today on NCR & Diebold ATMs. I've personally participated in the testing to get it working on Base24 and Postilion.
The mini-statement data we print is 40 characters per line and covers about 10 transactions, I believe.
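Purely as an illustration of that layout (the column widths and transaction fields here are invented, not the actual host format):

# Sketch: format a handful of transactions into 40-character receipt lines,
# roughly the mini-statement layout described above (fields/widths invented).
LINE_WIDTH = 40

def mini_statement_lines(transactions, max_entries=10):
    lines = []
    for date, desc, amount in transactions[:max_entries]:
        # date (8) + description (22, truncated) + amount (10, right-aligned) = 40 chars
        line = "{:<8}{:<22.22}{:>10.2f}".format(date, desc, amount)
        lines.append(line[:LINE_WIDTH])
    return lines

for line in mini_statement_lines([("01/02/24", "POS PURCHASE GROCERY", -45.10),
                                  ("01/03/24", "ATM WITHDRAWAL", -100.00)]):
    print(line)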
