Set up multiple location hierarchies in Maximo

I am looking for a way to set up multiple location hierarchies in Maximo. We have transmission lines and pipelines, but the same geographical location might have a different parent in the transmission grid than in the pipeline network. Is there a way to do this?

SYSTEMS in Maximo are logical groupings of locations. For one location to be a child or parent of another, both locations must belong to a system; use the "Associate Systems with Location" action available in the Locations application for this. Within a system there can be only one top-level location, and all other locations in that system must be descendants of it.
One location can have more than one parent, just not within the same system: a location can belong to multiple systems and have a different parent in each of them.
A location can even be the parent of its own parent, provided the system to which these locations belong is defined as a network (non-hierarchical) system.
This way you can build completely different drill-downs for the power transmission system, the gas pipeline system and the geographical system, and your users can drill down through each system-specific hierarchy to reach the same location.

Related

When would you use uuidgen in a live environment?

I came across uuidgen while watching a video to study for the Red Hat 8 exam, but I had a question about its usefulness and did not find any other thread on it, nor did the manpage address it. I understand that each device has a UUID and that UUIDs can be used for multiple purposes; in the video, the purpose was creating new mountpoints, and the UUID was used to associate a new partition (/dev/sdb3) with a new mountpoint. They pointed out that you can generate a new UUID for that device using uuidgen.
My question is: why/when would you use uuidgen in a production environment with live servers to relabel a partition? What would be the point of relabeling the UUID of an existing mountpoint? Is there some kind of attack that targets a system's UUIDs? Or is the sole purpose of uuidgen just to create random UUIDs for other things like web links?
Thanks
Say you have a system with several disks, one partition each, and you need to play "Musical Data" with some of them.
If you start by copying the entire GPT-partitioned disk at block level (e.g. via dd), you will end up with a duplicate UUID. This is fine if one of the duplicates is going to be blown away before the next time the OS needs to mount one of them. If, for whatever reason, this cannot be ensured, then whichever copy you don't want the OS to pick up anymore needs a new UUID. Enter uuidgen.
I'm assuming you're talking about GPT partition UUIDs, which are stored all together in each GPT that contains the identified partitions; if, instead, you're talking about filesystem UUIDs, which are stored inside the metadata for that filesystem and thus are copied whenever dd'ing that filesystem, then the above scenario still holds and more scenarios become plausible.
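As a hedged sketch of that scenario (the device names /dev/sda and /dev/sdb are made up; sgdisk changes the GPT partition GUID and tune2fs the ext filesystem UUID, so use whichever applies to your setup):

# Block-level clone: /dev/sdb ends up with the same GUIDs/UUIDs as /dev/sda.
dd if=/dev/sda of=/dev/sdb bs=4M status=progress
blkid /dev/sda1 /dev/sdb1
# Give partition 1 on the clone a fresh GPT partition GUID...
sgdisk --partition-guid=1:"$(uuidgen)" /dev/sdb
# ...and, if it carries an ext2/3/4 filesystem, a fresh filesystem UUID too.
tune2fs -U "$(uuidgen)" /dev/sdb1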

Can non-exclusive (stackable) Linux Security Modules use security blobs?

I'm experimenting with Linux Security Modules, trying to make one.
My main sources of knowledge about how they're supposed to work are the mailing list archives and the existing LSMs' sources, plus the few pages about them in the Linux documentation.
I understand that there are two kinds of LSMs.
Exclusive LSMs like SELinux / AppArmor, which have the LSM_FLAG_EXCLUSIVE flag set in their LSM definition.
Non-exclusive LSMs like Yama, capabilities or lockdown.
Browsing the source code of all these LSMs, I noticed that the non-exclusive ones never make use of security blobs, whereas the exclusive ones make heavy use of them.
For instance, see the AppArmor LSM definition, and the one for Yama.
So, can non-exclusive LSMs specify blob sizes and use this feature?
Trying to find an answer, I explored the framework's source to see whether security blobs were perhaps switched between each LSM hook call; I guess that would allow each LSM to access only its own blobs and not those of another LSM.
However, we can see here in the LSM framework that this is not the case.
If my LSM declares blob sizes, can I use the blobs if my kernel also has SELinux, for instance, enabled?
Won't the structures from SELinux and mine overlap?
Alright, I found the relevant code in the LSM framework.
In short: yes, all LSMs can use security blobs, as long as they treat the sizes structure as containing offsets once the framework has been initialized.
Explanation:
When you define your LSM, you use the DEFINE_LSM macro followed by various fields, including a pointer to a struct lsm_blob_sizes.
During its own initialization, the LSM framework (which is mostly implemented in security/security.c) transforms your structure in a few steps.
It stores, in its own instance of the structure (declared here), the sum of all LSMs' security blob sizes. More precisely, looking at this call chain:
ordered_lsm_init()
`- prepare_lsm(*lsm)
`- lsm_set_blob_sizes(lsm->blobs)
`- lsm_set_blob_size(&needed->lbs_task, &blob_sizes.lbs_task);
lsm_set_blob_size is responsible for the actual addition to the framework's structure instance.
In combination with lsm_set_blob_sizes, it also replaces each size in the currently prepared LSM's struct lsm_blob_sizes with the offset at which this LSM's part of the blob will reside.
The framework then calls each LSM's init function.
Later, when a structure with a security blob (a task_struct, for instance) gets allocated, the framework allocates one blob with enough space for the blobs of all security modules; each module then finds its own spot in this larger blob using the offsets stored in its lsm_blob_sizes.
The summed-up blob sizes can actually be inspected by enabling init_debug, here.
So what this means is that all LSMs can define security blobs sizes. The framework is responsible for their allocation (and deallocation) and the blobs for different LSMs can happily live side by side in memory.
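If you just want to observe this from userspace, a quick check is possible (the lsm.debug boot parameter is my reading of security/security.c, so treat it as an assumption and verify against your kernel version):

# List the LSMs that are actually active, in initialization order.
cat /sys/kernel/security/lsm
# After booting with lsm.debug on the kernel command line, the framework
# logs the per-LSM blob sizes it computed during init.
dmesg | grep -i blob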

Mirroring files from one partition to another on the second disk without RAID1

I am looking for a program that would allow me to mirror one partition to a partition on a second disk (something like RAID 1) on Linux. It doesn't have to be a graphical application; a console application is fine. I just want whatever is in one place to be mirrored to the other.
It would be nice if it were possible to mirror only a specific folder that I care about instead of copying everything from the given partition.
I searched online, but it's hard to find something that offers this, hence the question.
I don't want to use fake RAID on Linux or hardware RAID, because I've read that if the motherboard fails it's best to have an identical spare in order to recover the data.
I will be grateful for every suggestion :)
You can check out my bash script "CopyDirFile", which is available on GitHub.
It can perform a replication (mirroring) task from any source folder to a destination folder (deleting a file in the source folder means deleting it in the destination folder).
The script also allows you to create copy tasks (files deleted in the source folder will not be deleted in the target folder).
Tasks are executed in the background at a specified time rather than continuously; the frequency is set by the user when creating the task.
You can also set a task to start automatically when the user logs on.
All the necessary information can be found in the README file in the repository.
If I understood you correctly, I think it meets your requirements.
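For reference, the core of such a mirroring or copy task can also be expressed with plain rsync in a cron job (paths are placeholders, and this is a generic sketch rather than what CopyDirFile does internally):

# Mirror: files removed from the source are also removed from the destination.
rsync -a --delete /data/important/ /mnt/seconddisk/important/
# Copy: same, but deletions in the source are not propagated.
rsync -a /data/important/ /mnt/seconddisk/important/
# Schedule it, e.g. hourly, by adding a line like this with "crontab -e":
# 0 * * * * rsync -a --delete /data/important/ /mnt/seconddisk/important/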
Linux has standard support for software RAID: mdraid.
It allows you to bundle two disk devices into a RAID 1 device (among other things); you then create a filesystem on top of that device.
LVM offers another way to do software RAID; it doesn't seem to be very popular, but it's certainly supported.
(If your system supports hardware RAID, on the motherboard or with a separate RAID controller, Linux can use that, too, but that doesn't seem to be what you're asking here.)
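For completeness, a minimal mdraid sketch looks like this (device names are examples, the create command wipes whatever is on those partitions, and the mdadm.conf path varies by distribution):

# Build a two-device RAID 1 array from one partition on each disk.
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1
# Put a filesystem on the array and mount it like any other block device.
mkfs.ext4 /dev/md0
mount /dev/md0 /mnt/mirror
# Record the array so it can be assembled automatically at boot.
mdadm --detail --scan >> /etc/mdadm/mdadm.conf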

Best process to retrieve files daily from outside Azure

I've asked this question in the Azure Logic Apps (LA) forum since I've used LA to implement this process but I may also get some valuable input here.
High-level description: for one specific client, we need to download dozens of files daily from an SFTP location to our servers in order to process their data. This workflow was originally built with tools from a technology other than Azure, but what we aimed for was a general process that could be used for different source systems, different files, etc. With that in mind, at the start of each execution the process retrieves a set of variables from a database, such as:
Business date
Remote location path - sftp location
Local location path - internal server location
File extension - .csv, .zip, etc.
Number of iterations
Wait time between iterations
Dated files - whether or not the files have the business date in their name
Once all this is defined at the beginning of the process (there's some extra logic to it, it's not as straightforward as just fetching variables, but let's assume this for example purposes), the following logic is applied (the image below may help in understanding the LA flow, and a rough shell sketch of the same loop follows it):
Search for the file in the SFTP location
If the file is there, get its size, wait X amount of time and check the size again.
If the file isn't there, try again until the maximum number of iterations is reached or the file is found.
If the file sizes match, proceed to download the file.
If the file sizes don't match, try again until the maximum number of iterations is reached or the file is found.
[Image: LA flow diagram]
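Outside of Logic Apps, the heart of that loop is small. Here is a rough shell sketch of the same size-stability check, assuming key-based SSH access to the remote host (the host, paths and stat call are placeholders, not part of the actual LA implementation; a pure SFTP-only endpoint would need sftp or lftp instead of ssh/scp):

#!/bin/bash
# Placeholders for the per-file variables normally fetched from the database.
REMOTE=user@sftp.example.com
REMOTE_FILE=/outgoing/data.csv
LOCAL_DIR=/srv/incoming
MAX_ITERATIONS=10
WAIT_SECONDS=60

for ((i = 1; i <= MAX_ITERATIONS; i++)); do
    size1=$(ssh "$REMOTE" stat -c %s "$REMOTE_FILE" 2>/dev/null)
    if [ -z "$size1" ]; then
        sleep "$WAIT_SECONDS"   # file not there yet, try again
        continue
    fi
    sleep "$WAIT_SECONDS"       # wait, then re-check the size
    size2=$(ssh "$REMOTE" stat -c %s "$REMOTE_FILE" 2>/dev/null)
    if [ "$size1" = "$size2" ]; then
        scp "$REMOTE:$REMOTE_FILE" "$LOCAL_DIR/" && exit 0   # size stable, download
    fi
done
echo "File not found or still changing after $MAX_ITERATIONS iterations" >&2
exit 1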
In our LA this process is implemented and working fine. We have two parameters in the LA, the filename and the source system, and based on these all the variables are retrieved at the beginning of the process. Those two parameters can be changed from LA to LA, and by scripting we can automatically deploy multiple LAs (one for each file we need to download). The process uses a schedule trigger, since we want to run it at a specific time each day; we don't want to trigger on a file being placed in the remote location, because several files we aren't interested in may also land there.
One limitation compared to our current process is grouping multiple LAs under one kind of pipeline, where we could group multiple executions and check the state of them all without inspecting each LA separately. I'm aware that we can monitor LAs with OMS and, potentially, call multiple LAs from a Data Factory pipeline, but I'm not exactly sure how that would work in this case.
Anyway, here is where my QUESTION comes in: what would be the best Azure feature to implement this type of process? LA works, since I already have it built; I'm going to look at replicating the same process in Data Factory, but I'm afraid it may be more complicated to set up this kind of logic there. What else could potentially be used? I'm really open to all kinds of suggestions; I just want to make sure I consider all valid options, which is hard given how many different features Azure offers.
Appreciate any input, cheers

Counter file placement and naming convention

OK, this one might be stupid, but I'm losing too much time overthinking a solution.
I have a web app with two different kinds of payment modules.
Each module needs a counter file, incremented each time someone wants to pay, and locked while incrementing to make sure each payment gets a unique payment reference.
The files were placed inside the main directory (public_html) and got overwritten by a bad versioning move.
So I want to move them outside of public_html, where I already placed the main config file.
But having these critical files sit at the root of my FTP space sounds stupid and dangerous, so I'll create a directory for them.
This is a lot of text just to ask this:
What would you name this directory?
IMO, your question isn't really specific to PHP; it's a common issue. You can use one of the standard directories for sharing data between applications.
/var
From the Filesystem Hierarchy Standard (FHS):
/var contains variable data files. This includes spool directories and files, administrative and logging data, and transient and temporary files.
(read more)
Some options:
You can store your files directly in /var.
/var/tmp can also hold temporary files for a longer time and is not cleaned after a reboot (depending on your system).
Or you can create a custom subdirectory under /var/opt with a name relevant to your application.
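As a sketch of that last option (the myapp name and the www-data account are placeholders for your own application and web-server user):

# Create a private data directory outside the web root.
mkdir -p /var/opt/myapp/counters
chown -R www-data:www-data /var/opt/myapp
chmod -R 750 /var/opt/myapp
# The same lock-then-increment pattern the payment modules need, shown with
# util-linux flock; PHP's flock() does the equivalent from within the app.
(
    flock -x 9
    count=$(cat /var/opt/myapp/counters/payments 2>/dev/null || echo 0)
    echo $((count + 1)) > /var/opt/myapp/counters/payments
) 9> /var/opt/myapp/counters/payments.lock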
