Intrusion Detection System, Security+ question

I'm studying to take the Security+ exam.
I'm really having problems figuring out this chart. I understand most of it. Can someone explain the following?
Why are there 2 sensors in this picture which both point to the analyzer?
Why is security policy not a block?
Why does "trending and reporting" have no inputs?
Can this picture be redrawn like this and have the same meaning?
This is really confusing to me.

I want to start out by saying that these kinds of diagrams are only really useful as high-level overviews of what happens inside a system. Don't take them too literally. Why individual blocks are omitted or repeated is just going to be a mystery and probably not indicative of anything. That said, I'll try to look into my crystal ball and divine what the author might have been thinking:
1) There are two sensors to indicate that there is a 1:n relationship between analyzers and sensors, meaning that in an IDS there can be many sensors which all feed into a single analyzer (see the sketch below).
2) Security Policy is the data which is supplied by an administrator. So the Administrator (a block) has an arrow (the policy) as an input to several other blocks. Think of it this way: you should always be able to label the arrows in a block diagram with exactly what data is being sent. In the blue diagram you made, what would the label be for the arrow between "Security Policy" and "Analyzer"? (It's the policy itself which is being sent.)
3) "Trending and Reporting" is not a block (which would need an input). It is the label to the bidirectional arrow on the bottom. "Trending and Reporting" is the data which is being sent back and forth between the Administrator and Operator.
Hope that helps.

Do I use foreach for 2 different inspection checks in an activity diagram?

I am new to activity diagrams and currently I am trying to draw one based on a given description.
I have doubts about a particular section, as I am unsure whether it should be 'split'.
Under the "Employee", the given description is as follows:
Employee enter in details about physical damage and cleanliness on the
machine. For the cleanliness, there must be a statement to indicate
that the problem is no longer an issue.
As such, I used a foreach as a means to describe that there should be 2 checks, physical and cleanliness (see the diagram in the link), before it moves on to the next activity under the System, where the system records the checks.
Thus, am I on the right track? Thank you in advance for any replies.
Your example is not valid UML. In order to make it proper you need to enclose the fork/join in an expansion region, like so:
A fork/join does not accept any semantic labels. It just splits the control flow into several parallel flows which join again at the end.
However, this still seems odd, since you would probably have some control over the different inspections being entered. So I'd guess there's a decision which loops through multiple inspection entries. Personally I use regions only for handling interrupts. Activity diagrams are nice up to a certain level, but sometimes a tabular text (as suggested by Cockburn) is just easier to write and read. Graphical programming is not the ultimate answer (unlike 42).
First, the 'NO' branch of the decision node must lead somewhere (to the end?).
Second, it differs depending on whether you want to show the process for ONE or for MULTIPLE inspections. The most logical way is to draw the diagram for a single inspection, because you wrote 'inspection' without an 's'! If you want to represent more than one inspection, you can use a decision and a merge node to build a loop that stops when there are no more inspections.

Is there an orbital solver or modifier that allows orbiting tolerance?

I'm trying to create an orbiting menu. For that, it needs to follow the user around, but it also needs a follow threshold so that the user can interact with every corner of it.
From a design perspective (HoloLens, HoloLens 2, MR, ...), is it a good idea to have a floating menu?
Menu design in XR is a complex topic, so it maybe deserves its own question, as suggested by Julia.
In short, we've seen mixed results in our user testing with floating menus. They "work" in the sense that the user can always access the menu even while walking around in experiences that cover large areas.
If your experience is confined to a smaller space, I'd recommend looking into another interaction paradigm, such as a wall-, table-, or floor-placed interaction menu.
If you do want to be able to roam freely, a floating menu may be the answer. You can then use a basic tagalong script or configure a solver (body lock) as in this example by Dong Yoon Park (under the section Solver System): https://medium.com/@dongyoonpark/open-source-building-blocks-for-windows-mixed-reality-experiences-hololens-mixedrealitytoolkit-28a0a16ebb61

Visio: Anchor to Sloping Face of Diamond Shape?

In Visio, one of the most fundamental and frustrating annoyances I'm facing is not being able to anchor to the sloping face of the diamond shape.
My organization commonly uses this shape in flow diagrams.
Whether it's a densely connected logic point, which needs more than the 5-6 anchors...
...or a self-connecting loop like this:
...I often want to connect to the sloping faces of the diamond shape, but can't seem to find a way to successfully anchor to the sloping part of the shape.
Currently I'm settling for connecting to one side, but that leads to frustration when things are repositioned.
I've also explored the Data and Format Shape menus presented by right-clicking the shape and the path I'm looking to connect to it, as those sounded promising. However, examining those menus, I haven't found anything close to what I need yet.
Seems like there must be a way to do this, though.
Update
I've also tried to redraw the diamond w/ a Pencil in the ribbon's Tools pane under the Shapes dropdown ... but did not have any luck anchoring to the result.
I have also clarified that my question relates to a basic anchoring need, given my initial example's deviation from traditional UML.
Overview
To anchor somewhere on a shape face other than the defaults, simply add connection points (Shift+Ctrl+1, by default).
If this is a common issue, create a master shape with additional connection points to reduce time cost.
Adding Connection Points
In Visio 2010 or later:
1.) Enable Connection Points
Under the View tab, be sure Connection Points in the Visual Aids group is CHECKED:
!! NOTE: If this step is ignored, attempts to add points may fail.
2.) Enter Connection Point Edit Mode
Either press Shift + Ctrl + 1 or go to the Tools group in the Home tab and click the x (Connection Point) to enter the appropriate edit mode.
3.) Select Shape
Click to select the shape to be edited.
4.) Add a point
Hold Ctrl and then click again on the desired position along the face of the selected shape to add a point.
(Visio 2013 -- after adding a point)
The point is depicted in Visio 2010 as a magenta 'x', while it's depicted as a red square in Visio 2013. The shape itself is thinly outlined, w/ pre-existing connection points shown as blue 'x's in Visio 2010; in Visio 2013, the shape is instead depicted as a gray bounding box, w/ pre-existing points shown in gray for unselected shapes.
You must select the shape before adding the points; however, once it is selected, as many points as desired may be added.
BEWARE -- once a shape has been selected, you can add connection points on other nearby shapes as well, leading to potentially weird routing.
Complete!
If you don't mind: you are not asking a UML question, but a Visio drawing question.
However, I'll answer in a UML context: your drawing does not make sense. Removing the No path will just make it a more valid one. Then it should be an Action called 'Wait for something' that continues only when that something happens. You take a decision only if there's something to decide, not to stop the control flow until an event happens.
In response to your chat question (use of UML): everything depends. Whether or not you stick with the UML specification (the ISO source is actually available for free at the author's site) is your decision. UML itself leaves great ways to adapt the language to your domain by using profiles. Whenever you deviate from the standard, you have to document that, and people need to be trained accordingly.
I have to admit that the UML specification is not bedtime reading. However, there are great sources to learn from (e.g. lots of examples are found here). I myself have worked with UML in practice for more than 20 years and have to say that it was worth the time learning it. Always remember that UML is a language, and like any language it needs to be spoken actively to convey ideas effectively. Here in Germany we have many dialects and a general High German. People who share a certain dialect can talk to their peers without issue, but people from the north and the south are better served by the common idiom, since their own dialects differ quite fundamentally.

Using FRP to model road network with jams

I am currently trying to understand arrows and FRP, and I came upon a question which I cannot seem to map to FRP, namely how to model a road network.
I thought I could model a road network as Arrows, where each Arrow represents a road segment. It accepts streams of cars at locations and times and produces the same type, albeit with different locations and times.
So far so good. But this model does not take into account that segments may get jammed. While each segment could well respond to heavy traffic by delaying cars more and more the more congested it gets, there would be no backwater effect, i.e. the jam would not propagate backwards to other road segments.
I suspect I am applying too much OO thinking here, instead of focusing on what needs to be computed, but I cannot get it right in my head.
How can I model a road network with Arrows such that backwater effects are taken into account?
The problem is that in arrows, and in FRP in general, the flow of information is unidirectional. Think of an FRP arrow as a piece of a digital circuit. The output of a circuit element doesn't depend on what's connected to it; it just "offers" the output to whoever is interested. This is also described visually in Primitive signal functions in the Yampa overview:
Your situation is different. The state of a road segment depends on both the next and the previous segments: cars are coming from the previous one, but if cars can't leave to the next one, they have to stay. It's just like a pipe with running water. If you close the pipe at its end, the water stops, and the information about that propagates backwards through the pipe at the speed of sound in water.
So each road segment will need 2 inputs: one saying how many cars the following segment can accept, and one saying how many cars are coming from the previous segment (which should always be less than or equal to the number of cars the segment can currently accept). This means that the FRP signal flow will actually be circular. For this you'll need loops, shown in the last image in the above diagram, which are captured by the ArrowLoop type class. Most likely you'll have a custom binding function for road segments that will internally create the required loops. Note that there must be a time delay in a loop to prevent it from diverging, which makes sense, as it takes some time for cars to go from one segment to another.
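To make that concrete, here is a minimal sketch using a hand-rolled stream-function arrow rather than a full FRP library like Yampa. All the names here (SF, segment, delay, road) are illustrative, not taken from any real library:

    {-# LANGUAGE Arrows #-}

    import Prelude hiding (id, (.))
    import Control.Category
    import Control.Arrow

    -- The classic stream function: consume one input sample, produce
    -- one output sample plus the continuation.
    newtype SF a b = SF { step :: a -> (b, SF a b) }

    instance Category SF where
      id = SF $ \a -> (a, id)
      SF g . SF f = SF $ \a ->
        let (b, f') = f a
            (c, g') = g b
        in  (c, g' . f')

    instance Arrow SF where
      arr f = SF $ \a -> (f a, arr f)
      first (SF f) = SF $ \(a, c) ->
        let (b, f') = f a
        in  ((b, c), first f')

    instance ArrowLoop SF where
      -- Ties the knot lazily; without a delay in the feedback path
      -- this diverges, which is exactly why the loop needs a delay.
      loop (SF f) = SF $ \a ->
        let ((b, d), f') = f (a, d)
        in  (b, loop f')

    -- One-sample delay, initialised with a starting value.
    delay :: b -> SF b b
    delay x = SF $ \a -> (x, delay a)

    type Cars = Int

    -- A road segment of a given size. Input: (cars arriving, free
    -- space in the next segment). Output: (cars leaving, free space
    -- this segment offers to the previous one).
    segment :: Int -> Cars -> SF (Cars, Int) (Cars, Int)
    segment size held = SF $ \(arriving, roomAhead) ->
      let leaving = min held roomAhead        -- leave only if there is room
          held'   = held - leaving + arriving -- the rest queue up: the jam
      in  ((leaving, size - held'), segment size held')

    -- Two segments in a row. The downstream segment's free space is
    -- fed back to the upstream segment through a one-sample delay,
    -- closing the circular signal flow with ArrowLoop's loop.
    road :: SF (Cars, Int) (Cars, Int)
    road = loop $ proc ((arriving, roomAhead), roomBack) -> do
      (mid, roomA) <- segment 10 0 -< (arriving, roomBack)
      (out, roomB) <- segment 10 0 -< (mid, roomAhead)
      roomBack'    <- delay 10     -< roomB
      returnA -< ((out, roomA), roomBack')

For example, step road (3, 0) feeds 3 cars in while the road's exit is blocked; iterating the returned continuation, cars pile up in the downstream segment, its reported free space shrinks, and one delayed sample later the upstream segment starts holding cars back too, which is exactly the backwater effect. The caller, in turn, is expected to respect the road's reported free space when feeding in new cars, just as each segment respects its neighbour's.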
(I'll perhaps expand this sketch into a fuller example if I have more time.)

Detecting Handedness from Device Use

Is there any body of evidence that we could reference to help determine whether a person is using a device (smartphone/tablet) with their left hand or right hand?
My hunch is that you may be able to use accelerometer data to detect a slight tilt, perhaps only while the user is manipulating some sort of on-screen input.
The answer I'm looking for would state something like, "research shows that 90% of right handed users that utilize an input mechanism tilt their phone an average of 5° while inputting data, while 90% of left handed users utilizing an input mechanism have their phone tilted an average of -5°".
Having this data, one would be able to read accelerometer data and make informed decisions regarding the placement of on-screen items that might otherwise be in the way for left-handed or right-handed users.
You can definitely do this, but if it were me, I'd try a less complicated approach. First you need to recognize that no specific approach will yield 100% accurate results; they will be guesses, but hopefully highly probable ones. With that said, I'd explore the simple-to-capture data points of basic touch events. You can leverage these data points and pull the x/y coordinates on touch start/end:
touchStart: Triggers when the user makes contact with the touch surface and creates a touch point inside the element the event is bound to.
touchEnd: Triggers when the user removes a touch point from the surface.
Here's one way to do it: it could be reasoned that if a user is left-handed, they will use their left thumb to scroll up/down on the page. Now, based on the way the thumb rotates, swiping up will naturally cause the arc of the swipe to move outwards. In the case of touch events, if the touchStart X is greater than the touchEnd X, you could deduce they are left-handed. The opposite holds for a right-handed person: for a swipe up, if the touchStart X is less than the touchEnd X, you could deduce they are right-handed. See here:
Here's one reference on getting started with touch events. Good luck!
http://www.javascriptkit.com/javatutors/touchevents.shtml
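If you want to experiment with the classification rule itself, here is a small sketch in Haskell. The event capture would of course live in your JavaScript touch handlers; the threshold is a made-up noise floor that you would tune against real users:

    data Hand = LeftHand | RightHand | Unknown
      deriving (Show, Eq)

    -- Classify a single upward swipe from its start and end X
    -- coordinates, following the reasoning above: a left thumb arcs
    -- so the swipe ends left of where it started, and vice versa.
    guessHand :: Double -> Double -> Hand
    guessHand startX endX
      | startX - endX > threshold = LeftHand
      | endX - startX > threshold = RightHand
      | otherwise                 = Unknown
      where
        threshold = 8  -- pixels; hypothetical value, tune empirically

    -- A single swipe is a weak signal, so vote over recent swipes.
    guessFromSwipes :: [(Double, Double)] -> Hand
    guessFromSwipes swipes
      | lefts > rights = LeftHand
      | rights > lefts = RightHand
      | otherwise      = Unknown
      where
        guesses = map (uncurry guessHand) swipes
        lefts   = length (filter (== LeftHand)  guesses)
        rights  = length (filter (== RightHand) guesses)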
There are multiple approaches and papers discussing this topic. However, most of them were written between 2012 and 2016. After doing some research myself, I came across a fairly new article that makes use of deep learning.
What sparked my interest is the fact that they do not rely on swipe direction, speed, or position, but rather on the capacitive image each finger creates during a touch.
Highly recommend reading the full paper: http://huyle.de/wp-content/papercite-data/pdf/le2019investigating.pdf
What's even better, the data set, together with Python 3.6 scripts to preprocess the data and to train and test the model described in the paper, is released under the MIT license. They also provide the trained models and the software to run the models on Android.
Git repo: https://github.com/interactionlab/CapFingerId
