Ubiquitous computing and magnetic interference - robustness

Imagine a car radio: do the electromagnetic fields the car passes through interfere with its processing? It's easy to understand that a strong field can corrupt stored data. But what about data that is currently being processed? Can it also be changed?
If so, how could you protect your code against this (without electrical protections, just software ones)?

For the most robust mission-critical systems you use multiple processors and compare results. This is what we did with an aircraft autopilot (autoland): we had three autopilots, one flying the aircraft and two checking it. If any one of the three disagreed, it was shut down.
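The answer above describes the architecture rather than code, so purely to illustrate the voting idea (the function names, tolerance, and control law below are all invented; real avionics voters are far more involved), a 2-out-of-3 comparison might look roughly like this:

```c
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define CHANNELS  3
#define TOLERANCE 0.01   /* maximum allowed disagreement between channels */

/* Hypothetical per-channel computation; in a real system each channel
 * runs on its own processor with its own sensor inputs. */
static double compute_command(int channel, double input)
{
    (void)channel;
    return input * 0.5;  /* stand-in for the real control law */
}

int main(void)
{
    double input = 42.0;
    double out[CHANNELS];
    int healthy[CHANNELS] = {1, 1, 1};

    for (int i = 0; i < CHANNELS; i++)
        out[i] = compute_command(i, input);

    /* A channel that disagrees with both of the other two is voted out. */
    for (int i = 0; i < CHANNELS; i++) {
        int disagreements = 0;
        for (int j = 0; j < CHANNELS; j++)
            if (i != j && fabs(out[i] - out[j]) > TOLERANCE)
                disagreements++;
        if (disagreements == CHANNELS - 1) {
            healthy[i] = 0;
            fprintf(stderr, "channel %d voted out\n", i);
        }
    }

    /* Use the first channel that is still trusted. */
    for (int i = 0; i < CHANNELS; i++) {
        if (healthy[i]) {
            printf("command = %f (from channel %d)\n", out[i], i);
            return 0;
        }
    }
    fprintf(stderr, "no healthy channel left\n");
    return EXIT_FAILURE;
}
```

The point is that a transient bit flip in one channel only produces a disagreement, not a wrong command, as long as the other two channels still agree.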

You're referring to what Wikipedia calls soft errors. The traditional, industry-accepted work-around for this is through redundancy, as Jim C and fmsf noted.
Several years ago, our repair department's analysis showed an unacceptable number of returned units with single-bit errors in the battery-backed SRAM that held the firmware. Despite our efforts at root-cause analysis, we were unable to explain the source of the problem. At that point a hardware change was out of the question, so we needed a software-only solution to treat the symptom.
We wanted a reliable fix that we could implement simply and quickly, so we generated parity checks on blocks of code in the SRAM. We chose a block size that required very little additional storage for the parity data, yet provided enough redundancy to detect and correct any of the errors we'd seen and then some. It logs the errors it detects and indicates whether it can correct them, so we still know when bit errors occur in the field. So far, so good!
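The answer doesn't give the exact parity scheme used, so the following is only a sketch of one way to get detect-and-correct for a single flipped bit per block with a few bytes of overhead: store the XOR of the (1-based) indices of all set bits plus an overall parity bit. If a single bit later flips, the recomputed index-XOR differs from the stored value by exactly the flipped bit's index, so the bit can be flipped back; if the index-XOR changes but the parity doesn't, the damage is more than one bit and can only be reported.

```c
#include <stdint.h>
#include <stddef.h>
#include <stdio.h>
#include <string.h>

#define BLOCK_BYTES 256   /* hypothetical block size; the post doesn't give theirs */

typedef struct {
    uint32_t index_xor;   /* XOR of the 1-based indices of all set bits */
    uint8_t  parity;      /* overall parity of the block                */
} block_check_t;

static void compute_check(const uint8_t *block, block_check_t *chk)
{
    chk->index_xor = 0;
    chk->parity = 0;
    for (size_t byte = 0; byte < BLOCK_BYTES; byte++)
        for (int bit = 0; bit < 8; bit++)
            if (block[byte] & (1u << bit)) {
                chk->index_xor ^= (uint32_t)(byte * 8 + bit + 1);
                chk->parity ^= 1;
            }
}

/* Returns 0 if the block looks intact, 1 if a single-bit error was
 * corrected, -1 if an uncorrectable (multi-bit) error was detected. */
static int scrub_block(uint8_t *block, const block_check_t *stored)
{
    block_check_t now;
    compute_check(block, &now);

    uint32_t diff = now.index_xor ^ stored->index_xor;
    int parity_changed = (now.parity != stored->parity);

    if (diff == 0 && !parity_changed)
        return 0;
    if (parity_changed && diff >= 1 && diff <= BLOCK_BYTES * 8) {
        uint32_t idx = diff - 1;                     /* flipped bit, 0-based */
        block[idx / 8] ^= (uint8_t)(1u << (idx % 8));
        return 1;
    }
    return -1;
}

int main(void)
{
    uint8_t block[BLOCK_BYTES] = "pretend this is a block of firmware";
    uint8_t original[BLOCK_BYTES];
    block_check_t chk;

    memcpy(original, block, BLOCK_BYTES);
    compute_check(block, &chk);

    block[10] ^= 0x04;                               /* simulate a soft error */
    int result = scrub_block(block, &chk);
    printf("scrub result: %d, block restored: %s\n",
           result, memcmp(block, original, BLOCK_BYTES) == 0 ? "yes" : "no");
    return 0;
}
```

A real scrub would run periodically (or before executing each block) and log what it found, as the answer describes.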
Our product manager did some additional research out of curiosity and convinced himself that the culprit was cosmic radiation. We never proved it unequivocally, but he was satisfied that the number of errors seemed to agree with what would be expected based on the data he found. I'm just glad the returns have stopped.

I doubt you can.
Code that is changed won't run, so likely your program(s) will crash if you have this problem.
This is a hardware problem.

Related

Is SRM in Google Optimize (Bayesian Model) a thing

So checking for Sample Ratio Mismatch is good for data quality.
But in Google Optimize I can't influence the sample size or do anything about it.
My problem is that out of 15 A/B tests I only got 2 experiments with no SRM.
(I used this tool: https://www.lukasvermeer.nl/srm/microsite/)
On the other hand, the Bayesian model deals with things like different sample sizes so I don't need to worry about it, but opinions on this topic differ.
Is SRM really a problem in Google Optimize, or can I ignore it?
SRM affects Bayesian experiments just as much as it affects Frequentist ones. SRM happens when you expect a certain traffic split but end up with a different one. Google Optimize is a black box, so it's impossible to tell whether the uneven sample sizes you are experiencing are intentional or not.
Lots of things can cause an SRM. For example, if your variation's JavaScript code has a bug in some browsers, those users may not be tracked properly. Another common cause is if your variation increases page load times: more people will abandon the page and you'll see a smaller sample size than expected.
That lack of statistical rigor and transparency is one of the reasons I built Growth Book, which is an open source A/B testing platform with a Bayesian stats engine and automatic SRM checks for every experiment.
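For reference, an SRM check is just a chi-square goodness-of-fit test on the observed counts versus the intended split. A rough sketch, with made-up counts and assuming an intended 50/50 split (real SRM alarms often use a stricter cut-off than 0.05):

```c
#include <stdio.h>

/* Chi-square goodness-of-fit statistic for an A/B test with an intended
 * 50/50 split.  3.841 is the 95th-percentile critical value of the
 * chi-square distribution with 1 degree of freedom (p = 0.05). */
static double srm_chi_square(long users_a, long users_b)
{
    double total    = (double)(users_a + users_b);
    double expected = total / 2.0;          /* expected users per arm */
    double da = users_a - expected;
    double db = users_b - expected;
    return da * da / expected + db * db / expected;
}

int main(void)
{
    long a = 10240, b = 9873;               /* made-up observed counts */
    double chi2 = srm_chi_square(a, b);

    printf("chi-square = %.2f\n", chi2);
    if (chi2 > 3.841)
        printf("possible SRM: the split deviates from 50/50 more than chance alone explains\n");
    else
        printf("no evidence of SRM\n");
    return 0;
}
```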

Unknown events in nodejs/v8 flamegraph using perf_events

I'm trying to do some Node.js profiling using Linux perf_events, as described by Brendan Gregg here.
The workflow is as follows:
1. Run node >0.11.13 with --perf-basic-prof, which creates a /tmp/perf-(PID).map file where the JavaScript symbol mappings are written.
2. Capture stacks using perf record -F 99 -p `pgrep -n node` -g -- sleep 30
3. Fold the stacks using the stackcollapse-perf.pl script from this repository
4. Generate the SVG flame graph using the flamegraph.pl script
I get the following result (which looks really nice at first):
The problem is that there are a lot of [unknown] elements, which I suppose should be my Node.js function calls. I assume the whole process fails somewhere at step 3, where the perf data should be folded using the mappings generated by node/v8 executed with --perf-basic-prof. The /tmp/perf-PID.map file is created and some mappings are written to it during node execution.
How can I solve this problem?
I am using CentOS 6.5 x64, and I have already tried this with node 0.11.13 and 0.11.14 (both prebuilt and compiled from source) with no success.
First of all, what "[unknown]" means is that the sampler couldn't figure out the name of the function, because it's a system or library function.
If so, that's OK - you don't care, because you're looking for things responsible for time in your code, not system code.
Actually, I'm suggesting this is one of those XY questions.
Even if you get a direct answer to what you asked, it is likely to be of little use.
Here are the reasons why:
1. CPU Profiling is of little use in an I/O bound program
The two towers on the left in your flame graph are doing I/O, so they probably take a lot more wall-time than the big pile on the right.
If this flame graph were derived from wall-time samples, rather than CPU-time samples, it could look more like the second graph below, which tells you where time actually goes:
What was a big juicy-looking pile on the right has shrunk, so it is nowhere near as significant.
On the other hand, the I/O towers are very wide.
Any one of those wide orange stripes, if it's in your code, represents a chance to save a lot of time, if some of the I/O could be avoided.
2. Whether the program is CPU- or I/O-bound, speedup opportunities can easily hide from flame graphs
Suppose there is some function Foo that really is doing something wasteful, that if you knew about it, you could fix.
Suppose in the flame graph, it is a dark red color.
Suppose it is called from numerous places in the code, so it's not all collected in one spot in the flame graph.
Rather it appears in multiple small places shown here by black outlines:
Notice, if all those rectangles were collected, you could see that it accounts for 11% of time, meaning it is worth looking at.
If you could cut its time in half, you could save 5.5% overall.
If what it's doing could actually be avoided entirely, you could save 11% overall.
Each of those little rectangles would shrink down to nothing, and pull the rest of the graph, to its right, with it.
Now I'll show you the method I use. I take a moderate number of random stack samples and examine each one for routines that might be speeded up.
That corresponds to taking samples in the flame graph like so:
The slender vertical lines represent twenty random-time stack samples.
As you can see, three of them are marked with an X.
Those are the ones that go through Foo.
That's about the right number, because 11% times 20 is 2.2.
(Confused? OK, here's a little probability for you. If you flip a coin 20 times, and it has an 11% chance of coming up heads, how many heads would you get? Technically it's a binomial distribution. The most likely number you would get is 2, and the next most likely numbers are 1 and 3. (If you only get 1, you keep going until you get 2.) Here's the distribution:)
(The average number of samples you have to take to see Foo twice is 2/0.11 = 18.2 samples.)
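If you want to check those numbers yourself, the binomial probabilities are easy to compute. The small sketch below (not part of the original answer) prints P(k of 20 samples land in Foo) for p = 0.11; it peaks at k = 2, with k = 1 and k = 3 close behind:

```c
#include <stdio.h>
#include <math.h>

/* Binomial probability of exactly k "hits" in n samples when each
 * sample independently hits with probability p. */
static double binom_pmf(int n, int k, double p)
{
    double log_comb = lgamma(n + 1) - lgamma(k + 1) - lgamma(n - k + 1);
    return exp(log_comb + k * log(p) + (n - k) * log(1.0 - p));
}

int main(void)
{
    const int n = 20;          /* stack samples taken           */
    const double p = 0.11;     /* fraction of time spent in Foo */

    for (int k = 0; k <= 6; k++)
        printf("P(%d samples hit Foo) = %.3f\n", k, binom_pmf(n, k, p));
    return 0;
}
```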
Looking at those 20 samples might seem a bit daunting, because they run between 20 and 50 levels deep.
However, you can basically ignore all the code that isn't yours.
Just examine them for your code.
You'll see precisely how you are spending time,
and you'll have a very rough measurement of how much.
Deep stacks are both bad news and good news -
they mean the code may well have lots of room for speedups, and they show you what those are.
Anything you see that you could speed up, if you see it on more than one sample, will give you a healthy speedup, guaranteed.
The reason you need to see it on more than one sample is, if you only see it on one sample, you only know its time isn't zero. If you see it on more than one sample, you still don't know how much time it takes, but you do know it's not small.
Here are the statistics.
Generally speaking it is a bad idea to disagree with a subject matter expert but (with the greatest respect) here we go!
SO urges answerers to do the following:
"Please be sure to answer the question. Provide details and share your research!"
So the question, at least as I interpret it, is: why are there [unknown] frames in the perf script output (and how do I turn these [unknown] frames into meaningful names)?
This question could be about "how to improve the performance of my system?", but I don't see it that way in this particular case. There is a genuine problem here about how the perf record data has been post-processed.
The answer to the question is that although the prerequisite setup is correct (the correct node version, and the correct argument, --perf-basic-prof, was present to generate the function names), the generated perf map file must be owned by root for perf script to produce the expected output.
That's it!
Writing some new scripts today I hit upon this, which directed me to this SO question.
Here are a couple of additional references:
https://yunong.io/2015/11/23/generating-node-js-flame-graphs/
https://github.com/jrudolph/perf-map-agent/blob/d8bb58676d3d15eeaaf3ab3f201067e321c77560/bin/create-java-perf-map.sh#L22
[ non-root files can sometimes be forced ] http://www.spinics.net/lists/linux-perf-users/msg02588.html

Visual Studio bothersome message: "Locating source"

While debugging a C# console project, about once an hour, I get the following error for a mind-boggling 20-30 seconds:
The weird part is that the source files are stored on a local SSD....
This is a workflow-disrupting, completely unacceptable nuisance. Googling didn't turn up anything; do you know how to get rid of this?
Long delays are usually associated with network timeouts. There is one setting that can affect this. Right-click your solution node in the Solution Explorer window, the one on top of the tree, Properties, Common Properties, Debug Source Files. Verify that the list contains directories that are not stored on a disconnected network drive.
a mind-boggling 20-30 seconds
If you are sure about the length of the delay then it is not very likely to be a network timeout; it is too short. That kind of delay is associated with another scourge on programmers' machines, notably on the uptick in the past few months: it is the kind of delay you get from anti-malware, with one of your files tripping the pattern of a known virus. The canonical example of such a problem is here, which also has a hint on how to detect it and the recommended workaround.
If none of this pans out then you'll need the help of the techniques outlined in Mark Russinovich's blog posts. There are lots of examples of him using his tools to troubleshoot problems like this. A sampler is this post; it doesn't match your problem, but it shows his approach.

How large is the average delay from key-presses

I am currently helping someone with a reaction-time experiment in which reaction times on the keyboard are measured. For this it might be important to know how much error could be introduced by the delay between the key-press and its processing in the software.
Here are some factors I have already found using Google:
The USB bus is polled at 125 Hz at minimum and 1000 Hz at maximum (depending on settings, see this link), so polling alone can add up to 8 ms of latency.
There might be additional keyboard buffers in Windows that delay the key-presses further, but I do not know the logic behind them.
Unfortunately it is not possible to control the low-level logic of the experiment. The experiment is written in E-Prime, software that is often used for this kind of experiment. However, the company that offers E-Prime also offers additional hardware that they advertise for precise reaction timing, so they seem to be aware of this effect (but do not say how large it is).
Unfortunately it is necessary to use a standard keyboard, so I need to find ways to reduce the latency.
Any latency from key presses can be attributed to the debounce routine (I usually use 30 ms to be safe) and not to the processing algorithms themselves (unless you are only evaluating the first press).
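To make the debounce cost concrete, here is a self-contained sketch of that kind of routine. The 30 ms window matches the answer, but the simulated key signal and timing are invented, and on a PC keyboard the equivalent logic lives in the keyboard controller's firmware, outside your control:

```c
#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

#define DEBOUNCE_MS 30   /* the "30 ms to be safe" window from the answer */

/* Simulated raw switch signal so the sketch runs standalone: the key
 * goes down at t = 100 ms but "bounces" for the first 5 ms. */
static bool read_raw_key(uint32_t t)
{
    if (t < 100) return false;
    if (t < 105) return (t % 2) == 0;
    return true;
}

/* Debounced state: a change in the raw signal is only accepted once it
 * has been stable for DEBOUNCE_MS.  This is where the extra latency the
 * answer mentions comes from. */
static bool debounced_key(uint32_t t)
{
    static bool     stable      = false;
    static bool     last_raw    = false;
    static uint32_t last_change = 0;

    bool raw = read_raw_key(t);
    if (raw != last_raw) {
        last_raw = raw;
        last_change = t;              /* raw signal moved: restart the timer */
    } else if (t - last_change >= DEBOUNCE_MS) {
        stable = raw;                 /* stable long enough: accept it       */
    }
    return stable;
}

int main(void)
{
    for (uint32_t t = 0; t <= 200; t++) {
        if (debounced_key(t)) {
            printf("press registered at t = %u ms (raw press began at 100 ms)\n",
                   (unsigned)t);
            break;
        }
    }
    return 0;
}
```

Running it shows the press being reported a few tens of milliseconds after the physical contact closes, which is the kind of systematic offset the question is worried about.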
If you are running an experiment where millisecond timing is important you may want to use http://www.blackboxtoolkit.com/ to find sources of error.
Your needs also depend on the nature of your study. I've run RT experiments in E-Prime with a keyboard. Since any error should be consistent on average across participants, for some designs it is not a big problem. If you need to sync the data with something else (like eye tracking or EEG), or want to draw conclusions about RT where the specific magnitude is important, then E-Prime's serial response box (or another brand, though I have had compatibility issues in the past between other brands' boxes and E-Prime) is a must.

How does a 7- or 35-pass erase work? Why would one use these methods?

How and why do 7- and 35-pass erases work?
Shouldn't a simple rewrite with all zeroes be enough?
A single pass with zeros doesn't completely erase magnetic artifacts from a disk. It's still possible to recover the data from the drive. A 7-pass erasure using random data will do a pretty complete job to prevent reconstruction of the data on the drive.
Wikipedia has a number of different articles relating to this topic.
http://en.wikipedia.org/wiki/Data_remanence
http://en.wikipedia.org/wiki/Computer_forensics
http://en.wikipedia.org/wiki/Data_erasure
I'd never heard of the 35-pass erase: http://en.wikipedia.org/wiki/Gutmann_method
The Gutmann method is an algorithm for securely erasing the contents of computer hard drives, such as files. Devised by Peter Gutmann and Colin Plumb, it does so by writing a series of 35 patterns over the region to be erased. The selection of patterns assumes that the user doesn't know the encoding mechanism used by the drive, and so includes patterns designed specifically for three different types of drives. A user who knows which type of encoding the drive uses can choose only those patterns intended for their drive. A drive with a different encoding mechanism would need different patterns. Most of the patterns in the Gutmann method were designed for older MFM/RLL encoded disks. Relatively modern drives no longer use the older encoding techniques, making many of the patterns specified by Gutmann superfluous.[1]
Also interesting:
One standard way to recover data that has been overwritten on a hard drive is to capture the analog signal which is read by the drive head prior to being decoded. This analog signal will be close to an ideal digital signal, but the differences are what is important. By calculating the ideal digital signal and then subtracting it from the actual analog signal it is possible to ignore that last information written, amplify the remaining signal and see what was written before.
As mentioned before, magnetic artifacts are present from the previous data on the platter.
In a recent issue of MaximumPC they put this to the test. They took a drive, ran it through a pass of all zeros, and hired a data recovery firm to try and recover what they could. Answer: Not one bit was recovered. Their analysis was that unless you expect the NSA to try, a zero pass is probably enough.
Personally, I'd run an alternating pattern or two across it.
One random pass is enough for plausible deniability, as the lost data would have to be mostly "reconstructed" with a margin of error that grows with the length of the data being recovered, and which also depends on whether the data is contiguous (in most cases it's not).
For the insanely paranoid, three passes is good: 0xAA (10101010), 0x55 (01010101), and then random. The first two will grey out residual bits, and the last random pass will obliterate any "residual residual" bits.
Never do passes with only zeros. Under magnetic microscopy the data is still there, it's just "faded".
Never trust "single file shredding", especially on solid-state media like flash drives. If you need to "shred" a file, well, "delete" it and fill your drive with random data files until it runs out of space. Then, next time, think twice about housing shred-worthy data on the same medium as "low-clearance" stuff.
The Gutmann method is based on tin-foil-hat speculation; it does various things to get drives to degauss themselves, which is admirable in an artistic sense, but pragmatically it's overkill. No private organisation to date has successfully recovered data from even a single random pass. And as for big brother, if the DoD considers it gone then you know it's gone; the military-industrial complex gets all the big bucks to try and do exactly what Gutmann claims they can do, and believe you me, if they had the tech to do so it would already have been leaked to the private sector, since they're all in bed with each other. However, if you want to use Gutmann in spite of this, check out the secure-delete package for Linux.
7-pass and 35-pass erases would take forever to finish. HIPAA only requires a DoD 3-pass overwrite, and I am not certain why DoD even has a 7-pass overwrite, as it seems they simply shred the disks before disposing of machines anyway. Theoretically, you could recover data off the outer edges of each track (using a scanning electron microscope or a microscopic magnetic probe), but in practice you would need the resources of a disk drive maker or one of the three-letter government organizations to do this.
The reason to perform multipass writes is to take advantage of the slight errors in positioning to overwrite the edges of the track as well, making recovery far less likely.
Most drive recovery companies can't recover a drive that has had its data overwritten even once. They are typically taking advantage of the fact that Windows doesn't zero out the data blocks, it just changes the directory to mark the space free. They simply 'undelete' the file and make it visible again.
If you don't believe me, call them up and ask them if they can recover a disk that has been dd'ed over... they will typically tell you no, and if they do agree to try, it will be serious $$$ to get it back...
A DoD 3-pass followed by a zero overwrite should be more than sufficient for most (i.e. non-TOP SECRET) folks.
DBAN (and its commercially supported descendant, EBAN) do this all cleanly... I would recommend these.
See: Secure Deletion of Data from Magnetic and Solid-State Memory
Advanced recovery tools can recover single-pass deleted files easily, and they are expensive too (e.g. http://accessdata.com/).
A visual GUI for Gutmann passes from http://sourceforge.net/projects/gutmannmethod/ shows it has 8 semi-random passes. I have never seen proof that files deleted by Gutmann passes have been recovered.
Overkill, maybe, but still far better than Windows' soft delete.
Regarding the second part of the question, some of the answers here actually contradict real research on that exact topic. According to the "Number of overwrites needed" section of the Data erasure article on Wikipedia, on modern drives erasing with more than one pass is redundant:
"ATA disk drives manufactured after 2001 (over 15 GB) clearing by
overwriting the media once is adequate to protect the media from both
keyboard and laboratory attack." (citation)
Also, Infosec did a nice article entitled "The Urban Legend of Multipass Hard Disk Overwrite" on the entire subject, discussing the old US government erasure standards, among others, and how the multi-pass myth established itself in the industry.
"Fortunately, several security researchers presented a paper [WRIG08]
at the Fourth International Conference on Information Systems Security
(ICISS 2008) that declares the “great wiping controversy” about how
many passes of overwriting with various data values to be settled:
their research demonstrates that a single overwrite using an arbitrary
data value will render the original data irretrievable even if MFM and
STM techniques are employed.
The researchers found that the probability of recovering a single bit
from a previously used HDD was only slightly better than a coin toss,
and that the probability of recovering more bits decreases
exponentially so that it quickly becomes close to zero.
Therefore, a single pass overwrite with any arbitrary value (randomly
chosen or not) is sufficient to render the original HDD data
effectively irretrievable."
There's a lot of misinformation around this, though most of the answers I see on this page are correct. I've worked in the data recovery industry for 25 years and have addressed this exact question an enormous number of times.
The "residual magnetism" hypothesis never worked in real life. And back then, tolerances were millions of times looser.
If you still doubt this, remember that a rotational hard drive uses the same storage principle as an audio tape - moving magnetic substrate storage - and the audio tape that was recorded over a single time in the Watergate case has still not been recovered.
A single zero-pass wipe renders all the data on a HDD unrecoverable unless some malfunction or mistake causes the overwrite to be incomplete. This was true even back in the days when Peter Gutmann released his paper (which was like a tsunami in the erasure industry.) Gutmann's paper was pure hypothesis, it never panned out in reality. Even in the days of MFM/RLL drives, nobody could recover from a single-pass overwrite. It should be noted that Gutmann patented the algorithm that his paper said would be required to ensure complete erasure. Presumably, every time erasure was sold with his algorithm, he got paid. I am not saying there was intentional deception on his part, just pointing out that his algorithm, though there was never any evidence it erased better than a single overwrite, was patented and sold.
Please note that SSDs are different. SSDs can (and often do) use a pool of sectors that are rotated in and out of use, so if data is written to an SSD and then "deleted", and the drive rotates the sectors the deleted file was on out of the pool, an erasure might not be able to reach those sectors because the firmware in the SSD has control that software can't override. One way around this is to continuously overwrite until all sectors have been rotated into use.
The reason multiple passes exist is because hardware can malfunction. If the drive somehow malfunctions during one pass, it's possible that not all sectors will be erased - however, most good erasure software offers a full verification, which basically reads every bit on the drive to make sure the erasure didn't malfunction. With that, multi-pass overwrites are overkill.
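As an illustration of that overwrite-then-verify idea (not a replacement for proper erasure tools like DBAN or a vendor secure-erase), a bare-bones single-pass sketch might look like this. The target path is a placeholder demo file; pointing it at a real device would, of course, destroy the device's contents:

```c
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define TARGET   "/tmp/demo.img"          /* placeholder, NOT a real disk */
#define BUF_SIZE (1024 * 1024)

int main(void)
{
    unsigned char *zeros = calloc(1, BUF_SIZE);
    unsigned char *check = malloc(BUF_SIZE);
    FILE *fp = fopen(TARGET, "r+b");
    if (!zeros || !check || !fp) {
        perror("setup");
        return EXIT_FAILURE;
    }

    /* Find out how many bytes we have to cover. */
    fseek(fp, 0, SEEK_END);
    long size = ftell(fp);
    rewind(fp);

    /* Pass 1: overwrite everything with zeros. */
    for (long done = 0; done < size; ) {
        size_t chunk = (size - done) < BUF_SIZE ? (size_t)(size - done) : BUF_SIZE;
        if (fwrite(zeros, 1, chunk, fp) != chunk) {
            perror("write");
            return EXIT_FAILURE;
        }
        done += (long)chunk;
    }
    fflush(fp);

    /* Pass 2: read every block back and confirm it really is zeroed. */
    rewind(fp);
    long bad = 0;
    size_t n;
    while ((n = fread(check, 1, BUF_SIZE, fp)) > 0)
        if (memcmp(check, zeros, n) != 0)
            bad++;
    fclose(fp);

    if (bad)
        printf("verify FAILED on %ld block(s)\n", bad);
    else
        printf("verify OK: every block reads back as zeros\n");
    free(zeros);
    free(check);
    return 0;
}
```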
And sometimes, data is so sensitive, it makes sense to go overboard in making sure it's destroyed. For example, I heard about a drive that was erased by the military with a 7-pass zero-fill, then the drive was run over by a tank, and then the remains were buried in a secret location in a highly secured area. Practically, the recoverability is about the same as a single-pass overwrite, but if lives could be lost as a result of the data falling into the wrong hands, then why not go for the overkill?
