Calculating a Real Address from a Virtual Address - memory-address

Below is an example of a question from a worksheet in my Computer Engineering course. Up until now we've always been given some information to work with, such as the device's address space, the RAM capacity, and so on, so I might be overcomplicating things.
I don't get what the colons are for or how to proceed with only this information. Would anyone mind explaining it to me?
Calculate the Real Address of the following Virtual Addresses:
a) 1000H:1F00H
b) 1001H:0011H
c) 1209H:0011H
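For what it's worth, if these are the usual 8086 real-mode segment:offset pairs (which the H suffix and the colon suggest, though your course materials should confirm it), the rule is physical address = segment * 16 + offset. A minimal sketch under that assumption:

# A minimal sketch, assuming the standard 8086 real-mode rule:
# physical address = segment * 16 + offset (segment shifted left by 4 bits).
def real_address(segment: int, offset: int) -> int:
    return (segment << 4) + offset

for seg, off in [(0x1000, 0x1F00), (0x1001, 0x0011), (0x1209, 0x0011)]:
    print(f"{seg:04X}H:{off:04X}H -> {real_address(seg, off):05X}H")

# Output under that assumption:
# 1000H:1F00H -> 11F00H
# 1001H:0011H -> 10021H
# 1209H:0011H -> 120A1H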

Related

problem about predecessor attack in ToR network

I am reading an article about attacks on Tor. One of them is the predecessor attack, and the article has this paragraph about it, but I cannot understand this part (please explain it in more detail):
"In the case of Hidden Servers and using our scenario of attack, the Predecessor Attack becomes
trivial. Alice can now make statistics of the IP addresses that contacted the server in the cases where a positive traffic-pattern match was found. By selecting only circuits where there has been a match, and using an m node path towards RP, one single IP address will occur in around 1/m
of these connections when HS is selecting its first node. The attacker will then easily identify the IP address of the Hidden Server as long as m is significantly smaller than the number of nodes in the network "
You can read the article at this link:
https://www.onion-router.net/Publications/locating-hidden-servers.pdf
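For intuition about the 1/m figure, here is a rough, purely illustrative simulation (my own sketch, not taken from the paper). It assumes the attacker's node ends up at a uniformly random position in each matched m-node circuit and records the IP address of the hop that connected to it; the Hidden Server's own IP then shows up in roughly 1/m of the matched circuits, while any individual honest relay's IP stays rare.

import random
from collections import Counter

m = 3                  # path length towards the rendezvous point (RP)
n_relays = 1000        # honest relays in the network (made-up figure)
trials = 100_000

seen = Counter()
for _ in range(trials):
    position = random.randrange(m)     # attacker node's position in the HS's path
    if position == 0:
        seen["HS"] += 1                # the previous hop is the Hidden Server itself
    else:
        seen[f"relay-{random.randrange(n_relays)}"] += 1

print(seen["HS"] / trials)                                      # roughly 1/m
print(max(v for k, v in seen.items() if k != "HS") / trials)    # every single relay stays rare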

Geocoding street addresses with no suffixes

Situation:
I have been tasked with geocoding and plotting addresses to a map of a city for a friend of the family.
I have a list of over 400 addresses. Included in those addresses are PO Boxes and addresses with Street Number, Direction, Street Name, Street Suffix (some do not have this), City, and Zip Code.
I tried to geocode all of the addresses with Geopy and Nominatim.
Issue:
I noticed that those addresses without street suffixes and PO Boxes could not be geocoded.
What I have done:
I have read most posts dealing with addresses, read the Geopy notes and google searched until the cows came home.
I ended up stumbling across a geocoding best-practices page which says that PO Boxes cannot be mapped and that a street suffix is required for mapping:
http://www.gis.harvard.edu/services/blog/geocoding-best-practices
Question:
Is there a way to search for the street suffix of each street that is missing a street suffix?
Is there another free service or library, other than Nominatim and Geopy, that can work with the information I have and not require me to look up each individual street suffix in Google Maps?
Please advise!
I found out that using Geopy with Google's API can find the correct addresses that services like Nominatim, OpenCage and OpenMapQuest will not find.
There is one downside: the autocomplete can make it hard to determine whether the returned address is the correct one.
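For reference, a minimal sketch of that comparison with Geopy (the address and API key are placeholders; check each service's terms and rate limits before batch use):

from geopy.geocoders import GoogleV3, Nominatim

address = "123 N Main, Springfield"                # hypothetical address missing its suffix

geocoders = {
    "Nominatim": Nominatim(user_agent="suffix-lookup-example"),
    "GoogleV3": GoogleV3(api_key="YOUR_API_KEY"),  # placeholder key
}

for name, geocoder in geocoders.items():
    location = geocoder.geocode(address)
    if location is None:
        print(f"{name}: no match")
    else:
        # The normalised address in the response often includes the missing
        # suffix, but it is worth eyeballing before trusting the match.
        print(f"{name}: {location.address} ({location.latitude}, {location.longitude})")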
First, speaking to the need to find an address that is missing a street suffix, you need to use address completion from an address validation service. Services that do address validation/verification use postal service data (and other data) and match address search inputs to real addresses. If the search input is not sufficiently specific, address validation services may return a handful of potential matches. Here is an example of a non-specific address (missing the State, zip code, and the street suffix) that returns two real addresses that match the search input. SmartyStreets can normally fill in the missing street suffix.
Second, speaking to the PO Box problem: some address services can give you geocode information, as well as other information that you may believe isn't available. For instance, this search shows the SmartyStreets service matching a PO Box number (that I just made up) to the local post office. The latitude and longitude in the response JSON corresponds to the post office when I search it on Google Maps.
Third, speaking to the problem of having a list of addresses: there are various address services that allow batch processes. For instance, it's a fairly common feature to allow a user to upload a spreadsheet of addresses. Here is the information page for SmartyStreets' tool.
There are multiple address services that can help you do all or some of these things. Depending on the service, they provide some free functionality or have free tiers if you don't do very many searches. I am not aware of a service that does everything you need for free. You could probably use a few services together, like the Google Maps API through Geopy, but it would take some effort to code up a script to put them all together.
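If you do end up stitching services together, the glue code can be as simple as a fallback chain; a sketch of that idea (the geocoder choices, the key and the one-second pause are all placeholders):

import time
from geopy.geocoders import GoogleV3, Nominatim

geocoders = [
    Nominatim(user_agent="address-batch-example"),   # free, but strict about well-formed input
    GoogleV3(api_key="YOUR_API_KEY"),                # placeholder key; better at fuzzy matches
]

def geocode_with_fallback(address):
    # Try each service in order and return the first hit.
    for geocoder in geocoders:
        location = geocoder.geocode(address)
        if location is not None:
            return location
    return None

addresses = ["1 Example St, Springfield, IL 62701"]  # your ~400 rows would go here
for addr in addresses:
    print(addr, "->", geocode_with_fallback(addr))
    time.sleep(1)                                    # stay inside the free tiers' rate limits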
Full disclosure: I worked for SmartyStreets.

SSIS Split String address

I have a column which is made up of addresses, as shown below.
Address
1 Reid Street, Manchester, M1 2DF
12 Borough Road, London, E12,2FH
15 Jones Street, Newcastle, Tyne & Wear, NE1 3DN
etc .. etc....
I want to split this into different columns to import into my SQL database. I have been trying to use FINDSTRING to separate on the commas, but I am having trouble when some addresses have more "sections" than others. Any ideas what's the best way to go about this?
Many thanks
This is a requirements specification problem, not an implementation problem. The more you can afford to assume about the format of the addresses, the more detailed parsing you will be able to do; the other side of the same coin is that the less you assume about the structure of the address, the fewer incorrect parses you will be blamed for.
It is crucial to determine whether you only need to process UK postal addresses, or whether worldwide addresses may occur.
Based on your examples, certain parts of the address seem to be always present, but please check this resource to determine whether they are really required in all UK postal addresses.
If you find a match between the depth of parsing that you need, and the assumptions that you can safely make, you should be able to keep parsing by comma indexes (FINDSTRING); determine some components starting from the left, and some starting from the right of the string; and keep all that remains as an unparsed body.
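The same idea (take some components from the left, some from the right, and keep the rest unparsed), sketched in Python purely to show the shape of the logic; in the package itself this would be FINDSTRING expressions or a Script Transformation, and the assumption that the first token is the building address, the second the post town and the last the postcode is mine:

def parse_uk_address(raw):
    # Assumes at least "building address, post town, postcode"; anything extra
    # in the middle is kept as an unparsed remainder rather than guessed at.
    parts = [p.strip() for p in raw.split(",") if p.strip()]
    if len(parts) < 3:
        return {"unparsed": raw}                # fails the minimum criteria
    return {
        "building_address": parts[0],           # taken from the left
        "post_town": parts[1],                  # taken from the left
        "postcode": parts[-1],                  # taken from the right
        "unparsed": ", ".join(parts[2:-1]),     # whatever remains in the middle
    }

for row in ["1 Reid Street, Manchester, M1 2DF",
            "15 Jones Street, Newcastle, Tyne & Wear, NE1 3DN"]:
    print(parse_uk_address(row))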
It may well happen that you find your current task is mission impossible, especially in connection with international postal addresses. This is why most websites and other data collectors require users to enter postal addresses in an already parsed form.
Excellent points raised by Hanika. Some of your parsing will depend on what your target destination looks like. As an ignorant yank, based on Hanika's link, I'd think your output would look something like
Addressee
Organisation
BuildingName
BuildingAddress
Locality
PostTown
Postcode
BasicsMet (a boolean indicating whether the minimum criteria for a good address have been met)
In the US, just because an address could not be properly CASSed doesn't mean it couldn't be delivered. Case in point: my grandparents-in-law live in a small enough town that specifying their name and city is sufficient for delivery, as the local postal officials know who they are. For bulk mailings, though, their address would not qualify for the bulk mailing rate and would default to first-class mail. I assume a similar scenario exists for UK mail.
The general idea is for each row flowing through, you'll want to do your best to parse the data out into those buckets. The optimal solution for getting it "right" is to change the data entry method to validate and capture data into those discrete buckets. Since optimal never happens, it becomes your task to sort through the dross to find your gold.
Whilst you can write some fantastic expressions with FINDSTRING, I'd advise against it in this case, as maintenance alone will drive you mad. Instead, add a Script Transformation and build your parsing logic in .NET (VB or C#). There will then be a cycle of running data through your transformation and having someone eyeball the results. If you find a new scenario, you go back and adjust your business rules. It's ugly, it's iterative, and it's prone to producing results that a human wouldn't have produced.
Alternatives to rolling your own address standardisation logic:
Buy it. Eventually your business needs outpace your ability to cope with constantly changing business rules. There are plenty of vendors out there, but I'm only familiar with US-based ones.
Upgrade to SQL Server 2012 to use DQS (Data Quality Services). You'll probably still need to buy a product to build out your knowledge base, but you could offload the business-rule-making task to a domain expert ("Hey you, you make peanuts an hour. Make sure all the addresses coming out of this look like addresses" was how they covered this at the beginning of one of my jobs).

Bandwidth loss over distance - am I being fed a line of bull or do I have research to do?

I just had a strange conversation with a man who was trying to explain to me that it is impossible for two healthy networks to communicate with each other across the ocean without significant bandwidth loss.
For example: if you have a machine connected at 100Mb/sec here http://www.hetzner.de/en/hosting/unternehmen/rechenzentrum attempt to communicate with a machine in the US with exactly the same setup, you'd only achieve a fraction of the original connection speed. This would be true no matter how you distributed the load; the total loss over distance would be the same. "Full capacity" between the US and Germany would be less than half of what it would be to a data center a mile from the originator with the same setup.
If this is true, then my entire understanding of how packets work is wrong. I mean, if there's no packet loss, why would there be any issue other than latency? I'm trying to understand his argument but am at a loss. He seems intelligent and 100% sure of his information. It was very difficult to follow because he described data like a river while I was thinking of it as a series of packets.
Can someone explain what I'm missing, or am I just dealing with a madman in a position of authority and confidence?
He could be referring to the number of packets you would be able to have 'in flight' at any one time.
Take a look at Wikipedia's entry on Bandwidth Delay Product for some more information on this:
http://en.wikipedia.org/wiki/Bandwidth-delay_product
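To put rough numbers on it: a single TCP connection cannot move more than one receive window per round trip, so a transatlantic RTT caps per-connection throughput even on a clean 100 Mb/s link. Illustrative figures only:

# Per-connection TCP throughput is capped at roughly window_size / RTT.
window_bytes = 64 * 1024        # classic 64 KiB receive window, no window scaling
rtt_seconds = 0.090             # ~90 ms Germany <-> US round trip (illustrative)

max_throughput_bps = window_bytes * 8 / rtt_seconds
print(f"{max_throughput_bps / 1e6:.1f} Mbit/s")          # ~5.8 Mbit/s despite the 100 Mbit/s link

bdp_bytes = (100e6 / 8) * rtt_seconds
print(f"{bdp_bytes / 1024:.0f} KiB must be in flight to fill the pipe")

With window scaling enabled, or with several connections in parallel, the full link rate is reachable again, which is why the "total loss over distance is the same no matter how you distribute the load" claim doesn't hold up.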
That said, depending on the link you have between those two places, I don't think latency would be enough of an issue to cause problems with this (assuming a fibre connection, not satellite).
He could also be referring to the fact that there would be a number of round trips to set up a TCP connection, so the apparent speed to an end user who is setting up lots of small connections (web browsing) might be lower.
-Matt

How much risk is exposing all the sources to a third party? [closed]

I've been arguing with a co-worker about how necessary it is to wipe or destroy the hard disks that were used for storing the sources and are replaced with bigger ones or discarded.
His point is that no piece of source code exposed to a third party gives that party any competitive advantage. My point is that it only takes ten minutes to set up a wiping program and start it before leaving, and in the morning you have a disk that contains no data that could possibly be recovered; it doesn't hurt and it completely removes the risk.
Now really how risky is it to throw away a hard drive containing a working copy of a repository of a commercial product having 10 million lines of source code?
The Drake Equation states that
N = R * d * p * e * c * x * y * z
where
N is the probability that doing this will result in the bankruptcy of your company, leaving you and all your co-workers unemployed and starving.
R is the number of hard drives discarded every year without first being erased
d is the fraction of those hard drives that are fished out of dumpsters
p is the fraction of recovered drives that are ever plugged in and fired up
e is the number of such drives that are subsequently listed on eBay because their contents look interesting
c is the number of competitors you have who browse eBay looking for trade secrets
x is the probability that your discarded drive contains something they can use
y is the probability that they do actually use that information
z is the probability that their use of such information ruins your company.
To estimate the risk that someone will work out it was you and sue/prosecute you for the damage you caused, calculate
(N / t) * m
where t is the number of people on your team, and m is the number of managers who are paying enough attention to work out who did what.
If you can prove that any of the coefficients involved is zero, then your strategy is risk-free. Otherwise there's a very small chance you'll bankrupt your company, starve your colleagues and end up in jail.
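Plugging made-up numbers into the formula, just to show the arithmetic (every coefficient below is invented):

# Entirely made-up coefficients, purely to illustrate the arithmetic.
R = 4        # hard drives discarded un-erased per year
d = 0.01     # fraction fished out of dumpsters
p = 0.5      # fraction of recovered drives that get powered up
e = 0.1      # fraction that look interesting enough to list on eBay
c = 2        # competitors trawling eBay for trade secrets
x = 0.1      # probability the drive holds something they can use
y = 0.1      # probability they actually use it
z = 0.01     # probability that use ruins your company

N = R * d * p * e * c * x * y * z
print(N)     # 4e-07: very small, but not zero, which is the point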
I also would not worry that much about leaking source code - the source alone is of limited value without the technical and domain knowledge required to use it. If you just want to copy stuff, you'll pirate the binary. Still, it's probably better to keep it private if you don't want to release it.
I'd be more concerned about private data on the drives: private or confidential business email, or test data with confidential information (think an employee database or similar). That might expose you or your company to lawsuits from affected parties.
So definitely wipe the disks. Even just checking that there's nothing sensitive on the disks is more work than just wiping them.
Without wiping it, it's very risky. If you sell the disk on eBay, you'll find that many buyers will run recovery software on it.
To sell it safely, it's enough to overwrite the whole disk once. The idea that it's possible to recover data after it has been overwritten is a myth; not even the NSA could do it.
If you don't have a special disk wiper, either use a scripting language to write a single big file onto the disk until it's full, or format the disk with the "quick format" option unchecked. On Linux/Unix, use dd if=/dev/zero of=/dev/xxx and be really, really sure that the device given is the correct one.
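The "write a single big file until the disk is full" approach, sketched in Python. Note that this only overwrites space the filesystem currently considers free, so delete the old contents (or reformat) first; the path below is a placeholder.

# Fill the filesystem with zeros until the OS reports "no space left on device".
chunk = b"\0" * (4 * 1024 * 1024)                        # 4 MiB per write
try:
    with open("/mnt/olddisk/wipe.bin", "wb", buffering=0) as f:
        while True:
            f.write(chunk)
except OSError:
    print("Disk full: free space has been overwritten once.")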
It all boils down to the unintended release of somebody's intellectual property with an associated value.
Who owns the Intellectual Property?
If it belongs to your company then the Board should be very annoyed to see an asset being released - it complicates corporate actions (Q. Is there any chance that other parties have access to this technology?)
SO WIPE THE DISK
If it belongs to a third party (perhaps work done by your company for them) then they'll be pissed (Q. Can we have our money back please?)
SO WIPE THE DISK
Aren't there any corporate IT standards in your organisation? Are you likely to get asked difficult questions?
SO WIPE THE DISK
I wouldn't take such a big risk. I'd do it on every hard disk I want to give/sell:
Boot up a Linux system from a Live CD/USB and run:
shred /dev/xxx
I'd say it depends on the source code. I'm sure Google wouldn't risk throwing away a hard drive that contains their complete search algorithm; that would sell on eBay. On the other hand, if yours is 'just another application' for some insurance company, which won't interest any living soul except you and the company itself, then why bother?
Then again, if you're really concerned, just grab a big hammer and smash your hard drive to smithereens.
While the source code may be of limited value to a third party, there is always a bit more in the source code than just pure statements: there may be comments describing algorithms or trade secrets, names and email addresses of programmers or customers, or code implementing some encryption scheme or copy protection. If somebody is knowledgeable and patient, they can learn a lot from the code.
The bottom line: better be safe than sorry.
It is a question of how to treat information. Letting any proprietary information, source code, documents, or any other type of information fall into the hands of a competitor, or anyone else who might misuse it or take advantage of it, is simply to be avoided.
Unless it requires a huge investment, big chunks of your time, or generally causes hassle for you, there is no reason not to clean up before you throw it out of your sight.
It makes me nervous that IP and information security (even if nobody strictly classifies it as such) is still a matter of debate in most of our workplaces.
Any malicious, competent person with your source code has a much better chance of finding security holes and exploits in your system, even ones that you might not be aware of.
It may not compile, but it's a tour guide into your world. Wipe it.
