When the card generates the ARQC, does the card number play any role? In other words: given two brand new cards with their ATC set to 0 and the same CDOL data, will they generate the same ARQC?
Card number and ATC are involved in session key generation. So every card will generate a different cryptogram.
I am not sure whether you are referring to the same card number (assuming you were able to clone a card :), but the cryptograms generated by your two cards will still differ even on their first use. If you look at the CDOL, you will see an element called the unpredictable number, which is generated by the terminal. This is a 4-byte numeric field, and the chance of the same value being generated twice is very small. Even in the case where the unpredictable number is the same, you would still need all the other elements in the CDOL (amount, currency, country, date, ATC, etc.) to be identical to get the same cryptogram. To block even this rare possibility, issuers maintain the last used ATC for each card and will not accept any ATC that is the same or lower. Hope it is clear.
So, a bit of explanation to preface the question. In variants of Open Face Chinese poker, you are dealt cards one at a time, which are to be placed into three different rows; the goal is to make each row increasingly better from top to bottom, while of course making the best hands possible. A difference from normal poker is that the top row only contains three cards, so three of a kind is the best possible hand you can get there. In a variant of this called Pineapple, which is what I'm working on a bot for, you are dealt three cards at a time after the initial 5, and you discard one of those three cards each round.
Now, there's a special rule called Fantasyland: if you get a pair of queens or better in the top row, and still manage to get successively better hands in the middle and bottom rows, your next round becomes a Fantasyland round. This is a round where you are dealt 15 cards at the same time and are free to construct the best three rows possible (rows of 3, 5, and 5 cards, discarding 2 of them). Each row yields a certain number of points (royalties, as they're called) depending on which hand is constructed, and each successive row needs better and better hands to yield the same amount of points.
Trying to optimize Fantasyland solutions seemed like a natural starting point, and one of the most interesting parts as well, so I started working on it. My first attempt, which is also where I'm stuck, was to use Simulated Annealing to do local search optimization. The energy/evaluation function is the number of points, and at first I tried a move/neighbor function that simply swaps two cards at random, having first placed them as they were drawn. This worked decently, managing a mean of around 6 points per hand, which isn't bad, but I often noticed that I could spot better solutions by swapping more than one pair of cards at the same time. Thus, I changed the move/neighbor function to swap several pairs of cards at once, and also tried swapping a random number of pairs (between 1 and a cap of 3 to 5), which yielded slightly better results, but I still often spot better solutions just by looking.
If anyone is reading this and understands the problem, any idea on how to better optimize this search? Should I use a different move/neighbor function, different Annealing parameters, or perhaps a different local search method, or even some kind of non-local search? All ideas are welcome and deeply appreciated.
You haven't indicated a performance requirement, so I'll assume that this should work quickly enough to be usable in a game with human players. It can't take an hour to find the solution, but you don't need it in a millisecond, either.
I'm wondering if simulated annealing is the right method. This might be an opportunity for brute force.
One can make a very fast algorithm for evaluating poker hands. Consider an encoding of the cards where 13 bits encode the card value and 4 bits encode the suit. OR together the cards in the hand and you can quickly identify pairs, triples, straights, and flushes.
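For illustration, here is a minimal Python sketch of that encoding (the bit layout and helper names are my own; a real evaluator would also need the A-2-3-4-5 wheel straight and tie-break logic):

```python
# One card = 17 bits: low 13 bits one-hot rank, top 4 bits one-hot suit.
RANKS = "23456789TJQKA"
SUITS = "cdhs"

def encode(card: str) -> int:
    return (1 << RANKS.index(card[0])) | (1 << (13 + SUITS.index(card[1])))

def is_flush(hand) -> bool:
    # OR the suit bits together; a flush leaves exactly one bit set.
    suits = 0
    for card in hand:
        suits |= encode(card) >> 13
    return bin(suits).count("1") == 1

def rank_mask(hand) -> int:
    # OR the rank bits together; 5 consecutive set bits = a straight,
    # fewer than 5 set bits total = the hand holds pairs/trips/quads.
    mask = 0
    for card in hand:
        mask |= encode(card) & 0x1FFF
    return mask

hand = ["Ah", "Kh", "Qh", "Jh", "Th"]
mask = rank_mask(hand)
is_straight = any((mask >> i) & 0b11111 == 0b11111 for i in range(9))
print(is_flush(hand), is_straight)  # True True (a royal flush)
```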
At first glance, there would seem to be 15! (1,307,674,368,000) possible arrangements of all the cards which are dealt, but there are symmetries and restrictions that reduce the meaningful combinations and limit the space that must be explored.
One important constraint is that the bottom row must have a higher score than the middle row and that the middle row must have a higher score than the top row.
There are 3003 sets of bottom cards: COMBINATIONS(15 cards, 5 at a time) = (15!)/(5!(15-5)!) = 3003. For each set of possible bottom cards, there are COMBINATIONS(10 cards, 5 at a time) = (10!)/(5!(10-5)!) = 252 sets of middle cards. The top row has COMBINATIONS(5 cards, 3 at a time) = (5!)/(3!(5-3)!) = 10. With no further optimization, a brute force approach would require evaluating 3003 * 252 * 10 = 7,567,560 positions. I suspect that this can be evaluated within an acceptable response time.
A further optimization uses the constraint that each row must be worth less than the row below. If the middle row is worth more than the bottom row, the top row can be ignored by pruning the tree at that point, which removes a factor of 10 for those cases.
Also, since the bottom row must be worth more than the middle and top rows, there may be some minimum score the bottom row must achieve before it is worth trying middle rows. Rejecting a bottom row outright prunes 252 * 10 = 2,520 cases from the tree.
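Here is a sketch of that pruned enumeration in Python (row_score and row_rank are assumed evaluator functions supplied elsewhere, e.g. built on the bit encoding above; comparing a 3-card top row against a 5-card middle row needs its own ranking rules, which row_rank is assumed to handle):

```python
from itertools import combinations

def best_arrangement(cards, row_score, row_rank):
    # Enumerate C(15,5) * C(10,5) * C(5,3) = 7,567,560 arrangements, pruning
    # whenever the middle row would outrank the bottom row. `row_score`
    # returns royalty points; `row_rank` returns a comparable hand strength.
    cards = set(cards)
    best, best_pts = None, -1
    for bottom in combinations(cards, 5):
        rest = cards - set(bottom)
        for middle in combinations(rest, 5):
            if row_rank(middle) > row_rank(bottom):
                continue  # prune: no top row can fix a fouled hand
            last5 = rest - set(middle)
            for top in combinations(last5, 3):
                if row_rank(top) > row_rank(middle):
                    continue
                pts = row_score(bottom) + row_score(middle) + row_score(top)
                if pts > best_pts:
                    best, best_pts = (top, middle, bottom), pts
    return best, best_pts
```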
I understand that there is a way to use simulated annealing for estimating solutions for discrete problems. My use of simulated annealing has been limited to continuous problems with edge constraints. I don't have a good intuition for how to apply SA to discrete problems. Many discrete problems lend themselves to an exhaustive search, provided the search space can be trimmed by exploiting symmetries and constraints in the particular problem.
I'd love to know the solution you choose and your results.
Here is a question that has kept me awake for a number of days now. The only conclusion I have come up with so far is that Red Bull does not usually help coders.
I have a scenario in my application where I have a number of jobs (anywhere from 1 to 50). Each job has an address, and I have the following properties of an address: Postcode, Latitude, and Longitude.
I have a table of workers as well, and they too have addresses. When jobs or workers are created through the screens, I use Google Maps queries to make sure the provided Postcode is valid and is in the UK, so all the addresses are verified.
I am using a scheduler control to display workers on the y-axis and a timeline on the x-axis. Every job has a date and can only move vertically on the scheduler, on the job's date. The user selects a number of jobs and they are displayed in a basket close to the scheduler. The user can then drag and drop jobs onto workers. All this is manual, so it works.
My task is to automate this so that the user does not have to do much beyond verifying and allotting the jobs.
Every worker has a property called WillingMaximumDistanceTravel, an integer representing the number of miles the worker is willing to travel for a job.
Now here is the headache: I have over 1500 workers. I have a utility function that uses Newtonsoft's JsonConvert to deserialize a stream of response from Google Maps. I need to feed it Postcodes A and B.
I also plan to introduce a new table in the DB to store the distances found, as Postcode A, Postcode B, and Distance. That way, if I find myself comparing the same postcodes again, I can just retrieve the result from the DB instead, and slowly but surely I would no longer need to bother Google at all, as this table would become very comprehensive.
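As a sketch, that cache-through table could look like this (Python with sqlite for illustration; driving_miles_from_api is a hypothetical placeholder for your existing Google Maps + JsonConvert call, and storing each unordered pair once assumes A-to-B and B-to-A distances are close enough for your purposes):

```python
import sqlite3

conn = sqlite3.connect("distances.db")
conn.execute("""CREATE TABLE IF NOT EXISTS distance_cache (
    postcode_a TEXT NOT NULL,
    postcode_b TEXT NOT NULL,
    miles      REAL NOT NULL,
    PRIMARY KEY (postcode_a, postcode_b))""")

def driving_miles_from_api(a: str, b: str) -> float:
    # Hypothetical stand-in for your existing Google Maps call.
    raise NotImplementedError

def get_distance(a: str, b: str) -> float:
    a, b = sorted((a.upper(), b.upper()))  # store each unordered pair once
    row = conn.execute(
        "SELECT miles FROM distance_cache WHERE postcode_a = ? AND postcode_b = ?",
        (a, b)).fetchone()
    if row:
        return row[0]                      # cache hit: Google is not bothered
    miles = driving_miles_from_api(a, b)
    conn.execute("INSERT INTO distance_cache VALUES (?, ?, ?)", (a, b, miles))
    conn.commit()
    return miles
```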
I cannot use the simple Haversine formula, as an as-the-crow-flies distance is not what I need here. The pain is that this takes a lot of time to calculate: some workers can travel over 10 miles, while others vary from 15 to 80, and I have to take the first job from the list and run it against every applicable worker on the system! I was wondering: UK postcodes have a pattern to them. If we sort a list of UK postcodes, can we roughly estimate, from the alphanumeric pattern, where we will hit a 100-mile mark, a 200-mile mark, and so on?
If anyone is interested in the code, please drop a line and I will paste it.
(I work for Google, but I'm not speaking on behalf of Google. I have nothing to do with the maps API.)
I suspect this isn't a great situation for using the Google Maps API, simply because you're pushing so much data through. You really don't want to make that many requests, even if you could do so under the directions limits.
When I tackled something similar in a previous job, we bought into a locally-hosted maps API - but even that wasn't fast enough for this sort of work. We ended up precomputing the time to travel from the centroid of each postcode "area" (probably the wrong name for it, but the first part of the postcode followed by the first digit of the remainder, e.g. "SW1W 9" for "SW1W 9TQ") to every other area, storing the result in a giant table. I think we only did it for postcodes which were within 100 miles or something similar, to cut down on the amount of preprocessing.
Even then, a simple DB wasn't quite as fast as we wanted - so we stored the results in a giant file, with a single byte per source/destination pair. (We had a fixed sequence of source postcodes and target postcodes, so we didn't need to specify those.) At that point, computing a travel time consisted of:
Work out postcode areas (substring work)
Find the index of each postcode area within the sequence
Check if we'd loaded that part of the file (we lazy loaded for startup speed)
Load the row if necessary, and just access it otherwise
The bytes were on a sliding scale of accuracy, so for the first 60 minutes it was on a per-minute basis, then each extra value meant an extra 2 minutes, then 5 etc. (Those aren't the exact values, but it was something like that.)
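A toy sketch of that whole lookup pipeline (the area list, table bytes, and decode break-points here are invented for illustration; the real system lazy-loaded rows from the file rather than holding the table in memory):

```python
def area(postcode: str) -> str:
    # Outward code plus the first digit of the inward code,
    # e.g. "SW1W 9TQ" -> "SW1W 9".
    outward, inward = postcode.split()
    return f"{outward} {inward[0]}"

# Fixed sequence of areas; an area's index here is its row/column in the table.
AREAS = ["SW1W 9", "EC1A 1", "M1 1"]                  # toy data
INDEX = {a: i for i, a in enumerate(AREAS)}

# One byte per source/destination pair, row-major.
TABLE = bytes([0, 45, 90,
               47, 0, 75,
               92, 73, 0])                            # toy data

def decode_minutes(b: int) -> int:
    # Sliding scale: 1 minute per step up to 60, then 2, then 5
    # (the real break-points were similar but not identical).
    if b <= 60:
        return b
    if b <= 120:
        return 60 + (b - 60) * 2
    return 180 + (b - 120) * 5

def travel_minutes(src: str, dst: str) -> int:
    i, j = INDEX[area(src)], INDEX[area(dst)]
    return decode_minutes(TABLE[i * len(AREAS) + j])

print(travel_minutes("SW1W 9TQ", "M1 1AE"))           # byte 90 -> 120 minutes
```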
When you've worked out "good candidates" you can ask an on-site API or the Google Maps API for more accurate directions for your exact postcodes, of course.
You want to look for a spatial index or a space-filling curve. A spatial index reduces the 2D problem to a 1D problem by recursively subdividing the surface into smaller tiles; it is basically a reordering of the tiles. You can subdivide the surface either with a numeric index or with a string over 4 characters. The latter can be useful to you because it lets you query with string operations handled entirely inside the database engine. Look for Nick's spatial index quadtree Hilbert curve blog.
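For illustration, here is a toy quadtree key of the kind that blog describes: subdivide the bounding box recursively and append one of 4 characters per level, so nearby points share prefixes and a plain LIKE 'prefix%' query retrieves a tile (the coordinates below are rough, not real postcode centroids):

```python
def quad_key(lat: float, lon: float, depth: int = 8) -> str:
    # Recursively halve the bounding box; one character out of 4 per level.
    # Nearby points share long prefixes, so `WHERE key LIKE 'abdd%'`
    # retrieves everything inside that tile using plain string operations.
    lat_lo, lat_hi = -90.0, 90.0
    lon_lo, lon_hi = -180.0, 180.0
    key = []
    for _ in range(depth):
        lat_mid = (lat_lo + lat_hi) / 2
        lon_mid = (lon_lo + lon_hi) / 2
        if lat >= lat_mid:
            lat_lo, quadrant = lat_mid, 0
        else:
            lat_hi, quadrant = lat_mid, 2
        if lon >= lon_mid:
            lon_lo = lon_mid
            quadrant += 1
        else:
            lon_hi = lon_mid
        key.append("abcd"[quadrant])
    return "".join(key)

# Two central-London points share a much longer prefix than London/Manchester:
print(quad_key(51.4994, -0.1446))  # near SW1W
print(quad_key(51.5007, -0.1246))  # near SW1A
print(quad_key(53.4794, -2.2453))  # Manchester
```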
I decided to write a small program that solves TicTacToe in order to try out the effect of some pruning techniques on a trivial game. Solving the full game tree with plain minimax visits 549,946 states. With alpha-beta pruning, the number of states required to evaluate was reduced to 18,297. Then I applied a transposition table that brings the number down to 2,592. Now I want to see how low that number can go.
The next enhancement I want to apply is a strategic reduction. The basic idea is to combine states that have equivalent strategic value. For instance, on the first move, if X plays first, there is nothing strategically different (assuming your opponent plays optimally) about choosing one corner instead of another. In the same situation, the same is true of the middles of the edges of the board, and the center is its own distinct case. By reducing to strategically distinct states only, you end up with only 3 states to evaluate on the first move instead of 9. This technique should be very useful since it prunes states near the top of the game tree. This idea came from the GameShrink method created by a group at CMU, only I am trying to avoid writing the general form, and just doing what is needed to apply the technique to TicTacToe.
In order to achieve this, I modified my hash function (for the transposition table) to enumerate all strategically equivalent positions (using rotation and flipping functions) and to only return the lowest of the values for each board. Unfortunately, now my program thinks X can force a win in 5 moves from an empty board when going first. After a long debugging session, it became apparent to me that the program was always returning the move stored for the lowest-valued equivalent board, which may be in a different orientation than the board actually in play (I store the last move in the transposition table as part of my state). Is there a better way to go about adding this feature, or a simple method for determining the correct move applicable to the current situation with what I have already done?
My gut feeling is that you are using too big of a hammer to attack this problem. Each of the 9 spots can only have one of three labels: X, O, or empty. You then have at most 3^9 = 19,683 unique boards. Since reflections make several boards equivalent, you really only have roughly 3^9 / 4 ~ 5k boards. You can reduce this further by throwing out invalid boards (e.g. those that have a row of X's AND a row of O's simultaneously).
So with a compact representation, you would need less than 10kb of memory to enumerate everything. I would evaluate and store the entire game graph in memory.
We can label every single board with its true minimax value, by computing the minimax values bottom up instead of top down (as in your tree search method). Here's a general outline: We compute the minimax values for all unique boards and label them all first, before the game starts. To make the minimax move, you simply look at the boards succeeding your current state, and pick the move with the best minimax value.
Here's how to perform the initial labeling. Generate all valid unique boards, throwing out reflections. Now we start labeling the boards with the most moves (9), and iterating down to the boards with least moves (0). Label any endgame boards with wins, losses, and draws. For any non-endgame boards where it's X's turn to move: 1) if there exists a successor board that's a win for X, label this board a win; 2) if in successor boards there are no wins but there exists a draw, then label this board a draw; 3) if in successor boards there are no wins and no draws then label this board a loss. The logic is similar when labeling for O's turn.
As far as implementation goes, because of the small size of the state space I would code the "if there exists" logic just as a simple loop over all 5k states. But if you really wanted to tweak this for asymptotic running time, you would construct a directed graph of which board states lead to which other board states, and perform the minimax labeling by traversing in the reverse direction of the edges.
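Here is a compact sketch of that bottom-up labeling in Python (symmetry reduction is omitted for brevity, so it labels all ~5.5k valid boards rather than the reduced set):

```python
from itertools import product

LINES = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

def winner(b):
    for i, j, k in LINES:
        if b[i] != ' ' and b[i] == b[j] == b[k]:
            return b[i]
    return None

def valid(b):
    x, o = b.count('X'), b.count('O')
    if x != o and x != o + 1:
        return False                      # X moves first, players alternate
    wx = any(b[i] == 'X' == b[j] == b[k] for i, j, k in LINES)
    wo = any(b[i] == 'O' == b[j] == b[k] for i, j, k in LINES)
    if wx and wo:
        return False                      # both players can't have won
    if wx and x != o + 1:
        return False                      # the winner must have moved last
    if wo and x != o:
        return False
    return True

boards = [b for b in product('XO ', repeat=9) if valid(b)]
by_count = {}
for b in boards:
    by_count.setdefault(9 - b.count(' '), []).append(b)

value = {}  # board -> +1 X wins, 0 draw, -1 O wins, under optimal play
for n in range(9, -1, -1):                # label boards with most pieces first
    for b in by_count.get(n, []):
        w = winner(b)
        if w:
            value[b] = 1 if w == 'X' else -1
        elif n == 9:
            value[b] = 0                  # full board, no line: a draw
        else:
            mover = 'X' if b.count('X') == b.count('O') else 'O'
            succ = [value[b[:i] + (mover,) + b[i+1:]]
                    for i in range(9) if b[i] == ' ']
            value[b] = max(succ) if mover == 'X' else min(succ)

print(value[(' ',) * 9])                  # 0: perfect play is a draw
```

To make the minimax move from any position, generate the successors of the current board and pick the one whose stored value is best for the player to move.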
Out of curiosity, I wrote a program to build a full transposition table to play the game without any additional logic. Taking the 8 symmetries into account, and assuming the computer (X) starts and plays deterministically, only 49 table entries are needed!
1 entry for empty board
5 entries for 2 pieces
21 entries for 4 pieces
18 entries for 6 pieces
4 entries for 8 pieces
You're on the right track when you're thinking about reflections and rotations. However, you're applying it to the wrong place. Don't add it to your transposition table or your transposition table code -- put it inside the move generation function, to eliminate logically equivalent states from the get-go.
Keep your transposition table and associated code as small and as efficient as possible.
You need to return the (reverse) transposition along with the lowest value position. That way you can apply the reverse transposition to the prospective moves in order to get the next position.
Why do you need to make the transposition table mutable? The best move does not depend on the history.
There is a lot that can be said about this, but I will just give one tip here which will reduce your tree size: Matt Ginsberg developed a method called Partition Search which does equivalency reductions on the board. It worked well in Bridge, and he uses tic-tac-toe as an example.
You may want to try to solve tic-tac-toe using Monte Carlo simulation. If one (or both) of the players is a machine player, it could simply use the following steps (this idea comes from one of the mini-projects in the Coursera course Principles of Computing 1, part of the Fundamentals of Computing specialization taught by Rice University):
Each of the machine players should use a Monte Carlo simulation to choose the next move from a given TicTacToe board position. The general idea is to play a collection of games with random moves starting from the position, and then use the results of these games to compute a good move.
When a particular machine player wins one of these random games, it wants to favor the squares in which it played (in the hope of choosing a winning move) and avoid the squares in which the opponent played. Conversely, when it loses one of these random games, it wants to favor the squares in which the opponent played (to block its opponent) and avoid the squares in which it played.
In short, squares in which the winning player played in these random games should be favored over squares in which the losing player played. Both the players in this case will be the machine players.
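A sketch of that move chooser (the board encoding and scoring weights here are my own; the course assignment parameterizes the win/loss weights, but the idea is the same):

```python
import random

LINES = [(0,1,2), (3,4,5), (6,7,8), (0,3,6), (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

def winner(b):
    for i, j, k in LINES:
        if b[i] is not None and b[i] == b[j] == b[k]:
            return b[i]
    return None

def mc_move(board, player, trials=10):
    # board: list of 9 cells in {'X', 'O', None}; returns the chosen index.
    scores = [0.0] * 9
    for _ in range(trials):
        b, p = board[:], player
        while winner(b) is None and None in b:   # random rollout to the end
            b[random.choice([i for i in range(9) if b[i] is None])] = p
            p = 'O' if p == 'X' else 'X'
        w = winner(b)
        if w is None:
            continue                              # ties score nothing
        sign = 1 if w == player else -1           # favor the winner's squares
        for i in range(9):
            if board[i] is None and b[i] is not None:
                scores[i] += sign if b[i] == player else -sign
    empty = [i for i in range(9) if board[i] is None]
    return max(empty, key=lambda i: scores[i])

print(mc_move([None] * 9, 'X', trials=100))  # often 4 (the center)
```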
The following animation shows a game played between 2 machine players (that ended in a tie), using 10 MC trials at each board state to determine the next move.
It shows how each of the machine players learns to play the game just by using Monte Carlo simulation with 10 trials (a small number) at every board state. The scores shown at the bottom right of each grid square are the ones each player uses at its turn to choose its next move; brighter cells represent better moves for the current player, as per the simulation results.
Here is my blog on this for more details.
I am looking to develop a system in which I need to assign every user a unique PIN code for security. The user will only enter this PIN code as a means of identifying himself, so I don't want users to be able to guess another user's PIN code. Assuming the maximum number of users is 100,000, how long should this PIN code be?
e.g. 1234 4532 3423
Should i generate this code via some sort of algorithm? Or should i randomly generate it?
Basically, I don't want people to be able to guess other people's PIN codes, and it should support enough users.
I am sorry if my question sounds a bit confusing, but I would gladly clarify any doubts.
Thank you very much.
UPDATE
After reading all the posts below, I would like to add some more detail.
What I am trying to achieve is something very similar to a scratch card.
A user is given a card, which he/she must scratch to find the pin code.
Now using this pin code the user must be able to access my system.
I cannot add extra security (e.g. a username and password), as that would deter the user from using the scratch card. I want to make it as difficult as possible to guess the PIN code within these limitations.
Thank you all again for your amazing replies.
Four random digits should be plenty if you append them to a unique known user ID (which could still be a number) [as recommended by starblue].
A pseudo-random number generator should also be fine. You can store these in the DB using reversible encryption (AES) or one-way hashing.
The main concern you have is how many times a person can incorrectly input the PIN before they are locked out. This should be low, say around three. This will stop people guessing other people's numbers.
Any longer than 6 digits and people will be forgetting them, or worse, writing them on a post-it note on their monitor.
Assuming an account locks after 3 incorrect attempts, then having a 4-digit PIN plus a user ID component, UserId (999999) + PIN (1234), gives a 3/10,000 chance of someone guessing a given user's PIN. Is this acceptable? If not, make the PIN length 5 and get 3/100,000.
May I suggest an alternative approach? Take a look at Perfect Paper Passwords, and the derivatives it prompted.
You could use this "as is" to generate one-time PINs, or simply to generate a single PIN per user.
Bear in mind, too, that duplicate PINs are not of themselves an issue: any attack would then simply have to try multiple user-ids.
(Mileage warning: I am definitely not a security expert.)
Here's a second answer: from re-reading, I assume you don't want a user-id as such - you're just validating a set of issued scratch cards. I also assume you don't want to use alphabetic PINs.
You need to choose a PIN length such that the probability of guessing a valid PIN is less than 1/(The number of attempts you can protect against). So, for example, if you have 1 million valid PINs, and you want to protect against 10000 guesses, you'll need a 10-digit PIN.
If you use John Graham-Cumming's version of the Perfect Paper Passwords system, you can:
Configure this for (say) 10-digit decimal pins
Choose a secret IV/key phrase
Generate (say) the first million passwords(/PINs)
I suspect this is a generic procedure that could, for example, be used to generate 25-alphanumeric product ids, too.
Sorry for doing it by successive approximation; I hope that comes a bit nearer to what you're looking for.
If we assume 100,000 users maximum, then they can have unique PINs in the range 0-99,999, i.e. 5 digits.
However, this would make it easier to guess the PINs with the maximum number of users.
If you can restrict the number of attempts on the PIN then you can have a shorter PIN.
e.g. a maximum of 10 failed attempts per IP per day.
It also depends on the value of what you are protecting and how catastrophic it would be if the odd one did get out.
I'd go for 9 digits if you want to keep it short or 12 digits if you want a bit more security from automated guessing.
To generate the PINs, I would take a high-resolution version of the time along with some salt and maybe a pseudo-random number, generate a hash, and use the first 9 or 12 digits. Make sure there is a reasonable and random delay between new PIN generations, so don't generate them in a loop, and if possible make generation user-initiated.
eg. Left(Sha1(DateTime + Salt + PseudoRandom),9)
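For illustration, here is one way to realize that pseudocode in Python. A raw SHA-1 hex digest contains letters, so this reduces the digest modulo 10^9 to get a purely numeric, fixed-length PIN (the function and salt names are illustrative):

```python
import hashlib, secrets, time

def make_pin(salt: str, digits: int = 9) -> str:
    # High-resolution time + salt + pseudo-random material, hashed,
    # then reduced modulo 10**digits so the result is purely numeric.
    material = f"{time.time_ns()}{salt}{secrets.randbits(64)}"
    digest = hashlib.sha1(material.encode()).hexdigest()
    return str(int(digest, 16) % 10**digits).zfill(digits)

print(make_pin("my-server-salt"))  # e.g. "482019375"
```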
Lots of great answers so far: simple, effective, and elegant!
I'm guessing the application is somewhat lottery-like, in that each user gets a scratch card and uses it to ask your application if "he's already won!" So, from that perspective, a few new issues come to mind:
War-dialing, or its Internet equivalent: Can a rogue user hit your app repeatedly, say guessing every 10-digit number in succession? If that's a possibility, consider limiting the number of attempts from a particular location. An effective way might be simply to refuse to answer more than, say, one attempt every 5 seconds from the same IP address. This makes machine-driven attacks inefficient and avoids the lockout problem.
Lockout problem: If you lock an account permanently after any number of failed attempts, you're prone to denial of service attacks. The attacker above could effectively lock out every user unless you reactivate the accounts after a period of time. But this is a problem only if your PINs consist of an obvious concatenation of User ID + Key, because an attacker could try every key for a given User ID. That technique also reduces your key space drastically because only a few of the PIN digits are truly random. On the other hand, if the PIN is simply a sequence of random digits, lockout need only be applied to the source IP address. (If an attempt fails, no valid account is affected, so what would you "lock"?)
Data storage: if you really are building some sort of lottery-like system you only need to store the winning PINs! When a user enters a PIN, you can search a relatively small list of PINs/prizes (or your equivalent). You can treat "losing" and invalid PINs identically with a "Sorry, better luck next time" message or a "default" prize if the economics are right.
Good luck!
The question should be, "how many guesses are necessary on average to find a valid PIN code, compared with how many guesses attackers are making?"
If you generate 100 000 5-digit codes, then every 5-digit code is valid, so obviously it takes 1 guess. This is unlikely to be good enough.
If you generate 100 000 n-digit codes, then it takes on average 10^(n-5) guesses. To work out whether this is good enough, you need to consider how your system responds to a wrong guess.
If an attacker (or, all attackers combined) can make 1000 guesses per second, then clearly n has to be pretty large to stop a determined attacker. If you permanently lock out their IP address after 3 incorrect guesses, then since a given attacker is unlikely to have access to more than, say, 1000 IP addresses, n=9 would be sufficient to thwart almost all attackers. Obviously if you will face distributed attacks, or attacks from a botnet, then 1000 IP addresses per attacker is no longer a safe assumption.
If in future you need to issue further codes (more than 100 000), then obviously you make it easier to guess a valid code. So it's probably worth spending some time now making sure of your future scaling needs before fixing on a size.
Given your scratch-card use case, if users are going to use the system for a long time, I would recommend allowing them (or forcing them) to "upgrade" their PIN code to a username and password of their choice after the first use of the system. Then you gain the usual advantages of username/password, without discarding the ease of first use of just typing the number off the card.
As for how to generate the number - presumably each one you generate you'll store, in which case I'd say generate them randomly and discard duplicates. If you generate them using any kind of algorithm, and someone figures out the algorithm, then they can figure out valid PIN codes. If you select an algorithm such that it's not possible for someone to figure out the algorithm, then that almost is a pseudo-random number generator (the other property of PRNGs being that they're evenly distributed, which helps here too since it makes it harder to guess codes), in which case you might as well just generate them randomly.
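A minimal sketch of that approach: draw fixed-length digit strings from a cryptographic RNG and let a set discard duplicates (generating strings rather than integers also preserves leading zeros, which the next answer touches on):

```python
import secrets

def generate_pins(count: int, digits: int = 10) -> set:
    pins = set()
    while len(pins) < count:
        # Digit strings, not integers: "0038..." style PINs stay possible
        # and every PIN has the same fixed length.
        pins.add("".join(secrets.choice("0123456789") for _ in range(digits)))
    return pins  # the set silently discarded any duplicates

pins = generate_pins(100_000)
```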
If you generate PINs as integers with a random number generator, you will never get a PIN like "00038384882": integers never begin with "0", so every PIN would effectively have to start with 1-9. Real PIN numbers often include, and begin with, zeros, so generating integers eliminates a large slice of the space (every code with a leading zero); a permutation calculation would show how many numbers are lost. Instead, draw the digits 0-9 at random one at a time and build the PIN as a string.
If you want to generate scratch-card type pin codes, then you must use large numbers, about 13 digits long; and also, they must be similar to credit card numbers, having a checksum or verification digit embedded in the number itself. You must have an algorithm to generate a pin based on some initial data, which can be a sequence of numbers. The resulting pin must be unique for each number in the sequence, so that if you generate 100,000 pin codes they must all be different.
This way you will be able to validate a number not only by checking it against a database but you can verify it first.
I once wrote something for that purpose, I can't give you the code but the general idea is this:
Prepare a space of 12 digits
Format the number as five digits (00000 to 99999) and spread it along the space in a certain way. For example, the number 12345 can be spread as __3_5_2_4__1. You can vary the way you spread the number depending on whether it's an even or odd number, or a multiple of 3, etc.
Based on the value of certain digits, generate more digits (for example, if the third digit is even, create an odd number and put it in the first open space; otherwise create an even number and put it in the second open space), e.g. _83_5_2_4__1
Once you have generated 6 digits, you will have only one open space. You should always leave the same open space (for example the next-to-last space). You will place the verification digit in that place.
To generate the verification digit you must perform some arithmetic operations on the number you have generated, for example adding all the digits in the odd positions and multiplying them by some other number, then subtracting all the digits in the even positions, and finally adding all the digits together (you must vary the algorithm a little based on the value of certain digits). In the end you have a verification digit which you include in the generated pin code.
So now you can validate your generated pin codes. For a given pin code, you generate the verification digit and check it against the one included in the pin. If it's OK then you can extract the original number by performing the reverse operations.
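The answer keeps its checksum arithmetic deliberately vague, so here is the standard Luhn check digit used by real card numbers as a concrete stand-in (appended at the end rather than placed next-to-last):

```python
def luhn_check_digit(partial: str) -> str:
    # Walk right-to-left; double every second digit, starting with the one
    # immediately left of the future check-digit position.
    total = 0
    for i, ch in enumerate(reversed(partial)):
        d = int(ch)
        if i % 2 == 0:
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return str((10 - total % 10) % 10)

def luhn_valid(number: str) -> bool:
    return luhn_check_digit(number[:-1]) == number[-1]

code = "83352040001"                    # the 11 generated digits
pin = code + luhn_check_digit(code)     # "833520400015"
print(luhn_valid(pin))                  # True
```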
It doesn't sound so good because it looks like security through obscurity, but it's about the only way to do this. It's not impossible for someone to guess a PIN code, but being a 12-digit code with a verification digit, it will be very hard: there are 1,000,000,000,000 combinations and only 100,000 valid PIN codes, so for every valid PIN code there are 10,000,000 invalid ones.
I should mention that this is useful for disposable pin codes; a person uses one of these codes only once, for example to charge a prepaid phone. It's not a good idea to use these pins as authentication tokens, especially if it's the only way to authenticate someone (you should never EVER authenticate someone only through a single piece of data; the very minimum is username+password)
It seems you want to use the PIN code as the sole means of identification for users. A workable solution would be to use the first five digits to identify the user, and append four digits as a PIN code.

If you don't want to store PINs, they can be computed by applying a cryptographically secure hash (SHA-1 or better) to the user number plus a system-wide secret code.
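A sketch of that computation (SHA-256 standing in for "SHA1 or better"; the secret and the 5+4 digit formatting follow the suggestion above, but the names are illustrative):

```python
import hashlib, hmac

SECRET = b"system-wide secret"  # assumption: stored outside the database

def pin_for_user(user_number: int, digits: int = 4) -> str:
    # HMAC the user number with the secret, then reduce modulo 10**digits
    # for a fixed-length numeric PIN. No PIN storage is needed.
    mac = hmac.new(SECRET, str(user_number).encode(), hashlib.sha256)
    return str(int(mac.hexdigest(), 16) % 10**digits).zfill(digits)

user_number = 12345
print(f"{user_number:05d}{pin_for_user(user_number)}")  # 5 ID digits + 4 PIN digits
```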
Should I generate this code via some sort of algorithm?

No. It will be predictable.

Or should I randomly generate it?

Yes. Use a cryptographic random generator, or let the user pick their own PIN.
In theory 4 digits will be plenty as ATM card issuers manage to support a very large community with just that (and obviously, they can't be and do not need to be unique). However in that case you should limit the number of attempts at entering the PIN and lock them out after that many attempts as the banks do. And you should also get the user to supply a user ID (in the ATM case, that's effectively on the card).
If you don't want to limit them in that way, it may be best to ditch the PIN idea and use a standard password (which is essentially what your PIN is, just with a very short length and limited character set). If you absolutely must restrict it to numerics (because you have a PIN pad or something) then consider making 4 a (configurable) minimum length rather than the fixed length.
You shouldn't store the PIN in the clear anywhere (e.g. salt and hash it like a password); however, given the short length and limited character set, it is always going to be vulnerable to a brute-force search, given an easy way to verify it.
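For the salt-and-hash storage, a minimal sketch using PBKDF2 (the iteration count is illustrative; as noted, a 4-digit space is trivially brute-forced regardless):

```python
import hashlib, hmac, os

ITERATIONS = 100_000  # illustrative work factor

def hash_pin(pin: str):
    salt = os.urandom(16)                 # per-PIN random salt
    digest = hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, ITERATIONS)
    return salt, digest                   # store both, never the PIN itself

def verify_pin(pin: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", pin.encode(), salt, ITERATIONS)
    return hmac.compare_digest(candidate, digest)
```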
There are various other schemes that can be used as well, if you can tell us more about your requirements (is this a web app? embedded system? etc).
There's a difference between guessing the PIN of a target user and guessing that of any valid user. From your use case, it seems that the PIN is used to gain access to a certain resource, and it is that resource attackers may be after, not particular users' identities. If that's indeed the case, you will need to make valid PINs sufficiently sparse among all possible numbers with the same number of digits.
As mentioned in some answers, you need to make your PIN sufficiently random, regardless if you want to generate it from an algorithm. The randomness is usually measured by the entropy of the PIN.
Now, let's say your PIN has N bits of entropy and there are 2^M users in your system (M < N); then the probability that a random guess will yield a valid PIN is 2^{M-N}. (Sorry for the LaTeX notation; I hope it's intuitive enough.) From there you can determine whether that probability is low enough given N and M, or compute the required N from the desired probability and M.
There are various ways to generate the PINs so that you won't have to remember every PIN you generated. But you will need a very long PIN to make it secure. This is probably not what you want.
I've done this before with PHP and a MySQL database. I had a permutations function that would first ensure that the number of required codes, $n, at length $l over an alphabet of $c characters, could actually be created (i.e. that $c^$l >= $n) before starting the generation process.
Then I'd store each new code to the database and let UNIQUE KEY errors tell me when there was a collision (duplicate), and keep going until I had successfully created $n codes. You could of course do this in memory, but I wanted to keep the codes for use in an MS Word mail merge, so I then exported them as a CSV file.