I am trying to get results from realtor.ca for all houses that were built between 2000 and 2013. The advanced search does not have this feature, but I am trying to see if it is possible to add the search criteria in the URL.
I looked at the source code and the id for the value seems to be builtin_value. So, I added &builtin_value=2000,2011,2012,2013 to the URL string but this did not work.
After messing around with the site's URL structure for a little bit, I agree that you can't add the builtin_value parameter to the URL and there doesn't seem to be an equivalent.
That being said, I think that in this case the Keywords text box is your friend. I tried some sample searches entering 2001 as the value for the Keywords text box and only got back houses that were built in 2001; the same held in experiments with other years. You can exploit the fact that nobody enters a number like 2001 unless it's the year the house was built. The exception is the year 2000: entering the keyword 2000 comes back with a plethora of results, including every house with approximately 2000 square feet. So if you can wriggle a relaxation of requirements out of your project manager or boss so that you only need years built excluding 2000, then I think you could write a relatively simple program that executes a request for each year (a for loop iterating over the years 2001 to 2013), accepts the GET responses at face value, and pages through the data by incrementing the CurrentPage URL parameter until you don't get any more results. There's a sketch of that loop after the example URLs below.
If you really need the year 2000, then you could write a scraper that is more sophisticated than what I am proposing above: it would have to GET each search result and vet that the house was indeed built in the year 2000.
For example, consider the following URLs:
A search for all houses in Vancouver built in 1971: https://www.realtor.ca/Residential/Map.aspx#CultureId=1&ApplicationId=1&RecordsPerPage=9&MaximumResults=9&PropertySearchTypeId=1&TransactionTypeId=2&StoreyRange=0-0&BedRange=0-0&BathRange=0-0&Keywords=1971&LongitudeMin=-123.2340278625491&LongitudeMax=-122.85396957397488&LatitudeMin=49.21465057441378&LatitudeMax=49.34746245927539&SortOrder=A&SortBy=1&viewState=m&Longitude=-123.043998718262&Latitude=49.2811012268066&ZoomLevel=12&CurrentPage=1
A search for all houses in Canada built in the year 2000 (lots of non-relevant results) on page 3: https://www.realtor.ca/Residential/map.aspx#CultureId=1&ApplicationId=1&RecordsPerPage=9&MaximumResults=9&PropertySearchTypeId=1&TransactionTypeId=2&StoreyRange=0-0&BedRange=0-0&BathRange=0-0&Keywords=2000&LongitudeMin=-135.1318359375&LongitudeMax=-37.8369140625&LatitudeMin=44.707725934249424&LatitudeMax=70.92742296535133&SortOrder=A&SortBy=1&viewState=m&CurrentPage=3
A search for all houses in Canada built in the year 2001: https://www.realtor.ca/Residential/map.aspx#CultureId=1&ApplicationId=1&RecordsPerPage=9&MaximumResults=9&PropertySearchTypeId=1&TransactionTypeId=2&StoreyRange=0-0&BedRange=0-0&BathRange=0-0&Keywords=2001&LongitudeMin=176.220703125&LongitudeMax=10.810546875&LatitudeMin=4.483729141145389&LatitudeMax=72.9471586872288&SortOrder=A&SortBy=1&viewState=m&CurrentPage=2
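Here is a minimal sketch of the loop I mean, in C#. One caveat (and an assumption on my part): the parameters in these URLs sit in the hash fragment, which browsers interpret client-side, so FetchResultsFor() below is a hypothetical stub; a real scraper would have to call whatever underlying endpoint the map page itself requests.

    using System;

    class RealtorByYear
    {
        static void Main()
        {
            for (int year = 2001; year <= 2013; year++)   // 2000 excluded, as discussed
            {
                for (int page = 1; ; page++)              // page until no more results
                {
                    string url = "https://www.realtor.ca/Residential/Map.aspx"
                        + "#CultureId=1&ApplicationId=1&RecordsPerPage=9"
                        + "&PropertySearchTypeId=1&TransactionTypeId=2"
                        + "&Keywords=" + year
                        + "&CurrentPage=" + page;
                    int count = FetchResultsFor(url);     // hypothetical: GET + parse
                    if (count == 0) break;
                }
            }
        }

        // Hypothetical stub: issue the request and return how many listings came back.
        // Parsing of the actual response is left out here.
        static int FetchResultsFor(string url)
        {
            Console.WriteLine("GET " + url);
            return 0;
        }
    }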
I hope that helps.
First time posting.
I wanted to ask if anyone knows how I can search on YouTube for, let's say, music videos that have been viewed a number of times within a set range. Like the title says, for example, between 9 and 11 million times.
One reason I want to do this is that I want to find good music I haven't heard before. The logic I'm working on is that the Got Talent type videos that get viewed millions of times are generally viewed that many times for one of two reasons: 1) they're amazing, or 2) they're embarrassingly horrible.
And though I don't think a song being popular will necessarily mean I'll like it, I'm hoping this method will be successful to some degree.
Another reason is to look for trailers for independent films, with similar logic as above, though with these movies I think I only hear about them six months to a year after they've been released because they're flying under the radar.
If I were able to search for movie trailers with 'x' number of views, though, for example between 500,000 and a million, maybe I'd find movies I'll like sooner than waiting for a friend to mention them.
Any help would be greatly appreciated, as I've wanted to be able to perform these kinds of searches for a while now.
thanks
You will need to use the YouTube Data API v3.
I haven't written this exact request, but it looks like you can list videos and then filter with chart=mostPopular:
https://developers.google.com/youtube/v3/docs/videos/list
Perhaps a bit of background reading on the API would help too...
https://developers.google.com/youtube/v3/
First off, you would need the YouTube Data API. "v3" means nothing by itself; it's simply the current version, like "Windows 10."
The API lets you get a video's view count, but it doesn't let you filter by a range like 9 million to 11 million.
YouTube's own search function is pretty sophisticated. For instance,
https://www.youtube.com/results?search_query=movie+trailer&search_sort=video_view_count&filters=month. This gives all results for "movie trailer" within the last month, sorted by view count. You can customize the URL, e.g. "week" instead of "month" would return only trailers from the last week; or "year", etc. Essentially this is a "Videos: List: mostPopular" query with a subject filter.
I have a few YouTube API scripts, and I hardly think it's worth the hassle to do it that way when YouTube's advanced search gets you 99% of the way there. If you did, you would need to do a Search:list query for a given subject (e.g. "movie trailer"), limited to a given time frame (e.g. the last month). Then, for each video ID, make a Videos:list query to get its view count. Then print them all, sorted by views.
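If you do want the API route, here's a minimal sketch of that two-step approach in C#, calling the plain REST endpoints of the YouTube Data API v3. YOUR_API_KEY is a placeholder, and paging (pageToken) and quota handling are deliberately ignored; the view-count range is filtered locally since the API won't do it for you.

    using System;
    using System.Linq;
    using System.Net.Http;
    using System.Text.Json;
    using System.Threading.Tasks;

    class ViewRangeSearch
    {
        const string Key = "YOUR_API_KEY";   // placeholder

        static async Task Main()
        {
            var http = new HttpClient();

            // Step 1: Search.list - video IDs for a subject, limited to the last month.
            string after = DateTime.UtcNow.AddMonths(-1).ToString("yyyy-MM-ddTHH:mm:ssZ");
            string searchUrl = "https://www.googleapis.com/youtube/v3/search"
                + "?part=id&type=video&maxResults=50&q=movie+trailer"
                + "&publishedAfter=" + after + "&key=" + Key;
            using var searchDoc = JsonDocument.Parse(await http.GetStringAsync(searchUrl));
            var ids = searchDoc.RootElement.GetProperty("items")
                .EnumerateArray()
                .Select(i => i.GetProperty("id").GetProperty("videoId").GetString());

            // Step 2: Videos.list - view counts for those IDs, then filter locally.
            string videosUrl = "https://www.googleapis.com/youtube/v3/videos"
                + "?part=statistics,snippet&id=" + string.Join(",", ids) + "&key=" + Key;
            using var videosDoc = JsonDocument.Parse(await http.GetStringAsync(videosUrl));
            foreach (var item in videosDoc.RootElement.GetProperty("items").EnumerateArray())
            {
                // viewCount comes back as a string in the JSON.
                long views = long.Parse(item.GetProperty("statistics")
                                            .GetProperty("viewCount").GetString());
                if (views >= 500_000 && views <= 1_000_000)
                    Console.WriteLine(views + "  " +
                        item.GetProperty("snippet").GetProperty("title").GetString());
            }
        }
    }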
How can I get a list of linked work item IDs for a set of work items?
Excel-hosted queries preferred. API Sample is acceptable.
Direct DB table query is acceptable (read-only and unsupported of course!)
Many thanks in advance! -Zephan
MORE INFORMATION
UPDATE: No answers for my original Q so broadening scope of acceptable answers as follows:
Answer for TFS2015 (migrating very shortly) or TFS2013 (potentially useful for TFS2015) is preferred over TFS2010
Coding acceptable if there are any APIs or PowerShell cmdlets (MS or community).
Connecting directly (read-only!) to TFS DB tables is acceptable (source tables and the related relationship-link table names). Yes, directly referencing TFS DB tables is VERY unsupported, read-only, and "AT YOUR OWN RISK." It still beats having to manually copy/paste data or reconstruct the list of links in Excel.
ORIGINAL QUESTION & DETAILS
My team uses TFS2010 (soon 2013, or hopefully 2015) and VS2010-2015. I need to support traceability reports and analyze/quantify our coverage of ~300 Test Case work items linked to ~400 Requirement work items. Direct Link and Tree queries are close, but they don't give me related links on the same row as the parent work item. Many thanks in advance for your suggestions and any related code fragments.
Example:
3 test cases (Test1, Test2, Test3)
4 Requirements (Req1, Req2, Req3, Req4)
For simplicity, let's just use TFS work item IDs to represent each TestN and ReqN. In actuality, I have a keyword to identify my validation requirements (separate from the thousands of other requirements in this Team Project). The only Test Case work items I care about for this problem are those linked to one or more Validation Requirements for traceability.
Scenarios:
1:1 (simple) Test1 is linked to Req1
1:2 (1:n) Test2 is linked to Req2 and Req3
2:1 (n:1) Test3 (and Test2) are both linked to Req3
0:1 (Requirement missing Test coverage) Req4 has no test case links
I have a good coverage-gap query already: create a Direct Link query for all Requirements, then set the "linking filters" to "Only return items that do not have the specified links."
Desired output (all tests with list of related work items):
|Test1 | Req1 |
|Test2 | Req2, Req3 |
|Test3 | Req3 |
For row #2 I am OK with other separators, or even the entire list using the same separator (CSV or tab-delimited).
Skip right to the answer now if you have a tidy one. If not, I've added considerable RELATED RESEARCH info below to help kick-start an idea that fits the need, especially since this hasn't been discoverably solved in the last 5 years :-).
RELATED RESEARCH (loooong but may be useful)
1. Visual Studio Queries
Flat queries should support a list of linked items out-of-the-box... but they do not. The RelatedLinkCount field is handy for knowing whether there are any links to chase, but that's it for flat queries.
Direct Link queries give a list of all direct links, but the related IDs are on rows below the parent work item. I am seriously considering creating a formula to look at the next X rows to build a list of IDs, but this would be fragile, especially when more than 3 requirements are linked to the same test. It still might solve 80% of my tracing needs.
Tree queries also show links, but on different rows. Additionally, they tend to follow just one link type. Ideally I will need a list of User Requirements linked to Functional Requirements linked to Test Case(s).
2. Tools / Plug-ins
SmartExcel4TFS (eDEVTech, http://www.modernrequirements.com/smartexcel4tfs/) supports 3 reports, but none get me the core data I need in an easily used format. At least it is FREE if you have an MSDN Premium subscription.
The Requirements to Tests Trace Matrix is super-interesting. Alas, right now I need to go the other way (Requirements linked to a given Test Case). It also merges cells and has sub-sections that are hard to manipulate, I think. (I may revisit this option, though.)
The Intersection Traceability Matrix report is WAY too wide for a full 300 x 400 grid :-O.
The Work Item Decomposition Matrix also didn't give me the desired contents (though frankly I've forgotten this report's layout from when I checked ~1 month ago).
3. TFS API calls
I have actually avoided this route in favor of a native Excel solution... but if I can get an example of Excel VBA code (or other code with a link to calling it from Excel) I may go this route. At this point I don't have time to dig into rolling my own... but this would be cool, assuming performance is acceptable.
Relevant API/code fragments:
Retrieving TFS Results from a Tree Query (Blogs.msdn.com 2012.02.22) - Looks like this would get me the data I need, but it is not in Excel so I'd need a bridge example of some sort calling this within Excel.
Retrieving work items and their linked work items in a single query using the TFS APIs (stackoverflow.com 2012.01.12) - Also looks very promising, but not connected to Excel. Gives hints for 2 level and 3 level nested links and performance consideration (don't make second call for each item returned!)
Retrieving work items using the Team Foundation Server API (pwee167.github.io 2012.09.18) - Excellently written introductory walkthrough blog posting to learn how to build an (ASP.Net MVC3) app that calls TFS APIs to run Flat or Tree queries. Start here if writing C# (which I could do but don't have time/justification unless easy example to integrate with Excel).
How can I query work items and their linked changesets in TFS? (stackoverflow.com 2011.05.10) - I don't need changesets but this has VB code to instantiate new TfsTeamProjectCollection which might work directly in Excel VBA (assuming proper reference is found and added)
    // From the linked answer: connect to the team project collection (C#).
    var projectCollection = new TfsTeamProjectCollection(
        new Uri("http://localhost:8080/tfs"),
        new UICredentialsProvider());
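If someone did go the API route, I imagine it would look roughly like the sketch below (untested scaffolding, not a drop-in solution), building on the "single link query" reference (#2 above) against the TFS 2013 client object model. It writes tab-delimited lines (my desired one-row-per-test layout), which Excel can open directly; the link type and work item type names would need adjusting to the process template.

    using System;
    using System.Linq;
    using Microsoft.TeamFoundation.Client;
    using Microsoft.TeamFoundation.WorkItemTracking.Client;

    class TestToRequirementTrace
    {
        static void Main()
        {
            var projectCollection = new TfsTeamProjectCollection(
                new Uri("http://localhost:8080/tfs"),
                new UICredentialsProvider());
            var store = projectCollection.GetService<WorkItemStore>();

            // WIQL link query: Test Case sources linked to Requirement targets.
            string wiql =
                "SELECT [System.Id] FROM WorkItemLinks " +
                "WHERE [Source].[System.WorkItemType] = 'Test Case' " +
                "AND [Target].[System.WorkItemType] = 'Requirement' " +
                "MODE (MustContain)";
            WorkItemLinkInfo[] links = new Query(store, wiql).RunLinkQuery();

            // One row per test case, targets joined with commas (the desired output).
            // Top-level rows come back with SourceId == 0, so skip those.
            foreach (var g in links.Where(l => l.SourceId != 0).GroupBy(l => l.SourceId))
                Console.WriteLine(g.Key + "\t" + string.Join(", ", g.Select(l => l.TargetId)));
        }
    }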
OK, that's everything I have gathered on this problem. Please help contribute with the missing magic tool/snippet or follow the info above to build that last bit I have not had time to prototype & debug. Many thanks in advance!! -Zephan
I have a requirement to import data from Excel (CSV) into Dynamics CRM regularly.
Instead of using simple duplicate detection rules, I need to implement a point system to determine whether a record is considered a duplicate or not.
Let me give an example. For example these are the particular rules for Import:
First Name, exact match, 10 pts
Last Name, exact match, 15 pts
Email, exact match, 20 pts
Mobile Phone, exact match, 5 pts
And then the Threshold value => 19 pts
Now, if a record has First Name and Last Name matching an old record in the entity, the score is 25 pts, which is higher than the threshold (19 pts); therefore the record is considered a duplicate.
If, for example, a record only has the same First Name and Mobile Phone, the score is 15 pts, which is lower than the threshold, and thus it is considered a non-duplicate.
What is the best approach to achieve this requirement? Is it possible with the default Import Data functionality in MS CRM? Is there any 3rd-party add-on that answers my requirement above?
Thank you for all the help.
Updated
Hi Konrad, thank you for your suggestions, let me elaborate here:
Excel. You could filter out the data using Excel and then, once you've obtained a unique list, import it.
Nice one, but I don't think it is really workable in my case: the data will be coming in regularly from the client in moderate numbers (hundreds to thousands), and typically the client won't check the data for duplicates.
Workflow. Run a process removing any instance calculated as a duplicate.
Workflow is a good idea; however, since it is processed asynchronously, my concern is that in some cases the user may already have made updates/changes to the inserted data before the workflow finishes, thereby creating data inconsistency, or at the very least a confusing user experience.
Plugin. On every creation of a new record, you'd check if it's to be regarded as duplicate-ish and cancel it's creation (or mark for removal).
I like this approach. So I just import as usual (for example, into the contact entity), but I already have a plugin in place that gets triggered every time a record is created; the plugin checks whether the record is duplicate-ish or not and takes the necessary action.
I haven't been fiddling a lot with duplicate detection, but looking at your criteria you might be able to make rules that match them. Pretty much three rules cover your cases: a full-name match (25 pts), a last name plus mobile phone match (20 pts), and an email match (20 pts).
If you want the points system itself, I haven't seen any out-of-the-box components that solve this. However, CRM Extensions has a product called Import Manager that might have that kind of duplicate detection; they claim to support customized duplicate checking. It might be worth asking them about this.
Otherwise it's custom coding that will solve this problem.
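To give an idea of what that custom coding might look like: below is a rough sketch of a plugin registered on the pre-operation Create of contact, using the scoring and threshold from the question. The contact field names and the retrieve-everything query are assumptions for illustration; for larger data sets you would pre-filter candidates server-side instead of scoring every row.

    using System;
    using Microsoft.Xrm.Sdk;
    using Microsoft.Xrm.Sdk.Query;

    public class DuplicateScorePlugin : IPlugin
    {
        public void Execute(IServiceProvider serviceProvider)
        {
            var context = (IPluginExecutionContext)
                serviceProvider.GetService(typeof(IPluginExecutionContext));
            var factory = (IOrganizationServiceFactory)
                serviceProvider.GetService(typeof(IOrganizationServiceFactory));
            IOrganizationService service = factory.CreateOrganizationService(context.UserId);

            if (!(context.InputParameters.Contains("Target") &&
                  context.InputParameters["Target"] is Entity target))
                return;

            // Pull the four fields for existing contacts and score each one.
            var query = new QueryExpression("contact")
            {
                ColumnSet = new ColumnSet("firstname", "lastname", "emailaddress1", "mobilephone")
            };
            foreach (Entity existing in service.RetrieveMultiple(query).Entities)
            {
                int points = 0;
                if (Same(target, existing, "firstname"))     points += 10;
                if (Same(target, existing, "lastname"))      points += 15;
                if (Same(target, existing, "emailaddress1")) points += 20;
                if (Same(target, existing, "mobilephone"))   points += 5;

                if (points >= 19)   // threshold from the question
                    throw new InvalidPluginExecutionException(
                        "Duplicate detected (" + points + " pts) against contact " + existing.Id);
            }
        }

        // Exact match on a string attribute, ignoring case; both records must carry it.
        static bool Same(Entity a, Entity b, string attr) =>
            a.Contains(attr) && b.Contains(attr) &&
            string.Equals(a.GetAttributeValue<string>(attr),
                          b.GetAttributeValue<string>(attr),
                          StringComparison.OrdinalIgnoreCase);
    }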
I can think of the following approaches to the task (depending on the number of records, how repetitive the import is, automation requirements, etc.); any of them may be a good fit. Would you care to elaborate on the current conditions?
Excel. You could filter out the data using Excel and then, once you've obtained a unique list, import it.
Plugin. On every creation of a new record, you'd check if it's to be regarded as duplicate-ish and cancel it's creation (or mark for removal).
Workflow. Run a process removing any instance calculated as a duplicate.
You also need to consider the implications of eliminating data this way. There's a mathematical issue: the result depends on the order in which records are processed. Suppose the uniqueness radius (i.e. the threshold in this 1D case) is 3, and consider the following set of numbers, listed twice in different orders (kept values shown on the right):
1 3 5 7 -> 1 _ 5 _
3 1 5 7 -> _ 3 _ 7
Are you sure that's the intended result? Under some circumstances you can even end up with result sets of different sizes, depending only on the order. I'm a bit curious about why and how this setup came up.
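To make the order dependence concrete, here is a tiny sketch of that greedy elimination (drop anything within radius 3 of an already-kept record):

    using System;
    using System.Collections.Generic;
    using System.Linq;

    class OrderDependence
    {
        // Keep a value only if it is at least `radius` away from everything kept so far.
        static List<int> Dedup(IEnumerable<int> input, int radius)
        {
            var kept = new List<int>();
            foreach (int x in input)
                if (kept.All(k => Math.Abs(x - k) >= radius))
                    kept.Add(x);
            return kept;
        }

        static void Main()
        {
            Console.WriteLine(string.Join(" ", Dedup(new[] { 1, 3, 5, 7 }, 3))); // 1 5
            Console.WriteLine(string.Join(" ", Dedup(new[] { 3, 1, 5, 7 }, 3))); // 3 7
        }
    }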
Personally, I'd go with the plugin, if the above is OK by you. If you need to make sure that certain unique-ish elements never get omitted, you'd probably be best off applying a test algorithm to a backup of the data first. However, that may defeat its purpose.
In fact, it sounds so interesting that I might create the solution for you (just to show it can be done) and blog about it. What's the deadline?
A rather complicated problem of data exchange between a database and a printed booklet:
The organisation I work for has a MySQL database of all social-profit organisations in Brussels, Belgium. At the same time, there is a booklet created in InDesign, which was developed at a different time and by different people than the database, and consequently has a different structure.
Every year a new book is published, and because of this difference in structure the data needs to be compared manually. The book changes the way it displays entries according to the needs of a chapter. It would help to have a cross-platform search-and-change tool, ideally working not with one keyword but with all the relevant data for an entry in the book.
An example of an entry in the booklet:
BESCHUTTE WERKPLAATS BOUCHOUT
Neromstraat 26 • 1861 Wolvertem • Tel 02-272 42 80 • Fax 02-269 85 03 • Gsm 0484-101 484 • E-mail info@bwbouchout.be • Website www.bwbouchout.be • Weekdays: 8:00 - 16:30, Friday until 14:45.
People with a physical and/or mental disability; also psychiatric patients and people with multiple disabilities.
Capacity: 180 employment places.
One problem: the mobile phone number is written in a different format than in the database. The database says 0484 10 14 84; the book says 0484-101 484.
The opening times are formulated completely differently, though some parts are similar.
Are there tools which would make life easier? Tools that can find similar data, something like a "similar data finder" for Excel, but cross-platform and with more possibilities? I believe most data-exchange programs work very much "one way, the same for every entry". Is there a program which is more flexible?
For clarity: I need to compare the data, not to generate the data out of the database.
It could mean saving a lot of time, money and eyestrain. Thanks,
Erik Willekens
Erik,
The specific problem of comparing two telephone numbers which are formatted differently is relatively easy to overcome by stripping all non-numeric characters.
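For example, a minimal illustration in C# (the same idea works in any language):

    using System.Text.RegularExpressions;

    static class Phone
    {
        // Digits-only comparison: "0484 10 14 84" and "0484-101 484" both
        // reduce to "0484101484", so they compare equal.
        public static bool Same(string a, string b) =>
            Regex.Replace(a, @"\D", "") == Regex.Replace(b, @"\D", "");
    }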
However, I don't think that's really what you are trying to achieve. I believe you're attempting to check whether the booklet data differs from the database data while disregarding certain formatting.
Realistically this isn't possible without some very well-defined rules about the formatting. For instance, formatting of the organisation name is probably very significant, whereas telephone number formatting is not.
Instead you should be tracking changes within the database and then manually check the booklet.
One possible solution is to store the booklet details for each record in your database alongside the correctly formatted ones. This allows you to perform a manual conversion once for the entire booklet and then each subsequent year lets you just compare the new booklet values to the old booklet values stored in the DB.
An example might make this clearer. Imagine you had this very simple record:
Org Name    Booklet Org Name    GSM              Booklet GSM
--------    ----------------    -------------    ------------
BESCHUTTE   BESCHUTTE WERKP     0484 10 14 84    0484-101 484
When you get next year's booklet, as long as the GSM number in the new booklet still says 0484-101 484, you won't have to worry about converting it to your database format and then checking whether it has changed.
This would not be a good approach, though, if a large proportion of the details in the booklet changed each year.
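With the booklet values stored that way, the yearly check reduces to a simple lookup and comparison. A minimal sketch with hypothetical data:

    using System;
    using System.Collections.Generic;

    class BookletDiff
    {
        static void Main()
        {
            // Hypothetical data: the booklet-formatted GSM stored last year,
            // alongside the canonical database fields.
            var stored = new Dictionary<string, string>
            {
                ["BESCHUTTE WERKPLAATS BOUCHOUT"] = "0484-101 484"
            };
            // Values read from this year's booklet.
            var fresh = new Dictionary<string, string>
            {
                ["BESCHUTTE WERKPLAATS BOUCHOUT"] = "0484-101 484"   // unchanged this year
            };

            foreach (var kv in fresh)
                if (!stored.TryGetValue(kv.Key, out var old) || old != kv.Value)
                    Console.WriteLine("Review: " + kv.Key);   // new or changed entry
        }
    }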
About 10 years ago, I was using a computer language called INFO on an ancient Prime. It was a funky, but oddly useful language, and I'd like to find out more about it. For instance, can I get a copy that would run on a PC? However, if I google for "INFO" and "Computer Language," I'm not going to find it. I've tried. I think the company that created it was in the UK.
It was so easy to learn that I was programming on the first day, and yet it was capable of true oddness. Programs had line numbers, and if a line's number started with an odd digit, the commands were SQL-like and worked on the entire set; if it started with an even digit, the commands formed a block that dealt with each item in the set one at a time.
So, a simple type of program could be something like this (I don't recall the syntax either)
100 select * from user_table
110 join with address_table
120 exclude duplicate addresses
200 for each record in set
205 print firstname, lastname
210 print addressLine1
220 print City, State, Zip
300 display "address labels printed"
It was capable of quick and impressive applications that were almost simple to understand, and yet it had a sneaky power that I was only beginning to grasp after two years at the job.
How do I find it? Can I get a copy that will work on a PC?
I believe that's the Pr1me Information language; it's now hiding out in IBM UniVerse. It was a competitor to Pick.
I think the correct name was "Prime Information". Searching for that phrase might get you some slightly more relevant results.
You can find some info here.