Now the contest begins!! We have to solve 3 problems worth different points (250, 500, and 1000 points). Curious what the contest problems look like? Click on a problem name to see its statement (but remember not to copy it for other uses!). Below is my discussion of each problem:
This is the easiest problem of the three (you can click on the WebSpider link above to view the problem statement). It is about the Google engine crawling a website (so very Google). You have to build a program that determines the number of unique files the Google engine found while crawling the website! For example, suppose the engine crawls a website and finishes after traveling three levels deep. The engine then has 3 passes (3 levels of the website that have been crawled). The first pass is the home page of the website. The second pass consists of the pages linked from the home page. The third pass consists of the pages linked from the second pass (level by level, just like BFS...).
Now you have the pages the Google engine crawled in the first, second, and third passes. Each page contains links, right? You have to write a program that receives as input the lists of all links found in the first, second, and third passes and determines the number of unique "files" found... got the idea of the problem now?
The difficulty in this problem is that a link may refer back to the same page/file. So even though Google found a thousand links while crawling a site, that does not necessarily mean a thousand unique files are linked... perhaps there are only 50 unique files, since many of the links refer to the same file/page. A "file" in this case is the page a link points to (for example, in the link support/login.jsp, login.jsp is the file, or page). You have to determine the number of unique files across the three passes done by the Google engine!
I solved this problem within an hour... I just used java.util.HashSet and string manipulation. I converted every link into an ABSOLUTE path, then added it to the HashSet; if a link is already in the set, the HashSet simply ignores it. The answer is therefore the size of the HashSet :) The examples given in the problem statement were plentiful and very clear! I think this problem really, really was a bonus problem!
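The approach above can be sketched in Java. This is not the actual contest submission; the helper names (`toAbsolute`, `countUnique`) and the simple "./" and "../" normalization are my own illustration of the idea: normalize every link to an absolute path so equivalent links collide in the set, then report the set's size.

```java
import java.util.ArrayDeque;
import java.util.Deque;
import java.util.HashSet;
import java.util.Set;

public class UniqueFiles {
    // Hypothetical normalizer: resolve a link against the page it appeared on,
    // collapsing "." and ".." segments so equivalent links compare equal.
    static String toAbsolute(String basePage, String link) {
        String path = link.startsWith("/")
                ? link
                : basePage.substring(0, basePage.lastIndexOf('/') + 1) + link;
        Deque<String> parts = new ArrayDeque<>();
        for (String seg : path.split("/")) {
            if (seg.isEmpty() || seg.equals(".")) continue;      // skip no-op segments
            if (seg.equals("..")) {                              // go up one directory
                if (!parts.isEmpty()) parts.removeLast();
            } else {
                parts.addLast(seg);
            }
        }
        return "/" + String.join("/", parts);
    }

    // passes[i] holds the links found in pass i; pageOfPass[i] is the page
    // those links appeared on. Duplicates are ignored by the HashSet, so the
    // answer is just the set's size.
    static int countUnique(String[][] passes, String[] pageOfPass) {
        Set<String> seen = new HashSet<>();
        for (int i = 0; i < passes.length; i++)
            for (String link : passes[i])
                seen.add(toAbsolute(pageOfPass[i], link));
        return seen.size();
    }
}
```

For instance, on the page `/index.html` the links `a.html` and `./a.html` both normalize to `/a.html` and count as one file.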
This problem, TripMapper... I should have known it was a TrapMapper!! (I mean, it's a trap.) A Time Limit trap! Damn... I thought it was output-time oriented... Anyway, you can look at the problem statement by clicking the problem name link above.
The second problem was worth twice the points of the first, but?? No one solved this problem during the contest! Most of the submitted code was successfully challenged by Pascal AN... (including mine). He knew exactly that BFS/DFS/brute force would fail on time :) (Damn... I thought my iterative deepening was impeccable :P)
Okay, let me say something about the problem. It is just a regular shortest path problem, but what makes it different from the usual shortest path is its DATA STRUCTURE!!! (damn... they play with the data structure). It's quite hard to translate the "map" into a collection of nodes and edges. I hate intersections!! They present the map as a 550 x 550 grid. Guess what? I multiplied it by two again to avoid short-circuiting between adjacent nodes. I guess this was the culprit of my TLE. You can look at the problem statement to get the idea of the map.
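For reference, the basic "grid as implicit graph" translation looks like the sketch below. This is only a minimal illustration, not TripMapper's actual map encoding (the intersections and the doubled grid are not reproduced here): it assumes a plain character grid where `'#'` is blocked and every step costs 1, so plain BFS gives the shortest path.

```java
import java.util.ArrayDeque;
import java.util.Arrays;

public class GridShortestPath {
    // Minimum number of unit steps from (sr, sc) to (tr, tc) on a character
    // grid where '#' is a wall; returns -1 if the target is unreachable.
    static int shortestPath(String[] grid, int sr, int sc, int tr, int tc) {
        int rows = grid.length, cols = grid[0].length();
        int[][] dist = new int[rows][cols];
        for (int[] row : dist) Arrays.fill(row, -1);   // -1 = unvisited
        ArrayDeque<int[]> queue = new ArrayDeque<>();
        dist[sr][sc] = 0;
        queue.add(new int[]{sr, sc});
        int[] dr = {1, -1, 0, 0}, dc = {0, 0, 1, -1};  // 4-neighbor moves
        while (!queue.isEmpty()) {
            int[] cur = queue.poll();
            for (int k = 0; k < 4; k++) {
                int r = cur[0] + dr[k], c = cur[1] + dc[k];
                if (r < 0 || r >= rows || c < 0 || c >= cols) continue;
                if (grid[r].charAt(c) == '#' || dist[r][c] != -1) continue;
                dist[r][c] = dist[cur[0]][cur[1]] + 1;
                queue.add(new int[]{r, c});
            }
        }
        return dist[tr][tc];
    }
}
```

Even this naive version visits each cell once, so on a 550 x 550 grid (and more so on a doubled 1100 x 1100 one) the constant factors of the node/edge translation matter; that overhead, not the algorithm itself, is likely what pushed my solution over the time limit.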
Conclusion: I should've never touched this problem!
Guess what? I didn't even have time to read this problem statement! (not enough time). But our number-one winner, Ardian KP, brilliantly left the 500 problem intact and headed directly to attack this problem (which he thought was easier; DP is his specialty, so how could it not be?).
Comment: No comment... Ardian, maybe you want to comment on this one? Drop me an email: firstname.lastname@example.org
Sindux felt that problem 500 was similar to one from a previous TopCoder contest:
Next : Google Tech Talk & Awards