In this exercise, we will implement a parallel web crawler using the concurrency features we have seen so far.
crawl is currently implemented by crawlNaive, which fetches web pages serially (not in parallel) and sometimes fetches the same URL more than once. Replace it so that crawl works correctly: each URL should be fetched at most once, and fetches should run in parallel.
Hint: you may want to represent the cache of already-fetched URLs as a Set, and share it between threads using one of the synchronization mechanisms we have covered.
An example solution is here.
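To make the hint concrete, here is a minimal sketch in Python of the general approach: a shared set of visited URLs guarded by a lock, so that concurrent workers never fetch the same URL twice. This is an illustration of the technique, not the linked solution; the names `crawl_parallel`, `fetch`, and `max_depth` are illustrative, and your exercise's language and APIs will differ.

```python
import threading

def crawl_parallel(start_url, fetch, max_depth):
    """Crawl from start_url, fetching each URL at most once, in parallel.

    `fetch(url)` is assumed to return a list of URLs linked from that page.
    All names here are illustrative, not part of the exercise's API.
    """
    visited = set()          # cache of URLs already claimed for fetching
    lock = threading.Lock()  # guards `visited` and `results` across threads
    results = []             # URLs successfully crawled

    def worker(url, depth):
        if depth > max_depth:
            return
        # Check-and-insert must be atomic, otherwise two threads can
        # both see the URL as unvisited and fetch it twice.
        with lock:
            if url in visited:
                return
            visited.add(url)
        links = fetch(url)
        with lock:
            results.append(url)
        # Crawl each discovered link in its own thread, then wait for all.
        threads = [threading.Thread(target=worker, args=(link, depth + 1))
                   for link in links]
        for t in threads:
            t.start()
        for t in threads:
            t.join()

    worker(start_url, 0)
    return results
```

The key design point is that the membership test and the insertion into the set happen under the same lock acquisition; splitting them lets a race slip in between the check and the add.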