Note: The video also has subtitles available in many languages.
What kinds of links can Googlebot discover?
Googlebot parses the HTML of a page, looking for links in order to discover the URLs of related pages to crawl. For Googlebot to discover these pages, your links need to be actual HTML links, as described in the webmaster guidelines on links.
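To illustrate the difference (URLs and handler names here are placeholders), Googlebot can reliably follow a proper a element with an href attribute, while links wired up purely through JavaScript event handlers leave no URL to extract:

```html
<!-- Crawlable: a real anchor element with a resolvable href -->
<a href="https://example.com/products">Products</a>

<!-- Not reliably crawlable: no href attribute for Googlebot to extract -->
<span onclick="location.href='https://example.com/products'">Products</span>
<a onclick="goTo('products')">Products</a>
```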
What kind of URLs are okay for Googlebot?
Googlebot extracts the URLs from the href attribute of your links and then enqueues them for crawling. This means the URL needs to be resolvable; simply put, the URL should work when entered into a browser's address bar. See the webmaster guidelines on links for more information.
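A rough sketch of that resolvability check in JavaScript (the helper name and sample hrefs are ours, not Google's): an extracted href is usable if it resolves, relative to the page it was found on, to an http(s) URL a browser could actually fetch. The standard URL constructor performs exactly this resolution:

```javascript
// Sketch: decide whether an extracted href resolves to a fetchable URL.
// "Crawlable" here means: resolves (relative to the page's URL) to http(s).
function isCrawlable(href, baseUrl) {
  try {
    const url = new URL(href, baseUrl);      // resolves relative hrefs against the page URL
    return url.protocol === "http:" || url.protocol === "https:";
  } catch {
    return false;                            // not a resolvable URL at all
  }
}

const base = "https://example.com/shop/";
console.log(isCrawlable("/sale", base));              // true: https://example.com/sale
console.log(isCrawlable("winter-boots", base));       // true: relative URLs resolve fine
console.log(isCrawlable("javascript:void(0)", base)); // false: nothing to fetch
```

Note that this is only an illustration of the principle; Googlebot's actual URL handling is more involved.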
Are relative and absolute URLs fine?
As long as these links fulfill the criteria outlined above and in our webmaster guidelines, yes.
Does Googlebot understand fragment URLs?
Fragment URLs, also known as “hash URLs”, are technically fine, but might not work the way you expect with Googlebot.
Fragments are meant to address a piece of content within a page, and when used for that purpose, fragments are absolutely fine.
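For example (the URLs and ids here are illustrative), a fragment that points at a section within a document is a normal, supported use, whereas using fragments to load entirely different content may not behave as you expect, since Googlebot generally ignores the fragment when crawling:

```html
<!-- Fine: the fragment addresses a piece of content within the page -->
<a href="/help#shipping">Shipping details</a>
<h2 id="shipping">Shipping</h2>

<!-- Risky: the fragment selects entirely different content (hash routing);
     Googlebot may treat /app#/products and /app#/about as the same URL -->
<a href="/app#/products">Products</a>
```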
Does Googlebot still use the AJAX crawling scheme?
The AJAX crawling scheme has long been deprecated. Do not rely on it for your pages.
Instead, we recommend using the History API and migrating your web apps to URLs that do not rely on fragments to load different content.
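A minimal browser-side sketch of that migration, assuming a client-side rendered app (the navigate and renderView function names are invented for illustration): instead of hash routing, history.pushState() gives each view a real path that Googlebot can treat as a distinct URL.

```javascript
// Before: hash routing  -> https://example.com/#/products (fragment ignored by Googlebot)
// After:  History API   -> https://example.com/products   (a real, crawlable URL)

function navigate(path) {
  // Update the address bar without a full page reload
  history.pushState({ path }, "", path);
  renderView(path); // assumed app-specific function that swaps in the new content
}

// Re-render when the user presses back/forward
window.addEventListener("popstate", (event) => {
  renderView(event.state ? event.state.path : location.pathname);
});
```

For this to work end to end, the server also needs to respond with meaningful content for each of these paths when they are requested directly.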
Stay tuned for more Webmaster Conference Lightning Talks
This post was inspired by the first installment of the Webmaster Conference Lightning Talks; make sure to subscribe to our YouTube channel for more videos to come! We also recommend joining the premieres on YouTube to participate in the live chat and Q&A session for each episode!
If you are interested in seeing more Webmaster Conference Lightning Talks, check out the Google Monetized Policies video and subscribe to our channel to stay tuned for the next one!
Join the webmaster community in the upcoming video premieres and in the YouTube comments!