seo god google
Thursday, July 8, 2010
Google announces the completion of a new web indexing system: Caffeine
Google has announced the completion of a new web indexing system called Caffeine. Caffeine provides 50 percent fresher results for web searches than our last index, and it's the largest collection of web content we've offered. Whether it's a news story, a blog or a forum post, you can now find links to relevant content much sooner after it is published than was ever possible before.
Some background for those of you who don't build search engines for a living like us: when you search Google, you're not searching the live web. Instead you're searching Google's index of the web which, like the list in the back of a book, helps you pinpoint exactly the information you need. (Here's a good explanation of how it all works.)
So why did we build a new search indexing system? Content on the web is blossoming. It's growing not just in size and numbers but with the advent of video, images, news and real-time updates, the average webpage is richer and more complex. In addition, people's expectations for search are higher than they used to be. Searchers want to find the latest relevant content and publishers expect to be found the instant they publish.
To keep up with the evolution of the web and to meet rising user expectations, we've built Caffeine. Here is how our old indexing system worked compared to Caffeine:
Our old index had several layers, some of which were refreshed at a faster rate than others; the main layer would update every couple of weeks. To refresh a layer of the old index, we would analyze the entire web, which meant there was a significant delay between when we found a page and when we made it available to you.
With Caffeine, we analyze the web in small portions and update our search index on a continuous basis, globally. As we find new pages, or new information on existing pages, we can add these straight to the index. That means you can find fresher information than ever before—no matter when or where it was published.
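To make the contrast concrete, here is a minimal toy sketch in JavaScript (not Google's actual code, just an illustration of the idea) of the difference between rebuilding an entire inverted index in one batch and folding new pages in incrementally as they are found:

// Toy in-memory inverted index, for illustration only.
function tokenize(text) {
  return text.toLowerCase().split(/\W+/).filter(Boolean);
}

// Old-style batch refresh: re-analyze every page and rebuild from scratch.
function rebuildIndex(allPages) {
  const index = new Map(); // word -> Set of URLs
  for (const [url, text] of Object.entries(allPages)) {
    for (const word of tokenize(text)) {
      if (!index.has(word)) index.set(word, new Set());
      index.get(word).add(url);
    }
  }
  return index;
}

// Caffeine-style incremental update: add a single page as soon as it is found.
function addPage(index, url, text) {
  for (const word of tokenize(text)) {
    if (!index.has(word)) index.set(word, new Set());
    index.get(word).add(url);
  }
}

// A newly published post becomes searchable right away, with no full rebuild.
const index = rebuildIndex({ "example.com/old": "an older page about search" });
addPage(index, "example.com/new-post", "fresh news about caffeine indexing");
console.log(index.get("caffeine")); // Set { "example.com/new-post" }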
Caffeine lets us index web pages on an enormous scale. In fact, every second Caffeine processes hundreds of thousands of pages in parallel. If this were a pile of paper it would grow three miles taller every second. Caffeine takes up nearly 100 million gigabytes of storage in one database and adds new information at a rate of hundreds of thousands of gigabytes per day. You would need 625,000 of the largest iPods to store that much information; if these were stacked end-to-end they would go for more than 40 miles.
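As a quick sanity check of those numbers (assuming the "largest iPod" means a 160 GB model, which is my assumption rather than something stated above), the figures are consistent:

const indexSizeGB = 100e6;   // "nearly 100 million gigabytes"
const ipodCapacityGB = 160;  // assumed capacity of the largest iPod of the day
console.log(indexSizeGB / ipodCapacityGB); // 625000 iPods, matching the figure above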
We've built Caffeine with the future in mind. Not only is it fresher, it's a robust foundation that makes it possible for us to build an even faster and more comprehensive search engine that scales with the growth of information online, and delivers even more relevant search results to you. So stay tuned, and look for more improvements in the months to come.
Sunday, April 11, 2010
Search Engine Optimization Service
Sharam web solution has been a leading Web Design, Development and SEO company since 2009, offering SEO services.
Google is unintentionally making the Web worse:
There is a huge problem with Google’s crude attempt to use total page loading time for ranking long pages (ones that require scrolling down to view fully): it likely uses the total page loading time, without taking into account that in the user’s browser the page could be visible long before that, as long as he doesn’t scroll down. We own very popular websites with long pages, and we have always tried to optimize the experience for the user by showing him what we can as soon as possible. That meant splitting images and JavaScript into small parts that only load when they are actually used in that part of the page, so the user sees the page on his screen as soon as possible. None of the current tools, such as YSlow, WebPagetest.org or Google’s very own Page Speed, understands this, so there is absolutely no reason to think that Googlebot could understand it either.
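As a rough illustration of what we mean by splitting JavaScript into small parts that only load when they are needed, here is a minimal sketch (the file name "/js/gallery.js" and the "gallery" element id are made up for the example) that injects a script only once the user scrolls near the section that uses it:

// Sketch of on-demand script loading; names below are hypothetical.
function loadScript(src) {
  const script = document.createElement("script");
  script.src = src;
  document.head.appendChild(script);
}

let galleryLoaded = false;
window.addEventListener("scroll", function () {
  if (galleryLoaded) return;
  const gallery = document.getElementById("gallery");
  // Only fetch the gallery code once its section is within one screen of view.
  if (gallery && gallery.getBoundingClientRect().top < window.innerHeight * 2) {
    galleryLoaded = true;
    loadScript("/js/gallery.js");
  }
});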
Traffic from Google rankings is important to us, so we did what we think they wanted: we listened to the recommendations of these tools and combined images and JavaScript to make the total page loading time quicker, which makes our pages appear to load more slowly for actual users. This is what happens when Google implements crude measures with a lot of secrecy about their methods – the Web becomes worse.
After the changes we made, our pages load faster according to all existing testing tools, and I’m sure Google Webmaster Tools will show an increase in speed. But these tools are not just slightly flawed; they are totally wrong and misleading, because they use the total page loading time. What matters is what users see on their screens, and our pages used to fill the user’s screen very quickly and continue loading after that.
Think about a page that has 30 picture thumbnails arranged vertically. You have two choices:
1. A regular page that loads these pictures one by one, with one file for each thumbnail. The first pictures show up quickly, and the user won’t even know that the rest of the thumbnails are not yet loaded unless he scrolls down immediately (see the sketch after this list). Good user experience, but a lot of connections and some overhead for each picture, so the page speed testing tools show bad performance.
2. One huge sprite (look up CSS sprites for more info) that the page splits up into separate pictures with CSS. Horrible user experience, because the user has to wait until the whole sprite loads completely before he sees anything, but the page speed testing tools show an improvement.
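For option 1, here is a minimal sketch of the kind of lazy loading described above (the "thumb" class and the data-src attribute convention are made up for the example): thumbnails near the viewport load immediately, and the rest load only as the user scrolls toward them.

// Lazy-loading sketch for option 1; markup is assumed to look like
// <img class="thumb" data-src="/thumbs/01.jpg" alt="..."> (hypothetical names).
function loadVisibleThumbnails() {
  const pending = document.querySelectorAll("img.thumb[data-src]");
  for (const img of pending) {
    // Start loading an image once it is within one screen height of the viewport.
    if (img.getBoundingClientRect().top < window.innerHeight * 2) {
      img.src = img.dataset.src;
      img.removeAttribute("data-src");
    }
  }
}

window.addEventListener("scroll", loadVisibleThumbnails);
window.addEventListener("load", loadVisibleThumbnails);

The sprite approach in option 2 is the opposite: the browser cannot paint a single thumbnail until the entire combined image has arrived.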