Hangout from India

>>Matt Cutts: Hey, everybody. I think we’re
hanging out. We have a lot of people in this Hangout. And we’ve got some questions already
streaming in. But before we start out, let me just say a couple quick things. One is
that we have to stop at three o’clock India time no matter what ’cause they’re gonna do
another Hangout. Like, the Developer Innovations guys. But
I also wanted to just do some very quick introductions. Would you guys be willing to cycle through
a little bit and introduce yourselves so people know what’s going on? Let’s start. I’m Matt
Cutts. I’m the head of the Webspam Team. I work with webmasters and SEO. Wysz, do you
wanna–?>>Michael: Michael or Wysz or Michael Wysz,
or whatever you want to call me. I work on Webmaster Outreach. So, supporting webmasters,
like reconsideration requests, videos, Hangouts, that kind of thing.>>Matt Cutts: Yup. And we’re both normally
in Mountain View and here we are in Hyderabad. And it’s not just us in Hyderabad. Do you
guys from Mountain View want to introduce yourselves?>>Michael: We’ll start with Brian.>>Matt Cutts: Brian?>>Brian White: OK. Can you hear me?>>Matt Cutts: Yup.>>Brian White: Hi, guys. I’m Brian White.
I’m a program manager in the Webspam Team in Mountain View. It’s really good to be here
in India with everybody. So,–.>>Matt Cutts: Absolutely. Nathan, you wanna
introduce yourself?>>Nathan: Yeah, sure. My name is Nathan, Nate.
Search quality, Webspam. Based out of Mountain View as well, here, visiting in India.>>Matt Cutts: Fantastic. You wanna do a quick
hello?>>Aradhana: Sure. Hi, I’m Aradhana. I’m also
out of the Mountain View office on Webmaster Outreach.>>Matt Cutts: Awesome. So, count it. We have
five different people from Mountain View visiting India right now. So, we’re putting a lot of
attention into trying to make sure that we handle it really well, that we provide all
the support that we can. But more importantly, that we have a great team of people right
here in Hyderabad who work on spam fighting and who also work on all kinds of different
Webmaster Outreach. Do you guys wanna introduce yourselves a little bit? Malik? You wanna
go first?>>Malik: Yeah. So, hi. I’m, this is Malik
here again from Hyderabad Team. I work for the Search Quality team here. And I have with
me my team. I’ll just show around. [Matt Cutts laughs] Hi. You see here. Yeah? So, yeah. We all work
for the Search Quality Team in Hyderabad here.>>Matt Cutts: Awesome. Paul, do you wanna
say hello?>>Paul: Hey guys. This is Paul. I also work
with the Search Quality Team, working with Webmaster Outreach locally. So, you guys can
look forward to a lot of support and help from us.>>Matt Cutts: So, Cysai.>>Cysai: Hi, everyone. I’m Cysai. I work for
Hyderabad office with the Search Quality Team.>>Matt Cutts: Very cool. And finally, let’s
have everybody’s boss’s boss’s boss. Vivek, you wanna say hello? [laughter]>>Vivek: Yeah. Hi, guys. I’m Vivek. I, again,
work with the Search Quality Team here in Hyderabad. [Matt Cutts laughs]>>Matt Cutts: OK. So, we’ve got–. As you
can tell there’s a big number of people, both who are local to Hyderabad and do a ton of
quality-related work. I was talking with somebody earlier today who’s an engineer here who’s
worked on Enterprise and all kinds of other things. So, there’s a really big presence. And it’s not just in Hyderabad. We have a
Gurgaon office. There’s Bangalore. I landed in Delhi this weekend and got to see the Taj
Mahal and had the best experience not only because of the architecture and because it
was a fantastic sight, but even before I got out of the Delhi airport, somebody said hello. They walked up and they said, “Oh, I really
like your Webmaster videos and so it’s great that you’re here in India.” And it was such
an incredibly warm welcome. And even at the Taj Mahal, somebody walked up and said hello.
And that really makes it feel like doing these videos and doing Hangouts and all that sort
of stuff is worthwhile because people can benefit from them. So without further ado, let’s actually answer
some questions. How about that? A few people started out saying, “Where are you? How are
you? Where are you going?” I’m here in Hyderabad for the next several days. I didn’t realize
Republic Day [laughs] was gonna be a holiday, so officially I’m working on Thursday, but
I might go and see a little bit of the sights. And then on Friday, I fly back out. I’m gonna
stop by Korea on the way back home. But it’s been fantastic to get to visit with people.
OK. So, a very interesting question starts out with Stephan, actually, which doesn’t
sound like an incredibly Indian name. But he says, “Something special to know from webmasters
in India?” So, I was trying to think. What’s some information
that I could share with you guys that maybe not everybody else knows yet?
I think, maybe tomorrow morning, we might do a blog post that talks about ways that
webmasters can standardize their markup on forms so that people can start to make it
easy for autocomplete in web browsers like Chrome and all those sorts of things. If you think about it, whenever you start
to fill out a form and it completes well, that’s fantastic. But everybody uses different
names for the fields. And so, I think–and I’m trying to give you guys a few hours of
preview–I believe that we might put out a blog post pretty soon where we’ve been working
with other people and we’re trying to come up with a good standard so that if you can
mark up with various standard form names, then anybody who comes and wants to fill out that
form, [snaps] they can do autocomplete much faster. They can fill out the form much faster. And
that way, it’s much easier for people to buy or subscribe or whatever they’re interested
in. I’m hoping that the Chrome guys don’t get angry that I’m dropping a hint a little
bit early. Hopefully tomorrow that will roll out. And then, by the time everybody in Mountain
View, all those square guys wake up, that blog post will go live and people will be
able to learn about it. So, a really interesting question from Taraka,
wants to know, “Will you include public or private, but shared, Google Docs and Spreadsheets
in ‘Search, plus Your World’ results?” So, I’m just a Search guy. I’m a Webspam guy.
I don’t have special insight on what the product road map looks like for Search, plus Your
World, but I can certainly try to give you a little bit of the insight from the outside. And from where I’m standing, it would be incredibly
helpful if I could type a search into one search box and be able to find the content
no matter where it was. So, I personally would find it incredibly useful. Like, we’ve started
a public doc where we’re talking about some things related to this Hangout. If I could go to just regular Google and search
and find that document, I think that would be fantastic. That said, I don’t know what
our product plans are and so I have no idea how it might work. You guys might have noticed
the privacy policy that we put out just today, which takes over 60 different privacy policies
and combines them together into one privacy policy. There’s still a few, like Google Wallet, maybe
Chrome, Google Books, that are separate, but to be able to take like, tens of thousands
of words of privacy policy and condense it down into one simpler, easy-to-read privacy
policy, is a good thing. And hopefully, that will let you do things like if there’s something
really useful, like being able to type in the name of a friend and have it automatically
spell correct that because it knows your contacts or something like that, instead of having
these separate silos, I think that would be really, really cool. So, I don’t know how it will work out in the
future, but it would be really nice if they added that sort of functionality in my personal
life. OK. Here’s an interesting one. “What would cause Google to penalize websites when
performing SEO? I have a website that was ranking top three for three very competitive
words and boom; it started sinking like a rock to page three.” So, I might lean back. Anybody else wanna
tackle that one? Anybody else? Brian? Nathan? Paul? Anybody wanna chime in, or Malik, on
why something might drop out of the top one or two or three results? [pause]>>Michael Wysz: Nobody wants to say? I mean,
I can say something.>>Matt Cutts: Yeah. Go for it.>>Michael Wysz: I mean, there’s a whole bunch
of different things that could lead to a rankings drop. The first thing that I would do, though,
is like, open up the Webmaster Guidelines and just use that as a checklist. And I would,
I might even skip down to the last section, which is the Quality Guidelines, and see is
there something that I was doing that maybe some SEO told me to do or maybe I got some
misinformation somewhere? And it turns out that I was doing something
like buying links that pass PageRank, for example. That actually violates our guidelines.
If you find out that that was the issue, the resolution is actually pretty straight-forward
though. You first clean up whatever you had done, so if you had hidden text on your page,
for example, remove it. Take care of that first. And then, we have
a process called “Reconsideration Request” where you just say, “Hey, please take another
look at my site, Google.” And we’ll re-evaluate it and if we had taken action on the site
and you fixed everything, then we can go ahead and change that. But sometimes, it’s things
like crawling issues. Somebody on your web post might be having
issues or maybe you were working on a test version of your site and that puts [inaudible].
Then, you move from the test server to the production server and you forget to change
that. So, it could be that as well. We have actually–. There’s a checklist that you can
find in our Help Center. So, click on “Help
Center.” And there’s an article called “So You’re Not Doing Well on Search.” And if you
look at that, it’s compiled from over the years of all the different things that we’ve
seen happen to websites. I don’t know if anyone has anything to add.>>Matt Cutts: Yeah. So, the basic idea is
if something has gone horribly wrong, either because you were spamming or crawl issues
or whatever, it’s often best to start with Webmaster Tools, because we will usually
leave a message there for a large fraction of stuff. So, if you show up and you see a
message, that’ll tell you exactly what’s going on. If you have been doing aggressive SEO, if
it’s really, really bad, egregious black-hat SEO, we often will remove the site entirely. If
it’s stuff where the site is lower quality in terms of like, thin affiliates or not having
enough original content, that sort of thing, then that might be the sort of thing that
would merit a demotion. And what you can do is try to fix or find
out what the original issue was by going to Webmaster Tools. And if you think that
you’ve fixed the issue, you can do a Reconsideration Request. And when you do a Reconsideration
Request, you will find out explicitly, in the vast majority of cases, whether we’ve actually
taken manual action on your site or not. And if we have taken manual action in the
past, whether we’re going to reconsider that request or whether we think there’s still
some work left to be done. So, there’s a lot of good stuff that you could start out with
there. Those are the high-order bits. I don’t want to spend too long on that question, but that’s
where I start whenever I’m diagnosing issues. So, Rupesh asks, “Is Google stepping-up its
anti-webspam efforts?” Well, as you can see here, we’ve got an awful lot of people in
this Hangout who are thinking a lot about spam in addition to all the engineers back
in Mountain View. We actually have spam fighters not only in Hyderabad, but around the world. So, in many places, we can almost get
24-hour coverage in terms of being able to respond to spam. It’s pretty fun. So, you
guys who are on the Webspam Team in Mountain View, how many people are staying past Friday?
OK. So, everybody but me. I have to leave on Friday. Nate’s actually been here a week
before me. And so, a lot of people have been here to
make sure like, all we do is live and breathe and think about spam in a lot of cases. So,
it’s definitely the case that we’re trying to think about what do we need to do to tackle
the next wave of black-hat spam, and what sorts of things we need to do to improve. OK. Sandeep asks, “A quick question. As a strategy
to recover from Panda: after finding posts which we feel are low in quality, should we,
one, nofollow and noindex it? Two, 404 it? Or three, move it to a sub-domain?” If it were
me, and you really know that it’s lower quality, I would probably go ahead and 404 it. Right? If that content is not sufficient to–.
If you’re worried that content is so low-quality or so bad that it’s starting to affect the
entire reputation of your site, there’s no harm in just removing it, right? Just remove
it entirely and let it 404. The other things are tactics that you can do. You can try to
segregate it off to a lower quality domain, but I mean, that’s still in some sense associated
with you, whether it’s a sub-domain or domain, depending on what we look at. Nofollow and noindex, you could do that
because Google is then not going to have that show up in its index. And so, that is one
strategy. If you really–. If the quality is good enough, you’re willing for a person
to see it, but you’d really rather not have it be reflected on your site, then you could
go with nofollow and noindex. But it sounds like you’re talking about posts
which are low enough in quality, either original or auto-generated, where it really might be
affecting the entire reputation of the site. And for those, I would probably just go ahead
and choose the 404.>>Michael Wysz: I guess that last option is
maybe better left for cases where instead of known low-quality content, you might have
content with unknown quality. So, if you had user-generated content, you have an untrusted
user account and you’re still waiting for them to build a reputation, maybe get rated
by the other users or moderators that take care of spam abuse, that stuff you might want
to keep hidden from search engines.>>Matt Cutts: Yeah.>>Michael Wysz: Just while you’re feeling
the–.>>Matt Cutts: Yeah. And if you have user-generated
content and you can’t control the quality of it, then segregating it somewhere a little
bit different can make a lot of sense.>>Michael Wysz: Yeah.>>Matt Cutts: OK. Somebody says, “Don’t forget
to try the Biryani.” [laughs] The chicken has been really, really good. I’ve been enjoying
that. The food has been delicious.>>Michael Wysz: Yeah.>>Matt Cutts: My wife had a bad experience
in college and so we never get to go have Indian food because she thinks she doesn’t
like Indian, but I think she secretly does like Indian. She just had one bad meal. So,
I’ve been having so much Indian food here. It’s been really, really good. And I appreciate
the team hosting me and taking us out to a bunch of great places.>>Michael Wysz: Yeah. It’s been a nice surprise
for me, too, because I pretty much never have Indian food back home. The last time I had
Indian food was–>>Matt Cutts: Yeah.>>Michael Wysz: when I first moved out to
California, like over five years ago. Went to one restaurant in California. It was actually,
it was kind of OK. But I just was like, “OK. Whatever.” But I guess I didn’t have the full,
authentic experience. But I actually had a home cooked meal my first meal here. So, it’s
been kind of a battle ever since.>>Matt Cutts: Yeah. So, not to tell any tales–.
Is it OK if I tell where you’re from?>>Michael Wysz: Yeah.>>Matt Cutts: So, Wysz is from Philadelphia.
So, that’s the land of Philly cheese steaks and hoagies and all that kind of stuff. So,
sometimes he’s a little conservative with what he eats. So, this has been a real teachable,
growing moment for all of us, I think.>>Michael Wysz: That’s right.>>Matt Cutts: [laughs] OK. Okshai asks, “Is
one hundred percent recovery from Google Panda update possible? If yes, what do you suggest?”
And the answer is yes. It is possible to recover a hundred percent from Panda. Oh hey. The projector’s
screwing up. [high echo sound] And we have echo on. [laughs] OK. So, it is
possible to recover from Panda in the following ways. Remember, Panda is a hundred percent
algorithmic. There’s nothing manual involved in that. And we haven’t made any manual exceptions.
And the Panda algorithm, we tend to run it every so often. It’s not like it runs every
day. It’s one of these things where you might run
it once a month or something like that. Typically, you’re gonna refresh the data for it. And
at the same time, you might pull in new signals. And those signals might be able to say, “Ah,
this is a higher-quality site.” So, there’s a solid group of engineers that I had the
chance to work with who have been looking for signals that differentiate the higher
quality sites from the sites that might be slightly lower quality. And if you look at–even in the last, say, two
or three months–we’ve done a better job about differentiating all those. And so, when we
rerun the pipeline to recompute the data and then we push that back out–I think the most
recent one was earlier this month–there was one that was probably about two weeks ago. And so, when that happens, if, according to
our signals, it looks like the site is high-quality or there’s new data or there’s new signals,
then you would just pop out of being affected and you wouldn’t have to
worry about it at all. So, in order to recover from Panda, take a fresh look and basically
ask yourself, “How compelling is my site?” We’re looking for high quality. We’re looking
for something where you land on it, you’re really happy, the sort of thing where you
wanna tell your friends about it and come back to it, bookmark it. It’s just incredibly
useful. That’s the sort of thing that we don’t want to get affected. So, yes. It is possible
to recover. All right. Oh, we have a question about a
thing that we just rolled out last week. So, Mustafa asks, “My question is why Google penalized
for showing more above the fold area ads, and Google itself shows more above the fold
area ads, in its search queries?” This is a completely fair question. I’ve seen a few
people ask it. And I felt bad because I was on a plane, like
a 14-hour plane ride, whenever the blog post went live. So, I didn’t have a chance to address
it until now. So, here’s the situation. If you do a search like, say, credit cards or
web hosting, something that has a lot of ads, it is true that you can see several ads on
Google. But the fact is, the vast majority of searches
do not show four ads or a ton of above the fold ads. In fact, if you looked at the overall
fraction of Google queries that trigger ads, it’s not as large as you would think. So,
the change that we do is algorithmic. So, this is the page layout algorithm. Basically,
well–. I wish I can–. OK. You have a page, right? And if [laughs]–.
We’re going low-tech by the way. So, if you land on that page and there’s a lot of ad
blocks, right? Two very large blocks, whether it be AdSense or whatever else. The algorithm
doesn’t have a hard-coded list of ad networks or anything like that. And we algorithmically
detect these ads. And we don’t do anything special. So, we detect
AdSense just like we detect a lot of ads. So, if you show up on a site and a large fraction
of the content above the fold has these large ad blocks, then that’s a bad experience. But
what we do is we compute–. This is incredibly low tech.>>Michael Wysz: But it’s kind of fun.>>Matt Cutts: We basically take the amount
that’s above the fold and we say, “OK. How much of this looks like it’s ads?” But we don’t
just do it on a single page. We aggregate it across the entire site for all of the pages
that we have in our index. So, if your template is nothing but like, ads as far as the eye
can see until the user scrolls really far down, on every single page, then the page
layout algorithm is likely to trigger. Now, every so often, you’ll see a page like
this on Google, but your typical page on Google does not have those sorts of ads, or they
have a very small number of ads. It might have ads on the right-hand side, but they
don’t have a ton up at the top. So, if you were to process Google and aggregate across
all of Google’s pages, you’d find on average we have very few ads above the fold. And the ads there are relatively small. Now,
what’s interesting is Google actually blocks Google search results from being crawled because
we don’t wanna pollute Bing, we don’t wanna pollute blekko and other search engines. So,
in the first place, yes we would apply the same processing, but in aggregate, Google
doesn’t have that many above the fold ads, if you look at the mix of ads that we have. In addition, we block ourselves from being
crawled, so we’re not even processing those pages. Now, the last question that I’ve heard
a few people ask about is basically, “Of the pages that you do crawl for Google, do you
still run the same algorithm on that?” And the answer is yes. So, there’s one Google
property, which is, I think, something that got shut down like,
years and years and years ago. And that property is affected. So, we run
the exact same algorithm on all the pages that we’ve crawled. It’s not like we do anything
different for Google. So, Mustafa, thanks very much for asking that question because
I did want to clarify that. You can see a screen shot that says, “Oh, here’s a lot of
ads.” But remember, most people aren’t doing searches
for credit cards. Most people are doing searches for everyday things. They’re searching for
information on error pages. They’re trying to figure out how to set the default printer
on Firefox in Ubuntu. And those are not queries that trigger a lot of ads. So, yes. We are running the same sort of processing.
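The processing Matt describes (estimate how much above-the-fold area looks like ads, then aggregate across every crawled page of a site) might be sketched roughly like this. Everything here is an assumption for illustration: the fold height, the box format, and the plain averaging are invented, and Google’s actual signals are not public.

```python
# Hypothetical sketch of the idea behind the page layout algorithm:
# estimate how much above-the-fold area is ads, averaged over a whole
# site rather than judged from any single page. All numbers and names
# here are illustrative.

FOLD_HEIGHT = 600  # assumed visible height in pixels before scrolling

def ad_fraction(page_boxes, viewport_width=1024):
    """page_boxes: list of (x, y, width, height, is_ad) rendered boxes."""
    visible_area = viewport_width * FOLD_HEIGHT
    ad_area = 0
    for x, y, w, h, is_ad in page_boxes:
        if not is_ad:
            continue
        # Count only the part of the ad that sits above the fold.
        visible_h = max(0, min(y + h, FOLD_HEIGHT) - max(y, 0))
        ad_area += w * visible_h
    return ad_area / visible_area

def site_ad_fraction(pages):
    """Average the per-page fraction across every crawled page."""
    return sum(ad_fraction(p) for p in pages) / len(pages)

# One page dominated by two big ad blocks, one mostly content:
heavy = [(0, 0, 1024, 250, True), (0, 250, 1024, 250, True)]
light = [(824, 0, 200, 200, True)]
print(round(site_ad_fraction([heavy, light]), 3))  # prints 0.449
```

Because the score is aggregated, a single ad-heavy page among mostly clean pages would not, by itself, look like an ad-dominated site.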
Ah. Someone asks what he called “the eternal question.” “Is there a limit on the character
count on page titles? Does Google stop reading after a particular number of words?” In the
early days, we might’ve had those kinds of hard-coded limits, but we try to be more sophisticated
over time. For example, suppose your page title is nothing
but capital letters. That’s gonna take up more space than if you have lowercase or if
you have numbers. And so, what we try to do is we try to actually compute the amount of
screen real estate the title takes up. And you don’t wanna go too far because then it would start to look
raggedy or go all the way over to the edge of the page. So, we try to be relatively sophisticated.
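The idea of measuring rendered width rather than character count can be illustrated with a toy sketch. The per-character pixel widths, the extra width for capitals, and the 350-pixel budget are all made-up numbers; Google’s real limits are not published.

```python
# Toy width-based title truncation. Real snippet rendering measures
# actual font metrics; these per-character pixel widths and the cutoff
# are invented for illustration only.

CHAR_WIDTH = {"i": 4, "l": 4, "m": 12, "w": 12}  # a few exceptions
DEFAULT_WIDTH = 8       # assumed width of a typical lowercase letter
UPPER_EXTRA = 3         # capitals assumed wider than lowercase
MAX_PIXELS = 350        # hypothetical display budget

def display_width(title):
    total = 0
    for ch in title:
        w = CHAR_WIDTH.get(ch.lower(), DEFAULT_WIDTH)
        if ch.isupper():
            w += UPPER_EXTRA
        total += w
    return total

def fits(title):
    return display_width(title) <= MAX_PIXELS

# The same words in ALL CAPS consume more of the budget:
print(fits("widgets and gadgets from example corp"))  # True
print(fits("WIDGETS AND GADGETS FROM EXAMPLE CORP"))  # False
```

The point of the sketch: an all-caps title can get truncated even though it has exactly the same character count as its lowercase version.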
So, what I would recommend is think about the overall appearance in terms of the actual
characters and how wide they are. And you can do some iteration. You can figure out
how much Google will typically trim and you can incorporate that into your calculations.>>Michael Wysz: Yeah. I’d just watch out for
extremely long titles that you know are gonna get cut off. I would think about titles more
in terms of appearance than trying to get those few words in there for ranking purposes
because showing up in the search results is one thing, but then the very next step that
you want is that click, right? You want that conversion. And if you have
a title that is maybe just a bunch of comma-separated words, it might look a little unprofessional.
Some users just gloss over that and go to the next result. Or, maybe the important bit
of your title is the part that gets truncated. So, you probably would wanna have your business
name in every title and then the specific page, right? Whether it’s a product’s name or a contact,
or something like that. And in Webmaster Tools, there’s HTML Suggestions,
which will tell you if you have duplicate titles or titles that are too long. So, I
think that will help you focus your efforts, especially if you have a large number of
pages. And I’d even look at queries where maybe you’re
getting a lot of impressions, but relatively low click-through, and see which pages are ranking
there. And look at that title and also look at the snippet, or even the URL.>>Matt Cutts: Yeah.>>Michael Wysz: And say, “OK. Well, that kind
of threw that person off when they looked at my site.”>>Matt Cutts: Yup. Absolutely. OK. So, let’s
do a question from Seeba, who asks, “How do you handle copyrighted content in Google search?
Do you take down sites on request, or do you identify them yourself and remove it?” Great
question, Seeba. So, on the website, typically–. Well, OK. Let me start with YouTube. With YouTube, content publishers will actually
give us content and say, “We own the copyright to this.” If someone uploads the same music,
video, or something like that, then they can choose whether to take it down, whether to
show ads on it and pay the money to them, all sorts of different things. Whether to
let them upload. And so, for YouTube, we have this system.
It’s called Content ID. It lets people register their content and then decide what to do if
someone uploads the same content later. On web search, there’s a couple of ways we approach
it. The first is, if we get a DMCA complaint. That stands for Digital Millennium Copyright
Act. And that basically says, “OK, if I have a
blog post and Wysz scrapes it, or Brian scrapes it, or whoever, and they put up a copy, I
can do a DMCA complaint and I can try and get that taken down.” And that will almost
always succeed unless there’s something like a “he said, she said,” we can’t really tell
what’s going on, or you didn’t do a valid complaint. But that’s the normal process to take care
of it. We also, of course, try to figure out, when we first see a blog post, where
it most likely showed up first. You’d like the original content creator to show up if possible.
But it could be really hard if two copies show up at the same time, to know exactly
who wrote what. And so, we’ve experimented with all kinds
of different ways to try to do better on that. In fact, there’s an entire team in Zurich
which has been working really hard on the engineering side to try to return the highest
quality content, so that if a scraper or duplicate site shows up, that it doesn’t outrank the
other site. And they’ve made really good progress on that.
There’s actually a metrics site in Russia that shows Google going up quite a
bit in terms of recent performance. So, that’s been going relatively well. We’re gonna keep
working on doing things better on our side, such as returning original content. But at
the same time, you might want to, if you see someone scraping your site, go ahead and do
a DMCA request. Digital Millennium Copyright Act request. OK. Here’s a fun one. “I think we can get
benefit from nofollow links on SEO.” Does anybody else wanna tackle whether nofollow
links have any benefit whatsoever? PageRank, or anchor text, or anything? Does anybody
on this Hangout wanna claim that nofollow links actually flow PageRank?>>Nathan: No.>>Matt Cutts: [laughs] Paul, Paul. Paul’s
willing to wade into it. OK. No. It does not flow PageRank. So, every three years we’ll
find a bug where it accidentally, in one really weird corner case, did flow PageRank, and we’ll
take care of that. But no. In general, we do not use nofollow links to discover new links, and we don’t
want to flow anchor text across the nofollow links. And the reason for that is that nofollow
is very useful for user-generated sites, or sites where you’ve got blog comments, things
where you can’t vouch for them. So, people can click on the links, but search engines
don’t necessarily trust those links. So, I can’t speak for Yahoo. I can’t speak for Microsoft. But I can speak for Google. And Google’s crawl
does not use nofollow links to flow PageRank or anchor text. And if you ever see a
case where it even looks like that’s not true, send me a tweet and let me know and we’ll
dig into it, debug it. And if it’s a bug, we’ll fix it. OK? Oh, this one’s a good one
from Beedar. He wants to know, “Rich Snippets is manually
enabled if I understood things correctly.” Incorrect. It used to be the case back then; that
was how things worked. But in actuality, just a few weeks ago we turned it on so that Rich
Snippets can turn on for any site. So, it doesn’t have to be the case now that someone
has to say, “OK. This validates and we’re gonna turn this on.” Now, what that leads to is there have been
a few reports on a few blogs about people trying to abuse Rich Snippets or try to make
like, extra lines show up in their snippet with like star reviews and stuff like that.
And that’s a known problem where if you open it up to everybody, you’ll see a few people
who try to abuse it. And we’ll absolutely look into those cases
and we’ll probably have a form where anybody can report that abuse. So, if you are thinking
about trying to abuse Rich Snippets, I would really not recommend doing that because you
could lose your ability to use Rich Snippets for quite a long time. And it could eventually
be viewed as webspam as well. So, all your competitors would see that. They’d
be able to report you. So, before you start to even think about trying to abuse
that as a tactic, I would pull back because it is something where we’ll be gathering feedback
if we see anybody trying to use that in a spammy or a malicious way.>>Michael Wysz: I just learned something.>>Matt Cutts: Yeah. Yeah, yeah. Absolutely.
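If you are adding Rich Snippets markup, it is worth checking what your pages actually expose before expecting anything to show up. Here is a minimal, hypothetical sketch that pulls `itemprop` values out of microdata using only the Python standard library; the sample markup is illustrative, and real validation should use a proper structured-data testing tool.

```python
# Extract microdata itemprop values from a page so you can sanity-check
# the structured data you serve. Illustrative only; handles the simple
# <span itemprop>text</span> and <meta itemprop content=""> patterns.
from html.parser import HTMLParser

class ItempropCollector(HTMLParser):
    def __init__(self):
        super().__init__()
        self.props = {}
        self._current = None

    def handle_starttag(self, tag, attrs):
        attrs = dict(attrs)
        if "itemprop" in attrs:
            # content="" form (e.g. meta tags) carries the value directly.
            if "content" in attrs:
                self.props[attrs["itemprop"]] = attrs["content"]
            else:
                self._current = attrs["itemprop"]

    def handle_data(self, data):
        if self._current and data.strip():
            self.props[self._current] = data.strip()
            self._current = None

html = """
<div itemscope itemtype="http://schema.org/Review">
  <span itemprop="itemReviewed">Example Widget</span>
  <meta itemprop="ratingValue" content="4">
</div>
"""

collector = ItempropCollector()
collector.feed(html)
print(collector.props)  # {'itemReviewed': 'Example Widget', 'ratingValue': '4'}
```

Checking the extracted values against what users actually see on the page is exactly the kind of self-audit that keeps markup on the right side of the abuse line discussed above.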
The SEO value for the author tag. So, we do have an authorship proposal that basically
lets you prove to Google that you wrote some particular content. And in some cases, we’re
willing to show you a picture, show the users a picture of the author. And so, that can
be useful because that means it’s a little bit more trusted content in the sense that
whenever you see the author, you know that they really wrote it. We’ve seen a few people pretending to be Matt
Cutts online and leaving fake Matt Cutts comments. And so, if we can close the loop where you
know the blog post was really written by Matt Cutts, that can totally make a big difference
in terms of knowing that it’s a little bit more–. That it’s not written by a spammer
or something along those lines. Now, as far as how that might help your rankings,
that’s still unclear. So, we’ll have to see how it goes. But I think people have been
very happy with the idea of having more authorship on the web. So, if we can move away from a
nameless, faceless Web to a Web in which I can trust a review because Wysz wrote it or
I can trust a review because Cy wrote it, that’s something that would be fantastic. And I think that that trend will continue.
But as far as direct impact on the rankings, it’s a little too early to say about that.
We’ll see how it goes. OK. “Google says it wants content above the fold in their blogs
using huge images and below those images is the content. Is Google able to understand
that?” So, it was people on my team who worked on the page layout algorithm and they worked
very hard to make it precise. So, I believe they will be able to differentiate
between just static, large images that are on your site versus something that’s really
going to obscure the content and annoy users–ads, or something along those lines. So, they go on
to ask, “There’s many different theories as to the number of ads on a page and how many
ads on a single page is a good number. And also, what is a good number of above the fold
ads?” And here’s my rough answer. Ask regular people
about whether they’d get annoyed. Right? Because all of our thresholds are based on the idea
of “is this going to be something where a user lands on the page and they’re angry or
they’re unhappy or they’re annoyed or they’re dissatisfied?” So, for example, if you’re using two of the
largest possible blocks of ads and they’re directly above the content and you have to
scroll down to even find the content, that’s clearly something that’s going to annoy users,
right? Whereas, if you’ve got a relatively thin ad, and it’s maybe only a single ad, that’s
the sort of thing that’s probably not going to get in users’ way nearly as much. And so, it might not be as much of an impediment.
So, the other thing is people are talking about the number of ads. And the way that
I would think about it is literally more in terms of surface area. Like, how much of the
screen is obscured and how annoying is it? So, this would be annoying, right, if you
have this much. Now, you could say, “Well, that’s only one
ad over here.” Or you could say, “That’s two ads.” I tried to make, like that. You could
say, “That’s four ads.” Or you sit here and make it 16 ads. But it doesn’t matter the
number of ads. It really matters how much space there is and whether people can get
to the content. And so, that’s what we’re doing. We’re actually
looking at the document object model. We’re parsing the page. We’re trying to render it
basically and figure out how it would look to a real user. And so, at Pubcon I did a quick
preview. I said we’re going to get better at page understanding, user understanding,
document understanding. And what I meant with document and page understanding
was that we would really understand the layout of documents. Where are the ads? How big are
the ads? How much is the content there? Is the content buried way far on one side or
is it front and center? Those kinds of things. And so, that’s the essence of how the algorithm
works. Those are the sorts of things that we look
at. All right. Somebody is coming in in Spanish. That is [laughs]. That’s a surprise to me.
That one’s too long. So, let me tackle the second half of that ’cause I think we’re–.
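The surface-area point above lends itself to a quick illustration. This is purely a toy sketch, not Google’s page layout algorithm (which actually renders the page); the viewport and fold dimensions are assumptions, and it just sums hypothetical ad rectangles against an assumed above-the-fold area:

```python
# Toy illustration of the "surface area" idea -- NOT Google's page
# layout algorithm, which actually renders the page. It just sums
# hypothetical ad rectangles against an assumed above-the-fold viewport.

FOLD_HEIGHT = 600      # assumed fold position in px (hypothetical)
VIEWPORT_WIDTH = 1024  # assumed viewport width in px (hypothetical)

def above_fold_ad_fraction(ad_rects):
    """ad_rects: (x, y, width, height) boxes, assumed non-overlapping."""
    fold_area = FOLD_HEIGHT * VIEWPORT_WIDTH
    covered = 0
    for x, y, w, h in ad_rects:
        # Clip each rectangle to the above-the-fold viewport.
        visible_w = max(0, min(x + w, VIEWPORT_WIDTH) - max(x, 0))
        visible_h = max(0, min(y + h, FOLD_HEIGHT) - max(y, 0))
        covered += visible_w * visible_h
    return covered / fold_area

# One big ad block vs. sixteen small ones covering the same total area:
big = [(0, 0, 1024, 300)]
small = [(x * 256, y * 75, 256, 75) for x in range(4) for y in range(4)]
print(above_fold_ad_fraction(big), above_fold_ad_fraction(small))  # → 0.5 0.5
```

Either way, half the fold is obscured, which is exactly the point being made: the count of ads matters less than the area they take away from the content.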
How many questions are we getting at this point? Oh, wow. We only have 22 minutes?>>Michael Wysz: Yeah.>>Matt Cutts: OK. We have 22 minutes and we
have 79 comments. So, I am gonna start marching a little bit faster. So, I’ll take this Cyrillic
person’s name and I’ll answer the second one about Google sanctions. Can sanctions be canceled
through conversations with Google Support while fixing site problems like usability problems,
content problems and so on? So, Wysz alluded to this in the beginning,
but basically there is an appeals process. So, if we have to take a manual action you
can do an appeal. It’s called a Reconsideration Request. And the result of that is we will
actually tell you whether your Reconsideration Request has been granted or whether we never
took action on your site or whether there’s still some violations of our guidelines. And so, normally, if there’s a manual action
taken on your site, it lasts for a certain amount of time. So, hidden text might last
for 30 days, for example. Whereas something that’s really, really bad might last for a
much longer period of time. And after that period of time, things would automatically
expire. And then at that point, you come back to Google Search results and then if you’re
still spamming, the theory is that we’ll find you again pretty quickly. But you can absolutely appeal at any point.
And if you were victimized by a bad SEO, or if you made a mistake and you’ve corrected
it now and you’ve promised “here’s how we can assure you that it won’t happen again,”
then you absolutely can have that manual action revoked. And so, that’s the whole point of
the appeals process. And things can happen pretty well with that.
Yeah, that can work pretty well. All right. “How have things changed for SEO’s after March
1st?” So, they’re talking about the policy change on the privacy policy. I really don’t
think for SEO’s their world will change much. The whole idea that we had over 70 different
privacy policies–that was pretty excessive. And so, I think it’ll be good to bring it
down to one, so that people might actually be able to read the whole thing and understand
all the different parts of it and not have to try to read 70 if they’re using 70 different
products. But for SEOs, I don’t think that much will change. The hope is eventually,
our user experience, we might be able to unify it a little bit, make it a little bit more
seamless, a little smoother, that sort of thing. But that’s relatively far removed from SEO.
And so, I don’t think SEO will change that much as a result of the privacy policy changes
or the terms of service policies. OK. So, somebody asks, “Is the Panda update now a
sort of automated process? Like, if we were adjusting or making changes to our site and
it starts showing the effect dynamically, or is it the traditional one where you guys
trigger an update every now and then?” That’s a great question. So, it’s not the
kind of thing–. There are some parts of our processing where when we crawl a page, we
will reprocess that page, like keyword stuffing. And if you’ve taken off the keyword stuffing
and we don’t think that page is spam anymore, then it can instantly pop
back up. The Panda update involves more offline processing.
So, it takes more time. There’s a pipeline of data and all these programs that have to
run. And so, we don’t run it every day. It runs typically more like, at this point, once
every month, once every several weeks. And so, a person does kick that off by hand. We look at the data. We make sure that it
meets our standards and everything looks OK. And then, if everything looks good, we push
that data out. So, the data itself is automated. It’s not a group of people at Google saying,
“I like this site and I don’t like that site.” But the actual process of generating it is
done still by a person. Eventually, it might be automated and just
be incorporated into our indexing. But at least for right now, it’s the sort of thing
where the Panda update, a person, an engineer kicks that off and makes sure that the pipeline
is operating smoothly. Ah, OK. Somebody asks, “In what scenarios would a site link of a website
be shown without a description? Is it a sign of an update in the rankings of a website
or is it a bug?” Anybody else wanna take a stab at that? Why would either a site link
or a snippet be empty?>>Michael Wysz: I mean, I can think of one
case, would be if you were actually unable to crawl a page, then we might not have a
snippet. Especially if we can’t find it from some other source. Typically, the snippet
will come from the content on your page, whether it’s something that we automatically grabbed
or it was in your meta description. But robots.txt, for example, could be
one thing that blocks us from crawling the page. So, robots.txt says Googlebot or any robot or
whoever you specify–don’t crawl this page. But we still might have to show a certain
page in our search results if it’s the best result for that answer. Even if somebody has
maybe shot themselves in the foot by blocking us from crawling it. And I think you had mentioned
a few examples of the past where that had happened with–.>>Matt Cutts: Yeah. The California Department
of Motor Vehicles blocked all crawling by Google. And eBay, at one point, blocked every
search engine from crawling it. eBay was like, “Why would I want traffic from search engines?
That’s crazy.” And so, if someone typed in “eBay,” it looks a little silly if we can’t
even return a link to So, normally, if you don’t see a description,
if you don’t see a snippet at all for your result, it’s typically because we can’t crawl
your site. So, check your robots.txt, make sure you’re not blocking us. The next thing
is, at google.com/webmasters, we have a free tool called “Fetch as Googlebot.” And there are some people who try to do the
right thing and then end up cloaking for Googlebot and block Googlebot and not regular people.
So, they’re like reverse cloaking and shooting themselves in the foot. And so, if you can,
dig into that and make sure that you can actually fetch the page as Googlebot. That will let you know for sure that you’re
not blocking. It can also happen if your site was down or something like that.>>Michael Wysz: And one other thing. There
is a kind of more obscure meta tag, nosnippet, where you can actually request that no snippet be shown.
So, maybe you had a template or something that had something in the head of the file
that you didn’t realize had something like nosnippet on it.>>Matt Cutts: Yup.>>Michael Wysz: Yeah.>>Matt Cutts: Also, OK. Here’s an interesting
question. “Can we use alpha-numeric keywords in titles?” So, like “India 2 Mail.” And the
answer is yes. We’ll just treat it all as one word. “India2mail.” So, that actually
brings up a point that happened earlier this week that I hadn’t heard anybody talk about
that much, which is you can now search for some punctuation in Google. So, it used to be that we’d have a few keywords
like, C++, or F Sharp, or just a few programming languages–that sort of thing–where you could
search for punctuation. But we recently turned it on for almost everything that you can type
on the keyboard without maybe hitting shift and more than five or six weird characters. So, like the percent sign, the ‘at’ sign.
A bunch of these single characters you can now search for as actual search results. And
then over time, you can maybe imagine being able to search for punctuation plus some words.
Like, @ Matt Cutts, or something along those lines. But starting with the single characters
is the first place you go. So, you can absolutely use alpha-numeric keywords
in the title, along those lines. Oh, so many questions. This is great. Clearly, I need
to come to India more often and do more Hangouts with people. [Michael Wysz clears throat] So, yeah. This one’s kinda’ interesting.
“Does Panda look at original versus duplicate content?” So, Panda, it really tries
to figure out whether a site is high-quality. It doesn’t try to necessarily figure out the
aspects of duplicate content versus original content. In practice, if you’re a high-quality
site, you’ll tend to have original content. But it’s not so much just looking directly
for duplicate content in Panda. OK. So many questions. OK. Oh, people are also asking
on Twitter. OK. [laughs] “People are using plugins to tag search terms people are landing
through. Is that OK? Or is that keyword stuffing?” I have to admit, I am not a huge fan of tags. You can do a few, like categories or things
along those lines, but the point where people start to say, “These were the last 50 searches
that people did,” or “Try these hundred tags.” It absolutely can start to go overboard. And
so, we do have a keyword stuffing detector and we do have a URL-based spam detector that
looks at how spammy the words look to us. And so, if you have too many tags, if your
tag cloud is excessive and has things like repetition or every single variant of a word,
those kinds of things, then it can trigger our spam processing and then we don’t think
that the page is as good. So, you do wanna be a little bit more careful about that. Yeah.
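As an aside, a crude version of this kind of check might look like the following. This is an invented illustration, not Google’s actual spam detector: it just flags tag clouds that are oversized or dominated by trivial variants of one word, using a naive plural merge.

```python
# Toy tag-cloud check -- an invented illustration, not Google's spam
# processing. It flags clouds that are oversized or dominated by
# trivial variants of the same word.

def looks_excessive(tags, max_tags=25, max_variant_share=0.3):
    if len(tags) > max_tags:        # "try these hundred tags" territory
        return True
    stems = [t.lower().rstrip("s") for t in tags]  # crude plural merge
    if not stems:
        return False
    # Share of the cloud taken by the most repeated stem.
    top_share = max(stems.count(s) for s in set(stems)) / len(stems)
    return top_share > max_variant_share

print(looks_excessive(["python", "search", "seo", "video"]))                  # → False
print(looks_excessive(["shoe", "shoes", "buy shoes", "shoe", "sale shoes"]))  # → True
```

A handful of distinct categories passes; a cloud stuffed with repetitions and every variant of one word trips the threshold.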
Anytime you’re throwing a lot of words on the page and it’s automatically generated,
that can be a little bit tricky. Ah. Somebody asks about a specific site which
is normally–. Normally, we don’t like to get into specific sites. But somebody asks
about And they say that it’s dynamically built around thin content and
trending keywords, like hot trends. It’s safe to say I’ve been talking to members of
my team about that site. In fact, over lunch, just a couple hours ago,
we were talking about that. My recommendation is not to generate content automatically based
on things like hot trends, with small amounts of original content and lots of auto-generated
or scraped stuff. So, I won’t get into too much detail about
any specific site, but I would not recommend that people use that as their basis to try
to target SEO traffic. And at the point where I can look around and
I’m hearing complaints from lots of different places, even if it’s a high-profile site,
if a lot of what is happening (I’m gonna put it in a sub-domain) is taking just a lot of
hot trends or what people are typing and very quickly building very thin content or even
automatically generated content, then I wouldn’t recommend doing that. Let’s leave it at that. That sounds OK. Anybody
else want to wade in on that one or should we keep moving on to a different question? [inaudible]>>Matt Cutts: OK. We’ll keep moving on. Yeah.
So, the nice thing is–. I’ll just tell you a very quick story. We had a computer that
we were using for YouTube live-streaming. So, this was our very first. Our very first Google
Hangout. We had not done like, Google Hangout on Air. We were just getting all the computers
hooked up to do this today. And I’ve never done one before and it turns out it’s easy
and it’s fun. And I promise we’ll try to look for ways where
we can have more people participate and we can have more discussion. We wanted to start
out and show just the sheer number of people we had visiting Hyderabad, already in Hyderabad,
working on quality and spam. But the interesting thing is we built a computer for doing YouTube
live-streaming. And should we tell this story?>>Michael Wysz: Yes.>>Matt Cutts: OK.>>Michael Wysz: I think it’s–. I tell it
all the time.>>Matt Cutts: OK. So, and then, somebody sent
an email and they said, “We wanna do a Hangout between the Dalai Lama and Desmond Tutu.” Because
Desmond Tutu was turning 80 years old. And they said, “We really need some computers
and your computer, this machine that you guys have set up is like the only computer that
we know can do this YouTube live-streaming. Can we borrow this computer and ship it either
to like South Africa or wherever the Dalai Lama was?” And we’re like, “Of course.” This
is really good karma for the computer, right? This is a fantastic thing. So, we absolutely
agreed. And they did the Hangout. If you search for Dalai Lama and Desmond Tutu, you can find
the news stories and one of these computers was one half of that conversation. And then they started to ship it back. And
the computer got stuck in customs. So, customs has been hanging on to this good karma computer
like, literally for months. And so, we just got it back a few weeks ago and it’s still
in pieces, so we haven’t gotten all of the YouTube stuff all reassembled. And it is in
Mountain View. So, we couldn’t use it that way anyway. But
it’s really funny ’cause I’d done a Q and A and I’d said, “Oh, we’re gonna do more of
these.” And then we sent the computer off and it got used in the live Hangout and then
it got stuck at customs. So, but this is kind of fun. I think if we could do more of these
more often, everybody would be happier. So,–. All right. “Is Search, plus Your World
from Google making SEOs work really tough or easy?” I think an SEO’s job has always
been to make compelling, great, fantastic content–the sort of content that people want
to share. And that’s the sort of thing that works well with social media. If you’re a
really interesting person, people will want to follow you. And if people are following you, whether it’s
on Twitter or Facebook or Google+ or wherever, you’ll have a better chance of getting an
audience. So, Search, plus Your World just makes it easier for people to search over
stuff that they’ve already seen, not just on web search, but things that have been shared
with them. And so, if you’re the kind of person that
makes fantastic content that will get shared with a lot of people, then that can absolutely
benefit. Social search and all those kinds of things can help you quite a bit. So, the
idea remains the same. Try to make original, compelling content, the sort of things that
people would want to share. There’s this great guy, Matt Inman,
who was actually an SEO. He was an SEO once and he decided he wanted to be a web cartoonist,
right? And all the time, I see people sharing his cartoons and I’m like, “He used to be
an SEO.” And I go digging around. Are there any paid links at the bottom of the page or
anything like that? And the fact is he just makes these fantastic
cartoons that people really enjoy that get swapped around all over the place. And so,
if you can find an area, a topic, a niche, niche–however you wanna say it–where you
can really do well and really have a lot of people be interested in what you have to say,
then anytime you have social search of any kind, that’s gonna help amplify your audience
and lead to more people being interested in it. OK. So, let’s keep going. Oh, my goodness.
So many questions. [laughs] OK. Someone is literally asking a math question. Like, if
I have four links in a web page and that page has PR ten, and that page has ten links, how
much juice will I get? Will I get one point or four points? It would be good if you can
figure this out. Do multiple links from one page to another
count? Now, that’s a really good question. I forget exactly whether, if you have two links
from the same page, they both count. I better not speak out of turn. I’ll have to
go back and look that up. In general, if you have a certain amount of page rank,
then you have, say, four links: some of that page rank would typically dissipate, and then the four links would get the remaining page
rank divided between them equally. Now over time, we get smarter and smarter
and we try to figure out better ways of doing page rank besides the original page rank that
was published. And so, in my experience, we have gotten much better about that. I don’t
remember exactly, given one page with two links, how that gets divided. OK?>>Michael Wysz: I don’t have time to watch
the video right now, but I think Matt actually answered that question on one of our videos.>>Matt Cutts: Woohoo.>>Michael Wysz: So, go ahead and click on the YouTube channel. And if you click on “uploads,” you
can search. And I just typed in “multiple links” and it was the first thing that showed
up.>>Matt Cutts: Yeah.>>Michael Wysz: So, this is a plug for the
YouTube channel.>>Matt Cutts: Yeah. Absolutely. If you do
a search for “webmaster video channel,” that’s what I normally do a search for. You can subscribe
there. And I tweet about the videos when they come out. But if you subscribe, then you’ll
always be the first to know when there’s a new Webmaster video. And we usually try to
make sure that it’s not just duplicate–answer the same question twice. So, we try to answer
all sorts of different things. And there’s something like 400 videos there.
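For the PageRank arithmetic in the earlier question, the original published recurrence is PR(p) = (1 − d) + d · Σ PR(q)/C(q), where d is the damping factor (classically 0.85) and C(q) is the number of outlinks on q. The sketch below implements only that published formula, not Google’s current system, which, as noted above, has moved well past it; note also that duplicate links from the same page count once here (a membership test), which is exactly the detail left open in the answer.

```python
# The original published PageRank recurrence, for the arithmetic in
# the earlier question. Google's production system differs a lot.

def pagerank(links, d=0.85, iterations=100):
    """links: {page: [pages it links to]}. Iterates
    PR(p) = (1 - d) + d * sum(PR(q) / outdegree(q)) over inbound q.
    Duplicate links count once (membership test)."""
    pr = {p: 1.0 for p in links}
    for _ in range(iterations):
        pr = {p: (1 - d) + d * sum(pr[q] / len(links[q])
                                   for q in links if p in links[q])
              for p in links}
    return pr

# A page with four outlinks passes a quarter of its damped PageRank
# along each one: the count of links sets the share, not their order.
graph = {"hub": ["a", "b", "c", "d"], "a": ["hub"], "b": ["hub"],
         "c": ["hub"], "d": ["hub"]}
pr = pagerank(graph)
```

Under this formula, the four targets of the hub all receive identical shares, illustrating the equal division Matt describes before any of the later refinements.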
So, that’s a really good place to start out. OK. A great question. Somebody wants to know,
“How does Google define SEO?” OK? And I’ll avoid the cheap answer. It’s not just Search
Engine Optimization. The way that I think about it is Search Engine Optimization is
almost like you’re a coach, right? Like, you’re gonna go in and you’re gonna interview.
So, I’m gonna interview with Wysz and I’m gonna be like, “You should hire me for a job
because I’m awesome. I will do that job so good that you’ll love me and you will give
me lots of money. And I’ll be professional. I’ll be really professional. Yeah. I’ll be
fantastic, right?” If I were to do that in an interview, you’d
be like, “This guy is on crack. He’s crazy.” Like, there’s no way I’m gonna interview him.>>Michael Wysz: Don’t call us. We’ll call
you.>>Matt Cutts: Yeah, exactly. We’ll have your
resume on file. If an opening comes open with your specific skill set, we’ll let you know.
And the fact is, you might be the most qualified person for the job, but the way that you’re
coming across could be too much of a hard sell or could be too much of a soft sell,
or it just might not be the appropriate way of talking to somebody. So, a good SEO, in my opinion, is just like
a good coach. They see what you have to say and then they help guide you about better
ways to say it. And so, it’s not just thinking about search engines. It’s thinking about
conversions. It’s thinking about how do I make a site that resonates with people so
that they do what you would like them to do, whether it be buy a product or sign up for
a newsletter or whatever it is you’re interested in. So, a good SEO is like that. In my opinion,
a bad SEO is like someone who encourages you to lie on your resume, someone who says, “Yeah,
say that you can do these things even if you can’t because first, you just get the job.
And then maybe you can learn on the job,” or something like that. That’s like saying,
“Yeah. I’ve got cartoons.” And then, when you land on the site and you
actually have porn, right? So, SEO’s can be fantastic in terms of being these coaches
in terms of being these guides. In an ideal world, people wouldn’t need to know how to
interview for jobs. They wouldn’t need coaching. But the fact is, there will always be people
who need a little bit of help, who want to be engineers and who don’t want to think about
how to write a resume or things along those lines. So, I think there will always be a role for
people to play that kind of role and say, “OK. I’ll be your coach. I’ll help you understand
when a visitor types something in; here are the words that he’s interested in finding
out about.” So, that’s how I define it. Oh, man. So many good questions. This always happens,
by the way. When we crawl the web, there are always more
links to be explored and always more pages that we’d like to crawl. At some point, we
have to stop. OK. So–>>Michael Wysz: So about two minutes left…>>Michael Wysz: Aw.>>Matt Cutts: OK. Yeah. Hopefully we’ll have
a chance to follow up on some of these DMCA–. Like, someone’s asking a really complicated
one about DMCA. And I’m like, “I don’t even have enough time to read this in two minutes,
let alone answer it.” In general, we try to make sure that we return original content,
so if you’re getting multiple complaints about DMCA on your domain, it can often be a bad
sign. So, you might want to look into that. Will
I be available later? I’ll try to be available later. We’re gonna–. In theory, we’re recording
this. If all works out well, we might be able to post it later on YouTube or something along
those lines. Now that I’ve done a Hangout and I’ve seen that it’s not that scary and
it can work out pretty well and taking questions on Google+ is a pretty good way to do it,
hopefully we’ll do more. We don’t even need like, a special computer
to do it. It’s like a five thousand dollar Mac. We’re actually–Can’t quite see it. We’re
actually running this Hangout–you can see the tip of it–off of a Mac Mini. So, you
don’t need a super powerful computer to do a Google+ Hangout. And I’ll just close out
with, “hope you have a pleasant stay. Which would you suggest to a blogger? One post daily
of 300 words or two posts in a week of 800 words?” That’s a really good question because on one
hand, it’s better to take your time and write something really compelling that people will
link to, but on the other hand I have noticed with blogs, if you’re able to deliver something
every day consistently, then people are more likely to come and visit. So, whenever I–. I don’t blog quite as much these days ’cause
I feel like I’ve said most of what I need to say and I can say, “Oh, go back to this
one post.” But I have noticed that whenever I would do like three or four blog posts in
a day, boom. Traffic would start to go up ’cause people would start to remember to start
checking every day. So, it’s a really difficult challenge. Do
you post every day of a smaller amount? Or, do you post a couple times a week with a larger
amount? In an ideal world, you post every day. But even if you post every day, try to
make it insightful. I talked to a guy, Mike Masnick at Techdirt, and a lot of people follow
Techmeme, which is this great site that shows tech headlines, and they just rehash whatever
the content of the day is. Like, if I go to Techmeme right now, there’s
a story about Apple’s earnings. And Apple had a fantastic quarter. But do you really
need to be the forty-seventh guy that writes about Apple’s fantastic earnings? Probably
not, right? And even if you can make a little bit of money that way, you’ll probably establish
a better reputation–and that’s what this guy Mike Masnick at Techdirt said. He basically said, “I don’t chase the headlines.
I sit and I think and I wait until I have something useful or insightful or original
to say. And once I have an original angle, some way of coming at it differently than
the other 47 people, that’s when I blog.” And so that would probably be my advice, is
try to do it every day if you can, but if you can’t come up with something insightful
or interesting or useful, then hold off until you can. OK. So, I don’t wanna hold all of the people
on the Hangout hostage. So, I think we’d better go ahead and call it a day now so that the
developer relations guys or whoever’s going to be doing a Hangout next can go ahead and
have this room. But I’ve had a fantastic experience so far in India. We’ve had a group of people
that has done an amazing job of working on spam and different types of search quality
in Hyderabad for literally years. And I feel really bad that it’s taken me until
now to get to India. The nice thing is I can go back and all five of us who are in the
office now can go back and tell everybody else who works in Search Quality, who works
in Webspam, it’s a fantastic office. You really need to visit. The food is great. The people
are incredibly nice. So, I can go back and strong-arm some of my
engineers to make sure that they show up more often. So, thank you very much to everybody
who’s been extremely hospitable. And hopefully I’ll write a book, blog post, or some sort
of report about the kinds of things that I learned–at least do a Google+ post. But it’s been really fun hanging out and I
look forward to future Hangouts. Thanks very much everybody. Take care.>>Paul: Take care, Matt.


    True – nofollow links don't add to PageRank. But, they can add traffic and might influence search placement… right? After all, those that click on the links add to the analytics metrics as far as time spent, click-through vs. bounce rates and more. And don't those influence SERPs?

    Hey can any one make me aware how can i hangout with you great persons in any next hangout sessions… please leave a message. I work for which offers web presence solutions and now looking to provide SEO solutions.

    I realized this video is a few months old, but I was directed to something Matt said at 15:50 while answering a question about Panda updates, so I just wanted to confirm: If a domain is "caught" by a refresh of Panda/Penguin, is your domain stuck there until the next refresh at which point the domain is auto-reevaluated for quality and either released or kept trapped? If this is the kind of question that can't be answered directly, that would be good to know too 🙂 Thanks!

    wow great information i summarized in this hang out…. really this would not be ever possible on Facebook or Twitter, G+ rocks here too… 🙂

    I agree with Michael Haley. Simply saying that nofollow links don't influence search placement is definitely incorrect. Since they add to the site's time spent and other components which affect SEO, it's safe to say that they still have a strong bearing as far as affecting the site's SERPs.
