Why hidden text might still be an excellent SEO hack

  • July 21, 2017
  • SEO
[Image: Hidden text SEO]

I recently read this article on RebootOnline written by Shar Aharony. He and his team performed a very well-designed SEO study, but they left out some crucial details and, in my opinion, drew conclusions from inconclusive data. Furthermore, the findings were applauded by Rand Fishkin and ‘shared and spread’ on Moz’s Whiteboard Friday.

This means that thousands of people have seen this post. Although their study was very well designed… Although it sought to expand research on an interesting SEO topic… Although it did provide valuable insight…

I still believe they only told half the story. I need to add my 2 cents to this debate.

How was the experiment designed?

Shar and his team bought 20 new domains that had never been registered before. They found a keyword (“andefabriles”) that no page on the web had used prior to the experiment. Each domain got a slightly different website with slightly different text, but they controlled for keyword usage, word count and keyword placement.

Note: The small differences in the site and text bases might cause some irregularities in the results, but it is not possible to correct for everything. They chose to include small differences between the websites to reduce their footprint, and that is a well-argued choice.

Furthermore, they split the 20 domains into four groups:

  • Text fully visible on 5 websites
  • Text hidden by CSS on 5 websites
  • Text hidden by Javascript on 5 websites
  • Text hidden in a Textarea on 5 websites
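
To make the four setups concrete, here is a minimal sketch of what the page variants might have looked like. The markup is hypothetical (invented for illustration, not taken from Reboot's actual test pages):

```python
# Sketch of the four test-page variants. The markup is hypothetical --
# it illustrates the hiding techniques, not Reboot's actual pages.

def make_page(variant: str, text: str = "andefabriles example text") -> str:
    """Return minimal HTML embedding `text` via the given hiding method."""
    if variant == "visible":
        body = f"<p>{text}</p>"
    elif variant == "css":
        # Text present in the DOM but hidden with a CSS rule.
        body = f'<p style="display:none">{text}</p>'
    elif variant == "javascript":
        # Text injected client-side; a crawler that does not execute
        # JavaScript only sees the script source, not rendered copy.
        body = (f'<p id="t"></p><script>'
                f'document.getElementById("t").textContent = "{text}";'
                f'</script>')
    elif variant == "textarea":
        # Text inside a form control rather than regular page copy.
        body = f"<textarea readonly>{text}</textarea>"
    else:
        raise ValueError(f"unknown variant: {variant}")
    return f"<!doctype html><html><body>{body}</body></html>"

pages = {v: make_page(v) for v in ("visible", "css", "javascript", "textarea")}
```

In the experiment, each of these four setups was replicated across five of the 20 domains.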

Lastly, the idea was to follow the rankings of these 20 websites for the chosen keyword and track whether any of them seemed to have an advantage. A well-designed experiment and an awesome idea.

What did their research show?

This is where I disagree a bit with the authors of the experiment.

The article states: “The experiment clearly demonstrates Google’s preference for visible text. The experiment showed Google algorithms clearly gives less weight to text hidden via CSS and JavaScript.”

I definitely understand what they mean and I definitely understand why they say what they do. However, they state it as a fact… but it is not.

The data does point toward their conclusion, but the variance and the number of outliers in the data are too great to conclude anything.

Their dataset, according to their article, covered 139 days of rank-tracking data. For how many of those 139 days do you think one of the websites with fully visible text was the best-ranking of the 20 websites in the experiment? Only 48.

That is just ahead of the best CSS site, which was in the lead for 43 days, and the best Javascript site, which was in the lead for 40 days. Way behind is the best textarea site, which only led for 8 days. This alone doesn’t prove that their point is invalid, far from it. However, it does support my point that there is too much variance in their data to conclude anything with certainty.

[Image: Hidden text days in the lead]

I am simply suggesting that there is a fair chance that their data is random – a coincidence – and that if they ran the experiment again, they would potentially find a different answer.
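
As a rough sanity check of that claim (my own back-of-the-envelope calculation, not part of the original study), a chi-square goodness-of-fit test on the days-in-the-lead counts of the three closest groups shows just how little separates them:

```python
# Days each group's best site led the rankings, per the article's data.
lead_days = {"visible": 48, "css": 43, "javascript": 40}

# Chi-square goodness-of-fit test against the null hypothesis that all
# three groups lead equally often.
observed = list(lead_days.values())
expected = sum(observed) / len(observed)  # 131 / 3 ≈ 43.67 days each

chi2 = sum((o - expected) ** 2 / expected for o in observed)
critical = 5.991  # chi-square critical value for df=2, alpha=0.05

print(f"chi2 = {chi2:.3f} (critical value {critical})")
# chi2 ≈ 0.748, far below 5.991: the visible, CSS and Javascript lead
# counts are statistically indistinguishable from pure chance.
```

The textarea group's 8 days clearly lags, but between visible, CSS and Javascript the days-in-the-lead data alone cannot separate the winners.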

To support this statement further we need to talk about ranks and variance in the form of standard deviations. Below are the ranks of the 20 websites based on their average rank across the 139 days:

[Image: Statistics for js, css and textarea]

As you can probably tell, this view of the ranks supports the notion that the Javascript and CSS sites performed poorly. But if you buy that conclusion, then you also need to buy the conclusion that textarea is better and safer/more stable (less variance) than actual visible text. That brings a dilemma you can figure out for yourself. I choose not to believe that textarea is better than fully visible text – neither in Google nor for the UX of the site.

Furthermore, even with a sample of 20 sites, the standard deviation is too high to conclude any superiority with certainty. Take a look at the data below:

[Image: Combined statistics]

For someone with statistical experience it is obvious, even without any formal hypothesis testing, that the deviations are too high for those means not to overlap. This means that we cannot conclude with certainty that either the textarea or visible sites are better than either the Javascript or CSS sites. We basically cannot conclude anything from this data alone.
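
To illustrate what "the deviations are too high" means in practice, here is a Welch's t-test on two hypothetical groups. The mean ranks and standard deviations below are invented for illustration, not the article's actual figures:

```python
import math

# Hypothetical per-group summaries (mean rank, standard deviation).
# These numbers are assumptions; the real ones are in the article's
# "Combined statistics" table.
n = 5                  # sites per group in the experiment
visible = (8.0, 4.0)   # (mean rank, std dev) -- assumed
css = (12.0, 5.0)      # (mean rank, std dev) -- assumed

(m1, s1), (m2, s2) = visible, css

# Welch's t-statistic for two independent samples of size n.
t = (m1 - m2) / math.sqrt(s1**2 / n + s2**2 / n)

print(f"t = {t:.3f}")
# |t| ≈ 1.40, well below roughly 2.3 (the two-sided 5% critical value at
# the ~7-8 degrees of freedom Welch's formula gives here), so mean ranks
# this far apart, with spreads this wide and only 5 sites per group, do
# not differ significantly.
```

With standard deviations of this size and only five sites per group, even a four-position gap in average rank is within the noise, which is exactly the problem with declaring a winner from this data.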

That said, the authors are definitely within their rights to suggest that CSS seems like the worst option based on their findings, and that textarea and fully visible text seem like the safe options when designing or building your website.

What about the upcoming mobile first index?

Gary Illyes, who works for Google, has said that hidden text on mobile devices will be acceptable if it enhances the user experience. How they intend to interpret this statement, and how it will work in practice, is unknown. However, it sounds logical that the move to a mobile-first index might change the way Google values text in accordions, tabs and other variations.

Gary Illyes has said that hidden text on mobile will be given full weight.


I believe that both Shar Aharony and Rand Fishkin jumped to conclusions when analyzing the data from the experiment. Their argument/narrative is definitely plausible, but the whole point of these advanced experiments/studies is to approach SEO in a scientific and evidence-based way. I believe both Rand and Shar would agree with me on the premise and approach for experiments like these, even if we disagree on the findings.

Even when people like Shar or Rand’s experiment group do research, it often ends up inconclusive, because Google works in mysterious ways. We have good, even very good, reason to think that hiding text with CSS makes a site perform worse, on average, because Google decreases the value of the content. However, the fact remains: we do not know.

We rarely know anything with certainty when it comes to optimizing organically for Google, and when we do know something with certainty, we rarely know how big the effect really is. We just know that it does something good, and we have an idea of the effect on a continuum from terrible to super good (primarily thanks to regression models that give us correlation data).

P.S. How to test if your text is getting full value

Google says that hidden text gets less value. Shar’s experiment suggests the same.

Grab a sentence or two from your site (link to sample site) and search for it in Google as an exact phrase, i.e. wrapped in quotation marks.
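
For example, building the exact-match query looks like this. The snippet below is a made-up placeholder; substitute a real sentence from your own page:

```python
from urllib.parse import quote_plus

# Hypothetical snippet copied from the page you want to test.
snippet = "this sentence only appears in my hidden accordion text"

# Wrapping the snippet in quotation marks tells Google to match the
# exact phrase rather than the individual words.
url = "https://www.google.com/search?q=" + quote_plus(f'"{snippet}"')
print(url)
```

If the exact-phrase search returns your page for visible copy but not for hidden copy, that is the devaluation showing itself.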

This is how it looks when I take the visible text:

[Image: Obviously visible text]

This is how it looks when I take the hidden text from the exact same URL:

[Image: Hidden text not fully indexed]

You might think the difference is a coincidence, but it is not. Test it for yourself. This has been a reliable way to test hidden-text solutions for years. Notice the difference and you’ll soon be able to determine whether hidden text is being ‘devalued’ by Google.

P.P.S. You can code your site to hide text with Javascript without Google being able to tell.

I cannot know whether Shar and his team were aware of this, or decided it was not relevant for the experiment, but I have helped several clients go from hidden, devalued text to hidden, fully valued text by implementing it the right way.

Christian Højbo Møller


Christian is the CMO and co-founder of Candidlab, which recently expanded to 11 countries. He was previously a Lead and Senior Consultant at the world’s largest media agency, GroupM, where he worked with clients like HBO, Ford, Just-Eat and Toyota.

Connect on LinkedIn, follow on Twitter or send him a text.
