So, let’s get started!
JS is a powerful, dynamic, and flexible programming language that runs in virtually every modern web browser. It’s the cornerstone of modern web development, bringing interactivity to web pages in the browser.
- It can be used on both the client and server side. As a result, developers find it easy to use JS alongside other server-side languages like PHP.
- It is a cross-platform programming language. For instance, you can build applications for desktop and mobile platforms using various frameworks and libraries.
Most brands have adopted JS for creating complex dynamic pages in place of static content. This greatly improves the site’s UX.
However, SEOs should know that, if they do not take the necessary measures, JS can affect their site’s performance in terms of its –
- Indexability: If a crawler reads your page but is unable to process the content, your content won’t rank for the relevant keywords.
Here’s what happens when bots reach a regular HTML (non-JS) page.
- The bot downloads the raw HTML file for your page.
- It passes the HTML to Caffeine (the indexer) to extract all the links and metadata.
- The bot continues to crawl all the discovered links.
- Caffeine indexes the extracted content that is further used for ranking.
Fortunately, Caffeine can now render JS files like a browser would (through Google’s Web Rendering Service). So, here’s how Google with its WRS reaches your JS-powered pages.
- The bot downloads the raw HTML file for your page.
- Pages that rely on JS are queued for rendering, and the WRS executes the JS to render the page much like a browser would.
- The extracted links, metadata, and content are passed back to the bots for further crawling.
- The extracted content is indexed during this second wave of indexing and used for ranking.
Let’s look at each of these in detail.
To begin with, the Google bot fetches a URL for a page from the crawling queue and checks if it allows crawling. Assuming that the page isn’t blocked in the robots.txt file, the bot will follow the URL and parse a response for other URLs in the href attribute of HTML links.
If the URL is marked disallowed, then the bot skips making an HTTP request to this URL and ignores it altogether.
Rendering or Processing
Once the page is rendered, the bot adds the new URLs it discovers to the crawl queue and passes the new content (added through JS) along for indexing.
There are two types of rendering – server-side and client-side rendering.
- Server-Side Rendering (SSR)
In this type of rendering, the pages are populated on the server. So, every time the site is accessed, the page is rendered on the server and sent to the browser.
Simply put, when a visitor or a bot accesses the site, they receive the content as HTML markup. So, Google doesn’t have to render the JS separately to access the content, thus improving the SEO.
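To make this concrete, here is a minimal SSR sketch, assuming a plain Node.js server (the route and product copy are made up): the response already contains the finished HTML, so neither users nor crawlers need to execute any JS to see the content.

```javascript
const http = require('http');

http.createServer((req, res) => {
  // The content is assembled on the server and shipped as ready-made HTML,
  // so crawlers and visitors see it in the initial response.
  const html = `
    <html>
      <head><title>Blue Running Shoes | Example Store</title></head>
      <body>
        <h1>Blue Running Shoes</h1>
        <p>In stock and ships free.</p>
      </body>
    </html>`;
  res.writeHead(200, { 'Content-Type': 'text/html' });
  res.end(html);
}).listen(3000);
```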
- Client-Side Rendering (CSR)
Client-side rendering is a fairly recent type of rendering that allows SEOs and developers to build sites entirely rendered in the browser with JS. So, CSR allows each route to be created dynamically in the browser.
CSR is initially slow, as it makes multiple round trips to the server, but once the requests are complete, navigation through the JS framework is quick.
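By contrast, here is a bare-bones CSR sketch (the API endpoint and fields are hypothetical): the initial HTML is almost empty, and the content only exists after the browser fetches the data and builds the DOM with JS, which is why crawlers must render the page to see it.

```html
<html>
  <head><title>Blue Running Shoes | Example Store</title></head>
  <body>
    <div id="app"></div> <!-- empty shell in the initial HTML -->
    <script>
      // The content appears only after this request completes and the DOM is updated.
      fetch('/api/products/blue-running-shoes')
        .then((res) => res.json())
        .then((product) => {
          document.getElementById('app').innerHTML =
            `<h1>${product.name}</h1><p>${product.description}</p>`;
        });
    </script>
  </body>
</html>
```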
At this stage, the content (from HTML or new content from JS) is added to Google’s index. So, when a user keys in a relevant query on the search engine, the page will appear.
JS Errors That Impede SEO
- Abandoning HTML Completely
In the past, search engine bots couldn’t crawl JS files. Hence, webmasters often stored them in separate directories and blocked those directories in robots.txt. However, this isn’t needed now, as Google’s bots can crawl JS and CSS files.
To check whether your JS files are accessible to the bots, log in to Google Search Console and inspect the URL. If a file is blocked, unblock it in robots.txt to fix the issue.
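As a rough illustration (the directory name is hypothetical), removing a legacy rule such as `Disallow: /js/` and keeping script and style files crawlable is usually all that’s needed:

```
# Remove old rules like "Disallow: /js/" and keep scripts and styles crawlable
User-agent: Googlebot
Allow: /*.js
Allow: /*.css
```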
- Not Using Links Properly
Links help Google’s spiders understand your content and how the various pages on your site connect. In JS SEO, links also play a critical role in engaging users, so placing them improperly can negatively impact your site’s UX. Therefore, it’s advisable to set up your links properly.
Use relevant anchor text and HTML anchor tags, and include the URL of the destination page in the href attribute. Steer clear of non-standard HTML elements and JS event handlers for linking out: they make it tough for users to follow the links and hurt the UX, especially for those using assistive technologies, and Googlebot will not follow those links.
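As a quick illustration (the URLs are made up), the first link below is crawlable, while the second relies on a JS event handler and will not be followed by Googlebot:

```html
<!-- Crawlable: a standard anchor with the destination in the href attribute -->
<a href="/pricing">See our pricing</a>

<!-- Not crawlable: Googlebot won't follow this pseudo-link, and it's
     harder to use with a keyboard or assistive technologies -->
<span onclick="window.location.href = '/pricing'">See our pricing</span>
```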
- Placing JS Files above the Fold
Gauge the importance of your JS content. If it’s worth making users wait for, place it above the fold. Otherwise, it’s wise to place it lower on the page, below the above-the-fold space.
- Improper Implementation of Lazy Loading/Infinite Scrolling
Implementing lazy loading and infinite scrolling improperly can get in the way of bots when they crawl content on a page. These two techniques are great for displaying listings, but only when done correctly.
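One commonly recommended pattern, sketched roughly below (the element IDs, endpoint, and JSON shape are all hypothetical), is to pair infinite scroll with real paginated URLs so that every chunk of content also stays reachable through an ordinary link:

```html
<div id="product-list"></div>
<!-- Fallback pagination link that crawlers can follow -->
<a id="next-page" href="/products?page=2">Next page</a>

<script>
  // When the visitor scrolls near the "next page" link, append the next
  // page's items and update the address bar to the crawlable paginated URL.
  const nextLink = document.getElementById('next-page');
  const list = document.getElementById('product-list');

  new IntersectionObserver(async (entries, observer) => {
    if (!entries[0].isIntersecting) return;
    observer.disconnect();

    const nextUrl = nextLink.href;
    const res = await fetch(nextUrl, { headers: { Accept: 'application/json' } });
    const items = await res.json(); // assumes the endpoint can also answer with JSON
    items.forEach((item) => {
      const card = document.createElement('div');
      card.textContent = item.name;
      list.appendChild(card);
    });
    history.pushState({}, '', nextUrl); // keep the URL in sync with the loaded page
  }).observe(nextLink);
</script>
```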
- Using JS Redirects
Using JS redirects is a common practice among SEOs and developers, as the bots treat them as standard redirects and can process them. However, these redirects hurt site speed and UX. Since JS is processed in the second (rendering) phase, JS redirects may take longer (days or even weeks) to get crawled or indexed.
Hence, it’s best to avoid JS redirects.
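For reference (the URL is a placeholder), this is the client-side pattern the advice refers to; because it only runs once Google renders the page, a server-side 301 (covered in the status-code section below) is the safer choice:

```html
<script>
  // JS redirect: only discovered after the page is rendered,
  // so it can take far longer to be processed than a server-side 301.
  window.location.replace('https://www.example.com/new-url');
</script>
```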
- Continue Your On-Page SEO Efforts
All the on-page SEO guidelines related to the content, title tags, meta descriptions, alt attributes, and meta robot tags among others still apply.
For instance, unique and descriptive titles and meta descriptions help users and crawlers process the content with ease.
Similarly, short, descriptive URLs are one of the best on-page SEO tactics: they tend to earn the most clicks because they match the search query.
- Fix Duplicate Content
Duplicate content doesn’t add any value to a site’s SEO; in fact, it confuses the crawlers and increases the risk of important pages being ignored by Google. Make sure you choose a version you want the crawlers to index and set canonical tags to fix duplicate content.
- Use Meaningful HTTP Status Codes
HTTP status codes like 404 or 401 are used by the bots to determine if something went wrong during the crawling process. Use meaningful status codes like the ones shared below to inform the bots whether a page needs to be crawled or indexed.
For instance, you can use a 301 or 302 status code to tell Google’s bots that the page has moved to a new URL. This will help the bots update the index accordingly.
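As a minimal sketch (assuming a plain Node.js server and made-up paths), a permanent redirect can be issued like this:

```javascript
const http = require('http');

http.createServer((req, res) => {
  if (req.url === '/old-page') {
    // 301 tells the bots the page has moved permanently,
    // so the index can be updated to the new URL.
    res.writeHead(301, { Location: '/new-page' });
    res.end();
    return;
  }
  res.writeHead(200, { 'Content-Type': 'text/html' });
  res.end('<h1>New page</h1>');
}).listen(3000);
```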
- Employ Clean URLs and Markup
The same applies to markup. Make sure the key markup (page titles, meta descriptions, canonical tags, alt attributes, and image source attributes, among others) is structured clearly and included directly in the HTML.
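A minimal illustration (titles, descriptions, and URLs are placeholders) of that markup living directly in the HTML rather than being injected later with JS:

```html
<head>
  <title>Blue Running Shoes | Example Store</title>
  <meta name="description" content="Lightweight blue running shoes with free shipping.">
  <link rel="canonical" href="https://www.example.com/shoes/blue-running-shoes">
</head>
<body>
  <!-- src and alt present in the HTML so crawlers pick them up without rendering JS -->
  <img src="/images/blue-running-shoes.jpg" alt="Blue running shoes, side view">
</body>
```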
- Improve the Page Loading Speed
Images often eat up the bandwidth and negatively impact the site performance. Use lazy loading to load images, iframes, and non-critical content only when the user is about to see them.
However, if not implemented correctly, lazy loading can hide critical content from the bots. Use this lazy-loading guide by Google to ensure that the bots crawl all your content whenever it’s visible in the viewport.
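A rough sketch (image paths and attribute names are hypothetical): native lazy loading keeps the real src in the HTML, and a custom data-src approach should swap the real URL in as soon as the image nears the viewport so nothing stays hidden from crawlers:

```html
<!-- Native lazy loading: the src stays in the markup, so crawlers still see it -->
<img src="/images/product-photo.jpg" alt="Product photo" loading="lazy" width="600" height="400">

<script>
  // For a custom data-src pattern, load each image when it approaches the viewport.
  const lazyImages = document.querySelectorAll('img[data-src]');
  const io = new IntersectionObserver((entries, observer) => {
    entries.forEach((entry) => {
      if (!entry.isIntersecting) return;
      entry.target.src = entry.target.dataset.src;
      observer.unobserve(entry.target);
    });
  });
  lazyImages.forEach((img) => io.observe(img));
</script>
```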
- Conduct a JS Website Audit
This means conducting a routine manual inspection of individual elements using Chrome DevTools and the Web Developer extension for Chrome. Here’s what you can include as a part of this audit.
- Visual Inspection: This will help you get an idea of how a user views your site. Check elements like the visible content on the site, hidden content, content from third parties, and product recommendations. The goal is to make these elements crawlable.
Common JS and SEO Myths You Should Ignore
Myth 1: Search Engines Cannot Crawl JS
While it’s true that Google’s bots found it challenging to crawl JS pages in the past, improvements to its crawling and rendering have made it quite effective at processing JS.
Today, Google is the best search engine for reading JS content. Other search engines like Bing, Baidu, and Yandex still have a long way to go in terms of crawling JS pages. So, if your site has more Bing traffic, it’s wise to focus on a non-JS-based web development plan for now.
Myth 2: Mobile Sites Should Consider Cutting Off JS
Web developers trying to boost mobile speed and create a mobile-friendly site may consider banishing bulky JS code. However, it’s possible to include this code without hurting your mobile SEO.
For instance, Google offers three types of configurations that can be used to serve JS code to mobile users.
- Combined detection: This configuration uses JS but also has server-side detection of devices. So, it can serve content differently on each option.
Using such tools to add JS to mobile devices can make your site mobile-friendly.
Myth 3: You Should Block Crawlers From Reading JS
If the bots focus primarily on HTML, they should have an idea of the content and the site layout. But that’s not the case. The bots need to access JS files to –
- Render the page completely and make sure it’s mobile-friendly
- Ensure that the content isn’t buried under disruptive ads
- Keep an eye on black-hat practices like keyword and link stuffing
Adding readable JS is easier than you think. You don’t have to be an expert in JS and SEO to build a site that can attract traffic and engage an audience. If you are struggling with platform limitations that impact your SEO, get in touch with a professional SEO consulting firm that can review your JS delivery opportunities and improve your SEO performance.
- What Happens If I Don’t Do JS SEO?
If Google cannot render or process your JS content, it may not get indexed, and pages that rely on JS for their content or links can lose visibility in the search results.
- Can I Reduce the Amount of Bandwidth JS Uses?
Yes! Techniques like code minifying and compressing, caching, and tree shaking can be used to reduce the necessary bandwidth usage and boost web performance. For instance, tree shaking is a form of dead code elimination that can significantly reduce JS payloads and improve the site’s performance.
- In Light of JS SEO, Is PWA SEO Required?
Using JS as a PWA or Progressive Web Application framework ensures the best possible UX. However, there have been concerns about creating a crawler-friendly JS application.
PWA is a form of application software that is delivered through the web and built on web technologies like HTML and JS. Hence, ideally, all the recommendations applied to JS sites should work with PWAs too. However, the Google Web Rendering service has Service Workers (one of the pillars of the PWA architecture) disabled.
Like JS SEO, PWAs also need to be optimized as they need to be SEO compatible and indexed properly. PWA SEO is possible; however, Google should be able to render JS pages for it to be able to view the content present in the PWA.
Talk to an expert SEO agency that can help get your PWA indexed and make it discoverable.
- How Does Mobile-First Indexing Affect JS Pages?
With mobile-first indexing, Google predominantly uses the mobile version of your content for indexing and ranking. So, it’s critical to audit both the desktop and mobile versions of your site, allowing Google’s bots to easily index your pages.
- If Google Cannot Deal with the Client-Side Rendered Website, Can I Serve It a Pre-Rendered Version of My Website?
Yes. This approach is known as dynamic rendering: you detect requests from crawlers and serve them a pre-rendered, static HTML version of the page, while regular visitors get the client-side rendered experience. Check out this video for more information on dynamic rendering.
- Google Constantly Claims to Fix the Issue Related to the Crawlers Processing JS Content. So, Will JS SEO Not Be Relevant in the Future?
Before We Conclude – Tools You’ll Need
- URL Inspection Tool
Google’s URL Inspection Tool helps SEOs determine whether or not Google is rendering their web pages. You can run live tests to see real-time JS warnings and errors that are getting in the way of your pages being indexed or discovered.
- Mobile-Friendly Test
You can use this test to determine whether your pages render properly on smartphones, and you do not need a Google Search Console account for it. The test also points out the errors or blocked resources stopping crawlers from accessing your content.
- Data Comparison Tools
Tools like Diffchecker, Guesty, and Microsoft Flow can perform a quick analysis of a web page’s source code vis-a-vis its rendered code. This comparison offers insights into how the content changes once it’s rendered.
Further, Chrome extensions like View Rendered Source help webmasters compare the raw HTML source of a page to the browser-rendered DOM and highlight the differences between the two.
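If you’d rather do a quick spot check without an extension, a rough console sketch like this (run on the page you’re auditing) contrasts the raw HTML with what the browser actually rendered:

```javascript
// Compare the raw HTML (what Googlebot sees before rendering)
// with the rendered DOM (what it sees after executing JS).
(async () => {
  const response = await fetch(window.location.href);
  const rawHtml = await response.text();
  const renderedHtml = document.documentElement.outerHTML;

  console.log('Raw HTML length:', rawHtml.length);
  console.log('Rendered DOM length:', renderedHtml.length);
  // A large gap suggests much of the content only exists after JS runs.
})();
```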
- SEO Crawlers and Log Analyzers
SEO spider tools like DeepCrawl, Screaming Frog, and JetOctopus can be used to get granular insights on each page and test/monitor rendering at scale.
- PageSpeed Insights
PageSpeed Insights measures how quickly your pages load on mobile and desktop and suggests specific improvements, making it useful for spotting heavy JS that slows rendering.
- Chrome DevTools
Built into Chrome, DevTools lets you inspect the rendered DOM, disable JS to see which content disappears, and profile script performance while you audit your pages.