Client-side versus server-side rendering
Different rendering methods are suitable for different purposes. Elimelech advocated dynamic rendering as a means to satisfy both search engine bots and users, but first, you need to understand how client-side and server-side rendering work.
When a user clicks on a link, their browser sends a request to the server on which the site is hosted. With client-side rendering, the server responds not with a finished page but with the raw pieces and instructions the browser needs to assemble it.

“It’s a lot like building your own furniture, because basically the server tells the browser, ‘Hey, these are all the pieces, these are the instructions, build the page. I trust you.’ And that means all the hard work is moved to the browser instead of the server,” Elimelech said.
Dynamic rendering represents “the best of both worlds,” Elimelech said. Dynamic rendering means “switching between client-side rendered content and pre-rendered content for specific user agents,” according to Google.
Below is a simplified diagram explaining how dynamic rendering works for different user agents (users and bots).
“So there is a request to the URL, but this time we check: Do we know this user agent? Is this a known bot? Is it Google? Is it Bing? Is it Semrush? Is it something we know? If not, we assume it’s a user and then do client-side rendering,” Elimelech said.
On the other hand, if the client is a bot, server-side rendering is used to serve the fully rendered HTML. “So, you see everything that needs to be seen,” Elimelech said.
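To make that flow concrete, here is a minimal sketch of the branching logic as an Express (TypeScript) handler. It is illustrative only, not Elimelech’s implementation: the `KNOWN_BOTS` list and the `renderPageToHtml` helper are hypothetical stand-ins for a maintained user-agent list and a real pre-renderer.

```typescript
import express from "express";

// Hypothetical list of crawler user-agent substrings. A real deployment
// would maintain and update this list, as Elimelech notes below.
const KNOWN_BOTS = ["googlebot", "bingbot", "semrushbot", "duckduckbot"];

function isKnownBot(userAgent: string | undefined): boolean {
  if (!userAgent) return false;
  const ua = userAgent.toLowerCase();
  return KNOWN_BOTS.some((bot) => ua.includes(bot));
}

// Hypothetical stand-in for a real pre-renderer (e.g., a headless browser
// or a framework's server renderer).
async function renderPageToHtml(path: string): Promise<string> {
  return `<html><body><h1>Pre-rendered ${path}</h1></body></html>`;
}

const app = express();

app.get("*", async (req, res) => {
  if (isKnownBot(req.get("user-agent"))) {
    // Known bot: serve fully pre-rendered HTML (server-side rendering).
    res.send(await renderPageToHtml(req.path));
  } else {
    // Anything else is assumed to be a user: serve the client-side app
    // shell and let the browser build the page with JavaScript.
    res.sendFile("index.html", { root: "public" });
  }
});

app.listen(3000);
```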
But dynamic rendering is not perfect
However, there are complications associated with dynamic rendering. “We have two streams to maintain, two sets of logic, caching, other complex systems; so it’s more complex when you have two systems instead of one,” Elimelech said, noting that site owners should also maintain a list of user agents to identify bots.
Some may worry that serving search engine bots something different than what you’re showing users could be considered cloaking.
“Dynamic rendering is actually a preferred and recommended solution by Google, because what Google cares about is whether the important things are the same [between the two versions],” Elimelech said, adding that the “important things” are the things that matter to us as SEOs: the content, the headings, the meta tags, the internal links, the navigation links, the robots [directives], the title, the canonical, the structured data markup, the images – anything to do with how a bot would react to the page. “It is important to keep them identical, and when they are kept identical, especially the content and meta tags, Google has no problem with that.”
Since it is necessary to maintain parity between what is served to bots and what is served to users, it is also necessary to audit for issues that could break that parity.
To audit possible problems, Elimelech recommends Screaming Frog or a similar tool that allows you to compare two crawls. “So what we like to do is crawl a website as Googlebot (or another search engine user agent) and crawl it as a user and make sure there’s no difference,” he said. Comparing the appropriate items between the two crawls can help you identify potential problems.
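For a lightweight spot check between full crawls, something like the following sketch fetches a single URL as Googlebot and as a regular browser and diffs a few SEO-critical fields from the raw HTML each receives. This is an illustrative script, not Elimelech’s method or a substitute for a rendering crawler like Screaming Frog: a plain HTTP fetch does not execute JavaScript, so on a client-rendered page the user-agent fetch is expected to come back incomplete, and the real parity check is against the JavaScript-rendered page. The regex parsing is deliberately simplistic, and `https://example.com/` is a placeholder.

```typescript
// Illustrative only: fetch one URL as Googlebot and as a browser, then
// diff a few SEO-critical fields from the raw HTML each receives.
// Requires Node 18+ (built-in fetch).

const GOOGLEBOT_UA =
  "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)";
const BROWSER_UA =
  "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36";

function extract(html: string, pattern: RegExp): string {
  const match = html.match(pattern);
  return match ? match[1].trim() : "(missing)";
}

async function snapshot(url: string, userAgent: string) {
  const res = await fetch(url, { headers: { "User-Agent": userAgent } });
  const html = await res.text();
  return {
    title: extract(html, /<title[^>]*>([\s\S]*?)<\/title>/i),
    description: extract(
      html,
      /<meta\s+name=["']description["']\s+content=["']([^"']*)["']/i,
    ),
    canonical: extract(
      html,
      /<link\s+rel=["']canonical["']\s+href=["']([^"']*)["']/i,
    ),
  };
}

async function compareParity(url: string): Promise<void> {
  const [bot, user] = await Promise.all([
    snapshot(url, GOOGLEBOT_UA),
    snapshot(url, BROWSER_UA),
  ]);
  for (const key of ["title", "description", "canonical"] as const) {
    if (bot[key] !== user[key]) {
      console.log(`MISMATCH ${key}: bot="${bot[key]}" vs user="${user[key]}"`);
    }
  }
}

compareParity("https://example.com/").catch(console.error);
```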
Elimelech also covered other methods for detecting these problems in his presentation.
See the full SMX Next presentation here (free registration required).