Getting Started with SEO4Ajax

Welcome to SEO4Ajax! We're glad you're here. Spend a few minutes going through this guide to learn some SEO4Ajax basics and start indexing your Ajax website.

Introduction

This document is designed to be an extremely gentle introduction. At the end of this document, we'll refer you to resources that can help you pursue these topics further.

Before we dive in, here are a few terms that will be used throughout this document:

Site crawling
The process of capturing a whole site by automatically navigating and capturing all the internal links found on the site
SEO4Ajax crawler
The server responsible for crawling sites
Capture
The static HTML snapshot of a page from an Ajax site, which is served to indexing robots

Check our FAQ for answers to the most commonly asked questions.

1 - Log in to the console

Go to console.seo4ajax.com, type your email and your password in the appropriate fields, and click on the "Log in" button.

[Screenshot: SEO4Ajax login page]

2 - Register your site in SEO4Ajax

On the home page, create the SEO4Ajax configuration for your site by clicking on the "Add a new site" button.

[Screenshot: Home page of the SEO4Ajax console]

A popup is displayed and invites you to type a site name and its URL in the appropriate fields. For example, if you have an Ajax site hosted at https://www.example.com/, you can type "My site" for the name and https://www.example.com/ for the URL.

[Screenshot: New site configuration popup]

Click on the "Add new site" button to confirm the action. The settings page is displayed.

Get more information about the site configuration popup.

3 - Start the SEO4Ajax crawler

[Screenshot: Settings page]

Most of the time, the default configuration will work without any modification in the site settings. However, before launching the first crawl on your site, please check that it fulfills these requirements:

  • Your site must be publicly available on the Internet in order to be reachable by the SEO4Ajax crawler (you can verify this with the quick check after this list).
  • Your site must be compatible with Chrome.
  • Each unique page rendered with JavaScript must have its own URL.
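
As a quick sanity check, you can verify the first requirement from the command line (www.example.com below is a placeholder for your own domain):

# Placeholder domain: replace www.example.com with your site's URL.
# A 2xx or 3xx status line means the site is publicly reachable; a
# timeout or connection error suggests the SEO4Ajax crawler will not
# be able to reach it either.
curl -I https://www.example.com/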

Click on the "Crawl site" button to crawl the site for the first time. This action will order the SEO4Ajax crawler to capture all the pages on the site. The status view "pendings" is then displayed.

[Screenshot: Pendings view]

This view shows the status of the site capture.

Get more information about status views.

4 - Embed the snippet in the HTTP server configuration file

This configuration snippet is used to detect when a bot requests a page and to retrieve the corresponding capture from the SEO4Ajax database.

If you use Apache, Nginx or IIS, SEO4Ajax can help you by providing a snippet example to include in your server configuration file. Go to the "Settings" view and expand the "Server configuration" item to display it.

[Screenshot: Settings view displaying the Apache configuration snippet]
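
For illustration, here is the general shape such a snippet can take for Nginx. This is only a sketch, not the snippet the console generates for you: SITE_TOKEN, the abbreviated bot list, and the endpoint shown are illustrative placeholders.

# Sketch of an Nginx integration; the console's "Server configuration"
# item provides the authoritative snippet with your real site token.
server {
    listen 80;
    server_name www.example.com;   # placeholder domain

    resolver 8.8.8.8;              # needed because proxy_pass below uses variables

    location / {
        # Hand requests that look like crawler requests over to @seo4ajax.
        error_page 418 = @seo4ajax;
        if ($args ~ "_escaped_fragment_=") { return 418; }
        if ($http_user_agent ~* "googlebot|bingbot|crawler|spider") { return 418; }
        try_files $uri $uri/ /index.html;
    }

    location @seo4ajax {
        # Serve the pre-rendered capture fetched from SEO4Ajax.
        proxy_pass http://api.seo4ajax.com/SITE_TOKEN$uri$is_args$args;
    }
}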

Alternatively, you can integrate SEO4Ajax directly in your web application.

You can also integrate SEO4Ajax in Varnish Cache by using this configuration file.

5 - Test it

Once some pages are captured, you can easily test if the server configuration works as expected.

Click on the "captures" link in the header of the status view to display the paths that have been captured by SEO4Ajax. Then, click on the "+" icon on the right of a path in the table to expand the details panel.

[Screenshot: Captures view]

If your server configuration supports the "_escaped_fragment_" query parameter, copy the "escaped URL" and test the integration with cURL as shown below (replace https://example.com/?_escaped_fragment_= with your escaped URL; the URL is quoted so the shell does not interpret the "?" and "=" characters).

curl -I "https://example.com/?_escaped_fragment_="

If your server configuration does not support the "_escaped_fragment_" query parameter, test the integration with cURL as shown below.

curl -H "User-Agent: Bot" -I https://example.com/

If the configuration is working properly, you should see the HTTP header X-Powered-By: SEO4Ajax in the response, as shown below.

HTTP/1.1 200 OK
Content-Type: text/html
Date: Thu, 17 Sep 2015 09:39:02 GMT
Etag: "f24cd417d7db0e862534328c0a73c642"
Last-Modified: Thu, 31 Mar 2016 13:47:30 GMT
Server: nginx/1.2.9
Vary: Accept-Encoding, User-Agent, X-S4a-Debug
X-Powered-By: SEO4Ajax

You can find more information about the escaped URL format and the Google Ajax Crawling specification here.
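
To make the format concrete: under that specification, a "pretty" URL containing a hash-bang fragment, such as https://example.com/#!/about, is requested by compliant crawlers in its escaped form, with the fragment moved into the "_escaped_fragment_" query parameter (the /about path here is a hypothetical example):

# Compliant crawlers rewrite https://example.com/#!/about into the
# escaped form below before requesting it:
curl -I "https://example.com/?_escaped_fragment_=/about"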

6 - Update the index file

This step is needed only if you implement the Ajax Crawling Scheme specification. If you implement the Dynamic Rendering recommendation, go directly to the next step.

In the <head> tag of the index file, add the tag <meta name="fragment" content="!"> in order to explicitly indicate that your site supports the Google Ajax Crawling specification.
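
Once deployed, you can check from the command line that the tag is actually served (www.example.com is a placeholder for your own domain):

# Placeholder domain. If the tag is served, this prints the line
# containing <meta name="fragment" content="!">.
curl -s https://www.example.com/ | grep -i 'name="fragment"'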

7 - Test

If your server implements the Dynamic Rendering recommendation, you can use any SEO tool, such as the preview in Google Search Console, to test the integration.

Otherwise, if your server implements the Google Ajax Crawling Scheme, the SEO4Ajax Companion allows you to run integration tests directly in your browser. It will also help you preview exactly what compliant bots see.
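
As a command-line alternative, you can approximate what a bot sees by requesting a page with a bot-like User-Agent; a sketch (www.example.com is a placeholder, and "Googlebot" stands in for any User-Agent your configuration recognizes as a crawler):

# The response body should be the pre-rendered capture, not the empty
# shell of the Ajax application.
curl -s -H "User-Agent: Googlebot" https://www.example.com/ | head -n 20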

Going further

You can find the full list of guides on the SEO4Ajax documentation site.
