To use Screaming Frog for SEO, start by initiating a site crawl by entering your website URL and adjusting settings to include or exclude resources like images, CSS, and JavaScript. Monitor the crawl progress and schedule regular crawls to track changes and issues. Review the SEO issues overview to identify and prioritise technical problems such as server errors and redirect loops.
Use the tool to optimise page titles and meta descriptions, ensuring they are within the recommended character limits and include key phrases. Analyse internal linking to find opportunities and create a well-structured link architecture. Generate XML sitemaps and submit them to Google via Search Console. Assess Core Web Vitals like LCP, FID, and CLS to enhance performance metrics. Finally, optimise on-page SEO elements like meta tags and heading structures to improve visibility and user experience. By following these steps, you’ll be well on your way to boosting your website’s SEO; continue to explore these features for even more detailed insights.
Key Takeaways
- Initiate Site Crawl: Enter the website URL, select subdomain options, and adjust settings to include or exclude resources like images, CSS, and JavaScript for efficient crawling.
- Monitor Crawl Progress: Schedule regular crawls (daily, weekly, monthly) and set alerts for significant changes or issues; use data visualisation tools to compare crawl data over time.
- Identify and Prioritise SEO Issues: Categorise issues as errors, warnings, or optimisation opportunities; prioritise technical SEO problems like server errors and redirect loops for immediate attention.
- Optimise Titles and Meta Descriptions: Use Screaming Frog to find title and description issues; ensure titles are 50-60 characters and meta descriptions are unique, relevant, and under 160 characters.
- Analyse Internal Linking and Sitemaps: Use Custom Search to find linking opportunities, audit existing links for contextual relevance, and generate XML sitemaps to submit to Google via Search Console.
Initiating the Site Crawl
When you’re ready to initiate a site crawl using Screaming Frog, the first step is to configure the crawl scope carefully. Enter the website URL in the ‘Enter URL to spider’ box and choose whether to crawl the specified subdomain or all subdomains. By default, only the entered subdomain is crawled, and other subdomains are treated as external links. If you need to crawl all subdomains, select ‘Crawl All Subdomains’ in the Spider Configuration menu.
Next, set up your crawl configuration. Adjust the settings to include or exclude resources such as images, CSS, JavaScript, and SWF files. Unchecking these can reduce the size of the crawl, which is particularly useful for larger sites. Consider limiting the analysed area to a subsection of URLs to manage file sizes and data exports. You can also choose between RAM storage mode and database storage mode, depending on whether you have an SSD; database storage mode allows for more URLs to be crawled and data to be stored.
Once your crawl settings are configured, click the ‘Start’ button to initiate the crawl. The SEO Spider will start crawling the site in real-time, analysing the HTML code of the starting page and following internal links to identify other pages. Be aware of the 500 URL limit in the free version, and verify your spider configuration is set up to handle large sites efficiently. Customising your crawl settings further, such as deselecting non-essential resources or using the custom extraction feature, can help you scrape additional information and optimise your crawl process.
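If you’re on the paid version and want this step to run unattended, Screaming Frog also ships a command-line mode. The sketch below drives it from Python; the flag names follow the Screaming Frog user guide, while the binary name, site URL, and output folder are assumptions to adjust for your own setup.

```python
import subprocess

# Launch a headless crawl and export the Internal tab as CSV.
# Flag names per the Screaming Frog CLI documentation; paths are placeholders.
subprocess.run(
    [
        "screamingfrogseospider",        # binary name on Linux; differs per OS
        "--crawl", "https://www.example.com",
        "--headless",                    # run without opening the GUI
        "--save-crawl",                  # keep the crawl file for later comparison
        "--output-folder", "/tmp/sf-crawls",
        "--export-tabs", "Internal:All", # write the Internal tab to the folder
    ],
    check=True,
)
```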
Monitoring Crawl Progress
After initiating the site crawl, the next step is to monitor the crawl progress effectively. This involves setting up and managing scheduled crawls to ensure you stay on top of your website’s changes and issues.
Scheduled Crawls and Frequency
You can schedule regular crawls using Screaming Frog to monitor your website’s health and changes. This feature allows you to set the crawl frequency to daily, weekly, or monthly, depending on your needs. Automating the crawl process saves time and ensures consistent monitoring. You can configure the crawl settings through the ‘File > Scheduling > Add’ option, where you define the task name, date, and frequency. This lets you promptly identify technical issues, such as broken links and slow-loading pages, and areas needing improvement.
Alerts and Notifications
Scheduled crawls enable you to receive alerts for significant changes or issues detected during the crawl. This includes identifying broken links, monitoring redirects, and checking for 404 errors. These alerts help you address technical SEO issues promptly, ensuring your website remains optimised and accessible.
Data Visualisation and Analysis
To analyse crawl data effectively, Screaming Frog provides various tools for data visualisation. You can compare crawl data from different time periods to monitor progress and identify changes in SEO issues and opportunities. The tool offers detailed data on changes in elements such as page titles, meta descriptions, and internal linking. Using Force-Directed Crawl Diagrams, you can visualise your site architecture, helping you identify internal linking opportunities and potential areas for improvement.
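A lightweight way to run that comparison outside the tool is to diff two exported ‘Internal’ CSVs. A sketch, assuming both files are Screaming Frog exports with the standard ‘Address’ and ‘Title 1’ columns (the filenames are placeholders):

```python
import pandas as pd

# Internal exports from two scheduled crawls (placeholder filenames).
old = pd.read_csv("internal_all_january.csv")
new = pd.read_csv("internal_all_february.csv")

# Join on URL and surface pages whose title changed between crawls.
merged = old.merge(new, on="Address", suffixes=("_old", "_new"))
changed = merged[merged["Title 1_old"] != merged["Title 1_new"]]
print(changed[["Address", "Title 1_old", "Title 1_new"]])
```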
Reviewing SEO Issues Overview
To effectively review SEO issues, you need to dive deep into the data provided by Screaming Frog, identifying and categorising various types of issues that could impact your website’s performance.
Types of Issues
When conducting a technical SEO audit, you’ll encounter errors, warnings, and opportunities for optimisation. These issues can be prioritised as high, medium, or low based on their potential impact. Errors, such as server errors, security vulnerabilities, and complex redirect chains, are critical and should be fixed immediately. Warnings, like temporary and permanent redirects, and meta robots directives, should be checked and potentially fixed. Opportunities for optimisation include improving page titles, meta descriptions, and H1 and H2 tags.
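Once exported, that triage order is easy to automate. A sketch, assuming an issues export with ‘Issue Type’ and ‘Issue Priority’ columns; adjust the filename and column names to match what your version actually writes:

```python
import pandas as pd

issues = pd.read_csv("issues_overview.csv")  # assumed export filename

# Errors first, then warnings, then opportunities; high priority before low.
type_rank = {"Issue": 0, "Warning": 1, "Opportunity": 2}
priority_rank = {"High": 0, "Medium": 1, "Low": 2}

issues["type_rank"] = issues["Issue Type"].map(type_rank)
issues["priority_rank"] = issues["Issue Priority"].map(priority_rank)
print(issues.sort_values(["type_rank", "priority_rank"]).head(20))
```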
Technical SEO Problems
Technical SEO problems are a key focus. You need to find and fix server errors, identify and resolve redirect loops, and audit temporary and permanent redirects. Reviewing robots.txt, meta robots, and X-Robots-Tag directives is also important. Additionally, validating Accelerated Mobile Pages (AMP) and ensuring proper site crawling and indexing are necessary.
Content and Metadata Issues
Content and metadata analysis is significant. Detect duplicate content and near-duplicate pages, and analyse page titles, meta descriptions, H1, and H2 tags for optimisation. Identify poorly optimised tags and metadata, check for exact duplicate pages, and review meta character lengths and alt attributes. Ensuring unique and descriptive title tags, relevant meta descriptions, and properly optimised alt text for images is key to helping search engines understand your content.
Advanced Issue Detection
For advanced issue detection, use Screaming Frog to scrape web pages for additional information like H3 tags, cookies, and social media sharing buttons. Extract specific site data using CSS Path, XPath, or regex, and identify pages blocked from indexing by search engines. Integrating with Google services such as Google Analytics, Google Search Console, and PageSpeed Insights provides a holistic analysis of your site’s performance and architecture.
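The selectors used for custom extraction can be prototyped outside the tool first. Here is what an XPath pull of every H3 looks like using Python’s lxml, as a stand-in for the extraction dialog (the URL is a placeholder):

```python
from urllib.request import urlopen

from lxml import html  # third-party: pip install lxml

url = "https://www.example.com/"  # placeholder page
tree = html.fromstring(urlopen(url).read())

# The same XPath you would paste into Configuration > Custom > Extraction.
for h3 in tree.xpath("//h3/text()"):
    print(h3.strip())
```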
Identifying Broken Links
Identifying broken links is an essential step in maintaining your website’s SEO health and user experience. To start, once you’ve completed the crawling process using Screaming Frog SEO Spider, you need to set filters to view the broken links. Navigate to the response codes section and look for links with 404 status codes, which indicate broken links. You can also identify other types of errors such as server errors and timeouts in the same manner.
Isolate the broken links by filtering the results to show only the URLs with errors. This link analysis will help you pinpoint exactly where the issues are. Export the list of broken links for further analysis and fixing. This step is imperative as broken links can negatively impact both user experience and search engine rankings, particularly by increasing bounce rates.
When analysing these links, review each one to determine the cause of the error. Check for internal broken links that may be causing issues for users and search engines. Enable ‘Crawl Fragment Identifiers’ in the advanced settings to identify broken jump links or bookmarks. Analyse the context of the broken link on the page to understand its impact.
Use the URL tab and filters to categorise and prioritise the broken links. This will help you develop effective redirect strategies, such as using 301 redirects to direct users to relevant existing pages. Confirm all fixed links are re-crawled to verify the corrections, and regularly monitor the website for new broken links to maintain SEO health.
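If you prefer doing that triage in a script, the same filtering can run over a bulk export. A sketch, assuming an ‘All Inlinks’ export with its standard ‘Status Code’, ‘Source’, and ‘Destination’ columns (the filename is a placeholder):

```python
import pandas as pd

inlinks = pd.read_csv("all_inlinks.csv")  # placeholder filename for the bulk export

# Broken internal links: destinations answering 404, grouped by target URL.
broken = inlinks[inlinks["Status Code"] == 404]
summary = (
    broken.groupby("Destination")["Source"]
    .apply(list)
    .reset_index(name="Linking Pages")
)
summary.to_csv("broken_links_to_fix.csv", index=False)
```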
Reviewing Page Titles and Meta Descriptions
When it comes to optimising your website for search engines, reviewing page titles and meta descriptions is vital for improving your site’s visibility and user engagement.
To start, use the “Page Titles” tab in Screaming Frog to view title information for each URL. Here, you can filter titles by criteria such as missing, duplicate, too long, or too short titles. Make sure that titles are unique and relevant to the content of the page. The ideal title length is 50-60 characters (around 55 is a safe target), and consider pixel width to avoid truncation in SERPs. Place key phrases at the beginning of titles to increase visibility.
For meta descriptions, use the “Meta Descriptions” tab to view and filter descriptions by similar criteria. Meta descriptions should be unique, relevant, and persuasive, typically under 160 characters to avoid truncation. Both character and pixel lengths are vital for SERP snippet visibility.
Custom filters in Screaming Frog can help you identify specific issues with page titles and meta descriptions. Configure these filters under “Configuration” >> “Custom” to find titles or descriptions containing specific keywords or phrases. This can be particularly useful for identifying pages with duplicate meta descriptions, which can negatively impact user engagement and click-through rates. Export the filtered data into a spreadsheet for detailed analysis and reporting, allowing you to track changes and improvements over time. This thorough approach to title optimisation and description relevance will greatly enhance your website’s SEO performance.
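Those character thresholds are also easy to re-check once the export is in a spreadsheet or script. A sketch, assuming the standard ‘Address’, ‘Title 1’, and ‘Meta Description 1’ columns from an internal HTML export (the filename is a placeholder):

```python
import pandas as pd

pages = pd.read_csv("internal_html.csv")  # placeholder filename

titles = pages["Title 1"].fillna("")
descriptions = pages["Meta Description 1"].fillna("")

# Flag titles outside the 50-60 character window and over-long descriptions.
pages["title_issue"] = ~titles.str.len().between(50, 60)
pages["description_issue"] = descriptions.str.len() > 160

print(pages.loc[pages["title_issue"] | pages["description_issue"],
                ["Address", "Title 1", "Meta Description 1"]])
```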
Identifying Duplicate Content
After optimising your page titles and meta descriptions, the next step in enhancing your website’s SEO is to tackle duplicate content. The Screaming Frog SEO Spider is a powerful tool for identifying both exact and near-duplicate content.
By default, the SEO Spider automatically identifies exact duplicate content using the MD5 algorithm, which calculates a ‘hash’ value for each page. This check is performed against the full HTML of the page, requiring pages to be 100% identical. You can find these identical pages by sorting the ‘hash’ column in the Content tab. Confirm only one canonical version of a URL exists, and 301 redirect other versions to it.
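The hashing idea is straightforward to see in miniature: identical HTML always yields an identical digest, so matching hashes mean exact duplicates. A minimal sketch of the same principle (the URLs are placeholders):

```python
import hashlib
from urllib.request import urlopen

def page_hash(url: str) -> str:
    """MD5 digest of the full HTML; identical pages share a digest."""
    return hashlib.md5(urlopen(url).read()).hexdigest()

a = page_hash("https://www.example.com/page")
b = page_hash("https://www.example.com/page?utm_source=newsletter")
print("exact duplicates" if a == b else "HTML differs")
```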
To identify near duplicates, you need to enable the ‘Enable Near Duplicates’ option under Config > Content > Duplicates. Near duplicates are identified with a default 90% similarity match, which can be adjusted. This requires post-crawl analysis to populate the relevant data in the Content tab. You can view near duplicates in the Content tab, using filters and the Duplicate Details tab to see textual differences between pages.
Refine your content optimisation by adjusting the similarity threshold and excluding specific HTML elements like nav and footer. This helps focus on the main body content, avoiding unnecessary duplication and enhancing your overall content optimisation strategy. Ensuring unique content is crucial, as duplicate content can negatively impact Google rankings by confusing search engines about which page to rank higher.
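Screaming Frog uses its own similarity algorithm internally, but the threshold concept can be illustrated with Python’s difflib as a stand-in, with 0.9 mirroring the default 90% match:

```python
from difflib import SequenceMatcher

def similarity(text_a: str, text_b: str) -> float:
    """Ratio in [0, 1]; 0.9 mirrors the default near-duplicate threshold."""
    return SequenceMatcher(None, text_a, text_b).ratio()

body_a = "Our red widgets ship free across the UK within two days."
body_b = "Our blue widgets ship free across the UK within two days."
if similarity(body_a, body_b) >= 0.9:
    print("near duplicates: consolidate or differentiate these pages")
```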
Analysing Internal Linking
To enhance your website’s SEO, analysing internal linking is an essential step that can considerably impact your site’s crawlability, user navigation, and search engine rankings.
Identifying Internal Linking Opportunities
To identify internal linking opportunities, you can leverage Screaming Frog’s Custom Search feature. Open Screaming Frog and navigate to ‘Configuration > Custom > Search’. Click ‘+Add’ and name the column based on your keyword. You can search for keywords as plain text or use Regex to include synonyms, and set the search to ‘Page Text No Anchors’ to find opportunities for new links. Optionally, make the search case insensitive using Regex. This is particularly handy if you would rather not script the same check in a language like Python or Ruby.
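You can mirror that Custom Search behaviour in a short script to spot-check individual URLs. A sketch, assuming BeautifulSoup is installed (the URL and keyword pattern are placeholders):

```python
import re
from urllib.request import urlopen

from bs4 import BeautifulSoup  # third-party: pip install beautifulsoup4

url = "https://www.example.com/blog/widgets"  # placeholder page
soup = BeautifulSoup(urlopen(url).read(), "html.parser")

# Mirror 'Page Text No Anchors': drop link text so existing links don't match.
for anchor in soup.find_all("a"):
    anchor.decompose()

# Case-insensitive regex including a synonym, as in the Custom Search dialog.
pattern = re.compile(r"widgets?|gadgets?", re.IGNORECASE)
if pattern.search(soup.get_text(" ")):
    print(f"{url} mentions the keyword: candidate for a new internal link")
```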
Auditing Existing Internal Links
When auditing your existing internal links, use Screaming Frog to crawl your website and analyse the link structure. Check which pages are linking to any particular page using the ‘Inlinks’ tab. Confirm internal links make sense contextually and are topically relevant. Review anchor text to confirm it accurately represents the linked page’s content, and identify and remove non-descriptive anchor text in internal outlinks.
Prioritising Pages for Internal Linking
To prioritise pages for internal linking, connect Screaming Frog to Google Search Console or Google Analytics to export data. Sort pages by impressions, clicks, or visitors to prioritise linking to the most popular and valuable pages on your site. Focus on linking to pillar or cornerstone content to establish authority.
- Identify broken links: Use Screaming Frog to find internal links that return error codes.
- Visualise link structure: Utilise the Force Directed Crawl Diagram to visualise your site’s link structure.
- Avoid orphan pages: Confirm every page is linked to from somewhere else; Screaming Frog’s JavaScript rendering mode helps here by surfacing links that are only generated client-side (see the sketch below).
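For the orphan-page check, one approach is to diff the URLs you expect against the URLs the crawler actually reached by following links. A sketch, assuming a crawl export and a list of sitemap URLs, both with an ‘Address’ column (filenames are placeholders):

```python
import pandas as pd

# URLs Screaming Frog discovered by following internal links.
crawled = set(pd.read_csv("internal_html.csv")["Address"])  # placeholder file

# URLs you expect to exist, e.g. pulled from your XML sitemap.
expected = set(pd.read_csv("sitemap_urls.csv")["Address"])  # placeholder file

for url in sorted(expected - crawled):
    print(f"orphan: no internal links point at {url}")
```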
Creating and Submitting Sitemaps
Running the Crawl
Start by entering the website URL into the ‘enter URL to spider’ box and click the ‘Start’ button to initiate the crawl. Ensure the spider configuration is set to ‘Crawl All Subdomains’ to include all subdomains in the crawl. Once the crawl is complete, navigate to ‘Sitemaps’ and select ‘XML Sitemap’. By default, only indexable, 2xx HTML URLs are included; adjust settings as necessary to exclude non-indexable pages like 404 errors. This process ensures that search engines can efficiently discover and index all important pages on your site, especially those that may not be easily accessible through internal links.
Including Additional Elements
Include images in the sitemap, especially if they are not in a separate image sitemap, to improve visibility in Google image search. For international sites, include hreflang elements using hreflang links. You can also manually enter external images hosted on Content Delivery Networks (CDNs) and decide whether to include canonicalised or paginated pages, as well as PDF files.
Generating and Submitting the XML Sitemap
After configuring the settings, click ‘Next’ to generate the XML sitemap. Save the sitemap file to a preferred location, noting the 50,000 URL and 50 MB uncompressed size limits per file. Submit the XML sitemap to Google via Search Console to track indexing.
Include a line in the robots.txt file to inform search engines of the sitemap’s existence: ‘Sitemap: http://www.example.com/sitemap.xml’. For large sites, the SEO Spider will create additional sitemap files and a sitemap index file if necessary. Regularly updating the sitemap with new content and site changes is crucial for maintaining optimal SEO performance and indexing efficiency.
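Those limits are also why large sites end up with an index file: once a sitemap would exceed 50,000 URLs, the list is split and a parent index references each chunk. A sketch of that splitting logic, with placeholder URLs and filenames:

```python
URLS = [f"https://www.example.com/page-{i}" for i in range(120_000)]  # placeholder
LIMIT = 50_000  # per-file URL cap from the sitemaps protocol

sitemap_files = []
for i in range(0, len(URLS), LIMIT):
    name = f"sitemap-{i // LIMIT + 1}.xml"
    entries = "\n".join(f"  <url><loc>{u}</loc></url>" for u in URLS[i:i + LIMIT])
    with open(name, "w") as f:
        f.write('<?xml version="1.0" encoding="UTF-8"?>\n'
                '<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n'
                f"{entries}\n</urlset>\n")
    sitemap_files.append(name)

# The index file simply lists each child sitemap.
with open("sitemap_index.xml", "w") as f:
    f.write('<?xml version="1.0" encoding="UTF-8"?>\n'
            '<sitemapindex xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">\n')
    for name in sitemap_files:
        f.write(f"  <sitemap><loc>https://www.example.com/{name}</loc></sitemap>\n")
    f.write("</sitemapindex>\n")
```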
Assessing Core Web Vitals
Evaluating Core Web Vitals with Screaming Frog involves a systematic approach to measuring and enhancing your website’s performance. Here’s how you can do it effectively:
Setting Up the Audit
To start, you need to specify the domain of the website you want to audit. Note that the PageSpeed Insights integration used for Core Web Vitals analysis requires a paid Screaming Frog SEO Spider licence. You also need a PageSpeed Insights API key, which you can obtain from Google’s get started page.
Configure the SEO Spider by connecting to the PageSpeed Insights API in the ‘Configuration > API Access > PageSpeed Insights’ section and enter your API key there.
Conducting the Crawl
Begin the crawl by entering the website domain into the SEO Spider. The process includes both a “Crawl” and an “API” progress bar, which must reach 100% before you can analyse the data. The SEO Spider collects Core Web Vitals data, including CrUX field data and lab data, for metrics such as Largest Contentful Paint (LCP), First Input Delay (FID), and Cumulative Layout Shift (CLS).
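Each URL’s scores come from the same public PageSpeed Insights v5 endpoint, which you can also query directly to sanity-check a single page. A minimal sketch (the URL is a placeholder and the key is your own):

```python
import requests  # third-party: pip install requests

API = "https://www.googleapis.com/pagespeedonline/v5/runPagespeed"
params = {
    "url": "https://www.example.com/",  # placeholder page
    "key": "YOUR_PSI_API_KEY",
    "strategy": "mobile",
}
report = requests.get(API, params=params, timeout=60).json()

# CrUX field data, when Google has enough of it, sits under loadingExperience.
field = report.get("loadingExperience", {}).get("metrics", {})
for name in ("LARGEST_CONTENTFUL_PAINT_MS",
             "FIRST_INPUT_DELAY_MS",
             "CUMULATIVE_LAYOUT_SHIFT_SCORE"):
    if name in field:
        print(name, field[name]["category"])  # e.g. FAST, AVERAGE, or SLOW
```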
Analysing the Results
- Open the “PageSpeed” tab to view the Core Web Vitals Assessment and individual metric scores.
- Filter results to identify pages that fail the Core Web Vitals minimum thresholds.
- Export data to analyse the percentage of pages failing each Core Web Vitals metric.
- Use the “Opportunities” section within PageSpeed Insights to see areas for improvement.
Optimising these metrics is crucial as they directly impact search rankings and user satisfaction, leading to better engagement and lower bounce rates.
Optimising On-page SEO Elements
Optimising on-page SEO elements is essential for enhancing your website’s visibility and user experience. Here’s how you can use Screaming Frog to optimise key elements like meta tags and heading structure.
Meta Tag Optimisation
To optimise your meta tags, start by checking their lengths. Ensure your meta titles are within the recommended pixel width, ideally under 600 pixels, and keep meta descriptions concise, bearing in mind that Google’s display limits can vary.
Use Screaming Frog to identify and flag meta tags that are too long or too short, and verify that they include the top keyword the page is ranking for. This can be done by combining data from Screaming Frog and Google Search Console to confirm your titles and descriptions are optimised and include the necessary keywords. For bulk editing, utilise tools like Yoast SEO in WordPress to efficiently update your meta titles and descriptions.
Heading Structure
Assess your heading hierarchy by reviewing the use of H1, H2, H3, etc., tags to confirm a proper structure. Use Screaming Frog to identify pages with missing or improperly used heading tags and confirm each page has a unique and relevant H1 tag. Verify that heading tags are used in a logical and hierarchical order.
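Those rules (exactly one H1, no skipped levels) also fit in a short spot-check script for a single page. A sketch, assuming BeautifulSoup is installed (the URL is a placeholder):

```python
from urllib.request import urlopen

from bs4 import BeautifulSoup  # third-party: pip install beautifulsoup4

url = "https://www.example.com/"  # placeholder page
soup = BeautifulSoup(urlopen(url).read(), "html.parser")

levels = [int(tag.name[1]) for tag in
          soup.find_all(["h1", "h2", "h3", "h4", "h5", "h6"])]

if levels.count(1) != 1:
    print(f"expected exactly one H1, found {levels.count(1)}")
for previous, current in zip(levels, levels[1:]):
    if current > previous + 1:
        print(f"skipped level: H{previous} followed directly by H{current}")
```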
Screaming Frog is a powerful tool that can significantly enhance your website’s SEO performance. By initiating comprehensive site crawls, monitoring progress, identifying and prioritising SEO issues, optimising titles and meta descriptions, analysing internal linking, and generating XML sitemaps, you can address technical challenges and improve your site’s visibility.
Regular use of Screaming Frog enables you to maintain a well-optimised website, leading to better search engine rankings and an improved user experience. For more insights and detailed guidance on leveraging Screaming Frog for your SEO efforts, explore the tool’s extensive features and consider consulting additional resources to maximise its potential.