Removing Images from Search Results

Remove Images Hosted on Your Site from Google Search

This document explains how to control the appearance of images hosted on your site within Google Search results.

Emergency Image Removal

For urgent removal of images from Google's search results, use the Removals tool in Search Console. This method offers a rapid solution, but be aware:

  • Temporary Solution: Unless you remove the images from your site or block them as explained in the "Non-Emergency Image Removal" section, they may reappear in search results once the removal request expires.

  • Google Specific: This method only applies to Google Search results, not other search engines.

Non-Emergency Image Removal

There are two primary methods for permanently removing your site's images from Google's search results:

  1. robots.txt disallow rules: This method uses a text file on your server to tell crawlers not to crawl specific files or directories; images Google cannot crawl are dropped from its search results.

  2. noindex X-Robots-Tag HTTP header: This method uses an HTTP response header to tell search engines not to index a particular image.

Both methods achieve the same outcome – preventing images from appearing in Google Search. Choose the method that best suits your site's setup and your technical comfort level.

Important: Googlebot has to crawl an image URL to see its HTTP response headers, so use only one method per image. A robots.txt disallow blocks that crawl, which means a noindex X-Robots-Tag on the same image would never be seen.

Limited Server Access? If you lack access to the server hosting your images (e.g., using a CDN) or your CMS doesn't support blocking images through noindex or robots.txt, deleting the images from your server may be the only option.

Method 1: Removing Images Using robots.txt Rules

  1. Locate your robots.txt file: This file is typically found in the root directory of your website (e.g., https://www.yourwebsite.com/robots.txt). If it doesn't exist, create a plain text file named "robots.txt".

  2. Add Disallow Rules: Each disallow rule in your robots.txt file tells crawlers not to crawl the matching content.

    • Example 1: Blocking a Single Image

      To prevent the image located at https://www.yourwebsite.com/images/products/coffee-mug.jpg from appearing in Google Search, add the following line to your robots.txt:

      User-agent: Googlebot-Image
      Disallow: /images/products/coffee-mug.jpg
    • Example 2: Blocking Images in a Directory

      To exclude all images within the /images/products/ directory, disallow the directory path (everything under it is matched):

      User-agent: Googlebot-Image
      Disallow: /images/products/
    • Example 3: Blocking Images by File Type

      To block all JPG images on your website, combine the asterisk (*) wildcard with a dollar sign ($), which anchors the match to the end of the URL:

      User-agent: Googlebot-Image
      Disallow: /*.jpg$
  3. Save and Upload: Save your changes to the robots.txt file and upload it to your website's root directory.

User-agent: Specifies which search engine crawler the rule applies to.

  • Googlebot: Targets Google's main crawler; Googlebot-Image also follows these rules when the file contains no more specific Googlebot-Image group.

  • Googlebot-Image: Specifically targets the Google Images crawler.

Disallow: Specifies the path or pattern of content you want to block.

Important: Robots.txt directives are not mandatory for search engines to follow, but most reputable ones do. It may take some time for Google to re-crawl your site and reflect these changes in search results.
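
Putting it together: a single robots.txt file can list several disallow rules under one Googlebot-Image group. The sketch below reuses the coffee-mug image from Example 1; the /images/archive/ path and the PNG rule are hypothetical stand-ins added for illustration only:

    # Rules for Google's image crawler only
    User-agent: Googlebot-Image
    # Block one specific image
    Disallow: /images/products/coffee-mug.jpg
    # Block every image in a directory (hypothetical path)
    Disallow: /images/archive/
    # Block every PNG file anywhere on the site
    Disallow: /*.png$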

Method 2: Removing Images with the noindex X-Robots-Tag HTTP Header

  1. Access Server Headers: You'll need a way to modify the HTTP response headers for the specific images you want to block. This is often done through your server configuration, .htaccess file (for Apache servers), or within your CMS settings.

  2. Add the noindex Directive: Include the following line within the HTTP response headers for each image you want to remove:

    X-Robots-Tag: noindex

    Example:

    Suppose you want to prevent the image located at https://www.yourwebsite.com/images/banner.jpg from being indexed. You'd configure your server to include the X-Robots-Tag: noindex header in the HTTP response whenever that specific image URL is requested (a configuration sketch follows this list).

  3. Verify Implementation: Use online header checking tools or your browser's developer tools (Network tab) to confirm the X-Robots-Tag: noindex header is being sent correctly for the targeted images.
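
As a sketch only: on an Apache server with the mod_headers module enabled, the header from step 2 could be set through the .htaccess file in the directory that holds the image. The file name banner.jpg matches the example above; nginx, other servers, and most CMSs have their own equivalents, so treat this as one possible configuration rather than the required one.

    # Send the noindex header only for banner.jpg in this directory
    <Files "banner.jpg">
      Header set X-Robots-Tag "noindex"
    </Files>

    # Or apply it to every JPG and PNG file in this directory
    <FilesMatch "\.(jpg|png)$">
      Header set X-Robots-Tag "noindex"
    </FilesMatch>

To confirm the header is being sent (step 3), you can request the image from the command line, for example with curl -I https://www.yourwebsite.com/images/banner.jpg, and check that X-Robots-Tag: noindex appears among the response headers.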

Note: Adding the noimageindex robots meta tag to a page only prevents images embedded on that specific page from being indexed. If the same image exists on other pages, it could still be indexed from those locations. Using the X-Robots-Tag: noindex HTTP header on the image itself offers more direct control.
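
For reference, the page-level directive mentioned in this note is the robots meta tag with the noimageindex value, placed in the <head> of the HTML page whose embedded images should stay out of the index:

    <meta name="robots" content="noimageindex">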
