GUS - Gemini Search Engine



Query backlinks

Documentation: Searching

Documentation: Indexing

Documentation: Backlinks

Documentation: Indexing

GUS is a search engine for all content served over the Gemini protocol. It can help you track down textual pages (e.g. `text/gemini`, `text/plain`, `text/markdown`) whose content contains your search terms, but it can just as easily help you track down binary files (e.g. images, mp3s) that happen to be served over the Gemini protocol.

What does GUS index?

GUS will only index content within Geminispace, and will neither follow nor index links out to other protocols, such as HTTP or Gopher. It will only crawl outwards by following Gemini links found within `text/gemini` pages. If you return a `text/plain` mimetype for a page, Gemini links within it will not register with GUS (though the content of the `text/plain` page will itself get indexed). GUS does not crawl capsules behind Onion links.
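The crawl rule above (only follow Gemini links found in `text/gemini` pages) can be sketched as a small link extractor. This is an illustrative sketch, not GUS's actual code; gemtext link lines take the form `=> URL optional-label`, and only absolute `gemini://` URLs would be queued for crawling.

```python
def extract_gemini_links(gemtext: str) -> list[str]:
    """Pull absolute gemini:// URLs from gemtext link lines ("=> URL label")."""
    links = []
    for line in gemtext.splitlines():
        if line.startswith("=>"):
            # Everything after "=>" is "URL [optional label]".
            parts = line[2:].strip().split(maxsplit=1)
            if parts and parts[0].startswith("gemini://"):
                links.append(parts[0])
    return links

page = "=> gemini://example.org/ Home\n=> https://example.com/ Web\nplain text"
print(extract_gemini_links(page))  # only the gemini:// link is kept
```

A crawler applying this to a `text/plain` page would simply skip the extraction step entirely, which is why such links never register.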

Textual pages over 10 MB in size will not be indexed.

Please note that there are provisions in place for manually excluding content from indexing, which maintainers will typically use to exclude pages and domains that cause issues with index relevance or crawl success. GUS ends up crawling weird protocol experiments, proofs of concept, and whatever other bizarre bits of technical creativity folks put up in Geminispace, so it is a continual effort to keep the index healthy. Please don't take it personally if your content ends up excluded, and I promise we are continually working to make GUS indexing more resilient and scalable!

list of filtered URIs

In particular, the following types of content are currently excluded:

- mirrors of large websites like Wikipedia or the Go docs (it is simply too much content to add to the index in its current state)

- mirrors of news sites from the common web (too large, and they change too frequently)

Indexing and Redirects

GUS checks for specific return codes, such as 31 PERMANENT REDIRECT, and saves this information.

When your capsule serves a permanent redirect for a resource, GUS will not re-crawl that resource for at least a week.
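A Gemini response begins with a single header line of the form `<STATUS><SPACE><META>`; for a 31 redirect, META carries the new permanent URL. A minimal sketch of how a crawler might pick this header apart (illustrative only, not GUS's implementation):

```python
def parse_gemini_header(header: str) -> tuple[int, str]:
    """Split a Gemini response header line into (status, meta)."""
    status, _, meta = header.rstrip("\r\n").partition(" ")
    return int(status), meta

status, meta = parse_gemini_header("31 gemini://example.org/new-location\r\n")
# On status 31, a crawler would record `meta` as the resource's new
# permanent URL and hold off re-crawling the old one.
```

The same parser handles success responses, where META is the mimetype instead of a URL (e.g. `20 text/gemini`).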

Controlling what GUS indexes with a robots.txt

To control crawling of your capsule, you can use a "robots.txt" file. Place it in your capsule's root directory such that a request for "robots.txt" will fetch it. It should be returned with a mimetype of `text/plain`.

See the robots.txt companion spec for more details.
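As an illustration, a robots.txt that asks all crawlers to skip a hypothetical `/private/` directory while leaving the rest of the capsule crawlable might look like this (the paths are made up, and the companion spec is the authoritative reference for the syntax):

```
User-agent: *
Disallow: /private/
```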

When interpreting a robots.txt, GUS will use the first rule that matches the URI to be visited.
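The first-match behavior described above can be modeled as follows (an illustrative sketch, not GUS's actual code): rules are checked in file order, and the first path prefix that matches the requested URI decides the outcome.

```python
def first_matching_rule(rules: list[tuple[str, bool]], path: str) -> bool:
    """Return the allow/deny decision of the first rule whose prefix matches path.

    `rules` is an ordered list of (path_prefix, allowed) pairs, as might be
    parsed from a robots.txt; a path with no matching rule is allowed by default.
    """
    for prefix, allowed in rules:
        if path.startswith(prefix):
            return allowed
    return True

# With a deny rule listed before a broad allow rule, the deny wins for its subtree:
rules = [("/private/", False), ("/", True)]
print(first_matching_rule(rules, "/private/logs"))  # denied by the first rule
print(first_matching_rule(rules, "/index.gmi"))     # allowed by the second rule
```

Because only the first match counts, ordering matters: a broad `/` rule placed first would shadow every rule after it.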

Keep your robots.txt as simple as possible: avoid empty lines, wildcards, and similar constructs, and just stick to the rules defined in the companion spec. GUS obeys the following user-agents, listed in descending priority:

How can I recognize GUS requests?

You can identify GUS by looking for any requests to your capsule made by the following IP addresses:

Does GUS keep my content forever?

No. After repeated failed attempts to connect to a page (e.g. because it moved, because the capsule was taken down, or because your host returned a server error), GUS will invalidate that page in its index after 1 month of unavailability, thus removing it from search results.
