Tips and Tricks

Here are some useful recommendations to improve your Flounder & Gemini experience.

Home page

Your home page is served from your file named "index.gmi". If this file does not exist, your home page will instead be a directory listing of all your files.

Use folders

If you create a new file with a / in its name, Flounder will automatically create a subfolder for that file. Folders list all the files contained within them, or redirect to an "index.gmi" file if one exists.
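You can sketch this behavior locally before uploading. On Flounder itself the folder is created for you when you save the file; the file names below are made-up examples:

```python
from pathlib import Path
import tempfile

# Local sketch: naming a file "gemlog/first-post.gmi" implies a
# "gemlog" folder. Flounder does this server-side when you save;
# here we mirror it with pathlib in a temporary directory.
site = Path(tempfile.mkdtemp())
post = site / "gemlog" / "first-post.gmi"
post.parent.mkdir(parents=True, exist_ok=True)
post.write_text("# First post\n")

print(sorted(p.relative_to(site).as_posix() for p in site.rglob("*")))
```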


By creating a "gemlog" folder, you can aggregate new content in a way that users can subscribe to, either on the /feed page, using a Gemini feed parser, or via an atom.xml reader. For more information:
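Gemini feed parsers generally rely on the "subscribable page" convention: a gemlog index is a gemtext page whose entry links have labels beginning with an ISO date (YYYY-MM-DD). A small parsing sketch, with invented file names and titles:

```python
import re

# Under the Gemini "subscribable page" convention, feed entries are
# gemtext link lines ("=> url label") whose label starts with an
# ISO date. Other links on the page are ignored by feed readers.
ENTRY = re.compile(r"^=>\s*(\S+)\s+(\d{4}-\d{2}-\d{2})\s*-?\s*(.*)$")

def feed_entries(gemtext: str):
    """Extract (url, date, title) tuples from a gemlog index page."""
    entries = []
    for line in gemtext.splitlines():
        m = ENTRY.match(line)
        if m:
            entries.append(m.groups())
    return entries

index = """\
# My gemlog

=> second-post.gmi 2021-02-01 - Second post
=> hello.gmi 2021-01-15 - Hello, Gemini
=> about.gmi About me
"""

print(feed_entries(index))  # "about.gmi" has no date, so it is skipped
```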


.hidden folder

If you're working on a draft or other content you aren't ready to share publicly, put it inside the .hidden folder, and it won't show up on any feeds or be viewable publicly.
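The effect is the same as filtering any path under ".hidden" out of public listings. Flounder applies this server-side; the sketch below just reproduces the filter locally with example files:

```python
from pathlib import Path
import tempfile

# Sketch of the .hidden behaviour: anything under ".hidden" is
# excluded from feeds and public listings. We build a tiny example
# site in a temp directory and filter the way the server would.
site = Path(tempfile.mkdtemp())
(site / ".hidden").mkdir()
(site / ".hidden" / "draft.gmi").write_text("# Draft\n")
(site / "index.gmi").write_text("# Home\n")

public = [
    p.relative_to(site).as_posix()
    for p in site.rglob("*.gmi")
    if ".hidden" not in p.relative_to(site).parts
]
print(sorted(public))  # the draft is filtered out
```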

SFTP Access



Finger

Flounder supports the finger protocol. Edit the plaintext file called ".plan" and it will be served via finger at:
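Finger is a very simple protocol: a client opens a TCP connection (port 79 by default), sends the username followed by CRLF, and reads the reply. A minimal client sketch; the hostname in the commented-out call is a placeholder, not Flounder's actual finger address:

```python
import socket

def finger_request(user: str) -> bytes:
    """A finger query is just the username followed by CRLF (RFC 1288)."""
    return (user + "\r\n").encode("ascii")

def finger(user: str, host: str, port: int = 79) -> str:
    """Connect, send the query, and read the reply until the server closes."""
    with socket.create_connection((host, port), timeout=10) as sock:
        sock.sendall(finger_request(user))
        chunks = []
        while True:
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks).decode("utf-8", errors="replace")

print(finger_request("yourname"))
# Actual network call, left commented out (placeholder hostname):
# print(finger("yourname", "example-finger-host"))
```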


This server is configured with the following limits:

If these limits constrain you for some reason, let me know and I can consider an increase. For large images, consider compressing them with a tool such as:

Linking sites you like

While Flounder will link to everyone's site, it is only a small part of the Gemini space. If you find a Gemini site that you like, on or off Flounder, make sure to link it on your page. This allows people to "browse" the Gemini network, much like on the old-school web.

If you want people to be able to contact you or follow you on other social media, you may want to provide a link on your page. An email address is especially helpful if you want people to give feedback on your posts.

Subscribe to the mailing list

To communicate publicly with other Flounder users or discuss the Flounder project, check out the mailing list:


If you want people to be able to comment publicly on your posts, a mailing list is a great way to do that. Sourcehut (the service linked above) is an easy way to set one up.


Robots.txt

You can add a special file named robots.txt to your page. This file tells web crawlers, such as search engines, how to treat your page. It will not block them from crawling your page, but well-behaved crawlers will respect it. For example, if you don't want your web page archived by Google, the Internet Archive, or anyone else, add the following to your robots.txt:

User-agent: *
Disallow: /
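You can check how a well-behaved crawler will interpret your rules using Python's standard-library robots.txt parser, which implements the client-side logic crawlers use:

```python
from urllib.robotparser import RobotFileParser

# Parse the robots.txt rules above and ask whether a crawler
# (any user agent, since the rule is "User-agent: *") may fetch a page.
rp = RobotFileParser()
rp.parse(["User-agent: *", "Disallow: /"])

print(rp.can_fetch("GoogleBot", "https://example.com/index.gmi"))  # False
```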

Here's the format for managing Gemini-based crawlers:


For more info, see:

Add a custom domain

If you want to use your own domain, check out this guide!