Isolated SFTP server using Docker

Published:  21/12/2020 16:30


SFTP stands for "SSH File Transfer Protocol" and is part of the SSH (Secure Shell) specification.

It works in a completely different way from standard FTP. For one thing, there is a single TCP connection (port 22 by default), which sidesteps the strange passive-mode firewall issues FTP is known for. More importantly, it uses strong encryption for communications and for authentication of the server, and it allows both password and key pair authentication.

FTP-over-SSL (or more precisely FTP-over-TLS), sometimes called FTPS, is an alternative, but it relies on standard SSL certificates, which isn't a problem in itself but adds an extra step. However, FTPS is still the old FTP protocol underneath, so the firewalling issues and general problems related to the extra communication channel are still there.


The OpenSSH project is nothing new; it's the reference implementation of the SSH protocol across many *NIX based systems, used both for secure file transfer and remote shell administration.

What we'd like to do is configure the SSH server to allow certain accounts to only use SFTP and chroot them to a specific directory they're allowed to access.

Ideally that would have to be on a different TCP port than the default 22, which we keep firewalled for security reasons, and it shouldn't even allow attempting password authentication on any system account.

While it's possible to achieve these goals by starting multiple sshd processes, you could also just use Docker to basically do the same thing (Docker will effectively run another sshd process on the host inside an isolated environment).

Creating the image

We're aware there are projects like this already. At the time of writing these weren't meeting our password security standards, and this article wouldn't be an interesting exercise if it were just about running two docker commands and being done with it, so let's follow the process step by step so that you can customize your SFTP server as you wish.

At this time we're also not pushing the image to the Docker Hub, and we don't care if you do.

Let's use the Alpine Linux distribution as the base, since that will give us one of the smallest image sizes.

For convenience, all the files discussed in the current article are in a public GitHub repository.

OpenSSH config

We'll first need a config file for OpenSSH, let's go with the minimum:

Port 22
UseDNS no

PermitRootLogin no
X11Forwarding no
AllowTcpForwarding no
AllowAgentForwarding no
PermitTunnel no

# Force sftp and chroot jail
Subsystem sftp internal-sftp
ForceCommand internal-sftp
ChrootDirectory %h

Name that file sshd_config (no extension).

Alternatively you could copy one from your favorite distribution or selectively add what you want to that file — For instance, you could add a Match rule to set a specific umask per user or group.
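To give an idea of what such a rule could look like, here's a hypothetical Match block forcing a more restrictive umask for a given group (the sftponly group name and the 0027 umask are just examples, not something from the rest of this setup):

```
Match Group sftponly
    ForceCommand internal-sftp -u 0027
```

The -u option of internal-sftp sets the umask applied to files and directories created by the matched users, overriding the global ForceCommand for them.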


We need a custom entrypoint for the container to generate new SSH host keys the first time the container is run.

Doing this at image level is obviously not an option, or all the SFTP servers we create from that image will have the exact same host keys.

Since we need an entrypoint script anyway, we also use it to start the OpenSSH server in foreground mode, so that the container stops if sshd crashes for some reason. We'll also get proper container logs, since we ask OpenSSH to log everything to standard output.

There's just an extra bit of logic required so that the SSH keys aren't regenerated at every container start: we write a file once the keys have been generated, and check for the existence of that file before regenerating them:


#!/bin/sh

# Marker file used to remember that host keys were already generated
# (the exact path is our own choice, any location that persists with
# the container works):
INIT_FILE=/etc/ssh/.host_keys_generated

# Regenerate SSH host keys if this is the first time
# the container is started:
if [ ! -f "$INIT_FILE" ]; then
  echo "Generating SSH host keys..."
  ssh-keygen -A
  echo "1" > "$INIT_FILE"
fi

# Start the OpenSSH server in the foreground, logging to stderr:
/usr/sbin/sshd -D -e

Make that file executable just to be sure; for the rest of this tutorial we'll assume it's named entrypoint.sh.

The Dockerfile itself

We can now add a Dockerfile in the same directory as the entrypoint.sh and sshd_config files.

Here's the Dockerfile we're using:

FROM alpine


RUN apk add --no-cache openssh
COPY ./sshd_config /etc/ssh/
COPY ./entrypoint.sh /usr/bin/

# Default is to run the OpenSSH server
CMD ["/usr/bin/entrypoint.sh"]

Building the image

From the root of the project directory where the Dockerfile currently is, you can now build the image:

docker build -t <IMAGE_TAG> .

Using the new image

You can now start a new container in the background based on the image we just built:

docker run -d --restart=always --name <CONTAINER_NAME> -p 2222:22 -v <SFTP_DIRECTORY_ROOT>:<SFTP_DIRECTORY_ROOT> <IMAGE_TAG>

Where 2222 is the port we're using locally — You might want to avoid using 22 since that one is probably in use for your regular SSH server if you have one.

More importantly, you have to provide the right volume mounts using the -v option (you can mount more than one location). Ideally you'd want to mount some sort of root directory in which you'll later create the user directories for SFTP users.

The --restart=always option should make it so that the Docker daemon starts the container automatically when the computer starts (obviously won't work if your Docker daemon isn't set to start when the OS starts).

You can now start and stop the new container manually:

docker start <CONTAINER_NAME>
docker stop <CONTAINER_NAME>

Adding user accounts

The new container doesn't do anything without local accounts.

You could push your own /etc/passwd and /etc/shadow files to the container or even the image itself but it's easier and somewhat safer to do it manually from a local shell on the container.

You should probably pick specific UIDs and GIDs for the SFTP users in advance, since these will also be used directly for file ownership and permissions on the host filesystem for the mounted volumes.

It's a good idea to pick these IDs higher than 2000, for instance, to purposefully avoid IDs that could actually exist on the host system — Unless that's something you specifically want.
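As a quick sanity check on the host, you can verify whether a candidate ID is already taken by scanning /etc/passwd (2001 is just an arbitrary example pick above 2000):

```shell
# Check whether a candidate UID already exists on the host
# (2001 is an arbitrary example above 2000):
UID_CANDIDATE=2001
if cut -d: -f3 /etc/passwd | grep -qx "$UID_CANDIDATE"; then
  UID_STATUS=used
else
  UID_STATUS=free
fi
echo "UID $UID_CANDIDATE is $UID_STATUS on this host"
```

The same check works for GIDs by reading /etc/group (third field) instead.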

The idea is then to set the intended chroot directory as the new user's home directory.

Important: OpenSSH enforces strong chroot security (to prevent exploits that could go outside the chroot) which requires the root of the chroot to NOT be writable by or belong to the chrooted user.

You need to plan in advance: the writable directory for an SFTP user cannot be their "home directory" (the chroot root) or the root of any of the volumes mounted on the container, it has to be a subdirectory below that. Let's state again that these restrictions have nothing to do with Docker, and that some standard FTP servers enforce them too.
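To make the layout concrete, here's a sketch of what one user's tree could look like (all names and paths are examples; /tmp is used here so the commands run without root, whereas in practice this lives under the directory you mount into the container and the chown commands are run as root):

```shell
# Example chroot layout for a hypothetical user "alice":
#   alice/        -> chroot root, must be owned by root, NOT writable by alice
#   alice/files/  -> upload directory, owned by alice (UID/GID 2001 here)
SFTP_ROOT=/tmp/sftp-demo
mkdir -p "$SFTP_ROOT/alice/files"
chmod 755 "$SFTP_ROOT/alice"
chmod 755 "$SFTP_ROOT/alice/files"
# On the real host, as root:
#   chown root:root <SFTP_DIRECTORY_ROOT>/alice
#   chown 2001:2001 <SFTP_DIRECTORY_ROOT>/alice/files
```

With this layout, alice's home (and ChrootDirectory) is the alice/ directory, and she can only write inside alice/files/.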

Once you're ready to create a new account and the container is already running in the background, you can get a shell to it using docker exec:

docker exec -it <CONTAINER_NAME> /bin/sh

Then add the user (and group):

addgroup -g <GID> <GROUP_NAME>
adduser -u <UID> -G <GROUP_NAME> -h <CHROOT_DIRECTORY> <USER_NAME>

We have to use two separate commands and these specific arguments because Alpine uses busybox to provide them, and they differ from the addgroup or adduser you'd find on a fresh Ubuntu.

Of course, feel free to use different UID and GID if you need to.

You should now be able to connect using SFTP and the new username and password — On Linux you could test with the sftp command or your graphical interface's file explorer.

On Windows, the WinSCP client works great.

Contrary to popular belief, the Linux command scp does not use the SFTP protocol and as such will not work with this setup.

Debug access issues

Unless the container is running in the foreground, you won't be able to directly see the OpenSSH logs.

However, you can do so using the docker logs command, for instance:

docker logs -f <CONTAINER_NAME>

The "-f" option follows the logs and outputs what the container is logging in real time. Remove it to just dump the existing logs.

You don't have to be on a shell on the container to fix permission issues; it can equally be done from the host, provided you know which UID and GID to use — The host doesn't need to have any real user with these IDs either.

How to add/change mount points

To change the port or mount point setup of the container, you have to create a new one, which means you'll lose the users and passwords you had already created.

You could just copy and restore /etc/passwd and /etc/shadow, or you could create a new image based on your current container using docker commit, then start a new one (using docker run) with your new options.