How to use PDB inside a Docker container

Vladyslav Krylasov
3 min read · Jul 8, 2018

Hello there, fellow developer. If you’re reading this article, you’ve probably run into problems debugging inside a Docker container by means of pdb.

There are several nuances that aren’t well covered on the internet, so I’d like to share my personal experience with them. First of all, let’s start with the configuration part.

You need to have something like this inside your docker-compose.yml:
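
A minimal sketch of such a file (the service name, Gunicorn command, and port numbers here are illustrative assumptions; adapt them to your project):

    version: "3"

    services:
      web:
        build: .
        # a long timeout so the worker isn't killed while you sit in the debugger
        command: gunicorn app.wsgi:application --bind 0.0.0.0:8000 --timeout 3600
        stdin_open: true   # required for PDB
        tty: true          # required for PDB
        ports:
          - "8000:8000"
          - "4444:4444"    # port that remote-pdb will listen on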

Nuances to know:

  • It’s important to set the stdin_open and tty parameters to true on every service that will use PDB.
  • Usually a docker-compose.yml specifies bundles of services like Nginx & Gunicorn, Nginx & uWSGI, etc., so you need to set the timeout flag for your WSGI server. If you don’t, you’ll be limited to about 1 minute (the default timeout for most WSGI servers) to debug something, and after that the connection will be lost, so we need to increase it for development purposes.
  • The ports parameter should be complemented with 4444 or any other available port; you can check which ports are already taken with the command netstat -lntu in your terminal. Like the first bullet point, you need to provide a port on every service that will be using PDB.

That’s it for the docker-compose.yml file. Other problems appear when you set breakpoints with the lovely pdb from the Python standard library while the container continuously spawns logs to STDOUT. Yes, you can attach to a Docker container and jump into it, but you won’t be able to type anything, because the logs will keep overriding your input.

The solution is to use the remote-pdb library and debug remotely by means of telnet. So you need to add remote-pdb to your requirements.txt and set an appropriate breakpoint. E.g.:
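
For instance, somewhere in the code path you want to inspect (the view function here is just a hypothetical placeholder):

    from remote_pdb import RemotePdb

    def my_view(request):
        # listen on all interfaces inside the container, on the port
        # exposed in docker-compose.yml
        RemotePdb('0.0.0.0', 4444).set_trace()
        ...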

The next step is to connect to our remote debugger by means of telnet. You need to specify the host and port that you provided inside your docker-compose.yml, but be careful: the host inside our Python code should always be 0.0.0.0 and the port the same as in docker-compose.yml, while the host differs for telnet. For example, if you use docker-machine, then you need to figure out the IP of that machine. You can do it simply with the command docker-machine ip {name_of_your_machine} and then connect by telnet. E.g.:

  • telnet 0.0.0.0 4444 — the default way to connect if you don’t use docker-machine; if you do, the scheme is telnet {docker_machine_ip} {provided_port_from_docker_compose.yml}.
  • Worth mentioning: you can only connect once your Python interpreter has reached the breakpoint.
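
Put together, the connection step looks roughly like this (the machine name default is an assumption):

    # without docker-machine
    telnet 0.0.0.0 4444

    # with docker-machine
    telnet "$(docker-machine ip default)" 4444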

UPD: There is a fancy pdb wrapper called pdb++. You can improve your debugging experience with this tool. All you need is to add this fancy lib to your requirements.txt, and you’ll get something like this when debugging inside telnet.

pdb++ preview from GitHub
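
For reference, the relevant lines in requirements.txt would be something like this (pdb++ is published on PyPI under the name pdbpp; pin versions as you see fit):

    remote-pdb
    pdbpp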

One thing about pdb++: in sticky mode, when used over telnet, it truncates the output, which is not really what we want. It can be changed easily. In the case of a Dockerfile (or the docker-compose command), we can put the following inside a bash script and use it as an ENTRYPOINT. E.g.:
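
A sketch of such an entrypoint script, assuming the fix is simply to advertise a wider terminal to the processes in the container (the exact dimensions are arbitrary):

    #!/bin/bash
    # entrypoint.sh: report a generous terminal size so pdb++ sticky mode
    # doesn't truncate its output over telnet
    export COLUMNS=200
    export LINES=60
    exec "$@"

In docker-compose.yml you would then point entrypoint at this script and keep your usual command as its arguments.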

Last but not least, there is an option to set the --log-level CRITICAL argument for gunicorn and use vanilla pdb with docker attach (thanks, Colin Miller). This avoids the use of telnet and the remote-pdb library and won’t overwrite the attached pdb screen with gunicorn logs. Moreover, in this case, tab autocomplete will work. See the repository to learn more.
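
A rough sketch of that variant (the service name web and the WSGI module app.wsgi are assumptions): silence Gunicorn in docker-compose.yml, put a plain import pdb; pdb.set_trace() in your code, and attach from your terminal:

    # docker-compose.yml (command for the web service)
    command: gunicorn app.wsgi:application --bind 0.0.0.0:8000 --timeout 3600 --log-level critical

    # attach to the running container
    docker attach $(docker-compose ps -q web)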

You can also run python manage.py runserver 0.0.0.0:8000 from the Docker container itself, so you won’t need to modify docker-compose.yml or use telnet, remote-pdb, or even docker attach; just don’t forget to disable logging.
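
With docker-compose that can be as simple as the following (the service name web is an assumption); docker-compose run allocates a TTY on its own, and --service-ports publishes the ports defined for the service:

    docker-compose run --rm --service-ports web python manage.py runserver 0.0.0.0:8000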

So as you can see, there are plenty of ways to use pdb in a Docker container. I hope it was helpful for you. Cheers.
