docker – Dockerize PHP Application for Production

Question:

We have a PHP (specifically Laravel) application that should be dockerized for the production environment. But there is a problem with sharing the application source code between the web server and PHP-FPM containers.

Both Nginx and PHP-FPM need access to the application source code, so here are the workarounds suggested on the web:

  1. Having two separate containers for Nginx and PHP-FPM, mounting the source code on the host machine, and creating a volume from it. This volume is then assigned to both containers. This solution is not desirable because every time the application code changes, the entire stack has to be rebuilt and the created volume has to be flushed. Also, these tasks have to be executed on all of our servers, which may waste a lot of time.
  2. Having both PHP-FPM and Nginx in the same container and keeping their processes running with supervisor or an entrypoint script. In this solution, when the source code changes, we build the image once and, hopefully, there is no shared volume to flush, so it seems a good workaround. But the main problem with this solution is that it violates the idea behind containerization. Docker says in its documentation:

    You should have one concern (or running process) per container.

    But here, we have two running processes!
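
For reference, the shared-volume setup described in workaround 1 is typically wired up like the following compose sketch (image names, paths, and ports are illustrative, not taken from our actual stack):

```yaml
# docker-compose.yml — sketch of workaround 1: both containers bind-mount
# the same source directory from the host.
services:
  php:
    image: php:8-fpm
    volumes:
      - ./src:/var/www/html
  nginx:
    image: nginx:latest
    volumes:
      - ./src:/var/www/html
    ports:
      - "80:80"
    depends_on:
      - php
```

This makes the coupling to the host filesystem explicit: every server running this stack needs its own up-to-date copy of `./src`, which is exactly the operational burden described above.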

Is there any other solution that may work on the production environment? I have to mention that we are going to use Swarm or Kubernetes in the near future.


How to solve:

In general, both approaches should be avoided in production, but if I compare volume mounting with two processes per container, I would go for two processes per container rather than mounting host code into the container.

There are cases where the first approach fails outright, such as Fargate, where there is no host to mount from (it is a kind of serverless platform); there you will definitely have to run two processes per container.

The main issue with running multiple processes per container is: what if PHP-FPM goes down while the Nginx process keeps running? You can handle this case in multiple ways; have a look at the approaches suggested in the Docker documentation.


The Docker documentation covers this scenario with either a custom wrapper script or supervisord.

If you need to run more than one service within a container, you can
accomplish this in a few different ways.

  • Put all of your commands in a wrapper script, complete with testing
    and debugging information. Run the wrapper script as your CMD. This is
    a very naive example. First, the wrapper script:

# Start the first process
./my_first_process -D
status=$?
if [ $status -ne 0 ]; then
  echo "Failed to start my_first_process: $status"
  exit $status
fi

# Start the second process
./my_second_process -D
status=$?
if [ $status -ne 0 ]; then
  echo "Failed to start my_second_process: $status"
  exit $status
fi

# Naive check runs checks once a minute to see if either of the processes exited.
# This illustrates part of the heavy lifting you need to do if you want to run
# more than one service in a container. The container exits with an error
# if it detects that either of the processes has exited.
# Otherwise it loops forever, waking up every 60 seconds

while sleep 60; do
  ps aux |grep my_first_process |grep -q -v grep
  PROCESS_1_STATUS=$?
  ps aux |grep my_second_process |grep -q -v grep
  PROCESS_2_STATUS=$?
  # If the greps above find anything, they exit with 0 status
  # If they are not both 0, then something is wrong
  if [ $PROCESS_1_STATUS -ne 0 -o $PROCESS_2_STATUS -ne 0 ]; then
    echo "One of the processes has already exited."
    exit 1
  fi
done
  • Use a process manager like supervisord. This is a moderately
    heavy-weight approach that requires you to package supervisord and its
    configuration in your image (or base your image on one that includes
    supervisord), along with the different applications it manages. Then
    you start supervisord, which manages your processes for you. Here is
    an example Dockerfile using this approach, that assumes the
    pre-written supervisord.conf, my_first_process, and my_second_process
    files all exist in the same directory as your Dockerfile.
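
The example Dockerfile the documentation refers to is not reproduced above; following the docs' pattern, it might look like this (the base image, package names, and file paths are illustrative):

```dockerfile
# Sketch of a supervisord-based image along the lines of the Docker docs
# example; base image and paths are assumptions, not from our real stack.
FROM ubuntu:latest
RUN apt-get update && apt-get install -y supervisor
RUN mkdir -p /var/log/supervisor
COPY supervisord.conf /etc/supervisor/conf.d/supervisord.conf
COPY my_first_process my_first_process
COPY my_second_process my_second_process
CMD ["/usr/bin/supervisord"]
```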

If you go the Supervisor route, you can look into patterns such as shutting Supervisor down once one of its programs is killed, and similar approaches for monitoring the processes.
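
As a sketch of that shutdown pattern, a supervisord.conf along these lines uses an event listener to take the whole container down when either program exits (the program commands and the listener one-liner are assumptions, based on a commonly used community pattern, not an official recipe):

```ini
[supervisord]
nodaemon=true

[program:php-fpm]
command=php-fpm -F          ; -F keeps php-fpm in the foreground
autorestart=false

[program:nginx]
command=nginx -g "daemon off;"
autorestart=false

; When any program stops, signal supervisord (the listener's parent process)
; so the container itself exits and the orchestrator can restart it.
[eventlistener:exit_on_any_exit]
command=sh -c 'printf "READY\n"; read line; kill -SIGTERM "$PPID"'
events=PROCESS_STATE_EXITED,PROCESS_STATE_FATAL,PROCESS_STATE_STOPPED
```

The point of the listener is to restore "container dies when the service dies" semantics, which plain supervisord would otherwise hide from the orchestrator.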


You can create two separate Docker images, one with only your static assets and one with the runnable backend code. The static-asset image could be as minimal as

# Dockerfile.nginx
FROM nginx:latest
COPY . /usr/share/nginx/html

Don’t bind-mount anything anywhere. Do have your CI system build both images:

docker build -t myname/myapp-php:$TAG .
docker build -t myname/myapp-nginx:$TAG -f Dockerfile.nginx .

Now you can run two separate containers (not violating the one-process-per-container guideline), scale them independently (3 nginx but 30 PHP), and not have to manually copy your source code around.
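
With the two images built, wiring the pair together might look like this compose sketch (the service names, the `$TAG` variable, and port 9000 for FastCGI are assumptions; the nginx image's config would point `fastcgi_pass` at `php:9000`):

```yaml
# docker-compose.yml — a sketch; image names, tags, and ports are assumptions.
services:
  php:
    image: myname/myapp-php:${TAG}   # PHP-FPM listening on 9000 inside the network
  nginx:
    image: myname/myapp-nginx:${TAG}
    ports:
      - "80:80"
    depends_on:
      - php
```

Because each image is self-contained, rolling out a new version is just pushing new tags and updating the services; no host filesystem is involved.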

Another useful technique is to publish your static assets to some external hosting system; if you’re running in AWS anyways, S3 works well here. You will still need some kind of proxy to forward requests to either the asset store or your backend service, but that can now just be an Nginx with a custom config file; it doesn’t need any of your application code in it. (In Kubernetes you could run this with an Nginx deployment pointing at a config map with the nginx.conf file.)
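
A minimal Nginx config for that proxy role might look like the following sketch (the bucket URL, upstream name `php`, and script path are all hypothetical placeholders):

```nginx
# nginx.conf fragment — a sketch; bucket name, upstream, and paths are assumptions.
server {
    listen 80;

    # Static assets served from external hosting (e.g. an S3 bucket)
    location /assets/ {
        proxy_pass https://my-bucket.s3.amazonaws.com/;
    }

    # Everything else goes to the PHP-FPM backend service
    location / {
        include fastcgi_params;
        fastcgi_pass php:9000;
        fastcgi_param SCRIPT_FILENAME /var/www/html/public/index.php;
    }
}
```

Note that this container carries only configuration, no application code, so it can be versioned and deployed independently of the backend.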

When you set up your CI system, you definitely should not bind mount code into your containers at build or integration-test time. Test what’s actually in the containers you’re building and not some other copy of your source code.
