Set up Docker

Build and Ship code efficiently with Docker

Installing Docker on all 3 instances:

Go to the AWS EC2 web console and make a note of the public IPs, or get them from the Terraform output.

SSH into each machine as the ubuntu user (the default user in the Ubuntu AMI), for example: ssh ubuntu@<public-ip>

Docker is built on Linux containerization primitives: LXC-style containers, cgroups, and namespaces (Google these to dig deeper).

Docker containers provide a sandbox: each container runs as its own set of processes, isolated so that it cannot interfere with the host or with other containers. By default Docker puts containers on a bridge network between the host and the container, so to reach an application running inside a container you have to expose the port inside the container and bind (publish) a host port to that container port.
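For example, to reach a web server running inside a container from the host, publish the container's port on a host port (the image name here is just an example):

# Publish container port 80 on host port 8080; -d runs the container in the background
docker run -d --name web -p 8080:80 nginx

# The web server inside the container is now reachable from the host
curl http://localhost:8080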

Docker uses a layered file system, which makes images easy to share over the internet: people build on top of existing layers, and you don't have to download a layer that is already present locally. When Docker creates a container (analogy: object) from an image (analogy: class), it adds a writable layer on top of the read-only image layers. Typically an image has a base image (an operating system) on top of which the application runs.
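As a rough sketch, you can see the read-only layers of an image and the separate writable layer of each container with a couple of commands (assuming the hello-world image used later on this page has already been pulled):

# List the read-only layers that make up an image
docker history hello-world

# Each container created from the image gets its own thin writable layer on top
docker create --name demo hello-world
docker rm demo   # removing the container discards only that writable layer; the image stays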

The cloned code repository also contains a docker folder with the following scripts.

# Prerequisites
sudo apt-get install apt-transport-https ca-certificates curl software-properties-common
# Add the docker gpg key
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo apt-key fingerprint 0EBFCD88
# Add the apt repo
sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu $(lsb_release -cs) stable"
sudo apt-get -y update
# Install docker community edition
sudo apt-get -y install docker-ce

The Docker daemon, which runs the containers, needs root permissions, and the client communicates with it through the docker.sock socket file by default. The daemon is not exposed over HTTPS by default, though a few configuration changes to the daemon can enable that (exposing it to the outside world over HTTPS is not recommended). Reference : installing docker
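As a quick sketch, you can see that socket and talk to the daemon through it directly (the Engine API path below is standard, but the exact response depends on your Docker version):

# The daemon listens on a Unix socket owned by root and the docker group
ls -l /var/run/docker.sock

# Query the Docker Engine API over the socket (sudo is still needed at this point)
sudo curl --unix-socket /var/run/docker.sock http://localhost/version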

After installation, instead of prefixing every command with sudo docker, we add ourselves to the docker group so we can run docker without sudo. Reference : post installation

# Docker needs root permission for every command. Instead we add ourselves to the docker group
sudo usermod -aG docker `whoami`

Log out and log back in for the change to take effect.
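To confirm the group change took effect, a quick check (no sudo should be needed if it worked):

# You should see docker in your group list, and the client should reach the daemon without sudo
id -nG | grep docker
docker info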

To verify the installation and get a big-picture overview:

docker run hello-world

Play around! Reference : more on Docker commands
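A few everyday commands to start with (container and image names in angle brackets are placeholders):

docker images                       # images available locally
docker ps -a                        # all containers, running and stopped
docker run -it ubuntu bash          # start an interactive container (pulls ubuntu if not present)
docker exec -it <container> bash    # open a shell inside a running container
docker logs <container>             # view a container's stdout/stderr
docker stop <container>             # stop a running container
docker rm <container>               # remove a stopped container
docker rmi <image>                  # remove a local image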


Dockerfile

Just a file with no extension that Docker reads to build an image.

This example Dockerfile does not do anything meaningful, but it shows the various options available. Reference : Dockerfile options

FROM java:8-jdk
# The base image: java with tag 8-jdk, which itself has Debian as its base image with JDK 8 installed for us

MAINTAINER Alan Schegrer <[email protected]>
# Author information

ENV ZK_VERSION=3.4.6 \
    EXHIBITOR_BRANCH=master \
    CP_VERSION=2.4.3 \
    CP_SHA1=2c469a0e79a7ac801f1c032c2515dd0278134790
# Environment variables that can be used later in this file and also inside the running container

RUN apt-get update && \
    apt-get install -y git

RUN mkdir -p /opt/zk /opt/zk/transactions /opt/zk/snapshots && \
    curl -Lo /tmp/zk.tar.gz <zookeeper-tarball-url>

# Runs any shell commands

COPY wrapper.sh /opt/exhibitor/wrapper.sh
# Copies files from the build context on the host into the image; the destination is a path inside the image (absolute, or relative to WORKDIR)

USER root
# The user that subsequent instructions and the container's main process run as

WORKDIR /opt/exhibitor
# Sets the working directory for subsequent instructions and the container's main process

EXPOSE 2181 2888 3888 8181
# Documents the ports the container listens on; they still have to be published to the host (e.g. with -p) at run time

CMD ["containerpilot", "/opt/exhibitor/wrapper.sh"]
# The command, with its list of arguments, to run when a container is started from this image
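To turn a Dockerfile like this into an image and a running container, the general shape is as follows (the tag name and published port here are just examples):

# Build an image from the Dockerfile in the current directory
docker build -t my-zookeeper .

# Start a container from it, publishing the ZooKeeper client port to the host
docker run -d --name zk -p 2181:2181 my-zookeeper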
