Redis Sentinel is a dedicated process that automates and simplifies Redis replication failover and switchover.

Start with 3 nodes. We will run two Redis instances on two different nodes – 1 master and 1 replica (or slave). Sentinel will be co-located on those 2 nodes, plus a third Sentinel instance on one of our web servers.

Normally you would co-locate a Redis instance on the web/application server and access it via localhost or through a UNIX socket file. This is the straightforward way to incorporate Redis into the application.

For a scalable and highly available setup, Redis should be deployed in a centralized approach, or into a different tier called a cache tier. This allows Redis instances to work together as a dedicated cache provider for the applications, decoupling the applications from the local Redis dependencies.
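
For example, instead of pointing at localhost, the application connects to the cache tier over the network. A minimal reachability check from the app server might look like this (a sketch; it assumes the addresses and the redisuser ACL account configured later in this guide):

# From the app server (192.168.0.30), check that the cache tier answers:
redis-cli -h 192.168.0.31 -p 6379 --user redisuser --pass Zssy56G21Zx ping
# Expected reply: PONG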

Before deploying Redis Sentinel, we have to set up Redis replication consisting of two or more Redis server instances. Let’s start by installing Redis on both servers, redis-01 and redis-02:

1 – 192.168.0.30 – App – Web Server + Sentinel
2 – 192.168.0.31 – redis-01 (master) – Redis Server + Sentinel
3 – 192.168.0.32 – redis-02 (replica) – Redis Server + Sentinel

# apt update
# apt install redis net-tools

Edit /etc/redis/redis.conf:

For redis-01 (master):

bind 127.0.0.1 192.168.0.31
protected-mode no
supervised systemd
masterauth Zssy56G21Zx
masteruser redisuser
user redisuser +@all on >Zssy56G21Zx

For redis-02 (replica):

bind 127.0.0.1 192.168.0.32
protected-mode no
supervised systemd
replicaof 192.168.0.31 6379
masterauth Zssy56G21Zx
masteruser redisuser
user redisuser +@all on >Zssy56G21Zx

  • bind: List all the IP addresses that you want Redis to listen on. For Sentinel to work properly, Redis must be reachable remotely, so we list the interface that Sentinel will communicate over.
  • protected-mode: This must be set to “no” to allow Redis to serve remote connections. This is required for Sentinel as well.
  • supervised: Ubuntu 20.04 uses systemd as its service manager, and the installer package ships systemd unit files, so we specify systemd here.
  • replicaof: This is set only on the replica node. In the initial topology, redis-02 is the replica and redis-01 is the master.
  • masterauth: The password used when authenticating with the master (the password of the user named in masteruser).
  • masteruser: The username used when authenticating with the master.
  • user: We create that user here. The user has full permissions (+@all) and a password. Redis and Sentinel use this user to manage replication and failover.

Restart Redis to apply the changes and enable it on boot:

# sudo systemctl restart redis-server
# sudo systemctl enable redis-server
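
Optionally, confirm that the ACL user exists and can authenticate (a quick sanity check; adjust the host to the node you are testing):

# List configured ACL users; redisuser should appear alongside default:
redis-cli -h 192.168.0.31 ACL LIST

# Authenticate as redisuser and issue a PING:
redis-cli -h 192.168.0.31 --user redisuser --pass Zssy56G21Zx ping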

Verify that Redis is running on port 6379 on both interfaces. The following example is the output from redis-02:

# sudo netstat -tulpn | grep -i redis
tcp        0      0 192.168.0.32:6379       0.0.0.0:*               LISTEN      1246/redis-server 1
tcp        0      0 127.0.0.1:6379          0.0.0.0:*               LISTEN      1246/redis-server 1
tcp        0      0 127.0.0.1:26379         0.0.0.0:*               LISTEN      1128/redis-sentinel
tcp6       0      0 ::1:26379               :::*                    LISTEN      1128/redis-sentinel

Verify the replication is working. On redis-01:

# redis-cli info replication

Output:

# Replication
role:master
connected_slaves:1
slave0:ip=192.168.0.32,port=6379,state=online,offset=10621,lag=1
master_replid:c649cbc9ffab2b6ef7adae7df90a1d5d00913987
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:10754
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:10754

Pay attention to the role, connected_slaves and slave{i} keys. This indicates that redis-01 is the master. Also note that a replica can be a master of another replica – this is also known as chained replication.
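
As an illustration of chained replication (using a hypothetical third node, redis-03, that is not part of this setup), you would point it at the existing replica instead of the master:

# On the hypothetical redis-03: replicate from redis-02, which is itself a replica:
redis-cli replicaof 192.168.0.32 6379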

While on redis-02:

# redis-cli info replication

Output:

# Replication
role:slave
master_host:192.168.0.31
master_port:6379
master_link_status:up
master_last_io_seconds_ago:1
master_sync_in_progress:0
slave_repl_offset:52453
slave_priority:100
slave_read_only:1
connected_slaves:0
master_replid:c649cbc9ffab2b6ef7adae7df90a1d5d00913987
master_replid2:0000000000000000000000000000000000000000
master_repl_offset:52453
second_repl_offset:-1
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:52453

Pay attention to the role, master_host, master_link_status and master_repl_offset. The replication delay between these two nodes can be estimated by comparing the replication offsets reported on both servers.
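
For example, a rough way to compare the two offsets from the app server (a sketch; it assumes both nodes are reachable without a password for the default user):

# Master's current write offset:
redis-cli -h 192.168.0.31 info replication | grep master_repl_offset
# Replica's processed offset; the difference approximates the lag in bytes:
redis-cli -h 192.168.0.32 info replication | grep slave_repl_offset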

Redis Sentinel Deployment

Redis Sentinel is basically the same redis-server process running with the "--sentinel" flag, a different configuration file and a different port. For production use, it is strongly recommended to have at least 3 Sentinel instances for an accurate observation when performing automatic failover. Therefore, we will install Sentinel on the 2 Redis nodes that we have, plus on one of our web servers, 192.168.0.30.
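
For illustration, the two commands below are equivalent ways of starting a Sentinel by hand (in this guide we let systemd manage the process instead):

redis-sentinel /etc/redis/sentinel.conf
redis-server /etc/redis/sentinel.conf --sentinel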

Install the redis-sentinel package on the selected web server (Sentinel is already installed on our Redis hosts):

# apt install redis-sentinel

Stop redis-sentinel to make the changes:

systemctl stop redis-sentinel.service

By default, the Sentinel configuration file is located at /etc/redis/sentinel.conf. Make sure the following configuration lines are set:

App server, 192.168.0.30:

bind 192.168.0.30
port 26379
sentinel monitor mymaster 192.168.0.31 6379 2
sentinel auth-pass mymaster Zssy56G21Zx
sentinel auth-user mymaster redisuser
sentinel down-after-milliseconds mymaster 10000

redis-01, 192.168.0.31:

bind 192.168.0.31
port 26379
sentinel monitor mymaster 192.168.0.31 6379 2
sentinel auth-pass mymaster Zssy56G21Zx
sentinel auth-user mymaster redisuser
sentinel down-after-milliseconds mymaster 10000

redis-02, 192.168.0.32:

bind 192.168.0.32
port 26379
sentinel monitor mymaster 192.168.0.31 6379 2
sentinel auth-pass mymaster Zssy56G21Zx
sentinel auth-user mymaster redisuser
sentinel down-after-milliseconds mymaster 10000

Restart the redis-sentinel daemon to apply the changes:

# sudo systemctl restart redis-sentinel
# sudo systemctl enable redis-sentinel

Make sure redis-sentinel is running on port 26379. On redis-01, you should see something like this:

# netstat -tupln | grep redis

Output:

tcp        0      0 192.168.0.31:26379      0.0.0.0:*               LISTEN      1170/redis-sentinel
tcp        0      0 192.168.0.31:6379       0.0.0.0:*               LISTEN      881/redis-server 12
tcp        0      0 127.0.0.1:6379          0.0.0.0:*               LISTEN      881/redis-server 12

Verify that Sentinel is observing our Redis replication link by looking at the log file, /var/log/redis/redis-sentinel.log. Make sure you see the following lines:

# tail -f /var/log/redis/redis-sentinel.log

Output:

553:X 01 Dec 2021 11:55:36.378 # WARNING supervised by systemd - you MUST set appropriate values for TimeoutStartSec and TimeoutStopSec in your service unit.
553:X 01 Dec 2021 11:55:36.379 # oO0OoO0OoO0Oo Redis is starting oO0OoO0OoO0Oo
553:X 01 Dec 2021 11:55:36.379 # Redis version=6.0.16, bits=64, commit=00000000, modified=0, pid=553, just started
553:X 01 Dec 2021 11:55:36.379 # Configuration loaded
553:X 01 Dec 2021 11:55:36.380 * Running mode=sentinel, port=26379.
553:X 01 Dec 2021 11:55:36.380 # Sentinel ID is 1fc2ddf0a56cb12f8cfe91da98d643fde80c7f29
553:X 01 Dec 2021 11:55:36.380 # +monitor master mymaster 192.168.0.31 6379 quorum 2
553:X 01 Dec 2021 11:55:37.399 * +sentinel sentinel 1cf43ff56ead25a832eb0f5ecd9a1e9ae4cdbbc1 192.168.0.31 26379 @ mymaster 192.168.0.31 6379
553:X 01 Dec 2021 11:55:37.525 # +new-epoch 3
553:X 01 Dec 2021 11:55:38.084 * +sentinel sentinel 5d27f1340e5a04dc6703c60d7b60e7c6a121b44c 192.168.0.32 26379 @ mymaster 192.168.0.31 6379

We can get more information on the Sentinel process by using redis-cli to connect to the Sentinel port, 26379. From the app server, run:

# redis-cli -h 192.168.0.30 -p 26379 sentinel masters

Output:

1)  1) "name"
    2) "mymaster"
    3) "ip"
    4) "192.168.0.31"
    5) "port"
    6) "6379"
    7) "runid"
    8) "ebad0adccca0d81eb09c119b5a44fd5043994d11"
    9) "flags"
   10) "master"
   11) "link-pending-commands"
   12) "0"
   13) "link-refcount"
   14) "1"
   15) "last-ping-sent"
   16) "0"
   17) "last-ok-ping-reply"
   18) "274"
   19) "last-ping-reply"
   20) "274"
   21) "down-after-milliseconds"
   22) "30000"
   23) "info-refresh"
   24) "4035"
   25) "role-reported"
   26) "master"
   27) "role-reported-time"
   28) "1038344"
   29) "config-epoch"
   30) "0"
   31) "num-slaves"
   32) "1"
   33) "num-other-sentinels"
   34) "2"
   35) "quorum"
   36) "2"
   37) "failover-timeout"
   38) "180000"
   39) "parallel-syncs"
   40) "1"

Check the “num-slaves” value, which is 1, and the “num-other-sentinels” value, which is 2, indicating that we have a total of 3 Sentinel nodes (this node plus two others).
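
Sentinel can also be asked directly for the current master address, which is how Sentinel-aware clients discover where to write. For example, from the app server:

redis-cli -h 192.168.0.30 -p 26379 sentinel get-master-addr-by-name mymaster
# Returns the current master's IP and port (192.168.0.31 and 6379 at this point).
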
Failover Testing

We can now test the failover by simply shutting down the Redis service on redis-01 (master):

# sudo systemctl stop redis-server

After 10 seconds (down-after-milliseconds value), you should see the following output in the /var/log/redis/redis-sentinel.log file:

# tail -f /var/log/redis/redis-sentinel.log
553:X 01 Dec 2021 12:41:31.440 # +elected-leader master mymaster 192.168.0.31 6379
553:X 01 Dec 2021 12:41:31.440 # +failover-state-select-slave master mymaster 192.168.0.31 6379
553:X 01 Dec 2021 12:41:31.523 # +selected-slave slave 192.168.0.32:6379 192.168.0.32 6379 @ mymaster 192.168.0.31 6379
553:X 01 Dec 2021 12:41:31.523 * +failover-state-send-slaveof-noone slave 192.168.0.32:6379 192.168.0.32 6379 @ mymaster 192.168.0.31 6379
553:X 01 Dec 2021 12:41:31.586 * +failover-state-wait-promotion slave 192.168.0.32:6379 192.168.0.32 6379 @ mymaster 192.168.0.31 6379
553:X 01 Dec 2021 12:41:32.371 # +promoted-slave slave 192.168.0.32:6379 192.168.0.32 6379 @ mymaster 192.168.0.31 6379
553:X 01 Dec 2021 12:41:32.371 # +failover-state-reconf-slaves master mymaster 192.168.0.31 6379
553:X 01 Dec 2021 12:41:32.413 # +failover-end master mymaster 192.168.0.31 6379
553:X 01 Dec 2021 12:41:32.413 # +switch-master mymaster 192.168.0.31 6379 192.168.0.32 6379
553:X 01 Dec 2021 12:41:32.413 * +slave slave 192.168.0.31:6379 192.168.0.31 6379 @ mymaster 192.168.0.32 6379
553:X 01 Dec 2021 12:41:42.454 # +sdown slave 192.168.0.31:6379 192.168.0.31 6379 @ mymaster 192.168.0.32 6379
553:X 01 Dec 2021 12:43:17.878 # -sdown slave 192.168.0.31:6379 192.168.0.31 6379 @ mymaster 192.168.0.32 6379

Now the slave, 192.168.0.32, has been promoted to master. Once our old master (redis-01) comes back online, you should see something like this reported by Sentinel:

553:X 01 Dec 2021 12:41:32.413 * +slave slave 192.168.0.31:6379 192.168.0.31 6379 @ mymaster 192.168.0.32 6379

The above indicates that the old master has been converted to a slave and is now replicating from the current master, redis-02. We can confirm this by checking the replication info on redis-02:

root@redis-02:~# redis-cli info replication
# Replication
role:master
connected_slaves:0
master_replid:ddf2e69e46513233a51b835babdb18823d6b4a7b
master_replid2:e30c6877b48ef5408cb3f1212d040a271d4d3e92
master_repl_offset:626689
second_repl_offset:611794
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:1
repl_backlog_histlen:626689

If we want to promote redis-01 to master status again, we can simply bring down redis-02 or use the Sentinel failover command as below:

root@app:~# redis-cli -h 192.168.0.30 -p 26379 sentinel failover mymaster

In the Sentinel log, you should see:

553:X 01 Dec 2021 12:54:03.444 # +switch-master mymaster 192.168.0.32 6379 192.168.0.31 6379
553:X 01 Dec 2021 12:54:03.444 * +slave slave 192.168.0.32:6379 192.168.0.32 6379 @ mymaster 192.168.0.31 6379
553:X 01 Dec 2021 12:54:13.653 * +convert-to-slave slave 192.168.0.32:6379 192.168.0.32 6379 @ mymaster 192.168.0.31 6379

Check the replication info on redis-01:

root@redis-01:~# redis-cli info replication
# Replication
role:master
connected_slaves:1
slave0:ip=192.168.0.32,port=6379,state=online,offset=795039,lag=1
master_replid:2b5a609a0d28227480c9126d95a915a870002811
master_replid2:ddf2e69e46513233a51b835babdb18823d6b4a7b
master_repl_offset:795039
second_repl_offset:766456
repl_backlog_active:1
repl_backlog_size:1048576
repl_backlog_first_byte_offset:635864
repl_backlog_histlen:159176

Configuring PSK for server-agent communication

Generate a pre-shared key:

# openssl rand -hex 32
c68a0164045a04ea2f1d821e3d3275e782d671a166613b3d81c07f99e3b92843

On the host, edit the Zabbix agent configuration file:

nano /etc/zabbix/zabbix_agentd.conf

Add:

####### TLS-RELATED PARAMETERS #######
TLSConnect=psk
TLSAccept=psk
TLSPSKIdentity=PSK 001
TLSPSKFile=/etc/zabbix/zabbix.psk

Create the /etc/zabbix/zabbix.psk file:

echo c68a0164045a04ea2f1d821e3d3275e782d671a166613b3d81c07f99e3b92843 > /etc/zabbix/zabbix.psk
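
Optionally, tighten the permissions on the PSK file so only the agent can read it (the zabbix user/group name is an assumption; adjust it to match your installation):

chown zabbix:zabbix /etc/zabbix/zabbix.psk
chmod 400 /etc/zabbix/zabbix.psk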

Restart the agent:

systemctl restart zabbix-agent.service

Now you can test the connection using zabbix_get, for example:

$ zabbix_get -s 127.0.0.1 -k "system.cpu.load[all,avg1]" --tls-connect=psk \
            --tls-psk-identity="PSK 001" --tls-psk-file=/etc/zabbix/zabbix.psk

Configure PSK encryption for this agent in Zabbix frontend:

Go to: Configuration → Hosts
Select host and click on Encryption tab

Get the package for Debian 10 Buster:

wget https://repo.zabbix.com/zabbix/5.4/debian/pool/main/z/zabbix-release/zabbix-release_5.4-1%2Bdebian10_all.deb

Install the repo for Debian 10 Buster:

dpkg -i zabbix-release_5.4-1+debian10_all.deb

Get the package for Debian 11 Bullseye:

wget https://repo.zabbix.com/zabbix/5.4/debian/pool/main/z/zabbix-release/zabbix-release_5.4-1%2Bdebian11_all.deb

Install the repo for Debian 11 Bullseye:

dpkg -i zabbix-release_5.4-1+debian11_all.deb
apt update
apt full-upgrade

For Agent 2

apt install zabbix-agent2

We now need to edit the configuration file to tell the agent where to find the server. Open /etc/zabbix/zabbix_agent2.conf in your preferred text editor and make the following changes to tell the agent which Zabbix servers are allowed to talk to it:

nano /etc/zabbix/zabbix_agent2.conf
Server=[IP or hostname of your Zabbix server]
ServerActive=[IP or hostname of your Zabbix server]

We also need to tell Zabbix the hostname of the system. This doesn’t have to be the actual hostname, it is the display name we will use within Zabbix for the system. Comment out the default value of Hostname=Zabbix server and replace it with the following:

HostnameItem=system.hostname

This will tell the agent to automatically populate the hostname value with the system hostname. You could just set the hostname within the configuration file. However, automatically populating it allows you to reuse the same configuration file across all your hosts, simplifying automation if you have a lot of hosts to monitor.
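
The relevant part of /etc/zabbix/zabbix_agent2.conf would then look roughly like this (a sketch; 192.168.0.10 stands in for your Zabbix server address):

Server=192.168.0.10
ServerActive=192.168.0.10
# Hostname= stays commented out; HostnameItem fills in the value automatically
HostnameItem=system.hostname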

Start Agent 2

systemctl enable zabbix-agent2
systemctl start zabbix-agent2

Add FW rule:

ufw allow from [Zabbix server IP] to any port 10050 proto tcp

Install WireGuard

sudo apt update
sudo apt install wireguard

Now that you have WireGuard installed, the next step is to generate a private and public keypair for the server.

Use the following umask command to ensure new directories and files (in your current terminal session only) get created with limited read and write permissions:

umask 077

Now you can proceed and create the private key for WireGuard using the following command:

wg genkey | sudo tee /etc/wireguard/private.key

The next step is to create the corresponding public key, which is derived from the private key. Use the following command to create the public key file:

sudo cat /etc/wireguard/private.key | wg pubkey | sudo tee /etc/wireguard/public.key

When you run the command you will again receive a single line of base64 encoded output, which is the public key for your WireGuard Server. Copy it somewhere for reference, since you will need to distribute the public key to any peer that connects to the server.

Choosing an IPv4 Range

You can choose any range of IP addresses from the following reserved blocks of addresses:

10.0.0.0 to 10.255.255.255 (10/8 prefix)
172.16.0.0 to 172.31.255.255 (172.16/12 prefix)
192.168.0.0 to 192.168.255.255 (192.168/16 prefix)

For the purposes of this tutorial we’ll use 10.8.0.0/24 as a block of IP addresses from the first range of reserved IPs.

Creating a WireGuard Server Configuration

Once you have the required private key and IP address(es), create a new configuration file using nano or your preferred editor by running the following command:

sudo nano /etc/wireguard/wg0.conf

Add the following lines to the file, substituting your private key in place of the base64_encoded_private_key_goes_here value, and the IP address(es) on the Address line. You can also change the ListenPort line if you would like WireGuard to be available on a different port:

[Interface]
PrivateKey = base64_encoded_private_key_goes_here
Address = 10.8.0.1/24, fd0d:86fa:c3bc::1/64
ListenPort = 51820
SaveConfig = true

Starting the WireGuard Server

sudo systemctl enable wg-quick@wg0.service

Now start the service:

sudo systemctl start wg-quick@wg0.service

Double check that the WireGuard service is active with the following command. You should see active (running) in the output:

sudo systemctl status wg-quick@wg0.service

Configuring a WireGuard Peer

You can add as many peers as you like to your VPN by generating a key pair and configuration using the following steps. If you add multiple peers to the VPN be sure to keep track of their private IP addresses to prevent collisions.

To configure the WireGuard Peer, ensure that you have the WireGuard package installed using the following apt commands. On the WireGuard peer run:

sudo apt update
sudo apt install wireguard

Creating the WireGuard Peer’s Key Pair

umask 077

Create the private key for the peer using the following command:

wg genkey | sudo tee /etc/wireguard/private.key

Next use the following command to create the public key file:

sudo cat /etc/wireguard/private.key | wg pubkey | sudo tee /etc/wireguard/public.key

Copy it somewhere for reference, since you will need to distribute the public key to the WireGuard Server in order to establish an encrypted connection.

Creating the WireGuard Peer’s Configuration File

sudo nano /etc/wireguard/wg0.conf

[Interface]
PrivateKey = base64_encoded_peer_private_key_goes_here
Address = 10.8.0.2/24
[Peer]
PublicKey = The base64 encoded public key from the WireGuard Server.
AllowedIPs = 10.8.0.0/24
Endpoint = 159.65.164.142:51820
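
If the peer sits behind NAT, you can optionally add a keepalive entry to the [Peer] section so the tunnel's NAT mapping stays open (an optional tweak, not part of this guide's baseline setup):

PersistentKeepalive = 25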

Adding the Peer’s Public Key to the WireGuard Server

Ensure that you have a copy of the base64 encoded public key for the WireGuard Peer by running:

sudo cat /etc/wireguard/public.key
7ybiQ/5mQijU87xa2ozd0a73Ix5ABQ9mzwCGX2OPrkI=

Now log into the WireGuard server, and run the following command:

sudo wg set wg0 peer 7ybiQ/5mQijU87xa2ozd0a73Ix5ABQ9mzwCGX2OPrkI= allowed-ips 10.8.0.2

If you would like to update the allowed-ips for an existing peer, you can run the same command again, but change the IP addresses. Multiple IP addresses are supported. For example, to change the WireGuard Peer that you just added to add an IP like 10.8.0.100 to the existing 10.8.0.2, you would run the following:

sudo wg set wg0 peer 7ybiQ/5mQijU87xa2ozd0a73Ix5ABQ9mzwCGX2OPrkI= allowed-ips 10.8.0.2,10.8.0.100

Once you have run the command to add the peer, check the status of the tunnel on the server using the wg command:

sudo wg
interface: wg0
public key: 2KOvl8HbUz1rxTJ/l46o/Yz4G34Q6rfFsmvOROu9HAY=
private key: (hidden)
listening port: 51820

peer: 7ybiQ/5mQijU87xa2ozd0a73Ix5ABQ9mzwCGX2OPrkI=
endpoint: 70.112.179.47:49999
allowed ips: 10.8.0.2/32
latest handshake: 10 minutes, 58 seconds ago
transfer: 20.80 KiB received, 25.17 KiB sent

Connecting the WireGuard Peer to the Tunnel

To start the tunnel, run the following on the WireGuard Peer:

sudo wg-quick up wg0

You will receive output like the following:

[#] ip link add wg0 type wireguard
[#] wg setconf wg0 /dev/fd/63
[#] ip -4 address add 10.8.0.2/24 dev wg0
[#] ip link set mtu 1420 up dev wg0

You can check the status of the tunnel on the peer using the wg command:

sudo wg

You can also check the status on the server again, and you will receive similar output.

Verify that your peer is using the VPN by using the ip route command.

ip route get 10.8.0.1
10.8.0.1 via 167.99.48.1 dev eth0 src 167.99.62.37 uid 0
cache

If your peer has a browser installed, you can also visit ipleak.net and ipv6-test.com to confirm that your peer is routing its traffic over the VPN.

Once you are ready to disconnect from the VPN on the peer, use the wg-quick command:

sudo wg-quick down wg0

Ref:
https://www.digitalocean.com/community/tutorials/how-to-set-up-wireguard-on-ubuntu-20-04
https://www.wireguard.com/install/
https://linuxize.com/post/how-to-set-up-wireguard-vpn-on-debian-10/

Training Videos

A Technical Introduction To IPFS

IPFS Simply Explained. Let’s take a look at how IPFS works, how it can solve issues like censorship, and if it would really work across multiple planets!

DEVCON1: IPFS – Juan Batiz-Benet

Set up the repository (Docker on Ubuntu)

Update the apt package index and install packages to allow apt to use a repository over HTTPS:

$ sudo apt-get update
$ sudo apt-get install \
 ca-certificates \
 curl \
 gnupg \
 lsb-release

Add Docker’s official GPG key:

$ curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

Use the following command to set up the stable repository.

 echo \
  "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/ubuntu \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

Install Docker Engine

Update the apt package index, and install the latest version of Docker Engine and containerd, or go to the next step to install a specific version:

$ sudo apt-get update
 $ sudo apt-get install docker-ce docker-ce-cli containerd.io

Verify that Docker Engine is installed correctly by running the hello-world image.

$ sudo docker run hello-world

This command downloads a test image and runs it in a container.

Docker Engine is installed and running. The docker group is created but no users are added to it. You need to use sudo to run Docker commands. Continue to Linux postinstall to allow non-privileged users to run Docker commands and for other optional configuration steps.

Manage Docker as a non-root user. To create the docker group and add your user:

$ sudo groupadd docker

Add your user to the docker group.

$ sudo usermod -aG docker $USER

Log out and log back in so that your group membership is re-evaluated. If testing on a virtual machine, it may be necessary to restart the virtual machine for changes to take effect. On a desktop Linux environment such as X Windows, log out of your session completely and then log back in. On Linux, you can also run the following command to activate the changes to groups:

# newgrp docker 

Verify that you can run docker commands without sudo.

$ docker run hello-world

This command downloads a test image and runs it in a container.

Configure Docker to start on boot

sudo systemctl enable docker.service
sudo systemctl enable containerd.service

Configure where the Docker daemon listens for connections

By default, the Docker daemon listens for connections on a UNIX socket to accept requests from local clients. It is possible to allow Docker to accept requests from remote hosts by configuring it to listen on an IP address and port as well as the UNIX socket. For more detailed information on this configuration option take a look at “Bind Docker to another host/port or a unix socket” section of the Docker CLI Reference article.

Before configuring Docker to accept connections from remote hosts it is critically important that you understand the security implications of opening docker to the network. If steps are not taken to secure the connection, it is possible for remote non-root users to gain root access on the host. For more information on how to use TLS certificates to secure this connection, check this article on how to protect the Docker daemon socket.

Configuring Docker to accept remote connections can be done with the docker.service systemd unit file for Linux distributions using systemd, such as recent versions of RedHat, CentOS, Ubuntu and SLES, or with the daemon.json file which is recommended for Linux distributions that do not use systemd.

systemd vs daemon.json

Configuring Docker to listen for connections using both the systemd unit file and the daemon.json file causes a conflict that prevents Docker from starting.

Configuring remote access with systemd unit file

Use the command sudo systemctl edit docker.service to open an override file for docker.service in a text editor.

Add or modify the following lines, substituting your own values.

    [Service]
    ExecStart=
    ExecStart=/usr/bin/dockerd -H fd:// -H tcp://127.0.0.1:2375

Save the file. Reload the systemctl configuration.

 $ sudo systemctl daemon-reload

Restart Docker.

$ sudo systemctl restart docker.service

Check to see whether the change was honored by reviewing the output of netstat to confirm dockerd is listening on the configured port.

$ sudo netstat -lntp | grep dockerd
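
As an extra check (assuming the 127.0.0.1:2375 address configured above), you can point the Docker CLI at the TCP socket directly:

$ docker -H tcp://127.0.0.1:2375 version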

Configuring remote access with daemon.json

Set the hosts array in the /etc/docker/daemon.json to connect to the UNIX socket and an IP address, as follows:

    {
      "hosts": ["unix:///var/run/docker.sock", "tcp://127.0.0.1:2375"]
    }

Restart Docker.

Check to see whether the change was honored by reviewing the output of netstat to confirm dockerd is listening on the configured port.

 sudo netstat -lntp | grep dockerd

Ref:
https://docs.docker.com/engine/install/ubuntu/
https://docs.docker.com/engine/install/linux-postinstall/

Set up the repository (Docker on Debian)

Update the apt package index and install packages to allow apt to use a repository over HTTPS:

$ sudo apt-get update
$ sudo apt-get install ca-certificates curl gnupg lsb-release

Add Docker’s official GPG key:

$ curl -fsSL https://download.docker.com/linux/debian/gpg | sudo gpg --dearmor -o /usr/share/keyrings/docker-archive-keyring.gpg

Use the following command to set up the stable repository.

echo \
 "deb [arch=$(dpkg --print-architecture) signed-by=/usr/share/keyrings/docker-archive-keyring.gpg] https://download.docker.com/linux/debian \
  $(lsb_release -cs) stable" | sudo tee /etc/apt/sources.list.d/docker.list > /dev/null

Install Docker Engine

Update the apt package index, and install the latest version of Docker Engine and containerd, or go to the next step to install a specific version:

$ sudo apt-get update
$ sudo apt-get install docker-ce docker-ce-cli containerd.io

Verify that Docker Engine is installed correctly by running the hello-world image.

$ sudo docker run hello-world

Configure Docker to start on boot

$ sudo systemctl enable docker.service
$ sudo systemctl enable containerd.service

Manage Docker as a non-root user

Create the docker group.

sudo groupadd docker

Add your user to the docker group.

sudo usermod -aG docker $USER

Log out and log back in so that your group membership is re-evaluated. If testing on a virtual machine, it may be necessary to restart the virtual machine for changes to take effect. On a desktop Linux environment such as X Windows, log out of your session completely and then log back in.

On Linux, you can also run the following command to activate the changes to groups:

newgrp docker 

Verify that you can run docker commands without sudo.

$ docker run hello-world

This command downloads a test image and runs it in a container. When the container runs, it prints a message and exits.

Configuring remote access with systemd unit file

Use the command sudo systemctl edit docker.service to open an override file for docker.service in a text editor. Add or modify the following lines, substituting your own values.

[Service]
ExecStart=
ExecStart=/usr/bin/dockerd -H fd:// -H tcp://127.0.0.1:2375

Save the file. Reload the systemctl configuration.

 sudo systemctl daemon-reload

Restart Docker.

 sudo systemctl restart docker.service

Check to see whether the change was honored by reviewing the output of netstat to confirm dockerd is listening on the configured port.

$ sudo netstat -lntp | grep dockerd

Configuring remote access with daemon.json

Set the hosts array in the /etc/docker/daemon.json to connect to the UNIX socket and an IP address, as follows:

    {
      "hosts": ["unix:///var/run/docker.sock", "tcp://127.0.0.1:2375"]
    }

Restart Docker. Check to see whether the change was honored by reviewing the output of netstat to confirm dockerd is listening on the configured port.

 sudo netstat -lntp | grep dockerd

Ref:
https://docs.docker.com/engine/install/debian/
https://docs.docker.com/engine/install/linux-postinstall/