How to update Ubuntu

$ sudo apt update
$ sudo apt upgrade

Info from man apt-get:

update
update is used to resynchronize the package index files from their sources. The indexes of available packages are fetched from the location(s) specified in /etc/apt/sources.list. For example, when using a Debian archive, this command retrieves and scans the Packages.gz files, so that information about new and updated packages is available. An update should always be performed before an upgrade or dist-upgrade. Please be aware that the overall progress meter will be incorrect as the size of the package files cannot be known in advance.

upgrade
upgrade is used to install the newest versions of all packages currently installed on the system from the sources enumerated in /etc/apt/sources.list. Packages currently installed with new versions available are retrieved and upgraded; under no circumstances are currently installed packages removed, or packages not already installed retrieved and installed. New versions of currently installed packages that cannot be upgraded without changing the install status of another package will be left at their current version. An update must be performed first so that apt-get knows that new versions of packages are available.

dist-upgrade
dist-upgrade in addition to performing the function of upgrade, also intelligently handles changing dependencies with new versions of packages; apt-get has a “smart” conflict resolution system, and it will attempt to upgrade the most important packages at the expense of less important ones if necessary. The dist-upgrade command may therefore remove some packages. The /etc/apt/sources.list file contains a list of locations from which to retrieve desired package files. See also apt_preferences(5) for a mechanism for overriding the general settings for individual packages.

The main distinction between apt-get upgrade and apt-get dist-upgrade is that the former never removes packages: installed packages with newer versions are upgraded, but nothing is removed. The latter may install new packages and remove existing ones in order to satisfy changed dependencies.
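If you want to see in advance what dist-upgrade would change (in particular, which packages it would remove), apt-get has a simulate flag; a quick, non-destructive check looks roughly like this (nothing is actually installed or removed with -s):

$ apt list --upgradable
$ sudo apt-get -s dist-upgrade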

Error in WordPress Toolkit when updating.

Selected items were updated with errors:
The following errors have occurred while updating WordPress instance #1 ('https://www.captainscbdshop.com'):
- Unable to update plugin 'akismet`4.1.12`1`', details: Fatal error: Allowed memory size of 41943040 bytes exhausted (tried to allocate 20480 bytes) in /home/captainscbdshop/public_html/wp-content/plugins/woocommerce/packages/action-scheduler/classes/data-stores/ActionScheduler_DBStore.php on line 199
Fatal error: Allowed memory size of 41943040 bytes exhausted (tried to allocate 20480 bytes) in /home/captainscbdshop/public_html/wp-includes/plugin.php on line 439

- Unable to update plugin 'creative-mail-by-constant-contact`1.4.6`1`', details: Fatal error: Allowed memory size of 41943040 bytes exhausted (tried to allocate 32768 bytes) in /home/captainscbdshop/public_html/wp-content/plugins/woocommerce/packages/action-scheduler/classes/data-stores/ActionScheduler_DBStore.php on line 693
Fatal error: Allowed memory size of 41943040 bytes exhausted (tried to allocate 32768 bytes) in /usr/local/cpanel/3rdparty/wp-toolkit/plib/vendor/wp-cli/vendor/wp-cli/php-cli-tools/lib/cli/Colors.php on line 1

- Unable to update plugin 'jetpack`10.1`1`', details: Fatal error: Allowed memory size of 41943040 bytes exhausted (tried to allocate 20480 bytes) in /home/captainscbdshop/public_html/wp-content/plugins/woocommerce/packages/action-scheduler/classes/data-stores/ActionScheduler_DBStore.php on line 199
Fatal error: Allowed memory size of 41943040 bytes exhausted (tried to allocate 20480 bytes) in /home/captainscbdshop/public_html/wp-includes/plugin.php on line 439

- Unable to update plugin 'woocommerce`5.7.1`1`', details: Fatal error: Allowed memory size of 41943040 bytes exhausted (tried to allocate 20480 bytes) in /home/captainscbdshop/public_html/wp-content/plugins/woocommerce/packages/action-scheduler/classes/data-stores/ActionScheduler_DBStore.php on line 199
Fatal error: Allowed memory size of 41943040 bytes exhausted (tried to allocate 20480 bytes) in /home/captainscbdshop/public_html/wp-includes/plugin.php on line 439

- Unable to update plugin 'woocommerce-services`1.25.18`1`', details: Fatal error: Allowed memory size of 41943040 bytes exhausted (tried to allocate 65536 bytes) in /home/captainscbdshop/public_html/wp-includes/widgets/class-wp-widget-rss.php on line 136
Fatal error: Allowed memory size of 41943040 bytes exhausted (tried to allocate 65536 bytes) in /home/captainscbdshop/public_html/wp-includes/class-wp-fatal-error-handler.php on line 45

Checking for updates was performed with errors:
Failed to reset cache for the instance #1: Fatal error: Allowed memory size of 41943040 bytes exhausted (tried to allocate 20480 bytes) in /home/captainscbdshop/public_html/wp-content/plugins/woocommerce/packages/action-scheduler/classes/data-stores/ActionScheduler_DBStore.php on line 509
Fatal error: Allowed memory size of 41943040 bytes exhausted (tried to allocate 4096 bytes) in /usr/local/cpanel/3rdparty/wp-toolkit/plib/vendor/wp-cli/vendor/composer/ClassLoader.php on line 427

Solution

In File Manager, edit the following file and raise the lower memory limit:

public_html/wp-includes/default-constants.php

Change From:

// Define memory limits.
	if ( ! defined( 'WP_MEMORY_LIMIT' ) ) {
		if ( false === wp_is_ini_value_changeable( 'memory_limit' ) ) {
			define( 'WP_MEMORY_LIMIT', $current_limit );
		} elseif ( is_multisite() ) {
			define( 'WP_MEMORY_LIMIT', '64M' );
		} else {
			define( 'WP_MEMORY_LIMIT', '40M' );
		}
	}

Change to:

// Define memory limits.
	if ( ! defined( 'WP_MEMORY_LIMIT' ) ) {
		if ( false === wp_is_ini_value_changeable( 'memory_limit' ) ) {
			define( 'WP_MEMORY_LIMIT', $current_limit );
		} elseif ( is_multisite() ) {
			define( 'WP_MEMORY_LIMIT', '64M' );
		} else {
			define( 'WP_MEMORY_LIMIT', '128M' );
		}
	}

In WordPress Toolkit, re-run the check for updates. The updates should now complete.
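Note: default-constants.php is a WordPress core file, so this change will be lost on the next core update. A more update-safe alternative (assuming you can edit wp-config.php) is to define the constant there, above the "That's all, stop editing!" line; core only falls back to 40M when WP_MEMORY_LIMIT is not already defined:

define( 'WP_MEMORY_LIMIT', '128M' );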

To install Samba, we run:

sudo apt update
sudo apt install samba

We can check if the installation was successful by running:

whereis samba

The following should be its output:

samba: /usr/sbin/samba /usr/lib/x86_64-linux-gnu/samba /etc/samba /usr/share/samba /usr/share/man/man8/samba.8.gz /usr/share/man/man7/samba.7.gz

Setting up Samba

Now that Samba is installed, we need to create a user and a directory for it to share:
Add a user

# adduser roger
Adding user `roger' ...
Adding new group `roger' (1000) ...
Adding new user `roger' (1000) with group `roger' ...
Creating home directory `/home/roger' ...
Copying files from `/etc/skel' ...
New password:
Retype new password:
passwd: password updated successfully
Changing the user information for roger
Enter the new value, or press ENTER for the default
        Full Name []: Roger
        Room Number []:
        Work Phone []:
        Home Phone []:
        Other []:
Is the information correct? [Y/n] y

Add a directory to share (replace roger with your own username).

mkdir /home/roger/sambashare/

The command above creates a new folder sambashare in our home directory which we will share later.

The configuration file for Samba is located at /etc/samba/smb.conf. To add the new directory as a share, we edit this file. Let's back up the original first.

sudo cp /etc/samba/smb.conf /etc/samba/smb.conf-bk
sudo nano /etc/samba/smb.conf

Workgroup: make sure the workgroup matches the one configured on your Windows machine.

workgroup = WORKGROUP

At the bottom of the file, add the following lines:

[sambashare]
    comment = Samba on Ubuntu
    path = /home/username/sambashare
    read only = no
    browsable = yes

Then press Ctrl-O to save and Ctrl-X to exit from the nano text editor.
What we’ve just added

comment: A brief description of the share.

path: The directory of our share.

read only: Permission to modify the contents of the share folder is only granted when the value of this directive is no.

browsable: When set to yes, file managers such as Ubuntu’s default file manager will list this share under “Network” (it could also appear as browseable).
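Before restarting, it is worth checking the syntax of the edited file with testparm, which ships with Samba and reports any errors in smb.conf:

testparm /etc/samba/smb.conf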

Now that we have our new share configured, save it and restart Samba for it to take effect:

sudo service smbd restart

Update the firewall rules to allow Samba traffic:

sudo ufw allow samba
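To confirm the rule was added (assuming ufw is enabled), check the firewall status:

sudo ufw status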

Setting up User Accounts and Connecting to Share

Since Samba doesn’t use the system account password, we need to set up a Samba password for our user account:

sudo smbpasswd -a username

Note
The username must belong to an existing system account, otherwise smbpasswd will not save it.
Connecting to Share

On Ubuntu: open the default file manager, click Connect to Server, and enter:

smb://ip-address/sambashare

On Windows, open File Explorer and enter the following in the address bar:

\\ip-address\sambashare

Note: ip-address is the Samba server IP address and sambashare is the name of the share.

You’ll be prompted for your credentials. Enter them to connect!
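Alternatively, a Linux client can mount the share from the command line. This is only a sketch and assumes the cifs-utils package is installed and that /mnt/sambashare is used as the mount point:

sudo mkdir -p /mnt/sambashare
sudo mount -t cifs //ip-address/sambashare /mnt/sambashare -o username=username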

CUDA Toolkit on Ubuntu WSL

https://developer.nvidia.com/cuda-downloads?target_os=Linux

Uninstall all Linux distributions.
Uninstall WSL (right-click on the windows start logo> App and Features> Enable or disable windows features> uncheck the box for Windows Subsystem for Linux).
Restart the computer.
Open PowerShell in administrator mode and perform the simplified installation of WSL, which (i) enables the optional WSL and Virtual Machine Platform components, (ii) downloads and installs the latest Linux kernel, (iii) sets WSL 2 as the default, and (iv) downloads and installs an Ubuntu distribution (for the record, I had previously performed the manual installation):

PS C:\Users\sanam> wsl.exe --install
PS C:\Users\sanam> exit

Restart your computer, and wait while Ubuntu boots.
Within Ubuntu, create the new user account and password when prompted.
Run the commands for installing and configuring the CUDA Toolkit:

$ sudo su

Enter your password to have root access and update Ubuntu.

/home/san# apt update
/home/san# apt upgrade

Then, follow the CUDA Toolkit Documentation > CUDA on WSL tutorial at https://docs.nvidia.com/cuda/wsl-user-guide/index.html, which covers the same steps as the YouTube video “GPU Accelerated Machine Learning with WSL 2”:

/home/san# apt-key adv --fetch-keys http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64/7fa2af80.pub

/home/san# sh -c 'echo "deb http://developer.download.nvidia.com/compute/cuda/repos/ubuntu1804/x86_64 /" > /etc/apt/sources.list.d/cuda.list'

/home/san# apt-get update

/home/san# apt-get install -y cuda-toolkit-11-0

Running a CUDA application:

/home/san# cd /usr/local/cuda/samples/4_Finance/BlackScholes

/usr/local/cuda/samples/4_Finance/BlackScholes# make

/usr/local/cuda/samples/4_Finance/BlackScholes# ./BlackScholes
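To confirm the toolkit itself installed correctly, you can also check the compiler version; nvcc is typically not on PATH by default with this install method, but should be available under /usr/local/cuda/bin:

# /usr/local/cuda/bin/nvcc --version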

To install and use WP-CLI, you will need access to your server’s command line. Administrators with root access can log in with SSH; cPanel users can log in with SSH if it’s available, or use cPanel’s built-in Terminal.

Download WP-CLI

$ curl -O https://raw.githubusercontent.com/wp-cli/builds/gh-pages/phar/wp-cli.phar

Change Permissions

chmod +x wp-cli.phar
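To run it as wp from anywhere, move the phar onto your PATH and verify it works; /usr/local/bin/wp is the location suggested by the WP-CLI docs (without root access, you can instead keep the phar in your home directory and run it with php wp-cli.phar):

sudo mv wp-cli.phar /usr/local/bin/wp
wp --info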

https://docs.cpanel.net/knowledge-base/cpanel-developed-plugins/wordpress-toolkit/#install-wordpress-toolkit

In order to get the latest version of Redis, we will use apt to install it from the official Ubuntu repositories.

Update your local apt package cache and install Redis by typing:

$ sudo apt update
$ sudo apt install redis-server

This will download and install Redis and its dependencies.

On a regular (non-WSL) Ubuntu installation, change the configuration so the server runs as a systemd service.

Open this file with your preferred text editor:

$ sudo nano /etc/redis/redis.conf

Find the following line:

supervised no

and change it to:

supervised systemd

Restart

# sudo systemctl restart redis.service

On WSL Ubuntu:

sudo /etc/init.d/redis-server restart

Start by checking that the Redis service is running:

$ sudo systemctl status redis

On WSL Ubuntu:

$ sudo /etc/init.d/redis-server status
* redis-server is running

To test that Redis is functioning correctly, connect to the server using the command-line client:

$ redis-cli

In the prompt that follows, test connectivity with the ping command:

127.0.0.1:6379> ping
PONG
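As a further sanity check, set and read back a test key (the key name test here is arbitrary):

127.0.0.1:6379> set test "It's working!"
OK
127.0.0.1:6379> get test
"It's working!"
127.0.0.1:6379> exit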

Docker Network commands

# docker network ls
NETWORK ID     NAME                  DRIVER    SCOPE
234dbbb8d381   bridge                bridge    local
e23bbf6e6a54   docker-hive_default   bridge    local
e284120f22c7   host                  host      local
019daa8ddd49   none                  null      local
$ docker ps --format "table {{.ID}}\t{{.Status}}\t{{.Names}}"
CONTAINER ID   STATUS             NAMES
608fe6f7a1c4   Up About an hour   docker-tutorial

Docker Example

To illustrate this, we will use a Hive and Hadoop environment containing 5 Docker containers, from https://github.com/mesmacosta/docker-hive.
Since I am on Windows, I use GitHub Desktop.

Launch GitHub Desktop and then go to File >> Clone Repository >> URL.

Go to https://github.com/mesmacosta/docker-hive and click Code > Copy. Paste the URL into GitHub Desktop and click Clone.

Now open Command Prompt or PowerShell AS ADMINISTRATOR and go to the directory where the Docker files are located. In my case it’s in Documents > GitHub > docker-hive.

Now let’s start up those containers:

# docker-compose up -d

Note: If you receive this error:
Error response from daemon: Ports are not available: listen tcp 0.0.0.0:50070: bind: An attempt was made to access a socket in a way forbidden by its access permissions.

Run this in command prompt or PS:

net stop winnat
net start winnat

We can see 5 containers:

>docker ps --format "table {{.ID}}\t{{.Status}}\t{{.Names}}"
CONTAINER ID   STATUS                   NAMES
30714f65fc36   Up 2 minutes             docker-hive_hive-metastore_1
cc281caa92ba   Up 2 minutes             docker-hive_hive-server_1
66aed41cdc5e   Up 2 minutes             docker-hive_hive-metastore-postgresql_1
d90c10f7cfe6   Up 2 minutes (healthy)   docker-hive_datanode_1
baf998183015   Up 2 minutes (healthy)   docker-hive_namenode_1

Next let’s check our Docker networks:

>docker network ls
NETWORK ID     NAME                  DRIVER    SCOPE
234dbbb8d381   bridge                bridge    local
d438c2ba7c56   docker-hive_default   bridge    local
e284120f22c7   host                  host      local
019daa8ddd49   none                  null      local

By default, Docker Compose sets up a single network for your app, and that network is given a name based on the “project name”, which is derived from the name of the directory it lives in.

So since our directory is named docker-hive, this explains the new network.
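To see which containers are attached to that network, inspect it by name; the Containers section of the output lists each attached container and its IP address:

>docker network inspect docker-hive_default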

Getting more information.

Docker inspect can retrieve low-level information on Docker objects. You can pick out any field from the returned JSON.

Let’s get the IP address of the docker-hive_datanode_1 container.

>docker ps --format "table {{.ID}}\t{{.Status}}\t{{.Names}}"
CONTAINER ID   STATUS                   NAMES
30714f65fc36   Up 2 minutes             docker-hive_hive-metastore_1
cc281caa92ba   Up 2 minutes             docker-hive_hive-server_1
66aed41cdc5e   Up 2 minutes             docker-hive_hive-metastore-postgresql_1
d90c10f7cfe6   Up 2 minutes (healthy)   docker-hive_datanode_1
baf998183015   Up 2 minutes (healthy)   docker-hive_namenode_1

Take the container ID from the listing above and use it to look up the IP address:

$ docker inspect -f '{{range.NetworkSettings.Networks}}{{.IPAddress}}{{end}}' d90c10f7cfe6
172.20.0.2

Docker Logs

How to check Docker logs
Run sudo docker logs <container_id>, where <container_id> is the ID of the Docker container.

Get Docker Container:

# sudo docker ps --format "table {{.ID}}\t{{.Status}}\t{{.Names}}"

Now view the logs:

$ sudo docker logs d90c10f7cfe6   

Docker Ports

$ docker container ls --format "table {{.ID}}\t{{.Names}}\t{{.Ports}}" -a
CONTAINER ID   NAMES             PORTS
a624f0ae744e   cool_moore
a0d9f2b7ce84   zealous_mclean    0.0.0.0:80->80/tcp, :::80->80/tcp
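For a single container, docker port prints just its published port mappings. Using the container name from the listing above, the output should look something like this:

$ docker port zealous_mclean
80/tcp -> 0.0.0.0:80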

docker inspect

This command returns low-level information about a container or image.
Syntax

docker inspect Container/Image

Select IP

# docker inspect c52b91aa0dea | grep -i ip

Ports

docker inspect c52b91aa0dea | grep -i port
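Grepping the raw JSON works, but it can match unrelated fields (anything containing "port"). An alternative is to pull the port map out directly with a Go template, using the same container ID:

docker inspect -f '{{json .NetworkSettings.Ports}}' c52b91aa0dea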

Run sudo /usr/bin/nvidia-uninstall to uninstall a manually installed driver, if you still have one.

Remove all remnants of your old NVIDIA driver (simulate first):

sudo apt remove --purge -s nvidia-*
sudo apt remove --purge -s libnvidia-*

If the simulation shows that only nvidia packages would be removed, run the removal for real:

sudo apt remove --purge nvidia-*
sudo apt remove --purge libnvidia-*

Run sudo apt update and ubuntu-drivers devices again.

If nvidia-390 is still the recommended driver, run sudo ubuntu-drivers autoinstall again and check the output.
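Putting those last steps together, the sequence looks roughly like this; ubuntu-drivers devices lists the detected GPU and the recommended driver, and autoinstall installs the recommended one:

sudo apt update
ubuntu-drivers devices
sudo ubuntu-drivers autoinstall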