Linux-in-the-House.svg


Linux in the House

The amazing things you can do with computer software in your home, ya GNU?


By:

  • Don Cohoon

Website:

RSS:

Feedback:

Book Last Updated: 29-March-2024

Version: 2.280

Creative Commons License
This work is licensed under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.


Tux Image By lewing@isc.tamu.edu Larry Ewing and The GIMP, Attribution, https://commons.wikimedia.org/w/index.php?curid=574842

Created with:


Preface

Everyone with some computer skills should take control of the data and programs in their house. A few minutes of work every day will gain you knowledge, security, and a better life.

This guide walks from beginner level to intermediate. You should be able to find something useful no matter what your current level of experience, even the most advanced. This book explores mostly the software side (programs) of a home computer, while briefly looking at the hardware that will support it.

Advanced users might want to jump ahead at this point to the Most Important Feature of All.

Build Your Own PC at MicroCenter

BuildYourOwnPC.jpg

Reference: https://www.microcenter.com/site/content/howtochooseyourpcparts.aspx



Not just pieces and parts

It is possible to build a computer from parts, and just as satisfying to have someone else build it. Buying a pre-built PC is like working with a house builder or ordering a new car from the factory. Discovering the available options and paying only for what you need will make owning the hardware and software more satisfying.

For example, I like small form factor systems that consume less power, partly to save on electricity and partly to enable longer run-time on a battery backup (UPS) system. That's why I chose the Intel NUC [1] and BeagleBone [2] Single Board Computer (SBC) systems for light-duty jobs like e-mail and cloud, and a larger ATX [3] motherboard in a server case for Network Attached Storage (NAS) [4], housing up to five SATA disk drives.

  1. https://www.intel.com/content/www/us/en/products/details/nuc.htm
  2. https://beagleboard.org/bone
  3. https://en.wikipedia.org/wiki/ATX
  4. https://www.truenas.com/truenas-scale/

How to decide

The first thing to think about is what you want to do. What computing feature of your life would you like to learn about and have control over?

Next, consider a backup plan. When bad weather hits, where would you go? The same applies to your computer system(s). I generally have two pieces of hardware for each task, then document and back up the software. Backup systems can be hand-me-downs or replaced systems and parts, to save money.

Finally, you have to choose how much time and money you want to invest. Usually time equals money. Some things you may not be comfortable doing just now; you can change your mind later when you have more experience.

Why Open Source

After investing your time and money, it is a shame to find out that the company you bought something from has discontinued your product or gone out of business. This can also happen in Open Source environments if the developers retire, move on, or just get tired of working on it.

With Open Source you can obtain the source code and keep it going yourself, or find someone who can. Using your skill and knowledge while building your system will give you confidence and know-how. Plus, there are many resources on the Internet for asking questions, and finding answers, along the way.

Running services on your own system can provide better security, as hackers like to go for bigger targets. Your data is not on someone else's disk (the cloud) but on your own, to which only you grant access. System administrators have access to everything on a computer, so be your own Sys Admin over your own data.

Advanced experience affords you the ability to change what you do not like or fix a problem the original owner does not. You have the source code, you can make the change.

Evaluating a Project

Open Source projects will post lots of information about themselves that you can use to decide if it is worth your time to use it.

  1. Does it do what you want
  2. Does it have an active community
  3. When was the last release
  4. Do they care about security
  5. What language does it use
  6. How does it fit on your existing hardware

First I ask myself, "self", "What are you trying to do?" My answers were: e-mail, cloud storage, private contacts and calendar, read the news, monitor the house when I'm out, listen to music, watch a movie or TV, and save my notes where I can find them.

Then search the Internet for Open Source software that seems to do that. Sites like:

are good starters. Avoid sites with lots of advertising, binary downloads and non-secure web servers. Do not download packages unless the Linux package manager cannot find it. Just look to see if the things you want are available, what the name is, then do

$ sudo apt search <name>

to see if it is already packaged for you. This way you get secure software and it will be updated automatically.
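For example, checking whether the tilde editor (installed later in this book) is already packaged might look like this; the package name here is just an illustration:

```shell
# Refresh the package index, then search for a package by name
sudo apt-get update
apt search tilde      # search package names and descriptions
apt show tilde        # inspect version, homepage, and dependencies
```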

Linux packages are a sign of active development, and the distributions should link back to each project's support website. Remember, once you pick a Linux distribution, you need to stick with its packages.

Smaller products can be found on github.com [1] and gitlab.com [2] and are generally compatible with any distribution.

  1. https://github.com/search
  2. https://gitlab.com/explore/projects

Be Curious, Just Try

Take some time to read the use-case and security pages. Passwords should be encrypted, network access should be none or limited, test cases should be extensive, and developer participation counts should be available. If you open a service to the Internet, expect hackers to come at it, so error logging is a must.

Find out what programming language(s) they use, because you should have a quick look at it to decide if it is readable and something you might tackle if needed. Otherwise be prepared to discard it if a big problem arises. When you get some time, try learning bash [1], Python [2], C [3], C++ [4], and Rust [5]. That will have you covered for most projects, and open up new doors of fun.

Lastly, think about which computer will run it: does it have enough disk, memory, and CPU, and how will you back it up?

  1. https://linuxconfig.org/bash-scripting-tutorial-for-beginners
  2. https://www.learnpython.org/
  3. https://www.cprogramming.com/
  4. https://www.codecademy.com/learn/learn-c-plus-plus
  5. https://google.github.io/comprehensive-rust/welcome.html
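A quick way to answer those capacity questions is with a few standard commands; this sketch works on any Linux system:

```shell
# Quick capacity check before adopting a new project
df -h /      # free disk space on the root filesystem
free -h      # total and available memory
nproc        # number of CPU cores
uname -m     # CPU architecture (x86_64, aarch64, ...)
```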

Ok, Jump in the Water, it's Just Fine

Now you obtain the hardware, load up an operating system, and start installing packages to do something useful.

  • Buy or find hardware
  • Pick and install a Linux distribution on it
  • Install packages 😃

I chose Ubuntu because it has been around a long time, supports all my hardware, has good package support, and I've used it for years. The BeagleBone SBC comes with Debian, which Ubuntu is built from, so I stick with that. My laptop runs Linux Mint, which has a nice user interface.

My hardware is three levels:

  1. Small: BeagleBone AI and AI-64 single board computers (SBC) [1]. Applications: News, Music
  2. Medium: Intel NUC [2] boxes with SSD and NVMe storage, and about 16~32GB of memory. The single Ethernet port and the graphics built onto the motherboard work just fine for my usage. Also consider a laptop; the Dell XPS-13 [3] is a good one, as is System76 [4]. Applications: E-Mail, Cloud, Home Automation
  3. Large: Gigabyte ATX motherboard [5] with SATA disk drive connections for NAS. Seagate Ironwolf 2TB disk drives, and 64GB of memory. Applications: NAS

Try to install applications on suitably sized boxes, and don't forget battery backup to protect your computers from dropouts, surges, and spikes in electrical power. APC has a nice 865W Back-UPS [6] with an extension battery pack [7] that will run several systems for several hours. The less power you demand, the longer a battery backup will last. One nice thing about laptops: built-in battery backup.

  1. https://beagleboard.org/bone
  2. https://www.intel.com/content/www/us/en/products/details/nuc.htm
  3. https://www.dell.com/en-us/shop/dell-laptops/xps-13-laptop/spd/xps-13-9315-laptop
  4. https://system76.com/laptops
  5. https://www.microcenter.com/site/content/howtochooseyourpcparts.aspx
  6. https://www.apc.com/us/en/product/BR1500G/apc-backups-pro-1500va-865w-tower-120v-10x-nema-515r-outlets-avr-lcd-user-replaceable-battery/
  7. https://www.apc.com/us/en/product/BR24BPG/apc-backups-pro-external-battery-pack-for-1500va-backups-pro-models-formerly-backups-rs-1500/

Distribution Install

This is documented well elsewhere, and may change slightly from release to release, so I will not waste time or space here; I will just list the common websites.

Common Linux Distributions:

Download and installation for Ubuntu can be found here:

https://ubuntu.com/tutorials/install-ubuntu-desktop

Get Familiar with the Command Line

Linux command line for beginners: https://ubuntu.com/tutorials/command-line-for-beginners#1-overview

Bash Guide for Beginners: https://tldp.org/LDP/Bash-Beginners-Guide/html/index.html

Advanced Bash-Scripting Guide: https://tldp.org/LDP/abs/html/index.html

Command line handbook: https://linuxhandbook.com/a-to-z-linux-commands/

Use man <command> to read the man page at the command line for any Linux command.

Example:

  • <space> or CTRL-F to go forward
  • CTRL-B to go back
  • /<word> to search the page for <word>
  • q to quit
$ man rsync
rsync(1)                          User Commands                         rsync(1)

NAME
       rsync - a fast, versatile, remote (and local) file-copying tool

SYNOPSIS
       Local:
           rsync [OPTION...] SRC... [DEST]

       Access via remote shell:
           Pull:
               rsync [OPTION...] [USER@]HOST:SRC... [DEST]
           Push:
               rsync [OPTION...] SRC... [USER@]HOST:DEST

       Access via rsync daemon:
           Pull:
               rsync [OPTION...] [USER@]HOST::SRC... [DEST]
               rsync [OPTION...] rsync://[USER@]HOST[:PORT]/SRC... [DEST]
           Push:
 Manual page rsync(1) line 1 (press h for help or q to quit)
Edit Files

Learn how to edit files with vi here [1] (thanks to the University of Tennessee)

  1. https://www.jics.utk.edu/files/images/csure-reu/PDF-DOC/VI-TUTORIAL.pdf
Tilde - text editor from the 1990's 🤔

Or edit with a simple text editor, tilde [1]. Hint: to open the file menu use ESC+F, Edit menu is ESC+E.

$ sudo apt-get install tilde
$ tilde

tilde-text-editor.png

  1. https://github.com/gphalkes/tilde

Most Important Feature of All

SECURITY of course!

Do not skimp on security. If you think it is hard, try recovering from Identity Theft or Ransomware. Lock the windows and doors to your computer.

Security Tips:

  • Start with your router.

    • Change the administrative password BEFORE connecting to the Internet.
    • Make sure all ports are closed for incoming access.
    • Read => RouterSecurity.org [1] <= for excellent advice.
  • Use a password manager.

    • KeePassXC [2] is a multi-platform, offline database. It works on Linux, Phone, Windows, and Mac.
    • Sync your password database with your own service. Nextcloud is excellent and easy.
  • Set your passwords.

    • Do not keep the default password for any application.
    • Use different passwords for EVERY app.
  • Look at your log files and e-mail alerts.

    • The next steps will have services like Logwatch and Fail2ban. Read the messages filtered out by them.

Remember the old saying: an ounce of prevention is worth a pound of cure.

  1. https://routersecurity.org/
  2. https://keepassxc.org/
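KeePassXC has a built-in password generator; as a quick alternative at the command line, OpenSSL (installed on most Linux systems) can produce a strong random password:

```shell
# Print a random 24-byte password, base64-encoded (about 32 characters)
openssl rand -base64 24
```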

Go to => Set Up a New Server

Proceed in the order presented; some things depend on prior setups.

Conventions used in this book

Text in code block format, like this -> code block, are meant to be executed on the Linux command line, or text to be copied into a configuration file.

For commands like $ sudo ..., do not type the dollar sign ($); just start with sudo .... The dollar sign is a common Linux prompt.

All file contents have their code blocks preceded by -> File: <filename>.

Anything surrounded by angle brackets, like <name>, is meant to be substituted with your particular value, without the angle brackets.

Hyperlinks in the document will open in the current web browser page. To open in a new tab/window use right-click and select 'open in new tab' from the on screen menu.

All host names are given as example.com. You are expected to substitute your own domain name. Also check the hostname, as www and mail hosts may not exist on your network.

RedHat and Debian differ in some commands, usually for system administration. To show these alternatives without cluttering up the page, collapsible blue bars are used, like this:

Debian

Hi from Debian!

RedHat

Hi from RedHat!


Set Up a New Server



Use this as a guide to enable proper monitoring and maintenance of any new server on the network.

Sample Home Network:

graph TD;
        Modem<-->Router;
        Router<-->MainSwitch;
        Router<-->WiFi;
        Router-->GuestWiFi;
        GuestWiFi-->Camera;
        MainSwitch<-->ServerSwitch;
        MainSwitch<-->Desktop;
        MainSwitch<-->Laptop;
        ServerSwitch-->MailServer;
        ServerSwitch-->WebServer;
        ServerSwitch-->NetworkAccessStorage;

Sometimes the Modem, Router and Main Switch are one unit, or there is no modem.

-----> Single Arrow is a limited access network (VLAN)

<----> Double Arrows is an Open Network

The key point here is that the servers are isolated on a separate switch for performance and security reasons, using a VLAN (Virtual Local Area Network) local to the Server Switch. VLAN packets between the servers never leave the Server Switch. Each server has another IP address, not on the VLAN, for public access.

A guest WiFi service does not have access to the Main Switch because it is on its own VLAN, so local resources are protected from that experimental 12-year-old guest.

If your camera accesses a cloud service (most do), link it to the Guest WiFi for security purposes. Any other untrusted device should also be on the Guest WiFi: robot vacuum cleaners, car chargers, the car itself, TV streaming boxes, VoIP (Voice over IP) telephones, garage door openers, door locks, etc.
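On the Linux servers themselves, a VLAN interface can be declared in a netplan configuration (netplan is covered in the Check your Network section below). This is only a sketch; the interface name enp3s0, VLAN ID 10, and the 192.168.10.x addresses are assumptions to adapt to your network:

```yaml
network:
    version: 2
    ethernets:
        enp3s0:
            dhcp4: true
    vlans:
        vlan10:             # VLAN interface riding on enp3s0
            id: 10          # 802.1Q VLAN tag
            link: enp3s0
            addresses: [192.168.10.5/24]
```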

Check your Network

  • See Network Interfaces
$ ifconfig
eno1: flags=4163<UP,BROADCAST,RUNNING,MULTICAST>  mtu 1500
        inet 192.168.0.8  netmask 255.255.255.0  broadcast 192.168.0.255
        inet6 fe80::0000:1111:3333:2222  prefixlen 64  scopeid 0x20<link>
        ether 2a:53:9b:00:f9:21  txqueuelen 1000  (Ethernet)
        RX packets 342047883  bytes 378663045018 (378.6 GB)
        RX errors 0  dropped 54131  overruns 0  frame 0
        TX packets 221663343  bytes 165067861773 (165.0 GB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        device interrupt 16  memory 0xc0b00000-c0b20000  

lo: flags=73<UP,LOOPBACK,RUNNING>  mtu 65536
        inet 127.0.0.1  netmask 255.0.0.0
        inet6 ::1  prefixlen 128  scopeid 0x10<host>
        loop  txqueuelen 1000  (Local Loopback)
        RX packets 32619960  bytes 99080107335 (99.0 GB)
        RX errors 0  dropped 0  overruns 0  frame 0
        TX packets 32619960  bytes 99080107335 (99.0 GB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
# ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: ens3: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 36:27:10:52:32:96 brd ff:ff:ff:ff:ff:ff
    altname enp0s3
    inet 192.168.0.5/24 brd 192.168.0.255 scope global dynamic noprefixroute ens3
       valid_lft 66645sec preferred_lft 66645sec
    inet6 fec0::000:ff:1111:1111/64 scope site dynamic noprefixroute 
       valid_lft 86390sec preferred_lft 14390sec
    inet6 fe80::000:ff:1111:1111/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
  • System log files: /var/log/syslog (Debian) or /var/log/messages (RedHat)
  • Bring Network Up:
$ sudo cat /etc/netplan/01-network-manager-all.yaml
network:
    version: 2
    renderer: networkd
    ethernets:
        enp3s0:
            dhcp4: true
$ sudo netplan try
$ sudo netplan -d apply
$ sudo systemctl restart systemd-networkd

Reference: https://netplan.io/
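Servers usually want a fixed address rather than DHCP. Here is a minimal static-address sketch, assuming interface enp3s0, a 192.168.0.x network, and the router at 192.168.0.1 (the routes syntax needs a reasonably recent netplan):

```yaml
network:
    version: 2
    renderer: networkd
    ethernets:
        enp3s0:
            dhcp4: false
            addresses: [192.168.0.10/24]
            routes:
                - to: default        # default gateway
                  via: 192.168.0.1
            nameservers:
                addresses: [192.168.0.1]
```

Test with sudo netplan try as above; it rolls back automatically if you lose connectivity.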

Update Package Repository

Debian

Always refresh the package repository before getting started.

$ sudo apt-get update

You may need to add extra repositories, just check the sources.

The four main repositories are:

  • Main - Canonical-supported free and open-source software.
  • Universe - Community-maintained free and open-source software.
  • Restricted - Proprietary drivers for devices.
  • Multiverse - Software restricted by copyright or legal issues.
  • Backports [1] - Newer software from a later release, rebuilt for the current one.
  1. Backports are for new versions you really, really need on a stable release.
$ sudo add-apt-repository universe
$ sudo grep '^deb ' /etc/apt/sources.list
...
deb http://us.archive.ubuntu.com/ubuntu/ jammy universe
...
$ sudo add-apt-repository --remove universe

RedHat

Always refresh the package repository before getting started.

$ sudo dnf update
  • Now you can disable subscription-manager if you do not have a RedHat subscription.

Change: From -> enabled=1 To -> enabled=0

$ sudo vi /etc/yum/pluginconf.d/subscription-manager.conf
$ sudo yum clean all
0 files removed

  • You may need to add extra repositories, just check the sources.

The EPEL repository provides additional high-quality packages for RHEL-based distributions. EPEL is a selection of packages from Fedora, but only packages that are not in RHEL or its layered products to avoid conflicts.

The folks at Fedora have very nicely put up an automatic build and repo system and they are calling it COPR (Cool Other Package Repositories).

Reference:

$ sudo dnf install epel-release 'dnf-command(copr)' 

New User and Group to Use

Be sure to use the same user ID numbers across systems; when you share files using NFS, the numbers must match.

$ sudo adduser don --uid 1001
Adding user `don' ...
Adding new group `don' (1001) ...
Adding new user `don' (1001) with group `don' ...
Creating home directory `/home/don' ...
Copying files from `/etc/skel' ...
New password:
Retype new password:
passwd: password updated successfully
Changing the user information for don
Enter the new value, or press ENTER for the default
        Full Name []: Don
        Room Number []:
        Work Phone []:
        Home Phone []:
        Other []:
Is the information correct? [Y/n] y
Adding new user `don' to extra groups ...
Adding user `don' to group `dialout' ...
Adding user `don' to group `i2c' ...
Adding user `don' to group `spi' ...
Adding user `don' to group `cdrom' ...
Adding user `don' to group `floppy' ...
Adding user `don' to group `audio' ...
Adding user `don' to group `video' ...
Adding user `don' to group `plugdev' ...
Adding user `don' to group `users' ...

If this user is an administrator:

Debian

Add them to group sudo.
$ sudo usermod -aG sudo rootbk

Check your user for group '27(sudo)'.

$ id rootbk
uid=1002(rootbk) gid=1002(rootbk) 
groups=1002(rootbk),20(dialout),24(cdrom),25(floppy),27(sudo),29(audio),44(video),46(plugdev),100(users),114(i2c),993(spi)

RedHat

Add them to group wheel.
$ sudo usermod -aG wheel rootbk

Check your user for group '10(wheel)'.

# id rootbk
uid=1002(rootbk) gid=1002(rootbk) groups=1002(rootbk),10(wheel)

Also set the root password, in case the system will not boot:
$ sudo passwd root
[sudo] password for don: 
New password: 
Retype new password: 
passwd: password updated successfully

Firewall

Every machine should have a firewall enabled, especially before connecting to the internet.

Nowadays there is a choice between Uncomplicated Firewall (ufw) and firewalld. Choose wisely or be hacked.

Debian

ufw - Uncomplicated Firewall
$ sudo apt-get install ufw
$ sudo ufw allow 22/tcp
$ sudo ufw enable
$ sudo ufw status numbered
Status: active

     To                         Action      From
     --                         ------      ----
[ 1] 22/tcp                     ALLOW IN    Anywhere                               
[ 2] 22/tcp (v6)                ALLOW IN    Anywhere (v6)           

$ sudo ufw delete 2
$ sudo ufw logging low
$ sudo ufw logging on

NOTE: Logging values are: low|medium|high. Only network blocks are logged on low.

Reference: https://launchpad.net/ufw

RedHat

firewalld
$ sudo dnf install firewalld
$ sudo firewall-cmd --add-service=ssh
success
$ sudo firewall-cmd --list-services
cockpit dhcpv6-client ssh
$ sudo firewall-cmd --remove-service=cockpit
success
$ sudo firewall-cmd --remove-service=dhcpv6-client
success
$ sudo firewall-cmd --list-services
ssh
$ sudo firewall-cmd --runtime-to-permanent
success

Reference: https://firewalld.org/

Block bad actors, whole Autonomous System [1] groups at a time

Take the IP addresses reported by Logwatch, Logcheck, and Logwatcher, and feed them into the firewall.sh script.

Run the git clone and install the dependencies based on your operating system.

File: ~/firewall.sh

#!/bin/bash
##############################################################################
#
# File: firewall.sh
#
# Purpose: Block IP address or CIDR range using OS firewall
#
# Dependencies: 
#
#  git clone https://github.com/nitefood/asn 
#
#  * For more detail get an ~/.asn/iqs_token
#     from https://github.com/nitefood/asn#ip-reputation-api-token
#
# * **Debian 10 / Ubuntu 20.04 (or newer):**
#
#  ```
#  apt -y install curl whois bind9-host mtr-tiny jq ipcalc grepcidr nmap ncat aha
#  ```
#  * Enable ufw and allow the ports you need
#
#  # ufw allow 22
#  # ufw enable
#
#  * Delete rules not needed:
#  # ufw status numbered
#  # ufw delete <number>
#
# * **CentOS / RHEL / Rocky Linux 8:**
#
# # Install repos:
# $ sudo dnf repolist
# repo id                                repo name
# appstream                              CentOS Stream 9 - AppStream
# baseos                                 CentOS Stream 9 - BaseOS
# epel                                   Extra Packages for Enterprise Linux 9 - x86_64
# epel-next                              Extra Packages for Enterprise Linux 9 - Next - x86_64
# extras-common                          CentOS Stream 9 - Extras packages
#
# $ ls /etc/yum.repos.d
# centos-addons.repo  centos.repo  epel-next.repo  epel-next-testing.repo  epel.repo  epel-testing.repo  redhat.repo
#
# dnf install bind-utils jq whois curl nmap ipcalc grepcidr aha
#
#   If you have a list of IP addresses to block (text file, each IP on a separate line),
#     you can easily import that to your block list:
#
#    # firewall-cmd --permanent --ipset=networkblock --add-entries-from-file=/path/to/blocklist.txt
#    # firewall-cmd --reload
#
#   To view ipsets:
#    # firewall-cmd --permanent --get-ipsets
#     networkblock
#    # firewall-cmd --permanent --info-ipset=networkblock
#     networkblock
#       type: hash:net
#       options: maxelem=1000000 family=inet hashsize=4096
#       entries: 46.148.40.0/24
#
#   # firewall-cmd --add-service=smtp
#   success
#   # firewall-cmd --add-service=smtps
#   success
#   # firewall-cmd --list-services
#   cockpit dhcpv6-client smtp smtps ssh
#   # firewall-cmd --remove-service=cockpit
#   success
#   # firewall-cmd --remove-service=dhcpv6-client
#   success
#   # firewall-cmd --list-services
#   smtp smtps ssh
#   # firewall-cmd --runtime-to-permanent
#   success
#
#   # firewall-cmd --list-all
#     public (active)
#       target: default
#       icmp-block-inversion: no
#       interfaces: ens3
#       sources: 
#       services: smtp smtps ssh
#       ports: 
#       protocols: 
#       forward: no
#       masquerade: no
#       forward-ports: 
#       source-ports: 
#       icmp-blocks: 
#       rich rules: 
#
# https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/security_guide/sec-setting_and_controlling_ip_sets_using_firewalld
#
# * TO UNDO A MISTAKEN BLOCK:
#   # firewall-cmd --permanent --ipset=networkblock --remove-entry=x.x.x.x/y
#   # firewall-cmd --reload
#
# * To drop ipset
#   # firewall-cmd --permanent --delete-ipset=networkblock
#   # firewall-cmd --reload
#
# Author     Date     Description
# ---------- -------- --------------------------------------------------------
# D. Cohoon  Jan-2023 Created
# D. Cohoon  Feb-2023 Add RedHat firewalld
##############################################################################
DIR=/root
LOG=${DIR}/firewall.log
WHO=/tmp/whois.txt
CIDR=/tmp/whois.cidr
IP="${1}"
OS=$(/usr/bin/hostnamectl|/usr/bin/grep 'Operating System'|/usr/bin/cut -d: -f2|/usr/bin/awk '{print $1}')
#.............................................................................
function set_ipset() {
  FOUND_IPSET=0
  while read SET
  do
    if [ -n "${SET}" ] && [ "${SET}" == "networkblock" ]; then
      FOUND_IPSET=1
    fi
  done <<< "$(sudo /usr/bin/firewall-cmd --permanent --get-ipsets)"
  #
  if [ $FOUND_IPSET -eq 0 ]; then
    # Create networkblock ipset
    sudo /usr/bin/firewall-cmd --permanent --new-ipset=networkblock --type=hash:net \
      --option=maxelem=1000000 --option=family=inet --option=hashsize=4096
    # Add new ipset to drop zone
    sudo /usr/bin/firewall-cmd --permanent --zone=drop --add-source=ipset:networkblock
    # reload
    sudo /usr/bin/firewall-cmd --reload
  fi
}
#
#.............................................................................
function run_asn() {
  ${DIR}/asn/asn -n ${IP} > ${WHO}
  /usr/bin/cat ${WHO}
  RANGE=$(/usr/bin/cat ${WHO} | /usr/bin/grep 'NET' | /usr/bin/grep '/' | /usr/bin/awk -Fm '{print $6}' | /usr/bin/cut -d" " -f1)
  /usr/bin/echo "CDR: ${RANGE}"
  /usr/bin/echo "${RANGE}" > ${CIDR}
}
#.............................................................................
#
if [ -n "${1}" ]; then
  run_asn
else
  /usr/bin/echo "Usage: ${0} <IP Address>"
  exit 1
fi
#.............................................................................
#
  /usr/bin/grep -v deaggregate ${CIDR} > ${CIDR}.block
  while read -r IP
  do
    /usr/bin/echo "$(/usr/bin/date) - OS: ${OS}" | /usr/bin/tee -a ${LOG}
    /usr/bin/echo "Blocking: ${IP}" | /usr/bin/tee -a ${LOG}
    case ${OS} in
      AlmaLinux|CentOS) 
        /usr/bin/echo "Firewalld"
        set_ipset
        sudo /usr/bin/firewall-cmd --permanent --ipset=networkblock --add-entry=${IP}
        sudo /usr/bin/firewall-cmd --reload
        ;;
      Ubuntu|Debian) 
        /usr/bin/echo "ufw"
        sudo /usr/sbin/ufw prepend deny from ${IP} to any 2>&1 |tee -a $LOG
        ;;
    esac
  done < ${CIDR}.block

  1. https://www.arin.net/resources/guide/asn/

Set your host and domain names

File: /etc/hosts

127.0.0.1	  localhost
192.168.1.5 	www.example.com 	example.com	www

# The following lines are desirable for IPv6 capable hosts
::1     localhost ip6-localhost ip6-loopback
ff02::1 ip6-allnodes
ff02::2 ip6-allrouters

File: /etc/hostname

don.example.com
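On systemd-based distributions, both the running hostname and /etc/hostname can be set in one step with hostnamectl; substitute your own name:

```shell
# Set the system host name (also writes /etc/hostname)
sudo hostnamectl set-hostname don.example.com
# Verify
hostnamectl status
hostname -f      # prints the fully qualified name
```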

Rsyslog - Send Syslog Entries to Remote Syslog Host

It is a good idea to send log messages to another host in case the system crashes: you will be able to see the last gasping breath of the dying server. Also, in the event of a compromised system, hackers usually zero out the local syslog to cover their tracks; with remote logging you still have their traces on the central rsyslog host. Combining logs on one system also makes detailed log analysis simpler.

Local System

Replicate log entries: Add the following to cause log entries to be in /var/log/syslog locally and be sent to a remote syslog host. If you do not have a Remote Syslog Host, skip this.

File: /etc/rsyslog.conf

Add these lines on local system.

~
# Remote logging - Aug 2020 Don
# Provides UDP forwarding
*.* @192.168.1.5:514 #this is the logging host
~

Alert: Create the following to send syslog alerts to email if the severity is high (3 or below).

File: /etc/rsyslog.d/alert.conf

Create the file if it does not exist and replace with these lines.

module(load="ommail")
template (name="mailBody"  type="string" string="Alert for %hostname%:\n\nTimestamp: %timereported%\nSeverity:  %syslogseverity-text%\nProgram:   %programname%\nMessage:  %msg%")
template (name="mailSubject" type="string" string="[%hostname%] Syslog alert for %programname%")

if $syslogseverity <= 3 and not ($msg contains 'brcmfmac') then {
   action(type="ommail" server="192.168.1.3" port="25"
          mailfrom="rsyslog@localhost"
          mailto="don@example.com"
          subject.template="mailSubject"
          template="mailBody"
          action.execonlyonceeveryinterval="3600")
}

Remote Syslog Host

Allow remote hosts to log here: Open firewall port 514/udp on remote syslog host.

$ sudo ufw allow 514/udp

File: /etc/rsyslog.conf

Add these lines to remote syslog host.

~
# provides UDP syslog reception
module(load="imudp")
input(type="imudp" port="514")
~
# Process remote logs into separate directories, then stop. Do not duplicate into syslog
$template RemoteLogs,"/var/log/%HOSTNAME%/%PROGRAMNAME%.log"
*.* ?RemoteLogs
& stop

Restart rsyslog

$ sudo systemctl restart rsyslog
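To confirm forwarding works, write a test entry with logger (part of util-linux) on the local system, then look for it on the remote syslog host; the exact file name depends on the RemoteLogs template above:

```shell
# On the local system: emit a test message at err severity
logger -p user.err "rsyslog forwarding test"
# On the remote syslog host: search the sending host's log directory
# (directory name comes from the RemoteLogs template: /var/log/<hostname>/)
grep -r "rsyslog forwarding test" /var/log/
```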

Time Control

All servers should be set up to synchronize their time over the network using Network Time Protocol (NTP). This is critical in validating security certificates. For offline systems, consider using a Real Time Clock (RTC) attached to something like BeagleBone.

timezone

Change to match your timezone.

File: /etc/timezone

$ cat /etc/timezone
America/New_York

Set the timezone with timedatectl, and verify.

$ sudo timedatectl set-timezone America/New_York
$ timedatectl
               Local time: Sun 2022-10-09 18:27:11 EDT
           Universal time: Sun 2022-10-09 22:27:11 UTC
                 RTC time: n/a
                Time zone: America/New_York (EDT, -0400)
System clock synchronized: yes
              NTP service: active
          RTC in local TZ: no

RedHat

RedHat uses the chronyd service.

File: /etc/chrony.conf

server pool.ntp.org iburst
driftfile /var/lib/chrony/drift
makestep 1.0 3
rtcsync
keyfile /etc/chrony.keys
leapsectz right/UTC
logdir /var/log/chrony

Restart to pick up new config

$ sudo systemctl restart chronyd
$ sudo systemctl status chronyd
● chronyd.service - NTP client/server
   Loaded: loaded (/usr/lib/systemd/system/chronyd.service; enabled; vendor preset: enabled)
   Active: active (running) since Fri 2023-02-17 15:47:28 EST; 20h ago
     Docs: man:chronyd(8)
           man:chrony.conf(5)
  Process: 798 ExecStartPost=/usr/libexec/chrony-helper update-daemon (code=exited, status=0/SUCCESS)
  Process: 789 ExecStart=/usr/sbin/chronyd $OPTIONS (code=exited, status=0/SUCCESS)
 Main PID: 796 (chronyd)
    Tasks: 1 (limit: 11366)
   Memory: 2.3M
   CGroup: /system.slice/chronyd.service
           └─796 /usr/sbin/chronyd

Test

$ chronyc sources -v

  .-- Source mode  '^' = server, '=' = peer, '#' = local clock.
 / .- Source state '*' = current best, '+' = combined, '-' = not combined,
| /             'x' = may be in error, '~' = too variable, '?' = unusable.
||                                                 .- xxxx [ yyyy ] +/- zzzz
||      Reachability register (octal) -.           |  xxxx = adjusted offset,
||      Log2(Polling interval) --.      |          |  yyyy = measured offset,
||                                \     |          |  zzzz = estimated error.
||                                 |    |           \
MS Name/IP address         Stratum Poll Reach LastRx Last sample               
===============================================================================
^? your-ip-name-d>             0   8     0     -     +0ns[   +0ns] +/-    0ns
...
$ timedatectl
               Local time: Wed 2023-07-12 09:03:57 EDT
           Universal time: Wed 2023-07-12 13:03:57 UTC
                 RTC time: Wed 2023-07-12 13:03:57
                Time zone: America/New_York (EDT, -0400)
System clock synchronized: yes
              NTP service: active
          RTC in local TZ: no
...
$ systemctl is-active chronyd.service
active
...
$ chronyc tracking
Reference ID    : 404F64C5 (ntpool1.258.ntp.org)
Stratum         : 3
Ref time (UTC)  : Wed Jul 12 12:47:34 2023
System time     : 0.000000002 seconds slow of NTP time
Last offset     : +1.114915133 seconds
RMS offset      : 1.114915133 seconds
Frequency       : 32.362 ppm slow
Residual freq   : +22.860 ppm
Skew            : 3.901 ppm
Root delay      : 0.046558209 seconds
Root dispersion : 0.050582517 seconds
Update interval : 0.0 seconds
Leap status     : Normal
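The "System time" line of the tracking report can also be parsed in a script, for example to alert when the clock drifts. A minimal sketch (the parsing function and sample line are illustrations, not part of chrony):

```shell
#!/bin/sh
# Extract the "System time" offset (in seconds) from a `chronyc tracking`
# report read on stdin; field layout as shown in the output above.
parse_offset() {
  awk -F': *' '/^System time/ { split($2, a, " "); print a[1] }'
}

# Demo against a captured line; on a live host you would run:
#   chronyc tracking | parse_offset
sample='System time     : 0.000000002 seconds slow of NTP time'
offset=$(printf '%s\n' "$sample" | parse_offset)
echo "offset=${offset} seconds"
```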

Reference: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/8/html/configuring_basic_system_settings/using-chrony-to-configure-ntp_configuring-basic-system-settings

Debian

Debian uses the systemd-timesyncd service.

$ sudo systemctl status systemd-timesyncd
● systemd-timesyncd.service - Network Time Synchronization
   Loaded: loaded (/lib/systemd/system/systemd-timesyncd.service; enabled; vendor preset: enabled)
  Drop-In: /lib/systemd/system/systemd-timesyncd.service.d
           └─disable-with-time-daemon.conf
   Active: active (running) since Sun 2022-07-24 12:06:36 EDT; 2 weeks 3 days ago
     Docs: man:systemd-timesyncd.service(8)
 Main PID: 559 (systemd-timesyn)
   Status: "Synchronized to time server for the first time 192.155.94.72:123 (2.debian.pool.ntp.org)."
    Tasks: 2 (limit: 951)
   Memory: 1.0M
   CGroup: /system.slice/systemd-timesyncd.service
           └─559 /lib/systemd/systemd-timesyncd
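If you want to point timesyncd at a specific pool, or re-enable synchronization after it was turned off, a minimal sketch (pool.ntp.org is an example value; the file path and commands are per systemd-timesyncd and timedatectl):

```shell
# Optional: set an explicit NTP pool (the Debian defaults work out of the box)
sudo sed -i 's/^#\?NTP=.*/NTP=pool.ntp.org/' /etc/systemd/timesyncd.conf

sudo timedatectl set-ntp true    # enable and start systemd-timesyncd
timedatectl timesync-status      # show the selected server and poll interval
```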

E-Mail - Client for Sending Local Mail

Identify Mail Client Host and Domain

Edit the following files:

  • /etc/hostname (add fully qualified host & domain; i.e.: www.example.com)
  • /etc/mailname (add domain; i.e.: example.com)
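A sketch of those two edits from the shell (the host and domain values are the examples above; adjust to yours):

```shell
# fully qualified host name
echo 'app.example.com' | sudo tee /etc/hostname
# mail domain
echo 'example.com' | sudo tee /etc/mailname
# apply the new hostname without a reboot
sudo hostnamectl set-hostname app.example.com
```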

Mail Transport Agent (MTA) packages

Install an SMTP daemon to transfer mail to the E-Mail server.

  • Debian

Install postfix

$ sudo apt-get install postfix

Reconfigure postfix (if the configuration dialog did not appear during install) and select Satellite system.

$ sudo dpkg-reconfigure postfix
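For unattended installs, the same answers can be preseeded so no dialog appears at all. A sketch using the Debian postfix debconf keys (the example.com values are placeholders):

```shell
# answer the postfix questions ahead of time
sudo debconf-set-selections <<'EOF'
postfix postfix/main_mailer_type select Satellite system
postfix postfix/mailname string example.com
postfix postfix/relayhost string smtp.example.com
EOF
sudo DEBIAN_FRONTEND=noninteractive apt-get install -y postfix
```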

The following assumes your host is named app and your email server is smtp.<domain>

File: /etc/postfix/main.cf

# See /usr/share/postfix/main.cf.dist for a commented, more complete version


# Debian specific:  Specifying a file name will cause the first
# line of that file to be used as the name.  The Debian default
# is /etc/mailname.
#myorigin = /etc/mailname

smtpd_banner = $myhostname ESMTP $mail_name (Debian/GNU)
biff = no

# appending .domain is the MUA's job.
append_dot_mydomain = no

# Uncomment the next line to generate "delayed mail" warnings
#delay_warning_time = 4h

readme_directory = no

# See http://www.postfix.org/COMPATIBILITY_README.html -- default to 3.6 on
# fresh installs.
compatibility_level = 3.6



# TLS parameters
smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
smtpd_tls_security_level=may

smtp_tls_CApath=/etc/ssl/certs
smtp_tls_security_level=may
smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache


smtpd_relay_restrictions = permit_mynetworks permit_sasl_authenticated defer_unauth_destination
myhostname = app
alias_maps = hash:/etc/aliases
alias_database = hash:/etc/aliases
myorigin = /etc/mailname
mydestination = app.example.com, $myhostname, app, localhost.localdomain, localhost
relayhost = smtp.example.com
mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128
mailbox_size_limit = 0
recipient_delimiter = +
inet_interfaces = loopback-only
inet_protocols = all

Check postfix systemd service.

$ sudo systemctl status postfix
● postfix.service - Postfix Mail Transport Agent
     Loaded: loaded (/lib/systemd/system/postfix.service; enabled; preset: enabled)
     Active: active (exited) since Sat 2023-07-22 09:32:00 EDT; 5h 26min ago
       Docs: man:postfix(1)
    Process: 1178 ExecStart=/bin/true (code=exited, status=0/SUCCESS)
   Main PID: 1178 (code=exited, status=0/SUCCESS)
        CPU: 1ms

Jul 22 09:32:00 app.example.com systemd[1]: Starting postfix.service - Postfix Mail Transport Agent...
Jul 22 09:32:00 app.example.com systemd[1]: Finished postfix.service - Postfix Mail Transport Agent.
  • RedHat

Install postfix

$ sudo dnf install postfix

The following assumes your host is named app and your email server is smtp.<domain>

File: /etc/postfix/main.cf

smtpd_banner = $myhostname ESMTP $mail_name (Linux)
biff = no

# appending .domain is the MUA's job.
append_dot_mydomain = no

# Uncomment the next line to generate "delayed mail" warnings
#delay_warning_time = 4h

readme_directory = no

# See http://www.postfix.org/COMPATIBILITY_README.html -- default to 2 on
# fresh installs.
compatibility_level = 2

# TLS parameters
smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
smtpd_use_tls=yes
smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache

# See /usr/share/doc/postfix/TLS_README.gz in the postfix-doc package for
# information on enabling SSL in the smtp client.

smtpd_relay_restrictions = permit_mynetworks permit_sasl_authenticated defer_unauth_destination
myhostname = app.example.com
alias_maps = hash:/etc/aliases
alias_database = hash:/etc/aliases
mydestination = app.example.com, localhost.example.com, localhost
relayhost = smtp.example.com
mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 192.168.1.0/32
mailbox_size_limit = 0
recipient_delimiter = +
inet_interfaces = loopback-only
inet_protocols = all

Check the status of postfix

$ sudo systemctl status postfix
[sudo] password for don: 
● postfix.service - Postfix Mail Transport Agent
     Loaded: loaded (/usr/lib/systemd/system/postfix.service; enabled; preset: disabled)
     Active: active (running) since Wed 2023-06-07 17:43:43 EDT; 22h ago
   Main PID: 2182 (master)
      Tasks: 3 (limit: 99462)
     Memory: 8.6M
        CPU: 1.975s
     CGroup: /system.slice/postfix.service
             ├─ 2182 /usr/libexec/postfix/master -w
             ├─ 2184 qmgr -l -t unix -u
             └─15581 pickup -l -t unix -u
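With either distribution configured, a quick relay smoke test from the command line (the address is an example; sendmail and mailq here are the postfix-provided commands):

```shell
# hand a one-line message to the local postfix instance
printf 'Subject: relay test\n\nhello from app\n' | sendmail don@example.com
# the queue should drain to empty once smtp.example.com accepts the message
mailq
```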

macOS - Send Mail to the Local Server, Not the One Configured in the Mail App

Use the IP address of the local mail server, or edit /etc/hosts and use that name. On macOS, Postfix does not run as a daemon; it is run on demand by the SMTP process, probably fired off by a listener on port 25.

File: /etc/postfix/main.cf

% sudo vi /etc/postfix/main.cf
~
myhostname = square.example.com
~
mydomain = example.com
~
relayhost = [192.168.1.3]
 

Test mail

First install the command line mail interface(s). I use mail and mutt.

  • Debian
$ sudo apt-get install mailutils mutt
  • RedHat
$ sudo dnf install s-nail mutt
% mail -s "Hello internal mail"  don@example.com </dev/null
Null message body; hope that's ok

Shows up as: don@square.example.com
...
mutt -s "Hello internal mail from mutt"  don@example.com

Shows up as: don@square.local

Mutt change from address

File: ~/.muttrc

set from="Square <don@square.example.com>"
set hostname="square.example.com"

Shows up as: don@square.example.com

  • Update root destination in aliases

File: /etc/aliases

~
# Person who should get root's mail
#root:		marc
root:		bob@example.com
~

Update aliases into database format

[don@ash ~]$ sudo newaliases
  • Create mail script to set variables

File: ~/mail.sh

#!/bin/bash
#######################################################################
#
# File: mail.sh
#
# Usage: mail.sh <File Name to Mail> <Subject>
#  Change the REPLYTO, FROM, and MAILTO variables
#  and choose RedHat or Debian
#
# Who       When        Why
# --------- ----------- -----------------------------------------------
# D. Cohoon Feb-2023    VPS host name cannot be changed, so set headers
#######################################################################
function usage () {
   /usr/bin/echo "Usage: ${0} <File Name to Mail> <Subject>"
   exit 1
}
#------------------
if [ $# -lt 2 ]; then
  usage
fi
#
if [ -n "${1}" ] && [ ! -f "${1}" ]; then
  usage 
fi
#
#------------------
HOSTNAME=$(hostname -s)
DOMAINNAME=$(hostname -d)
FILE=${1}    # First arg
shift 1
SUBJECT="${HOSTNAME}.${DOMAINNAME}:${@}" # Remainder of args
#
#------------------
export REPLYTO=root@app.example.com
FROM=root@app.example.com
#FROM="${HOSTNAME}@${DOMAINNAME}"
MAILTO=bob@example.com
#
#------------------
# Debian: install mailutils
#/usr/bin/cat ${FILE} | /usr/bin/mail -aFROM:${FROM}  -s "${SUBJECT}" ${MAILTO}
# RedHat: install s-nail
#/usr/bin/cat ${FILE} | /usr/bin/mail --from-address=${FROM} -s "${SUBJECT}" ${MAILTO}

Monit - Monitor System and Restart Processes

Monit is a small Open Source utility for managing and monitoring Unix systems. Monit conducts automatic maintenance and repair and can execute meaningful causal actions in error situations.

Reference: https://mmonit.com/monit/

Installation

$ sudo apt-get install monit
$ sudo dnf install monit

Configuration

Change the mailserver to yours, and add some general monitoring.

File: /etc/monit/monitrc:

# Mail server
set mailserver www.example.com port 25   # primary mailserver
# Don 28-Dec 2021 - general monitoring
check system $HOST
  if loadavg (1min) per core > 2 for 5 cycles then alert
  if loadavg (5min) per core > 1.5 for 10 cycles then alert
  if cpu usage > 95% for 5 cycles then alert
  if memory usage > 90% then alert
  if swap usage > 50% then alert

check device root with path /
  if space usage > 90% then alert
  if inode usage > 90% then alert
  if changed fsflags then alert
  if service time > 250 milliseconds for 5 cycles then alert
  if read rate > 500 operations/s for 5 cycles then alert
  if write rate > 200 operations/s for 5 cycles then alert

check network eth0 with interface eth0
  if failed link then alert
  if changed link then alert
  if saturation > 90%  for 2 cycles then alert
  if download > 10 MB/s for 5 cycles then alert
  if total uploaded > 1 GB in last hour then alert

check host REACHABILITY with address 1.1.1.1
  if failed ping with timeout 10 seconds then alert
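After editing monitrc, monit can validate the syntax before you reload it. A short sketch:

```shell
sudo monit -t                   # syntax-check /etc/monit/monitrc
sudo systemctl reload monit     # pick up the new configuration
sudo monit status               # summary of every check defined above
```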

Process

Monitor and restart the ssh process (and others that you may need using this as a guide).

File: /etc/monit/conf.d/sshd

check process sshd with pidfile /var/run/sshd.pid
    alert root@example.com with mail-format {
           from: monit@example.com
        subject: monit alert: $SERVICE $EVENT $DATE
        message: $DESCRIPTION
    }
  start program "/etc/init.d/ssh start"
  stop program "/etc/init.d/ssh stop"

Munin - Resource History Monitor

Munin is a networked resource monitoring tool (started in 2002) that can help analyze resource trends and "what just happened to kill our performance?" problems. It is designed to be very plug and play.

A default installation provides a lot of graphs with almost no work. Serving the graphs requires Apache or nginx.

Reference: http://guide.munin-monitoring.org/en/latest/tutorial/alert.html

Munin-Architecture.png

Installation

On all nodes

The package libdbd-pg-perl is required for PostgreSQL monitoring (Debian):

$ sudo apt-get install munin libdbd-pg-perl

The package perl-DBD-Pg is required for PostgreSQL monitoring (RedHat):

$ sudo dnf install munin perl-DBD-Pg

On Munin-Master node, add the following list of hosts to monitor:

File: /etc/munin/munin.conf

~
# Local Host
[app.example.com]
    address 127.0.0.1
    use_node_name yes

# E-Mail host
[www.example.com]
    address 192.168.1.3
    use_node_name yes
~

On the Munin-Node node, allow the master through the firewall on port 4949:

$ sudo ufw allow 4949
$ sudo ufw status | grep 4949
4949                       ALLOW       Anywhere                  
4949 (v6)                  ALLOW       Anywhere (v6)    
$ sudo firewall-cmd --permanent --zone=public --add-port=4949/tcp
$ sudo firewall-cmd --reload
$ sudo firewall-cmd --permanent --list-ports
4949/tcp

On Munin-Node node, add the Munin-Master IP address to the following:

File: /etc/munin/munin-node.conf

~
# A list of addresses that are allowed to connect.  This must be a
# regular expression, since Net::Server does not understand CIDR-style
# network notation unless the perl module Net::CIDR is installed.  You
# may repeat the allow line as many times as you'd like

allow ^127\.0\.0\.1$
allow ^::1$
allow ^192\.168\.1\.3$
allow ^fe80::abcd:1234:0000:abcd$
~

Check your munin-node from the command line using the network cat utility, ncat (Debian: $ sudo apt-get install ncat; RedHat: $ sudo dnf install ncat).

Try commands:

  • list
  • nodes
  • config
  • fetch
  • version
  • quit
$ ncat 127.0.0.1 4949
# munin node at app.example.com
list
acpi apache_accesses apache_processes apache_volume cpu df df_inode entropy forks fw_packets http_loadtime if_em1 if_eno1 if_err_em1 if_err_eno1 if_err_tun0 if_err_wlp58s0 if_tun0 if_wlp58s0 interrupts irqstats load lpstat memory munin_stats netstat ntp_198.23.200.19 ntp_208.94.243.142 ntp_216.218.254.202 ntp_91.189.94.4 ntp_96.126.100.203 ntp_kernel_err ntp_kernel_pll_freq ntp_kernel_pll_off ntp_offset ntp_states open_files open_inodes postfix_mailqueue postfix_mailvolume postgres_autovacuum postgres_bgwriter postgres_cache_ALL postgres_cache_twotree postgres_checkpoints postgres_connections_ALL postgres_connections_db postgres_connections_twotree postgres_locks_ALL postgres_locks_twotree postgres_querylength_ALL postgres_querylength_twotree postgres_scans_twotree postgres_size_ALL postgres_size_twotree postgres_transactions_ALL postgres_transactions_twotree postgres_tuples_twotree postgres_users postgres_xlog proc_pri processes swap threads uptime users vmstat
.
fetch df
_dev_nvme0n1p2.value 34.4721606114914
_dev_shm.value 0.000540811610733636
_run.value 0.183629672785198
_run_lock.value 0.15625
_run_qemu.value 0
_dev_sda1.value 39.2182617590549
_dev_nvme0n1p1.value 1.02513530868728
_dev_sdb1.value 29.1143742265263
.
fetch cpu
user.value 3392679
nice.value 310628
system.value 792413
idle.value 49254331
iowait.value 202415
irq.value 0
softirq.value 56281
steal.value 0
guest.value 0
.

Apache monitoring requires the mod_status to be enabled and add your IP address range to the status.conf.

Enable apache module mod_status:

$ sudo a2enmod status

Check the IP addresses in the apache status configuration. Change Require ip <address> to allow other IP addresses to connect to the munin monitor.

File: /etc/apache2/mods-enabled/status.conf

~
	<Location /server-status>
		SetHandler server-status
		Require local
		Require ip 192.168.1.0/24
		#Require ip 192.0.2.0/24
	</Location>
~

Check apache plugin:

$ sudo munin-run apache_volume
volume80.value 500736

Check postgresql plugin:

$ sudo munin-run postgres_connections_miniflux
active.value 0
idle.value 1
idletransaction.value 0
unknown.value 0
waiting.value 0

Check the munin-node daemon status:

$ sudo systemctl status munin-node

Check the munin-master daemon status:

$ sudo systemctl status munin

The utility munin-node-configure is used by the Munin installation procedure to check which plugins are suitable for your node and to create the links automatically. It can be run whenever the system configuration changes (services, hardware, etc.) on the node, and it will adjust the collection of plugins accordingly. The '-shell' option displays the new plugin links as 'ln -s ...' commands for you.

For instance, below a new network interface was discovered since the last configuration run. To enable the new monitoring, simply execute the 'ln -s ...' commands to create the soft links, so that interface veth2e40fe9 will be monitored.

$ sudo munin-node-configure -shell
ln -s '/usr/share/munin/plugins/if_' '/etc/munin/plugins/if_veth2e40fe9'
ln -s '/usr/share/munin/plugins/if_err_' '/etc/munin/plugins/if_err_veth2e40fe9'

To have munin-node-configure display 'rm ...' commands for plugins whose software may no longer be installed, add the option '-remove-also'.

$ sudo munin-node-configure -shell -remove-also
ln -s '/usr/share/munin/plugins/if_' '/etc/munin/plugins/if_veth2e40fe9'
rm -f '/etc/munin/plugins/if_veth0049d71'
ln -s '/usr/share/munin/plugins/if_err_' '/etc/munin/plugins/if_err_veth2e40fe9'
rm -f '/etc/munin/plugins/if_err_veth0049d71'

Enabled plugins can be found in the same location:

$ ls -l /etc/munin/plugins | grep apache
lrwxrwxrwx 1 root root 40 Jun 30  2018 apache_accesses -> /usr/share/munin/plugins/apache_accesses
lrwxrwxrwx 1 root root 41 Jun 30  2018 apache_processes -> /usr/share/munin/plugins/apache_processes
lrwxrwxrwx 1 root root 38 Jun 30  2018 apache_volume -> /usr/share/munin/plugins/apache_volume

To test a new plugin, pass the autoconf option to munin-run. Errors will be displayed, and a debug option '-d' is also available.

$ sudo munin-run postgres_connections_miniflux autoconf
yes

$ sudo munin-run -d postgres_connections_miniflux autoconf
# Running 'munin-run' via 'systemd-run' with systemd properties based on 'munin-node.service'.
# Command invocation: systemd-run --collect --pipe --quiet --wait --property EnvironmentFile=/tmp/YRfAa1dq9U --property UMask=0022 --property LimitCPU=infinity --property LimitFSIZE=infinity --property LimitDATA=infinity --property LimitSTACK=infinity --property LimitCORE=infinity --property LimitRSS=infinity --property LimitNOFILE=524288 --property LimitAS=infinity --property LimitNPROC=14150 --property LimitMEMLOCK=65536 --property LimitLOCKS=infinity --property LimitSIGPENDING=14150 --property LimitMSGQUEUE=819200 --property LimitNICE=0 --property LimitRTPRIO=0 --property LimitRTTIME=infinity --property SecureBits=0 --property 'CapabilityBoundingSet=cap_chown cap_dac_override cap_dac_read_search cap_fowner cap_fsetid cap_kill cap_setgid cap_setuid cap_setpcap cap_linux_immutable cap_net_bind_service cap_net_broadcast cap_net_admin cap_net_raw cap_ipc_lock cap_ipc_owner cap_sys_module cap_sys_rawio cap_sys_chroot cap_sys_ptrace cap_sys_pacct cap_sys_admin cap_sys_boot cap_sys_nice cap_sys_resource cap_sys_time cap_sys_tty_config cap_mknod cap_lease cap_audit_write cap_audit_control cap_setfcap cap_mac_override cap_mac_admin cap_syslog cap_wake_alarm cap_block_suspend cap_audit_read cap_perfmon cap_bpf cap_checkpoint_restore' --property AmbientCapabilities= --property DynamicUser=no --property MountFlags= --property PrivateTmp=yes --property PrivateDevices=no --property ProtectClock=no --property ProtectKernelTunables=no --property ProtectKernelModules=no --property ProtectKernelLogs=no --property ProtectControlGroups=no --property PrivateNetwork=no --property PrivateUsers=no --property PrivateMounts=no --property ProtectHome=yes --property ProtectSystem=full --property NoNewPrivileges=no --property LockPersonality=no --property MemoryDenyWriteExecute=no --property RestrictRealtime=no --property RestrictSUIDSGID=no --property RestrictNamespaces=no --property ProtectProc=default --property ProtectHostname=no -- /usr/sbin/munin-run --ignore-systemd-properties -d 
postgres_connections_miniflux autoconf
# Processing plugin configuration from /etc/munin/plugin-conf.d/README
# Processing plugin configuration from /etc/munin/plugin-conf.d/dhcpd3
# Processing plugin configuration from /etc/munin/plugin-conf.d/munin-node
# Processing plugin configuration from /etc/munin/plugin-conf.d/spamstats
# Setting /rgid/ruid/ to /130/117/
# Setting /egid/euid/ to /130 130/117/
# Setting up environment
# Environment PGPORT = 5432
# Environment PGUSER = postgres
# About to run '/etc/munin/plugins/postgres_connections_miniflux autoconf'
yes

Alert via e-mail:

Reference: https://guide.munin-monitoring.org/en/latest/tutorial/alert.html#alerts-send-by-local-system-tools

Change your email here

File: /etc/munin/munin.conf

~
contact.email.command mail -s "Munin-notification for ${var:group} :: ${var:host}" your@email.address.here
~

Adjust disk full thresholds:

Adjust this in master /etc/munin.conf section for node

File: /etc/munin.conf

[beaglebone]
    address 192.168.1.7
    use_node_name yes
    diskstats_latency.mmcblk0.avgrdwait.warning 0:10
    diskstats_latency.mmcblk0.avgrdwait.critical -5:5
    diskstats_latency.mmcblk0.avgwdwait.warning 0:10
    diskstats_latency.mmcblk0.avgwdwait.critical -5:5
    diskstats_latency.mmcblk0.avgwait.warning 0:10
    diskstats_latency.mmcblk0.avgwait.critical -5:5

Graph Example

munin.png

RedHat Alternative

Cockpit

Cockpit Dashboard

$ sudo systemctl enable --now cockpit.socket

To log in to Cockpit, open your web browser to localhost:9090 and enter your Linux username and password.

Reference: https://www.redhat.com/sysadmin/intro-cockpit

Fail2ban - Automatic Firewall Blocking

Daemon to ban hosts that cause multiple authentication errors by monitoring system logs.

Reference: https://github.com/fail2ban/fail2ban

Install

$ sudo apt-get install fail2ban
$ sudo dnf install fail2ban

Configure

Create a jail.local file to override the defaults. Update the email and IP addresses to suit your environment, and enable or disable jails for the applications you run. See the reference above for examples of how to do that.

action = %(action_)s defines the action to execute when a limit is reached. By default it will only ban the offending address.

To also receive an email at each ban, set it to:

action = %(action_mw)s

To include the relevant log lines in the email, set it to:

action = %(action_mwl)s

File: /etc/fail2ban/jail.local

[DEFAULT]
# email
destemail = don@example.com
sender = root@example.com
# ban & send an e-mail with whois report and relevant log lines
# to the destemail.
action = %(action_mwl)s
# whitelist
ignoreip = 127.0.0.1 192.168.1.0/24 8.8.8.8 1.1.1.1

Secure sshd

File: /etc/fail2ban/jail.d/sshd.local

[sshd]
enabled = true
port = 22
filter = sshd
action = iptables-multiport[name=sshd, port="ssh"]
# Debian logs to auth.log; on RedHat use: logpath = /var/log/secure
logpath = /var/log/auth.log
maxretry = 3
bantime = 1d

More filters, showing which daemons are available to be enabled, are in /etc/fail2ban/filter.d/.
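Once a jail is active, fail2ban-client shows what is banned. A short sketch (203.0.113.7 is a placeholder address):

```shell
sudo fail2ban-client status            # list the active jails
sudo fail2ban-client status sshd       # failure counts and banned IPs
sudo fail2ban-client set sshd unbanip 203.0.113.7   # lift one ban
```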

Backup - Save Your Files Daily

To find the proper name of your USB stick, check the current mounts:

$ sudo lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINTS
sda           8:0    0   1.8T  0 disk  
└─sda1        8:1    0   1.8T  0 part  
sdb           8:16   1     0B  0 disk  
sdc           8:32   1     0B  0 disk  
nvme0n1     259:0    0 238.5G  0 disk  
├─nvme0n1p1 259:1    0   300M  0 part  /boot/efi
├─nvme0n1p2 259:2    0 119.2G  0 part  /
├─nvme0n1p3 259:3    0  16.9G  0 part  [SWAP]

Plug it in, then check dmesg -x immediately afterward. Look for:

[201373.210797]  sdd: sdd1
[201373.211917] sd 2:0:0:0: [sdd] Attached SCSI removable disk

Then run lsblk again. You can see the new entry, sdd.

$ sudo lsblk
NAME        MAJ:MIN RM   SIZE RO TYPE  MOUNTPOINTS
sda           8:0    0   1.8T  0 disk  
└─sda1        8:1    0   1.8T  0 part  
sdb           8:16   1     0B  0 disk  
sdc           8:32   1     0B  0 disk  
sdd           8:48   1  28.9G  0 disk  <----- New USB Stick
nvme0n1     259:0    0 238.5G  0 disk  
├─nvme0n1p1 259:1    0   300M  0 part  /boot/efi
├─nvme0n1p2 259:2    0 119.2G  0 part  /
├─nvme0n1p3 259:3    0  16.9G  0 part  [SWAP]

Automatic backup to USB disk

Format new USB stick.

Here are the commands to fdisk:

  • m - menu
  • p - print existing partitions
  • d - delete partition
  • n - create new partition (in this case only a primary partition is required)
  • w - write partition
  • q - quit
$ sudo fdisk /dev/sdd

Check the USB stick label and filesystem (this example has no filesystem)

$ sudo blkid /dev/sdd1
/dev/sdd1: PARTUUID="66bc7da7-1234-abcd-1234-ea4bfe7e00a7"

So create an ext4 filesystem on the new partition (sdd1):

$ sudo mkfs.ext4 /dev/sdd1
mke2fs 1.46.2 (28-Feb-2021)
/dev/sdd1 contains a vfat file system
Proceed anyway? (y,N) y
Creating filesystem with 7566075 4k blocks and 1892352 inodes
Filesystem UUID: 8e33672c-1283-49de-98b8-6fd841372db6
Superblock backups stored on blocks: 
	32768, 98304, 163840, 229376, 294912, 819200, 884736, 1605632, 2654208, 
	4096000

Allocating group tables: done                            
Writing inode tables: done                            
Creating journal (32768 blocks): done
Writing superblocks and filesystem accounting information: done 

Check the new filesystem, now we see it is TYPE="ext4"

$ sudo blkid /dev/sdd1
/dev/sdd1: UUID="8e33672c-1234-49de-abcd-6fd841372db6" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="66bc7da7-c9c3-4342-8073-ea4bfe7e00a7"

Label the USB stick as 'backup' so autobackup can find and mount it, then verify that LABEL="backup"

$ sudo e2label /dev/sdd1 backup
$ sudo blkid /dev/sdd1
/dev/sdd1: LABEL="backup" UUID="8e33672c-1234-49de-abcd-6fd841372db6" BLOCK_SIZE="4096" TYPE="ext4" PARTUUID="66bc7da7-c9c3-4342-8073-ea4bfe7e00a7"
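The label can be verified by mounting the stick by LABEL= once, the same way autobackup will find it (the mount point is an arbitrary example):

```shell
sudo mkdir -p /mnt/backup
sudo mount LABEL=backup /mnt/backup   # mount by label, regardless of device name
df -h /mnt/backup                     # confirm size and free space
sudo umount /mnt/backup
```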

Copy the autobackup software onto this server

$ git clone https://github.com/bablokb/autobackup-service

Install the autobackup software

$ cd autobackup-service/
$ sudo ./tools/install

RedHat Changes

Lines: 21, 30, 31

File: ./tools/install

 17 check_packages() {
 18   local p
 19   for p in "$@"; do
 20     echo -en "Checking $p ... " >&2
 21     rpm -q "$p" >/dev/null 2>&1 || return 0
 22     echo "ok" >&2
 23   done
 24   return 1
 25 }
 26 
 27 install_packages() {
 28   if [ -n "$PACKAGES" ] && check_packages $PACKAGES; then
 29     echo -e "[INFO] installing additional packages" 2>&1
 30     dnf update
 31     dnf -y install $PACKAGES
 32   fi
 33 }

Edit the autobackup configuration file, assigning LABEL=backup, and other items below:

File: /etc/autobackup.conf

~
# => File: /etc/autobackup.conf <=
# label of backup partition
LABEL=backup

# write messages to syslog
SYSLOG=1

# wait for device to appear (in seconds)
WAIT_FOR_DEVICE=2

# run a backup on every mount (i.e. multiple daily backups)
force_daily=0

# backup-levels - this must match your entries in /etc/rsnapshot.conf, i.e.
# you must have a corresponding 'retain' or 'interval' entry.
# The autobackup-script will skip empty levels
daily="day"
weekly="week"
monthly="month"
yearly=""
"/etc/autobackup.conf" line 29 of 29 --100%--

Edit the rsnapshot configuration file; be sure to use TABS in the BACKUP POINTS / SCRIPTS section. (Below, tabs are shown as ^I and line ends as $, the way vi displays them with :set list.)

File: /etc/rsnapshot.conf

~
# => File /etc/rsnapshot.conf <=
###########################
# SNAPSHOT ROOT DIRECTORY #
###########################
 
# All snapshots will be stored under this root directory.
#
#snapshot_root  /var/cache/rsnapshot/
snapshot_root   /tmp/autobackup/.autobackup/
~ 
~
#########################################
#     BACKUP LEVELS / INTERVALS         #
# Must be unique and in ascending order #
# e.g. alpha, beta, gamma, etc.         #
#########################################

retain  day     7
retain  week    4
retain  month   3
#retain year    3

###############################
### BACKUP POINTS / SCRIPTS ###
###############################
 
# LOCALHOST
# backup  /etc/           ./
# backup  /var/backups/   ./
# backup  /usr/local/     ./
# backup  /home           ./
# NOTE: Use tabs!
# LOCALHOST$
backup^I/etc/^I./$
backup^I/var/backups/^I./$
backup^I/usr/local/^I./$
backup^I/home^I^I./$    
~
> "/etc/rsnapshot.conf"

Copy autobackup script from install to your home directory

cp autobackup-service/files/usr/local/sbin/autobackup $HOME/autobackup-service/autobackup.sh

Comment out lines 58 through 61 (the "<" lines become the ">" lines in the diff below) to allow running from cron.

autobackup-service normally runs automatically when a USB stick with the proper label is inserted into the machine. Comment out the if statement to allow it to run by cron.

File: $HOME/autobackup-service/autobackup.sh

58,61c60,64
<   if [ "${DEVICE:5:3}" != "$udev_arg" ]; then
<     msg "info: partition with label $LABEL is not on newly plugged device $udev_arg"
<     exit 0
<   fi
---
> # Don -> do not check, as we are scheduling through cron
> #  if [ "${DEVICE:5:3}" != "$udev_arg" ]; then
> #    msg "info: partition with label $LABEL is not on newly plugged device $udev_arg"
> #    exit 0
> #  fi

Schedule in /etc/cron.d (change your home directory):

File: /etc/cron.d/autobackup-daily

# This is a cron file for autobackup/rsnapshot.
# 0 */4		* * *		root	/usr/bin/rsnapshot alpha
# 30 3   	* * *		root	/usr/bin/rsnapshot beta
# 0  3   	* * 1		root    /usr/bin/rsnapshot gamma
# 30 2   	1 * *		root	/usr/bin/rsnapshot delta
SHELL=/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
MAILTO="don@example.com"
# m h  dom mon dow user  command
55 12  *   *   *   root  /home/don/autobackup-service/autobackup.sh

Log entries will be in the syslog

$ sudo grep autobackup.sh /var/log/syslog
Jan 23 12:51:11 box autobackup.sh: info: LABEL           = backup
Jan 23 12:51:11 box autobackup.sh: info: WAIT_FOR_DEVICE = 2
Jan 23 12:51:11 box autobackup.sh: info: force_daily     = 0
Jan 23 12:51:11 box autobackup.sh: info: yearly          = 
Jan 23 12:51:11 box autobackup.sh: info: monthly         = month
Jan 23 12:51:11 box autobackup.sh: info: weekly          = week
Jan 23 12:51:11 box autobackup.sh: info: daily           = day
Jan 23 12:51:13 box autobackup.sh: info: checking: 
Jan 23 12:51:13 box autobackup.sh: info: mount-directory: /tmp/autobackup
Jan 23 12:51:13 box autobackup.sh: info: current year:  2021
Jan 23 12:51:13 box autobackup.sh: info: current month: 01
Jan 23 12:51:13 box autobackup.sh: info: current week:  03
Jan 23 12:51:13 box autobackup.sh: info: current day:   023
Jan 23 12:51:13 box autobackup.sh: info: starting backup for interval: month (last backup: 0)
Jan 23 12:51:13 box autobackup.sh: info: starting backup for interval: week (last backup: 0)
Jan 23 12:51:13 box autobackup.sh: info: starting backup for interval: day (last backup: 0)
Jan 23 12:51:15 box autobackup.sh: info: umounting /dev/sda1

Automatic Backup of PostgreSQL Database

Place a script in /etc/cron.daily and it will be run once a day, using the root account.

cron.daily

To check the times look here:

$ grep run-parts /etc/crontab
17 *	* * *	root    cd / && run-parts --report /etc/cron.hourly
25 6	* * *	root	test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.daily )
47 6	* * 7	root	test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.weekly )
52 6	1 * *	root	test -x /usr/sbin/anacron || ( cd / && run-parts --report /etc/cron.monthly )
# cat /etc/anacrontab 
# /etc/anacrontab: configuration file for anacron

# See anacron(8) and anacrontab(5) for details.

SHELL=/bin/sh
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root
# the maximal random delay added to the base delay of the jobs
RANDOM_DELAY=45
# the jobs will be started during the following hours only
START_HOURS_RANGE=3-22

#period in days   delay in minutes   job-identifier   command
1	5	cron.daily		nice run-parts /etc/cron.daily
7	25	cron.weekly		nice run-parts /etc/cron.weekly
@monthly 45	cron.monthly		nice run-parts /etc/cron.monthly

So, if anacron is not installed, our daily runs start at 6:25am every day.

Backing up a PostgreSQL database can be done while everything is up and running with the following script. Backups are stored in /data/backups.

File: /etc/cron.daily/backup_nextcloud

#!/bin/bash
LOGFILE=/var/log/backup_db.log
ID=$(id -un)
if [ ${ID} != "root" ]; then
  echo "Must run as root, try sudo"
  exit 1
fi
#
echo $(date) ${0} >> $LOGFILE

umask 027
export DATA=/data/backups
if cd ${DATA}; then
  # Postgres
  #/usr/bin/pg_dump -c nextcloud > $DATA/nextcloud.db.$(date +%j) </dev/null
  sudo -u postgres /usr/bin/pg_dump -c nextcloud > $DATA/nextcloud.db </dev/null
  
  date >> $LOGFILE
  sync
  sync
  sync
  sync
  savelog -c 7 nextcloud.db >>$LOGFILE 2>&1
fi
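For completeness, restoring that dump is the reverse operation (a sketch; Nextcloud services should be stopped first):

```shell
# recreate the nextcloud database objects from the plain-SQL dump
# (pg_dump -c embeds the DROP/CREATE statements)
sudo -u postgres psql nextcloud < /data/backups/nextcloud.db
```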

Rsync - Remote File Synchronization

Rsync is a good way to keep a daily backup, as it only copies changed files to the destination. Make sure you use a separate disk, and preferably a separate system, as rsync works great over the network.

The PostgreSQL backup above should be sent off to another system using this method. Have the PostgreSQL backup call a second rsync script to do the database network backup: just copy this one, change the directories, and call it at the end of the database backup.

Schedule

This cron entry will run at 8:40am every day as user root.

File: /etc/cron.d/rsync

# This is a cron file for rsync to NAS 
SHELL=/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
MAILTO="don@example.com"
# m h  dom mon dow user  command
40 8  *   *   *   root  /mnt/raid1/rsync.sh noask

Script

This script backs up the local directory, /mnt/raid1/data, to a remote system (IP address 192.168.1.2). The files on the remote system will be under /mnt/vol09/backups. The first run copies everything; subsequent runs copy only changed files. Any files deleted on the source will also be deleted on the destination.

To schedule in cron, the parameter 'noask' is used, as shown above. Otherwise the script prompts y/n before copying.

The last run's history is in log file /mnt/raid1/rsync.log.

File: /mnt/raid1/rsync.sh

#!/bin/bash
DIR=/mnt/raid1
LOG=${DIR}/rsync.log
cd ${DIR}
date >${LOG}
ASK=${1}
if [ -z "${ASK}" ]; then
	echo "Asking"
fi
#
if [ -z "${ASK}" ]; then
  echo -n "Copy data? y/n: "
  read askme
  if [[ $askme =~ ^[Yy]$ ]]; then
    rsync -avzz --ignore-errors --progress --delete ${DIR}/data root@192.168.1.2:/mnt/vol09/backups/ |tee -a ${LOG}
  else
    echo "Sync of data skipped"
    echo ". . ."
  fi
else
  rsync -avzz --ignore-errors --progress --delete ${DIR}/data root@192.168.1.2:/mnt/vol09/backups/ |tee -a ${LOG}
fi
#
date >>${LOG}

Logwatch - Daily Alert of Logging Activity

Logwatch is a customizable, pluggable log-monitoring system. It goes through your logs for a given period of time and reports on the areas you choose, at the level of detail you choose. Logwatch runs on Linux and many types of UNIX.

Installation

Debian:

$ sudo apt-get install logwatch

Redhat:

$ sudo dnf install logwatch

Schedule

File: /etc/cron.daily/00logwatch

#!/bin/bash

#Check if removed-but-not-purged
test -x /usr/share/logwatch/scripts/logwatch.pl || exit 0

#execute
#/usr/sbin/logwatch --output mail
/usr/sbin/logwatch --mailto don@example.com

#Note: It's possible to force the recipient in above command
#Just pass --mailto address@a.com instead of --output mail

Add services

You can add an iptables summary to the daily report. It shows which IP addresses have been blocked by UFW.

$ sudo cp /usr/share/logwatch/default.conf/services/iptables.conf /etc/logwatch/conf/services/

On Ubuntu servers you may need to point the service at syslog:

File: /etc/logwatch/conf/services/iptables.conf

~
# Which logfile group...
LogFile = syslog
~

Logcheck - mails anomalies in the system logfiles to the admin

The logcheck program helps spot problems and security violations in your logfiles automatically and will send the results to you periodically in an e-mail. By default logcheck runs as an hourly cronjob just off the hour and after every reboot.

Installation

Debian:

$ sudo apt-get install logcheck

Redhat:

$ sudo dnf install epel-release 'dnf-command(copr)'
$ sudo dnf copr enable brianjmurrell/epel-8
$ sudo dnf install logcheck
$ sudo setfacl -R -m u:logcheck:rx /var/log/secure*
$ sudo setfacl -R -m u:logcheck:rx /var/log/messages*
$ sudo dnf copr disable brianjmurrell/epel-8

Schedule

Normally the package installation will schedule a cron job for you. Check it here:

File: /etc/cron.d/logcheck

# Cron job runs at 2 minutes past every hour
# /etc/cron.d/logcheck: crontab entries for the logcheck package

PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root

@reboot         logcheck    if [ -x /usr/sbin/logcheck ]; then nice -n10 /usr/sbin/logcheck -R; fi
2 * * * *       logcheck    if [ -x /usr/sbin/logcheck ]; then nice -n10 /usr/sbin/logcheck; fi

# EOF

Change email destination

Change the SENDMAILTO variable to point to your email.

File: /etc/logcheck/logcheck.conf

~
# Controls the address mail goes to:
# *NOTE* the script does not set a default value for this variable!
# Should be set to an offsite "emailaddress@some.domain.tld"

#SENDMAILTO="logcheck"
SENDMAILTO="don@example.com"
~
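Once the hourly mails start arriving, recurring noise can be silenced with a local ignore file. Logcheck matches one extended regular expression per line against each log entry; the rule below is a hypothetical example that would drop routine smartd attribute messages (the file name and pattern are illustrations, adjust to your own logs):

File: /etc/logcheck/ignore.d.server/local-rules

```
^\w{3} [ :0-9]{11} [._[:alnum:]-]+ smartd\[[0-9]+\]: Device: .* SMART Usage Attribute: .*$
```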

Sysstat - Gather System Usage Statistics

The sysstat[1] package contains various utilities, common to many commercial Unixes, to monitor system performance and usage activity:

  • iostat reports CPU statistics and input/output statistics for block devices and partitions.
  • mpstat reports individual or combined processor related statistics.
  • pidstat reports statistics for Linux tasks (processes): I/O, CPU, memory, etc.
  • tapestat reports statistics for tape drives connected to the system.
  • cifsiostat reports CIFS statistics.

Sysstat also contains tools you can schedule via cron or systemd to collect and archive performance and activity data:

  • sar collects, reports and saves system activity information (see below a list of metrics collected by sar).
  • sadc is the system activity data collector, used as a backend for sar.
  • sa1 collects and stores binary data in the system activity daily data file. It is a front end to sadc designed to be run from cron or systemd.
  • sa2 writes a summarized daily activity report. It is a front end to sar designed to be run from cron or systemd.
  • sadf displays data collected by sar in multiple formats (CSV, XML, JSON, etc.) and can be used for data exchange with other programs. This command can also be used to draw graphs for the various activities collected by sar using SVG (Scalable Vector Graphics) format.

The default sampling interval is 10 minutes, but this can be changed (it can be as short as 1 second).

Redhat Cockpit uses pmlogger.service [2] from systemd. Install from Cockpit's Overview, Metrics and history.

RedHat pmstat [3]

$ pmstat
@ Mon Jun 12 10:02:01 2023
 loadavg                      memory      swap        io    system         cpu
   1 min   swpd   free   buff  cache   pi   po   bi   bo   in   cs  us  sy  id
    0.00 116224 231636   3284 13116m    0    0    0   17  348  387   0   0 100
    0.00 116224 233328   3284 13116m    0    0    0    0  339  383   0   0 100
    0.00 116224 228704   3284 13116m    0    0    0    0  333  358   0   0 100
    0.00 116224 227192   3284 13116m    0    0    0    6  493  548   0   0  99
^C
$ pmstat -a /var/log/pcp/pmlogger/bob.example.com/20230610.0.xz -t 2hour -A 1hour -z
Note: timezone set to local timezone of host "bob.example.com" from archive

@ Sat Jun 10 01:00:00 2023
 loadavg                      memory      swap        io    system         cpu
   1 min   swpd   free   buff  cache   pi   po   bi   bo   in   cs  us  sy  id
    0.08   2048  7646m   6440  6591m    0    0    0    3  198  237   0   0 100
    0.08   2048  7650m   6440  6596m    0    0    0    3  202  237   0   0 100
    0.06   2048  7643m   6440  6600m    0    0    0    3  204  236   0   0 100
    0.00   2048  7597m   6440  6624m    0    0    2   27  219  261   0   0 100
    0.09   2048  7609m   6440  6629m    0    0    0    3  215  259   0   0 100
    0.03   2048  7593m   6440  6633m    0    0    0    3  220  261   0   0 100
    0.00   2048  7585m   6440  6638m    0    0    0    4  223  263   0   0 100
    0.01      0 14402m   6740 495508    ?    ?    ?    ?    ?    ?   ?   ?   ?
    0.00      0 14268m   6740 630344    ?    ?    ?    ?    ?    ?   ?   ?   ?
    0.15      0 14272m   6740 634764    0    0    0    2  162  151   0   0 100
    0.13      0 14266m   6740 639188    0    0    0    2  164  152   0   0 100
 pmFetchGroup: End of PCP archive log

Reference:

  1. https://github.com/sysstat/sysstat
  2. https://cockpit-project.org/guide/latest/feature-pcp.html
  3. https://pcp.readthedocs.io/en/latest/UAG/MonitoringSystemPerformance.html#the-pmstat-command

Installation

Debian:

$ sudo apt-get install sysstat

Redhat:

$ sudo dnf install sysstat

Configuration

It should configure itself, but just in case:

Debian:

$ sudo dpkg-reconfigure sysstat
Replacing config file /etc/default/sysstat with new version

Redhat:

$ sudo vi /etc/sysconfig/sysstat
$ sudo systemctl enable --now sysstat

The history files are kept here:

Debian:

$ ls /var/log/sysstat/
sa07

Redhat:

$ ls /var/log/sa/
sa07

The timer is in the systemd configuration file. OnCalendar defines the interval; in this case data is collected every ten minutes. WantedBy ties the timer to sysstat.service, so the timer is active whenever that service is enabled.

Use systemctl edit sysstat-collect.timer [1] to change this file. It automatically creates an override file in the right place [2], enables it for you, and preserves the change across release updates.

File: /usr/lib/systemd/system/sysstat-collect.timer

# /lib/systemd/system/sysstat-collect.timer
# (C) 2014 Tomasz Torcz <tomek@pipebreaker.pl>
#
# sysstat-12.5.2 systemd unit file:
#        Activates activity collector every 10 minutes

[Unit]
Description=Run system activity accounting tool every 10 minutes

[Timer]
OnCalendar=*:00/10

[Install]
WantedBy=sysstat.service
  1. https://www.catalyst2.com/knowledgebase/server-management/how-to-install-configure-sysstat/
  2. Systemd edit override example changing the interval from 10 minutes to 5:
# ls -lrt /etc/systemd/system/sysstat-collect.timer.d/
total 4
-rw-r--r--. 1 root root 27 Feb 19 09:38 override.conf
# more /etc/systemd/system/sysstat-collect.timer.d/override.conf 
[Timer]
OnCalendar=*:00/05

Past Statistics Report

Report on system statistics over the last few days.

File: sar.sh

#!/bin/bash
#################################
# Files are here:
# ls -l /var/log/sysstat/
#  -rw-r--r-- 1 root root 49064 Feb  9 16:35 sa09
#
# Report on some other day:
#  sar -u 2 3 -f /var/log/sysstat/sa15
#
# Output to file:
#  sar -u 2 3 -o /tmp/logfile
#################################
echo "Disk"
sar -d 2 3
echo "Network"
sar -n DEV 2 3
echo "CPU"
sar -u 2 3
sar -P ALL -u 2 3
echo "Memory"
sar -r 2 3
echo "Paging"
sar -B 2 3
echo "Swap"
sar -S 2 3
echo "Load"
sar -q 2 3

Sample Runs

Memory

$ sar -r
Linux 5.10.120-ti-arm64-r64 (app.example.com) 	11/07/2022 	_aarch64_	(2 CPU)

02:09:36 PM kbmemfree   kbavail kbmemused  %memused kbbuffers  kbcached  kbcommit   %commit  kbactive   kbinact   kbdirty
02:10:01 PM    872556   2467364   1037140     27.37    162024   1529680   3898804     66.24    999112   1645384       536
Average:       872556   2467364   1037140     27.37    162024   1529680   3898804     66.24    999112   1645384       536

IO

$ sar -b
Linux 5.10.120-ti-arm64-r64 (app.example.com) 	11/07/2022 	_aarch64_	(2 CPU)

02:09:36 PM       tps      rtps      wtps      dtps   bread/s   bwrtn/s   bdscd/s
02:10:01 PM      6.63      0.16      6.47      0.00      2.25    312.46      0.00
Average:         6.63      0.16      6.47      0.00      2.25    312.46      0.00

Network

$ sar -n DEV
Linux 5.10.120-ti-arm64-r64 (app.example.com) 	11/07/2022 	_aarch64_	(2 CPU)

02:09:36 PM     IFACE   rxpck/s   txpck/s    rxkB/s    txkB/s   rxcmp/s   txcmp/s  rxmcst/s   %ifutil
02:10:01 PM        lo      3.94      3.94      1.66      1.66      0.00      0.00      0.00      0.00
02:10:01 PM      eth0      4.10      5.27      0.37      1.72      0.00      0.00      0.00      0.00
02:10:01 PM      usb0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
02:10:01 PM      usb1      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
02:10:01 PM   docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
Average:           lo      3.94      3.94      1.66      1.66      0.00      0.00      0.00      0.00
Average:         eth0      4.10      5.27      0.37      1.72      0.00      0.00      0.00      0.00
Average:         usb0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
Average:         usb1      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00
Average:      docker0      0.00      0.00      0.00      0.00      0.00      0.00      0.00      0.00

Load and Run Queue

$ sar -q
Linux 5.10.120-ti-arm64-r64 (app.example.com) 	11/07/2022 	_aarch64_	(2 CPU)

02:09:36 PM   runq-sz  plist-sz   ldavg-1   ldavg-5  ldavg-15   blocked
02:10:01 PM         3       531      0.74      0.61      0.51         0
Average:            3       531      0.74      0.61      0.51         0

S.M.A.R.T. Disk Monitoring

Monitor and notify disk health using smartmontools, and email any notifications.

Install software:

Debian:

$ sudo apt-get install smartmontools

Redhat:

$ sudo dnf install smartmontools

Configure

Add a long self-test for Sunday (/dev/sda through /dev/sdX) and comment out DEVICESCAN:

Debian: File: /etc/smartmontools/smartd.conf

Redhat: File: /etc/smartd.conf

~
# Don - 5-Nov-2021
#   -a      Default: equivalent to -H -f -t -l error -l selftest -C 197 -U 198
#   -d TYPE Set the device type: ata, scsi, marvell, removable, 3ware,N, hpt,L/M/N
#   -n MODE No check. MODE is one of: never, sleep, standby, idle
#   -s REGE Start self-test when type/date matches regular expression (see man page)
#           T/MM/DD/d/HH 
#           ^ ^  ^  ^ ^
#           | |  |  | + 24 Hour
#           | |  |  +-- Day of week, 1(monday) through 7(sunday)
#           | |  +----- Day of month, 1 ~ 31
#           | +-------- Month of year, 01 (January) to 12 (December)
#           +---------- T is the type of test that should be run, options are:
#
#                       L for long self-test
#                       S for short self-test
#                       C for conveyance test
#                       O for an Offline immediate Test
#
#   -W D,I,C Monitor Temperature D)ifference, I)nformal limit, C)ritical limit
#   -m ADD  Send warning email to ADD for -H, -l error, -l selftest, and -f
#/dev/nvme0 -a -n never -W 2,30,40 -m don@example.com
# Start long tests on Sunday 9am and short
#  self-tests every night at 2am and send errors to me
#/dev/sda   -a -n never -s (L/../../7/09|S/../.././02) -W 2,30,40 -m don@example.com -M test
/dev/sda   -a -n never -s (L/../../7/09|S/../.././02) -W 2,42,50 -m don@example.com -M diminishing
#/dev/sdb   -a -n never -s (L/../../7/09|S/../.././02) -W 2,30,40 -m don@example.com
# Don - 5-Nov-2021
~
#DEVICESCAN -d removable -n standby -m root -M exec /usr/share/smartmontools/smartd-runner
~

Restart

Restart the smartd daemon to pick up configuration changes:

$ sudo systemctl restart smartd

Monitoring script for testing and reporting.

Change the DEV below and see if your disks support SMART monitoring.

#!/bin/bash
DEV="/dev/sda"
# Info
sudo smartctl -i "${DEV}"
# Show
sudo smartctl -P show "${DEV}"
# turn smart on/off
#sudo smartctl -s on "${DEV}"
# Errors?
sudo smartctl -l error "${DEV}"
# Health Check
sudo smartctl -Hc "${DEV}"
# Selftest Log
sudo smartctl -l selftest "${DEV}"
# Attributes
#  Problems if...
#    Reallocated_Sector_Ct > 0
#    Current_Pending_Sector > 0
sudo smartctl -A "${DEV}"
#
#.... T E S T S ....
# -> short ... couple of minutes
# sudo smartctl -t short /dev/sda
# -> long ... one hour
# sudo smartctl -t long /dev/sda
# -> Look at test results
# sudo smartctl -a /dev/sda
# 
#.... R E P O R T ....
sudo smartctl --attributes --log=selftest   "${DEV}"
#
# - Get the temperature
sudo hddtemp "${DEV}"

smartd database

The history of each smartd monitored disk is located here:

$ ls /var/lib/smartmontools/
drivedb						       smartd.Samsung_SSD_980_1TB-S64ANS0T408956M.nvme.state~  smartd.Samsung_SSD_980_1TB-S64ANS0T418940T.nvme.state~
smartd.Samsung_SSD_980_1TB-S64ANS0T408956M.nvme.state  smartd.Samsung_SSD_980_1TB-S64ANS0T418940T.nvme.state

If you replace the disks and get error reports, you can remove the state files; new ones will be created.

$ sudo rm /var/lib/smartmontools/smartd.Samsung_SSD_980_1TB-S64ANS0T4*

Run smartctl for each disk:

sudo smartctl -i /dev/sda
sudo smartctl -i /dev/sdb
...

Then check the history data files. New ones should show up.

$ ls /var/lib/smartmontools/
attrlog.CT1000BX500SSD1-2251E695AE97.ata.csv  smartd.CT1000BX500SSD1-2251E695AE97.ata.state   smartd.CT1000BX500SSD1-2251E695AE9E.ata.state~
attrlog.CT1000BX500SSD1-2251E695AE9E.ata.csv  smartd.CT1000BX500SSD1-2251E695AE97.ata.state~  smartd.Samsung_SSD_980_1TB-S64ANS0T408956M.nvme.state
drivedb					      smartd.CT1000BX500SSD1-2251E695AE9E.ata.state   smartd.Samsung_SSD_980_1TB-S64ANS0T418940T.nvme.state

Login Notices to Users - motd/issue/issue.net

You can customize the Message of the Day (motd) and the information displayed when logging in to the system.

Installation

Debian:

$ sudo apt-get install cowsay fortune

Redhat:

$ sudo dnf install cowsay fortune-mod

Configuration

Update the message (/etc/motd) every hour.

File: /etc/cron.hourly/motd

#!/bin/bash
# Debian paths:
/usr/games/cowsay $(/usr/games/fortune) > /etc/motd
# Redhat paths (uncomment, and comment out the Debian line):
#/bin/cowsay $(/bin/fortune) > /etc/motd

Verify

File: /etc/motd

 _________________________________________
/ Your mind is the part of you that says, \
| "Why'n'tcha eat that piece of cake?"    |
| ... and then, twenty minutes later,     |
| says, "Y'know, if I were you, I         |
| wouldn't have done that!" -- Steven and |
\ Ondrea Levine                           /
 -----------------------------------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||

File: /etc/issue

Look out!

File: /etc/issue.net

Looke out!

Example login

% ssh don@example         
Looke out!
 _________________________________________
/ Individuality My words are easy to      \
| understand And my actions are easy to   |
| perform Yet no other can understand or  |
| perform them. My words have meaning; my |
| actions have reason; Yet these cannot   |
| be known and I cannot be known. We are  |
| each unique, and therefore valuable;    |
| Though the sage wears coarse clothes,   |
| his heart is jade. -- Lao Tse, "Tao Te  |
\ Ching"                                  /
 -----------------------------------------
        \   ^__^
         \  (oo)\_______
            (__)\       )\/\
                ||----w |
                ||     ||
Last login: Sun Aug 21 10:13:57 2022 from 192.168.0.5

Login notification

Add the following lines to the end of the system bashrc to get a notification whenever any user logs into the system (with the bash shell).

Debian: File: /etc/bash.bashrc

Redhat: File: /etc/bashrc

~
# Email logins - Don November 2020
echo $(who am i) ' just logged on ' $(hostname) ' ' $(date) $(who) | mail -s "Login on" don@example.com
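The who am i output is empty for non-interactive shells, so one possible refinement (a sketch; the guard and address are examples) is to notify only for interactive SSH sessions:

```shell
# Sketch of a refinement: only notify for interactive SSH sessions.
# The address is an example; substitute your own.
notify_login() {
  # compose the one-line notification body
  echo "$(who am i) just logged on $(hostname) at $(date)"
}
if [ -n "${SSH_CONNECTION}" ] && [ -n "${PS1}" ]; then
  notify_login | mail -s "Login on $(hostname)" don@example.com
fi
```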

Continue

Now that you have set up your new server, consider giving it an internet name with DNS.

Proceed in the order presented; some things depend on prior setups.

Book Last Updated: 29-March-2024

Domain Name Service (DNS)


Table of Contents


A Domain Name Service (DNS) allows your E-Mail and Web Server to use a name instead of an IP Address, so other people can find you by name. Some Internet Service Providers (ISP) will change your IP address every few months in a residential environment.

You pay a company to register [1] your unique name and assign it to an IP address. In turn they look up your IP address for anyone trying to connect to your name [2]. Kind of like the old telephone white/yellow pages where telephones were assigned to a fixed location (wire).

  1. Registration
  2. DNS

Dynamic DNS Providers (DDNS)

The primary reason to use a DDNS is to keep your IP address record updated when the address changes. The provider should supply a software/programming interface or a web page to change it.

Dynamic IP Address and E-Mail

Another requirement fulfilled by DDNS is support for sending and receiving E-Mail over Simple Mail Transport Protocol (SMTP). Some residential IP providers block incoming port 25, which is required to receive SMTP mail, and some block E-Mail sent from a residential IP address.

This is where DDNS providers step in: they accept port 25 on your behalf and redirect it to another port on your server. They also provide outgoing SMTP services using their fixed business IP address to pass through other E-Mail handlers' block lists.

SMTP2GO

SMTP2GO [1] is an example of a service that sends your E-Mail with a good reputation and is accepted by almost all E-Mail handlers. They offer a free plan with a limit of 1,000 E-Mails sent per month. This is a good option if you cannot send E-Mail on port 25.

  1. https://www.smtp2go.com/
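If you run Postfix, pointing it at a relay like this takes only a few lines in main.cf. A sketch, assuming SMTP2GO's relay host and one of their alternate ports (take the exact host, port, and credential setup from your own account):

File: /etc/postfix/main.cf

```
~
# Relay all outbound mail through SMTP2GO (host/port from your account)
relayhost = [mail.smtp2go.com]:2525
smtp_sasl_auth_enable = yes
smtp_sasl_password_maps = hash:/etc/postfix/sasl_passwd
smtp_sasl_security_options = noanonymous
smtp_tls_security_level = encrypt
~
```

The username and password pair goes in /etc/postfix/sasl_passwd, followed by postmap and a Postfix reload.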

No-IP

No-IP [1] sells a Mail Reflector to receive your E-Mails at their IP address if you have your domain registered with them. They will also hold your mail for up to seven days if your E-Mail server is down. Just like SMTP2GO they sell an SMTP Alternative Port service to send mail on another port from your host.

  1. https://www.noip.com

Putting It All Together

If your ISP blocks port 25, here is a workflow you can use to send and receive E-Mails over the Internet to and from a recipient using the SMTP2GO service and the No-IP extra service.

E-Mail Send over non-25 port using SMTP2GO -or- NO-IP Alternative SMTP Address.
sequenceDiagram
    participant Your Server
    participant SMTP2GO
    participant E-Mail Contact
    Your Server->>SMTP2GO: Sends E-Mail over port 2567
    SMTP2GO->>E-Mail Contact: Sends E-Mail over port 25
sequenceDiagram
    participant Your Server
    participant NO-IP
    participant E-Mail Contact
    Your Server->>NO-IP: Sends E-Mail over port 2567
    NO-IP->>E-Mail Contact: Sends E-Mail over port 25
E-Mail Receive over non-25 port using No-IP Mail Reflector.
sequenceDiagram
    participant Your Server
    participant NO-IP
    participant E-Mail Contact
    E-Mail Contact->>NO-IP: Receives E-Mail over port 25
    NO-IP->>Your Server: Receives E-Mail over port 2567

No-IP

No-IP [1] handles IP Address to Domain Name Services (DNS) and registration of domains.

They offer DNS services, DDNS, E-Mail, network monitoring and SSL certificates. E-Mail services include IMAP, POP3, SMTP, E-Mail backup, E-Mail reflection and filtering.

A basic free account requires you to log in periodically. You can create up to three free hostnames, using several No-IP domains, or register your own domain. Some routers also support No-IP.

  1. https://www.noip.com

Dynamic Update Client

With a purchased dynamic domain you can run a Dynamic Update Client. It runs on your servers, checks the IP address assigned by your Internet Service Provider (ISP) for changes, and updates your DNS record at No-IP if the address changes.

It is started upon every boot by these lines in file /etc/rc.local:

# no-ip dynamic DNS
/usr/local/bin/noip2

Download here: https://www.noip.com/download
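If you would rather not run the binary, No-IP also documents a plain HTTP update interface. The sketch below builds the update URL (the endpoint follows No-IP's documented update protocol; hostname, address, and credentials are placeholders):

```shell
#!/bin/bash
# Sketch of a DDNS update over No-IP's HTTP update interface instead of
# the noip2 binary. Hostname, IP, and credentials are placeholders.
noip_update_url() {
  printf 'https://dynupdate.no-ip.com/nic/update?hostname=%s&myip=%s' "$1" "$2"
}
# Live call (commented out; needs your No-IP credentials):
# curl -u 'user:password' "$(noip_update_url home.ddns.net 203.0.113.7)"
```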

cloudflare

Cloudflare primarily acts as a reverse proxy between a website's visitors and the Cloudflare customer's hosting provider. They do name registration and also supply free personal DNS services complete with an SSL certificate, Distributed Denial-of-Service protection, DNS Security (DNSSEC), and other services [1], with twice as many Points of Presence (POPs) as No-IP.

If security concerns top your priority list you can't go wrong with Cloudflare. Their public DNS server addresses, 1.1.1.1 and 1.0.0.1, are super fast.

The Cloudflare free tier supports uploading your web pages, integration with GitHub's git workflow, and several other methods for hosting a web site, though there is no ssh into a cloud server. These free web pages are hosted on the pages.dev domain.

If you register your domain with them you can enable E-Mail forwarding [2] from the new domain to an existing E-Mail address. Their domain registration is very well priced compared to other reputable registrars and comes with privacy protection of your whois records, something others charge for. They even have DDNS client update automation [3].

  1. https://www.cloudflare.com/plans/application-services/
  2. https://developers.cloudflare.com/email-routing/
  3. https://developers.cloudflare.com/dns/manage-dns-records/how-to/managing-dynamic-ip-addresses/
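A minimal sketch of such a dynamic update against Cloudflare's v4 API (the zone ID, record ID, token, and hostname below are placeholders; the endpoint shape follows Cloudflare's API reference):

```shell
#!/bin/bash
# Sketch of a dynamic-DNS update via Cloudflare's v4 API.
# Zone ID, record ID, token, and hostname are placeholders.
cf_record_url() {
  printf 'https://api.cloudflare.com/client/v4/zones/%s/dns_records/%s' "$1" "$2"
}
# A live update would look like (commented out; fill in real values first):
# curl -X PUT "$(cf_record_url "$ZONE_ID" "$RECORD_ID")" \
#   -H "Authorization: Bearer $CF_API_TOKEN" \
#   -H "Content-Type: application/json" \
#   --data '{"type":"A","name":"home.example.com","content":"198.51.100.7","ttl":300}'
```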

easyDNS

easyDNS [1] is one of the oldest domain registrar, DNS, web hosting and email providers. Their services are very similar to No-IP's, with a free entry-level DNS [2] and easyMAIL services. However, just like No-IP, you will want to buy domain privacy to protect your home address from web searches, $7.50/yr in this case.

  1. https://easydns.com/
  2. https://easydns.com/dns/

Dynu DNS

Dynu DNS [1] is a single-site DNS, DDNS, E-Mail, certificate and VPS provider from Arizona, USA. Their prices are much lower than No-IP's for Email Forward and Outbound SMTP Relay, around $10 per year. They also offer full-service E-Mail and full access to your DNS records.

If I still used No-IP I would probably switch to them. If nothing else, check out the nice array of Network Tools [2], like SPF generator and DKIM Wizard.

  1. https://www.dynu.com/en-US/
  2. https://www.dynu.com/en-US/NetworkTools

NameCheap

Well now, doesn't that name say it all? Surprising to me, NameCheap [1] actually works quite well and is half the price of the Big Boys.

Registering a new domain name and setting up DNS records is easy and painless. The wait for worldwide DNS propagation is reasonable, and you don't have to be a DNS expert, as they help you with some of the trickier parts.

Best of all, no charge for Privacy protection of your registration address and phone number! Very nice.

  1. https://www.namecheap.com/

Virtual Private Server (VPS)

A VPS is located in someone else's building and is only accessible over the network, like a cloud. Plans vary a lot, from click-only menus to complete root access. Prices range from $3.00 per month to $30, with contracts running one to three years.

The advantage is that they keep up the hardware and DNS and install the operating system. VPS setups offer fixed IP addresses, which are great for E-Mail hosting since no DDNS is required and port 25 is open. They also provide a Graphical Control Panel (cPanel) for administration, and they stand between the Internet and your home residential network, so attacks never reach it directly.

The disadvantages: Linux updates may be handled by the vendor, backups have to occur over the Internet, and their administrators will have full access to your host, so you should not put any financial or personal data on a VPS. The extra cost could also be an issue.

You need to draw up your list of requirements. Something like:

  • OS: Linux
  • Database: PostgreSQL
  • Login: root shell
  • Network Bandwidth: 1TB per month
  • Memory: 2GB
  • Disk: 100GB
  • CPU: 2 cores

Here are just a couple of contenders in the market recently:

InMotion

Reference: https://www.inmotionhosting.com/cloud-vps

  • Linux versions available: CentOS, Ubuntu or Debian
  • Starting at $6.00 month

Configuring Your VPS or Dedicated Server as a Mail Server: https://www.inmotionhosting.com/support/email/configuring-your-vps-dedicated-server-as-a-mail-server/

Hostwinds

Reference: https://www.hostwinds.com/vps/unmanaged-linux

  • Choice of Debian, Ubuntu, Fedora, and CentOS
  • Hourly or monthly billing, starting at $4.99 mo

How to Install iRedMail on a VPS (CentOS 7): https://www.hostwinds.com/tutorials/how-to-install-iredmail-on-a-vps-centos-7

Kamatera

Reference: https://www.kamatera.com/Products/201/Cloud_Servers

  • 40+ Linux distros, from FreeBSD to CloudLinux
  • 30 day free trial, starting at $4.00 mo

How to Create a Linux VPS Server on Kamatera: https://www.linuxbabe.com/linux-server/how-to-create-a-linux-vps-server-on-kamatera

Install a Web Server

Now check that you have:

  • Registered your name with a registrar [1].
    • Example registration check using google:
$ whois google.com|grep 'Domain Name'
   Domain Name: GOOGLE.COM
Domain Name: google.com

  • Assigned an Internet IP address to that name using a DNS [2] provider.
    • Example DNS lookup check using google:
$ nslookup google.com|head -6
Server:		1.1.1.1
Address:	1.1.1.1#53

Non-authoritative answer:
Name:	google.com
Address: 172.253.62.102
  1. https://www.icann.org/en/accredited-registrars
  2. https://www.rfc-editor.org/rfc/rfc1034.html

If the above tests respond for your new name the way they do for google.com, congratulations, you are ready to test a web site using that name.

Apache

Apache is the oldest web server still in widespread use today, and it remains well supported.

Installation

It's easy to install.

  • Debian:
$ sudo apt-get install apache2
  • Redhat:
$ sudo dnf install httpd

Configuration

Edit a new configuration file for a site called me.

  • Debian: File: /etc/apache2/sites-available/me.conf

  • RedHat: File: /etc/httpd/sites-available/me.conf

<VirtualHost *:80>
    ServerAdmin mail@www.example.com
    DocumentRoot /var/www/html
    ServerName www.example.com
    ErrorLog ${APACHE_LOG_DIR}/error.log
    # Possible values include: debug, info, notice, warn, error, crit,
    # alert, emerg.
    LogLevel warn
    #LogLevel debug
    CustomLog ${APACHE_LOG_DIR}/access.log combined
</VirtualHost>

Then enable that configuration.

  • Debian:
$ sudo ln -s /etc/apache2/sites-available/me.conf /etc/apache2/sites-enabled
$ sudo systemctl reload apache2
  • RedHat:
$ sudo ln -s /etc/httpd/sites-available/me.conf /etc/httpd/sites-enabled
$ sudo systemctl reload httpd

Check the link is enabled:

$ ls -l /etc/*/sites-enabled/
total 0
lrwxrwxrwx 1 root root 36 Oct 13  2018 me.conf -> ../sites-available/me.conf

To disable a site, simply remove the link in /etc/*/sites-enabled (on Debian, sudo a2dissite me does the same).

First Page

Now create a landing page for your web site. They are normally placed in an index.html file.

File: /var/www/html/index.html

<!DOCTYPE html>
<html>
    <head>
        <title>ERROR 404 - Nothing to See</title>
      
        <style type="text/css">
            html,
            body {
                height: 100%;
                background-color: #666;
                font-family: Helvetica, sans-serif
            }

            body {
                color: #fff;
                text-align: center;
                text-shadow: 0 1px 3px rgba(0,0,0,.5);
            }

            h1 {
                font-size: 58px;
                margin-top: 20px;
                margin-bottom: 10px;
                font-family: inherit;
                font-weight: 500;
                line-height: 1.1;
                color: inherit;
            }
            
            .site-wrapper {
                display: table;
                width: 100%;
                height: 100%;
                min-height: 100%;
            }

            .site-wrapper-inner {
                display: table-cell;
                vertical-align: top;
            }

            .cover-container {
                margin-right: auto;
                margin-left: auto;
            }

            .site-wrapper-inner {
                vertical-align: middle;
            }
            .cover-container {
                width: 100%;
            }     
            .button {
                background-color: #fff;
                border: none;
                color: white;
                padding: 15px 32px;
                text-align: center;
                text-decoration: none;
                display: inline-block;
                font-size: 16px;
                margin: 4px 2px;
                cursor: pointer;
            }
        </style>    
    </head>    
    <body>
        <div class="site-wrapper">
          <div class="site-wrapper-inner">
            <div class="cover-container">
                <h1 class="cover-heading">ERROR 404 - Move along, nothing to see here</h1>
            </div>
          </div>
        </div>
    </body>
</html>

Check your file permissions; index.html should be owned by www-data and readable by others:

$ sudo chown www-data:www-data /var/www/html/index.html

$ sudo chmod 644 /var/www/html/index.html

$ ls -l /var/www/html/index.html
-rw-r--r-- 1 www-data www-data 1894 Nov  8 19:47 index.html

Network Direction

Next you need to do Port Forwarding on your router. If your server IP address is 192.168.1.5, then in your router redirect port 80 to 192.168.1.5.

Don't forget to open port 80 on your firewall:

  • Debian
$ sudo ufw allow 80/tcp
  • RedHat
$ sudo firewall-cmd --permanent --add-port=80/tcp
$ sudo firewall-cmd --reload

The Big Test

Start the service:

  • Debian
$ sudo systemctl enable --now apache2
  • RedHat
$ sudo systemctl enable --now httpd

Now try your website by name:

http://example.com

If all went well, you should see this:

404.png

Continue

Now that you have set up an Internet name for your new server, consider giving it an internet certificate with Let's Encrypt.

Proceed in the order presented; some things depend on prior setups.

Book Last Updated: 29-March-2024

Web Certificate


Table of Contents


Web certificates let servers encrypt messages sent over the public internet, so nobody in between can see what those messages contain.

Let's Encrypt - Certificate Authority (CA)

Let's Encrypt offers free 90 day SSL/TLS internet certificates, so you can run https:// to encrypt web page bodies, instead of http:// plain text over the internet. Certbot is used to obtain and renew certificates from the Let's Encrypt CA.

Reference: https://letsencrypt.org/

Certbot - Certificate Robot

This is a systemd service and software that will watch for expired certificates and notify you via E-Mail. Other pieces will let you perform a dry-run update of your certificate, and actually perform the certificate update and any configuration changes in your Apache or Nginx web server.

Reference: https://certbot.eff.org/

Install

sudo apt-get install certbot

Configure

Get a certificate and update the apache configuration.

Just follow the prompts, entering your host + domain name.

Instructions: https://certbot.eff.org/instructions

sudo certbot --apache

Schedule

The install creates a systemd timer that checks for expiration and should e-mail you a warning 30 days before your 90-day certificate expires.

$ sudo systemctl list-timers | grep certbot
Mon 2022-08-29 15:46:31 EDT 4h 16min left Mon 2022-08-29 06:38:15 EDT 4h 52min ago  certbot.timer                certbot.service 

Renewal is done on the web host

I wrapped the certbot updater in a script to remind me of the various steps and the places the certificate is used. Normally I keep port 80 closed and block malicious hosts, so the script reverses both for a few minutes while the update occurs.

File: ~/linux/certbot.sh

#!/bin/bash
#---------------------------------------------
# Change port forwarding on router
#---------------------------------------------
#
echo "REMINDER: Open port 80 on ROUTER first!"
read ans
#
#---------------------------------------------
# Disable firewall
#---------------------------------------------
#
echo "Disabling firewall"
sudo ufw disable
#
#---------------------------------------------
# Automatic renewal
#---------------------------------------------
#
read -p "Dry-run [y]: " reply
reply=${reply:-y}
echo $reply
if [[ $reply == "y" ]]; then
  sudo certbot renew --expand --dry-run
else
  sudo certbot renew --expand
fi
#
#---------------------------------------------
# Check certbot service timer is running
#---------------------------------------------
#
sudo systemctl list-timers|grep certbot
##NEXT                         LEFT           LAST                         PASSED       UNIT                   
##Sat 2019-12-28 13:31:07 EST  7h left        Sat 2019-12-28 01:26:25 EST  4h 37min ago certbot.timer          
#
#---------------------------------------------
# Enable firewall
#---------------------------------------------
#
echo "Enabling firewall"
sudo ufw enable
#
#---------------------------------------------
# Copy to the mail host for its dovecot (e-mail) service
#---------------------------------------------
#
read -p "copy to mail [y]: " reply
reply=${reply:-y}
echo $reply
if [[ $reply == "y" ]]; then
  ./copy-cert-to-mail.sh
fi
#
#---------------------------------------------
# Change port forwarding on router
#---------------------------------------------
#
echo "REMINDER: Close port 80 on ROUTER now!"
read ans
#
#---------------------------------------------
# Restart matrix-synapse to pick up new certs
#---------------------------------------------
#
echo "NOTE: Restarting matrix-synapse service"
sudo systemctl restart matrix-synapse
#

Apache - Web Server for Nextcloud

Certbot will probably add the SSLCertificate[File|KeyFile] lines to the Apache virtual host entry.

Check that Strict-Transport-Security is set to force http-to-https conversions. The max-age[1] of 31536000 seconds is 365 days; clients will cache the policy for that long before it expires. Adjust if desired.

File: /etc/apache2/sites-enabled/nextcloud.conf

~
# Don - begin
# Use HTTP Strict Transport Security to force client to use secure connections only      
Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains;"
SSLEngine on

# Don certbot
SSLCertificateFile /etc/letsencrypt/live/example.com/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/example.com/privkey.pem
Include /etc/letsencrypt/options-ssl-apache.conf
</VirtualHost>
~
  1. https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Cache-Control
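As a quick sanity check of the max-age arithmetic above:

```shell
# one year expressed in seconds, matching the max-age value above
echo $((365 * 24 * 60 * 60))   # prints 31536000
```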

Dovecot - Server for E-Mail Clients

If you have the Dovecot E-Mail server installed, this enables IMAPS, which is Internet Message Access Protocol Secure: basically SSL for E-Mail, encrypting messages over the network.

File: /etc/dovecot/conf.d/10-ssl.conf

~
# SSL/TLS support: yes, no, required. <doc/wiki/SSL.txt>
ssl = yes

# PEM encoded X.509 SSL/TLS certificate and private key. They're opened before
# dropping root privileges, so keep the key file unreadable by anyone but
# root. Included doc/mkcert.sh can be used to easily generate self-signed
# certificate, just make sure to update the domains in dovecot-openssl.cnf
# Don - begin
#ssl_cert = </etc/dovecot/private/dovecot.pem
#ssl_key = </etc/dovecot/private/dovecot.key
ssl_cert = </etc/letsencrypt/live/example.com/fullchain.pem                                        
ssl_key = </etc/letsencrypt/live/example.com/privkey.pem  
# Don - end

~

# Directory and/or file for trusted SSL CA certificates. These are used only
# when Dovecot needs to act as an SSL client (e.g. imapc backend or
# submission service). The directory is usually /etc/ssl/certs in
# Debian-based systems and the file is /etc/pki/tls/cert.pem in
# RedHat-based systems.
ssl_client_ca_dir = /etc/ssl/certs

~

# SSL DH parameters
# Generate new params with `openssl dhparam -out /etc/dovecot/dh.pem 4096`
# Or migrate from old ssl-parameters.dat file with the command dovecot
# gives on startup when ssl_dh is unset.
ssl_dh = </usr/share/dovecot/dh.pem

Matrix - Messaging Server

If you have the Matrix messaging server installed, this allows secure communication to clients.

File: /etc/matrix-synapse/homeserver.yaml

grep letsencrypt /etc/matrix-synapse/homeserver.yaml
tls_certificate_path: "/etc/letsencrypt/live/example.com/fullchain.pem"
tls_private_key_path: "/etc/letsencrypt/live/example.com/privkey.pem"

Verify Certificate

This script is good to run before and after the certbot update to view the begin/end valid dates of your certificate. It ensures everything went well and the certs are in a valid location.

File: ~/linux/cert_expire.sh

#!/bin/bash
# ----------------------------------------------------------------------
#
# File: cert_expire.sh
#
# Purpose: See what the expiration date is for Let's Encrypt Certificate
#
#
#  s_client : The s_client command implements a generic SSL/TLS client
#              which connects to a remote host using SSL/TLS.
#  -servername $DOM : Set the TLS SNI (Server Name Indication) extension
#                      in the ClientHello message to the given value.
#  -connect $DOM:$PORT : This specifies the host ($DOM) and optional
#                         port ($PORT) to connect to.
#  x509 : Run certificate display and signing utility.
#  -noout : Prevents output of the encoded version of the certificate.
#  -dates : Prints out the start and expiry dates of a TLS or SSL certificate.
#
# Don Cohoon - Jan 2023
# ----------------------------------------------------------------------
#
#
if [ $# -gt 0 ]; then
  A=${1}
else
  echo "1) E-Mail"
  echo "2) File"
  echo "3) Web"
  echo "4) Local"
  read A
fi
case ${A}
 in
   1)
	echo "REMINDER: Restart dovecot to enable new certs"
	echo "=> E-Mail Certificate: CTRL-C to exit"
	openssl s_client -connect mail.example.com:25 -starttls smtp 2>/dev/null|openssl x509 -noout -dates
	;;
   2)
	echo "=> File Certificate"
	sudo openssl x509 -enddate -noout -in /etc/letsencrypt/live/example.com/fullchain.pem
	;;
   3)
	echo "REMINDER: Restart apache2 and nginx to enable new certs"
	echo "=> www.example.com Certificate: CTRL-C to exit"
	openssl s_client -servername example.com -connect www.example.com:443 2>/dev/null | openssl x509 -noout -dates
	;;
   4)
	echo "REMINDER: Restart apache2 and nginx to enable new certs"
	echo "=> Local Web Certificate: CTRL-C to exit"
	openssl s_client -connect localhost:443 | openssl x509 -noout -dates
	;;
esac
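To turn the notAfter date printed by the script above into a "days remaining" number, a small helper can do the date arithmetic. A sketch; the days_between name is my own and it assumes GNU date:

```shell
# days_between START END: whole days between two date strings (GNU date)
days_between() {
  local start_s end_s
  start_s=$(date -d "$1" +%s)
  end_s=$(date -d "$2" +%s)
  echo $(( (end_s - start_s) / 86400 ))
}

# e.g. feed it "now" and the notAfter date printed by the script above:
#   days_between "$(date)" "Mar 29 12:00:00 2024 GMT"
```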

Continue

Now that you have set up a certificate for your new server, consider installing some Network Attached Storage.

Proceed in the order presented; some things depend on prior setups.


Network Attached Storage (NAS)


Table of Contents


The NAS provides a safe place to store important data. Set this up before running E-Mail or a cloud service, because it is the best place to put important files. If an E-Mail server crashes, you still have all your E-Mail files on the NFS-attached NAS server (see below). That's what it's for.

Three (common) ways to do this:

  1. TrueNAS Storage
  2. Microsoft Windows - SMB/CIFS
  3. Linux - NFS

TrueNAS supports SMB/CIFS, NFS and several other protocols.

TrueNAS Storage

This is a complete machine install that sets up both the TrueNAS application and the operating system, using either:

  • Operating System -> FreeBSD; TrueNAS -> Core [1]
  • Operating System -> Linux; TrueNAS -> Scale [2]

Personally I use Linux after running FreeBSD for years. Both have a complete Graphical Web Interface (GWI), with no need to learn the operating system details. Be aware this package will take over the whole machine and you should not install other packages or change the configuration without using the provided GWI.

Additionally TrueNAS enables apps [3] to be installed with a single click. These are docker containers running the newer versions of popular applications.

  1. https://www.truenas.com/download-truenas-core/#
  2. https://www.truenas.com/download-truenas-scale/
  3. https://www.truenas.com/apps/

Installation

  • Make sure you have at least three disks/SSDs: one for the Operating System (OS) and at least two more for data. An ideal setup would be one M.2 SSD on the motherboard for the OS, and five (NAS-friendly) SATA disks. Also at least 32GB RAM and a PCI 1Gbit Ethernet network card.

  • Download the iso image here:

https://www.truenas.com/download-truenas-scale/

  • Boot into the new image with a bootable USB stick and do the install:

https://www.truenas.com/docs/scale/gettingstarted/install/installingscale/

Configuration

  • Console Setup Menu Configuration [1]

This article provides instructions on configuring network settings using the Console setup menu after you install TrueNAS SCALE from the iso file.

  1. https://www.truenas.com/docs/scale/gettingstarted/install/consolesetupmenuscale/

  • Setting Up Storage [2]

This article provides basic instructions for setting up your first storage pool, and also provides storage requirement information.

  2. https://www.truenas.com/docs/scale/gettingstarted/install/setupstoragescale/

  • Setting Up Data Sharing [3]

This article provides general information on setting up basic data sharing on TrueNAS SCALE.

  3. https://www.truenas.com/docs/scale/gettingstarted/install/setupsharing/

  • Backing Up TrueNAS [4]

This article provides general information and instructions on setting up storage data backup solutions and saving the system configuration file in TrueNAS SCALE.

  4. https://www.truenas.com/docs/scale/gettingstarted/install/setupbackupscale/

Set Admin User: Enable your personal userid for administration and use it to log into the web interface instead of root.

  • Add groups 544(builtin_administrators) and 27(sudo) as secondary groups to your personal user via the web interface.
  1. Credentials > Local Users >
  2. Un-click 'Show Built-in Users' on the top right
  3. Find user, select it, then edit
  4. Auxiliary Groups > add: sudo and builtin_administrators

For Core -> Scale upgrades, you may need to unmount /var/tmp/firmware so the update archive unpacks into /var, which has more disk space: umount -f /var/tmp/firmware

Each of these has a large support community, ready to help with research and questions.

TrueNAS supports a redundant array of disks using ZFS [1], so a single disk failure will not interrupt a running system, and you can replace a failed drive [2] (check out the GUI action) without loss of data.

If you do not use TrueNAS, at least Mirror your Disks [3]

  1. https://en.wikipedia.org/wiki/ZFS
  2. NAS_Disk_Replacement
  3. Mirror_Disks

Unix/Linux Server - Network File System (NFS)

NFS allows one server to share its filesystem with another server. To the client the filesystem appears to be local, but all changes made on the client are actually performed on the remote NFS server.

Install NFS software

On the server with the physical filesystem:

$ sudo apt install nfs-kernel-server

Enable NFS Service

$ sudo systemctl enable --now nfs-server

Create Directory to Share

$ sudo mkdir -p /media/nfs

Export Share

Edit the /etc/exports configuration file. Here, you can configure which directories you’re sharing and who can access them. You can also set specific permissions for the shares to further limit access.

$ sudo vi /etc/exports

In the file, each share gets its own line. That line begins with the location of the share on the server machine. Across from that, you can list the hostname of an accepted client, if it is available in the server's hosts file, or an IP or range of IPs. Directly behind the IP address, place the rules for the share in a set of parentheses. Altogether, it should look something like this:

/media/nfs		192.168.1.0/24(rw,sync,no_subtree_check)

You can include as many shares as you like, provided each has its own line. You can also include more than one hostname or IP in each line and assign them different permissions. For example:

/media/nfs		192.168.1.112(rw,sync,no_subtree_check) 192.168.1.121(ro,sync,no_subtree_check)

In the second instance, each of those machines could view and read from the share, but only the computer at 192.168.1.112 could write to it.

Options:

ro – specifies that the directory may only be mounted as read only
rw – grants both read and write permissions on the directory
no_root_squash – an extremely dangerous option that grants remote root users the same privileges as the root user of the host machine
subtree_check – specifies that, in the case where a directory is exported instead of an entire filesystem, the host should verify the location of files and directories on the host filesystem
no_subtree_check – specifies that the host should not check the location of the files being accessed within the host filesystem
sync – this just ensures that the host keeps any changes uploaded to the shared directory in sync
async – ignores synchronization checks in favor of increased speed

Load exports into live system

$ sudo exportfs -arv
exporting 192.168.1.0/24:/media/nfs

You should consider running NFS over a VLAN. The February 2023 Blog has information on setting up a VLAN.

Connect to NFS server from Linux client

Install Software on Client

On the client machine, install the software needed to access the NFS share over the network.

Debian:

$ sudo apt install nfs-common

Redhat:

$ sudo dnf install nfs-utils

See what exports the server offers. The listing also shows allowed IP addresses, so make sure yours is in the list.

$ showmount -e nas01
Exports list on nas01:
/mnt/nfs 192.168.1.2 192.168.1.3      

Mount Directory

$ sudo mkdir -p /media/share

$ sudo mount -t nfs4 192.168.1.110:/media/nfs /media/share

Make mount permanent

Add an entry to file /etc/fstab

192.168.1.110:/media/nfs	/media/share	nfs4	defaults,user,exec	   0   0

Add noauto to the list of options to prevent your system from trying to mount it automatically.

# NAS
192.168.1.2:/mnt/nfs /data nfs rw,soft,intr,rsize=8192,wsize=8192,timeo=300,nofail,nolock 0 0

Reference: https://linuxconfig.org/how-to-configure-nfs-on-linux
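After adding the fstab entry, you can confirm what is actually mounted by reading /proc/mounts. A small sketch; the nfs_mounts helper name is my own:

```shell
# nfs_mounts: read /proc/mounts-format text on stdin and print the mount
# points of NFS filesystems (field 2, where field 3 starts with "nfs")
nfs_mounts() {
  awk '$3 ~ /^nfs/ {print $2}'
}

# typical use on a live system:
#   nfs_mounts < /proc/mounts
```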

NFS mount on Macos Client

See what exports the server offers. The listing also shows allowed IP addresses, so make sure yours is in the list.

% showmount -e nas01
Exports list on nas01:
/mnt/nfs 192.168.1.2 192.168.1.3      

Create local directory

% mkdir $HOME/nfs

Mount

Mount the NFS share on the directory you just created (/Users/don/nfs):

% sudo mount -o rw -t nfs nas01:/nfs /Users/don/nfs

Optional performance options

sudo mount -t nfs -o soft,intr,rsize=8192,wsize=8192,timeo=900,retrans=3,proto=tcp nas01:/nfs /Users/don/nfs

Microsoft Windows (SMB/CIFS)

This is done on Linux using Samba software.

$ sudo apt-get install samba samba-common-bin

Edit the config file and add at the bottom:

$ sudo vi /etc/samba/smb.conf

~
[shared]
path=/mnt/raid1/shared
writeable=Yes
create mask=0777
directory mask=0777
public=no
~
:wq

Disabling the Automatic Printer Sharing

To disable the automatic printer sharing:

Add the following parameter to the [global] section of your /etc/samba/smb.conf file:

load printers = no

This will stop Samba from trying to open TCP port 631 every 12 minutes, eliminating ufw block warnings in the syslog.

Restart Samba

$ sudo systemctl restart smbd
$ sudo systemctl status smbd
● smbd.service - Samba SMB Daemon
   Loaded: loaded (/lib/systemd/system/smbd.service; enabled; vendor preset: enabled)
   Active: active (running) since Fri 2021-11-26 19:08:12 UTC; 5s ago
     Docs: man:smbd(8)
           man:samba(7)
           man:smb.conf(5)
  Process: 3337 ExecStartPre=/usr/share/samba/update-apparmor-samba-profile (code=exited, status=0/SUCCESS)
 Main PID: 3346 (smbd)
   Status: "smbd: ready to serve connections..."
    Tasks: 4 (limit: 951)
   Memory: 4.5M
   CGroup: /system.slice/smbd.service
           ├─3346 /usr/sbin/smbd --foreground --no-process-group
           ├─3348 /usr/sbin/smbd --foreground --no-process-group
           ├─3349 /usr/sbin/smbd --foreground --no-process-group
           └─3350 /usr/sbin/smbd --foreground --no-process-group

Nov 26 19:08:09 beaglebone systemd[1]: Starting Samba SMB Daemon...
Nov 26 19:08:12 beaglebone systemd[1]: Started Samba SMB Daemon.

Add Linux owner

$ sudo adduser bone
Adding user `bone' ...
Adding new group `bone' (1001) ...
Adding new user `bone' (1001) with group `bone' ...
Creating home directory `/home/bone' ...
Copying files from `/etc/skel' ...
New password:
Retype new password:
passwd: password updated successfully
Changing the user information for bone
Enter the new value, or press ENTER for the default
    Full Name []: bone
    Room Number []:
    Work Phone []:
    Home Phone []:
    Other []:
Is the information correct? [Y/n] y
Adding new user `bone' to extra groups ...
Adding user `bone' to group `dialout' ...
Adding user `bone' to group `i2c' ...
Adding user `bone' to group `spi' ...
Adding user `bone' to group `cdrom' ...
Adding user `bone' to group `floppy' ...
Adding user `bone' to group `audio' ...
Adding user `bone' to group `video' ...
Adding user `bone' to group `plugdev' ...
Adding user `bone' to group `users' ...

Add SMB User

Use different password for SMB:

$ sudo smbpasswd -a bone
New SMB password:
Retype new SMB password:
Added user bone.

Secure the filesystem

If you want to create file shares that are private to individual users, just create their own directory on the RAID array.

mkdir /mnt/raid1/shared/username
sudo chown -R username /mnt/raid1/shared/username
sudo chmod -R 700 /mnt/raid1/shared/username

Replace username with the user you want. Now only that user can access that directory.

Alternatively, you can create additional entries in smb.conf for multiple shares.

Samba Share mount on Linux client

//nas/cifs2_share /mnt/share cifs credentials=/home/don/.smbcredentials,rw,noauto,user,uid=1000 0 0 

Where credentials format is:

File: /home/don/.smbcredentials

user=<name>
pass=<password>
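Since this file holds a password in plain text, it should be readable only by your user. A sketch of creating it with tight permissions, using the same placeholder values as above:

```shell
# create the credentials file (placeholder values) and lock it down so
# the password is not readable by other users on the machine
cred=~/.smbcredentials
cat > "$cred" <<'EOF'
user=<name>
pass=<password>
EOF
chmod 600 "$cred"
```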

Samba Share mount on Mac client

File: /Users/don/mount-smb.sh

#!/bin/zsh
export USER=<user>
export PASS=<password>
export NAS=<192.168.1.8>
export HOME=/Users/don
#
mkdir -p ${HOME}/share
#
/sbin/mount -t smbfs //${USER}:${PASS}@${NAS}/share ${HOME}/share

Mirror Disks for Failure Protection

TODO: Refer to the Mirror Disk page.

Virtual Machines

  • Virtual machines can be created via the web GUI by selecting Virtualization. It uses the qemu/kvm method. If the selection is disabled, you may be able to fix that by going into the system BIOS and enabling the Secure VM (SVM) option or some other tweak.

    • On AMD Ryzen, for example, it is found in the Advanced > Tweaker section. Turn SVM from Disabled to Enabled, and try the VM screen on TrueNAS again.
  • To create a VM, I used these options for Debian 12:

    • Create a DataSet in advance, e.g. Local-VM; assign it to your VM, and the new VM's Storage Volumes will reside there
    • Set Threads and Cores to 1; VM hyper-threading on AMD is not supported
    • 4 virtual CPUs, 1 core 1 hyper thread
    • CPU Model: HOST Model
    • 8 GB memory
    • Use Legacy BIOS, not UEFI

After installing the OS from the .iso file on the virtual CD-ROM, power off the VM and go into its settings. Under devices, find the CD-ROM and delete the device. This will keep the VM from rebooting back into the installer.

Continue

Now that you have set up NAS data protection, consider installing an E-Mail server using some of that safe storage.

Proceed in the order presented; some things depend on prior setups.


Mirror Disks


Table of Contents


Mirroring disks on Linux is simple and prudent, ensuring your data survives hardware, software, and human failures. All data from one disk is duplicated on another disk, and if any one disk fails you will never notice, so it is important to monitor the disk array after setting it up.

Install mdadm

$ sudo apt-get install mdadm

Assembling your arrays

If you had disks before but lost them due to a boot disk rebuild, you can bring them back by using the assemble command:

$ sudo mdadm --assemble --scan

This is the command that runs in the background at boot, assembling and running all your arrays (unless something goes wrong, in which case you usually end up with a partially assembled array; this can be a right pain if you don't realise that's what has happened).

Status of array

Schedule this to run every day and send you the output, probably by e-mail. As you can see below, one disk is missing, so this array is degraded. If you find yourself in this position, add another disk back as soon as possible!

 $ sudo mdadm -D /dev/md/0
/dev/md/0:
           Version : 1.2
     Creation Time : Fri Nov 26 18:39:56 2021
        Raid Level : raid1
        Array Size : 976629440 (931.39 GiB 1000.07 GB)
     Used Dev Size : 976629440 (931.39 GiB 1000.07 GB)
      Raid Devices : 2
     Total Devices : 1
       Persistence : Superblock is persistent

     Intent Bitmap : Internal

       Update Time : Mon Sep  4 13:16:39 2023
             State : clean, degraded 
    Active Devices : 1
   Working Devices : 1
    Failed Devices : 0
     Spare Devices : 0

Consistency Policy : bitmap

              Name : beaglebone:0
              UUID : 5b7b53b5:778a2dbf:be80ae03:938abef
            Events : 33884

    Number   Major   Minor   RaidDevice State
       -       0        0        0      removed
       1       8        1        1      active sync   /dev/sda1

removed is the keyword for a failed disk!
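For the daily status e-mail, a helper can flag the degraded state automatically instead of relying on eyeballing the output. A sketch; the check_degraded name is my own:

```shell
# check_degraded: read `mdadm -D` output on stdin; succeed only when the
# array State line is plain "clean", otherwise warn and fail
check_degraded() {
  local state
  state=$(sed -n 's/^ *State : *//p' | head -n 1 | sed 's/ *$//')
  case "$state" in
    clean) echo "OK: $state" ;;
    *)     echo "WARNING: array state: $state"; return 1 ;;
  esac
}

# e.g.:  sudo mdadm -D /dev/md/0 | check_degraded || mail -s "RAID ALERT" root
```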

Adding a drive to a mirror

This will add a new drive to your mirror. The "--grow" and "--raid-devices" options are optional: if you increase the number of raid devices, the new drive will become an active part of the array and the existing drives will mirror across. If you don't increase the number of raid devices, the new drive will be a spare, and will only become part of the active array if one of the other drives fails.

$ sudo  mdadm [--grow] /dev/md/mirror --add /dev/sdc1 [--raid-devices=3]
$ sudo  mdadm /dev/md/0 --add /dev/sdb1

Creating a mirror raid

The simplest example of creating an array is creating a mirror.

$ sudo mdadm --create /dev/md/name /dev/sda1 /dev/sdb1 --level=1 --raid-devices=2

This will copy the contents of sda1 to sdb1 and give you a clean array. There is no reason why you can't use the array while it is copying (resyncing). The resync can be suppressed with the "--assume-clean" option, but you should only do this if you know the partitions have been wiped to null beforehand. Otherwise, the dead space will not be mirrored, and any check command will moan blue murder.
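You can follow the resync progress in /proc/mdstat. A sketch of pulling out just the percentage; the resync_progress helper name is my own:

```shell
# resync_progress: pull the completion percentage out of /proc/mdstat
# text on stdin (works for both "resync" and "recovery" lines)
resync_progress() {
  sed -n 's/.*\(recovery\|resync\) = *\([0-9.]*%\).*/\2/p'
}

# watch it live while the mirror builds:
#   watch -n 5 cat /proc/mdstat
```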

Configuration File

Run update-initramfs -u after updating this file.

File: /etc/mdadm/mdadm.conf

# mdadm.conf
#
# !NB! Run update-initramfs -u after updating this file.
# !NB! This will ensure that initramfs has an uptodate copy.
#
# Please refer to mdadm.conf(5) for information about this file.
#

# by default (built-in), scan all partitions (/proc/partitions) and all
# containers for MD superblocks. alternatively, specify devices to scan, using
# wildcards if desired.
#DEVICE partitions containers

# automatically tag new arrays as belonging to the local system
HOMEHOST <system>

# instruct the monitoring daemon where to send mail alerts
MAILADDR root

# definitions of existing MD arrays
ARRAY /dev/md/0  metadata=1.2 UUID=5b7b53b5:778a2dbf:be80ae03:938abef name=beaglebone:0

# This configuration was auto-generated on Mon, 04 Sep 2023 14:42:26 +0000 by mkconf

Hint: To re-create the ARRAY definition line above, use this:

$ sudo mdadm --detail --scan

Reference:

Auto Mount Array

I usually put the mount as a separate command, run through /etc/rc.local, in case the array does not get built at boot for some reason. This way you can still log in, fix the error, and mount the array. Otherwise you have to run through boot recovery to fix it.

Note the delay, which allows time for mdadm to assemble the array.

File: /etc/rc.local

#!/bin/bash
#
# Mount mdadm array
#
sleep 60
#
mount /dev/md/0 /mnt/raid1
#
exit 0

Make /etc/rc.local executable and start the rc-local systemd service; it will then run /etc/rc.local on every boot.

$ sudo chmod 755 /etc/rc.local
$ sudo systemctl start rc-local

Monitor Array

Minimal monitoring could be a status script e-mailed to you every morning.

File: /mnt/raid9/raid.sh

#!/bin/bash
TMP=$(mktemp)
sudo /sbin/mdadm -D /dev/md0 >$TMP
/bin/cat $TMP | /usr/bin/mail -s "Raid status" bob@bob.com
rm $TMP

Schedule in cron.

File: /etc/cron.d/mdadm

#
# cron.d/mdadm -- schedules periodic redundancy checks of MD devices
#
# Copyright © martin f. krafft <madduck@madduck.net>
# distributed under the terms of the Artistic Licence 2.0
#

# By default, run at 00:57 on every Sunday, but do nothing unless the day of
# the month is less than or equal to 7. Thus, only run on the first Sunday of
# each month. crontab(5) sucks, unfortunately, in this regard; therefore this
# hack (see #380425).
57 0 * * 0 root if [ -x /usr/share/mdadm/checkarray ] && [ $(date +\%d) -le 7 ]; then /usr/share/mdadm/checkarray --cron --all --idle --quiet; fi
# Don - email status every morning
11 7 * * * root /mnt/raid9/raid.sh

E-Mail


Table of Contents


Electronic Mail (E-Mail) is a way to type letters onto a computer and send them to other people. These other people's computers will then use E-Mail to read your letters. It is an electronic version of the Post Office.

Dovecot - Presents E-Mails to Clients

Dovecot [1] provides a way for Mail User Agents (MUA) to manage their E-Mail. Typical MUAs are Thunderbird [2], Evolution [3], and Mutt [4].

Dovecot supports Internet Message Access Protocol (IMAP, port 143; IMAPS, port 993) [5] as a server over the network to multiple clients at the same time. It is commonly referred to as a Mail Delivery Agent (MDA), delivering mail from a file repository on a server to the MUA.

The Maildir database stores each E-Mail as a separate file on the server, arranged into folders as dictated by the MUA. Indexing is automatic.

Postfix [6] is a Mail Transfer Agent (MTA) that receives E-Mail over the Internet using Simple Mail Transfer Protocol (SMTP, port 25 [7]) and delivers it locally to Dovecot. MUA sending is also done by postfix using Submission (ports 587 [8], and 465 for SSL [9]). Message relay from one mail server to another is done by postfix using SMTP too.

sequenceDiagram
    participant Thunderbird
    participant Dovecot
    participant Postfix
    Thunderbird->>Dovecot: Manage Mail (IMAP)
    Postfix->>Dovecot: Receive Mail
    Internet->>Postfix: Receive Mail (SMTP)
    Thunderbird->>Postfix: Send Mail (Submission)
    Postfix->>Internet: Send Mail (SMTP)
  1. https://www.dovecot.org/
  2. https://www.thunderbird.net/en-US/
  3. https://help.gnome.org/users/evolution/stable/
  4. http://www.mutt.org/
  5. https://www.rfc-editor.org/rfc/rfc9051
  6. http://www.postfix.org/
  7. https://www.rfc-editor.org/rfc/rfc5321.html
  8. https://datatracker.ietf.org/doc/html/rfc4409
  9. https://datatracker.ietf.org/doc/html/rfc8314

Installation

Install the four main packages:

  • core - core files
  • imapd - IMAP daemon
  • managesieved - ManageSieve server
  • sieve - Sieve filters support

Debian

$ sudo apt-get install dovecot-core dovecot-imapd dovecot-managesieved dovecot-sieve 

RedHat

$ sudo dnf install dovecot dovecot-pigeonhole

User Settings

Create a symbolic link so that the mail location (~/Maildir, i.e. /home/<user>/Maildir) points to the NFS mount you created on the NAS page. This will provide the extra protection of ZFS for your E-Mail database, should a disk fail.

For instance, for NFS Mount at /media/share and Linux <user>@<domain>.com:

  • Create Linux user for E-Mail bob@example.com, and directory:
$ sudo useradd bob@example.com
$ sudo mkdir /home/bob@example.com
$ sudo chown bob@example.com  /home/bob@example.com
$ sudo mkdir -p /media/share/bob/Maildir
$ sudo chown -R bob@example.com  /media/share/bob
$ ln -s /media/share/bob/Maildir /home/bob@example.com 
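The symlink step above can be rehearsed with throwaway directories before touching the real /media/share mount. A sketch, using temporary paths in place of the real ones:

```shell
# demonstrate the symlink pattern with throwaway paths (not the real
# NFS mount): the home-side Maildir resolves to the share-side directory
demo=$(mktemp -d)
mkdir -p "$demo/share/bob/Maildir" "$demo/home/bob"
ln -s "$demo/share/bob/Maildir" "$demo/home/bob/Maildir"
readlink "$demo/home/bob/Maildir"   # shows the share-side path
```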

Dovecot MUA logins are Linux logins. Multiple MUAs will log into Dovecot using different logins so ~/Maildir will also be different. Data will be stored on the /media/share NFS mount.

User  Maildir
bob   /media/share/bob/Maildir -> /home/bob@example.com/Maildir
ted   /media/share/ted/Maildir -> /home/ted@example.com/Maildir

Non-Default Settings

Create a configuration file that will override the default setting you want to change. Default settings are in directory: /etc/dovecot/conf.d/

File: /etc/dovecot/local.conf

# Hostname: mail
# Version: 21-Jan-2023
mail_fsync = always
mail_location = maildir:~/Maildir
mail_privileged_group = mail
managesieve_notify_capability = mailto
managesieve_sieve_capability = fileinto reject envelope encoded-character vacation subaddress comparator-i;ascii-numeric relational regex imap4flags copy include variables body enotify environment mailbox date index ihave duplicate mime foreverypart extracttext
mmap_disable = yes
namespace inbox {
  inbox = yes
  location = 
  mailbox Drafts {
    special_use = \Drafts
  }
  mailbox Junk {
    special_use = \Junk
  }
  mailbox Sent {
    special_use = \Sent
  }
  mailbox "Sent Messages" {
    special_use = \Sent
  }
  mailbox Trash {
    special_use = \Trash
  }
  prefix = 
}
passdb {
  driver = pam
}
plugin {
  sieve = file:~/sieve;active=~/.dovecot.sieve
  sieve_default = /var/lib/dovecot/sieve/default.sieve
}
protocols = " imap sieve"
service auth {
  unix_listener /var/spool/postfix/private/auth {
    group = postfix
    mode = 0660
    user = postfix
  }
}
service imap-login {
  inet_listener imap {
    port = 143
  }
  inet_listener imaps {
    port = 993
    ssl = yes
  }
}
service imap {
  process_limit = 1024
}
ssl_cert = </etc/letsencrypt/live/example.com/fullchain.pem
ssl_client_ca_dir = /etc/ssl/certs
ssl_dh = </etc/dovecot/dh.pem
ssl_key = </etc/letsencrypt/live/example.com/privkey.pem
userdb {
  driver = passwd
}
protocol lda {
  mail_plugins = " sieve"
}
protocol imap {
  mail_max_userip_connections = 1024
}

Update systemd startup service

Change the systemd startup After= dependencies to wait for the network to be online and the NFS filesystem to be mounted.

$ sudo systemctl edit dovecot.service

Add these lines

#After=local-fs.target network-online.target
# Add fs-remote... Don - Jan 2023
[Unit]
After=syslog.target network-online.target local-fs.target remote-fs.target nss-lookup.target

This creates a new file to override the system defaults: /etc/systemd/system/dovecot.service.d/override.conf

Generate a file with Diffie-Hellman parameters

$ sudo openssl dhparam -dsaparam -out /etc/dovecot/dh.pem 2048

Depending on the hardware and entropy on the server, generating Diffie-Hellman parameters can take several minutes, especially at 4096 bits (the command above uses 2048).

Restart systemd and Dovecot to pick up changes:

$ sudo systemctl daemon-reload
$ sudo systemctl restart dovecot

Sieve - filters mail to certain boxes

Edit your rules.

File: /var/lib/dovecot/sieve/default.sieve

$ cat /var/lib/dovecot/sieve/default.sieve
require ["fileinto", "envelope"];
#if header  :contains "X-Spam-Flag" "YES"  {
if header :comparator "i;ascii-casemap" :contains "X-Spam-Flag" "YES"  {
    fileinto "INBOX.Spam";
    stop;
} elsif address :is "to" "bob@example.com" {
 fileinto "INBOX.Bob";
} elsif address :is "from" "logcheck@example.com" {
 fileinto "INBOX.Bob.logcheck";
} elsif header :contains "subject" ["Change to Camera"] {
 fileinto "INBOX.Camera";
} else {
 # The rest goes into INBOX
 # default is "implicit keep", we do it explicitly here
 keep;
}

Compile when done, then restart dovecot to pick up the new changes:

$ sudo sievec /var/lib/dovecot/sieve/default.sieve
$ sudo systemctl restart dovecot

Reference: https://doc.dovecot.org/configuration_manual/sieve/usage/
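The rule chain in default.sieve is first-match-wins, with an implicit keep at the end. A condensed sketch of the same decision logic as a shell function (the folder names mirror the sieve file; the logcheck rule is omitted for brevity):

```shell
#!/bin/bash
# Sketch of the sieve rule chain as shell logic: first match wins, and
# anything unmatched falls through to the implicit "keep" (INBOX).
deliver() {
  local headers="$1"
  if grep -qi '^X-Spam-Flag: YES' <<<"$headers"; then     # spam rule first
    echo INBOX.Spam
  elif grep -q '^To: bob@example.com' <<<"$headers"; then # address :is "to"
    echo INBOX.Bob
  elif grep -q '^Subject: .*Change to Camera' <<<"$headers"; then
    echo INBOX.Camera
  else
    echo INBOX                                            # implicit keep
  fi
}

deliver $'X-Spam-Flag: YES\nTo: bob@example.com'   # → INBOX.Spam
deliver $'To: bob@example.com\nSubject: hi'        # → INBOX.Bob
deliver $'To: ted@example.com\nSubject: weather'   # → INBOX
```

Note that the spam test comes first, just as in the sieve file: a spam-flagged message addressed to bob still lands in INBOX.Spam.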

Postfix - sends and receives e-mail over the network

Configuration - main

Create aliases to enable mail to get through from several standard unix accounts.

Separate words with the TAB character, not spaces.

File: /etc/aliases

# See man 5 aliases for format
postmaster:    root
mail:	root
nobody:	root
monit:  root
clamav: root
logcheck: root

Update the alias database so postfix can read it

$ sudo newaliases
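An aliases(5) line maps a local name to a destination, and chains resolve recursively (monit → root). A small illustration of the lookup on a sample file, not the live /etc/aliases:

```shell
#!/bin/bash
# Sketch: parse an aliases(5)-style file and resolve one level of aliasing.
# Uses a throwaway sample file, not the live /etc/aliases.
set -e
ALIASES=$(mktemp)
printf 'postmaster:\troot\nmonit:\troot\nclamav:\troot\n' > "$ALIASES"

# Look up where mail for "monit" ends up (colon, then optional whitespace).
TARGET=$(awk -F':[ \t]*' '$1 == "monit" { print $2 }' "$ALIASES")
echo "monit -> $TARGET"    # → monit -> root
```

In the real file, anything aliased to root ultimately lands wherever root's mail is delivered.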

Unix account uses home_mailbox of ~/Maildir.

Reference: http://www.postfix.org/BASIC_CONFIGURATION_README.html

File: /etc/postfix/main.cf

# See /usr/share/postfix/main.cf.dist for a commented, more complete version
# Debian specific:  Specifying a file name will cause the first
# line of that file to be used as the name.  The Debian default
# is /etc/mailname.
#myorigin = /etc/mailname

# misc
# only hostname
smtpd_banner = $myhostname ESMTP e-mail (Linux)
biff = no
# appending .domain is the MUA's job.
append_dot_mydomain = no
# Uncomment the next line to generate "delayed mail" warnings
#delay_warning_time = 4h
readme_directory = no

# alias
alias_maps = hash:/etc/aliases
alias_database = hash:/etc/aliases

# hosts
myhostname = www.example.com
myorigin = /etc/mailname
mydestination = example.com, example, localhost.localdomain, localhost
mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128 192.168.0.0/16

# mail box
home_mailbox = Maildir/
mailbox_size_limit = 0
message_size_limit = 52428800
header_size_limit = 4096000
recipient_delimiter = +
inet_interfaces = all
mailbox_command = /usr/lib/dovecot/deliver -c /etc/dovecot/dovecot.conf -m "${EXTENSION}"

# transport
virtual_transport = dovecot
dovecot_destination_recipient_limit = 1
compatibility_level = 2
inet_protocols = ipv4

# TLS parameters
smtpd_use_tls = yes
smtpd_tls_cert_file = /etc/letsencrypt/live/example.com/fullchain.pem
smtpd_tls_key_file  = /etc/letsencrypt/live/example.com/privkey.pem
smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache

# No-IP - begin
# http://www.noip.com/support/knowledgebase/configure-postfix-work-alternate-port-smtp/
#debug_peer_list = 192.168.1.1
#
# sasl
smtp_sasl_auth_enable = yes
smtp_sasl_security_options = noanonymous
smtp_sasl_password_maps = hash:/etc/postfix/sasl/sasl_passwd
#
# relay
relayhost = [smtp-auth.no-ip.com]:465
relay_destination_concurrency_limit = 20	
relay_domains = $mydestination
#
# tls
smtp_tls_wrappermode = yes
smtp_tls_security_level = encrypt
# No-IP - end

# sasl authentication
smtpd_sasl_type = dovecot
smtpd_sasl_path = private/auth
smtpd_sasl_authenticated_header = yes
smtpd_sasl_local_domain = example.com

# Block spammers
smtpd_sender_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_unknown_reverse_client_hostname, reject_unknown_client_hostname,
#
smtpd_client_restrictions = 
  check_client_access hash:/etc/postfix/blacklist
smtpd_relay_restrictions = permit_mynetworks, permit_sasl_authenticated, reject_unauth_destination
# block spammers...end

# TLS
smtpd_tls_received_header = yes
smtpd_tls_mandatory_protocols = SSLv3, TLSv1
smtpd_tls_mandatory_ciphers = medium
smtpd_tls_auth_only = yes

# CA
smtp_tls_CAfile = /etc/postfix/cacert.pem
tls_random_source = dev:/dev/urandom

# extra spam protection, 6-April-2019 : begin
smtpd_helo_required = yes
smtpd_helo_restrictions =
    permit_mynetworks
    permit_sasl_authenticated
# extra spam protection, 6-April-2019 : end

# address max connection rate  9-May-2019
smtpd_error_sleep_time = 1s
smtpd_soft_error_limit = 10
smtpd_hard_error_limit = 20
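The size limits in main.cf are plain byte counts: message_size_limit = 52428800 is 50 MiB. If you want a different cap, shell arithmetic gives the value:

```shell
# message_size_limit for a 50 MiB cap, in bytes
echo $((50 * 1024 * 1024))     # 52428800
# a 100 MiB cap would be
echo $((100 * 1024 * 1024))    # 104857600
```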

Configuration - Login

Change *'s to real passwords

File: /etc/postfix/sasl/sasl_passwd

[smtp-auth.no-ip.com]:465 example.com@noip-smtp:***************

Use the postmap command whenever you change the /etc/postfix/sasl/sasl_passwd file.

Reference: http://www.postfix.com/SASL_README.html

Create the sasl_passwd database for the postfix relay to no-ip

$ sudo postmap /etc/postfix/sasl/sasl_passwd

# Protect the source file
$ sudo chown root:root /etc/postfix/sasl /etc/postfix/sasl/sasl_passwd
$ sudo chmod 0600      /etc/postfix/sasl /etc/postfix/sasl/sasl_passwd

# Protect the database file
$ sudo chown root:root /etc/postfix/sasl /etc/postfix/sasl/sasl_passwd.db
$ sudo chmod 0600      /etc/postfix/sasl /etc/postfix/sasl/sasl_passwd.db
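After tightening permissions, you can confirm the resulting mode with stat. A sketch on a throwaway file (stat -c is GNU coreutils):

```shell
#!/bin/bash
# Verify that a credentials file ends up with mode 0600 (owner read/write only),
# using a temp file instead of the real sasl_passwd.
set -e
F=$(mktemp)
chmod 0600 "$F"
stat -c '%a' "$F"    # prints 600
```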

Configuration - Master

Reference: http://www.postfix.org/master.5.html

Add the smtp, smtps and submission internet services (smtpd), and the spamassassin and dovecot local services (pipe), to the default master.cf file.

The -o lines override options in the main.cf file.

File: /etc/postfix/master.cf

# ==========================================================================
# service type  private unpriv  chroot  wakeup  maxproc command + args
#               (yes)   (yes)   (yes)   (never) (100)
# ==========================================================================
#............................................................................
# P O S T F I X
smtp       inet  n       -       y       -       -       smtpd -o content_filter=spamassassin
submission inet  n       -       y       -       -       smtpd
    -o smtpd_tls_security_level=encrypt
    -o smtpd_sasl_auth_enable=yes
    -o smtpd_sasl_type=dovecot
    -o smtpd_sasl_path=private/auth
    -o smtpd_sasl_security_options=noanonymous
    -o smtpd_sasl_local_domain=$myhostname
    -o smtpd_sender_login_maps=hash:/etc/postfix/virtual
smtps      inet  n       -       y       -       -       smtpd
    -o smtpd_tls_wrappermode=yes
    -o smtpd_sasl_auth_enable=yes
    -o milter_macro_daemon_name=ORIGINATING
#............................................................................
# S P A M A S S A S S I N
spamassassin unix -     n       n       -       -       pipe
  user=debian-spamd argv=/usr/bin/spamc -f -e  /usr/sbin/sendmail -oi -f ${sender} ${recipient}
#............................................................................
# D O V E C O T
dovecot   unix  -       n       n       -       -       pipe
 flags=DRhu user=mail:mail argv=/usr/libexec/dovecot/deliver -f ${sender} -d ${recipient} -a ${original_recipient}
#............................................................................
# M I S C
pickup     fifo  n       -       y       60      1       pickup
cleanup    unix  n       -       y       -       0       cleanup
qmgr      fifo  n       -       n       300     1       qmgr
#qmgr     fifo  n       -       -       300     1       oqmgr
tlsmgr     unix  -       -       y       1000?   1       tlsmgr
rewrite    unix  -       -       y       -       -       trivial-rewrite
bounce     unix  -       -       y       -       0       bounce
defer      unix  -       -       y       -       0       bounce
trace      unix  -       -       y       -       0       bounce
verify     unix  -       -       y       -       1       verify
flush      unix  n       -       y       1000?   0       flush
proxymap  unix  -       -       n       -       -       proxymap
proxywrite unix -       -       n       -       1       proxymap
relay      unix  -       -       y       -       -       smtp
    -o smtp_fallback_relay=
showq      unix  n       -       y       -       -       showq
error      unix  -       -       y       -       -       error
retry      unix  -       -       y       -       -       error
discard    unix  -       -       y       -       -       discard
local     unix  -       n       n       -       -       local
virtual   unix  -       n       n       -       -       virtual
lmtp       unix  -       -       y       -       -       lmtp
anvil      unix  -       -       y       -       1       anvil
#
scache     unix  -       -       y       -       1       scache
maildrop  unix  -       n       n       -       -       pipe
  flags=DRhu user=vmail argv=/usr/bin/maildrop -d ${recipient}

SpamAssassin - Puts SPAM into a SPAM folder automatically

Configure

Install

  • RedHat
dnf install spamassassin
  • Debian
apt-get install spamassassin spamc

Create a spam user, unless Debian/Ubuntu did this for you.

Check for spam user:

$ grep spam /etc/passwd
debian-spamd:x:135:140::/var/lib/spamassassin:/bin/sh

If no user exists yet, create one

$ adduser spamd --disabled-login

Config file

Be sure to set CRON=1 and allow IPv6.

  • Debian File: /etc/default/spamassassin
# /etc/default/spamassassin
# Duncan Findlay

# WARNING: please read README.spamd before using.
# There may be security risks.

# Prior to version 3.4.2-1, spamd could be enabled by setting
# ENABLED=1 in this file. This is no longer supported. Instead, please
# use the update-rc.d command, invoked for example as "update-rc.d
# spamassassin enable", to enable the spamd service.

# Options
# See man spamd for possible options. The -d option is automatically added.

# SpamAssassin uses a preforking model, so be careful! You need to
# make sure --max-children is not set to anything higher than 5,
# unless you know what you're doing.

#OPTIONS="--create-prefs --max-children 5 --helper-home-dir"
# Don 17-Jan-2022 - fix connection refused on ipv6
OPTIONS="-A 127.0.0.1 -A ::1 --create-prefs --max-children 5 --helper-home-dir"

# Pid file
# Where should spamd write its PID to file? If you use the -u or
# --username option above, this needs to be writable by that user.
# Otherwise, the init script will not be able to shut spamd down.
PIDFILE="/var/run/spamd.pid"

# Set nice level of spamd
#NICE="--nicelevel 15"

# Cronjob
# Set to anything but 0 to enable the cron job to automatically update
# spamassassin's rules on a nightly basis
CRON=1

All local customization happens in the next file.

I like to change the header to add the SPAM_SCORE, modify the original E-Mail with the new header information, and lower the threshold to mark as spam from 5 to 3.

  • RedHat File: /etc/mail/spamassassin/local.cf

  • Debian File: /etc/spamassassin/local.cf

# This is the right place to customize your installation of SpamAssassin.
#
# See 'perldoc Mail::SpamAssassin::Conf' for details of what can be
# tweaked.
#
# Only a small subset of options are listed below
#
###########################################################################

#   Add *****SPAM***** to the Subject header of spam e-mails
#
# rewrite_header Subject *****SPAM*****
# Don - b
rewrite_header Subject ***** SPAM _SCORE_ ***** 
# Don - e


#   Save spam messages as a message/rfc822 MIME attachment instead of
#   modifying the original message (0: off, 2: use text/plain instead)
#
# report_safe 1
# Don - b
report_safe 0
# Don - e


#   Set which networks or hosts are considered 'trusted' by your mail
#   server (i.e. not spammers)
#
# trusted_networks 212.17.35.


#   Set file-locking method (flock is not safe over NFS, but is faster)
#
# lock_method flock


#   Set the threshold at which a message is considered spam (default: 5.0)
#
# required_score 5.0
# Don -b
required_score 3.0
# Don -e


#   Use Bayesian classifier (default: 1)
#
# use_bayes 1
# Don -b
use_bayes 1
# Don -e


#   Bayesian classifier auto-learning (default: 1)
#
# bayes_auto_learn 1
# Don -b
bayes_auto_learn 1
# Don -e


#   Set headers which may provide inappropriate cues to the Bayesian
#   classifier
#
# bayes_ignore_header X-Bogosity
# bayes_ignore_header X-Spam-Flag
# bayes_ignore_header X-Spam-Status


#   Whether to decode non- UTF-8 and non-ASCII textual parts and recode
#   them to UTF-8 before the text is given over to rules processing.
#
# normalize_charset 1

#   Textual body scan limit    (default: 50000)
#
#   Amount of data per email text/* mimepart, that will be run through body
#   rules.  This enables safer and faster scanning of large messages,
#   perhaps having very large textual attachments.  There should be no need
#   to change this well tested default.
#
# body_part_scan_size 50000

#   Textual rawbody data scan limit    (default: 500000)
#
#   Amount of data per email text/* mimepart, that will be run through
#   rawbody rules.
#
# rawbody_part_scan_size 500000

#   Some shortcircuiting, if the plugin is enabled
# 
ifplugin Mail::SpamAssassin::Plugin::Shortcircuit
#
#   default: strongly-whitelisted mails are *really* whitelisted now, if the
#   shortcircuiting plugin is active, causing early exit to save CPU load.
#   Uncomment to turn this on
#
#   SpamAssassin tries hard not to launch DNS queries before priority -100. 
#   If you want to shortcircuit without launching unneeded queries, make
#   sure such rule priority is below -100. These examples are already:
#
# shortcircuit USER_IN_WHITELIST       on
# shortcircuit USER_IN_DEF_WHITELIST   on
# shortcircuit USER_IN_ALL_SPAM_TO     on
# shortcircuit SUBJECT_IN_WHITELIST    on

#   the opposite; blacklisted mails can also save CPU
#
# shortcircuit USER_IN_BLACKLIST       on
# shortcircuit USER_IN_BLACKLIST_TO    on
# shortcircuit SUBJECT_IN_BLACKLIST    on

#   if you have taken the time to correctly specify your "trusted_networks",
#   this is another good way to save CPU
#
# shortcircuit ALL_TRUSTED             on

#   and a well-trained bayes DB can save running rules, too
#
# shortcircuit BAYES_99                spam
# shortcircuit BAYES_00                ham

endif # Mail::SpamAssassin::Plugin::Shortcircuit

These spam settings in /etc/postfix/master.cf were also shown above; they are repeated here for clarity.

File: /etc/postfix/master.cf

Find the following line and add the spamassassin filter:
~
smtp      inet  n       -       -       -       -       smtpd
-o content_filter=spamassassin
~

Then append the following parameters:
~
spamassassin unix -     n       n       -       -       pipe
user=spamd argv=/usr/bin/spamc -f -e
/usr/sbin/sendmail -oi -f ${sender} ${recipient}
~

Start spamassassin and restart postfix

IMPORTANT: SpamAssassin must connect to the network to complete initialization, but during reboot the network may not yet be fully up with DNS resolving, so we need to force a wait in the systemd service script for spamassassin.

Replace 'ExecStartPre' with the bash line below, and change 'After' to add dependencies on the network being online and DNS lookups working.

File: /lib/systemd/system/spamassassin.service

~
[Unit]
# Depend on: online, remote, nss...
After=syslog.target network-online.target remote-fs.target nss-lookup.target
~
[Service]
# Wait for dns resolver
ExecStartPre=/bin/bash -c 'until host google.com; do sleep 1; done'
~
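The ExecStartPre line is a standard until-loop that blocks until a command succeeds. The same pattern, demonstrated with a hypothetical probe that fails twice before succeeding in place of host google.com:

```shell
#!/bin/bash
# Sketch of the ExecStartPre wait pattern: retry a probe until it succeeds.
# A counter stands in for "host google.com".
tries=0
probe() {
  tries=$((tries + 1))
  [ "$tries" -ge 3 ]    # fails on attempts 1 and 2, succeeds on attempt 3
}
until probe; do sleep 0.1; done
echo "network check passed after $tries tries"
```

In the real unit file the probe is `host google.com`, so the service waits until DNS actually resolves, not merely until an interface is up.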

Now restart systemd, spamassassin and postfix to pick up new configuration changes.

$ sudo systemctl daemon-reload
$ sudo systemctl restart spamassassin
$ sudo systemctl restart postfix

Daily update in /etc/cron.daily

This is to update the spam databases from the internet

FYI: Check the file in /etc/cron.daily for the scheduled entry

$ cat /etc/cron.daily/spamassassin
#!/bin/bash
# -v verbose
# -D debug
/bin/sa-update -v -D

Put spam/ham learning into a script

If you find spam in your inbox, move it to the SPAM folder and the sa-learn command will update the local learning. Conversely, if you find good E-Mail in the SPAM folder, move it to your INBOX and the next learning cycle will mark it as good E-Mail (ham).

In the next script, change the paths to your own Maildir directory, and add or delete E-Mail folders as required for the spam and ham actions.

Reference: https://spamassassin.apache.org/doc.html

File: /home/bob/spam

$ cat spam
HOME=/home/bob
# https://spamassassin.apache.org/full/3.1.x/doc/sa-learn.html
sa-learn -u debian-spamd --backup >/tmp/spam.bkup
sa-learn -u debian-spamd --no-sync --spam $HOME/Maildir/.Junk/{cur,new}
sa-learn -u debian-spamd --no-sync --spam $HOME/Maildir/.Junk\ E-mail/{cur,new}
sa-learn -u debian-spamd --no-sync --ham  $HOME/Maildir/.INBOX.Bob/{cur,new}
sa-learn -u debian-spamd --sync
sa-learn -u debian-spamd --dump magic

Now schedule the local spam learning. Create this script and put it in /etc/cron.daily so it will run once a day.

File: /etc/cron.daily/spam

#!/bin/bash
DIR=/tmp
RESULT=${DIR}/spam.txt
/home/bob/spam >${RESULT}
if [ ! -s "${RESULT}" ]; then
  rm ${RESULT}
else
  cat ${RESULT} | mail -s "Spam refresh" bob@example.com 2>/dev/null
fi
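The wrapper mails the report only when the script produced output; [ ! -s file ] is true for a missing or empty file. The test in isolation:

```shell
#!/bin/bash
# Demonstrate the "-s" test used by the cron wrapper: true only for a
# non-empty file, so empty reports are deleted instead of mailed.
EMPTY=$(mktemp)
FULL=$(mktemp)
echo "learned 2 spam messages" > "$FULL"

for f in "$EMPTY" "$FULL"; do
  if [ ! -s "$f" ]; then
    echo "skip (empty): $f"
  else
    echo "would mail: $f"
  fi
done
```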

Mail Readers

Mutt

Mutt [1] is a text-only e-mail reader, capable of running over an ssh connection.

mutt_index.gif

Install

$ sudo apt-get install mutt

Configure

Global options are in the file /etc/Muttrc. User options are in the file ~/.muttrc.

Assuming your local maildir is in /backup/Maildir...

source ~/.mutt/mailboxes
folder-hook Home set from="bob@example.com"
#folder-hook Work set from="youremail@work.com"
set mbox_type=Maildir
set folder="/backup/Maildir/Home"
set mask="!^\\.[^.]"
set mbox="/backup/Maildir/Home"
set record="+.Sent"
set postponed="+.Drafts"
set spoolfile="/backup/Maildir/Home/.INBOX"

If your mail server is over a network, use this configuration

#    Tell mutt to use your IMAP INBOX as your $spoolfile: set spoolfile=imap://hostname/INBOX
#    Set your $folder to your IMAP root: set folder=imap://hostname/
# activate TLS if available on the server
set ssl_starttls=yes
# always use SSL when connecting to a server
set ssl_force_tls=yes

set spoolfile   = imaps://example.org:993/INBOX
set folder      = imaps://example.org:993/
set imap_user   = bob@example.org
set imap_pass   = abcdIfYouSeeMe1234
set spoolfile   = +INBOX
mailboxes       = +INBOX
set smtp_url    = smtps://bob:abcdIfYouSeeMe1234@example.org:25

# Refresh new messages
set mail_check = 3

# Store message headers locally to speed things up.
# If hcache is a folder, Mutt will create sub-cache folders for each account, which may speed things up
set header_cache = ~/.cache/mutt

# Store messages locally to speed things up, like searching message bodies.
# Can be the same folder as header_cache.
# This will cost important disk usage according to your e-mail amount.
set message_cachedir = "~/.cache/mutt"

# Specify where to save and/or look for postponed messages.
set postponed = +Drafts

# Allow Mutt to open a new IMAP connection automatically.
unset imap_passive

# Keep the IMAP connection alive by polling intermittently (time in seconds).
set imap_keepalive = 300

# How often to check for new mail (time in seconds).
set mail_check = 120
  1. http://www.mutt.org/

Evolution

Evolution [1] is a Graphical User Interface (GUI) mail reader, the best one for the Linux desktop.

evolution_window-overview-layers.png
evolution_legend.png

Install

$ sudo apt-get install evolution

Configure

Launch the application and configure the receiving (IMAPS) and sending (SMTP) servers, plus options like the timezone.

  1. https://help.gnome.org/users/evolution/stable/

Thunderbird

Thunderbird [1] is a GUI mail reader, the best one for MacOS or Windows.

Thunderbird-email.png

Install

$ sudo apt-get install thunderbird

Configure

Launch the application and configure the receiving (IMAPS) and sending (SMTP) servers, plus options like the timezone.

  1. https://www.thunderbird.net/en-US/

Offlineimap - Makes a backup copy of all email

OfflineIMAP [1] will save a complete, workable E-Mail clone in case of total loss on the E-Mail server. You can even run Evolution/Thunderbird/mutt against the copy on the remote server.

Install

$ sudo apt-get install offlineimap

Configure

If you run IMAPS, get your cert_fingerprint using the following on the E-Mail server:

$ grep -v ^- /etc/letsencrypt/live/example.com/cert.pem | base64 -d | sha1sum
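That pipeline strips the PEM armor lines (grep -v ^-), base64-decodes the body back to raw DER bytes, and hashes them. The same mechanics on a stand-in "certificate" whose payload is just the string hello:

```shell
#!/bin/bash
# Sketch of the fingerprint pipeline on a fake PEM file: strip the
# -----BEGIN/END----- armor, base64-decode the body, sha1 the raw bytes.
set -e
PEM=$(mktemp)
{
  echo '-----BEGIN CERTIFICATE-----'
  printf 'hello' | base64            # stand-in for the DER payload
  echo '-----END CERTIFICATE-----'
} > "$PEM"

grep -v ^- "$PEM" | base64 -d | sha1sum
# → aaf4c61ddcc5e8a2dabede0f3b482cd9aea9434d  (the sha1 of "hello")
```

Against a real cert.pem this produces the sha1 of the DER encoding, which is what offlineimap compares cert_fingerprint to.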

Create the .offlineimaprc file in your $HOME directory (~) on the remote host, and change things like localfolders, remotehost, remoteuser, remotepass, and cert_fingerprint.

File: ~/.offlineimaprc

# Sample minimal config file.  Copy this to ~/.offlineimaprc and edit to
# get started fast.
# sha1 fingerprint:
# grep -v ^- cert.pem  | base64 -d | sha1sum

[general]
accounts = Home

[Account Home]
localrepository = LocalHome
remoterepository = RemoteHome

[Repository LocalHome]
type = Maildir
localfolders = /backup/Maildir/Home

# Translate your maildir folder names to the format the remote server expects
# So this reverses the change we make with the remote nametrans setting
nametrans = lambda name: re.sub('^\.', '', name)


[Repository RemoteHome]
type = IMAP
remotehost = example.com
remoteuser = mail
remotepass = *************
# openssl_sha1
cert_fingerprint = *************************************
# Need to exclude '' otherwise it complains about infinite naming loop?
folderfilter = lambda foldername: foldername not in ['']
# For Dovecot to see the folders right I want them starting with a dot,
# and dovecot set to look for .INBOX as the toplevel Maildir
nametrans = lambda name: '.' + name

[mbnames]
enabled = yes
filename = ~/.mutt/mailboxes
header = "mailboxes "
peritem = "+%(accountname)s/%(foldername)s"
sep = " "
footer = "\n"

Reference: https://blog.wikichoon.com/2017/05/configuring-offlineimap-dovecot.html
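The two nametrans settings must be inverses: the remote side prefixes a dot (Sent becomes .Sent, matching Dovecot's Maildir++ layout) and the local side strips it off again. The round trip, expressed with sed:

```shell
#!/bin/bash
# Sketch of the nametrans round trip: remote->local adds a leading dot,
# local->remote strips it, so the two lambdas are inverses.
set -e
remote=Sent
local_name=".$remote"                                  # lambda name: '.' + name
back=$(printf '%s\n' "$local_name" | sed 's/^\.//')    # re.sub('^\.', '', name)
echo "$remote -> $local_name -> $back"                 # → Sent -> .Sent -> Sent
```

If the two transforms are not exact inverses, offlineimap will create duplicate folders on each sync.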

Create a script to run it on remote host

File: ~/offlineimap.sh

#!/bin/bash
export HOME=/home/bob
LOGFILE=/var/log/offlineimap.log
if [ -d ~/Maildir ]; then
  /usr/bin/date > $LOGFILE
  /usr/bin/offlineimap >> $LOGFILE 2>&1
  /usr/bin/date >> $LOGFILE
fi

Schedule

Schedule the script to run on the remote host

File: /etc/cron.d/offlineimap

# This is a cron file for offlineimap
# 
SHELL=/bin/bash
PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
MAILTO="bob@example.com"
# m h  dom mon dow user  command
33  *  *   *   *   bob   /home/bob/offlineimap.sh
  1. http://www.offlineimap.org/

Postfix Log Summary

Pflogsumm is a log analyzer/summarizer for the Postfix MTA. It is designed to provide an overview of Postfix activity. Pflogsumm generates summaries and, in some cases, detailed reports of mail server traffic volumes, rejected and bounced email, and server warnings, errors, and panics.

Install

  • Debian
$ sudo apt-get install pflogsumm
  • RedHat
$ sudo dnf install postfix-perl-scripts

Schedule

Create a script in /etc/cron.daily to run it, like this:

  • Debian
/usr/sbin/pflogsumm -d yesterday /var/log/mail.log --problems-first --rej-add-from --verbose-msg-detail | /usr/bin/mail -s "`uname -n` daily mail stats" me@example.com
  • RedHat
/usr/sbin/pflogsumm -d yesterday /var/log/maillog --problems-first --rej-add-from --verbose-msg-detail | /usr/bin/mail -s "`uname -n` daily mail stats" me@example.com

Alternatives

exim4

Exim (Experimental Internet Mailer) [1] receives and sends mail and is referred to as an MTA, like postfix. It does not provide POP or IMAP interfaces to read mail. It is available on most Linux distributions as a package install, but was removed from RedHat due to low popularity.

What it does

  • RFC 2821 SMTP and RFC 2033 LMTP email message transport.

  • Incoming (as SMTP server):

    • SMTP over TCP/IP (Exim daemon or inetd);
    • SMTP over the standard input and output (the -bs option);
    • Batched SMTP on the standard input (the -bS option).
  • Exim also supports RFC 5068 Message Submission, as an SMTP server with, for example, encrypted and authenticated connections on port 587.

  • Outgoing email (as SMTP or LMTP client):

    • SMTP over TCP/IP (the smtp transport);
    • LMTP [2] over TCP/IP (the smtp transport with the protocol option set to “lmtp”);
    • LMTP over a pipe to a process running in the local host (the lmtp transport);
    • Batched SMTP to a file or pipe (the appendfile and pipe transports with the use_bsmtp option set).
  • Configuration

    • Access Control Lists - flexible policy controls.
    • Content scanning, including easy integration with spam and virus scanners like SpamAssassin and ClamAV.
    • Encrypted SMTP connections using TLS/SSL.
    • Authentication with a variety of front end and back end methods, including PLAIN, LOGIN, sasl, dovecot, spa, cram_md5.
    • Rewrite - rewrite envelope and/or header addresses using regular expressions.
    • Routing controls - use routers to redirect, quarantine, or deliver messages.
    • Transports - use transports to deliver messages by smtp, lmtp, or to files, directories, or other programs.
    • Flexible retry rules for temporary delivery problems.

I usually install it on non-email Debian servers because it is very lightweight and works well for sending monitoring messages from servers to the main E-Mail server [3].

Pros:

  • Small footprint, able to run on an SBC like the Raspberry Pi
  • Simple configuration on Debian only
  • Extendable

Cons:

  • Only an MTA; does not provide mailboxes without an MDA like Dovecot
  • Not as well known as postfix, so probably fewer people and businesses supporting it
  • Not available in mainstream RedHat, where Postfix and Sendmail are the only alternatives
  1. https://www.exim.org/docs.html
  2. Local Mail Transfer Protocol
  3. Setup_Server

iRedMail

With iRedMail [1], you can deploy an OPEN SOURCE, FULLY FLEDGED, FULL-FEATURED mail server in several minutes, for free.

It supports all major Linux distributions and offers calendar/contact sync, antispam/antivirus protection, TLS security, and webmail hosted locally on your server. This would replace the Dovecot/postfix combination described above.

Read the documentation [2] and decide for yourself. This takes over a host, installing many different products: a database (MySQL/PostgreSQL, LDAP), DKIM, a spam filter, fail2ban, netdata, postfix, Dovecot, webmail, etc. You will need about 4GB of memory and a couple of CPUs, along with 20GB of disk.

Pros:

  • multiple E-Mail domains
  • multiple E-Mail accounts
  • Nice GUI for managing the E-Mail accounts
  • Includes massive system monitor, netdata [3]
  • You can buy support

Cons:

  • Need a bigger, dedicated machine to host it
  • Puts much of the configuration inside a database
  • Not well suited for a small setup at home, due to the complexity
  1. https://www.iredmail.org/
  2. https://docs.iredmail.org/index.html
  3. https://www.netdata.cloud/

Mail-in-a-Box

Technically, Mail-in-a-Box [1] turns a fresh cloud computer into a working mail server. But you don’t need to be a technology expert to set it up.

Each Mail-in-a-Box provides webmail and an IMAP/SMTP server for use with mobile devices and desktop mail software. It also includes contacts and calendar synchronization.

This project provides a simple, turn-key solution. There are basically no configuration options and you can’t tweak the machine’s configuration files after installation.

My observation is that this is good for a dedicated mail server machine, and that's all that machine should do. Perhaps it would work well on a Raspberry Pi SBC.

Pros:

  • Do not need to know all the technical details of E-Mail to set it up and use it
  • Small system requirements, runnable on an SBC

Cons:

  • Requires a dedicated machine, as it takes over the host
  • Not sure how good support will be, especially for critical systems
  1. https://mailinabox.email/

Citadel

This open source project [1] provides "Email, collaboration, groupware, and content management - up and running in minutes, on your own hardware or in the cloud."

Citadel is groupware with BBS roots, and still offers a traditional text-based BBS front end and chat. If you like old school, this is for you.

To find out more, just read the FAQ [2]. It looks interesting to me; at least one person has posted about running it on a Raspberry Pi [3].

Pros:

  • Do not need to know all the technical details of E-Mail to set it up and use it
  • Small system requirements, runnable on an SBC

Cons:

  • Does more than E-Mail; you may not need all the features it installs
  • Not sure how good support will be, especially for critical systems
  1. https://www.citadel.org/
  2. https://www.citadel.org/faq.html
  3. https://www.ionos.com/digitalguide/server/configuration/set-up-your-own-raspberry-pi-mail-server/

Continue

Now that you have set up E-Mail on your server, you will need a database for many more things, so now is a good time to install the versatile PostgreSQL database.

Proceed in the order presented; some things depend on prior setups.

Book Last Updated: 29-March-2024

E-Mail Forward



Overview

The following diagrams show what this document will accomplish. The challenge is to obtain and keep a good reputation that other E-Mail handlers around the world will accept; otherwise your messages will be dumped in the trash can (SPAM) or sent back to you (bounced).

It is a combination of:

  • IP Address Reputation
  • DNS assignment of an E-Mail name to that IP Address
  • Certificates for Authentication and Encryption
  • Login protection against Open Spam Relays

A VPS (Virtual Private Server) is a way to obtain a good IP address reputation, and its address will stay the same over time. There are many providers to choose from; some already have bad reputations and constant streams of hackers knocking on the network, while others have a good reputation and just a few random hackers on your doorstep. Choose wisely!

  • MUA - Mail User Agent; Thunderbird, Evolution - Read, send, user interface
  • MDA - Mail Delivery Agent; Dovecot - File and organize mail, authorize user accounts
  • MTA - Mail Transport Agent; Postfix - Move messages from one network stop to another

Outgoing: Relay from Inside Home

graph TD;
        Home_E-Mail.MUA-->Home_Postfix.MTA;
        Home_Postfix.MTA-->VPS_Postfix.MTA;
        VPS_Postfix.MTA-->Internet;

Outgoing: Relay from Outside Home

graph TD;
        Mobile_E-Mail.MUA-->VPS_Postfix.MTA;
        VPS_Postfix.MTA<-->VPS_Dovecot.MDA;
        VPS_Dovecot.MDA<-->VPS_Linux;
        VPS_Postfix.MTA-->Internet;

Incoming: Transport from Internet

graph TD;
        Internet-->VPS_Postfix.MTA;
        VPS_Postfix.MTA-->Home_Postfix.MTA;
        Home_Postfix.MTA-->Home_Dovecot.MDA;
        Home_Dovecot.MDA<-->Home_Linux;
        Home_Dovecot.MDA-->Home_E-Mail.MUA;

This document will use:

  • your-domain.org : The E-Mail relay we are setting up in this document (VPS_Postfix)
  • your-domain.com : The E-Mail server set up in the prior document (Home_Postfix)

Recommendation:

  • Bring up a host on a VPS somewhere, and use the Setup Server instructions to secure it.
  • Get a domain name and certificate set up and working.
  • Install postfix/dovecot combo and relay to your main E-Mail host. Test sending out mail too.
  • Install and configure SPF Policy Agent. Test it and make sure all is well.
  • Install and configure OpenDKIM to sign your E-Mails. Ensure it works well.

VPS

Create a VPS Cloud Server; less than $10/month buys 1 GB RAM, 30 GB disk, 1 CPU, and 2 TB of bandwidth per month.

Criteria:

  1. IP address reputation. You may have to test drive a server to learn which IP address you get. Lookup tools:
  2. Working control panel. Check online reviews, or test drive. Make sure every button works. Extra points if they allow firewall changes and creating a PTR record.
  3. Current OS release. Check references below.
  4. Support availability and response agreements. Test creating a support ticket. Check references below.
  5. Low cost, check references below.

Secure VPS on your-domain.org

Change your root password IMMEDIATELY. Hackers know the providers' password-generation algorithms and exploit the window between creation time and change time.

Change the ssh port from 22 (File: /etc/ssh/sshd_config). Hackers know that port and constantly attack it.
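The port change itself is a one-line edit to sshd_config. A sketch of that edit, practiced here on a scratch copy so it is safe to run anywhere (2222 is only an example port; after editing the real file, allow the new port in the firewall and restart sshd before logging out):

```shell
#!/bin/bash
# Practice the sshd_config edit on a scratch copy.
cfg=$(mktemp)
printf '#Port 22\nPermitRootLogin prohibit-password\n' > "$cfg"
# Uncomment the Port line and set the new port (2222 is just an example).
sed -i 's/^#\?Port .*/Port 2222/' "$cfg"
newport=$(grep '^Port' "$cfg")
echo "$newport"                  # Port 2222
rm -f "$cfg"
```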

Check Network

Click on this heading, then refer to these instructions, then come back here with the Back Arrow on the Browser:

Update Package Repository

Click on this heading, then refer to these instructions, then come back here with the Back Arrow on the Browser:

Create New User(s)

The Linux mail user can be <name>@example.com, to keep things clear.

Click on this heading, then refer to these instructions, then come back here with the Back Arrow on the Browser:

Set Host and Domain

Click on this heading, then refer to these instructions, then come back here with the Back Arrow on the Browser:

Set Date and Time

Click on this heading, then refer to these instructions, then come back here with the Back Arrow on the Browser:

Monitor crack attempts

Install firewall and fail2ban on your-domain.org

Be sure the system firewall is installed and also the "Block Bad Actors" firewall.sh script.

Click on this heading, then refer to these instructions, then come back here with the Back Arrow on the Browser:

Add rules for E-Mail smtp and smtps.

  • Debian:
$ sudo apt-get install fail2ban mailutils
  • RedHat:
$ sudo dnf install fail2ban mailx

Make sure firewall rules open port smtp(25) and smtps(465):

  • Debian
$ sudo ufw allow 25
$ sudo ufw allow 465
$ sudo ufw status numbered

     To                         Action      From
     --                         ------      ----
[ 1] 22                         ALLOW IN    Anywhere                  
[ 2] 25                         ALLOW IN    Anywhere                  
[ 3] 465                        ALLOW IN    Anywhere                  
[ 4] 22 (v6)                    ALLOW IN    Anywhere (v6)             
[ 5] 25 (v6)                    ALLOW IN    Anywhere (v6)  

  • RedHat
$ sudo firewall-cmd --add-service=smtp
 success
$ sudo firewall-cmd --add-service=smtps
 success
$ sudo firewall-cmd --list-services
  smtp smtps ssh
$ firewall-cmd --runtime-to-permanent
 success
$ firewall-cmd --reload
 success

Configure fail2ban

Click on this heading, then refer to these instructions, then come back here with the Back Arrow on the Browser:

Secure postfix

File: /etc/fail2ban/jail.d/postfix.local

[postfix]
enabled  = true
maxretry = 3
bantime = 1d
port     = smtp,465,submission

[postfix-sasl]
enabled  = true
maxretry = 3
bantime = 1d
port     = smtp,465,submission,imap,imaps,pop3,pop3s

File: /etc/fail2ban/jail.d/dovecot.local

[dovecot]
enabled = true
port    = pop3,pop3s,imap,imaps,submission,465,sieve
logpath = /var/log/maillog
findtime = 1200
bantime = 1d
Secure ssh

File: /etc/fail2ban/jail.d/sshd.local

[sshd]
port = XXXX
enabled = true
maxretry = 3
bantime = 1d

Log Monitors on your-domain.org

Install Logwatch

Click on this heading, then refer to these instructions, then come back here with the Back Arrow on the Browser:

Install Logcheck:

Click on this heading, then refer to these instructions, then come back here with the Back Arrow on the Browser:

LogWatcher

Create

Put this script in the /root directory, install the dependencies listed in its header, and build the cloned retail utility so that /root/retail/retail exists.

#!/bin/bash
##############################################################################
#
# File: logwatcher.sh
#
# Purpose: Watch for interesting new things in log files and e-mail them
#
# Dependencies: 
#  * Debian:
#   apt-get install git clang zlib1g-dev
#  * RedHat:
#   dnf install git clang zlib-devel 
#
#   git clone https://github.com/mbucc/retail
#
# Author     Date     Description
# ---------- -------- --------------------------------------------------------
# D. Cohoon  Feb-2023 Created
##############################################################################
MAILTO=root
DIR=/root
OS=$(/usr/bin/hostnamectl|/usr/bin/grep 'Operating System'|/usr/bin/cut -d: -f2|/usr/bin/awk '{print $1}')
cd ${DIR}
#--------------
log_check () {
  OFFSET=${1}
  LOGFILE=${2}
  FILTER=${3}
  OUTPUT=$(/usr/bin/mktemp)
  /root/retail/retail -o ${OFFSET} ${LOGFILE} | ${FILTER} >${OUTPUT}
  if [ -s ${OUTPUT} ]; then
    /bin/cat ${OUTPUT} | /usr/bin/mail -s "Logwatcher.sh: ${OFFSET}" ${MAILTO} 2>/dev/null
  fi
  rm -rf ${OUTPUT}
}
#
#--------------
case ${OS} in
    AlmaLinux) 
        FAIL2BAN_LOG=/var/log/fail2ban.log
        POSTFIX_LOG=/var/log/maillog
        AUTH_LOG=/var/log/secure 
        ;;
    Ubuntu|Debian) 
        FAIL2BAN_LOG=/var/log/fail2ban.log
        POSTFIX_LOG=/var/log/syslog
        AUTH_LOG=/var/log/auth.log
        ;;
esac
#           Offset  Log File               Filter
#          -------- ---------------------  -------------------------
log_check .fail2ban "${FAIL2BAN_LOG}"      ${DIR}/filter_fail2ban.sh
#
log_check .postfix  "${POSTFIX_LOG}"       ${DIR}/filter_postfix.sh
#
log_check .sshd     "${AUTH_LOG}"          ${DIR}/filter_ssh.sh
#
log_check .dovecot  "${AUTH_LOG}"          ${DIR}/filter_dovecot.sh
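The retail utility this script depends on does one simple thing: remember a byte offset per log file and print only what was appended since the last run. A rough shell emulation of that idea against a scratch log (the real script relies on retail's offset files, e.g. .fail2ban, instead):

```shell
#!/bin/bash
# Emulate retail: print only bytes appended since the saved offset.
log=$(mktemp); offset_file=$(mktemp)
echo 0 > "$offset_file"

new_lines() {
  local off; off=$(cat "$offset_file")
  tail -c +"$((off + 1))" "$log"          # bytes after the saved offset
  wc -c < "$log" > "$offset_file"         # remember the new end of file
}

echo "old entry" >> "$log"
first=$(new_lines)                        # sees "old entry"
echo "new entry" >> "$log"
second=$(new_lines)                       # sees only what was appended
echo "$second"                            # new entry
rm -f "$log" "$offset_file"
```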

Schedule

# cat /etc/cron.d/logwatcher 
# /etc/cron.d/logwatcher: crontab entries for the logwatcher.sh script

PATH=/usr/local/sbin:/usr/local/bin:/sbin:/bin:/usr/sbin:/usr/bin
MAILTO=root

@reboot         root   /root/logwatcher.sh 
2 * * * *       root   /root/logwatcher.sh 

# EOF

Filters

File: filter_fail2ban.sh

#!/bin/bash
/usr/bin/grep "Ban"

File: filter_postfix.sh

#!/bin/bash
/usr/bin/grep "postfix"|/usr/bin/grep 'disconnect from unknown'

File: filter_ssh.sh

#!/bin/bash
/usr/bin/grep "ssh" | /usr/bin/egrep "Failed|invalid format"

File: filter_dovecot.sh

#!/bin/bash
/usr/bin/grep "dovecot" | /usr/bin/grep "authentication failure"

Make the Scripts Executable

$ chmod 755 *.sh

Reference:

Apache install (enable) on your-domain.org

Click on this heading, then refer to these instructions, then come back here with the Back Arrow on the Browser:

Test it on http://<domain.name>

Add SSL

  • Debian
$ sudo ufw allow 443/tcp
  • RedHat
$ sudo firewall-cmd --add-service=https
  success
$ sudo firewall-cmd --list-services
  http https smtp smtps ssh
$ sudo firewall-cmd --runtime-to-permanent
  success
$ sudo firewall-cmd --reload
  success

Change the default landing page to blank

File: /var/www/html/index.html

<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd">
<html xmlns="http://www.w3.org/1999/xhtml">
  <head>
  </head>
  <body>
    <div class="main_page">
    </div>
  </body>
</html>

DNS Registrar (your-domain.org)

Register name.

https://godaddy.com https://www.namecheap.com/ https://www.domain.com/

DNS Updates

Create Advanced DNS records, using IP Address of VPS server:

HOST

Type       Host        Value               TTL
---------- ----------- ------------------- ----
A Record   @           <IP Address>        Auto
A Record   mail        <IP Address>        Auto
TXT Record @           v=spf1 mx -all      Auto

MAIL

Type       Host        Value               Priority TTL
---------- ----------- ------------------- -------- ----
MX Record  @           <Domain Name>       10       Auto

Lookup status via dig -t <Type> <Domain>

$ dig -t a +short <Domain Name>
123.123.123.123
$ dig -t mx +short <Domain Name>
10 <Domain Name>.

To install dig:

  • Debian:
$ sudo apt-get install dnsutils
  • RedHat:
$ sudo dnf install bind-utils

Create PTR Record

Using the Control Panel on the VPS, update the PTR record. If the panel does not allow this, create a support ticket and the provider will do it for you; the restriction is just there to limit spammers.

Test your PTR record. Using the IP Address, it should show your domain, proving you have control over the IP Address.

dig -x 1.2.3.4
;; ANSWER SECTION:
1.2.3.4.in-addr.arpa. 86400 IN	PTR	mail.<Domain Name>.
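dig -x is shorthand for a PTR query on the reversed-octet name under in-addr.arpa. A small sketch of the name it builds behind the scenes:

```shell
#!/bin/bash
# Build the PTR query name that 'dig -x 1.2.3.4' resolves under the hood.
ip=1.2.3.4
ptr=$(echo "$ip" | awk -F. '{print $4"."$3"."$2"."$1".in-addr.arpa"}')
echo "$ptr"                      # 4.3.2.1.in-addr.arpa
```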

certbot - Certificate Management for Let's Encrypt

To create and maintain LetsEncrypt certificates using Apache web server

Install

  • Debian
$ sudo systemctl unmask apache2
$ sudo systemctl enable apache2
$ sudo systemctl start apache2
$ sudo apt-get install certbot
$ sudo apt-get install python3-certbot-apache
  • RedHat
$ sudo systemctl unmask httpd
$ sudo systemctl enable httpd
$ sudo systemctl start httpd
$ sudo dnf install certbot
$ sudo dnf install python3-certbot-apache
$ sudo certbot  plugins

Saving debug log to /var/log/letsencrypt/letsencrypt.log

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
* apache
Description: Apache Web Server plugin
Interfaces: IAuthenticator, IInstaller, IPlugin
Entry point: apache = certbot_apache._internal.entrypoint:ENTRYPOINT

* standalone
Description: Spin up a temporary webserver
Interfaces: IAuthenticator, IPlugin
Entry point: standalone = certbot._internal.plugins.standalone:Authenticator

* webroot
Description: Place files in webroot directory
Interfaces: IAuthenticator, IPlugin
Entry point: webroot = certbot._internal.plugins.webroot:Authenticator
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -

Configure

Be sure to get certificates for the domain and all subdomains (hosts)

$ sudo certbot --apache -d your-domain.org -d mail.your-domain.org
Saving debug log to /var/log/letsencrypt/letsencrypt.log
Plugins selected: Authenticator apache, Installer apache
Enter email address (used for urgent renewal and security notices)
 (Enter 'c' to cancel): you@your-domain.org

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Please read the Terms of Service at
https://letsencrypt.org/documents/LE-SA-v1.3-September-21-2022.pdf. You must
agree in order to register with the ACME server. Do you agree?
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
(Y)es/(N)o: y

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Would you be willing, once your first certificate is successfully issued, to
share your email address with the Electronic Frontier Foundation, a founding
partner of the Let's Encrypt project and the non-profit organization that
develops Certbot? We'd like to send you email about our work encrypting the web,
EFF news, campaigns, and ways to support digital freedom.
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
(Y)es/(N)o: y
Account registered.
No names were found in your configuration files. Please enter in your domain
name(s) (comma and/or space separated)  (Enter 'c' to cancel): your-domain.org
Requesting a certificate for your-domain.org
Performing the following challenges:
http-01 challenge for your-domain.org
Enabled Apache rewrite module
Waiting for verification...
Cleaning up challenges
Created an SSL vhost at /etc/apache2/sites-available/000-default-le-ssl.conf
Enabled Apache socache_shmcb module
Enabled Apache ssl module
Deploying Certificate to VirtualHost /etc/apache2/sites-available/000-default-le-ssl.conf
Enabling available site: /etc/apache2/sites-available/000-default-le-ssl.conf
Enabled Apache rewrite module
Redirecting vhost in /etc/apache2/sites-enabled/000-default.conf to ssl vhost in /etc/apache2/sites-available/000-default-le-ssl.conf

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Congratulations! You have successfully enabled https://your-domain.org
- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
Subscribe to the EFF mailing list (email: you@your-domain.org).

IMPORTANT NOTES:
 - Congratulations! Your certificate and chain have been saved at:
   /etc/letsencrypt/live/your-domain.org/fullchain.pem
   Your key file has been saved at:
   /etc/letsencrypt/live/your-domain.org/privkey.pem
   Your certificate will expire on 2023-05-08. To obtain a new or
   tweaked version of this certificate in the future, simply run
   certbot again with the "certonly" option. To non-interactively
   renew *all* of your certificates, run "certbot renew"
 - If you like Certbot, please consider supporting our work by:

   Donating to ISRG / Let's Encrypt:   https://letsencrypt.org/donate
   Donating to EFF:                    https://eff.org/donate-le

Apache (disable)

Apache is only needed while certbot issues or renews certificates, so disable it the rest of the time to reduce the attack surface.

  • Debian
$ sudo systemctl stop apache2
$ sudo systemctl disable apache2
$ sudo systemctl mask apache2
$ sudo ufw status numbered
# -> ufw delete <n> for ports 80 & 443
  • RedHat
$ sudo systemctl stop httpd
$ sudo systemctl disable httpd
$ sudo systemctl mask httpd
$ sudo firewall-cmd --remove-service=https
  success
$ sudo firewall-cmd --list-services
  http smtp smtps ssh
$ sudo firewall-cmd --remove-service=http
  success
$ sudo firewall-cmd --list-services
  smtp smtps ssh
$ sudo firewall-cmd --runtime-to-permanent
  success

When the Let's Encrypt Certificate expires, you will need to enable it again and open up ports.

postfix - E-Mail Transport Agent

Install postfix MTA and remove exim4, if it is installed. Postfix is more mature and full-featured.

TLS http://www.postfix.org/TLS_README.html

Install on your-domain.org

  • Debian:
$ sudo apt-get remove exim4-base
$ sudo apt-get install postfix
  • RedHat:
$ sudo dnf remove exim
$ sudo dnf install postfix

Configure on your-domain.org

  • smtp -> outgoing mail
  • smtpd <- incoming mail (daemon)

Complete main.cf file contents

~
mydomain = your-domain.org
myorigin = $mydomain
#myorigin = /etc/mailname
~
Also, on RedHat the log is in /var/log/maillog.

File main.cf

#.........................................................................
#
# * smtpd <- incoming mail (daemon)
#
smtpd_tls_cert_file = /etc/letsencrypt/live/your-domain.org/fullchain.pem
smtpd_tls_key_file  = /etc/letsencrypt/live/your-domain.org/privkey.pem
smtpd_use_tls = yes
smtpd_tls_security_level=may
smtpd_tls_auth_only = yes
smtpd_tls_loglevel = 1
smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
#
# Enforce TLSv1.3 or TLSv1.2
#  smtpd_tls_mandatory_protocols = !SSLv2, !SSLv3, !TLSv1, !TLSv1.1
#  smtpd_tls_protocols = !SSLv2, !SSLv3, !TLSv1, !TLSv1.1
#  smtp_tls_mandatory_protocols = !SSLv2, !SSLv3, !TLSv1, !TLSv1.1
#  smtp_tls_protocols = !SSLv2, !SSLv3, !TLSv1, !TLSv1.1
#
# Sending relays allowed only if:
# -> you are on mynetworks
# -> you are logged in
#  all others are deferred (temp error 4.x.x), 
#   like a permanent failure that won't go away
#   until 1) added to mynetworks, or 2) sasl authenticated
smtpd_relay_restrictions = permit_mynetworks permit_sasl_authenticated defer_unauth_destination

# Sending sasl auth from Dovecot
smtpd_sasl_type = dovecot
smtpd_sasl_path = private/auth
smtpd_sasl_auth_enable = yes

# I am a relay to the world
relayhost = 

#.........................................................................
#
# * smtp  -> outgoing mail 
#
smtp_tls_CApath=/etc/ssl/certs
smtp_tls_security_level=may
smtp_tls_loglevel = 1
smtp_tls_session_cache_database = btree:${data_directory}/smtp_scache

# Receiving transport table specifies a mapping from email addresses to 
#  message delivery  transports  and  next-hop  destinations. 
# (relay forwarding back out)
transport_maps = hash:/etc/postfix/transport
relay_domains = your-domain.com

# .org host
myhostname = mail.your-domain.org
alias_maps = hash:/etc/aliases
alias_database = hash:/etc/aliases
myorigin = /etc/mailname

# Accept final destination mail for mail.your-domain.org, your-domain.org, or locals
mydestination = $myhostname, your-domain.org, localhost.your-domain.org, localhost
#.........................................................................

# Allow my host and the .com host to send (relay out)
mynetworks = 127.0.0.0/8 [::ffff:127.0.0.0]/104 [::1]/128  5.6.7.8
inet_interfaces = all
inet_protocols = ipv4

# Message restrictions
# (mailbox_size_limit 0 means unlimited; message_size_limit 52428800 bytes = 50 MiB)
mailbox_size_limit = 0
message_size_limit = 52428800
header_size_limit = 4096000
recipient_delimiter = +

#compatibility_level = 2
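The recipient_delimiter = + setting above enables plus addressing: anything between + and @ is an extension that is stripped before delivery, so you+lists@your-domain.org still reaches you's mailbox. A sketch of that rewrite in plain string handling:

```shell
#!/bin/bash
# Strip the +extension that recipient_delimiter = + allows in addresses.
addr='you+lists@your-domain.org'
local_part=${addr%@*}            # you+lists
domain=${addr#*@}                # your-domain.org
mailbox="${local_part%%+*}@${domain}"
echo "$mailbox"                  # you@your-domain.org
```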

Complete relay contents of master.cf

No spaces around the = sign in the -o option settings.

File: master.cf

~
# ==========================================================================
# service type  private unpriv  chroot  wakeup  maxproc command + args
#               (yes)   (yes)   (no)    (never) (100)
# ==========================================================================
smtp      inet  n       -       y       -       -       smtpd
smtps     inet  n       -       y       -       -       smtpd
    -o syslog_name=postfix/smtps
    -o smtpd_tls_wrappermode=yes
    -o smtpd_sasl_auth_enable=yes
    -o milter_macro_daemon_name=ORIGINATING
  -o smtpd_relay_restrictions=permit_sasl_authenticated,reject
  -o smtpd_recipient_restrictions=permit_mynetworks,permit_sasl_authenticated,reject
  -o smtpd_sasl_type=dovecot
  -o smtpd_sasl_path=private/auth
# The preceding 4 lines require SSL login for relay to internet (.com -> .org -> internet)
#  need cert for mail.your-domain.org, your-domain.org; 
#  and sasl_passwd for mail.your-domain.org unix login on www.your-domain.com
~
:wq

Reload Config

$ sudo postfix reload

Definitions of master fields

  • service

    This refers to the name in /etc/services and matches a name to a port number.

  • type

    The transport type: inet for a network socket, unix for a local (Unix-domain) socket, or fifo for a named pipe on disk that echoes input as output; the latter two are used for local communication only.

  • private

    Access to some components is restricted to the Postfix system itself. This column is marked with a y for private access (the default) or an n for public access. inet components must be marked n for public access, since network sockets are necessarily available to other processes.

  • unpriv

    Postfix components run with the least amount of privilege required to accomplish their tasks. They set their identity to that of the unprivileged account specified by the mail_owner parameter. The default installation uses postfix. The default value of y for this column indicates that the service runs under the normal unprivileged account. Services that require root privileges are marked with n.

  • chroot

    Many components can be chrooted for additional security. The chroot location is specified in the queue_directory parameter in main.cf. The default is for a service to run in a chroot environment; however, the normal installation marks all components with an n so they are not chrooted when they run. Chrooting a service adds a level of complexity that you should thoroughly understand before taking advantage of the added security. See the Postfix documentation for more information on running Postfix services in a chroot environment.

  • wakeup

    Some components require a wake-up timer to kick them into action at the specified interval. The pickup daemon is one example. At its default setting of 60 seconds, the master daemon wakes it up every minute to see if any new messages have arrived in the maildrop queue. The other services that require a wake-up are the qmgr and flush daemons. A question mark character (?) can be added at the end of the time to indicate that a wake-up event should be sent only if the component is being used. A 0 for the time interval indicates that no wake-up is required. The default is 0, since only the three components mentioned require a wake-up. The values as they are set in the Postfix distribution should work for almost all situations. Other services should not have wakeup enabled.

  • maxproc

    Limits the number of processes that can be invoked simultaneously. If unspecified here, the value comes from the parameter default_process_limit in main.cf, which is set to 100 by default. A setting of 0 means no process limit. You may want to adjust maxproc settings if you run Postfix on a system with limited resources or you want to optimize different aspects of the system.

  • command

    The actual command used to execute a service is listed in the final column. The command is specified with no path information, because it is expected to be in the Postfix daemon directory specified by the daemon_directory parameter in main.cf. By default the directory is /usr/libexec/postfix. All of the Postfix commands can be specified with one or more -v options to turn on increasingly more verbose logging information, which can be helpful if you must troubleshoot a problem. You can also enable information for a debugging program with the -D option. See the DEBUG_README file that comes with the Postfix distribution for more information on debugging if necessary.

Proxy Postfix using Transport Maps for Incoming Mails on your-domain.org

A postfix transport(5) table allows one domain to hand incoming SMTP messages off to another domain. For instance, the .org domain will forward all .com messages to the .com domain automatically.

Repeated postfix file contents from above to clarify this is for transporting from .org to .com

Configure

File: /etc/postfix/main.cf:

~
    transport_maps = hash:/etc/postfix/transport
~

Set postfix to accept mail for the .com addresses.

Repeated postfix file contents from above to clarify this is for transporting from .org to .com

File: /etc/postfix/main.cf:

~
    relay_domains = example.com
~

Transport Definitions

Create a transport table to redirect all mail for one domain as well as mail for "user@mydomain.org" to another domain. You can also specify another port, to bypass port 25 restrictions.

File: /etc/postfix/transport

    example.com          smtp:[example.com]:10025
    user@mydomain.org   smtp:[example.com]:10025
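postmap compiles this table into a hash database, but the lookup postfix performs is easy to picture: try the full recipient address first, then fall back to its domain, and return the transport column. A rough illustration with awk on a scratch copy of the table:

```shell
#!/bin/bash
# Illustrate the lookup postfix performs against the transport table.
table=$(mktemp)
cat > "$table" <<'EOF'
example.com          smtp:[example.com]:10025
user@mydomain.org    smtp:[example.com]:10025
EOF

lookup() {  # try the full address, then fall back to the domain part
  awk -v k="$1" '$1 == k { print $2; found = 1 } END { exit !found }' "$table" \
    || awk -v k="${1#*@}" '$1 == k { print $2 }' "$table"
}

result=$(lookup someone@example.com)
echo "$result"                   # smtp:[example.com]:10025
rm -f "$table"
```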

Make it a database

Create a postmap database from the flat ASCII file.

$ sudo postmap /etc/postfix/transport

Reload Config

Finally, reload the postfix configuration files.

$ sudo postfix reload

Reference:

Proxy Postfix Relay for SMTP Outgoing Mails

To send mail from a non-standard port, use the .com domain to relay through the .org domain: leave the .org host's relayhost empty (it delivers directly to the internet) and point the .com host's relayhost at the .org host.

Change MX records in no-ip.com from mail1.no-ip.com to mail.your-domain.org

Log into noip.com, go to DNS and select *.your-domain.com. At the bottom of the page you can change the mail1.no-ip.com record to mail.your-domain.org.

Don't forget to save and lookup the new value using dig -t MX your-domain.com.

Change TXT spf record in no-ip.com from noip, to:

For your-domain.com domain:

Type       Host  Value                                     Priority TTL
---------- ----- ----------------------------------------- -------- ----
MX Record  @     your-domain.org                           10       Auto
TXT Record @     v=spf1 mx a include:your-domain.org ~all           Auto

SPF tools: https://www.dynu.com/en-US/NetworkTools/SPFGenerator# https://mxtoolbox.com/spf.aspx

  • Try not to add more than required; extra include: mechanisms can create lookup loops or exceed SPF's limit of 10 DNS lookups.

On the DNS for .com, create a TXT record like this

"v=spf1 mx a include:<Domain of .org> ~all"

Basically it says: for mail claiming to come from you@your-domain.com, validate the sending IP against the MX and A records of your-domain.com, plus everything included from your-domain.org. When an SPF record ends with ~all (the softfail qualifier), receiving servers typically accept messages from senders that aren't in your SPF record, but mark them as suspicious.
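The qualifier in front of all decides what receivers should do with senders your record does not match. A small sketch classifying the common qualifiers, using the record string recommended above:

```shell
#!/bin/bash
# Classify the "all" qualifier at the end of an SPF record.
spf='v=spf1 mx a include:your-domain.org ~all'
case "$spf" in
  *'-all')  verdict=hardfail ;;  # reject non-matching senders
  *'~all')  verdict=softfail ;;  # accept but mark as suspicious
  *'?all')  verdict=neutral  ;;  # express no opinion
  *'+all')  verdict=pass     ;;  # accept everything (never use this)
  *)        verdict=none     ;;
esac
echo "$verdict"                  # softfail
```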

Reference: http://www.open-spf.org/FAQ/Common_mistakes/#list-domains

Test e-mail results from google:

~
ARC-Authentication-Results: i=1; mx.google.com;
       spf=pass (google.com: domain of you@your-domain.com designates 1.2.3.4 as permitted sender) smtp.mailfrom=you@your-domain.com
Received: from mail.your-domain.org (mail.your-domain.org. [1.2.3.4])
        by mx.google.com with ESMTPS id o6-20020a0dcc06000000b005363cf948basi2222119ywd.61.2023.02.25.05.45.43
        for <you.name@gmail.com>
        (version=TLS1_3 cipher=TLS_AES_256_GCM_SHA384 bits=256/256);
        Sat, 25 Feb 2023 05:45:44 -0800 (PST)
Received-SPF: pass (google.com: domain of you@your-domain.com designates 1.2.3.4 as permitted sender) client-ip=1.2.3.4;
Authentication-Results: mx.google.com;
       spf=pass (google.com: domain of you@your-domain.com designates 1.2.3.4 as permitted sender) smtp.mailfrom=you@your-domain.com
Received: from www.your-domain.com (unknown [5.6.7.8])
	by mail.your-domain.org (Postfix) with ESMTPSA id 9071FD3C93
	for <you.name@gmail.com>; Sat, 25 Feb 2023 13:45:43 +0000 (UTC)
~

Change your-domain.com postfix/main.cf relay to your-domain.org

File: /etc/postfix/main.cf

relayhost = [mail.your-domain.org]:465

Add your-domain.org unix login on host your-domain.com

File: /etc/postfix/sasl/sasl_passwd

[mail.your-domain.org]:465 you:********

Create a postfix DB from flat file

$ sudo postmap /etc/postfix/sasl/sasl_passwd
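Since sasl_passwd holds a password in plain text, it is common practice to make it readable by root only. Sketched here on a scratch file rather than the real /etc/postfix/sasl/sasl_passwd:

```shell
#!/bin/bash
# Restrict a credentials file to its owner, as is usual for sasl_passwd.
f=$(mktemp)
echo '[mail.your-domain.org]:465 you:********' > "$f"
chmod 600 "$f"
mode=$(stat -c '%a' "$f")
echo "$mode"                     # 600
rm -f "$f"
```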

Enable TLS on your-domain.org

TLS requires certificate(s) and SASL login verification.

The logon verification is done by Dovecot using the passdb pam driver[1], while the postfix SASL verification uses a unix socket, /var/spool/postfix/private/auth, to talk to Dovecot.

The encryption is enabled by Let's Encrypt certificates (one for the domain, another for each subdomain), over the smtps (port 465) network socket to the client. The master.cf file -o settings override the main.cf settings, ensuring a connection is only accepted if:

  • TLS certificate is working and accepted by both parties, on port 465
  • SASL logon passes, using postfix -> Dovecot -> pam login.
OR
  • The incoming IP address is part of mynetwork, on port 25

Pluggable Authentication Module (PAM) is the Linux login method for users

Two Steps:

  1. Install dovecot
  • Debian:
$ sudo apt-get install dovecot-core
  • RedHat:
$ sudo dnf install dovecot

Add login to dovecot auth

File: /etc/dovecot/conf.d/10-auth.conf

~
     auth_mechanisms = plain login
~
:wq

Set the certificates to the letsencrypt ones, require SSL, and prefer the server's cipher order

File: /etc/dovecot/conf.d/10-ssl.conf

~
    ssl = required
~
    ssl_cert = </etc/letsencrypt/live/your-domain.org/fullchain.pem
    ssl_key = </etc/letsencrypt/live/your-domain.org/privkey.pem
~
    # Comment out default certs
~
    ssl_prefer_server_ciphers = yes
~
:wq

Set up a local unix socket for dovecot and postfix to communicate this authorization data, inside the service auth { ... } block.

File: /etc/dovecot/conf.d/10-master.conf

~
  unix_listener /var/spool/postfix/private/auth {
    group = postfix
    mode = 0660
    user = postfix
  }
~
:wq
  2. Set the receiving certificates in Postfix, require TLS using Dovecot.

Repeated postfix file contents from above to clarify this is for relaying TLS on .org to the Internet

File: /etc/postfix/main.cf

~
#smtpd_tls_cert_file=/etc/ssl/certs/ssl-cert-snakeoil.pem
#smtpd_tls_key_file=/etc/ssl/private/ssl-cert-snakeoil.key
smtpd_tls_cert_file = /etc/letsencrypt/live/your-domain.org/fullchain.pem
smtpd_tls_key_file  = /etc/letsencrypt/live/your-domain.org/privkey.pem
smtpd_use_tls = yes
smtpd_tls_session_cache_database = btree:${data_directory}/smtpd_scache
smtpd_tls_security_level=may

~

# sasl auth from Dovecot
smtpd_sasl_type = dovecot
smtpd_sasl_path = private/auth
smtpd_sasl_auth_enable = yes

~
:wq

Repeated postfix file contents from above to clarify this is for relaying TLS on .org to the Internet

File: /etc/postfix/master.cf

~
# ==========================================================================
# service type  private unpriv  chroot  wakeup  maxproc command + args
#               (yes)   (yes)   (no)    (never) (100)
# ==========================================================================
smtp      inet  n       -       y       -       -       smtpd
smtps     inet  n       -       y       -       -       smtpd
    -o syslog_name=postfix/smtps
    -o smtpd_tls_wrappermode=yes
    -o smtpd_sasl_auth_enable=yes
    -o milter_macro_daemon_name=ORIGINATING
  -o smtpd_relay_restrictions=permit_sasl_authenticated,reject
  -o smtpd_recipient_restrictions=permit_mynetworks,permit_sasl_authenticated,reject
  -o smtpd_sasl_type=dovecot
  -o smtpd_sasl_path=private/auth
# The preceding 4 lines require SSL login for relay to internet (.com -> .org -> internet)
#  need cert for mail.your-domain.org, your-domain.org; 
#  and sasl_passwd for mail.your-domain.org unix login on www.your-domain.com
~
:wq
  • Make sure port 465 (smtps) is opened by Postfix

Open on localhost

# nmap -sT -O localhost
Starting Nmap 7.70 ( https://nmap.org ) at 2023-02-19 16:46 EST
Nmap scan report for localhost (127.0.0.1)
Host is up (0.00019s latency).
Other addresses for localhost (not scanned): ::1
Not shown: 992 closed ports
PORT    STATE SERVICE
22/tcp  open  ssh   <-
25/tcp  open  smtp  <-
80/tcp  open  http  <-
110/tcp open  pop3
143/tcp open  imap
443/tcp open  https <-
465/tcp open  smtps <-
993/tcp open  imaps
995/tcp open  pop3s

RedHat 9's /etc/services lists port 465 primarily as urd, so I swapped the names to make smtps the primary name and urd the alias.

# grep smtps /etc/services
smtps           465/tcp           urd   # URL Rendesvous Directory for SSM / SMTP over SSL (TLS)

Now ss looks better:

# ss -ltap
State        Recv-Q    Send-Q        Local Address:Port            Peer Address:Port     Process                                                   
LISTEN       0         128                 0.0.0.0:ssh                  0.0.0.0:*         users:(("sshd",pid=1314,fd=3))                           
LISTEN       0         100                 0.0.0.0:smtp                 0.0.0.0:*         users:(("master",pid=23736,fd=13))                       
LISTEN       0         100                 0.0.0.0:imaps                0.0.0.0:*         users:(("dovecot",pid=23599,fd=41))                      
LISTEN       0         100                 0.0.0.0:pop3s                0.0.0.0:*         users:(("dovecot",pid=23599,fd=23))                      
LISTEN       0         100                 0.0.0.0:pop3                 0.0.0.0:*         users:(("dovecot",pid=23599,fd=21))                      
LISTEN       0         100                 0.0.0.0:imap                 0.0.0.0:*         users:(("dovecot",pid=23599,fd=39))                      
LISTEN       0         100                 0.0.0.0:smtps                0.0.0.0:*         users:(("master",pid=23736,fd=17))                       
ESTAB        0         52            1.2.3.4:ssh                   5.6.7.8:44958     users:(("sshd",pid=2402,fd=4),("sshd",pid=2278,fd=4))    
LISTEN       0         128                    [::]:ssh                     [::]:*         users:(("sshd",pid=1314,fd=4))                           
LISTEN       0         100                    [::]:imaps                   [::]:*         users:(("dovecot",pid=23599,fd=42))                      
LISTEN       0         100                    [::]:pop3s                   [::]:*         users:(("dovecot",pid=23599,fd=24))                      
LISTEN       0         100                    [::]:pop3                    [::]:*         users:(("dovecot",pid=23599,fd=22))                      
LISTEN       0         100                    [::]:imap                    [::]:*         users:(("dovecot",pid=23599,fd=40))   

Open to internet

$ sudo firewall-cmd --list-services
  http https smtp smtps ssh

Clients Mail User Agents (MUA)

These are the programs with the human interface, where everybody reads, sends, deletes, and manages their E-Mail. Names such as: Thunderbird, Evolution, Mutt, Gmail, Outlook, etc.

  • Mail Transport Agents (MTA) only connect here. Do not connect your MUA to this port!

    • SMTP (port 25): This is a restricted relay port, defined on host example.org and controlled by the /etc/postfix/master.cf smtp definition. Only addresses in $mydestination will be accepted, and nothing else will be relayed in. The transport definition will forward to its list of addresses/domains. After a client is authenticated, it will be allowed to relay out on port 25. Process master, forked from postfix, performs this responsibility.

      • -o smtpd_relay_restrictions=permit_sasl_authenticated,reject: means that only authenticated connections can relay out
  • Clients (MUA) connect here to SEND mail only

    • SMTPS (port 465): This is defined on host example.org, and is always an encrypted port, controlled by the /etc/postfix/master.cf smtps definition. All authentications occur here. Process master, forked from postfix, performs this responsibility.

      • -o smtpd_recipient_restrictions=permit_mynetworks,permit_sasl_authenticated,reject: means that only $mynetworks or authenticated connections can submit messages to be sent. MUA will connect using this rule.

      • Authentication is done using /etc/dovecot/conf.d/10-auth.conf auth_mechanism=login definition, over unix pipe unix_listener /var/spool/postfix/private/auth declared in file /etc/dovecot/conf.d/10-master.conf. Linux PAM verification occurs using file /etc/pam.d/dovecot and passdb{driver=pam} in file /etc/dovecot/conf.d/auth-system.conf.ext.

Unix pipe used by postfix for dovecot authentication of clients.

$ sudo ls -l /var/spool/postfix/private/auth
srw-rw---- 1 postfix postfix 0 Feb 10 10:21 /var/spool/postfix/private/auth
  • Clients (MUA) connect here to READ/MANAGE mail only

    • IMAPS (port 993): This is defined on the example.com host Dovecot installation, file /etc/dovecot/conf.d/10-master.conf, section service imap_listener imaps. Process imap, forked from dovecot, performs this responsibility.

    • Authentication is done using /etc/dovecot/conf.d/10-auth.conf auth_mechanism=plain definition, over unix_listener auth_userdb declared in file /etc/dovecot/conf.d/10-master.conf. Linux PAM verification occurs using file /etc/pam.d/dovecot and passdb{driver=pam} in file /etc/dovecot/conf.d/auth-system.conf.ext.

Port 587, aka submission, is purposely omitted because it uses STARTTLS, which upgrades a plain-text connection to an encrypted one after the fact. This opportunistic encryption is deemed less secure and is not recommended here.

MUA Connection Definitions

Read Mail

  • Server: mail.example.com
  • User: <linux user defined on the E-Mail .com host>
  • Password: <linux user password defined on the E-Mail .com host>
  • SSL: ON
  • Port: 993

Send Mail

  • Server: smtp.example.org
  • User: <linux user defined above on the E-Mail .org host>
  • Password: <linux user password defined above on the E-Mail .org host>
  • SSL: ON
  • Port: 465

Debug

If mail queues up to send/resend, you can check and clear the queue

List mail queue
mailq
Look at message
postcat -vq DCE2182DEA
Flush queue
postsuper -d ALL deferred
or
postqueue -f

Cache files (send/receive) are stored in data_directory (/var/lib/postfix) as Berkeley databases
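For a quick health check, the queue listing can be summarized with a short pipeline. Here is a sketch that counts queued messages by matching the queue IDs at the start of each mailq/postqueue -p entry (count_queued is a hypothetical helper; it assumes the classic hex queue-ID format, like DCE2182DEA above):

```shell
# Count messages in the mail queue. Queue IDs are hex strings at the
# start of each entry in `mailq` output; a status flag like * (active)
# or ! (held) may follow the ID.
count_queued() {
  grep -c '^[0-9A-F][0-9A-F]*[*!]\{0,1\}[[:space:]]' "${1:--}"
}

# On a live system: mailq | count_queued
```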

Install a Mail User Agent for ssh

mutt is a nice local mail reader for times when network mail may not work; at least local mail still will.

$ sudo apt-get install mutt
$ sudo dnf install mutt

Install a nice ssh capable log reader

I like lnav. Just run lnav and by default it will read the syslog. Or run lnav /var/log/mail.log.

$ sudo apt-get install lnav
$ sudo dnf install lnav

Install a package marking tool

Debian only: apt-clone will create a nice bundle of all your current packages. Save this off in case you have to re-build your server.

$ sudo apt-get install apt-clone

Certificate Expiration Check

Run this from cron every week or day (for example, /root/cert_expire.sh 2) to E-Mail yourself reminders before a certificate expires.

File: ~/cert_expire.sh

#!/bin/bash
# ----------------------------------------------------------------------
#
# File: cert_expire.sh
#
# Purpose: See what the expiration date is for Let's Encrypt Certificate
#
#
#  s_client : The s_client command implements a generic SSL/TLS client
#              which connects to a remote host using SSL/TLS.
#  -servername $DOM : Set the TLS SNI (Server Name Indication) extension
#                      in the ClientHello message to the given value.
#  -connect $DOM:$PORT : This specifies the host ($DOM) and optional
#                         port ($PORT) to connect to.
#  x509 : Run certificate display and signing utility.
#  -noout : Prevents output of the encoded version of the certificate.
#  -dates : Prints out the start and expiry dates of a TLS or SSL certificate.
#
# Don Cohoon - Jan 2023
# ----------------------------------------------------------------------
#
#
if [ $# -gt 0 ]; then
  A=${1}
else
  /usr/bin/echo "1) E-Mail"
  /usr/bin/echo "2) File"
  /usr/bin/echo "3) Web"
  /usr/bin/echo "4) Local"
  read A
fi
case ${A} in
   1)
	/usr/bin/echo "REMINDER: Restart postfix and dovecot to enable new certs"
	/usr/bin/echo "=> E-Mail Certificate: CTRL-C to exit"
	#/usr/bin/openssl s_client -connect mail.your-domain.org:25 -starttls smtp 2>/dev/null|/usr/bin/openssl x509 -noout -dates
	/usr/bin/openssl s_client -connect mail.your-domain.org:465  2>/dev/null|/usr/bin/openssl x509 -noout -dates
	;;
   2)
	/usr/bin/echo "=> File Certificate"
	sudo /usr/bin/openssl x509 -enddate -noout -in /etc/letsencrypt/live/your-domain.org/fullchain.pem
	;;
   3)
	/usr/bin/echo "REMINDER: Restart apache2 and nginx to enable new certs"
	/usr/bin/echo "=> www.your-domain.org Certificate: CTRL-C to exit"
	/usr/bin/openssl s_client -servername your-domain.org -connect www.your-domain.org:443 2>/dev/null | /usr/bin/openssl x509 -noout -dates
	;;
   4)
	/usr/bin/echo "REMINDER: Restart apache2 and nginx to enable new certs"
	/usr/bin/echo "=> Local Web Certificate: CTRL-C to exit"
	/usr/bin/openssl s_client -connect localhost:443 | /usr/bin/openssl x509 -noout -dates
	;;
esac
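To turn those notBefore/notAfter lines into an at-a-glance number, here is a small sketch that computes days until expiry. It assumes GNU date; NOTAFTER below is a made-up example value, where on a real host it would come from the openssl -dates output above:

```shell
# Days until a certificate expires, given the notAfter line that
# `openssl x509 -noout -dates` prints. The value here is an example;
# substitute the real output from the script above.
NOTAFTER='notAfter=Jun  1 12:00:00 2030 GMT'
expiry=$(date -d "${NOTAFTER#notAfter=}" +%s)   # GNU date parses this format
now=$(date +%s)
echo "Days remaining: $(( (expiry - now) / 86400 ))"
```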

Configuring SPF Policy Agent

We also need to tell our Postfix SMTP server to check the SPF record of incoming emails. This does not help ensure outgoing email delivery, but it does help detect forged incoming emails.

Install required packages:

  • Debian
sudo apt install postfix-policyd-spf-python
  • RedHat
sudo dnf install pypolicyd-spf

Test SPF

If you know the sender, recipient, and client_address, you can test them before turning SPF on in postfix.

policyd-spf requires a blank line as the last line of input

# Python Site Packages = /usr/lib/python3.9/site-packages/
# Source = /usr/lib/python3.9/site-packages/spf_engine/*
/usr/libexec/postfix/policyd-spf <<EOF
 request=smtpd_access_policy
 protocol_state=RCPT
 protocol_name=SMTP
 helo_name=ccpub6
 queue_id=hv8rp02v1sso
 instance=12345.6789
 sender=foo
 recipient=bar
 client_address=1.2.3.4
 client_name=bubba

EOF

Configure SPF

Add the following lines at the end of the file, which tells Postfix to start the SPF policy daemon when it’s starting itself.

  • Debian

File: /etc/postfix/master.cf

policyd-spf  unix  -       n       n       -       0       spawn
    user=policyd-spf argv=/usr/bin/policyd-spf
  • Redhat

File: /etc/postfix/master.cf

policyd-spf  unix  -       n       n       -       0       spawn
    user=nobody argv=/usr/libexec/postfix/policyd-spf

Append the following lines at the end of the file. The first line specifies the Postfix policy agent timeout setting. The remaining lines impose restrictions on incoming email, rejecting unauthorized destinations and checking the SPF record.

File: /etc/postfix/main.cf

policyd-spf_time_limit = 3600
smtpd_recipient_restrictions =
   permit_mynetworks,
   permit_sasl_authenticated,
   reject_unauth_destination,
   check_policy_service unix:private/policyd-spf

Restart Postfix.

sudo systemctl restart postfix

Next time, when you receive an email from a domain that has an SPF record, you can see the SPF check results in the raw email header. The following header indicates the sender sent the email from an authorized host.

Received-SPF: Pass (sender SPF authorized).
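When scanning saved messages, the SPF verdict can be pulled out of that header with a one-line filter. This is a sketch (spf_result is a hypothetical helper; it assumes the Received-SPF header format shown above):

```shell
# Extract the SPF result (Pass, Fail, SoftFail, Neutral, ...) from a
# Received-SPF header line read on stdin.
spf_result() {
  sed -n 's/^Received-SPF:[[:space:]]*\([A-Za-z]*\).*/\1/p'
}

# Example: grep '^Received-SPF:' saved-message.eml | spf_result
```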

Debug SPF

Config file, debug setting:

File: /etc/python-policyd-spf/policyd-spf.conf

~
# level 1 is default
debugLevel = 5
~
:wq
man policyd-spf.conf

check_policy_service Unix pipe:

File: /var/spool/postfix/private/policyd-spf

srw-rw-rw-. 1 postfix postfix 0 Feb 25 14:37 /var/spool/postfix/private/policyd-spf

Block Domains and E-Mail Addresses using Postfix access

The following example uses an indexed file, so that the order of table entries does not matter.

File: /etc/postfix/main.cf:

 smtpd_client_restrictions =
   check_client_access hash:/etc/postfix/access

The example permits access by the client at address 1.2.3.4 but rejects all other clients in 1.2.3.0/24.

File: /etc/postfix/access:

1.2.3   REJECT
1.2.3.4 OK
aol.com REJECT

Create hash map of ascii file, and restart postfix

$ sudo postmap  /etc/postfix/access
$ sudo systemctl restart postfix
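The matching Postfix applies to client addresses can be mimicked in shell for experimentation: the full address is tried first, then successively shorter prefixes, and the first match wins. A sketch (access_lookup is a hypothetical helper; it reads the plain-text access file, not the hashed map):

```shell
# Mimic access(5) client-address matching against the text access file:
# try 1.2.3.4, then 1.2.3, then 1.2, then 1; first match wins.
access_lookup() {
  key=$1 file=$2
  while [ -n "$key" ]; do
    action=$(awk -v k="$key" '$1 == k { print $2; exit }' "$file")
    if [ -n "$action" ]; then printf '%s\n' "$action"; return 0; fi
    case $key in *.*) key=${key%.*} ;; *) key= ;; esac
  done
  printf 'DUNNO\n'   # Postfix's "no decision" result
}

# With the example access file above:
#   access_lookup 1.2.3.4 /etc/postfix/access   -> OK
#   access_lookup 1.2.3.9 /etc/postfix/access   -> REJECT
```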

Reference:

Setting up DKIM

OpenDKIM is an open source implementation of the DKIM (DomainKeys Identified Mail) sender authentication system, originally proposed by the E-mail Signing Technology Group (ESTG) and now standardized by the IETF (RFC 6376). It also includes implementations of the Author Domain Signing Practices protocol (ADSP, RFC 5617), the Vouch By Reference (VBR, RFC 5518) proposed standard, and the experimental Authorized Third Party Signatures protocol (ATPS, RFC 6541).

Install OpenDKIM

  • Debian
sudo apt install opendkim opendkim-tools
  • RedHat
# enable the CodeReady Linux Builder repository. You already have access to it; you just need to enable it.
dnf config-manager --set-enabled crb
# install the EPEL RPM
dnf install epel-release epel-next-release
# install
sudo dnf install opendkim opendkim-tools

Configure OpenDKIM

Add user postfix to group opendkim.

sudo gpasswd -a postfix opendkim

Check the OpenDKIM main configuration file for Syslog and LogWhy.

LogWhy will generate more detailed logs for debugging.

File: /etc/opendkim.conf

Syslog               yes
LogWhy               yes

Set Canonicalization used when signing messages. The recognized values are relaxed and simple as defined by the DKIM specification. The default is simple. The value may include two different canonicalizations separated by a slash ("/") character, in which case the first will be applied to the header and the second to the body.

Set operating Modes. The string is a concatenation of characters that indicate which mode(s) of operation are desired. Valid modes are s (signer) and v (verifier). The default is sv except in test mode (see the opendkim(8) man page) in which case the default is v. When signing mode is enabled, one of the following combinations must also be set: (a) Domain, KeyFile, Selector, no KeyTable, no SigningTable; (b) KeyTable, SigningTable, no Domain, no KeyFile, no Selector; (c) KeyTable, SetupPolicyScript, no Domain, no KeyFile, no Selector.

File: /etc/opendkim.conf

Canonicalization   relaxed/simple
Mode               sv
  • Do not set Domain or SubDomains, they are not required because we will use SigningTable.
  • Do not set Selector, it is not required because we will use SigningTable.
  • Do not set KeyFile, it is not required because we will use KeyTable.

Add restart definitions to the end of the file.

  • AutoRestart (Boolean): Automatically re-start on failures. Use with caution; if the filter fails instantly after it starts, this can cause a tight fork(2) loop.

  • AutoRestartCount (integer): Sets the maximum automatic restart count. After this number of automatic restarts, the filter will give up and terminate. A value of 0 implies no limit; this is the default.

  • AutoRestartRate (string): Sets the maximum automatic restart rate. If the filter begins restarting faster than the rate defined here, it will give up and terminate. This is a string of the form n/t[u] where n is an integer limiting the count of restarts in the given interval and t[u] defines the time interval through which the rate is calculated; t is an integer and u defines the units thus represented ("s" or "S" for seconds, the default; "m" or "M" for minutes; "h" or "H" for hours; "d" or "D" for days). For example, a value of "10/1h" limits the restarts to 10 in one hour. There is no default, meaning restart rate is not limited.

File: /etc/opendkim.conf

AutoRestart       yes
AutoRestartCount  10
AutoRestartRate   10/1H

Reference: http://www.opendkim.org/opendkim.conf.5.html

Map Domains to Keys

The next two configuration items create maps from E-Mail domains in the From: header to the keys used to sign messages.

  • KeyTable (dataset): Gives the location of a file mapping key names to signing keys. If present, overrides any KeyFile setting in the configuration file. The data set named here maps each key name to three values: (a) the name of the domain to use in the signature’s "d=" value; (b) the name of the selector to use in the signature’s "s=" value; and (c) either a private key or a path to a file containing a private key. If the first value consists solely of a percent sign ("%") character, it will be replaced by the apparent domain of the sender when generating a signature. If the third value starts with a slash ("/") character, or "./" or "../", then it is presumed to refer to a file from which the private key should be read, otherwise it is itself a PEM-encoded private key or a base64-encoded DER private key; a "%" in the third value in this case will be replaced by the apparent domain name of the sender. The SigningTable (see below) is used to select records from this table to be used to add signatures based on the message sender.

  • SigningTable (dataset): Defines a table used to select one or more signatures to apply to a message based on the address found in the From: header field. Keys in this table vary depending on the type of table used; values in this data set should include one field that contains a name found in the KeyTable (see above) that identifies which key should be used in generating the signature, and an optional second field naming the signer of the message that will be included in the "i=" tag in the generated signature. Note that the "i=" value will not be included in the signature if it conflicts with the signing domain (the "d=" value).

    • If the first field contains only a "%" character, it will be replaced by the domain found in the From: header field. Similarly, within the optional second field, any "%" character will be replaced by the domain found in the From: header field.

    • If this table specifies a regular expression file ("refile"), then the keys are wildcard patterns that are matched against the address found in the From: header field. Entries are checked in the order in which they appear in the file.

    • For all other database types, the full user@host is checked first, then simply host, then user@.domain (with all superdomains checked in sequence, so "foo.example.com" would first check "user@foo.example.com", then "user@.example.com", then "user@.com"), then .domain, then user@*, and finally *.

    • In any case, only the first match is applied, unless MultipleSignatures is enabled in which case all matches are applied.

File: /etc/opendkim.conf

KeyTable        /etc/opendkim/KeyTable
SigningTable    refile:/etc/opendkim/SigningTable

Note that KeyTable does not take the refile: prefix; wildcards are not allowed there.
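The lookup order described above can be illustrated with a short sketch that prints the candidate keys OpenDKIM would try for a given From: address (candidates is a hypothetical helper for exploration; the real matching happens inside OpenDKIM):

```shell
# Print SigningTable candidate keys for an address, in the documented
# order: user@host, host, user@.superdomains, .domain, user@*, *.
candidates() {
  addr=$1 user=${1%@*} host=${1#*@}
  printf '%s\n' "$addr" "$host"
  d=$host
  while [ "${d#*.}" != "$d" ]; do
    d=${d#*.}
    printf '%s\n' "$user@.$d"
  done
  printf '%s\n' ".$host" "$user@*" "*"
}

# candidates user@foo.example.com prints, in order:
#   user@foo.example.com
#   foo.example.com
#   user@.example.com
#   user@.com
#   .foo.example.com
#   user@*
#   *
```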

Hosts to ignore when verifying signatures

Identifies a set of "external" hosts that may send mail through the server as one of the signing domains without credentials as such. This has the effect of suppressing the "external host (hostname) tried to send mail as (domain)" log messages. Entries in the data set should be of the same form as those of the PeerList option below. The set is empty by default.

File: /etc/opendkim.conf

ExternalIgnoreList      refile:/etc/opendkim/TrustedHosts

A set of internal hosts whose mail should be signed

Identifies a set of internal hosts whose mail should be signed rather than verified. Entries in this data set follow the same form as those of the PeerList option below. If not specified, the default of "127.0.0.1" is applied. Naturally, providing a value here overrides the default, so if mail from 127.0.0.1 should be signed, the list provided here should include that address explicitly.

File: /etc/opendkim.conf

InternalHosts   refile:/etc/opendkim/TrustedHosts

Save and close the file.

Create Signing Table, Key Table and Trusted Hosts File

Check directory structure for OpenDKIM

$ sudo ls -lR /etc/opendkim*
-rw-r--r-- 1 root     root     5346 Mar  2 11:15 /etc/opendkim.conf

/etc/opendkim:
total 12
drwx--x--- 2 opendkim opendkim    6 Feb 24  2022 keys
-rw-r----- 1 opendkim opendkim  339 Feb 24  2022 KeyTable
-rw-r----- 1 opendkim opendkim 1221 Feb 24  2022 SigningTable
-rw-r----- 1 opendkim opendkim  378 Feb 24  2022 TrustedHosts

/etc/opendkim/keys:
total 0

If yours differs, change the owner from root to opendkim and make sure only the opendkim user can read and write the keys directory.

sudo chown -R opendkim:opendkim /etc/opendkim

sudo chmod go-rw /etc/opendkim/keys

Create the signing table

Add two lines to the file.

The first line tells OpenDKIM that if a sender on your server is using a @your-domain.com address, then it should be signed with the private key identified by default._domainkey.your-domain.com.

The second line tells OpenDKIM that mail from your sub-domains will be signed by the same private key.

File: /etc/opendkim/SigningTable

*@your-domain.com    default._domainkey.your-domain.com
*@*.your-domain.com    default._domainkey.your-domain.com

Save and close the file.

Create the key table

Add the following line, defining the location of the private key.

File: /etc/opendkim/KeyTable

default._domainkey.your-domain.com     your-domain.com:default:/etc/opendkim/keys/your-domain.com/default.private

Save and close the file.

Create the trusted hosts file

Tell OpenDKIM that if an email comes from localhost or the same domain, it should only sign the email and not perform DKIM verification.

File: /etc/opendkim/TrustedHosts

127.0.0.1
localhost

.your-domain.com

Save and close the file.

Do not add an asterisk to the domain name like this: *.your-domain.com. Put only a dot before the domain name.

Generate Private/Public Keypair

DKIM is used to sign outgoing messages and verify incoming messages, so we need to generate a private key for signing and a public key for remote verifiers. Only the public key will be published in DNS.

Create a separate folder for the domain.

sudo mkdir /etc/opendkim/keys/your-domain.com

Generate keys using opendkim-genkey tool.

You should rotate in new keys every once in a while to protect against private key leaks. Allow seven days before deleting the old keys so that E-Mails already in transit can still be verified.

sudo opendkim-genkey -b 2048 -d your-domain.com -D /etc/opendkim/keys/your-domain.com -s default -v
opendkim-genkey: generating private key
opendkim-genkey: private key written to default.private
opendkim-genkey: extracting public key
opendkim-genkey: DNS TXT record written to default.txt

The above command creates 2048-bit keys. -d (domain) specifies the domain. -D (directory) specifies the directory where the keys will be stored, and we use default as the selector (-s), also known as the name. Once the command completes, the private key is written to the default.private file and the public key to the default.txt file.

Ensure opendkim is the owner of the private key.

sudo chown opendkim:opendkim /etc/opendkim/keys/your-domain.com/default.private

Also change the permission, so only the opendkim user has read and write access to the file.

sudo chmod 600 /etc/opendkim/keys/your-domain.com/default.private
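As a sanity check, the public key derived from the private key should match the p= value that opendkim-genkey wrote to default.txt. A sketch (dkim_pubkey is a hypothetical helper; the path is the one created above):

```shell
# Derive the public key from a DKIM private key; its base64 body
# should match the p= value published in DNS.
dkim_pubkey() {
  openssl rsa -in "$1" -pubout 2>/dev/null
}

# On the server (the key file is readable only by opendkim/root):
#   sudo -i
#   dkim_pubkey /etc/opendkim/keys/your-domain.com/default.private
```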

Publish Your Public Key in DNS Records

Display the public key

sudo cat /etc/opendkim/keys/your-domain.com/default.txt
default._domainkey	IN	TXT	( "v=DKIM1; k=rsa; "
	  "p=MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAutisB7xnL1B1j88Er2VsEd6WuwifqSThKEcrnlkhnsVhs/UkCd2lHL+dZwivjbfH+4RXIP0LK9shGokPwaA2MHNH3GgAuWZ/Wb6ZrZwqDlmHy+H6Q0/cLsB2Py2HFthq1JUhHW31ZOIqa4qOn2suBntQdizGExHsuMMb1nJpu0lgFJLU848qPQO76QMTcC/TyssiCjLXXSQEsS"
	  "Kx0UmeODJ43NKAAS0OqkGBD2UE7/SW54bVpESK32lTIfzk91OdW+zDMzX6myToJtEE9WgOkgD2evSTp02dhKBBRkQvGJ0SF7el34e/smeS+XvodjjOvP2f3qW5cLvrCRByIkFzRwIDAQAB" )  ; ----- DKIM key default for your-domain.com

The string after the p parameter is the public key. It spans two strings in the cat output because a single TXT string is limited to 255 characters; in the DNS record it should be one long string with no quotes.

In your DNS manager,

  • create a TXT record,
  • enter default._domainkey in the name field.
  • Copy everything between the parentheses and paste it into the value field of the DNS record. Delete all double quotes and white spaces in the value field. Join all the lines into one line.
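The joining can also be done in the terminal. A sketch (flatten_dkim_txt is a hypothetical helper; it assumes the opendkim-genkey layout shown above, keeping what is between the parentheses and stripping quotes and whitespace):

```shell
# Join opendkim-genkey TXT output into one DNS-ready string: keep the
# text between the parentheses, drop quotes, spaces, and tabs.
flatten_dkim_txt() {
  tr -d '\n' | sed -e 's/.*( *//' -e 's/ *).*//' | tr -d '" \t'
}

# On the server:
#   sudo cat /etc/opendkim/keys/your-domain.com/default.txt | flatten_dkim_txt
```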

For Example; look at a recent google e-mail:

X-Google-DKIM-Signature: v=1; a=rsa-sha256; c=relaxed/relaxed;
        d=1e100.net; s=20210112; t=1677769129;
        h=to:list-unsubscribe:user-agent:mime-version:subject:message-id:from
         :date:dkim-signature:dkim-signature:delivered-to:x-gm-message-state
         :from:to:cc:subject:date:message-id:reply-to;
        bh=V22zSr/CKU90o7uszazuWVoXgsouLolWaiMhbqvqG8Y=;
        b=FUH9YP2CWhupcc6eU3hVYigxSWJ2fVJHF1F3DrkJoMb1K3hf9O7vrTWDxqNIOvGmPS
         6sCsgvynVtQ+cccVgzc6vZLBYDg3XBdQF6u2hxiMIAAkyPGVUUrYpj/OreMj1WkGDkG0
         9yVxpp0UuIK8uyrfswX9zBWT/QORjQ4Lfh3KCbzaLX8DbfWoc3P907Ebc8cfvVGDu2wX
         oNBeYjEXb4sywHcVmuUNdg//O78sAY5CnSVZ3Gc/41/pFtNHdCCjQAWSQ/W/Czfsy2TY
         Ovuebwb71h3VlUpqDIqkaIMZ5rF9pWxqeQgHkIs8Ktgd8CnAhqnk77ZXWk0SR9Q8hkuQ
         +BQw==

The s and d tags are used to look up the DKIM record. s is the selector while d is the domain.

s=20210112 d=1e100.net

Together they form the DNS lookup string: 20210112._domainkey.1e100.net

$ dig +short txt 20210112._domainkey.1e100.net
"v=DKIM1; k=rsa; p=MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA8Sabq+yC2d91PkWTEkjdH2AsaH31YzVJ8PKlQy2GZ9u8vtqfQM88nACYwWAbCQvLwUfCz7hbF/PyM3K/SPlDSk0HqKq89AQjv60br0pK90vWFmzt04ioNcf4QoiJjnnTWD6h5gOM" "ATz4WfdwCrQ9MRF0SjDHteEVeHCK4WKsWKdPshaSLiVfZxiGLv4SZkWye7Zh5iM66MUvYAr3x151AyCQroTNfJY9RN9RK2ZqLdcoulg7S/XMbnzY7EW0P8nPj2jqvMp0bcr13tOzBRnysYiQIu3cjrtyLNfAZobK6tlmy737vkVH27D0rsUrcABrFqvVox61h61JssaRYcwRpwIDAQAB"

Your key lookup should look like:

$ dig +short txt default._domainkey.your-domain.com
"v=DKIM1; k=rsa; p=MIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAutisB7xnL1B1j88Er2VsEd6WuwifqSThKEcrnlkhnsVhs/UkCd2lHL+dZwivjbfH+4RXIP0LK9shGokPwaA2MHNH3GgAuWZ/Wb6ZrZwqDlmHy+H6Q0/cLsB2Py2HFthq1JUhHW31ZOIqa4qOn2suBntQdizGExHsuMMb1nJpu0lgFJLU848qPQO76QMTcC/TyssiCjLXXSQEsS" "Kx0UmeODJ43NKAAS0OqkGBD2UE7/SW54bVpESK32lTIfzk91OdW+zDMzX6myToJtEE9WgOkgD2evSTp02dhKBBRkQvGJ0SF7el34e/smeS+XvodjjOvP2f3qW5cLvrCRByIkFzRwIDAQAB"

Reference: https://www.cloudflare.com/learning/dns/dns-records/dns-dkim-record/

Test DKIM Key

Test your key.

sudo opendkim-testkey -d your-domain.com -s default -vvv

You should see Key OK in the output.

opendkim-testkey: using default configfile /etc/opendkim.conf
opendkim-testkey: checking key 'default._domainkey.your-domain.com'
opendkim-testkey: key secure
opendkim-testkey: key OK

Your DKIM record will take some time to propagate to the Internet.

To check, go to https://www.dmarcanalyzer.com/dkim/dkim-check/ and use default as the selector and your domain name to check DKIM record propagation.

Connect Postfix to OpenDKIM

Postfix can talk to OpenDKIM via a Unix socket file. It is a good idea to put it where all the other postfix socket files are.

Create a directory for the OpenDKIM socket file and limit it to the opendkim user and postfix group.

$ sudo mkdir                  /var/spool/postfix/opendkim
$ sudo chown opendkim:postfix /var/spool/postfix/opendkim

Set Socket to local

File: /etc/opendkim.conf

Socket    local:/var/spool/postfix/opendkim/opendkim.sock

On Debian, change the SOCKET setting in /etc/default/opendkim as well (if that file exists).

Add openDKIM to Postfix main.cf

A milter is a mail filter that can inspect and modify messages

File: /etc/postfix/main.cf

# Milter configuration
milter_default_action = accept
milter_protocol = 6
smtpd_milters = local:opendkim/opendkim.sock
non_smtpd_milters = $smtpd_milters

Restart opendkim and postfix service

$ sudo systemctl restart opendkim postfix

SPF and DKIM Check

Send a test email to another domain's account, and check whether the SPF and DKIM checks pass in the message source.

Example:

Authentication-Results: dkim-verifier.icloud.com;
	dkim=pass (2048-bit key) header.d=your-domain.com header.i=@your-domain.com header.b=yn+tGP2N
Authentication-Results: spf.icloud.com; spf=pass (spf.icloud.com: domain of you@your-domain.com designates 1.2.3.4 as permitted sender) smtp.mailfrom=you@your-domain.com

Your email server will also perform SPF and DKIM checks on other domains.

Example:

Received-SPF: Pass (mailfrom) identity=mailfrom; client-ip=1234:f8b0:5678:90::372; helo=mail-yw1-xc2d.google.com; envelope-from=someone@gmail.com; receiver=<UNKNOWN> 
Authentication-Results: you.your-domain.com;
	dkim=pass (2048-bit key; unprotected) header.d=gmail.com header.i=@gmail.com header.b="IsoKGudn";
	dkim-atps=neutral

Postfix Can’t Connect to OpenDKIM

Check the logs

$ vi /var/log/mail*
$ sudo journalctl -eu opendkim

If you see this:

connect to Milter service local:opendkim/opendkim.sock: No such file or directory

check the opendkim systemd service.

$ sudo systemctl status opendkim

If opendkim is running, it means Postfix can’t connect to OpenDKIM via the Unix domain socket (local:opendkim/opendkim.sock).

Check the socket file:

$ ls -l /var/spool/postfix/opendkim/opendkim.sock 
srwxrwxr-x 1 opendkim opendkim 0 Mar  2 15:09 /var/spool/postfix/opendkim/opendkim.sock

And postfix user:

$ id postfix
uid=89(postfix) gid=89(postfix) groups=89(postfix),12(mail),988(opendkim)

Be sure the postfix user has group opendkim.

If all else fails, you can configure OpenDKIM to use a TCP/IP socket instead of Unix local socket.

File: /etc/opendkim.conf

Socket     inet:8892@localhost

File: /etc/postfix/main.cf

smtpd_milters = inet:127.0.0.1:8892

Restart OpenDKIM and Postfix.

sudo systemctl restart opendkim postfix

Now Postfix will connect to OpenDKIM via the TCP/IP socket.

Configuration Error in Email Client

DKIM signing could fail if you do not use STARTTLS or SSL/TLS.

Type   Port   Encryption   Password
SMTP   587    STARTTLS     Normal
IMAP   143    STARTTLS     Normal
SMTP   465    SSL/TLS      Normal
IMAP   993    SSL/TLS      Normal

Wrong Settings

Using port 25 as the SMTP port in mail clients to submit outgoing emails, with no encryption method selected.

Testing Email Score and Placement

Visit Mail-Tester https://www.mail-tester.com and send an E-Mail to the unique email address displayed on the home page. They will analyze it and return a sender score.

GlockApps https://glockapps.com/ will show you where your emails are being delivered at Gmail, Outlook, & all major ISPs.

What is DMARC?

DMARC stands for Domain-based Message Authentication, Reporting and Conformance. DMARC is a protocol for protecting your Internet domain from abuse.

It extends SPF and DKIM using another DNS entry.

Originators of Internet Mail need to be able to associate reliable and authenticated domain identifiers with messages, communicate policies about messages that use those identifiers, and report about mail using those identifiers. These abilities have several benefits: Receivers can provide feedback to Domain Owners about the use of their domains; this feedback can provide valuable insight about the management of internal operations and the presence of external domain name abuse.

Reference: https://datatracker.ietf.org/doc/html/rfc7489

Create DMARC Record

DMARC policies are published as a TXT record in DNS.

  • Before creating a DMARC record, you must first create SPF and DKIM records.

  • Send a test email from your domain, and check the raw email headers at the recipient’s mailbox. Ensure the domain is the same in:

    • Return-path: you@domain
    • From: you@domain
    • d=domain in the DKIM signature

If the 3 domains are identical, then they are aligned.

If Return-Path: or DKIM d= uses a subdomain instead of the main domain name, then this is called relaxed alignment. If no subdomain is used and the main domain names are the same, it’s called strict alignment.

DMARC Record TXT

Domain Owner DMARC preferences are stored as DNS TXT records in subdomains named "_dmarc". For example, the Domain Owner of "example.com" would post DMARC preferences in a TXT record at "_dmarc.example.com".

In your Domain Registration DNS manager add a new TXT record.

Name field:

_dmarc

Value field:

v=DMARC1; p=none; pct=100; rua=mailto:dmarc-reports@your-domain.com

Definition:

  • v=DMARC1: Version (plain-text; REQUIRED). Identifies the record retrieved as a DMARC record. It MUST have the value of "DMARC1".
  • p=none: Requested Mail Receiver policy (plain-text; REQUIRED for policy records).
  • pct=100: (plain-text integer between 0 and 100, inclusive; OPTIONAL; default is 100). Percentage of messages from the Domain Owner's mail stream to which the DMARC policy is to be applied.
  • rua: Addresses to which aggregate feedback is to be sent (comma- separated plain-text list of DMARC URIs; OPTIONAL).

There are 3 policies you can choose from:

  • none: The Domain Owner requests no specific action be taken regarding delivery of messages.
  • quarantine: The Domain Owner wishes to have email that fails the DMARC mechanism check be treated by Mail Receivers as suspicious.
  • reject: The Domain Owner wishes for Mail Receivers to reject email that fails the DMARC mechanism check. Rejection SHOULD occur during the SMTP transaction.

Another option to consider is:

  • fo: Failure reporting options (plain-text; OPTIONAL; default is "0") Provides requested options for generation of failure reports. Report generators MAY choose to adhere to the requested options. This tag's content MUST be ignored if a "ruf" tag (below) is not also specified. The value of this tag is a colon-separated list of characters that indicate failure reporting options as follows:

    • 0: Generate a DMARC failure report if all underlying authentication mechanisms fail to produce an aligned "pass" result.

    • 1: Generate a DMARC failure report if any underlying authentication mechanism produced something other than an aligned "pass" result.

    • d: Generate a DKIM failure report if the message had a signature that failed evaluation, regardless of its alignment. DKIM- specific reporting is described in [AFRF-DKIM].

    • s: Generate an SPF failure report if the message failed SPF evaluation, regardless of its alignment. SPF-specific reporting is described in [AFRF-SPF].

  • ruf: Addresses to which message-specific failure information is to be reported (comma-separated plain-text list of DMARC URIs; OPTIONAL). If present, the Domain Owner is requesting Mail Receivers to send detailed failure reports about messages that fail the DMARC evaluation in specific ways (see the "fo" tag above).

Try fo=1 at first for detailed DMARC failure reports. When you change to a more restrictive policy, use fo=0.

v=DMARC1; p=none; pct=100; fo=1; ruf=mailto:dmarc-reports@your-domain.com

If you have a domain name that will not send emails, use p=reject policy.

v=DMARC1; p=reject; pct=100;
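Before publishing a record, it can be sanity-checked: per RFC 7489 the v=DMARC1 tag must come first, and a p= policy tag is required. A minimal sketch (dmarc_ok is a hypothetical helper, not a full parser):

```shell
# Minimal DMARC record check: v=DMARC1 must be the first tag and a
# p= policy tag must appear somewhere in the record.
dmarc_ok() {
  case $1 in
    'v=DMARC1;'*'p='*) return 0 ;;
    *) return 1 ;;
  esac
}

# dmarc_ok 'v=DMARC1; p=none; pct=100'   -> true (exit status 0)
```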

DMARC Record Check

You can check your DMARC record from Linux terminal with the following command:

$ dig txt +short _dmarc.example.com
"v=DMARC1; p=none; pct=100; rua=mailto:postmaster@your-domain.com"

You can also install opendmarc-check to check a DMARC record.

Install opendmarc package

sudo apt install opendmarc

It checks DNS and translates the DMARC record to a human readable form.

$ opendmarc-check your-domain.com
DMARC record for your-domain.com:
	Sample percentage: 100
	DKIM alignment: relaxed
	SPF alignment: relaxed
	Domain policy: none
	Subdomain policy: unspecified
	Aggregate report URIs:
		mailto:postmaster@your-domain.com
	Failure report URIs:
		(none)

DMARC Test E-Mail

Send an email from your domain to another domain's account. If DMARC is configured correctly then you will see dmarc=pass in the Authentication-Results: header.

Authentication-Results: dmarc.icloud.com; dmarc=pass header.from=your-domain.com
X-DMARC-Info: pass=pass; dmarc-policy=none; s=r1; d=r1; pdomain=your-domain.com
X-DMARC-Policy: v=DMARC1; p=none; pct=100; rua=mailto:postmaster@your-domain.com
Authentication-Results: dkim-verifier.icloud.com;
	dkim=pass (2048-bit key) header.d=your-domain.com header.i=@your-domain.com header.b=wVFZ+19t
Authentication-Results: spf.icloud.com; spf=pass (spf.icloud.com: domain of you@your-domain.com designates 1.2.3.4 as permitted sender) smtp.mailfrom=you@your-domain.com
Received-SPF: pass (spf.icloud.com: domain of you@your-domain.com designates 1.2.3.4 as permitted sender) receiver=spf.icloud.com; client-ip=1.2.3.4; helo=mail.your-domain.org; envelope-from=you@your-domain.com
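Rather than reading the full headers, you can grep out just the verdicts. A sketch; sample.eml is a hypothetical file saved from your mail client (most clients have a "Show Original" or "View Source" option):

```shell
# Summarize the authentication verdicts in a saved raw message.
# sample.eml is a hypothetical export standing in for a real message.
cat > sample.eml <<'EOF'
Authentication-Results: dmarc.icloud.com; dmarc=pass header.from=your-domain.com
Authentication-Results: dkim-verifier.icloud.com; dkim=pass header.d=your-domain.com
Authentication-Results: spf.icloud.com; spf=pass smtp.mailfrom=you@your-domain.com
EOF
grep -o -e 'dmarc=[a-z]*' -e 'dkim=[a-z]*' -e 'spf=[a-z]*' sample.eml
```

For the sample above it prints dmarc=pass, dkim=pass, and spf=pass, one per line.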

Interpret a DMARC Report

There are two kinds of DMARC reports.

  • Daily XML-based [1] aggregate report generated by Gmail, Yahoo, Hotmail, etc.
  • Real-time forensic reports (copies of individual pieces of email that fail the DMARC check)

Normally you only want to receive the aggregate (rua) report. The data that DMARC produces is invaluable for understanding what is going on for any given email domain. However, raw DMARC report data is super hard to read and understand.

Postmark offers a free service to process these reports. The nice part is that you can tell receiving email servers to send their XML reports directly to Postmark for processing: instead of entering your own address in the DMARC record, you enter a postmarkapp.com address that is unique to you.

v=DMARC1; p=none; pct=100; fo=1; rua=mailto:unique-to-you@dmarc.postmarkapp.com;

You can also specify multiple email addresses, separated by commas.

v=DMARC1; p=none; pct=100; fo=1; rua=mailto:unique-to-you@dmarc.postmarkapp.com,mailto:dmarc-report@your-domain.com;

After your DMARC record has been verified by Postmark, you will receive a DMARC report in your email inbox every Monday. You do not need to register an account at Postmark.

Many other firms offer DMARC report processing. If you do a lot of e-mail marketing, it is worth evaluating a few of them.

  1. https://datatracker.ietf.org/doc/html/rfc7489#appendix-C


PostgreSQL


Table of Contents


PostgreSQL is a relational SQL database for home systems. It is a great database: highly reliable, performant, and extensible. External extensions allow even more customization.

  • User Roles
  • Stored Procedures
  • JSON native support
  • Filesystem access
  • Federation with other databases
  • Physical and logical replication
  • Partitioning
  • Small system requirements

Reference: https://www.postgresql.org/

Installation (version 13)

The current version is 15, and Ubuntu 22.04 installs 14.

Install

This example uses version 13, but the package install command has not changed.

$ sudo apt-get install postgresql postgresql-contrib

Enable auto startup

Yes, this is still a version 13 example, but it should be the same on 14 and 15.

$ sudo systemctl enable postgresql@13-main
$ sudo systemctl start postgresql@13-main
$ sudo systemctl status postgresql@13-main
  • RedHat

All files will be stuffed into a single directory by default (/var/lib/pgsql/data): data, config, logs, etc.

If you want to have multiple versions, or to upgrade versions, read the file /usr/share/doc/postgresql/README.rpm-dist [1]. RedHat gives you more control over the placement and layout of the postgres database(s).

See pg_lscluster.sh [2] below for more info on versions.

$ sudo -u postgres /usr/bin/postgresql-setup --initdb
 * Initializing database in '/var/lib/pgsql/data'
 * Initialized, logs are in /var/lib/pgsql/initdb_postgresql.log
$ sudo systemctl enable --now postgresql
$ sudo systemctl status postgresql
  1. https://fedoraproject.org/wiki/PostgreSQL/README.rpm-dist
  2. Clusters

Check out all these Options

Check the installed databases, the main one (postgres) plus two templates, and list the available command-line directives (they start with the backslash (\) character):

$ sudo -u postgres psql
postgres-# \l
                              List of databases
   Name    |  Owner   | Encoding | Collate |  Ctype  |   Access privileges   
-----------+----------+----------+---------+---------+-----------------------
 postgres  | postgres | UTF8     | C.UTF-8 | C.UTF-8 | 
 template0 | postgres | UTF8     | C.UTF-8 | C.UTF-8 | =c/postgres          +
           |          |          |         |         | postgres=CTc/postgres
 template1 | postgres | UTF8     | C.UTF-8 | C.UTF-8 | =c/postgres          +
           |          |          |         |         | postgres=CTc/postgres
(3 rows)

# \?
General
  \copyright             show PostgreSQL usage and distribution terms
  \crosstabview [COLUMNS] execute query and display results in crosstab
  \errverbose            show most recent error message at maximum verbosity
  \g [(OPTIONS)] [FILE]  execute query (and send results to file or |pipe);
                         \g with no arguments is equivalent to a semicolon
  \gdesc                 describe result of query, without executing it
  \gexec                 execute query, then execute each value in its result
  \gset [PREFIX]         execute query and store results in psql variables
  \gx [(OPTIONS)] [FILE] as \g, but forces expanded output mode
  \q                     quit psql
  \watch [SEC]           execute query every SEC seconds

Help
  \? [commands]          show help on backslash commands
  \? options             show help on psql command-line options
  \? variables           show help on special variables
  \h [NAME]              help on syntax of SQL commands, * for all commands

Query Buffer
  \e [FILE] [LINE]       edit the query buffer (or file) with external editor
  \ef [FUNCNAME [LINE]]  edit function definition with external editor
  \ev [VIEWNAME [LINE]]  edit view definition with external editor
  \p                     show the contents of the query buffer
  \r                     reset (clear) the query buffer
  \s [FILE]              display history or save it to file
  \w FILE                write query buffer to file

Input/Output
  \copy ...              perform SQL COPY with data stream to the client host
  \echo [-n] [STRING]    write string to standard output (-n for no newline)
  \i FILE                execute commands from file
  \ir FILE               as \i, but relative to location of current script
  \o [FILE]              send all query results to file or |pipe
  \qecho [-n] [STRING]   write string to \o output stream (-n for no newline)
  \warn [-n] [STRING]    write string to standard error (-n for no newline)

Conditional
  \if EXPR               begin conditional block
  \elif EXPR             alternative within current conditional block
  \else                  final alternative within current conditional block
  \endif                 end conditional block

Informational
  (options: S = show system objects, + = additional detail)
  \d[S+]                 list tables, views, and sequences
  \d[S+]  NAME           describe table, view, sequence, or index
  \da[S]  [PATTERN]      list aggregates
  \dA[+]  [PATTERN]      list access methods
  \dAc[+] [AMPTRN [TYPEPTRN]]  list operator classes
  \dAf[+] [AMPTRN [TYPEPTRN]]  list operator families
  \dAo[+] [AMPTRN [OPFPTRN]]   list operators of operator families
  \dAp[+] [AMPTRN [OPFPTRN]]   list support functions of operator families
  \db[+]  [PATTERN]      list tablespaces
  \dc[S+] [PATTERN]      list conversions
  \dC[+]  [PATTERN]      list casts
  \dd[S]  [PATTERN]      show object descriptions not displayed elsewhere
  \dD[S+] [PATTERN]      list domains
  \ddp    [PATTERN]      list default privileges
  \dE[S+] [PATTERN]      list foreign tables
  \des[+] [PATTERN]      list foreign servers
  \det[+] [PATTERN]      list foreign tables
  \deu[+] [PATTERN]      list user mappings
  \dew[+] [PATTERN]      list foreign-data wrappers
  \df[anptw][S+] [FUNCPTRN [TYPEPTRN ...]]
                         list [only agg/normal/procedure/trigger/window] functions
  \dF[+]  [PATTERN]      list text search configurations
  \dFd[+] [PATTERN]      list text search dictionaries
  \dFp[+] [PATTERN]      list text search parsers
  \dFt[+] [PATTERN]      list text search templates
  \dg[S+] [PATTERN]      list roles
  \di[S+] [PATTERN]      list indexes
  \dl                    list large objects, same as \lo_list
  \dL[S+] [PATTERN]      list procedural languages
  \dm[S+] [PATTERN]      list materialized views
  \dn[S+] [PATTERN]      list schemas
  \do[S+] [OPPTRN [TYPEPTRN [TYPEPTRN]]]
                         list operators
  \dO[S+] [PATTERN]      list collations
  \dp     [PATTERN]      list table, view, and sequence access privileges
  \dP[itn+] [PATTERN]    list [only index/table] partitioned relations [n=nested]
  \drds [ROLEPTRN [DBPTRN]] list per-database role settings
  \dRp[+] [PATTERN]      list replication publications
  \dRs[+] [PATTERN]      list replication subscriptions
  \ds[S+] [PATTERN]      list sequences
  \dt[S+] [PATTERN]      list tables
  \dT[S+] [PATTERN]      list data types
  \du[S+] [PATTERN]      list roles
  \dv[S+] [PATTERN]      list views
  \dx[+]  [PATTERN]      list extensions
  \dX     [PATTERN]      list extended statistics
  \dy[+]  [PATTERN]      list event triggers
  \l[+]   [PATTERN]      list databases
  \sf[+]  FUNCNAME       show a function's definition
  \sv[+]  VIEWNAME       show a view's definition
  \z      [PATTERN]      same as \dp

Formatting
  \a                     toggle between unaligned and aligned output mode
  \C [STRING]            set table title, or unset if none
  \f [STRING]            show or set field separator for unaligned query output
  \H                     toggle HTML output mode (currently off)
  \pset [NAME [VALUE]]   set table output option
                         (border|columns|csv_fieldsep|expanded|fieldsep|
                         fieldsep_zero|footer|format|linestyle|null|
                         numericlocale|pager|pager_min_lines|recordsep|
                         recordsep_zero|tableattr|title|tuples_only|
                         unicode_border_linestyle|unicode_column_linestyle|
                         unicode_header_linestyle)
  \t [on|off]            show only rows (currently off)
  \T [STRING]            set HTML <table> tag attributes, or unset if none
  \x [on|off|auto]       toggle expanded output (currently off)

Connection
  \c[onnect] {[DBNAME|- USER|- HOST|- PORT|-] | conninfo}
                         connect to new database (currently "postgres")
  \conninfo              display information about current connection
  \encoding [ENCODING]   show or set client encoding
  \password [USERNAME]   securely change the password for a user

Operating System
  \cd [DIR]              change the current working directory
  \setenv NAME [VALUE]   set or unset environment variable
  \timing [on|off]       toggle timing of commands (currently off)
  \! [COMMAND]           execute command in shell or start interactive shell

Variables
  \prompt [TEXT] NAME    prompt user to set internal variable
  \set [NAME [VALUE]]    set internal variable, or list all if no parameters
  \unset NAME            unset (delete) internal variable

Large Objects
  \lo_export LOBOID FILE
  \lo_import FILE [COMMENT]
  \lo_list
  \lo_unlink LOBOID      large object operations
(END)

#\q

Clusters

Before you can do anything, you must initialize a database storage area on disk. This is called a database cluster. A database cluster is a collection of databases that are managed by a single instance of a running database server.

RedHat does not install pg_ctlcluster, as it comes from the Ubuntu/Debian postgresql-common package.

Clusters are the best way to run several versions of PostgreSQL on the same host.

Reference: https://www.postgresql.org/docs/current/creating-cluster.html

$ pg_ctlcluster
Usage: /usr/bin/pg_ctlcluster <version> <cluster> <action>

$ pg_lsclusters
Ver Cluster Port Status Owner    Data directory              Log file
13  main    5432 down   postgres /var/lib/postgresql/13/main /var/log/postgresql/postgresql-13-main.log

$ pg_ctlcluster 13 main start

$ pg_lsclusters
Ver Cluster Port Status Owner    Data directory              Log file
13  main    5432 online postgres /var/lib/postgresql/13/main /var/log/postgresql/postgresql-13-main.log

  • RedHat Version

The location of files is determined by the initdb program, which was run earlier.

File: ~/pg_lscluster.sh

#!/bin/bash
##############################################################
# 
# File: pg_lscluster
#
# Purpose: List information on postgresql installed clusters
#
# Reason: pg_lscluster only exists for the postgresql-common
#          package on Ubuntu. This script was created for
#          non-Ubuntu Linux distributions.
#
# Versions: To support multiple versions, simply
#  1) Place the postgresql.conf files in versions:
#     i.e.: /etc/postgres/<version>/main/postgresql.conf
#  2) Create services with versions:
#     i.e.: postgresql@<version>-main.service
#  Of course data_directory and log_directory must also have
#   versions, and port numbers must be different,
#   but this script does not care. ;-)
#
# History
# When        Who         Description
# ----------- ----------- -----------------------------------------
# 12-Jun-2023 Don Cohoon  Created
##############################################################
TMP=$(mktemp)
#-----------
function details() {
  SERVICE=${1}
  PORT=${2}
  #
  sudo systemctl status ${SERVICE} > ${TMP}
  grep -e active -e PID ${TMP}
  # Main PID: 904 (postmaster)
  PROCESS=$(grep PID ${TMP}| cut -d'(' -f2| cut -d')' -f1)
  # Grab the '-D <directory>' run-time parameter
  DIR=$(grep /bin/${PROCESS} ${TMP} | awk '{print $4}')
  #
  echo
  #
  sudo -u postgres ls ${DIR}/log >${TMP} 2>/dev/null
  if [[ -s ${TMP} ]]; then
    echo "   Log directory: ${DIR}/log"
  elif [[ -d /var/log/postgresql ]]; then
    echo "   Log directory: /var/log/postgresql"
  fi
  #
  echo
  #
  sudo -u postgres psql -tq -p ${PORT} <<EOF 2>/dev/null
--  SELECT '    Current Logfile: ' || pg_current_logfile();
  SELECT '  Port: ' || setting
    FROM pg_settings
   WHERE name = 'port';
  SELECT '  Data directory: ' || setting
    FROM pg_settings
   WHERE name = 'data_directory';
  SELECT '  Config File: ' || setting
    FROM pg_settings
   WHERE name = 'config_file';
  SELECT '  Host Based Access File: ' || setting
    FROM pg_settings
   WHERE name = 'hba_file';
EOF
  rm ${TMP}
} # details
#-----------
echo
#
if [[ -f /usr/bin/pg_config ]];then
  V=$(/usr/bin/pg_config | tail -1 | awk '{print $4}')
else
  V=$(/usr/bin/postgres -V | awk '{print $NF}')
fi
echo "   Default Version:  ${V}"
echo
#
if [[ -d /etc/postgresql/ ]]; then
 for VER in $(ls /etc/postgresql/) # return installed versions 
  do
   echo "= = = = => Version: $VER"
   PORT=$(grep '^port' /etc/postgresql/${VER}/main/postgresql.conf 2>/dev/null | awk '{print $3}')
   details postgresql@${VER}-main.service ${PORT}
  done
else # only one default version
  details postgresql 5432
fi
#
  • Ubuntu
# ~/pg_lscluster.sh 

   Default Version:  14.8

= = = = => Version: 10
     Active: inactive (dead)

   Log directory: /var/log/postgresql

= = = = => Version: 12
     Active: active (running) since Sat 2023-06-10 16:10:47 EDT; 2 days ago
   Main PID: 1485 (postgres)

   Log directory: /var/log/postgresql

   Port: 5432

   Data directory: /var/lib/postgresql/12/main

   Config File: /etc/postgresql/12/main/postgresql.conf

   Host Based Access File: /etc/postgresql/12/main/pg_hba.conf

= = = = => Version: 14
     Active: active (running) since Sat 2023-06-10 16:10:47 EDT; 2 days ago
   Main PID: 1462 (postgres)

   Log directory: /var/log/postgresql

   Port: 5433

   Data directory: /var/lib/postgresql/14/main

   Config File: /etc/postgresql/14/main/postgresql.conf

   Host Based Access File: /etc/postgresql/14/main/pg_hba.conf
  • RedHat
# ~/pg_lscluster.sh 

   Default Version:  13.10

     Active: active (running) since Tue 2023-06-13 06:04:30 EDT; 2h 13min ago
   Main PID: 904 (postmaster)

   Log directory: /var/lib/pgsql/data/log

   Port: 5432

   Data directory: /var/lib/pgsql/data

   Config File: /var/lib/pgsql/data/postgresql.conf

   Host Based Access File: /var/lib/pgsql/data/pg_hba.conf
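The port lookup inside pg_lscluster.sh is just grep and awk over postgresql.conf. Here it is isolated on a stand-in config file (the real file lives under /etc/postgresql/&lt;version&gt;/main on Ubuntu or /var/lib/pgsql/data on RedHat):

```shell
# The same port extraction pg_lscluster.sh uses, run against a
# stand-in postgresql.conf created just for this demo.
cat > /tmp/demo-postgresql.conf <<'EOF'
# - Connection Settings -
port = 5432            # (change requires restart)
EOF
grep '^port' /tmp/demo-postgresql.conf | awk '{print $3}'
```

It prints 5432, the third whitespace-separated field of the uncommented port line.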

Log File

$ tail -40  /var/log/postgresql/postgresql-13-main.log
2018-11-16 06:04:16.889 EST [1019] LOG:  received fast shutdown request
2018-11-16 06:04:16.892 EST [1019] LOG:  aborting any active transactions
2018-11-16 06:04:16.897 EST [1019] LOG:  worker process: logical replication launcher (PID 1034) exited with exit code 1
2018-11-16 06:04:16.901 EST [1029] LOG:  shutting down
2018-11-16 06:04:16.942 EST [1019] LOG:  database system is shut down

Configuration File

There are many configuration parameters that affect the behavior of the database system.

Reference: https://www.postgresql.org/docs/current/runtime-config.html

Debian

postgres=# show config_file;
               config_file               
-----------------------------------------
 /etc/postgresql/12/main/postgresql.conf
(1 row)

RedHat

# show config_file;
             config_file             
-------------------------------------
 /var/lib/pgsql/data/postgresql.conf
(1 row)

Host Based Authentication (HBA) File

Client authentication is controlled by a configuration file, which traditionally is named pg_hba.conf and is stored in the database cluster's data directory. (HBA stands for host-based authentication.)

Reference: https://www.postgresql.org/docs/current/auth-pg-hba-conf.html

Debian

# show hba_file;
              hba_file               
-------------------------------------
 /etc/postgresql/12/main/pg_hba.conf
(1 row)

RedHat

# show hba_file;
            hba_file             
---------------------------------
 /var/lib/pgsql/data/pg_hba.conf
(1 row)

psql pager - pspg

The command-line pager defaults to less, but you can install a much better one.

Pspg supports searching; selecting rows, columns, or blocks; and exporting the selected area to the clipboard. Check out the screenshots, installation options, and animation on the website. A super useful productivity tool!

Reference: https://github.com/okbob/pspg

Install

$ sudo apt-get install pspg

Helper Script

Create a helper script to run it. Add /usr/local/bin to your PATH if it is not already there.

I defaulted it to a database named mydb. Change it to your favorite database, or just remove the -d mydb part of the sudo line.

File: /usr/local/bin/psql.sh

#!/bin/bash
# https://github.com/okbob/pspg
echo "-> Fancy pager (pspg) [-s 1,2,3,4,...20] <- themes"
echo "\set xx '\\\\setenv PAGER \'pspg -bX --no-mouse -s 5\''"
echo ":xx"
echo "-> Default pager (less)"
echo "\set x '\\\\setenv PAGER less'"
echo ":x"
sudo -u postgres psql -d mydb $@
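If you copy psql.sh to machines where pspg may not be installed, a small guard near the top of the script can fall back to less. A sketch, not part of the original script:

```shell
# Pick pspg when it is available, otherwise fall back to less.
# (Hypothetical addition to psql.sh; the -s 5 theme matches the text above.)
if command -v pspg >/dev/null 2>&1; then
  export PAGER='pspg -bX --no-mouse -s 5'
else
  export PAGER=less
fi
echo "Using pager: $PAGER"
```

psql honors the PAGER environment variable, so exporting it before launching psql selects the pager for the whole session.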

Example

pspg allows paging forward, backward, as well as scrolling right and left with arrow keys.

Execute a file from the script parameter name

$ cat now.sql 
select * from now();

$ psql.sh -f now.sql
-> Fancy pager (pspg) [-s 1,2,3,4,...20] <- themes
\set xx '\\setenv PAGER \'pspg -bX --no-mouse -s 5\''
:xx
-> Default pager (less)
\set x '\\setenv PAGER less'
:x
              now              
-------------------------------
 2023-01-28 19:42:07.897614-05
(1 row)

Example using the pager.

Copy & Paste \set xx '\\setenv PAGER \'pspg -bX --no-mouse -s 5\'' into the prompt, hit Enter. Remember to enter :xx to run the pspg pager.

$ psql.sh 
-> Fancy pager (pspg) [-s 1,2,3,4,...20] <- themes
\set xx '\\setenv PAGER \'pspg -bX --no-mouse -s 5\''
:xx
-> Default pager (less)
\set x '\\setenv PAGER less'
:x
psql (14.5 (Ubuntu 14.5-0ubuntu0.22.04.1), server 12.12 (Ubuntu 12.12-0ubuntu0.20.04.1))
Type "help" for help.

mydb=# \set xx '\\setenv PAGER \'pspg -bX --no-mouse -s 5\''
mydb=# :xx
mydb-# select * from mydb.log_file ;

pspg.png

Continue

Now that you have installed a shiny new database, consider how upgrades work.

In our next episode of Linux in the House.

Proceed in the order presented; some things depend on prior setups.

Book Last Updated: 29-March-2024

PostgreSQL Install and Upgrade


Table of Contents


Establish Baseline

Oh my, installing Ubuntu 22.04 installed PostgreSQL 14... wonder what version I am using...

What is installed?

$ sudo -u postgres pg_lsclusters 
Ver Cluster Port Status                Owner     Data directory               Log file
9.3 main    5433 down,binaries_missing <unknown> /var/lib/postgresql/9.3/main /var/log/postgresql/postgresql-9.3-main.log
9.5 main    5432 down,binaries_missing <unknown> /var/lib/postgresql/9.5/main /var/log/postgresql/postgresql-9.5-main.log
10  main    5434 down                  postgres  /var/lib/postgresql/10/main  /var/log/postgresql/postgresql-10-main.log
12  main    5432 online                postgres  /var/lib/postgresql/12/main  /var/log/postgresql/postgresql-12-main.log
14  main    5435 online                postgres  /var/lib/postgresql/14/main  /var/log/postgresql/postgresql-14-main.log

Well now, that's quite a mess. Let's try to clean that up and get current.

Binaries:

$ ls /var/lib/postgresql/
10  12	14

So 10, 12, and 14 are installed. That matches our pg_lsclusters Perl script.

Configurations:

$ ls /etc/postgresql/*
/etc/postgresql/10:
main

/etc/postgresql/12:
main

/etc/postgresql/14:
main

/etc/postgresql/9.3:
main

/etc/postgresql/9.5:
main

Configs are hanging around for 9.3, 9.5, 10, 12, 14

Cleaning up: List of config files

$ ls -l /etc/postgresql/9.3/main/
total 48
-rw-r--r-- 1 postgres postgres   315 Apr 18  2015 environment
-rw-r--r-- 1 postgres postgres   143 Apr 18  2015 pg_ctl.conf
-rw-r----- 1 postgres postgres  4970 Jun  4  2016 pg_hba.conf
-rw-r----- 1 postgres postgres  1636 Apr 18  2015 pg_ident.conf
-rw-r--r-- 1 postgres postgres 20816 Mar  9  2018 postgresql.conf
-rw-r--r-- 1 postgres postgres   382 Mar  9  2018 start.conf

What was the data directory for v 9.3?

$ sudo -u postgres grep data_directory /etc/postgresql/9.3/main/postgresql.conf 
data_directory = '/var/lib/postgresql/9.3/main'		# use data in another directory

Is it empty?

$ sudo -u postgres ls -l /var/lib/postgresql/9.3/main
ls: cannot access '/var/lib/postgresql/9.3/main': No such file or directory

Then it is OK to remove:

$ dpkg -l|grep 'postgresql.*.9'

The packages are long gone; just remove the leftovers.

$ sudo -u postgres rm -rf /var/run/postgresql/9.3/
$ sudo -u postgres rm -rf /var/lib/postgresql/9.3/
$ sudo             rm -rf /usr/lib/postgresql/9.3/
$ sudo -u postgres rm -rf /etc/postgresql/9.3/

(repeat for 9.5)
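The four removal commands can be repeated for both dead versions with a loop. A sketch kept as a dry run: echo prints each command instead of executing it, so nothing is deleted until you remove the echo after verifying, as above, that the version is unused.

```shell
# Dry-run cleanup loop for the dead 9.x versions.
# echo prints each rm command; drop the echo only after confirming
# the version's data directory is gone and its packages are removed.
for v in 9.3 9.5; do
  for d in /var/run/postgresql /var/lib/postgresql /usr/lib/postgresql /etc/postgresql; do
    echo sudo rm -rf "$d/$v"
  done
done
```

It prints the eight rm commands, four per version, ready to review before running for real.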

Version 10 is installed, but down as per pg_lsclusters. Check v10:

$ sudo -u postgres grep data_directory /etc/postgresql/10/main/postgresql.conf 
data_directory = '/var/lib/postgresql/10/main'		# use data in another directory

$ sudo -u postgres ls -l /var/lib/postgresql/10/main
total 80
drwx------ 8 postgres postgres 4096 Jan  1  2020 base
drwx------ 2 postgres postgres 4096 Aug 14  2020 global
drwx------ 2 postgres postgres 4096 Dec 31  2019 pg_commit_ts
drwx------ 2 postgres postgres 4096 Dec 31  2019 pg_dynshmem
drwx------ 4 postgres postgres 4096 Aug 14  2020 pg_logical
drwx------ 4 postgres postgres 4096 Dec 31  2019 pg_multixact
drwx------ 2 postgres postgres 4096 Aug 14  2020 pg_notify
drwx------ 2 postgres postgres 4096 Dec 31  2019 pg_replslot
drwx------ 2 postgres postgres 4096 Dec 31  2019 pg_serial
drwx------ 2 postgres postgres 4096 Dec 31  2019 pg_snapshots
drwx------ 2 postgres postgres 4096 Aug 14  2020 pg_stat
drwx------ 2 postgres postgres 4096 Dec 31  2019 pg_stat_tmp
drwx------ 2 postgres postgres 4096 Dec 31  2019 pg_subtrans
drwx------ 2 postgres postgres 4096 Dec 31  2019 pg_tblspc
drwx------ 2 postgres postgres 4096 Dec 31  2019 pg_twophase
-rw------- 1 postgres postgres    3 Dec 31  2019 PG_VERSION
drwx------ 3 postgres postgres 4096 Jan  6  2020 pg_wal
drwx------ 2 postgres postgres 4096 Dec 31  2019 pg_xact
-rw------- 1 postgres postgres   88 Dec 31  2019 postgresql.auto.conf
-rw------- 1 postgres postgres  170 Aug 14  2020 postmaster.opts

Yep, it hasn't been touched in three-plus years.

Which one is running?

$ ls /var/run/postgresql/
12-main.pg_stat_tmp  12-main.pid  14-main.pg_stat_tmp  14-main.pid

Again, this matches pg_lsclusters.

Any open ports?

$ sudo -u postgres grep port /etc/postgresql/10/main/postgresql.conf 
port = 5434				# (change requires restart)
					# supported by the operating system:
					# supported by the operating system:
					#   %r = remote host and port

$ nmap localhost  -p 5434
Starting Nmap 7.80 ( https://nmap.org ) at 2023-01-30 10:34 EST
Nmap scan report for localhost (127.0.0.1)
Host is up (0.000082s latency).

PORT     STATE  SERVICE
5434/tcp closed sgi-arrayd

Nmap done: 1 IP address (1 host up) scanned in 0.03 seconds

Good, the port is closed for version 10 of PostgreSQL. No one could possibly be using it, and the database files have not been touched in over three years. (Call me paranoid, but I have been burned before.)
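If nmap is not installed, bash itself can probe a local TCP port through its /dev/tcp pseudo-device. A quick sketch of the same check (bash-specific; it does not work in plain sh):

```shell
# Probe a local TCP port without nmap, using bash's /dev/tcp device.
# A failed connect means nothing is listening, i.e. the port is closed.
port=5434
if (exec 3<>/dev/tcp/127.0.0.1/$port) 2>/dev/null; then
  echo "port $port open"
else
  echo "port $port closed"
fi
```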

Then it is OK to remove:

$ dpkg -l|grep 'postgresql.*.10'
ii  postgresql-10                                 10.12-0ubuntu0.18.04.1                       amd64        object-relational SQL database, version 10 server
ii  postgresql-client-10                          10.12-0ubuntu0.18.04.1                       amd64        front-end programs for PostgreSQL 10
ii  postgresql-doc-10                             10.12-0ubuntu0.18.04.1                       all          documentation for the PostgreSQL database management system
ii  postgresql-server-dev-10                      10.12-0ubuntu0.18.04.1                       amd64        development files for PostgreSQL 10 server-side programming
$ sudo apt-get purge postgresql-10 postgresql-client-10 postgresql-doc-10 postgresql-server-dev-10
$ sudo -u postgres rm -rf /var/run/postgresql/10/
$ sudo -u postgres rm -rf /var/lib/postgresql/10/
$ sudo             rm -rf /usr/lib/postgresql/10/
$ sudo -u postgres rm -rf /etc/postgresql/10/

Checking the pg_lsclusters once more

$ sudo -u postgres pg_lsclusters 
Ver Cluster Port Status Owner    Data directory              Log file
12  main    5432 online postgres /var/lib/postgresql/12/main /var/log/postgresql/postgresql-12-main.log
14  main    5435 online postgres /var/lib/postgresql/14/main /var/log/postgresql/postgresql-14-main.log

Which one is live

Ahhh yes, much better now. Let's see which database is actually live.

$ sudo -u postgres psql
psql (14.6 (Ubuntu 14.6-0ubuntu0.22.04.1), server 12.12 (Ubuntu 12.12-0ubuntu0.20.04.1))
Type "help" for help.

postgres=# \q

The psql banner tells us the psql executable is version 14.6, while the server is 12.12. Also notice that the current executable path is PostgreSQL version 14.

Which database is actually running?

$ ps -jH -U postgres -u postgres u 
USER         PID    PGID     SID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
postgres    1645    1645    1645  0.0  0.1 248112 19972 ?        Ss   Jan13   5:27 /usr/lib/postgresql/12/bin/postgres -D /var/lib/post
postgres    1789    1789    1789  0.0  0.0 248216  4252 ?        Ss   Jan13   0:00   postgres: 12/main: checkpointer   
postgres    1791    1791    1791  0.0  0.0 248112  3808 ?        Ss   Jan13   0:20   postgres: 12/main: background writer   
postgres    1793    1793    1793  0.0  0.0 248112  4364 ?        Ss   Jan13   0:20   postgres: 12/main: walwriter   
postgres    1795    1795    1795  0.0  0.0 248796  6748 ?        Ss   Jan13   0:37   postgres: 12/main: autovacuum launcher   
postgres    1796    1796    1796  0.0  0.0 102752  4012 ?        Ss   Jan13   1:34   postgres: 12/main: stats collector   
postgres    1798    1798    1798  0.0  0.0 248656  4684 ?        Ss   Jan13   0:00   postgres: 12/main: logical replication launcher   
postgres    1564    1564    1564  0.0  0.1 218008 20920 ?        Ss   Jan13   0:20 /usr/lib/postgresql/14/bin/postgres -D /var/lib/post
postgres    1762    1762    1762  0.0  0.0 218128  5964 ?        Ss   Jan13   0:00   postgres: 14/main: checkpointer 
postgres    1763    1763    1763  0.0  0.0 218008  4080 ?        Ss   Jan13   0:10   postgres: 14/main: background writer 
postgres    1764    1764    1764  0.0  0.0 218008  4664 ?        Ss   Jan13   0:10   postgres: 14/main: walwriter 
postgres    1765    1765    1765  0.0  0.0 218548  6576 ?        Ss   Jan13   0:09   postgres: 14/main: autovacuum launcher 
postgres    1766    1766    1766  0.0  0.0  72600  3668 ?        Ss   Jan13   0:10   postgres: 14/main: stats collector 
postgres    1767    1767    1767  0.0  0.0 218444  4968 ?        Ss   Jan13   0:00   postgres: 14/main: logical replication launcher 

Both!

More information on the current database software can be found with pg_config:

$ pg_config | grep VERSION
VERSION = PostgreSQL 14.6 (Ubuntu 14.6-0ubuntu0.22.04.1)

The executable winner is v14. What about our database?

Remember the port number?

$ sudo -u postgres pg_lsclusters 
Ver Cluster Port Status Owner    Data directory              Log file
12  main    5432 online postgres /var/lib/postgresql/12/main /var/log/postgresql/postgresql-12-main.log
14  main    5435 online postgres /var/lib/postgresql/14/main /var/log/postgresql/postgresql-14-main.log

The port number for v14 is 5435. Check v14 databases:

$ sudo -u postgres psql -p 5435
psql (14.6 (Ubuntu 14.6-0ubuntu0.22.04.1))
Type "help" for help.

postgres=# \l
                                  List of databases
   Name    |  Owner   | Encoding |   Collate   |    Ctype    |   Access privileges   
-----------+----------+----------+-------------+-------------+-----------------------
 postgres  | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | 
 template0 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
           |          |          |             |             | postgres=CTc/postgres
 template1 | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
           |          |          |             |             | postgres=CTc/postgres
(3 rows)

postgres=# \q

The port number for v12 is 5432. Check v12 databases:

$ sudo -u postgres psql -p 5432
psql (14.6 (Ubuntu 14.6-0ubuntu0.22.04.1), server 12.12 (Ubuntu 12.12-0ubuntu0.20.04.1))
Type "help" for help.

postgres=# \l
                                      List of databases
        Name        |  Owner   | Encoding |   Collate   |    Ctype    |   Access privileges   
--------------------+----------+----------+-------------+-------------+-----------------------
 contrib_regression | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | 
 owncloud           | mycloud  | UTF8     | en_US.UTF-8 | en_US.UTF-8 | 
 postgres           | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | 
 roundcube          | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | 
 template0          | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
                    |          |          |             |             | postgres=CTc/postgres
 template1          | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
                    |          |          |             |             | postgres=CTc/postgres
(6 rows)

postgres=# \q

I see the owncloud and roundcube databases on v12, but not on v14. So the decision is easy: I want to upgrade v12 to v14, and v14 has nothing I need to keep.

Two ways to go:

  • pg_dump [1] v12 data, then load into v14
  • pg_upgrade [2] cluster v12 to v14
  1. https://www.postgresql.org/docs/current/app-pgdump.html
  2. https://www.postgresql.org/docs/current/pgupgrade.html

Export and Import (pg_dump)

Our first example will focus on the roundcube database.

First, take the application down:

$ sudo systemctl stop apache2
  1. pg_dump saves the data and schema (port 5432; v12)
$ sudo -u postgres pg_dump -p 5432 -d roundcube -f /tmp/roundcube_upgrade.sql
  1. create a new database in the new version (port 5435; v14)
$ sudo -u postgres createdb -p 5435 roundcube
  1. Check for alternate tablespace and create it in the new db as needed:
$ grep -i tablespace /etc/postgresql/12/main/postgresql.conf 
#default_tablespace = ''		# a tablespace name, '' uses the default
#temp_tablespaces = ''			# a list of tablespace names, '' uses
					# only default tablespace

$ grep -i tablespace /tmp/roundcube_upgrade.sql 
SET default_tablespace = '';

Create it if needed in the new v14 cluster:

$ sudo -u postgres psql -p 5435
postgres=# CREATE TABLESPACE your_tablespace LOCATION 'your_tablespace_location';
postgres=# \q
  4. Finally, load the dump file:
$ sudo -u postgres psql -p 5435 -d roundcube -f /tmp/roundcube_upgrade.sql >/tmp/roundcube_upgrade.log

Oops! A role is missing!

$ grep -i error /tmp/roundcube_upgrade.log
psql:/tmp/roundcube_upgrade.sql:35: ERROR:  role "roundcube" does not exist
psql:/tmp/roundcube_upgrade.sql:50: ERROR:  role "roundcube" does not exist
psql:/tmp/roundcube_upgrade.sql:66: ERROR:  role "roundcube" does not exist
psql:/tmp/roundcube_upgrade.sql:79: ERROR:  role "roundcube" does not exist
psql:/tmp/roundcube_upgrade.sql:93: ERROR:  role "roundcube" does not exist
psql:/tmp/roundcube_upgrade.sql:106: ERROR:  role "roundcube" does not exist
psql:/tmp/roundcube_upgrade.sql:121: ERROR:  role "roundcube" does not exist
psql:/tmp/roundcube_upgrade.sql:135: ERROR:  role "roundcube" does not exist
psql:/tmp/roundcube_upgrade.sql:155: ERROR:  role "roundcube" does not exist
psql:/tmp/roundcube_upgrade.sql:169: ERROR:  role "roundcube" does not exist
psql:/tmp/roundcube_upgrade.sql:182: ERROR:  role "roundcube" does not exist
psql:/tmp/roundcube_upgrade.sql:198: ERROR:  role "roundcube" does not exist
psql:/tmp/roundcube_upgrade.sql:212: ERROR:  role "roundcube" does not exist
psql:/tmp/roundcube_upgrade.sql:234: ERROR:  role "roundcube" does not exist
psql:/tmp/roundcube_upgrade.sql:248: ERROR:  role "roundcube" does not exist
psql:/tmp/roundcube_upgrade.sql:263: ERROR:  role "roundcube" does not exist
psql:/tmp/roundcube_upgrade.sql:277: ERROR:  role "roundcube" does not exist
psql:/tmp/roundcube_upgrade.sql:291: ERROR:  role "roundcube" does not exist
psql:/tmp/roundcube_upgrade.sql:303: ERROR:  role "roundcube" does not exist
psql:/tmp/roundcube_upgrade.sql:322: ERROR:  role "roundcube" does not exist
psql:/tmp/roundcube_upgrade.sql:336: ERROR:  role "roundcube" does not exist

Get attributes from the old database

$ sudo -u postgres psql -p 5432 -d roundcube
psql (14.6 (Ubuntu 14.6-0ubuntu0.22.04.1), server 12.12 (Ubuntu 12.12-0ubuntu0.20.04.1))
Type "help" for help.

roundcube=# \du+ roundcube
                  List of roles
 Role name | Attributes | Member of | Description 
-----------+------------+-----------+-------------
 roundcube |            | {}        | 

roundcube=# \q

User is just a fancy name for role; in PostgreSQL, a user is simply a role with the LOGIN privilege.

Add it to the new cluster.

$ sudo -u postgres psql -p 5435
psql (14.6 (Ubuntu 14.6-0ubuntu0.22.04.1))
Type "help" for help.

postgres=# create user roundcube;
CREATE ROLE

Re-apply the ownership statements from the pg_dump SQL. The failing commands are at the line numbers shown above, and we can grep for them:

$ grep -i ' OWNER TO roundcube'  /tmp/roundcube_upgrade.sql 
ALTER TABLE public.cache OWNER TO roundcube;
ALTER TABLE public.cache_index OWNER TO roundcube;
ALTER TABLE public.cache_messages OWNER TO roundcube;
ALTER TABLE public.cache_shared OWNER TO roundcube;
ALTER TABLE public.cache_thread OWNER TO roundcube;
ALTER TABLE public.contactgroupmembers OWNER TO roundcube;
ALTER TABLE public.contactgroups OWNER TO roundcube;
ALTER TABLE public.contactgroups_seq OWNER TO roundcube;
ALTER TABLE public.contacts OWNER TO roundcube;
ALTER TABLE public.contacts_seq OWNER TO roundcube;
ALTER TABLE public.dictionary OWNER TO roundcube;
ALTER TABLE public.filestore OWNER TO roundcube;
ALTER TABLE public.filestore_seq OWNER TO roundcube;
ALTER TABLE public.identities OWNER TO roundcube;
ALTER TABLE public.identities_seq OWNER TO roundcube;
ALTER TABLE public.searches OWNER TO roundcube;
ALTER TABLE public.searches_seq OWNER TO roundcube;
ALTER TABLE public.session OWNER TO roundcube;
ALTER TABLE public.system OWNER TO roundcube;
ALTER TABLE public.users OWNER TO roundcube;
ALTER TABLE public.users_seq OWNER TO roundcube;

Then copy/paste them into the v14 database:

$ sudo -u postgres psql -p 5435 -d roundcube
psql (14.6 (Ubuntu 14.6-0ubuntu0.22.04.1))
Type "help" for help.

roundcube=# 
ALTER TABLE public.cache OWNER TO roundcube;
ALTER TABLE public.cache_index OWNER TO roundcube;
ALTER TABLE public.cache_messages OWNER TO roundcube;
ALTER TABLE public.cache_shared OWNER TO roundcube;
ALTER TABLE public.cache_thread OWNER TO roundcube;
ALTER TABLE public.contactgroupmembers OWNER TO roundcube;
ALTER TABLE public.contactgroups OWNER TO roundcube;
ALTER TABLE public.contactgroups_seq OWNER TO roundcube;
ALTER TABLE public.contacts OWNER TO roundcube;
ALTER TABLE public.contacts_seq OWNER TO roundcube;
ALTER TABLE public.dictionary OWNER TO roundcube;
ALTER TABLE public.filestore OWNER TO roundcube;
ALTER TABLE public.filestore_seq OWNER TO roundcube;
ALTER TABLE public.identities OWNER TO roundcube;
ALTER TABLE public.identities_seq OWNER TO roundcube;
ALTER TABLE public.searches OWNER TO roundcube;
ALTER TABLE public.searches_seq OWNER TO roundcube;
ALTER TABLE public.session OWNER TO roundcube;
ALTER TABLE public.system OWNER TO roundcube;
ALTER TABLE public.users OWNER TO roundcube;
ALTER TABLE public.users_seq OWNER TO roundcube;
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
ALTER TABLE
roundcube=# \q
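
As a shortcut next time, the same grep output can be piped straight into psql instead of copy/pasting, a small sketch using the same ports and file path as the example above:

```shell
# Feed the ownership statements from the dump straight into the v14
# database; the port and file path match the example above.
grep -i ' OWNER TO roundcube' /tmp/roundcube_upgrade.sql | \
  sudo -u postgres psql -p 5435 -d roundcube
```

Each matched ALTER TABLE line is executed in order, producing the same run of ALTER TABLE confirmations shown above.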

Finished!

Of course you need to test the app, but that's another story.

Do the Upgrade (pg_upgrade)

This approach upgrades the whole cluster, taking every database in v12 to v14 in one pass.

The new cluster must not contain user databases with the same name. If you are following along, drop the new roundcube database on v14:

$ sudo -u postgres dropdb -p 5435 roundcube

Check for required loadable libraries first; otherwise pg_upgrade will fail with this error:

Your installation references loadable libraries that are missing from the new installation. You can add these libraries to the new installation, or remove the functions using them from the old installation. A list of problem libraries is in the file: loadable_libraries.txt

Failure, exiting

$ cat loadable_libraries.txt
could not load library "$libdir/plv8-2.3.13": ERROR:  could not access file "$libdir/plv8-2.3.13": No such file or directory
In database: postgres

Fix:

postgres=# drop extension plv8 cascade;
NOTICE:  drop cascades to function plv8_test(text[],text[])
DROP EXTENSION

First gather some information.

Executable directories:

$ pg_config | grep BINDIR
BINDIR = /usr/lib/postgresql/14/bin

$ ls -ld /usr/lib/postgresql/*/bin
drwxr-xr-x 2 root root 4096 Aug 19 06:01 /usr/lib/postgresql/12/bin
drwxr-xr-x 2 root root 4096 Jan 11 18:12 /usr/lib/postgresql/14/bin

Config directories:

$ pg_config|grep SYSCONFDIR
SYSCONFDIR = /etc/postgresql-common

$ ls -ld /etc/postgresql/*/main/
drwxr-xr-x 3 postgres postgres 4096 Jan 30 15:55 /etc/postgresql/12/main/
drwxr-xr-x 3 postgres postgres 4096 Sep 29 15:32 /etc/postgresql/14/main/
  1. First take the application and both database clusters down:

    $ sudo systemctl disable --now apache2
    
    $ sudo systemctl stop postgresql@12-main
    
    $ sudo systemctl stop postgresql@14-main
    
    $ sudo -u postgres pg_lsclusters 
    Ver Cluster Port Status Owner    Data directory              Log file
    12  main    5432 down   postgres /var/lib/postgresql/12/main /var/log/postgresql/postgresql-12-main.log
    14  main    5435 down   postgres /var/lib/postgresql/14/main /var/log/postgresql/postgresql-14-main.log
    
    $ ps -jH -U postgres -u postgres u 
    USER         PID    PGID     SID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
    
  2. pg_upgrade is the animal of choice. Log into a shell as the postgres user and change to a writable directory; pg_upgrade will create a log file (pg_upgrade_internal.log) there.

    $ sudo -u postgres bash
    $ cd ~
    $ pwd
    /var/lib/postgresql
    
    $ id
    uid=136(postgres) gid=143(postgres) groups=143(postgres)
    

    --link will not copy the data files; it just makes hard filesystem links [1] from the existing directory to the new data_directory specified in the /etc/postgresql/*/main/postgresql.conf file. This is useful for very large databases.

    • -b : old PostgreSQL executable directory
    • -B : new PostgreSQL executable directory
    • -d : old database cluster configuration directory
    • -D : new database cluster configuration directory
    $ /usr/lib/postgresql/14/bin/pg_upgrade \
           -b /usr/lib/postgresql/12/bin/  -B /usr/lib/postgresql/14/bin/ \
           -d /etc/postgresql/12/main/     -D /etc/postgresql/14/main/    \
           --link
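
Before running it for real, pg_upgrade's --check flag performs only the consistency checks and modifies neither cluster, so a dry run first costs nothing:

```shell
# Dry run: --check runs only the compatibility checks and touches
# neither cluster; the paths match the real invocation above.
/usr/lib/postgresql/14/bin/pg_upgrade \
       -b /usr/lib/postgresql/12/bin/  -B /usr/lib/postgresql/14/bin/ \
       -d /etc/postgresql/12/main/     -D /etc/postgresql/14/main/    \
       --check
```

If all checks pass, it prints "Clusters are compatible" and exits; problems such as the missing plv8 library above are reported here instead of mid-upgrade.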
    

    Output:

    $ /usr/lib/postgresql/14/bin/pg_upgrade           -b /usr/lib/postgresql/12/bin/  -B /usr/lib/postgresql/14/bin/           -d /etc/postgresql/12/main/     -D /etc/postgresql/14/main/              --link
    Finding the real data directory for the source cluster      ok
    Finding the real data directory for the target cluster      ok
    Performing Consistency Checks
    -----------------------------
    Checking cluster versions                                   ok
    Checking database user is the install user                  ok
    Checking database connection settings                       ok
    Checking for prepared transactions                          ok
    Checking for system-defined composite types in user tables  ok
    Checking for reg* data types in user tables                 ok
    Checking for contrib/isn with bigint-passing mismatch       ok
    Checking for user-defined encoding conversions              ok
    Checking for user-defined postfix operators                 ok
    Checking for incompatible polymorphic functions             ok
    Creating dump of global objects                             ok
    Creating dump of database schemas
                                                                ok
    Checking for presence of required libraries                 ok
    Checking database user is the install user                  ok
    Checking for prepared transactions                          ok
    Checking for new cluster tablespace directories             ok
    
    If pg_upgrade fails after this point, you must re-initdb the
    new cluster before continuing.
    
    Performing Upgrade
    ------------------
    Analyzing all rows in the new cluster                       ok
    Freezing all rows in the new cluster                        ok
    Deleting files from new pg_xact                             ok
    Copying old pg_xact to new server                           ok
    Setting oldest XID for new cluster                          ok
    Setting next transaction ID and epoch for new cluster       ok
    Deleting files from new pg_multixact/offsets                ok
    Copying old pg_multixact/offsets to new server              ok
    Deleting files from new pg_multixact/members                ok
    Copying old pg_multixact/members to new server              ok
    Setting next multixact ID and offset for new cluster        ok
    Resetting WAL archives                                      ok
    Setting frozenxid and minmxid counters in new cluster       ok
    Restoring global objects in the new cluster                 ok
    Restoring database schemas in the new cluster
                                                                ok
    Adding ".old" suffix to old global/pg_control               ok
    
    If you want to start the old cluster, you will need to remove
    the ".old" suffix from /var/lib/postgresql/12/main/global/pg_control.old.
    Because "link" mode was used, the old cluster cannot be safely
    started once the new cluster has been started.
    
    Linking user relation files
                                                                ok
    Setting next OID for new cluster                            ok
    Sync data directory to disk                                 ok
    Creating script to delete old cluster                       ok
    Checking for extension updates                              ok
    
    Upgrade Complete
    ----------------
    Optimizer statistics are not transferred by pg_upgrade.
    Once you start the new server, consider running:
        /usr/lib/postgresql/14/bin/vacuumdb --all --analyze-in-stages
    
    Running this script will delete the old cluster's data files:
        ./delete_old_cluster.sh
    
  3. Now bring up the PostgreSQL 14 service:

    $ sudo systemctl start postgresql@14-main
    $ psql -p 5435
    psql (14.6 (Ubuntu 14.6-0ubuntu0.22.04.1))
    Type "help" for help.
    
    postgres=# \l
                                          List of databases
            Name        |  Owner   | Encoding |   Collate   |    Ctype    |   Access privileges   
    --------------------+----------+----------+-------------+-------------+-----------------------
     contrib_regression | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | 
     owncloud           | mycloud  | UTF8     | en_US.UTF-8 | en_US.UTF-8 | 
     postgres           | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | 
     roundcube          | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | 
     template0          | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | =c/postgres          +
                        |          |          |             |             | postgres=CTc/postgres
     template1          | postgres | UTF8     | en_US.UTF-8 | en_US.UTF-8 | postgres=CTc/postgres+
                        |          |          |             |             | =c/postgres
    (6 rows)
    
    postgres=# \c roundcube
    You are now connected to database "roundcube" as user "postgres".
    
    roundcube=# select * from system;
           name        |   value    
    -------------------+------------
     roundcube-version | 2019092900
    (1 row)
    
    

    Here we see our happy little owncloud, roundcube and contrib_regression databases, migrated from v12 to v14.

Enjoy!

  1. If you use link mode, the upgrade will be much faster (no file copying) and use less disk space, but you will not be able to access your old cluster once you start the new cluster after the upgrade. Link mode also requires that the old and new cluster data directories be in the same file system. (Tablespaces and pg_wal can be on different file systems.) Clone mode provides the same speed and disk space advantages but does not cause the old cluster to be unusable once the new cluster is started. Clone mode also requires that the old and new data directories be in the same file system. This mode is only available on certain operating systems and file systems.
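
The hard links that link mode relies on can be demonstrated with a couple of shell commands (hypothetical temp files, not real cluster data): two directory entries share one inode, so no data blocks are copied.

```shell
# Two names for one inode -- the mechanism behind pg_upgrade --link;
# no file data is duplicated.
tmpdir=$(mktemp -d)
echo "relation data" > "$tmpdir/old_name"
ln "$tmpdir/old_name" "$tmpdir/new_name"            # hard link, not a copy
stat -c '%i' "$tmpdir/old_name" "$tmpdir/new_name"  # same inode number twice
rm -r "$tmpdir"
```

This is also why both names see the same content: start the new cluster and it is writing the very same blocks the old cluster would read, which is why the old cluster cannot be safely started afterwards.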

Remove old postgresql install

Once the new databases have been tested and run for a while, don't forget to clean up.

Follow the pg_upgrade advice:

Once you start the new server, consider running:

/usr/lib/postgresql/14/bin/vacuumdb --all --analyze-in-stages

Running this script will delete the old cluster's data files:

./delete_old_cluster.sh

$ cat delete_old_cluster.sh 
#!/bin/sh

rm -rf '/var/lib/postgresql/12/main'

Then sweep up the physical leftovers.

$ sudo apt-get purge postgresql-12 postgresql-client-12 postgresql-doc-12 postgresql-server-dev-12
$ sudo -u postgres rm -rf /var/run/postgresql/12/
$ sudo -u postgres rm -rf /var/lib/postgresql/12/
$ sudo             rm -rf /usr/lib/postgresql/12/
$ sudo -u postgres rm -rf /etc/postgresql/12/

Finally, check the clusters:

$ pg_lsclusters 
Ver Cluster Port Status Owner    Data directory              Log file
14  main    5435 online postgres /var/lib/postgresql/14/main /var/log/postgresql/postgresql-14-main.log

$ ps -jH -U postgres -u postgres u
USER         PID    PGID     SID %CPU %MEM    VSZ   RSS TTY      STAT START   TIME COMMAND
postgres  718566  718566  718565  0.0  0.0   9612  5128 pts/2    S+   15:20   0:00 bash
postgres  752071  752071  752071  0.0  0.1 218008 29184 ?        Ss   16:25   0:00 /usr/lib/postgresql/14/bin/postgres -D /var/lib/postgresql/14/main -c config_file=
postgres  752073  752073  752073  0.0  0.0 218144 12600 ?        Ss   16:25   0:00   postgres: 14/main: checkpointer 
postgres  752074  752074  752074  0.0  0.0 218008  7004 ?        Ss   16:25   0:00   postgres: 14/main: background writer 
postgres  752075  752075  752075  0.0  0.0 218008 11484 ?        Ss   16:25   0:00   postgres: 14/main: walwriter 
postgres  752076  752076  752076  0.0  0.0 218680  9416 ?        Ss   16:25   0:00   postgres: 14/main: autovacuum launcher 
postgres  752077  752077  752077  0.0  0.0  73024  6960 ?        Ss   16:25   0:00   postgres: 14/main: stats collector 
postgres  752078  752078  752078  0.0  0.0 218444  7588 ?        Ss   16:25   0:00   postgres: 14/main: logical replication launcher 

Continue

Now that you have upgraded to the happy new cluster, consider moving the data files to a disk that better protects your database.

Same time, same channel, on the next episode of Linux in the Home.

Proceed in the order presented; some things depend on prior setups.

Book Last Updated: 29-March-2024

Postgresql Move Database to Another Filesystem


Table of Contents


I mainly used this on an SBC (BeagleBone) computer to move a database from an SD card (/var/lib...) to an SSD disk (/data/lib...). A database will burn up an SD card quickly 😟

List clusters on host

$ pg_lsclusters
Ver Cluster Port Status Owner    Data directory              Log file
11  main    5432 online postgres /var/lib/postgresql/11/main /var/log/postgresql/postgresql-11-main.log

Show Data Directory

$ sudo -u postgres psql
psql (11.7 (Debian 11.7-0+deb10u1))
Type "help" for help.

postgres=# SHOW data_directory;
       data_directory       
-----------------------------
 /var/lib/postgresql/11/main
(1 row)
postgres=# \q

Stop Cluster

If you have multiple versions you may have to use the version-specific name, for example: postgresql@11-main.

$ sudo systemctl stop postgresql
$ sudo systemctl status postgresql
● postgresql.service - PostgreSQL RDBMS
   Loaded: loaded (/lib/systemd/system/postgresql.service; enabled; vendor preset: enabled)
   Active: inactive (dead) since Sat 2020-08-22 13:14:11 EDT; 9s ago
  Process: 23216 ExecStart=/bin/true (code=exited, status=0/SUCCESS)
 Main PID: 23216 (code=exited, status=0/SUCCESS)

Aug 22 10:29:07 app2 systemd[1]: Starting PostgreSQL RDBMS...
Aug 22 10:29:07 app2 systemd[1]: Started PostgreSQL RDBMS.
Aug 22 13:14:11 app2 systemd[1]: postgresql.service: Succeeded.
Aug 22 13:14:11 app2 systemd[1]: Stopped PostgreSQL RDBMS.

Move Database Files

$ sudo mkdir -p /data/lib/postgresql/11/main/
$ sudo chown postgres:postgres /data/lib/postgresql/11/main/
$ sudo rsync -av /var/lib/postgresql /data/lib
sending incremental file list
postgresql/
postgresql/.bash_history
...

postgresql/11/main/pg_wal/000000010000000000000005
postgresql/11/main/pg_wal/000000010000000000000006
postgresql/11/main/pg_wal/archive_status/
postgresql/11/main/pg_xact/
postgresql/11/main/pg_xact/0000

sent 74,675,872 bytes  received 25,503 bytes  8,788,397.06 bytes/sec
total size is 74,581,282  speedup is 1.00
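
Before retiring the old copy, it is worth confirming that the two trees really are identical. A recursive diff (a sketch, using the paths from above, run while the cluster is still stopped) should print nothing:

```shell
# Sanity check: a recursive diff of source and copy produces no output
# and exits 0 if the rsync was complete.
sudo diff -r /var/lib/postgresql /data/lib/postgresql && echo "copy verified"
```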

Save Old Database Directory

$ sudo mv /var/lib/postgresql/11/main /var/lib/postgresql/11/main.bak

Edit Configuration to Point to New Directory

Configurations are in /etc/postgresql/<version>/main/ directory.

$ sudo vi /etc/postgresql/11/main/postgresql.conf

~

data_directory = '/data/lib/postgresql/11/main'        # use data in another directory

~
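
If you prefer a non-interactive edit, a sed one-liner does the same job (a sketch; sed -i.bak keeps a backup of the original file first):

```shell
# Point data_directory at the new location in place; .bak preserves
# the original postgresql.conf.
sudo sed -i.bak \
  "s|^#\?data_directory = .*|data_directory = '/data/lib/postgresql/11/main'|" \
  /etc/postgresql/11/main/postgresql.conf
grep '^data_directory' /etc/postgresql/11/main/postgresql.conf
```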

Double Check New Directory

$ sudo ls /data/lib/postgresql/11/main
base    pg_commit_ts  pg_logical    pg_notify     pg_serial     pg_stat        pg_subtrans  pg_twophase  pg_wal   postgresql.auto.conf
global    pg_dynshmem   pg_multixact  pg_replslot  pg_snapshots  pg_stat_tmp  pg_tblspc     PG_VERSION   pg_xact  postmaster.opts

Start Database Cluster Using New Directory

$ sudo systemctl start postgresql
$ sudo systemctl status postgresql
● postgresql.service - PostgreSQL RDBMS
   Loaded: loaded (/lib/systemd/system/postgresql.service; enabled; vendor preset: enabled)
   Active: active (exited) since Sat 2020-08-22 13:25:15 EDT; 4s ago
  Process: 24284 ExecStart=/bin/true (code=exited, status=0/SUCCESS)
 Main PID: 24284 (code=exited, status=0/SUCCESS)

Aug 22 13:25:15 app2 systemd[1]: Starting PostgreSQL RDBMS...
Aug 22 13:25:15 app2 systemd[1]: Started PostgreSQL RDBMS.

Verify Database

Should show '/data/...'

$ sudo -u postgres psql
psql (11.7 (Debian 11.7-0+deb10u1))
Type "help" for help.

postgres=# SHOW data_directory;
       data_directory       
-----------------------------
 /data/lib/postgresql/11/main
(1 row)
postgres=# \q

Cleanup Old Database Files

Make sure to test things first; then removing the old copy frees up disk space for the new database.

$ sudo rm -Rf /var/lib/postgresql/11/main.bak
$ sudo systemctl restart postgresql
$ sudo systemctl status postgresql
● postgresql.service - PostgreSQL RDBMS
   Loaded: loaded (/lib/systemd/system/postgresql.service; enabled; vendor preset: enabled)
   Active: active (exited) since Sat 2020-08-22 13:28:14 EDT; 7s ago
  Process: 24349 ExecStart=/bin/true (code=exited, status=0/SUCCESS)
 Main PID: 24349 (code=exited, status=0/SUCCESS)

Aug 22 13:28:14 app2 systemd[1]: Starting PostgreSQL RDBMS...
Aug 22 13:28:14 app2 systemd[1]: Started PostgreSQL RDBMS.

Re-Create Cluster

If you make a mistake, the cluster can be deleted and then re-created. No problem.

$ sudo rm -rf /mnt/raid1/postgresql/13
$ sudo -u postgres pg_createcluster -d /mnt/raid1/postgresql 13 main

$ pg_lsclusters 
Ver Cluster Port Status Owner    Data directory        Log file
13  main    5432 down   postgres /mnt/raid1/postgresql /var/log/postgresql/postgresql-13-main.log

$ sudo pg_ctlcluster  13 main start

$ pg_lsclusters 
Ver Cluster Port Status Owner    Data directory        Log file
13  main    5432 online postgres /mnt/raid1/postgresql /var/log/postgresql/postgresql-13-main.log

Continue

Now that you have moved that busy data to a suitable disk, consider setting up a cloud server to use that speedy database.

Join again soon on the next episode of Linux in the Home.

Proceed in the order presented; some things depend on prior setups.

Book Last Updated: 29-March-2024

Cloud


Table of Contents


A cloud used to be something floating in the sky; then it was a bubbly thing on a network diagram where weird things happen. Now a cloud just means "somebody else's computer". Of course you get to it over a network and can share things, like files (pictures, music, documents), contacts, calendars, and chat, among other things.

For me it means I can get to my stuff no matter where I am; and my stuff is private unless I share it. So I keep it in my home, under lock and key. To share I have a few open ports, encrypted network traffic, and some protected disk arrays with timely backups.

If this sounds interesting so far, here are some contenders for the job:

  • SyncThing [1]
  • Warpinator [2]
  • Cryptomator [3]
  • NextCloud [4]
  1. https://syncthing.net/
  2. https://github.com/linuxmint/warpinator
  3. https://cryptomator.org/
  4. https://nextcloud.com/

SyncThing

Syncthing is a continuous file synchronization program. It synchronizes files between two or more computers in real time.

It is a simple install supporting many operating systems [1]. Debian/Ubuntu has a package [2].

The getting started [3] page has very nice instructions.

In my experience it was very easy to install, and syncing was quick and reliable. However, I limited it to syncs between machines on my internal network, so there was no messing with my firewall or port forwarding [4]. If you just want to sync files for a project, picture albums, or a music collection to a backup host in your home, it should work very well. I had no problems.

The web page states that "All communication is secured using TLS." and commercial support is available [5].

  1. https://syncthing.net/downloads/
  2. https://apt.syncthing.net/
  3. https://docs.syncthing.net/intro/getting-started.html
  4. https://docs.syncthing.net/users/firewall.html#firewall-setup
  5. https://www.kastelo.net/stes/

Warpinator

Warpinator is built to share files across the LAN. I have heard several people and magazines praise how well it works. It is built with Python and available on GitHub, supporting several operating systems. On Linux Mint it is a simple apt install, probably because it is published in the linuxmint GitHub repository.

The project supplies firewall instructions, so it should be able to run remotely if desired. It uses a shared code to secure communication. It is unclear to me whether it uses SSL/TLS encryption, and support would be through GitHub.

I have not tried it, but since SSL/TLS is not mentioned, it sounds best suited to syncing things on a local network.

Other platforms include:

Cryptomator

Cryptomator encrypts your data quickly and easily. Afterwards, you upload the protected files to your favorite cloud service.

The workflow here is to take files from one directory and copy them to another, encrypting the new directory in preparation for uploading to a cloud service on someone else's computer.

My experience was quite good with it for saving a copy of my development project off-site to DropBox. I would prepare my files into a DropBox directory with DropBox turned off, then turn DropBox on, watch it sync, then turn DropBox off. I was assured that no one outside my network could read my secret project files. Very nice, cheap and easy.

GitHub reports it is 93% Java, so be aware of that. My usage was all on macOS, so I didn't even notice the Java dependency, but the Linux dependency list includes Oracle JDK 16 [3].

They offer enterprise support [1] and also GitHub [2] bug reporting.

  1. https://cryptomator.org/enterprise/
  2. https://github.com/cryptomator
  3. https://www.oracle.com/java/technologies/javase/products-doc-jdk16certconfig.html

NextCloud

NextCloud allows file storage on another computer, in my house, so that I can easily access those files from almost any other authorized computer, phone, or tablet. It also supports a shared list of Contacts and a Calendar. There is a chat function (Talk) for text, voice, or video messages between two or more computers.

The app store reports many other apps available as well.

My experience started with OwnCloud, from which NextCloud was forked, and it used to have many problems with every new release, especially supporting a PostgreSQL database. After OwnCloud moved to an entirely new infrastructure with no more PHP, I decided to try NextCloud, and I have been happy ever since. NextCloud is very stable, new releases are smooth, and the many apps seem to work just fine.

Being based on PHP, NextCloud runs better on Apache in my opinion, since nginx does not ship with PHP support by default. The PostgreSQL database support has not had any problems, and the whole stack runs on a small BeagleBone AI (32-bit) or AI-64 (64-bit) SBC, so the Raspberry Pi line should also work.

One note: after changing from Debian to RedHat, the PHP support for Apache (httpd on RedHat) uses php-fpm, the PHP FastCGI Process Manager. It seems to work quite well, but be aware there is a systemctl service called php-fpm.

sudo systemctl status php-fpm
● php-fpm.service - The PHP FastCGI Process Manager
     Loaded: loaded (/usr/lib/systemd/system/php-fpm.service; enabled; preset: disabled)
     Active: active (running) since Tue 2023-07-25 22:57:34 EDT; 1 day 9h ago
   Main PID: 940 (php-fpm)
     Status: "Processes active: 0, idle: 13, Requests: 47239, slow: 0, Traffic: 0.2req/sec"
      Tasks: 14 (limit: 48814)
     Memory: 431.1M
        CPU: 43min 15.096s
     CGroup: /system.slice/php-fpm.service
             ├─   940 "php-fpm: master process (/etc/php-fpm.conf)"
             ├─  2642 "php-fpm: pool www"
             ├─  2643 "php-fpm: pool www"
             ├─  2644 "php-fpm: pool www"
             ├─  2645 "php-fpm: pool www"
             ├─  2646 "php-fpm: pool www"
             ├─  2765 "php-fpm: pool www"
             ├─  2767 "php-fpm: pool www"
             ├─  3834 "php-fpm: pool www"
             ├─ 23767 "php-fpm: pool www"
             ├─ 60012 "php-fpm: pool www"
             ├─ 61859 "php-fpm: pool www"
             ├─ 67881 "php-fpm: pool www"
             └─120219 "php-fpm: pool www"

Services

These are services I have/had used without problem:

  • Audio Player for playing music collection, streaming, and playlists
  • Pictures for organizing photographs by year, then topic
  • Contacts for syncing e-mail Thunderbird and iOS
  • Calendar for syncing appointments Thunderbird and iOS
  • Files using NextCloud Clients for Mac, iOS
  • Talk can send messages to a phone from the command line; used for HomeAssistant alerts and run via File: ~/matrix/sendmatrix.sh. Used for monitoring door and window alarms.
IOS App - NextCloud Talk

IMG_84D17B7CC129-1 (2).jpeg

NextCloud Talk uses the Matrix service to detect changes to be sent.

Install Documentation

First, install the database:

https://www.postgresql.org/download/

Then Apache2:

https://httpd.apache.org/docs/current/install.html

Then NextCloud:

https://nextcloud.com/install/#instructions-server

Then the NextCloud apps:

  • contacts
  • calendar
  • talk
  • mattermost

After NextCloud 27, the Talk Mattermost app does not work, but it is not needed to send messages through the command line.

Install Steps

What follows are my steps to install NextCloud the way I want it, mostly manually.

It may not be necessary, but it does provide details, which might otherwise be missed, about how NextCloud functions under the covers. This can also be useful if a problem arises, to debug the issue or at least know where to start.

Nextcloud PHP Install

Download and unzip the NextCloud release. Change the release number to the one you find.

Change:

  • VER : NextCloud version

File: ~/nextcloud-install.sh

## https://docs.nextcloud.com/server/latest/admin_manual/installation/command_line_installation.html
##
VER=nextcloud-24.0.7
##---------------------------------------------------
## Download
curl https://download.nextcloud.com/server/releases/${VER}.zip -o ${VER}.zip
##
##---------------------------------------------------
## Extract
DIR=$(pwd)
sudo mkdir -p /var/www/nextcloud
cd /var/www
sudo unzip ${DIR}/${VER}.zip
sudo chown -R www-data:www-data /var/www/nextcloud/

Package Dependency Install

Now install the Debian/Ubuntu packages.

$ sudo apt-get install zip libapache2-mod-php php-gd php-json php-pgsql php-curl php-mbstring php-intl php-imagick php-xml php-zip php-bcmath php-gmp zip php-apcu
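
After installation you can confirm that the PHP modules NextCloud needs are actually loaded (module names as installed above; the php-pgsql package registers both pgsql and pdo_pgsql):

```shell
# List the loaded PHP modules and pick out the ones NextCloud depends on.
php -m | grep -iE 'pgsql|gd|intl|imagick|zip|bcmath|gmp|apcu'
```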

Create Empty PostgreSQL Database

Here we create an empty database for NextCloud.

Reference: https://docs.nextcloud.com/server/20/admin_manual/configuration_database/linux_database_configuration.html

PostgreSQL PHP configuration

File: /etc/php7/conf.d/pgsql.ini

# configuration for PHP PostgreSQL module
extension=pdo_pgsql.so
extension=pgsql.so

[PostgresSQL]
pgsql.allow_persistent = On
pgsql.auto_reset_persistent = Off
pgsql.max_persistent = -1
pgsql.max_links = -1
pgsql.ignore_notice = 0
pgsql.log_notice = 0
~
:wq

Create nextcloud database

Change:

  • nextclouddb : Your PostgreSQL database owner
$ sudo -u postgres psql -d template1
template1=# 
CREATE USER nextclouddb CREATEDB;
CREATE DATABASE nextcloud OWNER nextclouddb;
\q

PostgreSQL Database Authentication

I recommend using a database password; that's what I do. Change:

  • nextclouddb : PostgreSQL database owner
  • ItsABigPaswordToo : password for the database owner

No Database Password

A NextCloud instance configured with PostgreSQL would contain, as the hostname, the path to the socket on which the database is running; the system username the PHP process is using; an empty password; and the name of the database. The config/config.php created by the Installation wizard would therefore contain entries like this:

File: /var/www/nextcloud/config/config.php

~
  "dbtype"        => "pgsql",
  "dbname"        => "nextcloud",
  "dbuser"        => "nextclouddb",
  "dbpassword"    => "",
  "dbhost"        => "/var/run/postgresql",
  "dbtableprefix" => "oc_",
~
:wq

Note: The host actually points to the socket that is used to connect to the database. Using localhost here will not work if PostgreSQL is configured to use peer authentication. Also note that no password is specified, because this authentication method doesn't use a password.

Database Password

If you use another authentication method (not peer), you need to create a database user and the database itself using the PostgreSQL command line interface. The database tables will be created by Nextcloud when you log in for the first time.

$ sudo -u postgres psql -d postgres
postgres=#
ALTER USER nextclouddb WITH PASSWORD 'ItsABigPaswordToo';
drop database nextcloud;
CREATE DATABASE nextcloud TEMPLATE template0 ENCODING 'UNICODE';
ALTER DATABASE nextcloud OWNER TO nextclouddb;
GRANT ALL PRIVILEGES ON DATABASE nextcloud TO nextclouddb;

SHOW hba_file;
              hba_file               
-------------------------------------
 /etc/postgresql/12/main/pg_hba.conf
(1 row)

\q

Set the PostgreSQL Host Based Authentication (hba) for a database user nextclouddb on database nextcloud

File: /etc/postgresql/12/main/pg_hba.conf

~
# nextcloud
host    nextcloud     nextclouddb    192.168.0.2/32    md5  # or `scram-sha-256` instead of `md5` if you use that
hostssl nextcloud     nextclouddb    192.168.0.2/32    md5  # or `scram-sha-256` instead of `md5` if you use that
~
:wq

A Nextcloud instance configured with PostgreSQL would contain the hostname on which the database is running, a valid username and password to access it, and the name of the database. The config/config.php as created by the Installation wizard would therefore contain entries like this:

File: /var/www/nextcloud/config/config.php

~
  "dbtype"        => "pgsql",
  "dbname"        => "nextcloud",
  "dbuser"        => "nextclouddb",
  "dbpassword"    => "ItsABigPaswordToo",
  "dbhost"        => "localhost",
  "dbtableprefix" => "oc_",
~
:wq

PostgreSQL Database Populate NextCloud Metadata

Next we populate the empty database with NextCloud metadata.

File: ~/nextcloud-db-populate.sh

#!/bin/bash
#####################################################################################
cd /var/www/nextcloud
PASS="ItsABigPaswordToo"
sudo -u www-data php ./occ  maintenance:install --database \
 "pgsql" --database-name "nextcloud"  --database-user "nextclouddb" --database-pass \
 "${PASS}" --admin-user "bigcloud" --admin-pass "${PASS}"

Apache Web Server NextCloud Configuration

You can change the alias /nextcloud to a more interesting name if you like.

File: /etc/apache2/sites-available/nextcloud.conf

Alias /nextcloud "/var/www/nextcloud/"

<VirtualHost *:80>
  DocumentRoot /var/www/html/
  ServerName  www.example.com
</VirtualHost>

<VirtualHost _default_:443>
  DocumentRoot /var/www/html/
  ServerName  www.example.com

  <Directory /var/www/nextcloud/>
    Require all granted
    AllowOverride All
    Options FollowSymLinks MultiViews

    <IfModule mod_dav.c>
      Dav off
    </IfModule>
  </Directory>

# Use HTTP Strict Transport Security to force client to use secure connections only      
Header always set Strict-Transport-Security "max-age=31536000; includeSubDomains;"
SSLEngine on

# certbot
SSLCertificateFile /etc/letsencrypt/live/example.com/fullchain.pem
SSLCertificateKeyFile /etc/letsencrypt/live/example.com/privkey.pem
Include /etc/letsencrypt/options-ssl-apache.conf
</VirtualHost>

Enable nextcloud.conf

$ sudo a2ensite nextcloud

Apache Web Server Locale Settings

Some installs of Apache2 do not enable the locale properly; you can set it like this:

File: /etc/apache2/envvars

~
## Uncomment the following line to use the system default locale instead:
. /etc/default/locale
~

Restart Apache to pick up the changes

$ sudo systemctl restart apache2

NextCloud Trusted Domains and E-Mail Server Connection

Make sure you have trusted_domains set to the NextCloud host, and also the E-Mail host and settings.

File: /var/www/nextcloud/config/config.php

#  edit the "trusted_domains" setting in /var/www/nextcloud/config/config.php
# ...
  array (
    0 => 'localhost',
    1 => '192.168.0.5',
    2 => 'www.example.com',
  ),
~
  'default_phone_region' => 'US',
  'mail_from_address' => 'nextcloud',
  'mail_smtpmode' => 'smtp',
  'mail_sendmailmode' => 'smtp',
  'mail_domain' => 'mail.example.com',
  'mail_smtphost' => '192.168.0.25',
  'mail_smtpport' => '25',
~
:wq

NextCloud PHP Configuration

Enable PHP large memory.

File: /etc/php/7.4/apache2/php.ini

~
;memory_limit = 128M
memory_limit = 512M
~
:wq

NextCloud Redis Caching Configuration

Enable Redis caching.

Reference: https://docs.nextcloud.com/server/22/admin_manual/configuration_server/caching_configuration.html

Install Redis

$ sudo apt-get install redis-server php-redis

Uncomment the Unix socket server.sock in the redis.conf file

File: /etc/redis/redis.conf

~
unixsocket /var/run/redis/redis-server.sock
~
:wq

Allow www-data access to the socket

$ sudo usermod -a -G redis www-data

Add memcache entries in NextCloud config

File: /var/www/nextcloud/config/config.php

~
  'memcache.locking' => '\OC\Memcache\Redis',
  'memcache.local' => '\OC\Memcache\APCu',
  'memcache.distributed' => '\OC\Memcache\Redis',
  'redis' => [
     'host'     => '/run/redis/redis-server.sock',
     'port'     => 0,
     'timeout'  => 1.5,
  ],
~
:wq

Set Redis session handling for Nextcloud

File: /etc/php/7.4/apache2/php.ini

~
redis.session.locking_enabled=1
redis.session.lock_retries=-1
redis.session.lock_wait_time=10000
~
:wq

Move the shared files in NextCloud to NAS

These are the files that sync from your machine to NextCloud, like pictures, documents, music, etc. You will be able to see the files in a normal filesystem; just refrain from making any changes.

Log into PostgreSQL and connect to the nextcloud database.

$ sudo -u postgres psql -d nextcloud

nextcloud=# select * from oc_storages;
 numeric_id |               id                | available | last_checked 
------------+---------------------------------+-----------+--------------
          1 | home::bigcloud                  |         1 |             
          2 | local::/var/www/nextcloud/data/ |         1 |             
(2 rows)

The local::/var/www/nextcloud/data id is the default location for sync data files.

  1. Stop apache and redis
$ sudo systemctl stop apache2
$ sudo systemctl stop redis
  2. Make a directory on the NAS. Log into the NAS as root because you need to change the ownership of the directory. This assumes the NAS mount point is /mnt/vol01/nfs_share
$ sudo mkdir -p /mnt/vol01/nfs_share/nextcloud
$ sudo chown -R www-data:www-data /mnt/vol01/nfs_share/nextcloud
  3. Copy the data on the NextCloud server to the NFS mount. This assumes the NFS mount point is /data.
$ sudo -u www-data rsync -rav /var/www/nextcloud/data/ /data/nextcloud/
  4. Ensure it is owned by www-data:www-data
$ sudo -u www-data ls -lrt /data/nextcloud/
total 51
-rw-rw-r--  1 www-data www-data      0 Sep 22 12:16 index.html
drwxr-xr-x  4 www-data www-data      4 Sep 22 12:36 bigcloud
drwxr-xr-x  3 www-data www-data      3 Sep 22 16:59 __groupfolders
drwxr-xr-x 11 www-data www-data     11 Sep 22 18:16 appdata_oc3hnmjliksp
-rw-r-----  1 www-data www-data 290001 Sep 23 12:26 nextcloud.log
  5. Rename the old data directory
$ sudo mv /var/www/nextcloud/data /var/www/nextcloud/data.old
  6. Update the NextCloud config for the new directory

File: /var/www/nextcloud/config/config.php

~
##     from
##     'datadirectory' => '/var/www/nextcloud/data',
##     to
      'datadirectory' => '/data/nextcloud',
~
  7. Update the database:
$ sudo -u postgres psql -d nextcloud

nextcloud=# select * from oc_storages;
 numeric_id |               id                | available | last_checked 
------------+---------------------------------+-----------+--------------
          1 | home::bigcloud                  |         1 |             
          2 | local::/var/www/nextcloud/data/ |         1 |             
(2 rows)

nextcloud=# update oc_storages set id='local::/data/nextcloud/' where id='local::/var/www/nextcloud/data/';
UPDATE 1
nextcloud=# select * from oc_storages;
 numeric_id |           id            | available | last_checked 
------------+-------------------------+-----------+--------------
          1 | home::bigcloud          |         1 |             
          2 | local::/data/nextcloud/ |         1 |             
(2 rows)
  8. Start redis and apache
$ sudo systemctl start redis
$ sudo systemctl start apache2
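
The UPDATE statement above can be generated from shell variables so the old and new paths are typed only once. A sketch, using the paths assumed in the steps above:

```shell
#!/bin/bash
# Sketch: build the oc_storages UPDATE statement from the old and new
# data directories used above, then feed it to psql.
OLD="/var/www/nextcloud/data"
NEW="/data/nextcloud"
SQL="update oc_storages set id='local::${NEW}/' where id='local::${OLD}/';"
echo "${SQL}"
# To apply it:
#   echo "${SQL}" | sudo -u postgres psql -d nextcloud
```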

Nextcloud Command Line Client

This CLI tool allows syncing files from the local machine to NextCloud.

Install

$ sudo apt-get install nextcloud-desktop-cmd

Usage

$ nextcloudcmd --help
nextcloudcmd - command line Nextcloud client tool

Usage: nextcloudcmd [OPTION] <source_dir> <server_url>

A proxy can either be set manually using --httpproxy.
Otherwise, the setting from a configured sync client will be used.

Options:
  --silent, -s           Don't be so verbose
  --httpproxy [proxy]    Specify a http proxy to use.
                         Proxy is http://server:port
  --trust                Trust the SSL certification.
  --exclude [file]       Exclude list file
  --unsyncedfolders [file]    File containing the list of unsynced remote folders (selective sync)
  --user, -u [name]      Use [name] as the login name
  --password, -p [pass]  Use [pass] as password
  -n                     Use netrc (5) for login
  --non-interactive      Do not block execution with interaction
  --nonshib              Use Non Shibboleth WebDAV authentication
  --davpath [path]       Custom themed dav path, overrides --nonshib
  --max-sync-retries [n] Retries maximum n times (default to 3)
  --uplimit [n]          Limit the upload speed of files to n KB/s
  --downlimit [n]        Limit the download speed of files to n KB/s
  -h                     Sync hidden files, do not ignore them
  --version, -v          Display version and exit
  --logdebug             More verbose logging

To synchronize the Nextcloud directory Music to the local directory media/music, through a proxy listening on port 8080, and on a gateway machine using IP address 192.168.178.1, the command line would be:

$ nextcloudcmd --httpproxy http://192.168.178.1:8080 \
              $HOME/media/music \
              https://server/nextcloud/remote.php/webdav/Music

nextcloudcmd will prompt for the user name and password, unless they have been specified on the command line or -n has been passed.
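
With -n, nextcloudcmd reads credentials from ~/.netrc (see netrc(5)). A minimal entry looks like this; the hostname, login, and password are placeholders:

File: ~/.netrc

```
machine www.example.com
login cloud
password ItsABigPaswordToo
```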

NOTE: The --exclude option does not work (as of Oct-2022). Instead, create or edit a file named .sync-exclude.lst in the top-level directory on the client side. Example:

$ cat cloud/.sync-exclude.lst 
/Pictures/*
Pictures/*
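
As a rough illustration of how those globs match, the two patterns can be approximated with a bash case statement (a sketch only; the sync client's matcher has more rules):

```shell
#!/bin/bash
# Sketch: approximate .sync-exclude.lst glob matching with a bash case
# statement. Illustrative only; the client's matcher has more rules.
matches_exclude() {
  case "$1" in
    /Pictures/*|Pictures/*) return 0 ;;  # the two patterns from the file
    *) return 1 ;;
  esac
}

matches_exclude "Pictures/cat.jpg"   && echo "Pictures/cat.jpg: excluded"
matches_exclude "Documents/note.txt" || echo "Documents/note.txt: synced"
```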

Debug


  • Problem: External storage option for SMB/CIFS.

  • Solution: Enable the 'External Storage' app and install the package

    $ sudo apt-get install smbclient
    

  • Problem: Backup calendar and contacts

  • Solution: Use vdirsyncer (see the Offline Copy of Contacts and Calendar section below)


  • Problem: Contacts and Calendar access from iOS

  • Solution: Create or modify the .htaccess file in the web root directory to perform an HTTP redirect (301)

    File: /var/www/html/.htaccess

    <IfModule mod_rewrite.c>
      RewriteEngine on
      RewriteRule ^\.well-known/carddav   /remote.php/dav [R=301,L]
      RewriteRule ^\.well-known/caldav    /remote.php/dav [R=301,L]
      RewriteRule ^\.well-known/webfinger /index.php/.well-known/webfinger [R=301,L]
      RewriteRule ^\.well-known/nodeinfo  /index.php/.well-known/nodeinfo [R=301,L]
    </IfModule>
    

Talk from Command Line

This is a way to send messages to your phone from the Linux command line. It requires the 'NextCloud Talk' iOS app from the Apple App Store and the 'Mattermost' app from the NextCloud app store.

Talk Mattermost Shell Script

After Nextcloud 27, the Talk Mattermost app no longer works, and it is not needed to send messages through the command line.

File: ~/talk_mattermost.sh

#!/bin/bash
#----------------------------------------------------
# File: talk_mattermost.sh
#
# Usage: talk_mattermost.sh <msg>
#
# Purpose: Send message to nextcloud talk client
#
# Dependencies: 
# Version Nextcloud 26 or below:
#  1) In NextCloud apps, search for Mattermost
# 
#  2) Click install it
# 
#  3) In Settings, enable it
#
# All Versions: 
# 4) Create a dedicated user in Nextcloud; i.e.: robot
#     with a password under "User/Security"
#     (a new account specifically for the bot).
#     It will not relay messages from yourself if you use your own account.
# 
# 5) Create a Nextcloud Talk channel
#     - new conversation (e.g.: robotic) while logged in as new user 'robot'
#     - under ... [options] enable MatterMost -> Nextcloud Talk)
#       with the user for the automatic service
#     - add users to whom the push messages should be sent.
#       (you will see automatic user 'bridge-bot' as a participant)
# 
# 6) Open the channel and *copy the channel id* from the URL
#     (https://<address_to_nextcloud_service>/index.php/call/<channel_id>).
# 
# 7) And now follows the magic PHP-code part, which has to be copied somewhere on the server.
# 
# <?php
# 	function NextcloudTalk_SendMessage($channel_id, $message) {
# 		$SERVER = "https://<address_to_nextcloud_service>";
# 		$USER = "<nextcloud_robotic_user>";
# 		$PASS = "<application_password>";
# 
# 		// notify hack
# 		$data = array(
# 			"token" => $channel_id,
# 			"message" => $message,
# 			"actorDisplayName" => "PYTHON-NOTIFICATION",
# 			"actorType" => "",
# 			"actorId" => "",
# 			"timestamp" => 0,
# 			"messageParameters" => array()
# 		);
# 
# 		$payload = json_encode($data);
# 
# 		$ch = curl_init($SERVER . '/ocs/v2.php/apps/spreed/api/v1/chat/' . $channel_id);
# 
# 		curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, true);
# 		curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
# 		curl_setopt($ch, CURLINFO_HEADER_OUT, true);
# 		curl_setopt($ch, CURLOPT_POST, true);
# 		curl_setopt($ch, CURLOPT_POSTFIELDS, $payload);
# 		curl_setopt($ch, CURLOPT_USERPWD, "$USER:$PASS");
# 		curl_setopt($ch, CURLOPT_HTTPAUTH, CURLAUTH_BASIC);
# 
# 		// Set HTTP Header
# 		curl_setopt($ch, CURLOPT_HTTPHEADER, array(
# 			'Content-Type: application/json',
# 			'Content-Length: ' . strlen($payload),
# 			'Accept: application/json',
# 			'OCS-APIRequest: true')
# 		);
# 
# 		$result = curl_exec($ch);
# 		curl_close($ch);
# 
# 	}
# 
# 	$token = $argv[1];
# 	$message = $argv[2];
# 
# 	NextcloudTalk_SendMessage($token, $message);
# ?>
# 
# 8) Test service from command line
# php <path_to_file>/nextcloudmessage.php <channel_id> <message>
# 
# Reference: https://www.developercookies.net/push-notification-service-with-nextcloud-talk/
#            https://github.com/42wim/matterbridge#configuration
#
# Date     Author     Description
# ----     ------     -----------
# Dec-2021 Don Cohoon Created
# Jun-2023 Don Cohoon Talk Mattermost app not available/needed on Nextcloud v27
#----------------------------------------------------
CHANNEL_ID=***********
/usr/bin/php /data/talk_mattermost.php ${CHANNEL_ID} "${@}"

Talk Mattermost PHP Script

<?php
	function NextcloudTalk_SendMessage($channel_id, $message) {
		$SERVER = "https://www.example.com/";
		$USER = "robot";
		$PASS = "*************";

		// notify hack
		$data = array(
			"token" => $channel_id,
			"message" => $message,
			"actorDisplayName" => "PYTHON-NOTIFICATION",
			"actorType" => "",
			"actorId" => "",
			"timestamp" => 0,
			"messageParameters" => array()
		);

		$payload = json_encode($data);

		$ch = curl_init($SERVER . '/ocs/v2.php/apps/spreed/api/v1/chat/' . $channel_id);

		curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, true);
		curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
		curl_setopt($ch, CURLINFO_HEADER_OUT, true);
		curl_setopt($ch, CURLOPT_POST, true);
		curl_setopt($ch, CURLOPT_POSTFIELDS, $payload);
		curl_setopt($ch, CURLOPT_USERPWD, "$USER:$PASS");
		curl_setopt($ch, CURLOPT_HTTPAUTH, CURLAUTH_BASIC);

		// Set HTTP Header
		curl_setopt($ch, CURLOPT_HTTPHEADER, array(
			'Content-Type: application/json',
			'Content-Length: ' . strlen($payload),
			'Accept: application/json',
			'OCS-APIRequest: true')
		);

		$result = curl_exec($ch);
		curl_close($ch);

	}

	$token = $argv[1];
	$message = $argv[2];

	NextcloudTalk_SendMessage($token, $message);
?>
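
The same OCS call can be made directly with curl, which is handy for testing. This is a sketch: the server, credentials, and channel id are placeholders, and it only prints the curl command rather than executing it (drop the echo to run it live):

```shell
#!/bin/bash
# Sketch: the Talk chat API call with curl instead of PHP. SERVER,
# USER, PASS and CHANNEL_ID are placeholders; 'echo' prints the
# command instead of running it.
SERVER="https://www.example.com"
USER="robot"
PASS="secret"
CHANNEL_ID="abc123"
MSG="Hello from the shell"

# Build the minimal JSON body the API expects: token and message.
payload=$(printf '{"token":"%s","message":"%s"}' "${CHANNEL_ID}" "${MSG}")

echo curl -u "${USER}:${PASS}" \
     -H "Content-Type: application/json" \
     -H "OCS-APIRequest: true" \
     -d "${payload}" \
     "${SERVER}/ocs/v2.php/apps/spreed/api/v1/chat/${CHANNEL_ID}"
```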

Federated Sharing Between Nextcloud Servers

If you have a friend using NextCloud and want to share data between you and them, or you have an organization with several instances of NextCloud, it is possible to let them sync between each other. This is called Federation.

The main requirement is that both hosts must have https (SSL/TLS) enabled on their web servers with valid certificates.

In both Nextcloud hosts

  1. Enable the Federation App in NextCloud app admin screen.

  2. Update NextCloud configs, trusted_domains and add the allow_local_remote_servers if both hosts are on the same domain.

File: /var/www/nextcloud/config/config.php

'trusted_domains' =>
  array (
    0 => 'localhost',
    1 => '192.168.0.5',
    2 => 'one.example.com',
    3 => 'two.example.com',
    4 => 'www.example.com',
    5 => 'example.com',
  ),
~

'allow_local_remote_servers' => true,

~
  3. In the NextCloud Settings screen: set the global sharing checkboxes to send and receive remote shares

In Sending Nextcloud

In NextCloud file screen: Share file/folder using

<login>@<server>/<URI>

For example, to share from host www with user cloud on host one, enter this as the 'share with' name on the Files screen:

cloud@one.example.com
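
The address format can be pulled apart with plain shell parameter expansion, for instance in a script that generates share invitations. A sketch using the example address above:

```shell
#!/bin/bash
# Sketch: split a federated share address <login>@<server> into parts.
share="cloud@one.example.com"
login="${share%%@*}"   # strip the longest '@*' suffix -> the login
server="${share#*@}"   # strip the shortest '*@' prefix -> the server
echo "login=${login} server=${server}"
```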

In Receiving Nextcloud

Check NextCloud on the receiving host. You should see a share alert pop up; you then NEED to press the ACCEPT button.

Reference:

Offline Copy of Contacts and Calendar

This is a solution for backing up Contacts and Calendars from the NextCloud database to a flat file. The file can then be used to restore back to the database or to transfer to another NextCloud instance. Additionally, it can be used to access Contacts and Calendars from the Linux command line.

Install vdirsyncer package

$ sudo apt-get install vdirsyncer

Configure vdirsyncer

File: /data/vcard/vdirsyncer-nextcloud.conf

[general]
status_path = "~/.vdirsyncer/status/"

[pair contacts_nextcloud_to_local]
a = "my_contacts_local"
b = "my_contacts_nextcloud"
collections = ["from a", "from b"]

[storage my_contacts_local]
type = "filesystem"
path = "~/.contacts/"
fileext = ".vcf"

[storage my_contacts_nextcloud]
type = "carddav"

url = "https://www.example.com/"
username = "cloud"
password = "*************"

[pair cal_nextcloud_to_local]
a = "my_cal_local"
b = "my_cal_nextcloud"
collections = ["from a", "from b"]

[storage my_cal_local]
type = "filesystem"
path = "~/.calendars/"
fileext = ".ics"

[storage my_cal_nextcloud]
type = "caldav"

url = "https://www.example.com/"
username = "cloud"
password = "**********"

Run discovery to populate the sync directories and configuration in your $HOME directory

$ vdirsyncer -c /data/vcard/vdirsyncer-nextcloud.conf discover

Create a script to run it. Make sure you created the .htaccess file above.

File:/data/vcard/vdirsyncer.sh

#!/bin/bash
# sudo apt-get install vdirsyncer
#
# One-time only
#vdirsyncer -c vdirsyncer-nextcloud.conf discover
#
# NOTE: Need to add .htaccess to /var/www/html
#  Ref: https://docs.nextcloud.com/server/23/admin_manual/issues/general_troubleshooting.html#service-discovery
#
DIR=/data/vcard
vdirsyncer -c ${DIR}/vdirsyncer-nextcloud.conf sync 2>&1 | mail -s vdirsync mail@example.com

Schedule vdirsyncer

Ensure it is executable:

$ chmod 755 /data/vcard/vdirsyncer.sh

Schedule vdirsyncer via /etc/cron.daily to back up contacts and calendars.

File: /etc/cron.daily/vdirsyncer

#!/bin/bash
sudo -u bob /data/vcard/vdirsyncer.sh

Contacts from the Command Line

The khard command line tool searches and displays your NextCloud contacts, using the local vdirsyncer files.

Reference: https://khard.readthedocs.io/en/latest/

Install

$ sudo apt-get install khard

Configure

First create a configuration file.

File: /data/vcard/khard.conf


# example configuration file for khard version > 0.14.0
# place it under ~/.config/khard/khard.conf
# This file is parsed by the configobj library.  The syntax is described at
# https://configobj.readthedocs.io/en/latest/configobj.html#the-config-file-format

[addressbooks]
[[family]]
path = ~/.contacts/family/
[[friends]]
path = ~/.contacts/friends/

[general]
debug = no
default_action = list
# These are either strings or comma separated lists
editor = vim, -i, NONE
merge_editor = vimdiff

[contact table]
# display names by first or last name: first_name / last_name / formatted_name
display = first_name
# group by address book: yes / no
group_by_addressbook = no
# reverse table ordering: yes / no
reverse = no
# append nicknames to name column: yes / no
show_nicknames = no
# show uid table column: yes / no
show_uids = yes
# sort by first or last name: first_name / last_name / formatted_name
sort = last_name
# localize dates: yes / no
localize_dates = yes
# set a comma separated list of preferred phone number types in descending priority
# or nothing for non-filtered alphabetical order
preferred_phone_number_type = pref, cell, home
# set a comma separated list of preferred email address types in descending priority
# or nothing for non-filtered alphabetical order
preferred_email_address_type = pref, work, home

[vcard]
# extend contacts with your own private objects
# these objects are stored with a leading "X-" before the object name in the vcard files
# every object label may only contain letters, digits and the - character
# example:
#   private_objects = Jabber, Skype, Twitter
# default: ,  (the empty list)
private_objects = Jabber, Skype, Twitter
# preferred vcard version: 3.0 / 4.0
preferred_version = 3.0
# Look into source vcf files to speed up search queries: yes / no
search_in_source_files = no
# skip unparsable vcard files: yes / no
skip_unparsable = no

Next create a script to run it, pointing to the configuration file and the 'show' verb.

File: ~/khard.sh

#!/bin/bash
# sudo apt-get install khard
#
# Copy khard.conf to ~/.config/khard/khard.conf 
#
# show : allows selection for details
# list : just shows listing then exit
#
khard -c khard.conf show ${1}

echo ""

# Detailed
#khard -c khard.conf show ${1} --format yaml

# dump and exit
#khard -c khard.conf list ${1}

# use contact.yaml as template
#khard -c khard.conf new -i contact.yaml
#khard -c khard.conf edit -i contact.yaml

Run it

Now see it in action.

Sample run, searching for string match

$ ./khard.sh picard
Select contact for Show action
Address book: All
Index    Name                     Phone                                E-Mail    UID   
1        Dr. Picard, Jeffery    HOME,VOICE: 999-555-1212                       57    
2        Picard                 WORK, VOICE, pref: (999) 555-1212              7A    
Enter Index (q to quit): q
Canceled

Calendars from the Command Line

khal displays NextCloud calendar entries on the command line, using the local vdirsyncer files.

Reference: https://khal.readthedocs.io/en/latest/

Install

$ sudo apt-get install khal

Configure

First create a configuration file that matches your NextCloud calendar names. This example has 2 calendars, personal and bills.

File: /data/vcard/khal.conf

[calendars]

  [[home]]
    path = ~/.calendars/personal/
    color = dark cyan
    priority = 20

  [[bills]]
    path = ~/.calendars/bills/
    color = dark red
    readonly = True

[locale]
local_timezone = America/New_York
default_timezone = America/New_York

# If you use certain characters (e.g. commas) in these formats you may need to
# enclose them in "" to ensure that they are loaded as strings.
timeformat = %H:%M
dateformat = %d-%b-
longdateformat = %d-%b-%Y
datetimeformat =  %d-%b- %H:%M
longdatetimeformat = %d-%b-%Y %H:%M

firstweekday = 0
#monthdisplay = firstday

[default]
default_calendar = home
timedelta = 7d # the default timedelta that list uses
highlight_event_days = True  # the default is False

Next create a script to run it, pointing to the configuration file.

File: ~/khal.sh

#!/bin/bash
khal -c /data/vcard/khal.conf calendar

Run it

Now run the command line calendar. It will use the latest local vdirsyncer files.

$ ./khal.sh 
    Mo Tu We Th Fr Sa Su     Today, 03-Jan-2023
Jan 26 27 28 29 30 31  1     21:00-22:00 Gas Bill ⟳
     2  3  4  5  6  7  8     Monday, 09-Jan-2023
     9 10 11 12 13 14 15     18:30-19:30 Trash pickup tomorrow  ⟳
    16 17 18 19 20 21 22     
    23 24 25 26 27 28 29     
Feb 30 31  1  2  3  4  5     
     6  7  8  9 10 11 12     
    13 14 15 16 17 18 19     
    20 21 22 23 24 25 26     
Mar 27 28  1  2  3  4  5     
     6  7  8  9 10 11 12     
    13 14 15 16 17 18 19     
    20 21 22 23 24 25 26     
Apr 27 28 29 30 31  1  2   

Apache - Block Malicious Hosts

With NextCloud comes the Apache web server, and with that come strangers knocking on your door. I welcome my friends, family, neighbors and people who just want to look at my front garden. I also limit who has access to my back garden and especially indoors.

With that, here is a way to post a bouncer at your gate. Obtain the IP addresses from your logwatch and logcheck scripts as well as from /var/log/apache2/error.log. They usually show up as some kind of malformed URL request with strange directory patterns.

Examples of malformed URLs:

/.DS_Store
/.env
/debug/default/view?panel=config
/ecp/Current/exporttool/microsoft.exchange.ediscovery.exporttool.application
/telescope/requests
/s/633323e2431313e2535323e26393/_/;/META-INF/maven/com.atlassian.jira/jira-webapp-dist/pom.properties
/?rest_route=/wp/v2/users/
/server-status
/.git/config
/.vscode/sftp.json
/info.php
/login.action
/config.json
/v2/_catalog
/api/search?folderIds=0
/about
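
A quick way to harvest the offending client addresses is to match those request paths against the access log. A sketch using hypothetical combined-format log lines (in practice, run the same awk over /var/log/apache2/access.log):

```shell
#!/bin/bash
# Sketch: pull unique client IPs for known-bad request paths out of an
# Apache combined-format access log. The sample lines are hypothetical.
log='203.0.113.7 - - [01/Jan/2024:00:00:01 +0000] "GET /.env HTTP/1.1" 404 196 "-" "curl"
198.51.100.9 - - [01/Jan/2024:00:00:02 +0000] "GET /index.php HTTP/1.1" 200 512 "-" "Mozilla"
203.0.113.7 - - [01/Jan/2024:00:00:03 +0000] "GET /.git/config HTTP/1.1" 404 196 "-" "curl"'

# Field 7 of the combined log format is the request path; the regex
# holds a few patterns from the list above -- extend as probes appear.
hits=$(printf '%s\n' "${log}" \
  | awk '$7 ~ /^\/\.(env|git\/config|DS_Store)$|^\/server-status$/ {print $1}' \
  | sort -u)
echo "${hits}"
```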

Script

The asn script from nitefood on GitHub provides Autonomous System Number (ASN) lookups, which are a better indicator of all of an organization's IP ranges than the standard whois <ip>, so the firewall.sh script uses the resulting CIDR ranges as blocks in UFW.

In order to use the IPQualityScore API for in-depth threat reporting, it's necessary to sign up for their service (it's free) and get an API token (it will be emailed to you on sign-up), which will entitle you to 5000 free lookups per month.

Reference:

This script will block the IP address range of an organization. Typically when one IP is hacking into systems, others within that domain's range will be at it too.

File: ~/linux/firewall.sh

#!/bin/bash
####################################################################
#
# File: firewall.sh
#
# Purpose: get CIDR of IP and block using ufw
#
# Dependencies:
#  sudo apt-get install curl whois bind9-host mtr-tiny jq ipcalc grepcidr nmap ncat aha 
#
#  git clone https://github.com/nitefood/asn 
#
#  Be sure to get an ~/.asn/iqs_token
#  from https://github.com/nitefood/asn#ip-reputation-api-token
#
####################################################################
DIR=/root
LOG=${DIR}/firewall.log
WHO=/tmp/whois.txt
CIDR=/tmp/whois.cidr
IP="${1}"
#
function run_asn() {
  ${DIR}/asn/asn -n ${IP} > ${WHO}
  /usr/bin/cat ${WHO}
  RANGE=$(/usr/bin/cat ${WHO} | /usr/bin/grep 'NET' | /usr/bin/grep '/' | /usr/bin/awk -Fm '{print $6}' | /usr/bin/cut -d" " -f1)
  echo "CDR: ${RANGE}"
  echo "${RANGE}" > ${CIDR}
}
#
if [ ${1} ]; then
  run_asn
else
  echo "Usage: ${0} <IP Address>"
  exit 1
fi
#
/usr/bin/grep -v deaggregate ${CIDR} > ${CIDR}.block
while read -r IP
  do
    echo "Blocking: ${IP}" | tee -a ${LOG}
    sudo /usr/sbin/ufw prepend deny from ${IP} to any 2>&1 |tee -a $LOG
  done < ${CIDR}.block

Make it executable

$ chmod 755 ~/linux/firewall.sh

Usage

$ ~/linux/firewall.sh 43.129.97.125
                                                                                                                                                           
╭──────────────────────────────╮
│ ASN lookup for 43.129.97.125 │
╰──────────────────────────────╯

 43.129.97.125 ┌PTR -
               ├ASN 132203 (TENCENT-NET-AP-CN Tencent Building, Kejizhongyi Avenue, CN)
               ├ORG 6 COLLYER QUAY
               ├NET 43.129.64.0/18 (ACEVILLEPTELTD-SG)
               ├ABU -
               ├ROA ✓ UNKNOWN (no ROAs found)
               ├TYP  Hosting/DC 
               ├GEO Hong Kong, Central and Western District (HK)
               └REP ✓ NONE  SEEN SCANNING 


CDR: 43.129.64.0/18
Rule inserted
43.129.64.0/18 Blocked
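
Before trusting a block, you can sanity-check that the CIDR reported by asn actually covers the offending IP. A pure-bash containment check (a sketch; no external tools needed):

```shell
#!/bin/bash
# Sketch: pure-bash check that an IP address falls inside a CIDR block,
# to confirm a range reported by asn really covers the offending IP.
ip_to_int() {
  local IFS=. a b c d
  read -r a b c d <<< "$1"
  echo $(( (a << 24) + (b << 16) + (c << 8) + d ))
}

in_cidr() {  # usage: in_cidr <ip> <network/prefix>
  local net=${2%/*} bits=${2#*/}
  local mask=$(( (0xFFFFFFFF << (32 - bits)) & 0xFFFFFFFF ))
  [ $(( $(ip_to_int "$1") & mask )) -eq $(( $(ip_to_int "$net") & mask )) ]
}

in_cidr 43.129.97.125 43.129.64.0/18 && echo "43.129.97.125 is covered"
in_cidr 8.8.8.8 43.129.64.0/18 || echo "8.8.8.8 is not covered"
```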

Upgrade Nextcloud

Download the latest nextcloud community server software.

$ curl https://download.nextcloud.com/server/releases/latest.zip -o nextcloud.zip

Upgrade manually

If you upgrade from a previous major version, please see the critical changes first.

Reference: https://docs.nextcloud.com/server/stable/admin_manual/release_notes/index.html#critical-changes

Always start by making a fresh backup and disabling all 3rd party apps.

  • Back up your existing Nextcloud Server database, data directory, and config.php file. (See Backup, for restore information see Restoring backup)

  • Download and unpack the latest Nextcloud Server release (Archive file) from nextcloud.com/install/ into an empty directory outside of your current installation.

$ unzip nextcloud-[version].zip 
# -or- 
$ tar -xjf nextcloud-[version].tar.bz2
  • Stop your Web server.

  • In case you are running a cron job for Nextcloud's housekeeping, disable it by commenting out the entry in the crontab file

# Debian
$ sudo crontab -u www-data -e
# RedHat
$ sudo crontab -u apache -e

(Put a # at the beginning of the corresponding line.)

  • Rename your current Nextcloud directory, for example nextcloud-old.

  • Unpacking the new archive creates a new nextcloud directory populated with your new server files. Move this directory and its contents to the original location of your old server. For example /var/www/, so that once again you have /var/www/nextcloud.

  • Copy the config/config.php file from your old Nextcloud directory to your new Nextcloud directory.

If you keep your data/ directory in your nextcloud/ directory, copy it from your old version of Nextcloud to your new nextcloud/. If you keep it outside of nextcloud/ then you don’t have to do anything with it, because its location is configured in your original config.php, and none of the upgrade steps touch it.

If you are using 3rd party applications, they may not always be available in your upgraded/new Nextcloud instance. To check this, compare a list of the apps in the new nextcloud/apps/ folder to a list of the apps in your backed-up/old nextcloud/apps/ folder. If you find 3rd party apps in the old folder that need to be in the new/upgraded instance, simply copy them over and ensure the permissions are set up as shown below.

If you have additional apps folders like for example nextcloud/apps-extras or nextcloud/apps-external, make sure to also transfer/keep these in the upgraded folder.

If you are using a 3rd party theme, make sure to copy it from your themes/ directory to your new one. It is possible you will have to make some modifications to it after the upgrade.

  • Adjust file ownership and permissions:
$ cd /var/www
# Debian
$ sudo chown -R www-data:www-data nextcloud
# RedHat
$ sudo chown -R apache:apache nextcloud
# Both
$ sudo find nextcloud/ -type d -exec chmod 750 {} \;
$ sudo find nextcloud/ -type f -exec chmod 640 {} \;
  • Restart your Web server.

  • Now launch the upgrade from the command line using occ:

$ cd  /var/www/nextcloud/
# Debian
$ sudo -u www-data php /var/www/nextcloud/occ upgrade
# RedHat
$ sudo -u apache php /var/www/nextcloud/occ upgrade

This MUST be executed from within your nextcloud installation directory

The upgrade operation takes a few minutes to a few hours, depending on the size of your installation. When it is finished you will see a success message, or an error message that will tell where it went wrong.

  • Re-enable the nextcloud cron-job. (See step 4 above.)
# Debian
$ sudo crontab -u www-data -e
# RedHat
$ sudo crontab -u apache -e

(Delete the # at the beginning of the corresponding line in the crontab file.)

  • Login and take a look at the bottom of your Admin page to verify the version number. Check your other settings to make sure they’re correct. Go to the Apps page and review the core apps to make sure the right ones are enabled. Re-enable your third-party apps.

Reference: https://docs.nextcloud.com/server/stable/admin_manual/maintenance/manual_upgrade.html

Continue

Now that you have Cloud set up, consider some Home Automation, like door and window security, lights and more.

Proceed in the order presented; some things depend on prior setups.

Book Last Updated: 29-March-2024

Home Assistant

This is a web-based interface for automating things around the home. It can monitor doors, windows, and motion detectors, and turn electrical things on and off.

The server software is in two pieces, Z-Wave JS and HASS. Z-Wave JS (Node.js) reads a USB stick and manages the connection and commands to various Z-Wave compatible wireless devices. HASS (Python) displays the devices, and automates monitoring and reacting to them, by communicating with Z-Wave JS over the WebSocket service (ws://localhost:3000). I install and run the docker images of these packages.

graph TD;
        Home-Assist<-- ws -->Z-Wave-JS-UI;
        Home-Assist<-- ip -->Camera;
        Home-Assist<-- ip -->TV;
        Home-Assist<-- ip -->Receiver;
        Z-Wave-JS-UI<-->USB-Stick;
        USB-Stick<-. zw .->Window-Alarm;
        USB-Stick<-. zw .->Door-Alarm;
        USB-Stick<-. zw .->Thermostat;
        USB-Stick<-. zw .->Light-Switch;
        USB-Stick<-. zw .->Motion-Detector;
  • ws: WebSocket
  • ip: TCP/IP
  • zw: Z-Wave

Reference:

HomeAssistant.png

Plug in Zooz S2 stick 700

This is a USB radio for wireless communication to the Z-Wave family of home automation devices.

Like:

  • light switch
  • door alarm
  • window alarm
  • motion detector
  • thermostat

Model (US): ZST10-700; US Frequency Band: 908.42 MHz; Z-Wave Plus: USB-A [1]

Find the name of the device for the USB Stick. Here it is '/dev/ttyUSB0'.

# from dmesg
[1464395.479270] usb 1-2: new full-speed USB device number 5 using xhci_hcd
[1464395.630085] usb 1-2: New USB device found, idVendor=10c4, idProduct=ea60, bcdDevice= 1.00
[1464395.630087] usb 1-2: New USB device strings: Mfr=1, Product=2, SerialNumber=3
[1464395.630088] usb 1-2: Product: CP2102N USB to UART Bridge Controller
[1464395.630090] usb 1-2: Manufacturer: Silicon Labs
[1464395.630091] usb 1-2: SerialNumber: f85326c6843ee812862437bcf28b3e41
[1464395.666277] usbcore: registered new interface driver usbserial_generic
[1464395.666283] usbserial: USB Serial support registered for generic
[1464395.667654] usbcore: registered new interface driver cp210x
[1464395.667665] usbserial: USB Serial support registered for cp210x
[1464395.667689] cp210x 1-2:1.0: cp210x converter detected
[1464395.670018] usb 1-2: cp210x converter now attached to ttyUSB0
$ lsusb
Bus 002 Device 001: ID 1d6b:0003 Linux Foundation 3.0 root hub
Bus 001 Device 004: ID 8087:0a2b Intel Corp. 
Bus 001 Device 003: ID 046d:c016 Logitech, Inc. Optical Wheel Mouse
Bus 001 Device 005: ID 10c4:ea60 Silicon Labs CP210x UART Bridge <--- this is it
Bus 001 Device 001: ID 1d6b:0002 Linux Foundation 2.0 root hub
$ ls -l /dev/ttyUSB0 
crw-rw---- 1 root dialout 188, 0 Aug 11 08:22 /dev/ttyUSB0
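The /dev/ttyUSBn number can change across reboots or replugs. As an optional convenience (not part of the original setup), a udev rule can pin a stable symlink for the stick, matching the idVendor/idProduct values from the lsusb output above; the /dev/zwave name is just an example:

File: /etc/udev/rules.d/99-zwave.rules

```
SUBSYSTEM=="tty", ATTRS{idVendor}=="10c4", ATTRS{idProduct}=="ea60", SYMLINK+="zwave"
```

After running sudo udevadm control --reload-rules and replugging the stick, /dev/zwave will follow the device wherever it lands, and can be used in the docker --device options below instead of /dev/ttyUSB0.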

Reference:

  1. https://www.thesmartesthouse.com/products/zooz-usb-700-series-z-wave-plus-s2-stick-zst10-700

Install Software

Option (1): TrueNAS App

If you already have TrueNAS installed, this is a much easier option. I run HA on TrueNAS Scale (Debian Linux) now. If your TrueNAS never moves from the HA device location, this makes even more sense than, say, a laptop.

(1) App for Home-Assistant

Home-Assistant is software to show and control your devices, and perform automation.

  • Using the TrueNAS web interface, select Apps.

  • In the Tab, select Available Applications.

  • Select the home-assistant app. Then select:

    • timezone
    • under Storage, check Host Path and navigate to a spot on your NAS filesystem where you want your configuration.yaml; the input field is called Host Path for Home Assistant Configuration Storage Volume, EX: /mnt/vol042/ha/hass
    • it should pick an open port for you, larger than 9001
    • save, and navigate your browser to the Home-Assistant console: <http://localhost:<port>/control-panel> and create a new login.

The TrueNAS App puts event and statistics data in a PostgreSQL database [1], instead of SQLite (the default).

Access the database from the App menu (3 dots), then Shell.

# psql  -U postgres -d homeassistance
psql (13.1 (Debian 13.1-1.pgdg100+1))
Type "help" for help.

homeassistance=# \dt
                 List of relations
 Schema |         Name          | Type  |  Owner   
--------+-----------------------+-------+----------
 public | event_data            | table | postgres
 public | event_types           | table | postgres
 public | events                | table | postgres
 public | recorder_runs         | table | postgres
 public | schema_changes        | table | postgres
 public | state_attributes      | table | postgres
 public | states                | table | postgres
 public | states_meta           | table | postgres
 public | statistics            | table | postgres
 public | statistics_meta       | table | postgres
 public | statistics_runs       | table | postgres
 public | statistics_short_term | table | postgres
(12 rows)

homeassistance=# select count(*) from events;
 count 
-------
  4470
(1 row)

homeassistance=# \q
# exit

Your Home-Assistant install will look for a configuration file in the docker /config directory. You can find it here [2]:

$ sudo docker ps --filter="name=home-assistant"|head -2
CONTAINER ID   IMAGE                          COMMAND                  CREATED        STATUS        PORTS     NAMES
b87237af7a25   homeassistant/home-assistant   "/init"                  11 hours ago   Up 11 hours             k8s_home-assistant_home-assistant-d88b55479-rsgpb_ix-home-assistant_11a62bf0-b8e2-4108-a597-1a6c448dd01d_0

# Use the NAMES and grep for config
$ sudo docker container inspect   k8s_home-assistant_home-assistant-d88b55479-rsgpb_ix-home-assistant_11a62bf0-b8e2-4108-a597-1a6c448dd01d_0 |grep config
                "/mnt/vol042/ha/hass:/config",
                "Destination": "/config",
            "WorkingDir": "/config",

The port number is here

$ sudo docker container inspect   k8s_home-assistant_home-assistant-d88b55479-rsgpb_ix-home-assistant_11a62bf0-b8e2-4108-a597-1a6c448dd01d_0 |grep HOME_ASSISTANT_PORT=
                "HOME_ASSISTANT_PORT=tcp://172.17.216.154:22401",

Ignore the 172... IP address; it is only for internal docker use. A bridge network makes the service available on the host's public IP address. This example's port number is 22401, so I would put http://localhost:22401/ into my browser. You may need to substitute your public IP address for localhost.

Reference:

  1. https://www.home-assistant.io/integrations/recorder/
  2. https://docs.docker.com/engine/reference/commandline/ps/

(1) Daemon to run ZWave-JS-UI

ZWave-JS-UI is a node application to interface with the USB stick and Home Assistant.

  • Move to the directory one level above your Host Path designated earlier for Home-Assistant (EX: /mnt/vol042/ha), and install ZWave-JS-UI [1]:
$ mkdir zwave-js-ui
$ cd zwave-js-ui
# download latest version
$ curl -s https://api.github.com/repos/zwave-js/zwave-js-ui/releases/latest  \
| grep "browser_download_url.*zip" \
| cut -d : -f 2,3 \
| tr -d \" \
| wget -i -

unzip zwave-js-ui-v*.zip
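The grep/cut/tr stages in the download pipeline above just peel the download URL out of the GitHub API's JSON before handing it to wget. Here is the same extraction run against a canned sample line (the version number in the sample is made up):

```shell
#!/bin/bash
# Demonstrate the URL-extraction stages on one sample GitHub API JSON line.
sample='    "browser_download_url": "https://github.com/zwave-js/zwave-js-ui/releases/download/v9.0.0/zwave-js-ui-v9.0.0.zip"'
echo "$sample" \
| grep "browser_download_url.*zip" \
| cut -d : -f 2,3 \
| tr -d \"
# prints the bare URL (a leading space remains from the JSON formatting)
```

cut -d : -f 2,3 keeps the value side of the "key": "url" pair, rejoining the https: scheme that the colon split apart, and tr -d \" strips the quotes.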

Put zwave-run.sh in TrueNAS > System > Advanced > Init/Shutdown Scripts ... with POSTINIT

File: zwave-run.sh

#!/bin/bash
DIR=/mnt/vol042/ha
LOG=zwave-run.log
#
cd ${DIR}
nohup sudo ${DIR}/zwave-js-ui-linux >> ${DIR}/${LOG} 2>&1 &

Navigate your browser to the ZWave-JS-UI console at http://localhost:8091/control-panel and perform the ZWave-JS-UI setup [1]. You may need to substitute your public IP address for localhost.

I disabled MQTT to get rid of Error: connect ECONNREFUSED messages in the debug log window of ZWave-JS-UI, since the HA to Z-Wave communication is done via WebSockets (ws://localhost:3000).

Reference:

  1. https://zwave-js.github.io/zwave-js-ui/#/usage/setup

(1) Skip Option (2) and proceed to section -> Configuration for other Zooz products

Option (2): Docker Package Install

This option installs and runs two docker images, one for Home-Assistant, and another for ZWave-JS-UI.

$ sudo apt-get update

$ sudo apt-get install \
    ca-certificates \
    curl \
    gnupg \
    lsb-release

(2) Install Docker Engine on Ubuntu

Follow the latest guide here:

https://docs.docker.com/engine/install/ubuntu/

Make sure you are about to install Community Edition (ce) from the Docker repo instead of the default Ubuntu repo by adding Docker's official GPG key before setting up the repository.

$ sudo apt-get install docker-ce docker-ce-cli containerd.io

$ sudo systemctl status docker

FYI: Docker download OS selection: https://download.docker.com/linux/

centos/
debian/
fedora/
raspbian/
rhel/
sles/
static/
ubuntu/

(2) Run Docker to Install homeassistant

You may need to run docker with the --init parameter first, then stop and restart without it.

export HOME=$(pwd)
$ sudo usermod -aG docker $USER
$ docker run -d \
  --name homeassistant \
  --restart=always \
  -v /etc/localtime:/etc/localtime:ro \
  -v ${HOME}/hass:/config \
  --device /dev/ttyUSB0:/dev/ttyUSB0 \
  -e "DISABLE_JEMALLOC=true" \
  --network=host \
  -p 3000 \
  homeassistant/home-assistant:stable
#
# To Remove and re-install:
docker: Error response from daemon: Conflict. 
The container name "/homeassistant" is already in use by container "5113815f15cb79a0ea19f1888e6efdd39aa9108c60675fe10b515aa162c2e72b". 
You have to remove (or rename) that container to be able to reuse that name.
See 'docker run --help'.

# docker stop 5113815f15cb79a0ea19f1888e6efdd39aa9108c60675fe10b515aa162c2e72b
5113815f15cb79a0ea19f1888e6efdd39aa9108c60675fe10b515aa162c2e72b

# docker rm 5113815f15cb79a0ea19f1888e6efdd39aa9108c60675fe10b515aa162c2e72b
5113815f15cb79a0ea19f1888e6efdd39aa9108c60675fe10b515aa162c2e72b
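The stop/remove pair can also be done by name in one step; docker rm -f force-removes a running container so the name can be reused:

```
$ sudo docker rm -f homeassistant
```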

(2) Run Z-Wave JS Server

Create a new directory for the zwave-js server configuration files

export HOME=$(pwd)
$ sudo mkdir ${HOME}/zwavejs

Run the docker container (the first port listed is for the Z-Wave JS Web Interface, the second port is the Z-Wave JS WebSocket listener)

You may need to run docker with the --init parameter first, then stop and restart without it.

$ sudo docker run -d --restart=always  -p 8091:8091 -p 3000:3000 --device=/dev/ttyUSB0 --name="zwave-js" -e "TZ=America/New_York" -v ${HOME}/zwavejs:/usr/src/app/store zwavejs/zwavejs2mqtt:latest

(2) Check Docker Status

# docker ps
CONTAINER ID   IMAGE                                 COMMAND                  CREATED          STATUS          PORTS                                                                                  NAMES
d977c2674a80   homeassistant/home-assistant:stable   "/init"                  24 minutes ago   Up 24 minutes                                                                                          homeassistant
99769272d748   zwavejs/zwavejs2mqtt:latest           "docker-entrypoint.s…"   42 minutes ago   Up 42 minutes   0.0.0.0:3000->3000/tcp, :::3000->3000/tcp, 0.0.0.0:8091->8091/tcp, :::8091->8091/tcp   zwave-js

(2) Move Container/Image to SSD (/data)

Put this file in /etc/docker/daemon.json:

File: daemon.json

{ 
   "data-root": "/data/docker" 
}

Stop docker

$ sudo service docker stop

Make the directory, copy the files into it, then rename the old one

$ mkdir /data/docker

$ sudo rsync -aP /var/lib/docker/ /data/docker

$ sudo mv /var/lib/docker /var/lib/docker.old

Ready to start on the new directory

$ sudo service docker start

Test that everything still works; sudo docker info should now report Docker Root Dir: /data/docker, and your containers should start.

Then:

$ sudo rm -rf /var/lib/docker.old

(2) Default Route was getting messed up with docker

So I changed NetworkInterfaceBlacklist in the connman main.conf file as follows:

[General]
PreferredTechnologies=ethernet,wifi
SingleConnectedTechnology=false
AllowHostnameUpdates=false
PersistentTetheringMode=true
NetworkInterfaceBlacklist=SoftAp0,usb0,usb1,vmnet,vboxnet,virbr,ifb,veth-,vb-

It used to be:

NetworkInterfaceBlacklist=SoftAp0,usb0,usb1

Then restart connman:

$ sudo systemctl restart connman

Reference: https://stackoverflow.com/questions/62176803/docker-is-overriding-my-default-route-configuration

(2) Restarting

::::::::::::::
homeassistant_restart.sh
::::::::::::::
#homeassistant_restart.sh 
sudo docker restart homeassistant

::::::::::::::
homeassistant_shell.sh
::::::::::::::
docker exec -it homeassistant bash

::::::::::::::
docker_start.sh
::::::::::::::
#!/bin/bash
TMP=$(mktemp)
sudo systemctl start docker
#
./docker_status.sh
#
# select the containers we manage by name (last column of docker ps)
sudo docker ps -a | awk '$NF ~ /^zwave-js/ || $NF ~ /^homeassistant/' >> ${TMP}
#
while read LINE
do
 echo ${LINE} | awk '{print "Container " $1 " is " $NF}'
 sudo docker container start $(echo ${LINE} | awk '{print $1}')
done <${TMP}
#
./docker_status.sh
#
rm ${TMP}

::::::::::::::
docker_status.sh
::::::::::::::
#!/bin/bash
sudo docker ps

::::::::::::::
docker_stop.sh
::::::::::::::
#!/bin/bash
#------------------------
# File: docker_stop.sh
#------------------------
./docker_status.sh
sudo docker container ls --quiet | while read CONTAINER
do
 echo "Stopping container $CONTAINER"
 sudo docker container stop $CONTAINER
done  
sudo systemctl stop docker

Configuration for other Zooz products

File: ${HOME}/hass/configuration.yaml

# Sun
homeassistant:
  time_zone: America/New_York

# Configure a default setup of Home Assistant (frontend, api, etc)
default_config:

# Text to speech
tts:
  - platform: google_translate

shell_command:
  sms_light_off:              /config/script.sh Outside light off
  sms_light_on:               /config/script.sh Outside light on
  sms_motion_on:              /config/script.sh Outside motion on
  sms_garage_open:            /config/script.sh Garage door open
  sms_garage_closed:          /config/script.sh Garage door closed
  sms_basement_door_open:     /config/script.sh Basement door open
  sms_basement_door_closed:   /config/script.sh Basement door closed
  sms_basement_window_open:   /config/script.sh Basement window open
  sms_basement_window_closed: /config/script.sh Basement window closed
  sms_dining_window_open:     /config/script.sh Dining room window open
  sms_dining_window_closed:   /config/script.sh Dining room window closed
  sms_front_door_open:        /config/script.sh Front door open
  sms_front_door_closed:      /config/script.sh Front door closed
  sms_sliding_door_open:      /config/script.sh Sliding door open
  sms_sliding_door_closed:    /config/script.sh Sliding door closed

group: !include groups.yaml
automation: !include automations.yaml
script: !include scripts.yaml
scene: !include scenes.yaml


# E-mail
#notify:
#  - name: "NOTIFIER_NAME"
#    platform: smtp
#    sender: "YOUR_SENDER"
#    recipient: "YOUR_RECIPIENT"
notify:
  - name: "my_email"
    platform: smtp
    server: "192.168.1.5"
    port: 25
    timeout: 15
    sender: "me@example.com"
    encryption: starttls
    username: "me@example.com"
    recipient:
      - "me@example.com"
    sender_name: "My Home Assistant"
  - name: "my_page"
    platform: smtp
    server: "192.168.1.5"
    port: 25
    timeout: 15
    sender: "me@example.com"
    encryption: starttls
    username: "me@example.com"
    recipient:
      - "5551212@phonecompany.net"
    sender_name: "Home Assist"

#
# Example configuration.yaml entry
alert:
  basement_door:
    name: Basement door is open
    done_message: Basement door is closed
    entity_id: binary_sensor.basement_door_dwzwave25_access_control_window_door_is_open
    state: "on"
    repeat: 30
    can_acknowledge: true
    skip_first: false
    notifiers:
      - my_email
  basement_window:
    name: Basement window is open
    done_message: Basement window is closed
    entity_id: binary_sensor.basement_window_dwzwave25_access_control_window_door_is_open
    state: "on"
    repeat: 30
    can_acknowledge: true
    skip_first: false
    notifiers:
      - my_email
#     - my_page
  sliding_door:
    name: Sliding door is open
    done_message: Sliding door is closed
    entity_id: binary_sensor.sliding_door_dwzwave25_access_control_window_door_is_open
    state: "on"
    repeat: 30
    can_acknowledge: true
    skip_first: false
    notifiers:
      - my_email
#      - my_page
  dining_window:
    name: Dining window is open
    done_message: Dining window is closed
    entity_id: binary_sensor.dining_room_window_dwzwave25_access_control_window_door_is_open
    state: "on"
    repeat: 30
    can_acknowledge: true
    skip_first: false
    notifiers:
      - my_email
#      - my_page
  front_door:
    name: Front Door is open
    done_message: Front Door is closed
    entity_id: binary_sensor.front_door_dwzwave25_access_control_window_door_is_open
    state: "on"
    repeat: 30
    can_acknowledge: true
    skip_first: false
    notifiers:
      - my_email
#     - my_page
  garage_door:
    name: garage door is open
    done_message: garage door is closed
    entity_id: switch.garage_door_relay_zen16_2_2
    state: "off"
    repeat: 30
    can_acknowledge: true
    skip_first: false
    notifiers:
      - my_email
#      - my_page

# Switch Timer
input_number:
  light_timer_minutes:
    name: "Light Timer"
    min: 0
    max: 30
    step: 1
input_boolean:
  light_timer_enabled:
    name: "Light timer switch"
    initial: on
    icon: mdi:timelapse
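After editing configuration.yaml, the file can be syntax-checked from inside the container with Home Assistant's built-in check_config script before restarting (the container name and /config path assume the docker setup shown earlier):

```
$ sudo docker exec homeassistant hass --script check_config -c /config
```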

Configuration script from configuration.yaml

Log file will be picked up by script ${HOME}/matrix/sendmatrix.sh.

NOTE: '/config' in docker is actually ${HOME}/hass, see script comments inside sendmatrix.sh

$ cat ${HOME}/hass/script.sh 
#!/bin/bash
echo "${@} - $(date)" >>  /config/sendsms.log

Sendmatrix script

File: ${HOME}/matrix/sendmatrix.sh

#!/bin/bash
#----------------------------------------------------
# File: sendmatrix.sh
#
# Usage: sendmatrix.sh
#
# Purpose: Watch for new lines in a homeassistant (hass)
#  file (${HOME}/hass/sendmatrix.log) and send a matrix
#  message with those new line(s)
#
# Dependencies: 
#  - sudo apt-get install sendmatrix
#  - sudo apt-get install inotifywait
#  - retail : git clone https://github.com/mbucc/retail
#  - NOTE: Docker configures / as /config for homeassistant
#          and ${HOME}/hass is /
#  - ${HOME}/hass/configuration.yaml
#       ~
#       shell_command:
#         xmpp_light_off: /config/script.sh Outside light off
#       ~
#  - ${HOME}/hass/automations.yaml 
#       ~
#       - service: shell_command.xmpp_light_off
#       ~
#  - ${HOME}/hass/script.sh
#       #!/bin/bash
#       echo "${@} - $(date)" >>  /config/sendxmpp.log
#
# Date     Author     Description
# ----     ------     -----------
# Sep-2021 Don Cohoon Created
#----------------------------------------------------
HOME=${HOME}

# configure hass interface
DIR=${HOME}/wave
OFFSET=${DIR}/sendmatrix.cnt
RESULT=${DIR}/sendmatrix.txt
MSGS=${DIR}/hass/sendsms.log
LOG=${DIR}/sendmatrix.log

#
date >> ${LOG}
# monitor mode, look for file ${MSGS} modification
/usr/bin/inotifywait -m -e modify ${MSGS} 2>&1 | while read line
do
  echo "$(date) - $line"  >> ${LOG}
  # grab any hass script.sh new lines since last time
  /usr/local/bin/retail -T ${OFFSET} ${MSGS} > ${RESULT}
  if [ ! -s "${RESULT}" ]; then
    rm ${RESULT}
  else
    # send text message to phone
    /bin/cat ${RESULT} | /usr/local/bin/matrix-commander.py >>${LOG} 2>&1
    # nextcloud talk integration 
    MSG=$(/bin/cat ${RESULT})
    ${HOME}/nextcloud/talk_mattermost.sh "${MSG}"
    #
    /bin/cat ${RESULT} >>${LOG} 2>&1
    date >> ${LOG}
  fi
done
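retail -T keeps an offset file so each run emits only the lines appended since the previous run. If retail is not installed, the same idea can be sketched in pure shell with a line-count offset file; this is a stand-in for illustration, not the actual retail tool, and the temp files stand in for the real log and offset paths:

```shell
#!/bin/bash
# Stand-in for 'retail -T OFFSET FILE': print only lines appended since the
# previous call, tracking progress in a line-count offset file.
MSGS=$(mktemp)    # stands in for the sendsms.log message file
OFFSET=$(mktemp)  # stands in for the sendmatrix.cnt offset file
echo 0 > "${OFFSET}"

new_lines() {
  local seen total
  seen=$(cat "${OFFSET}")
  total=$(wc -l < "${MSGS}")
  tail -n +"$((seen + 1))" "${MSGS}"   # only the lines not yet seen
  echo "${total}" > "${OFFSET}"
}

echo "Outside light on - Mon"  >> "${MSGS}"; new_lines  # prints the first line
echo "Outside light off - Mon" >> "${MSGS}"; new_lines  # prints only the second
rm -f "${MSGS}" "${OFFSET}"
```

Each call advances the offset, so already-reported messages are never re-sent.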

Automation

Most of this configuration is done through the web interface: http://localhost:8123

File: ${HOME}/hass/automations.yaml

- id: 380e45ccb4934558ba07d3069830d3d2
  alias: light timer
  trigger:
  - platform: state
    entity_id: switch.light_switch_zen23
    to: 'on'
  condition:
  - condition: state
    entity_id: input_boolean.light_timer_enabled
    state: 'on'
  action:
  - delay:
      minutes: '{{ states(''input_number.light_timer_minutes'') | int }}'
  - service: switch.turn_off
    data: {}
    target:
      entity_id: switch.light_switch_zen23
  - service: shell_command.sms_light_off
  mode: single
- id: '1625258268930'
  alias: Motion Light On
  description: Turn on outside light switch when motion is detected after sunset and
    before sunrise
  trigger:
  - type: motion
    platform: device
    device_id: eeacc9737cf7b1b303418213aed42535
    entity_id: binary_sensor.outside_motion_zse29_home_security_motion_detection
    domain: binary_sensor
  condition:
  - type: is_illuminance
    condition: device
    device_id: eeacc9737cf7b1b303418213aed42535
    entity_id: sensor.outside_motion_zse29_illuminance
    domain: sensor
    below: 150
  action:
  - type: turn_on
    device_id: d2e00f0815ff683a320e28e41ee73ea5
    entity_id: switch.light_switch_zen23
    domain: switch
  - service: shell_command.sms_motion_on
  mode: single
- id: '1628798611219'
  alias: bedroom-alarm
  description: Turn Bedroom Double Switch on at a certain time
  trigger:
  - platform: time
    at: 06:00:00
  condition: []
  action:
  - type: turn_on
    device_id: 92a0858e5012690e2415f47ea4fb122c
    entity_id: switch.double_plug_zen25_bedroom
    domain: switch
  mode: single
- id: '1630351746501'
  alias: Garage Door Open
  description: Garage door open sms message
  trigger:
  - platform: device
    type: turned_off
    device_id: 074e290829b5399ed65b48c792f4f25b
    entity_id: switch.garage_door_relay_zen16_2_2
    domain: switch
  condition: []
  action:
  - service: shell_command.sms_garage_open
  mode: single
- id: '1630351876006'
  alias: Garage door closed
  description: Garage door closed sms message
  trigger:
  - platform: device
    type: turned_on
    device_id: 074e290829b5399ed65b48c792f4f25b
    entity_id: switch.garage_door_relay_zen16_2_2
    domain: switch
  condition: []
  action:
  - service: shell_command.sms_garage_closed
  mode: single
- id: '1631388354084'
  alias: Basement Door Open
  description: Basement Door Open sms message
  trigger:
  - type: opened
    platform: device
    device_id: f618398c8bdf0bce74b6c9a81b822da5
    entity_id: binary_sensor.basement_door_dwzwave25_access_control_window_door_is_open
    domain: binary_sensor
  condition: []
  action:
  - service: shell_command.sms_basement_door_open
  mode: single
- id: '1631388422907'
  alias: Basement Door Closed
  description: Basement Door Closed sms message
  trigger:
  - type: not_opened
    platform: device
    device_id: f618398c8bdf0bce74b6c9a81b822da5
    entity_id: binary_sensor.basement_door_dwzwave25_access_control_window_door_is_open
    domain: binary_sensor
  condition: []
  action:
  - service: shell_command.sms_basement_door_closed
  mode: single
- id: '1631389804064'
  alias: Basement Window Open
  description: Basement Window Open sms message
  trigger:
  - type: opened
    platform: device
    device_id: 563036dd16828af53955fa8eb0d4c4bf
    entity_id: binary_sensor.basement_window_dwzwave25_access_control_window_door_is_open
    domain: binary_sensor
  condition: []
  action:
  - service: shell_command.sms_basement_window_open
  mode: single
- id: '1631389885679'
  alias: Basement Window Closed
  description: Basement Window Closed sms message
  trigger:
  - type: not_opened
    platform: device
    device_id: 563036dd16828af53955fa8eb0d4c4bf
    entity_id: binary_sensor.basement_window_dwzwave25_access_control_window_door_is_open
    domain: binary_sensor
  condition: []
  action:
  - service: shell_command.sms_basement_window_closed
  mode: single
- id: '1631389952499'
  alias: Dining Room Window Open
  description: Dining Room Window Open sms message
  trigger:
  - type: opened
    platform: device
    device_id: 2b803cd86b266ee0d15b1b00c3160b57
    entity_id: binary_sensor.dining_room_window_dwzwave25_access_control_window_door_is_open
    domain: binary_sensor
  condition: []
  action:
  - service: shell_command.sms_dining_window_open
  mode: single
- id: '1631390010729'
  alias: Dining Room Window Closed
  description: Dining Room Window Closed sms message
  trigger:
  - type: not_opened
    platform: device
    device_id: 2b803cd86b266ee0d15b1b00c3160b57
    entity_id: binary_sensor.dining_room_window_dwzwave25_access_control_window_door_is_open
    domain: binary_sensor
  condition: []
  action:
  - service: shell_command.sms_dining_window_closed
  mode: single
- id: '1631390253290'
  alias: Front Door Open
  description: Front Door Open sms message
  trigger:
  - type: opened
    platform: device
    device_id: 8df9b7813be834e418cf5d35165114b1
    entity_id: binary_sensor.front_door_dwzwave25_access_control_window_door_is_open
    domain: binary_sensor
  condition: []
  action:
  - service: shell_command.sms_front_door_open
  mode: single
- id: '1631390320024'
  alias: Front Door Closed
  description: Front Door Closed sms message
  trigger:
  - type: not_opened
    platform: device
    device_id: 8df9b7813be834e418cf5d35165114b1
    entity_id: binary_sensor.front_door_dwzwave25_access_control_window_door_is_open
    domain: binary_sensor
  condition: []
  action:
  - service: shell_command.sms_front_door_closed
  mode: single
- id: '1631390388999'
  alias: Sliding Door Open
  description: Sliding Door Open sms message
  trigger:
  - type: opened
    platform: device
    device_id: 3c2bfde83e328e220ee10dbd3b2f3085
    entity_id: binary_sensor.sliding_door_dwzwave25_access_control_window_door_is_open
    domain: binary_sensor
  condition: []
  action:
  - service: shell_command.sms_sliding_door_open
  mode: single
- id: '1631390452432'
  alias: Sliding Door Closed
  description: Sliding Door Closed sms  message
  trigger:
  - type: not_opened
    platform: device
    device_id: 3c2bfde83e328e220ee10dbd3b2f3085
    entity_id: binary_sensor.sliding_door_dwzwave25_access_control_window_door_is_open
    domain: binary_sensor
  condition: []
  action:
  - service: shell_command.sms_sliding_door_closed
  mode: single

Upgrade

Option: TrueNAS

  • Homeassistant app can be updated using TrueNAS app screen, 3 dot menu in the home-assistant app.
  • The latest Z-Wave-JS-UI can be re-downloaded.

Option: Docker

You can find the latest, stable, and development builds on docker hub here: https://hub.docker.com/u/homeassistant

During the upgrade your devices will continue to work fine, but note that automations and access to the application will not be available, so it is recommended to do this at a time when you know no automations will be running.

Validate your current version

Navigate to the Developer Tools section of Home Assistant. Here you can see the version you currently have deployed.

File: docker_upgrade.sh

#!/bin/bash
#####################################################################
#
# File: docker_upgrade.sh
#
# Usage: sudo ./docker_upgrade.sh
#
# Purpose: Keep HomeAssistant up to date
#
# Dependencies: homeassistant and zwave-js are installed via docker
#  Z-Wave USB device may be different
#  Time zone may be different
#
# Process:
#   1. Stop the current container
#   2. Delete it
#   3. Pull the new container from the docker hub
#   4. Run the container again
#
# The config directory is not in the container, so it remains untouched.
#
# NOTE: Do this every time a new version of HA is released.
#
# History:
#
#   Who    When    Why
# -------- ------- --------------------------------------------------
# Don Cohoon  Dec 2022  Created from many, many, notes and searches
#                        wish I took better notes to give credit.
#####################################################################
HOME=$(pwd)
#
function homeassistant_upgrade() {
 docker stop homeassistant
 docker rm homeassistant
 docker pull homeassistant/home-assistant
 docker run -d \
   --name homeassistant \
   --restart=always \
   -v /etc/localtime:/etc/localtime:ro \
   -v ${HOME}/hass:/config \
   --device /dev/ttyUSB0:/dev/ttyUSB0 \
   -e "DISABLE_JEMALLOC=true" \
   --network=host \
   -p 3000 \
   homeassistant/home-assistant:stable
}
#####################################################################
function zwavejs_upgrade() {
 docker stop zwave-js
 docker rm zwave-js
 docker pull zwavejs/zwavejs2mqtt
 sudo docker run -d \
  --restart=always  \
  -p 8091:8091 \
  -p 3000:3000 \
  --device=/dev/ttyUSB0 \
  --name="zwave-js" \
  -e "TZ=America/New_York" \
  -v ${HOME}/zwavejs:/usr/src/app/store zwavejs/zwavejs2mqtt:latest
}
#####################################################################
ID=$(id -un)
#
if [ ${ID} != "root" ]; then
  echo "ERROR: Must be run as root"
  exit 1
fi
#
# homeassistant upgrade
#
homeassistant_upgrade
#
# zwave-js upgrade
#
zwavejs_upgrade

Validate your new version number

After a few minutes, navigate back to the Developer Tools page. Upon load, you should now be on the latest version of Home Assistant.

More docker commands are in my Docker blog for January 2023

Lights for WALL-E

The ZEN16 relay module is wired to a switch on the garage door. When it is open, a red light is displayed on Wall-E; when the door is closed, a green light is shown.

Wall-E

A USB charger is used to supply 5 volts DC (BAT1) to the circuit. Circuit switch S1 is connected to the ZEN16 relay R2 connection. Q1 is a 2N2222 transistor. A closed circuit supplies voltage to LED2, open circuit supplies voltage to LED1.

Here is the electrical diagram for that:

Lights_for_Wall-E.png

Circuit

opoohbbehgacecel.png

ZEN16 Relay

ZEN16 monitors its switch Sw2 below; if it is closed, it turns on relay R2, closing the circuit switch S1 above. Home Assistant will detect the change in relay R2 and issue notifications using its automation.yaml definition "alias: Garage Door Open".

ZEN16 will respond to relay R1 on/off commands from Home Assistant. The automation.yaml definition is called "alias: Bedroom Alarm" and turns on relay R1 at 6:00am. A Home Assistant dashboard button is enabled to turn relay R1 off.

Garage door connections:

Purpose               Connection  HomeAssistant Entity_ID
Garage Door Switch    Sw2         N/A
Wall-E Light Circuit  R2          switch.garage_door_relay_zen16_2_2
Alarm Light Radio     R1          switch.garage_door_relay_zen16_2

ZEN16

Alarm Light Radio - Diagram

The PowerSwitch Tail is a 120v AC relay controlled by 5v DC. The ZEN16 acts as a switch to turn relay R1 on and off. This opens/closes the 5v DC voltage flow to enable/disable the 120v AC voltage through the PowerSwitch Tail, which sends power to the power strip, turning the radio and light plugged into it on/off.

AlarmClock-HomeAssistant.png

Z-Wave Devices

Manufacturer  Product                                Product Code  Name                          Location
Silicon Labs  700 Series-based Controller            ZST10-700     Controller ZST10-700          Basement
Ecolink       Z-Wave Door/Window Sensor              DWZWAVE25     Basement Door DWZWAVE25       Basement
Zooz          4-in-1 Sensor                          ZSE40         Basement Motion ZSE40         Basement
Zooz          Outdoor Motion Sensor                  ZSE29         Outside Motion ZSE29          Deck
Zooz          Multirelay                             ZEN16         Garage Door Relay ZEN16       Basement
Zooz          Z-Wave Plus On/Off Toggle Switch v4    ZEN23         Light Switch ZEN23            Basement
Zooz          Double Plug                            ZEN25         Double Plug ZEN25             Basement
Ecolink       Z-Wave Door/Window Sensor              DWZWAVE25     Basement Window DWZWAVE25     Basement
Ecolink       Z-Wave Door/Window Sensor              DWZWAVE25     Sliding Door DWZWAVE25        Living Room
Ecolink       Z-Wave Door/Window Sensor              DWZWAVE25     Dining Room Window DWZWAVE25  Dining Room
Ecolink       Z-Wave Door/Window Sensor              DWZWAVE25     Front Door DWZWAVE25          Front Door
Zooz          Double Plug                            ZEN25         Double Plug ZEN25 Bedroom     Bedroom
Honeywell     T6 Pro Z-Wave Programmable Thermostat  TH6320ZW      Thermostat - T5               Living Room

Purge

HA-Purge.png

Continue

Now that you have set up Home Automation on your server, you will need Matrix for sending out alerts, so now is a good time to install this super fast and secure messaging system.

Proceed in the order presented; some things depend on prior setups.

Book Last Updated: 29-March-2024

Matrix


Matrix is a messaging system you can run on your server. It allows you to send messages to a phone/watch from the command line on the server. It is useful with HomeAssistant for monitoring the house.

Here is an example of a Matrix Client, called Element[1], available on iOS, Android and Desktops.

IMG_2070A268049A-1.jpeg

  1. https://element.io/

Install

Matrix.org provides Debian/Ubuntu packages of Synapse via https://packages.matrix.org/debian/. To install the latest release:

Prerequisites:

sudo apt install -y lsb-release wget apt-transport-https

Matrix.org packages

sudo wget -O /usr/share/keyrings/matrix-org-archive-keyring.gpg https://packages.matrix.org/debian/matrix-org-archive-keyring.gpg
echo "deb [signed-by=/usr/share/keyrings/matrix-org-archive-keyring.gpg] https://packages.matrix.org/debian/ $(lsb_release -cs) main" |
    sudo tee /etc/apt/sources.list.d/matrix-org.list

sudo apt update
sudo apt install matrix-synapse-py3

Reference: https://github.com/matrix-org/synapse/blob/master/INSTALL.md

Configure

  • Set the public_baseurl to your server's local IP address.
  • Set the listeners to your server's local IP address too.

File: /etc/matrix-synapse/homeserver.yaml

#public_baseurl: https://example.com/
public_baseurl: http://192.168.1.3/
~
listeners:
  - port: 8008
    tls: false
    type: http
    x_forwarded: true
    bind_addresses: ['::1', '127.0.0.1', '192.168.1.3']

    resources:
      - names: [client, federation]
        compress: false
~
:wq

Set the shared secret and enable registration so we can create users for ourselves. Registration can be disabled again later; anyone with the shared secret can still register, whether it is enabled or not.

Create the shared secret like this:

tr -dc 'a-zA-Z0-9' < /dev/urandom | fold -w 32 | head -n 1
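If you prefer, the same kind of secret can be generated with Python's standard library `secrets` module (a sketch equivalent to the pipeline above):

```python
import secrets
import string

# 32-character alphanumeric secret, drawn from the OS CSPRNG
alphabet = string.ascii_letters + string.digits
shared_secret = "".join(secrets.choice(alphabet) for _ in range(32))
print(shared_secret)
```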

File: /etc/matrix-synapse/homeserver.yaml

#enable_registration: false
enable_registration: true
~
# If set, allows registration of standard or admin accounts by anyone who
# has the shared secret, even if registration is otherwise disabled.
#
# Set 'registration_shared_secret' to the random passphrase generated above
#
registration_shared_secret: "S9gYeYowVX49U6OG25EEROKU4b2gEU3S"
#
server_name: localhost
#
:wq

Create User(s)

With enable_registration: true, we can create users via the register_new_matrix_user client.

$ register_new_matrix_user -c /etc/matrix-synapse/homeserver.yaml http://localhost:8008
New user localpart [bob]: alice
Password: 
Confirm password: 
Make admin [no]: n
Sending registration request...
Success!

Note: register_new_matrix_user requires Python > 2.7.9.

Reference: https://matrix-org.github.io/synapse/latest/usage/administration/admin_api/
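Synapse also supports creating users over HTTP through the shared-secret registration endpoint (`/_synapse/admin/v1/register`): you fetch a nonce with a GET, then POST the username, password, and an HMAC-SHA1 keyed by `registration_shared_secret`. As a sketch (MAC scheme per the Synapse admin API documentation; values below are placeholders), the MAC is computed like this:

```python
import hashlib
import hmac

def registration_mac(shared_secret: str, nonce: str, user: str,
                     password: str, admin: bool = False) -> str:
    """HMAC-SHA1 over nonce, user, password, and the admin flag,
    joined by NUL bytes and keyed by the registration shared secret."""
    msg = b"\x00".join([
        nonce.encode(),
        user.encode(),
        password.encode(),
        b"admin" if admin else b"notadmin",
    ])
    return hmac.new(shared_secret.encode(), msg, hashlib.sha1).hexdigest()
```

The resulting hex digest goes into the `mac` field of the POST body.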

Database

The database is SQLite by default, stored here:

$ ls /var/lib/matrix-synapse
homeserver.db  media

Check your database settings in the configuration file, connect to the correct database using either psql [database name] (if using PostgreSQL) or sqlite3 path/to/your/database.db (if using SQLite), and elevate the user @foo:bar.com to server administrator:

UPDATE users SET admin = 1 WHERE name = '@foo:bar.com';

You can also revoke server admin by setting admin = 0 above.
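The same toggle can be scripted against the default SQLite database with Python's stdlib sqlite3 module; this is a sketch (the path and user ID are examples from this chapter), so stop Synapse before writing to its database file:

```python
import sqlite3

def set_admin(db_path: str, user_id: str, admin: bool = True) -> int:
    """Set or clear the server-admin flag for a user; returns rows changed."""
    with sqlite3.connect(db_path) as conn:
        cur = conn.execute(
            "UPDATE users SET admin = ? WHERE name = ?",
            (1 if admin else 0, user_id),
        )
        return cur.rowcount

# Example (hypothetical user):
# set_admin("/var/lib/matrix-synapse/homeserver.db", "@foo:bar.com")
```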

Room administrators and server administrators are different: server admins have no special power over rooms. When you use the "Start chat" button in Riot (now Element), it will set everyone as room admin.

You can convert the database to PostgreSQL:

https://github.com/matrix-org/synapse/blob/master/docs/postgres.md

Secure Configuration

Once you have tested Matrix, created users and have a SSL/TLS certificate, you can enable external communication.

Turn off registration and comment out the registration_shared_secret.

File: /etc/matrix-synapse/homeserver.yaml

~
  46 # This is set in /etc/matrix-synapse/conf.d/server_name.yaml for Debian installations.
  47 # server_name: "SERVERNAME"
  48 server_name: matrix.example.com
~
  64 # The public-facing base URL that clients use to access this Homeserver (not
  65 # including _matrix/...). This is the same URL a user might enter into the
  66 # 'Custom Homeserver URL' field on their client. If you use Synapse with a
  67 # reverse proxy, this should be the URL to reach Synapse via the proxy.
  68 # Otherwise, it should be the URL to reach Synapse's client HTTP listener (see
  69 # 'listeners' below).
  70 #
  71 #public_baseurl: https://example.com/
  72 public_baseurl: https://matrix.example.com:8448
~
272 #
 273 listeners:
 274   # TLS-enabled listener: for when matrix traffic is sent directly to synapse.
 275   #
 276   # Disabled by default. To enable it, uncomment the following. (Note that you
 277   # will also need to give Synapse a TLS key and certificate: see the TLS section
 278   # below.)
 279   #
 280   - port: 8448
 281     type: http
 282     tls: true
 283     bind_addresses: ['::1', '0.0.0.0']
 284     resources:
 285       - names: [client, federation]
 286 
 287   # Unsecure HTTP listener: for when matrix traffic passes through a reverse proxy
 288   # that unwraps TLS.
 289   #
 290   # If you plan to use a reverse proxy, please see
 291   # https://matrix-org.github.io/synapse/latest/reverse_proxy.html.
 292   #
 293   - port: 8008
 294     tls: false
 295     type: http
 296     x_forwarded: true
 297     bind_addresses: ['::1', '127.0.0.1', '192.168.1.3']
 298 
 299     resources:
 300       - names: [client, federation]
 301         compress: false
 302 
~
 547 ## TLS ##
 548 
 549 # PEM-encoded X509 certificate for TLS.
 550 # This certificate, as of Synapse 1.0, will need to be a valid and verifiable
 551 # certificate, signed by a recognised Certificate Authority.
 552 #
 553 # Be sure to use a `.pem` file that includes the full certificate chain including
 554 # any intermediate certificates (for instance, if using certbot, use
 555 # `fullchain.pem` as your certificate, not `cert.pem`).
 556 #
 557 #tls_certificate_path: "/etc/matrix-synapse/SERVERNAME.tls.crt"
 558 tls_certificate_path: "/etc/letsencrypt/live/example.com/fullchain.pem"
 559 
 560 # PEM-encoded private key for TLS
 561 #
 562 #tls_private_key_path: "/etc/matrix-synapse/SERVERNAME.tls.key"
 563 tls_private_key_path: "/etc/letsencrypt/live/example.com/privkey.pem"
~
755 database:
 756   name: psycopg2
 757   args:
 758     user: matrix_user
 759     password: ************
 760     database: matrix
 761     host: 127.0.0.1
 762     cp_min: 5
 763     cp_max: 10
 764 
 765     # seconds of inactivity after which TCP should send a keepalive message to the server
 766     keepalives_idle: 10
 767 
 768     # the number of seconds after which a TCP keepalive message that is not
 769     # acknowledged by the server should be retransmitted
 770     keepalives_interval: 10
 771 
 772     # the number of TCP keepalives that can be lost before the client's connection
 773     # to the server is considered dead
 774     keepalives_count: 3
~
 777 ## Logging ##
 778 
 779 # A yaml python logging config file as described by
 780 # https://docs.python.org/3.7/library/logging.config.html#configuration-dictionary-schema
 781 #
 782 log_config: "/etc/matrix-synapse/log.yaml"
~
 906 # Directory where uploaded images and attachments are stored.
 907 #
 908 media_store_path: "/var/lib/matrix-synapse/media"
~
1139 # Enable registration for new users.
1140 #
1141 #enable_registration: false
1142 enable_registration: false
~
1190 # If set, allows registration of standard or admin accounts by anyone who
1191 # has the shared secret, even if registration is otherwise disabled.
1192 #
1193 #registration_shared_secret: <PRIVATE STRING>
1194 registration_shared_secret: "*****************************"
~
1438 ## Signing Keys ##
1439 
1440 # Path to the signing key to sign messages with
1441 #
1442 signing_key_path: "/etc/matrix-synapse/homeserver.signing.key"
~
#trusted_key_servers:
1500 #  - server_name: "my_trusted_server.example.com"
1501 #    verify_keys:
1502 #      "ed25519:auto": "abcdefghijklmnopqrstuvwxyzabcdefghijklmopqr"
1503 #  - server_name: "my_other_trusted_server.example.com"
1504 #
1505 trusted_key_servers:
1506   - server_name: "matrix.org"
~

File: /etc/matrix-synapse/conf.d/server_name.yaml

# This file is autogenerated, and will be recreated on upgrade if it is deleted.
# Any changes you make will be preserved.

# The domain name of the server, with optional explicit port.
# This is used by remote servers to connect to this server,
# e.g. matrix.org, localhost:8080, etc.
# This is also the last part of your UserID.
#
server_name: matrix.example.com

Matrix Commander CLI

This will allow us to send text messages to our phone/watch from homeassistant.

Follow the instructions on the matrix-commander website. Specifically, run with the --login parameter to create the ~/.config/matrix-commander/credentials.json file.

Reference: https://github.com/8go/matrix-commander

Send Matrix

This script will assist in watching homeassistant for new messages, then sending them out.

File: ~/matrix/sendmatrix.sh

#!/bin/bash
#----------------------------------------------------
# File: sendmatrix.sh
#
# Usage: sendmatrix.sh
#
# Purpose: Watch for new lines in a homeassistant (hass)
#  file (${HOME}/wave/hass/sendsms.log) and send a Matrix
#  message with those new line(s)
#
# Dependencies: 
#  - git clone https://github.com/8go/matrix-commander.git
#  - sudo apt-get install inotify-tools
#  - retail : git clone https://github.com/mbucc/retail.git
#  - NOTE: Docker configures / as /config for homeassistant
#          and ~/wave/hass is /
#  - ~/wave/hass/configuration.yaml
#       ~
#       shell_command:
#         xmpp_light_off: /config/script.sh Outside light off
#       ~
#  - ~/wave/hass/automations.yaml 
#       ~
#       - service: shell_command.xmpp_light_off
#       ~
#  - ~/wave/hass/script.sh
#       #!/bin/bash
#       echo "${@} - $(date)" >>  /config/sendxmpp.log
#
# Date     Author     Description
# ----     ------     -----------
# Sep-2021 Don Cohoon Created
# Sep-2022 Don Cohoon Updated matrix-commander version
# Oct-2022 Don Cohoon Fix matrix commander parameters
#----------------------------------------------------
HOME=${HOME}

# configure hass interface
DIR=${HOME}/wave
OFFSET=${DIR}/sendmatrix.cnt
RESULT=${DIR}/sendmatrix.txt
MSGS=${DIR}/hass/sendsms.log
LOG=${DIR}/sendmatrix.log
MATRIX_COMMANDER=$HOME/.local/lib/python3.10/site-packages/matrix_commander/matrix_commander.py
MATRIX_CONFIG=$HOME/.config/matrix-commander/credentials.json
MATRIX_STORE=$HOME/.config/matrix-commander/store
#MATRIX_DEBUG="-d"
MATRIX_DEBUG=""

#
date >> ${LOG}
# monitor mode, look for file ${MSGS} modification
/usr/bin/inotifywait -m -e modify ${MSGS} 2>&1 | while read line
do
  echo "$(date) - $line"  >> ${LOG}
  # grab any hass script.sh new lines since last time
  /usr/local/bin/retail -T ${OFFSET} ${MSGS} > ${RESULT}
  if [ ! -s "${RESULT}" ]; then
    rm ${RESULT}
  else
    # send text message to phone
    MSG=$(/bin/cat ${RESULT})
    /bin/cat ${RESULT} | /usr/bin/python3 ${MATRIX_COMMANDER} \
      -c ${MATRIX_CONFIG} -s ${MATRIX_STORE} --no-ssl ${MATRIX_DEBUG} -m - >>${LOG} 2>&1
    # nextcloud talk integration 
    ${HOME}/nextcloud/talk_mattermost.sh "${MSG}"
    #
    /bin/cat ${RESULT} >>${LOG} 2>&1
    date >> ${LOG}
  fi
done
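To keep the watcher running after a reboot, one option (an assumption, not part of the original setup) is a small systemd user unit; %h expands to the user's home directory:

```ini
[Unit]
Description=Forward Home Assistant messages to Matrix
After=network-online.target

[Service]
ExecStart=%h/matrix/sendmatrix.sh
Restart=on-failure
RestartSec=10

[Install]
WantedBy=default.target
```

Save it as ~/.config/systemd/user/sendmatrix.service, then run systemctl --user daemon-reload and systemctl --user enable --now sendmatrix.service.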

Debug

matrix-commander credentials

This is the credentials.json file matrix-commander.py creates and uses.

$ cat  ~/.config/matrix-commander/credentials.json |jq .
{
  "homeserver": "http://127.0.0.1:8008",
  "device_id": "GTSIKDHJEG",
  "user_id": "@user:matrix.example.com",
  "room_id": "!JshkYudiHksdhKHSKk:matrix.example.com",
  "access_token": "syt_ZTSjdlksshoidywlSKkDHIASLJW_0D2t3m"
}
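As a quick sanity check (a sketch; the keys follow the sample above), you can verify the file parses and contains the fields matrix-commander needs:

```python
import json
from pathlib import Path

def load_credentials(path: str) -> dict:
    """Load a matrix-commander credentials file and check the expected keys."""
    creds = json.loads(Path(path).expanduser().read_text())
    for key in ("homeserver", "device_id", "user_id", "room_id", "access_token"):
        if key not in creds:
            raise KeyError(f"credentials file missing {key!r}")
    return creds

# load_credentials("~/.config/matrix-commander/credentials.json")
```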

If you are interested in where the element values derive from, check these:

Change User Password

As an admin, you can reset another user's password.

Login as an admin and save an access token.

curl -XPOST -d '{"type":"m.login.password", "user":"<userId>", "password":"<password>"}' "https://localhost:8448/_matrix/client/r0/login"

Using that access token, reset any user's password.

curl -XPOST -H "Authorization: Bearer <access_token>" -H "Content-Type: application/json" -d '{"new_password":"<new_password>"}' "https://localhost:8448/_matrix/client/r0/admin/reset_password/<userId>"

<userId> is fully qualified, e.g. @user:server.com
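The two curl calls map directly onto Python's urllib. This sketch only builds the requests so you can inspect them before sending; the base URL matches this chapter's listener, while the user IDs and token are placeholders:

```python
import json
import urllib.request

BASE = "https://localhost:8448"  # Synapse TLS listener from this chapter

def login_request(user_id: str, password: str) -> urllib.request.Request:
    """Build (but do not send) the m.login.password request."""
    body = json.dumps({"type": "m.login.password",
                       "user": user_id, "password": password}).encode()
    return urllib.request.Request(f"{BASE}/_matrix/client/r0/login",
                                  data=body,
                                  headers={"Content-Type": "application/json"})

def reset_request(token: str, user_id: str, new_password: str) -> urllib.request.Request:
    """Build the admin password-reset request, authorized by the access token."""
    body = json.dumps({"new_password": new_password}).encode()
    return urllib.request.Request(
        f"{BASE}/_matrix/client/r0/admin/reset_password/{user_id}",
        data=body,
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"})

# To actually send one: urllib.request.urlopen(login_request(...))
```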

Reference:

Matrix Client-Server API

The easiest way to administer a Matrix-Synapse server is through the Client-Server API [1].

Visit the API overview for many examples with code for:

  • Server Administration,
  • Account Management,
  • Spaces,
  • Event Relationships,
  • Threads,
  • Room Participation,
  • Session Management,
  • Capabilities,
  • Room Creation,
  • Device Management,
  • Room Discovery,
  • Room Directory,
  • Room Membership,
  • End-to-End Encryption,
  • Push Notifications,
  • Presence,
  • User Data,
  • and more.
  1. https://matrix.org/docs/api/#overview

Alternative: Nextcloud Talk

NextCloud Talk is part of NextCloud Services.

Continue

Now that you have set up Matrix Communication on your server, you can read some news to send to others. Read on to find out how to set up a local news/RSS reader.

Proceed in the order presented; some things depend on prior setups.

Book Last Updated: 29-March-2024

Miniflux News Reader

Reference: https://miniflux.app/docs/installation.html#debian

Miniflux is an RSS (Really Simple Syndication) server that will fetch and display RSS feeds from other websites. Also known as a news reader.

You can view these feeds from your web browser. They look like this:

miniflux (2).png

Database Configuration

Creating the Database

Here is an example from the command line:

# Switch to the postgres user
$ sudo -u postgres bash

# Create a database user for Miniflux
$ createuser -P miniflux
Enter password for new role: ******
Enter it again: ******

# Create a database for miniflux that belongs to our user
$ createdb -O miniflux miniflux

# Create the extension hstore as superuser
$ psql miniflux -c 'create extension hstore'
CREATE EXTENSION

Enabling HSTORE extension for Postgresql

Creating PostgreSQL extensions requires the SUPERUSER privilege. Several solutions are available:

  1. Give SUPERUSER privileges to the miniflux user only during the schema migration:
ALTER USER miniflux WITH SUPERUSER;
-- Run the migrations (miniflux -migrate)
ALTER USER miniflux WITH NOSUPERUSER;
  2. Create the hstore extension with another user that has the SUPERUSER privilege before running the migrations:
sudo -u postgres psql $MINIFLUX_DATABASE
> CREATE EXTENSION hstore;

Debian/Ubuntu/Raspbian Package Installation

You must have Debian >= 8 or Ubuntu >= 16.04. When using the Debian package, the Miniflux daemon is supervised by systemd.

  • Download and install the Debian package (v2.0.39 for arm64 in this example):
$ curl -L https://github.com/miniflux/v2/releases/download/2.0.39/miniflux_2.0.39_arm64.deb -o miniflux_2.0.39_arm64.deb

$ sudo dpkg -i miniflux_2.0.39_arm64.deb 
Selecting previously unselected package miniflux.
(Reading database ... 118954 files and directories currently installed.)
Preparing to unpack miniflux_2.0.39_arm64.deb ...
Unpacking miniflux (2.0.39) ...
Setting up miniflux (2.0.39) ...
Created symlink /etc/systemd/system/multi-user.target.wants/miniflux.service → /lib/systemd/system/miniflux.service.
Job for miniflux.service failed because the control process exited with error code.
See "systemctl status miniflux.service" and "journalctl -xe" for details.
Processing triggers for man-db (2.9.4-2) ...
  • Define the environment variable DATABASE_URL if necessary
$ cat /etc/miniflux.conf 
# See https://miniflux.app/docs/configuration.html

RUN_MIGRATIONS=1
DATABASE_URL = user=miniflux password=********** dbname=miniflux sslmode=disable
  • Run the SQL migrations:
$ sudo -u postgres miniflux -c /etc/miniflux.conf -migrate
-> Current schema version: 60
-> Latest schema version: 60
  • Create an admin user:
$ sudo -u postgres miniflux -c /etc/miniflux.conf -create-admin
Enter Username: nosey
Enter Password: 
  • Customize your configuration file /etc/miniflux.conf if necessary

Config entries followed by a colon (:) show their defaults; to change one, use an equals (=) sign.

DATABASE_URL = user=miniflux password=************ dbname=miniflux sslmode=disable
LOG_DATE_TIME= true
DEBUG= false
HTTP_SERVICE: true
SCHEDULER_SERVICE: true
HTTPS= true
HSTS: true
BASE_URL= https://rss.example.com
ROOT_URL= https://rss.example.com
BASE_PATH: 
LISTEN_ADDR = 0.0.0.0:8080
DATABASE_MAX_CONNS: 20
DATABASE_MIN_CONNS: 1
RUN_MIGRATIONS= 1
CERT_FILE = /etc/letsencrypt/live/example.com/fullchain.pem
KEY_FILE = /etc/letsencrypt/live/example.com/privkey.pem
CERT_DOMAIN: rss.example.com
CERT_CACHE: /tmp/cert_cache
CLEANUP_FREQUENCY_HOURS: 24
CLEANUP_ARCHIVE_READ_DAYS: 60
CLEANUP_REMOVE_SESSIONS_DAYS: 30
WORKER_POOL_SIZE: 5
POLLING_FREQUENCY: 60
BATCH_SIZE: 10
POLLING_SCHEDULER: round_robin
SCHEDULER_ENTRY_FREQUENCY_MAX_INTERVAL: 1440
SCHEDULER_ENTRY_FREQUENCY_MIN_INTERVAL: 5
PROXY_IMAGES: http-only
CREATE_ADMIN: false
ADMIN_USERNAME=nosey
ADMIN_PASSWORD=**********
POCKET_CONSUMER_KEY: 
OAUTH2_USER_CREATION: false
OAUTH2_CLIENT_ID: 
OAUTH2_CLIENT_SECRET: 
OAUTH2_REDIRECT_URL: 
OAUTH2_OIDC_DISCOVERY_ENDPOINT: 
OAUTH2_PROVIDER: 
HTTP_CLIENT_TIMEOUT: 20
HTTP_CLIENT_MAX_BODY_SIZE: 15728640
AUTH_PROXY_HEADER: 
AUTH_PROXY_USER_CREATION: false
  • Restart the process: sudo systemctl restart miniflux
  • Check the process status: sudo systemctl status miniflux
  • Enable firewall
$ sudo ufw allow 8080
Rule added
Rule added (v6)

Note that you could also use the Miniflux APT repository instead of downloading the Debian package manually.

Since Miniflux v2.0.25, the Debian package is available for multiple architectures: amd64, arm64, and armhf. This way, it's very easy to install Miniflux on a Raspberry Pi.

If you don’t want to run the SQL migrations manually each time you upgrade Miniflux, set the environment variable: RUN_MIGRATIONS=1 in /etc/miniflux.conf.

Systemd reads the environment variables from the file /etc/miniflux.conf. You must restart the service for new values to take effect.

Export/Import via standard .opml files

Use the web interface, top menu Feeds, to Import/Export .opml files.

Sample RSS feeds

<?xml version="1.0" encoding="UTF-8"?>
<opml version="2.0">
    <head>
        <title>Miniflux</title>
        <dateCreated>Sun, 23 Oct 2022 14:55:57 EDT</dateCreated>
    </head>
    <body>
        <outline text="All">
            <outline title="Bartosz Ciechanowski" text="Bartosz Ciechanowski" xmlUrl="https://ciechanow.ski/atom.xml" htmlUrl="https://ciechanow.ski/"></outline>
            <outline title="Hacking on Go350" text="Hacking on Go350" xmlUrl="https://www.go350.com/index.xml" htmlUrl="https://www.go350.com/"></outline>
            <outline title="2ality – JavaScript and more" text="2ality – JavaScript and more" xmlUrl="https://feeds.feedburner.com/2ality" htmlUrl="https://2ality.com/"></outline>
            <outline title="9Gag - Awesome - Hot" text="9Gag - Awesome - Hot" xmlUrl="https://9gag.vamourir.fr/feeds/awesome/hot.xml" htmlUrl="https://9gag.vamourir.fr/feeds/awesome/hot.xml"></outline>
            <outline title="aka Ken Smith" text="aka Ken Smith" xmlUrl="http://oldschool.scripting.com/KenSmith/rss.xml" htmlUrl="http://oldschool.scripting.com/KenSmith/"></outline>
            <outline title="A List Apart: The Full Feed" text="A List Apart: The Full Feed" xmlUrl="https://feeds.feedburner.com/alistapart/main" htmlUrl="https://alistapart.com"></outline>
            <outline title="Al Jazeera – Breaking News, World News and Video from Al Jazeera" text="Al Jazeera – Breaking News, World News and Video from Al Jazeera" xmlUrl="https://www.aljazeera.com/xml/rss/all.xml" htmlUrl="https://www.aljazeera.com"></outline>
            <outline title="Articles — brandur.org" text="Articles — brandur.org" xmlUrl="https://brandur.org/articles.atom" htmlUrl="https://brandur.org"></outline>
            <outline title="BBC News - Home" text="BBC News - Home" xmlUrl="https://feeds.bbci.co.uk/news/rss.xml" htmlUrl="https://www.bbc.co.uk/news/"></outline>
            <outline title="benjojo blog" text="benjojo blog" xmlUrl="https://blog.benjojo.co.uk/rss.xml" htmlUrl="https://blog.benjojo.co.uk"></outline>
            <outline title="Boing Boing" text="Boing Boing" xmlUrl="https://boingboing.net/feed" htmlUrl="https://boingboing.net"></outline>
            <outline title="Brain Pickings" text="Brain Pickings" xmlUrl="https://feeds.feedburner.com/brainpickings/rss" htmlUrl="https://www.brainpickings.org"></outline>
            <outline title="CogDogBlog" text="CogDogBlog" xmlUrl="https://cogdogblog.com/feed/" htmlUrl="https://cogdogblog.com"></outline>
            <outline title="Colossal" text="Colossal" xmlUrl="https://www.thisiscolossal.com/feed/" htmlUrl="https://www.thisiscolossal.com"></outline>
            <outline title="CommitStrip" text="CommitStrip" xmlUrl="https://www.commitstrip.com/en/feed/?" htmlUrl="https://www.commitstrip.com"></outline>
            <outline title="computers are bad" text="computers are bad" xmlUrl="https://computer.rip/rss.xml" htmlUrl="https://computer.rip"></outline>
            <outline title="Current Watches, Warnings and Advisories for Prince William (VAC153) Virginia Issued by the National Weather Service" text="Current Watches, Warnings and Advisories for Prince William (VAC153) Virginia Issued by the National Weather Service" xmlUrl="https://alerts.weather.gov/cap/wwaatmget.php?x=VAC153&amp;y=0" htmlUrl="https://alerts.weather.gov/cap/wwaatmget.php?x=VAC153&amp;y=0"></outline>
            <outline title="Daemonic Dispatches" text="Daemonic Dispatches" xmlUrl="http://www.daemonology.net/blog/index.rss" htmlUrl="https://www.daemonology.net/blog/"></outline>
            <outline title="Daring Fireball" text="Daring Fireball" xmlUrl="https://daringfireball.net/feeds/json" htmlUrl="https://daringfireball.net/"></outline>
            <outline title="death and gravity" text="death and gravity" xmlUrl="https://death.andgravity.com/_feed/index.xml" htmlUrl="https://death.andgravity.com/"></outline>
            <outline title="Dr. Brian Robert Callahan" text="Dr. Brian Robert Callahan" xmlUrl="https://briancallahan.net/blog/feed.xml" htmlUrl="https://briancallahan.net/blog"></outline>
            <outline title="Drew DeVault&#39;s blog" text="Drew DeVault&#39;s blog" xmlUrl="https://drewdevault.com/blog/index.xml" htmlUrl="https://drewdevault.com"></outline>
            <outline title="Eli Bendersky&#39;s website" text="Eli Bendersky&#39;s website" xmlUrl="https://eli.thegreenplace.net/feeds/all.atom.xml" htmlUrl="https://eli.thegreenplace.net/"></outline>
            <outline title="English Wikinews Atom feed." text="English Wikinews Atom feed." xmlUrl="https://en.wikinews.org/w/index.php?title=Special:NewsFeed&amp;feed=atom&amp;categories=Published&amp;notcategories=No%20publish%7CArchived%7CAutoArchived%7Cdisputed&amp;namespace=0&amp;count=30&amp;hourcount=124&amp;ordermethod=categoryadd&amp;stablepages=only" htmlUrl="https://en.wikinews.org/wiki/Main_Page"></outline>
            <outline title="fabiensanglard.net" text="fabiensanglard.net" xmlUrl="https://fabiensanglard.net/rss.xml" htmlUrl="https://fabiensanglard.net"></outline>
            <outline title="fasterthanli.me" text="fasterthanli.me" xmlUrl="https://fasterthanli.me/index.xml" htmlUrl="https://fasterthanli.me"></outline>
            <outline title="flak" text="flak" xmlUrl="https://flak.tedunangst.com/rss" htmlUrl="https://flak.tedunangst.com/"></outline>
            <outline title="Geek&amp;Poke" text="Geek&amp;Poke" xmlUrl="http://feeds.feedburner.com/GeekAndPoke" htmlUrl="https://geek-and-poke.com/"></outline>
            <outline title="Hacker News" text="Hacker News" xmlUrl="https://news.ycombinator.com/rss" htmlUrl="https://news.ycombinator.com/"></outline>
            <outline title="https://danluu.com/atom/index.xml" text="https://danluu.com/atom/index.xml" xmlUrl="https://danluu.com/atom.xml" htmlUrl="https://danluu.com/atom/index.xml"></outline>
            <outline title="https://mikestone.me/" text="https://mikestone.me/" xmlUrl="https://mikestone.me/feed.xml" htmlUrl="https://mikestone.me/"></outline>
            <outline title="Ivan on Containers, Kubernetes, and Server-Side" text="Ivan on Containers, Kubernetes, and Server-Side" xmlUrl="https://iximiuz.com/feed.rss" htmlUrl="https://iximiuz.com/"></outline>
            <outline title="Jeff Geerling&#39;s Blog" text="Jeff Geerling&#39;s Blog" xmlUrl="https://www.jeffgeerling.com/blog.xml" htmlUrl="http://www.jeffgeerling.com/"></outline>
            <outline title="Jhey Tompkins Posts" text="Jhey Tompkins Posts" xmlUrl="https://jhey.dev/posts.xml" htmlUrl="https://jhey.dev/"></outline>
            <outline title="Josh Comeau&#39;s blog" text="Josh Comeau&#39;s blog" xmlUrl="https://www.joshwcomeau.com/rss.xml" htmlUrl="https://www.joshwcomeau.com/"></outline>
            <outline title="Joy of Tech (RSS Feed)" text="Joy of Tech (RSS Feed)" xmlUrl="https://www.geekculture.com/joyoftech/jotblog/atom.xml" htmlUrl="http://joyoftech.com/joyoftech/"></outline>
            <outline title="Julia Evans" text="Julia Evans" xmlUrl="https://jvns.ca/atom.xml" htmlUrl="http://jvns.ca"></outline>
            <outline title="kottke.org" text="kottke.org" xmlUrl="http://feeds.kottke.org/main" htmlUrl="http://kottke.org/"></outline>
            <outline title="Laughing Squid" text="Laughing Squid" xmlUrl="https://laughingsquid.com/feed/" htmlUrl="https://laughingsquid.com/"></outline>
            <outline title="LibriVox&#39;s New Releases" text="LibriVox&#39;s New Releases" xmlUrl="https://librivox.org/rss/latest_releases" htmlUrl="http://librivox.org"></outline>
            <outline title="Lifehacker" text="Lifehacker" xmlUrl="https://lifehacker.com/rss" htmlUrl="https://lifehacker.com"></outline>
            <outline title="Logos By Nick" text="Logos By Nick" xmlUrl="https://logosbynick.com/feed/" htmlUrl="https://logosbynick.com"></outline>
            <outline title="LWN.net" text="LWN.net" xmlUrl="https://lwn.net/headlines/rss" htmlUrl="https://lwn.net"></outline>
            <outline title="Manton Reece" text="Manton Reece" xmlUrl="https://www.manton.org/feed.xml" htmlUrl="https://www.manton.org/"></outline>
            <outline title="Marco.org" text="Marco.org" xmlUrl="https://marco.org/rss" htmlUrl="https://marco.org/"></outline>
            <outline title="Martin Fowler" text="Martin Fowler" xmlUrl="https://martinfowler.com/feed.atom" htmlUrl="https://martinfowler.com"></outline>
            <outline title="MaskRay" text="MaskRay" xmlUrl="https://maskray.me/blog/atom.xml" htmlUrl="https://maskray.me/blog/"></outline>
            <outline title="matklad" text="matklad" xmlUrl="https://matklad.github.io/feed.xml" htmlUrl="https://matklad.github.io//"></outline>
            <outline title="Matthias Endler" text="Matthias Endler" xmlUrl="https://endler.dev/rss.xml" htmlUrl="https://endler.dev"></outline>
            <outline title="Matt Might&#39;s blog" text="Matt Might&#39;s blog" xmlUrl="https://matt.might.net/articles/feed.rss" htmlUrl="http://matt.might.net/"></outline>
            <outline title="Michael Tsai" text="Michael Tsai" xmlUrl="https://mjtsai.com/blog/feed/" htmlUrl="https://mjtsai.com/blog"></outline>
            <outline title="Moments in Graphics" text="Moments in Graphics" xmlUrl="http://momentsingraphics.de/RSS.xml" htmlUrl="http://momentsingraphics.de/"></outline>
            <outline title="MonkeyUser" text="MonkeyUser" xmlUrl="https://www.monkeyuser.com/feed.xml" htmlUrl="https://www.monkeyuser.com"></outline>
            <outline title="Mr. Money Mustache" text="Mr. Money Mustache" xmlUrl="https://feeds.feedburner.com/MrMoneyMustache" htmlUrl="https://www.mrmoneymustache.com"></outline>
            <outline title="Nackblog" text="Nackblog" xmlUrl="http://jnack.com/blog/feed/" htmlUrl="http://jnack.com/blog"></outline>
            <outline title="News : NPR" text="News : NPR" xmlUrl="https://feeds.npr.org/1001/rss.xml" htmlUrl="https://www.npr.org/templates/story/story.php?storyId=1001"></outline>
            <outline title="Nicky&#39;s New Shtuff" text="Nicky&#39;s New Shtuff" xmlUrl="https://ncase.me/feed.xml" htmlUrl="https://ncase.me/"></outline>
            <outline title="null program" text="null program" xmlUrl="https://nullprogram.com/feed/" htmlUrl="https://nullprogram.com"></outline>
            <outline title="One Foot Tsunami" text="One Foot Tsunami" xmlUrl="https://onefoottsunami.com/feed/json/" htmlUrl="https://onefoottsunami.com/"></outline>
            <outline title="OpenBSD Journal" text="OpenBSD Journal" xmlUrl="https://www.undeadly.org/cgi?action=rss" htmlUrl="https://www.undeadly.org/"></outline>
            <outline title="OpenBSD Webzine" text="OpenBSD Webzine" xmlUrl="https://webzine.puffy.cafe/atom.xml" htmlUrl="https://webzine.puffy.cafe/"></outline>
            <outline title="Open Source Musings" text="Open Source Musings" xmlUrl="https://opensourcemusings.com/feed/" htmlUrl="https://opensourcemusings.com/"></outline>
            <outline title="Open Source with Christopher Lydon" text="Open Source with Christopher Lydon" xmlUrl="https://radioopensource.org/feed/" htmlUrl="https://radioopensource.org"></outline>
            <outline title="Paul E. McKenney&#39;s Journal" text="Paul E. McKenney&#39;s Journal" xmlUrl="https://paulmck.livejournal.com/data/rss" htmlUrl="https://paulmck.livejournal.com/"></outline>
            <outline title="Phys.org - latest science and technology news stories" text="Phys.org - latest science and technology news stories" xmlUrl="https://phys.org/rss-feed/breaking/" htmlUrl="https://phys.org/"></outline>
            <outline title="Schneier on Security" text="Schneier on Security" xmlUrl="https://www.schneier.com/feed/atom/" htmlUrl="https://www.schneier.com/blog/"></outline>
            <outline title="Scott H Young" text="Scott H Young" xmlUrl="http://feeds.feedburner.com/scotthyoung/HAHx" htmlUrl="https://www.scotthyoung.com/blog"></outline>
            <outline title="Scripting News" text="Scripting News" xmlUrl="http://scripting.com/rss.xml" htmlUrl="http://scripting.com/"></outline>
            <outline title="Simon Willison&#39;s Weblog" text="Simon Willison&#39;s Weblog" xmlUrl="https://simonwillison.net/atom/everything/" htmlUrl="http://simonwillison.net/"></outline>
            <outline title="Slashdot" text="Slashdot" xmlUrl="http://rss.slashdot.org/Slashdot/slashdot" htmlUrl="https://slashdot.org/"></outline>
            <outline title="Stochastic Lifestyle" text="Stochastic Lifestyle" xmlUrl="https://www.stochasticlifestyle.com/feed/" htmlUrl="https://www.stochasticlifestyle.com"></outline>
            <outline title="swyx.io blog" text="swyx.io blog" xmlUrl="https://www.swyx.io/api/rss.xml" htmlUrl="https://swyx.io"></outline>
            <outline title="Tania Rascia | RSS Feed" text="Tania Rascia | RSS Feed" xmlUrl="https://www.taniarascia.com/rss.xml" htmlUrl="https://www.taniarascia.com"></outline>
            <outline title="The Atlantic" text="The Atlantic" xmlUrl="https://feeds.feedburner.com/TheAtlantic" htmlUrl="https://www.theatlantic.com/"></outline>
            <outline title="The David Brownman Blog" text="The David Brownman Blog" xmlUrl="https://xavd.id/blog/feeds/rss.xml" htmlUrl="https://xavd.id"></outline>
            <outline title="The Next Web" text="The Next Web" xmlUrl="https://feeds2.feedburner.com/thenextweb" htmlUrl="https://thenextweb.com"></outline>
            <outline title="The Oatmeal - Comics, Quizzes, &amp; Stories" text="The Oatmeal - Comics, Quizzes, &amp; Stories" xmlUrl="https://feeds.feedburner.com/oatmealfeed" htmlUrl="http://theoatmeal.com/"></outline>
            <outline title="The ryg blog" text="The ryg blog" xmlUrl="https://fgiesen.wordpress.com/feed/" htmlUrl="https://fgiesen.wordpress.com"></outline>
            <outline title="The Sweet Setup" text="The Sweet Setup" xmlUrl="https://thesweetsetup.com/feed/" htmlUrl="https://thesweetsetup.com"></outline>
            <outline title="Varun Vachhar" text="Varun Vachhar" xmlUrl="https://varun.ca/rss.xml" htmlUrl="https://varun.ca"></outline>
            <outline title="webcomic name" text="webcomic name" xmlUrl="https://webcomicname.com/rss" htmlUrl="https://webcomicname.com/"></outline>
            <outline title="Whatever" text="Whatever" xmlUrl="https://whatever.scalzi.com/feed/" htmlUrl="https://whatever.scalzi.com"></outline>
            <outline title="Wikipedia featured articles feed" text="Wikipedia featured articles feed" xmlUrl="https://en.wikipedia.org/w/api.php?action=featuredfeed&amp;feed=featured&amp;feedformat=atom" htmlUrl="https://en.wikipedia.org/wiki/Main_Page"></outline>
            <outline title="Wikipedia picture of the day feed" text="Wikipedia picture of the day feed" xmlUrl="https://en.wikipedia.org/w/api.php?action=featuredfeed&amp;feed=potd&amp;feedformat=atom" htmlUrl="https://en.wikipedia.org/wiki/Main_Page"></outline>
            <outline title="Writing - rachelbythebay" text="Writing - rachelbythebay" xmlUrl="https://rachelbythebay.com/w/atom.xml" htmlUrl="https://rachelbythebay.com/w/"></outline>
            <outline title="www.linusakesson.net" text="www.linusakesson.net" xmlUrl="http://www.linusakesson.net/rssfeed.php" htmlUrl="http://www.linusakesson.net/"></outline>
            <outline title="xkcd.com" text="xkcd.com" xmlUrl="https://xkcd.com/rss.xml" htmlUrl="https://xkcd.com/"></outline>
            <outline title="Yahoo Tech" text="Yahoo Tech" xmlUrl="https://www.yahoo.com/tech/rss" htmlUrl="https://finance.yahoo.com/tech"></outline>
            <outline title="Zhenghao&#39;s blog" text="Zhenghao&#39;s blog" xmlUrl="https://www.zhenghao.io/rss.xml" htmlUrl="https://zhenghao.io"></outline>
        </outline>
    </body>
</opml>

News Reader over ssh

Newsboat is an alternative news reader to Miniflux: an RSS/Atom feed reader for the text console. Its database of feeds is separate from Miniflux, and it displays no images.

Reference:

2.25-screenshot_1x-33f26153.png
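Newsboat keeps its subscriptions in a plain-text urls file rather than a database, one feed per line with optional quoted tags. A minimal sketch, reusing two feed URLs from the OPML above (the tag strings are just illustrations):

```
# File: ~/.newsboat/urls  -- one feed URL per line, optional quoted tags
https://xkcd.com/rss.xml "comics"
https://simonwillison.net/atom/everything/ "tech"
```

Run newsboat once after editing the file and it will fetch the new feeds.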

Continue

Now that a world of news feeds is at your fingertips, how about doing some electronics work? Read on, my friend, for an introduction to BeagleBone boards.

Proceed in the order presented; some things depend on prior setups.


BeagleBone

SBC (Single/Small Board Computer)

The BeagleBone (BB) [1] line supports the Linux kernel and GPIO hardware connections to devices via device trees [2]. It is similar to the Raspberry Pi line, except that BB is open-source hardware.

They run a full-featured Debian distribution, capable of running C, C++, Rust, Python, and Node.js, all the way up to a PostgreSQL database. The boards come with a web-based IDE called Cloud9 [3] that can also run Node.js programs on the BB. The AI line also supplies image-recognition packages.

You can connect a display, keyboard and mouse, or run it headless.

All have USB connectors to ssh into from a connected computer, most have wired Ethernet RJ-45 sockets, and some have Bluetooth and WiFi. Controlling relays, lights, I2C sensors, and open switches can be done in most any supported language, even bash.
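As a taste of the bash option, here is a minimal sketch that reads and writes a GPIO pin through the legacy sysfs interface. The GPIO_ROOT variable and pin number are illustrative assumptions; on a real board the root would be /sys/class/gpio and the pin must already be exported.

```shell
#!/bin/bash
# Sketch: GPIO access from bash via the legacy sysfs interface.
# GPIO_ROOT is parameterized so it can be pointed at a test directory;
# on a real BeagleBone it would be /sys/class/gpio.
GPIO_ROOT="${GPIO_ROOT:-/sys/class/gpio}"

read_gpio() {            # read_gpio <number>  -> prints 0 or 1
  cat "$GPIO_ROOT/gpio$1/value"
}

write_gpio() {           # write_gpio <number> <0|1>
  echo "$2" > "$GPIO_ROOT/gpio$1/value"
}
```

With a relay wired to, say, gpio60, `write_gpio 60 1` would energize it; the same calls work from cron jobs or systemd units.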

For robotics, they feature a Programmable real-time unit and industrial communications subsystem (PRU-ICSS) [4].

The forum for questions is here: https://forum.beagleboard.org/

For example:


  • BBB (Black) https://www.beagleboard.org/boards/beaglebone-black
  • beagle-black-wired-shadow_no5v-1-400x400.png
    • This runs very capably:
      • Music Player Daemon (MPD) [5]. You can control what is played from the command line or through a phone app.
      • Control lights and relays. You can use the Python based Flask [6] package to create nice looking web apps for this.

  • BBAI (AI 32-bit, armv7l) https://www.beagleboard.org/boards/beaglebone-ai
  • beagleboane-ai-heatsink-500x356.png
    • Applications I used with this include:
      • NextCloud server (little slow).
      • Mini Network Attached Storage (NAS) with mirrored SSDs (mdadm).
      • HomeAssistant home automation (without cameras).


  • BeaglePlay is the latest offering (https://www.beagleboard.org/boards/beagleplay)
  • BeaglePlay-front-500x281.jpg
    • Quad Arm® Cortex®-A53 microprocessor subsystem @ 1.4GHz
    • Arm® Cortex®-M4F @ 400MHz
    • Dual-core PRU subsystem @ 333MHz
    • PowerVR® Rogue™ GPU

Features

  • Small 8cm x 8cm form-factor
  • 2GB RAM and 16GB on-board eMMC flash with high-speed interface
  • USB type-C for power and data and USB type-A host
  • Gigabit Ethernet, Single-pair Ethernet with PoDL
  • Integrated 2.4GHz and 5GHz WiFi
  • 2.4GHz and Sub-GHz programmable wireless MCU, integrated IEEE802.15.4 drivers/firmware
  • Full-size HDMI, OLDI, 4-lane CSI
  • Expansion via mikroBUS, Grove, QWIIC
  • Zero-download out-of-box software experience with Debian GNU/Linux

  1. https://www.beagleboard.org/
  2. https://www.beagleboard.org/blog/2022-03-31-device-tree-supporting-similar-boards-the-beaglebone-example
  3. https://github.com/c9
  4. https://beagleboard.org/pru
  5. https://www.musicpd.org/
  6. https://flask.palletsprojects.com/en/2.2.x/

Serial Debug Connection

You can get through severe boot errors, such as the network not connecting or a disk partition not mounting, using a USB to TTL-232 cable [1], also called an FTDI cable. This will show all the boot messages and allow login, even recovery if the kernel will not boot.

Make sure it is 3.3 volts! Connect the black wire to pin 1, orange/green to pin 4, and yellow/white to pin 5, directly on the SBC board.

1 AU51n75Ls0JwfPeYMD6sIQ-547387362.png

  • USB to TTL cable
  1. https://www.adafruit.com/product/4331.

28W2088-40.webp

BBAI-64 Adapter

The BBAI-64 has a 3-pin connector so it needs an adapter; Micro JST MX 1.25 4Pins Connector Plug Socket 1.25mm Pitch Female Connector PCB Socket 150mm Cable. The red wire is not used. Plug the other three wires into the USB/TTL adapter above, matching colors.

Screen Shot 2022-10-22 at 9.18.32 AM.png

Screen on Linux

When you connect the USB end of the serial cable to a Linux-based system like Ubuntu, a new tty* device appears in the system. Note down its name; in my case it was /dev/ttyUSB0. Open a terminal and issue the command below:

$ sudo screen /dev/ttyUSB0 115200

Run ls /dev/ before and after connecting the cable to find the new device name.
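The before/after comparison can be scripted; a small sketch (the file paths are arbitrary):

```shell
#!/bin/bash
# Sketch: print device names present in the "after" snapshot but not "before".
new_devices() {          # new_devices <before-file> <after-file>
  comm -13 <(sort "$1") <(sort "$2")
}

# Typical use:
#   ls /dev > /tmp/dev-before    # before plugging in the cable
#   ls /dev > /tmp/dev-after     # after plugging it in
#   new_devices /tmp/dev-before /tmp/dev-after    # e.g. prints ttyUSB0
```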

You may have to reboot the PC and BB after connection to clear garbled characters.

Recover from lost root password

Flash a new mini-sd card and boot off of it, then:

  1. Mount the eMMC flash:  mount /dev/mmcblk1p2 /media/card
  2. Change the root of the file system to be the partition on the eMMC flash which you just mounted: chroot /media/card
  3. Change the password of the root user to something you know: passwd root
  4. Exit out of the changed root: exit
  5. Shutdown the BeagleBone Black : shutdown -h now
  6. Disconnect the power from the board
  7. Eject the microSD card.
  8. Reconnect the power to the board
  9. Watch the board boot up, and log in as root. You should be able to log in with the password that you just set.

Flash new Operating System

Latest software: https://www.beagleboard.org/distros

Flashing refers to the process of transferring the Operating System image from a Micro-SD card to the on-board mmc memory disk.

WARNING: It will overlay the contents of your BBB soldered in mmc memory.

Be aware that the on-board mmc is only about 15GB, so flashing from an SD card with a partition larger than that will fail with an out-of-space error.
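A quick pre-flight size check can save a failed flash; a sketch, where the 15GB figure is the rough eMMC capacity mentioned above:

```shell
#!/bin/bash
# Sketch: check whether an image file will fit on the ~15GB on-board eMMC.
EMMC_BYTES=$((15 * 1000 * 1000 * 1000))   # rough eMMC capacity, an assumption

fits_emmc() {            # fits_emmc <image-file>  -> "fits" or "too big"
  local size
  size=$(stat -c %s "$1")
  if [ "$size" -le "$EMMC_BYTES" ]; then
    echo "fits"
  else
    echo "too big"
  fi
}
```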

Boot Order

The BB-AI 32-bit has a different disk device naming convention than 64-bit ;-)

BBAI-32 /dev/mmcblk0p1 = Micro-SD; /dev/mmcblk1p1 = On-board mmc

$ sudo inxi -D
Drives:    Local Storage: total: 1005.80 GiB used: 338.77 GiB (33.7%) 
           ID-1: /dev/mmcblk0 model: BC1S5 size: 59.69 GiB 
           ID-2: /dev/mmcblk1 model: TB2916 size: 14.60 GiB 
...

Note how device number 0 conveniently matches the natural Linux boot order here.

BBAI-64 /dev/mmcblk1p1 = Micro-SD; /dev/mmcblk0p1 = On-board mmc

$ sudo inxi -D
Drives:    Local Storage: total: raw: 1.92 TiB usable: 1.01 TiB used: 156.93 GiB (15.2%) 
           ID-1: /dev/mmcblk0 vendor: Kingston model: TB2916 size: 14.6 GiB 
           ID-2: /dev/mmcblk1 model: BC1S5 size: 59.69 GiB 
...
BB-AI 64-bit uses a new approach, extlinux.conf [1], which will probably be the standard going forward [2].

Here it is set to boot from mmcblk1p2.

File: /boot/firmware/extlinux/extlinux.conf

label Linux microSD
    kernel /Image
    fdt /k3-j721e-beagleboneai64-no-shared-mem.dtb
    append console=ttyS2,115200n8 earlycon=ns16550a,mmio32,0x02800000 root=/dev/mmcblk1p2 ro rootfstype=ext4 rootwait net.ifnames=0
    fdtdir /
    initrd /initrd.img

On the BBAI-64, by default we are using U-Boot’s now (finally) standard extlinux.conf, you’ll find it in the first fat partition of either the eMMC or microSD… (I have a long term goal to convert our custom “am335x/am57xx” uEnv.txt solution to extlinux.conf…) [2]

Reference:

  1. https://wiki.syslinux.org/wiki/index.php?title=EXTLINUX
  2. https://forum.beagleboard.org/t/bbai-64-boot-order/33129

Enable Flashing

BBAI-32: Flash

  • For example, download and burn a Micro-SD card, BBAI (32-bit) flasher: https://www.beagleboard.org/distros

    • Burn the download file to a Micro-SD, normally using Etcher https://etcher.balena.io

    • Turn off the BBB, insert the SD card and apply power.

    • This will copy the boot files on top of the mmc on-board disk.

If you start with the non-flasher image and verify that it boots, THEN edit the

/boot/uEnv.txt 

file to activate (un-comment) the flasher script (last line in the file) and reboot the device to flash it.

File: /boot/uEnv.txt

##enable Generic eMMC Flasher:
cmdline=init=/usr/sbin/init-beagle-flasher

BBAI-64: Flash

  • To flash a microSD to eMMC:

Run:

sudo apt update ; sudo apt upgrade
sudo beagle-flasher

Or to create a dedicated flasher:

sudo apt update ; sudo apt upgrade
sudo enable-beagle-flasher
sudo reboot

Reference: https://forum.beagleboard.org/t/ai-64-how-to-flash-emmc/32384/2

Flashing Process

You will see the lights doing a bouncing flash (1,2,3,4 - 4,3,2,1 - etc.) for several minutes; eventually the board will turn itself off. Then pop out the SD card and re-apply power to boot up the newly flashed on-board mmc.

Disable Flashing

After booting the newly flashed on-board mmc;

BB-AI 32-Bit

  1. mount the SD card:
sudo mount /dev/mmcblk0p1 /media/card
  1. Comment out the ‘flashing’ line in uEnv.txt like this:
$ tail /media/card/boot/uEnv.txt 
...
##enable Generic eMMC Flasher:
#cmdline=init=/usr/sbin/init-beagle-flasher
  1. It is now safe to boot off the SD card! Push it back in and it will be selected during the boot process, before the mmc.
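The comment-out step above can be scripted with sed; a sketch that comments out the flasher line in whatever uEnv.txt you point it at (the path in the usage note is the mount point from the steps above):

```shell
#!/bin/bash
# Sketch: comment out the eMMC-flasher cmdline in a uEnv.txt file.
disable_flasher() {      # disable_flasher <path-to-uEnv.txt>
  sed -i 's|^cmdline=init=/usr/sbin/init-beagle-flasher|#&|' "$1"
}
```

For example, after mounting the SD card: disable_flasher /media/card/boot/uEnv.txt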

I usually boot onto the Micro-SD and run that way. If it fails you can always remove it and still have a bootable environment.

BB-AI 64-Bit

To disable a dedicated flasher, download the latest non-flasher image and re-burn the Micro-SD card.

To check, mount it on a PC and look at the file:

File: <mount point>/boot/firmware/extlinux/extlinux.conf

...
    append console=ttyS2,115200n8 earlycon=ns16550a,mmio32,0x02800000 root=/dev/mmcblk1p2 ro rootfstype=ext4 rootwait net.ifnames=0
...

The above shows a normal boot file. A flasher card will instead point to a beagle-flasher init to boot into.
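That inspection can be reduced to a grep; a sketch, where the 'flasher' marker string is an assumption about how the flasher image names its init:

```shell
#!/bin/bash
# Sketch: classify a mounted card's extlinux.conf as flasher or normal.
boot_kind() {            # boot_kind <path-to-extlinux.conf>
  if grep -q 'flasher' "$1"; then
    echo "flasher image"
  else
    echo "normal boot image"
  fi
}
```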

Duplicate Micro-SD card

It is a good idea to back up the active Micro-SD card after all your hard work. Re-imaging and installing all the packages is a pain!

This process will copy everything on the SD card, even blank data. So a 64GB card will make a 64GB img file. Your destination card must be at least as big as the original.

Insert the original SD card and check the name of the device (usually mmcblkX or sdX):

$ sudo lsblk -a
...
mmcblk0     179:0    0  59.7G  0 disk  
├─mmcblk0p1 179:1    0   128M  0 part  /media/don/BOOT
└─mmcblk0p2 179:2    0  59.6G  0 part  /media/don/rootfs
$ sudo fdisk -l
...
Device         Boot   Start      End  Sectors  Size Id Type
/dev/mmcblk0p1 *       2048  2099199  2097152    1G  c W95 FAT32 (LBA)
/dev/mmcblk0p2      2099200 31116287 29017088 13.9G 83 Linux

In my case the SD card is /dev/mmcblk0 (the *p1 and *p2 are the partitions).

You have to unmount the devices:

$ sudo umount /dev/mmcblk0p1
$ sudo umount /dev/mmcblk0p2

To create an image of the device:

$ sudo dd if=/dev/mmcblk0 of=~/sd-card-copy.img bs=4M status=progress

This will take a while.

Once it's finished, insert the empty SD card. If the device is different (a USB or other type of SD card reader), verify its name and be sure to unmount it:

$ sudo fdisk -l
$ sudo umount /dev/mmcblk0p1
$ sudo umount /dev/mmcblk0p2

Write the image to the device:

$ sudo dd if=~/sd-card-copy.img of=/dev/mmcblk0 bs=4M status=progress

The write operation is much slower than before.
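You can rehearse the dd round trip safely against an ordinary file standing in for /dev/mmcblk0 before risking a real card; the file names below are arbitrary:

```shell
#!/bin/bash
# Sketch: rehearse the image/restore round trip on a plain file.
SRC=/tmp/fake-sd.bin                      # stand-in for /dev/mmcblk0
IMG=/tmp/sd-card-copy.img

dd if=/dev/urandom of="$SRC" bs=1M count=4 2>/dev/null   # fake "card" contents
dd if="$SRC" of="$IMG" bs=1M 2>/dev/null                 # image the card
cmp -s "$SRC" "$IMG" && echo "images match"
```

The same two dd invocations, with the device name substituted for $SRC, are exactly the backup and restore commands above.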

Update Software

Update examples in the Cloud9 IDE workspace

cd /var/lib/cloud9
git pull
...
Update the boot-up scripts and Linux kernel

cd /opt/scripts
git pull

Update Kernel

cd /opt/scripts
sudo tools/update_kernel.sh

Expand SD filesystem

Sometimes when you burn an .iso image to the SD card and boot up the BB, the filesystem will be 4GB while your SD card could be 32GB.

This script will expand the filesystem after your first boot up onto it.

$ sudo /opt/scripts/tools/grow_partition.sh
  • If the script is not available, use this:

Examine the partitioning on your external SD card:

$ sudo fdisk /dev/mmcblk0

Then enter 'p' to print the partition table. An important value to note is the start sector of the Linux partition. Next enter 'd' to delete the Linux partition. (If you have two partitions it will ask which partition to delete; it should be 2.)

Enter 'n' to create a new partition. For the first two questions (partition type and number) just press enter. For the start sector, be absolutely sure to use the same number it had originally. For the last sector, you can use whatever you want in case you don't want to use your whole micro SD, but you can just hit enter to use the default (the max size possible).

If you are satisfied with your changes at this point you can enter 'w' to commit to your changes.

A warning will appear if you are repartitioning the disk you booted from. In that case, reboot your system.

Lastly, after your BeagleBone Black reboots, you need to expand the file system. Before expanding, run fsck as root on the Linux partition (partition 2 in the layout shown earlier), otherwise the next command will fail:

$ sudo fsck /dev/mmcblk0p2

Finally, run the following command (again as root):

$ sudo resize2fs /dev/mmcblk0p2

Once this command completes, you're done!

Reference:

Disable cloud9 development platform

Cloud9 allows over-the-web programming on the system, which is a security risk when it is not being used.

Disable:

systemctl disable cloud9.service
systemctl disable bonescript.service
systemctl disable bonescript.socket
systemctl disable bonescript-autorun.service

Example of service file

debian@pocketbeagle:~$ cat /lib/systemd/system/cloud9.service 
[Unit] 
Description=Cloud9 IDE 
ConditionPathExists=|/var/lib/cloud9 

[Service] 
WorkingDirectory=/opt/cloud9/build/standalonebuild 
EnvironmentFile=/etc/default/cloud9 
ExecStartPre=/opt/cloud9/cloud9-symlink 
ExecStart=/usr/bin/nodejs server.js --packed -w /var/lib/cloud9 
SyslogIdentifier=cloud9ide 
User=1000 

bb-bbai-tether system

This is used to connect your device/PC to => BB-AI and BBB over WiFi.

BBAI-64 uses systemd-networkd [1] (files: /etc/systemd/network/*)

Config file: /etc/default/bb-wl18xx

# TETHER_ENABLED: Whether or not to run the /usr/bin/bb-wl18xx-tether daemon; set to no to disable.
#TETHER_ENABLED=yes
TETHER_ENABLED=no

# USE_CONNMAN_TETHER: Whether or not to just use connman tether inteface; set to no to disable.
USE_CONNMAN_TETHER=no

# USE_WL18XX_IP_PREFIX: default IP block of SoftAP0 interface
USE_WL18XX_IP_PREFIX="192.168.18"

# USE_INTERNAL_WL18XX_MAC_ADDRESS: use internal mac address; set to no to disable.
USE_INTERNAL_WL18XX_MAC_ADDRESS=yes

# USE_WL18XX_MAC_ADDRESS: use custom mac address, for when work wifi starts sending deauthentication packet spam.
#USE_WL18XX_MAC_ADDRESS="AB:10:23:C:16:78"

# USE_WL18XX_POWER_MANAGMENT: (sudo iwconfig wlan0 power [on/off]). on = boot default, off is more reliable for accessing idle systems over time
USE_WL18XX_POWER_MANAGMENT=off

# USE_PERSONAL_SSID: set custom ssid
#USE_PERSONAL_SSID="BeagleBone"
USE_PERSONAL_SSID="AB10"

# USE_PERSONAL_PASSWORD: set ssid password
USE_PERSONAL_PASSWORD="BeagleBone"

# USE_GENERATED_DNSMASQ: use generated version of /etc/dnsmasq.d/SoftAp0; set to no so user can modify /etc/dnsmasq.d/SoftAp0
USE_GENERATED_DNSMASQ=yes

# USE_GENERATED_HOSTAPD: use generated version of /etc/hostapd.conf; set to no so user can modify /etc/hostapd.conf
USE_GENERATED_HOSTAPD=yes

# USE_APPENDED_SSID: appends mac address after SSID (aka -WXYZ, BeagleBone-WXYZ)
USE_APPENDED_SSID=yes

# USE_PERSONAL_COUNTRY: (default is US, but note enabled (#) with comment) 
#USE_PERSONAL_COUNTRY=US

Service:

$ sudo systemctl status bb-bbai-tether 
● bb-bbai-tether.service - BBAI brcmfmac tether Service
    Loaded: loaded (/lib/systemd/system/bb-bbai-tether.service; enabled; vendor preset: enabled)
    Active: activating (start) since Sun 2022-09-04 12:19:48 EDT; 16s ago 
Cntrl PID: 2585 (bb-bbai-tether)
     Tasks: 2 (limit: 937)
    Memory: 752.0K
    CGroup: /system.slice/bb-bbai-tether.service
            ├─2585 /bin/bash -e /usr/bin/bb-bbai-tether
            └─2605 sleep 5 
Sep 04 12:19:48 bbb.example.com systemd[1]: Starting BBAI brcmfmac tether Service... 
Sep 04 12:19:53 bbb.example.com bb-bbai-tether[2585]: bbai:tether waiting for /sys/class/net/wlan0 
Sep 04 12:19:58 bbb.example.com bb-bbai-tether[2585]: bbai:tether waiting for /sys/class/net/wlan0 
Sep 04 12:20:03 bbb.example.com bb-bbai-tether[2585]: bbai:tether waiting for /sys/class/net/wlan0 
don@app:~$ sudo systemctl stop bb-bbai-tether
  1. https://wiki.debian.org/SystemdNetworkd

BBAI-64 Switches

Push-buttons used on the board.

  1. A switch is provided to allow switching between the modes.
    • Holding the boot switch down during a removal and reapplication of power without a microSD card inserted will force the boot source to be the USB port and if nothing is detected on the USB client port, it will go to the serial port for download.

    • Without holding the switch, the board will try to boot from the eMMC. If it is empty, it will then try booting from the microSD slot, followed by the serial port, and then the USB port.

    • If you hold the boot switch down during the removal and reapplication of power to the board, and you have a microSD card inserted with a bootable image, the board will boot from the microSD card.

      NOTE: Pressing the RESET button on the board will NOT result in a change of the boot mode. You MUST remove power and reapply power to change the boot mode. The boot pins are sampled during power on reset from the PMIC to the processor. The reset button on the board is a warm reset only and will not force a boot mode change.

BBAI-64 HDMI Display Connection

When connecting to an HDMI monitor, make sure your miniDP adapter is active. A passive adapter will not work. See Fig: Display adaptors.

BBAI-64 default memory device tree

Reference: https://forum.beagleboard.org/t/beaglebone-ai-64-memory-4gb-or-2gb-ram/32270

the AI-64 has 4GB installed, the other 2GB is reserved in the default image thru remoteproc for the other companion cores (c6, c7x, and R5)…

We have a custom device tree for you if you’d like to disable the other cores and use all 4GB with the A72’s…

=> No Shared Memory => k3-j721e-beagleboneai64-no-shared-mem.dtb

debian@BeagleBone:~$ sudo find /boot/ -name extlinux.conf
/boot/firmware/extlinux/extlinux.conf

debian@BeagleBone:~$ sudo ls /boot/firmware
extlinux  Image       k3-j721e-beagleboneai64.dtb         k3-j721e-common-proc-board.dtb    k3-j721e-sk.dtb  sysfw.itb     tispl.bin
ID.txt      initrd.img  k3-j721e-beagleboneai64-no-shared-mem.dtb  k3-j721e-proc-board-tps65917.dtb  overlays        tiboot3.bin  u-boot.img

..

so in /boot/firmware/extlinux/extlinux.conf just set:

fdt /k3-j721e-beagleboneai64-no-shared-mem.dtb

take it out to go back to sharing memory for PRU.

$ cat /boot/firmware/extlinux/extlinux.conf 
label Linux microSD
    kernel /Image
    fdt /k3-j721e-beagleboneai64-no-shared-mem.dtb
    append console=ttyS2,115200n8 earlycon=ns16550a,mmio32,0x02800000 root=/dev/mmcblk1p2 ro rootfstype=ext4 rootwait net.ifnames=0
    fdtdir /
    initrd /initrd.img

BBAI-64 Fix 'broken boot'

  1. Hold a finger on both the boot and reset buttons.
  2. Insert power.
  3. Lift your finger from reset.
  4. Wait until the LED lights.
  5. Lift your finger from boot.

Since you have the serial boot log, stop u-boot with the “space” key…

# run emmc_erase_boot0

=> Reboot with SD card inserted, newly burned image: bbai64-debian-11.3-xfce-arm64-2022-06-14-10gb.img.xz

$ sudo apt update

$ sudo apt upgrade

debian@BeagleBone:~$ sudo /opt/u-boot/bb-u-boot-beagleboneai64/install-emmc.sh
Changing ext_csd[BOOT_BUS_CONDITIONS] from 0x02 to 0x02
H/W Reset is already permanently enabled on /dev/mmcblk0
Clearing eMMC boot0
dd if=/dev/zero of=/dev/mmcblk0boot0 count=32 bs=128k
32+0 records in
32+0 records out
4194304 bytes (4.2 MB, 4.0 MiB) copied, 0.549462 s, 7.6 MB/s
dd if=/opt/u-boot/bb-u-boot-beagleboneai64/tiboot3.bin of=/dev/mmcblk0boot0 bs=128k
2+1 records in
2+1 records out
283940 bytes (284 kB, 277 KiB) copied, 1.13585 s, 250 kB/s
debian@BeagleBone:~$ sudo /opt/u-boot/bb-u-boot-beagleboneai64/install-microsd.sh
'/opt/u-boot/bb-k3-image-gen-j721e-evm/sysfw.itb' -> '/boot/firmware/sysfw.itb'
'/opt/u-boot/bb-u-boot-beagleboneai64/tiboot3.bin' -> '/boot/firmware/tiboot3.bin'
'/opt/u-boot/bb-u-boot-beagleboneai64/tispl.bin' -> '/boot/firmware/tispl.bin'
'/opt/u-boot/bb-u-boot-beagleboneai64/u-boot.img' -> '/boot/firmware/u-boot.img'

$ sudo reboot 

=> Login

$ uname -a

Linux BeagleBone 5.10.120-ti-arm64-r50 #1bullseye SMP PREEMPT Tue Jun 28 20:37:27 UTC 2022 aarch64 GNU/Linux

$ df -h
Filesystem      Size  Used Avail Use% Mounted on
udev            1.8G     0  1.8G   0% /dev
tmpfs           371M  2.0M  369M   1% /run
/dev/mmcblk1p2   59G  9.7G   47G  18% /
tmpfs           1.9G   16K  1.9G   1% /dev/shm
tmpfs           5.0M  4.0K  5.0M   1% /run/lock
/dev/mmcblk1p1  127M   47M   80M  38% /boot/firmware

BBAI-64 User LEDs

LED   GPIO Signal   Default Function
D2    GPIO3_17      Heartbeat when Linux is running
D3    GPIO5_5       microSD Activity
D4    GPIO3_15      CPU Activity
D5    GPIO3_14      eMMC Activity
D8    GPIO3_7       WiFi/Bluetooth Activity

BB_AI_USERLEDS_800px.png

BBAI Survival Guide

https://community.element14.com/challenges-projects/project14/visionthing/b/blog/posts/beaglebone-ai-survival-guide-v3-18-pwm-i2c-analog-digital-read-write-vision-ai-video-text-overlays-audio-hardware

Real Time Clock (RTC) system service and setup

Any clock needs to keep its time when the power goes out. Since the PocketBeagle is not always connected to WiFi to sync with network time services, we will install a Real Time Clock with a small battery [1].

Real Time Clock setup using i2c, ExploringBB: pp350 [1]

Here is the wiring diagram for a PocketBeagle:

  • PocketBeagle <-> DS3231 RTC
  • I2C1:
    • Pin 14 - VCC
    • Pin 16 - GND
    • Pin 9 - SCL
    • Pin 11 - SDA
  1. https://learn.adafruit.com/adafruit-ds3231-precision-rtc-breakout/overview

Setup procedure:

$ sudo apt install i2c-tools
$ i2cdetect -y -r 1
$ i2cdump -y 1 0x68 b
$ hwclock -r -f /dev/rtc1 
# (Note: rtc0 is active on-board)

$ sudo modprobe rtc-ds1307
$ sudo /bin/sh -c "/bin/echo ds1307 0x68 > /sys/class/i2c-adapter/i2c-1/new_device"
#  Now /dev/rtc1 should show up, echo delete_device to remove it.

# hwclock -r (read)
# hwclock -w (write)
# hwclock -s (set RTC to system time)
$ hwclock --set --date "2019-01-01 00:00:00" 
# (set new time)

$ sudo systemctl enable hwclock.service
$ sudo systemctl status hwclock.service
$ reboot
  1. http://derekmolloy.ie/exploring-beaglebone-tools-and-techniques-for-building-with-embedded-linux/

Service File

File: /etc/systemd/system/clock.service

[Unit]

Description=RTC Service
Before=getty.target

[Service]
Type=oneshot
ExecStartPre=/bin/sh -c "/bin/echo ds1307 0x68 > /sys/class/i2c-adapter/i2c-1/new_device"
ExecStart=/sbin/hwclock -s -f /dev/rtc1
RemainAfterExit=yes

[Install]
WantedBy=multi-user.target

Seven Segment Display

Now we can install a Seven Segment Display [1] to show the time. I used an old Garfield clock with sentimental memories, and brought it back to life.

NOTE: If running Python 3, substitute python3 for python below.

  1. https://www.adafruit.com/product/1270
$ sudo apt-get update
$ sudo apt-get install build-essential python-pip python-dev python-smbus git

$ git clone https://github.com/adafruit/Adafruit_Python_GPIO.git
$ cd Adafruit_Python_GPIO
$ sudo python setup.py install

$ git clone https://github.com/adafruit/Adafruit_Python_LED_Backpack.git

$ cd Adafruit_Python_LED_Backpack
$ sudo python setup.py install

$ git clone https://github.com/adafruit/Adafruit-GFX-Library.git

$ cd Adafruit-GFX-Library
$ sudo python setup.py install
Adafruit Python LED Backpack

Python library for controlling LED backpack displays such as 8x8 matrices, bar graphs, and 7/14-segment displays on a Raspberry Pi or BeagleBone Black.

Designed specifically to work with the Adafruit LED backpack displays ----> https://learn.adafruit.com/adafruit-led-backpack/overview

For all platforms (Raspberry Pi and Beaglebone Black) make sure your system is able to compile Python extensions. On Raspbian or Beaglebone Black's Debian/Ubuntu image you can ensure your system is ready by executing:

$ sudo apt-get update
$ sudo apt-get install build-essential python-dev

You will also need to make sure the python-smbus and python-imaging libraries are installed by executing:

$ sudo apt-get install python-smbus python-imaging

Install the library by downloading the release archive, unzipping it, navigating inside the library's directory, and executing:

$ sudo python setup.py install

See example of usage in the examples folder.

Clock testing

$ sudo apt install python-pip
$ pip install adafruit_gpio

$ cd Adafruit_Python_LED_Backpack/examples

$ python sevensegment_test.py

Clock Code

File: clock.py

import time

from Adafruit_LED_Backpack import SevenSegment


# Create display instance on default I2C address (0x70) and bus number.
display = SevenSegment.SevenSegment()

# Alternatively, create a display with a specific I2C address and/or bus.
# display = SevenSegment.SevenSegment(address=0x74, busnum=1)

# On BeagleBone, try busnum=2 if IOError occurs with busnum=1
# display = SevenSegment.SevenSegment(address=0x74, busnum=2)

# Initialize the display. Must be called once before using the display.
display.begin()

# Keep track of the colon being turned on or off.
colon = True

import datetime
now = datetime.datetime.now()
print ("Current date and time : ")
print (now.strftime("%Y-%m-%d %H:%M:%S"))
#
OLD_MIN = "99"
while True:
    MIN = now.strftime("%M")
    if MIN != OLD_MIN:
        OLD_MIN = now.strftime("%M")
        # Clear the display buffer.
        display.clear()
        # Print the time to the display.
        display.print_number_str(now.strftime("%l%M"))
        # PM flag, lower right dot
        display.set_decimal(3, 1)
        # ... set a segment of a digit ...
        #display.set_digit_raw(0, 1)
        # Set the colon on or off (True/False).
        display.set_colon(colon)
        # Write the display buffer to the hardware.  This must be called to
        # update the actual display LEDs.
        display.write_display()
    # Delay for a second.
    time.sleep(1.0)
    now = datetime.datetime.now()
    # Flip colon on or off.
    colon = not colon
    display.set_colon(colon)
    # Write the display buffer to the hardware.  This must be called to
    # update the actual display LEDs.
    display.write_display()

Music Player for the Car

This is a BBB under the seat, running Music Player Daemon [1], with buttons wired into a mint box.

Installed three buttons

  • Back - run mpc prev

  • Play/Stop - run mpc pause/play

  • Forward - run mpc next

Wiring Pins

BeagleBone-AI-2_1.png

BeagleBone-AI-2_2.png

Service

File: /etc/systemd/system/remote.service

[Unit]
Description=Remote Control Music Player
After=network.target
StartLimitIntervalSec=0

[Service]
Type=simple
Restart=always
RestartSec=1
User=root
ExecStop=/data/home/remote/remote_stop.sh
ExecStart=/data/home/remote/remote.sh

[Install]
WantedBy=multi-user.target

Forward

File: forward.js

/**********************************************************************
 File: forward.js
 Usage: Wire 3.3v (P9_3) and GPIO (P8_8, P8_7, P8_9) to switches
 Service: remote.service
 Author: Don Cohoon
 History
 16-Oct-2019   Created
**********************************************************************/
var b = require('bonescript');
const { exec } = require('child_process');
// GPIO inputs
const FORWARD = 'P8_8';
const PLAY    = 'P8_7'; // Also STOP
const REVERSE = 'P8_9';
// Is button pressed or released?
const RELEASE = 0;
const PRESS   = 1;
//
b.pinMode(FORWARD, b.INPUT);
b.pinMode(PLAY,    b.INPUT);
b.pinMode(REVERSE, b.INPUT);
setInterval(check,1000);
//
// ---------------------------
function check(){
	b.digitalRead(FORWARD, checkForward);
}

// ---------------------------
function checkForward(err, response){
	if (response == 1){
		console.log('Forward pushed');
		exec('/usr/bin/mpc next', ( err, stdout, stderr) => {
			if (err) {
				console.log(stderr);
				return;
			}
		});
	} else {
	null; //	console.log('Button NOT pushed');
	}

}

Play

File: play.js

/**********************************************************************
 File: play.js
 Usage: Wire 3.3v (P9_3) and GPIO (P8_8, P8_7, P8_9) to switches
 Service: remote.service
 Author: Don Cohoon
 History
 16-Oct-2019   Created
**********************************************************************/
var b = require('bonescript');
const { exec } = require('child_process');
// GPIO inputs
const FORWARD = 'P8_8';
const PLAY    = 'P8_7'; // Also STOP
const REVERSE = 'P8_9';
// Is button pressed or released?
const RELEASE = 0;
const PRESS   = 1;
//
b.pinMode(FORWARD, b.INPUT);
b.pinMode(PLAY,    b.INPUT);
b.pinMode(REVERSE, b.INPUT);
setInterval(check,1000);
//
// ---------------------------
function check(){
	b.digitalRead(PLAY, checkPlay);
}

// ---------------------------
function checkPlay(err, response){
	if (response == 1){
		console.log('Play pushed');
		exec('/usr/bin/mpc current', ( err, stdout, stderr) => {
			if (err) {
				console.log(stderr);
				return;
			}
			if (stdout) {
				console.log('Stop playing ',stdout);
				exec('/usr/bin/mpc stop', ( err, stdout, stderr) => {
					if (err) {
						console.log(stderr);
						return;
					}
				});
				return;
			} else {
				console.log('Play resumed');
				exec('/usr/bin/mpc play', ( err, stdout, stderr) => {
					if (err) {
						console.log(stderr);
						return;
					}
				});
				return;
			}

		}); // current
	} else {
	null; //	console.log('Button NOT pushed');
	}

}

Remote

File: remote.sh

#!/bin/bash
export NODE_PATH=/usr/local/lib/node_modules/
export NODE_MODULES_CONTEXT=1
PIDFILE=/var/run/remote.pid
# node -pe "require('bonescript').getPlatform().bonescript"
#
function log() {
	echo "`date` $@ " >> /var/log/remote.log
}
#
procs=(
     '/data/home/remote/forward.js'
     '/data/home/remote/play.js '
     '/data/home/remote/reverse.js'
    )
n_procs=0
while [ "x${procs[n_procs]}" != "x" ]
do
	   n_procs=$(( $n_procs + 1 ))
done
inx=1
#
echo $$ > $PIDFILE
cd /data/home/remote
log "Starting $$"
while true
do
	# run processes and store pids in array
	for i in ${procs[@]}; do
	  echo "Starting /usr/bin/node ${i}"
	  /usr/bin/node ${i} &
	  pids[${inx}]=$!
	  let "inx = $inx + 1"
	done

	# wait for all pids
	echo ${pids[*]}
	for pid in ${pids[*]}; do
	      wait $pid
	    done
	sleep 30
	log "Restarting after error"
done

Remote Stop

File: remote_stop.sh

#!/bin/bash
TPID=$( /bin/cat /var/run/remote.pid )
/bin/kill -9 ${TPID}
/usr/bin/killall -e /usr/bin/node

Reverse

File: reverse.js

/**********************************************************************
 File: reverse.js
 Usage: Wire 3.3v (P9_3) and GPIO (P8_8, P8_7, P8_9) to switches
 Service: remote.service
 Author: Don Cohoon
 History
 16-Oct-2019   Created
**********************************************************************/
var b = require('bonescript');
const { exec } = require('child_process');
// GPIO inputs
const FORWARD = 'P8_8';
const PLAY    = 'P8_7'; // Also STOP
const REVERSE = 'P8_9';
// Is button pressed or released?
const RELEASE = 0;
const PRESS   = 1;
//
b.pinMode(FORWARD, b.INPUT);
b.pinMode(PLAY,    b.INPUT);
b.pinMode(REVERSE, b.INPUT);
setInterval(check,1000);
//
// ---------------------------
function check(){
	b.digitalRead(REVERSE, checkReverse);
}

// ---------------------------
function checkReverse(err, response){
	if (response == 1){
		console.log('Reverse pushed');
		exec('/usr/bin/mpc prev', ( err, stdout, stderr) => {
			if (err) {
				console.log(stderr);
				return;
			}
		});
	} else {
		// console.log('Button NOT pushed');
	}

}
  1. https://www.musicpd.org/
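The headers of forward.js, play.js, and reverse.js reference a remote.service systemd unit that is not listed here. A minimal sketch of such a unit, where the ExecStart/ExecStop paths come from the scripts above and everything else is an assumption:

```ini
# /etc/systemd/system/remote.service (sketch)
[Unit]
Description=MPD remote control buttons
After=network.target

[Service]
Type=simple
ExecStart=/bin/bash /data/home/remote/remote.sh
ExecStop=/bin/bash /data/home/remote/remote_stop.sh
Restart=on-failure

[Install]
WantedBy=multi-user.target
```

Enable it with `sudo systemctl enable --now remote.service`.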

Power on/off another PC

This project uses a web application written in Flask to power two different PCs on or off remotely, and to control a PowerTail Switch.

  • Server one:

    • Wire pins P8_11 in parallel to power button
    • Wire pins P8_12 in parallel to reset button
  • Server two:

    • Wire pins P8_13 in parallel to power button
    • Wire pins P8_14 in parallel to reset button
  • PowerTail Light [1]

    • Wire pins P9_21 to the PowerTail input control screws
  1. https://www.adafruit.com/product/2935

File: boot.py

import time, datetime
from itertools import cycle                                                     
import os
from werkzeug.debug import DebuggedApplication

from flask import Flask, render_template, request, Response, session, redirect, url_for
from functools import wraps
import subprocess
import logging
logging.basicConfig(filename='boot.log',level=logging.DEBUG,
            format='%(asctime)s - %(name)s - %(levelname)s - %(message)s')

# Limit request size (basic DoS protection)
from werkzeug.wrappers import BaseRequest, BaseResponse
class Response(BaseResponse):
    # limit 1m for input request and form response
    max_content_length = 1024 * 1024 * 1
    max_form_memory_size = 1024 * 1024 * 1

import Adafruit_BBIO.GPIO as GPIO
GPIO.setup("P9_21", GPIO.OUT)
GPIO.output("P9_21", GPIO.HIGH) # TimeLight off
GPIO.setup("P8_11", GPIO.OUT)
GPIO.output("P8_11", GPIO.HIGH)
GPIO.setup("P8_12", GPIO.OUT)
GPIO.output("P8_12", GPIO.HIGH)
GPIO.setup("P8_13", GPIO.OUT)
GPIO.output("P8_13", GPIO.HIGH)
GPIO.setup("P8_14", GPIO.OUT)
GPIO.output("P8_14", GPIO.HIGH)


ASSETS_DIR = os.path.dirname(os.path.abspath(__file__))
app = Flask(__name__)                                            

#@app.before_request # to get your header token and check for its validity

#---------------------------------
# route for handling the login page logic
@app.route('/login', methods=['GET', 'POST'])
def login():
    error = None
    if request.method == 'POST':
        if request.form['username'] != 'SecretUser' or request.form['password'] != 'SecretPassword':
            error = 'Invalid Credentials. Please try again.'
            session['logged_in'] = False
        else:
            session['logged_in'] = True
            return redirect(url_for('secret_page'))

    return render_template('login.html', error=error)

#---------------------------------
# default page
@app.route("/")                                                                 
@app.route("/index")                                                          
def hello():                                                    

    return render_template('index.html')

#---------------------------------
# control server boot actions
@app.route('/secret-page', methods=['GET','POST'])
def secret_page():
    msg = ''
    if 'logged_in' in session:
      start_time = session.get('session_time', None)
      if start_time is None:
        start_time = datetime.datetime.now()
        session['session_time'] = start_time
      end_time = start_time + datetime.timedelta(minutes=5)
      if datetime.datetime.now() > end_time: 
        session.clear()
        start_time = session.get('session_time', None)
        return redirect(url_for('login', next=request.url))
    else:
      return redirect(url_for('login', next=request.url))

    app.logger.warning('Agent: %s',request.headers.get('User-Agent'))
    app.logger.warning('Host: %s',request.headers.get('Host'))
    app.logger.warning('Auth: %s',request.headers.get('Authorization'))
    proc = subprocess.Popen(["uptime"], stdout=subprocess.PIPE)
    (out, err) = proc.communicate()
    #print "program output:", out
    option = ''
    if request.method == 'POST':
      option = request.form.get('options', 'empty')
      #option = request.form['options']
      #print "option:", option
      if option == 'on':
        #app.logger.debug('A value for debugging')
        #app.logger.warning('A warning occurred (%d apples)', 42)
        #app.logger.error('An error occurred')
        app.logger.warning('Server was turned on')
        GPIO.output("P8_11", GPIO.LOW)
        time.sleep(3)
        GPIO.output("P8_11", GPIO.HIGH)
        app.logger.warning('Server was turned on (end)')
        msg = 'Server turned on.'
      elif option == 'off':
        app.logger.warning('Server was turned off')
        GPIO.output("P8_11", GPIO.LOW)
        time.sleep(6)
        GPIO.output("P8_11", GPIO.HIGH)
        app.logger.warning('Server was turned off (end)')
        msg = 'Server turned off.'
      elif option == 'reset':
        app.logger.warning('Server was reset')
        GPIO.output("P8_12", GPIO.LOW)
        time.sleep(2)
        GPIO.output("P8_12", GPIO.HIGH)
        app.logger.warning('Server was reset (end)')
        msg = 'Server reset.'
      elif option == 'on2':
        app.logger.warning('NAS was turned on')
        GPIO.output("P8_14", GPIO.LOW)
        time.sleep(3)
        GPIO.output("P8_14", GPIO.HIGH)
        app.logger.warning('NAS was turned on (end)')
        msg = 'NAS turned on.'
      elif option == 'off2':
        app.logger.warning('NAS was turned off')
        GPIO.output("P8_14", GPIO.LOW)
        time.sleep(6)
        GPIO.output("P8_14", GPIO.HIGH)
        app.logger.warning('NAS was turned off (end)')
        msg = 'NAS turned off.'
      elif option == 'reset2':
        app.logger.warning('NAS was reset')
        GPIO.output("P8_13", GPIO.LOW)
        time.sleep(2)
        GPIO.output("P8_13", GPIO.HIGH)
        app.logger.warning('NAS was reset (end)')
        msg = 'NAS reset.'
      elif option == 'on3':
        app.logger.warning('TimeLight was turned on')
        GPIO.output("P9_21", GPIO.LOW)
        app.logger.warning('TimeLight was turned on (end)')
        msg = 'TimeLight turned on.'
      elif option == 'off3':
        app.logger.warning('TimeLight was turned off')
        GPIO.output("P9_21", GPIO.HIGH)
        app.logger.warning('TimeLight was turned off (end)')
        msg = 'TimeLight turned off.'
      else:
        app.logger.warning('No option picked')
        msg = 'Pick one.'
    template_data = {                                                           
        'title' : option,                                                        
        'msg'   : msg,
        'out'   : out,
    }                                                                           
    return render_template('secret_page.html', **template_data)

@app.after_request
def apply_caching(response):
    response.headers["Server"] = "Waiter"
    return response

if __name__ == "__main__":                                                      
    app.run('0.0.0.0', debug=True, port=12345)
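boot.py repeats the same pull-LOW / sleep / pull-HIGH sequence for every button action. That pattern could be factored into one helper. The sketch below is illustration only: FakeGPIO is a hypothetical stand-in so the logic can run off the board; on the BeagleBone you would pass the real Adafruit_BBIO.GPIO module instead.

```python
import time

class FakeGPIO:
    """Hypothetical stand-in for Adafruit_BBIO.GPIO; records pin writes."""
    LOW, HIGH = 0, 1
    def __init__(self):
        self.events = []
    def output(self, pin, level):
        self.events.append((pin, level))

def press(gpio, pin, seconds):
    """Emulate holding a momentary button: pull the pin LOW, wait, release HIGH."""
    gpio.output(pin, gpio.LOW)
    time.sleep(seconds)
    gpio.output(pin, gpio.HIGH)

gpio = FakeGPIO()
press(gpio, "P8_11", 0.01)  # a short press powers on; ~6 s forces power off
print(gpio.events)          # [('P8_11', 0), ('P8_11', 1)]
```

Each `elif` branch in boot.py would then reduce to one `press()` call plus its logging.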

File: secret_page.html

<!DOCTYPE html>                                                                 
   <head>                                                                       
    <title>{{ title }}</title> 
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <link rel="stylesheet" type="text/css"
      href="{{url_for('static',filename='bootstrap.css')}}">
    <link rel="stylesheet" type= "text/css"
      href="{{url_for('static',filename='bootstrap-theme.css')}}">
    <script>
   function clickAndDisable(link) {
     // disable subsequent clicks
     link.onclick = function(event) {
        event.preventDefault();
        document.getElementById("sub").className = "btn btn-default disabled";
     }
   }   
    </script>
    </head>                                                                     
    <body>                                                                      
      <div class="container">
       <div class="col-lg-1 col-centered">
          <br><p class="row"> {{ out }} </p>
         <form name="boot" class="form-check" action="" method="post" onsubmit="">
           <p>
             <div class="form-check">
               <label class="form-check-label">
                 <input class="form-check-input" type="radio" name="options" id="on" value="on">
                 S-On
               </label>
             </div>
             <div class="form-check">
               <label class="form-check-label">
                <input class="form-check-input" type="radio" name="options" id="off" value="off">
                 S-Off
               </label>
             </div>
             <div class="form-check">
               <label class="form-check-label">
                <input class="form-check-input" type="radio" name="options" id="reset" value="reset">
                 S-Reset
               </label>
             </div>
             <div class="form-check">
               <label class="form-check-label">
                 <input class="form-check-input" type="radio" name="options" id="on2" value="on2">
                 N-On
               </label>
             </div>
             <div class="form-check">
               <label class="form-check-label">
                <input class="form-check-input" type="radio" name="options" id="off2" value="off2">
                 N-Off
               </label>
             </div>
             <div class="form-check">
               <label class="form-check-label">
                <input class="form-check-input" type="radio" name="options" id="reset2" value="reset2">
                 N-Reset
               </label>
             </div>
             <div class="form-check">
               <label class="form-check-label">
                 <input class="form-check-input" type="radio" name="options" id="on3" value="on3">
                 L-On
               </label>
             </div>
             <div class="form-check">
               <label class="form-check-label">
                <input class="form-check-input" type="radio" name="options" id="off3" value="off3">
                 L-Off
               </label>
             </div>
           </p>
           <p> <input type=submit value=Submit id="sub"
                  onclick="clickAndDisable(this);"
                  class="btn btn-default">
           </p>
         </form> 
           {{ msg }}
       </div>
      </div>
    </body>                                                                     
</html>

File: login.html

<!DOCTYPE html>
  <head>
    <title>login page</title>
    <meta name="viewport" content="width=device-width, initial-scale=1.0">
    <link rel="stylesheet" type="text/css"
      href="{{url_for('static',filename='bootstrap.css')}}">
    <link rel="stylesheet" type= "text/css"
      href="{{url_for('static',filename='bootstrap-theme.css')}}">
  </head>
  <body>
    <div class="container">
     <div class="col-lg-1 col-centered">
      <h1>Please login</h1>
      <br>
      <form action="" method="post">
       <div class="form-group row">
        <input type="text" placeholder="Username" name="username" value="{{
          request.form.username }}">
       </div>
       <div class="form-group row">
         <input type="password" placeholder="Password" name="password" value="{{
          request.form.password }}">
       </div>
       <div class="form-group row">
        <input class="btn btn-default" type="submit" value="Login">
       </div>
      </form>
      {% if error %}
        <p class="error"><strong>Error:</strong> {{ error }}</p>
      {% endif %}
     </div>
    </div>
  </body>
</html>

File: main.html

<!DOCTYPE html>                                                                 
   <head>                                                                       
    <title>{{ title }}</title>                                                                                   
    </head>                                                                     
    <body>                                                                      
	  <h1>
			<a href="/on" id="on" class="large_button">ON</a>
	  </h1>
	  <h1>
			<a href="/off" id="off" class="large_button">OFF</a>
	  </h1>
    </body>                                                                     
</html>

Put the Bootstrap CSS package in the ./static directory.

Reference: https://getbootstrap.com/

Reference

@book{9781119533160,
   Author = {Derek Molloy},
   Title = {Exploring BeagleBone: Tools and Techniques for Building with Embedded Linux},
   Publisher = {Wiley},
   Edition = {Second},
   Year = {2019},
   ISBN = {9781119533160},
   URL = {http://www.exploringbeaglebone.com/}
}

Code for book: https://github.com/derekmolloy/exploringBB

Continue

Now that you have experimented with the BeagleBone SBC, our travels next take us to Media Land: the sights and sounds of video and audio, controlled and maintained by your computer.

Proceed in the order presented; some steps depend on prior setups.


Media Management


This is where I manage Music and Television files.

Music Player Daemon

Hardware:

  • BeagleBone AI-64 with Debian Linux
  • TrueNAS container

Software:

  • Linux: mpd (server), and mpc, ncmpcpp (clients)
$ sudo apt-get install mpd mpc ncmpcpp
  • iOS: MaximumMPD (client for iPhone, iPad)

IMG_60BE35F3E1C0-1 (2).jpeg

MPD Configuration

File: /etc/mpd.conf

# An example configuration file for MPD.
# Read the user manual for documentation: http://www.musicpd.org/doc/user/
# or /usr/share/doc/mpd/html/user.html


# Files and directories #######################################################
#
# This setting controls the top directory which MPD will search to discover the
# available audio files and add them to the daemon's online database. This
# setting defaults to the XDG directory; otherwise the music directory will
# be disabled and audio files will only be accepted over the ipc socket (using
# file:// protocol) or streaming files over an accepted protocol.
#
music_directory		"/data/bob/music/Music"
#
# This setting sets the MPD internal playlist directory. The purpose of this
# directory is storage for playlists created by MPD. The server will use
# playlist files not created by the server but only if they are in the MPD
# format. This setting defaults to playlist saving being disabled.
#
playlist_directory		"/data/bob/music/mpd/playlists"
#
# This setting sets the location of the MPD database. This file is used to
# load the database at server start up and store the database while the
# server is not up. This setting defaults to disabled which will allow
# MPD to accept files over ipc socket (using file:// protocol) or streaming
# files over an accepted protocol.
#
db_file			"/data/bob/music/mpd/tag_cache"
#
# These settings are the locations for the daemon log files for the daemon.
# These logs are great for troubleshooting, depending on your log_level
# settings.
#
# The special value "syslog" makes MPD use the local syslog daemon. This
# setting defaults to logging to syslog, or to journal if mpd was started as
# a systemd service.
#
log_file			"/var/log/mpd/mpd.log"
#
# This setting sets the location of the file which stores the process ID
# for use of mpd --kill and some init scripts. This setting is disabled by
# default and the pid file will not be stored.
#
pid_file			"/run/mpd/pid"
#
# This setting sets the location of the file which contains information about
# most variables to get MPD back into the same general shape it was in before
# it was brought down. This setting is disabled by default and the server
# state will be reset on server start up.
#
state_file			"/data/bob/music/mpd/state"
#
# The location of the sticker database.  This is a database which
# manages dynamic information attached to songs.
#
sticker_file                   "/data/bob/music/mpd/sticker.sql"
#
###############################################################################


# General music daemon options ################################################
#
# This setting specifies the user that MPD will run as. MPD should never run as
# root and you may use this setting to make MPD change its user ID after
# initialization. This setting is disabled by default and MPD is run as the
# current user.
#
user				"mpd"
#
# This setting specifies the group that MPD will run as. If not specified
# primary group of user specified with "user" setting will be used (if set).
# This is useful if MPD needs to be a member of group such as "audio" to
# have permission to use sound card.
#
#group                          "nogroup"
#
# This setting sets the address for the daemon to listen on. Careful attention
# should be paid if this is assigned to anything other than the default, any.
# This setting can deny access to control of the daemon. Choose any if you want
# to have mpd listen on every address. Not effective if systemd socket
# activation is in use.
#
# For network
#bind_to_address		"localhost"
bind_to_address		"0.0.0.0"
#
# And for Unix Socket
#bind_to_address		"/run/mpd/socket"
#
# This setting is the TCP port that is desired for the daemon to get assigned
# to.
#
port				"6600"
#
# This setting controls the type of information which is logged. Available
# setting arguments are "default", "secure" or "verbose". The "verbose" setting
# argument is recommended for troubleshooting, though can quickly stretch
# available resources on limited hardware storage.
#
#log_level			"default"
#
# Setting "restore_paused" to "yes" puts MPD into pause mode instead
# of starting playback after startup.
#
#restore_paused "no"
#
# This setting enables MPD to create playlists in a format usable by other
# music players.
#
#save_absolute_paths_in_playlists	"no"
#
# This setting defines a list of tag types that will be extracted during the
# audio file discovery process. The complete list of possible values can be
# found in the user manual.
#metadata_to_use	"artist,album,title,track,name,genre,date,composer,performer,disc"
#
# This example just enables the "comment" tag without disabling all
# the other supported tags:
#metadata_to_use "+comment"
#
# This setting enables automatic update of MPD's database when files in
# music_directory are changed.
#
#auto_update    "yes"
#
# Limit the depth of the directories being watched, 0 means only watch
# the music directory itself.  There is no limit by default.
#
#auto_update_depth "3"
#
###############################################################################


# Symbolic link behavior ######################################################
#
# If this setting is set to "yes", MPD will discover audio files by following
# symbolic links outside of the configured music_directory.
#
#follow_outside_symlinks	"yes"
#
# If this setting is set to "yes", MPD will discover audio files by following
# symbolic links inside of the configured music_directory.
#
#follow_inside_symlinks		"yes"
#
###############################################################################


# Zeroconf / Avahi Service Discovery ##########################################
#
# If this setting is set to "yes", service information will be published with
# Zeroconf / Avahi.
#
#zeroconf_enabled		"yes"
#
# The argument to this setting will be the Zeroconf / Avahi unique name for
# this MPD server on the network. %h will be replaced with the hostname.
#
#zeroconf_name			"Music Player @ %h"
#
###############################################################################


# Permissions #################################################################
#
# If this setting is set, MPD will require password authorization. The password
# setting can be specified multiple times for different password profiles.
#
#password                        "password@read,add,control,admin"
#
# This setting specifies the permissions a user has who has not yet logged in.
#
#default_permissions             "read,add,control,admin"
#
###############################################################################


# Database #######################################################################
#

#database {
#       plugin "proxy"
#       host "other.mpd.host"
#       port "6600"
#}

# Input #######################################################################
#

input {
        plugin "curl"
#       proxy "proxy.isp.com:8080"
#       proxy_user "user"
#       proxy_password "password"
}

# QOBUZ input plugin
input {
        enabled    "no"
        plugin     "qobuz"
#        app_id     "ID"
#        app_secret "SECRET"
#        username   "USERNAME"
#        password   "PASSWORD"
#        format_id  "N"
}

# TIDAL input plugin
input {
        enabled      "no"
        plugin       "tidal"
#        token        "TOKEN"
#        username     "USERNAME"
#        password     "PASSWORD"
#        audioquality "Q"
}

# Decoder #####################################################################
#

decoder {
        plugin                  "hybrid_dsd"
        enabled                 "no"
#       gapless                 "no"
}

#
###############################################################################

# Audio Output ################################################################
#
# MPD supports various audio output types, as well as playing through multiple
# audio outputs at the same time, through multiple audio_output settings
# blocks. Setting this block is optional, though the server will only attempt
# autodetection for one sound card.
#
# An example of an ALSA output:
#
audio_output {
	type		"alsa"
	name		"Sabrent ALSA Device"
	device		"hw:1"	# optional
#	device		"hw:0,0"	# optional
#	mixer_type      "hardware"      # optional
#	mixer_device	"default"	# optional
#	mixer_control	"PCM"		# optional
#	mixer_index	"0"		# optional
}
#
# Bluetooth through pulesaudio
#audio_output {
#  type "pulse"
#  name "FOSI Audio"
#  mixer_type "none"
#  sink "bluez_sink.7C:58:CA:00:19:63.a2dp_sink"
#}
#
# An example of an OSS output:
#
#audio_output {
#	type		"oss"
#	name		"My OSS Device"
#	device		"/dev/dsp"	# optional
#	mixer_type      "hardware"      # optional
#	mixer_device	"/dev/mixer"	# optional
#	mixer_control	"PCM"		# optional
#}
#
# An example of a shout output (for streaming to Icecast):
#
#audio_output {
#	type		"shout"
#	encoder		"vorbis"		# optional
#	name		"My Shout Stream"
#	host		"localhost"
#	port		"8000"
#	mount		"/mpd.ogg"
#	password	"hackme"
#	quality		"5.0"
#	bitrate		"128"
#	format		"44100:16:1"
#	protocol	"icecast2"		# optional
#	user		"source"		# optional
#	description	"My Stream Description"	# optional
#	url             "http://example.com"    # optional
#	genre		"jazz"			# optional
#	public		"no"			# optional
#	timeout		"2"			# optional
#	mixer_type      "software"              # optional
#}
#
# An example of a recorder output:
#
#audio_output {
#       type            "recorder"
#       name            "My recorder"
#       encoder         "vorbis"                # optional, vorbis or lame
#       path            "/var/lib/mpd/recorder/mpd.ogg"
##      quality         "5.0"                   # do not define if bitrate is defined
#       bitrate         "128"                   # do not define if quality is defined
#       format          "44100:16:1"
#}
#
# An example of a httpd output (built-in HTTP streaming server):
#
audio_output {
	type		"httpd"
	name		"My HTTP Stream"
	encoder		"vorbis"		# optional, vorbis or lame
	port		"8888"
	bind_to_address "0.0.0.0"               # optional, IPv4 or IPv6
	#quality		"5.0"			# do not define if bitrate is defined
	bitrate		"128"			# do not define if quality is defined
	format		"44100:16:1"
	max_clients     "2"                     # optional 0=no limit
}
#
# An example of a pulseaudio output (streaming to a remote pulseaudio server)
# Please see README.Debian if you want mpd to play through the pulseaudio
# daemon started as part of your graphical desktop session!
#
#audio_output {
#	type		"pulse"
#	name		"My Pulse Output"
#	server		"remote_server"		# optional
#	sink		"remote_server_sink"	# optional
#}
#
# An example of a winmm output (Windows multimedia API).
#
#audio_output {
#	type		"winmm"
#	name		"My WinMM output"
#	device		"Digital Audio (S/PDIF) (High Definition Audio Device)" # optional
#		or
#	device		"0"		# optional
#	mixer_type	"hardware"	# optional
#}
#
# An example of an openal output.
#
#audio_output {
#	type		"openal"
#	name		"My OpenAL output"
#	device		"Digital Audio (S/PDIF) (High Definition Audio Device)" # optional
#}
#
## Example "pipe" output:
#
#audio_output {
#	type		"pipe"
#	name		"my pipe"
#	command		"aplay -f cd 2>/dev/null"
## Or if you want to use AudioCompress
#	command		"AudioCompress -m | aplay -f cd 2>/dev/null"
## Or to send raw PCM stream through PCM:
#	command		"nc example.org 8765"
#	format		"44100:16:2"
#}
#
## An example of a null output (for no audio output):
#
#audio_output {
#	type		"null"
#	name		"My Null Output"
#	mixer_type      "none"                  # optional
#}
#
###############################################################################


# Normalization automatic volume adjustments ##################################
#
# This setting specifies the type of ReplayGain to use. This setting can have
# the argument "off", "album", "track" or "auto". "auto" is a special mode that
# chooses between "track" and "album" depending on the current state of
# random playback. If random playback is enabled then "track" mode is used.
# See <http://www.replaygain.org> for more details about ReplayGain.
# This setting is off by default.
#
#replaygain			"album"
#
# This setting sets the pre-amp used for files that have ReplayGain tags. By
# default this setting is disabled.
#
#replaygain_preamp		"0"
#
# This setting sets the pre-amp used for files that do NOT have ReplayGain tags.
# By default this setting is disabled.
#
#replaygain_missing_preamp	"0"
#
# This setting enables or disables ReplayGain limiting.
# MPD calculates actual amplification based on the ReplayGain tags
# and replaygain_preamp / replaygain_missing_preamp setting.
# If replaygain_limit is enabled MPD will never amplify audio signal
# above its original level. If replaygain_limit is disabled such amplification
# might occur. By default this setting is enabled.
#
#replaygain_limit		"yes"
#
# This setting enables on-the-fly normalization volume adjustment. This will
# result in the volume of all playing audio to be adjusted so the output has
# equal "loudness". This setting is disabled by default.
#
#volume_normalization		"no"
#
###############################################################################

# Character Encoding ##########################################################
#
# If file or directory names do not display correctly for your locale then you
# may need to modify this setting.
#
filesystem_charset		"UTF-8"
#
###############################################################################

MPD Daemon Processes

These configurations keep the mpd daemon running after boot and restart it upon any failure.

Linux

File: /lib/systemd/system/mpd.service

[Unit]
Description=Music Player Daemon
Documentation=man:mpd(1) man:mpd.conf(5)
Documentation=file:///usr/share/doc/mpd/user-manual.html
After=network.target sound.target

[Service]
Type=notify
EnvironmentFile=/etc/default/mpd
ExecStart=/usr/bin/mpd --no-daemon $MPDCONF

# Enable this setting to ask systemd to watch over MPD, see
# systemd.service(5).  This is disabled by default because it causes
# periodic wakeups which are unnecessary if MPD is not playing.
#WatchdogSec=120

# allow MPD to use real-time priority 50
LimitRTPRIO=50
LimitRTTIME=infinity

# disallow writing to /usr, /bin, /sbin, ...
ProtectSystem=yes

# more paranoid security settings
NoNewPrivileges=yes
ProtectKernelTunables=yes
ProtectControlGroups=yes
ProtectKernelModules=yes
# AF_NETLINK is required by libsmbclient, or it will exit() .. *sigh*
RestrictAddressFamilies=AF_INET AF_INET6 AF_UNIX AF_NETLINK
RestrictNamespaces=yes

[Install]
WantedBy=multi-user.target
Also=mpd.socket
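The stock unit above contains no Restart= directive, so on its own it will not restart mpd after a crash. One way to add that behavior without editing the packaged file is a systemd drop-in override; the file path follows systemd convention (`sudo systemctl edit mpd` creates it for you), and the values are suggestions:

```ini
# /etc/systemd/system/mpd.service.d/override.conf
[Service]
Restart=on-failure
RestartSec=5
```

Then reload and restart: `sudo systemctl daemon-reload && sudo systemctl restart mpd`.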

Audio Player

Hardware:

  • host (BeagleBone AI-64, running Debian Linux)

Software:

  • Linux: NextCloud app called audioplayer, visible on the NextCloud web interface

The music files reside in the /Music directory of the NextCloud account.

Audio-Player.png

Reference: https://apps.nextcloud.com/apps/audioplayer

Plex Television and Music Recordings

This is where most TV shows are played, on either of the Apple-TVs.

Hardware:

  • TrueNAS Scale container

Software:

IMG_233B4FCE41DA-1.jpeg

  • tvOS: Plex (Video, Music) on the Apple-TV

Libraries:

  • TV: /tv_share/tv (Recordings come from OTA HDHomeRun below)
  • Movies: ~/tv_share/movies
  • Music: ~/nextcloud/Music

Reference: https://www.plex.tv/media-server-downloads/

Over-the-Air (OTA) HDHomeRun Television Recording

Hardware:

Software:

  • Built-in webservice on port 80 (http://hdhomerun.local/)
  • iOS: HDHomeRun (Used to set recordings and delete old shows on the HDHomeRun box)

IMG_5456720E92DB-1.jpeg

Copy TV Recordings

The actual recording from OTA happens on the HDHomeRun box, and I copy the recordings into a Plex library using the HDHomeRun web interface. This is scheduled once a day in cron.
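An entry like the following in the owner's crontab would run the copy once a day; the 03:30 run time and the log redirection are assumptions, while the script path is built from the MYHOME and MYDIR variables in the script:

```crontab
# run the HDHomeRun copy job daily at 03:30
30 3 * * * /bin/bash /mnt/vol032/tv_share/tv/hdhomerun/get.sh >> /mnt/vol032/tv_share/tv/hdhomerun/cron.log 2>&1
```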

File: ~/tv_share/tv/hdhomerun/get.sh

#!/bin/bash
#-------------------------------------------------
# File get.sh
# Usage: get.sh
#   Set the IP address of the HDHomeRun Flex box
#    with DVR USB disk attached
#
# Purpose: Copy recordings to the shared filesystem 
#   so they can be played by other software,
#   i.e.: Plex, Infuse, VLC, etc...
#
# Dependencies: 
#  -apt install jq. 
#  -curl is installed by normal OS. 
#  -silicondust.com DVR subscription
#   required to record on the device, $35 annual
#   hdhomerun_config also from silicondust.com
#
# History:
#  Date        Author      Description
# -----------  ----------  -----------------------
# 22-Jul-2021  Don Cohoon  Created on MacOS
# 20-Jan-2023  Don Cohoon  Added REAL_URL because
#  a firmware release kept ${IP} in the file name,
#   causing zero byte files
#-------------------------------------------------
MYHOME="/mnt/vol032"
MYDIR="tv_share/tv"
MYTESTFILE=${MYHOME}/tv_share/get.sh
export PATH=/usr/local/bin:/usr/bin:/bin:/usr/sbin:/sbin
#-------------------------------------------------
# Log
function logit()
{
  echo $@
  echo ${0} $(/bin/date) : $@ >>hdhomerun.log 2>&1
}

#-------------------------------------------------
# Ensure we move to correct directory
#
cd ${MYHOME}/${MYDIR}
if [ ! -f ${MYTESTFILE} ]; then
  logit "=> Cannot change to ${MYHOME}/${MYDIR}"
  exit 2
fi

#-------------------------------------------------
# Get IP address of TV Server
#
IP=$(/usr/local/bin/hdhomerun_config discover|awk '{print $6}')
logit "=> IP address of TV server is ${IP}"

#-------------------------------------------------
# Get Record engine status 
#
logit "=> discover.json"
/usr/bin/curl -s http://${IP}/discover.json >discover.json 2>>hdhomerun.log
#                             ^
#cat discover.json
#logit 
#logit ". . ."

#-------------------------------------------------
# Get recorded_files.json from root of dvr
#
logit "=> recorded_files.json"
/usr/bin/curl -s  --header 'Accept: */*' \
         --header 'Accept-Encoding: gzip, deflate' \
         --header 'Accept-Language: en-US,en;q=0.5' \
         --header 'Cache-Control: no-cache' \
         --header 'Connection: keep-alive' \
         --header 'Pragma: no-cache' \
         --header "Host: ${IP}" \
         --header "Referer: http://${IP}/recorded_files.html" \
         --header 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:89.0) Gecko/20100101 Firefox/89.0' \
  http://${IP}/recorded_files.json?DisplayGroupID=root > recorded_files.json 2>>hdhomerun.log
  #            ^                   ^              ^ 
#cat recorded_files.json 
#logit 
#logit ". . ."

#-------------------------------------------------
# Get root SeriesIDs from recorded_files.json
#
declare -a SeriesID
SeriesID=($(/usr/bin/jq -r '.[] | .SeriesID' recorded_files.json))
#for (( i = 0 ; i < ${#SeriesID[@]} ; i++))
#do
#  logit "Series [$i]: ${SeriesID[$i]}"
#done

#-------------------------------------------------
# Get root Titles from recorded_files.json
#
declare -a Title
/usr/bin/jq -r '.[] | .Title' recorded_files.json > recorded_files.title
let i=0
while read t
do
  # Add new element at the end of the array
  Title+=("${t}")
  logit "Title [$i]: ${Title[$i]}"
  let i++
done < recorded_files.title
rm recorded_files.title
#

#-------------------------------------------------
# Get SeriesID recordings, into "Title"/recorded_files.json
#
for (( iSeriesID = 0 ; iSeriesID < ${#SeriesID[@]} ; iSeriesID++))
do
  # Put Series into Title directory
  mkdir -p "${Title[$iSeriesID]}"
  cd       "${Title[$iSeriesID]}"

  logit "Downloading Series : ${Title[$iSeriesID]}/recorded_files.json"
  /usr/bin/curl -s  --header 'Accept: */*' \
           --header 'Accept-Encoding: gzip, deflate' \
           --header 'Accept-Language: en-US,en;q=0.5' \
           --header 'Cache-Control: no-cache' \
           --header 'Connection: keep-alive' \
           --header "Host: ${IP}" \
           --header 'Pragma: no-cache' \
           --header "Referer: http://${IP}/recorded_files.html?SeriesID=${SeriesID[$iSeriesID]}" \
           --header 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:89.0) Gecko/20100101 Firefox/89.0' \
    http://${IP}/recorded_files.json?SeriesID="${SeriesID[$iSeriesID]}" > recorded_files.json 2>>hdhomerun.log
    #            ^                   ^           ^
  #-------------------------------------------------
  # Get all PlayURL recordings for this SeriesID
  #
  unset PlayURL
  declare -a PlayURL
  PlayURL=($(/usr/bin/jq -r '.[] | .PlayURL' recorded_files.json))
  for (( iPlayURL = 0 ; iPlayURL < ${#PlayURL[@]} ; iPlayURL++))
  do
    logit "PlayURL [$iPlayURL]: ${PlayURL[$iPlayURL]}"
  done

  #-------------------------------------------------
  # Get all Filename recordings for this SeriesID
  #
  unset Filename
  declare -a Filename
  /usr/bin/jq -r '.[] | .Filename' recorded_files.json > recorded_files.title
  let iFilename=0
  while read f
  do

    # Add new element at the end of the array
    Filename+=("${f}")

    if [ ! -f "${Filename[$iFilename]}" ]; then # New file, download it
      logit "Downloading Filename [$iFilename]: ${Filename[$iFilename]}"
      mail -s "HDHomeRun recording: Downloading Filename [$iFilename]: ${Filename[$iFilename]} - ${PlayURL[$iFilename]}" bob@example.com <<EOF
$(/bin/date)
EOF
      #-------------------------------------------------
      # Download Recording
      #
      REAL_URL=$(eval echo ${PlayURL[$iFilename]})
      logit "RealURL: ${REAL_URL}"
      /usr/bin/curl   --header 'Accept: */*' \
             --header 'Accept-Encoding: gzip, deflate' \
             --header 'Accept-Language: en-US,en;q=0.5' \
             --header 'Cache-Control: no-cache' \
             --header 'Connection: keep-alive' \
             --header "Host: ${IP}" \
             --header 'Pragma: no-cache' \
             --header 'User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10.15; rv:89.0) Gecko/20100101 Firefox/89.0' \
      "${REAL_URL}" > "${Filename[$iFilename]}" 2>>hdhomerun.log
      # ^                         ^
    fi
    # 
    let iFilename++
  done < recorded_files.title

  rm recorded_files.title

  # Back to main directory
  cd ..
done
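Incidentally, get.sh reads the titles with a while-read loop instead of the simpler array assignment used for SeriesID. That matters because show titles contain spaces, and command substitution into an array word-splits on all whitespace. A minimal sketch of the difference (the sample titles are made up):

```shell
#!/bin/bash
# Two ways to load command output into a bash array.
# One sample title contains a space.
titles=$'Doctor Who\nNova'

# Naive: command substitution word-splits, breaking "Doctor Who"
naive=( $(printf '%s\n' "$titles") )
echo "naive count: ${#naive[@]}"   # 3 - wrong

# get.sh approach: one array element per line, spaces preserved
safe=()
while read -r t; do
  safe+=("$t")
done <<< "$titles"
echo "safe count: ${#safe[@]}"     # 2 - correct
```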

curseradio

A Python project for listening to streaming internet radio from the command line.

Files:

  • ~/.local/share/curseradio/favourites.opml
  • ~/radio.sh
#!/bin/bash
#sudo apt-get install curseradio
#git clone https://github.com/chronitis/curseradio/
## cd
cd curseradio
## create venv
#python3 -m venv env
## activate venv
source env/bin/activate
## install required libraries via requirements.txt file
# cat requirements.txt 
#lxml
#requests
#pyxdg
## pip install
#pip install -r requirements.txt
## setup.py
# pip install -e .
##Obtaining file:///media/bob/data/curseradio
##Installing collected packages: curseradio
##  Running setup.py develop for curseradio
##Successfully installed curseradio
##
## Run it!
curseradio
## Get out of virtual environment
deactivate

curseradio configuration file

<outline type="audio" text="Handcrafted Radio (US)" URL="
http://opml.radiotime.com/Tune.ashx?id=s97491" bitrate="128"
reliability="98" guide_id="s97491" subtext="Jackson Browne - Doctor My
Eyes" genre_id="g54" formats="mp3" playing="Jackson Browne - Doctor My
Eyes" playing_image="http://cdn-albums.tunein.com/gn/WJRG9NQZCHq.jpg"
item="station" image="http://cdn-radiotime-logos.tunein.com/s97491q.png
" now_playing_id="s97491" preset_id="s97491"/>
...
from ~/.local/share/curseradio/favourites.opml
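The favourites file is OPML, an XML outline format, so each saved station is one outline element like the one above. As a rough sketch (real XML handling would use a proper parser), you can list the saved station names by pulling out the text="..." attributes; the sample data below is made up:

```shell
# List station names from an OPML favourites snippet using sed.
opml='<opml><body>
<outline type="audio" text="Handcrafted Radio (US)" URL="http://example.com/1"/>
<outline type="audio" text="Jazz24" URL="http://example.com/2"/>
</body></opml>'

# Print the value of each text="..." attribute, one per line
names=$(printf '%s\n' "$opml" | sed -n 's/.*text="\([^"]*\)".*/\1/p')
printf '%s\n' "$names"
```

Point the same pipeline at ~/.local/share/curseradio/favourites.opml to inventory your own saved stations.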

Conclusion

Now that you have sensed the sights and sounds of Video and Audio controlled and maintained by your computer, I wish to thank you for your interest in this book.

Hopefully this little write-up was of some help.

Regards,
-- Don

Book Last Updated: 29-March-2024

NCMPCPP Commands


ncmpcpp.png

Reference: https://rybczak.net/ncmpcpp/

Install ncmpcpp

$ sudo apt-get install ncmpcpp

Using ncmpcpp

Simply run this over your ssh session on the host running Music Player Daemon (MPD):

$ ncmpcpp

or run it on one host, connecting to MPD on another host, like this:

$ ncmpcpp --host=192.168.1.44 -p 6600

and you’ll see an ncurses-powered graphical user interface in your terminal.

Press 4 and you should see your local music library, be able to change the selection using the arrow keys and press Enter to play a song.

Doing this multiple times will create a playlist, which allows you to move to the next track using the > button (not the right arrow; the > closing angle bracket character) and go back to the previous track with <. The + and - buttons increase and decrease volume. The q button quits ncmpcpp but it doesn’t stop the music. You can play and pause with p.

You can see the current playlist by pressing the 1 button (this is the default view). From this view you can press i to look at the information (tags) about the current song. You can change the tags of the currently playing (or paused) song by pressing 6.

Pressing the \ button will add (or remove) an informative panel at the top of the view. In the top left, you should see something that looks like this:

[------]

Pressing the r, z, y, R, x buttons will respectively toggle the repeat, random, single, consume and crossfade playback modes and will replace one of the characters in that little indicator to the initial of the selected mode.

Pressing the F1 button will display some help text, which contains a list of keybindings, so there’s no need to write a complete list here. So now go on, be geeky, and play all your music from your terminal!

Reference: https://github.com/ncmpcpp/ncmpcpp

Keys - Movement

  • Up k - Move cursor up
  • Down j - Move cursor down
  • [ - Move cursor up one album
  • ] - Move cursor down one album
  • { - Move cursor up one artist
  • } - Move cursor down one artist
  • Page Up - Page up
  • Page Down - Page down
  • Home - Home
  • End - End
  • Tab - Switch to next screen in sequence
  • Shift-Tab - Switch to previous screen in sequence
  • F1 - Show help
  • 1 - Show playlist
  • 2 - Show browser
  • 3 - Show search engine
  • 4 - Show media library
  • 5 - Show playlist editor
  • 6 - Show tag editor
  • 7 - Show outputs
  • 8 - Show music visualizer
  • = - Show clock
  • @ - Show server info

Keys - Global

  • s - Stop
  • p - Pause
  • > - Next track
  • < - Previous track
  • Ctrl-H Backspace - Replay playing song
  • f - Seek forward in playing song
  • b - Seek backward in playing song
  • - Left - Decrease volume by 2%
  • Right + - Increase volume by 2%
  • t - Toggle space mode (select/add)
  • T - Toggle add mode (add or remove/always add)
  • | - Toggle mouse support
  • v - Reverse selection
  • V - Remove selection
  • B - Select songs of album around the cursor
  • a - Add selected items to playlist
  • ` - Add random items to playlist
  • r - Toggle repeat mode
  • z - Toggle random mode
  • y - Toggle single mode
  • R - Toggle consume mode
  • Y - Toggle replay gain mode
  • # - Toggle bitrate visibility
  • Z - Shuffle playlist
  • x - Toggle crossfade mode
  • X - Set crossfade
  • u - Start music database update
  • : - Execute command
  • Ctrl-F - Apply filter
  • / - Find item forward
  • ? - Find item backward
  • , - Jump to previous found item
  • . - Jump to next found item
  • w - Toggle find mode (normal/wrapped)
  • G - Locate song in browser
  • ~ - Locate song in media library
  • Ctrl-L - Lock/unlock current screen
  • Left h - Switch to master screen (left one)
  • Right l - Switch to slave screen (right one)
  • E - Locate song in tag editor
  • P - Toggle display mode
  • \ - Toggle user interface
  • ! - Toggle displaying separators between albums
  • g - Jump to given position in playing song (formats: mm:ss, x%)
  • i - Show song info
  • I - Show artist info
  • L - Toggle lyrics fetcher
  • F - Toggle fetching lyrics for playing songs in background
  • q - Quit

Keys - Playlist

  • Enter - Play selected item
  • Delete - Delete selected item(s) from playlist
  • c - Clear playlist
  • C - Clear playlist except selected item(s)
  • Ctrl-P - Set priority of selected items
  • Ctrl-K m - Move selected item(s) up
  • n Ctrl-J - Move selected item(s) down
  • M - Move selected item(s) to cursor position
  • A - Add item to playlist
  • e - Edit song
  • S - Save playlist
  • Ctrl-V - Sort playlist
  • Ctrl-R - Reverse playlist
  • o - Jump to current song
  • U - Toggle playing song centering

Keys - Browser

  • Enter - Enter directory/Add item to playlist and play it
  • Space - Add item to playlist/select it
  • e - Edit song
  • e - Edit directory name
  • e - Edit playlist name
  • 2 - Browse MPD database/local filesystem
  • ` - Toggle sort mode
  • o - Locate playing song
  • Ctrl-H Backspace - Jump to parent directory
  • Delete - Delete selected items from disk
  • G - Jump to playlist editor (playlists only)

Keys - Search engine

  • Enter - Add item to playlist and play it/change option
  • Space - Add item to playlist
  • e - Edit song
  • y - Start searching
  • 3 - Reset search constraints and clear results

Keys - Media library

  • 4 - Switch between two/three columns mode
  • Left h - Previous column
  • Right l - Next column
  • Enter - Add item to playlist and play it
  • Space - Add item to playlist
  • e - Edit song
  • e - Edit tag (left column)/album (middle/right column)
  • ` - Toggle type of tag used in left column
  • m - Toggle sort mode

Keys - Playlist editor

  • Left h - Previous column
  • Right l - Next column
  • Enter - Add item to playlist and play it
  • Space - Add item to playlist/select it
  • e - Edit song
  • e - Edit playlist name
  • Ctrl-K m - Move selected item(s) up
  • n Ctrl-J - Move selected item(s) down
  • Delete - Delete selected playlists (left column)
  • Delete - Delete selected item(s) from playlist (right column)
  • c - Clear playlist
  • C - Clear playlist except selected items

Keys - Lyrics

  • Space - Toggle reloading lyrics upon song change
  • e - Open lyrics in external editor
  • ` - Refetch lyrics

Keys - Tiny tag editor

  • Enter - Edit tag
  • y - Save

Keys - Tag editor

  • Enter - Edit tag/filename of selected item (left column)
  • Enter - Perform operation on all/selected items (middle column)
  • Space - Switch to albums/directories view (left column)
  • Space - Select item (right column)
  • Left h - Previous column
  • Right l - Next column
  • Ctrl-H Backspace - Jump to parent directory (left column, directories view)

2023


January

  • Docker

February

  • Virtual IP

March

  • Encrypt Files

April

  • Network Management

2023 - January


Docker

Docker is a way to manage containers, which isolate processes using Linux namespaces and control groups, much like BSD jails. Here are some useful common commands to check the status of these containers.

Prune unused containers, images and cache

This can free up disk space.

$ docker system prune
WARNING! This will remove:
  - all stopped containers
  - all networks not used by at least one container
  - all dangling images
  - all dangling build cache

Are you sure you want to continue? [y/N] y
Deleted Containers:
e91363806fd869a4f9349fc2de4faf26cc6b4b26a7fdbf16c5535c13ef46f995

Deleted Images:
untagged: homeassistant/home-assistant@sha256:1b4a62627444841ad222aaf3795b90581c2f8b712762c8d7d6b6cf9a42045ea8
deleted: sha256:8a975b58aacfc196fbe3eeb34810b9068e8a50d953ff88b67504d1c2351940d7
deleted: sha256:8df0a83f8a5a8360e6deef1187b2dbf6c27b7fe57584bca0b657671363a20729
deleted: sha256:abdac85af7ebbecf80442d554fa655133eb4ee2dbf4d96136caf1615e7f82992
deleted: sha256:b992bfbe5047c28682b5cc7ce976e418b5402113ee404143a4de208214f322f7
deleted: sha256:84e4d6e3db7ee758f3089b2f28ed3b1d248e31c72604004fdfdd34970c1a47
...
Total reclaimed space: 3.556GB

Reference: https://docs.docker.com/engine/reference/commandline/system_prune/

Check a container's logs

$ docker logs -f homeassistant
...
Traceback (most recent call last):
  File "/usr/local/lib/python3.10/site-packages/pyatv/protocols/airplay/__init__.py", line 258, in _connect_rc
    await control.start(str(core.config.address), control_port, credentials)
  File "/usr/local/lib/python3.10/site-packages/pyatv/protocols/airplay/remote_control.py", line 60, in start
    await self._setup_event_channel(self.connection.remote_ip)
  File "/usr/local/lib/python3.10/site-packages/pyatv/protocols/airplay/remote_control.py", line 102, in _setup_event_channel
    resp = await self._setup(
...

Inspect a container

$ docker container inspect  homeassistant
[
    {
        "Id": "0baa0742e18858ebea97a93ee5a97f0fb09f369d504d3f6d1252c8c35234ac91",
        "Created": "2023-04-16T23:45:06.042429674Z",
        "Path": "/init",
        "Args": [],
        "State": {
            "Status": "running",
            "Running": true,
            "Paused": false,
            "Restarting": false,
            "OOMKilled": false,
            "Dead": false,
            "Pid": 2378698,
            "ExitCode": 0,
            "Error": "",
            "StartedAt": "2023-04-16T23:45:06.981493189Z",
            "FinishedAt": "0001-01-01T00:00:00Z"
        },
...

Check for container filesystem changes since start

$ docker diff <container name>

Check container running processes

$ docker top homeassistant
UID                 PID                 PPID                C                   STIME               TTY                 TIME                CMD
root                2378698             2378670             0                   19:45               ?                   00:00:00            /package/admin/s6/command/s6-svscan -d4 -- /run/service
root                2378732             2378698             0                   19:45               ?                   00:00:00            s6-supervise s6-linux-init-shutdownd
root                2378733             2378732             0                   19:45               ?                   00:00:00            /package/admin/s6-linux-init/command/s6-linux-init-shutdownd -c /run/s6/basedir -g 3000 -C -B
root                2378742             2378698             0                   19:45               ?                   00:00:00            s6-supervise s6rc-fdholder
root                2378743             2378698             0                   19:45               ?                   00:00:00            s6-supervise s6rc-oneshot-runner
root                2378751             2378743             0                   19:45               ?                   00:00:00            /package/admin/s6/command/s6-ipcserverd -1 -- /package/admin/s6/command/s6-ipcserver-access -v0 -E -l0 -i data/rules -- /package/admin/s6/command/s6-sudod -t 30000 -- /package/admin/s6-rc/command/s6-rc-oneshot-run -l ../.. --
root                2378782             2378698             0                   19:45               ?                   00:00:00            s6-supervise home-assistant
root                2378784             2378782             1                   19:45               ?                   00:00:15            python3 -m homeassistant --config /config

Check container statistics

$ docker stats homeassistant

CONTAINER ID   NAME            CPU %     MEM USAGE / LIMIT     MEM %     NET I/O   BLOCK I/O        PIDS
0baa0742e188   homeassistant   0.23%     217.6MiB / 15.52GiB   1.37%     0B / 0B   102MB / 10.5MB   21

Check docker networks

$ docker network ls
NETWORK ID     NAME      DRIVER    SCOPE
b67081eb628c   bridge    bridge    local
0921bc6a0760   host      host      local
c8363d03e4cd   none      null      local

$ docker network inspect  bridge
[
    {
        "Name": "bridge",
        "Id": "b67081eb628c51d90d2f234df1c97580bc0d83fa52c300f2fbf4764556a14ba6",
        "Created": "2023-04-15T06:02:48.923361812-04:00",
        "Scope": "local",
        "Driver": "bridge",
        "EnableIPv6": false,
        "IPAM": {
            "Driver": "default",
            "Options": null,
            "Config": [
                {
                    "Subnet": "172.17.0.0/16",
                    "Gateway": "172.17.0.1"
                }
            ]
        },
        "Internal": false,
        "Attachable": false,
        "Ingress": false,
        "ConfigFrom": {
            "Network": ""
        },
        "ConfigOnly": false,
        "Containers": {
            "3357239a67204620012173ae3cd724a2508d9456d2aaabee7b1be3d7b351e6f7": {
                "Name": "zwave-js",
                "EndpointID": "7d50227167937e8ce5886abeb87411f66902eae494b4b3860ec133b77a39ecbd",
                "MacAddress": "02:42:ac:11:00:02",
                "IPv4Address": "172.17.0.2/16",
                "IPv6Address": ""
            }
        },
        "Options": {
            "com.docker.network.bridge.default_bridge": "true",
            "com.docker.network.bridge.enable_icc": "true",
            "com.docker.network.bridge.enable_ip_masquerade": "true",
            "com.docker.network.bridge.host_binding_ipv4": "0.0.0.0",
            "com.docker.network.bridge.name": "docker0",
            "com.docker.network.driver.mtu": "1500"
        },
        "Labels": {}
    }
]

Move data files

Default Docker installs put the large containers into /var/lib/docker, which may be a directory on the root filesystem. It is better to put them on a separate filesystem in case they grow unexpectedly.

1 - Stop the docker daemon

sudo service docker stop

2 - Add a configuration file telling the Docker daemon the location of the new data directory

Change the data-root location:

File: /etc/docker/daemon.json

{ 
   "data-root": "/path/to/your/docker" 
}
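One caution before the restart in step 5: a malformed daemon.json will prevent the daemon from starting at all, so it is worth validating the JSON first. A sketch using python3's json.tool on a temporary copy (point it at /etc/docker/daemon.json for real use):

```shell
# Validate a daemon.json-style file before restarting dockerd;
# a syntax error here stops the daemon from starting.
tmp=$(mktemp)
cat > "$tmp" <<'EOF'
{
   "data-root": "/path/to/your/docker"
}
EOF

# json.tool exits non-zero on invalid JSON
if python3 -m json.tool "$tmp" > /dev/null 2>&1; then
  status="valid"
else
  status="invalid"
fi
echo "daemon.json is $status JSON"
```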

3 - Copy the current data directory to the new one

sudo rsync -aP /var/lib/docker/ /path/to/your/docker

4 - Rename the old docker directory

sudo mv /var/lib/docker /var/lib/docker.old

This is a sanity check to confirm that everything is OK and the Docker daemon is actually using the new location for its data.

5 - Restart the docker daemon

sudo service docker start

6 - Test

If everything is OK you should see no difference when using your Docker containers. Once you are sure the new directory is being used correctly by the Docker daemon, you can delete the old data directory.

sudo rm -rf /var/lib/docker.old

Upgrade a container

Here is my documentation on upgrading the homeassistant docker container:

Upgrade HomeAssistant

2023 - February


Virtual IP

A virtual IP (VLAN) interface behaves like a normal interface. All traffic routed to it goes out through the master interface (for example, eth0), but with a VLAN tag added. Only correctly configured VLAN-aware devices will accept the tagged frames; otherwise the traffic is dropped.

You can create a Virtual IP address for special routing purposes or security. Normally these are used for server to server connections, or to isolate guest connections.

NetworkD Virtual IP

  • Create a file in the /etc/netplan directory with a name ending in .yaml. Files there are processed in numeric/alphabetical order.

File: /etc/netplan/60-vlan-init.yaml

# Remove NetworkManager - add second interface - Don Sept 2019
network:
  version: 2
  renderer: networkd
  # ERROR: vlan1: NetworkManager only supports global scoped routes
  #renderer: NetworkManager
  ethernets:
    eno1:
      addresses: [192.168.1.3/24]
      gateway4: 192.168.1.1
      nameservers:
        addresses: [1.1.1.1, 1.0.0.1]
      optional: true
    eno2: {}
  vlans:
    vlan1:
      id: 1
      link: eno1
      addresses: [192.168.2.3/24]

  • Try the change in debug mode first:
$ sudo netplan --debug try
DEBUG:eno1 not found in {}
DEBUG:eno2 not found in {'eno1': {'addresses': ['192.168.1.3/24'], 'gateway4': '192.168.1.1', 'nameservers': {'addresses': ['1.1.1.1', '1.0.0.1']}, 'optional': True}}
DEBUG:vlan1 not found in {}
DEBUG:Merged config:
network:
  bonds: {}
  bridges: {}
  ethernets:
    eno1:
      addresses:
      - 192.168.1.3/24
      gateway4: 192.168.1.1
      nameservers:
        addresses:
        - 1.1.1.1
        - 1.0.0.1
      optional: true
    eno2: {}
  vlans:
    vlan1:
      addresses:
      - 192.168.2.3/24
      id: 1
      link: eno1
  wifis: {}

DEBUG:New interfaces: set()
** (generate:11029): DEBUG: 08:52:30.927: Processing input file /etc/netplan/60-vlan-init.yaml..
** (generate:11029): DEBUG: 08:52:30.927: starting new processing pass
** (generate:11029): DEBUG: 08:52:30.927: vlan1: setting default backend to 1
** (generate:11029): DEBUG: 08:52:30.927: Configuration is valid
** (generate:11029): DEBUG: 08:52:30.927: eno1: setting default backend to 1
** (generate:11029): DEBUG: 08:52:30.927: Configuration is valid
** (generate:11029): DEBUG: 08:52:30.927: eno2: setting default backend to 1
** (generate:11029): DEBUG: 08:52:30.927: Configuration is valid
** (generate:11029): DEBUG: 08:52:30.928: Generating output files..
** (generate:11029): DEBUG: 08:52:30.928: NetworkManager: definition eno1 is not for us (backend 1)
** (generate:11029): DEBUG: 08:52:30.928: NetworkManager: definition eno2 is not for us (backend 1)
** (generate:11029): DEBUG: 08:52:30.928: NetworkManager: definition vlan1 is not for us (backend 1)
DEBUG:netplan generated networkd configuration changed, restarting networkd
DEBUG:no netplan generated NM configuration exists
DEBUG:eno1 not found in {}
DEBUG:eno2 not found in {'eno1': {'addresses': ['192.168.1.3/24'], 'gateway4': '192.168.1.1', 'nameservers': {'addresses': ['1.1.1.1', '1.0.0.1']}, 'optional': True}}
DEBUG:vlan1 not found in {}
DEBUG:Merged config:
network:
  bonds: {}
  bridges: {}
  ethernets:
    eno1:
      addresses:
      - 192.168.1.3/24
      gateway4: 192.168.1.1
      nameservers:
        addresses:
        - 1.1.1.1
        - 1.0.0.1
      optional: true
    eno2: {}
  vlans:
    vlan1:
      addresses:
      - 192.168.2.3/24
      id: 1
      link: eno1
  wifis: {}

DEBUG:Skipping non-physical interface: lo
DEBUG:device eno1 operstate is up, not changing
DEBUG:Skipping non-physical interface: vlan1
DEBUG:Skipping non-physical interface: wlp58s0
DEBUG:Skipping non-physical interface: tun0
DEBUG:{}
DEBUG:netplan triggering .link rules for lo
DEBUG:netplan triggering .link rules for eno1
DEBUG:netplan triggering .link rules for vlan1
DEBUG:netplan triggering .link rules for wlp58s0
DEBUG:netplan triggering .link rules for tun0
Do you want to keep these settings?


Press ENTER before the timeout to accept the new configuration


Changes will revert in 118 seconds
Configuration accepted.
  • If you have success, make the change permanent:
$ sudo netplan apply
  • Test it with a ping:
# ping 192.168.2.3
PING 192.168.2.3 (192.168.2.3) 56(84) bytes of data.
64 bytes from 192.168.2.3: icmp_seq=1 ttl=64 time=0.088 ms
64 bytes from 192.168.2.3: icmp_seq=2 ttl=64 time=0.104 ms
^C
--- 192.168.2.3 ping statistics ---
2 packets transmitted, 2 received, 0% packet loss, time 1011ms
rtt min/avg/max/mdev = 0.088/0.096/0.104/0.008 ms
  • Check the routes. One for physical interface eno1, another for virtual interface vlan1.
# route
Kernel IP routing table
Destination     Gateway         Genmask         Flags Metric Ref    Use Iface
default         _gateway        0.0.0.0         UG    0      0        0 eno1
192.168.1.0     0.0.0.0         255.255.255.0   U     0      0        0 eno1
192.168.2.0     0.0.0.0         255.255.255.0   U     0      0        0 vlan1
  • Another way to check routes:
# ip r
default via 192.168.1.1 dev eno1 proto static 
192.168.1.0/24 dev eno1 proto kernel scope link src 192.168.1.3 
192.168.2.0/24 dev vlan1 proto kernel scope link src 192.168.2.3 
  • Also you can check the ip addresses:

Notice the virtual interface is called vlan1@eno1 because it is stacked on top of physical interface eno1.

# ip a
~
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 1c:69:7a:09:e7:61 brd ff:ff:ff:ff:ff:ff
    altname enp0s31f6
    inet 192.168.1.3/24 brd 192.168.1.255 scope global eno1
       valid_lft forever preferred_lft forever
    inet6 fe80::1e69:7aff:fe09:e761/64 scope link 
       valid_lft forever preferred_lft forever
~
4: vlan1@eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 1c:69:7a:09:e7:61 brd ff:ff:ff:ff:ff:ff
    inet 192.168.2.3/24 brd 192.168.2.255 scope global vlan1
       valid_lft forever preferred_lft forever
    inet6 fe80::1e69:7aff:fe09:e761/64 scope link 
       valid_lft forever preferred_lft forever
~

Reference: https://netplan.io/examples

Create vlan from command line

Create a VLAN called vlan9 on physical device eth0, with a VLAN id of 9.

$ sudo ip link add link eth0 name vlan9 type vlan id 9

Display interface

$ sudo ip -d link show vlan9
4: vlan9@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP mode DEFAULT group default qlen 1000
    link/ether 1c:69:7a:09:e7:61 brd ff:ff:ff:ff:ff:ff promiscuity 0 minmtu 0 maxmtu 65535 
    vlan protocol 802.1Q id 9 <REORDER_HDR> addrgenmode eui64 numtxqueues 1 numrxqueues 1 gso_max_size 65536 gso_max_segs 65535 

Of course this interface will go away after a reboot, unless you run this command again.

The -d flag shows full details of an interface. Notice the vlan protocol 802.1Q id is 9.

$ sudo ip -d addr show
4: vlan9@eth0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default 
   link/ether 96:4a:9c:84:36:51 brd ff:ff:ff:ff:ff:ff promiscuity 0 
   vlan protocol 802.1Q id 9 <REORDER_HDR> 
   inet6 fe80::944a:9cff:fe84:3651/64 scope link 
      valid_lft forever preferred_lft forever

Add an IPv4 address:

#                  IP Address           Broadcast           Device
$ sudo ip addr add 192.168.100.1/24 brd 192.168.100.255 dev vlan9
$ sudo ip link set dev vlan9 up

Shut down the link:

$ sudo ip link set dev vlan9 down

Remove VLAN interface:

$ sudo ip link delete vlan9

Reference:

Redhat Version

Install nmstate package

$ sudo dnf install nmstate

Create config file

File: /etc/nmstate/60-create-vlan.yml

---
interfaces:
- name: vlan10
  type: vlan
  state: up
  ipv4:
    enabled: true
    address:
    - ip: 192.168.22.1
      prefix-length: 24
    dhcp: false
  ipv6:
    enabled: false
  vlan:
    base-iface: eno1
    id: 10
- name: eno1
  type: ethernet
  state: up

Apply config file

$ sudo nmstatectl apply /etc/nmstate/60-create-vlan.yml
  • Verification

Display the status of the devices and connections:

# nmcli device status
  DEVICE      TYPE      STATE      CONNECTION
  vlan10      vlan      connected  vlan10

Display all settings of the connection profile:

# nmcli connection show vlan10
  connection.id:              vlan10
  connection.uuid:            1722970f-788e-4f81-bd7d-a86bf21c9df5
  connection.stable-id:       --
  connection.type:            vlan
  connection.interface-name:  vlan10
  ...

Display the connection settings in YAML format:

# nmstatectl show vlan10

Permanent setup is performed by nmstate.service. It invokes the nmstatectl service command, which applies every network state file ending in .yml in the /etc/nmstate folder. Each applied file is renamed with the suffix .applied so it is not applied again on the next service start. Rename the file back to .yml and restart nmstate to make changes active.

$ sudo systemctl status nmstate.service
● nmstate.service - Apply nmstate on-disk state
     Loaded: loaded (/usr/lib/systemd/system/nmstate.service; enabled; preset: disabled)
     Active: active (exited) since Sat 2023-06-10 15:31:04 EDT; 50s ago
       Docs: man:nmstate.service(8)
             https://www.nmstate.io
    Process: 77788 ExecStart=/usr/bin/nmstatectl service (code=exited, status=0/SUCCESS)
   Main PID: 77788 (code=exited, status=0/SUCCESS)
        CPU: 40ms

Jun 10 15:31:04 bob.example.com nmstatectl[77788]: [2022-06-10T19:31:04Z INFO  nmstate::nm::query_apply::profile] Modifying connection UUID Some("050da471-2365-4e>
Jun 10 15:31:04 bob.example.com nmstatectl[77788]: [2022-06-10T19:31:04Z INFO  nmstate::nm::query_apply::profile] Reapplying connection 1f39a84e-5d13-3ea0-8b34-fd>
Jun 10 15:31:04 bob.example.com nmstatectl[77788]: [2022-06-10T19:31:04Z INFO  nmstate::nm::query_apply::profile] Reapplying connection 0a0d9431-27a5-4e7e-b370-47>
Jun 10 15:31:04 bob.example.com nmstatectl[77788]: [2022-06-10T19:31:04Z INFO  nmstate::nispor::base_iface] Got unsupported interface type Tun: vnet5, ignoring
Jun 10 15:31:04 bob.example.com nmstatectl[77788]: [2022-06-10T19:31:04Z INFO  nmstate::nispor::show] Got unsupported interface vnet5 type Tun
Jun 10 15:31:04 bob.example.com nmstatectl[77788]: [2022-06-10T19:31:04Z INFO  nmstate::nm::show] Got unsupported interface type tun: vnet5, ignoring
Jun 10 15:31:04 bob.example.com nmstatectl[77788]: [2022-06-10T19:31:04Z INFO  nmstate::query_apply::net_state] Destroyed checkpoint /org/freedesktop/NetworkManag>
Jun 10 15:31:04 bob.example.com nmstatectl[77788]: [2022-06-10T19:31:04Z INFO  nmstatectl::service] Applied nmstate config: /etc/nmstate/60-create-vlan.yml
Jun 10 15:31:04 bob.example.com nmstatectl[77788]: [2022-06-10T19:31:04Z INFO  nmstatectl::service] Renamed applied config /etc/nmstate/60-create-vlan.yml to /etc/nm>
Jun 10 15:31:04 bob.example.com systemd[1]: Finished Apply nmstate on-disk state.
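After editing a config that the service has already renamed, you can rename it back and restart the service to re-apply it; a minimal sketch using the file name from the example above (requires root):

```shell
# Rename the applied config back to .yml so nmstate.service picks it up again,
# then restart the service (file name from the example above; requires root).
sudo mv /etc/nmstate/60-create-vlan.yml.applied /etc/nmstate/60-create-vlan.yml
sudo systemctl restart nmstate.service
```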

2023 - March


Encrypting Files

You should protect your carefully built system and its files from abuse, tampering, or just plain spying. It's easy to do, so what are you waiting for? Encrypt your data directory now.

E-mailing files is not secure either, but you can encrypt a file before sending it, and the other party can decrypt it with just a simple pass phrase.

Encrypt disk partition using LUKS Format

1 - Create cryptographic device mapper device in LUKS encryption mode:

sudo cryptsetup --verbose --cipher aes-xts-plain64 --key-size 512 --hash sha512 --iter-time 5000 --use-random luksFormat /dev/sdd1

2 - Unlock the partition; here "backup" is the device mapper name. Think of it as a label.

sudo cryptsetup open --type luks /dev/sdd1 backup

3 - We have to create a filesystem in order to write encrypted data; it will be accessible through the device mapper name (label).

sudo mkfs.ext4 /dev/mapper/backup

4 - Mount the device and transfer all of your data:

sudo mount -t ext4 /dev/mapper/backup /backups

5 - Unmount and close the device once you are done:

sudo umount /backups

sudo cryptsetup close backup

Last but not least, clear the copy and cache buffers:

sudo sysctl --write vm.drop_caches=3
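Before committing a real disk, you can rehearse steps 1 through 5 against a loopback file; a sketch with illustrative names that still requires root:

```shell
# Create a 64 MiB file and treat it as a disk (paths and names are illustrative;
# requires root). The same cryptsetup workflow applies as with a real partition.
dd if=/dev/zero of=/tmp/luks-demo.img bs=1M count=64
sudo cryptsetup --batch-mode luksFormat /tmp/luks-demo.img   # prompts for a passphrase
sudo cryptsetup open /tmp/luks-demo.img demo                 # unlocks as /dev/mapper/demo
sudo mkfs.ext4 /dev/mapper/demo
sudo mkdir -p /mnt/demo && sudo mount /dev/mapper/demo /mnt/demo
sudo umount /mnt/demo
sudo cryptsetup close demo
rm /tmp/luks-demo.img                                        # discard the practice disk
```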

Encrypt a file

Create a file to encrypt:

$ echo "Cold!" > mittens

Encrypt it:

You will be prompted for a pass phrase

$ gpg -c mittens

Check the output:

<file name>.gpg is the encrypted version; the original file is left intact

$ ls mittens*
mittens
mittens.gpg

$ strings mittens*
Cold!
O2#a3

You can now distribute the encrypted file. You must also securely share the passphrase.

Decrypt it

This example uses the keyring stored in this computer user's home directory, ~/.gnupg

$ gpg -d mittens.gpg 
gpg: AES256.CFB encrypted data
gpg: encrypted with 1 passphrase
Cold!
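The same encrypt/decrypt cycle can be scripted without the interactive prompt; a sketch assuming gpg is installed (the passphrase and temp directories are illustrative, and note that --passphrase on a command line is visible to other local users, so prefer --passphrase-file in real scripts):

```shell
# Symmetric encryption in batch mode, inside throwaway directories so nothing
# in the real home keyring is touched (all paths illustrative).
export GNUPGHOME="$(mktemp -d)"
chmod 700 "$GNUPGHOME"
cd "$(mktemp -d)"
echo "Cold!" > mittens
# --batch with --pinentry-mode loopback lets --passphrase work non-interactively
gpg --batch --yes --pinentry-mode loopback --passphrase 'demo-passphrase' -c mittens
gpg --batch --yes --pinentry-mode loopback --passphrase 'demo-passphrase' -d mittens.gpg
```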

Files created the first time you run gpg:

$ ls ~/.gnupg/
private-keys-v1.d  pubring.kbx	random_seed

Reference: https://www.redhat.com/sysadmin/getting-started-gpg

GPG key files

GPG supports private and public key files, so shared passphrases are not required in normal use for encrypting, decrypting, and signing files and e-mails.

Here is how that works.

Create a GPG keypair

Use the default (1)

$ gpg --full-generate-key
Please select what kind of key you want:
   (1) RSA and RSA (default)
...

Use the default 3072

RSA keys may be between 1024 and 4096 bits long.
What keysize do you want? (3072)

Enter 2 years

Please specify how long the key should be valid.
         0 = key does not expire
      <n>  = key expires in n days
      <n>w = key expires in n weeks
      <n>m = key expires in n months
      <n>y = key expires in n years
Key is valid for? (0) 2y
  • Fill out your name, email, etc.
GnuPG needs to construct a user ID to identify your key.

Real name: Best User
Email address: bestuser@example.com
Comment: Best Company
You selected this USER-ID:
    "Best User (Best Company) <bestuser@example.com>"

Change (N)ame, (C)omment, (E)mail or (O)kay/(Q)uit?

Enter a passphrase in the popup

  • Verify your information, then a keystore will be created in your home directory:
...
We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.
gpg: key B...5 marked as ultimately trusted
gpg: directory '/home/bob/.gnupg/openpgp-revocs.d' created
gpg: revocation certificate stored as '/home/bob/.gnupg/openpgp-revocs.d/A...3.rev'
public and secret key created and signed.

pub   rsa3072 2023-04-20 [SC] [expires: 2025-04-19]
      A...3
uid                      Bob <bob@bob.com>
sub   rsa3072 2023-04-20 [E] [expires: 2025-04-19]
  • Store your host, username, and passphrase in your password manager.

Files created:

$ tree ~/.gnupg/
/home/bob/.gnupg/
├── openpgp-revocs.d
│   └── B...5.rev
├── private-keys-v1.d
│   ├── C...2.key
│   └── A...3.key
├── pubring.kbx
├── pubring.kbx~
├── random_seed
└── trustdb.gpg
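The interactive prompts above can also be answered from a parameter file, which is handy on headless servers; a sketch assuming gpg is installed (the user id and %no-protection are illustrative, and %no-protection creates a key without a passphrase, so use it only for testing):

```shell
# Unattended key generation into a throwaway GPG home (directory illustrative),
# answering the same questions as gpg --full-generate-key.
export GNUPGHOME="$(mktemp -d)"
chmod 700 "$GNUPGHOME"
gpg --batch --generate-key <<'EOF'
%no-protection
Key-Type: RSA
Key-Length: 3072
Subkey-Type: RSA
Subkey-Length: 3072
Name-Real: Demo User
Name-Email: demo@example.com
Expire-Date: 2y
%commit
EOF
gpg --list-keys demo@example.com
```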

Edit your GPG key

$ gpg --edit-key bestuser@example.com
gpg>

At the subprompt, help or a ? lists the available edit commands.

List GPG keys

$ gpg --list-keys
gpg: checking the trustdb
gpg: marginals needed: 3  completes needed: 1  trust model: pgp
gpg: depth: 0  valid:   1  signed:   0  trust: 0-, 0q, 0n, 0m, 0f, 1u
gpg: next trustdb check due at 2025-04-19
/home/bob/.gnupg/pubring.kbx
----------------------------
pub   rsa3072 2023-04-20 [SC] [expires: 2025-04-19]
      A...3
uid           [ultimate] Bob <bob@bob.com>
sub   rsa3072 2023-04-20 [E] [expires: 2025-04-19]

Export Public GPG key

$ gpg --export --armor --output bob-gpg.pub
$ more bob-gpg.pub 
-----BEGIN PGP PUBLIC KEY BLOCK-----
...

The -a or --armor option encodes the output to plain text. The -o or --output option saves the output to a specified file instead of displaying it to standard out on the screen.

Sharing public keys

You can share your public key on the OpenPGP key server [1]. This way other people can send you encrypted e-mail and verify your signatures; they just need to import your key from the keyserver using your email address.

  • Pros:

    1. Public key servers allow other people to easily send you encrypted e-mail.
    2. Signing verifies the email came from you.
    3. It also guarantees the message was not altered.
  • Cons:

    1. It does not assure privacy by itself: a public key only lets others encrypt mail to you and verify your signatures, so anything you sign but do not encrypt can be read by any third party.
    2. After your certificate expires or is revoked, encrypted messages become unreadable. Renewed certificates allow old messages to be read.
  1. https://keys.openpgp.org/upload

Fingerprints

To allow other people a method of verifying the public key, also share the fingerprint of the public key in email signatures and even on business cards. The more places it appears, the more likely others will have a copy of the correct fingerprint to use for verification.

$ gpg --fingerprint
/home/bob/.gnupg/pubring.kbx
----------------------------
pub   rsa3072 2023-04-20 [SC] [expires: 2025-04-19]
      A... 3... 5... 7... D...  9... A... E... 1... 4...
uid           [ultimate] Bob <bob@bob.com>
sub   rsa3072 2023-04-20 [E] [expires: 2025-04-19]

Reference: https://www.redhat.com/sysadmin/creating-gpg-keypairs

Protonmail

To obtain a correspondent's protonmail public key, use curl. Change user@protonmail.com to the real email.

$ curl https://api.protonmail.ch/pks/lookup?op=get\&search=user@protonmail.com -o user-pubkey.asc

Then import it into gpg

$ gpg --import user-pubkey.asc

Now you can encrypt files to them and verify their signatures, and they can import your public pgp key from https://keys.openpgp.org using your email address.

Reference: https://proton.me/mail

Copy keys to e-mail client host

Maybe you have a laptop for e-mail, while the gpg keys were created on a server. Here is how to copy the key(s).

Export public and secret key files

ID=bob@bob.com
 gpg --export ${ID} > public.key
 gpg --export-secret-key ${ID} > private.key

Copy to new host

 scp bob@server:'/home/bob/gpg/bob.com/*.key' .

Import into gpg on new host

 gpg --import public.key
 gpg --import private.key

Be sure to clean up the keys!

 rm public.key
 rm private.key

You will need to verify the key in your GUI E-Mail client; change it to 'verified' or 'accepted' to get full functionality.

Now you can send/receive encrypted e-mail on the new host
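If the two hosts can reach each other over ssh, you can skip the intermediate files entirely; a sketch where the host name "laptop" and the key id are illustrative:

```shell
# Pipe the exported keys straight into gpg on the other host, so no key file
# ever sits on disk ('laptop' and the key id are illustrative).
gpg --export bob@bob.com | ssh laptop 'gpg --import'
gpg --export-secret-key bob@bob.com | ssh laptop 'gpg --import'
```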

Revoke Certificate

If you forget your password or your private key is compromised, revoke the current certificate.

$ gpg --output revoke.asc --gen-revoke user@example.com
$ gpg --import revoke.asc

Importing revoke.asc marks the key as revoked in your keyring.

Be sure to update the public key server and your fingerprints.

Encrypt/decrypt a file with a key

  • Create a file to be mailed:

File: junk

To Whom it May Concern,

This is addressed to the party that I have contacted.

Regards,
-- Me
  • Encrypt your email file
$ gpg --sign --armor --recipient bob@bob.com --encrypt junk
  • Mail it
cat junk.asc | mail -s "Hello Bro" bob@bob.com
  • The other person can decrypt it with their private key (and verify your signature if they imported your public key)
$ gpg --output junk-doc --decrypt junk.asc

Some consider it more private to keep your public key off the public servers; you just have to find a secure way to share it, like your own cloud server.
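The whole exchange can be rehearsed end to end with a throwaway key; a sketch assuming gpg is installed (the name, address, and empty passphrase are illustrative, and an unprotected key should only be used for testing):

```shell
# Generate an unprotected test key in a throwaway GPG home, then play both
# sender and recipient with it (all names illustrative).
export GNUPGHOME="$(mktemp -d)"
chmod 700 "$GNUPGHOME"
cd "$(mktemp -d)"
gpg --batch --pinentry-mode loopback --passphrase '' \
    --quick-generate-key 'Demo <demo@example.com>'
echo "To Whom it May Concern," > junk
gpg --batch --armor --recipient demo@example.com --encrypt junk   # writes junk.asc
gpg --batch --yes --output junk-doc --decrypt junk.asc
cat junk-doc
```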

2023 - April


Network Management

The command line offers a wealth of network management commands. Here are some of my favorites.

nmcli

NetworkManager has a command line interface (CLI).

Get status:

$ nmcli general status
STATE      CONNECTIVITY  WIFI-HW  WIFI     WWAN-HW  WWAN    
connected  full          enabled  enabled  enabled  enabled 

Get connections:

$ nmcli connection show
NAME                UUID                                  TYPE      DEVICE 
Wired connection 1  39738cc4-2a3b-3990-8c49-b4d0355116c3  ethernet  eth0   

Get devices:

$ nmcli device
DEVICE  TYPE      STATE      CONNECTION         
eth0    ethernet  connected  Wired connection 1 
usb0    ethernet  unmanaged  --                 
usb1    ethernet  unmanaged  --                 
lo      loopback  unmanaged  --                 

Get configuration file names (notice they are in /etc and /run):

$ nmcli -f TYPE,FILENAME,NAME conn
TYPE      FILENAME                                                                NAME               
ethernet  /etc/NetworkManager/system-connections/eno1.nmconnection                eno1               
loopback  /run/NetworkManager/system-connections/lo.nmconnection                  lo                 
bridge    /etc/NetworkManager/system-connections/virbr0.nmconnection              virbr0             
vlan      /etc/NetworkManager/system-connections/vlan1.nmconnection               vlan1              
tun       /run/NetworkManager/system-connections/vnet3.nmconnection               vnet3              
ethernet  /etc/NetworkManager/system-connections/Wired connection 2.nmconnection  Wired connection 2

Show device details

$ nmcli device show eth0
GENERAL.DEVICE:                         eth0
GENERAL.TYPE:                           ethernet
GENERAL.HWADDR:                         53:21:B9:C6:B6:FE
GENERAL.MTU:                            1500
GENERAL.STATE:                          100 (connected)
GENERAL.CONNECTION:                     Wired connection 1
GENERAL.CON-PATH:                       /org/freedesktop/NetworkManager/ActiveConnection/4
WIRED-PROPERTIES.CARRIER:               on
IP4.ADDRESS[1]:                         192.168.1.9/24
IP4.GATEWAY:                            192.168.1.1
IP4.ROUTE[1]:                           dst = 0.0.0.0/0, nh = 192.168.1.1, mt = 100
IP4.ROUTE[2]:                           dst = 192.168.10.0/24, nh = 0.0.0.0, mt = 100
IP4.DNS[1]:                             192.168.10.1
IP6.ADDRESS[1]:                         fe80::cc1a:6ba2:c43:1b58/64
IP6.GATEWAY:                            --
IP6.ROUTE[1]:                           dst = fe80::/64, nh = ::, mt = 100
IP6.ROUTE[2]:                           dst = ff00::/8, nh = ::, mt = 256, table=255

Check the radio:

$ nmcli radio
WIFI-HW  WIFI     WWAN-HW  WWAN    
enabled  enabled  enabled  enabled 

Show available WiFi SSID signals:

$ nmcli device wifi list
  SSID                           MODE  CHAN    RATE    SIGNAL     BARS  SECURITY  
   MY_WIRELESS_NET               Infra  11    54 Mbit/s  100     ▂▄▆█  WPA1 WPA2 
   ANOTHER_WIRELLESS_NET         Infra  52    54 Mbit/s  100     ▂▄▆█  WPA1 WPA2 
   YET_ANOTHER_WIR_NET           Infra  6     54 Mbit/s  55      ▂▄__   WPA2  

Even get the WiFi password:

$ nmcli device wifi show-password

Connect to WiFi:

$ nmcli device wifi connect MY_WIRELESS_NET password 8ehdxhre5kkhb6g6
Device 'wlp5s0' successfully activated with 'a7c8fbf5-3e7d-456c-921b-d739de0e3c79'.

Reference: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/7/html/networking_guide/sec-configuring_ip_networking_with_nmcli

ip

The ip command shows and manipulates routing, network devices, interfaces, and tunnels.

  • To show the IP addresses assigned to an interface on your server:
# ip address show 
  • To assign an IP to an interface, for example, enps03:
# ip address add 192.168.1.254/24 dev enps03
  • To delete an IP on an interface:
# ip address del 192.168.1.254/24 dev enps03
  • Alter the status of the interface by bringing the interface eth0 online:
# ip link set eth0 up
  • Alter the status of the interface by bringing the interface eth0 offline:
# ip link set eth0 down
  • Alter the status of the interface by changing the MTU of eth0:
# ip link set eth0 mtu 9000
  • Alter the status of the interface by enabling promiscuous mode for eth0:
# ip link set eth0 promisc on
  • Add a default route (for all addresses) via the local gateway 192.168.1.254 that can be reached on device eth0:
# ip route add default via 192.168.1.254 dev eth0
  • Add a route to 192.168.1.0/24 via the gateway at 192.168.1.254:
# ip route add 192.168.1.0/24 via 192.168.1.254
  • Add a route to 192.168.1.0/24 that can be reached on device eth0:
# ip route add 192.168.1.0/24 dev eth0
  • Delete the route for 192.168.1.0/24 via the gateway at 192.168.1.254:
# ip route delete 192.168.1.0/24 via 192.168.1.254
  • Display the route taken for IP 10.10.1.4:
# ip route get 10.10.1.4
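For scripts, newer iproute2 releases can emit JSON, and ip route get works on any address you know exists; a small sketch using the loopback interface so it is safe to run anywhere:

```shell
# Machine-readable interface data, then the route the kernel would pick
# for 127.0.0.1 (always via the loopback device lo).
ip -json address show lo
ip route get 127.0.0.1
```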

ss

The socket statistics program ss shows which ports are open, their status, and which local programs are attached to them.

Sockets:

$ sudo ss -ntrp
State      Recv-Q      Send-Q                 Local Address:Port              Peer Address:Port       Process   
...

Who is listening

$ sudo ss -lntup | less
Netid State  Recv-Q Send-Q                     Local Address:Port   Peer Address:Port  Process 
...
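Two more invocations I find useful; without sudo you still see the sockets, just not the owning process names:

```shell
# Listening TCP and UDP sockets without name resolution, then summary counters
# for the whole host (no root needed for either).
ss -lntu
ss -s
```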

Reference: https://www.man7.org/linux/man-pages/man8/ss.8.html

connmanctl

First found on the BeagleBoneBlack and PocketBeagle SBC devices, this was the way to manage WiFi, USB and Ethernet connections. It does not seem to be used on the BeagleBone AI [1].

The configuration files live in /var/lib/connman/ and the control program for changing them is connmanctl.

  • WiFi

Here is an example run of connmanctl to set up a new WiFi connection [2] called MyWifi on an access point/router.

$ sudo connmanctl⏎
connmanctl> scan wifi⏎
Scan completed for wifi
connmanctl> services⏎
       MyWifi                  wifi_1234567890_1234567890123456_managed_psk
connmanctl> agent on⏎
Agent registered
connmanctl> connect wifi_1234567890_1234567890123456_managed_psk⏎
Agent RequestInput wifi_1234567890_1234567890123456_managed_psk
       Passphrase = [ Type=psk, Requirement=mandatory, Alternates=[ WPS ] ]
       WPS = [ Type=wpspin, Requirement=alternate ]
Passphrase? MySecretPassphrase⏎
Connected wifi_1234567890_1234567890123456_managed_psk
connmanctl> quit⏎
$
  • Ethernet

Configure fixed IP address on wired ethernet port

Check settings before

$ sudo cat /var/lib/connman/ethernet_5051a9a6bafe_cable/settings
[ethernet_5051a9a6bafe_cable]
Name=Wired
AutoConnect=true
Modified=2023-03-13T22:49:38.241177Z
IPv4.method=manual
IPv4.DHCP.LastAddress=192.168.1.29
IPv6.method=auto
IPv6.privacy=disabled
IPv4.netmask_prefixlen=16
IPv4.local_address=192.168.1.99
IPv4.gateway=192.168.1.1
IPv6.DHCP.DUID=0001000126b5d99b5051a9a6bafe

Change fixed IP address from 99 to 9

#                                                                 ip address   mask        nameserver
$ sudo connmanctl config ethernet_5051a9a6bafe_cable ipv4 manual  192.168.1.9  255.255.0.0 192.168.1.1;

Check settings after

$ sudo cat /var/lib/connman/ethernet_5051a9a6bafe_cable/settings
[ethernet_5051a9a6bafe_cable]
Name=Wired
AutoConnect=true
Modified=2023-03-13T22:55:28.241177Z
IPv4.method=manual
IPv4.DHCP.LastAddress=192.168.1.29
IPv6.method=auto
IPv6.privacy=disabled
IPv4.netmask_prefixlen=16
IPv4.local_address=192.168.1.9
IPv4.gateway=192.168.1.1
IPv6.DHCP.DUID=0001000126b5d99b5051a9a6bafe

You can see all the devices here, and turn on Tethering (incoming connections):

$ sudo cat /var/lib/connman/settings
[global]
OfflineMode=false

[Wired]
Enable=true
Tethering=false

[WiFi]
Enable=true
Tethering=false

[Gadget]
Enable=false
Tethering=false

[P2P]
Enable=false
Tethering=false

[Bluetooth]
Enable=true
Tethering=false

For WiFi configuration, the BeagleBone AI-64 and BeagleBone Play (and later boards) moved from connman to systemd-networkd, so WiFi is now configured through wpa_supplicant-wlan0.conf

You can use

$  sudo wpa_cli -i wlan0…

Reference:

  1. bbai-tether-system
  2. https://gist.github.com/kylemanna/6930087

firewalld

The Server Setup section of this book covers how to set up a firewall to protect your system network.

networkd

The February 2023 Blog covers networkd manipulation using netplan.

tcpdump

Here is a super useful program for tracing what is happening on your network.

For instance, you can watch a certain port for activity. In this example we watch port 81 (which is a web server).

$ sudo tcpdump -i eth0 -a port 81
tcpdump: verbose output suppressed, use -v[v]... for full protocol decode
listening on eth0, link-type EN10MB (Ethernet), snapshot length 262144 bytes
16:04:37.901252 IP 192.168.1.4.65088 > www.example.com.81: Flags [SEW], seq 1319586290, win 65535, options [mss 1460,nop,wscale 6,nop,nop,TS val 582966262 ecr 0,sackOK,eol], length 0
16:04:37.901392 IP www.example.com.81 > 192.168.1.4.65088: Flags [S.E], seq 2476571858, ack 1319586291, win 65160, options [mss 1460,sackOK,TS val 1494679242 ecr 582966262,nop,wscale 7], length 0
16:04:37.901630 IP 192.168.1.4.65088 > www.example.com.81: Flags [.], ack 1, win 2058, options [nop,nop,TS val 582966264 ecr 1494679242], length 0
16:04:37.904531 IP 192.168.1.4.65088 > www.example.com.81: Flags [P.], seq 1:638, ack 1, win 2058, options [nop,nop,TS val 582966267 ecr 1494679242], length 637
16:04:37.904562 IP www.example.com.81 > 192.168.1.4.65088: Flags [.], ack 638, win 505, options [nop,nop,TS val 1494679245 ecr 582966267], length 0
16:04:37.905443 IP www.example.com.81 > 192.168.1.4.65088: Flags [P.], seq 1:257, ack 638, win 505, options [nop,nop,TS val 1494679246 ecr 582966267], length 256
16:04:37.905634 IP 192.168.1.4.65088 > www.example.com.81: Flags [.], ack 257, win 2054, options [nop,nop,TS val 582966268 ecr 1494679246], length 0
16:04:37.906243 IP 192.168.1.4.65088 > www.example.com.81: Flags [P.], seq 638:718, ack 257, win 2054, options [nop,nop,TS val 582966268 ecr 1494679246], length 80
16:04:37.906258 IP www.example.com.81 > 192.168.1.4.65088: Flags [.], ack 718, win 505, options [nop,nop,TS val 1494679247 ecr 582966268], length 0
16:04:37.906445 IP 192.168.1.4.65088 > www.example.com.81: Flags [.], seq 718:2166, ack 257, win 2054, options [nop,nop,TS val 582966268 ecr 1494679246], length 1448
16:04:37.906465 IP www.example.com.81 > 192.168.1.4.65088: Flags [.], ack 2166, win 501, options [nop,nop,TS val 1494679247 ecr 582966268], length 0
16:04:37.906508 IP 192.168.1.4.65088 > www.example.com.81: Flags [P.], seq 2166:5653, ack 257, win 2054, options [nop,nop,TS val 582966268 ecr 1494679246], length 3487
16:04:37.906532 IP www.example.com.81 > 192.168.1.4.65088: Flags [.], ack 5653, win 480, options [nop,nop,TS val 1494679247 ecr 582966268], length 0
16:04:37.906582 IP www.example.com.81 > 192.168.1.4.65088: Flags [P.], seq 257:528, ack 5653, win 480, options [nop,nop,TS val 1494679247 ecr 582966268], length 271
16:04:37.906701 IP 192.168.1.4.65088 > www.example.com.81: Flags [.], ack 528, win 2050, options [nop,nop,TS val 582966269 ecr 1494679247], length 0
16:04:37.906871 IP www.example.com.81 > 192.168.1.4.65088: Flags [P.], seq 528:732, ack 5653, win 501, options [nop,nop,TS val 1494679248 ecr 582966269], length 204
16:04:37.907007 IP 192.168.1.4.65088 > www.example.com.81: Flags [.], ack 732, win 2047, options [nop,nop,TS val 582966269 ecr 1494679248], length 0
^C
17 packets captured
17 packets received by filter
0 packets dropped by kernel
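A few more filter expressions worth knowing; the interface name and addresses are illustrative, and tcpdump requires root:

```shell
# Common tcpdump filters (interface and addresses illustrative; requires root).
sudo tcpdump -i eth0 -n host 192.168.1.4 and port 443   # one host, one port
sudo tcpdump -i eth0 -n icmp                            # pings only
sudo tcpdump -i eth0 -c 100 -w capture.pcap             # save 100 packets to a file
sudo tcpdump -r capture.pcap                            # read them back later
```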

arp-scan

arp-scan is a local network scanner capable of displaying known hosts by their IP address, MAC address, and manufacturer ID.

$ arp-scan --interface=eth0 192.168.0.0/24
Interface: eth0, datalink type: EN10MB (Ethernet)
Starting arp-scan 1.4 with 256 hosts (http://www.nta-monitor.com/tools/arp-scan/)
192.168.0.1     00:c0:9f:09:b8:db       QUANTA COMPUTER, INC.
192.168.0.3     00:02:b3:bb:66:98       Intel Corporation
192.168.0.5     00:02:a5:90:c3:e6       Compaq Computer Corporation
192.168.0.6     00:c0:9f:0b:91:d1       QUANTA COMPUTER, INC.
192.168.0.12    00:02:b3:46:0d:4c       Intel Corporation
192.168.0.13    00:02:a5:de:c2:17       Compaq Computer Corporation
192.168.0.87    00:0b:db:b2:fa:60       Dell ESG PCBA Test
192.168.0.90    00:02:b3:06:d7:9b       Intel Corporation
192.168.0.105   00:13:72:09:ad:76       Dell Inc.
192.168.0.153   00:10:db:26:4d:52       Juniper Networks, Inc.
192.168.0.191   00:01:e6:57:8b:68       Hewlett-Packard Company
192.168.0.251   00:04:27:6a:5d:a1       Cisco Systems, Inc.
192.168.0.196   00:30:c1:5e:58:7d       HEWLETT-PACKARD
13 packets received by filter, 0 packets dropped by kernel
Ending arp-scan: 256 hosts scanned in 3.386 seconds (75.61 hosts/sec). 13 responded

Reference: https://linux.die.net/man/1/arp-scan

vnstat

To display the amount of network traffic for each day of the last week:

$ vnstat -d 7

 eth01  /  daily

          day        rx      |     tx      |    total    |   avg. rate
     ------------------------+-------------+-------------+---------------
     2023-04-16     9.66 GiB |    3.69 GiB |   13.35 GiB |    1.33 Mbit/s
     2023-04-17    13.17 GiB |    6.03 GiB |   19.20 GiB |    1.91 Mbit/s
     2023-04-18    11.38 GiB |    5.31 GiB |   16.68 GiB |    1.66 Mbit/s
     2023-04-19    14.79 GiB |    5.15 GiB |   19.94 GiB |    1.98 Mbit/s
     2023-04-20    12.26 GiB |    2.40 GiB |   14.65 GiB |    1.46 Mbit/s
     2023-04-21    14.26 GiB |    3.42 GiB |   17.68 GiB |    1.76 Mbit/s
     2023-04-22    12.08 GiB |    1.64 GiB |   13.72 GiB |    1.98 Mbit/s
     ------------------------+-------------+-------------+---------------
     estimated     17.57 GiB |    2.39 GiB |   19.96 GiB |

For the last two months:

$ vnstat

                      rx      /      tx      /     total    /   estimated
 eth01:
       2023-03    334.51 GiB  /   94.16 GiB  /  428.67 GiB
       2023-04    242.84 GiB  /   57.62 GiB  /  300.47 GiB  /  415.63 GiB
     yesterday     14.26 GiB  /    3.42 GiB  /   17.68 GiB
         today     12.08 GiB  /    1.64 GiB  /   13.72 GiB  /   19.96 GiB

 tun01:
       2023-03           0 B  /   48.57 KiB  /   48.57 KiB
       2023-04           0 B  /   26.12 KiB  /   26.12 KiB  /     --     
     yesterday           0 B  /    1.08 KiB  /    1.08 KiB
         today           0 B  /       816 B  /       816 B  /     --     

 vlan101:
       2023-03    304.72 GiB  /   50.54 GiB  /  355.26 GiB
       2023-04    220.68 GiB  /   25.76 GiB  /  246.44 GiB  /  340.90 GiB
     yesterday     13.13 GiB  /    1.27 GiB  /   14.40 GiB
         today     11.28 GiB  /  972.62 MiB  /   12.23 GiB  /   17.79 GiB

 wlp0s31e4:
       2023-03           0 B  /         0 B  /         0 B
       2023-04           0 B  /         0 B  /         0 B  /     --     
     yesterday           0 B  /         0 B  /         0 B
         today           0 B  /         0 B  /         0 B  /     -- 

nethogs

To find out which program is demanding the most of your network right now, try nethogs.

NetHogs version 0.8.6-3

    PID USER     PROGRAM                               DEV         SENT      RECEIVED      
    828 monit    /usr/bin/monit                        eth01      10.038     215.140 KB/sec
      ? root     192.168.1.3:40120-192.168.1.50:80                 0.396       8.211 KB/sec
      ? root     192.168.1.3:40160-192.168.1.60:80                 0.358       8.167 KB/sec
   2925 www-da.. nginx: worker process                 eth01       2.048       1.875 KB/sec
1815892 root     /usr/bin/docker-proxy                 docker      0.284       1.661 KB/sec
1815663 root     python3                               eth01       0.299       0.332 KB/sec
   4287 bob      sshd: bob@pts/0                       eth01       0.506       0.116 KB/sec
      ? root     192.168.1.4:40000-192.168.1.2:57392               0.000       0.000 KB/sec
      ? root     192.168.1.4:45308-192.168.1.5:80                  0.000       0.000 KB/sec

  TOTAL                                                           13.929     235.501 KB/sec

Conclusion

This was a short list, but I hope it provides an introduction to managing your network from the command line.

Hope this helps,
-- Don

2023 - June


Virtual Machines

A Virtual Machine (VM) packages a whole operating system, with its applications, so it can be moved from one physical machine to another. This makes it possible to restore service quickly and correctly after hardware, location, or network problems.

The following descriptions cover a VM on Red Hat/AlmaLinux/RockyLinux/CentOS 9 using the QEMU virtualizer and the Cockpit management web console. We also set up the VM network for external access and share files with the physical host's NFS mount. [1]

  1. NFS client

Install Prerequisites

Install the virtualization hypervisor packages.

  • Redhat:
$ sudo dnf install qemu-kvm libvirt virt-install virt-viewer
  • Start the virtualization services:
$ for drv in qemu network nodedev nwfilter secret storage interface; do sudo systemctl start virt${drv}d{,-ro,-admin}.socket; done

Reference: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html/configuring_and_managing_virtualization/assembly_enabling-virtualization-in-rhel-9_configuring-and-managing-virtualization#proc_enabling-virtualization-in-rhel-9_assembly_enabling-virtualization-in-rhel-9

  • Debian:
$ sudo apt install qemu-kvm libvirt-daemon  bridge-utils virtinst libvirt-daemon-system

Load the network module into the running kernel

$ sudo modprobe vhost_net
$ lsmod |grep vhost
vhost_net              36864  0
tun                    61440  1 vhost_net
vhost                  57344  1 vhost_net
vhost_iotlb            16384  1 vhost
tap                    28672  1 vhost_net

Make it load at boot time by adding this line

File: /etc/modules

vhost_net

Optional Tools:

  • libguestfs is a set of tools for accessing and modifying virtual machine (VM) disk images. You can use this for viewing and editing files inside guests, scripting changes to VMs, monitoring disk used/free statistics, creating guests, P2V, V2V, performing backups, cloning VMs, building VMs, formatting disks, resizing disks, and much more.
$ sudo apt install  libguestfs-tools
  • The libosinfo project comprises three parts

  • A database of metadata about operating systems, hypervisors, virtual hardware, and more
  • A GObject-based library API for querying information from the database
  • Command line tools for querying and extracting information from the database

$ sudo apt install libosinfo-bin
  • qemu-system and virt-manager allow command line and graphical starting, stopping, configuring qemu-kvm systems
$ sudo apt install  libguestfs-tools libosinfo-bin  qemu-system virt-manager
  • Bridge definitions

Install bridge-utils

$ sudo apt install bridge-utils

Add an iface br0 inet dhcp stanza, and assign bridge_ports to an Ethernet interface (probably USB)

File: /etc/network/interfaces

# This file describes the network interfaces available on your system
# and how to activate them. For more information, see interfaces(5).

source /etc/network/interfaces.d/*

# The loopback network interface
auto lo
iface lo inet loopback

# USB Ethernet
auto enxc87f54384756
iface enxc87f54384756 inet manual

# Bridge setup
auto br0
iface br0 inet dhcp
    bridge_ports enxc87f54384756

$ ip a
...
3: enxc87f54384756: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master br0 state UP group default qlen 1000
    link/ether c8:7f:54:93:56:44 brd ff:ff:ff:ff:ff:ff
...
6: br0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 86:32:53:56:4a:fa brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.2/24 brd 192.168.1.1 scope global dynamic br0
       valid_lft 86082sec preferred_lft 86082sec
    inet6 fe80::8432:53ff:fe56:4edf/64 scope link
       valid_lft forever preferred_lft forever


Reference:

  • https://wiki.debian.org/BridgeNetworkConnections
  • https://wiki.libvirt.org/Networking.html#debian-ubuntu-bridging

Note: AppArmor and SELinux may require additional permissions when VM disk images are stored outside the default locations (for example under /home or /local). The local AppArmor override file is:

/etc/apparmor.d/local/abstractions/libvirt-qemu

Set the user and group

File: /etc/libvirt/qemu.conf

user = "libvirt-qemu"
group = "libvirt-qemu"

Then reboot.



Install Cockpit

Cockpit is a web based system allowing full management of systems, including virtual qemu systems.

Install packages cockpit and cockpit-machines.

Note: on Debian, install postfix first, or else the exim4 mail server will be installed with cockpit.

$ sudo dnf install cockpit cockpit-machines


Start Cockpit and libvirtd:

$ sudo systemctl enable --now libvirtd
$ sudo systemctl start libvirtd
$ sudo systemctl enable --now cockpit.socket

To log in to Cockpit, open your web browser to localhost:9090 and enter your Linux username and password.

Reference: https://www.redhat.com/sysadmin/intro-cockpit

Virtual machines in Cockpit

Click on Virtual machines to open the virtual machine panel.

If you have existing virtual machines with libvirt, Cockpit detects them. Should Cockpit fail to detect existing virtual machines, you can import them by clicking the Import VM button.

Cockpit knows the virtual machine's state and can start or stop it. In the pop-up menu on the right, you can clone, rename, and delete the virtual machine.

Create storage pools with Cockpit

A storage pool is space that you designate as being available to store virtual machine images. You can set a network location, an iSCSI target, or a filesystem.

In Cockpit, to create a storage pool, click the Storage pool button at the top of the virtual machine panel.

View storage pools

$ sudo virsh pool-list --all --details
 Name   State     Autostart   Persistent   Capacity   Allocation   Available
------------------------------------------------------------------------------
 data   running   yes         yes          1.27 TiB   15.45 GiB    1.25 TiB


If no storage pool is created, the default /var/lib/libvirt/images will be used.

Reference: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html/configuring_and_managing_virtualization/managing-storage-for-virtual-machines_configuring-and-managing-virtualization#assembly_managing-virtual-machine-storage-pools-using-the-cli_managing-storage-for-virtual-machines

Create a new virtual machine

To create a new Virtual Machine, click the Create VM button on the right side of the virtual machine panel.

You can download a recent operating system version from a drop-down list, choose an ISO image on your local drive, or have the virtual machine boot from a Preboot Execution Environment (PXE) server.

Reference: <https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html/configuring_and_managing_virtualization/assembly_creating-virtual-machines_configuring-and-managing-virtualization>

Start it:

virsh --connect qemu:///system start almalinux9-2023-10-6


Restart install:

$ sudo virt-install --connect qemu:///system --quiet --os-variant almalinux9 --reinstall almalinux9-2023-10-6 --wait -1 --noautoconsole --install os=almalinux9

### Examine a virtual machine

Check that the host is correctly configured to run virtual machines:

$ virt-host-validate

  QEMU: Checking for hardware virtualization                 : PASS
  QEMU: Checking if device /dev/kvm exists                   : PASS
  QEMU: Checking if device /dev/kvm is accessible            : PASS
  QEMU: Checking if device /dev/vhost-net exists             : PASS
  QEMU: Checking if device /dev/net/tun exists               : PASS
  QEMU: Checking for cgroup 'cpu' controller support         : PASS
  QEMU: Checking for cgroup 'cpuacct' controller support     : PASS
  QEMU: Checking for cgroup 'cpuset' controller support      : PASS
  QEMU: Checking for cgroup 'memory' controller support      : PASS
  QEMU: Checking for cgroup 'devices' controller support     : WARN (Enable 'devices' in kernel Kconfig file or mount/enable cgroup controller in your system)
  QEMU: Checking for cgroup 'blkio' controller support       : PASS
  QEMU: Checking for device assignment IOMMU support         : PASS
  QEMU: Checking if IOMMU is enabled by kernel               : WARN (IOMMU appears to be disabled in kernel. Add intel_iommu=on to kernel cmdline arguments)
  QEMU: Checking for secure guest support                    : WARN (Unknown if this platform has Secure Guest support)


Start a VM

$ sudo virsh start demo-guest1


Stop a VM

$ sudo virsh shutdown demo-guest1
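Start and stop also scale to all guests, e.g. before host maintenance. A minimal sketch, assuming the usual virsh CLI (shutdown_all is a hypothetical helper name; run as root):

```shell
#!/bin/sh
# Ask every running guest to shut down gracefully.
shutdown_all() {
  virsh list --name | while read -r vm; do
    [ -n "$vm" ] && virsh shutdown "$vm"
  done
}
```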


VM Diagnostics: <https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html/configuring_and_managing_virtualization/diagnosing-virtual-machine-problems_configuring-and-managing-virtualization>

### Network on a virtual machine

#### NAT VM Network (VM network *default*)

By *default*, a newly created VM connects to a NAT-type network that uses virbr0, the default virtual bridge on the host. This ensures that the VM can use the host’s network interface controller (NIC) for connecting to outside networks, but the VM is *not reachable from external systems*.

> See file /etc/libvirt/network/default.xml


``` mermaid
graph TD;
        Router<--->eth0;
        PhysicalHost<-->eth0;
        NAT-virbr0-->eth0;
        VM-->NAT-virbr0;
```

#### Bridged VM Network (Physical Host bridge)

If you require a VM to appear on the same external network as the hypervisor, you must use bridged mode instead. To do so, attach the VM to a bridge device connected to the hypervisor’s physical network device.

See file /etc/nmstate/50-create-bridge.yml below

``` mermaid
graph TD;
        Router<-->eth1;
        eth1<-->bridge-virbr0;
        bridge-virbr0<-->VM;
        Router<-->eth0;
        PhysicalHost<-->eth0;
        eth0<-->vlan1;
        vlan1<-->NFS;
```

##### RedHat Way [1]

  • Create a nmstate configuration file on the physical host.

Install nmstate, if not already done:

$ sudo dnf install nmstate

In this example the host IP address will be fixed at 192.168.1.12, and the guest (VM) will pick up a DHCP address from the local DHCP server. Of course you should change the router address (192.168.1.1) and maybe the DNS resolvers (1.1.1.1 and 1.0.0.1). The port (eno1) is the machine's onboard ethernet port. Port eno1 will no longer have an IP address; instead, the bridge interface owns the IP address.

The bridge should be created before any VLAN so that the bridge carries the default route. Otherwise the VLAN becomes the first route, blocking outside access.

File: /etc/nmstate/50-create-bridge.yml

# https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html-single/configuring_and_managing_networking/index#proc_configuring-a-network-bridge-by-using-nmstatectl_configuring-a-network-bridge
# ---
interfaces:
- name: virbr0
  type: linux-bridge
  ipv4:
    enabled: true
    address:
    - ip: 192.168.1.12
      prefix-length: 24
    dhcp: false
  ipv6:
    enabled: false
  bridge:
    options:
      stp:
        enabled: true
      vlan-protocol: 802.1q
    port:
    - name: eno1

It is important to disable the default virbr0 network interface within Cockpit/virsh.

$ sudo virsh net-destroy default
Network default stopped
$ sudo virsh net-autostart --disable default
Network default unmarked as autostarted

$ sudo virsh net-list --all
 Name      State      Autostart   Persistent
----------------------------------------------
 default   inactive   no          yes

Hint: net-destroy only stops the running process ;-)

Apply the bridge network config and fix any errors:

$ sudo nmstatectl apply /etc/nmstate/50-create-bridge.yml

IP address check on physical machine

$ ip a show virbr0
5: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether 25:54:01:f3:2a:2e brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.12/24 brd 192.168.1.255 scope global noprefixroute virbr0
       valid_lft forever preferred_lft forever
$ ip a show eno1
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master virbr0 state UP group default qlen 1000
    link/ether 3c:49:7a:b9:e7:6f brd ff:ff:ff:ff:ff:ff
    altname enp0s31f6

Notice virbr0 is the master of eno1.

Make the changes permanent

$ sudo systemctl restart nmstate

Restarting the nmstate service renames the file 50-create-bridge.yml to 50-create-bridge.applied. If changes are needed later, rename the file back to 50-create-bridge.yml before restarting the nmstate service.
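The rename-back step can be scripted. A minimal sketch, assuming the configs live in /etc/nmstate (rearm_nmstate is a hypothetical helper name):

```shell
#!/bin/sh
# Rename every *.applied config back to *.yml so the next
# `systemctl restart nmstate` re-applies it.
rearm_nmstate() {
  dir=${1:-/etc/nmstate}
  for f in "$dir"/*.applied; do
    [ -e "$f" ] || continue      # no *.applied files: glob stayed literal
    mv -- "$f" "${f%.applied}.yml"
  done
}
# afterwards: sudo systemctl restart nmstate
```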

  • The VM should use virbr0 as its network interface. Using Cockpit, add a Bridged network to the VM.

![libvirt-bridge.png](libvirt-bridge.png)

-> OR <- define it using virsh:

$ sudo virsh edit vm_machine
<domain type='kvm'>
~
  <devices>
~
    <interface type='bridge'>
      <mac address='52:54:00:0b:4b:a8'/>
      <source bridge='virbr0'/>
      <model type='virtio'/>
    </interface>
~

Afterwards it will look like this:

$ sudo virsh domiflist vm_machine
 Interface   Type     Source   Model    MAC
-----------------------------------------------------------
 vnet5       bridge   virbr0   virtio   22:53:06:f9:d2:e1

vnet5 is automatically created, with virbr0 as its master. Adding a vnet to a bridge interface is the virtual equivalent of plugging the VM into a switch.

25: vnet5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master virbr0 state UNKNOWN group default qlen 1000
    link/ether fe:54:00:f7:d2:40 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc54:ff:fef7:d240/64 scope link 
       valid_lft forever preferred_lft forever

Physical network interface (eno1) -> bridge (virbr0) <- Virtual network interface (vnet5)

$ bridge link show virbr0
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master virbr0 state forwarding priority 32 cost 100 
25: vnet5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master virbr0 state forwarding priority 32 cost 100 
$ ip link show master virbr0
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master virbr0 state UP mode DEFAULT group default qlen 1000
    link/ether 1c:69:7a:09:e7:61 brd ff:ff:ff:ff:ff:ff
    altname enp0s31f6
25: vnet5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master virbr0 state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether fe:54:00:f7:d2:40 brd ff:ff:ff:ff:ff:ff
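A monitoring script can confirm that both the physical port and the VM's vnet are enslaved to the bridge by parsing this output. A minimal sketch (bridge_members is a hypothetical helper name, and the here-doc stands in for live `ip link show master virbr0` output):

```shell
#!/bin/sh
# Print the name of each interface enslaved to the bridge; continuation
# lines (link/ether, altname) do not start with an index and are skipped.
bridge_members() {
  awk -F': ' '/^[0-9]+:/ {print $2}'
}

bridge_members <<'EOF'
2: eno1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master virbr0 state UP mode DEFAULT group default qlen 1000
    link/ether 1c:69:7a:09:e7:61 brd ff:ff:ff:ff:ff:ff
25: vnet5: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master virbr0 state UNKNOWN mode DEFAULT group default qlen 1000
EOF
# prints:
# eno1
# vnet5
```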

##### Debian Way

In this host, we have a USB-A to Ethernet dongle. Plugging it in created a network device called enxc87f54935633.

This is important if you need to preserve your existing ethernet connection while configuring a new bridge.

In this example the host IP address will be fixed at 192.168.1.10, and the guest (VM) will pick up a DHCP address from the local DHCP server. Of course you should change the router address (192.168.1.1) and maybe the DNS resolvers (1.1.1.1 and 1.0.0.1). The ethernet device (enxc87f54935633) is a USB-A ethernet port. It will no longer have an IP address; instead, the bridge interface owns the IP address.

File: /etc/netplan/60-bridge-init.yaml

# sudo apt install bridge-utils -y
# USB-A -> Ethernet: enxc87f54935633
network:
  version: 2
  renderer: networkd

  ethernets:
    enxc87f54935633:
      dhcp4: false 
      dhcp6: false 

  bridges:
    virbr0:
      interfaces: [enxc87f54935633]
      addresses: [192.168.1.10/24]
      routes:
      - to: default
        via: 192.168.1.1
        metric: 100
        on-link: true
      mtu: 1500
      nameservers:
        addresses: [1.1.1.1]
      parameters:
        stp: true
        forward-delay: 4
      dhcp4: no
      dhcp6: no

It is important to disable the default virbr0 network interface within Cockpit/virsh.

$ sudo virsh net-destroy default
Network default stopped
$ sudo virsh net-autostart --disable default
Network default unmarked as autostarted

$ sudo virsh net-list --all
 Name      State      Autostart   Persistent
----------------------------------------------
 default   inactive   no          yes

Hint: net-destroy only stops the running process ;-)

Apply the bridge network config and fix any errors. Note that netplan apply takes no file argument; it applies every file under /etc/netplan. Using netplan try instead gives an automatic revert if connectivity is lost and the change is not confirmed.

$ sudo netplan apply
...

Check the interface

$ ip a show virbr0
28: virbr0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue state UP group default qlen 1000
    link/ether ee:a7:6e:d0:3b:53 brd ff:ff:ff:ff:ff:ff
    inet 10.123.50.63/24 brd 10.123.50.255 scope global virbr0
       valid_lft forever preferred_lft forever
    inet6 fe80::eca7:6eff:fed0:3b53/64 scope link 
       valid_lft forever preferred_lft forever
$ ip a show enxc87f54935633
27: enxc87f54935633: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master virbr0 state UP group default qlen 1000
    link/ether c8:7f:54:93:56:33 brd ff:ff:ff:ff:ff:ff

Notice virbr0 is the master of enxc87f54935633.

  • The VM should use virbr0 as its network interface. Using Cockpit, add a Bridged network to the VM.

![libvirt-bridge.png](libvirt-bridge.png)

-> OR <- define it using virsh:

$ sudo virsh edit vm_machine
<domain type='kvm'>
~
  <devices>
~
    <interface type='bridge'>
      <mac address='52:54:00:0b:4b:a8'/>
      <source bridge='virbr0'/>
      <model type='virtio'/>
    </interface>
~

Afterwards it will look like this:

$ sudo virsh domiflist vm-machine
 Interface   Type     Source   Model    MAC
-----------------------------------------------------------
 vnet13      bridge   virbr0   virtio   51:34:07:0b:4a:a1

vnet13 is automatically created, with virbr0 as its master.

$ ip a show vnet13
30: vnet13: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master virbr0 state UNKNOWN group default qlen 1000
    link/ether fe:54:00:0b:4b:a8 brd ff:ff:ff:ff:ff:ff
    inet6 fe80::fc54:ff:fe0b:4ba8/64 scope link 
       valid_lft forever preferred_lft forever

Physical network interface (enxc87f54935633) -> bridge (virbr0) <- Virtual network interface (vnet13)

$ sudo brctl show virbr0
bridge name	bridge id		STP enabled	interfaces
virbr0		8000.eea76ed03b53	yes		enxc87f54935633
                                                        vnet13
$ bridge link show virbr0
27: enxc87f54935633: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master virbr0 state forwarding priority 32 cost 4 
30: vnet13: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 master virbr0 state forwarding priority 32 cost 100 
$ ip link show master virbr0
27: enxc87f54935633: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel master virbr0 state UP mode DEFAULT group default qlen 1000
    link/ether c8:7f:54:93:56:33 brd ff:ff:ff:ff:ff:ff
30: vnet13: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc noqueue master virbr0 state UNKNOWN mode DEFAULT group default qlen 1000
    link/ether fe:54:00:0b:4b:a8 brd ff:ff:ff:ff:ff:ff

Reference:

  1. https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html/configuring_and_managing_networking/configuring-a-network-bridge_configuring-and-managing-networking#proc_configuring-a-network-bridge-by-using-nmstatectl_configuring-a-network-bridge

### Sharing files with physical and virtual hosts

  • Make a directory on the VM
$ sudo mkdir /data
  • In Cockpit on the physical host > Shared directories, add this directory to the Source path, and create a mount tag; i.e.: data
Source path    Mount tag	
-------------- ---------
/data/         data
  • In the VM update fstab

File: /etc/fstab

~
# virt share :
# mount_tag /mnt/mount/path virtiofs rw,noatime,_netdev 0 0
data /data virtiofs rw,noatime,_netdev 0 0
~

Mount it

$ sudo mount /data

Now the shared filesystem will be mounted upon every VM start.

Alternative: Manual mount

#mount -t virtiofs [mount tag] [mount point]
sudo mount -t virtiofs data /data
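The fstab line follows a fixed mount_tag/mount-point pattern, so a provisioning script can generate it. A minimal sketch (virtiofs_fstab is a hypothetical helper name):

```shell
#!/bin/sh
# Emit an fstab line for a virtiofs share, matching the format used above.
# usage: virtiofs_fstab <mount_tag> <mount_point>
virtiofs_fstab() {
  printf '%s %s virtiofs rw,noatime,_netdev 0 0\n' "$1" "$2"
}

virtiofs_fstab data /data
# → data /data virtiofs rw,noatime,_netdev 0 0
```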

Reference: https://access.redhat.com/documentation/en-us/red_hat_enterprise_linux/9/html/configuring_and_managing_virtualization/sharing-files-between-the-host-and-its-virtual-machines_configuring-and-managing-virtualization#proc_using-the-web-console-to-share-files-between-the-host-and-its-virtual-machines-using-virtiofs_sharing-files-between-the-host-and-its-virtual-machines-using-virtio-fs

### Virtual Storage Pools on NFS

The advantages of virtual machine storage pools on NFS are:

  • Raid protection on NFS
  • Ability to move VM from one host to another without copying data files
  • Hardware upgrades, failures and network outages are easier to recover from

To support multiple hosts, the definition files need to be copied and updated on each host in advance:

  1. The VM definition file, located in /etc/libvirt/qemu/<VM Name>.xml
  2. The storage pool definition file, located in /etc/libvirt/storage/<storage pool name>.xml
  3. A virtual bridge definition file, located in /etc/netplan for Ubuntu or /etc/nmstate for RedHat

Define a storage pool at the host level and it will mount the NFS volume when the libvirtd systemd service starts.

The Source is the host and directory exported by the NFS server.

The Target is your local NFS client directory to mount it on.

The Name is what you use with virsh/Cockpit to add Storage Volumes (logical disks) to the VM.

File: /etc/libvirt/storage/my_vm01.xml

<!--
WARNING: THIS IS AN AUTO-GENERATED FILE. CHANGES TO IT ARE LIKELY TO BE
OVERWRITTEN AND LOST. Changes to this xml configuration should be made using:
  virsh pool-edit my_vm_pool
or other application using the libvirt API.
-->

<pool type='netfs'>
  <name>my_vm_pool</name>
  <uuid>7c847772-0565-4d26-a3bc-46e4634fb84f</uuid>
  <capacity unit='bytes'>0</capacity>
  <allocation unit='bytes'>0</allocation>
  <available unit='bytes'>0</available>
  <source>
    <host name='192.168.1.65'/>
    <dir path='/mnt/vol032/vm_data/'/>
    <format type='auto'/>
  </source>
  <target>
    <path>/vm_data/</path>
  </target>
</pool>
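Cockpit can create the pool, but a pool XML like the one above can also be loaded by hand. A minimal sketch with virsh (define_pool is a hypothetical helper name; run as root):

```shell
#!/bin/sh
# Register the pool XML with libvirt, start it, and mark it for autostart.
define_pool() {
  xml=$1 name=$2
  virsh pool-define "$xml" &&
  virsh pool-start "$name" &&
  virsh pool-autostart "$name"
}
# e.g.: define_pool my_vm01.xml my_vm_pool
```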

Copy or create your Storage Volumes to the dir path on the NFS server, then add them via virsh/Cockpit.

# Create Volume
#
sudo virsh vol-create-as my_vm_pool test_vol02.qcow2 2G
#    my_vm_pool: The pool name.
#    test_vol02.qcow2: The name of the volume.
#    2G: The storage capacity of the volume.
#
# List volumes
#
$ sudo virsh vol-list --pool my_vm_pool --details
 Name               Path                        Type   Capacity    Allocation
-----------------------------------------------------------------------------
 test_vol02.qcow2   /vm_data/test_vol02.qcow2   file   2.00 GiB    0.65 GiB

### Clone Virtual Machine

1st clone:

  • Create a vlan on the 1st ethernet adapter (ensure the network switch supports VLANs) [1]
  • Add NFS mount, if not already there [2]
  • Create nmstate bridge virbr0 on 2nd ethernet adapter (can use USB/Ethernet adapter) [3]
  • Re-create 1st adapter as fixed address in nmstate [4]
  • Create storage pool as type NFS, using Cockpit [5]
  • Import VM, using storage pool data as nfs, using Cockpit
  • Delete default VM network using Cockpit
  • Create VM network bridge on VM (use host's virbr0), using Cockpit
  • Change /etc/nmstate/*.applied to *.yml, reboot to get route working
  • Change owner of Storage Volume to libvirt-qemu for Debian or qemu for RedHat on NAS

It really helps to use a secondary network adapter: when the bridge is created, routing on the main IP address is lost and you would need console access to restore it. A second adapter also lets one adapter carry the NFS traffic while the other carries the VM traffic.

  1. vlan

  2. nfs

  3. Network on a virtual machine

  4. Fixed IP address; servers should have this

File: /etc/nmstate/40-create-eno1.yml

---
interfaces:
- name: eno1
  type: ethernet
  state: up
  ipv4:
    enabled: true
    address:
    - ip: 192.168.1.12
      prefix-length: 24
    dhcp: false
  ipv6:
    enabled: false

routes:
  config:
  - destination: 0.0.0.0/0
    next-hop-interface: eno1
    next-hop-address: 192.168.1.1

dns-resolver:
  config:
    search:
    - example.com
    server:
    - 1.1.1.1
    - 8.8.8.8

Apply change:

$ sudo nmstatectl apply /etc/nmstate/40-create-eno1.yml

  5. VM NFS Storage Pool

#### Copy Data Files

Now the easy part.

  1. Stop the VM on the old host, using Cockpit. Remember to disable autostart!
  2. Copy the Storage Pool data file(s) to your NFS mount, if not already done.
  3. Start your VM on the new host, and enjoy!

In the future you can just stop the VM on the old host, then start it on the new host, assuming they are both on NFS.
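The two-host hand-off can be sketched as a small helper, assuming both hosts import the same NFS pool and passwordless ssh/sudo to the old host (move_vm is a hypothetical helper name):

```shell
#!/bin/sh
# Move a VM whose disks live on the shared NFS pool: stop it on the old
# host and disable autostart there, then start it on this host.
move_vm() {
  vm=$1 old_host=$2
  ssh "$old_host" "sudo virsh shutdown $vm && sudo virsh autostart --disable $vm"
  sudo virsh start "$vm"
}
# e.g.: move_vm almalinux9-2023-10-6 oldhost.example.com
```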

Remember to disable autostart on the Storage Pool and VM on the old host! If you see 'Failed to get "write" lock, Is another process using the image [/data_vm/data01]?', make sure the other host has stopped the VM and storage pool.

$ sudo virsh pool-autostart --disable my_vm01
Pool my_vm01 unmarked as autostarted

#### If the IP Address changed in the VM

  • Copy new SSH keys with new IP address (delete old on remote)
# SSH Copy
$ ssh-copy-id <remote IP>

# SSH Delete
$ ssh <remote IP> grep -n <remote IP> ~/.ssh/known_hosts
2:192.168.1.4 ssh-rsa 
# delete line number 2 in file ~/.ssh/known_hosts on host <remote IP>,
# or remove it directly with: ssh-keygen -R <remote IP>
  • Edit apache configuration to reflect new IP address

    • File: /etc/httpd/conf/httpd.conf
  • Edit Nextcloud configuration, to add IP address to list of trusted hosts

    • File: /var/www/nextcloud/config/config.php

Restart apache to pick up changes: sudo systemctl restart httpd

### Configuration Files

/etc/libvirt
├── hooks
├── libvirt-admin.conf
├── libvirt.conf
├── libvirtd.conf
├── libxl.conf
├── libxl-lockd.conf
├── libxl-sanlock.conf
├── lxc.conf
├── nwfilter
│   ├── allow-arp.xml
│   ├── allow-dhcp-server.xml
│   ├── allow-dhcpv6-server.xml
│   ├── allow-dhcpv6.xml
│   ├── allow-dhcp.xml
│   ├── allow-incoming-ipv4.xml
│   ├── allow-incoming-ipv6.xml
│   ├── allow-ipv4.xml
│   ├── allow-ipv6.xml
│   ├── clean-traffic-gateway.xml
│   ├── clean-traffic.xml
│   ├── no-arp-ip-spoofing.xml
│   ├── no-arp-mac-spoofing.xml
│   ├── no-arp-spoofing.xml
│   ├── no-ip-multicast.xml
│   ├── no-ip-spoofing.xml
│   ├── no-ipv6-multicast.xml
│   ├── no-ipv6-spoofing.xml
│   ├── no-mac-broadcast.xml
│   ├── no-mac-spoofing.xml
│   ├── no-other-l2-traffic.xml
│   ├── no-other-rarp-traffic.xml
│   ├── qemu-announce-self-rarp.xml
│   └── qemu-announce-self.xml
├── qemu
│   ├── autostart
│   │   └── my_vm01.xml -> /etc/libvirt/qemu/my_vm01.xml
│   ├── my_vm01.xml
│   ├── my_vm02.xml
│   ├── networks
│   │   ├── autostart
│   │   └── default.xml
│   └── test_vm42.xml
├── qemu.conf
├── qemu-lockd.conf
├── qemu-sanlock.conf
├── secrets
├── storage
│   ├── autostart
│   │   └── my_vm01.xml -> /etc/libvirt/storage/my_vm01.xml
│   ├── my_vm01.xml
│   └── my_vm02.xml
├── virtlockd.conf
└── virtlogd.conf

9 directories, 45 files

# 2023 - August


## Watch Notification System - v 2023.09.5

  • Finally got my alerts working, so when a host goes haywire I get an alert on the Phone and Cloud, with E-Mail as backup.

Hope this helps, -- Don

alert.sh and log.sh are meant to run a few times per hour and send alerts if watch thresholds are exceeded.

  • alert.sh - Usually runs on one host and monitors other hosts, using a normal user ssh tunnel.
  • log.sh - Usually runs on each host as root.

When any new file is uploaded into the new alert user space, an entry will be written to the Conversation with the name and link to that file.

On your Phone you can install NextCloud Talk, log in as the new alert user, and receive notifications. Bonus: Your watch will alert you too!

On your Phone you can install NextCloud Sync, log in as the new alert user and read notification files.

  • Here is the file structure:
/home/bob/watch
.
├── alert
│   ├── vm5.util.20230822112005.uploaded
│   └── vm7.util.20230822130010.uploaded
├── alert.sh
├── db
│   ├── db-18.1.40
│   ├── oracle_berkely_DB-V997917-01.zip
│   └── readme
├── deploy.sh
├── df.sh
├── geturl.pl
├── log
│   ├── cloud.log.0
│   ├── cloud.log.1.gz
├── log.sh
├── mail.sh
├── nbs
│   ├── db.c
│   ├── db.o
│   ├── INSTALL
│   ├── Makefile
│   └── ...
├── nbs.tar
├── readme
├── retail
│   ├── Makefile
│   ├── retail
│   ├── retail.c
│   └── ...
├── retail.tar
├── savelog.sh
├── status
│   ├── apache-error
│   ├── apache-error.db.cnt
│   ├── apache-error.db.idx
│   ├── apache-error.db.rec
│   ├── apache-error.db.upd
│   ├── apache-error_new.txt
├── sync.sh
├── util.sh
└── watch.sh


/home/bob/.config
├── watch
│   ├── hosts.txt
│   ├── config.txt
│   └── df.vm7

### Installation

  • Download

https://github.com/dfcsoftware/watch

Copy all files to ~/watch, or whatever directory you like; just change this document's references to /home/bob to your directory.

Delete all lines in the files /etc/issue and /etc/issue.net; otherwise their banner text shows up in the ssh sessions the function monitors open, causing alerts every time.

Files: /etc/issue, /etc/issue.net

$ sudo -i
$ > /etc/issue
$ > /etc/issue.net

### Configuration

Create config directory structure:

$ mkdir -p ~/.config/watch

Create config file:

File: ~/.config/watch/config.txt

export CLOUD_USER=<nextcloud user>
export CLOUD_PASS="<nextcloud password>"
export CLOUD_DIR=alert
export CLOUD_SERVER="https://www.example.com/nextcloud"
export CLOUD_LOG=/home/bob/watch/log/cloud.log
export LOCAL_DIR=/home/bob/watch
export SSH_USER=bob
export LD_LIBRARY_PATH=/usr/local/BerkeleyDB.18.1/lib:$LD_LIBRARY_PATH
  • Make sure the LOCAL_DIR/alert exists
$ mkdir -p ${LOCAL_DIR}/alert
  • Make sure CLOUD_LOG is writeable
$ touch ${CLOUD_LOG}
  • Create a hosts.txt list of hosts to monitor

File: ~/.config/watch/hosts.txt

# Host   ssh    Remote   Remote
#        Port   Script   Home
# ------ ------ -------- -----------------------
vm1      223    0        /home/bob
vm2      224    1        /home/data/bob
#
# Remote Script: 1=run the monitor script that is on the remote machine
#                0=run the monitor script locally, through an ssh tunnel
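alert.sh reads this file; the parsing can be sketched as follows (read_hosts is a hypothetical stand-in for the alert.sh loop, and the here-doc mirrors the file above):

```shell
#!/bin/sh
# Parse the hosts.txt format, skipping comments and blank lines.
read_hosts() {
  while read -r host port remote_script remote_home; do
    case $host in ''|\#*) continue ;; esac
    echo "monitor $host on port $port (remote_script=$remote_script, home=$remote_home)"
  done
}

read_hosts <<'EOF'
# Host   ssh    Remote   Remote
#        Port   Script   Home
vm1      223    0        /home/bob
vm2      224    1        /home/data/bob
EOF
# prints one "monitor ..." line per host
```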

### Schedule alert.sh in cron

i.e.: every 20 minutes

File: /etc/cron.d/alert

# Run the alert analysis
SHELL=/bin/bash
PATH=/sbin:/bin:/usr/sbin:/usr/bin
MAILTO="bob@bob.com"
*/20 * * * * root /home/bob/watch/alert.sh

### Copy ssh keys to remote

If remote monitoring is desired:

  • Generate and copy the linux ssh keys.
$ ssh-keygen
$ ssh-copy-id <remote hosts>

### Alert Functions

There is a separate file for each functional alert, and sometimes for each host.

Remote Hosts need their own config file(s)

  • On the Remote Host(s):
$ mkdir -p ~/.config/watch

Example of the df functional monitor. Each script documents its own config file.

File: ~/.config/watch/df.<hostname> (e.g. df.vm7)

Usage-Percent-Limit   File-System        Mount-Point
34                    "/dev/mmcblk1p2"   "/"
38                    "/dev/mmcblk1p1"   "/boot/firmware"
16                    "/dev/md0"         "/mnt/raid1"
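The df check can be sketched as follows, comparing current df usage against the per-mount limits in the file above (check_df is a hypothetical stand-in for the real df.sh logic):

```shell
#!/bin/sh
# Alert when a filesystem's usage exceeds its configured limit.
# Config format per line: limit "device" "mountpoint" (header on line 1).
check_df() {
  tail -n +2 "$1" | while read -r limit dev mnt; do
    [ -n "$limit" ] || continue
    mnt=${mnt%\"}; mnt=${mnt#\"}                  # strip surrounding quotes
    pct=$(df -P "$mnt" 2>/dev/null | awk 'NR==2 {sub(/%/,"",$5); print $5}')
    if [ -n "$pct" ] && [ "$pct" -gt "$limit" ]; then
      echo "ALERT: $mnt at ${pct}% (limit ${limit}%)"
    fi
  done
}
# e.g.: check_df ~/.config/watch/df.$(hostname)
```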

Dependencies:

dnf=RedHat; apt=Debian

$ sudo dnf install pcp cockpit-pcp python3-pcp # default from cockpit web install
$ sudo apt install pcp cockpit-pcp python3-pcp # default from cockpit web install
#   - Systemd Services:
#     pmcd.service
#     pmlogger.service
#     pmie.service
#     pmproxy.service
#
$ sudo dnf install pcp-export-pcp2json         # pcp2json
$ sudo apt install pcp-export-pcp2json         # pcp2json
#     Debian 11 may need to add;
#      File: /etc/apt/sources.list.d/unstable.list
#       deb http://deb.debian.org/debian/ unstable main
#     Also may need to run: pip install requests
$ sudo dnf install pcp-system-tools            # pmrep
#
$ sudo dnf install jq                          # json parser
$ sudo apt install jq                          # json parser
#
$ sudo dnf install jc                          # json commands
$ sudo apt install jc                          # json commands
#
$ sudo dnf install ncdu                        # Text-based disk usage viewer
$ sudo apt install ncdu                        # Text-based disk usage viewer
#
$ sudo dnf install nmon                        # Text-based system utilization viewer
$ sudo apt install nmon                        # Text-based system utilization viewer

The default is to send an e-mail if limits are exceeded.

To stop E-Mail, add a SEND_MAIL export to the config.txt file.

  • 0 = NO
  • 1 = YES

File: ~/.config/watch/config.txt

~
export SEND_MAIL=0
~

### NextCloud Flow Notifications

  • Create new alert user in NextCloud

    • Add the NextCloud server URL as CLOUD_SERVER in ~/.config/watch/config.txt
    • Add the NextCloud user as CLOUD_USER in ~/.config/watch/config.txt
    • Add the NextCloud user's password as CLOUD_PASS in ~/.config/watch/config.txt
  • As the new alert user in NextCloud;

    • Go to Talk
      • Create a new group Conversation
    • Go to Files and create a new alert directory
      • Add it as the CLOUD_DIR to ~/.config/watch/config.txt
    • Go to Personal Settings > Flow
      • Add a new flow Write to conversation (blue)
        • When: File created
        • and: File size (upload) is greater than 0 MB
      • -> Write to conversation using the
        • Conversation created above

### savelog.sh

This is used to keep several rotated copies of the log from the last sync.sh run that sent alerts to the NextCloud server.

The logs are better viewed using the lnav Linux package: lnav ~/watch/log/

savelog is shipped in many distributions' package repositories, but not all, so it is included here. Thanks very much to the original authors!

### log.sh - Log Watcher

This script runs as root on each node to search for Never Before Seen (NBS) entries in a log file. It needs to be scheduled in cron.

The flow is:

  1. cron runs log.sh

File: /etc/cron.d/logwatcher

# Log - Watcher
PATH=/usr/lib/sysstat:/usr/sbin:/usr/sbin:/usr/bin:/sbin:/bin
MAILTO="bob@bob.com"
# Run Log watcher
*/20 * * * * bob  /home/bob/watch/log.sh 
  2. log.sh reads config file ~/.config/watch/logwatch.<hostname>

File: logwatch.vm7

# File: logwatch.vm7
#           DB                      File                               Alert  Email          Filter 
# ------------------------- ----------------------------------------- ------ ------ -----------------------------------#
apache-access                /var/log/apache2/access.log               N      Y     geturl.pl skip_local_ips.sh
apache-error                 /var/log/apache2/error.log                Y      Y     geturl.pl
apache-other_vhosts_access   /var/log/apache2/other_vhosts_access.log  Y      Y     geturl.pl
  3. New lines in File are read by retail, filtered by any and all Filters supplied.
  4. DB is checked to see if the line has been seen before and can be ignored.
  5. Any remaining lines are sent to the alert directory and an Email is sent, if Y in the config.
  6. A sync for new alert files is done, sending new files to the cloud (NextCloud).
  7. NextCloud will send a Talk alert to the CLOUD_USER for new files. These are viewable on a Phone or Watch, if running the Talk app.
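The never-before-seen step can be sketched with a flat seen-file in place of the Berkeley DB used by the real scripts (nbs here is a hypothetical stand-in for the retail/db pipeline):

```shell
#!/bin/sh
# Print only lines not already recorded in the seen-db file, then
# remember them so they are ignored next run.
nbs() {
  db=$1
  touch "$db"
  grep -Fvx -f "$db" - | tee -a "$db"
}
# First run prints the new lines; the same input run again prints nothing.
```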

Some software packages (retail, nbs) have to be compiled locally.

Refer to the log.sh script for instructions.

### Trouble Resolution

#### armv7l Debian 10

The pmlogger systemd daemon had issues being installed; the instructions in the following log messages solved it.

Aug 08 11:07:39 bob.example.com systemd[1]: Starting LSB: Control pmlogger (the performance metrics logger for PCP)...
Aug 08 11:07:43 bob.example.com pmlogger[913]: /etc/init.d/pmlogger: Warning: Performance Co-Pilot archive logger(s) not permanently enabled.
Aug 08 11:07:43 bob.example.com pmlogger[913]:     To enable pmlogger, run the following as root:
Aug 08 11:07:44 bob.example.com pmlogger[913]:          update-rc.d -f pmlogger remove
Aug 08 11:07:44 bob.example.com pmlogger[913]:          update-rc.d pmlogger defaults 94 06


Linux in the House - https://linux-in-the-house.org Creative Commons License