From 797a1404213173791a5f4126a77ad383ceb00064 Mon Sep 17 00:00:00 2001
From: Christian Cleberg
Date: Mon, 4 Mar 2024 22:34:28 -0600
Subject: initial migration to test org-mode

---
 content/wiki/index.md | 141 --------------------------------------------------
 1 file changed, 141 deletions(-)
 delete mode 100644 content/wiki/index.md

diff --git a/content/wiki/index.md b/content/wiki/index.md
deleted file mode 100644
index fe61c99..0000000
--- a/content/wiki/index.md
+++ /dev/null
@@ -1,141 +0,0 @@
-+++
-title = "Wiki"
-description = "An informal wiki of sorts."
-+++
-
-An informal wiki of sorts.
-
-## Digital Garden
-
-> At times, wilderness is exactly what readers want: a rich collection
-> of resources and links. At times, rigid formality suits readers
-> perfectly, providing precisely the information they want, no more and
-> no less. Indeed, individual hypertexts and Web sites may contain
-> sections that tend toward each extreme.
-
-> Often, however, designers should strive for the comfort, interest, and
-> habitability of parks and gardens: places that invite visitors to
-> remain, and that are designed to engage and delight them, to invite
-> them to linger, to explore, and to reflect.
-
-[Hypertext Garden](https://www.eastgate.com/garden/)
-
-## Git
-
-I want to get rid of all local modifications and go back to the working
-tree of the most recent commit:
-
-```sh
-git restore .
-```
-
-Revert a specified commit:
-
-```sh
-git revert commit-id
-```
-
-Reset the repository to a specific commit in the git log:
-
-```sh
-git reset --mixed commit-id
-```
-
-I need to commit and push changes to a remote that has changed
-since my most recent pull:
-
-```sh
-git pull --rebase
-```
-
-## Hardware
-
-### Laptops
-
-#### macOS
-
-| Category | Details         |
-| -------- | --------------- |
-| Model    | MacBook Pro 16" |
-| CPU      | Apple M2 Pro    |
-| RAM      | 16 GB           |
-| Storage  | 512 GB SSD      |
-
-#### Linux
-
-| Category | Details                                     |
-| -------- | ------------------------------------------- |
-| Model    | Lenovo ThinkPad E15 Gen 4, model 21ED0048US |
-| CPU      | AMD Ryzen 5 5625U with Radeon Graphics      |
-| RAM      | 16 GB                                       |
-| Storage  | 256 GB SSD                                  |
-
-### Servers
-
-| Category           | Details                                |
-| ------------------ | -------------------------------------- |
-| Case               | Rosewill RSV-R4100U 4U                 |
-| Motherboard        | NZXT B550                              |
-| CPU                | AMD Ryzen 7 5700G with Radeon Graphics |
-| RAM                | 64 GB (2x32 GB)                        |
-| Storage (On-board) | Western Digital 500 GB M.2 NVMe SSD    |
-| Storage (HDD Bay)  | 48 TB HDD                              |
-| PSU                | Corsair RM850                          |
-
-### Networking Equipment
-
-- UDM-Pro
-- USW-24-PoE
-- USW-Lite-8-PoE
-- U6-Pro
-- U6-Extender
-- USP-Plug
-- UVC G4 Instant x 3
-- UVC G4 Doorbell Pro
-- UP Chime
-- USW 24-Port Patch Panel
-- USW Switch Lite 8 PoE
-
-## Software
-
-### Laptop
-
-Alpine 3.18.2; no DE.
-
-I currently run my Alpine laptop via the default login shell - no
-desktop environment. From here, I use a mix of byobu and emacs to split
-my screen into tabs and panes. All programs run through the shell and do
-not use display servers such as X or Wayland.
-
-I have Sway installed and configured, but only launch it when I must.
-
-- brightnessctl
-- byobu
-- emacs
-- [font-dejavu, font-noto, font-noto-cjk, font-noto-cjk-extra]
-- glances
-- gnupg
-- irssi
-- lynx
-- nano
-- neomutt
-- newsboat
-- ohmyzsh
-- [pango, pango-tools]
-- pipewire
-- syncthing
-- wireguard
-- zola
-- zsh
-
-### Server
-
-Ubuntu 22.04.1; no DE.
- -See my services page for a list of the publicly-available services -running on this server. - -- certbot -- [docker, docker-compose] -- nginx -- zsh -- cgit v1.2.3-70-g09d2 From 41bd0ad58e44244fe67cb36e066d4bb68738516f Mon Sep 17 00:00:00 2001 From: Christian Cleberg Date: Fri, 29 Mar 2024 01:30:23 -0500 Subject: massive re-write from org-publish to weblorg --- .gitignore | 3 +- README.org | 62 +- blog/aes-encryption/index.org | 103 -- blog/agile-auditing/index.org | 137 --- blog/alpine-desktop/index.org | 260 ---- blog/alpine-linux/index.org | 269 ----- blog/alpine-ssh-hardening/index.org | 71 -- blog/apache-redirect/index.org | 43 - blog/audit-analytics/index.org | 229 ---- blog/audit-dashboard/index.org | 171 --- blog/audit-review-template/index.org | 76 -- blog/audit-sampling/index.org | 264 ----- blog/audit-sql-scripts/index.org | 262 ----- blog/backblaze-b2/index.org | 176 --- blog/bash-it/index.org | 233 ---- blog/burnout/index.org | 41 - blog/business-analysis/index.org | 380 ------ blog/byobu/index.org | 66 -- blog/changing-git-authors/index.org | 72 -- blog/cisa/index.org | 205 ---- blog/clone-github-repos/index.org | 148 --- blog/cloudflare-dns-api/index.org | 190 --- blog/cpp-compiler/index.org | 128 -- blog/cryptography-basics/index.org | 171 --- blog/curseradio/index.org | 95 -- blog/customizing-ubuntu/index.org | 195 --- blog/daily-poetry/index.org | 208 ---- blog/debian-and-nginx/index.org | 172 --- blog/delete-gitlab-repos/index.org | 110 -- blog/digital-minimalism/index.org | 100 -- blog/ditching-cloudflare/index.org | 89 -- blog/dont-say-hello/index.org | 26 - blog/exiftool/index.org | 60 - blog/exploring-hare/index.org | 169 --- blog/fediverse/index.org | 92 -- blog/fedora-i3/index.org | 152 --- blog/fedora-login-manager/index.org | 40 - blog/financial-database/index.org | 256 ---- blog/flac-to-opus/index.org | 169 --- blog/flatpak-symlinks/index.org | 46 - blog/gemini-capsule/index.org | 177 --- blog/gemini-server/index.org | 150 --- blog/git-server/index.org | 617 ---------- blog/gnupg/index.org | 297 ----- blog/goaccess-geoip/index.org | 64 - blog/graphene-os/index.org | 154 --- blog/happiness-map/index.org | 217 ---- blog/homelab/index.org | 149 --- blog/index.org | 141 --- blog/internal-audit/index.org | 247 ---- blog/leaving-the-office/index.org | 240 ---- blog/linux-display-manager/index.org | 72 -- blog/linux-software/index.org | 271 ----- blog/local-llm/index.org | 108 -- blog/macos-customization/index.org | 170 --- blog/macos/index.org | 200 ---- blog/mass-unlike-tumblr-posts/index.org | 87 -- blog/mediocrity/index.org | 122 -- blog/mtp-linux/index.org | 73 -- blog/neon-drive/index.org | 93 -- blog/nextcloud-on-ubuntu/index.org | 159 --- blog/nginx-caching/index.org | 68 -- blog/nginx-compression/index.org | 73 -- blog/nginx-referrer-ban-list/index.org | 126 -- blog/nginx-reverse-proxy/index.org | 220 ---- blog/nginx-tmp-errors/index.org | 75 -- blog/nginx-wildcard-redirect/index.org | 116 -- blog/njalla-dns-api/index.org | 199 ---- blog/org-blog/index.org | 71 -- blog/password-security/index.org | 121 -- blog/photography/index.org | 68 -- blog/php-auth-flow/index.org | 188 --- blog/php-comment-system/index.org | 265 ----- blog/pinetime/index.org | 146 --- blog/plex-migration/index.org | 230 ---- blog/plex-transcoder-errors/index.org | 58 - blog/privacy-com-changes/index.org | 94 -- blog/random-wireguard/index.org | 112 -- blog/recent-website-changes/index.org | 74 -- blog/redirect-github-pages/index.org | 120 -- blog/reliable-notes/index.org | 137 --- 
blog/scli/index.org | 145 --- blog/self-hosting-anonymousoverflow/index.org | 127 -- blog/self-hosting-authelia/index.org | 443 ------- blog/self-hosting-baikal/index.org | 150 --- blog/self-hosting-convos/index.org | 160 --- blog/self-hosting-freshrss/index.org | 232 ---- blog/self-hosting-gitweb/index.org | 72 -- blog/self-hosting-matrix/index.org | 207 ---- blog/self-hosting-otter-wiki/index.org | 135 --- blog/self-hosting-voyager/index.org | 119 -- blog/self-hosting-wger/index.org | 143 --- blog/serenity-os/index.org | 116 -- blog/server-build/index.org | 141 --- blog/server-hardening/index.org | 334 ------ blog/session-manager/index.org | 128 -- blog/seum/index.org | 92 -- blog/ssh-mfa/index.org | 189 --- blog/st/index.org | 87 -- blog/steam-on-ntfs/index.org | 93 -- blog/syncthing/index.org | 169 --- blog/tableau-dashboard/index.org | 156 --- blog/terminal-lifestyle/index.org | 211 ---- blog/the-ansoff-matrix/index.org | 140 --- blog/tuesday/index.org | 37 - blog/ubuntu-emergency-mode/index.org | 69 -- blog/ufw/index.org | 213 ---- blog/unifi-ip-blocklist/index.org | 82 -- blog/unifi-nextdns/index.org | 1241 -------------------- blog/useful-css/index.org | 189 --- blog/vaporwave-vs-outrun/index.org | 124 -- blog/video-game-sales/index.org | 175 --- blog/visual-recognition/index.org | 197 ---- blog/vps-web-server/index.org | 399 ------- blog/website-redesign/index.org | 96 -- blog/wireguard-lan/index.org | 143 --- blog/zfs/index.org | 324 ----- blog/zork/index.org | 90 -- build.sh | 3 + content/blog/2018-11-28-aes-encryption.org | 103 ++ content/blog/2018-11-28-cpp-compiler.org | 128 ++ content/blog/2019-01-07-useful-css.org | 189 +++ content/blog/2019-09-09-audit-analytics.org | 229 ++++ content/blog/2019-12-03-the-ansoff-matrix.org | 117 ++ content/blog/2019-12-16-password-security.org | 121 ++ content/blog/2020-01-25-linux-software.org | 271 +++++ content/blog/2020-01-26-steam-on-ntfs.org | 93 ++ content/blog/2020-02-09-cryptography-basics.org | 171 +++ content/blog/2020-03-25-session-manager.org | 128 ++ content/blog/2020-05-03-homelab.org | 149 +++ content/blog/2020-05-19-customizing-ubuntu.org | 195 +++ content/blog/2020-07-20-video-game-sales.org | 175 +++ content/blog/2020-07-26-business-analysis.org | 380 ++++++ content/blog/2020-08-22-redirect-github-pages.org | 120 ++ content/blog/2020-08-29-php-auth-flow.org | 188 +++ content/blog/2020-09-01-visual-recognition.org | 197 ++++ content/blog/2020-09-22-internal-audit.org | 247 ++++ content/blog/2020-09-25-happiness-map.org | 217 ++++ content/blog/2020-10-12-mediocrity.org | 122 ++ content/blog/2020-12-27-website-redesign.org | 96 ++ content/blog/2020-12-28-neon-drive.org | 93 ++ content/blog/2020-12-29-zork.org | 90 ++ content/blog/2021-01-01-seum.org | 92 ++ content/blog/2021-01-04-fediverse.org | 92 ++ content/blog/2021-01-07-ufw.org | 213 ++++ content/blog/2021-02-19-macos.org | 200 ++++ content/blog/2021-03-19-clone-github-repos.org | 148 +++ content/blog/2021-03-28-gemini-capsule.org | 177 +++ content/blog/2021-03-28-vaporwave-vs-outrun.org | 124 ++ content/blog/2021-03-30-vps-web-server.org | 399 +++++++ content/blog/2021-04-17-gemini-server.org | 150 +++ content/blog/2021-04-23-php-comment-system.org | 265 +++++ content/blog/2021-04-28-photography.org | 68 ++ content/blog/2021-05-30-changing-git-authors.org | 72 ++ content/blog/2021-07-15-delete-gitlab-repos.org | 110 ++ content/blog/2021-08-25-audit-sampling.org | 264 +++++ content/blog/2021-10-09-apache-redirect.org | 43 + content/blog/2021-12-04-cisa.org | 205 ++++ 
content/blog/2022-02-10-leaving-the-office.org | 240 ++++ content/blog/2022-02-10-njalla-dns-api.org | 199 ++++ content/blog/2022-02-16-debian-and-nginx.org | 172 +++ content/blog/2022-02-17-exiftool.org | 60 + content/blog/2022-02-20-nginx-caching.org | 68 ++ content/blog/2022-02-22-tuesday.org | 37 + content/blog/2022-03-02-reliable-notes.org | 137 +++ content/blog/2022-03-03-financial-database.org | 256 ++++ content/blog/2022-03-08-plex-migration.org | 230 ++++ content/blog/2022-03-23-cloudflare-dns-api.org | 190 +++ content/blog/2022-03-23-nextcloud-on-ubuntu.org | 159 +++ content/blog/2022-03-24-server-hardening.org | 334 ++++++ content/blog/2022-03-26-ssh-mfa.org | 189 +++ content/blog/2022-04-02-nginx-reverse-proxy.org | 220 ++++ content/blog/2022-04-09-pinetime.org | 146 +++ content/blog/2022-06-01-ditching-cloudflare.org | 89 ++ content/blog/2022-06-07-self-hosting-freshrss.org | 232 ++++ content/blog/2022-06-16-terminal-lifestyle.org | 211 ++++ content/blog/2022-06-22-daily-poetry.org | 208 ++++ content/blog/2022-06-24-fedora-i3.org | 152 +++ content/blog/2022-07-01-git-server.org | 617 ++++++++++ content/blog/2022-07-14-gnupg.org | 297 +++++ content/blog/2022-07-25-curseradio.org | 95 ++ content/blog/2022-07-30-flac-to-opus.org | 169 +++ content/blog/2022-07-31-bash-it.org | 233 ++++ content/blog/2022-08-31-privacy-com-changes.org | 94 ++ content/blog/2022-09-17-serenity-os.org | 116 ++ content/blog/2022-09-21-graphene-os.org | 154 +++ content/blog/2022-10-04-mtp-linux.org | 73 ++ content/blog/2022-10-04-syncthing.org | 169 +++ content/blog/2022-10-22-alpine-linux.org | 269 +++++ content/blog/2022-10-30-linux-display-manager.org | 72 ++ content/blog/2022-11-07-self-hosting-matrix.org | 207 ++++ content/blog/2022-11-11-nginx-tmp-errors.org | 75 ++ content/blog/2022-11-27-server-build.org | 141 +++ .../blog/2022-11-29-nginx-referrer-ban-list.org | 126 ++ content/blog/2022-12-01-nginx-compression.org | 73 ++ .../blog/2022-12-07-nginx-wildcard-redirect.org | 116 ++ content/blog/2022-12-17-st.org | 87 ++ content/blog/2022-12-23-alpine-desktop.org | 260 ++++ content/blog/2023-01-03-recent-website-changes.org | 74 ++ .../blog/2023-01-05-mass-unlike-tumblr-posts.org | 87 ++ content/blog/2023-01-08-fedora-login-manager.org | 40 + content/blog/2023-01-21-flatpak-symlinks.org | 46 + content/blog/2023-01-23-random-wireguard.org | 112 ++ content/blog/2023-01-28-self-hosting-wger.org | 143 +++ content/blog/2023-02-02-exploring-hare.org | 169 +++ content/blog/2023-05-22-burnout.org | 41 + content/blog/2023-06-08-goaccess-geoip.org | 64 + content/blog/2023-06-08-self-hosting-baikal.org | 150 +++ content/blog/2023-06-18-unifi-ip-blocklist.org | 82 ++ content/blog/2023-06-20-audit-review-template.org | 76 ++ content/blog/2023-06-23-byobu.org | 66 ++ content/blog/2023-06-23-self-hosting-convos.org | 160 +++ content/blog/2023-06-28-backblaze-b2.org | 176 +++ content/blog/2023-06-30-self-hosting-voyager.org | 119 ++ content/blog/2023-07-12-wireguard-lan.org | 143 +++ content/blog/2023-07-19-plex-transcoder-errors.org | 58 + content/blog/2023-08-18-agile-auditing.org | 137 +++ content/blog/2023-09-15-self-hosting-gitweb.org | 72 ++ content/blog/2023-09-19-audit-sql-scripts.org | 262 +++++ content/blog/2023-10-04-digital-minimalism.org | 100 ++ content/blog/2023-10-11-self-hosting-authelia.org | 443 +++++++ content/blog/2023-10-15-alpine-ssh-hardening.org | 71 ++ .../2023-10-17-self-hosting-anonymousoverflow.org | 127 ++ content/blog/2023-11-08-scli.org | 145 +++ 
content/blog/2023-12-03-unifi-nextdns.org | 1241 ++++++++++++++++++++ content/blog/2024-01-08-dont-say-hello.org | 26 + content/blog/2024-01-09-macos-customization.org | 170 +++ content/blog/2024-01-13-local-llm.org | 108 ++ content/blog/2024-01-26-audit-dashboard.org | 171 +++ content/blog/2024-01-27-tableau-dashboard.org | 156 +++ content/blog/2024-02-06-zfs.org | 324 +++++ content/blog/2024-02-13-ubuntu-emergency-mode.org | 69 ++ .../blog/2024-02-21-self-hosting-otter-wiki.org | 135 +++ content/blog/2024-03-13-doom-emacs.org | 354 ++++++ .../blog/2024-03-15-self-hosting-ddns-updater.org | 314 +++++ content/blog/2024-03-29-org-blog.org | 43 + content/index.org | 2 + content/salary/index.org | 54 + content/services/index.org | 16 + content/wiki/blogroll.org | 32 + content/wiki/hardware.org | 114 ++ content/wiki/ios.org | 198 ++++ content/wiki/macos.org | 192 +++ index.org | 28 - publish.el | 98 ++ salary/index.org | 50 - services/index.org | 14 - static/styles.css | 807 ------------- static/syntax-theme-dark.css | 280 ----- static/syntax-theme-light.css | 407 ------- theme/static/gpg.txt | 52 + theme/static/robots.txt | 3 + theme/static/styles.css | 578 +++++++++ theme/static/styles.min.css | 1 + theme/static/syntax-theme-dark.css | 280 +++++ theme/static/syntax-theme-light.css | 407 +++++++ theme/templates/atom.xml | 42 + theme/templates/base.html | 41 + theme/templates/blog.html | 14 + theme/templates/index.html | 41 + theme/templates/page.html | 11 + theme/templates/post.html | 30 + theme/templates/wiki.html | 17 + wiki/index.org | 145 --- 264 files changed, 21927 insertions(+), 20993 deletions(-) delete mode 100644 blog/aes-encryption/index.org delete mode 100644 blog/agile-auditing/index.org delete mode 100644 blog/alpine-desktop/index.org delete mode 100644 blog/alpine-linux/index.org delete mode 100644 blog/alpine-ssh-hardening/index.org delete mode 100644 blog/apache-redirect/index.org delete mode 100644 blog/audit-analytics/index.org delete mode 100644 blog/audit-dashboard/index.org delete mode 100644 blog/audit-review-template/index.org delete mode 100644 blog/audit-sampling/index.org delete mode 100644 blog/audit-sql-scripts/index.org delete mode 100644 blog/backblaze-b2/index.org delete mode 100644 blog/bash-it/index.org delete mode 100644 blog/burnout/index.org delete mode 100644 blog/business-analysis/index.org delete mode 100644 blog/byobu/index.org delete mode 100644 blog/changing-git-authors/index.org delete mode 100644 blog/cisa/index.org delete mode 100644 blog/clone-github-repos/index.org delete mode 100644 blog/cloudflare-dns-api/index.org delete mode 100644 blog/cpp-compiler/index.org delete mode 100644 blog/cryptography-basics/index.org delete mode 100644 blog/curseradio/index.org delete mode 100644 blog/customizing-ubuntu/index.org delete mode 100644 blog/daily-poetry/index.org delete mode 100644 blog/debian-and-nginx/index.org delete mode 100644 blog/delete-gitlab-repos/index.org delete mode 100644 blog/digital-minimalism/index.org delete mode 100644 blog/ditching-cloudflare/index.org delete mode 100644 blog/dont-say-hello/index.org delete mode 100644 blog/exiftool/index.org delete mode 100644 blog/exploring-hare/index.org delete mode 100644 blog/fediverse/index.org delete mode 100644 blog/fedora-i3/index.org delete mode 100644 blog/fedora-login-manager/index.org delete mode 100644 blog/financial-database/index.org delete mode 100644 blog/flac-to-opus/index.org delete mode 100644 blog/flatpak-symlinks/index.org delete mode 100644 blog/gemini-capsule/index.org 
delete mode 100644 blog/gemini-server/index.org delete mode 100644 blog/git-server/index.org delete mode 100644 blog/gnupg/index.org delete mode 100644 blog/goaccess-geoip/index.org delete mode 100644 blog/graphene-os/index.org delete mode 100644 blog/happiness-map/index.org delete mode 100644 blog/homelab/index.org delete mode 100644 blog/index.org delete mode 100644 blog/internal-audit/index.org delete mode 100644 blog/leaving-the-office/index.org delete mode 100644 blog/linux-display-manager/index.org delete mode 100644 blog/linux-software/index.org delete mode 100644 blog/local-llm/index.org delete mode 100644 blog/macos-customization/index.org delete mode 100644 blog/macos/index.org delete mode 100644 blog/mass-unlike-tumblr-posts/index.org delete mode 100644 blog/mediocrity/index.org delete mode 100644 blog/mtp-linux/index.org delete mode 100644 blog/neon-drive/index.org delete mode 100644 blog/nextcloud-on-ubuntu/index.org delete mode 100644 blog/nginx-caching/index.org delete mode 100644 blog/nginx-compression/index.org delete mode 100644 blog/nginx-referrer-ban-list/index.org delete mode 100644 blog/nginx-reverse-proxy/index.org delete mode 100644 blog/nginx-tmp-errors/index.org delete mode 100644 blog/nginx-wildcard-redirect/index.org delete mode 100644 blog/njalla-dns-api/index.org delete mode 100644 blog/org-blog/index.org delete mode 100644 blog/password-security/index.org delete mode 100644 blog/photography/index.org delete mode 100644 blog/php-auth-flow/index.org delete mode 100644 blog/php-comment-system/index.org delete mode 100644 blog/pinetime/index.org delete mode 100644 blog/plex-migration/index.org delete mode 100644 blog/plex-transcoder-errors/index.org delete mode 100644 blog/privacy-com-changes/index.org delete mode 100644 blog/random-wireguard/index.org delete mode 100644 blog/recent-website-changes/index.org delete mode 100644 blog/redirect-github-pages/index.org delete mode 100644 blog/reliable-notes/index.org delete mode 100644 blog/scli/index.org delete mode 100644 blog/self-hosting-anonymousoverflow/index.org delete mode 100644 blog/self-hosting-authelia/index.org delete mode 100644 blog/self-hosting-baikal/index.org delete mode 100644 blog/self-hosting-convos/index.org delete mode 100644 blog/self-hosting-freshrss/index.org delete mode 100644 blog/self-hosting-gitweb/index.org delete mode 100644 blog/self-hosting-matrix/index.org delete mode 100644 blog/self-hosting-otter-wiki/index.org delete mode 100644 blog/self-hosting-voyager/index.org delete mode 100644 blog/self-hosting-wger/index.org delete mode 100644 blog/serenity-os/index.org delete mode 100644 blog/server-build/index.org delete mode 100644 blog/server-hardening/index.org delete mode 100644 blog/session-manager/index.org delete mode 100644 blog/seum/index.org delete mode 100644 blog/ssh-mfa/index.org delete mode 100644 blog/st/index.org delete mode 100644 blog/steam-on-ntfs/index.org delete mode 100644 blog/syncthing/index.org delete mode 100644 blog/tableau-dashboard/index.org delete mode 100644 blog/terminal-lifestyle/index.org delete mode 100644 blog/the-ansoff-matrix/index.org delete mode 100644 blog/tuesday/index.org delete mode 100644 blog/ubuntu-emergency-mode/index.org delete mode 100644 blog/ufw/index.org delete mode 100644 blog/unifi-ip-blocklist/index.org delete mode 100644 blog/unifi-nextdns/index.org delete mode 100644 blog/useful-css/index.org delete mode 100644 blog/vaporwave-vs-outrun/index.org delete mode 100644 blog/video-game-sales/index.org delete mode 100644 
blog/visual-recognition/index.org delete mode 100644 blog/vps-web-server/index.org delete mode 100644 blog/website-redesign/index.org delete mode 100644 blog/wireguard-lan/index.org delete mode 100644 blog/zfs/index.org delete mode 100644 blog/zork/index.org create mode 100755 build.sh create mode 100644 content/blog/2018-11-28-aes-encryption.org create mode 100644 content/blog/2018-11-28-cpp-compiler.org create mode 100644 content/blog/2019-01-07-useful-css.org create mode 100644 content/blog/2019-09-09-audit-analytics.org create mode 100644 content/blog/2019-12-03-the-ansoff-matrix.org create mode 100644 content/blog/2019-12-16-password-security.org create mode 100644 content/blog/2020-01-25-linux-software.org create mode 100644 content/blog/2020-01-26-steam-on-ntfs.org create mode 100644 content/blog/2020-02-09-cryptography-basics.org create mode 100644 content/blog/2020-03-25-session-manager.org create mode 100644 content/blog/2020-05-03-homelab.org create mode 100644 content/blog/2020-05-19-customizing-ubuntu.org create mode 100644 content/blog/2020-07-20-video-game-sales.org create mode 100644 content/blog/2020-07-26-business-analysis.org create mode 100644 content/blog/2020-08-22-redirect-github-pages.org create mode 100644 content/blog/2020-08-29-php-auth-flow.org create mode 100644 content/blog/2020-09-01-visual-recognition.org create mode 100644 content/blog/2020-09-22-internal-audit.org create mode 100644 content/blog/2020-09-25-happiness-map.org create mode 100644 content/blog/2020-10-12-mediocrity.org create mode 100644 content/blog/2020-12-27-website-redesign.org create mode 100644 content/blog/2020-12-28-neon-drive.org create mode 100644 content/blog/2020-12-29-zork.org create mode 100644 content/blog/2021-01-01-seum.org create mode 100644 content/blog/2021-01-04-fediverse.org create mode 100644 content/blog/2021-01-07-ufw.org create mode 100644 content/blog/2021-02-19-macos.org create mode 100644 content/blog/2021-03-19-clone-github-repos.org create mode 100644 content/blog/2021-03-28-gemini-capsule.org create mode 100644 content/blog/2021-03-28-vaporwave-vs-outrun.org create mode 100644 content/blog/2021-03-30-vps-web-server.org create mode 100644 content/blog/2021-04-17-gemini-server.org create mode 100644 content/blog/2021-04-23-php-comment-system.org create mode 100644 content/blog/2021-04-28-photography.org create mode 100644 content/blog/2021-05-30-changing-git-authors.org create mode 100644 content/blog/2021-07-15-delete-gitlab-repos.org create mode 100644 content/blog/2021-08-25-audit-sampling.org create mode 100644 content/blog/2021-10-09-apache-redirect.org create mode 100644 content/blog/2021-12-04-cisa.org create mode 100644 content/blog/2022-02-10-leaving-the-office.org create mode 100644 content/blog/2022-02-10-njalla-dns-api.org create mode 100644 content/blog/2022-02-16-debian-and-nginx.org create mode 100644 content/blog/2022-02-17-exiftool.org create mode 100644 content/blog/2022-02-20-nginx-caching.org create mode 100644 content/blog/2022-02-22-tuesday.org create mode 100644 content/blog/2022-03-02-reliable-notes.org create mode 100644 content/blog/2022-03-03-financial-database.org create mode 100644 content/blog/2022-03-08-plex-migration.org create mode 100644 content/blog/2022-03-23-cloudflare-dns-api.org create mode 100644 content/blog/2022-03-23-nextcloud-on-ubuntu.org create mode 100644 content/blog/2022-03-24-server-hardening.org create mode 100644 content/blog/2022-03-26-ssh-mfa.org create mode 100644 content/blog/2022-04-02-nginx-reverse-proxy.org 
create mode 100644 content/blog/2022-04-09-pinetime.org create mode 100644 content/blog/2022-06-01-ditching-cloudflare.org create mode 100644 content/blog/2022-06-07-self-hosting-freshrss.org create mode 100644 content/blog/2022-06-16-terminal-lifestyle.org create mode 100644 content/blog/2022-06-22-daily-poetry.org create mode 100644 content/blog/2022-06-24-fedora-i3.org create mode 100644 content/blog/2022-07-01-git-server.org create mode 100644 content/blog/2022-07-14-gnupg.org create mode 100644 content/blog/2022-07-25-curseradio.org create mode 100644 content/blog/2022-07-30-flac-to-opus.org create mode 100644 content/blog/2022-07-31-bash-it.org create mode 100644 content/blog/2022-08-31-privacy-com-changes.org create mode 100644 content/blog/2022-09-17-serenity-os.org create mode 100644 content/blog/2022-09-21-graphene-os.org create mode 100644 content/blog/2022-10-04-mtp-linux.org create mode 100644 content/blog/2022-10-04-syncthing.org create mode 100644 content/blog/2022-10-22-alpine-linux.org create mode 100644 content/blog/2022-10-30-linux-display-manager.org create mode 100644 content/blog/2022-11-07-self-hosting-matrix.org create mode 100644 content/blog/2022-11-11-nginx-tmp-errors.org create mode 100644 content/blog/2022-11-27-server-build.org create mode 100644 content/blog/2022-11-29-nginx-referrer-ban-list.org create mode 100644 content/blog/2022-12-01-nginx-compression.org create mode 100644 content/blog/2022-12-07-nginx-wildcard-redirect.org create mode 100644 content/blog/2022-12-17-st.org create mode 100644 content/blog/2022-12-23-alpine-desktop.org create mode 100644 content/blog/2023-01-03-recent-website-changes.org create mode 100644 content/blog/2023-01-05-mass-unlike-tumblr-posts.org create mode 100644 content/blog/2023-01-08-fedora-login-manager.org create mode 100644 content/blog/2023-01-21-flatpak-symlinks.org create mode 100644 content/blog/2023-01-23-random-wireguard.org create mode 100644 content/blog/2023-01-28-self-hosting-wger.org create mode 100644 content/blog/2023-02-02-exploring-hare.org create mode 100644 content/blog/2023-05-22-burnout.org create mode 100644 content/blog/2023-06-08-goaccess-geoip.org create mode 100644 content/blog/2023-06-08-self-hosting-baikal.org create mode 100644 content/blog/2023-06-18-unifi-ip-blocklist.org create mode 100644 content/blog/2023-06-20-audit-review-template.org create mode 100644 content/blog/2023-06-23-byobu.org create mode 100644 content/blog/2023-06-23-self-hosting-convos.org create mode 100644 content/blog/2023-06-28-backblaze-b2.org create mode 100644 content/blog/2023-06-30-self-hosting-voyager.org create mode 100644 content/blog/2023-07-12-wireguard-lan.org create mode 100644 content/blog/2023-07-19-plex-transcoder-errors.org create mode 100644 content/blog/2023-08-18-agile-auditing.org create mode 100644 content/blog/2023-09-15-self-hosting-gitweb.org create mode 100644 content/blog/2023-09-19-audit-sql-scripts.org create mode 100644 content/blog/2023-10-04-digital-minimalism.org create mode 100644 content/blog/2023-10-11-self-hosting-authelia.org create mode 100644 content/blog/2023-10-15-alpine-ssh-hardening.org create mode 100644 content/blog/2023-10-17-self-hosting-anonymousoverflow.org create mode 100644 content/blog/2023-11-08-scli.org create mode 100644 content/blog/2023-12-03-unifi-nextdns.org create mode 100644 content/blog/2024-01-08-dont-say-hello.org create mode 100644 content/blog/2024-01-09-macos-customization.org create mode 100644 content/blog/2024-01-13-local-llm.org create mode 100644 
content/blog/2024-01-26-audit-dashboard.org create mode 100644 content/blog/2024-01-27-tableau-dashboard.org create mode 100644 content/blog/2024-02-06-zfs.org create mode 100644 content/blog/2024-02-13-ubuntu-emergency-mode.org create mode 100644 content/blog/2024-02-21-self-hosting-otter-wiki.org create mode 100644 content/blog/2024-03-13-doom-emacs.org create mode 100644 content/blog/2024-03-15-self-hosting-ddns-updater.org create mode 100644 content/blog/2024-03-29-org-blog.org create mode 100644 content/index.org create mode 100644 content/salary/index.org create mode 100644 content/services/index.org create mode 100644 content/wiki/blogroll.org create mode 100644 content/wiki/hardware.org create mode 100644 content/wiki/ios.org create mode 100644 content/wiki/macos.org delete mode 100644 index.org create mode 100644 publish.el delete mode 100644 salary/index.org delete mode 100644 services/index.org delete mode 100644 static/styles.css delete mode 100644 static/syntax-theme-dark.css delete mode 100644 static/syntax-theme-light.css create mode 100644 theme/static/gpg.txt create mode 100644 theme/static/robots.txt create mode 100644 theme/static/styles.css create mode 100644 theme/static/styles.min.css create mode 100644 theme/static/syntax-theme-dark.css create mode 100644 theme/static/syntax-theme-light.css create mode 100644 theme/templates/atom.xml create mode 100644 theme/templates/base.html create mode 100644 theme/templates/blog.html create mode 100644 theme/templates/index.html create mode 100644 theme/templates/page.html create mode 100644 theme/templates/post.html create mode 100644 theme/templates/wiki.html delete mode 100644 wiki/index.org (limited to 'content/wiki') diff --git a/.gitignore b/.gitignore index b7423f7..347a4a0 100644 --- a/.gitignore +++ b/.gitignore @@ -1,3 +1,2 @@ .DS_Store -public/* -sitemap.org +.build diff --git a/README.org b/README.org index 116185d..6507aa9 100644 --- a/README.org +++ b/README.org @@ -5,54 +5,19 @@ [[https://cleberg.net][cleberg.net]] is my personal webpage. -This README is viewable on [[https://git.cleberg.net/cleberg.net.git/tree/README.org][cgit]] (raw), or on [[https://cleberg.net/README.html][my website]] (html). +This README is viewable on [[https://git.cleberg.net/cleberg.net.git/tree/README.org][rgit]]. ** Overview -This website & blog uses [[https://orgmode.org/][Org-Mode]], published with =org-publish=. +This website & blog uses [[https://orgmode.org/][Org-Mode]], published with [[https://github.com/emacs-love/weblorg][weblorg]]. ** Configuration -In order to configure the project for publishing, add the following code to your =~/.emacs= file (for Doom, =~/.doom.d/config.el=). 
-
-#+begin_src lisp
-;; org-publish
-(require 'ox-publish)
-
-(setq org-publish-project-alist
-      `(("blog"
-         :base-directory "~/Source/cleberg.net/"
-         :base-extension "org"
-         :recursive t
-         :publishing-directory "~/Source/cleberg.net/public/"
-         :publishing-function org-html-publish-to-html)
-        ;; HTML5
-        :html-doctype "html5"
-        :html-html5-fancy t
-        ;; Disable some Org's HTML defaults
-        :html-head-include-scripts nil
-        :html-head-include-default-style nil
-        ;; Generate sitemap
-        :auto-sitemap t
-        :sitemap-filename "sitemap.org"
-        ;; Customize head, preamble, & postamble
-        :html-head ""
-        :html-preamble ""
-        :html-postamble ""
-        ("static"
-         :base-directory "~/Source/cleberg.net/static/"
-         :base-extension "css\\|txt\\|jpg\\|gif\\|png"
-         :recursive t
-         :publishing-directory "~/Source/cleberg.net/public/"
-         :publishing-function org-publish-attachment)
-
-        ("cleberg.net" :components ("blog" "static"))));; org-publish
-(require 'ox-publish)
-#+end_src
+Everything is configured within the =publish.el= file. Refer to the weblorg documentation for further configuration options.
 
 ** Building
 
-Local testing can be done via [[https://www.gnu.org/software/emacs/][Emacs]].
+Local testing can be done via [[https://www.gnu.org/software/emacs/][Emacs]] or through the command line.
 
 To get running:
 
@@ -62,22 +27,21 @@ cd cleberg.net
 emacs -nw
 #+end_src
 
-Within Emacs, open any of the repository files. In Doom, I do this with =Spc f f= and selecting =README.org=. Once a file has been opened, you can publish the project with =C-c C-e P a=.
+Within Emacs, open any of the repository files. In Doom, I do this with =Spc f f= and selecting =README.org=. Make any changes necessary to customize the project.
+
+To publish, you can use the =build.sh= script (change the deployment target!) or run the following commands.
 
-If you need to re-publish unchanged files, I recommend using the following command:
+Use the =ENV= environment variable to determine which base URL weblorg will use. If =ENV= is omitted, it will default to =localhost:8000=. If =ENV=prod=, weblorg will look in the =publish.el= file for the production base URL.
 
-#+begin_src lisp
-M-: (org-publish "project name" t)
+#+begin_src sh
+ENV=prod emacs --script publish.el
 #+end_src
 
-The files will be published to the =public= directory.
+The files will be published to the =.build= directory. You can deploy these files to the target through any number of methods, such as =scp= or SFTP.
 
 ** Tasks
 
-*** TODO Create RSS feed
-Possible Solution: [[https://writepermission.com/org-blogging-rss-feed.html][Org mode blogging: RSS feed]]
-Possible Solution: [[https://www.zoraster.org/blog/script-to-generate-rss-feed][Script to Generate RSS Feeds]]
+*** DONE Create RSS feed
 *** TODO Format all blog posts with =M q=
-*** TODO Create script to auto-generate the =/blog/= list and =/= most recent posts
-Possible Solution: [[https://taingram.org/blog/org-mode-blog.html#orgde61a58][Sitemap]]
+*** DONE Create script to auto-generate the =/blog/= list and =/= most recent posts
 *** TODO Figure out how to get filetags to show up

diff --git a/blog/aes-encryption/index.org b/blog/aes-encryption/index.org
deleted file mode 100644
index 03dcbf9..0000000
--- a/blog/aes-encryption/index.org
+++ /dev/null
@@ -1,103 +0,0 @@
-#+title: AES Encryption
-#+description: Learn how the AES Encryption algorithm works.
-#+date: <2018-11-28 Wed>
-#+filetags: :security:
-
-* Basic AES
-If you're not familiar with encryption techniques, [[https://en.wikipedia.org/wiki/Advanced_Encryption_Standard][AES]] is the *Advanced
-Encryption Standard*. This specification was established by the National
-Institute of Standards and Technology, sub-selected from the Rijndael family of
-ciphers (128, 192, and 256 bits) in 2001. Furthering its popularity and status,
-the US government chose AES as their default encryption method for top-secret
-data, replacing the previous standard, which had been in place since 1977.
-
-AES has proven to be an extremely safe encryption method, with 7-round and
-8-round attacks making no material improvements since the release of this
-encryption standard almost two decades ago.
-
-#+begin_quote
-Though many papers have been published on the cryptanalysis of AES, the fastest
-single-key attacks on round-reduced AES variants [20, 33] so far are only
-slightly more powerful than those proposed 10 years ago [23,24].
-
-- [[http://research.microsoft.com/en-us/projects/cryptanalysis/aesbc.pdf][Bogdanov, et al.]]
-#+end_quote
-
-* How Secure is AES?
-In theory, AES-256 is non-crackable due to the massive number of combinations
-that can be produced. However, AES-128 is no longer recommended as a viable
-implementation to protect important data.
-
-A semi-short [[http://www.moserware.com/2009/09/stick-figure-guide-to-advanced.html][comic strip]] from Moserware quickly explains AES for the public to
-understand. Basically, AES encrypts the data by obscuring the relationship
-between the data and the encrypted data. Additionally, this method spreads the
-message out. Lastly, the key produced by AES is the secret to decrypting it.
-Someone may know the method of AES, but without the key, they are powerless.
-
-To obscure and spread the data out, AES creates a substitution-permutation
-network. Wikipedia has a wonderful [[https://upload.wikimedia.org/wikipedia/commons/thumb/c/cd/SubstitutionPermutationNetwork2.png/468px-SubstitutionPermutationNetwork2.png][example of an SP network]] available. This
-network sends the data through a set of S-boxes (using the unique key) to
-substitute the bits with another block of bits. Then, a P-box will permutate, or
-rearrange, the bits. This is done over and over, with the key being derived from
-the last round. For AES, the key size specifies the number of transformation
-rounds: 10, 12, and 14 rounds for 128-bit, 192-bit, and 256-bit keys,
-respectively.
-
-* The Process
-1. *KeyExpansion*: Using [[https://en.m.wikipedia.org/wiki/Advanced_Encryption_Standard][Rijndael's key schedule]], the keys are dynamically
-   generated.
-2. *AddRoundKey*: Each byte of the data is combined with this key using bitwise
-   XOR.
-3. *SubBytes*: This is followed by the substitution of each byte of data.
-4. *ShiftRows*: Then, the final three rows are shifted a certain number of
-   steps, dictated by the cipher.
-5. *MixColumns*: After the rows have been shifted, the columns are mixed and
-   combined.
-
-This process does not necessarily stop after one full round. Steps 2 through 5
-will repeat for the number of rounds specified by the key. However, the final
-round excludes the MixColumns step. As you can see, this is a fairly complex
-process. One must have a solid understanding of general mathematical principles
-to fully understand how the sequence works (and to even attempt to find a
-weakness).
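
The practical takeaway is to lean on a vetted implementation rather than coding these rounds yourself. As a minimal sketch of what that looks like in practice, assuming the =openssl= command-line tool is installed (the filenames are placeholders, not from the original post):

#+begin_src sh
# Encrypt a file with AES-256 in CBC mode, deriving the key from a
# passphrase with PBKDF2 (openssl prompts for the passphrase)
openssl enc -aes-256-cbc -pbkdf2 -salt -in secret.txt -out secret.txt.enc

# Decrypt the file with the same passphrase
openssl enc -d -aes-256-cbc -pbkdf2 -in secret.txt.enc -out secret.txt
#+end_src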
- -According to research done by Bogdanov et al., it would take billions of years -to brute force a 126-bit key with current hardware. Additionally, this brute -force attack would require storing 2^{88} bits of data! However, there are a few -different attacks that have been used to show vulnerabilities with the use of -this technology. Side-channel attacks use inadvertent leaks of data from the -hardware or software, which can allow attackers to obtain the key or run -programs on a user's hardware. - -Please note that this is not something you should run out and try to implement -in your =Hello, World!= app after only a few hours of research. While AES -(basically all encryption methods) is extremely efficient in what it does, it -takes a lot of time and patience to understand. If you're looking for something -which currently implements AES, check out the [[https://www.bouncycastle.org/documentation.html][Legion of the Bouncy Castle]] for -Java implementations of cryptographic algorithms. - -* Why Does Encryption Matter? -There are limitless reasons to enable encryption at-rest or in-transit for -various aspects of your digital life. You can research specific examples, such -as [[https://arstechnica.com/tech-policy/2018/12/australia-passes-new-law-to-thwart-strong-encryption/][Australia passes new law to thwart strong encryption]]. However, I will simply -list a few basic reasons to always enable encryption, where feasible: - -1. Privacy is a human right and is recognized as a national right in some - countries (e.g., [[https://www.law.cornell.edu/wex/fourth_amendment][US Fourth Amendment]]). -2. "Why not?" Encryption rarely affects performance or speed, so there's usually - not a reason to avoid it in the first place. -3. Your digital identity and activity (texts, emails, phone calls, online - accounts, etc.) are extremely valuable and can result in terrible - consequences, such as identity theft, if leaked to other parties. Encrypting - this data prevents such leaks from ruining lives. -4. Wiping or factory-resetting does not actually wipe all data from the storage - device. There are methods to read data from the physical disks/boards inside - devices. -5. Corporations, governments, and other nefarious groups/individuals are - actively looking for ways to collect personal information about anyone they - can. If someone's data is unencrypted, that person may become a target due to - the ease of data collection. - -​*Read More:* - -- [[http://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.197.pdf][Federal Information Processing Standards Publication 197]] diff --git a/blog/agile-auditing/index.org b/blog/agile-auditing/index.org deleted file mode 100644 index 69e5485..0000000 --- a/blog/agile-auditing/index.org +++ /dev/null @@ -1,137 +0,0 @@ -#+title: Agile Auditing: An Introduction -#+date: <2023-08-18> -#+description: A quick introduction to using the Agile methodology in an audit. -#+filetags: :audit: - -* What is Agile Auditing? -[[https://en.wikipedia.org/wiki/Agile_software_development][Agile]], the collaborative philosophy behind many software development methods, -has been picking up steam as a beneficial tool to use in the external and -internal auditing world. - -This blog post will walk through commonly used terms within Agile, Scrum, and -Kanban in order to translate these terms and roles into audit-specific terms. 
-
-Whether your team is in charge of a financial statement audit, an attestation
-(SOC 1, SOC 2, etc.), or a unique internal audit, the terms used throughout this
-post should still apply.
-
-* Agile
-To start, I'll take a look at Agile.
-
-#+begin_quote
-The Agile methodology is a project management approach that involves breaking
-the project into phases and emphasizes continuous collaboration and improvement.
-Teams follow a cycle of planning, executing, and evaluating.
-#+end_quote
-
-While this approach may seem similar to what audit teams have historically
-done, an audit team must make distinct changes in their mentality and how they
-approach and manage a project.
-
-** Agile Values
-The Agile Manifesto, written in 2001 at a summit in Utah, contains a set of four
-main values that comprise the Agile approach:
-
-1. Individuals and interactions over processes and tools.
-2. Working software over comprehensive documentation.
-3. Customer collaboration over contract negotiation.
-4. Responding to change over following a plan.
-
-Beyond the four values, [[https://agilemanifesto.org/principles.html][twelve principles]] were also written as part of the
-summit.
-
-In order to relate these values to an audit or attestation engagement, we need
-to shift the focus from software development to the main goal of an engagement:
-completing sufficient audit testing to address the relevant risks over the
-processes and controls at hand.
-
-Audit Examples:
-
-- Engagement teams must value the team members, client contacts, and their
-  interactions over the historical processes and tools that have been used.
-- Engagement teams must value a final report that contains sufficient audit
-  documentation over excessive documentation or scope creep.
-- Engagement teams must collaborate with the audit clients as much as feasible
-  to ensure that both sides are constantly updated with current knowledge of the
-  engagement's status and any potential findings, rather than waiting for
-  pre-set meetings or the end of the engagement to communicate.
-- Engagement teams must be able to respond to change in an engagement's
-  schedule, scope, or environment to ensure that the project is completed in a
-  timely manner and that all relevant areas are tested.
-  - In terms of an audit department's portfolio, they must be able to respond to
-    changes in their company's or client's environment and be able to
-    dynamically change their audit plan accordingly.
-
-* Scrum
-The above section discusses the high-level details of the Agile philosophy and
-how an audit team can potentially mold that mindset into the audit world, but
-how does a team implement these ideas?
-
-There are many methods that use an Agile mindset, but I prefer [[https://en.wikipedia.org/wiki/Scrum_(software_development)][Scrum]]. Scrum is a
-framework based on Agile that enables a team to work through a project via a
-series of roles, ceremonies, artifacts, and values.
-
-Let's dive into each of these individually.
-
-** Scrum Team
-A scrum project is only as good as the team running the project. Standard scrum
-teams are separated into three distinct areas:
-
-1. *Product Owner (Client Contact)*: The client contact is the audit equivalent
-   of the product owner in Scrum. They are responsible for partnering with the
-   engagement or audit team to ensure progress is being made, priorities are
-   established, and clear guidance is given when questions or findings arise
-   within each sprint.
-2. 
*Scrum Master (Engagement Lead)*: The engagement or audit team lead is - responsible for coaching the team and the client contact on the scrum - process, tracking team progress against plan, scheduling necessary resources, - and helping remove obstacles. -3. *Scrum Developers (Engagement Members)*: The engagement or audit team is the - set of team members responsible for getting the work done. These team members - will work on each task, report progress, resolve obstacles, and collaborate - with other team members and the client contact to ensure goals are being met. - -** Scrum Ceremonies -Scrum ceremonies are events that are performed on a regular basis. - -1. *Sprint Planning*: The team works together to plan the upcoming sprint goal - and which user stories (tasks) will be added to the sprint to achieve that - goal. -2. *Sprint*: The time period, typically at least one week and no more than one - month in length, where the team works on the stories and anything in the - backlog. -3. *Daily Scrum*: A very short meeting held each day, typically 15 minutes, to - quickly emphasize alignment on the sprint goal and plan the next 24 hours. - Each team member may share what they did the day before, what they'll do - today, and any obstacles to their work. -4. *Sprint Review*: At the end of each sprint, the team will gather and discuss - the progress, obstacles, and backlog from the previous sprint. -5. *Sprint Retrospective*: More specific than the sprint review, the - retrospective is meant to discuss what worked and what did not work during - the sprint. This may be processes, tools, people, or even things related to - the Scrum ceremonies. - -One additional ceremony that may be applicable is organizing the backlog. This -is typically the responsibility of the engagement leader and is meant to -prioritize and clarify what needs to be done to complete items in the backlog. - -** Artifacts -While artifacts are generally not customizable in the audit world (i.e., each -control test must include some kind of working paper with evidence supporting -the test results), I wanted to include some quick notes on associating scrum -artifact terms with an audit. - -1. *Product Backlog*: This is the overall backlog of unfinished audit tasks from - all prior sprints. -2. *Sprint Backlog*: This is the backlog of unfinished audit tasks from one - individual sprint. -3. *Increment*: This is the output of each sprint - generally this is best - thought of as any documentation prepared during the sprint, such as risk - assessments, control working papers, deficiency analysis, etc. - -* Kanban -Last but not least, Kanban is a methodology that relies on boards to categorize -work into distinct, descriptive categories that allow an agile or scrum team to -effectively plan the work of a sprint or project. - -See Atlassian's [[https://www.atlassian.com/agile/kanban][Kanban]] page for more information. diff --git a/blog/alpine-desktop/index.org b/blog/alpine-desktop/index.org deleted file mode 100644 index 2648123..0000000 --- a/blog/alpine-desktop/index.org +++ /dev/null @@ -1,260 +0,0 @@ -#+title: Alpine Linux as a Desktop OS -#+date: 2022-12-23 -#+description: Learn how to set up Alpine Linux with Sway to use as a desktop operating system. -#+filetags: :linux: - -* Isn't Alpine Linux for Servers? -This is a question I see a lot when people are presented with an example -of Alpine Linux running as a desktop OS. 
-
-While Alpine is small, fast, and minimal, that doesn't stop it from
-functioning at a productive level for desktop users.
-
-This post documents how I installed and modified Alpine Linux to become
-my daily desktop OS.
-
-* Installation
-Note that I cover the installation of Alpine Linux in my other post, so
-I won't repeat it here: [[../alpine-linux/][Alpine Linux: My New
-Server OS]].
-
-Basically, get a bootable USB or whatever you prefer with Alpine on it,
-boot the ISO, and run the setup script.
-
-#+begin_src sh
-setup-alpine
-#+end_src
-
-Once you have gone through all the options and the installer finishes
-without errors, reboot.
-
-#+begin_src sh
-reboot
-#+end_src
-
-* Initial Setup
-Once Alpine is installed and the machine has rebooted, log in as root,
-or =su= to root after logging in as your user. From here, you
-should start by updating and upgrading the system in case the ISO was
-not fully up to date.
-
-#+begin_src sh
-# Update and upgrade system
-apk -U update && apk -U upgrade
-
-# Add an editor so we can enable the community repository
-apk add nano
-#+end_src
-
-You need to uncomment the =community= repository for your version of
-Alpine Linux.
-
-For v3.17, the =repositories= file should look like this:
-
-#+begin_src sh
-nano /etc/apk/repositories
-#+end_src
-
-#+begin_src conf
-#/media/sda/apks
-http://mirrors.gigenet.com/alpinelinux/v3.17/main
-http://mirrors.gigenet.com/alpinelinux/v3.17/community
-#http://mirrors.gigenet.com/alpinelinux/edge/main
-#http://mirrors.gigenet.com/alpinelinux/edge/community
-#http://mirrors.gigenet.com/alpinelinux/edge/testing
-#+end_src
-
-#+begin_src sh
-# Add the rest of your packages
-apk add linux-firmware iwd doas git curl wget
-
-# Add yourself to the wheel group so you can use the doas command
-adduser $USER wheel
-#+end_src
-
-* Window Manager (Desktop)
-The [[https://wiki.alpinelinux.org/wiki/Sway][Sway installation guide]]
-has everything you need to get Sway working on Alpine.
-
-However, I'll include a brief list of the commands I ran and their
-purpose for posterity here.
-
-#+begin_src sh
-# Add eudev and set it up
-apk add eudev
-setup-devd udev
-
-# Since I have Radeon graphics, I need the following packages
-apk add mesa-dri-gallium mesa-va-gallium
-
-# Add user to applicable groups
-adduser $USER input
-adduser $USER video
-
-# Add a font package
-apk add ttf-dejavu
-
-# Add the seatd daemon
-apk add seatd
-rc-update add seatd
-rc-service seatd start
-
-# Add user to seat group
-adduser $USER seat
-
-# Add elogind
-apk add elogind polkit-elogind
-rc-update add elogind
-rc-service elogind start
-
-# Finally, add sway and its documentation
-apk add sway sway-doc
-
-# Install optional dependencies: xwayland (recommended for compatibility
-# reasons), foot (default terminal emulator), bemenu (wayland menu),
-# swaylock/swaylockd (lockscreen tool), swaybg (wallpaper daemon),
-# swayidle (idle management (DPMS) daemon)
-apk add xwayland foot bemenu swaylock swaylockd swaybg swayidle
-#+end_src
-
-Once you have the packages installed and set up, you need to export the
-=XDG_RUNTIME_DIR= upon login. To do this, edit your =.profile= file.
-
-If you use another shell, such as =zsh=, you need to edit that shell's
-profile (e.g., =~/.zprofile=)!
-
-#+begin_src sh
-nano ~/.profile
-#+end_src
-
-Within the file, paste this:
-
-#+begin_src sh
-if test -z "${XDG_RUNTIME_DIR}"; then
-    export XDG_RUNTIME_DIR=/tmp/$(id -u)-runtime-dir
-    if ! test -d "${XDG_RUNTIME_DIR}"; then
-        mkdir "${XDG_RUNTIME_DIR}"
-        chmod 0700 "${XDG_RUNTIME_DIR}"
-    fi
-fi
-#+end_src
-
-Once that's complete, you can launch Sway manually.
-
-#+begin_src sh
-dbus-run-session -- sway
-#+end_src
-
-** Personal Touches
-I also added the following packages, per my personal preferences and
-situation.
-
-#+begin_src sh
-# brightnessctl (brightness controller), zsh (shell), firefox (browser),
-# syncthing (file sync service), wireguard-tools (WireGuard VPN),
-# gomuks (CLI Matrix client), neomutt (CLI email client),
-# thunderbird (GUI email client), gnupg (GPG key manager)
-doas apk add brightnessctl zsh firefox syncthing wireguard-tools \
-    gomuks neomutt thunderbird gnupg
-#+end_src
-
-From here, I use my Syncthing storage to pull all the configuration
-files I stored from prior desktops, such as
-[[https://git.sr.ht/~cmc/dotfiles][my dotfiles]].
-
-* Resolving Issues
-** WiFi Issues
-I initially tried to set up my Wi-Fi the standard way with =iwd=, but it
-didn't work.
-
-Here is what I initially tried (I did all of this as =root=):
-
-#+begin_src sh
-apk add iwd
-rc-service iwd start
-iwctl station wlan0 connect <SSID> # This will prompt for the password
-rc-update add iwd boot && rc-update add dbus boot
-#+end_src
-
-Then, I added the Wi-Fi entry to the bottom of the networking interface
-file:
-
-#+begin_src sh
-nano /etc/network/interfaces
-#+end_src
-
-#+begin_src conf
-auto wlan0
-iface wlan0 inet dhcp
-#+end_src
-
-Finally, restart the networking service:
-
-#+begin_src sh
-rc-service networking restart
-#+end_src
-
-My Wi-Fi interface would receive an IP address from the router, but it
-could not ping anything in the network. To solve the Wi-Fi issues, I
-originally upgraded to Alpine's =edge= repositories, which was
-unnecessary.
-
-Really, the solution was to set =NameResolvingService=resolvconf= in
-=/etc/iwd/main.conf=.
-
-#+begin_src sh
-doas nano /etc/iwd/main.conf
-#+end_src
-
-#+begin_src conf
-[Network]
-
-NameResolvingService=resolvconf
-#+end_src
-
-Once I finished this process, my Wi-Fi worked flawlessly.
-
-** Sound Issues
-Same as with the Wi-Fi, I had no sound and could not control the
-mute/unmute or volume buttons on my laptop.
-
-To resolve this, I installed
-[[https://wiki.alpinelinux.org/wiki/PipeWire][pipewire]].
-
-#+begin_src sh
-# Add your user to the following groups
-addgroup $USER audio
-addgroup $USER video
-
-# Install pipewire and other useful packages
-apk add pipewire wireplumber pipewire-pulse pipewire-jack pipewire-alsa
-#+end_src
-
-Finally, I needed to add =/usr/libexec/pipewire-launcher= to my
-=.config/sway/config= file so that Pipewire would run every time I
-launched sway.
-
-#+begin_src sh
-nano ~/.config/sway/config
-#+end_src
-
-#+begin_src conf
-# Run pipewire audio server
-exec /usr/libexec/pipewire-launcher

-# Example audio button controls
-bindsym XF86AudioRaiseVolume exec --no-startup-id pactl set-sink-volume @DEFAULT_SINK@ +5%
-bindsym XF86AudioLowerVolume exec --no-startup-id pactl set-sink-volume @DEFAULT_SINK@ -5%
-bindsym XF86AudioMute exec --no-startup-id pactl set-sink-mute @DEFAULT_SINK@ toggle
-bindsym XF86AudioMicMute exec --no-startup-id pactl set-source-mute @DEFAULT_SOURCE@ toggle
-#+end_src
-
-Note that I do not use bluetooth or screen sharing, so I won't cover
-those options in this post.
-
-Other than these issues, I have a working Alpine desktop. No other
-complaints thus far!
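
A quick way to confirm the sound setup from the section above is to ask the PulseAudio compatibility layer what is serving it. A minimal sketch, assuming the PipeWire packages shown earlier are installed, a session is running, and the =pactl= client (packaged separately on Alpine) is available:

#+begin_src sh
# The server name should report PulseAudio running on top of PipeWire
pactl info | grep -i "server name"

# List output devices (sinks); the laptop speakers should appear here
pactl list short sinks
#+end_src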
diff --git a/blog/alpine-linux/index.org b/blog/alpine-linux/index.org
deleted file mode 100644
index 8d4a14b..0000000
--- a/blog/alpine-linux/index.org
+++ /dev/null
@@ -1,269 +0,0 @@
-#+title: Alpine Linux: My New Server OS
-#+date: 2022-10-22
-#+description: A retrospective on installing and configuring Alpine Linux as my new server operating system.
-#+filetags: :linux:
-
-* Alpine Linux
-[[https://alpinelinux.org][Alpine Linux]] is a very small distro, built
-on musl libc and busybox. It uses ash as the default shell, OpenRC as
-the init system, and apk as the package manager. According to their
-website, an Alpine container "requires no more than 8 MB and a minimal
-installation to disk requires around 130 MB of storage." An actual bare
-metal machine is recommended to have 100 MB of RAM and 0-700 MB of
-storage space.
-
-Historically, I've used Ubuntu's minimal installation image as my server
-OS for the last five years. Ubuntu worked well and helped as my original
-server contained an nVidia GPU and no onboard graphics, so quite a few
-distros won't boot or install without a lot of tinkering.
-
-Alpine has given me a huge increase in performance across my Docker apps
-and Nginx websites. CPU load for the new server I'm using to test Alpine
-hovers around 0-5% on average with an Intel(R) Core(TM) i3-6100 CPU @
-3.70GHz.
-
-The only services I haven't moved over to Alpine are Plex Media Server
-and Syncthing, which may increase CPU load quite a bit depending on how
-many streams are running.
-
-** Installation
-In terms of installation, Alpine has an incredibly useful
-[[https://wiki.alpinelinux.org/wiki/Installation][wiki]] that will guide
-a user through the installation and post-installation processes, as
-well as various other articles and guides.
-
-To install Alpine, find an appropriate
-[[https://alpinelinux.org/downloads/][image to download]] and flash it
-to a USB using software such as Rufus or Etcher. I opted to use the
-Standard image for my x86_{64} architecture.
-
-Once the USB is ready, plug it into the machine and reboot. Note that
-you may have to use a key such as =Esc= or =F1-12= to access the boot
-menu. The Alpine Linux terminal will load quickly and prompt for a login.
-
-To log in to the installation image, use the =root= account; there is no
-password. Once logged in, execute the setup command:
-
-#+begin_src sh
-setup-alpine
-#+end_src
-
-The setup script will ask a series of questions to configure the system;
-a scripted alternative is sketched just after this list. Be sure to
-answer carefully or else you may have to re-configure the system after
-boot.
-
-- Keyboard Layout (Local keyboard language and usage mode, e.g., us and
-  variant of us-nodeadkeys.)
-- Hostname (The name for the computer.)
-- Network (For example, automatic IP address discovery with the "DHCP"
-  protocol.)
-- DNS Servers (Domain Name Servers to query. For privacy reasons, it is
-  NOT recommended to route every local request to servers like Google's
-  8.8.8.8.)
-- Timezone
-- Proxy (Proxy server to use for accessing the web. Use "none" for
-  direct connections to the internet.)
-- Mirror (From where to download packages. Choose the organization you
-  trust giving your usage patterns to.)
-- SSH (Secure SHell remote access server. "Openssh" is part of the
-  default install image. Use "none" to disable remote login, e.g. on
-  laptops.)
-- NTP (Network Time Protocol client used for keeping the system clock in
-  sync with a time-server. Package "chrony" is part of the default
-  install image.)
-- Disk Mode (Select between diskless (disk="none"), "data" or "sys", as
-  described above.)
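
The scripted alternative mentioned above relies on =setup-alpine='s answer-file support, which is handy for repeat installs. A brief sketch, assuming a hypothetical =answers.txt=; the example values in the comment are illustrative, not a complete file:

#+begin_src sh
# Generate a template answer file to edit
setup-alpine -c answers.txt

# Adjust values, e.g., HOSTNAMEOPTS="-n alpine" or TIMEZONEOPTS="-z UTC"
nano answers.txt

# Run the installer against the answer file
setup-alpine -f answers.txt
#+end_src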
-- Disk Mode (Select between diskless (disk="none"), "data" or "sys", as described above.)
-
-Once the setup script is finished, be sure to reboot the machine and remove the USB device.
-
-#+begin_src sh
-reboot
-#+end_src
-
-** Post-Installation
-There are many things you can do once your Alpine Linux system is up and running, and it largely depends on what you'll use the machine for. I'm going to walk through my personal post-installation setup for my web server.
-
-1. Upgrade the System
-
-   First, log in as =root= in order to update and upgrade the system:
-
-   #+begin_src sh
-   apk -U upgrade
-   #+end_src
-
-2. Add a User
-
-   I needed to add a user so that I don't need to log in as root. Note that if you're used to using the =sudo= command, you will now need to use the =doas= command on Alpine Linux.
-
-   #+begin_src sh
-   apk add doas
-   adduser <username>
-   adduser <username> wheel
-   #+end_src
-
-   You can now log out and log back in using the newly-created user:
-
-   #+begin_src sh
-   exit
-   #+end_src
-
-3. Enable Community Packages
-
-   In order to install more common packages that aren't found in the =main= repository, you will need to enable the =community= repository:
-
-   #+begin_src sh
-   doas nano /etc/apk/repositories
-   #+end_src
-
-   Uncomment the community line for whichever version of Alpine you're running:
-
-   #+begin_src sh
-   /media/usb/apks
-   http://dl-cdn.alpinelinux.org/alpine/v3.16/main
-   http://dl-cdn.alpinelinux.org/alpine/v3.16/community
-   #http://dl-cdn.alpinelinux.org/alpine/edge/main
-   #http://dl-cdn.alpinelinux.org/alpine/edge/community
-   #http://dl-cdn.alpinelinux.org/alpine/edge/testing
-   #+end_src
-
-4. Install Required Packages
-
-   Now that the community packages are available, you can install any packages you need. In my case, I installed the web server packages I need for my services:
-
-   #+begin_src sh
-   doas apk add nano nginx docker docker-compose ufw
-   #+end_src
-
-5. SSH
-
-   If you didn't install OpenSSH as part of the installation, you can do so now:
-
-   #+begin_src sh
-   doas apk add openssh
-   #+end_src
-
-   Next, either create a new key or copy your SSH key to the server from your current machine:
-
-   #+begin_src sh
-   # Create a new key
-   ssh-keygen
-   #+end_src
-
-   If you need to copy an existing SSH key from a current machine:
-
-   #+begin_src sh
-   # Copy key from an existing machine
-   ssh-copy-id <username>@<server>
-   #+end_src
-
-6. Firewall
-
-   Lastly, I installed =ufw= above as my firewall. To set up, default to deny incoming and allow outgoing connections. Then selectively allow other ports or apps as needed.
-
-   #+begin_src sh
-   doas ufw default deny incoming
-   doas ufw default allow outgoing
-   doas ufw allow SSH
-   doas ufw allow "WWW Full"
-   doas ufw allow 9418 # Git server port
-   #+end_src
-
-7. Change Hostname
-
-   If you don't like the hostname set during installation, you just need to edit two files. First, edit the simple hostname file:
-
-   #+begin_src sh
-   doas nano /etc/hostname
-   #+end_src
-
-   #+begin_src sh
-   <hostname>
-   #+end_src
-
-   Next, edit the =hosts= file:
-
-   #+begin_src sh
-   doas nano /etc/hosts
-   #+end_src
-
-   #+begin_src sh
-   127.0.0.1 <hostname>.local localhost.local localhost
-   ::1       <hostname>.local
-   #+end_src
-
-* Nginx Web Server
-To set up my web server, I simply created the =www= user and the necessary directories.
-
-#+begin_src sh
-doas adduser -D -g 'www' www
-doas mkdir /www
-doas chown -R www:www /var/lib/nginx/
-doas chown -R www:www /www
-#+end_src
-
-If you're running a simple webroot, you can alter the main =nginx.conf= file.
-Otherwise, you can drop configuration files in the following directory. You don't need to enable or symlink the configuration file like you do on other systems.
-
-#+begin_src sh
-doas nano /etc/nginx/http.d/example_website.conf
-#+end_src
-
-Once the configuration is set and pointed at the =/www= directory to serve files, enable the Nginx service:
-
-#+begin_src sh
-# Note that 'default' must be included or Nginx will not start on boot
-doas rc-update add nginx default
-#+end_src
-
-* Docker Containers
-Docker works exactly the same as on other systems. Either execute a =docker run= command or create a =docker-compose.yml= file and run =docker-compose up -d=.
-
-* Git Server
-I went in-depth on how to self-host a git server in another post: [[../git-server/][Self-Hosting a Personal Git Server]].
-
-However, there are a few differences with Alpine. First, note that in order to change the =git= user's shell, you must do a few things a little differently:
-
-#+begin_src sh
-doas apk add libuser
-doas touch /etc/login.defs
-doas mkdir /etc/default
-doas touch /etc/default/useradd
-doas lchsh git
-#+end_src
-
-* Thoughts on Alpine
-So far, I love Alpine Linux. I have no complaints about anything at this point, but I'm not completely finished with the migration yet. Once I'm able to upgrade my hardware to a rack-mounted server, I will migrate Plex and Syncthing over to Alpine as well - possibly putting Plex into a container or VM.
-
-The performance is stellar, the =apk= package manager is seamless, and system administration tasks are effortless. My only regret is that I didn't install Alpine sooner.
diff --git a/blog/alpine-ssh-hardening/index.org b/blog/alpine-ssh-hardening/index.org
deleted file mode 100644
index 4e7fcc5..0000000
--- a/blog/alpine-ssh-hardening/index.org
+++ /dev/null
@@ -1,71 +0,0 @@
-#+title: SSH Hardening for Alpine Linux
-#+date: 2023-10-15
-#+description: A quick guide to harden SSH configuration on Alpine.
-#+filetags: :linux:
-
-* Overview
-This guide follows the standard [[https://www.ssh-audit.com/hardening_guides.html][ssh-audit]] hardening guide, tweaked for Alpine Linux.
-
-* Hardening Guide
-These steps must be performed as root. You can try to use =doas= or =sudo=, but there may be issues.
-
-1. Re-generate the RSA and ED25519 keys
-
-#+begin_src sh
-rm /etc/ssh/ssh_host_*
-ssh-keygen -t rsa -b 4096 -f /etc/ssh/ssh_host_rsa_key -N ""
-ssh-keygen -t ed25519 -f /etc/ssh/ssh_host_ed25519_key -N ""
-#+end_src
-
-2. [@2] Remove small Diffie-Hellman moduli
-
-#+begin_src sh
-awk '$5 >= 3071' /etc/ssh/moduli > /etc/ssh/moduli.safe
-mv /etc/ssh/moduli.safe /etc/ssh/moduli
-#+end_src
-
-3. [@3] Enable the RSA and ED25519 HostKey directives in the =/etc/ssh/sshd_config= file
-
-#+begin_src sh
-sed -i 's/^\#HostKey \/etc\/ssh\/ssh_host_\(rsa\|ed25519\)_key$/HostKey \/etc\/ssh\/ssh_host_\1_key/g' /etc/ssh/sshd_config
-#+end_src
-
-4. [@4] Restrict supported key exchange, cipher, and MAC algorithms
-
-#+begin_src sh
-echo -e "\n# Restrict key exchange, cipher, and MAC algorithms, as per sshaudit.com\n# hardening guide.\nKexAlgorithms sntrup761x25519-sha512@openssh.com,curve25519-sha256,curve25519-sha256@libssh.org,diffie-hellman-group16-sha512,diffie-hellman-group18-sha512,diffie-hellman-group-exchange-sha256\nCiphers chacha20-poly1305@openssh.com,aes256-gcm@openssh.com,aes128-gcm@openssh.com,aes256-ctr,aes192-ctr,aes128-ctr\nMACs hmac-sha2-256-etm@openssh.com,hmac-sha2-512-etm@openssh.com,umac-128-etm@openssh.com\nHostKeyAlgorithms ssh-ed25519,ssh-ed25519-cert-v01@openssh.com,sk-ssh-ed25519@openssh.com,sk-ssh-ed25519-cert-v01@openssh.com,rsa-sha2-512,rsa-sha2-512-cert-v01@openssh.com,rsa-sha2-256,rsa-sha2-256-cert-v01@openssh.com" > /etc/ssh/sshd_config.d/ssh-audit_hardening.conf
-#+end_src
-
-5. [@5] Include the =/etc/ssh/sshd_config.d= directory
-
-#+begin_src sh
-# Append, so existing directives (e.g., the HostKey lines above) are preserved
-echo -e "Include /etc/ssh/sshd_config.d/*.conf" >> /etc/ssh/sshd_config
-#+end_src
-
-6. [@6] Restart OpenSSH server
-
-#+begin_src sh
-rc-service sshd restart
-#+end_src
-
-* Testing SSH
-You can test the results with the =ssh-audit= Python script.
-
-#+begin_src sh
-pip3 install ssh-audit
-ssh-audit localhost
-#+end_src
-
-If everything succeeded, the results will show as all green. If anything is yellow, orange, or red, you may need to tweak additional settings.
-
-#+caption: ssh-audit
-[[https://img.cleberg.net/blog/20231015-ssh-hardening/ssh-audit.png]]
diff --git a/blog/apache-redirect/index.org b/blog/apache-redirect/index.org
deleted file mode 100644
index 25fb7ba..0000000
--- a/blog/apache-redirect/index.org
+++ /dev/null
@@ -1,43 +0,0 @@
-#+title: Apache Redirect HTML Files to a Directory
-#+date: 2021-10-09
-#+description: A guide on redirecting HTML files to a directory in Apache.
-#+filetags: :apache:
-
-* The Problem
-After recently switching static site generators (SSG), my blog URLs changed with no option to preserve the classic =.html= extension at the end of my blog post URLs.
-
-I really disliked using my old SSG ([[https://jekyllrb.com][Jekyll]]) and prefer my new tool ([[https://www.getzola.org][Zola]]) much more, so I was determined to figure out a way to get the proper redirects set up so that people who find my posts online aren't constantly met by 404 errors.
-
-* The Solution
-To solve this problem, I needed to address two pieces:
-
-1. Redirect all blog post URL requests from =/blog/some-post.html= to =/blog/some-post/=.
-2. Ensure that no other =.html= files are redirected, such as =index.html=.
-
-After /a lot/ of tweaking and testing, I believe I have finally found the solution, shown below.
-
-#+begin_src conf
-RewriteEngine On
-RewriteCond %{REQUEST_URI} !index\.html$ [NC]
-RewriteRule ^(.*)\.html$ https://example.com/$1 [R=301,L]
-#+end_src
-
-This piece of code in the Apache =.conf= or =.htaccess= file will do the following:
-
-1. Turn on the RewriteEngine so that we can modify URLs.
-2. Ignore any =index.html= files from the rule we are about to specify.
-3. Find any =.html= files within the website directory and redirect them to exclude the file extension.
-4. The final piece is adding the trailing slash (=/=) at the end of the URL - you'll notice that I don't have an Apache rule for that since Apache handles that automatically.
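-
-A quick way to verify rules like these is to request an old-style URL and inspect the response headers with =curl=. This is just a sanity check, assuming the rules above are deployed on =example.com= (the placeholder domain used in the rule):
-
-#+begin_src sh
-# Show only the response headers for an old-style URL
-curl -sI https://example.com/blog/some-post.html
-
-# Expect an HTTP 301 status and a Location header pointing at
-# https://example.com/blog/some-post/
-#+end_src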
diff --git a/blog/audit-analytics/index.org b/blog/audit-analytics/index.org
deleted file mode 100644
index 77b3082..0000000
--- a/blog/audit-analytics/index.org
+++ /dev/null
@@ -1,229 +0,0 @@
-#+title: Data Analysis in Auditing
-#+date: 2019-09-09
-#+description: Learn how to use data analysis in the world of auditing.
-#+filetags: :audit:
-
-* What Are Data Analytics?
-A quick aside before I dive into this post: =data analytics= is a vague term that has become popular in recent years. Think of a =data analytic= as the output of any data analysis you perform. For example, a pivot table or a pie chart could be a data analytic.
-
-[[https://en.wikipedia.org/wiki/Data_analysis][Data analysis]] is a process that utilizes statistics and other mathematical methods to discover useful information within datasets. This involves examining, cleaning, transforming, and modeling data so that you can use the data to support an opinion, create more useful viewpoints, and gain knowledge to implement into audit planning or risk assessments.
-
-One of the common mistakes that managers (and anyone new to the process) make is assuming that everything involved with this process is "data analytics". In fact, data analytics are only a small part of the process.
-
-See *Figure 1* for a more accurate representation of where data analysis sits within the full process. This means that data analysis does not include querying or extracting data, selecting samples, or performing audit tests. These steps can be necessary for an audit (and may even be performed by the same associates), but they are not data analytics.
-
-#+caption: The Intelligence Cycle
-[[https://img.cleberg.net/blog/20190909-data-analysis-in-auditing/intelligence_cycle-min.png]]
-
-* Current Use of Analytics in Auditing
-While data analysis has been an integral part of most businesses and departments for the better part of the last century, only recently have internal audit functions begun adopting this practice. The internal audit function works exclusively to provide assurance and consulting services to the business areas within the firm (except for internal auditing firms that are hired by other companies to perform their roles).
-
-#+begin_quote
-Internal Auditing helps an organization accomplish its objectives by bringing a systematic, disciplined approach to evaluate and improve the effectiveness of risk management, control and governance processes.
-
-- The IIA's Definition of Internal Audit
-#+end_quote
-
-Part of the blame for the slow adoption of data analysis can be attributed to the fact that internal auditing is strongly based on tradition and following the precedents set by previous auditors. However, there can be no progress without auditors who are willing to break the mold and test new audit techniques. In fact, as of 2018, [[https://www.cpapracticeadvisor.com/accounting-audit/news/12404086/internal-audit-groups-are-lagging-in-data-analytics][only 63% of internal audit departments currently utilize data analytics]] in North America. This number should be as close as possible to 100%. I have never been part of an audit that would not have benefited from data analytics.
-
-So, how do internal audit functions remedy this situation? It's definitely not as easy as walking into work on Monday and telling your Chief Audit Executive that you're going to start implementing analytics in the next audit. You need a plan and a system to make the analysis process as effective as possible.
-
-* The DELTA Model
-One of the easiest ways to experiment with data analytics and gain an understanding of the processes is to implement them within your own department. But how do we do this if we've never worked with analysis before? One of the most common places to start is to research some data analysis models currently available. For this post, we'll take a look at the DELTA model. You can take a look at *Figure 2* for a quick overview of the model.
-
-The DELTA model sets a few guidelines for areas wanting to implement data analytics so that the results can be as comprehensive as possible:
-
-- *Data*: Must be clean, accessible, and (usually) unique.
-- *Enterprise-Wide Focus*: Key data systems and analytical resources must be available for use (by the Internal Audit Function).
-- *Leaders*: Must promote a data analytics approach and show the value of analytical results.
-- *Targets*: Must be set for key areas and risks that the analytics can be compared against (KPIs).
-- *Analysts*: There must be auditors willing and able to perform data analytics or else the system cannot be sustained.
-
-#+caption: The Delta Model
-[[https://img.cleberg.net/blog/20190909-data-analysis-in-auditing/delta-min.png]]
-
-* Finding the Proper KPIs
-Once the Internal Audit Function has decided that they want to start using data analytics internally and have ensured they're properly set up to do so, they need to figure out what they will be testing against. Key Performance Indicators (KPIs) are qualitative or quantitative factors that can be evaluated and assessed to determine if the department is performing well, usually compared to historical or industry benchmarks. Once KPIs have been agreed upon and set, auditors can use data analytics to assess and report on these KPIs. This gives the person performing the analytics the freedom to express opinions on the results, whereas the results are ambiguous if no KPIs exist.
-
-It should be noted that tracking KPIs in the department can help ensure you have a rigorous Quality Assurance and Improvement Program (QAIP) in accordance with applicable standards, such as IPPF Standard 1300.
-
-#+begin_quote
-The chief audit executive must develop and maintain a quality assurance and improvement program that covers all aspects of the internal audit activity.
-
-- IPPF Standard 1300
-#+end_quote
-
-Additionally, IPPF Standard 2060 discusses reporting:
-
-#+begin_quote
-The chief audit executive must report periodically to senior management and the board on the internal audit activity's purpose, authority, responsibility, and performance relative to its plan and on its conformance with the Code of Ethics and the Standards. Reporting must also include significant risk and control issues, including fraud risks, governance issues, and other matters that require the attention of senior management and/or the board.
-
-- IPPF Standard 2060
-#+end_quote
-
-The hardest part of finding KPIs is determining which ones are appropriate for your department. Since every department is different and has different goals, KPIs will vary drastically between companies. To give you an idea of where to look, here are some ideas I came up with when discussing the topic with a few colleagues.
-
-- Efficiency/Budgeting:
-  - Audit hours to staff utilization ratio (annual hours divided by total annual work hours).
-  - Audit hours compared to the number of audits completed.
-  - Time between audit steps or to complete the whole audit.
-    E.g., time from fieldwork completion to audit report issuance.
-- Reputation:
-  - The frequency that management has requested the services of the internal audit function (IAF).
-  - Management, audit committee, or external audit satisfaction survey results.
-  - Education, experience, certifications, tenure, and training of the auditors on staff.
-- Quality:
-  - Number and frequency of audit findings. Assign monetary or numerical values, if possible.
-  - Percentage of recommendations issued and implemented.
-- Planning:
-  - Percentage or number of key risks audited per year or per audit.
-  - Proportion of audit universe audited per year.
-
-* Data Analysis Tools
-Finally, to be able to analyze and report on the data analysis, auditors need to evaluate the tools at their disposal. There are many options available, but a few of the most common ones can easily get the job done. For example, almost every auditor already has access to Microsoft Excel. Excel is more powerful than most people give it credit for and can accomplish a lot of basic statistics without much work. If you don't know a lot about statistics but still want to see some of the more basic results, Excel is a great option.
-
-To perform more in-depth statistical analysis or to explore large datasets that Excel cannot handle, auditors will need to explore other options. The big three that have had a lot of success in recent years are Python, R, and ACL. ACL can be used as either a graphical tool (point and click) or as a scripting tool, where the auditor must write the scripts manually. Python and R are solely scripting languages.
-
-The general trend in the data analytics environment is that if the tool allows you to do everything by clicking buttons or dragging elements, you won't be able to fully utilize the analytics you need. The most robust solutions are created by those who understand how to write the scripts manually. It should be noted that as the utility of a tool increases, its learning curve usually steepens as well. It will take auditors longer to learn how to utilize Python, R, or ACL versus learning how to utilize Excel.
-
-* Visualization
-Once an auditor has finally found the right data, KPIs, and tools, they must report these results so that actions can be taken. Performing in-depth data analysis is only useful if the results are understood by its audience. The best way to create this understanding is to visualize the results of the data. Let's take a look at some of the best options to visualize and report the results you've found.
-
-Some of the most popular commercial tools for visualization are Microsoft Power BI and Tableau Desktop. However, other tools exist such as JMP, Plotly, Qlikview, Alteryx, or D3. Some require commercial licenses while others are free to use. For corporate data, you may want to make sure that the tool does not send any of the data outside the company (such as to cloud storage). I won't be going into depth on any of these tools since visualization is largely a subjective and creative experience, but remember to constantly explore new options as you repeat the process.
-
-Lastly, let's take a look at an example of data visualization. This example comes from a [[https://talent.works/2018/03/28/the-science-of-the-job-search-part-iii-61-of-entry-level-jobs-require-3-years-of-experience/][blog post written by Kushal Chakrabarti]] in 2018 about the percent of entry-level US jobs that require experience.
-*Figure 3* shows us an easy-to-digest picture of the data. We can quickly tell that only about 12.5% of entry-level jobs don't require experience.
-
-This is the kind of result that easily describes the data for you. However, make sure to include an explanation of what the results mean. Don't let the reader assume what the data means, especially if it relates to a complex subject. /Tell a story/ about the data and why the results matter. For example, *Figure 4* shows a part of the explanation the author gives to illustrate his point.
-
-#+caption: Entry-Level Visualization
-[[https://img.cleberg.net/blog/20190909-data-analysis-in-auditing/vis_example-min.png]]
-
-#+caption: Visualization Explanation
-[[https://img.cleberg.net/blog/20190909-data-analysis-in-auditing/vis_example_explanation-min.png]]
-
-* Wrap-Up
-While this is not an all-encompassing program that you can just adopt into your department, it should be enough to get anyone started on the process of understanding and implementing data analytics. Always remember to continue learning and exploring new options as your processes grow and evolve.
diff --git a/blog/audit-dashboard/index.org b/blog/audit-dashboard/index.org
deleted file mode 100644
index e48c938..0000000
--- a/blog/audit-dashboard/index.org
+++ /dev/null
@@ -1,171 +0,0 @@
-#+title: Building an Audit Status Dashboard
-#+date: 2024-01-26
-#+description: Learn how to utilize Alteryx Designer and Power BI Desktop to build a simple status tracking dashboard for an audit or other engagement.
-#+filetags: :audit:
-
-Alteryx and Power BI are powerful tools that can help turn your old-school audit trackers into interactive dashboards that provide useful insights and potential action plans.
-
-With these tools, we are going to build the following dashboard:
-
-[[https://img.cleberg.net/blog/20240126-audit-dashboard/dashboard_01.png]]
-[[https://img.cleberg.net/blog/20240126-audit-dashboard/dashboard_02.png]]
-
-* Requirements
-This project assumes the following:
-
-- You have access to Alteryx Designer and Power BI Desktop.
-  - If you only have Power BI Desktop, you may need to perform some analysis in Power BI instead of Alteryx.
-- Your data is in a format that can be imported into Alteryx and/or Power BI.
-- You have a basic understanding of data types and visualization.
-
-* Alteryx: Data Preparation & Analysis
-** Import Data
-With Alteryx, importing data is easy with the use of the =Input Data= tool. Simply drag this tool onto the canvas from the =In/Out= tab in the Ribbon to create it as a node.
-
-You can choose the File Format manually or simply connect to your file/database and let Alteryx determine the format for you. For this example, we will be importing an Excel file and changing the =Start Data Import on Line= variable to =2=.
-
-#+caption: Alteryx Excel Import
-[[https://img.cleberg.net/blog/20240126-audit-dashboard/alteryx_import.png]]
-
-** Transform Data
-Next, let's replace null data and remove whitespace to clean up our data. We can do this with the =Data Cleansing= tool in the =Preparation= tab in the Ribbon.
-
-Ensure that the following options are enabled:
-
-- Replace Nulls
-  - Replace with Blanks (String Fields)
-  - Replace with 0 (Numeric Fields)
-- Remove Unwanted Characters
-  - Leading and Trailing Whitespace
-
-#+caption: Data Cleansing
-[[https://img.cleberg.net/blog/20240126-audit-dashboard/alteryx_cleansing.png]]
-
-For our next step, we will transform the date fields from strings to datetime format.
-Add a =Datetime= tool for each field you want to transform - in the example below, I am using the tool twice for the "Started On" and "Submitted On" fields.
-
-#+caption: Data Transformation
-[[https://img.cleberg.net/blog/20240126-audit-dashboard/alteryx_transformation.png]]
-
-Now that the dates are in the correct format, let's perform a calculation based on those fields. Start by adding a =Formula= tool, naming a new Output Column, and pasting the formula below into it (the two fields used in this formula must match the output of the =Datetime= tools above):
-
-#+begin_src txt
-DateTimeDiff([SubmittedOn_Out],[StartedOn_Out], "days")
-#+end_src
-
-#+caption: Data Analysis
-[[https://img.cleberg.net/blog/20240126-audit-dashboard/alteryx_analysis.png]]
-
-** Export Data
-Finalize the process by exporting the transformed data set to a new file, for use in the following visualization step.
-
-#+caption: Data Export
-[[https://img.cleberg.net/blog/20240126-audit-dashboard/alteryx_export.png]]
-
-* Power BI: Data Visualization
-** Import Data
-To start, open the Power BI Desktop application. Upon first use, Power BI will ask if you want to open an existing dashboard or import new data.
-
-As we are creating our first dashboard, let's import our data. In my example below, I'm importing data from the "Tracker" sheet of the Excel file I'm using for this project.
-
-During this process, I also imported the export from the Alteryx workflow above. Therefore, we have two different files available for use in our dashboard.
-
-#+caption: Excel Tracker
-[[https://img.cleberg.net/blog/20240126-audit-dashboard/excel_tracker.png]]
-
-#+caption: Power BI Excel Import
-[[https://img.cleberg.net/blog/20240126-audit-dashboard/powerbi_import.png]]
-
-** Add Visuals
-To create the dashboard below, you will need to follow the instructions underneath the screenshots and format as needed:
-
-[[https://img.cleberg.net/blog/20240126-audit-dashboard/dashboard_01.png]]
-[[https://img.cleberg.net/blog/20240126-audit-dashboard/dashboard_02.png]]
-
-Instructions to create the visuals above:
-
-- =Text Box=: Explain the name and purpose of the dashboard. You can also add images and logos at the top of the dashboard.
-- =Donut Chart=: Overall status of the project.
-  - =Legend=: Status
-  - =Values=: Count of Status
-- =Stacked Column Chart=: Task count by assignee.
-  - =X-axis=: Preparer
-  - =Y-axis=: Count of Control ID
-  - =Legend=: Status
-- =Treemap=: Top N client submitters by average days to submit.
-  - =Details=: Preparer
-  - =Values=: Sum of Avg_DaysToSubmit
-- =Line Chart=: Projected vs. actual hours over time.
-- =Clustered Bar Chart=: Projected vs. actual hours per person.
-- =Slicer & Table=: Upcoming due dates.
-  - =Slicer=:
-    - =Values=: Date Due
-  - =Table=:
-    - =Columns=: Count of Control ID, Date Due, Preparer, Status
-
-** Format the Dashboard
-You can choose a theme in the View tab of the Ribbon. You can even browse for custom JSON files that define themes, such as ones found online or custom ones created by your organization.
-
-For each visual, you can click the =Format= button in the =Visualizations= side pane and explore the options. You can customize options such as:
-
-- Visual
-  - Legend
-  - Colors
-  - Data labels
-  - Category labels
-- General
-  - Properties
-  - Title
-  - Effects
-  - Header icons
-  - Tooltips
-  - Alt text
-
-You can always look online for inspiration when trying to decide how best to organize and style your dashboard.
-
-* Sharing the Results
-Generally, you have a few different options for sharing your dashboards with others:
-
-1. Export the dashboard as a PDF in the file menu of Power BI. This will export all tabs and visuals as they are set when the export button is pressed. You will lose all interactivity with this option.
-2. Send the full Power BI file to those with whom you wish to share the dashboard. This will retain all settings and interactivity. However, you will also need to send the source files if they need to refresh the dashboard, and you will need to re-send the files if you make updates.
-3. Store the dashboard in a synced location, such as a shared drive or Microsoft Teams. Depending on how a user configures their local Windows paths, the data source paths may not be compatible for all users with such a setup.
diff --git a/blog/audit-review-template/index.org b/blog/audit-review-template/index.org
deleted file mode 100644
index 135a845..0000000
--- a/blog/audit-review-template/index.org
+++ /dev/null
@@ -1,76 +0,0 @@
-#+title: Audit Testing Review Template
-#+date: 2023-06-20
-#+description: A handy reference template for audit review.
-#+filetags: :audit:
-
-* Overview
-This post is a /very/ brief overview of the basic process to review audit test results, focusing on work done as part of a financial statement audit (FSA) or service organization controls (SOC) report.
-
-While there are numerous different things to review and look for - all varying wildly depending on the report, client, and tester - this list serves as a solid foundation for a reviewer.
-
-I have used this throughout my career as a starting point for my reviews, and it has worked wonders for creating a consistent and objective review template. The goal is to keep this base high-level enough to be used on a wide variety of engagements, while still ensuring that all key areas are covered.
-
-* Review Template
-1. [ ] Check all documents for spelling and grammar.
-2. [ ] Ensure all acronyms are fully explained upon first use.
-3. [ ] For all people referenced, use their full names and job titles upon first use.
-4. [ ] All supporting documents must cross-reference to the lead sheet and vice-versa.
-5. [ ] Verify that the control has been adequately tested:
-   - [ ] *Test of Design*: Did the tester obtain information regarding how the control should perform normally and abnormally (e.g., emergency scenarios)?
-   - [ ] *Test of Operating Effectiveness*: Did the tester inquire, observe, inspect, or re-perform sufficient evidence to support their conclusion over the control? Inquiry alone is not adequate!
-6. [ ] For any information used in the control, whether by the control operator or by the tester, did the tester appropriately document the source (system or person), extraction method, parameters, and completeness and accuracy (C&A)?
-   - [ ] For any reports, queries, etc. used in the extraction, did the tester include a copy and notate C&A considerations?
-7. [ ] Did the tester document the specific criteria that the control is being tested against?
-8. [ ] Did the tester notate in the supporting documents where each criterion was satisfied?
-9. [ ] If testing specific policies or procedures, are the documents adequate?
-   - [ ] e.g., a test to validate that a review of policy XYZ occurs periodically should also evaluate the sufficiency of the policy itself, if meant to cover the risk that such a policy does not exist and is not reviewed.
-10. [ ] Does the test cover the appropriate period under review?
-    - [ ] If the test is meant to cover only a portion of the audit period, do other controls exist to mitigate the risks that exist for the remainder of the period?
-11. [ ] For any computer-aided audit tools (CAATs) or other automation techniques used in the test, is the use of such tools explained and appropriately documented?
-12. [ ] If prior-period documentation exists, are there any missing pieces of evidence that would further enhance the quality of the test?
-13. [ ] Was any information discovered during the walkthrough or inquiry phase that was not incorporated into the test?
-14. [ ] Are there new rules or expectations from your company's internal guidance or your regulatory bodies that would affect the audit approach for this control?
-15. [ ] Was an exception, finding, or deficiency identified as a result of this test?
-    - [ ] Was the control deficient in design, operation, or both?
-    - [ ] What was the root cause of the finding?
-    - [ ] Does the finding indicate other findings or potential fraud?
-    - [ ] What are the severity and scope of the finding?
-    - [ ] Do other controls exist as a form of compensation against the finding's severity, and do they mitigate the risk within the control objective?
-    - [ ] Does the finding exist at the end of the period, or was it resolved within the audit period?
diff --git a/blog/audit-sampling/index.org b/blog/audit-sampling/index.org
deleted file mode 100644
index 9882fb2..0000000
--- a/blog/audit-sampling/index.org
+++ /dev/null
@@ -1,264 +0,0 @@
-#+title: Audit Sampling with Python
-#+date: 2021-08-25
-#+description: Learn how to sample populations with Python.
-#+filetags: :audit:
-
-* Introduction
-For anyone who is familiar with internal auditing, external auditing, or consulting, you will understand how tedious audit testing can become when you are required to test large swaths of data. When we cannot establish an automated means of testing an entire population, we generate samples to represent the population of data. This helps ensure we have a small enough data pool to test and that our results still represent the population.
-
-However, sampling data within the world of audit still seems to confuse quite a lot of people. While some audit-focused tools have introduced sampling functionality (e.g. Wdesk), many audit departments and firms cannot use software like this due to certain constraints, such as the team's budget or knowledge. Here is where this article comes in: we're going to use [[https://www.python.org][Python]], a free and open-source programming language, to generate random samples from a dataset that will satisfy numerous audit situations.
-
-* Audit Requirements for Sampling
-Before we get into the details of how to sample with Python, I want to discuss the different requirements that auditors may have of samples used within their projects.
-
-** Randomness
-First, let's discuss randomness. When testing out new technology to assist with audit sampling, you need to understand exactly how your samples are being generated. For example, if the underlying function is just picking every 57th element from a list, that's not truly random; it's a systematic form of sampling. Luckily, since Python is open-source, we have access to its codebase. Throughout this blog post, I will be using the [[https://pandas.pydata.org][pandas]] module to generate the random samples.
-More specifically, I will be using the [[https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.sample.html][pandas.DataFrame.sample]] function provided by pandas.
-
-Now that you know what you're using, you can always check out the code behind =pandas.DataFrame.sample=. This function does a lot of work, but we really only care about the following snippets of code:
-
-#+begin_src python
-# Process random_state argument
-rs = com.random_state(random_state)
-
-...
-
-locs = rs.choice(axis_length, size=n, replace=replace, p=weights)
-result = self.take(locs, axis=axis)
-if ignore_index:
-    result.index = ibase.default_index(len(result))
-
-return result
-#+end_src
-
-The block of code above shows you that if you assign a =random_state= argument when you run the function, that will be used as a seed number in the random generation and will allow you to reproduce a sample, given that nothing else changes. This is critical for the reproducibility of audit work. After all, how can you say your audit process is adequately documented if the next person can't run the code and get the same sample? The final piece here on randomness is to look at the [[https://docs.python.org/3/library/random.html#random.choice][choice]] function used above. This is the crux of the generation and can also be examined for more detailed analysis of its reliability. As far as auditing goes, we will trust that these functions are mathematically random.
-
-** Sample Sizes
-As mentioned in the intro, sampling is only an effective method of auditing when it truly represents the entire population. While some audit departments or firms may consider certain judgmental sample sizes to be adequate, you may need to rely on statistically significant confidence levels of sample testing at certain points. I will demonstrate both here. For statistically significant confidence levels, most people will assume a 90% - 99% confidence level. In order to calculate the correct sample size, it is best to use statistical tools due to the tedious math work required. For example, for a population of 1000, and a 90% confidence level that no more than 5% of the items are nonconforming, you would sample 45 items.
-
-However, in my personal experience, many audit departments and firms do not use statistical sampling. Most people use a predetermined, often proprietary, table that will instruct auditors which sample sizes to choose. This allows for uniform testing and reduces overall workload. See the table below for a common implementation of sample sizes:
-
-| Control Frequency | Sample Size - High Risk | Sample Size - Low Risk |
-|-------------------+-------------------------+------------------------|
-| More Than Daily   | 40                      | 25                     |
-| Daily             | 40                      | 25                     |
-| Weekly            | 12                      | 5                      |
-| Monthly           | 5                       | 3                      |
-| Quarterly         | 2                       | 2                      |
-| Semi-Annually     | 1                       | 1                      |
-| Annually          | 1                       | 1                      |
-| Ad-hoc            | 1                       | 1                      |
-
-* Sampling with Python & Pandas
-In this section, I am going to cover a few basic audit situations that require sampling. While some situations may require more effort, the syntax, organization, and logic used remain largely the same. If you've never used Python before, note that lines starting with a '=#=' symbol are called comments, and they will be skipped by Python. I highly recommend taking a quick tutorial online to understand the basics of Python if any of the code below is confusing to you.
-
-** Simple Random Sample
-First, let's look at a simple random sample.
-The code block below will import the =pandas= module, load a data file, sample the data, and export the sample to a file.
-
-#+begin_src python
-# Import the Pandas module
-import pandas
-
-# Specify where to find the input file & where to save the final sample
-file_input = r'Population Data.xlsx'
-file_output = r'Sample.xlsx'
-
-# Load the data with pandas
-# Remember to use the sheet_name parameter if your Excel file has multiple sheets
-df = pandas.read_excel(file_input)
-
-# Sample the data for 25 selections
-# Remember to always use the random_state parameter so the sample can be re-performed
-sample = df.sample(n=25, random_state=0)
-
-# Save the sample to Excel
-sample.to_excel(file_output)
-#+end_src
-
-** Simple Random Sample: Using Multiple Input Files
-Now that we've created a simple sample, let's create a sample from multiple files.
-
-#+begin_src python
-# Import the Pandas module
-import pandas
-
-# Specify where to find the input files & where to save the final sample
-file_input_01 = r'Population Data Q1.xlsx'
-file_input_02 = r'Population Data Q2.xlsx'
-file_input_03 = r'Population Data Q3.xlsx'
-file_output = r'Sample.xlsx'
-
-# Load the data with pandas
-# Remember to use the sheet_name parameter if your Excel file has multiple sheets
-df_01 = pandas.read_excel(file_input_01)
-df_02 = pandas.read_excel(file_input_02)
-df_03 = pandas.read_excel(file_input_03)
-
-# Sample the data for 5 selections from each quarter
-# Remember to always use the random_state parameter so the sample can be re-performed
-sample_01 = df_01.sample(n=5, random_state=0)
-sample_02 = df_02.sample(n=5, random_state=0)
-sample_03 = df_03.sample(n=5, random_state=0)
-
-# If required, combine the samples back together
-sample = pandas.concat([sample_01, sample_02, sample_03], ignore_index=True)
-
-# Save the sample to Excel
-sample.to_excel(file_output)
-#+end_src
-
-** Stratified Random Sample
-Well, what if you need to sample distinct parts of a single file? For example, let's write some code to separate our data by "Region" and sample those regions independently.
-
-#+begin_src python
-# Import the Pandas module
-import pandas
-
-# Specify where to find the input file & where to save the final sample
-file_input = r'Sales Data.xlsx'
-file_output = r'Sample.xlsx'
-
-# Load the data with pandas
-# Remember to use the sheet_name parameter if your Excel file has multiple sheets
-df = pandas.read_excel(file_input)
-
-# Stratify the data by "Region"
-df_east = df[df['Region'] == 'East']
-df_west = df[df['Region'] == 'West']
-
-# Sample the data for 5 selections from each region
-# Remember to always use the random_state parameter so the sample can be re-performed
-sample_east = df_east.sample(n=5, random_state=0)
-sample_west = df_west.sample(n=5, random_state=0)
-
-# If required, combine the samples back together
-sample = pandas.concat([sample_east, sample_west], ignore_index=True)
-
-# Save the sample to Excel
-sample.to_excel(file_output)
-#+end_src
-
-** Stratified Systematic Sample
-This next example is quite useful if you need audit coverage over a certain time period. This code will generate samples for each month in the data and combine them all together at the end. Obviously, this code can be modified to stratify by something other than months, if needed.
-
-#+begin_src python
-# Import the Pandas module
-import pandas
-
-# Specify where to find the input file & where to save the final sample
-file_input = r'Sales Data.xlsx'
-file_output = r'Sample.xlsx'
-
-# Load the data with pandas
-# Remember to use the sheet_name parameter if your Excel file has multiple sheets
-df = pandas.read_excel(file_input)
-
-# Convert the date column to datetime so the function below will work
-df['Date of Sale'] = pandas.to_datetime(df['Date of Sale'])
-
-# Define a function to create a sample for each month
-def monthly_stratified_sample(df: pandas.DataFrame, date_column: str, num_selections: int) -> pandas.DataFrame:
-    final_sample = pandas.DataFrame()
-    for month in range(1, 13):
-        # Gather the rows that belong to the current month
-        monthly_df = df[df[date_column].dt.month == month]
-        if len(monthly_df) == 0:
-            continue
-        # If a month has fewer rows than requested, sample everything it has
-        n = min(num_selections, len(monthly_df))
-        # Remember to always use the random_state parameter so the sample can be re-performed
-        sample = monthly_df.sample(n=n, random_state=0)
-        final_sample = pandas.concat([final_sample, sample], ignore_index=True)
-    return final_sample
-
-# Sample for 3 selections per month
-sample_size = 3
-sample = monthly_stratified_sample(df, 'Date of Sale', sample_size)
-sample.to_excel(file_output)
-#+end_src
-
-* Documenting the Results
-Once you've generated a proper sample, there are a few things left to do in order to ensure your process is reproducible.
-
-1. Document the sample. Make sure the resulting file is readable and includes the documentation listed in the next bullet.
-2. Include documentation around the data source, extraction techniques, and any modifications made to the data, and be sure to include a copy of the script itself.
-3. Whenever possible, perform a completeness and accuracy test to ensure your sample is coming from a complete and accurate population. To ensure completeness, compare the record count from the data source to the record count loaded into Python. To ensure accuracy, test a small sample against the source data (e.g., test 5 sales against the database to see if the details are accurate).
diff --git a/blog/audit-sql-scripts/index.org b/blog/audit-sql-scripts/index.org
deleted file mode 100644
index b47771c..0000000
--- a/blog/audit-sql-scripts/index.org
+++ /dev/null
@@ -1,262 +0,0 @@
-#+title: Useful SQL Scripts for Auditing Logical Access
-#+date: 2023-09-19
-#+description: A reference of SQL scripts for auditing logical access for common databases.
-#+filetags: :audit:
-
-* Overview
-When you have to scope a database into your engagement, you may be curious how best to extract the information from the database. While there are numerous methods to extract this type of information, I'm going to show an example of how to gather all users and privileges from three main database types: Oracle, Microsoft SQL, and MySQL.
-
-* Oracle
-You can use the following SQL script to see all users and their privileges in an Oracle database:
-
-#+begin_src sql
-SELECT
-    grantee AS "User",
-    privilege AS "Privilege"
-FROM
-    dba_sys_privs
-WHERE
-    grantee IN (SELECT DISTINCT grantee FROM dba_sys_privs)
-UNION ALL
-SELECT
-    grantee AS "User",
-    privilege AS "Privilege"
-FROM
-    dba_tab_privs
-WHERE
-    grantee IN (SELECT DISTINCT grantee FROM dba_tab_privs);
-#+end_src
-
-This script queries the =dba_sys_privs= and =dba_tab_privs= views to retrieve system and table-level privileges respectively. It then combines the results using =UNION ALL= to show all users and their associated privileges. Please note that this method does not extract information from the =dba_role_privs= table - use the method below for that data.
-
-Please note that you might need appropriate privileges (e.g., DBA privileges) to access these views, and you should exercise caution when querying system tables in a production Oracle database.
-
-** Alternative Oracle Query
-You can also extract each table's information separately and perform processing outside the database to explore and determine the information necessary for the audit:
-
-#+begin_src sql
-SELECT * FROM sys.dba_role_privs;
-SELECT * FROM sys.dba_sys_privs;
-SELECT * FROM sys.dba_tab_privs;
-SELECT * FROM sys.dba_users;
-#+end_src
-
-* Microsoft SQL
-You can use the following SQL script to see all users and their privileges in a Microsoft SQL Server database ([[https://stackoverflow.com/a/30040784][source]]):
-
-#+begin_src sql
-/*
-Security Audit Report
-1) List all access provisioned to a sql user or windows user/group directly
-2) List all access provisioned to a sql user or windows user/group through a database or application role
-3) List all access provisioned to the public role
-
-Columns Returned:
-UserName        : SQL or Windows/Active Directory user account. This could also be an Active Directory group.
-UserType        : Value will be either 'SQL User' or 'Windows User'. This reflects the type of user defined for the SQL Server user account.
-DatabaseUserName: Name of the associated user as defined in the database user account. The database user may not be the same as the server user.
-Role            : The role name. This will be null if the associated permissions to the object are defined directly on the user account, otherwise this will be the name of the role that the user is a member of.
-PermissionType  : Type of permissions the user/role has on an object. Examples could include CONNECT, EXECUTE, SELECT, DELETE, INSERT, ALTER, CONTROL, TAKE OWNERSHIP, VIEW DEFINITION, etc. This value may not be populated for all roles. Some built in roles have implicit permission definitions.
-PermissionState : Reflects the state of the permission type, examples could include GRANT, DENY, etc. This value may not be populated for all roles. Some built in roles have implicit permission definitions.
-ObjectType      : Type of object the user/role is assigned permissions on. Examples could include USER_TABLE, SQL_SCALAR_FUNCTION, SQL_INLINE_TABLE_VALUED_FUNCTION, SQL_STORED_PROCEDURE, VIEW, etc. This value may not be populated for all roles. Some built in roles have implicit permission definitions.
-ObjectName      : Name of the object that the user/role is assigned permissions on. This value may not be populated for all roles. Some built in roles have implicit permission definitions.
-ColumnName      : Name of the column of the object that the user/role is assigned permissions on. This value is only populated if the object is a table, view or a table value function.
This value - is only populated if the object is a table, view or a table value function. -,*/ - ---List all access provisioned to a sql user or windows user/group directly -SELECT - [UserName] = CASE princ.[type] - WHEN 'S' THEN princ.[name] - WHEN 'U' THEN ulogin.[name] COLLATE Latin1_General_CI_AI - END, - [UserType] = CASE princ.[type] - WHEN 'S' THEN 'SQL User' - WHEN 'U' THEN 'Windows User' - END, - [DatabaseUserName] = princ.[name], - [Role] = null, - [PermissionType] = perm.[permission_name], - [PermissionState] = perm.[state_desc], - [ObjectType] = obj.type_desc,--perm.[class_desc], - [ObjectName] = OBJECT_NAME(perm.major_id), - [ColumnName] = col.[name] -FROM - --database user - sys.database_principals princ -LEFT JOIN - --Login accounts - sys.login_token ulogin on princ.[sid] = ulogin.[sid] -LEFT JOIN - --Permissions - sys.database_permissions perm ON perm.[grantee_principal_id] = princ.[principal_id] -LEFT JOIN - --Table columns - sys.columns col ON col.[object_id] = perm.major_id - AND col.[column_id] = perm.[minor_id] -LEFT JOIN - sys.objects obj ON perm.[major_id] = obj.[object_id] -WHERE - princ.[type] in ('S','U') -UNION ---List all access provisioned to a sql user or windows user/group through a database or application role -SELECT - [UserName] = CASE memberprinc.[type] - WHEN 'S' THEN memberprinc.[name] - WHEN 'U' THEN ulogin.[name] COLLATE Latin1_General_CI_AI - END, - [UserType] = CASE memberprinc.[type] - WHEN 'S' THEN 'SQL User' - WHEN 'U' THEN 'Windows User' - END, - [DatabaseUserName] = memberprinc.[name], - [Role] = roleprinc.[name], - [PermissionType] = perm.[permission_name], - [PermissionState] = perm.[state_desc], - [ObjectType] = obj.type_desc,--perm.[class_desc], - [ObjectName] = OBJECT_NAME(perm.major_id), - [ColumnName] = col.[name] -FROM - --Role/member associations - sys.database_role_members members -JOIN - --Roles - sys.database_principals roleprinc ON roleprinc.[principal_id] = members.[role_principal_id] -JOIN - --Role members (database users) - sys.database_principals memberprinc ON memberprinc.[principal_id] = members.[member_principal_id] -LEFT JOIN - --Login accounts - sys.login_token ulogin on memberprinc.[sid] = ulogin.[sid] -LEFT JOIN - --Permissions - sys.database_permissions perm ON perm.[grantee_principal_id] = roleprinc.[principal_id] -LEFT JOIN - --Table columns - sys.columns col on col.[object_id] = perm.major_id - AND col.[column_id] = perm.[minor_id] -LEFT JOIN - sys.objects obj ON perm.[major_id] = obj.[object_id] -UNION ---List all access provisioned to the public role, which everyone gets by default -SELECT - [UserName] = '{All Users}', - [UserType] = '{All Users}', - [DatabaseUserName] = '{All Users}', - [Role] = roleprinc.[name], - [PermissionType] = perm.[permission_name], - [PermissionState] = perm.[state_desc], - [ObjectType] = obj.type_desc,--perm.[class_desc], - [ObjectName] = OBJECT_NAME(perm.major_id), - [ColumnName] = col.[name] -FROM - --Roles - sys.database_principals roleprinc -LEFT JOIN - --Role permissions - sys.database_permissions perm ON perm.[grantee_principal_id] = roleprinc.[principal_id] -LEFT JOIN - --Table columns - sys.columns col on col.[object_id] = perm.major_id - AND col.[column_id] = perm.[minor_id] -JOIN - --All objects - sys.objects obj ON obj.[object_id] = perm.[major_id] -WHERE - --Only roles - roleprinc.[type] = 'R' AND - --Only public role - roleprinc.[name] = 'public' AND - --Only objects of ours, not the MS objects - obj.is_ms_shipped = 0 -ORDER BY - princ.[Name], - OBJECT_NAME(perm.major_id), - 
-    col.[name],
-    perm.[permission_name],
-    perm.[state_desc],
-    obj.type_desc--perm.[class_desc]
-#+end_src
-
-* MySQL
-You can use the following SQL script to see all users and their privileges in a MySQL database. First, log in to the MySQL shell:
-
-#+begin_src sh
-mysql -u root -p
-#+end_src
-
-Find all users and hosts with access to the database:
-
-#+begin_src sql
-SELECT * FROM information_schema.user_privileges;
-#+end_src
-
-This script retrieves user information and their associated database-level privileges from the =information_schema.user_privileges= table in MySQL. It lists various privileges such as SELECT, INSERT, UPDATE, DELETE, CREATE, and more for each user and database combination.
-
-Please note that you may need appropriate privileges (e.g., =SELECT= privileges on =information_schema.user_privileges=) to access this information in a MySQL database. Additionally, some privileges like GRANT OPTION, EXECUTE, EVENT, and TRIGGER may not be relevant for all users and databases.
-
-** Alternative MySQL Query
-You can also grab individual sets of data from MySQL if you prefer to join them after extraction. I have marked the queries below with =SELECT ...= and excluded most =WHERE= clauses for brevity. You should determine the relevant privileges in scope and query for those privileges to reduce the length of time to query.
-
-#+begin_src sql
--- Global Permissions
-SELECT ... FROM mysql.user;
-
--- Database Permissions
-SELECT ... FROM mysql.db
-WHERE db = @db_name;
-
--- Table Permissions
-SELECT ... FROM mysql.tables_priv
-WHERE db = @db_name;
-
--- Column Permissions
-SELECT ... FROM mysql.columns_priv
-WHERE db = @db_name;
-
--- Password Configuration
-SHOW GLOBAL VARIABLES LIKE 'validate_password%';
-SHOW VARIABLES LIKE 'validate_password%';
-#+end_src
diff --git a/blog/backblaze-b2/index.org b/blog/backblaze-b2/index.org
deleted file mode 100644
index d51fd56..0000000
--- a/blog/backblaze-b2/index.org
+++ /dev/null
@@ -1,176 +0,0 @@
-#+title: Getting Started with Backblaze B2 Cloud Storage
-#+date: 2023-06-28
-#+description: An introduction to the free tier of Backblaze B2 Cloud Storage.
-#+filetags: :sysadmin:
-
-* Overview
-Backblaze [[https://www.backblaze.com/b2/cloud-storage.html][B2 Cloud Storage]] is an inexpensive and reliable on-demand cloud storage and backup solution.
-
-The service starts at $5/TB/month ($0.005/GB/month) for storage, with downloads billed at $0.01/GB.
-
-However, there are free tiers:
-
-- The first 10 GB of storage is free.
-- The first 1 GB of data downloaded each day is free.
-- Class A transactions are free.
-- The first 2500 Class B transactions each day are free.
-- The first 2500 Class C transactions each day are free.
-
-You can see which API calls fall into categories A, B, or C here: [[https://www.backblaze.com/b2/b2-transactions-price.html][Pricing Organized by API Calls]].
-
-For someone like me, who wants an offsite backup of their server's =/home/= directory and various other server configs that fall under 10 GB total, Backblaze is a great solution from a financial perspective.
-
-* Create an Account
-To start with Backblaze, you'll need to [[https://www.backblaze.com/b2/sign-up.html][create a free account]] - no payment method is required to sign up.
-
-Once you have an account, you can test out the service with their web GUI, their mobile app, or their CLI tool. I'm going to use the CLI tool below to test a file upload and then sync an entire directory to my Backblaze bucket.
-
-* Create a Bucket
-Before you can start uploading, you need to create a bucket. If you're familiar with other object storage services, this will feel familiar. If not, it's pretty simple to create one.
-
-As their webpage says:
-
-#+begin_quote
-A bucket is a container that holds files that are uploaded into B2 Cloud Storage. The bucket name must be globally unique and must have a minimum of 6 characters. A limit of 100 buckets may be created per account. An unlimited number of files may be uploaded into a bucket.
-#+end_quote
-
-Once you click the =Create a Bucket= button on their webpage or mobile app, you need to provide the following:
-
-- Bucket Unique Name
-- Files in Bucket are: =Private= or =Public=
-- Default Encryption: =Disable= or =Enable=
-- Object Lock: =Disable= or =Enable=
-
-For my bucket, I created a private bucket with encryption enabled and object lock disabled.
-
-Once your bucket is created, you can test the upload/download feature on their web GUI or mobile app! At this point, you have a fully functional bucket and account.
-
-* Linux CLI Tool
-** Installation
-To install the =b2= CLI tool, you'll need to download it from the [[https://www.backblaze.com/docs/cloud-storage-command-line-tools][CLI Tools]] page. I recommend copying the URL from the link that says =Linux= and using wget to download it, as shown below.
-
-Once downloaded, make the file executable and move it to a location on your =$PATH=, so that you can execute that command from anywhere on the machine.
-
-#+begin_src sh
-wget <url-of-the-linux-cli-tool> # paste the URL copied from the CLI Tools page
-chmod +x b2_linux
-mv b2_linux /usr/bin/b2
-#+end_src
-
-** Log In
-The first step after installation is to log in. To do this, execute the following command and provide your =<application_key_id>= and =<application_key>=.
-
-If you don't want to provide these values in the command itself, you can simply execute the base command and it will request them in an interactive prompt.
-
-#+begin_src sh
-# if you want to provide the keys directly:
-b2 authorize-account [<application_key_id>] [<application_key>]
-
-# or, if you don't want your keys in your shell history:
-b2 authorize-account
-#+end_src
-
-** Upload a Test File
-In order to test the functionality of the CLI tool, I'll start by uploading a single test file to the bucket I created above. We can do this with the =upload_file= function.
-
-The command is issued as follows:
-
-#+begin_src sh
-b2 upload_file <bucket_name> <local_file_path> <remote_file_name>
-#+end_src
-
-In my situation, I executed the following command with my username:
-
-#+begin_src sh
-b2 upload_file my_unique_bucket /home/<username>/test.md test.md
-#+end_src
-
-To confirm that the file was uploaded successfully, list the files in your bucket:
-
-#+begin_src sh
-b2 ls <bucket_name>
-#+end_src
-
-#+begin_src txt
-test.md
-#+end_src
-
-** Sync a Directory
-If you have numerous files, you can use the =sync= function to perform functionality similar to =rsync=, where you can check what's in your bucket and sync anything that is new or modified.
-
-The command is issued as follows:
-
-#+begin_src sh
-b2 sync <source_path> <destination_path>
-#+end_src
-
-In my case, I can sync my user's entire home directory to my bucket without specifying any of the files directly:
-
-#+begin_src sh
-b2 sync /home/<username> "b2://<bucket_name>/home/<username>"
-#+end_src
-
-* Caveats
-** Timing of Updates to the Web GUI
-When performing actions over a bucket, there is a slight delay in the web GUI when inspecting a bucket or its files. Note that simple actions such as uploading or deleting files may have a delay of a few minutes up to 24 hours. In my experience (<10 GB and ~20,000 files), any actions took only a few minutes to update across clients.
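-
-If you don't want to wait on the web GUI, the CLI is the quicker place to check: re-running the =ls= command from earlier (the bucket name is a placeholder) will usually reflect uploads and deletions right away:
-
-#+begin_src sh
-# List the bucket contents to confirm a recent upload or deletion
-b2 ls <bucket_name>
-#+end_src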
-
-** Symlinks
-Note that symlinks are resolved by b2, so if you have a link from
-=/home/<username>/nas-storage= that symlinks out to a =/mnt/nas-storage=
-folder that has 10TB of data, =b2= will resolve that link and start
-uploading all 10TB of data linked within the folder.
-
-If you're not sure if you have any symlinks, a symlink will look like
-this (note the =->= symbol):
-
-#+begin_src sh
-> ls -lha
-lrwxrwxrwx 1 root root 20 Jun 28 13:32 nas -> /mnt/nas-storage/
-#+end_src
-
-You can recursively find symlinks in a path with the following command:
-
-#+begin_src sh
-ls -lR /path/to/search | grep '^l'
-#+end_src
diff --git a/blog/bash-it/index.org b/blog/bash-it/index.org
deleted file mode 100644
index 7c1316f..0000000
--- a/blog/bash-it/index.org
+++ /dev/null
@@ -1,233 +0,0 @@
-#+title: Upgrade Bash with Bash-It & Ble.sh
-#+date: 2022-07-31
-#+description: Learn how to increase the power of bash with Bash-It and Ble.sh.
-#+filetags: :sysadmin:
-
-* Bash
-For those who are not familiar,
-[[https://en.wikipedia.org/wiki/Bash_(Unix_shell)][Bash]] is a Unix
-shell that is used as the default login shell for most Linux
-distributions. This shell and command processor should be familiar if
-you've used Linux (or older versions of macOS) before.
-
-However, bash is not the only option. There are numerous other shells
-that exist. Here are some popular examples:
-
-- [[https://en.wikipedia.org/wiki/Z_shell][zsh]]
-- [[https://en.wikipedia.org/wiki/Fish_(Unix_shell)][fish]]
-- [[https://github.com/ibara/oksh][oksh]]
-- [[https://wiki.gentoo.org/wiki/Mksh][mksh]]
-- [[https://en.wikipedia.org/wiki/Debian_Almquist_shell][dash]]
-
-While each shell has its differences, bash is POSIX compliant and the
-default for many Linux users. Because of this, I am going to explore a
-program called =bash-it= below that helps bash users increase the
-utility of their shell without installing a completely new shell.
-
-** Installation
-First, if bash is not already installed on your system, you can
-[[https://www.gnu.org/software/bash/][download bash from GNU]] or use
-your package manager to install it.
-
-For example, this is how you can install bash on Fedora Linux:
-
-#+begin_src sh
-sudo dnf install bash
-#+end_src
-
-If you are not using bash as your default shell, use the =chsh= command
-to change your shell:
-
-#+begin_src sh
-chsh
-#+end_src
-
-You should see a prompt like the one below. If the brackets (=[]=)
-contain =bash= already, you're done, and you can simply continue by
-hitting the Enter key.
-
-If the brackets contain another shell path (e.g. =/usr/bin/zsh=), enter
-the path to the bash program on your system (it's most likely located at
-=/usr/bin/bash=).
-
-#+begin_src sh
-Changing shell for <username>.
-New shell [/usr/bin/bash]:
-#+end_src
-
-You must log out or restart the machine in order for the login shell to
-be refreshed. You can do it now or wait until you're finished
-customizing the shell.
-
-#+begin_src sh
-sudo reboot now
-#+end_src
-
-* Bash-it
-As noted on the [[https://github.com/Bash-it/bash-it][Bash-it]]
-repository:
-
-#+begin_quote
-Bash-it is a collection of community Bash commands and scripts for Bash
-3.2+. (And a shameless ripoff of oh-my-zsh 😃)
-
-#+end_quote
-
-Bash-it makes it easy to install plugins, set up aliases for common
-commands, and easily change the visual theme of your shell.
-
-** Installation
-To install the framework, simply copy the repository files and use the
-=install.sh= script provided. If you want, you can (and should!)
inspect -the contents of the installation script before you run it. - -#+begin_src sh -git clone --depth=1 https://github.com/Bash-it/bash-it.git ~/.bash_it -~/.bash_it/install.sh -#+end_src - -If you didn't restart your session after making bash the default, and -are currently working within another shell, be sure to enter a bash -session before using =bash-it=: - -#+begin_src sh -bash -#+end_src - -** Aliases -Bash-it contains a number of aliases for common commands to help improve -efficiency in the terminal. To list all available options, use the -following command: - -#+begin_src sh -bash-it show aliases -#+end_src - -This will provide you a list that looks like the following text block. -Within this screen, you will be able to see all available options and -which ones are currently enabled. - -#+begin_src txt -Alias Enabled? Description -ag [ ] the silver searcher (ag) aliases -ansible [ ] ansible abbreviations -apt [ ] Apt and dpkg aliases for Ubuntu and Debian distros. -atom [ ] Atom.io editor abbreviations -bash-it [ ] Aliases for the bash-it command (these aliases are automatically included with the "general" aliases) -bolt [ ] puppet bolt aliases -bundler [ ] ruby bundler -clipboard [ ] xclip shortcuts -composer [ ] common composer abbreviations -curl [x] Curl aliases for convenience. -... -#+end_src - -To enable an alias, do: - -#+begin_src sh -bash-it enable alias [alias name]... -or- $ bash-it enable alias all -#+end_src - -To disable an alias, do: - -#+begin_src sh -bash-it disable alias [alias name]... -or- $ bash-it disable alias all -#+end_src - -** Plugins -Similar to aliases, plugins are available with bash-it. You can find a -complete list of plugins in the same way as aliases. Simply execute the -following: - -#+begin_src sh -bash-it show plugins -#+end_src - -You will see the following output showing enabled and disabled plugins: - -#+begin_src txt -Plugin Enabled? Description -alias-completion [ ] -autojump [ ] Autojump configuration, see https://github.com/wting/autojump for more details -aws [ ] AWS helper functions -base [x] miscellaneous tools -basher [ ] initializes basher, the shell package manager -battery [x] display info about your battery charge level -blesh [ ] load ble.sh, the Bash line editor! -boot2docker [ ] Helpers to get Docker setup correctly for boot2docker -browser [ ] render commandline output in your browser -#+end_src - -To enable a plugin, do: - -#+begin_src sh -bash-it enable plugin [plugin name]... -or- $ bash-it enable plugin all -#+end_src - -To disable a plugin, do: - -#+begin_src sh -bash-it disable plugin [plugin name]... -or- $ bash-it disable plugin all -#+end_src - -** Themes -There are quite a few pre-defined -[[https://bash-it.readthedocs.io/en/latest/themes-list/#list-of-themes][themes]] -available with bash-it. - -To list all themes: - -#+begin_src sh -ls ~/.bash_it/themes/ -#+end_src - -To use a new theme, you'll need to edit =.bashrc= and alter the -=BASH_IT_THEME= variable to your desired theme. For example, I am using -the =zork= theme. - -#+begin_src sh -nano ~/.bashrc -#+end_src - -#+begin_src sh -export BASH_IT_THEME='zork' -#+end_src - -Once you save your changes, you just need to exit your terminal and -create a new one in order to see your changes to the =.bashrc= file. You -can also =source= the file to see changes, but I recommend starting a -completely new shell instead. - -*** ble.sh -One big feature I was missing in Bash that both =zsh= and =fish= have is -an autosuggestion feature. 
To explain: as you type, an autosuggestion
-feature in the shell will offer suggestions in a lighter font color
-beyond the characters already typed. Once you see the command you want,
-you can press the right arrow and have the shell auto-complete that line
-for you.
-
-Luckily, the [[https://github.com/akinomyoga/ble.sh][Bash Line Editor]]
-(ble.sh) exists! This program provides a wonderful autosuggestion
-feature, among other features that I haven't tested yet.
-
-In order to install ble.sh, execute the following:
-
-#+begin_src sh
-git clone --recursive https://github.com/akinomyoga/ble.sh.git
-make -C ble.sh install PREFIX=~/.local
-echo 'source ~/.local/share/blesh/ble.sh' >> ~/.bashrc
-#+end_src
-
-Again, exit the terminal and open a new one in order to see the
-newly-configured shell.
-
-* Restart the Session
-Finally, as mentioned above, you'll need to restart the session to
-ensure that your user is using bash by default.
-
-You will also need to exit and re-open a shell (e.g., terminal or
-terminal tab) any time you make changes to the =.bashrc= file.
-
-#+begin_src sh
-sudo reboot now
-#+end_src
diff --git a/blog/burnout/index.org b/blog/burnout/index.org
deleted file mode 100644
index 75757ea..0000000
--- a/blog/burnout/index.org
+++ /dev/null
@@ -1,41 +0,0 @@
-#+title: RE: Burnout
-#+date: 2023-05-22
-#+description: A response to Drew DeVault's burnout post.
-#+filetags: :personal:
-
-* RE: Burnout
-I recently read
-[[https://drewdevault.com/2023/05/01/2023-05-01-Burnout.html][Drew
-DeVault's post on burnout]] around the same time I was pulling out of a
-burnout rut myself earlier this month. Finally seeing the light at the
-end of my burnout tunnel made me want to write my first post back on
-this topic.
-
-* Busy Seasons on Busy Seasons
-My career deals with busy seasons, generally driven by client demand.
-This last year, I dealt with a harsh busy season from Aug to Oct 2022 to
-issue a few SOC reports for the period ending 2022-09-30. Immediately
-following that, I had to pivot and found another busy season from Oct to
-Jan for financial statement audits ending on 2022-12-31. Then again,
-supporting other clients from Jan to Mar 2023, followed by my current
-client workload aiming for SOC reports due on 2023-06-30.
-
-The result? A busy season that has lasted from August 2022 through
-today. I will likely be rushing throughout the next month or two before
-I have a brief break and need to focus on the 2023-09-30 SOC reports
-again. While auditing and consulting always involve a busy season, this
-is the first time I've had one last 9+ months without a break.
-
-While it's been tough, I have a handful of breaks pre-planned throughout
-this next cycle and should be able to moderate the level of commitment
-required for each client.
-
-* Refocusing
-Outside of work, I finally have time to work on hobbies such as this
-website, programming, athletics, games, etc.
-
-You may have noticed my absence if you're in the same channels, forums,
-and rooms that I am, but I should finally be active again.
-
-I'm hoping to break an item out of my backlog soon and start working on
-building a new project or hack around with a stale one.
diff --git a/blog/business-analysis/index.org b/blog/business-analysis/index.org
deleted file mode 100644
index 6d60471..0000000
--- a/blog/business-analysis/index.org
+++ /dev/null
@@ -1,380 +0,0 @@
-#+title: Algorithmically Analyzing Local Businesses
-#+date: 2020-07-26
-#+description: Exploring and visualizing data with Python.
-#+filetags: :data:
-
-* Background Information
-This project aims to help investors learn more about a random city in
-order to determine optimal locations for business investments. The data
-used in this project was obtained using Foursquare's developer API.
-
-Fields include:
-
-- Venue Name
-- Venue Category
-- Venue Latitude
-- Venue Longitude
-
-There are 232 records found using the center of Lincoln as the area of
-interest with a radius of 10,000 meters.
-
-* Import the Data
-The first step is the simplest: import the applicable libraries. We will
-be using the libraries below for this project.
-
-#+begin_src python
-# Import the Python libraries we will be using
-import pandas as pd
-import requests
-import folium
-import math
-import json
-from pandas.io.json import json_normalize
-from sklearn.cluster import KMeans
-#+end_src
-
-To begin our analysis, we need to import the data for this project. The
-data we are using in this project comes directly from the Foursquare
-API. The first step is to get the latitude and longitude of the city
-being studied (Lincoln, NE) and set up the folium map.
-
-#+begin_src python
-# Define the latitude and longitude, then map the results
-latitude = 40.806862
-longitude = -96.681679
-map_LNK = folium.Map(location=[latitude, longitude], zoom_start=12)
-
-map_LNK
-#+end_src
-
-#+caption: Blank Map
-[[https://img.cleberg.net/blog/20200726-ibm-data-science/01_blank_map-min.png]]
-
-Now that we have defined our city and created the map, we need to go get
-the business data. The Foursquare API will limit the results to 100 per
-API call, so we use our first API call below to determine the total
-results that Foursquare has found. Since the total results are 232, we
-perform the API fetching process three times (100 + 100 + 32 = 232).
-
-#+begin_src python
-# Foursquare API credentials
-CLIENT_ID = 'your-client-id'
-CLIENT_SECRET = 'your-client-secret'
-VERSION = '20180604'
-
-# Set up the URL to fetch the first 100 results
-LIMIT = 100
-radius = 10000
-url = 'https://api.foursquare.com/v2/venues/explore?&client_id={}&client_secret={}&v={}&ll={},{}&radius={}&limit={}'.format(
-    CLIENT_ID,
-    CLIENT_SECRET,
-    VERSION,
-    latitude,
-    longitude,
-    radius,
-    LIMIT)
-
-# Fetch the first 100 results
-results = requests.get(url).json()
-
-# Determine the total number of results needed to fetch
-totalResults = results['response']['totalResults']
-totalResults
-
-# Set up the URL to fetch the second 100 results (101-200)
-LIMIT = 100
-offset = 100
-radius = 10000
-url2 = 'https://api.foursquare.com/v2/venues/explore?&client_id={}&client_secret={}&v={}&ll={},{}&radius={}&limit={}&offset={}'.format(
-    CLIENT_ID,
-    CLIENT_SECRET,
-    VERSION,
-    latitude,
-    longitude,
-    radius,
-    LIMIT,
-    offset)
-
-# Fetch the second 100 results (101-200)
-results2 = requests.get(url2).json()
-
-# Set up the URL to fetch the final results (201 - 232)
-LIMIT = 100
-offset = 200
-radius = 10000
-url3 = 'https://api.foursquare.com/v2/venues/explore?&client_id={}&client_secret={}&v={}&ll={},{}&radius={}&limit={}&offset={}'.format(
-    CLIENT_ID,
-    CLIENT_SECRET,
-    VERSION,
-    latitude,
-    longitude,
-    radius,
-    LIMIT,
-    offset)
-
-# Fetch the final results (201 - 232)
-results3 = requests.get(url3).json()
-#+end_src
-
-* Clean the Data
-Now that we have our data in three separate dataframes, we need to
-combine them into a single dataframe and make sure to reset the index so
-that we have a unique ID for each business.
The =get_category_type= function
-below will pull the categories and name from each business's entry in
-the Foursquare data automatically. Once all the data has been labeled
-and combined, the results are stored in the =nearby_venues= dataframe.
-
-#+begin_src python
-# This function will extract the category of the venue from the API dictionary
-def get_category_type(row):
-    try:
-        categories_list = row['categories']
-    except:
-        categories_list = row['venue.categories']
-
-    if len(categories_list) == 0:
-        return None
-    else:
-        return categories_list[0]['name']
-
-# Get the first 100 venues
-venues = results['response']['groups'][0]['items']
-nearby_venues = json_normalize(venues)
-
-# filter columns
-filtered_columns = ['venue.name', 'venue.categories', 'venue.location.lat', 'venue.location.lng']
-nearby_venues = nearby_venues.loc[:, filtered_columns]
-
-# filter the category for each row
-nearby_venues['venue.categories'] = nearby_venues.apply(get_category_type, axis=1)
-
-# clean columns
-nearby_venues.columns = [col.split(".")[-1] for col in nearby_venues.columns]
-
-# ---
-
-# Get the second 100 venues
-venues2 = results2['response']['groups'][0]['items']
-nearby_venues2 = json_normalize(venues2) # flatten JSON
-
-# filter columns
-filtered_columns2 = ['venue.name', 'venue.categories', 'venue.location.lat', 'venue.location.lng']
-nearby_venues2 = nearby_venues2.loc[:, filtered_columns2]
-
-# filter the category for each row
-nearby_venues2['venue.categories'] = nearby_venues2.apply(get_category_type, axis=1)
-
-# clean columns
-nearby_venues2.columns = [col.split(".")[-1] for col in nearby_venues2.columns]
-nearby_venues = nearby_venues.append(nearby_venues2)
-
-# ---
-
-# Get the rest of the venues
-venues3 = results3['response']['groups'][0]['items']
-nearby_venues3 = json_normalize(venues3) # flatten JSON
-
-# filter columns
-filtered_columns3 = ['venue.name', 'venue.categories', 'venue.location.lat', 'venue.location.lng']
-nearby_venues3 = nearby_venues3.loc[:, filtered_columns3]
-
-# filter the category for each row
-nearby_venues3['venue.categories'] = nearby_venues3.apply(get_category_type, axis=1)
-
-# clean columns
-nearby_venues3.columns = [col.split(".")[-1] for col in nearby_venues3.columns]
-
-nearby_venues = nearby_venues.append(nearby_venues3)
-nearby_venues = nearby_venues.reset_index(drop=True)
-nearby_venues
-#+end_src
-
-#+caption: Clean Data
-[[https://img.cleberg.net/blog/20200726-ibm-data-science/02_clean_data-min.png]]
-
-* Visualize the Data
-We now have a complete, clean data set. The next step is to visualize
-this data onto the map we created earlier. We will be using folium's
-=CircleMarker()= function to do this.
-
-#+begin_src python
-# add markers to map
-for lat, lng, name, categories in zip(nearby_venues['lat'], nearby_venues['lng'], nearby_venues['name'], nearby_venues['categories']):
-    label = '{} ({})'.format(name, categories)
-    label = folium.Popup(label, parse_html=True)
-    folium.CircleMarker(
-        [lat, lng],
-        radius=5,
-        popup=label,
-        color='blue',
-        fill=True,
-        fill_color='#3186cc',
-        fill_opacity=0.7,
-    ).add_to(map_LNK)
-
-map_LNK
-#+end_src
-
-#+caption: Initial Data Map
-[[https://img.cleberg.net/blog/20200726-ibm-data-science/03_data_map-min.png]]
-
-* Clustering: /k-means/
-To cluster the data, we will be using the /k-means/ algorithm. This
-algorithm is iterative and will automatically make sure that data points
-in each cluster are as close as possible to each other, while being as
-far as possible away from other clusters.
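-
-Before applying it to the venue data, here is a minimal, self-contained
-sketch of the scikit-learn interface on toy coordinates (the points and
-variable names below are invented purely for illustration):
-
-#+begin_src python
-# Two obvious groups of points; k-means should separate them cleanly.
-import numpy as np
-from sklearn.cluster import KMeans
-
-toy_points = np.array([[0.0, 0.0], [0.1, 0.2], [0.2, 0.1],
-                       [5.0, 5.0], [5.1, 5.2], [5.2, 4.9]])
-
-toy_kmeans = KMeans(n_clusters=2, random_state=0).fit(toy_points)
-print(toy_kmeans.labels_)           # cluster assignment for each point
-print(toy_kmeans.cluster_centers_)  # the centroid of each cluster
-#+end_src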
- -However, we first have to figure out how many clusters to use (defined -as the variable /'k'/). To do so, we will use the next two functions to -calculate the sum of squares within clusters and then return the optimal -number of clusters. - -#+begin_src python -# This function will return the sum of squares found in the data -def calculate_wcss(data): - wcss = [] - for n in range(2, 21): - kmeans = KMeans(n_clusters=n) - kmeans.fit(X=data) - wcss.append(kmeans.inertia_) - - return wcss - -# Drop 'str' cols so we can use k-means clustering -cluster_df = nearby_venues.drop(columns=['name', 'categories']) - -# calculating the within clusters sum-of-squares for 19 cluster amounts -sum_of_squares = calculate_wcss(cluster_df) - -# This function will return the optimal number of clusters -def optimal_number_of_clusters(wcss): - x1, y1 = 2, wcss[0] - x2, y2 = 20, wcss[len(wcss)-1] - - distances = [] - for i in range(len(wcss)): - x0 = i+2 - y0 = wcss[i] - numerator = abs((y2-y1)*x0 - (x2-x1)*y0 + x2*y1 - y2*x1) - denominator = math.sqrt((y2 - y1)**2 + (x2 - x1)**2) - distances.append(numerator/denominator) - - return distances.index(max(distances)) + 2 - -# calculating the optimal number of clusters -n = optimal_number_of_clusters(sum_of_squares) -#+end_src - -Now that we have found that our optimal number of clusters is six, we -need to perform k-means clustering. When this clustering occurs, each -business is assigned a cluster number from 0 to 5 in the dataframe. - -#+begin_src python -# set number of clusters equal to the optimal number -kclusters = n - -# run k-means clustering -kmeans = KMeans(n_clusters=kclusters, random_state=0).fit(cluster_df) - -# add clustering labels to dataframe -nearby_venues.insert(0, 'Cluster Labels', kmeans.labels_) -#+end_src - -Success! We now have a dataframe with clean business data, along with a -cluster number for each business. Now let's map the data using six -different colors. - -#+begin_src python -# create map with clusters -map_clusters = folium.Map(location=[latitude, longitude], zoom_start=12) -colors = ['#0F9D58', '#DB4437', '#4285F4', '#800080', '#ce12c0', '#171717'] - -# add markers to the map -for lat, lng, name, categories, cluster in zip(nearby_venues['lat'], nearby_venues['lng'], nearby_venues['name'], nearby_venues['categories'], nearby_venues['Cluster Labels']): - label = '[{}] {} ({})'.format(cluster, name, categories) - label = folium.Popup(label, parse_html=True) - folium.CircleMarker( - [lat, lng], - radius=5, - popup=label, - color=colors[int(cluster)], - fill=True, - fill_color=colors[int(cluster)], - fill_opacity=0.7).add_to(map_clusters) - -map_clusters -#+end_src - -#+caption: Clustered Map -[[https://img.cleberg.net/blog/20200726-ibm-data-science/04_clusters-min.png]] - -* Investigate Clusters -Now that we have figured out our clusters, let's do a little more -analysis to provide more insight into the clusters. With the information -below, we can see which clusters are more popular for businesses and -which are less popular. The results below show us that clusters 0 -through 3 are popular, while clusters 4 and 5 are not very popular at -all. 
- -#+begin_src python -# Show how many venues are in each cluster -color_names = ['Dark Green', 'Red', 'Blue', 'Purple', 'Pink', 'Black'] -for x in range(0,6): - print("Color of Cluster", x, ":", color_names[x]) - print("Venues found in Cluster", x, ":", nearby_venues.loc[nearby_venues['Cluster Labels'] == x, nearby_venues.columns[:]].shape[0]) - print("---") -#+end_src - -#+caption: Venues per Cluster -[[https://img.cleberg.net/blog/20200726-ibm-data-science/05_venues_per_cluster-min.png]] - -Our last piece of analysis is to summarize the categories of businesses -within each cluster. With these results, we can clearly see that -restaurants, coffee shops, and grocery stores are the most popular. - -#+begin_src python -# Calculate how many venues there are in each category -# Sort from largest to smallest -temp_df = nearby_venues.drop(columns=['name', 'lat', 'lng']) - -cluster0_grouped = temp_df.loc[temp_df['Cluster Labels'] == 0].groupby(['categories']).count().sort_values(by='Cluster Labels', ascending=False) -cluster1_grouped = temp_df.loc[temp_df['Cluster Labels'] == 1].groupby(['categories']).count().sort_values(by='Cluster Labels', ascending=False) -cluster2_grouped = temp_df.loc[temp_df['Cluster Labels'] == 2].groupby(['categories']).count().sort_values(by='Cluster Labels', ascending=False) -cluster3_grouped = temp_df.loc[temp_df['Cluster Labels'] == 3].groupby(['categories']).count().sort_values(by='Cluster Labels', ascending=False) -cluster4_grouped = temp_df.loc[temp_df['Cluster Labels'] == 4].groupby(['categories']).count().sort_values(by='Cluster Labels', ascending=False) -cluster5_grouped = temp_df.loc[temp_df['Cluster Labels'] == 5].groupby(['categories']).count().sort_values(by='Cluster Labels', ascending=False) - -# show how many venues there are in each cluster (> 1) -with pd.option_context('display.max_rows', None, 'display.max_columns', None): - print("\n\n", "Cluster 0:", "\n", cluster0_grouped.loc[cluster0_grouped['Cluster Labels'] > 1]) - print("\n\n", "Cluster 1:", "\n", cluster1_grouped.loc[cluster1_grouped['Cluster Labels'] > 1]) - print("\n\n", "Cluster 2:", "\n", cluster2_grouped.loc[cluster2_grouped['Cluster Labels'] > 1]) - print("\n\n", "Cluster 3:", "\n", cluster3_grouped.loc[cluster3_grouped['Cluster Labels'] > 1]) - print("\n\n", "Cluster 4:", "\n", cluster4_grouped.loc[cluster4_grouped['Cluster Labels'] > 1]) - print("\n\n", "Cluster 5:", "\n", cluster5_grouped.loc[cluster5_grouped['Cluster Labels'] > 1]) -#+end_src - -#+caption: Venues per Cluster, pt. 1 -[[https://img.cleberg.net/blog/20200726-ibm-data-science/06_categories_per_cluster_pt1-min.png]] - -#+caption: Venues per Cluster, pt. 2 -[[https://img.cleberg.net/blog/20200726-ibm-data-science/07_categories_per_cluster_pt2-min.png]] - -* Discussion -In this project, we gathered location data for Lincoln, Nebraska, USA -and clustered the data using the k-means algorithm in order to identify -the unique clusters of businesses in Lincoln. Through these actions, we -found that there are six unique business clusters in Lincoln and that -two of the clusters are likely unsuitable for investors. The remaining -four clusters have a variety of businesses, but are largely dominated by -restaurants and grocery stores. - -Using this project, investors can now make more informed decisions when -deciding the location and category of business in which to invest. - -Further studies may involve other attributes for business locations, -such as population density, average wealth across the city, or crime -rates. 
In addition, further studies may include additional location data
-and businesses by utilizing multiple sources, such as Google Maps and
-OpenStreetMap.
diff --git a/blog/byobu/index.org b/blog/byobu/index.org
deleted file mode 100644
index 902e5f5..0000000
--- a/blog/byobu/index.org
+++ /dev/null
@@ -1,66 +0,0 @@
-#+title: Byobu
-#+date: 2023-06-23
-#+description: Learning about the Byobu application for terminals.
-#+filetags: :linux:
-
-* Byobu
-[[https://www.byobu.org][byobu]] is a command-line tool that allows you
-to use numerous screens within a single terminal emulator instance. More
-specifically, it's a text-based window manager, using either =screen= or
-=tmux=.
-
-This post is mostly just a self-reference as I explore byobu, so I may
-come back later and update this post with more content.
-
-** Screenshot
-Take a look below at my current multi-window setup in byobu while I
-write this blog post:
-
-#+caption: byobu
-[[https://img.cleberg.net/blog/20230623-byobu/byobu.png]]
-
-*** Keybindings
-You can open the help menu with either of the following commands; they
-will both open the same manpage:
-
-#+begin_src sh
-byobu --help
-# or
-man byobu
-#+end_src
-
-While the manpage contains a ton of information about the functionality
-of byobu (such as status notifications, sessions, and windows), the
-first location to explore should be the keybindings section.
-
-The keybindings are configured as follows:
-
-#+begin_src txt
-byobu keybindings can be user defined in /usr/share/byobu/keybindings/ (or
-within .screenrc if byobu-export was used). The common key bindings are:
-
-F2 - Create a new window
-F3 - Move to previous window
-F4 - Move to next window
-F5 - Reload profile
-F6 - Detach from this session
-F7 - Enter copy/scrollback mode
-F8 - Re-title a window
-F9 - Configuration Menu
-F12 - Lock this terminal
-shift-F2 - Split the screen horizontally
-ctrl-F2 - Split the screen vertically
-shift-F3 - Shift the focus to the previous split region
-shift-F4 - Shift the focus to the next split region
-shift-F5 - Join all splits
-ctrl-F6 - Remove this split
-ctrl-F5 - Reconnect GPG and SSH sockets
-shift-F6 - Detach, but do not logout
-alt-pgup - Enter scrollback mode
-alt-pgdn - Enter scrollback mode
-Ctrl-a $ - show detailed status
-Ctrl-a R - Reload profile
-Ctrl-a ! - Toggle key bindings on and off
-Ctrl-a k - Kill the current window
-Ctrl-a ~ - Save the current window's scrollback buffer
-#+end_src
diff --git a/blog/changing-git-authors/index.org b/blog/changing-git-authors/index.org
deleted file mode 100644
index b06660d..0000000
--- a/blog/changing-git-authors/index.org
+++ /dev/null
@@ -1,72 +0,0 @@
-#+title: Changing Git Authors
-#+date: 2021-05-30
-#+description: A guide to change Git author names and emails in old commits.
-#+filetags: :dev:
-
-* Changing Git Author/Email Based on Previously Committed Email
-Here's the dilemma: You've been committing changes to your git
-repository (or multiple repositories) with an incorrect name or email,
-and now you want to fix it. Luckily, there's a semi-reliable way to fix
-that. While I have never experienced issues with this method, some
-people have warned that it can mess with historical hashes and integrity
-of commits, so use this method only if you're okay accepting that risk.
-
-Okay, let's create the bash script:
-
-#+begin_src sh
-nano change_git_authors.sh
-#+end_src
-
-The following information can be pasted directly into your bash script.
-The only changes you need to make are to the following variables:
-
-- =OLD_EMAIL=
-- =CORRECT_NAME=
-- =CORRECT_EMAIL=
-
-#+begin_src sh
-#!/bin/sh
-
-# List all sub-directories in the current directory
-for dir in */
-do
-    # Remove the trailing "/"
-    dir=${dir%*/}
-    # Enter sub-directory
-    cd "$dir"
-
-    git filter-branch --env-filter '
-
-    OLD_EMAIL="old@example.com"
-    CORRECT_NAME="your-new-name"
-    CORRECT_EMAIL="new@example.com"
-
-    if [ "$GIT_COMMITTER_EMAIL" = "$OLD_EMAIL" ]
-    then
-        export GIT_COMMITTER_NAME="$CORRECT_NAME"
-        export GIT_COMMITTER_EMAIL="$CORRECT_EMAIL"
-    fi
-    if [ "$GIT_AUTHOR_EMAIL" = "$OLD_EMAIL" ]
-    then
-        export GIT_AUTHOR_NAME="$CORRECT_NAME"
-        export GIT_AUTHOR_EMAIL="$CORRECT_EMAIL"
-    fi
-    ' --tag-name-filter cat -- --branches --tags
-
-    git push --force --tags origin 'refs/heads/*'
-
-    cd ..
-done
-#+end_src
-
-Finally, save the bash script and make it executable.
-
-#+begin_src sh
-chmod a+x change_git_authors.sh
-#+end_src
-
-Now you can run the script and should see the process begin.
-
-#+begin_src sh
-./change_git_authors.sh
-#+end_src
diff --git a/blog/cisa/index.org b/blog/cisa/index.org
deleted file mode 100644
index d06eb51..0000000
--- a/blog/cisa/index.org
+++ /dev/null
@@ -1,205 +0,0 @@
-#+title: I Passed the CISA!
-#+date: 2021-12-04
-#+description: A recap of the CISA certification exam and my results.
-#+filetags: :audit:
-
-* What is the CISA?
-For those of you lucky enough not to be knee-deep in the world of IT/IS
-auditing, [[https://www.isaca.org/credentialing/cisa][CISA]] stands for
-Certified Information Systems Auditor. This certification and exam are
-part of ISACA's suite of certifications. As I often explain it to people
-like my family, it basically means you're employed to use your knowledge
-of information systems, regulations, common threats, risks, etc. in
-order to assess an organization's current control of their risk. If a
-risk isn't controlled (and the company doesn't want to accept the risk),
-an IS auditor will suggest implementing a control to address that risk.
-
-Now, the CISA certification itself is, in my opinion, the main
-certification for this career. While certifications such as the CPA or
-CISSP are beneficial, nothing matches the power of the CISA for an IS
-auditor when it comes to getting hired, getting a raise/bonus, or
-earning respect in the field.
-
-However, to be honest, I am a skeptic of most certifications. I
-understand the value they hold in terms of how much you need to commit
-to studying or learning on the job, as well as the market value for
-certifications such as the CISA. But I also have known some very
-+incompetent+ /less than stellar/ auditors who have CPAs, CISAs, CIAs,
-etc.
-
-The same goes for most industries: if a person is good at studying, they
-can earn the certification. However, that knowledge means nothing unless
-you're actually able to use it in real life and perform as expected of a
-certification holder. The challenge comes when people are hired or
-connected strictly because of their certifications or resume; you need
-to see a person work before you can assume that having a CISA makes them
-better than someone without one.
-
-Okay, rant over. Certifications are generally accepted as a measuring
-stick of commitment and quality of an employee, so I am accepting it
-too.
-
-* Exam Content
-The CISA is broken down into five sections, each weighted with a
-percentage of test questions that may appear.
-
-#+caption: CISA exam sections
-[[https://img.cleberg.net/blog/20211204-i-passed-the-cisa/cisa-exam-sections.png]]
-
-Since the exam contains 150 questions, here's how those sections break
-down:
-
-| Exam Section  | Percentage of Exam | Questions |
-|---------------+--------------------+-----------|
-| 1             | 21%                | 32        |
-| 2             | 17%                | 26        |
-| 3             | 12%                | 18        |
-| 4             | 23%                | 34        |
-| 5             | 27%                | 40        |
-| *Grand Total* | *100%*             | *150*     |
-
-* My Studying Habits
-This part is a little hard for me to break down into specific detail due
-to the craziness of the last year. While I officially purchased my
-studying materials in December 2020 and opened them to "start studying"
-in January 2021, I really wasn't able to study much due to the demands
-of my job and personal life.
-
-Let me approach this from a few different viewpoints.
-
-** Study Materials
-Let's start by discussing the study materials I purchased. I'll be
-referring to #1 as the CRM and #2 as the QAE.
-
-1. [[https://store.isaca.org/s/store#/store/browse/detail/a2S4w000004KoCbEAK][CISA
-   Review Manual, 27th Edition | Print]]
-2. [[https://store.isaca.org/s/store#/store/browse/detail/a2S4w000004KoCcEAK][CISA
-   Review Questions, Answers & Explanations Manual, 12th Edition |
-   Print]]
-
-The CRM is an excellent source of information and could honestly be used
-by most IS auditors as a learning reference during their daily audit
-responsibilities. However, it is *full* of information and can be
-overloading if you're not good at filtering out useless information
-while studying.
-
-The QAE is the real star of the show here. This book contains 1000
-questions, separated by exam section, and a practice exam. My only
-complaint about the QAE is that each question is immediately followed
-by the correct answer and explanations below it, which means I had to
-use something to constantly cover the answers while I was studying.
-
-I didn't use the online database version of the QAE, but I've heard that
-it's easier to use than the printed book. However, it is more expensive
-($299 database vs $129 book) which might be important if you're paying
-for materials yourself.
-
-In terms of question difficulty, I felt that the QAE was a good
-representation of the actual exam. I've seen a lot of people online say
-it wasn't accurate to the exam or that it was much easier/harder, but I
-disagree with all of those. The exam was fairly similar to the QAE, just
-focusing on whichever topics they chose for my version of the exam.
-
-If you understand the concepts, skim the CRM (and read in-depth on
-topics you struggle with), and use the QAE to continue practicing
-exam-like questions, you should be fine. I didn't use any online
-courses, videos, etc. - the ISACA materials are more than enough.
-
-** Studying Process
-While I was able to briefly read through sections 1 and 2 in early 2021,
-I had to stop and take a break from February/March to September. I
-switched jobs in September, which allowed me a lot more free time to
-study.
-
-In September, I studied sections 3-5, took notes, and did a quick review
-of the section topics. Once I felt comfortable with my notes, I took a
-practice exam from the QAE manual and scored 70% (105/150).
-
-Here's a breakdown of my initial practice exam:
-
-| Exam Section  | Incorrect | Correct | Grand Total | Percent |
-|---------------+-----------+---------+-------------+---------|
-| 1             | 8         | 25      | 33          | 76%     |
-| 2             | 5         | 20      | 25          | 80%     |
-| 3             | 6         | 12      | 18          | 67%     |
-| 4             | 10        | 23      | 33          | 70%     |
-| 5             | 16        | 25      | 41          | 61%     |
-| *Grand Total* | *45*      | *105*   | *150*       | *70%*   |
-
-As I expected, my toughest sections were related to project management,
-development, implementation, and security.
-
-This just leaves October and November. For these months, I tried to
-practice every few days, doing 10 questions for each section, until the
-exam. This came out to 13 practice sessions, ~140 questions per section,
-and ~700 questions total.
-
-While some practice sessions were worse and some were better, the final
-results were similar to my practice exam results. As you can see below,
-my averages were slightly worse than my practice exam. However, I got in
-over 700 questions of practice and, most importantly, *I read through
-the explanations every time I answered incorrectly and learned from my
-mistakes*.
-
-| Exam Section  | Incorrect | Correct | Grand Total | Percent |
-|---------------+-----------+---------+-------------+---------|
-| 1             | 33        | 108     | 141         | 77%     |
-| 2             | 33        | 109     | 142         | 77%     |
-| 3             | 55        | 89      | 144         | 62%     |
-| 4             | 52        | 88      | 140         | 63%     |
-| 5             | 55        | 85      | 140         | 61%     |
-| *Grand Total* | *228*     | *479*   | *707*       | *68%*   |
-
-#+caption: CISA practice question results
-[[https://img.cleberg.net/blog/20211204-i-passed-the-cisa/cisa-practice-questions-results.png]]
-
-* Results
-Now, how do the practice scores reflect my actual results? After all,
-it's hard to tell how good a practice regimen is unless you see how it
-turns out.
-
-| Exam Section | Section Name                                                     | Score |
-|--------------+------------------------------------------------------------------+-------|
-| 1            | Information Systems Auditing Process                             | 678   |
-| 2            | Governance and Management of IT                                  | 590   |
-| 3            | Information Systems Acquisition, Development, and Implementation | 721   |
-| 4            | Information Systems Operations and Business Resilience           | 643   |
-| 5            | Protection of Information Assets                                 | 511   |
-| *TOTAL*      |                                                                  | *616* |
-
-Now, in order to pass the CISA, you need at least 450 on a sliding scale
-of 200-800. Personally, I really have no clue what an average CISA score
-is. After a /very/ brief look online, I can see that the high end is
-usually in the low 700s. In addition, only about 50-60% of people pass
-the exam.
-
-Given this information, I feel great about my scores. 616 may not be
-phenomenal, and I wish I had done better on sections 2 & 5, but my
-practicing seems to have worked very well overall.
-
-However, the practice results do not conform to the actual results.
-Section 2 was one of my highest practice sections and was my
-second-lowest score in the exam. Conversely, section 3 was my
-second-lowest practice section and turned out to be my highest actual
-score!
-
-After reflecting, it is obvious that if you have any background on the
-CISA topics at all, the most important part of studying is doing
-practice questions. You really need to understand how to read the
-questions critically and pick the best answer.
-
-* Looking Forward
-I am extremely happy that I was finally able to pass the CISA. Looking
-to the future, I'm not sure what's next in terms of professional
-learning. My current company offers internal learning courses, so I will
-most likely focus on that if I need to gain more knowledge in certain
-areas.
-
-To be fair, even if you pass the CISA, it's hard to become an expert on
-any specific topic found within. My career may take me in a different
-direction, and I might need to focus more on security or networking
-certifications (or possibly building a better analysis/visualization
-portfolio if I want to go into data analysis/science).
-
-All I know is that I am content at the moment and extremely proud of my
-accomplishment.
diff --git a/blog/clone-github-repos/index.org b/blog/clone-github-repos/index.org
deleted file mode 100644
index 3814e9f..0000000
--- a/blog/clone-github-repos/index.org
+++ /dev/null
@@ -1,148 +0,0 @@
-#+title: How to Clone All Repositories from a GitHub or Sourcehut Account
-#+date: 2021-03-19
-#+description: Learn how to clone all GitHub or Sourcehut repositories.
-#+filetags: :dev:
-
-* Cloning from GitHub
-If you're like me and use a lot of different devices (and sometimes
-decide to just wipe your device and start with a new OS), you probably
-know the pain of cloning all your old code repositories down to your
-local file system.
-
-If you're using GitHub, you can easily clone all of your code back down
-in just seconds. First, create a bash script. I do so by opening a new
-file in =nano=, but you can use =gedit=, =vim=, or something else:
-
-#+begin_src sh
-nano clone_github_repos.sh
-#+end_src
-
-Next, paste in the following information. Note that you can replace the
-word =users= in the first line with =orgs= and type an organization's
-name instead of a user's name.
-
-#+begin_src sh
-CNTX=users; NAME=YOUR-USERNAME; PAGE=1
-curl "https://api.github.com/$CNTX/$NAME/repos?page=$PAGE&per_page=100" |
-    grep -e 'git_url*' |
-    cut -d \" -f 4 |
-    xargs -L1 git clone
-#+end_src
-
-Finally, save the bash script and make it executable.
-
-#+begin_src sh
-chmod a+x clone_github_repos.sh
-#+end_src
-
-Now you can run the script and should see the cloning process begin.
-
-#+begin_src sh
-./clone_github_repos.sh
-#+end_src
-
-* Cloning from Sourcehut
-I haven't fully figured out how to directly incorporate Sourcehut's
-GraphQL API into a bash script yet, so this one will take two steps.
-
-First, log in to Sourcehut and go to their
-[[https://git.sr.ht/graphql][GraphQL playground for Git]]. Next, paste
-the following query into the left box:
-
-#+begin_src sh
-query {
-    me {
-        canonicalName
-        repositories {
-            cursor
-            results {
-                name
-            }
-        }
-    }
-}
-#+end_src
-
-The output on the right side will give you an object of all your
-repositories. Just grab that text and remove all the characters such as
-quotation marks and curly brackets. You will need a single-line list of
-space-separated values for the next step.
-
-Now let's create the bash script:
-
-#+begin_src sh
-nano clone_sourcehut_repos.sh
-#+end_src
-
-Next, paste the following bash script in with the list of repositories
-you obtained above and replace =your-username= with your username.
-
-Note that this uses the SSH-based Git cloning method
-(e.g. =git@git...=), so you'll need to ensure you have set up Sourcehut
-with your SSH key.
-
-#+begin_src sh
-repos=(repo1 repo2 repo3)
-
-# Loop through each repository in the list
-for repo in "${repos[@]}"
-do
-    # Clone
-    git clone git@git.sr.ht:~your-username/$repo
-done
-#+end_src
-
-Finally, save the bash script and make it executable.
-
-#+begin_src sh
-chmod a+x clone_sourcehut_repos.sh
-#+end_src
-
-Now you can run the script and should see the cloning process begin.
-#+begin_src sh
-./clone_sourcehut_repos.sh
-#+end_src
-
-* Moving Repositories to a New Host
-Now that you have all of your code repositories cloned to your local
-computer, you may want to change the remote host (e.g., moving from
-GitHub to GitLab). To do this, let's create another bash script:
-
-#+begin_src sh
-nano change_remote_urls.sh
-#+end_src
-
-Paste the following information and be sure to change the URL
-information to whichever host you are moving to. For this example, I am
-looping through all of my cloned GitHub directories and changing them to
-Sourcehut (e.g. =<old-remote-base>= -> =git@git.sr.ht:~myusername=).
-
-#+begin_src sh
-# List all sub-directories in the current directory
-for dir in */
-do
-    # Remove the trailing "/"
-    dir=${dir%*/}
-    # Enter sub-directory
-    cd $dir
-    # Change remote Git URL
-    git remote set-url origin <new-remote-base>/"${dir##*/}"
-    # Push code to new remote
-    git push
-    # Go back to main directory
-    cd ..
-done
-#+end_src
-
-Finally, save the bash script and make it executable.
-
-#+begin_src sh
-chmod a+x change_remote_urls.sh
-#+end_src
-
-Now you can run the script and should see the update process begin.
-
-#+begin_src sh
-./change_remote_urls.sh
-#+end_src
diff --git a/blog/cloudflare-dns-api/index.org b/blog/cloudflare-dns-api/index.org
deleted file mode 100644
index 39d6fac..0000000
--- a/blog/cloudflare-dns-api/index.org
+++ /dev/null
@@ -1,190 +0,0 @@
-#+title: Dynamic DNS with Cloudflare API
-#+date: 2022-03-23
-#+description: Learn how to dynamically update DNS records for changing IPs with Cloudflare.
-#+filetags: :sysadmin:
-
-* DDNS: Dynamic DNS
-If you're hosting a service from a location with a dynamic IP address
-(where your IP may change at any time), you must have a solution to
-update the DNS so that you can access your service even when the IP of
-the server changes.
-
-The process below uses the [[https://api.cloudflare.com/][Cloudflare
-API]] to update DNS =A= and =AAAA= records with the server's current IP.
-If you use another DNS provider, you will have to find a way to update
-your DNS (or find a way to get a static IP).
-
-First, install =jq= since we will use it in the next script:
-
-#+begin_src sh
-sudo apt install jq
-#+end_src
-
-Next, create a location for your DDNS update scripts and open the first
-script:
-
-#+begin_src sh
-mkdir ~/ddns
-nano ~/ddns/update.sh
-#+end_src
-
-The following =update.sh= script will take all of your domains and
-subdomains and check Cloudflare to see if the current =A= and =AAAA=
-records match your server's IP address. If not, it will update the
-records.
-
-#+begin_src sh
-#!/bin/bash
-# file: update.sh
-
-# Update TLDs
-domains=(example.com example.net)
-
-for domain in "${domains[@]}"
-do
-    echo -e "\nUpdating $domain..."
-    zone_name=$domain dns_record=$domain /home/<username>/ddns/ddns.sh
-done
-
-# Update subdomains for example.com
-domain=example.com
-subdomains=(photos.example.com)
-
-for subdomain in "${subdomains[@]}"
-do
-    echo -e "\nUpdating $subdomain..."
-    zone_name=$domain dns_record=$subdomain /home/<username>/ddns/ddns.sh
-done
-#+end_src
-
-Next, open up the =ddns.sh= script. Paste the following into the script
-and update the =api_token= and =email= variables.
-
-#+begin_src sh
-nano ~/ddns/ddns.sh
-#+end_src
-
-*Note*: If you want your DNS records to be proxied through
-Cloudflare, find and update the following snippet:
-=\"proxied\":false= to say =true= instead of =false=.
-#+begin_src sh
-#!/bin/bash
-# file: ddns.sh
-# based on https://gist.github.com/Tras2/cba88201b17d765ec065ccbedfb16d9a
-# initial data; they need to be filled by the user
-## API token
-api_token=
-## email address associated with the Cloudflare account
-email=
-
-# get the basic data
-ipv4=$(curl -s -X GET -4 https://ifconfig.co)
-ipv6=$(curl -s -X GET -6 https://ifconfig.co)
-user_id=$(curl -s -X GET "https://api.cloudflare.com/client/v4/user/tokens/verify" \
-          -H "Authorization: Bearer $api_token" \
-          -H "Content-Type:application/json" \
-          | jq -r '{"result"}[] | .id'
-         )
-
-echo "Your IPv4 is: $ipv4"
-echo "Your IPv6 is: $ipv6"
-
-# check if the user API token is valid and the email is correct
-if [ $user_id ]
-then
-    zone_id=$(curl -s -X GET "https://api.cloudflare.com/client/v4/zones?name=$zone_name&status=active" \
-              -H "Content-Type: application/json" \
-              -H "X-Auth-Email: $email" \
-              -H "Authorization: Bearer $api_token" \
-              | jq -r '{"result"}[] | .[0] | .id'
-             )
-    # check if the zone ID is valid
-    if [ $zone_id ]
-    then
-        # check if there is any IP version 4
-        if [ $ipv4 ]
-        then
-            dns_record_a_id=$(curl -s -X GET "https://api.cloudflare.com/client/v4/zones/$zone_id/dns_records?type=A&name=$dns_record" \
-                              -H "Content-Type: application/json" \
-                              -H "X-Auth-Email: $email" \
-                              -H "Authorization: Bearer $api_token"
-                             )
-            # extract the IP currently set in the A record
-            dns_record_a_ip=$(echo $dns_record_a_id | jq -r '{"result"}[] | .[0] | .content')
-            echo "The set IPv4 on Cloudflare (A Record) is: $dns_record_a_ip"
-            if [ "$dns_record_a_ip" != "$ipv4" ]
-            then
-                # change the A record
-                curl -s -X PUT "https://api.cloudflare.com/client/v4/zones/$zone_id/dns_records/$(echo $dns_record_a_id | jq -r '{"result"}[] | .[0] | .id')" \
-                     -H "Content-Type: application/json" \
-                     -H "X-Auth-Email: $email" \
-                     -H "Authorization: Bearer $api_token" \
-                     --data "{\"type\":\"A\",\"name\":\"$dns_record\",\"content\":\"$ipv4\",\"ttl\":1,\"proxied\":false}" \
-                     | jq -r '.errors'
-            else
-                echo "The current IPv4 and DNS record IPv4 are the same."
-            fi
-        else
-            echo "Could not get your IPv4. Check if you have it; e.g. on https://ifconfig.co"
-        fi
-
-        # check if there is any IP version 6
-        if [ $ipv6 ]
-        then
-            dns_record_aaaa_id=$(curl -s -X GET "https://api.cloudflare.com/client/v4/zones/$zone_id/dns_records?type=AAAA&name=$dns_record" \
-                                 -H "Content-Type: application/json" \
-                                 -H "X-Auth-Email: $email" \
-                                 -H "Authorization: Bearer $api_token"
-                                )
-            # extract the IP currently set in the AAAA record
-            dns_record_aaaa_ip=$(echo $dns_record_aaaa_id | jq -r '{"result"}[] | .[0] | .content')
-            echo "The set IPv6 on Cloudflare (AAAA Record) is: $dns_record_aaaa_ip"
-            if [ "$dns_record_aaaa_ip" != "$ipv6" ]
-            then
-                # change the AAAA record
-                curl -s -X PUT "https://api.cloudflare.com/client/v4/zones/$zone_id/dns_records/$(echo $dns_record_aaaa_id | jq -r '{"result"}[] | .[0] | .id')" \
-                     -H "Content-Type: application/json" \
-                     -H "X-Auth-Email: $email" \
-                     -H "Authorization: Bearer $api_token" \
-                     --data "{\"type\":\"AAAA\",\"name\":\"$dns_record\",\"content\":\"$ipv6\",\"ttl\":1,\"proxied\":false}" \
-                     | jq -r '.errors'
-            else
-                echo "The current IPv6 and DNS record IPv6 are the same."
-            fi
-        else
-            echo "Could not get your IPv6. Check if you have it; e.g. on https://ifconfig.co"
-        fi
-    else
-        echo "There is a problem with getting the Zone ID. Check if the Zone Name is correct."
-    fi
-else
-    echo "There is a problem with either the email or the password"
-fi
-#+end_src
-
-Once the script is saved and closed, make the scripts executable:
-
-#+begin_src sh
-chmod +x ~/ddns/ddns.sh
-chmod +x ~/ddns/update.sh
-#+end_src
-
-You can test the script by running it manually:
-
-#+begin_src sh
-./update.sh
-#+end_src
-
-To make sure the scripts run automatically, add =update.sh= to the
-=cron= file so that it will run on a schedule. To do this, open the
-cron file:
-
-#+begin_src sh
-crontab -e
-#+end_src
-
-In the cron file, paste the following at the bottom of the editor:
-
-#+begin_src sh
-*/5 * * * * bash /home/<username>/ddns/update.sh
-#+end_src
diff --git a/blog/cpp-compiler/index.org b/blog/cpp-compiler/index.org
deleted file mode 100644
index 1e2f802..0000000
--- a/blog/cpp-compiler/index.org
+++ /dev/null
@@ -1,128 +0,0 @@
-#+title: The C++ Compiler
-#+date: 2018-11-28
-#+description: Learn basics about the C++ compilation process.
-#+filetags: :dev:
-
-* A Brief Introduction
-[[https://en.wikipedia.org/wiki/C%2B%2B][C++]] is a general-purpose
-programming language with object-oriented, generic, and functional
-features in addition to facilities for low-level memory manipulation.
-
-The source code, shown in the snippet below, must be compiled before it
-can be executed. There are many steps and intricacies to the compilation
-process, and this post was a personal exercise to learn and remember as
-much information as I can.
-
-#+begin_src cpp
-#include <iostream>
-
-int main()
-{
-    std::cout << "Hello, world!\n";
-}
-#+end_src
-
-** Compilation Process
-*** An Overview
-Compiling C++ projects is a frustrating task most days. Seemingly
-nonexistent errors keeping your program from successfully compiling can
-be annoying (especially since you know you wrote it perfectly the first
-time, right?).
-
-I'm learning more and more about C++ these days and decided to write
-this concept down so that I can cement it even further in my own head.
-However, C++ is not the only compiled language. Check out
-[[https://en.wikipedia.org/wiki/Compiled_language][the Wikipedia entry
-for compiled languages]] for more examples of compiled languages.
-
-I'll start with a wonderful, graphical way to conceptualize the C++
-compiler. View
-[[https://web.archive.org/web/20190419035048/http://faculty.cs.niu.edu/~mcmahon/CS241/Notes/compile.html][The
-C++ Compilation Process]] by Kurt MacMahon, an NIU professor, to see the
-graphic and an explanation. The goal of the compilation process is to
-take the C++ code and produce a shared library, dynamic library, or an
-executable file.
-
-** Compilation Phases
-Let's break down the compilation process. There are four major steps to
-compiling C++ code.
-
-*** Step 1
-The first step is to expand the source code file to meet all
-dependencies. The C++ preprocessor includes the code from all the header
-files, such as =#include <iostream>=. Now, what does that mean? The
-previous example includes the =iostream= header. This tells the computer
-that you want to use the =iostream= standard library, which contains
-classes and functions written in the core language. This specific header
-allows you to manipulate input/output streams. After all this, you'll
-end up with a temporary file that contains the expanded source code.
-
-In the example of the C++ code above, the =iostream= header would be
-included in the expanded code.
-
-*** Step 2
-After the code is expanded, the compiler comes into play.
The compiler
-takes the C++ code and converts this code into the assembly language
-understood by the platform. You can see this in action if you head over
-to the [[https://godbolt.org][GodBolt Compiler Explorer]], which shows
-C++ being converted into assembly dynamically.
-
-For example, the =Hello, world!= code snippet above compiles into the
-following assembly code:
-
-#+begin_src asm
-.LC0:
-        .string "Hello, world!\n"
-main:
-        push    rbp
-        mov     rbp, rsp
-        mov     esi, OFFSET FLAT:.LC0
-        mov     edi, OFFSET FLAT:_ZSt4cout
-        call    std::basic_ostream<char, std::char_traits<char> >& std::operator<< <std::char_traits<char> >(std::basic_ostream<char, std::char_traits<char> >&, char const*)
-        mov     eax, 0
-        pop     rbp
-        ret
-__static_initialization_and_destruction_0(int, int):
-        push    rbp
-        mov     rbp, rsp
-        sub     rsp, 16
-        mov     DWORD PTR [rbp-4], edi
-        mov     DWORD PTR [rbp-8], esi
-        cmp     DWORD PTR [rbp-4], 1
-        jne     .L5
-        cmp     DWORD PTR [rbp-8], 65535
-        jne     .L5
-        mov     edi, OFFSET FLAT:_ZStL8__ioinit
-        call    std::ios_base::Init::Init() [complete object constructor]
-        mov     edx, OFFSET FLAT:__dso_handle
-        mov     esi, OFFSET FLAT:_ZStL8__ioinit
-        mov     edi, OFFSET FLAT:_ZNSt8ios_base4InitD1Ev
-        call    __cxa_atexit
-.L5:
-        nop
-        leave
-        ret
-_GLOBAL__sub_I_main:
-        push    rbp
-        mov     rbp, rsp
-        mov     esi, 65535
-        mov     edi, 1
-        call    __static_initialization_and_destruction_0(int, int)
-        pop     rbp
-        ret
-#+end_src
-
-*** Step 3
-Third, the assembly code generated by the compiler is assembled into the
-object code for the platform. Essentially, this is when the compiler
-takes the assembly code and assembles it into machine code in a binary
-format. After researching this online, I figured out that a lot of
-compilers will allow you to stop compilation at this step. This would be
-useful for compiling each source code file separately. This saves time
-later if a single file changes; only that file needs to be recompiled.
-
-*** Step 4
-Finally, the object code file generated by the assembler is linked
-together with the object code files for any library functions used to
-produce a shared library, dynamic library, or an executable file. It
-replaces all references to undefined symbols with the correct addresses.
diff --git a/blog/cryptography-basics/index.org b/blog/cryptography-basics/index.org
deleted file mode 100644
index 366239a..0000000
--- a/blog/cryptography-basics/index.org
+++ /dev/null
@@ -1,171 +0,0 @@
-#+title: Cryptography Basics
-#+date: 2020-02-09
-#+description: Learn about the basics of cryptography.
-#+filetags: :security:
-
-* Similar Article Available
-If you haven't already, feel free to read my post on
-[[../aes-encryption/][AES Encryption]].
-
-* What is Cryptography?
-In layman's terms, cryptography is a process that can change data from a
-readable format into an unreadable format (and vice-versa) through a
-series of processes and secrets. More technically, this is the Internet
-Security Glossary's definition:
-
-#+begin_quote
-[Cryptography is] the mathematical science that deals with transforming
-data to render its meaning unintelligible (i.e., to hide its semantic
-content), prevent its undetected alteration, or prevent its unauthorized
-use. If the transformation is reversible, cryptography also deals with
-restoring encrypted data to an intelligible form.
-
-- [[https://tools.ietf.org/html/rfc2828][Internet Security Glossary
-  (2000)]]
-
-#+end_quote
-
-Cryptography cannot offer protection against the loss of data; it simply
-offers encryption methods to protect data at-rest and data in-traffic.
-At a high-level, encryption is the process of turning plaintext data
-into ciphertext (a secure form of text that cannot be understood unless
-decrypted back to plaintext). The encryption process is completed
-through the use of a mathematical function that utilizes one or more
-values called keys to encrypt or decrypt the data.
-
-* Key Elements of Cryptographic Systems
-To create or evaluate a cryptographic system, you need to know the
-essential pieces to the system:
-
-- *Encryption Algorithm (Primitive):* A mathematical process that
-  encrypts and decrypts data.
-- *Encryption Key:* A string of bits used within the encryption
-  algorithm as the secret that allows successful encryption or
-  decryption of data.
-- *Key Length (Size):* The maximum number of bits within the encryption
-  key. It's important to remember that key size is regulated in many
-  countries.
-- *Message Digest:* A smaller, fixed-size bit string version of the
-  original message. This is practically infeasible to reverse, which is
-  why it's commonly used to verify integrity.
-
-* Symmetric Systems (Secret Key Cryptography)
-Symmetric cryptography utilizes a secret, bidirectional key to perform
-both encryption and decryption of the data. The most common
-implementation of symmetric cryptography is the Advanced Encryption
-Standard, which uses keys that are 128 bits to 256 bits in size. This
-standard came after the National Institute of Standards and Technology
-(NIST) decided to retire the Data Encryption Standard (DES) in 2001.
-
-Since brute force attacks strongly correlate with key length, the 56-bit
-key length of DES was considered insecure after it was publicly broken
-in under 24 hours. However, there is a modern implementation of DES
-called Triple DES, where the DES method is applied three times to each
-data block.
-
-The main advantages to symmetric systems are the ease of use, since only
-one key is required for both encryption and decryption, and the
-simplicity of the algorithms. This makes symmetric systems well-suited
-for bulk data encryption, which would unnecessarily waste time and power
-under asymmetric systems.
-
-However, symmetric systems have disadvantages to keep in mind. Since the
-key is private, it can be difficult to safely distribute keys to
-communication partners. Additionally, the key cannot be used to sign
-messages since it's necessary to keep the key private.
-
-* Asymmetric Systems (Public Key Cryptography)
-Asymmetric cryptography utilizes two keys within the system: a secret
-key that is privately-held and a public key that can be distributed
-freely. The interesting aspect of asymmetric cryptography is that either
-key can be used to encrypt the data; there's no rule that dictates which
-key must be used for encryption. Once one key is used to encrypt the
-data, only the other key can be used to decrypt the data. This means
-that if the private key encrypts the data, only the public key can
-decrypt the data.
-
-An advantage of this system is that if you successfully decrypt data
-using one of the keys, you can be sure of the sender since only the
-other key could have encrypted the data.
-
-One of the major implementations of an asymmetric system is a digital
-signature. A digital signature can be generated using the sender's
-private key or a one-way hash function and is used to provide assurance
-for the integrity and authenticity of the message. A couple of common
-message digest algorithms are SHA-256 and SHA-512, which securely
-compress data and produce fixed-length digests of 256 and 512 bits,
-respectively.
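-
-To make the digest idea concrete, here is a short sketch using Python's
-standard =hashlib= module (the message text below is arbitrary); notice
-that the digest length stays fixed no matter how long the input is:
-
-#+begin_src python
-# A message digest is a fixed-size fingerprint of arbitrary input.
-import hashlib
-
-message = b"an arbitrary message of any length"
-digest = hashlib.sha256(message).hexdigest()
-
-print(digest)           # 64 hexadecimal characters
-print(len(digest) * 4)  # 256 bits, regardless of input length
-#+end_src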
It should be noted that man-in-the-middle attacks are one of the risks
with digital signatures and public keys. To combat this, applications
often use a public key infrastructure (PKI) to independently
authenticate the validity of signatures and keys.

Due to the large key size and
[[https://crypto.stackexchange.com/a/591][inefficient mathematical
functions]] of asymmetric encryption, elliptic curve cryptography
(ECC) is often used to increase security while using fewer resources.

* Applications of Cryptographic Systems
There are quite a few implementations of cryptographic systems around
the world. Here are a few popular examples:

*Transport Layer Security (TLS):* One of the most famous cryptographic
solutions created is TLS, a session-layered or connection-layered
internet protocol that allows for secure communications between browsers
and servers. Using handshakes, peer negotiation, and authentication
allows TLS to prevent eavesdropping and malicious transformation of
data. A major reason for TLS's popularity is that a serious
vulnerability was found in the SSL protocol in 2014. Instead of SSL, TLS
can be used with HTTP to form HTTPS and is the preferred method for
modern web development due to its increased security.

*Secure Hypertext Transfer Protocol (HTTPS):* An application layer
protocol that allows for secure transport of data between servers and
web clients. One of the unique parts of HTTPS is that it uses a secured
port number (443) instead of the default web port (80).

*Virtual Private Network (VPN):* VPNs are made to securely extend a
private network across public networks by utilizing an encrypted layered
tunneling protocol paired with an authentication method, such as
usernames and passwords. This technology originally allowed remote
employees to access their company's data, but has evolved into one of
the top choices for anyone who wishes to mask their sensitive personal
data.

*Internet Protocol Security (IPSec):* This protocol suite facilitates
communication between two or more hosts or subnets by authenticating and
encrypting the data packets. IPSec is used in a lot of VPNs to establish
the VPN connection through the transport and tunnel mode encryption
methods. IPSec encrypts just the data portion of packets in transport
mode, but it encrypts both the data and headers in tunnel mode
(introducing an additional header for authentication).

*Secure Shell (SSH):* SSH is another network protocol used to protect
network services by authenticating users through a secure channel. This
protocol is often used for command-line (shell) functions such as remote
shell commands, logins, and file transfers.

*Kerberos:* Developed by MIT, Kerberos is a computer-network
authentication protocol that works on the basis of tickets to allow
nodes communicating over a non-secure network to prove their identity to
one another securely. This is most commonly seen in business
environments, where it serves as the authentication and encryption
method for Windows Active Directory (AD).

* Cybersecurity Controls
If you need to control the risks associated with utilizing a
cryptographic system, start with a few basic controls:

- *Policies:* A policy on the use of cryptographic controls for
  protection of information is implemented and is in accordance with
  organizational objectives.
- *Key management:* A policy on the use, protection, and lifetime of
  cryptographic keys is implemented through the entire application
  lifecycle.
- *Key size:* The organization has researched the optimal key size for
  their purposes, considering national laws, required processing power,
  and longevity of the solution.
- *Algorithm selection:* Implemented algorithms are sufficiently
  appropriate for the business of the organization, robust, and align
  with recommended guidelines.
- *Protocol configuration:* Protocols have been reviewed and configured
  suitably for the purpose of the business.
diff --git a/blog/curseradio/index.org b/blog/curseradio/index.org
deleted file mode 100644
index fb2c55b..0000000
--- a/blog/curseradio/index.org
+++ /dev/null
@@ -1,95 +0,0 @@
#+title: CurseRadio: Listening to the Radio on the Command Line
#+date: 2022-07-25
#+description: Use Curse Radio to listen to radio on the command line.
#+filetags: :linux:

* Overview
While exploring some interesting Linux applications, I stumbled across
[[https://github.com/chronitis/curseradio][curseradio]], a command-line
radio player based on Python.

This application is fantastic and incredibly easy to install, so I
wanted to dedicate a post today to this app. Let's look at the features
within the app and then walk through the installation process I took to
get =curseradio= working.

* Features
#+caption: curseradio
[[https://img.cleberg.net/blog/20220725-curseradio/curseradio.png]]

The radio player itself is quite minimal. As you can see in the
screenshot above, it contains a simple plaintext list of all available
categories, which can be broken down further and further. In addition,
radio shows are available for listening, alongside regular radio
stations.

For example, the =Sports= > =Pro Basketball= > =Shows= category contains
a number of specific shows related to professional basketball.

Aside from being able to play any of the listed stations/shows, you can
make a channel your favorite by pressing =f=. It will then show up at the
top of the radio player in the =Favourites= category.

** Commands/Shortcuts
| Key(s)     | Command                         |
|------------+---------------------------------|
| ↑, ↓       | navigate                        |
| PgUp, PgDn | navigate quickly                |
| Home, End  | to top/bottom                   |
| Enter      | open/close folders, play stream |
| k          | stop playing stream             |
| q          | quit                            |
| f          | toggle favourite                |

* Installation
** Dependencies
Before installing =curseradio=, a handful of system and Python packages
are required. To get started, install =python3=, =pip= (packaged as
=python3-pip= on Fedora), and =mpv= on your system. In this example, I'm
using Fedora Linux, which uses the =dnf= package manager. You may need
to adjust this if you're using a different system.

#+begin_src sh
sudo dnf install python3 python3-pip mpv
#+end_src

Next, use =pip3= to install =requests=, =xdg=, and =lxml=:

#+begin_src sh
pip3 install requests xdg lxml
#+end_src

** Repository Source Installation
Once all the dependencies are installed, we can clone the source code
and enter that directory:

#+begin_src sh
git clone https://github.com/chronitis/curseradio && cd curseradio
#+end_src

Once you're within the =curseradio= directory, you can install the
application with the provided =setup.py= script.

#+begin_src sh
sudo python3 setup.py install
#+end_src

In my case, I ran into a few errors and needed to create the folders
that curseradio wanted to use for its installation.
If you don't get any
errors, you can skip this and run the app.

#+begin_src sh
sudo mkdir /usr/local/lib/python3.10/
sudo mkdir /usr/local/lib/python3.10/site-packages/
#+end_src

#+begin_src sh
sudo chown -R $USER:$USER /usr/local/lib/python3.10/
#+end_src

* Run the Application
Once fully installed without errors, you can run the application!

#+begin_src sh
python3 /usr/local/bin/curseradio
#+end_src
diff --git a/blog/customizing-ubuntu/index.org b/blog/customizing-ubuntu/index.org
deleted file mode 100644
index 6461a9a..0000000
--- a/blog/customizing-ubuntu/index.org
+++ /dev/null
@@ -1,195 +0,0 @@
#+title: Beginner's Guide: Customizing Ubuntu
#+date: 2020-05-19
#+description: A beginner's guide to customizing the Ubuntu operating system.
#+filetags: :linux:

* More Information
For inspiration on designing your *nix computer, check out the
[[https://libredd.it/r/unixporn][r/unixporn]] subreddit!

* Customizing Ubuntu
New to Linux and want to add a personal touch to your machine? One of
the best perks of Linux is that it is *extremely* customizable. You can
change the styles of the windows, shell (status bars/docks), icons,
fonts, terminals, and more.

In this post, I'm going to go through customization on Ubuntu 20.04
(GNOME) since most new users tend to choose Ubuntu-based distros. If
you've found a way to install Arch with i3-gaps, I'm assuming you know
how to find more advanced tutorials out there on customizations.

** Required Tools
#+caption: Gnome Tweaks
[[https://img.cleberg.net/blog/20200519-customizing-ubuntu/gnome-tweaks-min.png]]

Ubuntu 20.04 ships with the default desktop environment
[[https://www.gnome.org/][Gnome]], which includes the handy
=gnome-tweaks= tool to quickly change designs. To install this, just
open your terminal and enter the following command:

#+begin_src sh
sudo apt install gnome-tweaks
#+end_src

After you've finished installing the tool, simply launch the Tweaks
application, and you'll be able to access the various customization
options available by default on Ubuntu. You might even like some of the
pre-installed options.

** GNOME Application Themes
To change the themes applied to applications in GNOME, you will need to
change the Applications dropdown in the Appearance section of Tweaks. To
add more themes, you will have to find your preferred theme online and
follow the steps below to have it show up in the Tweaks tool. While you
may find themes anywhere, one of the most popular sites for GNOME themes
is [[https://www.gnome-look.org/][gnome-look.org]]. This website
contains themes for applications, shells, icons, and cursors.

Steps to import themes into Tweaks:

1. Download the theme.
2. These files are usually compressed (.zip, .tar.gz, .tar.xz), so you
   will need to extract the contents. This is easiest when opening the
   file explorer, right-clicking the compressed file, and choosing
   "Extract here."
3. Move the theme folder to =/usr/share/themes/=. You can do so with the
   following command: =sudo mv theme-folder/ /usr/share/themes/= (see
   the terminal sketch just after this list).
   - Icons and cursors will be moved to the =/usr/share/icons/= folder.
   - Fonts will be moved to the =/usr/share/fonts/= folder.
     Alternatively, you can move them to the
     =/usr/share/fonts/opentype/= or =/usr/share/fonts/truetype/=
     folders, if you have a specific font type.
4. Close Tweaks if it is open. Re-open Tweaks and your new theme will be
   available in the Applications dropdown in the Appearance section of
   Tweaks.
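If you prefer the terminal, the same flow from steps 1-3 looks something
like this (a sketch, assuming a hypothetical theme archive named
=Mojave-Dark.tar.xz= sitting in your Downloads folder):

#+begin_src sh
cd ~/Downloads

# Extract the compressed theme archive
tar -xf Mojave-Dark.tar.xz

# Move the extracted folder into the system-wide themes directory
sudo mv Mojave-Dark/ /usr/share/themes/
#+end_src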
If the theme is not showing up after you've moved it into the themes
folder, you may have uncompressed the folder into a sub-folder. You can
check this by entering the theme folder and listing the contents:

#+begin_src sh
cd /usr/share/themes/Mojave-Dark && ls -la
#+end_src

This is an example of what the contents of your theme folder should look
like. If you just see another folder there, you should move that folder
up into the =/usr/share/themes/= folder.

#+begin_src sh
cinnamon COPYING gnome-shell gtk-2.0 gtk-3.0 index.theme metacity-1 plank xfwm4
#+end_src

** GNOME Shell Themes
To change the appearance of the title bar, default dock, app menu, and
other parts of the GNOME shell, you'll need to install the
[[https://extensions.gnome.org/extension/19/user-themes/][user themes]]
extension on [[https://extensions.gnome.org/][Gnome Extensions]]. To be
able to install extensions, you will first need to install the browser
extension that the website instructs you to. See this screenshot for the
blue box with a link to the extension.

#+caption: Gnome Extensions
[[https://img.cleberg.net/blog/20200519-customizing-ubuntu/gnome-extensions-min.png]]

After the browser extension is installed, you will need to install the
native host connector:

#+begin_src sh
sudo apt install chrome-gnome-shell
#+end_src

Finally, you can go to the
[[https://extensions.gnome.org/extension/19/user-themes/][user themes]]
extension page and click the installation button. This will enable the
Shell option in Tweaks. Now you can move shell themes to the
=/usr/share/themes= directory, using the same steps mentioned in the
previous section, and enable the new theme in Tweaks.

** Icons & Cursors
Icons and cursors are installed exactly the same way, so I'm grouping
these together in this post. Both of these items will need to follow the
same process as installing themes, except you will want to move your
icon and cursor folders to the =/usr/share/icons/= directory instead.

** Fonts
Fonts are one of the overlooked parts of customization, but a good font
can make the whole screen look different. For example, I have installed
the [[https://github.com/IBM/plex/releases][IBM Plex]] fonts on my
system. This follows the same process as installing themes, except you
will want to move your font folders to the =/usr/share/fonts/= directory
instead.

** Terminal
If you spend a lot of time typing commands, you know how important the
style and functionality of the terminal is. After spending a lot of time
using the default GNOME terminal with the
[[https://en.wikipedia.org/wiki/Bash_(Unix_shell)][bash]] shell, I
decided to try some different options. I ended up choosing
[[https://terminator-gtk3.readthedocs.io/en/latest/][Terminator]] with
[[https://en.wikipedia.org/wiki/Z_shell][zsh]].

Terminator is great if you need to open multiple terminals at one time
by simply right-clicking and splitting the screen into as many terminals
as you want. While this project hasn't been updated in a while,
[[https://github.com/gnome-terminator/terminator/issues/1][it is coming
under new development]]. This terminal is great, and I haven't
experienced any errors yet.

For the shell choice, I decided on zsh after trying it out on a
fresh Manjaro installation. Zsh is great if you like to change the
themes of your terminal, include icons, or add plugins.
My terminal uses the
[[https://github.com/zsh-users/zsh-autosuggestions][zsh-autosuggestions]]
plugin to suggest past commands as you type. In addition, it suggests
corrections if you misspell a command. Lastly, it uses the =af-magic=
theme, which adds dashed lines between commands, moves the user@host
tag to the right side of the terminal, and changes the colors. There are
plenty of plugins and themes to choose from. Just figure out what you
like and add it to your =~/.zshrc= file!

*** Steps to Replicate My Terminal
To install zsh on Ubuntu, enter the following command into a terminal:

#+begin_src sh
sudo apt install zsh
#+end_src

Then, enter the next command to activate zsh:

#+begin_src sh
sudo chsh -s $(which zsh) $(whoami)
#+end_src

To install Terminator on Ubuntu:

#+begin_src sh
sudo apt install terminator
#+end_src

To install Oh My Zsh on Ubuntu:

#+begin_src sh
sh -c "$(curl -fsSL https://raw.github.com/ohmyzsh/ohmyzsh/master/tools/install.sh)"
#+end_src

To install zsh-autosuggestions via Oh My Zsh:

#+begin_src sh
git clone https://github.com/zsh-users/zsh-autosuggestions ${ZSH_CUSTOM:-~/.oh-my-zsh/custom}/plugins/zsh-autosuggestions
#+end_src

Then, add the following plugin wording to your =~/.zshrc= file (the
default config usually has the =git= plugin activated, so just add any
other plugins to the parentheses separated by a space):

#+begin_src sh
nano ~/.zshrc
#+end_src

#+begin_src sh
plugins=(git zsh-autosuggestions)
#+end_src

Finally, you need to log out of your computer and log back in so your
user shell can refresh.
diff --git a/blog/daily-poetry/index.org b/blog/daily-poetry/index.org
deleted file mode 100644
index e150c8b..0000000
--- a/blog/daily-poetry/index.org
+++ /dev/null
@@ -1,208 +0,0 @@
#+title: Daily Plaintext Poetry via Email
#+date: 2022-06-22
#+description: A small project to automatically deliver poetry to your inbox daily.
#+filetags: :selfhosting:

* Source Code
I don't want to bury the lede here, so if you'd like to see the full
source code I use to email myself plaintext poems daily, visit the
repository: [[https://git.cleberg.net/daily-poem.git][daily-poem.git]].

* My Daily Dose of Poetry
Most of my programming projects are small, random projects that are made
strictly to fix some small problem I have or enhance my quality of life.

In this case, I was looking for a simple and easy way to get a daily
dose of literature or poetry to read in the mornings.

However, I don't want to sign up for a random mailing list on just any
website. I also don't want to have to work to find the reading content
each morning, as I know I would simply give up and stop reading daily.

Thus, I found a way to deliver poetry to myself in plain-text format, on
a daily basis, and scheduled to deliver automatically.

* Prerequisites
This solution uses Python and email, so the process below requires
the following to be installed:

1. An SMTP server, which can be as easy as installing =mailutils= if
   you're on a Debian-based distro.
2. Python (& pip!)
3. The following Python packages: =email=, =smtplib=, =json=, and
   =requests= (the first three ship with Python's standard library;
   only =requests= needs to be installed via pip)

* Breaking Down the Logic
I want to break down the logic for this program, as it's quite simple
and informational.
- -** Required Packages -This program starts with a simple import of the required packages, so I -wanted to explain why each package is used: - -#+begin_src python -from email.mime.text import MIMEText # Required for translating MIMEText -import smtplib # Required to process the SMTP mail delivery -import json # Required to parse the poetry API results -import requests # Required to send out a request to the API -#+end_src - -** Sending the API Request -Next, we need to actually send the API request. In my case, I'm calling -a random poem from the entire API. If you want, you can call specific -poems or authors from this API. - -#+begin_src python -json_data = requests.get('https://poetrydb.org/random').json() -#+end_src - -This gives us the following result in JSON: - -#+begin_src json -[ - { - "title": "Sonnet XXII: With Fools and Children", - "author": "Michael Drayton", - "lines": [ - "To Folly", - "", - "With fools and children, good discretion bears;", - "Then, honest people, bear with Love and me,", - "Nor older yet, nor wiser made by years,", - "Amongst the rest of fools and children be;", - "Love, still a baby, plays with gauds and toys,", - "And, like a wanton, sports with every feather,", - "And idiots still are running after boys,", - "Then fools and children fitt'st to go together.", - "He still as young as when he first was born,", - "No wiser I than when as young as he;", - "You that behold us, laugh us not to scorn;", - "Give Nature thanks you are not such as we.", - "Yet fools and children sometimes tell in play", - "Some, wise in show, more fools indeed than they." - ], - "linecount": "15" - } -] -#+end_src - -** Parsing the API Results -In order to parse this into a readable format, we need to use the =json= -package and extract the fields we want. In the example below, I am -grabbing every field presented by the API. - -For the actual poem content, we need to loop over each line in the -=lines= variable since each line is a separate string by default. - -#+begin_quote -You /could/ also extract the title or author and make another call out -to the API to avoid having to build the plaintext poem with a loop, but -it just doesn't make sense to me to send multiple requests when we can -create a simple loop on our local machine to work with the data we -already have. - -For -[[https://poetrydb.org/title/Sonnet%20XXII:%20With%20Fools%20and%20Children/lines.text][example]], -look at the raw data response of this link to see the poem's lines -returned in plaintext. - -#+end_quote - -#+begin_src python -title = json_data[0]['title'] -author = json_data[0]['author'] -line_count = json_data[0]['linecount'] -lines = '' -for line in json_data[0]['lines']: - lines = lines + line + "\n" -#+end_src - -** Composing the Email -Now that I have all the data I need, I just need to compose it into a -message and prepare the message metadata. - -For my daily email, I want to see the title of the poem first, followed -by the author, then a blank line, and finally the full poem. This code -snippet combines that data and packages it into a MIMEText container, -ready to be emailed. 
#+begin_src python
msg_body = title + "\n" + author + "\n\n" + lines
msg = MIMEText(msg_body)
#+end_src

Before we send the email, we need to prepare the metadata (subject,
from, to, etc.):

#+begin_src python
sender_email = 'example@server.local'
recipient_emails = ['user@example.com']
msg['Subject'] = 'Your Daily Poem (' + line_count + ' lines)'
msg['From'] = sender_email
# The To header expects a single string, so join multiple recipients
msg['To'] = ', '.join(recipient_emails)
#+end_src

** Sending the Email
Now that I have everything ready to be emailed, the last step is to
simply connect to an SMTP server and send the email out to the
recipients. In my case, I installed =mailutils= on Ubuntu and let my
SMTP server be =localhost=.

#+begin_src python
smtp_server = 'localhost'
s = smtplib.SMTP(smtp_server)
s.sendmail(sender_email, recipient_emails, msg.as_string())
s.quit()
#+end_src

* The Result!
Instead of including a screenshot, I've copied the contents of the email
that was delivered to my inbox below since I set this process up in
plaintext format.

#+begin_src txt
Date: Wed, 22 Jun 2022 14:37:19 +0000 (UTC)
From: REDACTED
To: REDACTED
Subject: Your Daily Poem (36 lines)
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit
Content-Type: text/plain; charset=utf-8

Sonnet XXII: With Fools and Children
Michael Drayton

With fools and children, good discretion bears;
Then, honest people, bear with Love and me,
Nor older yet, nor wiser made by years,
Amongst the rest of fools and children be;
Love, still a baby, plays with gauds and toys,
And, like a wanton, sports with every feather,
And idiots still are running after boys,
Then fools and children fitt'st to go together.
He still as young as when he first was born,
No wiser I than when as young as he;
You that behold us, laugh us not to scorn;
Give Nature thanks you are not such as we.
Yet fools and children sometimes tell in play
Some, wise in show, more fools indeed than they.
#+end_src

* Scheduling the Daily Email
Last, but not least, is scheduling this Python script with =crontab=. To
schedule a script to run daily, you can add it to the =crontab= file. To
do this, open =crontab= in editing mode:

#+begin_src sh
crontab -e
#+end_src

In the file, simply paste the following snippet at the bottom of the
file and ensure that the file path is correctly pointing to wherever you
saved your Python script:

#+begin_src config
0 8 * * * python3 /home//dailypoem/main.py
#+end_src

We have now set up the script and scheduled it to run daily at 08:00!
diff --git a/blog/debian-and-nginx/index.org b/blog/debian-and-nginx/index.org
deleted file mode 100644
index d346f82..0000000
--- a/blog/debian-and-nginx/index.org
+++ /dev/null
@@ -1,172 +0,0 @@
#+title: Migrating to a New Web Server Setup with Debian, Nginx, and Agate
#+date: 2022-02-16
#+description: A retrospective on my recent server migration.
#+filetags: :sysadmin:

* Server OS: Debian
#+caption: Debian + neofetch
[[https://img.cleberg.net/blog/20220216-migrating-to-debian-and-nginx/neofetch.png]]

I've used various Linux distributions throughout the years, but I've
never used anything except Ubuntu for my servers. Why? I really have no
idea, mostly just comfort around the commands and software availability.

However, I have always wanted to try Debian as a server OS after testing
it out in a VM a few years ago (side-note: I'd love to try Alpine too,
but I always struggle with compatibility).
So, I decided to launch a new
VPS and use [[https://www.debian.org][Debian]] 11 as the OS. Spoiler
alert: it feels identical to Ubuntu for my purposes.

I did the normal things when first launching the VPS, such as adding a
new user, locking down SSH, etc. If you want to see that level of
detail, read my other post about
[[https://cleberg.net/blog/how-to-set-up-a-vps-web-server/][How to Set
Up a VPS Web Server]].

All of this has been similar, apart from small things such as the
location of users' home folders. No complaints at all from me - Debian
seems great.

* Web Server: Nginx
#+caption: Nginx status
[[https://img.cleberg.net/blog/20220216-migrating-to-debian-and-nginx/nginx.png]]

Once I had the baseline server configuration set up for Debian, I moved
on to trying out [[https://nginx.org][Nginx]] as my web server software.
This required me to install the =nginx= and =ufw= packages, as well as
set up the initial UFW config:

#+begin_src sh
sudo apt install nginx ufw
sudo ufw allow 'Nginx Full'
sudo ufw allow SSH
sudo ufw enable
sudo ufw status
sudo systemctl status nginx
#+end_src

Once I had the firewall set, I moved on to creating the directories and
files for my website. This is very easy and is basically the same as
setting up an Apache server, so no struggles here.

#+begin_src sh
sudo mkdir -p /var/www/your_domain/html
sudo chown -R $USER:$USER /var/www/your_domain/html
sudo chmod -R 755 /var/www/your_domain
nano /var/www/your_domain/html/index.html
#+end_src

The next part, creating the Nginx configuration files, is quite a bit
different from Apache. First, you need to create the files in the
=sites-available= folder and symlink them to the =sites-enabled= folder.

Creating the config file for your domain:

#+begin_src sh
sudo nano /etc/nginx/sites-available/your_domain
#+end_src

Default content for an Nginx config file:

#+begin_src conf
server {
    listen 80;
    listen [::]:80;

    root /var/www/your_domain/html;
    index index.html index.htm index.nginx-debian.html;

    server_name your_domain www.your_domain;

    location / {
        try_files $uri $uri/ =404;
    }
}
#+end_src

Finally, symlink it together:

#+begin_src sh
sudo ln -s /etc/nginx/sites-available/your_domain /etc/nginx/sites-enabled/
#+end_src

This will make your site available to the public (as long as you have
=your_domain= DNS records pointed at the server's IP address)!

Next, I used [[https://certbot.eff.org/][certbot]] to issue an HTTPS
certificate for my domains using the following commands:

#+begin_src sh
sudo apt install snapd; sudo snap install core; sudo snap refresh core
sudo snap install --classic certbot
sudo ln -s /snap/bin/certbot /usr/bin/certbot
sudo certbot --nginx
#+end_src

Now that certbot ran successfully and updated my Nginx config files to
include a =443= server block of code, I went back in and edited the
config file to include security HTTP headers. This part is optional, but
is recommended for security purposes; you can even test a website's HTTP
header security at [[https://securityheaders.com/][Security Headers]].

The configuration below shows a set-up where you only want your website
to serve content from its own domain, except for images and scripts,
which may come from =nullitics.com=. All other content would be blocked
from loading in a browser.

#+begin_src sh
sudo nano /etc/nginx/sites-available/your_domain
#+end_src

#+begin_src conf
server {
    ...
    add_header Content-Security-Policy "default-src 'none'; img-src 'self' https://nullitics.com; script-src 'self' https://nullitics.com; style-src 'self'; font-src 'self'";
    add_header X-Content-Type-Options "nosniff";
    add_header X-XSS-Protection "1; mode=block";
    add_header X-Frame-Options "DENY";
    add_header Strict-Transport-Security "max-age=63072000; includeSubDomains";
    add_header Referrer-Policy "no-referrer";
    ...
}
#+end_src

#+begin_src sh
sudo systemctl restart nginx
#+end_src

** Nginx vs. Apache
As I stated at the beginning, my historical hesitation with trying Nginx
was that the differences in configuration formats scared me away from
leaving Apache. However, I prefer Nginx to Apache for a few reasons:

1. Nginx uses only one config file (=your_domain=) vs. Apache's two-file
   approach for HTTP vs. HTTPS (=your_domain.conf= and
   =your_domain-le-ssl.conf=).
2. Symlinking new configuration files and reloading Nginx are way
   easier than Apache's process of having to enable headers with
   =a2enmod mod_headers=, enable PHP with =a2enmod php= (plus any other
   mods you need), enable sites with =a2ensite=, and THEN reload Apache.
3. The contents of the Nginx config files seem more organized and
   logical with the curly-bracket approach. This is a minor reason, but
   everything just felt cleaner while I was installing my sites, and
   that had a big quality-of-life impact on the installation for me.

They're both great software packages, but Nginx just seems more
organized and easier to use these days. I will certainly be exploring
the Nginx docs to see what other fun things I can do with all of this.

* Gemini Server: Agate
#+caption: Agate status
[[https://img.cleberg.net/blog/20220216-migrating-to-debian-and-nginx/agate.png]]

Finally, I set up the Agate software on this server again to host my
Gemini server content, using Rust as I have before. You can read my
other post for more information on installing Agate:
[[https://cleberg.net/blog/hosting-a-gemini-server/][Hosting a Gemini
Server]].

All in all, Debian + Nginx is very slick and I prefer it over my old
combination of Ubuntu + Apache (although it's really just Nginx > Apache
for me, since Debian seems mostly the same as Ubuntu so far).
diff --git a/blog/delete-gitlab-repos/index.org b/blog/delete-gitlab-repos/index.org
deleted file mode 100644
index e8ea28f..0000000
--- a/blog/delete-gitlab-repos/index.org
+++ /dev/null
@@ -1,110 +0,0 @@
#+title: How to Delete All GitLab Repositories
#+date: 2021-07-15
#+description: Learn how to delete all GitLab repositories in your account.
#+filetags: :dev:

* Background
Have you ever used GitLab to host your source code, moved to a different
host, and wanted to delete everything from your GitLab account? Well,
this post covers any scenario where you would want to delete all
repositories from your GitLab account.

For me, I currently maintain around 30 repositories and don't like to
manually delete them whenever I switch hosts. GitHub has a few different
tools online to delete all repositories for you, but I have not found
anything similar for GitLab, so I needed an alternative solution.

* Use a Python Script
** Requirements
Before we look at the script, make sure you know your GitLab username.
Next, [[https://gitlab.com/-/profile/personal_access_tokens][create an
authorization token]] so that the Python script can delete your
repositories. Don't lose this token or else you'll need to create a new
one.
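Before running the script below, it's worth sanity-checking that the
token works. Here's a quick sketch, assuming =curl= is installed and
your token is stored in a hypothetical =TOKEN= shell variable (this
lists a few of the projects you own):

#+begin_src sh
TOKEN="your-authorization-token"

# A JSON array in the response means the token is valid
curl --silent --header "Authorization: Bearer $TOKEN" \
  "https://gitlab.com/api/v4/projects?owned=true&per_page=5"
#+end_src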
** Create the Script
To run a Python script, you must first create it. Open a terminal and
enter the following commands in whichever directory you prefer to store
the script. You can do the same things in a file manager if you prefer.

#+begin_src sh
mkdir delete-gitlab
#+end_src

#+begin_src sh
cd delete-gitlab
#+end_src

#+begin_src sh
nano main.py
#+end_src

Enter the following code into your =main.py= script.

#+begin_src python
import requests
import json


def get_project_ids():
    # Collect the IDs of all projects owned by the user
    url = "https://gitlab.com/api/v4/users/{user-id}/projects"

    querystring = {"owned": "true", "simple": "true", "per_page": "50"}

    payload = ""
    headers = {'authorization': 'Bearer {auth-token}'}

    response = requests.request("GET", url, data=payload, headers=headers, params=querystring)

    projects = json.loads(response.text)
    projects_ids = list(map(lambda project: project.get('id'), projects))

    return projects_ids


def remove_project(project_id):
    # Delete a single project by its ID
    url_temp = "https://gitlab.com/api/v4/projects/{project}"
    headers = {'authorization': 'Bearer {auth-token}'}
    querystring = ""
    payload = ""

    url = url_temp.format(project=project_id)

    response = requests.request("DELETE", url, data=payload, headers=headers, params=querystring)
    project = json.loads(response.text)
    print(project)


def main():
    projects_ids = get_project_ids()

    url_temp = "https://gitlab.com/api/v4/projects/{project}"
    headers = {'authorization': 'Bearer {auth-token}'}
    querystring = ""
    payload = ""

    # Look up each project's name for logging, then delete it
    for project_id in projects_ids:
        url = url_temp.format(project=project_id)

        response = requests.request("GET", url, data=payload, headers=headers, params=querystring)
        project = json.loads(response.text)
        print(str(project.get('id')) + " " + project.get('name'))
        print("Removing...")
        remove_project(project_id)


if __name__ == "__main__":
    main()
#+end_src

Now that you have the proper information, replace ={user-id}= with your
GitLab username and ={auth-token}= with the authorization token you
created earlier.

Finally, simply run the script and watch the output. You can also use
PyCharm Community Edition to edit and run the Python script if you don't
want to work in a terminal.

#+begin_src sh
python3 main.py
#+end_src
diff --git a/blog/digital-minimalism/index.org b/blog/digital-minimalism/index.org
deleted file mode 100644
index 84894d9..0000000
--- a/blog/digital-minimalism/index.org
+++ /dev/null
@@ -1,100 +0,0 @@
#+title: Digital Minimalism
#+date: 2023-10-04
#+description: My personal retrospective on digital minimalism.
#+filetags: :personal:

I've written [[/wiki/#digital-garden][a note about minimalism]] before,
but I wanted to dedicate some time to reflect on digital minimalism and
how I've been able to minimize the impact of digital devices in my life.

#+begin_quote
These changes crept up on us and happened fast, before we had a chance
to step back and ask what we really wanted out of the rapid advances of
the past decade. We added new technologies to the periphery of our
experience for minor reasons, then woke one morning to discover that
they had colonized the core of our daily life. We didn't, in other
words, sign up for the digital world in which we're currently
entrenched; we seem to have stumbled backward into it.
/(Digital Minimalism, 2019)/

#+end_quote

* The Principles of Digital Minimalism
As noted in Cal Newport's book, /Digital Minimalism/, there are three
main principles to digital minimalism that I tend to agree with:

1. Clutter is costly.
   - Digital minimalists recognize that cluttering their time and
     attention with too many devices, apps, and services creates an
     overall negative cost that can swamp the small benefits that each
     individual item provides in isolation.
2. Optimization is important.
   - Digital minimalists believe that deciding a particular technology
     supports something they value is only the first step. To truly
     extract its full potential benefit, it's necessary to think
     carefully about how they'll use the technology.
3. Intentionality is satisfying.
   - Digital minimalists derive significant satisfaction from their
     general commitment to being more intentional about how they engage
     with new technologies. This source of satisfaction is independent
     of the specific decisions they make and is one of the biggest
     reasons that minimalism tends to be immensely meaningful to its
     practitioners.

* Taking Action
In order to put the logic into practice, I've created a few new habits
and continued performing old habits that are working well:

** Using Devices With Intention
- I already rarely use "social media", mostly limited to forums such as
  Hacker News and Tildes, so I've just tweaked my behavior to stop
  looking for content in those places when I'm bored.
- Use devices with intention. Each time I pick up a digital device,
  there should be an intention to use the device to improve my current
  situation. No more endless scrolling or searching for something to
  interest me.

** Prevent Distractions
- Disable (most) notifications on all devices. I spent 15-30 minutes
  going through the notifications on my phone, watch, and computer to
  ensure that only a select few apps have the ability to interrupt me:
  Calendar, Messages, Phone, Reminders, & Signal.
- Disable badges for any apps except the ones mentioned in the bullet
  above.
- Set up focus profiles across devices so that I can enable different
  modes, such as Personal, when I only want to see notifications from
  people I care about, or Do Not Disturb, where absolutely nothing can
  interrupt me.
- Clean up my home screens. This one was quite easy as I already
  maintain a minimalist set-up, but I went extreme by limiting my phone
  to just eight apps on the home screen and four in the dock. If I need
  another app, I'll have to search or use the app library.
- Remove the work profile from my phone. This was a tough decision, as
  having my work profile on my device definitely makes my life easier at
  times, but it also has quite a negative effect when I'm "always
  online" and can see the notifications and team activity 24/7. I
  believe creating a distinct barrier between my work and personal
  devices will be beneficial in the end.

** Creating Alternative Activities
This is the most difficult piece, as most of my hobbies and interests
lie in the digital world. However, I'm making a concerted effort to put
devices down unless necessary and force myself to perform other
activities in the physical world instead.

I've started with a few basics that are always readily available to me:

- Do a chore, such as organizing or cleaning.
- Read a book, study a piece of art, etc.
- Exercise or get outdoors.
- Participate in a hobby, such as photography, birding, disc golf, etc.
- Let yourself be bored and wander into creativity.

* Making Progress
I'll be taking notes as I continue down this journey and hope to see
positive trends. I've always been a minimalist in the physical world,
and it feels refreshing to filter out the clutter that has come to
dominate my digital life over the years.

I'm excited to see where this journey leads.
diff --git a/blog/ditching-cloudflare/index.org b/blog/ditching-cloudflare/index.org
deleted file mode 100644
index 51a63c6..0000000
--- a/blog/ditching-cloudflare/index.org
+++ /dev/null
@@ -1,89 +0,0 @@
#+title: Ditching Cloudflare for Njalla
#+date: 2022-06-01
#+description: A retrospective on my decision to leave Cloudflare and move to Njalla for domain registration and DNS.
#+filetags: :sysadmin:

* Registrar
After spending a year or so using Cloudflare for DNS only - no proxying
or applications - I spent the last few months using Cloudflare Tunnels
and Cloudflare Access to protect my self-hosted websites and
applications via their proxy traffic model.

However, I have never liked using Cloudflare due to their increasingly
large share of control over web traffic, as well as their business model
of being a MITM for all of your traffic.

So, as of today, I have switched over to [[https://njal.la][Njalla]] as
my registrar and DNS manager. I was able to transfer my domains over
easily and rapidly, with only one domain taking more than 15-30 minutes
to propagate.

+I do still have two domains sitting at Cloudflare for the moment while
I decide if they're worth the higher rates (one domain is 30€ and the
other is 45€).+

#+begin_quote
*Update (2022.06.03)*: I ended up transferring my final two domains over
to Njalla, clearing my Cloudflare account of personal data, and deleting
the Cloudflare account entirely. /I actually feel relieved to have moved
on to a provider I trust./

#+end_quote

* DNS
As noted above, I'm using Njalla exclusively for DNS configurations on
my domains.

However, the transfer process was not ideal. As soon as the domains
transferred over, I switched the nameservers from Cloudflare to Njalla
and lost most of the associated DNS records. So, the majority of the
time spent during the migration was simply re-typing all the DNS records
back in one-by-one.

This would be much simpler if I were able to edit the plain-text format
of the DNS configuration. I was able to do that at a past registrar
(perhaps it was [[https://gandi.net/][Gandi.net]]?) and it made life a
lot easier.

** Dynamic DNS Updates
I have built an easy Python script to run (or set up in =cron= to run
automatically) that will check my server's IPv4 and IPv6, compare them
to Njalla, and update the DNS records if they don't match. You can see
the full script and process in my other post:
[[../njalla-dns-api/][Updating Dynamic DNS with Njalla API]].

I haven't used this other method, but I do know that you can create
=Dynamic= DNS records with Njalla that
[[https://njal.la/docs/ddns/][work for updating dynamic subdomains]].

** Njalla's DNS Tool
One neat upside to Njalla is that they have a
[[https://check.njal.la/dns/][DNS lookup tool]] that provides a lot of
great information for those of you (AKA: me) who hate using the =dig=
command.

This was very useful for monitoring a couple of my transferred domains
to see when the changes in nameservers, records, and DNSSEC went into
effect.
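If you do end up stuck with the command line, the same checks are
possible with =dig=. A quick sketch, using a hypothetical
=example.com= domain:

#+begin_src sh
# Confirm which nameservers the domain currently points to
dig NS example.com +short

# Check a specific record and ask for DNSSEC data in the answer
dig A example.com +dnssec +short
#+end_src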
* Tunnel
Cloudflare Tunnel is a service that acts as a reverse-proxy (hosted on
Cloudflare's servers) and allowed me to mask the private IP address of
the server hosting my various websites and apps.

However, as I was moving away from Cloudflare, I was not able to find a
suitable replacement that was both inexpensive and simple. So, I simply
went back to hosting [[/blog/set-up-nginx-reverse-proxy/][my own reverse
proxy with Nginx]]. With the recent additions of Unifi hardware in my
server/network rack, I am much more protected against spam and malicious
attacks at the network edge than I was before I switched to Cloudflare.

* Access
Cloudflare Access, another app I used in combination with Cloudflare
Tunnel, provided an authentication screen that required you to enter
valid credentials before Cloudflare would forward you to the actual
website or app (if the website/app has their own authentication, you'd
then have to authenticate a second time).

I did not replace this service with anything since I only host a handful
of non-sensitive apps that don't require duplicate authentication.
diff --git a/blog/dont-say-hello/index.org b/blog/dont-say-hello/index.org
deleted file mode 100644
index ef5662c..0000000
--- a/blog/dont-say-hello/index.org
+++ /dev/null
@@ -1,26 +0,0 @@
#+title: Don't Say Hello
#+date: 2024-01-08
#+description: A short post describing my displeasure with cliffhanger conversations.
#+filetags: :personal:

I recently came back from a winter break and have started working
again... only to immediately run into the dilemma of people sending me
cliffhanger messages again.

* No Hello
A year or two ago, I discovered [[https://nohello.net/en/][no hello]]
and have thought about it often since then. I've even sent it to a few
people (who wouldn't take offense to it).

I work in a fast-paced environment where efficiency is extremely
valuable. Therefore, I have always held a deep displeasure for
conversations where people start with "Hello" and then say nothing else
until I respond.

I searched back through my work messages and found that I received ~50
messages from ~10 people last year that contained "hi", "hey", or
"hello" and did not contain any indication of the purpose of the
conversation. I also noticed that a few of the users were responsible
for the large majority of the cliffhangers.

There's no real point to this post, just a desperate request for people
to please stop doing this.
diff --git a/blog/exiftool/index.org b/blog/exiftool/index.org
deleted file mode 100644
index 5735125..0000000
--- a/blog/exiftool/index.org
+++ /dev/null
@@ -1,60 +0,0 @@
#+title: Stripping Image Metadata with exiftool
#+date: 2022-02-17
#+description: A simple guide to remove exif data with exiftool.
#+filetags: :privacy:

** Why Strip Metadata?
Okay, so you want to strip metadata from your photos. Perhaps you take
pictures of very rare birds, and the location metadata is a gold mine
for poachers, or perhaps you're just privacy-oriented like me and prefer
to strip metadata from publicly-available images.

There are various components of image metadata that you may want to
delete before releasing a photo to the public.
Here's an incomplete list
of things I could easily see just by inspecting a photo on my laptop:

- Location (Latitude & Longitude)
- Dimensions
- Device Make & Model
- Color Space
- Color Profile
- Focal Length
- Alpha Channel
- Red Eye
- Metering Mode
- F Number

Regardless of your reasoning, I'm going to explain how I used the
=exiftool= package in Linux to automatically strip metadata from all
images in a directory (+ subdirectories).

** Installing =exiftool=
First things first: we need to install the tool. I'm running Debian 11
on my server (Ubuntu will work the same), so the command is as simple
as:

#+begin_src sh
sudo apt install exiftool
#+end_src

There are different tools that can accomplish the same thing across
distributions, but I really only care to test out this one package.

** Recursively Strip Data
I actually use this tool extensively to strip the metadata from any
photos uploaded to the website that serves all the images for my blog
(=img.cleberg.net=).

The following command is incredibly useful and can be modified to
include any image extensions that =exiftool= supports:

#+begin_src sh
exiftool -r -all= -ext jpg -ext png /path/to/directory/
#+end_src

See below for the results of my most recent usage of =exiftool= after I
uploaded the image for this blog post. You can see that the command will
let you know how many directories were scanned, how many images were
updated, and how many images were unchanged.

#+caption: exiftool results
[[https://img.cleberg.net/blog/20220217-stripping-metadata-with-exiftool/exiftool.png]]
diff --git a/blog/exploring-hare/index.org b/blog/exploring-hare/index.org
deleted file mode 100644
index 749e46f..0000000
--- a/blog/exploring-hare/index.org
+++ /dev/null
@@ -1,169 +0,0 @@
#+title: Exploring the Hare Programming Language
#+date: 2023-02-02
#+description: A retrospective on my first time using the Hare Programming Language.
#+filetags: :dev:

* A Quick Note
By no means am I a professional developer, so this post will be rather
short. I won't be going into depth on the specification or anything that
technical.

Instead, I will simply be talking about how I (a relatively basic
hobbyist programmer) have been playing with Hare and what intrigues me
about the language.

* Hare
The [[https://harelang.org][Hare]] programming language is a
straightforward language that should look familiar if you've ever
programmed with C, Rust, or other languages that aim to build software
at the system level.

The Hare homepage states the following:

#+begin_quote
Hare is a systems programming language designed to be simple, stable,
and robust. Hare uses a static type system, manual memory management,
and minimal runtime. It is well-suited to writing operating systems,
system tools, compilers, networking software, and other low-level, high
performance tasks.

#+end_quote

I have found this all to be true while playing with it for the first
time today. In the next few sections, I'm going to walk through my
installation and first program.

** Installation
I'm currently running Alpine Linux on my Thinkpad, so the installation
was quite easy as there is a package for Hare in the =apk= repositories.

#+begin_src sh
doas apk add hare hare-doc
#+end_src

However, I was able to install Hare from scratch on Fedora Linux a short
while ago, which was also very easy to do.
If you need further
instructions and Hare doesn't have a package on your system, take a look
at the [[https://harelang.org/installation/][Hare Installation]] page.

** Creating a Test Project
In order to play with the language, I created
[[https://git.sr.ht/~cmc/hare-projects][hare-test]] and will be putting
any of my Hare-related adventures in here.

#+begin_quote
*Update:* I also created a simple Hare program for creating a file from
user input:
[[https://git.sr.ht/~cmc/hare-projects/tree/main/item/files/files.ha][files.ha]]

#+end_quote

Luckily, Hare doesn't require any complex set-up tools or build
environment. Once you have Hare installed, you simply need to create a
file ending with =.ha= and you can run a Hare program.

I created a file called =rgb.ha= in order to test out the random number
generation and passing parameters between functions.

#+begin_src sh
nano rgb.ha
#+end_src

Within this file, I was able to easily import a few of the
[[https://harelang.org/tutorials/stdlib/][standard library modules]]:
=fmt=, =math::random=, and =datetime=.

With these modules, I created two functions:

1. =main=: This function calls the =generate_rgb= function and then
   prints out the returned values.
2. =generate_rgb=: This function uses the current Unix epoch time to
   generate a pseudo-random value and uses this value to create three
   more random values between 0 and 255. These three numbers represent a
   color in RGB format.

#+begin_quote
*Note*: Some syntax coloring may look odd, as Zola currently doesn't
have a syntax highlighting theme for Hare. Instead, I'm using the C
theme, which may not be exactly accurate when coloring the code below.

#+end_quote

#+begin_src C
use datetime;
use fmt;
use math::random;

export fn main() void = {
    const rgb = generate_rgb();
    fmt::printfln("RGB: ({}, {}, {})", rgb[0], rgb[1], rgb[2])!;
};

fn generate_rgb() []u64 = {
    // Use the current Unix epoch time as the seed value
    let datetime = datetime::epochunix(&datetime::now());

    // Generate initial pseudo-random value
    // You must cast the datetime from int to u64
    let x = random::init(datetime: u64);

    // Generate RGB values in the range [0, 255) using the pseudo-random
    // init value
    let r = random::u64n(&x, 255);
    let g = random::u64n(&x, 255);
    let b = random::u64n(&x, 255);

    // Structure data as array and return
    let rgb_array: [3]u64 = [r, g, b];
    return rgb_array;
};
#+end_src

** Running a Program
Once you have a Hare file written and ready to run, you simply need to
run it:

#+begin_src sh
hare run file.ha
#+end_src

You can also compile the program into an executable:

#+begin_src sh
hare build -o example file.ha
./example
#+end_src

** Initial Thoughts
1. Documentation Improvements Would Help

   While I was able to piece everything together eventually, the biggest
   downfall right now is Hare's documentation. For such a new project,
   the documentation is in a great spot. However, bare specifications
   don't help as much as a brief examples section would.

   For example, it took me a while to figure out what the =u64n=
   function was looking for. I could tell that it took two parameters
   and the second was my max value (255), but couldn't figure out what
   the first value should be.
   Eventually, I inspected the =random.ha=
   file in the
   [[https://git.sr.ht/~sircmpwn/hare/tree/master/item/math/random/random.ha][Hare
   source code]] and found the test suite that helped me discover that
   it needed an =init()= value in the form of =&var=.

2. More Basic Modules

   This is another point that comes from Hare being new and awaiting
   more contributions, but there are some basic functions that I would
   personally enjoy seeing in Hare, such as one to convert decimal
   (base 10) values to hexadecimal (base 16).

   If I'm feeling comfortable with my math, I may work on the list of
   functions I want and see if any can make it into the Hare source
   code.

3. Overall Thoughts

   Overall, I actually really enjoy Hare. It's not as tedious to get a
   project up and running as Rust, but it's also simpler and more
   user-friendly than learning C. I am going to continue playing with it
   and see if I can make anything of particular value.
diff --git a/blog/fediverse/index.org b/blog/fediverse/index.org
deleted file mode 100644
index 5224b17..0000000
--- a/blog/fediverse/index.org
+++ /dev/null
@@ -1,92 +0,0 @@
#+title: A Simple Guide to the Fediverse
#+date: 2021-01-04
#+description: Learn about the basics of the Fediverse.
#+filetags: :social:

* What is the Fediverse?
The fediverse is a federated universe of servers commonly used for
sharing content, like social media. So, instead of having to rely on a
single organization to run the server (e.g. Facebook), the fediverse is
a giant collection of servers across the world, owned by many people and
organizations.

Take a look at this depiction of a federated network. Each server in
this photo is owned and run by different administrators/owners.
Federated networks are best explained through the analogy of email
servers: you have an email account that exists on a server (e.g.
Outlook), your friend has an account on a different server (e.g. GMail),
and another friend has an account on a third server (e.g. ProtonMail).
All three of you can talk and communicate back and forth without having
to be on the same server. However, responsible email admins are there to
set rules and control the traffic going in/out of the server.

#+caption: Federated services diagram
[[https://img.cleberg.net/blog/20210104-a-simple-guide-to-the-fediverse/federated-example.svg]]

The main objective of this architecture is to decentralize the control
within the internet connections. For example, if you run your own
Mastodon instance, you and your users can't be censored or impacted in
any way by authorities of another Mastodon instance. Some users have
praised these features due to recent criticism of popular social media
websites that may be over-censoring their users.

This strategy is great for making sure the social web isn't controlled
by a single organization, but it also has some downsides. If I create a
Mastodon instance and get a ton of users to sign up, I can shut the
server down at any time. That means you're at risk of losing the content
you've created unless you back it up, or the server backs it up for you.
Also, depending on the software used (e.g. Mastodon, Pixelfed, etc.),
censorship may still be an issue if the server admins decide they want
to censor their users. Now, censorship isn't always a bad thing and can
even benefit the community as a whole, but you'll want to determine
which servers align with your idea of proper censorship.
However, these are risks that we take when we sign up for any online
platform. Whatever your reason is for trying out federated social
networks, they are part of the future of the internet. However, the
popularity of these services is yet to be determined, especially
with the increased difficulty of understanding and signing up for these
platforms. Perhaps increased regulation and litigation against current
social media sites will push more users into the fediverse.

* Federated Alternatives to Popular Sites
The list below is a small guide that will show you federated
alternatives to current popular websites. There are many more out there,
so go and explore: you might just find the perfect home.

** Reddit
- [[https://lemmy.ml/instances][Lemmy]]

** Twitter/Facebook/Tumblr
- [[https://joinmastodon.org][Mastodon]]
- [[https://diasporafoundation.org][Diaspora]]
- [[https://friendi.ca][Friendica]]
- [[https://gnusocial.network][GNU Social]]
- [[https://pleroma.social][Pleroma]]

** Instagram
- [[https://pixelfed.org][Pixelfed]]

** Slack/Discord
- [[https://element.io][Matrix]]

** Youtube/Vimeo
- [[https://joinpeertube.org][Peertube]]

** Spotify/Soundcloud
- [[https://funkwhale.audio][Funkwhale]]

** Podcasting
- [[https://pubcast.pub][Pubcast]]

** Medium/Blogger
- [[https://writefreely.org][WriteFreely]]

* Get Started
The best way to get started is to simply sign up and learn as you go. If
you're comfortable signing up through a Mastodon, Pleroma, or Friendica
server, here is [[https://fediverse.party/en/portal/servers][a list of
themed servers]] to choose from. If you're looking for something else,
try a web search for a federated alternative to your favorite sites.

Find a server that focuses on your passions and start there!
diff --git a/blog/fedora-i3/index.org b/blog/fedora-i3/index.org
deleted file mode 100644
index f96bbb7..0000000
--- a/blog/fedora-i3/index.org
+++ /dev/null
@@ -1,152 +0,0 @@
#+title: Rebooting My Love Affair with Linux
#+date: 2022-06-24
#+description: A retrospective on moving from macOS to Linux.
#+filetags: :linux:

* Leaving macOS
As I noted [[../foss-macos-apps][in a recent post]], I have been
planning on migrating from macOS back to a Linux-based OS. I am happy to
say that I have finally completed my migration and am now stuck in the
wonderful world of Linux again.

My decision to leave macOS really came down to just a few important
things:

- Apple Security (Gatekeeper) restricting me from running any software I
  want. Even if you disable Gatekeeper and allow software to bypass the
  rest of the device installation security, you still have to repeat
  that process every time the allowed software is updated.
- macOS sends out nearly constant connections, pings, telemetry, etc. to
  a myriad of mysterious Apple services. I'm not even going to dive into
  how many macOS apps have constant telemetry enabled, as well.
- Lastly, I just /really/ missed the customization and freedom that
  comes with Linux. Being able to switch to an entirely new kernel, OS,
  or desktop within minutes is a freedom I took for granted when I
  switched to macOS.

Now that I've covered macOS, I'm going to move on to more exciting
topics: my personal choice of OS, DE, and various customizations I'm
using.

* Fedora
After trying a ton of distros (I think I booted and tested around 20-25
distros), I finally landed on [[https://getfedora.org/][Fedora Linux]].
I have quite a bit of experience with Fedora and enjoy the =dnf= package
manager. Fedora allows me to keep up-to-date with recent software (I'm
looking at you, Debian), but still provides a level of stability you
don't find in every distro.

In a very close second place was Arch Linux, as well as its spin-off,
Garuda Linux (Garuda w/ sway is /beautiful/). Arch is great for
compatibility and the massive community it has, but I have just never
had the time to properly sit down and learn the methodology behind its
packaging system.

Basically, everything else I tested was unacceptable in one way or
another. Void (=glibc=) was great, but doesn't support all the software
I need. Slackware worked well as a TUI, but I wasn't skilled enough to
get a tiling window manager (WM) working on it.

** i3
One of the reasons I settled on Fedora is that it comes with an official
i3 spin. Being able to use a tiling WM, such as i3 or sway, is one of
the biggest things I wanted to do as soon as I adopted Linux again.

I will probably set up a dotfile repository soon, so that I don't lose
any of my configurations, but nothing big has been configured thus far.

The two main things I have updated in i3wm are natural scrolling and
binding my brightness keys to the =brightnessctl= program.

1. Natural Scrolling

   You can enable natural scrolling by opening the following file:

   #+begin_src sh
   sudo nano /usr/share/X11/xorg.conf.d/40-libinput.conf
   #+end_src

   Within the =40-libinput.conf= file, find the following input sections
   and enable the natural scrolling option.

   This is the =pointer= section:

   #+begin_src conf
   Section "InputClass"
           Identifier "libinput pointer catchall"
           MatchIsPointer "on"
           MatchDevicePath "/dev/input/event*"
           Driver "libinput"
           Option "NaturalScrolling" "True"
   EndSection
   #+end_src

   This is the =touchpad= section:

   #+begin_src conf
   Section "InputClass"
           Identifier "libinput touchpad catchall"
           MatchIsTouchpad "on"
           MatchDevicePath "/dev/input/event*"
           Driver "libinput"
           Option "NaturalScrolling" "True"
   EndSection
   #+end_src

2. Enabling Brightness Keys

   Likewise, enabling brightness key functionality is as simple as
   binding the keys to the =brightnessctl= program.

   To do this, open up your i3 config file. Mine is located here:

   #+begin_src sh
   nano ~/.config/i3/config
   #+end_src

   #+begin_src conf
   # Use brightnessctl to adjust brightness.
   bindsym XF86MonBrightnessDown exec --no-startup-id brightnessctl --min-val=2 -q set 3%-
   bindsym XF86MonBrightnessUp exec --no-startup-id brightnessctl -q set 3%+
   #+end_src

3. =polybar=

   Instead of using the default =i3status= bar, I have opted to use
   =polybar=.

   My config for this menu bar is basically just the default settings
   with modified colors and an added battery block to quickly show me
   the machine's battery info.

4. =alacritty=

   Not much to say on this part yet, as I haven't configured it much,
   but I installed =alacritty= as my default terminal, and I am using
   =zsh= as the shell.

* Software Choices
Again, I'm not going to say much that I haven't said in other blog
posts, so I'll just do a quick rundown of the apps I installed
immediately after I set up the environment.

Flatpak Apps:

- Cryptomator
- pCloud
- Signal

Fedora Packages:

- gomuks
- neomutt
- neofetch
- Firefox
  - uBlock Origin
  - Bitwarden
  - Stylus
  - Privacy Redirect

Other:

- exiftool
diff --git a/blog/fedora-login-manager/index.org b/blog/fedora-login-manager/index.org
deleted file mode 100644
index 861a174..0000000
--- a/blog/fedora-login-manager/index.org
+++ /dev/null
@@ -1,40 +0,0 @@
#+title: How to Remove the Login Manager from Fedora i3
#+date: 2023-01-08
#+description: Learn how to completely remove the login manager from Fedora i3.
#+filetags: :linux:

* Fedora i3's Login Manager
Since I use the i3 spin of Fedora Workstation, I don't like to have a
login manager installed by default. As of the current version of Fedora
i3, the default login manager is LightDM.

If this is no longer the case, you can list the currently-installed
packages with the following command and see if you can identify a
different login manager.

#+begin_src sh
sudo dnf list installed
#+end_src

* Removing the Login Manager
In order to remove the login manager, simply uninstall the package.

#+begin_src sh
sudo dnf remove lightdm
#+end_src

* Launching i3 Manually
In order to launch i3 manually, you need to set up your X session
properly. To start, create or edit the =~/.xinitrc= file to include the
following at the bottom.

#+begin_src conf
exec i3
#+end_src

Now, whenever you log in to the TTY, you can launch your desktop with
the following command.

#+begin_src sh
startx
#+end_src
diff --git a/blog/financial-database/index.org b/blog/financial-database/index.org
deleted file mode 100644
index 55a6473..0000000
--- a/blog/financial-database/index.org
+++ /dev/null
@@ -1,256 +0,0 @@
#+title: Maintaining a Personal Financial Database
#+date: 2022-03-03
#+description: An example project showing how to build and maintain a simple financial database.
#+filetags: :personal:

* Personal Financial Tracking
For the last 6-ish years, I've tracked my finances in a spreadsheet.
This is common practice in the business world, but any good dev will
cringe at the thought of storing long-term data in a spreadsheet: it is
not meant for long-term storage or as a source for reporting.

As I wanted to expand the functionality of my financial data (e.g.,
adding more reports), I decided to migrate the data into a database. To
run reports, I would query the database and use a language like Python
or JavaScript to process the data, perform calculations, and visualize
the results.

* SQLite
When choosing the type of database I wanted to use for this project, I
was split between three options:

1. MySQL: The database I have the most experience with and have used for
   years.
2. PostgreSQL: A database I'm new to, but want to learn.
3. SQLite: A database that I've used for a couple of projects and with
   which I have moderate experience.

I ended up choosing SQLite since it can be maintained within a single
=.sqlite= file, which allows me more flexibility for storage and backup.
I keep this file in my cloud storage and pull it up whenever needed.

** GUI Editing
Since I didn't want to try to import 1000--1500 records into my new
database via the command line, I opted to use
[[https://sqlitebrowser.org/][DB Browser for SQLite (DB4S)]] as a GUI
tool. This application is excellent, and I don't see myself going back
to the CLI when working in this database.

DB4S allows you to copy a range of cells from a spreadsheet and paste it
straight into the SQL table. I used this process for all 36 accounts,
1290 account statements, and 126 pay statements. Overall, I'm guessing
this took anywhere between 4--8 hours. In comparison, it probably took
me 2--3 days to initially create the spreadsheet.

#+caption: DB4S
[[https://img.cleberg.net/blog/20220303-maintaining-a-personal-financial-database/db4s.png]]

** Schema
The schema for this database is actually extremely simple and involves
only three tables (for now):

1. Accounts
2. Statements
3. Payroll

*Accounts*

The Accounts table contains summary information about an account, such
as a car loan or a credit card. By viewing this table, you can find
high-level data, such as interest rate, credit line, or owner.

#+begin_src sql
CREATE TABLE "Accounts" (
    "AccountID" INTEGER NOT NULL UNIQUE,
    "AccountType" TEXT,
    "AccountName" TEXT,
    "InterestRate" NUMERIC,
    "CreditLine" NUMERIC,
    "State" TEXT,
    "Owner" TEXT,
    "Co-Owner" TEXT,
    PRIMARY KEY("AccountID" AUTOINCREMENT)
)
#+end_src

*Statements*

The Statements table uses the same unique identifier as the Accounts
table, meaning you can join the tables to find a monthly statement for
any of the accounts listed in the Accounts table. Each statement has an
account ID, statement date, and total balance.

#+begin_src sql
CREATE TABLE "Statements" (
    "StatementID" INTEGER NOT NULL UNIQUE,
    "AccountID" INTEGER,
    "StatementDate" INTEGER,
    "Balance" NUMERIC,
    PRIMARY KEY("StatementID" AUTOINCREMENT),
    FOREIGN KEY("AccountID") REFERENCES "Accounts"("AccountID")
)
#+end_src

*Payroll*

The Payroll table is a separate entity, unrelated to the Accounts or
Statements tables. This table contains all information you would find on
a pay statement from an employer. As you change employers or obtain new
perks/benefits, just add new columns to adapt to the new data.

#+begin_src sql
CREATE TABLE "Payroll" (
    "PaycheckID" INTEGER NOT NULL UNIQUE,
    "PayDate" TEXT,
    "Payee" TEXT,
    "Employer" TEXT,
    "JobTitle" TEXT,
    "IncomeRegular" NUMERIC,
    "IncomePTO" NUMERIC,
    "IncomeHoliday" NUMERIC,
    "IncomeBonus" NUMERIC,
    "IncomePTOPayout" NUMERIC,
    "IncomeReimbursements" NUMERIC,
    "FringeHSA" NUMERIC,
    "FringeStudentLoan" NUMERIC,
    "Fringe401k" NUMERIC,
    "PreTaxMedical" NUMERIC,
    "PreTaxDental" NUMERIC,
    "PreTaxVision" NUMERIC,
    "PreTaxLifeInsurance" NUMERIC,
    "PreTax401k" NUMERIC,
    "PreTaxParking" NUMERIC,
    "PreTaxStudentLoan" NUMERIC,
    "PreTaxOther" NUMERIC,
    "TaxFederal" NUMERIC,
    "TaxSocial" NUMERIC,
    "TaxMedicare" NUMERIC,
    "TaxState" NUMERIC,
    PRIMARY KEY("PaycheckID" AUTOINCREMENT)
)
#+end_src

** Python Reporting
Once I created the database tables and imported all my data, the only
step left was to create a process to report on and visualize various
aspects of the data.

In order to explore the data and create the reports I'm interested in, I
utilized a two-part process involving Jupyter Notebooks and Python
scripts.

1. Step 1: Jupyter Notebooks

   When I need to explore data, try different things, and re-run my code
   cell-by-cell, I use Jupyter Notebooks.
   For example, I explored the
   =Accounts= table until I found the following useful information:

   #+begin_src python
   import sqlite3
   import pandas as pd
   import matplotlib

   # Set up database filename and connect
   db = "finances.sqlite"
   connection = sqlite3.connect(db)
   df = pd.read_sql_query("SELECT * FROM Accounts", connection)

   # Set global matplotlib variables
   %matplotlib inline
   matplotlib.rcParams['text.color'] = 'white'
   matplotlib.rcParams['axes.labelcolor'] = 'white'
   matplotlib.rcParams['xtick.color'] = 'white'
   matplotlib.rcParams['ytick.color'] = 'white'
   matplotlib.rcParams['legend.labelcolor'] = 'black'

   # Display graph
   df.groupby(['AccountType']).sum().plot.pie(title='Credit Line by Account Type', y='CreditLine', figsize=(5,5), autopct='%1.1f%%')
   #+end_src

2. Step 2: Python Scripts

   Once I had explored enough through the notebooks and had a list of
   reports I wanted, I moved on to create a Python project with the
   following structure:

   #+begin_src txt
   finance/
   ├── notebooks/
   │   ├── account_summary.ipynb
   │   ├── account_details.ipynb
   │   └── payroll.ipynb
   ├── public/
   │   ├── image-01.png
   │   └── image-0X.png
   ├── src/
   │   └── finance.sqlite
   ├── venv/
   ├── __init__.py
   ├── database.py
   ├── process.py
   ├── requirements.txt
   └── README.md
   #+end_src

   This structure allows me to:

   1. Compile all required Python packages into =requirements.txt= for
      easy installation if I move to a new machine.
   2. Activate a virtual environment in =venv/= so I don't need to
      maintain a system-wide Python environment just for this project.
   3. Keep my =notebooks/= folder to continuously explore the data as I
      see fit.
   4. Maintain a local copy of the database in =src/= for easy access.
   5. Export reports, images, HTML files, etc. to =public/=.

   Now, onto the differences between the code in a Jupyter Notebook and
   the actual Python files. To create the report in the Notebook snippet
   above, I created the following function inside =process.py=:

   #+begin_src python
   # Create summary pie charts
   def summary_data(accounts: pandas.DataFrame) -> None:
       accounts_01 = accounts[accounts["Owner"] == "Person01"]
       accounts_02 = accounts[accounts["Owner"] == "Person02"]
       for x in range(1, 4):
           if x == 1:
               df = accounts
               account_string = "All Accounts"
           elif x == 2:
               df = accounts_01
               account_string = "Person01's Accounts"
           elif x == 3:
               df = accounts_02
               account_string = "Person02's Accounts"
           print(f"Generating pie chart summary image for {account_string}...")
           summary_chart = (
               df.groupby(["AccountType"])
               .sum()
               .plot.pie(
                   title=f"Credit Line by Type for {account_string}",
                   y="CreditLine",
                   autopct="%1.1f%%",
               )
           )
           summary_chart.figure.savefig(f"public/summary_chart_{x}.png", dpi=1200)
   #+end_src

   The result? A high-quality pie chart that is read directly by the
   =public/index.html= template I use.

   #+caption: Summary Pie Chart
   [[https://img.cleberg.net/blog/20220303-maintaining-a-personal-financial-database/summary_chart.png]]

   Other charts generated by this project include:

   - Charts of account balances over time.
   - A line chart of the effective tax rate (taxes divided by taxable
     income).
   - Salary projections and error limits using past income and inflation
     rates.
   - A multi-line chart of gross income, taxable income, and net income.

   The best thing about this project? I can improve it at any given
   time, shaping it into whatever helps me the most at that time.
   I imagine that I will be introducing an asset tracking table soon to
   track the depreciating value of cars, houses, etc. Who knows what's
   next?
diff --git a/blog/flac-to-opus/index.org b/blog/flac-to-opus/index.org
deleted file mode 100644
index adb7763..0000000
--- a/blog/flac-to-opus/index.org
+++ /dev/null
@@ -1,169 +0,0 @@
#+title: Recursive Command-Line FLAC to Opus Conversion
#+date: 2022-07-30
#+description: Learn how to convert all FLAC files to Opus, including recursive files in subdirectories.
#+filetags: :linux:

* Converting FLAC to Opus
I am currently rebuilding my music library from scratch so that I can
effectively archive all the music I own in the
[[https://en.wikipedia.org/wiki/FLAC][FLAC file format]], a lossless
audio codec.

However, streaming FLAC files outside the home can be difficult due to
the size of the files, especially if you're using a weak connection.

So, in order to archive the music in a lossless format and still be able
to stream it easily, I opted to create a copy of my FLAC files in the
[[https://en.wikipedia.org/wiki/Opus_(audio_format)][Opus audio codec]].
This allows me to archive a quality, lossless version of the music and
then point my streaming service to the smaller, stream-ready version.

** Dependencies
The process I follow utilizes the =opus-tools= package in Ubuntu. Before
proceeding, install the package:

#+begin_src sh
sudo apt install opus-tools
#+end_src

If you want to use a different conversion method, such as =ffmpeg= or
=avconv=, simply install that package instead.

** Conversion Process
The script I'm using is stored in my home directory, but feel free to
create it wherever you want. It does not need to be in the same
directory as your music files.

#+begin_src sh
cd ~ && nano transform.sh
#+end_src

Once you have your new bash script open in an editor, go ahead and
paste the following logic into the script.

You *MUST* edit the following variables in order for it to work:

- =source=: The source directory where your FLAC files are stored.
- =dest=: The destination directory where you want the resulting Opus
  files to be stored.

You *MAY* want to edit the following variables to suit your needs:

- =filename=: If you are converting to a file format other than Opus,
  you'll need to edit this so that your resulting files have the correct
  filename extension.
- =reldir=: This variable can be edited to strip out more leading
  directories in the file path. As you'll see later, I ignore this for
  now and simply clean it up afterward.
- =opusenc=: This is the actual conversion process. You may want to edit
  the bitrate to suit your needs. I set mine at 128, but some prefer 160
  or higher.

#+begin_src sh
#!/bin/bash
## - The IFS takes care of spaces in file and dirnames
## - your folders may vary
## - what you mount to the folders does not matter
## - in RELDIR, the f5 most likely MUST be edited,
##   since it's responsible for how many leading directories
##   will be removed from the directory structure in order
##   to append that exact path to the outfile
## - the commented echoes are still in place in order to give
##   you the variables for testing, before running.

IFS=$'\n'

## the paths given here contain the directory structure that I want to keep
## source=/mnt/music/archives/ARTIST/ALBUM/FLACFILE.flac
## local=/mnt/music/library/ARTIST/ALBUM/OPUSFILE.opus

source=/mnt/music/archives
dest=/mnt/music/library

for i in $(find $source -type f -iname '*.flac' );
do
## SET VARIABLES for PATHS and FILENAMES
    fullfile=$i
    filename="${i##*/}"
    filename="${filename%.*}.opus"
    fulldir=$(dirname "${i}")
    reldir="$(echo $fulldir | cut -d'/' -f5-)"
    reldir=${reldir//flac}
    outdir="$dest/$reldir"
    outfile="$outdir/$filename"

# is that working?
#    outfile='$local/""$(echo $(dirname "${i}") | cut -d'/' -f5-)"//flac"/"${i##*/}"'
#    echo 'output file: ' "$outfile"

## SHOW ME THE CONTENTS of the VARIABLES
#    echo 'File found:' "$i"
#    echo 'Relative dir: ' "$reldir"
#    echo 'directory will be created: ' "$outdir"
#    echo 'Filename: ' "$filename"
#    echo 'FileExt: ' "$extension"
#    echo 'output file: ' "$outfile"

printf '\n\n'

## CREATE Output Folders
    mkdir -p "$outdir"

## RUN
# ffmpeg and avconv are alternative options if opusenc isn't adequate.
# NOTE: the metadata variables below ($DATE, $TITLE, $ARTIST, etc.) are
# not set anywhere in this script. Populate them yourself (e.g., with
# metaflac) or remove these options entirely, since opusenc copies FLAC
# tags to the output by default.
opusenc --vbr --bitrate 128 --date "$DATE" \
--title "$TITLE" --artist "$ARTIST" --album "$ALBUM" --genre "$GENRE" \
--comment "ALBUMARTIST=$ALBUMARTIST" --comment "DISCNUMBER=$DISCNUMBER" \
--comment "TRACKNUMBER=$TRACKNUMBER" --comment "TRACKTOTAL=$TRACKTOTAL" \
--comment "LYRICS=$LYRICS" "$fullfile" "$outfile"


## just for testing
# sleep 1
done
#+end_src

Once you're done, simply save the file and exit your editor. Don't
forget to enable execution of the script:

#+begin_src sh
chmod +x transform.sh
#+end_src

Finally, you may now run the script:

#+begin_src sh
./transform.sh
#+end_src

If you used =opusenc=, you'll see the conversions happen within the
terminal as the script progresses. You will also see variables printed
if you uncommented any of the bash script's comments.

** Cleanup
As I noted above, I didn't customize my =reldir= variable in the script,
which caused my output directory to be =/mnt/music/library/archives=
instead of =/mnt/music/library=. So, I moved the output up one level and
deleted the accidental directory.

#+begin_src sh
cd /mnt/music/library
mv archives/* .
rm -rf archives
#+end_src

** Check the Resulting Size
If you want to see what kind of file size savings you've gained, you can
always use the =du= command to check:

#+begin_src sh
cd /mnt/music
du -h --max-depth=1 .
#+end_src

In my case, my small library went from 78GB to 6.3GB!

#+begin_src txt
78G    ./archives
6.3G   ./library
#+end_src
diff --git a/blog/flatpak-symlinks/index.org b/blog/flatpak-symlinks/index.org
deleted file mode 100644
index d535f31..0000000
--- a/blog/flatpak-symlinks/index.org
+++ /dev/null
@@ -1,46 +0,0 @@
#+title: Running Flatpak Apps with Symlinks
#+date: 2023-01-21
#+description: Learn how to run Flatpak apps through menu launchers with symlinks.
#+filetags: :linux:

* Running Flatpak Apps Should Be Faster
If you're like me and use Flatpak for those pesky apps that cannot run
on your system for one reason or another, you likely get annoyed with
opening a terminal and manually running the Flatpak app with the lengthy
=flatpak run ...= command.

In the past, I manually created aliases in my =.zshrc= file for certain
apps. For example, an alias would look like the example below.

This would allow me to run the command quickly within the terminal, but
it wouldn't allow me to run it in an application launcher.

#+begin_src sh
# ~/.zshrc
alias librewolf="flatpak run io.gitlab.librewolf-community"
#+end_src

However, I now use a much faster and better method that integrates with
the tiling WMs I use and their application launchers, =dmenu= and
=bemenu=.

* Creating Symlinks for Flatpak Apps
Let's use the example of Librewolf below. I can install the application
like so:

#+begin_src sh
flatpak install flathub io.gitlab.librewolf-community
#+end_src

Once installed, I can create a symlink for the Flatpak app in a
location commonly included in your PATH. In this case, I chose
=/usr/bin=. You may need to choose a different location if =/usr/bin=
isn't in your PATH.

#+begin_src sh
ln -s /var/lib/flatpak/exports/bin/io.gitlab.librewolf-community /usr/bin/librewolf
#+end_src

Once complete, you should be able to launch the app using the command
name you chose above in the symlink (=librewolf=) from a terminal or
from your application launcher!
diff --git a/blog/gemini-capsule/index.org b/blog/gemini-capsule/index.org
deleted file mode 100644
index 69fd8f2..0000000
--- a/blog/gemini-capsule/index.org
+++ /dev/null
@@ -1,177 +0,0 @@
#+title: Launching a Gemini Capsule
#+date: 2021-03-28
#+description: A guide to self-hosting a Gemini capsule on your own server.
#+filetags: :dev:

* What is Gemini?
[[https://gemini.circumlunar.space/][Gemini]] is an internet protocol
introduced in June 2019 as an alternative to HTTP(S) or Gopher. In
layman's terms, it's an alternative way to browse sites (called
capsules) that requires a special browser. Since Gemini is not an
accepted internet standard, normal web browsers won't be able to load a
Gemini capsule. Instead, you'll need to use
[[https://gemini.circumlunar.space/clients.html][a Gemini-specific
browser]].

The content found within a Gemini page is called
[[https://gemini.circumlunar.space/docs/cheatsheet.gmi][Gemtext]] and is
/extremely/ basic (on purpose). Gemini only processes text; there is no
media content like images. However, you're able to style three levels of
headings, regular text, links (which will display on their own line),
quotes, and an unordered list.

Here's a complete listing of valid Gemtext:

#+begin_src txt
# Heading 1
## Heading 2
### Heading 3

Regular text! Lorem ipsum dolor sit amet.

=> https://example.com My Website
=> gemini://example.com My Gemini Capsule

> "If life were predictable it would cease to be life, and be without flavor." - Eleanor Roosevelt

My List:
,* Item
,* Item

```Anything between three backticks will be rendered as code.```
#+end_src

* Free Option
There are probably numerous websites that allow you to create your
personal Gemini capsule, but I'm going to focus on the two sites that I
have personally tested. The first option below, Midnight Pub, allows you
to create/edit any Gemini files you want in your account. This is
essentially a GUI option with a built-in text box for editing. The
second option below, Sourcehut, allows you to use a Git repository and
an automatic build process to deploy your personal Gemini capsule every
time you push a commit.

** Midnight Pub - Beginner Friendly
[[https://midnight.pub/][Midnight Pub]] is a small, virtual community
meant to reflect the atmosphere of wandering into a small alley pub. The
site is built in Gemtext and has a server-side process to convert
Gemtext to HTML if someone loads the site in an HTTP(S) browser.

To create an account, you'll need to email the owner of the website to
obtain a key. You can find their email on the Midnight Pub homepage.
Once registered, head to [[https://midnight.pub/account][your account]]
and select [[https://midnight.pub/site][manage site]]. This is the
screen where you can upload or create any files to be displayed on the
internet.

For example, I've created both an HTML file and a Gemini file. Remember
that Gemini is automatically converted to HTML on the Pub, so you don't
need an HTML version; I created one only to add in some extra styling.

All you need to do is create a page like =index.gmi= and use your Gemini
browser to head over to your-username.midnight.pub to see the result.

That's all there is to it! Easy enough, right? Let's check out a more
advanced version in the next section.

* Paid Option
As of 2021, Sourcehut has decided to require users to have a paid
account in order to utilize their automated build system. For now, paid
accounts can be as low as $2/month.

** Sourcehut
[[https://sourcehut.org/][Sourcehut]] is a collection of software
development tools, but mostly surrounds their hosted Git repository
service. Simply put, it's a minimal and more private alternative to
services like GitHub.

This walkthrough is more advanced and involves things like Git, SSH, and
the command line. If you don't think you know enough to do this, check
out my walkthrough on creating a Gemini capsule for the Midnight Pub
instead.

The first thing you'll need to do is create an SSH key pair, if you
don't already have one on your system. Once created, grab the contents
of =id_rsa.pub= and add it to your Sourcehut account settings - this
will allow you to push and pull code changes without using a
username/password.

#+begin_src sh
ssh-keygen
#+end_src

Next up, let's create a repository with the proper name so that the
Sourcehut build system will know we want them to host a website for us.
Use the following format exactly:

#+begin_src sh
mkdir your-username.srht.site && cd your-username.srht.site
#+end_src

Now that we've created the repo, let's initialize Git and add the proper
remote URL.

#+begin_src sh
git init
#+end_src

#+begin_src sh
git remote add origin git@git.sr.ht:~your-username/your-username.srht.site
#+end_src

Now that our repository is set up and configured, we will need to create
at least two files:

- =index.gmi=
- =.build.yml=

For your =.build.yml= file, use the following content and be sure to
update the =site= line with your username!

#+begin_src yaml
image: alpine/latest
oauth: pages.sr.ht/PAGES:RW
environment:
  site: your-username.srht.site
tasks:
  - package: |
      cd $site
      tar -cvz . > ../site.tar.gz
  - upload: |
      acurl -f https://pages.sr.ht/publish/$site -Fcontent=@site.tar.gz -Fprotocol=GEMINI
#+end_src

For the =index.gmi= file, put whatever you want in there and save it.
You could even just copy and paste the Gemtext cheatsheet.

If you want to serve both HTML and Gemini files from this repository,
just add a second command to the =upload= section:

#+begin_src yaml
  - upload: |
      acurl -f https://pages.sr.ht/publish/$site -Fcontent=@site.tar.gz -Fprotocol=GEMINI
      acurl -f https://pages.sr.ht/publish/$site -Fcontent=@site.tar.gz
#+end_src

Lastly, commit your changes and push them to the remote repo.

#+begin_src sh
git add .; git commit -m "initial commit"; git push --set-upstream origin HEAD
#+end_src

If you've successfully created the files with the proper format, you'll
see the terminal print a message that lets you know where the automatic
build is taking place. For example, here's what the terminal tells me:

#+begin_src sh
remote: Build started:
remote: https://builds.sr.ht/~user/job/689803 [.build.yml]
#+end_src

Now that you've properly built your Sourcehut page, you can browse to
your-username.srht.site in a Gemini browser and view the final results.
Take a look at the image below for my Sourcehut Gemini capsule.

#+caption: Gemini page on the amfora browser
[[https://img.cleberg.net/blog/20210328-launching-a-gemini-capsule/amfora.png]]
diff --git a/blog/gemini-server/index.org b/blog/gemini-server/index.org
deleted file mode 100644
index fd50c20..0000000
--- a/blog/gemini-server/index.org
+++ /dev/null
@@ -1,150 +0,0 @@
#+title: Hosting a Gemini Server
#+date: 2021-04-17
#+description: A guide to self-hosting a Gemini web server on your own server.
#+filetags: :sysadmin:

* Similar Article Available
To read more about Gemini and ways to test out this new protocol without
your own server, see my previous post,
[[../launching-a-gemini-capsule/][Launching a Gemini Capsule]].

* Preparation
This guide assumes you have access to a server accessible to the world
through a public IP address and that you own a domain name used for this
Gemini capsule.

* Getting Started with Agate
We are going to use [[https://github.com/mbrubeck/agate][Agate]] for
this tutorial. This is a basic Gemini server written in Rust. It takes
very little time and maintenance to get it running.

* Install Dependencies
First, you will need to install the Rust package for your system. On
Ubuntu, use the following commands (remember to use =sudo= if you are
not the root user). The Rust installation will give you options to
customize the installation; I used the default installation options.

#+begin_src sh
sudo apt update && sudo apt upgrade -y
curl https://sh.rustup.rs -sSf | sh
#+end_src

Remember to configure your shell with the new configuration:

#+begin_src sh
source $HOME/.cargo/env
#+end_src

Before we install Agate, make sure you have the =gcc= package installed:

#+begin_src sh
sudo apt install gcc
#+end_src

Next, you'll need to install the Agate executable with Rust's Cargo
package manager:

#+begin_src sh
cargo install agate
#+end_src

* Create Symlinks
Once Cargo has finished installing all the required packages, symlink
the executable to your $PATH.

#+begin_src sh
sudo ln -s $HOME/.cargo/bin/agate /usr/local/bin/agate
#+end_src

* Using Agate's Built-In Installation Tool
If you're running Ubuntu or Debian, use the Debian installation script
found in Agate's GitHub repository, under the =tools/debian= folder.

#+begin_src sh
git clone https://github.com/mbrubeck/agate
cd agate/tools/debian
sudo ./install.sh
#+end_src

* Configure the Gemini Service
We have a little more to do, but since this script tries to immediately
run the service, it will likely fail with a non-zero exit code. Let's
add our finishing touches. Edit the following file and replace the
hostname with your desired URL. You can also change the directory where
content will be served.

#+begin_src sh
sudo nano /etc/systemd/system/gemini.service
#+end_src

#+begin_src conf
# Edit these lines to whatever you want - see the next code block for my personal configuration.
WorkingDirectory=/srv/gemini
ExecStart=agate --hostname $(uname -n) --lang en
#+end_src

This is my personal config:

#+begin_src conf
WorkingDirectory=/var/gemini/
ExecStart=agate --hostname gemini.example.com --lang en
#+end_src

Since we've altered the systemd configuration files, we have to reload
the daemon. Let's do that, restart our service, and check its status.

#+begin_src sh
sudo systemctl daemon-reload
sudo systemctl restart gemini.service
sudo systemctl status gemini.service
#+end_src

* Fixing Systemd Errors
If you're still getting errors, the installation process may not have
properly enabled the gemini service. Fix it with the following commands.

#+begin_src sh
sudo systemctl enable gemini.service
sudo systemctl restart gemini.service
sudo systemctl status gemini.service
#+end_src

* Firewall Rules
Great! Our server is now functional and running. The first consideration
now is that you need to be able to access port 1965 on the server. If
you have a firewall enabled, you'll need to open that port up.

#+begin_src sh
sudo ufw allow 1965
sudo ufw reload
#+end_src

* Creating Content
Let's create the Gemini capsule. Note that wherever you set the
WorkingDirectory variable earlier, Agate will expect you to put your
Gemini capsule contents in a sub-folder called "content." So, I place my
files in "/var/gemini/content." I'm going to create that folder now and
put a file in there.

#+begin_src sh
sudo mkdir /var/gemini/content
sudo nano /var/gemini/content/index.gmi
#+end_src

You can put whatever you want in the "index.gmi" file, just make sure
it's valid Gemtext.

* The Results
Here are some screenshots of the Gemini page I just created in the
[[https://gmi.skyjake.fi/lagrange/][Lagrange]] browser and the
[[https://github.com/makeworld-the-better-one/amfora][amfora]] browser.

#+caption: GUI Gemini browser
[[https://img.cleberg.net/blog/20210417-hosting-a-gemini-server/lagrange.png]]

/Lagrange/

#+caption: CLI Gemini browser
[[https://img.cleberg.net/blog/20210417-hosting-a-gemini-server/amfora.png]]

/Amfora/
diff --git a/blog/git-server/index.org b/blog/git-server/index.org
deleted file mode 100644
index c716484..0000000
--- a/blog/git-server/index.org
+++ /dev/null
@@ -1,617 +0,0 @@
#+title: Self-Hosting a Personal Git Server
#+date: 2022-07-01
#+description: A guide to self-hosting a Git server on your own server.
#+filetags: :selfhosting:

* My Approach to Self-Hosting Git
I have often tried to self-host my Git repositories, but have always
fallen short when I tried to find a suitable web interface to show on
the front-end.

After a few years, I have finally found a combination of methods that
allows me to easily self-host my projects, view them on the web, and
access them from anywhere.

Before I dive into the details, here is a high-level summary of my
self-hosted Git approach:

- This method uses the =ssh://= (read & write) and =git://= (read-only)
  protocols for push and pull access.
  - For the =git://= protocol, I create a =git-daemon-export-ok= file in
    any repository that I want to be cloneable by anyone.
  - The web interface I am using (=cgit=) allows simple HTTP cloning by
    default.
    I do not disable this setting as I want beginners to be able to
    clone one of my repositories even if they don't know the proper
    method.
- I am not enabling Smart HTTPS for any repositories. Updates to
  repositories must be pushed via SSH.
- Beyond the actual repository management, I am using =cgit= for the
  front-end web interface.
  - If you use the =scan-path= configuration in the =cgitrc=
    configuration file to automatically find repositories, you can't
    exclude a repository from =cgit= if it's stored within the path that
    =cgit= reads. To host private repositories, you'd need to set up
    another directory that =cgit= can't read.

* Assumptions
For the purposes of this walkthrough, I am assuming you have a URL
(=git.example.com=) or IP address (=207.84.26.99=) addressed to the
server that you will be using to host your git repositories.

* Adding a Git User
In order to use the SSH method associated with git, we will need to add
a user named =git=. If you have used the SSH method for other git
hosting sites, you are probably used to the following syntax:

#+begin_src sh
git clone [user@]server:project.git
#+end_src

The syntax above is an =scp=-like syntax for using SSH as the =git= user
on the server to access your repository.

Let's delete any remnants of an old =git= user, if any, and create the
new user account:

#+begin_src sh
sudo deluser --remove-home git
sudo adduser git
#+end_src

** Import Your SSH Keys to the Git User
Once the =git= user is created, you will need to copy the public SSH
key from your local development machine to the =git= user on the server.

If you don't have an SSH key yet, create one with this command:

#+begin_src sh
ssh-keygen
#+end_src

Once you create the key pair, the public key should be saved to
=~/.ssh/id_rsa.pub=.

If your server still has password-based authentication available, you
can copy the key over to the =git= user like this:

#+begin_src sh
ssh-copy-id git@server
#+end_src

Otherwise, copy it over to any user that you can access.

#+begin_src sh
scp ~/.ssh/id_rsa.pub your_user@your_server:
#+end_src

Once on the server, you will need to copy the contents into the =git=
user's =authorized_keys= file:

#+begin_src sh
mkdir -p /home/git/.ssh
cat id_rsa.pub >> /home/git/.ssh/authorized_keys
#+end_src

** (Optional) Disable Password-Based SSH
If you want to lock down your server and ensure that no one can
authenticate via SSH with a password, you will need to edit your SSH
configuration.

#+begin_src sh
sudo nano /etc/ssh/sshd_config
#+end_src

Within this file, find the following settings and set them to the values
shown below:

#+begin_src conf
PermitRootLogin no
PasswordAuthentication no
AuthenticationMethods publickey
#+end_src

You may have other authentication methods required in your personal
set-up, so the key here is just to ensure that =AuthenticationMethods=
does not allow passwords.

** Setting up the Base Directory
Now that we have set up a =git= user to handle all transport methods, we
need to set up the directory that we will be using as the base for all
repositories.

In my case, I am using =/git= as my source folder. To create this folder
and assign it to the user we created, execute the following commands:

#+begin_src sh
sudo mkdir /git
sudo chown -R git:git /git
#+end_src

** Creating a Test Repository
On your server, switch over to the =git= user in order to start managing
git files.

#+begin_src sh
su git
#+end_src

Once logged in as the =git= user, go to your base directory and create a
test repository.

#+begin_src sh
cd /git
mkdir test.git && cd test.git
git init --bare
#+end_src

If you want to make this repo viewable/cloneable to the public via the
=git://= protocol, you need to create a =git-daemon-export-ok= file
inside the repository.

#+begin_src sh
touch git-daemon-export-ok
#+end_src

* Change the Login Shell for =git=
To make sure that the =git= user is only used for git operations and
nothing else, you need to change the user's login shell. To do this,
simply use the =chsh= command:

#+begin_src sh
sudo chsh git
#+end_src

The interactive prompt will ask which shell you want the =git= user to
use. You must use the following value:

#+begin_src sh
/usr/bin/git-shell
#+end_src

Once done, no one will be able to SSH in as the =git= user or execute
commands other than the standard git commands.

* Opening the Firewall
Don't forget to open up ports on the device firewall and network
firewall if you want to access these repositories publicly. If you're
using default ports, forward ports =22= (ssh) and =9418= (git) from your
router to your server's IP address.

If your server also has a firewall, ensure that the firewall allows the
same ports that are forwarded from the router. For example, if you use
=ufw=:

#+begin_src sh
sudo ufw allow 22
sudo ufw allow 9418
#+end_src

** Non-Standard SSH Ports
If you use a non-standard port for SSH, such as =9876=, you will need to
create an SSH configuration file on your local development machine in
order to connect to your server's git repositories.

To do this, you'll need to define your custom port on your client
machine in your =~/.ssh/config= file:

#+begin_src sh
nano ~/.ssh/config
#+end_src

#+begin_src conf
Host git.example.com
    # HostName can be a URL or an IP address
    HostName git.example.com
    Port 9876
    User git
#+end_src

** Testing SSH
There are two main syntaxes you can use to manage git over SSH:

- =git clone [user@]server:project.git=
- =git clone ssh://[user@]server/project.git=

I prefer the first, which is an =scp=-like syntax. To test it, try to
clone the test repository you set up on the server:

#+begin_src sh
git clone git@git.example.com:/git/test.git
#+end_src

* Enabling Read-Only Access
If you want people to be able to clone any repository where you've
placed a =git-daemon-export-ok= file, you will need to start the git
daemon.

To do this on a system with =systemd=, create a service file:

#+begin_src sh
sudo nano /etc/systemd/system/git-daemon.service
#+end_src

Inside the =git-daemon.service= file, paste the following:

#+begin_src conf
[Unit]
Description=Start Git Daemon

[Service]
ExecStart=/usr/bin/git daemon --reuseaddr --base-path=/git/ /git/

Restart=always
RestartSec=500ms

StandardOutput=syslog
StandardError=syslog
SyslogIdentifier=git-daemon

User=git
Group=git

[Install]
WantedBy=multi-user.target
#+end_src

Once created, enable and start the service:

#+begin_src sh
sudo systemctl enable git-daemon.service
sudo systemctl start git-daemon.service
#+end_src

To clone read-only via the =git://= protocol, you can use the following
syntax:

#+begin_src sh
git clone git://git.example.com/test.git
#+end_src

* Migrating Repositories
At this point, we have a git server that works with both SSH and
read-only access.

For each of the repositories I had hosted with a different provider, I
executed the following commands in order to place a copy on my server as
my new source of truth:

Server:

#+begin_src sh
su git
mkdir /git/<repo>.git && cd /git/<repo>.git
git init --bare

# If you want to make this repo viewable/cloneable to the public
touch git-daemon-export-ok
#+end_src

Client:

#+begin_src sh
git clone git@<old-host>:<repo>
git remote set-url origin git@git.example.com:/git/<repo>.git
git push
#+end_src

* Optional Web View: =cgit=
If you want a web viewer for your repositories, you can use various
tools, such as =gitweb=, =cgit=, or =klaus=. I chose =cgit= due to its
simple interface and fairly easy set-up (compared to the others). Not to
mention that the [[https://git.kernel.org/][Linux kernel uses =cgit=]].

** Docker Compose
Instead of using my previous method of using a =docker run= command,
I've updated this section to use =docker-compose= instead for an easier
installation and simpler management and configuration.

In order to use Docker Compose, you will set up a =docker-compose.yml=
file to automatically connect resources like the repositories, =cgitrc=,
and various files or folders to the =cgit= container you're creating:

#+begin_src sh
mkdir ~/cgit && cd ~/cgit
nano docker-compose.yml
#+end_src

#+begin_src yaml
# docker-compose.yml
version: '3'

services:
  cgit:
    image: invokr/cgit
    volumes:
      - /git:/git
      - ./cgitrc:/etc/cgitrc
      - ./logo.png:/var/www/htdocs/cgit/logo.png
      - ./favicon.png:/var/www/htdocs/cgit/favicon.png
      - ./filters:/var/www/htdocs/cgit/filters
    ports:
      - "8763:80"
    restart: always
#+end_src

Then, just start the container:

#+begin_src sh
sudo docker-compose up -d
#+end_src

Once it's finished installing, you can access the site at
=<server-ip>:8763= or use a reverse-proxy service to forward =cgit= to a
URL, such as =git.example.com=. See the next section for more details on
reverse proxying a URL to a local port.

** Nginx Reverse Proxy
I am using Nginx as my reverse proxy so that the =cgit= Docker container
can use =git.example.com= as its URL. To do so, I simply created the
following configuration file:

#+begin_src sh
sudo nano /etc/nginx/sites-available/git.example.com
#+end_src

#+begin_src conf
server {
    listen 80;
    server_name git.example.com;

    if ($host = git.example.com) {
        return 301 https://$host$request_uri;
    }

    return 404;
}

server {
    server_name git.example.com;
    listen 443 ssl http2;

    location / {
        # The final `/` is important.
        proxy_pass http://localhost:8763/;
        add_header X-Frame-Options SAMEORIGIN;
        add_header X-XSS-Protection "1; mode=block";
        proxy_redirect off;
        proxy_buffering off;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_set_header X-Forwarded-Port $server_port;
    }

    # INCLUDE ANY SSL CERTS HERE
    include /etc/letsencrypt/options-ssl-nginx.conf;
    ssl_dhparam /etc/letsencrypt/ssl-dhparams.pem;
}
#+end_src

Once created, symlink it and restart the web server.

#+begin_src sh
sudo ln -s /etc/nginx/sites-available/git.example.com /etc/nginx/sites-enabled/
sudo systemctl restart nginx.service
#+end_src

With that done, my site at =git.example.com= is available and running.

** Setting Up Git Details
Once you have =cgit= running, you can add some small details, such as
repository owners and descriptions, by editing the following files
within each repository.

Alternatively, you can use the =cgitrc= file to edit these details if
you only care to edit them for the purpose of seeing them on your
website.

The =description= file within the repository on your server will display
the description online.

#+begin_src sh
cd /git/example.git
nano description
#+end_src

You can add a =[gitweb]= block to the =config= file in order to display
the owner of the repository.

#+begin_src sh
cd /git/example.git
nano config
#+end_src

#+begin_src conf
[gitweb]
        owner = "YourName"
#+end_src

Note that you can ignore the configuration within each repository and
simply set up this information in the =cgitrc= file, if you want to do
it that way.

** Editing =cgit=
In order to edit certain items within =cgit=, you need to edit the
=cgitrc= file.

#+begin_src sh
nano ~/cgit/cgitrc
#+end_src

Below is an example configuration for =cgitrc=. You can find all the
configuration options within the
[[https://git.zx2c4.com/cgit/plain/cgitrc.5.txt][configuration manual]].

#+begin_src conf
css=/cgit.css
logo=/logo.png
favicon=/favicon.png
robots=noindex, nofollow

enable-index-links=1
enable-commit-graph=1
enable-blame=1
enable-log-filecount=1
enable-log-linecount=1
enable-git-config=1

clone-url=git://git.example.com/$CGIT_REPO_URL ssh://git@git.example.com:/git/$CGIT_REPO_URL

root-title=My Git Website
root-desc=My personal git repositories.

# Allow download of tar.gz, tar.bz2 and zip-files
snapshots=tar.gz tar.bz2 zip

##
## List of common mimetypes
##
mimetype.gif=image/gif
mimetype.html=text/html
mimetype.jpg=image/jpeg
mimetype.jpeg=image/jpeg
mimetype.pdf=application/pdf
mimetype.png=image/png
mimetype.svg=image/svg+xml

# Highlight source code
# source-filter=/var/www/htdocs/cgit/filters/syntax-highlighting.sh
source-filter=/var/www/htdocs/cgit/filters/syntax-highlighting.py

# Format markdown, restructuredtext, manpages, text files, and html files
# through the right converters
about-filter=/var/www/htdocs/cgit/filters/about-formatting.sh

##
## Search for these files in the root of the default branch of repositories
## for coming up with the about page:
##
readme=:README.md
readme=:readme.md
readme=:README.mkd
readme=:readme.mkd
readme=:README.rst
readme=:readme.rst
readme=:README.html
readme=:readme.html
readme=:README.htm
readme=:readme.htm
readme=:README.txt
readme=:readme.txt
readme=:README
readme=:readme

# Repositories

# Uncomment the following line to scan a path instead of adding repositories manually
# scan-path=/git

## Test Section
section=git/test-section

repo.url=test.git
repo.path=/git/test.git
repo.readme=:README.md
repo.owner=John Doe
repo.desc=An example repository!
#+end_src

** Final Fixes: Syntax Highlighting & README Rendering
After completing my initial install and playing around with it for a few
days, I noticed two issues:

1. Syntax highlighting did not work when viewing the source code within
   a file.
2. The =about= tab within a repository was not rendered to HTML.

The following process fixes these issues. To start, let's go to the
=cgit= directory where we were editing our configuration file earlier.

#+begin_src sh
cd ~/cgit
#+end_src

In here, create two folders that will hold our syntax files:

#+begin_src sh
mkdir filters && mkdir filters/html-converters && cd filters
#+end_src

Next, download the default filters:

#+begin_src sh
curl https://git.zx2c4.com/cgit/plain/filters/about-formatting.sh > about-formatting.sh
chmod 755 about-formatting.sh
curl https://git.zx2c4.com/cgit/plain/filters/syntax-highlighting.py > syntax-highlighting.py
chmod 755 syntax-highlighting.py
#+end_src

Finally, download the HTML conversion files you need. The example below
downloads the Markdown converter:

#+begin_src sh
cd html-converters
curl https://git.zx2c4.com/cgit/plain/filters/html-converters/md2html > md2html
chmod 755 md2html
#+end_src

If you need other filters or html-converters found within
[[https://git.zx2c4.com/cgit/tree/filters][the cgit project files]],
repeat the =curl= and =chmod= process above for whichever files you
need.

However, formatting will not work quite yet since the Docker cgit
container we're using doesn't have the formatting packages installed.
You can install them easily by installing Python 3+ and the =pygments=
package:

#+begin_src sh
# Enter the container's command line
sudo docker exec -it cgit bash
#+end_src

#+begin_src sh
# Install the necessary packages and then exit
yum update -y && \
yum upgrade -y && \
yum install python3 python3-pip -y && \
pip3 install markdown pygments && \
exit
#+end_src

*You will need to enter the cgit docker container and re-run these =yum=
commands every time you kill and restart the container!*

If not done already, we need to add the following variables to our
=cgitrc= file in order for =cgit= to know where our filtering files are:

#+begin_src conf
# Highlight source code with python pygments-based highlighter
source-filter=/var/www/htdocs/cgit/filters/syntax-highlighting.py

# Format markdown, restructuredtext, manpages, text files, and html files
# through the right converters
about-filter=/var/www/htdocs/cgit/filters/about-formatting.sh
#+end_src

Now you should see that syntax highlighting and README rendering on the
=about= tab are fixed.

** Theming
I won't go into much detail in this section, but you can fully theme
your installation of =cgit= since you have access to the =cgit.css= file
in your web root. This is another file you can add as a volume to the
=docker-compose.yml= file if you want to edit it without entering the
container's command line.

** Remember to Back Up Your Data!
The last thing to note is that running services on your own equipment
means that you're assuming a level of risk regarding data loss,
catastrophes, etc. In order to reduce the impact of any such occurrence,
I suggest backing up your data regularly.

Backups can be automated via =cron=, by hooking your base directory up
to a cloud provider, or even by setting up hooks to push all repository
info to git mirrors on other git hosts. Whatever the method, make sure
that your data doesn't vanish in the event that your drives or servers
fail.
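
If you'd like a starting point for the =cron= route, a minimal sketch is
shown below; the =/backups= destination and the nightly schedule are
assumptions, not part of my actual setup, so adjust them to your own
environment.

#+begin_src sh
# Hypothetical crontab entry: archive the /git base directory at 02:00
# every day. The /backups path is an assumption -- change as needed.
# Note that % must be escaped as \% inside a crontab.
0 2 * * * tar -czf /backups/git-$(date +\%F).tar.gz /git
#+end_src

Pair a job like this with some retention policy (or sync the tarballs
off-site) so a single disk failure can't take the backups down with the
server.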

diff --git a/blog/gnupg/index.org b/blog/gnupg/index.org
deleted file mode 100644
index 59e12e7..0000000
--- a/blog/gnupg/index.org
+++ /dev/null
@@ -1,297 +0,0 @@
#+title: GNU Privacy Guard (GPG)
#+date: 2022-07-14
#+description: Learn how to create a PGP key with GNU Privacy Guard (GPG).
#+filetags: :privacy:

* The History of GPG
[[https://gnupg.org/][GNU Privacy Guard]], also known as GnuPG and GPG,
is free ("free" as in both speech and beer) software that fully
implements the OpenPGP Message Format documented in
[[https://www.rfc-editor.org/rfc/rfc4880][RFC 4880]].

I won't go in-depth on the full history of the software in this post,
but it is important to understand that GPG is not the same as PGP
(Pretty Good Privacy), which is a different implementation of RFC 4880.
However, GPG was designed to interoperate with PGP.

GPG was originally developed in the late 1990s by
[[https://en.wikipedia.org/wiki/Werner_Koch][Werner Koch]] and has
historically been funded generously by the German government.

Now that we have all the high-level info out of the way, let's dive into
the different aspects of GPG and its uses.

* Encryption Algorithms
GPG supports a wide range of different encryption algorithms, including
public-key, cipher, hash, and compression algorithms. The support for
these algorithms has grown since the adoption of the Libgcrypt library
in the 2.x versions of GPG.

As you will be able to see below in an example of a full key generation
with the GPG command line tool, GPG recommends the following algorithms
to new users:

#+begin_src sh
Please select what kind of key you want:
   (1) RSA and RSA
   (2) DSA and Elgamal
   (3) DSA (sign only)
   (4) RSA (sign only)
   (9) ECC (sign and encrypt) *default*
  (10) ECC (sign only)
#+end_src

I am not doing an in-depth explanation here in order to keep the focus
on GPG and not encryption algorithms. If you want a deep dive into
cryptography or encryption algorithms, please read my other posts:

- [[../aes-encryption/][AES Encryption]] (2018)
- [[../cryptography-basics/][Cryptography Basics]] (2020)

** Vulnerabilities
As of 2022-07-14, there are a few different vulnerabilities associated
with GPG or the libraries it uses:

- GPG versions 1.0.2--1.2.3 contain a bug where "as soon as one
  (GPG-generated) ElGamal signature of an arbitrary message is released,
  one can recover the signer's private key in less than a second on a
  PC." ([[https://www.di.ens.fr/~pnguyen/pub_Ng04.htm][Source]])
- GPG versions prior to 1.4.2.1 contain a false positive signature
  verification bug.
  ([[https://lists.gnupg.org/pipermail/gnupg-announce/2006q1/000211.html][Source]])
- GPG versions prior to 1.4.2.2 cannot detect injection of unsigned
  data.
  ([[https://lists.gnupg.org/pipermail/gnupg-announce/2006q1/000218.html][Source]])
- Libgcrypt, a library used by GPG, contained a bug which enabled full
  key recovery for RSA-1024 and some RSA-2048 keys. This was resolved in
  a GPG update in 2017. ([[https://lwn.net/Articles/727179/][Source]])
- The [[https://en.wikipedia.org/wiki/ROCA_vulnerability][ROCA
  Vulnerability]] affects RSA keys generated by YubiKey 4 tokens.
  ([[https://crocs.fi.muni.cz/_media/public/papers/nemec_roca_ccs17_preprint.pdf][Source]])
- The [[https://en.wikipedia.org/wiki/SigSpoof][SigSpoof Attack]] allows
  an attacker to spoof digital signatures.
  ([[https://arstechnica.com/information-technology/2018/06/decades-old-pgp-bug-allowed-hackers-to-spoof-just-about-anyones-signature/][Source]])
- Libgcrypt 1.9.0 contains a severe flaw related to a heap buffer
  overflow, fixed in Libgcrypt 1.9.1.
  ([[https://web.archive.org/web/20210221012505/https://www.theregister.com/2021/01/29/severe_libgcrypt_bug/][Source]])

* Platforms
Originally developed as a command-line program for *nix systems, GPG now
has a wealth of front-end applications and libraries available for
end-users. However, the most recommended programs remain the same:

- [[https://gnupg.org][GnuPG]] for Linux (depending on distro)
- [[https://gpg4win.org][Gpg4win]] for Windows
- [[https://gpgtools.org][GPGTools]] for macOS

* Creating a Key Pair
In order to create a GPG key pair, a user would first need to install
GPG on their system. If we're assuming that the user is on Fedora Linux,
they would execute the following:

#+begin_src sh
sudo dnf install gpg
#+end_src

Once installed, a user can create a new key pair with the following
command:

#+begin_src sh
gpg --full-generate-key
#+end_src

GPG will walk the user through an interactive setup that asks for an
algorithm preference, an expiration date, and a name and email to
associate with this key.

See the following example key set-up for a default key generation using
the GnuPG command-line interface:

#+begin_src sh
gpg (GnuPG) 2.3.6; Copyright (C) 2021 Free Software Foundation, Inc.
This is free software: you are free to change and redistribute it.
There is NO WARRANTY, to the extent permitted by law.

Please select what kind of key you want:
   (1) RSA and RSA
   (2) DSA and Elgamal
   (3) DSA (sign only)
   (4) RSA (sign only)
   (9) ECC (sign and encrypt) *default*
  (10) ECC (sign only)
  (14) Existing key from card
Your selection? 9
Please select which elliptic curve you want:
   (1) Curve 25519 *default*
   (4) NIST P-384
Your selection? 1
Please specify how long the key should be valid.
         0 = key does not expire
      <n>  = key expires in n days
      <n>w = key expires in n weeks
      <n>m = key expires in n months
      <n>y = key expires in n years
Key is valid for? (0) 0
Key does not expire at all
Is this correct? (y/N) y

GnuPG needs to construct a user ID to identify your key.

Real name: John Doe
Email address: johndoe@example.com
Comment: test key
You selected this USER-ID:
    "John Doe (test key) <johndoe@example.com>"

Change (N)ame, (C)omment, (E)mail or (O)kay/(Q)uit? O
We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.
We need to generate a lot of random bytes. It is a good idea to perform
some other action (type on the keyboard, move the mouse, utilize the
disks) during the prime generation; this gives the random number
generator a better chance to gain enough entropy.
gpg: revocation certificate stored as 'example.rev'
public and secret key created and signed.

pub   ed25519 2022-07-14 [SC]
      E955B7700FFC11EF51C2BA1FE096AACDD4C32E9C
uid                      John Doe (test key) <johndoe@example.com>
sub   cv25519 2022-07-14 [E]
#+end_src

Please note that GUI apps may differ slightly from the GPG command-line
interface.
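
After generation, you can confirm the new pair landed in your keyring
with GPG's standard listing commands; nothing below assumes anything
beyond a default GnuPG installation.

#+begin_src sh
# List public keys in your keyring; the new key should appear with the
# name, comment, and email entered above.
gpg --list-keys

# List the corresponding secret (private) keys.
gpg --list-secret-keys
#+end_src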
-
-* Common Usage
-As noted in RFC 4880, the general functions of OpenPGP are as follows:
-
-- digital signatures
-- encryption
-- compression
-- Radix-64 conversion
-- key management and certificate services
-
-From this, you can probably gather that the main use of GPG is for
-encrypting data and/or signing the data with a key. The purpose of
-encrypting data with GPG is to ensure that no one except the intended
-recipient(s) can access the data.
-
-Let's explore some specific GPG use-cases.
-
-** Email
-One of the more popular uses of GPG is to sign and/or encrypt emails.
-With the use of a GPG keypair, you can encrypt a message, its subject,
-and even the attachments within.
-
-The first process, signing a message without any encryption, is
-generally used to provide assurance that an email truly comes from the
-sender it claims to come from. When I send an email and sign it with my
-private key, the recipient(s) of the message can use my public key to
-verify that the message was truly signed by me.
-
-The second process, the actual encryption of the message and its
-contents, works by using a combination of the sender's keys and the
-recipient's keys. This process may vary slightly by implementation, but
-it most commonly uses asymmetric cryptography, also known as public-key
-cryptography. In this version of encryption, the sender's private key is
-used to sign the message, and the recipient's public key is used to
-encrypt it.
-
-If two people each have their own private keys and exchange their public
-keys, they can send encrypted messages back and forth with GPG. This is
-also possible with symmetric cryptography, but the process differs since
-there are no key pairs.
-
-Implementation of email encryption varies greatly between email clients,
-so you will need to reference your email client's documentation to
-ensure you are setting it up correctly for that specific client.
-
-** File Encryption
-As noted in the section above regarding emails, GPG enables users to
-send messages to each other once both are set up with GPG keys. In this
-example, I am going to show how a user could send a file called
-=example_file.txt= to another user via the recipient's email.
-
-The sender would find the file they want to send and execute the
-following command:
-
-#+begin_src sh
-gpg --encrypt --output example_file.txt.gpg --recipient \
-recipient@example.com example_file.txt
-#+end_src
-
-Once received, the recipient can decrypt the file with the following
-command:
-
-#+begin_src sh
-gpg --decrypt --output example_file.txt example_file.txt.gpg
-#+end_src
-
-** Ownership Signatures
-One important aspect of GPG, especially for developers, is the ability
-to sign data without encrypting it. For example, developers often sign
-code changes when they commit the changes back to a central repository,
-in order to record who made the changes. This allows other users to
-look at a code change and verify that the change was valid.
-
-In order to do this using [[https://git-scm.com][Git]], the developer
-simply needs to alter the =git commit= command to include the =-S= flag.
-Here's an example:
-
-#+begin_src sh
-git commit -S -m "my commit message"
-#+end_src
-
-As an expansion of the example above, Git users can configure their
-environment with a default key to use by adding their GPG signature:
-
-#+begin_src sh
-git config --global user.signingkey XXXXXXXXXXXXXXXX
-#+end_src
-
-If you're not sure what your signature is, you can find it titled =sig=
-in the output of this command:
-
-#+begin_src sh
-gpg --list-signatures
-#+end_src
-
-** File Integrity
-When a person generates a signature for data, they allow users to
-verify the signature on that data in the future to ensure the data has
-not been corrupted. This is most common with software applications
-hosted on the internet - developers provide signatures so that users
-can verify a website was not hijacked and download links replaced with
-dangerous software.
-
-In order to verify signed data, a user needs to have:
-
-1. The signed data
-2. A signature file
-3. The public GPG key of the signer
-
-Once the signer's public key is imported on the user's system, and they
-have the data and signature, they can verify the data with the following
-commands:
-
-#+begin_src sh
-# If the signature is attached to the data
-gpg --verify [signature-file]
-
-# If the signature is detached as a separate file from the data
-gpg --verify [signature-file] [original-file]
-#+end_src
-
-*** Finding Public Keys
-In order to use GPG with others, a user needs to know the other users'
-keys. This is easy to do if the user knows the other user(s) in person,
-but may be hard if the relationship is strictly digital. Luckily, there
-are a few options. The first option is to look at a user's web page or
-social pages if they have them.
-
-Otherwise, the best option is to use a keyserver, such as:
-
-- [[https://pgp.mit.edu][pgp.mit.edu]]
-- [[https://keys.openpgp.org][keys.openpgp.org]]
diff --git a/blog/goaccess-geoip/index.org b/blog/goaccess-geoip/index.org
deleted file mode 100644
index 6136c21..0000000
--- a/blog/goaccess-geoip/index.org
+++ /dev/null
@@ -1,64 +0,0 @@
-#+title: Inspecting Nginx Logs with GoAccess and MaxMind GeoIP Data
-#+date: 2023-06-08
-#+description: Learn how to use GoAccess and MaxMind to evaluate visitors to your web server.
-#+filetags: :sysadmin:
-
-* Overview
-[[https://goaccess.io/][GoAccess]] is an open source real-time web log
-analyzer and interactive viewer that runs in a terminal in *nix systems
-or through your browser.
-
-* Installation
-To start, you'll need to install GoAccess for your OS. Here's an example
-for Debian-based distros:
-
-#+begin_src sh
-sudo apt install goaccess
-#+end_src
-
-Next, find any number of the MaxMind GeoIP database files on GitHub or
-another file hosting website. We're going to use P3TERX's version in
-this example:
-
-#+begin_src sh
-wget https://github.com/P3TERX/GeoLite.mmdb/raw/download/GeoLite2-City.mmdb
-#+end_src
-
-Be sure to save this file in an easy-to-remember location!
-
-* Usage
-In order to utilize the full capabilities of GoAccess and MMDB, start
-with the command template below and customize as necessary. This will
-export an HTML view of the GoAccess dashboard, showing all relevant
-information related to that site's access log. You can also omit the
-=-o output.html= parameter if you prefer to view the data within the CLI
-instead of creating an HTML file.
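-
-For example, a bare-bones interactive run (no GeoIP database and no
-HTML export) could look like the sketch below; the log path is just an
-assumption, so adjust it for your server:
-
-#+begin_src sh
-# View the dashboard directly in the terminal
-goaccess /var/log/nginx/example.access.log --log-format=COMBINED
-#+end_src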
-
-With the addition of the GeoIP Database parameter, section
-=16 - Geo Location= will be added with the various countries that are
-associated with the collected IP addresses.
-
-#+begin_src sh
-zcat /var/log/nginx/example.access.log.*.gz | goaccess \
---geoip-database=/home/user/GeoLite2-City.mmdb \
---date-format=%d/%b/%Y \
---time-format=%H:%M:%S \
---log-format=COMBINED \
--o output.html \
-/var/log/nginx/example.access.log
-#+end_src
-
-** Example Output
-See below for an example of the HTML output:
-
-#+caption: GoAccess HTML
-[[https://img.cleberg.net/blog/20230608-goaccess/goaccess-dashboard.png]]
-
-You can also see the GeoIP card created by the integration of the
-MaxMind database information.
-
-#+caption: GoAccess GeoIP
-[[https://img.cleberg.net/blog/20230608-goaccess/goaccess-geoip.png]]
-
-That's all there is to it! Informational data is provided in an
-organized fashion with minimal effort.
diff --git a/blog/graphene-os/index.org b/blog/graphene-os/index.org
deleted file mode 100644
index 2e34a00..0000000
--- a/blog/graphene-os/index.org
+++ /dev/null
@@ -1,154 +0,0 @@
-#+title: Installing Graphene OS on the Pixel 6 Pro
-#+date: 2022-09-21
-#+description: A retrospective on the successful command-line installation of Graphene OS on a Pixel 6 Pro.
-#+filetags: :privacy:
-
-* Introduction
-After using iOS for a couple of years, I finally took the plunge and
-purchased a Pixel 6 Pro in order to test and use
-[[https://grapheneos.org][GrapheneOS]].
-
-The installation process is rather quick once you have the tools and
-files you need. Overall, it can be done in just a few minutes.
-
-* Gathering Tools & Files
-** Android Tools
-First, in order to interact with the device, we will need the
-[[https://developer.android.com/studio/releases/platform-tools.html][Android
-platform tools]]. Find the Linux download and save the ZIP folder to
-your preferred location.
-
-Once we've downloaded the files, we will need to unzip them, enter the
-directory, and move the necessary executables to a central location,
-such as =/usr/bin/=. For this installation, we only need the =fastboot=
-and =adb= executables.
-
-#+begin_src sh
-cd ~/Downloads
-#+end_src
-
-#+begin_src sh
-unzip platform-tools_r33.0.3-linux.zip
-cd platform-tools
-sudo mv fastboot /usr/bin/
-sudo mv adb /usr/bin/
-#+end_src
-
-** GrapheneOS Files
-Next, we need the [[https://grapheneos.org/releases][GrapheneOS files]]
-for our device and model. For example, the Pixel 6 Pro is codenamed
-=raven= on the release page.
-
-Once we have the links, let's download them to our working directory:
-
-#+begin_src sh
-curl -O https://releases.grapheneos.org/factory.pub
-curl -O https://releases.grapheneos.org/raven-factory-2022091400.zip
-curl -O https://releases.grapheneos.org/raven-factory-2022091400.zip.sig
-#+end_src
-
-1. Validate Integrity
-
-   In order to validate the integrity of the downloaded files, we will
-   need the =signify= package and Graphene's =factory.pub= file.
-
-   #+begin_src sh
-   sudo dnf install signify
-   #+end_src
-
-   #+begin_src sh
-   curl -O https://releases.grapheneos.org/factory.pub
-   #+end_src
-
-   Then we can validate the files and ensure that no data was corrupted
-   or modified before it was saved to our device.
-
-   #+begin_src sh
-   signify -Cqp factory.pub -x raven-factory-2022091400.zip.sig && echo verified
-   #+end_src
-
-2. Unzip Files
-
-   Once the files are verified, we can unzip the Graphene image and
-   enter the directory:
-
-   #+begin_src sh
-   unzip raven-factory-2022091400.zip && cd raven-factory-2022091400
-   #+end_src
-
-* Installation Process
-** Enable Developer Debugging & OEM Unlock
-Before we can actually flash anything to the phone, we will need to
-enable OEM Unlocking, as well as either USB Debugging or Wireless
-Debugging, depending on which method we will be using.
-
-To start, enable developer mode by going to =Settings= > =About= and
-tapping =Build Number= seven (7) times. You may need to enter your PIN
-to enable this mode.
-
-Once developer mode is enabled, go to =Settings= > =System= >
-=Developer Options= and enable OEM Unlocking, as well as USB or Wireless
-Debugging. In my case, I chose USB Debugging and performed all actions
-via USB cable.
-
-Once these options are enabled, plug the phone into the computer and
-execute the following command:
-
-#+begin_src sh
-adb devices
-#+end_src
-
-If an unauthorized error occurs, make sure the USB mode on the phone is
-changed from charging to something like "File Transfer" or "PTP." You
-can find the USB mode in the notification tray.
-
-** Reboot Device
-Once we have found the device via =adb=, we can boot into the
-bootloader interface either by holding the volume down button while the
-phone reboots or by executing the following command:
-
-#+begin_src sh
-adb reboot bootloader
-#+end_src
-
-** Unlock the Bootloader
-The phone will reboot and load the bootloader screen upon startup. At
-this point, we are ready to start the actual flashing of GrapheneOS onto
-the device.
-
-*NOTE*: In my situation, I needed to use =sudo= with every =fastboot=
-command, but not with =adb= commands. I am not sure if this is standard
-or a Fedora quirk, but I'm documenting my commands verbatim in this
-post.
-
-First, we start by unlocking the bootloader so that we can load other
-ROMs:
-
-#+begin_src sh
-sudo fastboot flashing unlock
-#+end_src
-
-** Flashing Factory Images
-Once the phone is unlocked, we can flash it with the =flash-all.sh=
-script found inside the =raven-factory-2022091400= folder we entered
-earlier:
-
-#+begin_src sh
-sudo ./flash-all.sh
-#+end_src
-
-This process should take a few minutes and will print informational
-messages as things progress. Avoid doing anything on the phone while
-this process is operating.
-
-** Lock the Bootloader
-If everything was successful, the phone should reboot a few times and
-finally land back on the bootloader screen. At this point, we can
-re-lock the bootloader to enable full verified boot and protect the
-device from unwanted flashing or erasure of data.
-
-#+begin_src sh
-sudo fastboot flashing lock
-#+end_src
-
-Once done, the device will be wiped and ready for a fresh set-up!
diff --git a/blog/happiness-map/index.org b/blog/happiness-map/index.org
deleted file mode 100644
index 1eab63e..0000000
--- a/blog/happiness-map/index.org
+++ /dev/null
@@ -1,217 +0,0 @@
-#+title: Data Visualization: World Choropleth Map of Happiness
-#+date: 2020-09-25
-#+description: Exploring and visualizing data with Python.
-#+filetags: :data:
-
-* Background Information
-The dataset (obtained from
-[[https://www.kaggle.com/unsdsn/world-happiness][Kaggle]]) used in this
-article contains a list of countries around the world, their happiness
-rankings and scores, as well as other national scoring measures.
- -Fields include: - -- Overall rank -- Country or region -- GDP per capita -- Social support -- Healthy life expectancy -- Freedom to make life choices -- Generosity -- Perceptions of corruption - -There are 156 records. Since there are ~195 countries in the world, we -can see that around 40 countries will be missing from this dataset. - -* Install Packages -As always, run the =install= command for all packages needed to perform -analysis. - -#+begin_src python -!pip install folium geopandas matplotlib numpy pandas -#+end_src - -* Import the Data -We only need a couple packages to create a choropleth map. We will use -[[https://python-visualization.github.io/folium/][Folium]], which -provides map visualizations in Python. We will also use geopandas and -pandas to wrangle our data before we put it on a map. - -#+begin_src python -# Import the necessary Python packages -import folium -import geopandas as gpd -import pandas as pd -#+end_src - -To get anything to show up on a map, we need a file that will specify -the boundaries of each country. Luckily, GeoJSON files exist (for free!) -on the internet. To get the boundaries of every country in the world, we -will use the GeoJSON link shown below. - -GeoPandas will take this data and load it into a dataframe so that we -can easily match it to the data we're trying to analyze. Let's look at -the GeoJSON dataframe: - -#+begin_src python -# Load the GeoJSON data with geopandas -geo_data = gpd.read_file('https://raw.githubusercontent.com/datasets/geo-countries/master/data/countries.geojson') -geo_data.head() -#+end_src - -#+caption: GeoJSON Dataframe -[[https://img.cleberg.net/blog/20200925-world-choropleth-map/geojson_df.png]] - -Next, let's load the data from the Kaggle dataset. I've downloaded this -file, so update the file path if you have it somewhere else. After -loading, let's take a look at this dataframe: - -#+begin_src python -# Load the world happiness data with pandas -happy_data = pd.read_csv(r'~/Downloads/world_happiness_data_2019.csv') -happy_data.head() -#+end_src - -#+caption: Happiness Dataframe -[[https://img.cleberg.net/blog/20200925-world-choropleth-map/happiness_df.png]] - -* Clean the Data -Some countries need to be renamed, or they will be lost when you merge -the happiness and GeoJSON dataframes. This is something I discovered -when the map below showed empty countries. I searched both data frames -for the missing countries to see the naming differences. Any countries -that do not have records in the =happy_data= df will not show up on the -map. - -#+begin_src python -# Rename some countries to match our GeoJSON data - -# Rename USA -usa_index = happy_data.index[happy_data['Country or region'] == 'United States'] -happy_data.at[usa_index, 'Country or region'] = 'United States of America' - -# Rename Tanzania -tanzania_index = happy_data.index[happy_data['Country or region'] == 'Tanzania'] -happy_data.at[tanzania_index, 'Country or region'] = 'United Republic of Tanzania' - -# Rename the Congo -republic_congo_index = happy_data.index[happy_data['Country or region'] == 'Congo (Brazzaville)'] -happy_data.at[republic_congo_index, 'Country or region'] = 'Republic of Congo' - -# Rename the DRC -democratic_congo_index = happy_data.index[happy_data['Country or region'] == 'Congo (Kinshasa)'] -happy_data.at[democratic_congo_index, 'Country or region'] = 'Democratic Republic of the Congo' -#+end_src - -* Merge the Data -Now that we have clean data, we need to merge the GeoJSON data with the -happiness data. 
Since we've stored them both in dataframes, we just need -to call the =.merge()= function. - -We will also rename a couple columns, just so that they're a little -easier to use when we create the map. - -#+begin_src python -# Merge the two previous dataframes into a single geopandas dataframe -merged_df = geo_data.merge(happy_data,left_on='ADMIN', right_on='Country or region') - -# Rename columns for ease of use -merged_df = merged_df.rename(columns = {'ADMIN':'GeoJSON_Country'}) -merged_df = merged_df.rename(columns = {'Country or region':'Country'}) -#+end_src - -#+caption: Merged Dataframe -[[https://img.cleberg.net/blog/20200925-world-choropleth-map/merged_df.png]] - -* Create the Map -The data is finally ready to be added to a map. The code below shows the -simplest way to find the center of the map and create a Folium map -object. The important part is to remember to reference the merged -dataframe for our GeoJSON data and value data. The columns specify which -geo data and value data to use. - -#+begin_src python -# Assign centroids to map -x_map = merged_df.centroid.x.mean() -y_map = merged_df.centroid.y.mean() -print(x_map,y_map) - -# Creating a map object -world_map = folium.Map(location=[y_map, x_map], zoom_start=2,tiles=None) -folium.TileLayer('CartoDB positron',name='Dark Map',control=False).add_to(world_map) - -# Creating choropleth map -folium.Choropleth( - geo_data=merged_df, - name='Choropleth', - data=merged_df, - columns=['Country','Overall rank'], - key_on='feature.properties.Country', - fill_color='YlOrRd', - fill_opacity=0.6, - line_opacity=0.8, - legend_name='Overall happiness rank', - smooth_factor=0, - highlight=True -).add_to(world_map) -#+end_src - -Let's look at the resulting map. - -#+caption: Choropleth Map -[[https://img.cleberg.net/blog/20200925-world-choropleth-map/map.png]] - -* Create a Tooltip on Hover -Now that we have a map set up, we could stop. However, I want to add a -tooltip so that I can see more information about each country. The -=tooltip_data= code below will show a popup on hover with all the data -fields shown. - -#+begin_src python - # Adding labels to map - style_function = lambda x: {'fillColor': '#ffffff', - 'color':'#000000', - 'fillOpacity': 0.1, - 'weight': 0.1} - -tooltip_data = folium.features.GeoJson( - merged_df, - style_function=style_function, - control=False, - tooltip=folium.features.GeoJsonTooltip( - fields=['Country' - ,'Overall rank' - ,'Score' - ,'GDP per capita' - ,'Social support' - ,'Healthy life expectancy' - ,'Freedom to make life choices' - ,'Generosity' - ,'Perceptions of corruption' - ], - aliases=['Country: ' - ,'Happiness rank: ' - ,'Happiness score: ' - ,'GDP per capita: ' - ,'Social support: ' - ,'Healthy life expectancy: ' - ,'Freedom to make life choices: ' - ,'Generosity: ' - ,'Perceptions of corruption: ' - ], - style=('background-color: white; color: #333333; font-family: arial; font-size: 12px; padding: 10px;') - ) -) -world_map.add_child(tooltip_data) -world_map.keep_in_front(tooltip_data) -folium.LayerControl().add_to(world_map) - -# Display the map -world_map -#+end_src - -The final image below will show you what the tooltip looks like whenever -you hover over a country. 
-
-#+caption: Choropleth Map Tooltip
-[[https://img.cleberg.net/blog/20200925-world-choropleth-map/tooltip_map.png]]
diff --git a/blog/homelab/index.org b/blog/homelab/index.org
deleted file mode 100644
index ffefe5d..0000000
--- a/blog/homelab/index.org
+++ /dev/null
@@ -1,149 +0,0 @@
-#+title: An Inside Look at My Homelab
-#+date: 2020-05-03
-#+description: A retrospective on the first iteration of my home lab.
-#+filetags: :sysadmin:
-
-* What is a Homelab?
-Starting as a developer, I have largely stayed away from hardware-based
-hobbies (other than building a gaming desktop). However, as the
-quarantine for COVID-19 stretched out further and further, I found
-myself bored and in search of new hobbies. After spending the last few
-months browsing the [[https://www.reddit.com/r/homelab/][r/homelab]]
-subreddit, I decided it was time to jump in and try things out for
-myself.
-
-Since I am a beginner and just recently graduated from college,
-everything I've done so far in my homelab is fairly low-budget.
-
-* Hardware
-#+caption: HomeLab Diagram
-[[https://img.cleberg.net/blog/20200503-homelab/homelab-min.png]]
-
-** Raspberry Pi 4
-Luckily, I had actually purchased a
-[[https://www.raspberrypi.org/products/raspberry-pi-4-model-b/][Raspberry
-Pi 4]] before the quarantine started so that I could try to keep Plex
-Media Center running, even while my desktop computer was turned off. I
-started here, using the Pi to hold Plex and Pi-hole until I grew tired
-of the slow performance.
-
-Here are the specifications for the Pi 4:
-
-- Broadcom BCM2711, Quad core Cortex-A72 (ARM v8) 64-bit SoC @ 1.5GHz
-- 4GB LPDDR4-3200 SDRAM
-- Gigabit Ethernet
-- H.265 (4kp60 decode), H264 (1080p60 decode, 1080p30 encode)
-- 64 GB MicroSD Card
-
-** Dell Optiplex 5040
-Since I wasn't happy with the Pi as my main server, I turned to
-Craigslist. I know a lot of other homelabbers use Ebay, but I can't seem
-to ever trust it enough to purchase items on there. So I used Craigslist
-and found a Dell Optiplex 5040 desktop computer on sale for $90. While
-this computer might be underpowered, it was one of the few computers
-under $100 that was available during quarantine.
-
-Here are the specifications for the Dell Optiplex 5040:
-
-- Intel Core i3 6100
-- 8GB RAM DDR3
-- Intel HD Graphics
-- Gigabit Ethernet
-- 500GB Hard Drive
-
-While this hardware would be awful for a work computer or a gaming rig,
-it turned out to be wonderful for my server purposes. The only
-limitation I have found so far is the CPU. The i3-6100 only has enough
-power for a single 4k video transcode at a time. I haven't tested more
-than three 1080p streams at a time, but the maximum number of streams
-I've ever actually used is two.
-
-** WD easystore 10TB & 8TB
-Application storage and temporary files are stored on the internal hard
-drive of the server, but all media files (movies, tv, games, books, etc.)
-are stored externally on my WD easystore hard drive. Creating auto-mount
-configurations in the =/etc/fstab= file on my server allows the hard
-drives to automatically mount whenever I need to restart my server.
-
-#+begin_quote
-Update: In March 2022, I shucked the hard drives out of their external
-cases, put some Kapton tape on the third power pin to prevent power
-shutdowns, and stuck them inside my server tower using internal SATA
-cables.
-
-#+end_quote
-
-** Netgear Unmanaged Switch
-To manage all the ethernet cords used by my homelab, my desktop, and my
-living room media center, I purchased an 8-port gigabit ethernet switch
-for $50 at my local computer store. This is probably much more than I
-should have spent on an unmanaged switch, but I am comfortable with the
-choice.
-
-** TP-Link Managed Switch
-Since I use the unmanaged switch to group all living room devices
-together, I use the managed switch to configure VLANs and secure my
-network.
-
-** Arris TM1602A Modem & Sagecom Fast 5280 Router
-My default modem and router, provided by my ISP, are fairly standard.
-The Arris modem supports DOCSIS 3.0, which is something that I
-definitely wanted as a minimum. The Sagecom router is also standard, no
-fancy bells or whistles. However, it does support DHCP and DHCPv6, which
-is something you can use to route all household traffic through a
-pi-hole or firewall.
-
-** TP-Link EAP
-In order to gain better control over the network, I use my own wireless
-access point instead of the one included in the Sagecom router above.
-Now I can control and organize all of my ethernet connections through
-the VLANs on the managed switch and wireless connections through the
-VLANs on the EAP.
-
-** Generic Printer
-The last piece of my homelab is a standard wireless printer. Nothing
-special here.
-
-* Software
-** Ubuntu Server 20.04
-While the 20.04 version of Ubuntu was just released, I always like to
-experiment with new features (and I don't mind breaking my system - it
-just gives me more experience learning how to fix things). So, I have
-Ubuntu Server 20.04 installed on the Dell Optiplex server and Ubuntu
-Server 19.10 installed on the Raspberry Pi. Once I find an acceptable
-use for the Pi, I will most likely switch the operating system.
-
-** Docker
-I am /very/ new to Docker, but I have had a lot of fun playing with it
-so far. Docker is used to create containers that can hold all the
-contents of a system without interfering with other software on the same
-system. So far, I have successfully installed pi-hole, GitLab, Gogs, and
-Nextcloud in containers. However, I opted to delete all of those so that
-I can reconfigure them more professionally at a later time.
-
-** Plex Media Server
-Plex is a media center software that allows you to organize your movies,
-TV shows, music, photos, and videos automatically. It will even download
-metadata for you so that you can easily browse these collections.
-
-** Pi-hole
-Pi-hole is an alternative ad-blocker that runs at the DNS level,
-allowing you to block traffic when it hits your network, so that you can
-reject any traffic you deem to be bad. Pi-hole uses blacklists and
-whitelists to decide which traffic to block and, luckily, there are a
-lot of pre-made lists out there on Reddit, GitHub, etc.
-
-** Nextcloud
-While I had trouble with the Docker version of Nextcloud, I was very
-successful when setting up the snap version. Using this, I was able to
-map Nextcloud to a subdomain of a domain I own in Namecheap.
-Additionally, Nextcloud has an integration with Let's Encrypt that
-allows me to issue certificates automatically to any new domain I
-authorize.
-
-** Webmin
-To monitor my servers, and the processes running on them, I use the
-Webmin dashboard. This was fairly painless to set up, and I currently
-access it straight through the server's IP address. In the future, I
-will be looking to configure Webmin to use a custom domain just like
-Nextcloud.
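-
-As a closing note on storage: the =/etc/fstab= auto-mount configuration
-mentioned in the hardware section above boils down to one line per
-drive. A hypothetical entry might look like this (the UUID, mount
-point, and filesystem type are placeholders; find your real values
-with =blkid=):
-
-#+begin_src sh
-# /etc/fstab - mount the media drive automatically at boot
-UUID=1234abcd-ef56-7890-abcd-ef1234567890  /mnt/media  ext4  defaults,nofail  0  2
-#+end_src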
diff --git a/blog/index.org b/blog/index.org deleted file mode 100644 index b19030a..0000000 --- a/blog/index.org +++ /dev/null @@ -1,141 +0,0 @@ -#+title: Blog -#+options: toc:nil - -Use =Ctrl + F= to search blog post titles for keywords. - -* TODO Create RSS Feed - -* 2024 - -- 2024-02-26 [[./org-blog/][Blogging in Org-Mode]] -- 2024-02-21 [[./self-hosting-otter-wiki/][Self-Hosting An Otter Wiki]] -- 2024-02-13 [[./ubuntu-emergency-mode/][Stuck in Ubuntu's Emergency Mode? Try Fixing the Fstab File]] -- 2024-02-06 [[./zfs/][How to Create a ZFS Pool on Ubuntu Linux]] -- 2024-01-27 [[./tableau-dashboard/][Data Visualization: Mapping Omaha Crime Data with Tableau]] -- 2024-01-26 [[./audit-dashboard/][Building an Audit Status Dashboard]] -- 2024-01-13 [[./local-llm/][Running Local LLMs on macOS and iOS]] -- 2024-01-09 [[./macos-customization/][Customizing macOS]] -- 2024-01-08 [[./dont-say-hello/][Don't Say Hello]] - -* 2023 - -- 2023-12-03 [[./unifi-nextdns/][How to Install NextDNS on the Unifi Dream Machine]] -- 2023-11-08 [[./scli/][Installing scli on Alpine Linux (musl)]] -- 2023-10-17 [[./self-hosting-anonymousoverflow/][Self-Hosting AnonymousOverflow]] -- 2023-10-15 [[./alpine-ssh-hardening/][SSH Hardening for Alpine Linux]] -- 2023-10-11 [[./self-hosting-authelia/][Self-Hosting Authelia]] -- 2023-10-04 [[./digital-minimalism/][Digital Minimalism]] -- 2023-09-19 [[./audit-sql-scripts/][Useful SQL Scripts for Auditing Logical Access]] -- 2023-09-15 [[./self-hosting-gitweb/][Self-Hosting GitWeb via Nginx]] -- 2023-08-18 [[./agile-auditing/][Agile Auditing: An Introduction]] -- 2023-07-19 [[./plex-transcoder-errors/][How to Avoid Plex Error: 'Conversion failed. The transcoder failed to start up.']] -- 2023-07-12 [[./wireguard-lan/][Enable LAN Access in Mullvad Wireguard Conf Files]] -- 2023-06-30 [[./self-hosting-voyager/][Self-Hosting Voyager - A Lemmy Web Client]] -- 2023-06-28 [[./backblaze-b2/][Getting Started with Backblaze B2 Cloud Storage]] -- 2023-06-23 [[./self-hosting-convos/][Self-Hosting Convos IRC Web Client]] -- 2023-06-23 [[./byobu/][Byobu]] -- 2023-06-20 [[./audit-review-template/][Audit Testing Review Template]] -- 2023-06-18 [[./unifi-ip-blocklist/][Block IP Addresses and Subnets with Unifi Network Firewall]] -- 2023-06-08 [[./goaccess-geoip/][Inspecting Nginx Logs with GoAccess and MaxMind GeoIP Data]] -- 2023-06-08 [[./self-hosting-baikal/][Self-Hosting Baikal Server (CalDAV & CardDAV)]] -- 2023-05-22 [[./burnout/][RE: Burnout]] -- 2023-02-02 [[./exploring-hare/][Exploring the Hare Programming Language]] -- 2023-01-28 [[./self-hosting-wger/][Self-Hosting Wger Workout Manager]] -- 2023-01-23 [[./random-wireguard/][Connecting to a Random Mullvad Wireguard Host on Boot]] -- 2023-01-21 [[./flatpak-symlinks/][Running Flatpak Apps with Symlinks]] -- 2023-01-08 [[./fedora-login-manager/][How to Remove the Login Manager from Fedora i3]] -- 2023-01-05 [[./mass-unlike-tumblr-posts/][How to Easily Mass Unlike Tumblr Posts with Javascript]] -- 2023-01-03 [[./recent-website-changes/][Recent Website Changes]] - -* 2022 - -- 2022-12-23 [[./alpine-desktop/][Alpine Linux as a Desktop OS]] -- 2022-12-17 [[./st/][Simple Terminal]] -- 2022-12-07 [[./nginx-wildcard-redirect/][Redirect Nginx Subdomains & Trailing Content with Regex]] -- 2022-12-01 [[./nginx-compression/][Enable GZIP Compression in Nginx]] -- 2022-11-29 [[./nginx-referrer-ban-list/][Creating a Referrer Ban List in Nginx]] -- 2022-11-27 [[./server-build/][Building a Custom Rack-Mounted Server]] -- 2022-11-11 
[[./nginx-tmp-errors/][Fixing Permission Errors in /var/lib/nginx]] -- 2022-10-30 [[./linux-display-manager/][How to Disable or Change the Display Manager on Void Linux]] -- 2022-10-22 [[./alpine-linux/][Alpine Linux: My New Server OS]] -- 2022-10-04 [[./syncthing/][Syncthing: A Minimal Self-Hosted Cloud Storage Solution]] -- 2022-10-04 [[./mtp-linux/][How to Mount an MTP Mobile Device on Fedora Linux]] -- 2022-09-21 [[./graphene-os/][Installing Graphene OS on the Pixel 6 Pro]] -- 2022-09-17 [[./serenity-os/][Serenity OS: Testing Out a Unique System]] -- 2022-08-31 [[./privacy-com-changes/][Concerning Changes on Privacy.com]] -- 2022-03-23 [[./cloudflare-dns-api/][Dynamic DNS with Cloudflare API]] -- 2022-07-31 [[./bash-it/][Upgrade Bash with Bash-It & Ble.sh]] -- 2022-07-30 [[./flac-to-opus/][Recursive Command-Line FLAC to Opus Conversion]] -- 2022-07-25 [[./curseradio/][CurseRadio: Listening to the Radio on the Command Line]] -- 2022-07-14 [[./gnupg/][GNU Privacy Guard (GPG)]] -- 2022-07-01 [[./git-server/][Self-Hosting a Personal Git Server]] -- 2022-06-24 [[./fedora-i3/][Rebooting My Love Affair with Linux]] -- 2022-06-22 [[./daily-poetry/][Daily Plaintext Poetry via Email]] -- 2022-06-16 [[./terminal-lifestyle/][A Terminal Lifestyle]] -- 2022-06-07 [[./self-hosting-freshrss/][Self-Hosting FreshRSS]] -- 2022-06-01 [[./ditching-cloudflare/][Ditching Cloudflare for Njalla]] -- 2022-04-09 [[./pinetime/][PineTime: An Open-Source Smart Watch]] -- 2022-04-02 [[./nginx-reverse-proxy/][Set-Up a Reverse Proxy with Nginx]] -- 2022-03-26 [[./ssh-mfa/][Enable TOTP MFA for SSH]] -- 2022-03-24 [[./server-hardening/][Hardening a Public-Facing Home Server]] -- 2022-03-23 [[./nextcloud-on-ubuntu/][Nextcloud on Ubuntu]] -- 2022-03-08 [[./plex-migration/][Migrating Plex to a New Server (& Nvidia Transcoding)]] -- 2022-03-03 [[./financial-database/][Maintaining a Personal Financial Database]] -- 2022-03-02 [[./reliable-notes/][Easy, Reliable Note-Taking]] -- 2022-02-22 [[./tuesday/][Tuesday]] -- 2022-02-20 [[./nginx-caching/][Caching Static Content with Nginx]] -- 2022-02-17 [[./exiftool/][Stripping Image Metadata with exiftool]] -- 2022-02-16 [[./debian-and-nginx/][Migrating to a New Web Server Setup with Debian, Nginx, and Agate]] -- 2022-02-10 [[./leaving-the-office/][Leaving Office-Based Work in the Past]] -- 2022-02-10 [[./njalla-dns-api/][Dynamic DNS with Njalla API]] - -* 2021 - -- 2021-12-04 [[./cisa/][I Passed the CISA!]] -- 2021-10-09 [[./apache-redirect/][Apache Redirect HTML Files to a Directory]] -- 2021-08-25 [[./audit-sampling/][Audit Sampling with Python]] -- 2021-07-15 [[./delete-gitlab-repos/][How to Delete All GitLab Repositories]] -- 2021-05-30 [[./changing-git-authors/][Changing Git Authors]] -- 2021-04-28 [[./photography/][Jumping Back Into Photography]] -- 2021-04-23 [[./php-comment-system/][Roll Your Own Static Commenting System in PHP]] -- 2021-04-17 [[./gemini-server/][Hosting a Gemini Server]] -- 2021-03-30 [[./vps-web-server/][How to Set Up a VPS Web Server]] -- 2021-03-28 [[./vaporwave-vs-outrun/][Vaporwave vs Outrun]] -- 2021-03-28 [[./gemini-capsule/][Launching a Gemini Capsule]] -- 2021-03-19 [[./clone-github-repos/][How to Clone All Repositories from a GitHub or Sourcehut Account]] -- 2021-02-19 [[./macos/][macOS: Testing Out A New OS]] -- 2021-01-04 [[./fediverse/][A Simple Guide to the Fediverse]] -- 2021-01-07 [[./ufw/][Secure Your Network with the Uncomplicated Firewall (ufw)]] -- 2021-01-01 [[./seum/][SEUM: Speedrunners from Hell]] - -* 2020 - -- 2020-12-29 
[[./zork/][Zork: Let's Explore a Classic]]
-- 2020-12-27 [[./website-redesign/][Redesigning My Website: The 5 KB Result]]
-- 2020-12-28 [[./neon-drive/][Neon Drive: A Nostalgic 80s Arcade Racing Game]]
-- 2020-10-12 [[./mediocrity/][On the Pursuit of Mediocrity]]
-- 2020-09-25 [[./happiness-map/][Data Visualization: World Choropleth Map of Happiness]]
-- 2020-09-22 [[./internal-audit/][What is Internal Audit?]]
-- 2020-09-01 [[./visual-recognition/][IBM Watson Visual Recognition]]
-- 2020-08-29 [[./php-auth-flow/][PHP Authentication Flow]]
-- 2020-08-22 [[./redirect-github-pages/][Redirect GitHub Pages from Subdomain to the Top-Level Domain]]
-- 2020-07-26 [[./business-analysis/][Algorithmically Analyzing Local Businesses]]
-- 2020-07-20 [[./video-game-sales/][Data Exploration: Video Game Sales]]
-- 2020-05-19 [[./customizing-ubuntu/][Beginner's Guide: Customizing Ubuntu]]
-- 2020-05-03 [[./homelab/][An Inside Look at My Homelab]]
-- 2020-03-25 [[./session-manager/][Session Private Messenger]]
-- 2020-02-09 [[./cryptography-basics/][Cryptography Basics]]
-- 2020-01-26 [[./steam-on-ntfs/][Linux Gaming Tweak: Steam on NTFS Drives]]
-- 2020-01-25 [[./linux-software/][Linux Software]]
-
-* 2019
-
-- 2019-12-16 [[./password-security/][Password Security]]
-- 2019-12-03 [[./the-ansoff-matrix/][The Ansoff Matrix]]
-- 2019-09-09 [[./audit-analytics/][Data Analysis in Auditing]]
-- 2019-01-07 [[./useful-css/][Useful CSS Snippets]]
-
-* 2018
-
-- 2018-12-08 [[./aes-encryption/][AES Encryption]]
-- 2018-11-28 [[./cpp-compiler/][The C++ Compiler]]
diff --git a/blog/internal-audit/index.org b/blog/internal-audit/index.org
deleted file mode 100644
index 3074266..0000000
--- a/blog/internal-audit/index.org
+++ /dev/null
@@ -1,247 +0,0 @@
-#+title: What is Internal Audit?
-#+date: 2020-09-22
-#+description: Learn about the Internal Audit function and its purpose.
-#+filetags: :audit:
-
-#+caption: Internal Audit Overview
-[[https://img.cleberg.net/blog/20200922-what-is-internal-audit/internal-audit-overview.jpg]]
-
-* Definitions
-One of the many reasons that Internal Audit needs such thorough
-explaining to non-auditors is that Internal Audit can serve many
-purposes, depending on the organization's size and needs. However, the
-Institute of Internal Auditors (IIA) defines Internal Auditing as:
-
-#+begin_quote
-Internal auditing is an independent, objective assurance and consulting
-activity designed to add value and improve an organization's operations.
-It helps an organization accomplish its objectives by bringing a
-systematic, disciplined approach to evaluate and improve the
-effectiveness of risk management, control, and governance processes.
-
-#+end_quote
-
-However, this definition uses quite a few terms that aren't clear unless
-the reader already has a solid understanding of the auditing profession.
-To further explain, the following is a list of definitions that can help
-supplement understanding of internal auditing.
-
-** Independent
-Independence is the freedom from conditions that threaten the ability of
-the internal audit activity to carry out internal audit responsibilities
-in an unbiased manner. To achieve the degree of independence necessary
-to effectively carry out the responsibilities of the internal audit
-activity, the chief audit executive has direct and unrestricted access
-to senior management and the board. This can be achieved through a
-dual-reporting relationship.
-Threats to independence must be managed at the individual auditor,
-engagement, functional, and organizational levels.
-
-** Objective
-Objectivity is an unbiased mental attitude that allows internal auditors
-to perform engagements in such a manner that they believe in their work
-product and that no quality compromises are made. Objectivity requires
-that internal auditors do not subordinate their judgment on audit
-matters to others. Threats to objectivity must be managed at the
-individual auditor, engagement, functional, and organizational levels.
-
-** Assurance
-Assurance services involve the internal auditor's objective assessment
-of evidence to provide opinions or conclusions regarding an entity,
-operation, function, process, system, or other subject matters. The
-internal auditor determines the nature and scope of an assurance
-engagement. Generally, three parties are participants in assurance
-services: (1) the person or group directly involved with the entity,
-operation, function, process, system, or other subject matter (the
-process owner), (2) the person or group making the assessment (the
-internal auditor), and (3) the person or group using the assessment
-(the user).
-
-** Consulting
-Consulting services are advisory in nature and are generally performed
-at the specific request of an engagement client. The nature and scope of
-the consulting engagement are subject to agreement with the engagement
-client. Consulting services generally involve two parties: (1) the
-person or group offering the advice (the internal auditor), and (2) the
-person or group seeking and receiving the advice (the engagement
-client). When performing consulting services, the internal auditor
-should maintain objectivity and not assume management responsibility.
-
-** Governance, Risk Management, & Compliance (GRC)
-The integrated collection of capabilities that enable an organization to
-reliably achieve objectives, address uncertainty, and act with integrity.
-
-* Audit Charter & Standards
-First, it's important to note that not every organization needs internal
-auditors. In fact, it's unwise for an organization to hire internal
-auditors unless they have regulatory requirements for auditing and have
-the capital to support the department. Internal audit is a cost center
-that can only affect revenue indirectly.
-
-Once an organization determines the need for internal assurance
-services, they will hire a Chief Audit Executive and create the audit
-charter. This charter is a document, approved by the company's governing
-body, that will define internal audit's purpose, authority,
-responsibility, and position within the organization. Fortunately, the
-IIA has model charters available to IIA members for those developing or
-improving their charter.
-
-Beyond the charter and organizational documents, internal auditors
-follow a few different standards in order to perform their job. First is
-the International Professional Practices Framework (IPPF) by the IIA,
-which is the model of standards for internal auditing. In addition,
-ISACA's Information Technology Assurance Framework (ITAF) helps guide
-auditors in reference to information technology (IT) compliance and
-assurance. Finally, additional standards such as FASB, GAAP, and
-industry-specific standards are used when performing internal audit
-work.
-
-* Three Lines of Defense
-[[https://theiia.org][The IIA]] released the original Three Lines of
-Defense model in 2013, but released an updated version in 2020.
-
-Here is what the Three Lines of Defense model has historically looked
-like:
-
-#+caption: 2013 Three Lines of Defense Model
-[[https://img.cleberg.net/blog/20200922-what-is-internal-audit/three_lines_model.png]]
-
-I won't go into depth about the changes made to the model in this
-article. Instead, let's take a look at the most current model.
-
-#+caption: 2020 Three Lines of Defense Model
-[[https://img.cleberg.net/blog/20200922-what-is-internal-audit/updated_three_lines_model.png]]
-
-The updated model drops the strict idea of areas performing their own
-functions or line of defense. Instead of talking about management, risk,
-and internal audit as 1-2-3, the new model creates a more fluid and
-cooperative model.
-
-Looking at this model from an auditing perspective shows us that
-auditors will need to align, communicate, and collaborate with
-management, including business area managers and chief officers, as well
-as report to the governing body. The governing body will instruct
-internal audit /functionally/ on their goals and track their progress
-periodically.
-
-However, the internal audit department will report /administratively/ to
-a chief officer in the company for the purposes of collaboration,
-direction, and assistance with the business. Note that in most
-situations, the governing body is the audit committee on the company's
-board of directors.
-
-The result of this structure is that internal audit is an independent
-and objective function that can provide assurance over the topics they
-audit.
-
-* Audit Process
-A normal audit will generally follow the same process, regardless of the
-topic. However, certain special projects or abnormal business areas may
-call for changes to the audit process. The audit process is not set in
-stone; it's simply a set of best practices so that audits can be
-performed consistently.
-
-#+caption: The Internal Audit Process
-[[https://img.cleberg.net/blog/20200922-what-is-internal-audit/internal-audit-process.jpg]]
-
-While different organizations may tweak the process, it will generally
-follow this flow:
-
-** 1. Risk Assessment
-The risk assessment part of the process has historically been performed
-annually, but many organizations have moved to performing this process
-much more frequently. In fact, some organizations are moving to an agile
-approach that can take new risks into the risk assessment and
-re-prioritize risk areas on the go. To perform a risk assessment,
-leaders in internal audit will research industry risks, consult with
-business leaders around the company, and perform analyses on company
-data.
-
-Once a risk assessment has been documented, the audit department has a
-prioritized list of risks that can be audited. This is usually in the
-form of auditable entities, such as business areas or departments.
-
-** 2. Planning
-During the planning phase of an audit, auditors will meet with the
-business area to discuss the various processes, controls, and risks
-applicable to the business. This helps the auditors determine the scope
-limits for the audit, as well as timing and subject-matter experts.
-Certain documents will be created in this phase that will be used to
-keep the audit on track and in scope as it goes forward.
-
-** 3. Testing
-The testing phase, also known as fieldwork or execution, is where
-internal auditors will take the information they've discovered and test
-it against regulations, industry standards, company rules, and best
-practices, as well as validate that any processes are complete and
-accurate. For example, an audit of HR would most likely examine
-processes such as employee on-boarding, employee termination, security
-of personally identifiable information (PII), or the IT systems involved
-in these processes. Company standards would be examined and compared
-against how the processes are actually being performed day-to-day, as
-well as compared against regulations such as the Equal Employment
-Opportunity (EEO) laws, Americans with Disabilities Act, and National
-Labor Relations Act.
-
-** 4. Reporting
-Once all the tests have been completed, the audit will enter the
-reporting phase. This is when the audit team will conclude on the
-evidence they've collected, interviews they've held, and any opinions
-they've formed on the controls in place. A summary of the audit
-findings, conclusions, and specific recommendations are officially
-communicated to the client through a draft report. Clients have the
-opportunity to respond to the report and submit an action plan and time
-frame. These responses become part of the final report, which is
-distributed to the appropriate level of administration.
-
-** 5. Follow-Up
-After audits have been completed and management has formed action plans
-and time frames for audit issues, internal audit will follow up once
-that due date has arrived. In most cases, the follow-up will simply
-consist of a meeting to discuss how the action plan has been completed
-and to request documentation to prove it.
-
-* Audit Department Structure
-While an internal audit department is most often thought of as a team of
-full-time employees, there are actually many different ways in which a
-department can be structured. As the world becomes more digital and
-fast-paced, outsourcing has become a more attractive option for some
-organizations. Internal audit can be fully outsourced or partially
-outsourced, allowing for flexibility in cases where turnover is high.
-
-In addition, departments can implement a rotational model. This allows
-interested employees around the organization to rotate into the
-internal audit department for a period of time, allowing them to obtain
-knowledge of risks and controls and allowing the internal audit team to
-obtain more business area knowledge. This program is popular in very
-large organizations, but organizations tend to rotate lower-level audit
-staff instead of managers. This helps prevent any significant knowledge
-loss as auditors rotate out to business areas.
-
-* Consulting
-Consulting is not an easy task at any organization, especially for a
-department that can have negative perceptions within the organization as
-the "compliance police." However, once an internal audit department has
-delivered value to the organization, adding consulting to their suite of
-services is a smart move. In most cases, Internal Audit can insert
-themselves into a consulting role without affecting the process of
-project management at the company. This means that internal audit can
-add objective assurance and opinions to business areas as they develop
-new processes, instead of coming in periodically to audit an area and
-file issues that could have been fixed at the beginning.
- -* Data Science & Data Analytics -#+caption: Data Science Skill Set -[[https://img.cleberg.net/blog/20200922-what-is-internal-audit/data-science-skillset.png]] - -One major piece of the internal audit function in the modern world is -data science. While the process is data science, most auditors will -refer to anything in this realm as data analytics. Hot topics such as -robotic process automation (RPA), machine learning (ML), and data mining -have taken over the auditing world in recent years. These technologies -have been immensely helpful with increasing the effectiveness and -efficiency of auditors. - -For example, mundane and repetitive tasks can be automated in order for -auditors to make more room in their schedules for labor-intensive work. -Further, auditors will need to adapt technologies like machine learning -in order to extract more value from the data they're using to form -conclusions. diff --git a/blog/leaving-the-office/index.org b/blog/leaving-the-office/index.org deleted file mode 100644 index 34db40a..0000000 --- a/blog/leaving-the-office/index.org +++ /dev/null @@ -1,240 +0,0 @@ -#+title: Leaving Office-Based Work in the Past -#+date: 2022-02-10 -#+description: My thoughts on the current surge of remote work and what that means for full-time office-based roles. -#+filetags: :audit: - -* The Working World is Changing -There has been a trend for the past few years of companies slowly -realizing that the pandemic is not just a temporary state that will go -away eventually and let everything return to the way it was before. In -terms of business and employment, this means that more and more jobs are -being offered as permanently remote roles. - -I had always dreamt of working from home but thought of it as a fantasy, -especially since I did not want to move over into the software -development field. However, I have found that almost all roles being -sent to me via recruiters are permanently remote (although most are -limited to US citizens or even region-locked for companies who only -operate in select states). - -I decided to take a look back at my relatively short career so far and -compare the positive and negative effects of the different work -environments I've been in. - -* In-Person Offices -** Retail Internship -I started my first job as a management intern at a busy retail pharmacy, -working my 40-hour weeks on my feet. As these retail stores don't -believe in resting or sitting down, you can guarantee that you will -spend entire shifts standing, walking, or running around the store. -Unfortunately, I worked at a time when our store didn't have enough -managers, so I spent the majority of my tenure at the store running and -breaking a sweat. - -Now, things aren't all bad in retail stores like this. It is definitely -tiring and inefficient to force employees to work constantly, or pretend -to work if there's nothing to do, and not allow anyone to sit down. -However, if you are able to operate a retail store with a limited crew -and provide enough comfort and support, I believe these jobs could be -both comfortable and efficient. - -** Semi-Private Cubicles -#+caption: Semi-Private Cubicles -[[https://img.cleberg.net/blog/20220210-leaving-office-based-work-in-the-past/private_cubicles.png]] - -After about a year, I was able to find another internship - this time, -it was in my field of interest: internal auditing. This was for a life -insurance company that was well over 100 years old. 
The age of the
-company shows if you work there, as most people in management are well
-into their 40s-60s with little to no youthful leadership in the company.
-Likewise, they owned a large headquarters in a nice area of town with
-plenty of space, parking, etc.
-
-One upside is that each person gets their own large L-shaped desk,
-formed into cubicles that house 4 desks/employees. These "pods" of
-4-person cubicles are linked throughout each floor of the headquarters
-(except the sales people, who had that open-floor concept going on). The
-walls of the cubicle were tall and provided a lot of privacy and
-sound-proofing, except when I used the standing desk feature (I'm over 6
-feet tall, so probably not an issue for most people).
-
-I loved this environment; it allowed me to focus on my work with minimal
-distractions, but also allowed easy access, so I could spin around and
-chat with my friends without leaving my chair. This is the closest I've
-been to a home office environment (which is my personal favorite, as
-I'll get to later in this post).
-
-** Semi-Open Floor Concept
-#+caption: Semi-Open Floor Concept
-[[https://img.cleberg.net/blog/20220210-leaving-office-based-work-in-the-past/semi_open_office.png]]
-
-When I shifted to my first full-time internal audit job out of college,
-I was working at a company that was headquartered on a floor in a
-downtown high-rise building. The company was only about 20 years old
-when I worked there and was trying a lot of new things to attract young
-talent, one of which was a semi-open floor concept for the office. My
-department worked just around the hallway corner from the executive
-offices and used that "modern" layout young tech companies started using
-in the 2000s/2010s.
-
-Each desk was small and open, and you could look most coworkers in the
-face without moving from your chair. I hated this so much. Directly to
-my left was the Chief Audit Executive (the head of our department), and
-his desk was pointed so that his face would stare straight at my desk
-all day. I spent more time thinking about who was looking at me or
-checking on me than actually working.
-
-The other annoying part of the open concept they used was that the
-kitchen area and pathways were too close to everyone's desks (since the
-desks were spread out, to provide space or something), so noise and
-conversation would be constant throughout the day while you tried to
-work. For someone like me, who needs silence to get work done, that was
-a non-starter.
-
-** Hotel Office Concept
-#+caption: Hotel Office Concept
-[[https://img.cleberg.net/blog/20220210-leaving-office-based-work-in-the-past/hotel_desks.png]]
-
-I currently work for a company remotely (for now) and travel to the
-office every once in a while for events and to help coach the staff
-underneath me. The office I visit uses the hotel desk concept, where you
-need to check in at a touch screen when you enter the office and "rent"
-a desk for the day. The same goes for offices and meeting rooms.
-
-These desks are flat-top only and do not have any walls at all. In
-addition, they're stacked with one row of 4 desks facing another row of
-4 desks. These pairs of desk rows are repeated through the office.
-
-This means that when I go, I need to rent a random desk or try to
-remember the unique ID numbers on desks I like. Once I rent it, I have
-to make sure no one has sat down at that desk without renting it.
Then, I -can sit down and work, but will probably need to adjust the monitors so -that I'm not staring in the face of the person across from me all day. -Finally, I need to wear headphones as this environment does nothing to -provide you with peace or quiet. - -Luckily, you can rent offices with doors that offer quiet and privacy, -which can be very nice if you have a lot of meetings or webinars on a -certain day. - -* Home Office -#+caption: Home Office -[[https://img.cleberg.net/blog/20220210-leaving-office-based-work-in-the-past/home_office.png]] - -Okay, now let's finally get to the home office concept. I have worked -from home for a little over two years at this point, across three -different jobs/employers. Over this time, I have experimented with a -plethora of different organizational ideas, desks, and room layouts to -find what works best for me. - -These things might not apply to you, and that's fine. Everyone has a -different situation, and I really don't think you'll know what works -until you try. - -** Tip #1 -Let's start with my top rule for a home office: - -#+begin_quote -If you live with others, working in a shared space is not effective. - -#+end_quote - -It just does not work. If you have another person sleeping in your -bedroom, it is difficult to manage your work schedule with their -sleeping/work/school schedule. If they wake up after you need to start -work, you might wake them up or have to suffer the agony of staring at -bright screens in a dark room. - -In a similar vein, working from a location such as the living room -likely won't work either. Distractions will come far more frequently: -televisions, cooking, cleaning, deliveries, etc. If you're like me, -you'll end up playing a game instead of actually doing any work. - -** Tip #2 -Okay, the second thing I've discovered that works for me: - -#+begin_quote -Use the pomodoro method (or something similar) to balance work tasks -with personal tasks. - -#+end_quote - -I use a very casual version of the pomodoro method where I will work for -1-2 hours (usually set in strict intervals like 1, 1.5, 2 hours) and -then will allow myself 30-60 minutes for personal tasks. This schedule -works for me, since my work schedule really only comes to 3-6 hours of -work per day. - -In this case, I'll work through my list of tasks for an hour or two and -then give myself personal time to get drinks and food, wash dishes, put -clothes in the washer, get the mail, etc. If you're in a convenient -location, this usually gives time for things like getting groceries (as -long as you're not a slow shopper). - -** Tip #3 -While I listed this one as number three, I don't think I'd accomplish -anything without it: - -#+begin_quote -Document everything: even things you didn't before - such as task lists -and notes from casual calls or meetings. - -#+end_quote - -I've noticed that staying in an office gave me more constant reminders -of outstanding tasks or facts I had learned in a conversation. -Translating everything to a digital world has made me lose a bit of that -focus (perhaps since I don't have visual reminders?). - -Keeping a running task list of all things I have to do - even potential -tasks! - has helped me keep up without missing anything small. Likewise, -keeping notes for ALL meetings and calls, no matter how casual/quick, -has improved my retention immensely. Beyond helping my mental -recollection, it has saved me numerous times when I need to do a keyword -search for some topic that was discussed 6+ months ago. 
-
-** Tip #4
-Okay, last one for now.
-
-#+begin_quote
-Keep your work area clean.
-
-#+end_quote
-
-This one is straightforward, but I know some people struggle with
-cleanliness or may not believe it makes a difference. Trust me, keeping
-your desk area clean and organized makes a huge difference, both
-mentally and emotionally.
-
-Just think about it: you walk into your home office and see a clean desk
-with a laptop, dock, monitors, keyboard, mouse, and a notepad with a pen
-on top.
-
-Now imagine the opposite: an office with the same equipment, but with
-clothes hanging on the chair, empty drink bottles, candy wrappers, and
-dirty plates. This can take both a mental and emotional toll by bringing
-constant disarray and stress into your working environment.
-
-Just keep things clean each day, and you won't need to do any big
-cleaning days to recover.
-
-* My Preferences
-I've talked about the different environments I've worked in and
-expressed some honest thoughts on the pros and cons of each, but what do
-I prefer? Well, if you've been reading along, you should be able to tell
-that I much prefer a home office above all else.
-
-Being able to control my own day and allot my time as needed has brought
-a calmness to my life and has allowed me to maximize each day. I feel
-far more effective and efficient in a home office than any other office,
-especially open-office layouts.
-
-If I do need to return to an office part-time in the future, I really
-hope the office will offer enough privacy and quiet for me to get my
-work done.
-
-Cubicles are good! I agree with Alice (from the comic Dilbert):
-
-#+caption: Dilbert comic strip
-[[https://img.cleberg.net/blog/20220210-leaving-office-based-work-in-the-past/dilbert_120109.png]]
diff --git a/blog/linux-display-manager/index.org b/blog/linux-display-manager/index.org
deleted file mode 100644
index 3d8d6d7..0000000
--- a/blog/linux-display-manager/index.org
+++ /dev/null
@@ -1,72 +0,0 @@
-#+title: How to Disable or Change the Display Manager on Void Linux
-#+date: 2022-10-30
-#+description: Learn how to remove or modify the display manager on Void Linux.
-#+filetags: :linux:
-
-* Display Manager Services
-In order to change the
-[[https://en.wikipedia.org/wiki/Display_manager][display manager]] on
-Void Linux - or any other Linux distro - you first need to identify the
-currently enabled display manager.
-
-** Disabling the Current Display Manager
-At the time of this post, Void Linux only has one ISO available for
-download with a pre-built display manager: the XFCE ISO. If you've
-installed this version, the pre-assigned display manager is =lxdm=. If
-you installed another display manager, replace =lxdm= in the following
-command with the display manager you have installed.
-
-To disable =lxdm=, simply remove the service symlink:
-
-#+begin_src sh
-sudo rm /var/service/lxdm
-#+end_src
-
-** Enabling a New Display Manager
-If you want to enable a new display manager, you can do so after =lxdm=
-is disabled. Make sure to replace =<display-manager>= with your new
-DM, such as =gdm=, =xdm=, etc.
-
-#+begin_src sh
-sudo ln -s /etc/sv/<display-manager> /var/service
-#+end_src
-
-* Set Up =.xinitrc=
-Depending on your setup, you may need to create a few X files, such as
-=~/.xinitrc=. For my personal set-up, I created this file to launch the
-i3 window manager as my desktop.
-
-#+begin_src sh
-nano ~/.xinitrc
-#+end_src
-
-#+begin_src sh
-#!/bin/sh
-
-exec i3
-#+end_src
-
-If you run a desktop other than i3, simply replace =i3= with the shell
-command that launches that desktop.
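-
-For example, a =~/.xinitrc= for Xfce might look like the following
-sketch. This is a hypothetical example that assumes the =xfce4-session=
-package is installed:
-
-#+begin_src sh
-#!/bin/sh
-
-# Launch the Xfce session instead of i3
-exec startxfce4
-#+end_src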
- -* Set Up Your Shell Profile -Finally, in order to automatically launch an X session upon login, you -will need to edit the =.bash_profile= (bash) or =.zprofile= (zsh) files -for your shell: - -#+begin_src sh -nano ~/.zprofile -#+end_src - -Add the following snippet to the end of the shell profile file. This -will execute the =startx= command upon login. - -#+begin_src sh -if [ -z "${DISPLAY}" ] && [ "${XDG_VTNR}" -eq 1 ]; then - exec startx -fi -#+end_src - -Alternatively, you can ignore this step and simply choose to manually -execute =startx= upon login. This can be useful if you have issues with -your desktop or like to manually launch different desktops by choice. diff --git a/blog/linux-software/index.org b/blog/linux-software/index.org deleted file mode 100644 index 8397483..0000000 --- a/blog/linux-software/index.org +++ /dev/null @@ -1,271 +0,0 @@ -#+title: Linux Software -#+date: 2020-01-25 -#+description: A look at some useful Linux applications. -#+filetags: :linux: - -* GUI Applications -** Etcher -#+caption: Etcher Screenshot -[[https://img.cleberg.net/blog/20200125-the-best-linux-software/etcher.png]] - -[[https://www.balena.io/etcher/][Etcher]] is a quick and easy way to -burn ISO images to CDs and USB devices. There are two different ways you -can install this program. First, you can navigate to the -[[https://www.balena.io/etcher/][official website]] and download the -AppImage file, which can run without installation. - -However, AppImage files are not executable by default, so you'll either -need to right-click to open the properties of the file and click the -"Allow executing file as program" box in the Permissions tab or use the -following command: - -#+begin_src sh -chmod u+x FILE_NAME -#+end_src - -If you don't like AppImage files or just prefer repositories, you can -use the following commands to add the author's repository and install it -through the command-line only. - -First, you'll have to echo the repo and write it to a list file: - -#+begin_src sh -echo "deb https://deb.etcher.io stable etcher" | sudo tee /etc/apt/sources.list.d/balena-etcher.list -#+end_src - -Next, add the application keys to Ubuntu's keyring: - -#+begin_src sh -sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 379CE192D401AB61 -#+end_src - -Finally, update the repositories and install the app. - -#+begin_src sh -sudo apt update && sudo apt install balena-etcher-electron -#+end_src - -Using Arch, Manjaro, or another distro using the AUR? Use this command -instead: - -#+begin_src sh -sudo pacman -S etcher -#+end_src - -** Atom -#+caption: Atom Screenshot -[[https://img.cleberg.net/blog/20200125-the-best-linux-software/atom.png]] - -[[https://atom.io][Atom]] is the self-proclaimed "hackable text editor -for the 21st century". This text editor is made by GitHub, -[[https://news.microsoft.com/2018/06/04/microsoft-to-acquire-github-for-7-5-billion/][now -owned by Microsoft]], and has some of the best add-ons available to -customize the layout and abilities of the app. - -First, add the Atom repository to your sources. - -#+begin_src sh -sudo add-apt-repository ppa:webupd8team/atom -#+end_src - -Next, update your package listings and install atom. - -#+begin_src sh -sudo apt update && sudo apt install atom -#+end_src - -If you have issues updating your packages with the Atom repository, -you'll need to use the snap package described below instead of the -repository. 
To remove the repository we just added, use this command:
-
-#+begin_src sh
-sudo add-apt-repository -r ppa:webupd8team/atom
-#+end_src
-
-You can also install Atom as a snap package, but it must be installed
-with the =--classic= flag. A
-[[https://language-bash.com/blog/how-to-snap-introducing-classic-confinement][full
-explanation is available]] if you'd like to read more about why you need
-the classic flag.
-
-#+begin_src sh
-snap install atom --classic
-#+end_src
-
-Using Arch, Manjaro, or another distro using the AUR? Use this command
-instead:
-
-#+begin_src sh
-sudo pacman -S atom
-#+end_src
-
-** Visual Studio Code
-#+caption: Visual Studio Code Screenshot
-[[https://img.cleberg.net/blog/20200125-the-best-linux-software/vscode.png]]
-
-[[https://code.visualstudio.com][Visual Studio Code]] is yet another
-fantastic choice for programming on Linux, especially if you need those
-extra add-ons to spice up your late-night coding sessions. The theme
-used in the screenshot is
-[[https://marketplace.visualstudio.com/items?itemName=EliverLara.mars][Mars]]
-by theme creator [[https://github.com/EliverLara][Eliver Lara]], who
-makes a ton of great themes for VS Code, Atom, and various Linux desktop
-environments.
-
-To install VS Code, you'll need to download the =.deb= file from the
-official website. Once you've downloaded the file, either double-click
-it to install through the Software Center or run the following command:
-
-#+begin_src sh
-sudo dpkg -i FILE_NAME.deb
-#+end_src
-
-You can also install VS Code as a snap package, but it must be installed
-with the =--classic= flag. A
-[[https://language-bash.com/blog/how-to-snap-introducing-classic-confinement][full
-explanation is available]] if you'd like to read more about why you need
-the classic flag.
-
-#+begin_src sh
-snap install code --classic
-#+end_src
-
-Using Arch, Manjaro, or another distro using the AUR? Use these commands
-instead:
-
-#+begin_src sh
-sudo pacman -S yay binutils make gcc pkg-config fakeroot
-yay -S visual-studio-code-bin
-#+end_src
-
-** GNOME Tweaks
-#+caption: GNOME Tweaks Screenshot
-[[https://img.cleberg.net/blog/20200125-the-best-linux-software/gnome-tweaks.png]]
-
-[[https://gitlab.gnome.org/GNOME/gnome-tweaks][GNOME Tweaks]] is the
-ultimate tool to use if you want to customize your GNOME desktop
-environment. This is how you can switch application themes (GTK), shell
-themes, icons, fonts, and more. To install GNOME Tweaks on Ubuntu, you
-just need to install the official package.
-
-#+begin_src sh
-sudo apt install gnome-tweaks
-#+end_src
-
-If you've installed Manjaro or Arch with GNOME, you should have the
-tweak tool pre-installed. If you're on Fedora, this tool is available as
-an official package:
-
-#+begin_src sh
-sudo dnf install gnome-tweaks
-#+end_src
-
-** Steam
-#+caption: Steam Screenshot
-[[https://img.cleberg.net/blog/20200125-the-best-linux-software/steam.png]]
-
-[[https://steampowered.com][Steam]] is one of the most popular gaming
-libraries for computers and is one of the main reasons that many people
-have been able to switch to Linux in recent years, thanks to Steam
-Proton, which makes it easier to play games not officially created for
-Linux platforms.
-
-To install Steam on Ubuntu, you just need to install the official
-package.
-
-#+begin_src sh
-sudo apt install steam-installer
-#+end_src
-
-For Arch-based systems, you'll simply need to install the =steam=
-package. However, this requires that you enable the =multilib= source.
-To do so, use the following command:
-
-#+begin_src sh
-sudo nano /etc/pacman.conf
-#+end_src
-
-Now, scroll down and uncomment the =multilib= section.
-
-#+begin_src config
-# Before:
-#[multilib]
-#Include = /etc/pacman.d/mirrorlist
-
-# After:
-[multilib]
-Include = /etc/pacman.d/mirrorlist
-#+end_src
-
-Finally, install the program:
-
-#+begin_src sh
-sudo pacman -S steam
-#+end_src
-
-[[./2020-01-26-steam-on-ntfs-drives.html][Problem Launching Steam Games?
-Click Here.]]
-
-* Command-Line Packages
-** neofetch
-#+caption: Neofetch Screenshot
-[[https://img.cleberg.net/blog/20200125-the-best-linux-software/neofetch.png]]
-
-[[https://github.com/dylanaraps/neofetch][Neofetch]] is a customizable
-tool used on the command line to show system information. This is
-exceptionally useful if you want to see your system's information
-quickly without the clutter of some resource-heavy GUI apps.
-
-This is an official package if you're running Ubuntu 17.04 or later, so
-simply use the following command:
-
-#+begin_src sh
-sudo apt install neofetch
-#+end_src
-
-If you're running Ubuntu 16.10 or earlier, you'll have to use a series
-of commands:
-
-#+begin_src sh
-sudo add-apt-repository ppa:dawidd0811/neofetch; sudo apt update; sudo apt install neofetch
-#+end_src
-
-Using Arch, Manjaro, or another distro using the AUR? Use this command
-instead:
-
-#+begin_src sh
-sudo pacman -S neofetch
-#+end_src
-
-** yt-dlp
-#+caption: yt-dlp Screenshot
-[[https://img.cleberg.net/blog/20200125-the-best-linux-software/yt-dlp.png]]
-
-[[https://github.com/yt-dlp/yt-dlp][yt-dlp]] is an extremely handy
-command-line tool that allows you to download video or audio files from
-various websites, such as YouTube. There are a ton of different options
-when running this package, so be sure to run =yt-dlp --help= first to
-look through everything you can do (or give up and search for the best
-config online).
-
-While this shouldn't be a problem for most users, yt-dlp requires a
-recent version of Python 3 to work correctly, so install Python if you
-don't have it already. You can check to see if you have Python installed
-by running:
-
-#+begin_src sh
-python -V
-#+end_src
-
-To get the yt-dlp package, simply curl the URL and output the
-results.
-
-#+begin_src sh
-sudo curl -L https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp -o /usr/local/bin/yt-dlp
-#+end_src
-
-Finally, make the file executable so that it can be run from the
-command-line.
-
-#+begin_src sh
-sudo chmod a+rx /usr/local/bin/yt-dlp
-#+end_src
diff --git a/blog/local-llm/index.org b/blog/local-llm/index.org
deleted file mode 100644
index ccde66e..0000000
--- a/blog/local-llm/index.org
+++ /dev/null
@@ -1,108 +0,0 @@
-#+title: Running Local LLMs on macOS and iOS
-#+date: 2024-01-13
-#+description: Finding some useful applications for running local LLMs on macOS and iOS.
-#+filetags: :apple:
-
-* Requirements
-I've recently started playing with large language models (LLMs), mostly
-in the popular chatbot form, as part of my job and have decided to see
-if there's a consistent and reliable way to interact with these models
-on Apple devices without sacrificing privacy or requiring in-depth
-technical setup.
-
-My requirements for this test:
-
-- Open source platform
-- On-device model files
-- Minimal required configuration
-- Preferably pre-built, but a simple build process is acceptable
-
-I tested a handful of apps and have summarized my favorite (so far) for
-macOS and iOS below.
-
-#+begin_quote
-TL;DR - Here are the two that met my requirements and that I have found
-the easiest to install and use so far:
-
-#+end_quote
-
-- macOS: [[https://ollama.ai/][Ollama]]
-- iOS: [[https://llmfarm.site/][LLM Farm]]
-
-* macOS
-[[https://ollama.ai/][Ollama]] is a simple Go application for macOS and
-Linux that can run various LLMs locally.
-
-For macOS, you can download the application on the
-[[https://ollama.ai/download/mac][Ollama download page]] and install it
-by unzipping the =Ollama.app= file and moving it to the =Applications=
-folder.
-
-If you prefer the command line, you can run these commands after the
-download finishes:
-
-#+begin_src sh
-cd ~/Downloads && \
-unzip Ollama-darwin.zip && \
-mv ~/Downloads/Ollama.app /Applications/
-#+end_src
-
-After running the app, it will ask you to open a terminal and run the
-default =llama2= model, which will open an interactive chat session
-in the terminal. You can start fully using the application at this
-point.
-
-#+caption: Ollama
-[[https://img.cleberg.net/blog/20240113-local-llm/ollama.png]]
-
-If you don't want to use the default =llama2= model, you can download
-and run additional models found on the
-[[https://ollama.ai/library][Models]] page.
-
-To see the information for the currently-used model, you can run the
-=/show info= command in the chat.
-
-#+caption: Model Info
-[[https://img.cleberg.net/blog/20240113-local-llm/ollama_info.png]]
-
-** Community Integrations
-I highly recommend browsing the
-[[https://github.com/jmorganca/ollama#community-integrations][Community
-Integrations]] section of the project to see how you would prefer to
-extend Ollama beyond a simple command-line interface. There are options
-for APIs, browser UIs, advanced terminal configurations, and more.
-
-#+caption: Ollama SwiftUI
-[[https://img.cleberg.net/blog/20240113-local-llm/ollama-swiftui.png]]
-
-* iOS
-While there are a handful of decent macOS options, it was quite
-difficult to find an iOS app that offered an open source platform
-without an extensive configuration and building process. I found LLM
-Farm to be decent enough in quality to sit at the top of my list -
-however, it's definitely not user friendly enough for me to consider
-using it on a daily basis.
-
-[[https://llmfarm.site/][LLM Farm]] is available on TestFlight, so
-there's no manual build process required. However, you can view the
-[[https://github.com/guinmoon/LLMFarm][LLMFarm repository]] if you wish.
-
-The caveat is that you will have to manually download the model files
-from the links in the
-[[https://github.com/guinmoon/LLMFarm/blob/main/models.md][models.md]]
-file to your iPhone to use the app - there's currently no option in the
-app to reach out and grab the latest version of any supported model.
-
-Once you have a file downloaded, you simply create a new chat, select
-the downloaded model file, and ensure the inference matches the
-requirement in the =models.md= file.
-
-See below for a test of the ORCA Mini v3 model:
-
-| Chat List | Chat |
-|------------------------------------------------------------------------+------------------------------------------------------------------|
-| [[https://img.cleberg.net/blog/20240113-local-llm/llm_farm_chats.png]] | [[https://img.cleberg.net/blog/20240113-local-llm/llm_farm.png]] |
-
-[[https://github.com/AugustDev/enchanted][Enchanted]] is also an iOS app
-for private AI models, but it requires a public-facing Ollama API, which
-did not meet my "on-device" requirement. 
Nonetheless, it's an
-interesting-looking app, and I will likely set it up to test soon.
diff --git a/blog/macos-customization/index.org b/blog/macos-customization/index.org
deleted file mode 100644
index 82e2d0a..0000000
--- a/blog/macos-customization/index.org
+++ /dev/null
@@ -1,170 +0,0 @@
-#+title: Customizing macOS
-#+date: 2024-01-09
-#+description: Learn how to customize macOS beyond the standard, built-in options provided by Apple.
-#+filetags: :apple:
-
-I have been using macOS more than Linux lately, so I wrote this post to
-describe some simple options to customize macOS beyond the normal
-built-in settings menu.
-
-While not all-encompassing, the options below should be a good start for
-anyone looking to dive down the rabbit hole.
-
-* Basics
-** Package Management
-To install a lot of software on macOS, you will need
-[[https://brew.sh/][Homebrew]]. You can use their installation script to
-get started. Simply open the =Terminal= application and paste the
-following snippet:
-
-#+begin_src sh
-/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
-#+end_src
-
-This will allow you to easily install and manage applications and other
-software through the =brew= command.
-
-** Terminal
-If you're serious about customizing your macOS system, I highly
-recommend installing a terminal emulator that you like, and if you're
-not comfortable on the command line yet, start learning. A lot of
-customization options require you to edit hidden files, which is easiest
-in a terminal.
-
-There are options like iTerm2, Kitty, Alacritty, Hyper, Warp, or the
-built-in Terminal app.
-
-I use [[https://iterm2.com/][iTerm2]], which can be installed with
-Homebrew:
-
-#+begin_src sh
-brew install iterm2
-#+end_src
-
-#+caption: iTerm2
-[[https://img.cleberg.net/blog/20240109-macos-customization/iterm2.png]]
-
-To install color schemes, such as the Dracula scheme shown in the
-screenshot above, you can visit [[https://iterm2colorschemes.com/][iTerm
-Themes]] and follow their installation instructions to install any of
-the themes.
-
-* Desktop
-** Window Management
-[[https://github.com/koekeishiya/yabai][yabai]] is a tiling window
-manager for macOS. While other window managers exist, I found that most
-of them struggled to create logical layouts and to allow me to easily
-move windows around the screen.
-
-Some advanced settings for yabai are only available if you partially
-disable System Integrity Protection (SIP). However, I chose not to do
-this, and it hasn't affected my basic usage of yabai at all.
-
-Refer to the
-[[https://github.com/koekeishiya/yabai/wiki/Installing-yabai-(latest-release)][yabai
-wiki]] for installation instructions. You will need to ensure that yabai
-is allowed to access the accessibility and screen recording APIs.
-
-You can see a basic three-pane layout automatically configured by yabai
-for me as I opened the windows below.
-
-#+caption: yabai window manager
-[[https://img.cleberg.net/blog/20240109-macos-customization/yabai.png]]
-
-** Keyboard Shortcuts
-[[https://github.com/koekeishiya/skhd][skhd]] is a simple hotkey daemon
-that allows you to define hotkeys in a file for usage on your system.
-
-Installation is simple:
-
-#+begin_src sh
-brew install koekeishiya/formulae/skhd
-skhd --start-service
-#+end_src
-
-After installation, be sure to allow =skhd= access to the accessibility
-API in the macOS privacy settings.
-
-You can configure your hotkeys in the =~/.config/skhd/skhdrc= file:
-
-#+begin_src sh
-nano ~/.config/skhd/skhdrc
-#+end_src
-
-For example, I have hotkeys to open my browser and terminal:
-
-#+begin_src conf
-# Terminal
-cmd - return : /Applications/iTerm.app/Contents/MacOS/iTerm2
-
-# Browser
-cmd + shift - return : /Applications/LibreWolf.app/Contents/MacOS/librewolf
-#+end_src
-
-** Widgets
-[[https://github.com/felixhageloh/uebersicht/][uebersicht]] is a handy
-desktop-based widget tool with a plethora of community-made widgets
-available in the [[https://tracesof.net/uebersicht-widgets/][widgets
-gallery]]. You can also write your own widgets with this tool.
-
-To install, simply download the latest release from the
-[[https://tracesof.net/uebersicht/][uebersicht website]] and copy it to
-the Applications folder.
-
-See below for an example of the
-[[https://tracesof.net/uebersicht-widgets/#Mond][Mond]] widget in
-action.
-
-#+caption: uebersicht desktop widgets
-[[https://img.cleberg.net/blog/20240109-macos-customization/uebersicht.png]]
-
-** Status Bar
-[[https://github.com/FelixKratz/SketchyBar][SketchyBar]] is a
-customizable replacement for the macOS status or menu bar.
-
-You can browse a discussion where various users shared their
-[[https://github.com/FelixKratz/SketchyBar/discussions/47?sort=top][configurations]]
-for inspiration or to copy their dotfiles.
-
-See below for a quick (& slightly broken) copy of
-[[https://github.com/zer0yu/dotfiles][zer0yu's]] SketchyBar
-configuration.
-
-#+caption: SketchyBar
-[[https://img.cleberg.net/blog/20240109-macos-customization/sketchybar.png]]
-
-** Dock
-The easiest way to customize the dock is to install
-[[https://ubarapp.com/][uBar]], which uses a Windows-like menu bar as
-the default style.
-
-However, the built-in macOS dock cannot be disabled and can only be set
-to "always hidden". This can be annoying, as it will pop out any time
-your mouse cursor passes close to the dock edge of the screen. Because
-of this, I simply use the built-in dock instead of customizing it with
-third-party software.
-
-Regardless, see below for the default installation style of uBar.
-
-#+caption: uBar
-[[https://img.cleberg.net/blog/20240109-macos-customization/ubar.png]]
-
-** Application Icons
-You can also customize the icon of any application in macOS, which will
-show up in Finder, the Dock, Launchpad, search results, etc. I recommend
-using [[https://macosicons.com/][macOSicons]] to download the icons you
-want, and then apply them by following this process:
-
-1. Open the Finder application.
-2. Navigate to the =Applications= folder.
-3. Right-click an application of your choice, and select =Get Info=.
-4. Drag the image you downloaded on top of the application's icon at the
-   top of the information window (you will see a green "plus" symbol
-   when you're hovering over it).
-5. Release the new icon on top of the old icon and it will update!
-
-You can see an example of me dragging a new =signal.icns= file onto my
-Signal.app information window to update it below:
-
-#+caption: replace macOS icons
-[[https://img.cleberg.net/blog/20240109-macos-customization/replace_icon.png]]
diff --git a/blog/macos/index.org b/blog/macos/index.org
deleted file mode 100644
index 37aca9d..0000000
--- a/blog/macos/index.org
+++ /dev/null
@@ -1,200 +0,0 @@
-#+title: macOS: Testing Out A New OS
-#+date: 2021-02-19
-#+description: A retrospective on my migration from Linux to macOS.
-#+filetags: :apple:
-
-* Diving into macOS
-After spending nearly 15 years working with Windows and 8 years on
-Linux, I have experienced macOS for the first time. By chance, my spouse
-happened to buy a new MacBook and gifted me their 2013 model. Of course,
-I still consider my Linux desktop to be my daily driver and keep Windows
-around for gaming needs, but over the past week I've found myself using
-the MacBook more and more for things that don't require gaming specs or
-advanced dev tools.
-
-* Initial Thoughts
-Before I move on to the technical aspects of my set-up, I want to take
-some time and express my thoughts on the overall OS.
-
-#+caption: macOS Desktop
-[[https://img.cleberg.net/blog/20210219-macos-testing-out-a-new-os/macos-desktop.png]]
-
-As expected, the initial computer setup is a breeze with Mac's guided
-GUI installer.
-
-The desktop itself reminds me of GNOME more than anything else I've
-seen: even Pantheon from [[https://elementary.io/][ElementaryOS]], which
-people commonly refer to as the closest Linux distro to macOS. The
-desktop toolbar is great and far surpasses the utility of the GNOME
-toolbar due to the fact that the extensions and icons /actually work/. I
-launch macOS and immediately see my shortcuts for Tresorit, Bitwarden,
-and Mullvad pop up as the computer loads.
-
-Even further, the app dock is very useful and will be yet another
-familiarity for GNOME users. I know many people like panels instead of
-docks, but I've always found docks to have a more pleasing UI. However,
-I had to disable the "Show recent applications in Dock" preference; I
-can't stand items taking up precious screen space if I'm not currently
-using them. On that same note, it's taking me some time to get used to
-the fact that I have to manually quit an app or else it will stay
-open/active in the dock, even if I've closed out all windows for that
-app (e.g. Firefox).
-
-Overall, I'm having a lot of fun, and for users who spend a large
-majority of their time performing basic tasks like web browsing,
-writing, watching media, etc., macOS is a fantastic option.
-
-The rest of this post explains the technicalities of how I set up my CLI
-environment to make me feel more at-home, similar to the environments I
-set up on Fedora, Ubuntu, etc.
-
-* Making it Feel Like Home
-If you're someone who uses Linux primarily, no doubt your first thought
-when booting macOS will be the same as mine was: "Where is the terminal
-and how do I set up my favorite utilities?"
-
-Luckily, macOS hasn't completely hidden away the development tools from
-the average user. You can easily find the Terminal app in the Launchpad
-area, but it's probably not what you're used to. I was surprised (and
-happy) to see that the default shell is =zsh=, the shell I use on all of
-my Linux distros. However, the commands are not the same - even the ones
-you may think are native to the shell. Commands like =dir= do not exist,
-so other native commands like =ls -la= or =pwd= are more useful here.
-
-With only a few minutes of installing and tweaking a few packages, I was
-able to recreate a terminal environment that I feel very comfortable
-using. See the image below for a preview of the iTerm2 app with a split
-view between my macOS desktop shell and an SSH session into my server.
-
-#+caption: iTerm2
-[[https://img.cleberg.net/blog/20210219-macos-testing-out-a-new-os/iterm2.png]]
-
-* Xcode
-My first step was to search the web for any hints on how to get =zsh=
-back up to the state I like, with extensions, themes, etc. That search
-led me to install the CLI tools for
-[[https://developer.apple.com/xcode/][Xcode]], Apple's suite of
-development tools.
-
-#+begin_src sh
-sudo xcode-select -r
-#+end_src
-
-#+begin_src sh
-sudo xcode-select --install
-#+end_src
-
-* Homebrew
-Next up is to install [[https://brew.sh][Homebrew]], a nifty package
-manager for macOS.
-
-#+begin_src sh
-/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
-#+end_src
-
-I ran into a permission error when installing Homebrew:
-
-#+begin_src sh
-Error: Failed to link all completions, docs and manpages:
-  Permission denied @ rb_file_s_symlink
-  (../../../Homebrew/completions/zsh/_brew, /usr/local/share/zsh/site-functions/_brew)
-Failed during: /usr/local/bin/brew update --force --quiet
-#+end_src
-
-I found that the following permission modification worked like a charm.
-However, I noted that some users online discussed the fact that this
-solution may not work if your system has multiple users who use
-Homebrew.
-
-#+begin_src sh
-sudo chown -R $(whoami) $(brew --prefix)/*
-#+end_src
-
-Next up is to ensure Homebrew is updated and cleaned.
-
-#+begin_src sh
-brew update
-#+end_src
-
-#+begin_src sh
-brew cleanup
-#+end_src
-
-* iTerm2
-Now that I've installed the basic utilities for development, I moved on
-to installing iTerm2, a much better terminal than the default.
-
-#+begin_src sh
-brew install --cask iterm2
-#+end_src
-
-I also used the =Make iTerm2 Default Term= and
-=Install Shell Integration= options in the iTerm2 application menu to
-make sure I don't run into any issues later on with different terminals.
-
-We will also install =zsh= so we can use it in iTerm2.
-
-#+begin_src sh
-brew install zsh
-#+end_src
-
-* Oh-My-Zsh
-I've shown the great aspects of [[https://ohmyz.sh][Oh My Zsh]] in other
-blog posts, so I'll skip over that speech for now. Simply install it and
-run an update.
-
-#+begin_src sh
-sh -c "$(curl -fsSL https://raw.githubusercontent.com/robbyrussell/oh-my-zsh/master/tools/install.sh)"
-#+end_src
-
-#+begin_src sh
-omz update
-#+end_src
-
-Finally, restart the iTerm2 application to ensure all changes go into
-effect.
-
-* Oh-My-Zsh Themes
-Let's change the theme of the terminal to make it a little more
-friendly.
-
-#+begin_src sh
-open ~/.zshrc
-#+end_src
-
-The third section of this file should contain a line like the code
-below. Change that theme to
-[[https://github.com/ohmyzsh/ohmyzsh/wiki/Themes][any theme you want]],
-save the file, and exit.
-
-#+begin_src sh
-ZSH_THEME="af-magic"
-#+end_src
-
-After changing the =.zshrc= file, you'll need to close your terminal and
-re-open it to see the changes. Optionally, just open a new tab if you're
-using iTerm2, and you'll see the new shell config.
-
-* Oh-My-Zsh Plugins
-Of course, my customization of =zsh= would not be complete without
-[[https://github.com/zsh-users/zsh-autosuggestions][zsh-autosuggestions]].
-This will bring up commands you've run in the past as you type them. For
-example, if you've run =ssh user@192.168.1.99= before, the terminal will
-show this command as soon as you start typing it (e.g. =ssh u=), and you
-can hit the right arrow to autocomplete the command.
-
-#+begin_src sh
-git clone https://github.com/zsh-users/zsh-autosuggestions ${ZSH_CUSTOM:-~/.oh-my-zsh/custom}/plugins/zsh-autosuggestions
-#+end_src
-
-#+begin_src sh
-open ~/.zshrc
-#+end_src
-
-#+begin_src sh
-# Scroll down the script and edit this line to add zsh-autosuggestions
-plugins=(git zsh-autosuggestions)
-#+end_src
-
-Remember: After changing the =.zshrc= file, you'll need to close your
-terminal and re-open it to see the changes. Optionally, just open a new
-tab if you're using iTerm2, and you'll see the new shell config.
diff --git a/blog/mass-unlike-tumblr-posts/index.org b/blog/mass-unlike-tumblr-posts/index.org
deleted file mode 100644
index 8e7574c..0000000
--- a/blog/mass-unlike-tumblr-posts/index.org
+++ /dev/null
@@ -1,87 +0,0 @@
-#+title: How to Easily Mass Unlike Tumblr Posts with Javascript
-#+date: 2023-01-05
-#+description: Learn how to unlike Tumblr posts en masse in the browser.
-#+filetags: :dev:
-
-* The Dilemma
-The dilemma I had was pretty simple: I wanted to unlike all the posts I
-have liked on Tumblr so that I could follow a new focus on blogs and
-start fresh. Otherwise, Tumblr will keep recommending content based on
-your previous likes.
-
-* The Solution
-I searched the web for a while and noted that most solutions referenced
-Tumblr settings and dashboard pages that no longer exist. Additionally,
-I did not want to install a third-party extension to do this, as some
-suggested.
-
-Luckily, I used Javascript for a while a few years ago and figured it
-would be easy enough to script a solution, as long as Tumblr had a
-consistent system for the unlike buttons.
-
-** Identifying Unlike Buttons
-Tumblr's unlike buttons are structured as you can see in the following
-code block. All unlike buttons have an =aria-label= with a value of
-=Unlike=.
-
-#+begin_src html
-<button class="..." aria-label="Unlike">...</button>
-#+end_src
-
-** Running a Script to Unlike All Likes
-To run this script, you will need to load the
-[[https://www.tumblr.com/likes][Likes | Tumblr]] page while logged in to
-your account.
-
-Further, be sure to scroll down to the bottom and force Tumblr to load
-more posts so that this script unlikes more posts at a time.
-
-Once you are logged in and the page is loaded, open the Developer Tools
-and be sure you're on the "Console" tab. It should look something like
-this (this is in Firefox; Chromium should be similar):
-
-#+caption: Firefox Dev Tools
-[[https://img.cleberg.net/blog/20230105-mass-unlike-tumblr-posts/dev_console.png]]
-
-All you need to do is paste the following snippet into the dev console.
-This code will collect all unlike buttons (=elements=) and then click
-each button to unlike it.
-
-Optionally, you can comment-out the line =elements[i].click();= and
-uncomment the =console.log()= lines to simply print out information
-without performing any actions. This can be useful to debug issues or
-confirm that the code below isn't doing anything you don't want it to.
-
-#+begin_src javascript
-const elements = document.querySelectorAll('[aria-label="Unlike"]');
-// console.log(elements); // 👉 [button]
-
-for (let i = 0; i < elements.length; i++) {
-  // console.log(elements[i]);
-  elements[i].click();
-}
-#+end_src
-
-* Results
-The results were quick for my situation, as it unliked ~200 posts within
-2-3 seconds. I am not sure how this will perform on larger sets of likes
-(or if Tumblr has a limit on unliking posts).
-
-You can see the below screenshot showing that I pasted the snippet into
-the console, pressed Enter, and then the posts are automatically
-unliked.
-
-#+caption: Script Results
-[[https://img.cleberg.net/blog/20230105-mass-unlike-tumblr-posts/script_results.png]]
-
-Thinking about this further, I would bet that this would be fairly
-simple to package into a browser add-on so that users could install the
-add-on, go to their Likes page, and click a button to run the script.
-Food for thought.
diff --git a/blog/mediocrity/index.org b/blog/mediocrity/index.org
deleted file mode 100644
index a653f80..0000000
--- a/blog/mediocrity/index.org
+++ /dev/null
@@ -1,122 +0,0 @@
-#+title: On the Pursuit of Mediocrity
-#+date: 2020-10-12
-#+description: Musings on mediocrity.
-#+filetags: :personal:
-
-* Perfect is the Enemy of Good
-As the saying goes, "the best is the enemy of the good." As we strive
-for perfection, we often fail to realize the implications of such an
-undertaking. Attempting to reach perfection is often unrealistic. Even
-worse, it can get in the way of achieving a good outcome. In certain
-situations, we try so hard to achieve the ideal solution that we burn
-the bridges that would have allowed us to reach a lesser yet still
-superb solution.
-
-Philosophers throughout history have inspected this plight from many
-viewpoints. Greek mythology speaks of the
-[[https://en.wikipedia.org/wiki/Golden_mean_(philosophy)][golden mean]],
-which uses the story of Icarus to illustrate that sometimes "the middle
-course" is the best solution. In this story, Daedalus, a famous artist
-of his time, built feathered wings for himself and his son so that they
-might escape the clutches of King Minos. Daedalus warned his beloved son
-to "fly the middle course", between the sea spray and the sun's heat.
-Icarus did not heed his father; he flew up and up until the sun melted
-the wax off his wings. For not heeding the middle course, he fell into
-the sea and drowned.
-
-More recently, management scholars have explored the
-[[https://en.wikipedia.org/wiki/Pareto_principle][Pareto principle]] and
-found that as we increase the frequency of something, or strive to
-perform actions to achieve some form of perfection, we run into
-[[https://en.wikipedia.org/wiki/Diminishing_returns][diminishing
-returns]].
-
-Even further, Harold Demsetz is credited with coining the term
-[[https://en.wikipedia.org/wiki/Nirvana_fallacy][the Nirvana fallacy]]
-in 1969, which describes the fallacy of comparing actual things with
-unrealistic, idealized alternatives. This is another trap that we may
-fall into, where we are constantly thinking of the ultimate solutions to
-problems when something more realistic needs to be considered.
-
-Over and over throughout history, we've found that perfection is often
-unrealistic and unachievable. However, we push ourselves and our peers
-to "give 100%" or "go the extra mile," when the better course may be to
-give a valuable level of effort while considering the effects of further
-effort on the outcome. Working harder does not always help us achieve
-loftier goals.
-
-This presented itself to me most recently during my time studying at my
-university. I was anxious and feeling the stresses of my courses,
-career, and personal life for quite a while, which was greatly affecting
-how well I was doing at school and my level of effort at work. One day,
-I happened to be talking to my father when he said something simple that
-hit home:
-
-#+begin_quote
-All you can do is show up and do your best. Worrying about the outcomes
-won't affect the outcome itself.
-#+end_quote
-
-The thought was extremely straightforward and uncomplicated, yet it was
-something that I had lost sight of during my stress-filled years at
-school. Ever since then, I've found myself pausing and remembering that
-quote every time I get anxious or stressed. It helps to stop and think
-"Can I do anything to affect the outcome, or am I simply worrying over
-something I can't change?"
-
-* When Mediocrity Isn't Enough
-One problem with the philosophies presented in this post is that they
-are implemented far too often in situations where mediocrity simply
-isn't adequate. For example, let's take a look at digital user data,
-specifically personally-identifiable information (PII). As a
-cybersecurity auditor in the United States, I have found that most
-companies are concerned more with compliance than any actual safeguards
-over the privacy or protection of user data. Other than companies that
-have built their reputation on privacy and security, most companies will
-use [[https://en.wikipedia.org/wiki/Satisficing][satisficing]] as their
-primary decision-making strategy around user data.
-
-#+begin_quote
-Satisficing is a decision-making strategy or cognitive heuristic that
-entails searching through the available alternatives until an
-acceptability threshold is met.
-#+end_quote
-
-This means that each decision will be met with certain possible
-solutions until one of the solutions meets their minimum acceptable
-standards. For companies that deal with user data, the
-minimum-acceptable standards come from three areas:
-
-1. Laws and regulations
-2. Competitive pressure
-3. Risk of monetary or reputation loss
-
-For those working in project management or auditing, the primary concern
-here is the risk of legal ramifications. Since the primary risk comes
-from laws and regulations, companies will require that any project
-involving user data follows all the rules of those laws so that the
-company can protect itself from fines or other penalties.
-
-Following this, companies will consider best practices in order to place
-themselves in a competitive position (e.g. Google vs. Apple) and review
-any recent or ongoing litigation against companies regarding user data.
-In a perfect company, management would then consider the ethical
-responsibilities of their organization and discuss their
-responsibilities over things like personally-identifiable information.
-
-However, as mentioned above, most companies follow the idea of
-satisficing, which states that they have met the minimum acceptable
-standards and can now move on to other decisions. Modern business
-culture in the United States dictates that profits are the golden
-measure of how well a company or manager is performing, so we often
-don't think about our responsibilities beyond these basic standards.
-
-Not all situations demand excellence, but I believe that applying any
-philosophy as a broad stroke across one's life can be a mistake. We must
-be able to think critically about what we are doing as we do it and ask
-ourselves a few questions. Have I done everything I can in this
-situation? Is mediocrity an acceptable outcome, or should we strive for
-perfection, even if we can't attain it?
-
-Taking a few moments to think critically throughout our day, as we make
-decisions, can have a tremendous effect on the outcomes we create.
diff --git a/blog/mtp-linux/index.org b/blog/mtp-linux/index.org
deleted file mode 100644
index 1163e63..0000000
--- a/blog/mtp-linux/index.org
+++ /dev/null
@@ -1,73 +0,0 @@
-#+title: How to Mount an MTP Mobile Device on Fedora Linux
-#+date: 2022-10-04
-#+description: Learn how to mount an MTP mobile device on Fedora Linux.
-#+filetags: :linux:
-
-I recently ran into trouble attempting to mount my GrapheneOS phone to
-my laptop running Fedora Linux via the
-[[https://en.wikipedia.org/wiki/Media_transfer_protocol][Media Transfer
-Protocol]] (MTP) and discovered a simple and effective solution.
-
-* Use a USB 3.0 Port
-First, ensure that the device is plugged in to the laptop through a USB
-3.0 port, if possible. From a brief glance online, it seems that USB 2.0
-ports may cause issues with dropped connections over MTP. This is purely
-anecdotal since I don't have any evidence to link showing that USB 2.0
-causes issues, but I can confirm that switching to a USB 3.0 port seemed
-to cut out most of my issues.
-
-* Switch USB Preferences to MTP
-Secondly, you need to ensure that the phone's USB preferences/mode is
-changed to MTP or File Transfer once the phone is plugged in. Other
-modes will not allow you to access the phone's file system.
-
-* Install =jmtpfs=
-Next, I used the =jmtpfs= package to mount my phone to my laptop. Other
-packages exist, but this one worked perfectly for me. On Fedora Linux,
-you can install it like this:
-
-#+begin_src sh
-sudo dnf install jmtpfs -y
-#+end_src
-
-* Create a Mount Point
-Once you have the package installed, you just need to create a folder
-for the device to use as a mount point. In my case, I used =/mnt/pixel=:
-
-#+begin_src sh
-sudo mkdir /mnt/pixel
-sudo chown -R $USER:$USER /mnt/pixel
-#+end_src
-
-* Mount & Access the Phone's File System
-Finally, plug in and mount the device, and you should be able to see all
-storage (internal and external) inside your new folder!
-
-#+begin_src sh
-jmtpfs /mnt/pixel
-#+end_src
-
-The output should look something like this:
-
-#+begin_src sh
-Device 0 (VID=18d1 and PID=4ee1) is a Google Inc Nexus/Pixel (MTP).
-Android device detected, assigning default bug flags
-#+end_src
-
-Now you are mounted and can do anything you'd like with the device's
-files:
-
-#+begin_src sh
-cd /mnt/pixel
-ls -lha
-#+end_src
-
-From here, you will be able to see any internal or external storage
-available on the device:
-
-#+begin_src sh
-total 0
-drwxr-xr-x. 3 user user 0 Jan 1 1970 .
-drwxr-xr-x. 1 root root 10 Oct 4 13:29 ..
-drwxr-xr-x. 16 user user 0 Apr 21 4426383 'Internal shared storage'
-#+end_src
diff --git a/blog/neon-drive/index.org b/blog/neon-drive/index.org
deleted file mode 100644
index 957bd33..0000000
--- a/blog/neon-drive/index.org
+++ /dev/null
@@ -1,93 +0,0 @@
-#+title: Neon Drive: A Nostalgic 80s Arcade Racing Game
-#+date: 2020-12-28
-#+description: A video game review for Neon Drive.
-#+filetags: :gaming:
-
-* Game Description
-[[https://store.steampowered.com/app/433910/Neon_Drive/][Neon Drive]]
-presents itself as a simple arcade-style game inspired by the arcade
-racing games of the 1980s, yet it has managed to take up hours of my
-life without much effort. The game description, directly from the Steam
-page, is intriguing enough to entice anyone who's been looking for a
-good arcade racing game:
-
-#+begin_quote
-Neon Drive is a slick retro-futuristic arcade game that will make your
-brain melt. You've been warned. 
From beautiful cityscapes and ocean
-roads to exploding enemy spaceships, Neon Drive has it all.
-
-#+end_quote
-
-* Gameplay
-The game holds true to the
-[[https://en.wikipedia.org/wiki/Retrofuturism][retro-futurism]] style,
-including chrome female robots, pixelated arcade machines, and
-[[https://teddit.net/r/outrun/][outrun]] aesthetics.
-
-Each level of the game is shown as a separate arcade machine. Each
-arcade machine lets you play on Normal, Hard, Insane, Practice, and Free
-Run. To beat each arcade, you must reach the end of the level without
-crashing your car into the various obstacles on the course. Basic levels
-let you move left or right to avoid blocks in the road. Later levels put
-you through other tests, such as dodging traffic or blasting asteroids.
-
-The game uses synthwave music to keep you on track to make the correct
-moves by timing the beats of the songs to the correct moves on the
-screen. It reminds me of the early Guitar Hero games, as well as mobile
-apps like VOEZ - repetition and staying on-beat is the only way to win.
-
-* In-Game Screenshots
-Taking a look at the main menu, you can see that Neon Drive plays into
-every stereotype you can think of around retro-futuristic, synthwave
-arcades (in a good way).
-
-#+caption: Neon Drive Menu
-[[https://img.cleberg.net/blog/20201228-neon-drive/neon_drive_menu.png]]
-
-Once you get into the first level, you'll see that the choice of car
-fits right in with the stereotypical cars of the 80s, like the
-[[https://en.wikipedia.org/wiki/DMC_DeLorean][DeLorean]] or the
-[[https://en.wikipedia.org/wiki/Ferrari_F40][Ferrari F40]]. Each new
-level comes with new color schemes and cars, so you should never get
-tired of the aesthetic.
-
-#+caption: Neon Drive Race
-[[https://img.cleberg.net/blog/20201228-neon-drive/neon_drive_race.png]]
-
-Personally, I love the orange and blue colors used in level 2:
-
-#+caption: Level 2
-[[https://img.cleberg.net/blog/20201228-neon-drive/neon_drive_level_2.png]]
-
-If you're the competitive type and getting 100% on all arcade machines
-isn't enough, there are leaderboards for both the regular game and the
-endurance game mode.
-
-#+caption: Leaderboard
-[[https://img.cleberg.net/blog/20201228-neon-drive/neon_drive_leaderboard.png]]
-
-* Other Suggestions
-Neon Drive sits nicely within the well-established cult genre of outrun
-games. Other games that I've enjoyed in this same spectrum are:
-
-- [[https://store.steampowered.com/app/233270/Far_Cry_3__Blood_Dragon/][Far
-  Cry 3: Blood Dragon]]
-- [[https://store.steampowered.com/app/1239690/Retrowave/][Retrowave]]
-- [[https://store.steampowered.com/app/732810/Slipstream/][Slipstream]]
-
-Although these games aren't necessarily in the same genre, they do have
-aspects that place them close enough to interest gamers who enjoyed
-Neon Drive:
-
-- [[https://store.steampowered.com/app/311800/Black_Ice/][Black Ice]]
-- [[https://store.steampowered.com/app/746850/Cloudpunk/][Cloudpunk]]
-- [[https://store.steampowered.com/app/1222680/Need_for_Speed_Heat/][Need
-  for Speed: Heat]]
-- [[https://store.steampowered.com/app/1019310/VirtuaVerse/][VirtuaVerse]]
-
-Of course, if all you really care about is the arcade aspect of these
-games, you can check out the
-[[https://store.steampowered.com/app/400020/Atari_Vault/][Atari Vault]]
-or any of the other classic games sold on Steam by companies like Namco
-and Atari. For something like Nintendo, you'd have to settle for buying
-used classic consoles or delving into the world of emulation.
diff --git a/blog/nextcloud-on-ubuntu/index.org b/blog/nextcloud-on-ubuntu/index.org
deleted file mode 100644
index baa7976..0000000
--- a/blog/nextcloud-on-ubuntu/index.org
+++ /dev/null
@@ -1,159 +0,0 @@
-#+title: Nextcloud on Ubuntu
-#+date: 2022-03-23
-#+description: A guide to self-hosting the NextCloud application on your own server.
-#+filetags: :selfhosting:
-
-* What is Nextcloud?
-[[https://nextcloud.com/][Nextcloud]] is a self-hosted solution for
-storage, communications, editing, calendar, contacts, and more.
-
-This tutorial assumes that you have an Ubuntu server and a domain name
-configured to point toward the server.
-
-* Install Dependencies
-To start, you will need to install the packages that Nextcloud requires:
-
-#+begin_src sh
-sudo apt install apache2 mariadb-server libapache2-mod-php7.4
-sudo apt install php7.4-gd php7.4-mysql php7.4-curl php7.4-mbstring php7.4-intl
-sudo apt install php7.4-gmp php7.4-bcmath php-imagick php7.4-xml php7.4-zip
-#+end_src
-
-* Set Up MySQL
-Next, you will need to log in to MySQL as the =root= user of the
-machine.
-
-#+begin_src sh
-sudo mysql -uroot -p
-#+end_src
-
-Once you've logged in, you must create a new user so that Nextcloud can
-manage the database. You will also create a =nextcloud= database and
-assign privileges:
-
-#+begin_src sql
-CREATE USER 'username'@'localhost' IDENTIFIED BY 'password';
-CREATE DATABASE IF NOT EXISTS nextcloud CHARACTER SET utf8mb4 COLLATE utf8mb4_general_ci;
-GRANT ALL PRIVILEGES ON nextcloud.* TO 'username'@'localhost';
-FLUSH PRIVILEGES;
-quit;
-#+end_src
-
-* Download & Install Nextcloud
-To download Nextcloud, go to the
-[[https://nextcloud.com/install/#instructions-server][Nextcloud
-downloads page]], click on =Archive File= and right-click the big blue
-button to copy the link.
-
-Then, go to your server and enter the following commands to download,
-unzip, and move the files to your destination directory. This example
-uses =example.com= as the destination, but you can put it wherever you
-want to serve your files from.
-
-#+begin_src sh
-wget https://download.nextcloud.com/server/releases/nextcloud-23.0.3.zip
-sudo apt install unzip
-unzip nextcloud-23.0.3.zip
-sudo cp -r nextcloud /var/www/example.com
-#+end_src
-
-* Configure the Apache Web Server
-Now that the database is set up and Nextcloud is installed, you need to
-set up the Apache configuration files to tell the server how to handle
-requests for =example.com/nextcloud=.
-
-First, open the following file in the editor:
-
-#+begin_src sh
-sudo nano /etc/apache2/sites-available/nextcloud.conf
-#+end_src
-
-Once the editor is open, paste the following information in. Then, save
-and close the file.
-
-#+begin_src config
-<VirtualHost *:80>
-    DocumentRoot /var/www/example.com
-    ServerName example.com
-    ServerAlias www.example.com
-    ErrorLog ${APACHE_LOG_DIR}/error.log
-    CustomLog ${APACHE_LOG_DIR}/access.log combined
-
-    <Directory /var/www/example.com/nextcloud/>
-        Require all granted
-        AllowOverride All
-        Options FollowSymLinks MultiViews
-        Satisfy Any
-
-        <IfModule mod_dav.c>
-            Dav off
-        </IfModule>
-    </Directory>
-</VirtualHost>
-#+end_src
-
-Once the file is saved, enable it with Apache:
-
-#+begin_src sh
-sudo a2ensite nextcloud.conf
-#+end_src
-
-Next, enable the Apache mods required by Nextcloud:
-
-#+begin_src sh
-sudo a2enmod rewrite headers env dir mime
-#+end_src
-
-Finally, restart Apache. If any errors arise, you must solve those
-before continuing.
-
-#+begin_src sh
-sudo systemctl restart apache2
-#+end_src
-
-For the app to work, you must have the correct file permissions on your
-=nextcloud= directory. 
Set the owner to be =www-data=:
-
-#+begin_src sh
-sudo chown -R www-data:www-data /var/www/example.com/nextcloud/
-#+end_src
-
-* DNS
-If you do not have a static IP address, you will need to update your DNS
-settings (at your DNS provider) whenever your dynamic IP address
-changes.
-
-For an example of how I do that with Cloudflare, see my other post:
-[[../updating-dynamic-dns-with-cloudflare-api/][Updating Dynamic DNS
-with Cloudflare API]]
-
-* Certbot
-If you want to serve Nextcloud over HTTPS rather than plain HTTP, use
-the following commands to issue Let's Encrypt SSL certificates:
-
-#+begin_src sh
-sudo apt install snapd
-sudo snap install core
-sudo snap refresh core
-sudo snap install --classic certbot
-sudo ln -s /snap/bin/certbot /usr/bin/certbot
-sudo certbot --apache
-#+end_src
-
-* Results
-Voilà! You're all done and should be able to access Nextcloud from your
-domain or IP address.
-
-See the screenshots below for the dashboard and a settings page on my
-instance of Nextcloud, using the =Breeze Dark= theme I installed from
-the Apps page.
-
-#+caption: Nextcloud Dashboard
-[[https://img.cleberg.net/blog/20220323-installing-nextcloud-on-ubuntu/nextcloud_dashboard.png]]
-
-/Figure 01: Nextcloud Dashboard/
-
-#+caption: Nextcloud Settings
-[[https://img.cleberg.net/blog/20220323-installing-nextcloud-on-ubuntu/nextcloud_settings.png]]
-
-/Figure 02: Nextcloud Security Settings/
diff --git a/blog/nginx-caching/index.org b/blog/nginx-caching/index.org
deleted file mode 100644
index 5e815d9..0000000
--- a/blog/nginx-caching/index.org
+++ /dev/null
@@ -1,68 +0,0 @@
-#+title: Caching Static Content with Nginx
-#+date: 2022-02-20
-#+description: Learn how to enable the static content cache in Nginx.
-#+filetags: :nginx:
-
-* Update Your Nginx Config to Cache Static Files
-If you run a website on Nginx that serves static content (i.e., content
-that is not dynamic and changing with interactions from the user), you
-would likely benefit from caching that content on the client-side. If
-you're used to Apache and looking for the Nginx equivalent, this post
-should help.
-
-Luckily, setting up the cache is as easy as identifying the file types
-you want to cache and determining the expiration length. To include more
-file types, simply use the pipe separator (=|=) and type the new file
-extension you want to include.
-
-#+begin_src config
-server {
-    ...
-
-    location ~* \.(css|js|jpg|jpeg|gif|png|ico)$ {
-        expires 30d;
-    }
-
-    ...
-}
-#+end_src
-
-I have seen some people who prefer to set =expires= as =365d= or even
-=max=, but that is only for stable, infrequently changing websites. As
-my site often changes (i.e., I'm never content with my website), I need
-to know that my readers are seeing the new content without waiting too
-long.
-
-So, I went ahead and set the expiration date at =30d=, which is short
-enough to refresh for readers but long enough that clients/browsers
-won't be re-requesting the static files too often, hopefully resulting
-in faster loading times, as images should be the only thing slowing down
-my site.
-
-* Testing Results
-To test my changes to the Nginx configuration, I used the
-[[https://addons.mozilla.org/en-US/firefox/addon/http-header-live/][HTTP
-Header Live]] extension on my Gecko-based browser and used the sidebar
-to inspect the headers of a recent image from my blog.
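-
-If you prefer the command line, you can also inspect these headers with
-=curl=. This is a hypothetical example; substitute the URL of any
-static file that your server actually serves:
-
-#+begin_src sh
-# -s silences progress output; -I sends a HEAD request for headers only
-curl -sI https://example.com/image.png | grep -iE 'cache-control|expires'
-#+end_src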
-
-In the image below, you can see that the =Cache-Control= header is now
-present and set to 2592000, which is 30 days represented in seconds
-(30 days × 24 hours/day × 60 minutes/hour × 60 seconds/minute =
-2,592,000 seconds).
-
-The =Expires= field is now showing 22 March 2022, which is 30 days from
-the day of this post, 20 February 2022.
-
-#+caption: Image Headers
-[[https://img.cleberg.net/blog/20220220-caching-static-content-with-nginx/image_headers.png]]
-
-* Caveats
-Remember that this caching system is *client-side*, which means that
-content is only cached for as long as a client allows it. For example,
-my browser purges all caches, data, etc. upon exit, so this caching
-policy will only work as long as my browser remains open and running.
-
-If you need to test updates to your site, you'll need to clear the cache
-to see updates for any file extension you configured. This can often be
-done with the =Shift + F5= or =Ctrl + F5= key combinations in most
-browsers.
diff --git a/blog/nginx-compression/index.org b/blog/nginx-compression/index.org
deleted file mode 100644
index 73d218b..0000000
--- a/blog/nginx-compression/index.org
+++ /dev/null
@@ -1,73 +0,0 @@
-#+title: Enable GZIP Compression in Nginx
-#+date: 2022-12-01
-#+description: Learn how to enable compression in Nginx.
-#+filetags: :nginx:
-
-* Text Compression
-Text compression allows a web server to serve text-based resources
-faster than uncompressed data. This can speed up things like First
-Contentful Paint, Time to Interactive, and Speed Index.
-
-* Enable Nginx Compression with gzip
-In order to enable text compression on Nginx, we need to enable it
-within the configuration file:
-
-#+begin_src sh
-nano /etc/nginx/nginx.conf
-#+end_src
-
-Within the =http= block, find the section that shows something like the
-block below. This is the default gzip configuration I found in my
-=nginx.conf= file on Alpine Linux 3.17. Yours may look slightly
-different, just make sure that you're not creating any duplicate gzip
-options.
-
-#+begin_src conf
-# Enable gzipping of responses.
-#gzip on;
-
-# Set the Vary HTTP header as defined in the RFC 2616. Default is 'off'.
-gzip_vary on;
-#+end_src
-
-Remove the default gzip lines and replace them with the following:
-
-#+begin_src conf
-# Enable gzipping of responses.
-gzip on;
-gzip_vary on;
-gzip_min_length 10240;
-gzip_proxied expired no-cache no-store private auth;
-gzip_types text/plain text/css text/xml text/javascript application/x-javascript application/xml;
-gzip_disable "MSIE [1-6]";
-#+end_src
-
-* Explanations of ngx_http_gzip_module Options
-Each of the lines above enables a different aspect of the gzip response
-for Nginx. Here are the full explanations:
-
-- =gzip= -- Enables or disables gzipping of responses.
-- =gzip_vary= -- Enables or disables inserting the "Vary:
-  Accept-Encoding" response header field if the directives gzip,
-  gzip_static, or gunzip are active.
-- =gzip_min_length= -- Sets the minimum length of a response that will
-  be gzipped. The length is determined only from the "Content-Length"
-  response header field.
-- =gzip_proxied= -- Enables or disables gzipping of responses for
-  proxied requests depending on the request and response. The fact that
-  the request is proxied is determined by the presence of the "Via"
-  request header field.
-- =gzip_types= -- Enables gzipping of responses for the specified MIME
-  types in addition to "text/html". The special value "*" matches any
-  MIME type (0.8.29). 
Responses with the "text/html" type are always - compressed. -- =gzip_disable= -- Disables gzipping of responses for requests with - "User-Agent" header fields matching any of the specified regular - expressions. - - The special mask "msie6" (0.7.12) corresponds to the regular - expression "MSIE [4-6].", but works faster. Starting from version - 0.8.11, "MSIE 6.0; ... SV1" is excluded from this mask. - -More information on these directives and their options can be found on -the [[https://nginx.org/en/docs/http/ngx_http_gzip_module.html][Module -ngx_{httpgzipmodule}]] page in Nginx's documentation. diff --git a/blog/nginx-referrer-ban-list/index.org b/blog/nginx-referrer-ban-list/index.org deleted file mode 100644 index a80a602..0000000 --- a/blog/nginx-referrer-ban-list/index.org +++ /dev/null @@ -1,126 +0,0 @@ -#+title: Creating a Referrer Ban List in Nginx -#+date: 2022-11-29 -#+description: Learn how to create a ban list for referring sites in Nginx. -#+filetags: :nginx: - -* Creating the Ban List -In order to ban list referral domains or websites with Nginx, you need -to create a ban list file. The file below will accept regexes for -different domains or websites you wish to block. - -First, create the file in your nginx directory: - -#+begin_src sh -doas nano /etc/nginx/banlist.conf -#+end_src - -Next, paste the following contents in and fill out the regexes with -whichever domains you're blocking. - -#+begin_src conf -# /etc/nginx/banlist.conf - -map $http_referer $bad_referer { - hostnames; - - default 0; - - # Put regexes for undesired referrers here - "~news.ycombinator.com" 1; -} -#+end_src - -* Configuring Nginx -In order for the ban list to work, Nginx needs to know it exists and how -to handle it. For this, edit the =nginx.conf= file. - -#+begin_src sh -doas nano /etc/nginx/nginx.conf -#+end_src - -Within this file, find the =http= block and add your ban list file -location to the end of the block. - -#+begin_src conf -# /etc/nginx/nginx.conf - -http { - ... - - # Include ban list - include /etc/nginx/banlist.conf; -} -#+end_src - -* Enabling the Ban List -Finally, we need to take action when a bad referral site is found. To do -so, edit the configuration file for your website. For example, I have -all website configuration files in the =http.d= directory. You may have -them in the =sites-available= directory on some distributions. - -#+begin_src sh -doas nano /etc/nginx/http.d/example.com.conf -#+end_src - -Within each website's configuration file, edit the =server= blocks that -are listening to ports 80 and 443 and create a check for the -=$bad_referrer= variable we created in the ban list file. - -If a matching site is found, you can return any -[[https://en.wikipedia.org/wiki/List_of_HTTP_status_codes][HTTP Status -Code]] you want. Code 403 (Forbidden) is logical in this case since you -are preventing a client connection due to a banned domain. - -#+begin_src conf -server { - ... - - # If a referral site is banned, return an error - if ($bad_referer) { - return 403; - } - - ... -} -#+end_src - -* Restart Nginx -Lastly, restart Nginx to enable all changes made. - -#+begin_src sh -doas rc-service nginx restart -#+end_src - -* Testing Results -In order to test the results, let's curl the contents of our site. To -start, I'll curl the site normally: - -#+begin_src sh -curl https://cleberg.net -#+end_src - -The HTML contents of the page come back successfully: - -#+begin_src html -... 
-#+end_src - -Next, let's include a banned referrer: - -#+begin_src sh -curl --referer https://news.ycombinator.com https://cleberg.net -#+end_src - -This time, I'm met with a 403 Forbidden response page. That means we are -successful and any clients being referred from a banned domain will be -met with this same response code. - -#+begin_src html - -403 Forbidden - -

403 Forbidden

-
nginx
#+end_src
diff --git a/blog/nginx-reverse-proxy/index.org b/blog/nginx-reverse-proxy/index.org
deleted file mode 100644
index 6467f29..0000000
--- a/blog/nginx-reverse-proxy/index.org
+++ /dev/null
@@ -1,220 +0,0 @@
#+title: Set Up a Reverse Proxy with Nginx
#+date: 2022-04-02
#+description: Learn how to set up an Nginx reverse proxy from scratch.
#+filetags: :nginx:

* What is a Reverse Proxy?
A reverse proxy is a server that is placed between local servers or
services and clients/users (e.g., the internet). The reverse proxy
intercepts all requests from clients at the network edge and uses its
configuration files to determine where each request should be sent.

** A Brief Example
For example, let's say that I run three servers in my home:

- Server_01 (=example.com=)
- Server_02 (=service01.example.com=)
- Server_03 (=service02.example.com=)

I also run a reverse proxy in my home that intercepts all public
traffic:

- Reverse Proxy

Assume that I have a domain name (=example.com=) that allows clients to
request websites or services from my home servers.

In this case, the reverse proxy will intercept all traffic from
=example.com= that enters my network and determine if the client is
requesting valid data, based on my configuration.

If the user is requesting =example.com= and my configuration files say
that Server_01 holds that data, Nginx will send the user to
Server_01. If I were to change the configuration so that =example.com=
is routed to Server_02, that same user would be sent to Server_02
instead.

#+begin_src txt
┌──────┐                                          ┌───────────┐
│ User │─┐                                     ┌──► Server_01 │
└──────┘ │                                     │  └───────────┘
         │ ┌──────────┐   ┌───────────────┐    │  ┌───────────┐
         ├─► Internet ├───► Reverse Proxy ├────├──► Server_02 │
         │ └──────────┘   └───────────────┘    │  └───────────┘
┌──────┐ │                                     │  ┌───────────┐
│ User │─┘                                     └──► Server_03 │
└──────┘                                          └───────────┘
#+end_src

* Reverse Proxy Options
There are a lot of options when it comes to reverse proxy servers, so
I'm just going to list a few of the options I've heard recommended over
the last few years:

- [[https://nginx.com][Nginx]]
- [[https://caddyserver.com][Caddy]]
- [[https://traefik.io/][Traefik]]
- [[https://www.haproxy.org/][HAProxy]]
- [[https://ubuntu.com/server/docs/proxy-servers-squid][Squid]]

In this post, we will be using Nginx as our reverse proxy, running on
Ubuntu Server 20.04.4 LTS.

* Nginx Reverse Proxy Example
** Local Applications
You may be like me and have a lot of applications running on your local
network that you'd like to expose publicly with a domain.

In my case, I have services running in multiple Docker containers within
a single server and want a way to visit those services from anywhere
with a URL. For example, on my local network,
[[https://dashy.to][Dashy]] runs through port 4000 (=localhost:4000=)
and [[https://github.com/louislam/uptime-kuma][Uptime Kuma]] runs
through port 3001 (=localhost:3001=).

In order to expose these services to the public, I will need to do the
following:

1. Set up DNS records for a domain or subdomain (one per service) to
   point toward the IP address of the server.
2. Open up the server network's HTTP and HTTPS ports (80 & 443) so that
   the reverse proxy can accept traffic and determine where to send it.
3. Install the reverse proxy software.
4. Configure the reverse proxy to recognize which service should get
   traffic from any of the domains or subdomains.
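
Before starting, it's worth confirming that each service actually
responds on its local port; the ports below are the Dashy and Uptime
Kuma examples from above:

#+begin_src sh
# Each request should return an HTTP status line and headers
curl -sI http://localhost:4000
curl -sI http://localhost:3001
#+end_src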
- -** Step 1: DNS Configuration -To start, update your DNS configuration so that you have an =A= record -for each domain or subdomain. - -The =A= records should point toward the public IP address of the server. -If you don't know the public IP address, log in to the server and run -the following command: - -#+begin_src sh -curl ifconfig.co -#+end_src - -In the DNS example below, =xxx.xxx.xxx.xxx= is the public IP address of -the server. - -#+begin_src config -example.com A xxx.xxx.xxx.xxx -uptime.example.com A xxx.xxx.xxx.xxx -dashy.example.com A xxx.xxx.xxx.xxx -www CNAME example.com -#+end_src - -Finally, ensure the DNS has propagated correctly with -[[https://dnschecker.org][DNS Checker]] by entering your domains or -subdomains in the search box and ensuring the results are showing the -correct IP address. - -** Step 2: Open Network Ports -This step will be different depending on which router you have in your -home. If you're not sure, try to visit -[[http://192.168.1.1][192.168.1.1]] in your browser. Login credentials -are usually written on a sticker somewhere on your modem/router. - -Once you're able to log in to your router, find the Port Forwarding -settings. You will need to forward ports =80= and =443= to whichever -machine is running the reverse proxy. - -In my case, the table below shows the port-forwarding rules I've -created. In this table, =xxx.xxx.xxx.xxx= is the local device IP of the -reverse proxy server, it will probably be an IP between =192.168.1.1= -and =192.168.1.255=. - -| NAME | FROM | PORT | DEST PORT/IP | ENABLED | -|-------+------+------+-----------------+---------| -| HTTP | ​** | 80 | xxx.xxx.xxx.xxx | TRUE | -| HTTPS | ​** | 443 | xxx.xxx.xxx.xxx | TRUE | - -Once configured, these rules will direct all web traffic to your reverse -proxy. - -** Step 3: Nginx Installation -To install Nginx, simply run the following command: - -#+begin_src sh -sudo apt install nginx -#+end_src - -If you have a firewall enabled, open up ports =80= and =443= on your -server so that Nginx can accept web traffic from the router. - -For example, if you want to use =ufw= for web traffic and SSH, run the -following commands: - -#+begin_src sh -sudo ufw allow 'Nginx Full' -sudo ufw allow SSH -sudo ufw enable -#+end_src - -** Step 4: Nginx Configuration -Now that we have domains pointing toward the server, the only step left -is to configure the reverse proxy to direct traffic from domains to -local services. - -To start, you'll need to create a configuration file for each domain in -=/etc/nginx/sites-available/=. They will look identical except for the -=server_name= variable and the =proxy_pass= port. - -Dashy: - -#+begin_src sh -nano /etc/nginx/sites-available/dashy.example.com -#+end_src - -#+begin_src config -server { - listen 80; - server_name dashy.example.com; - - location / { - proxy_pass http://localhost:4000; - } -} -#+end_src - -Uptime: - -#+begin_src sh -nano /etc/nginx/sites-available/uptime.example.com -#+end_src - -#+begin_src config -server { - listen 80; - server_name uptime.example.com; - - location / { - proxy_pass http://localhost:3001; - } -} -#+end_src - -Once the configuration files are created, you will need to enable them -with the =symlink= command: - -#+begin_src sh -sudo ln -s /etc/nginx/sites-available/dashy.example.com /etc/nginx/sites-enabled/ -#+end_src - -Voilà! Your local services should now be available through their URLs. - -* HTTPS with Certbot -If you've followed along, you'll notice that your services are only -available via HTTP (not HTTPS). 
- -If you want to enable HTTPS for your new domains, you will need to -generate SSL/TLS certificates for them. The easiest way to generate -certificates on Nginx is [[https://certbot.eff.org][Certbot]]: - -#+begin_src sh -sudo apt install snapd; sudo snap install core; sudo snap refresh core -sudo snap install --classic certbot -sudo ln -s /snap/bin/certbot /usr/bin/certbot -sudo certbot --nginx -#+end_src diff --git a/blog/nginx-tmp-errors/index.org b/blog/nginx-tmp-errors/index.org deleted file mode 100644 index 092b146..0000000 --- a/blog/nginx-tmp-errors/index.org +++ /dev/null @@ -1,75 +0,0 @@ -#+title: Fixing Permission Errors in /var/lib/nginx -#+date: 2022-11-11 -#+description: Learn how to fix permission errors related to the Nginx temporary file storage. -#+filetags: :nginx: - -/This is a brief post so that I personally remember the solution as it -has occurred multiple times for me./ - -* The Problem -After migrating to a new server OS, I started receiving quite a few -permission errors like the one below. These popped up for various -different websites I'm serving via Nginx on this server, but did not -prevent the website from loading. - -I found the errors in the standard log file: - -#+begin_src sh -cat /var/log/nginx/error.log -#+end_src - -#+begin_src sh -2022/11/11 11:30:34 [crit] 8970#8970: *10 open() "/var/lib/nginx/tmp/proxy/3/00/0000000003" failed (13: Permission denied) while reading upstream, client: 169.150.203.10, server: cyberchef.example.com, request: "GET /assets/main.css HTTP/2.0", upstream: "http://127.0.0.1:8111/assets/main.css", host: "cyberchef.example.com", referrer: "https://cyberchef.example.com/" -#+end_src - -You can see that the error is =13: Permission denied= and it occurs in -the =/var/lib/nginx/tmp/= directory. In my case, I had thousands of -errors where Nginx was denied permission to read/write files in this -directory. - -So how do I fix it? - -* The Solution -In order to resolve the issue, I had to ensure the =/var/lib/nginx= -directory is owned by Nginx. Mine was owned by the =www= user and Nginx -was not able to read or write files within that directory. This -prevented Nginx from caching temporary files. - -#+begin_src sh -# Alpine Linux -doas chown -R nginx:nginx /var/lib/nginx - -# Other Distros -sudo chown -R nginx:nginx /var/lib/nginx -#+end_src - -You /may/ also be able to change the =proxy_temp_path= in your Nginx -config, but I did not try this. Here's a suggestion I found online that -may work if the above solution does not: - -#+begin_src sh -nano /etc/nginx/http.d/example.com.conf -#+end_src - -#+begin_src conf -server { - ... - - # Set the proxy_temp_path to your preference, make sure it's owned by the - # `nginx` user - proxy_temp_path /tmp; - - ... -} -#+end_src - -Finally, restart Nginx and your server should be able to cache temporary -files again. - -#+begin_src sh -# Alpine Linux (OpenRC) -doas rc-service nginx restart - -# Other Distros (systemd) -sudo systemctl restart nginx -#+end_src diff --git a/blog/nginx-wildcard-redirect/index.org b/blog/nginx-wildcard-redirect/index.org deleted file mode 100644 index 41e84cb..0000000 --- a/blog/nginx-wildcard-redirect/index.org +++ /dev/null @@ -1,116 +0,0 @@ -#+title: Redirect Nginx Subdomains & Trailing Content with Regex -#+date: 2022-12-07 -#+description: A simple Nginx configuration to redirect all subdomains and trailing content. 
#+filetags: :nginx:

* Problem
I recently migrated domains and replaced the old webpage with a simple
info page with instructions to users on how to edit their bookmarks and
URLs to get to the page they were seeking.

This was not ideal as it left the work up to the user and may have
caused friction for users who accessed my RSS feed.

* Solution
Instead, I finally found a solution that allows me to redirect both
subdomains AND trailing content. For example, both of these URLs now
redirect properly using the logic I'll explain below:

#+begin_src txt
# Example 1 - Simple base domain redirect with trailing content
https://domain1.com/blog/alpine-linux/ -> https://domain2.com/blog/alpine-linux/

# Example 2 - Complex redirect with both a subdomain and trailing content
https://libreddit.domain1.com/r/history/comments/7z8cbg/new_discovery_mode_turns_video_game_assassins/
->
https://libreddit.domain2.com/r/history/comments/7z8cbg/new_discovery_mode_turns_video_game_assassins/
#+end_src

Go ahead, try the URLs if you want to test them.

** Nginx Config
To make this possible, I needed to configure a proper redirect scheme in
my Nginx configuration.

#+begin_src sh
doas nano /etc/nginx/http.d/domain1.conf
#+end_src

Within this file, I had one block configured to redirect HTTP requests
to HTTPS for the base domain and all subdomains.

#+begin_src conf
server {
    listen [::]:80;
    listen 80;
    server_name domain1.com *.domain1.com;

    if ($host = domain1.com) {
        return 301 https://$host$request_uri;
    }

    if ($host = *.domain1.com) {
        return 301 https://$host$request_uri;
    }

    return 404;
}
#+end_src

For the base domain, I have another =server= block dedicated to
redirecting all base domain requests. You can see that the =rewrite=
line is instructing Nginx to gather all trailing content and append it
to the new =domain2.com= URL.

#+begin_src conf
server {
    listen [::]:443 ssl http2;
    listen 443 ssl http2;

    server_name domain1.com;

    rewrite ^/(.*)$ https://domain2.com/$1 permanent;

    ssl_certificate /etc/letsencrypt/live/domain1.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/domain1.com/privkey.pem;
}
#+end_src

Finally, the tricky part is figuring out how to tell Nginx to redirect
while keeping both a subdomain and trailing content intact. I found that
the easiest way to do this is to give it a =server= block of its own.

Within this block, we need to do some regex on the =server_name= line
before we can rewrite anything. This creates a variable called
=subdomain=.

Once the server gets to the =rewrite= line, it pulls the =subdomain=
variable from above and uses it on the new =domain2.com= domain before
appending the trailing content (=$request_uri=).

#+begin_src conf
server {
    listen [::]:443 ssl http2;
    listen 443 ssl http2;

    server_name ~^(?<subdomain>\w+)\.domain1\.com$;

    rewrite ^ https://$subdomain.domain2.com$request_uri permanent;

    ssl_certificate /etc/letsencrypt/live/domain1.com/fullchain.pem;
    ssl_certificate_key /etc/letsencrypt/live/domain1.com/privkey.pem;
}
#+end_src

That's all there is to it. With this, I simply restarted Nginx and
watched the redirections work in action.

#+begin_src sh
doas rc-service nginx restart
#+end_src

Looking back on it, I wish I had done this sooner. Who knows how many
people went looking for my sites or bookmarks and gave up when they saw
the redirect instructions page.

Oh well, it's done now. Live and learn.
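
If you want to verify redirects like these from a terminal, you can
inspect the status line and =Location= header directly; the domains are
the same placeholders used above:

#+begin_src sh
# Both requests should return a 301 with the new domain in the Location header
curl -sI https://domain1.com/blog/alpine-linux/ | grep -iE 'http|location'
curl -sI https://libreddit.domain1.com/r/history/ | grep -iE 'http|location'
#+end_src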
diff --git a/blog/njalla-dns-api/index.org b/blog/njalla-dns-api/index.org
deleted file mode 100644
index 363e9e3..0000000
--- a/blog/njalla-dns-api/index.org
+++ /dev/null
@@ -1,199 +0,0 @@
#+title: Dynamic DNS with Njalla API
#+date: 2022-02-10
#+description: Learn how to dynamically update DNS records for changing IPs with Njalla.
#+filetags: :sysadmin:

* Njalla's API
As noted in my recent post about [[/blog/ditching-cloudflare/][switching
to Njalla from Cloudflare]], I was searching for a way to replace my
very easy-to-use bash script to [[/blog/cloudflare-dns-api/][update
Cloudflare's DNS via their API]].

To reiterate what I said in those posts, this is a common necessity for
those of us who have non-static IP addresses that can change at any
moment due to ISP policy.

In order to keep a home server running smoothly, the server admin needs
to have a process to constantly monitor their public IP address and
update their domain's DNS records if it changes.

This post explains how to use Python to update Njalla's DNS records
whenever a machine's public IP address changes.

** Creating a Token
To use Njalla's API, you will first need to create a token that will be
used to authenticate you every time you call the API. Luckily, this is
very easy to do if you have an account with Njalla.

Simply go to the [[https://njal.la/settings/api/][API Settings]] page and
click the =Add Token= button. Next, enter a name for the token and click
=Add=.

Finally, click the =Manage= button next to your newly created token and
copy the =API Token= field.

** Finding the Correct API Request
Once you have a token, you're ready to call the Njalla API for any
number of requests. For a full listing of available requests, see the
[[https://njal.la/api/][Njalla API Documentation]].

For this demo, we are using the =list-records= and =edit-record=
requests.

The =list-records= request requires the following payload to be sent
when calling the API:

#+begin_src txt
params: {
    domain: string
}
#+end_src

The =edit-record= request requires the following payload to be sent when
calling the API:

#+begin_src txt
params: {
    domain: string
    id: int
    content: string
}
#+end_src

* Server Set-Up
To create this script, we will be using Python. By default, I use Python
3 on my servers, so please note that I did not test this in Python 2,
and I do not know if Python 2 will work for this.

** Creating the Script
First, find a suitable place to create your script. Personally, I just
create a directory called =ddns= in my home directory:

#+begin_src sh
mkdir ~/ddns
#+end_src

Next, create a Python script file:

#+begin_src sh
nano ~/ddns/ddns.py
#+end_src

The following code snippet is quite long, so I won't go into depth on
each part. However, I suggest you read through the entire script before
running it; it is quite simple and contains comments to help explain
each code block.

:warning: *Note*: You will need to update the following variables for
this to work:

- =token=: This is the Njalla API token you created earlier.
- =user_domain=: This is the top-level domain you want to modify.
- =include_subdomains=: Set this to =True= if you also want to modify
  subdomains found under the TLD.
- =subdomains=: If =include_subdomains= = =True=, you can include your
  list of subdomains to be modified here.
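
If you'd like to confirm that your token works before wiring up the
script, a single API call from the shell is enough; the token and domain
below are placeholders:

#+begin_src sh
# List all DNS records for the domain; a valid token returns a JSON result
curl -s https://njal.la/api/1/ \
  -H 'Authorization: Njalla YOUR_API_TOKEN' \
  -H 'Content-Type: application/json' \
  -d '{"method": "list-records", "params": {"domain": "example.com"}}'
#+end_src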

#+begin_src python
#!/usr/bin/python
# -*- coding: utf-8 -*-
# Import Python modules

from requests import get
import requests
import json

# Set global variables

url = 'https://njal.la/api/1/'
token = ''
user_domain = 'example.com'
include_subdomains = True
subdomains = ['one', 'two']


# Main API call function

def njalla(method, **params):
    headers = {'Authorization': 'Njalla ' + token}
    response = requests.post(url, json={'method': method,
                             'params': params}, headers=headers).json()
    if 'result' not in response:
        raise Exception('API Error', response)
    return response['result']


# Gather all DNS records for a domain

def get_records(domain):
    return njalla('list-records', domain=domain)


# Update a DNS record for a domain

def update_record(domain, record_id, record_content):
    return njalla('edit-record', domain=domain, id=record_id,
                  content=record_content)


# Get public IP addresses

ipv4 = get('https://api.ipify.org').text
print('IPv4: {}'.format(ipv4))
ipv6 = get('https://api64.ipify.org').text
print('IPv6: {}'.format(ipv6))

# Call API to get all DNS records

data = get_records(user_domain)

# Loop through records and check if each one is IPv4 (A) or IPv6 (AAAA)
# Update only if DNS is different from server IP

for record in data['records']:
    if record['name'] == '@' or (include_subdomains
                                 and record['name'] in subdomains):
        if record['type'] == 'A':
            if record['content'] == ipv4:
                print(record['type'], 'record for', record['name'],
                      'already matches public IPv4 address. Skipping...')
            else:
                print('IPv4 of', ipv4,
                      "does not match Njalla's value of",
                      record['content'], '. Updating...')
                update_record(user_domain, record['id'], ipv4)
        elif record['type'] == 'AAAA':
            if record['content'] == ipv6:
                print(record['type'], 'record for', record['name'],
                      'already matches public IPv6 address. Skipping...')
            else:
                print('IPv6 of', ipv6,
                      "does not match Njalla's value of",
                      record['content'], '. Updating...')
                update_record(user_domain, record['id'], ipv6)
#+end_src

** Running the Script
Once you've created the script and are ready to test it, run the
following command:

#+begin_src sh
python3 ~/ddns/ddns.py
#+end_src

** Setting the Script to Run Automatically
To make sure the script runs automatically, add it to the =cron= file so
that it will run on a schedule. To do this, open the =cron= file:

#+begin_src sh
crontab -e
#+end_src

In the cron file, paste the following at the bottom of the editor in
order to check the IP every five minutes; replace =<user>= with your
username:

#+begin_src sh
,*/5 * * * * python3 /home/<user>/ddns/ddns.py
#+end_src
diff --git a/blog/org-blog/index.org b/blog/org-blog/index.org
deleted file mode 100644
index b806ae3..0000000
--- a/blog/org-blog/index.org
+++ /dev/null
@@ -1,71 +0,0 @@
#+title: Blogging in Org-Mode
#+date: 2024-02-26
#+description: A guide to blogging with org-mode, no third-party tools required.
#+filetags: :dev:

* TODO Write a post on emacs first?

- Could write-up my Doom Emacs config and workflow first, then reference here.
- -* TODO Configure Emacs - -#+begin_src sh -emacs -nw -#+end_src - -=SPC f f= ---> =~/.doom.d/config.el= - -#+begin_src lisp -;; org-publish -(require 'ox-publish) - -(setq org-publish-project-alist - `(("blog" - :base-directory "~/Source/cleberg.net/" - :base-extension "org" - :recursive t - :publishing-directory "~/Source/cleberg.net/public/" - :publishing-function org-html-publish-to-html - ;; HTML5 - :html-doctype "html5" - :html-html5-fancy t - ;; Disable some Org's HTML defaults - :html-head-include-scripts nil - :html-head-include-default-style nil - :section-numbers nil - :with-title nil - ;; Generate sitemap - :auto-sitemap t - :sitemap-filename "sitemap.org" - ;; Customize HTML output - :html-divs ((preamble "header" "preamble") - (content "main" "content") - (postamble "footer" "postamble")) - :html-head "" - :html-preamble " -

%t
"
:html-postamble "
Last build: %T
Created with %c
" - ) - - ("static" - :base-directory "~/Source/cleberg.net/static/" - :base-extension "css\\|txt\\|jpg\\|gif\\|png" - :recursive t - :publishing-directory "~/Source/cleberg.net/public/" - :publishing-function org-publish-attachment) - - ("cleberg.net" :components ("blog" "static")))) -#+end_src - -* TODO Build Process - -* TODO Deploy Process diff --git a/blog/password-security/index.org b/blog/password-security/index.org deleted file mode 100644 index 0ebbb84..0000000 --- a/blog/password-security/index.org +++ /dev/null @@ -1,121 +0,0 @@ -#+title: Password Security -#+date: 2019-12-16 -#+description: Password security basics. -#+filetags: :security: - -* Users -** Why Does It Matter? -Information security, including passwords and identities, has become one -of the most important digital highlights of the last decade. With -[[https://www.usatoday.com/story/money/2018/12/28/data-breaches-2018-billions-hit-growing-number-cyberattacks/2413411002/][billions -of people affected by data breaches each year]], there's a greater need -to introduce strong information security systems. If you think you've -been part of a breach, or you want to check and see, you can use -[[https://haveibeenpwned.com/][Have I Been Pwned]] to see if your email -has been involved in any public breaches. Remember that there's a -possibility that a company experienced a breach and did not report it to -anyone. - -** How Do I Protect Myself? -The first place to start with any personal security check-up is to -gather a list of all the different websites, apps, or programs that -require you to have login credentials. Optionally, once you know where -your information is being stored, you can sort the list from the -most-important items such as banks or government logins to less -important items such as your favorite meme site. You will want to ensure -that your critical logins are secure before getting to the others. - -Once you think you have a good idea of all your different authentication -methods, I recommend using a password manager such as -[[https://bitwarden.com/][Bitwarden]]. Using a password manager allows -you to automatically save your logins, create randomized passwords, and -transfer passwords across devices. However, you'll need to memorize your -"vault password" that allows you to open the password manager. It's -important to make this something hard to guess since it would allow -anyone who has it to access every password you've stored in there. - -Personally, I recommend using a -[[https://en.wikipedia.org/wiki/Passphrase][passphrase]] instead of a -[[https://en.wikipedia.org/wiki/Password][password]] for your vault -password. Instead of using a string of characters (whether random or -simple), use a phrase and add in symbols and a number. For example, your -vault password could be =Racing-Alphabet-Gourd-Parrot3=. Swap the -symbols out for whichever symbol you want, move the number around, and -fine-tune the passphrase until you are confident that you can remember -it whenever necessary. - -Once you've stored your passwords, make sure you continually check up on -your account and make sure you aren't following bad password practices. -Krebs on Security has a great -[[https://krebsonsecurity.com/password-dos-and-donts/][blog post on -password recommendations]]. Any time that a data breach happens, make -sure you check to see if you were included, and if you need to reset any -account passwords. - -* Developers -** What Are the Basic Requirements? 
When developing any password-protected application, there are a few
basic rules that anyone should follow even if they do not follow any
official guidelines such as NIST. The foremost practice is to require
users to use passwords that are at least 8 characters and cannot easily
be guessed. This sounds extremely simple, but it requires quite a few
different strategies. First, the application should check the potential
passwords against a dictionary of insecure passwords such as =password=,
=1234abc=, or =application_name=.

Next, the application should offer guidance on the strength of passwords
being entered during enrollment. Further, NIST officially recommends
*not* implementing any composition rules that make passwords hard to
remember (e.g. passwords with letters, numbers, and special characters)
and instead encouraging the use of long pass phrases which can include
spaces. It should be noted that to be able to keep spaces within
passwords, all unicode characters should be supported, and passwords
should not be truncated.

** What Does NIST Recommend?
The National Institute of Standards and Technology
([[https://www.nist.gov][NIST]]) in the US Department of Commerce
regularly publishes information around information security and digital
identity guidelines. Recently, NIST published
[[https://pages.nist.gov/800-63-3/sp800-63b.html][Special Publication
800-63b]]: Digital Identity Guidelines and Authentication and Lifecycle
Management.

#+begin_quote
A Memorized Secret authenticator - commonly referred to as a password
or, if numeric, a PIN - is a secret value intended to be chosen and
memorized by the user. Memorized secrets need to be of sufficient
complexity and secrecy that it would be impractical for an attacker to
guess or otherwise discover the correct secret value. A memorized secret
is something you know.

- NIST Special Publication 800-63B
#+end_quote

NIST offers a lot of guidance on passwords, but I'm going to highlight
just a few of the important factors:

- Require passwords to be a minimum of 8 characters (6 characters if
  randomly generated using an approved random bit generator).
- Compare potential passwords against a list that contains values known
  to be commonly-used, expected, or compromised.
- Offer guidance on password strength, such as a strength meter.
- Implement a rate-limiting mechanism to limit the number of failed
  authentication attempts for each user account.
- Do not require composition rules for passwords and do not require
  passwords to be changed periodically (unless compromised).
- Allow pasting of user identification and passwords to facilitate the
  use of password managers.
- Allow users to view the password as it is being entered.
- Use secure forms of communication and storage, including salting and
  hashing passwords using a one-way key derivation function.

NIST offers further guidance on other devices that require specific
security policies, querying for passwords, and more. All the information
discussed so far comes from
[[https://pages.nist.gov/800-63-3/sp800-63b.html][NIST SP800-63b]] but
NIST offers a lot of information on digital identities, enrollment,
identity proofing, authentication, lifecycle management, federation, and
assertions in the total [[https://pages.nist.gov/800-63-3/][NIST
SP800-63 Digital Identity Guidelines]].
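
One way to act on the second recommendation above (checking potential
passwords against known-compromised values) is the Have I Been Pwned
range API, which only ever sees the first five characters of the SHA-1
hash. A quick sketch from the shell, with a placeholder password:

#+begin_src sh
# Hash the candidate password and split the digest for the k-anonymity API
hash=$(printf '%s' 'correct-horse-battery-staple' | sha1sum | awk '{print toupper($1)}')
prefix=${hash:0:5}
suffix=${hash:5}

# Any matching line in the response means the password appeared in a breach
curl -s "https://api.pwnedpasswords.com/range/$prefix" | grep "$suffix"
#+end_src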
diff --git a/blog/photography/index.org b/blog/photography/index.org deleted file mode 100644 index cc5f388..0000000 --- a/blog/photography/index.org +++ /dev/null @@ -1,68 +0,0 @@ -#+title: Jumping Back Into Photography -#+date: 2021-04-28 -#+description: Some thoughts on photography. -#+filetags: :personal: - -* Why Photography? -I've often wondered why photography is as enticing as it is. You can see -billions of people around the world taking photographs every single -moment of the day. New technology often boasts about their photographic -capabilities, especially smartphones. I would even assume that we live -in a world where there is never a moment in which a photograph is not -being taken somewhere on Earth. - -As for myself, I would simply say that I enjoy preserving a memory in -physical (or digital) form. I've never had the best memory when it comes -to recalling details of places and people gone by, so it helps to have a -frame of reference lying around. - -Regardless of the reason, I think most people would agree that you -simply cannot have too many hobbies. - -* Older Cameras -I started playing around with the idea of photography when my family -purchased a Fujifilm camera for family-specific events. I don't recall -the specific model, but I do recall it was a point-and-shoot camera -without an interchangeable lens. However, it was of great value to -someone, like myself, who couldn't afford any other camera. I took about -10,000 shots with that camera over a 3-5 year span. Most notably, all of -my trips to California were documented through this camera. - -When possible, I would borrow my sister's camera, which is a Sony -SLT-A58. This camera was great and allowed for some of my best early -shots, especially those taken in Utah's and Nevada's parks. - -* My Current Kit -I've finally come to a point in my life where I have the disposable -income to invest in a solid photography kit. I played around with the -idea of a lot of different cameras, different types, new vs used, etc. -Finally, I settled on the -[[https://en.wikipedia.org/wiki/Sony_%CE%B17_III][Sony α7 III]]. This -camera is mirror-less and uses a full-frame image sensor at 24 -megapixels. I don't create large prints, and I am mostly focused on -preserving memories in high quality for the next 5-10 years with this -camera, so the specifications here are just perfect for me. - -For lenses, I decided to buy two lenses that could carry me through most -situations: - -- [[https://electronics.sony.com/imaging/lenses/full-frame-e-mount/p/sel2470z][Vario-Tessar - T** FE 24-70 mm F4 ZA OSS]] -- [[https://www.tamron-usa.com/product/lenses/a047.html][Tamron 70-300mm - f4.5-6.3 Di III RXD]] - -In addition, I grabbed a couple -[[https://www.promaster.com/Product/6725][HGX Prime 67mm]] protection -filters for the lenses. - -As I delve further into photography and pick up more skills, I will most -likely go back and grab a lens with a higher f-stop value, such as -f/1.8. I toyed with the idea of grabbing a 50 mm at =f/1.8=, but decided -to keep things in a reasonable price range instead. - -Finally, I made sure to buy a photography-specific backpack with a rain -guard, and the zipper on the back panel, to protect the equipment while -wearing the bag. If you've ever had to haul around a DSLR (or camera of -similar heft) in a bag that only has a shoulder strap, you'll know the -pain it can cause. Putting all my equipment in a backpack was an easy -decision. 
diff --git a/blog/php-auth-flow/index.org b/blog/php-auth-flow/index.org deleted file mode 100644 index 2e5cf5c..0000000 --- a/blog/php-auth-flow/index.org +++ /dev/null @@ -1,188 +0,0 @@ -#+title: PHP Authentication Flow -#+date: 2020-08-29 -#+description: Learn how to establish and maintain a basic user authentication flow in PHP. -#+filetags: :dev: - -* Introduction -When creating websites that will allow users to create accounts, the -developer always needs to consider the proper authentication flow for -their app. For example, some developers will utilize an API for -authentication, some will use OAuth, and some may just use their own -simple database. - -For those using pre-built libraries, authentication may simply be a -problem of copying and pasting the code from their library's -documentation. For example, here's the code I use to authenticate users -with the Tumblr OAuth API for my Tumblr client, Vox Populi: - -#+begin_src php -// Start the session -session_start(); - -// Use my key/secret pair to create a new client connection -$consumer_key = getenv('CONSUMER_KEY'); -$consumer_secret = getenv('CONSUMER_SECRET'); -$client = new Tumblr\API\Client($consumer_key, $consumer_secret); -$requestHandler = $client->getRequestHandler(); -$requestHandler->setBaseUrl('https://www.tumblr.com/'); - -// Check the session and cookies to see if the user is authenticated -// Otherwise, send user to Tumblr authentication page and set tokens from Tumblr's response - -// Authenticate client -$client = new Tumblr\API\Client( - $consumer_key, - $consumer_secret, - $token, - $token_secret -); -#+end_src - -However, developers creating authentication flows from scratch will need -to think carefully about when to make sure a web page will check the -user's authenticity. - -In this article, we're going to look at a simple authentication flow -using a MySQL database and PHP. - -* Creating User Accounts -The beginning to any type of user authentication is to create a user -account. This process can take many formats, but the simplest is to -accept user input from a form (e.g., username and password) and send it -over to your database. For example, here's a snippet that shows how to -get username and password parameters that would come when a user submits -a form to your PHP script. - -*Note*: Ensure that your password column is large enough to hold the -hashed value (at least 60 characters or longer). - -#+begin_src php -// Get the values from the URL -$username = $_POST['username']; -$raw_password = $_POST['password']; - -// Hash password -// password_hash() will create a random salt if one isn't provided, and this is generally the easiest and most secure approach. -$password = password_hash($raw_password, PASSWORD_DEFAULT); - -// Save database details as variables -$servername = "localhost"; -$username = "username"; -$password = "password"; -$dbname = "myDB"; - -// Create connection to the database -$conn = new mysqli($servername, $username, $password, $dbname); - -// Check connection -if ($conn->connect_error) { - die("Connection failed: " . $conn->connect_error); -} - -$sql = "INSERT INTO users (username, password) -VALUES ('$username', '$password')"; - -if ($conn->query($sql) === TRUE) { - echo "New record created successfully"; -} else { - echo "Error: " . $sql . "
<br>
" . $conn->error; -} - -$conn->close(); -#+end_src - -** Validate Returning Users -To be able to verify that a returning user has a valid username and -password in your database is as simple as having users fill out a form -and comparing their inputs to your database. - -#+begin_src php -// Query the database for username and password -// ... - -if(password_verify($password_input, $hashed_password)) { - // If the input password matched the hashed password in the database - // Do something, log the user in. -} - -// Else, Redirect them back to the login page. -... -#+end_src - -* Storing Authentication State -Once you've created the user's account, now you're ready to initialize -the user's session. *You will need to do this on every page you load -while the user is logged in.** To do so, simply enter the following code -snippet: - -#+begin_src php -session_start(); -#+end_src - -Once you've initialized the session, the next step is to store the -session in a cookie so that you can access it later. - -#+begin_src php -setcookie(session_name()); -#+end_src - -Now that the session name has been stored, you'll be able to check if -there's an active session whenever you load a page. - -#+begin_src php -if(isset(session_name())) { - // The session is active -} -#+end_src - -** Removing User Authentication -The next logical step is to give your users the option to log out once -they are done using your application. This can be tricky in PHP since a -few of the standard ways do not always work. - -#+begin_src php -// Initialize the session. -// If you are using session_name("something"), don't forget it now! -session_start(); - -// Delete authentication cookies -unset($_COOKIE[session_name()]); -setcookie(session_name(), "", time() - 3600, "/logged-in/"); -unset($_COOKIE["PHPSESSID"]); -setcookie("PHPSESSID", "", time() - 3600, "/logged-in/"); - -// Unset all of the session variables. -$_SESSION = array(); -session_unset(); - -// If it's desired to kill the session, also delete the session cookie. -// Note: This will destroy the session, and not just the session data! -if (ini_get("session.use_cookies")) { - $params = session_get_cookie_params(); - setcookie(session_name(), '', time() - 42000, - $params["path"], $params["domain"], - $params["secure"], $params["httponly"] - ); -} - -// Finally, destroy the session. -session_destroy(); -session_write_close(); - -// Go back to sign-in page -header('Location: https://example.com/logged-out/'); -die(); -#+end_src - -* Wrapping Up -Now you should be ready to begin your authentication programming with -PHP. You can create user accounts, create sessions for users across -different pages of your site, and then destroy the user data when -they're ready to leave. - -For more information on this subject, I recommend reading the -[[https://www.php.net/][PHP Documentation]]. Specifically, you may want -to look at [[https://www.php.net/manual/en/features.http-auth.php][HTTP -Authentication with PHP]], -[[https://www.php.net/manual/en/book.session.php][session handling]], -and [[https://www.php.net/manual/en/function.hash.php][hash]]. diff --git a/blog/php-comment-system/index.org b/blog/php-comment-system/index.org deleted file mode 100644 index 92dd984..0000000 --- a/blog/php-comment-system/index.org +++ /dev/null @@ -1,265 +0,0 @@ -#+title: Roll Your Own Static Commenting System in PHP -#+date: 2021-04-23 -#+description: A simple guide to creating a commenting system in PHP. 
-#+filetags: :dev: - -* The Terrible-ness of Commenting Systems -The current state of affairs regarding interactive comment systems is, -well, terrible. It is especially awful if you're a privacy conscious -person who does not generally load third-party scripts or frames on the -websites you visit. - -Even further, many comment systems are charging exorbitant fees for -something that should be standard. - -Of course, there are some really terrible options: - -- Facebook Comments -- Discourse - -There are some options that are better but still use too many scripts, -frames, or social integrations on your web page that could impact some -users: - -- Disqus -- Isso -- Remark42 - -Lastly, I looked into a few unique ways of generating blog comments, -such as using Twitter threads or GitHub issues to automatically post -issues. However, these both rely on external third-party sites that I -don't currently use. - -* Stay Static with Server-Side Comments -The main issue for my personal use-case is that my blog is completely, -100% static. I use PHP on the back-end but website visitors only see -HTML and a single CSS file. No external javascript and no embedded -frames. - -So, how do we keep a site static and still allow users to interact with -blog posts? The key actually pretty simple - I'm already using PHP, so -why not rely on the classic HTML =
<form>= and a PHP script to save the comments somewhere? As it turns
out, this was a perfect solution for me.

The second issue for my personal use-case is that I am trying to keep
the contents of my website accessible over time, as described by
[cite/t:@brandur], in his post entitled
[[https://brandur.org/fragments/graceful-degradation-time][Blog with
Markdown + Git, and degrade gracefully through time]].

This means I cannot rely on a database for comments, since I do not rely
on a database for any other part of my websites.

I blog in plain Markdown files, commit all articles to Git, and ensure
that future readers will be able to see the source data long after I'm
gone, or the website has gone offline. However, I still haven't
committed any images served on my blog to Git, as I'm not entirely sold
on Git LFS yet - for now, images can be found at
[[https://img.cleberg.net][img.cleberg.net]].

Saving my comments back to the Git repository ensures that another
aspect of my site will degrade gracefully.

* Create a Comment Form
Okay, let's get started. The first step is to create an HTML form that
users can see and utilize to submit comments. This is fairly easy and
can be changed depending on your personal preferences.

Take a look at the code block below for the form I currently use. Note
that == is replaced automatically in PHP with the current
post's URL, so that my PHP script used later will know which blog post
the comment is related to.

The form contains the following structure:

1. =<form>= - This is the form and will determine which PHP script to
   send the comment to.
2. =