From 74992aaa27eb384128924c4a3b93052961a3eaab Mon Sep 17 00:00:00 2001
From: Christian Cleberg
Date: Sat, 27 Apr 2024 17:01:13 -0500
Subject: test conversion back to markdown

---
 291 files changed, 22292 insertions(+), 20941 deletions(-)

diff --git a/content/blog/2018-11-28-aes-encryption.md b/content/blog/2018-11-28-aes-encryption.md
new file mode 100644
index 0000000..8d39b1d
--- /dev/null
+++ b/content/blog/2018-11-28-aes-encryption.md
@@ -0,0 +1,125 @@
++++
+date = 2018-11-28
+title = "AES Encryption"
+description = ""
+draft = false
++++
+
+# Basic AES
+
+If you're not familiar with encryption techniques,
+[AES](https://en.wikipedia.org/wiki/Advanced_Encryption_Standard) is the
+**Advanced Encryption Standard**. This specification was established by
+the National Institute of Standards and Technology in 2001, selected
+from the Rijndael family of ciphers (128, 192, and 256 bits).
+Furthering its popularity and status, the US government chose AES as
+its default encryption method for top-secret data, replacing the
+previous standard, which had been in place since 1977.
+
+AES has proven to be an extremely safe encryption method, with 7-round
+and 8-round attacks making no material improvements since the release
+of this encryption standard almost two decades ago.
+
+> Though many papers have been published on the cryptanalysis of AES,
+> the fastest single-key attacks on round-reduced AES variants [20,
+> 33] so far are only slightly more powerful than those proposed 10
+> years ago [23, 24].
+>
+> - [Bogdanov, et
+>   al.](http://research.microsoft.com/en-us/projects/cryptanalysis/aesbc.pdf)
+
+# How Secure is AES?
+
+In theory, AES-256 is non-crackable due to the massive number of
+combinations that can be produced. However, AES-128 is no longer
+recommended as a viable implementation to protect important data.
+
+A semi-short [comic
+strip](http://www.moserware.com/2009/09/stick-figure-guide-to-advanced.html)
+from Moserware quickly explains AES for the public to understand.
+Basically, AES encrypts the data by obscuring the relationship between
+the data and the encrypted data. Additionally, this method spreads the
+message out. Lastly, the key produced by AES is the secret to
+decrypting it. Someone may know the method of AES, but without the key,
+they are powerless.
+
+To obscure and spread the data out, AES creates a
+substitution-permutation network.
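The substitution-permutation structure described above can be sketched as a deliberately tiny toy cipher. To be clear, this is a hypothetical illustration of the SPN pattern only, not AES: the 4-bit S-box, the bit-rotation P-box, and the round keys below are all made up, and the cipher offers no real security.

```python
# Toy substitution-permutation network (SPN) on 8-bit blocks.
# Each round: XOR a round key, substitute each nibble via an S-box,
# then permute (rotate) the bits. Decryption applies the inverses
# in reverse order with the round keys reversed.

S_BOX = [0x6, 0x4, 0xC, 0x5, 0x0, 0x7, 0x2, 0xE,
         0x1, 0xF, 0x3, 0xD, 0x8, 0xA, 0x9, 0xB]   # made-up 4-bit S-box
INV_S = [S_BOX.index(i) for i in range(16)]          # its inverse

def permute(b: int) -> int:
    # P-box: rotate the 8 bits left by 3
    return ((b << 3) | (b >> 5)) & 0xFF

def unpermute(b: int) -> int:
    # inverse P-box: rotate the 8 bits right by 3
    return ((b >> 3) | (b << 5)) & 0xFF

def encrypt(block: int, round_keys: list[int]) -> int:
    for k in round_keys:
        block ^= k                                   # add round key
        hi, lo = block >> 4, block & 0xF
        block = (S_BOX[hi] << 4) | S_BOX[lo]         # substitute nibbles
        block = permute(block)                       # permutation layer
    return block

def decrypt(block: int, round_keys: list[int]) -> int:
    for k in reversed(round_keys):
        block = unpermute(block)
        hi, lo = block >> 4, block & 0xF
        block = (INV_S[hi] << 4) | INV_S[lo]
        block ^= k
    return block

keys = [0x3A, 0xC5, 0x7E, 0x91]                      # arbitrary round keys
ct = encrypt(0xAB, keys)
assert decrypt(ct, keys) == 0xAB                     # round-trips
```

Real AES does the same kind of thing, but on 128-bit blocks, with a carefully designed S-box and MixColumns diffusion layer, and with round keys produced by a proper key schedule.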
Wikipedia has a wonderful [example of +an SP +network](https://upload.wikimedia.org/wikipedia/commons/thumb/c/cd/SubstitutionPermutationNetwork2.png/468px-SubstitutionPermutationNetwork2.png) +available. This network sends the data through a set of S boxes (using +the unique key) to substitute the bits with another block of bits. Then, +a P box will permute, or rearrange, the bits. This is done over and +over, with the key being derived from the last round. For AES, the key +size specifies the number of transformation rounds: 10, 12, and 14 +rounds for 128-bit, 192-bit, and 256-bit keys, respectively. + +# The Process + +1. **KeyExpansion**: Using [Rijndael\'s key + schedule](https://en.m.wikipedia.org/wiki/Advanced_Encryption_Standard), + the keys are dynamically generated. +2. **AddRoundKey**: Each byte of the data is combined with this key + using bitwise xor. +3. **SubBytes**: This is followed by the substitution of each byte of + data. +4. **ShiftRows**: Then, the final three rows are shifted a certain + number of steps, dictated by the cipher. +5. **MixColumns**: After the rows have been shifted, the columns are + mixed and combined. + +This process does not necessarily stop after one full round. Steps 2 +through 5 will repeat for the number of rounds specified by the key. +However, the final round excludes the MixColumns step. As you can see, +this is a fairly complex process. One must have a solid understanding of +general mathematical principles to fully understand how the sequence works +(and to even attempt to find a weakness). + +According to research done by Bogdanov et al., it would take billions of +years to brute force a 126-bit key with current hardware. Additionally, +this brute force attack would require storing 2^88^ bits of data! +However, there are a few different attacks that have been used to show +vulnerabilities with the use of this technology.
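Before looking at specific attacks, it is worth noting how AES is exercised in practice: through vetted tooling rather than hand-rolled code. As a hedged sketch (assuming the `openssl` CLI, version 1.1.1 or later for the `-pbkdf2` flag, is installed; filenames and the passphrase are placeholders), a symmetric AES-256 round trip looks like:

``` shell
# Encrypt and decrypt a file with AES-256 in CBC mode.
# -pbkdf2 derives the key from the passphrase; -salt randomizes that derivation.
printf 'attack at dawn\n' > msg.txt
openssl enc -aes-256-cbc -pbkdf2 -salt \
    -in msg.txt -out msg.enc -pass pass:hunter2
# Without the passphrase (the key), msg.enc is just noise.
openssl enc -d -aes-256-cbc -pbkdf2 \
    -in msg.enc -out roundtrip.txt -pass pass:hunter2
diff msg.txt roundtrip.txt   # identical files, so diff prints nothing
```

Knowing the method (AES-256-CBC here) is useless without the key: decryption with a wrong passphrase fails outright.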
Side-channel attacks +use inadvertent leaks of data from the hardware or software, which can +allow attackers to obtain the key or run programs on a user\'s hardware. + +Please note that this is not something you should run out and try to +implement in your `Hello, World!` app after only a few hours +of research. While AES (basically all encryption methods) is extremely +efficient in what it does, it takes a lot of time and patience to +understand. If you\'re looking for something which currently implements +AES, check out the [Legion of the Bouncy +Castle](https://www.bouncycastle.org/documentation.html) for Java +implementations of cryptographic algorithms. + +# Why Does Encryption Matter? + +There are limitless reasons to enable encryption at-rest or in-transit +for various aspects of your digital life. You can research specific +examples, such as [Australia passes new law to thwart strong +encryption](https://arstechnica.com/tech-policy/2018/12/australia-passes-new-law-to-thwart-strong-encryption/). +However, I will simply list a few basic reasons to always enable +encryption, where feasible: + +1. Privacy is a human right and is recognized as a national right in + some countries (e.g., [US Fourth + Amendment](https://www.law.cornell.edu/wex/fourth_amendment)). +2. \"Why not?\" Encryption rarely affects performance or speed, so + there\'s usually not a reason to avoid it in the first place. +3. Your digital identity and activity (texts, emails, phone calls, + online accounts, etc.) are extremely valuable and can result in + terrible consequences, such as identity theft, if leaked to other + parties. Encrypting this data prevents such leaks from ruining + lives. +4. Wiping or factory-resetting does not actually wipe all data from the + storage device. There are methods to read data from the physical + disks/boards inside devices. +5. 
Corporations, governments, and other nefarious groups/individuals + are actively looking for ways to collect personal information about + anyone they can. If someone\'s data is unencrypted, that person may + become a target due to the ease of data collection. + +​**Read More:** + +- [Federal Information Processing Standards Publication + 197](http://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.197.pdf) diff --git a/content/blog/2018-11-28-aes-encryption.org b/content/blog/2018-11-28-aes-encryption.org deleted file mode 100644 index 03dcbf9..0000000 --- a/content/blog/2018-11-28-aes-encryption.org +++ /dev/null @@ -1,103 +0,0 @@ -#+title: AES Encryption -#+description: Learn how the AES Encryption algorithm works. -#+date: <2018-11-28 Wed> -#+filetags: :security: - -* Basic AES -If you're not familiar with encryption techniques, [[https://en.wikipedia.org/wiki/Advanced_Encryption_Standard][AES]] is the *Advanced -Encryption Standard*. This specification was established by the National -Institute of Standards and Technology, sub-selected from the Rijndael family of -ciphers (128, 192, and 256 bits) in 2001. Furthering its popularity and status, -the US government chose AES as their default encryption method for top-secret -data, removing the previous standard which had been in place since 1977. - -AES has proven to be an extremely safe encryption method, with 7-round and -8-round attacks making no material improvements since the release of this -encryption standard almost two decades ago. - -#+begin_quote -Though many papers have been published on the cryptanalysis of AES, the fastest -single-key attacks on round-reduced AES variants [20, 33] so far are only -slightly more powerful than those proposed 10 years ago [23,24]. - -- [[http://research.microsoft.com/en-us/projects/cryptanalysis/aesbc.pdf][Bogdonav, et al.]] -#+end_quote - -* How Secure is AES? -In theory, AES-256 is non-crackable due to the massive number of combinations -that can be produced. 
However, AES-128 is no longer recommended as a viable -implementation to protect important data. - -A semi-short [[http://www.moserware.com/2009/09/stick-figure-guide-to-advanced.html][comic strip]] from Moserware quickly explains AES for the public to -understand. Basically AES encrypts the data by obscuring the relationship -between the data and the encrypted data. Additionally, this method spreads the -message out. Lastly, the key produced by AES is the secret to decrypting it. -Someone may know the method of AES, but without the key, they are powerless. - -To obscure and spread the data out, AES creates a substitution-permutation -network. Wikipedia has a wonderful [[https://upload.wikimedia.org/wikipedia/commons/thumb/c/cd/SubstitutionPermutationNetwork2.png/468px-SubstitutionPermutationNetwork2.png][example of an SP network]] available. This -network sends the data through a set of S boxes (using the unique key) to -substitute the bits with another block of bits. Then, a P box will permutate, or -rearrange, the bits. This is done over and over, with the key being derived from -the last round. For AES, the key size specifies the number of transformation -rounds: 10, 12, and 14 rounds for 128-bit, 192-bit, and 256-bit keys, -respectively. - -* The Process -1. *KeyExpansion=: Using [[https://en.m.wikipedia.org/wiki/Advanced_Encryption_Standard][Rijndael's key schedule]], the keys are dynamically - generated. -2. *AddRoundKey*: Each byte of the data is combined with this key using bitwise - xor. -3. *SubBytes*: This is followed by the substitution of each byte of data. -4. *ShiftRows*: Then, the final three rows are shifted a certain number of - steps, dictated by the cipher. -5. *MixColumns*: After the rows have been shifted, the columns are mixed and - combined. - -This process does not necessarily stop after one full round. Steps 2 through 5 -will repeat for the number of rounds specified by the key. However, the final -round excludes the MixColumns step. 
As you can see, this is a fairly complex -process. One must have a solid understanding of general mathematic principles to -fully understand how the sequence works (and to even attempt to find a -weakness). - -According to research done by Bogdanov et al., it would take billions of years -to brute force a 126-bit key with current hardware. Additionally, this brute -force attack would require storing 2^{88} bits of data! However, there are a few -different attacks that have been used to show vulnerabilities with the use of -this technology. Side-channel attacks use inadvertent leaks of data from the -hardware or software, which can allow attackers to obtain the key or run -programs on a user's hardware. - -Please note that this is not something you should run out and try to implement -in your =Hello, World!= app after only a few hours of research. While AES -(basically all encryption methods) is extremely efficient in what it does, it -takes a lot of time and patience to understand. If you're looking for something -which currently implements AES, check out the [[https://www.bouncycastle.org/documentation.html][Legion of the Bouncy Castle]] for -Java implementations of cryptographic algorithms. - -* Why Does Encryption Matter? -There are limitless reasons to enable encryption at-rest or in-transit for -various aspects of your digital life. You can research specific examples, such -as [[https://arstechnica.com/tech-policy/2018/12/australia-passes-new-law-to-thwart-strong-encryption/][Australia passes new law to thwart strong encryption]]. However, I will simply -list a few basic reasons to always enable encryption, where feasible: - -1. Privacy is a human right and is recognized as a national right in some - countries (e.g., [[https://www.law.cornell.edu/wex/fourth_amendment][US Fourth Amendment]]). -2. "Why not?" Encryption rarely affects performance or speed, so there's usually - not a reason to avoid it in the first place. -3. 
Your digital identity and activity (texts, emails, phone calls, online - accounts, etc.) are extremely valuable and can result in terrible - consequences, such as identity theft, if leaked to other parties. Encrypting - this data prevents such leaks from ruining lives. -4. Wiping or factory-resetting does not actually wipe all data from the storage - device. There are methods to read data from the physical disks/boards inside - devices. -5. Corporations, governments, and other nefarious groups/individuals are - actively looking for ways to collect personal information about anyone they - can. If someone's data is unencrypted, that person may become a target due to - the ease of data collection. - -​*Read More:* - -- [[http://nvlpubs.nist.gov/nistpubs/FIPS/NIST.FIPS.197.pdf][Federal Information Processing Standards Publication 197]] diff --git a/content/blog/2018-11-28-cpp-compiler.md b/content/blog/2018-11-28-cpp-compiler.md new file mode 100644 index 0000000..abad6a5 --- /dev/null +++ b/content/blog/2018-11-28-cpp-compiler.md @@ -0,0 +1,140 @@ ++++ +date = 2018-11-28 +title = "The C++ Compiler" +description = "" +draft = false ++++ + +# A Brief Introduction + +[C++](https://en.wikipedia.org/wiki/C%2B%2B) is a general-purpose +programming language with object-oriented, generic, and functional +features in addition to facilities for low-level memory manipulation. + +The source code, shown in the snippet below, must be compiled before it +can be executed. There are many steps and intricacies to the compilation +process, and this post was a personal exercise to learn and remember as +much information as I can. + +``` cpp +#include <iostream> + +int main() +{ + std::cout << "Hello, world!\n"; +} +``` + +## Compilation Process + +### An Overview + +Compiling C++ projects is a frustrating task most days. Seemingly +nonexistent errors that keep your program from successfully compiling can +be annoying (especially since you know you wrote it perfectly the first +time, right?).
+ +I\'m learning more and more about C++ these days and decided to write +this concept down so that I can cement it even further in my own head. +However, C++ is not the only compiled language. Check out [the Wikipedia +entry for compiled +languages](https://en.wikipedia.org/wiki/Compiled_language) for more +examples of compiled languages. + +I\'ll start with a wonderful, graphical way to conceptualize the C++ +compiler. View [The C++ Compilation +Process](https://web.archive.org/web/20190419035048/http://faculty.cs.niu.edu/~mcmahon/CS241/Notes/compile.html) +by Kurt MacMahon, an NIU professor, to see the graphic and an +explanation. The goal of the compilation process is to take the C++ code +and produce a shared library, dynamic library, or an executable file. + +## Compilation Phases + +Let\'s break down the compilation process. There are four major steps to +compiling C++ code. + +### Step 1 + +The first step is to expand the source code file to meet all +dependencies. The C++ preprocessor includes the code from all the header +files, such as `#include +<iostream>`. Now, what does that mean? The previous example +includes the `iostream` header. This tells the computer that +you want to use the `iostream` standard library, which +contains classes and functions written in the core language. This +specific header allows you to manipulate input/output streams. After all +this, you\'ll end up with a temporary file that contains the expanded +source code. + +In the example of the C++ code above, the `iostream` class +would be included in the expanded code. + +### Step 2 + +After the code is expanded, the compiler comes into play. The compiler +takes the C++ code and converts this code into the assembly language +understood by the platform. You can see this in action if you head over +to the [GodBolt Compiler Explorer](https://godbolt.org), which shows C++ +being converted into assembly dynamically.
+ +For example, the `Hello, world!` code snippet above compiles +into the following assembly code: + +``` asm +.LC0: + .string "Hello, world!\n" +main: + push rbp + mov rbp, rsp + mov esi, OFFSET FLAT:.LC0 + mov edi, OFFSET FLAT:_ZSt4cout + call std::basic_ostream<char, std::char_traits<char> >& std::operator<< <std::char_traits<char> >(std::basic_ostream<char, std::char_traits<char> >&, char const*) + mov eax, 0 + pop rbp + ret +__static_initialization_and_destruction_0(int, int): + push rbp + mov rbp, rsp + sub rsp, 16 + mov DWORD PTR [rbp-4], edi + mov DWORD PTR [rbp-8], esi + cmp DWORD PTR [rbp-4], 1 + jne .L5 + cmp DWORD PTR [rbp-8], 65535 + jne .L5 + mov edi, OFFSET FLAT:_ZStL8__ioinit + call std::ios_base::Init::Init() [complete object constructor] + mov edx, OFFSET FLAT:__dso_handle + mov esi, OFFSET FLAT:_ZStL8__ioinit + mov edi, OFFSET FLAT:_ZNSt8ios_base4InitD1Ev + call __cxa_atexit +.L5: + nop + leave + ret +_GLOBAL__sub_I_main: + push rbp + mov rbp, rsp + mov esi, 65535 + mov edi, 1 + call __static_initialization_and_destruction_0(int, int) + pop rbp + ret +``` + +### Step 3 + +Third, the assembly code generated by the compiler is assembled into the +object code for the platform. Essentially, this is when the compiler +takes the assembly code and assembles it into machine code in a binary +format. After researching this online, I figured out that a lot of +compilers will allow you to stop compilation at this step. This would be +useful for compiling each source code file separately. This saves time +later if a single file changes; only that file needs to be recompiled. + +### Step 4 + +Finally, the object code file generated by the assembler is linked +together with the object code files for any library functions used to +produce a shared library, dynamic library, or an executable file. It +replaces all references to undefined symbols with the correct addresses.
diff --git a/content/blog/2018-11-28-cpp-compiler.org b/content/blog/2018-11-28-cpp-compiler.org deleted file mode 100644 index 09aeb50..0000000 --- a/content/blog/2018-11-28-cpp-compiler.org +++ /dev/null @@ -1,121 +0,0 @@ -#+title: The C++ Compiler -#+date: 2018-11-28 -#+description: Learn basics about the C++ compilation process. -#+filetags: :dev: - -* A Brief Introduction -[[https://en.wikipedia.org/wiki/C%2B%2B][C++]] is a general-purpose programming language with object-oriented, generic, and -functional features in addition to facilities for low-level memory manipulation. - -The source code, shown in the snippet below, must be compiled before it can be -executed. There are many steps and intricacies to the compilation process, and -this post was a personal exercise to learn and remember as much information as I -can. - -#+begin_src cpp -#include - -int main() -{ - std::cout << "Hello, world!\n"; -} -#+end_src - -** Compilation Process -*** An Overview -Compiling C++ projects is a frustrating task most days. Seemingly nonexistent -errors keeping your program from successfully compiling can be annoying -(especially since you know you wrote it perfectly the first time, right?). - -I'm learning more and more about C++ these days and decided to write this -concept down so that I can cement it even further in my own head. However, C++ -is not the only compiled language. Check out [[https://en.wikipedia.org/wiki/Compiled_language][the Wikipedia entry for compiled -languages]] for more examples of compiled languages. - -I'll start with a wonderful, graphical way to conceptualize the C++ compiler. -View [[https://web.archive.org/web/20190419035048/http://faculty.cs.niu.edu/~mcmahon/CS241/Notes/compile.html][The C++ Compilation Process]] by Kurt MacMahon, an NIU professor, to see the -graphic and an explanation. The goal of the compilation process is to take the -C++ code and produce a shared library, dynamic library, or an executable file. 
- -** Compilation Phases -Let's break down the compilation process. There are four major steps to -compiling C++ code. - -*** Step 1 -The first step is to expand the source code file to meet all dependencies. The -C++ preprocessor includes the code from all the header files, such as =#include -=. Now, what does that mean? The previous example includes the -=iostream= header. This tells the computer that you want to use the =iostream= -standard library, which contains classes and functions written in the core -language. This specific header allows you to manipulate input/output streams. -After all this, you'll end up which a temporary file that contains the expanded -source code. - -In the example of the C++ code above, the =iostream= class would be included in -the expanded code. - -*** Step 2 -After the code is expanded, the compiler comes into play. The compiler takes the -C++ code and converts this code into the assembly language, understood by the -platform. You can see this in action if you head over to the [[https://godbolt.org][GodBolt Compiler -Explorer]], which shows C++ being converted into assembly dynamically. 
- -For example, the =Hello, world!= code snippet above compiles into the following -assembly code: - -#+begin_src asm -.LC0: - .string "Hello, world!\n" -main: - push rbp - mov rbp, rsp - mov esi, OFFSET FLAT:.LC0 - mov edi, OFFSET FLAT:_ZSt4cout - call std::basic_ostream >& std::operator<< >(std::basic_ostream >&, char const*) - mov eax, 0 - pop rbp - ret -__static_initialization_and_destruction_0(int, int): - push rbp - mov rbp, rsp - sub rsp, 16 - mov DWORD PTR [rbp-4], edi - mov DWORD PTR [rbp-8], esi - cmp DWORD PTR [rbp-4], 1 - jne .L5 - cmp DWORD PTR [rbp-8], 65535 - jne .L5 - mov edi, OFFSET FLAT:_ZStL8__ioinit - call std::ios_base::Init::Init() [complete object constructor] - mov edx, OFFSET FLAT:__dso_handle - mov esi, OFFSET FLAT:_ZStL8__ioinit - mov edi, OFFSET FLAT:_ZNSt8ios_base4InitD1Ev - call __cxa_atexit -.L5: - nop - leave - ret -_GLOBAL__sub_I_main: - push rbp - mov rbp, rsp - mov esi, 65535 - mov edi, 1 - call __static_initialization_and_destruction_0(int, int) - pop rbp - ret -#+end_src - -*** Step 3 -Third, the assembly code generated by the compiler is assembled into the object -code for the platform. Essentially, this is when the compiler takes the assembly -code and assembles it into machine code in a binary format. After researching -this online, I figured out that a lot of compilers will allow you to stop -compilation at this step. This would be useful for compiling each source code -file separately. This saves time later if a single file changes; only that file -needs to be recompiled. - -*** Step 4 -Finally, the object code file generated by the assembler is linked together with -the object code files for any library functions used to produce a shared -library, dynamic library, or an executable file. It replaces all references to -undefined symbols with the correct addresses. 
diff --git a/content/blog/2019-01-07-useful-css.md b/content/blog/2019-01-07-useful-css.md new file mode 100644 index 0000000..687be03 --- /dev/null +++ b/content/blog/2019-01-07-useful-css.md @@ -0,0 +1,200 @@ ++++ +date = 2019-01-07 +title = "Useful CSS Snippets" +description = "" +draft = false ++++ + +# Introduction to CSS + +[CSS](https://en.wikipedia.org/wiki/CSS), the language used to mark up +HTML code and make it \"pretty\", is one of the most effective ways to +increase the attractiveness of a website. It can also lead to increased +user engagement, retention, and satisfaction. In fact, there are whole +career fields dedicated to the improvement of user experiences, +known as UI design and UX design. + +Some web developers are used to the common CSS properties, such as +element sizing, fonts, colors, etc., but are not as well versed in +less-used properties and values such as `flexbox`, +`clip-path`, and `transform`. This article will +provide some insight into the less-used and unique CSS properties. + +# CSS Variables + +The first topic today is CSS variables. Variables are not often used by +smaller developers. CSS variables allow you to give your website a +well-defined structure, where you can easily reuse CSS properties +throughout the project. + +You can use variables to define things, such as color palettes. Then, +you can use these colors for backgrounds anywhere else in the HTML. This +could be extended, where extra variables could be defined for +`primary-text`, `quoted-text`, etc. Variables can +also be used to define spacing (e.g. `32px` or +`2rem`), which can then be applied to margins, padding, font +sizes, and more.
+ +For example, here are some variables defined at the root of the website, +which allows for any subsequent CSS rules to use those variables: + +``` css +:root { + --primary-color: black; + --secondary-color: white; +} + +body { + background-color: var(--primary-color); + color: var(--secondary-color); +} +``` + +# CSS Box Shadows + +Box shadows were once my mortal enemy. No matter how hard I tried, I +just couldn\'t get them to work how I wanted. Because of this, my +favorite discovery has been CSSMatic\'s [box shadow +generator](https://www.cssmatic.com/box-shadow). It provides an +excellent tool to generate box shadows using their simple sliders. +Surprisingly, this is the reason I learned how box shadows work! You can +use the sliders and watch how the CSS code changes in the image that is +displayed. Through this, you should understand that the basic structure +for box shadows is: + +``` css +box-shadow: inset horizontal vertical blur spread color; +``` + +Now, let\'s look at some basic examples! You can copy and paste the +following code into a site like CodePen or your own HTML files. Feel +free to play around with the code, experiment, and learn. + +**Box Shadow #1** + +``` html +
+<div class="box effect1">
+  <h3>Effect 1</h3>
+</div>
+``` + +``` css +.box h3 { + text-align: center; + position: relative; + top: 80px; +} +.box { + width: 70%; + height: 200px; + background: #fff; + margin: 40px auto; +} +.effect1 { + box-shadow: 0 10px 6px -6px #777; +} +``` + +**Box Shadow #2** + +``` html +
+<div class="box effect2">
+  <h3>Effect 2</h3>
+</div>
+``` + +``` css +.box h3 { + text-align: center; + position: relative; + top: 80px; +} +.box { + width: 70%; + height: 200px; + background: #fff; + margin: 40px auto; +} +.effect2 { + box-shadow: 10px 10px 5px -5px rgba(0, 0, 0, 0.75); +} +``` + +Try these box shadows out on your own and see how changing each shadow +value works. + +# CSS Flexbox + +Now, let\'s move on to the best part of this article: flexbox. The +flexbox is by far my favorite new toy. I originally stumbled across this +solution after looking for more efficient ways of centering content +horizontally AND vertically. I had used a few hack-ish methods before, +but flexbox throws those out the window. The best part of it all is that +flexbox is *dead simple*. + +Flexbox pertains to the parent div of any element. You want the parent +to be the flexbox in which items are arranged to use the flex methods. +It\'s easier to see this in action than explained, so let\'s see an +example. + +**Flexbox** + +``` html +
+<div class="flex-examples">
+  <div class="sm-box">
+    <h3>1</h3>
+  </div>
+  <div class="sm-box">
+    <h3>2</h3>
+  </div>
+</div>
+``` + +``` css +.flex-examples { + display: flex; + flex-wrap: wrap; + justify-content: flex-start; + align-items: center; + padding: 10px; + background-color: #f2f2f2; +} +.sm-box { + display: flex; + justify-content: center; + align-items: center; + width: 20%; + height: 100px; + background: #fff; + margin: 40px 10px; +} +``` + +You may notice that we no longer need to use the `top` +property for the `h3` elements in our code. This is because +we set the display box to be a flex container for the small boxes, AND +we made the small boxes flex containers for their elements (the h3 +tags). Flex boxes can be nested like this to center content that is +inside centered content. + +For the example above, we designated the `justify-content` +property to be `flex-start` so that the boxes stack from the +left side of the screen. This property can be changed to +`center` to make the boxes appear in the center of the +screen. + +For an interactive example, [check out this +CodePen](https://codepen.io/LandonSchropp/pen/KpzzGo) from +[LandonScropp](https://codepen.io/LandonSchropp/). Resize the window +with dice to see how they collapse and re-align. + +# Even More CSS + +For more inspiration, you can visit [CodePen](https://www.codepen.io), +[Dribbble](https://dribbble.com), or [UI +Movement](https://uimovement.com) to browse the collections of many +amazing web designers. diff --git a/content/blog/2019-01-07-useful-css.org b/content/blog/2019-01-07-useful-css.org deleted file mode 100644 index c3c4b19..0000000 --- a/content/blog/2019-01-07-useful-css.org +++ /dev/null @@ -1,178 +0,0 @@ -#+title: Useful CSS Snippets -#+date: 2019-01-07 -#+description: Explore some useful CSS snippets. -#+filetags: :dev: - -* Introduction to CSS -[[https://en.wikipedia.org/wiki/CSS][CSS]], the language used to markup HTML code and make it "pretty", is one of the -most effective ways to increase the attractiveness of a website. 
It can also -lead to increased user engagement, retention, and satisfaction. In fact, there -are whole career fields are dedicated to the improvement of user experiences, -known as UI design and UX design. - -Some web developers are used to the common CSS properties, such as element -sizing, fonts, colors, etc., but are not as well versed in less-used properties -and values such as =flexbox=, =clip-path=, and =transform=. This article will -provide some insight into the less-used and unique CSS properties. - -* CSS Variables -The first topic today is CSS variables. Variables are not often used by smaller -developers. CSS variables allow you to give your website a well-defined -structure, where you can easily reuse CSS properties throughout the project. - -You can use variables to define things, such as color palettes. Then, you can -use these colors for backgrounds anywhere else in the HTML. This could be -extended, where extra variables could be defined for =primary-text=, -=quoted-text=, etc. Variables can also be used to define spacing (e.g. =32px= or -=2rem=), which can then be applied to margins, padding, font sizes, and more. - -For example, here are some variables defined at the root of the website, which -allows for any subsequent CSS rules to use those variables: - -#+begin_src css -:root { - --primary-color: black; - --secondary-color: white; -} - -body { - background-color: var(--primary-color); - color: var(--secondary-color); -} -#+end_src - -* CSS Box Shadows -Box shadows were once my mortal enemy. No matter how hard I tried, I just -couldn't get them to work how I wanted. Because of this, my favorite discovery -has been CSSMatic's [[https://www.cssmatic.com/box-shadow][box shadow generator]]. It provides an excellent tool to -generate box shadows using their simple sliders. Surprisingly, this is the -reason I learned how box shadows work! You can use the sliders and watch how the -CSS code changes in the image that is displayed. 
Through this, you should -understand that the basic structure for box shadows is: - -#+begin_src css -box-shadow: inset horizontal vertical blur spread color; -#+end_src - -Now, let's look at some basic examples! You can copy and paste the following -code into a site like CodePen or your own HTML files. Feel free to play around -with the code, experiment, and learn. - -*Box Shadow #1* - -#+begin_src html -
-<div class="box effect1">
-  <h3>Effect 1</h3>
-</div>
-#+end_src - -#+begin_src css -.box h3 { - text-align: center; - position: relative; - top: 80px; -} -.box { - width: 70%; - height: 200px; - background: #fff; - margin: 40px auto; -} -.effect1 { - box-shadow: 0 10px 6px -6px #777; -} -#+end_src - -*Box Shadow #2* - -#+begin_src html -
-<div class="box effect2">
-  <h3>Effect 2</h3>
-</div>
-#+end_src - -#+begin_src css -.box h3 { - text-align: center; - position: relative; - top: 80px; -} -.box { - width: 70%; - height: 200px; - background: #fff; - margin: 40px auto; -} -.effect2 { - box-shadow: 10px 10px 5px -5px rgba(0, 0, 0, 0.75); -} -#+end_src - -Try these box shadows out on your own and see how changing each shadow value -works. - -* CSS Flexbox -Now, let's move on to the best part of this article: flexbox. The flexbox is by -far my favorite new toy. I originally stumbled across this solution after -looking for more efficient ways of centering content horizontally AND -vertically. I had used a few hack-ish methods before, but flexbox throws those -out the window. The best part of it all is that flexbox is /dead simple/. - -Flexbox pertains to the parent div of any element. You want the parent to be the -flexbox in which items are arranged to use the flex methods. It's easier to see -this in action that explained, so let's see an example. - -*Flexbox* - -#+begin_src html -
-<div class="flex-examples">
-  <div class="sm-box">
-    <h3>1</h3>
-  </div>
-  <div class="sm-box">
-    <h3>2</h3>
-  </div>
-</div>
-#+end_src - -#+begin_src css -.flex-examples { - display: flex; - flex-wrap: wrap; - justify-content: flex-start; - align-items: center; - padding: 10px; - background-color: #f2f2f2; -} -.sm-box { - display: flex; - justify-content: center; - align-items: center; - width: 20%; - height: 100px; - background: #fff; - margin: 40px 10px; -} -#+end_src - -You may notice that we no longer need to use the =top= property for the =h3= -elements in our code. This is because we set the display box to be a flex -container for the small boxes, AND we made the small boxes flex containers for -their elements (the h3 tags). Flex boxes can be nested like this to center -content that is inside centered content. - -For the example above, we designated the =justify-content= property to be -=flex-start= so that the boxes stack from the left side of the screen. This -property can be changed to =center= to make the boxes appear in the center of -the screen. - -For an interactive example, [[https://codepen.io/LandonSchropp/pen/KpzzGo][check out this CodePen]] from [[https://codepen.io/LandonSchropp/][LandonScropp]]. Resize the -window with dice to see how they collapse and re-align. - -* Even More CSS -For more inspiration, you can visit [[https://www.codepen.io][CodePen]], [[https://dribbble.com][Dribbble]], or [[https://uimovement.com][UI Movement]] to browse -the collections of many amazing web designers. diff --git a/content/blog/2019-09-09-audit-analytics.md b/content/blog/2019-09-09-audit-analytics.md new file mode 100644 index 0000000..80a1ffb --- /dev/null +++ b/content/blog/2019-09-09-audit-analytics.md @@ -0,0 +1,233 @@ ++++ +date = 2019-09-09 +title = "Audit Analytics" +description = "" +draft = false ++++ + +# What Are Data Analytics? + +A quick aside before I dive into this post: `data analytics` +is a vague term that has become popular in recent years. Think of a +`data analytic` as the output of any data analysis you +perform. 
For example, a pivot table or a pie chart could be a data
+analytic.
+
+[Data analysis](https://en.wikipedia.org/wiki/Data_analysis) is a
+process that utilizes statistics and other mathematical methods to
+discover useful information within datasets. This involves examining,
+cleaning, transforming, and modeling data so that you can use the data
+to support an opinion, create more useful viewpoints, and gain knowledge
+to implement into audit planning or risk assessments.
+
+One of the common mistakes that managers (and anyone new to the process)
+make is assuming that everything involved with this process is \"data
+analytics\". In fact, data analytics are only a small part of the
+process.
+
+See **Figure 1** for a more accurate representation of where data
+analysis sits within the full process. This means that data analysis
+does not include querying or extracting data, selecting samples, or
+performing audit tests. These steps can be necessary for an audit (and
+may even be performed by the same associates), but they are not data
+analytics.
+
+![The Intelligence
+Cycle](https://img.cleberg.net/blog/20190909-data-analysis-in-auditing/intelligence_cycle-min.png)
+
+# Current Use of Analytics in Auditing
+
+While data analysis has been an integral part of most businesses and
+departments for the better part of the last century, only recently have
+internal audit functions been adopting this practice. The internal audit
+function works exclusively to provide assurance and consulting services
+to the business areas within the firm (except for internal auditing
+firms who are hired by different companies to perform their roles).
+
+> Internal Auditing helps an organization accomplish its objectives by
+> bringing a systematic, disciplined approach to evaluate and improve
+> the effectiveness of risk management, control and governance
+> processes.
+>
+> - The IIA\'s Definition of Internal Audit
+
+Part of the blame for the slow adoption of data analysis can be
+attributed to the fact that internal auditing is strongly based on
+tradition and following the precedents set by previous auditors.
+However, there can be no progress without auditors who are willing to
+break the mold and test new audit techniques. In fact, as of 2018, [only
+63% of internal audit departments currently utilize data
+analytics](https://www.cpapracticeadvisor.com/accounting-audit/news/12404086/internal-audit-groups-are-lagging-in-data-analytics)
+in North America. This number should be as close as possible to 100%. I
+have never been part of an audit that would not have benefited from data
+analytics.
+
+So, how do internal audit functions remedy this situation? It\'s
+definitely not as easy as walking into work on Monday and telling your
+Chief Audit Executive that you\'re going to start implementing analytics
+in the next audit. You need a plan and a system to make the analysis
+process as effective as possible.
+
+# The DELTA Model
+
+One of the easiest ways to experiment with data analytics and gain an
+understanding of the processes is to implement them within your own
+department. But how do we do this if we\'ve never worked with analysis
+before? One of the most common places to start is to research some data
+analysis models currently available. For this post, we\'ll take a look
+at the DELTA model; see **Figure 2** for a quick overview.
+
+The DELTA model sets a few guidelines for areas wanting to implement
+data analytics so that the results can be as comprehensive as possible:
+
+- **Data**: Must be clean, accessible, and (usually) unique.
+- **Enterprise-Wide Focus**: Key data systems and analytical resources
+  must be available for use (by the Internal Audit Function).
+- **Leaders**: Must promote a data analytics approach and show the
+  value of analytical results.
+- **Targets**: Must be set for key areas and risks that the analytics + can be compared against (KPIs). +- **Analysts**: There must be auditors willing and able to perform + data analytics or else the system cannot be sustained. + +![The Delta +Model](https://img.cleberg.net/blog/20190909-data-analysis-in-auditing/delta-min.png) + +# Finding the Proper KPIs + +Once the Internal Audit Function has decided that they want to start +using data analytics internally and have ensured they\'re properly set +up to do so, they need to figure out what they will be testing against. +Key Performance Indicators (KPIs) are qualitative or quantitative +factors that can be evaluated and assessed to determine if the +department is performing well, usually compared to historical or +industry benchmarks. Once KPIs have been agreed upon and set, auditors +can use data analytics to assess and report on these KPIs. This allows +the person performing the analytics the freedom to express opinions on +the results, whereas the results are ambiguous if no KPIs exist. + +It should be noted that tracking KPIs in the department can help ensure +you have a rigorous Quality Assurance and Improvement Program (QAIP) in +accordance with some applicable standards, such as IPPF Standard 1300. + +> The chief audit executive must develop and maintain a quality +> assurance and improvement program that covers all aspects of the +> internal audit activity. +> +> - IPPF Standard 1300 + +Additionally, IPPF Standard 2060 discusses reporting: + +> The chief audit executive must report periodically to senior +> management and the board on the internal audit activity\'s purpose, +> authority, responsibility, and performance relative to its plan and on +> its conformance with the Code of Ethics and the Standards. Reporting +> must also include significant risk and control issues, including fraud +> risks, governance issues, and other matters that require the attention +> of senior management and/or the board. 
+> +> - IPPF Standard 2060 + +The hardest part of finding KPIs is to determine which KPIs are +appropriate for your department. Since every department is different and +has different goals, KPIs will vary drastically between companies. To +give you an idea of where to look, here are some ideas I came up with +when discussing the topic with a few colleagues. + +- Efficiency/Budgeting: + - Audit hours to staff utilization ratio (annual hours divided by + total annual work hours). + - Audit hours compared to the number of audits completed. + - Time between audit steps or to complete the whole audit. E.g., + time from fieldwork completion to audit report issuance. +- Reputation: + - The frequency that management has requested the services of the + IAF. + - Management, audit committee, or external audit satisfaction + survey results. + - Education, experience, certifications, tenure, and training of + the auditors on staff. +- Quality: + - Number and frequency of audit findings. Assign monetary or + numerical values, if possible. + - Percentage of recommendations issued and implemented. +- Planning: + - Percentage or number of key risks audited per year or per audit. + - Proportion of audit universe audited per year. + +# Data Analysis Tools + +Finally, to be able to analyze and report on the data analysis, auditors +need to evaluate the tools at their disposal. There are many options +available, but a few of the most common ones can easily get the job +done. For example, almost every auditor already has access to Microsoft +Excel. Excel is more powerful than most people give it credit for and +can accomplish a lot of basic statistics without much work. If you +don\'t know a lot about statistics but still want to see some of the +more basic results, Excel is a great option. + +To perform more in-depth statistical analysis or to explore large +datasets that Excel cannot handle, auditors will need to explore other +options. 
The big three that have had a lot of success in recent years +are Python, R, and ACL. ACL can be used as either a graphical tool +(point and click) or as a scripting tool, where the auditor must write +the scripts manually. Python and the R-language are solely scripting +languages. + +The general trend in the data analytics environment is that if the tool +allows you to do everything by clicking buttons or dragging elements, +you won\'t be able to fully utilize the analytics you need. The most +robust solutions are created by those who understand how to write the +scripts manually. It should be noted that as the utility of a tool +increases, it usually means that the learning curve for that tool will +also be higher. It will take auditors longer to learn how to utilize +Python, R, or ACL versus learning how to utilize Excel. + +# Visualization + +Once an auditor has finally found the right data, KPIs, and tools, they +must report these results so that actions can be taken. Performing +in-depth data analysis is only useful if the results are understood by +the audiences of the data. The best way to create this understanding is +to visualize the results of the data. Let\'s take a look at some of the +best options to visualize and report the results you\'ve found. + +Some of the most popular commercial tools for visualization are +Microsoft PowerBI and Tableau Desktop. However, other tools exist such +as JMP, Plotly, Qlikview, Alteryx, or D3. Some require commercial +licenses while others are simply free to use. For corporate data, you +may want to make sure that the tool does not communicate any of the data +outside the company (such as cloud storage). I won\'t be going into +depth on any of these tools since visualization is largely a subjective +and creative experience, but remember to constantly explore new options +as you repeat the process. + +Lastly, let\'s take a look at an example of data visualization. 
This
+example comes from a [blog post written by Kushal
+Chakrabarti](https://talent.works/2018/03/28/the-science-of-the-job-search-part-iii-61-of-entry-level-jobs-require-3-years-of-experience/)
+in 2018 about the percentage of entry-level US jobs that require
+experience. **Figure 3** shows us an easy-to-digest picture of the
+data. We can quickly tell that only about 12.5% of entry-level jobs
+don\'t require experience.
+
+This is the kind of result that easily describes the data for you.
+However, make sure to include an explanation of what the results mean.
+Don\'t let the reader assume what the data means, especially if it
+relates to a complex subject. *Tell a story* about the data and why the
+results matter. For example, **Figure 4** shows a part of the
+explanation the author gives to illustrate his point.
+
+![Entry-Level
+Visualization](https://img.cleberg.net/blog/20190909-data-analysis-in-auditing/vis_example-min.png)
+
+![Visualization
+Explanation](https://img.cleberg.net/blog/20190909-data-analysis-in-auditing/vis_example_explanation-min.png)
+
+# Wrap-Up
+
+While this is not an all-encompassing program that you can just adopt
+into your department, it should be enough to get anyone started on the
+process of understanding and implementing data analytics. Always
+remember to continue learning and exploring new options as your
+processes grow and evolve.
diff --git a/content/blog/2019-09-09-audit-analytics.org b/content/blog/2019-09-09-audit-analytics.org
deleted file mode 100644
index 5621b5f..0000000
--- a/content/blog/2019-09-09-audit-analytics.org
+++ /dev/null
@@ -1,211 +0,0 @@
-#+title: Data Analysis in Auditing
-#+date: 2019-09-09
-#+description: Learn how to use data analysis in the world of auditing.
-#+filetags: :audit:
-
-* What Are Data Analytics?
-A quick aside before I dive into this post: =data analytics= is a vague term
-that has become popular in recent years.
Think of a =data analytic= as the -output of any data analysis you perform. For example, a pivot table or a pie -chart could be a data analytic. - -[[https://en.wikipedia.org/wiki/Data_analysis][Data analysis]] is a process that utilizes statistics and other mathematical -methods to discover useful information within datasets. This involves examining, -cleaning, transforming, and modeling data so that you can use the data to -support an opinion, create more useful viewpoints, and gain knowledge to -implement into audit planning or risk assessments. - -One of the common mistakes that managers (and anyone new to the process) make is -assuming that everything involved with this process is "data analytics". In -fact, data analytics are only a small part of the process. - -See *Figure 1** for a more accurate representation of where data analysis sits -within the full process. This means that data analysis does not include querying -or extracting data, selecting samples, or performing audit tests. These steps -can be necessary for an audit (and may even be performed by the same -associates), but they are not data analytics. - -#+caption: The Intelligence Cycle -[[https://img.cleberg.net/blog/20190909-data-analysis-in-auditing/intelligence_cycle-min.png]] - -* Current Use of Analytics in Auditing -While data analysis has been an integral part of most businesses and departments -for the better part of the last century, only recently have internal audit -functions been adopting this practice. The internal audit function works -exclusively to provide assurance and consulting services to the business areas -within the firm (except for internal auditing firms who are hired by different -companies to perform their roles). - -#+begin_quote -Internal Auditing helps an organization accomplish its objectives by bringing a -systematic, disciplined approach to evaluate and improve the effectiveness of -risk management, control and governance processes. 
- -- The IIA's Definition of Internal Audit - -#+end_quote - -Part of the blame for the slow adoption of data analysis can be attributed to -the fact that internal auditing is strongly based on tradition and following the -precedents set by previous auditors. However, there can be no progress without -auditors who are willing to break the mold and test new audit techniques. In -fact, as of 2018, [[https://www.cpapracticeadvisor.com/accounting-audit/news/12404086/internal-audit-groups-are-lagging-in-data-analytics][only 63% of internal audit departments currently utilize data -analytics]] in North America. This number should be as close as possible to 100%. -I have never been part of an audit that would not have benefited from data -analytics. - -So, how do internal audit functions remedy this situation? It's definitely not -as easy as walking into work on Monday and telling your Chief Audit Executive -that you're going to start implementing analytics in the next audit. You need a -plan and a system to make the analysis process as effective as possible. - -* The DELTA Model -One of the easiest ways to experiment with data analytics and gain an -understanding of the processes is to implement them within your own department. -But how do we do this if we've never worked with analysis before? One of the -most common places to start is to research some data analysis models currently -available. For this post, we'll take a look at the DELTA model. You can take a -look at ****Figure 2***** for a quick overview of the model. - -The DELTA model sets a few guidelines for areas wanting to implement data -analytics so that the results can be as comprehensive as possible: - -- *Data*: Must be clean, accessible, and (usually) unique. -- *Enterprise-Wide Focus*: Key data systems and analytical resources must be - available for use (by the Internal Audit Function). -- *Leaders*: Must promote a data analytics approach and show the value of - analytical results. 
-- *Targets*: Must be set for key areas and risks that the analytics can be - compared against (KPIs). -- *Analysts*: There must be auditors willing and able to perform data analytics - or else the system cannot be sustained. - -#+caption: The Delta Model -[[https://img.cleberg.net/blog/20190909-data-analysis-in-auditing/delta-min.png]] - -* Finding the Proper KPIs -Once the Internal Audit Function has decided that they want to start using data -analytics internally and have ensured they're properly set up to do so, they -need to figure out what they will be testing against. Key Performance Indicators -(KPIs) are qualitative or quantitative factors that can be evaluated and -assessed to determine if the department is performing well, usually compared to -historical or industry benchmarks. Once KPIs have been agreed upon and set, -auditors can use data analytics to assess and report on these KPIs. This allows -the person performing the analytics the freedom to express opinions on the -results, whereas the results are ambiguous if no KPIs exist. - -It should be noted that tracking KPIs in the department can help ensure you have -a rigorous Quality Assurance and Improvement Program (QAIP) in accordance with -some applicable standards, such as IPPF Standard 1300. - -#+begin_quote -The chief audit executive must develop and maintain a quality assurance and -improvement program that covers all aspects of the internal audit activity. - -- IPPF Standard 1300 - -#+end_quote - -Additionally, IPPF Standard 2060 discusses reporting: - -#+begin_quote -The chief audit executive must report periodically to senior management and the -board on the internal audit activity's purpose, authority, responsibility, and -performance relative to its plan and on its conformance with the Code of Ethics -and the Standards. 
Reporting must also include significant risk and control -issues, including fraud risks, governance issues, and other matters that require -the attention of senior management and/or the board. - -- IPPF Standard 2060 - -#+end_quote - -The hardest part of finding KPIs is to determine which KPIs are appropriate for -your department. Since every department is different and has different goals, -KPIs will vary drastically between companies. To give you an idea of where to -look, here are some ideas I came up with when discussing the topic with a few -colleagues. - -- Efficiency/Budgeting: - - Audit hours to staff utilization ratio (annual hours divided by total annual - work hours). - - Audit hours compared to the number of audits completed. - - Time between audit steps or to complete the whole audit. E.g., time from - fieldwork completion to audit report issuance. -- Reputation: - - The frequency that management has requested the services of the IAF. - - Management, audit committee, or external audit satisfaction survey results. - - Education, experience, certifications, tenure, and training of the auditors - on staff. -- Quality: - - Number and frequency of audit findings. Assign monetary or numerical values, - if possible. - - Percentage of recommendations issued and implemented. -- Planning: - - Percentage or number of key risks audited per year or per audit. - - Proportion of audit universe audited per year. - -* Data Analysis Tools -Finally, to be able to analyze and report on the data analysis, auditors need to -evaluate the tools at their disposal. There are many options available, but a -few of the most common ones can easily get the job done. For example, almost -every auditor already has access to Microsoft Excel. Excel is more powerful than -most people give it credit for and can accomplish a lot of basic statistics -without much work. If you don't know a lot about statistics but still want to -see some of the more basic results, Excel is a great option. 
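As a quick aside, the kind of "basic statistics" Excel handles here is just as easy to sketch in code. Below is a minimal illustration using Python's standard-library `statistics` module; the audit-hours figures are invented for the example, and the comments map each call to its rough Excel equivalent.

```python
import statistics

# Hypothetical audit hours charged per engagement (illustrative figures only)
audit_hours = [120, 95, 150, 110, 135, 98, 170, 125]

mean_hours = statistics.mean(audit_hours)      # comparable to Excel's AVERAGE()
median_hours = statistics.median(audit_hours)  # comparable to Excel's MEDIAN()
stdev_hours = statistics.stdev(audit_hours)    # comparable to Excel's STDEV.S()

print(f"Mean hours:   {mean_hours:.1f}")
print(f"Median hours: {median_hours:.1f}")
print(f"Std dev:      {stdev_hours:.1f}")
```

This is the level of descriptive analysis that any auditor can produce without a statistics background; the scripting tools discussed next become worthwhile once the questions or datasets outgrow it.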
- -To perform more in-depth statistical analysis or to explore large datasets that -Excel cannot handle, auditors will need to explore other options. The big three -that have had a lot of success in recent years are Python, R, and ACL. ACL can -be used as either a graphical tool (point and click) or as a scripting tool, -where the auditor must write the scripts manually. Python and the R-language are -solely scripting languages. - -The general trend in the data analytics environment is that if the tool allows -you to do everything by clicking buttons or dragging elements, you won't be able -to fully utilize the analytics you need. The most robust solutions are created -by those who understand how to write the scripts manually. It should be noted -that as the utility of a tool increases, it usually means that the learning -curve for that tool will also be higher. It will take auditors longer to learn -how to utilize Python, R, or ACL versus learning how to utilize Excel. - -* Visualization -Once an auditor has finally found the right data, KPIs, and tools, they must -report these results so that actions can be taken. Performing in-depth data -analysis is only useful if the results are understood by the audiences of the -data. The best way to create this understanding is to visualize the results of -the data. Let's take a look at some of the best options to visualize and report -the results you've found. - -Some of the most popular commercial tools for visualization are Microsoft -PowerBI and Tableau Desktop. However, other tools exist such as JMP, Plotly, -Qlikview, Alteryx, or D3. Some require commercial licenses while others are -simply free to use. For corporate data, you may want to make sure that the tool -does not communicate any of the data outside the company (such as cloud -storage). 
I won't be going into depth on any of these tools since visualization -is largely a subjective and creative experience, but remember to constantly -explore new options as you repeat the process. - -Lastly, let's take a look at an example of data visualization. This example -comes from a [[https://talent.works/2018/03/28/the-science-of-the-job-search-part-iii-61-of-entry-level-jobs-require-3-years-of-experience/][blog post written by Kushal Chakrabarti]] in 2018 about the percent -of entry-level US jobs that require experience. *Figure 3** shows us an -easy-to-digest picture of the data. We can quickly tell that only about 12.5% of -entry-level jobs don't require experience. - -This is the kind of result that easily describes the data for you. However, make -sure to include an explanation of what the results mean. Don't let the reader -assume what the data means, especially if it relates to a complex subject. /Tell -a story/ about the data and why the results matter. For example, *Figure 4** -shows a part of the explanation the author gives to illustrate his point. - -#+caption: Entry-Level Visualization -[[https://img.cleberg.net/blog/20190909-data-analysis-in-auditing/vis_example-min.png]] - -#+caption: Visualization Explanation -[[https://img.cleberg.net/blog/20190909-data-analysis-in-auditing/vis_example_explanation-min.png]] - -* Wrap-Up -While this is not an all-encompassing program that you can just adopt into your -department, it should be enough to get anyone started on the process of -understanding and implementing data analytics. Always remember to continue -learning and exploring new options as your processes grow and evolve. 
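To make the KPI ideas above concrete, here is a minimal Python sketch of one of the efficiency KPIs listed earlier, the audit-hours-to-staff-utilization ratio (annual audit hours divided by total annual work hours). All staff names and hour figures are hypothetical.

```python
def utilization_ratio(audit_hours: float, total_work_hours: float) -> float:
    """Annual audit hours charged divided by total annual work hours."""
    if total_work_hours <= 0:
        raise ValueError("total_work_hours must be positive")
    return audit_hours / total_work_hours

# Hypothetical staff data: (audit hours charged, total available work hours)
staff = {
    "Auditor A": (1450, 1880),
    "Auditor B": (1620, 1880),
    "Auditor C": (1200, 1880),
}

for name, (charged, available) in staff.items():
    print(f"{name}: {utilization_ratio(charged, available):.1%} utilization")
```

A ratio like this only becomes meaningful once the department agrees on a target to compare it against, which is exactly the role the KPIs above play.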
diff --git a/content/blog/2019-12-03-the-ansoff-matrix.md b/content/blog/2019-12-03-the-ansoff-matrix.md new file mode 100644 index 0000000..da3fa11 --- /dev/null +++ b/content/blog/2019-12-03-the-ansoff-matrix.md @@ -0,0 +1,128 @@ ++++ +date = 2019-12-03 +title = "The Ansoff Matrix" +description = "" +draft = false ++++ + +# Overview + +As the world of business evolves, managers must approach business +planning and strategy with a contemporary mindset. According to Dess, +McNamara, Eisner, and Lee, managers must be willing to adapt to the +modern business environment by going beyond \"\'incremental +management\', whereby they view their job as making a series of small, +minor changes to improve the efficiency of the firm\'s +operations\"(2019). + +One reason that strategic management is crucial is because most +businesses that fail in the United States each year fail due to a lack +of strategic focus or direction(2019). The rate of failure for +businesses with poor strategies shows that strategic planning and +management are crucial to a business\'s strength and longevity, +injecting the critical factors of growth and direction into a company\'s +business plan. + +One of the most significant strategic planning and management frameworks +that companies can use is the [Ansoff +Matrix](https://en.wikipedia.org/wiki/Ansoff_matrix). While this +framework has unique purposes and use-cases, it can effectively help an +organization grow and compete. Specifically, the Ansoff matrix is one of +the most effective frameworks for companies who want to focus on +increasing sales revenue or profitability(2019). + +This framework uses a two-by-two figure to show the four strategic +options for companies to use in this framework: market penetration, +market development, product development, and diversification (see +**Figure 1**). 
The x-axis of the matrix focuses on the firm\'s markets +and also determines if the firm is looking to enter new markets or +innovate in its current markets. The y-axis of the matrix focuses on the +firm\'s products and determines if the firm wants to pursue strategies +around their existing products or explore new products. + +![The Ansoff Matrix by JaisonAbeySabu, Own work, CC BY-SA +3.0](https://img.cleberg.net/blog/20191203-the-ansoff-matrix/ansoff_matrix-min.png) + +# Strategic Options + +## Market Penetration + +The most straightforward strategy in the Ansoff matrix is to focus on +existing products in existing markets, also known as market +penetration(2019). Companies such as Coca-Cola have used market +penetration successfully by investing a lot of money to get further +value out of their current markets. Coca-Cola does this by introducing +new features such as Christmas-themed bottles, personal names on the +bottles, and other marketing schemes. + +## Market Development + +Market development extends existing products into new markets in an +attempt to increase the number of buyers. One interesting way that +Coca-Cola used this strategy comes from the stigma that Diet Coke is a +woman\'s drink(2019). Coca-Cola introduced Coca-Cola Zero, which +contained the same nutritional content as Diet Coke, but was packaged in +a dark black can to appear more \"manly\"(2019). + +## Product Development + +Product development uses existing markets to introduce new products so +that the firm can better meet customer needs(2019). The extreme end of +diversification is home to companies such as Johnson & Johnson, a +healthcare company that has developed a business portfolio of more than +60,000 different products(2019). Johnson & Johnson\'s dedication to +continuous diversification has led them to a balance sheet rating of +\"AAA\", industry recognition for diversification, and increases in +their investor dividends for 57 consecutive years(2019). 
+
+## Related Diversification
+
+Diversification, the final strategy of the Ansoff Matrix, is more
+difficult than the others since it involves exploring both new markets
+and new products. Related diversification is a diversification strategy
+that closely relates to the firm\'s core business. Coca-Cola\'s best
+example of related diversification is its acquisition of Glaceau and
+Vitamin Water, which expanded their drinking lines of business(2019).
+
+## Unrelated Diversification
+
+Unrelated diversification is a diversification strategy that does not
+really relate to the firm\'s core business but still diversifies their
+business portfolio. A good example of this would be a coffee company
+that has decided to enter the market for bicycle sales. The main purpose
+of this strategy is to create an extremely diverse company that will not
+go bankrupt if one market goes through difficult times. However, this
+requires a lot of independent skills and heavy investments since the
+company most likely cannot easily transfer knowledge between the markets
+they compete in.
+
+# Requirements for Success
+
+To use the Ansoff Matrix framework, managers need to formulate corporate
+goals and objectives. Without goals and direction, management frameworks
+do not present much practical utility. Further, the Ansoff Matrix
+requires the managers involved to make tactical decisions and create a
+path for the company to take toward their goals. Lastly, the Ansoff
+Matrix needs to consider both internal and external perspectives
+throughout the strategy formulation process.
+
+One interesting possibility is that companies will be using multiple
+strategic planning and management frameworks at the same time. While
+this may sound like it could crowd the management process, there are
+numerous reasons to do so. For example, the Ansoff Matrix and the
+Balanced Scorecard are relatively popular, and they cover entirely
+different parts of a company\'s strategy.
Using the results from the +Balanced Scorecard could inform a company of the potential product and +market demands, such as from customer or supplier survey results, to +help the company determine which Ansoff Matrix strategy to pursue. +However, a combined approach at this level would require mature +frameworks and focused managers who are able to strategize at a high +level. + +Lastly, it should be noted that the author of the Ansoff matrix, Igor +Ansoff, often used the term [paralysis by +analysis](https://en.wikipedia.org/wiki/Analysis_paralysis) to explain +the mistake of companies who overuse analysis and spend too much time +planning. Companies need to understand the utility of a strategic +management framework while ensuring that the company is poised to +execute as efficiently as they have planned. diff --git a/content/blog/2019-12-03-the-ansoff-matrix.org b/content/blog/2019-12-03-the-ansoff-matrix.org deleted file mode 100644 index 1bee6a8..0000000 --- a/content/blog/2019-12-03-the-ansoff-matrix.org +++ /dev/null @@ -1,105 +0,0 @@ -#+title: The Ansoff Matrix -#+date: 2019-12-03 -#+description: Learn about the Ansoff Matrix, a strategic management tool. -#+filetags: :business: - -* Overview -As the world of business evolves, managers must approach business planning and -strategy with a contemporary mindset. According to Dess, McNamara, Eisner, and -Lee, managers must be willing to adapt to the modern business environment by -going beyond "'incremental management', whereby they view their job as making a -series of small, minor changes to improve the efficiency of the firm's -operations"(2019). - -One reason that strategic management is crucial is because most businesses that -fail in the United States each year fail due to a lack of strategic focus or -direction(2019). 
The rate of failure for businesses with poor strategies shows -that strategic planning and management are crucial to a business's strength and -longevity, injecting the critical factors of growth and direction into a -company's business plan. - -One of the most significant strategic planning and management frameworks that -companies can use is the [[https://en.wikipedia.org/wiki/Ansoff_matrix][Ansoff Matrix]]. While this framework has unique purposes -and use-cases, it can effectively help an organization grow and compete. -Specifically, the Ansoff matrix is one of the most effective frameworks for -companies who want to focus on increasing sales revenue or profitability(2019). - -This framework uses a two-by-two figure to show the four strategic options for -companies to use in this framework: market penetration, market development, -product development, and diversification (see *Figure 1*). The x-axis of the -matrix focuses on the firm's markets and also determines if the firm is looking -to enter new markets or innovate in its current markets. The y-axis of the -matrix focuses on the firm's products and determines if the firm wants to pursue -strategies around their existing products or explore new products. - -#+caption: The Ansoff Matrix by JaisonAbeySabu, Own work, CC BY-SA 3.0 -[[https://img.cleberg.net/blog/20191203-the-ansoff-matrix/ansoff_matrix-min.png]] - -* Strategic Options -** Market Penetration -The most straightforward strategy in the Ansoff matrix is to focus on existing -products in existing markets, also known as market penetration(2019). Companies -such as Coca-Cola have used market penetration successfully by investing a lot -of money to get further value out of their current markets. Coca-Cola does this -by introducing new features such as Christmas-themed bottles, personal names on -the bottles, and other marketing schemes. 
- -** Market Development -Market development extends existing products into new markets in an attempt to -increase the number of buyers. One interesting way that Coca-Cola used this -strategy comes from the stigma that Diet Coke is a woman's drink(2019). -Coca-Cola introduced Coca-Cola Zero, which contained the same nutritional -content as Diet Coke, but was packaged in a dark black can to appear more -"manly"(2019). - -** Product Development -Product development uses existing markets to introduce new products so that the -firm can better meet customer needs(2019). The extreme end of diversification is -home to companies such as Johnson & Johnson, a healthcare company that has -developed a business portfolio of more than 60,000 different products(2019). -Johnson & Johnson's dedication to continuous diversification has led them to a -balance sheet rating of "AAA", industry recognition for diversification, and -increases in their investor dividends for 57 consecutive years(2019). - -** Related Diversification -Diversification, the final strategy of the Ansoff Matrix, is more difficult than -the others since it involves exploring both new markets and new products. -Related diversification is a diversification strategy that closely relates to -the firm's core business. Coca-Cola's best example of related diversification is -its acquisition of Glaceau and Vitamin Water, which expanded their drinking -lines of business(2019). - -** Unrelated Diversification -Unrelated diversification is a diversification strategy that does not really -relate to the firm's core business but still diversifies their business -portfolio. A good example of this would be a coffee company who has decided to -enter the market for bicycle sales. The main purpose of this strategy is to an -extremely diverse company that will not go bankrupt if one market goes through -difficult times. 
However, this requires a lot of independent skills and heavy -investments since the company most likely cannot easily transfer knowledge -between the markets they compete in. - -* Requirements for Success -To use the Ansoff Matrix framework, managers need to formulate corporate goals -and objectives. Without goals and direction, management frameworks do not -present much practical utility. Further, the Ansoff Matrix requires the managers -involved to make tactical decisions and create a path for the company to take -toward their goals. Lastly, both the Ansoff Matrix needs to consider both -internal and external perspectives throughout the strategy formulation process. - -One interesting probability is that companies will be using multiple strategic -planning and management frameworks at the same time. While this may sound like -it could crowd the management process, there are numerous reasons to do so. For -example, the Ansoff Matrix and the Balanced Scorecard are relatively popular, -and they cover entirely different parts of a company's strategy. Using the -results from the Balanced Scorecard could inform a company of the potential -product and market demands, such as from customer or supplier survey results, to -help the company determine which Ansoff Matrix strategy to pursue. However, a -combined approach at this level would require mature frameworks and focused -managers who are able to strategize at a high level. - -Lastly, it should be noted that the author of the Ansoff matrix, Igor Ansoff, -often used the term [[https://en.wikipedia.org/wiki/Analysis_paralysis][paralysis by analysis]] to explain the mistake of companies -who overuse analysis and spend too much time planning. Companies need to -understand the utility of a strategic management framework while ensuring that -the company is poised to execute as efficiently as they have planned. 
diff --git a/content/blog/2019-12-16-password-security.md b/content/blog/2019-12-16-password-security.md new file mode 100644 index 0000000..bb4ab1d --- /dev/null +++ b/content/blog/2019-12-16-password-security.md @@ -0,0 +1,127 @@ ++++ +date = 2019-12-16 +title = "Password Security" +description = "" +draft = false ++++ + +# Users + +## Why Does It Matter? + +Information security, including passwords and identities, has become one +of the most important digital highlights of the last decade. With +[billions of people affected by data breaches each +year](https://www.usatoday.com/story/money/2018/12/28/data-breaches-2018-billions-hit-growing-number-cyberattacks/2413411002/), +there\'s a greater need to introduce strong information security +systems. If you think you\'ve been part of a breach, or you want to +check and see, you can use [Have I Been +Pwned](https://haveibeenpwned.com/) to see if your email has been +involved in any public breaches. Remember that there\'s a possibility +that a company experienced a breach and did not report it to anyone. + +## How Do I Protect Myself? + +The first place to start with any personal security check-up is to +gather a list of all the different websites, apps, or programs that +require you to have login credentials. Optionally, once you know where +your information is being stored, you can sort the list from the +most-important items such as banks or government logins to less +important items such as your favorite meme site. You will want to ensure +that your critical logins are secure before getting to the others. + +Once you think you have a good idea of all your different authentication +methods, I recommend using a password manager such as +[Bitwarden](https://bitwarden.com/). Using a password manager allows you +to automatically save your logins, create randomized passwords, and +transfer passwords across devices. However, you\'ll need to memorize +your \"vault password\" that allows you to open the password manager. 
+It\'s important to make this something hard to guess since it would
+allow anyone who has it to access every password you\'ve stored in
+there.
+
+Personally, I recommend using a
+[passphrase](https://en.wikipedia.org/wiki/Passphrase) instead of a
+[password](https://en.wikipedia.org/wiki/Password) for your vault
+password. Instead of using a string of characters (whether random or
+simple), use a phrase and add in symbols and a number. For example, your
+vault password could be `Racing-Alphabet-Gourd-Parrot3`. Swap
+the symbols out for whichever symbol you want, move the number around,
+and fine-tune the passphrase until you are confident that you can
+remember it whenever necessary.
+
+Once you\'ve stored your passwords, make sure you continually check up
+on your account and make sure you aren\'t following bad password
+practices. Krebs on Security has a great [blog post on password
+recommendations](https://krebsonsecurity.com/password-dos-and-donts/).
+Any time that a data breach happens, make sure you check to see if you
+were included, and if you need to reset any account passwords.
+
+# Developers
+
+## What Are the Basic Requirements?
+
+When developing any password-protected application, there are a few
+basic rules that anyone should follow even if they do not follow any
+official guidelines such as NIST. The foremost practice is to require
+users to use passwords that are at least 8 characters and cannot easily
+be guessed. This sounds extremely simple, but it requires quite a few
+different strategies. First, the application should check the potential
+passwords against a dictionary of insecure passwords such as
+`password`, `1234abc`, or
+`application_name`.
+
+Next, the application should offer guidance on the strength of passwords
+being entered during enrollment. Further, NIST officially recommends
+**not** implementing any composition rules that make passwords hard to
+remember (e.g. 
passwords with letters, numbers, and special characters)
+and instead encouraging the use of long passphrases, which can include
+spaces. It should be noted that to be able to keep spaces within
+passwords, all Unicode characters should be supported, and passwords
+should not be truncated.
+
+## What Does NIST Recommend?
+
+The National Institute of Standards and Technology
+([NIST](https://www.nist.gov)) in the US Department of Commerce
+regularly publishes information around information security and digital
+identity guidelines. Recently, NIST published [Special Publication
+800-63b](https://pages.nist.gov/800-63-3/sp800-63b.html): Digital
+Identity Guidelines and Authentication and Lifecycle Management.
+
+> A Memorized Secret authenticator - commonly referred to as a password
+> or, if numeric, a PIN - is a secret value intended to be chosen and
+> memorized by the user. Memorized secrets need to be of sufficient
+> complexity and secrecy that it would be impractical for an attacker to
+> guess or otherwise discover the correct secret value. A memorized
+> secret is something you know.
+>
+> - NIST Special Publication 800-63B
+
+NIST offers a lot of guidance on passwords, but I\'m going to highlight
+just a few of the important factors:
+
+- Require passwords to be a minimum of 8 characters (6 characters if
+  randomly generated using an approved random bit generator).
+- Compare potential passwords against a list that contains values
+  known to be commonly used, expected, or compromised.
+- Offer guidance on password strength, such as a strength meter.
+- Implement a rate-limiting mechanism to limit the number of failed
+  authentication attempts for each user account.
+- Do not require composition rules for passwords and do not require
+  passwords to be changed periodically (unless compromised).
+- Allow pasting of user identification and passwords to facilitate the
+  use of password managers. 
+- Allow users to view the password as it is being entered. +- Use secure forms of communication and storage, including salting and + hashing passwords using a one-way key derivation function. + +NIST offers further guidance on other devices that require specific +security policies, querying for passwords, and more. All the information +discussed so far comes from [NIST +SP800-63b](https://pages.nist.gov/800-63-3/sp800-63b.html) but NIST +offers a lot of information on digital identities, enrollment, identity +proofing, authentication, lifecycle management, federation, and +assertions in the total [NIST SP800-63 Digital Identity +Guidelines](https://pages.nist.gov/800-63-3/). diff --git a/content/blog/2019-12-16-password-security.org b/content/blog/2019-12-16-password-security.org deleted file mode 100644 index 465afdc..0000000 --- a/content/blog/2019-12-16-password-security.org +++ /dev/null @@ -1,103 +0,0 @@ -#+title: Password Security -#+date: 2019-12-16 -#+description: Password security basics. -#+filetags: :security: - -* Users -** Why Does It Matter? -Information security, including passwords and identities, has become one of the -most important digital highlights of the last decade. With [[https://www.usatoday.com/story/money/2018/12/28/data-breaches-2018-billions-hit-growing-number-cyberattacks/2413411002/][billions of people -affected by data breaches each year]], there's a greater need to introduce strong -information security systems. If you think you've been part of a breach, or you -want to check and see, you can use [[https://haveibeenpwned.com/][Have I Been Pwned]] to see if your email has -been involved in any public breaches. Remember that there's a possibility that a -company experienced a breach and did not report it to anyone. - -** How Do I Protect Myself? -The first place to start with any personal security check-up is to gather a list -of all the different websites, apps, or programs that require you to have login -credentials. 
Optionally, once you know where your information is being stored, -you can sort the list from the most-important items such as banks or government -logins to less important items such as your favorite meme site. You will want to -ensure that your critical logins are secure before getting to the others. - -Once you think you have a good idea of all your different authentication -methods, I recommend using a password manager such as [[https://bitwarden.com/][Bitwarden]]. Using a -password manager allows you to automatically save your logins, create randomized -passwords, and transfer passwords across devices. However, you'll need to -memorize your "vault password" that allows you to open the password manager. -It's important to make this something hard to guess since it would allow anyone -who has it to access every password you've stored in there. - -Personally, I recommend using a [[https://en.wikipedia.org/wiki/Passphrase][passphrase]] instead of a [[https://en.wikipedia.org/wiki/Password][password]] for your vault -password. Instead of using a string of characters (whether random or simple), -use a phrase and add in symbols and a number. For example, your vault password -could be =Racing-Alphabet-Gourd-Parrot3=. Swap the symbols out for whichever -symbol you want, move the number around, and fine-tune the passphrase until you -are confident that you can remember it whenever necessary. - -Once you've stored your passwords, make sure you continually check up on your -account and make sure you aren't following bad password practices. Krebs on -Security has a great [[https://krebsonsecurity.com/password-dos-and-donts/][blog post on password recommendations]]. Any time that a data -breach happens, make sure you check to see if you were included, and if you need -to reset any account passwords. - -* Developers -** What Are the Basic Requirements? 
-When developing any password-protected application, there are a few basic rules -that anyone should follow even if they do not follow any official guidelines -such as NIST. The foremost practice is to require users to use passwords that -are at least 8 characters and cannot easily be guessed. This sounds extremely -simple, but it requires quite a few different strategies. First, the application -should check the potential passwords against a dictionary of insecure passwords -such =password=, =1234abc=, or =application_name=. - -Next, the application should offer guidance on the strength of passwords being -entered during enrollment. Further, NIST officially recommends *not** -implementing any composition rules that make passwords hard to remember (e.g. -passwords with letters, numbers, and special characters) and instead encouraging -the use of long pass phrases which can include spaces. It should be noted that -to be able to keep spaces within passwords, all unicode characters should be -supported, and passwords should not be truncated. - -** What Does NIST Recommend? -The National Institute of Standards and Technology ([[https://www.nist.gov][NIST]]) in the US Department -of Commerce regularly publishes information around information security and -digital identity guidelines. Recently, NIST published [[https://pages.nist.gov/800-63-3/sp800-63b.html][Special Publication -800-63b]]: Digital Identity Guidelines and Authentication and Lifecycle -Management. - -#+begin_quote -A Memorized Secret authenticator - commonly referred to as a password or, if -numeric, a PIN - is a secret value intended to be chosen and memorized by the -user. Memorized secrets need to be of sufficient complexity and secrecy that it -would be impractical for an attacker to guess or otherwise discover the correct -secret value. A memorized secret is something you know. 
- -- NIST Special Publication 800-63B -#+end_quote - -NIST offers a lot of guidance on passwords, but I'm going to highlight just a -few of the important factors: - -- Require passwords to be a minimum of 8 characters (6 characters if randomly - generated and be generated using an approved random bit generator). -- Compare potential passwords against a list that contains values known to be - commonly-used, expected, or compromised. -- Offer guidance on password strength, such as a strength meter. -- Implement a rate-limiting mechanism to limit the number of failed - authentication attempts for each user account. -- Do not require composition rules for passwords and do not require passwords to - be changed periodically (unless compromised). -- Allow pasting of user identification and passwords to facilitate the use of - password managers. -- Allow users to view the password as it is being entered. -- Use secure forms of communication and storage, including salting and hashing - passwords using a one-way key derivation function. - -NIST offers further guidance on other devices that require specific security -policies, querying for passwords, and more. All the information discussed so far -comes from [[https://pages.nist.gov/800-63-3/sp800-63b.html][NIST SP800-63b]] but NIST offers a lot of information on digital -identities, enrollment, identity proofing, authentication, lifecycle management, -federation, and assertions in the total [[https://pages.nist.gov/800-63-3/][NIST SP800-63 Digital Identity -Guidelines]]. 
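The basic requirements outlined above (a minimum length, a blocklist of known-bad values, and no composition rules) can be sketched in a few lines of shell. This is an illustrative check only, not an official NIST implementation, and the three blocklisted values are tiny stand-ins for a real compromised-password list:

```sh
# Sketch of the basic password checks described above -- illustrative only.
password_acceptable() {
    candidate="$1"
    # Reject anything shorter than 8 characters.
    [ "${#candidate}" -lt 8 ] && return 1
    # Reject known-bad values (case-insensitive); a real application
    # would consult a large breach corpus instead of this case statement.
    lowered="$(printf '%s' "$candidate" | tr '[:upper:]' '[:lower:]')"
    case "$lowered" in
        password|1234abc|application_name) return 1 ;;
    esac
    # No composition rules: spaces and all characters are allowed.
    return 0
}

password_acceptable "password" || echo "rejected: blocklisted"
password_acceptable "short1!" || echo "rejected: too short"
password_acceptable "Racing-Alphabet-Gourd-Parrot3" && echo "accepted"
```

A real application would compare candidates against a large list of breached passwords (such as the Have I Been Pwned corpus) and add rate limiting and a strength meter, per the NIST factors above.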
diff --git a/content/blog/2020-01-25-linux-software.md b/content/blog/2020-01-25-linux-software.md new file mode 100644 index 0000000..a240731 --- /dev/null +++ b/content/blog/2020-01-25-linux-software.md @@ -0,0 +1,282 @@ ++++ +date = 2020-01-25 +title = "My Favorite Linux Software" +description = "" +draft = false ++++ + +# GUI Applications + +## Etcher + +![Etcher +Screenshot](https://img.cleberg.net/blog/20200125-the-best-linux-software/etcher.png) + +[Etcher](https://www.balena.io/etcher/) is a quick and easy way to burn +ISO images to CDs and USB devices. There are two different ways you can +install this program. First, you can navigate to the [official +website](https://www.balena.io/etcher/) and download the AppImage file, +which can run without installation. + +However, AppImage files are not executable by default, so you\'ll either +need to right-click to open the properties of the file and click the +\"Allow executing file as program\" box in the Permissions tab or use +the following command: + +```sh +chmod u+x FILE_NAME +``` + +If you don\'t like AppImage files or just prefer repositories, you can +use the following commands to add the author\'s repository and install +it through the command-line only. + +First, you\'ll have to echo the repo and write it to a list file: + +```sh +echo "deb https://deb.etcher.io stable etcher" | sudo tee /etc/apt/sources.list.d/balena-etcher.list +``` + +Next, add the application keys to Ubuntu\'s keyring: + +```sh +sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 379CE192D401AB61 +``` + +Finally, update the repositories and install the app. + +```sh +sudo apt update && sudo apt install balena-etcher-electron +``` + +Using Arch, Manjaro, or another distro using the AUR? 
Use this command +instead: + +```sh +sudo pacman -S etcher +``` + +## Atom + +![Atom +Screenshot](https://img.cleberg.net/blog/20200125-the-best-linux-software/atom.png) + +[Atom](https://atom.io) is the self-proclaimed \"hackable text editor +for the 21st century\". This text editor is made by GitHub, [now owned +by +Microsoft](https://news.microsoft.com/2018/06/04/microsoft-to-acquire-github-for-7-5-billion/), +and has some of the best add-ons available to customize the layout and +abilities of the app. + +First, add the Atom repository to your sources. + +```sh +sudo add-apt-repository ppa:webupd8team/atom +``` + +Next, update your package listings and install atom. + +```sh +sudo apt update && sudo apt install atom +``` + +If you have issues updating your packages with the Atom repository, +you\'ll need to use the snap package described below instead of the +repository. To remove the repository we just added, use this command: + +```sh +sudo add-apt-repository -r ppa:webupd8team/atom +``` + +You can also install Atom as a snap package, but it must be installed +with the `--classic` flag. A [full explanation is +available](https://language-bash.com/blog/how-to-snap-introducing-classic-confinement) +if you\'d like to read more about why you need the classic flag. + +```sh +snap install atom --classic +``` + +Using Arch, Manjaro, or another distro using the AUR? Use this command +instead: + +```sh +sudo pacman -S atom +``` + +## Visual Studio Code + +![Visual Studio Code +Code](https://img.cleberg.net/blog/20200125-the-best-linux-software/vscode.png) + +[Visual Studio Code](https://code.visualstudio.com) is yet another +fantastic choice for programming on Linux, especially if you need those +extra add-ons to spice up your late-night coding sessions. 
The theme
+used in the screenshot is
+[Mars](https://marketplace.visualstudio.com/items?itemName=EliverLara.mars)
+by theme creator [Eliver Lara](https://github.com/EliverLara), who makes
+a ton of great themes for VS Code, Atom, and various Linux desktop
+environments.
+
+To install VS Code, you\'ll need to download the `.deb` file
+from the official website. Once you\'ve downloaded the file, either
+double-click it to install through the Software Center or run the
+following command:
+
+```sh
+sudo dpkg -i FILE_NAME.deb
+```
+
+You can also install VS Code as a snap package, but it must be installed
+with the `--classic` flag. A [full explanation is
+available](https://language-bash.com/blog/how-to-snap-introducing-classic-confinement)
+if you\'d like to read more about why you need the classic flag.
+
+```sh
+snap install code --classic
+```
+
+Using Arch, Manjaro, or another distro using the AUR? Use these commands
+instead:
+
+```sh
+sudo pacman -S yay binutils make gcc pkg-config fakeroot
+yay -S visual-studio-code-bin
+```
+
+## GNOME Tweaks
+
+![Gnome Tweaks
+Screenshot](https://img.cleberg.net/blog/20200125-the-best-linux-software/gnome-tweaks.png)
+
+[Gnome Tweaks](https://gitlab.gnome.org/GNOME/gnome-tweaks) is the
+ultimate tool to use if you want to customize your GNOME desktop
+environment. This is how you can switch application themes (GTK), shell
+themes, icons, fonts, and more. To install GNOME Tweaks on Ubuntu, you
+just need to install the official package.
+
+```sh
+sudo apt install gnome-tweaks
+```
+
+If you\'ve installed Manjaro or Arch with Gnome, you should have the
+tweak tool pre-installed. 
If you\'re on Fedora, this tool is available +as an official package: + +```sh +sudo dnf install gnome-tweaks +``` + +## Steam + +![Steam +Screenshot](https://img.cleberg.net/blog/20200125-the-best-linux-software/steam.png) + +[Steam](https://steampowered.com) is one of the most popular gaming +libraries for computers and is one of the main reasons that many people +have been able to switch to Linux in recent years, thanks to Steam +Proton, which makes it easier to play games not officially created for +Linux platforms. + +To install Steam on Ubuntu, you just need to install the official +package. + +```sh +sudo apt install steam-installer +``` + +For Arch-based systems, you\'ll simply need to install the +`steam` package. However, this requires that you enable the +`multilib` source. To do so, use the following command: + +```sh +sudo nano /etc/pacman.conf +``` + +Now, scroll down and uncomment the `multilib` section. + +``` config +# Before: +#[multilib] +#Include = /etc/pacman.d/mirrorlist + +# After: +[multilib] +Include = /etc/pacman.d/mirrorlist +``` + +Finally, install the program: + +```sh +sudo pacman -S steam +``` + +[Problem Launching Steam Games? Click +Here.](./2020-01-26-steam-on-ntfs-drives.html) + +# Command-Line Packages + +## neofetch + +![Neofetch +Screenshot](https://img.cleberg.net/blog/20200125-the-best-linux-software/neofetch.png) + +[Neofetch](https://github.com/dylanaraps/neofetch) is a customizable +tool used in the command-line to show system information. This is +exceptionally useful if you want to see your system\'s information +quickly without the clutter of some resource-heavy GUI apps. 
+
+This is an official package if you\'re running Ubuntu 17.04 or later, so
+simply use the following command:
+
+```sh
+sudo apt install neofetch
+```
+
+If you\'re running Ubuntu 16.10 or earlier, you\'ll have to use a series
+of commands:
+
+```sh
+sudo add-apt-repository ppa:dawidd0811/neofetch; sudo apt update; sudo apt install neofetch
+```
+
+Using Arch, Manjaro, or another distro using the AUR? Use this command
+instead:
+
+```sh
+sudo pacman -S neofetch
+```
+
+## yt-dlp
+
+![yt-dlp
+Screenshot](https://img.cleberg.net/blog/20200125-the-best-linux-software/yt-dlp.png)
+
+[yt-dlp](https://github.com/yt-dlp/yt-dlp) is an extremely handy
+command-line tool that allows you to download video or audio files from
+various websites, such as YouTube. There are a ton of different options
+when running this package, so be sure to run `yt-dlp --help`
+first to look through everything you can do (or give up and search for
+the best config online).
+
+While this shouldn\'t be a problem for most users, yt-dlp requires a
+modern version of Python 3 to work correctly, so install Python if you
+don\'t have it already. You can check to see if you have Python
+installed by running:
+
+```sh
+python3 -V
+```
+
+To get the yt-dlp binary, simply curl the URL and output the
+results.
+
+```sh
+sudo curl -L https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp -o /usr/local/bin/yt-dlp
+```
+
+Finally, make the file executable so that it can be run from the
+command-line.
+
+```sh
+sudo chmod a+rx /usr/local/bin/yt-dlp
+```
diff --git a/content/blog/2020-01-25-linux-software.org b/content/blog/2020-01-25-linux-software.org
deleted file mode 100644
index 54d7b85..0000000
--- a/content/blog/2020-01-25-linux-software.org
+++ /dev/null
@@ -1,249 +0,0 @@
-#+title: Linux Software
-#+date: 2020-01-25
-#+description: A look at some useful Linux applications. 
-#+filetags: :linux: - -* GUI Applications -** Etcher -#+caption: Etcher Screenshot -[[https://img.cleberg.net/blog/20200125-the-best-linux-software/etcher.png]] - -[[https://www.balena.io/etcher/][Etcher]] is a quick and easy way to burn ISO images to CDs and USB devices. There -are two different ways you can install this program. First, you can navigate to -the [[https://www.balena.io/etcher/][official website]] and download the AppImage file, which can run without -installation. - -However, AppImage files are not executable by default, so you'll either need to -right-click to open the properties of the file and click the "Allow executing -file as program" box in the Permissions tab or use the following command: - -#+begin_src sh -chmod u+x FILE_NAME -#+end_src - -If you don't like AppImage files or just prefer repositories, you can use the -following commands to add the author's repository and install it through the -command-line only. - -First, you'll have to echo the repo and write it to a list file: - -#+begin_src sh -echo "deb https://deb.etcher.io stable etcher" | sudo tee /etc/apt/sources.list.d/balena-etcher.list -#+end_src - -Next, add the application keys to Ubuntu's keyring: - -#+begin_src sh -sudo apt-key adv --keyserver keyserver.ubuntu.com --recv-keys 379CE192D401AB61 -#+end_src - -Finally, update the repositories and install the app. - -#+begin_src sh -sudo apt update && sudo apt install balena-etcher-electron -#+end_src - -Using Arch, Manjaro, or another distro using the AUR? Use this command instead: - -#+begin_src sh -sudo pacman -S etcher -#+end_src - -** Atom -#+caption: Atom Screenshot -[[https://img.cleberg.net/blog/20200125-the-best-linux-software/atom.png]] - -[[https://atom.io][Atom]] is the self-proclaimed "hackable text editor for the 21st century". 
This -text editor is made by GitHub, [[https://news.microsoft.com/2018/06/04/microsoft-to-acquire-github-for-7-5-billion/][now owned by Microsoft]], and has some of the best -add-ons available to customize the layout and abilities of the app. - -First, add the Atom repository to your sources. - -#+begin_src sh -sudo add-apt-repository ppa:webupd8team/atom -#+end_src - -Next, update your package listings and install atom. - -#+begin_src sh -sudo apt update && sudo apt install atom -#+end_src - -If you have issues updating your packages with the Atom repository, you'll need -to use the snap package described below instead of the repository. To remove the -repository we just added, use this command: - -#+begin_src sh -sudo add-apt-repository -r ppa:webupd8team/atom -#+end_src - -You can also install Atom as a snap package, but it must be installed with the -=--classic= flag. A [[https://language-bash.com/blog/how-to-snap-introducing-classic-confinement][full explanation is available]] if you'd like to read more -about why you need the classic flag. - -#+begin_src sh -snap install atom --classic -#+end_src - -Using Arch, Manjaro, or another distro using the AUR? Use this command instead: - -#+begin_src sh -sudo pacman -S atom -#+end_src - -** Visual Studio Code -#+caption: Visual Studio Code Code -[[https://img.cleberg.net/blog/20200125-the-best-linux-software/vscode.png]] - -[[https://code.visualstudio.com][Visual Studio Code]] is yet another fantastic choice for programming on Linux, -especially if you need those extra add-ons to spice up your late-night coding -sessions. The theme used in the screenshot is [[https://marketplace.visualstudio.com/items?itemName=EliverLara.mars][Mars]] by theme creator [[https://github.com/EliverLara][Eliver Lara]], -who makes a ton of great themes for VS Code, Atom, and various Linux desktop -environments. - -To install VS Code, you'll need to download the =.deb= file from the official -website. 
Once you've downloaded the file, either double-click it to install -through the Software Center or run the following command: - -#+begin_src sh -sudo dpkg -i FILE_NAME.deb -#+end_src - -You can also install VS Code as a snap package, but it must be installed with -the =--classic= flag. A [[https://language-bash.com/blog/how-to-snap-introducing-classic-confinement][full explanation is available]] if you'd like to read more -about why you need the classic flag. - -#+begin_src sh -snap install code --classic -#+end_src - -Using Arch, Manjaro, or another distro using the AUR? Use these commands -instead: - -#+begin_src sh -sudo pacman -S yay binutils make gcc pkg-config fakeroot yay -S visual-studio-code-bin -#+end_src - -** GNOME Tweaks -#+caption: Gnome Tweaks Screenshot -[[https://img.cleberg.net/blog/20200125-the-best-linux-software/gnome-tweaks.png]] - -[[https://gitlab.gnome.org/GNOME/gnome-tweaks][Gnome Tweaks]] is the ultimate tool to use if you want to customize your GNOME -desktop environment. This is how you can switch application themes (GTK), shell -themes, icons, fonts, and more. To install GNOME Tweaks on Ubuntu, you just need -to install the official package. - -#+begin_src sh -sudo apt install gnome-tweaks -#+end_src - -If you've installed Manjaro or Arch with Gnome, you should have the tweak tool -pre-installed. If you're on Fedora, this tool is available as an official -package: - -#+begin_src sh -sudo dnf install gnome-tweaks -#+end_src - -** Steam -#+caption: Steam Screenshot -[[https://img.cleberg.net/blog/20200125-the-best-linux-software/steam.png]] - -[[https://steampowered.com][Steam]] is one of the most popular gaming libraries for computers and is one of -the main reasons that many people have been able to switch to Linux in recent -years, thanks to Steam Proton, which makes it easier to play games not -officially created for Linux platforms. - -To install Steam on Ubuntu, you just need to install the official package. 
- -#+begin_src sh -sudo apt install steam-installer -#+end_src - -For Arch-based systems, you'll simply need to install the =steam= package. -However, this requires that you enable the =multilib= source. To do so, use the -following command: - -#+begin_src sh -sudo nano /etc/pacman.conf -#+end_src - -Now, scroll down and uncomment the =multilib= section. - -#+begin_src config -# Before: -#[multilib] -#Include = /etc/pacman.d/mirrorlist - -# After: -[multilib] -Include = /etc/pacman.d/mirrorlist -#+end_src - -Finally, install the program: - -#+begin_src sh -sudo pacman -S steam -#+end_src - -[[./2020-01-26-steam-on-ntfs-drives.html][Problem Launching Steam Games? -Click Here.]] - -* Command-Line Packages -** neofetch -#+caption: Neofetch Screenshot -[[https://img.cleberg.net/blog/20200125-the-best-linux-software/neofetch.png]] - -[[https://github.com/dylanaraps/neofetch][Neofetch]] is a customizable tool used in the command-line to show system -information. This is exceptionally useful if you want to see your system's -information quickly without the clutter of some resource-heavy GUI apps. - -This is an official package if you're running Ubuntu 17.04 or later, so simply -use the following command: - -#+begin_src sh -sudo apt install neofetch -#+end_src - -If you're running Ubuntu 16.10 or earlier, you'll have to use a series of -commands: - -#+begin_src sh -sudo add-apt-repository ppa:dawidd0811/neofetch; sudo apt update; sudo apt install neofetch -#+end_src - -Using Arch, Manjaro, or another distro using the AUR? Use this command instead: - -#+begin_src sh -sudo pacman -S neofetch -#+end_src - -** yt-dlp -#+caption: yt-dlp Screenshot -[[https://img.cleberg.net/blog/20200125-the-best-linux-software/yt-dlp.png]] - -[[https://github.com/yt-dlp/yt-dlp][yt-dlp]] is an extremely handy command-line tool that allows you to download video -or audio files from various websites, such as YouTube. 
There are a ton of -different options when running this package, so be sure to run =yt-dlp --help= -first to look through everything you can do (or give up and search for the best -config online). - -While this shouldn't be a problem for most users, yt-dlp requires a recent -version of Python 3 to work correctly, so install Python 3 if you don't have it -already. You can check to see if you have Python installed by running: - -#+begin_src sh -python3 -V -#+end_src - -To get the yt-dlp binary, use curl to download the latest release and write it -to a directory on your path. - -#+begin_src sh -sudo curl -L https://github.com/yt-dlp/yt-dlp/releases/latest/download/yt-dlp -o /usr/local/bin/yt-dlp -#+end_src - -Finally, make the file executable so that it can be run from the command-line. - -#+begin_src sh -sudo chmod a+rx /usr/local/bin/yt-dlp -#+end_src diff --git a/content/blog/2020-01-26-steam-on-ntfs.md b/content/blog/2020-01-26-steam-on-ntfs.md new file mode 100644 index 0000000..dbc6060 --- /dev/null +++ b/content/blog/2020-01-26-steam-on-ntfs.md @@ -0,0 +1,97 @@ ++++ +date = 2020-01-26 +title = "Linux Gaming Tweak: Steam on NTFS Drives" +description = "" +draft = false ++++ + +# Auto-Mount Steam Drives + +![Steam +Screenshot](https://img.cleberg.net/blog/20200125-the-best-linux-software/steam.png) + +If you want to see how to install Steam on Linux, see my other post: +[Linux Software](../linux-software/). + +Are you having trouble launching games, even though they\'ve installed +correctly? This may happen if you\'re storing your games on an +NTFS-formatted drive. This shouldn\'t be an issue if you\'re storing +your games on the same drive that Steam is on, but some gamers prefer to +put Steam on their main drive and game files on another SSD or HDD. + +To fix this problem, you\'ll need to try a few things. First, you\'ll +need to install the `ntfs-3g` package, which is meant for +better interoperability with Linux.
+ +```sh +sudo apt install ntfs-3g +``` + +Next, you should set up the `/etc/fstab` file to +automatically mount your drives on boot. To automatically mount your +drives when the computer boots up, you\'ll have to create the folders +you want to mount your drive to first. I store mine in the +`/mnt` folder using names that I\'ll recognize, but you can +create your folders wherever you want. + +```sh +mkdir /path/to/folder +``` + +For example: + +```sh +mkdir /mnt/steam_library +``` + +To automatically mount drives upon system boot, you will need to collect +a few items. The UUID is the identification number connected to +whichever drive you\'re using to store Steam games. + +Drives are usually labeled similar to `/dev/nvme0n1p1` or +`/dev/sda1`, so you\'ll need to find the line in the output +of the command below that correlates to your drive and copy the UUID +over to the `/etc/fstab` file. + +```sh +sudo blkid | grep UUID= +``` + +Next, you\'ll need your `uid` and `gid`. To find +these, run the following command: + +```sh +id -u && id -g +``` + +Now that you have collected the necessary information, open the +`/etc/fstab` file: + +```sh +sudo nano /etc/fstab +``` + +Each drive you want to mount on boot should have its own line in the +`/etc/fstab` file that looks similar to this: + +``` config +UUID=B64E53824E5339F7 /mnt/steam_library ntfs-3g uid=1000,gid=1000 0 0 +``` + +Now all you need to do is unmount your drive and re-mount it. You can +unmount the drive by doing this (be sure to use the correct drive name +here): + +```sh +sudo umount /dev/sdxX +``` + +You can re-mount all your drives by executing the following: + +```sh +sudo mount -a +``` + +If you don\'t know what your drive name is, or you\'re nervous about +unmounting and re-mounting, simply reboot your computer, and it will be +done for you automatically. 
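If you want to double-check the entry before touching the file, you can assemble it in the shell first — a quick sketch using the placeholder values from the example above (substitute your own UUID, mount point, uid, and gid):

```sh
# Placeholder values -- replace these with the output of blkid and `id -u && id -g`.
uuid="B64E53824E5339F7"
mount_point="/mnt/steam_library"
uid=1000
gid=1000

# Print the /etc/fstab entry using the standard field order:
# <device> <mount point> <type> <options> <dump> <pass>
printf 'UUID=%s %s ntfs-3g uid=%s,gid=%s 0 0\n' "$uuid" "$mount_point" "$uid" "$gid"
# -> UUID=B64E53824E5339F7 /mnt/steam_library ntfs-3g uid=1000,gid=1000 0 0
```

Once the printed line looks right, append it to `/etc/fstab` and run `sudo mount -a` as described above.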
diff --git a/content/blog/2020-01-26-steam-on-ntfs.org b/content/blog/2020-01-26-steam-on-ntfs.org deleted file mode 100644 index 6dffb3d..0000000 --- a/content/blog/2020-01-26-steam-on-ntfs.org +++ /dev/null @@ -1,91 +0,0 @@ -#+title: Linux Gaming Tweak: Steam on NTFS Drives -#+date: 2020-01-26 -#+description: Learn how to fix Steam NTFS issues on Linux. -#+filetags: :gaming:linux: - -* Auto-Mount Steam Drives -#+caption: Steam Screenshot -[[https://img.cleberg.net/blog/20200125-the-best-linux-software/steam.png]] - -If you want to see how to install Steam on Linux, see my other post: [[../linux-software/][Linux -Software]]. - -Are you having trouble launching games, even though they've installed correctly? -This may happen if you're storing your games on an NTFS-formatted drive. This -shouldn't be an issue if you're storing your games on the same drive that Steam -is on, but some gamers prefer to put Steam on their main drive and game files on -another SSD or HDD. - -To fix this problem, you'll need to try a few things. First, you'll need to -install the =ntfs-3g= package, which is meant for better interoperability with -Linux. - -#+begin_src sh -sudo apt install ntfs-3g -#+end_src - -Next, you should set up the =/etc/fstab= file to automatically mount your drives -on boot. To automatically mount your drives when the computer boots up, you'll -have to create the folders you want to mount your drive to first. I store mine -in the =/mnt= folder using names that I'll recognize, but you can create your -folders wherever you want. - -#+begin_src sh -mkdir /path/to/folder -#+end_src - -For example: - -#+begin_src sh -mkdir /mnt/steam_library -#+end_src - -To automatically mount drives upon system boot, you will need to collect a few -items. The UUID is the identification number connected to whichever drive you're -using to store Steam games. 
- -Drives are usually labeled similar to =/dev/nvme0n1p1= or =/dev/sda1=, so you'll -need to find the line in the output of the command below that correlates to your -drive and copy the UUID over to the =/etc/fstab= file. - -#+begin_src sh -sudo blkid | grep UUID= -#+end_src - -Next, you'll need your =uid= and =gid=. To find these, run the following -command: - -#+begin_src sh -id -u && id -g -#+end_src - -Now that you have collected the necessary information, open the =/etc/fstab= -file: - -#+begin_src sh -sudo nano /etc/fstab -#+end_src - -Each drive you want to mount on boot should have its own line in the -=/etc/fstab= file that looks similar to this: - -#+begin_src config -UUID=B64E53824E5339F7 /mnt/steam_library ntfs-3g uid=1000,gid=1000 0 0 -#+end_src - -Now all you need to do is unmount your drive and re-mount it. You can unmount -the drive by doing this (be sure to use the correct drive name here): - -#+begin_src sh -sudo umount /dev/sdxX -#+end_src - -You can re-mount all your drives by executing the following: - -#+begin_src sh -sudo mount -a -#+end_src - -If you don't know what your drive name is, or you're nervous about unmounting -and re-mounting, simply reboot your computer, and it will be done for you -automatically. diff --git a/content/blog/2020-02-09-cryptography-basics.md b/content/blog/2020-02-09-cryptography-basics.md new file mode 100644 index 0000000..dee2174 --- /dev/null +++ b/content/blog/2020-02-09-cryptography-basics.md @@ -0,0 +1,180 @@ ++++ +date = 2020-02-09 +title = "Cryptography Basics" +description = "" +draft = false ++++ + +# Similar Article Available + +If you haven\'t already, feel free to read my post on [AES +Encryption](../aes-encryption/). + +# What is Cryptography? + +In layman\'s terms, cryptography is a process that can change data from +a readable format into an unreadable format (and vice-versa) through a +series of processes and secrets. 
More technically, this is the Internet +Security Glossary\'s definition: + +> \[Cryptography is\] the mathematical science that deals with +> transforming data to render its meaning unintelligible (i.e., to hide +> its semantic content), prevent its undetected alteration, or prevent +> its unauthorized use. If the transformation is reversible, +> cryptography also deals with restoring encrypted data to an +> intelligible form. +> +> - [Internet Security Glossary +> (2000)](https://tools.ietf.org/html/rfc2828) + +Cryptography cannot offer protection against the loss of data; it simply +offers encryption methods to protect data at-rest and data in-traffic. +At a high level, encryption occurs when plaintext data is converted to +ciphertext (a secure form of text that cannot be understood unless +decrypted back to plaintext). The encryption process is completed +through the use of a mathematical function that utilizes one or more +values called keys to encrypt or decrypt the data. + +# Key Elements of Cryptographic Systems + +To create or evaluate a cryptographic system, you need to know the +essential pieces of the system: + +- **Encryption Algorithm (Primitive):** A mathematical process that + encrypts and decrypts data. +- **Encryption Key:** A string of bits used within the encryption + algorithm as the secret that allows successful encryption or + decryption of data. +- **Key Length (Size):** The maximum number of bits within the + encryption key. It\'s important to remember that key size is + regulated in many countries. +- **Message Digest:** A smaller, fixed-size bit string version of + the original message. This is practically infeasible to reverse, + which is why it\'s commonly used to verify integrity. + +# Symmetric Systems (Secret Key Cryptography) + +Symmetric cryptography utilizes a secret, bidirectional key to perform +both encryption and decryption of the data.
The most common +implementation of symmetric cryptography is the Advanced Encryption +Standard, which uses keys that are 128 bits to 256 bits in size. This +standard came after the National Institute of Standards and Technology +(NIST) decided to retire the Data Encryption Standard (DES) in 2001. + +Since brute force attacks strongly correlate with key length, the 56-bit +key length of DES was considered insecure after it was publicly broken +in under 24 hours. However, there is a modern implementation of DES +called Triple DES, in which the DES method is applied three times to each +data block. + +The main advantages of symmetric systems are the ease of use, since only +one key is required for both encryption and decryption, and the +simplicity of the algorithms. This helps with bulk data encryption that +may unnecessarily waste time and power using asymmetric systems. + +However, symmetric systems have disadvantages to keep in mind. Since the +key is private, it can be difficult to safely distribute keys to +communication partners. Additionally, the key cannot be used to sign +messages since it\'s necessary to keep the key private. + +# Asymmetric Systems (Public Key Cryptography) + +Asymmetric cryptography utilizes two keys within the system: a secret +key that is privately held and a public key that can be distributed +freely. The interesting aspect of asymmetric cryptography is that either +key can be used to encrypt the data; there\'s no rule that dictates +which key must be used for encryption. Once one key is used to encrypt +the data, only the other key can be used to decrypt the data. This means +that if the private key encrypts the data, only the public key can +decrypt the data. + +An advantage of this system is that if you successfully decrypt data +using one of the keys, you can be sure of the sender since only the +other key could have encrypted the data. + +One of the major implementations of an asymmetric system is a digital +signature.
A digital signature can be generated using the sender\'s +private key and a one-way hash function, and is used to provide +assurance of the integrity and authenticity of the message. A couple of +common message digest algorithms are SHA-256 and SHA-512, which securely +compress data and produce 256-bit and 512-bit message digests, +respectively. + +It should be noted that man-in-the-middle attacks are one of the risks +with digital signatures and public keys. To combat this, applications +often use a public key infrastructure (PKI) to independently +authenticate the validity of signatures and keys. + +Due to the large key size and [inefficient mathematical +functions](https://crypto.stackexchange.com/a/591) of asymmetric +encryption, elliptic curve cryptography (ECC) is often used to +increase security while using fewer resources. + +# Applications of Cryptographic Systems + +There are quite a few implementations of cryptographic systems around +the world. Here are a few popular examples: + +**Transport Layer Security (TLS):** One of the most famous +cryptographic solutions created is TLS, a session-layered or +connection-layered internet protocol that allows for secure +communications between browsers and servers. Using handshakes, peer +negotiation, and authentication allows TLS to prevent eavesdropping and +malicious transformation of data. The major reason for TLS\'s popularity is +that a major vulnerability was found in the SSL protocol in 2014. +Instead of SSL, TLS can be used with HTTP to form HTTPS and is the +preferred method for modern web development due to its increased +security. + +**Secure Hypertext Transfer Protocol (HTTPS):** An application layer +protocol that allows for secure transport of data between servers and +web clients. One of the unique parts of HTTPS is that it uses a secured +port number instead of the default web port address.
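The one-way hash functions mentioned above are easy to experiment with from the command line — a minimal sketch using the common `sha256sum` utility; any small change to the input produces a completely different digest:

```sh
# Hash a message, then hash a slightly altered copy;
# the two digests share no predictable relationship.
printf 'transfer $100 to Alice' | sha256sum
printf 'transfer $900 to Alice' | sha256sum
```

This avalanche effect is what makes a digest useful for integrity checks: recomputing the hash of a received message and comparing it against the sender\'s digest reveals any tampering.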
+ +**Virtual Private Network (VPN):** VPNs are made to securely extend a +private network across public networks by utilizing an encrypted layered +tunneling protocol paired with an authentication method, such as +usernames and passwords. This technology originally allowed remote +employees to access their company\'s data but has evolved into one of +the top choices for anyone who wishes to mask their sensitive personal +data. + +**Internet Protocol Security (IPSec):** This protocol suite +facilitates communication between two or more hosts or subnets by +authenticating and encrypting the data packets. IPSec is used in a lot +of VPNs to establish the VPN connection through the transport and tunnel +mode encryption methods. IPSec encrypts just the data portion of packets +in transport mode, but it encrypts both the data and headers in +tunnel mode (introducing an additional header for authentication). + +**Secure Shell (SSH):** SSH is another network protocol used to +protect network services by authenticating users through a secure +channel. This protocol is often used for command-line (shell) functions +such as remote shell commands, logins, and file transfers. + +**Kerberos:** Developed by MIT, Kerberos is a computer-network +authentication protocol that works on the basis of tickets to allow +nodes communicating over a non-secure network to prove their identity to +one another securely. This is most commonly used in business +environments, where it serves as the authentication and encryption method for +Windows Active Directory (AD). + +# Cybersecurity Controls + +If you\'re someone who needs solutions on how to control risks +associated with utilizing a cryptographic system, start with a few basic +controls: + +- **Policies:** A policy on the use of cryptographic controls for + protection of information is implemented and is in accordance with + organizational objectives.
+- **Key management:** A policy on the use, protection, and lifetime + of cryptographic keys is implemented through the entire application + lifecycle. +- **Key size:** The organization has researched the optimal key size + for their purposes, considering national laws, required processing + power, and longevity of the solution. +- **Algorithm selection:** Implemented algorithms are sufficiently + appropriate for the business of the organization, robust, and align + with recommended guidelines. +- **Protocol configuration:** Protocols have been reviewed and + configured suitably for the purpose of the business. diff --git a/content/blog/2020-02-09-cryptography-basics.org b/content/blog/2020-02-09-cryptography-basics.org deleted file mode 100644 index 43229da..0000000 --- a/content/blog/2020-02-09-cryptography-basics.org +++ /dev/null @@ -1,158 +0,0 @@ -#+title: Cryptography Basics -#+date: 2020-02-09 -#+description: Learn about the basics of cryptography. -#+filetags: :security: - -* Similar Article Available -If you haven't already, feel free to read my post on [[../aes-encryption/][AES Encryption]]. - -* What is Cryptography? -In layman's terms, cryptography is a process that can change data from a -readable format into an unreadable format (and vice-versa) through a series of -processes and secrets. More technically, this is the Internet Security -Glossary's definition: - -#+begin_quote -[Cryptography is] the mathematical science that deals with transforming data to -render its meaning unintelligible (i.e., to hide its semantic content), prevent -its undetected alteration, or prevent its unauthorized use. If the -transformation is reversible, cryptography also deals with restoring encrypted -data to an intelligible form.
- -- [[https://tools.ietf.org/html/rfc2828][Internet Security Glossary (2000)]] -#+end_quote - -Cryptography cannot offer protection against the loss of data; it simply offers -encryption methods to protect data at-rest and data in-traffic. At a high-level, -encrypted is when plaintext data is encrypted to ciphertext (a secure form of -text that cannot be understood unless decrypted back to plaintext). The -encryption process is completed through the use of a mathematical function that -utilizes one or more values called keys to encrypt or decrypt the data. - -* Key Elements of Cryptographic Systems -To create or evaluate a cryptographic system, you need to know the essential -pieces to the system: - -- *Encryption Algorithm (Primitive):** A mathematical process that encrypts and - decrypts data. -- *Encryption Key:** A string of bits used within the encryption algorithm as - the secret that allows successful encryption or decryption of data. -- *Key Length (Size):** The maximum number of bits within the encryption key. - It's important to remember that key size is regulated in many countries. -- *Message Digest:** A smaller, fixed-size bit string version of the original - message. This is practically infeasible to reverse, which is why it's commonly - used to verify integrity. - -* Symmetric Systems (Secret Key Cryptography) -Symmetric cryptography utilizes a secret, bidirectional key to perform both -encryption and decryption of the data. The most common implementation of -symmetric cryptography is the Advanced Encryption Standard, which uses keys that -are 128 bits to 256 bits in size. This standard came after the National -Institute of Standards and Technology (NIST) decided to retire the Data -Encryption Standard (DES) in 2001. - -Since brute force attacks strongly correlate with key length, the 56-bit key -length of DES was considered insecure after it was publicly broken in under 24 -hours. 
However, there is a modern implementation of DES called Triple DES where -the DES method is applied three times to each data block. - -The main advantages to symmetric systems are the ease of use, since only one key -is required for both encryption and decryption, and the simplicity of the -algorithms. This helps with bulk data encryption that may unnecessarily waste -time and power using asymmetric systems. - -However, symmetric systems have disadvantages to keep in mind. Since the key is -private, it can be difficult to safely distribute keys to communication -partners. Additionally, the key cannot be used to sign messages since it's -necessary to keep the key private. - -* Asymmetric Systems (Public Key Cryptography) -Asymmetric cryptography utilizes two keys within the system: a secret key that -is privately-held and a public key that can be distributed freely. The -interesting aspect of asymmetric cryptography is that either key can be used to -encrypt the data, there's no rule that dictates which key must be used for -encryption. Once one key is used to encrypt the data, only the other key can be -used to decrypt the data. This means that if the private key encrypts the data, -only the public key can decrypt the data. - -An advantage of this system is that if you successfully decrypt data using one -of the keys, you can be sure of the sender since only the other key could have -encrypted the data. - -One of the major implementations of an asymmetric system is a digital signature. -A digital signature can be generated using the sender's private key, or a -one-way hash function and is used to provide assurance for the integrity and -authenticity of the message. A couple common message digest algorithms are -SHA-256 and SHA-512, which securely compress data and produce a 128-bit message -digest. - -It should be noted that man-in-the-middle attacks are one of the risks with -digital signatures and public keys. 
To combat this, applications often use a -public key infrastructure (PKI) to independently authenticate the validity of -signatures and keys. - -Due to the large key size and [[https://crypto.stackexchange.com/a/591][inefficient mathematical functions]] of asymmetric -encryption, elliptical curve cryptography (ECC) is often used to increase -security while using fewer resources. - -* Applications of Cryptographic Systems -There are quite a few implementations of cryptographic systems around the world. -Here are a few popular examples: - -*Transport Layer Security (TLS):** One of the most famous cryptographic -solutions created is TLS, a session-layered or connection-layered internet -protocol that allows for secure communications between browsers and servers. -Using handshakes, peer negotiation, and authentication allows TLS to prevent -eavesdropping and malicious transformation of data. The major reason for TLS -popularity is that a major vulnerability was found in the SSL protocol in 2014. -Instead of SSL, TLS can be used with HTTP to form HTTPS and is the preferred -method for modern web development due to its increased security. - -*Secure Hypertext Transfer Protocol (HTTPS):** An application layer protocol -that allows for secure transport of data between servers and web clients. One of -the unique parts of HTTPS is that it uses a secured port number instead of the -default web port address. - -*Virtual Private Network (VPN):** VPNs are made to securely extend a private -network across public networks by utilizing an encrypted layered tunneling -protocol paired with an authentication method, such as usernames and passwords. -This technology originally allowed remote employees to access their company's -data but have evolved into one of the top choices for anyone who wishes to mask -their sensitive personal data. 
- -*Internet Protocol Security (IPSec):** This protocol suite facilitates -communication between two or more hosts or subnets by authenticating and -encrypting the data packets. IPSec is used in a lot of VPNs to establish the VPN -connection through the transport and tunnel mode encryption methods. IPSec -encrypts just the data portion of packets in the transport methods, but it -encrypts both the data and headers in the tunnel method (introducing an -additional header for authentication). - -*Secure Shell (SSH):** SSH is another network protocol used to protect network -services by authenticating users through a secure channel. This protocol is -often used for command-line (shell) functions such as remote shell commands, -logins, and file transfers. - -*Kerberos:** Developed by MIT, Kerberos is a computer-network authentication -protocol that works on the basis of tickets to allow nodes communicating over a -non-secure network to prove their identity to one another securely. This is most -commonly used in business environments when used as the authentication and -encryption method for Windows Active Directory (AD). - -* Cybersecurity Controls -If you're someone who needs solutions on how to control risks associated with -utilizing a crytograhpic system, start with a few basic controls: - -- *Policies:** A policy on the use of cryptographic controls for protection of - information is implemented and is in accordance with organizational - objectives. -- *Key management:** A policy on the use, protection and lifetime of - cryptographic keys is implemented through the entire application lifecycle. -- *Key size:** The organization has researched the optimal key size for their - purposes, considering national laws, required processing power, and longevity - of the solution. -- *Algorithm selection:** Implemented algorithms are sufficiently appropriate - for the business of the organization, robust, and align with recommended - guidelines. 
-- *Protocol configuration:** Protocols have been reviewed and configured - suitable to the purpose of the business. diff --git a/content/blog/2020-03-25-session-manager.org b/content/blog/2020-03-25-session-manager.org deleted file mode 100644 index 3635e4a..0000000 --- a/content/blog/2020-03-25-session-manager.org +++ /dev/null @@ -1,120 +0,0 @@ -#+title: Session Private Messenger -#+date: 2020-03-25 -#+description: Exploring the Session Private Messenger application. -#+filetags: :privacy: - -* Privacy Warning -The company behind Session (Loki Foundation) is from Australia. If you didn't -know, Australia has introduced [[https://parlinfo.aph.gov.au/parlInfo/download/legislation/bills/r6195_aspassed/toc_pdf/18204b01.pdf][legislation]] mandating companies comply with -government requests to build backdoor access into applications. For more -information, read my article on [[./2020-01-25-aes-encryption.html][AES Encryption]]. - -* About Session -[[https://getsession.org][Session]] is a private, cross-platform messaging app from the [[https://loki.foundation][Loki Foundation]]. As -someone who has spent years looking for quality alternatives to major messaging -apps, I was excited when I first heard about Session. Reading through [[https://arxiv.org/pdf/2002.04609.pdf][Session's -white paper]], you can learn the technologies behind the Session app. Part of the -security of Session comes from the Signal protocol, which was forked as the -origin of Session. - -#+begin_quote -Session is an end-to-end encrypted messenger that removes sensitive metadata -collection, and is designed for people who want privacy and freedom from any -forms of surveillance. -#+end_quote - -In general, this app promises security through end-to-end encryption, -decentralized onion routing, and private identities. The biggest change that the -Loki Foundation has made to the Signal protocol is removing the need for a phone -number. 
Instead, a random identification string is generated for any session you -create. This means you can create a new session for each device if you want to, -or link new devices with your ID. - -Since Session's website and white paper describe the details of Session's -security, I'm going to focus on using the app in this post. - -* Features -Since most people are looking for an alternative to a popular chat app, I am -going to list out the features that Session has so that you are able to -determine if the app would suit your needs: - -- Multiple device linking (via QR code or ID) -- App locking via device screen lock, password, or fingerprint -- Screenshot blocking -- Incognito keyboard -- Read receipts and typing indicators -- Mobile notification customization -- Old message deletion and conversation limit -- Backups -- Recovery phrase -- Account deletion, including ID, messages, sessions, and contacts - -* Downloads -I have tested this app on Ubuntu 19.10, Android 10, macOS Monterey, and iOS 15. -All apps have worked well without many issues. - -Below is a brief overview of the Session app on Linux. To get this app, you'll -need to go to the [[https://getsession.org/download/][Downloads]] page and click to link to the operating system -you're using. - -For Linux, it will download an AppImage that you'll need to enable with the -following command: - -#+begin_src sh -sudo chmod u+x session-messenger-desktop-linux-x86_64-1.0.5.AppImage -#+end_src - -#+caption: Session Download Options -[[https://img.cleberg.net/blog/20200325-session-private-messenger/session_downloads.png]] - -* Creating an Account -Once you've installed the app, simply run the app and create your unique Session -ID. It will look something like this: -=05af1835afdd63c947b47705867501d6373f486aa1ae05b1f2f3fcd24570eba608=. - -You'll need to set a display name and, optionally, a password. If you set a -password, you will need to enter it every time you open the app. 
- -#+caption: Session Login (Linux) -[[https://img.cleberg.net/blog/20200325-session-private-messenger/session_linux_login.png]] - -#+caption: Session Login (macOS) -[[https://img.cleberg.net/blog/20200325-session-private-messenger/session_macos_login.png]] - -#+caption: Password Authentication -[[https://img.cleberg.net/blog/20200325-session-private-messenger/session_password_authentication.png]] - -* Start Messaging -Once you've created your account and set up your profile details, the next step -is to start messaging other people. To do so, you'll need to share your Session -ID with other people. From this point, it's fairly straightforward and acts like -any other messaging app, so I won't dive into much detail here. - -** macOS -#+caption: macOS Conversations -[[https://img.cleberg.net/blog/20200325-session-private-messenger/session_macos_conversations.png]] - -One key feature to note is that the desktop application now provides a helpful -pop-up box explaining the process that Session uses to hide your IP address: - -#+caption: IP Address Help Box -[[https://img.cleberg.net/blog/20200325-session-private-messenger/session_ip.png]] - -** iOS -The mobile app is quite simple and effective, giving you all the standard mobile -messaging options you'd expect. - -#+caption: iOS App -[[https://img.cleberg.net/blog/20200325-session-private-messenger/session_ios.png]] - -* Potential Issues -I've discovered one annoying issue that would prevent from using this app -regularly. On a mobile device, there have been issues with receiving messages on -time. Even with battery optimization disabled and no network restrictions, -Session notifications sometimes do not display until I open the app or the -conversation itself and wait a few moments. This is actually one of the reasons -I stopped using Signal (this seems fixed as of my updates in 2021/2022, so I -wouldn't worry about this issue anymore). - -Looking for another messenger instead of Session? 
I recommend Signal, Matrix, -and IRC. diff --git a/content/blog/2020-03-25-session-messenger.md b/content/blog/2020-03-25-session-messenger.md new file mode 100644 index 0000000..805cbd8 --- /dev/null +++ b/content/blog/2020-03-25-session-messenger.md @@ -0,0 +1,137 @@ ++++ +date = 2020-03-25 +title = "Session Private Messenger" +description = "" +draft = false ++++ + +# Privacy Warning + +The company behind Session (Loki Foundation) is from Australia. If you +didn\'t know, Australia has introduced +[legislation](https://parlinfo.aph.gov.au/parlInfo/download/legislation/bills/r6195_aspassed/toc_pdf/18204b01.pdf) +mandating companies comply with government requests to build backdoor +access into applications. For more information, read my article on [AES +Encryption](./2020-01-25-aes-encryption.html). + +# About Session + +[Session](https://getsession.org) is a private, cross-platform messaging +app from the [Loki Foundation](https://loki.foundation). As someone who +has spent years looking for quality alternatives to major messaging +apps, I was excited when I first heard about Session. Reading through +[Session\'s white paper](https://arxiv.org/pdf/2002.04609.pdf), you can +learn the technologies behind the Session app. Part of the security of +Session comes from the Signal protocol, which was forked as the origin +of Session. + +> Session is an end-to-end encrypted messenger that removes sensitive +> metadata collection, and is designed for people who want privacy and +> freedom from any forms of surveillance. + +In general, this app promises security through end-to-end encryption, +decentralized onion routing, and private identities. The biggest change +that the Loki Foundation has made to the Signal protocol is removing the +need for a phone number. Instead, a random identification string is +generated for any session you create. This means you can create a new +session for each device if you want to, or link new devices with your +ID. 
+ +Since Session\'s website and white paper describe the details of +Session\'s security, I\'m going to focus on using the app in this post. + +# Features + +Since most people are looking for an alternative to a popular chat app, +I am going to list out the features that Session has so that you are +able to determine if the app would suit your needs: + +- Multiple device linking (via QR code or ID) +- App locking via device screen lock, password, or fingerprint +- Screenshot blocking +- Incognito keyboard +- Read receipts and typing indicators +- Mobile notification customization +- Old message deletion and conversation limit +- Backups +- Recovery phrase +- Account deletion, including ID, messages, sessions, and contacts + +# Downloads + +I have tested this app on Ubuntu 19.10, Android 10, macOS Monterey, and +iOS 15. All apps have worked well without many issues. + +Below is a brief overview of the Session app on Linux. To get this app, +you\'ll need to go to the [Downloads](https://getsession.org/download/) +page and click the link for the operating system you\'re using. + +For Linux, it will download an AppImage that you\'ll need to make +executable with the following command: + +```sh +sudo chmod u+x session-messenger-desktop-linux-x86_64-1.0.5.AppImage +``` + +![Session Download +Options](https://img.cleberg.net/blog/20200325-session-private-messenger/session_downloads.png) + +# Creating an Account + +Once you\'ve installed the app, simply run the app and create your +unique Session ID. It will look something like this: +`05af1835afdd63c947b47705867501d6373f486aa1ae05b1f2f3fcd24570eba608`. + +You\'ll need to set a display name and, optionally, a password. If you +set a password, you will need to enter it every time you open the app. 
+ +![Session Login +(Linux)](https://img.cleberg.net/blog/20200325-session-private-messenger/session_linux_login.png) + +![Session Login +(macOS)](https://img.cleberg.net/blog/20200325-session-private-messenger/session_macos_login.png) + +![Password +Authentication](https://img.cleberg.net/blog/20200325-session-private-messenger/session_password_authentication.png) + +# Start Messaging + +Once you\'ve created your account and set up your profile details, the +next step is to start messaging other people. To do so, you\'ll need to +share your Session ID with them. From this point, it\'s fairly +straightforward and acts like any other messaging app, so I won\'t dive +into much detail here. + +## macOS + +![macOS +Conversations](https://img.cleberg.net/blog/20200325-session-private-messenger/session_macos_conversations.png) + +One key feature to note is that the desktop application now provides a +helpful pop-up box explaining the process that Session uses to hide your +IP address: + +![IP Address Help +Box](https://img.cleberg.net/blog/20200325-session-private-messenger/session_ip.png) + +## iOS + +The mobile app is quite simple and effective, giving you all the +standard mobile messaging options you\'d expect. + +![iOS +App](https://img.cleberg.net/blog/20200325-session-private-messenger/session_ios.png) + +# Potential Issues + +I\'ve discovered one annoying issue that would prevent me from using this +app regularly. On a mobile device, there have been issues with receiving +messages on time. Even with battery optimization disabled and no network +restrictions, Session notifications sometimes do not display until I +open the app or the conversation itself and wait a few moments. This is +actually one of the reasons I stopped using Signal (this seems fixed as +of my updates in 2021/2022, so I wouldn\'t worry about this issue +anymore). + +Looking for another messenger instead of Session? I recommend Signal, +Matrix, and IRC. 
diff --git a/content/blog/2020-05-03-homelab.md b/content/blog/2020-05-03-homelab.md new file mode 100644 index 0000000..d02e9e8 --- /dev/null +++ b/content/blog/2020-05-03-homelab.md @@ -0,0 +1,166 @@ ++++ +date = 2020-05-03 +title = "An Inside Look at My Home Lab" +description = "" +draft = false ++++ + +# What is a Homelab? + +Starting as a developer, I have largely stayed away from hardware-based +hobbies (other than building a gaming desktop). However, as the +quarantine for COVID-19 stretched out further and further, I found +myself bored and in search of new hobbies. After spending the last few +months browsing the [r/homelab](https://www.reddit.com/r/homelab/) +subreddit, I decided it was time to jump in and try things out for +myself. + +Since I am a beginner and just recently graduated from college, +everything I\'ve done so far in my homelab is fairly low-budget. + +# Hardware + +![HomeLab +Diagram](https://img.cleberg.net/blog/20200503-homelab/homelab-min.png) + +## Raspberry Pi 4 + +Luckily, I had actually purchased a [Raspberry Pi +4](https://www.raspberrypi.org/products/raspberry-pi-4-model-b/) before +the quarantine started so that I could try to keep Plex Media Center +running, even while my desktop computer was turned off. I started here, +using the Pi to hold Plex and Pi-hole until I grew tired of the slow +performance. + +Here are the specifications for the Pi 4: + +- Broadcom BCM2711, Quad core Cortex-A72 (ARM v8) 64-bit SoC @ 1.5GHz +- 4GB LPDDR4-3200 SDRAM +- Gigabit Ethernet +- H.265 (4kp60 decode), H264 (1080p60 decode, 1080p30 encode) +- 64 GB MicroSD Card + +## Dell Optiplex 5040 + +Since I wasn\'t happy with the Pi as my main server, I turned to +Craigslist. I know a lot of other homelabbers use eBay, but I can\'t +seem to ever trust it enough to purchase items on there. So I used +Craigslist and found a Dell Optiplex 5040 desktop computer on sale for +\$90. 
While this computer might be underpowered, it was one of the few +computers under \$100 that were available during quarantine. + +Here are the specifications for the Dell Optiplex 5040: + +- Intel Core i3 6100 +- 8GB RAM DDR3 +- Intel HD Graphics +- Gigabit Ethernet +- 500GB Hard Drive + +While this hardware would be awful for a work computer or a gaming rig, +it turned out to be wonderful for my server purposes. The only +limitation I have found so far is the CPU. The i3-6100 only has enough +power for a single 4k video transcode at a time. I haven\'t tested more +than three 1080p streams at a time, but the maximum number of streams +I\'ve ever actually used is two. + +## WD easystore 10TB & 8TB + +Application storage and temporary files are stored on the internal hard +drive of the server, but all media files (movies, TV, games, books, etc.) +are stored externally on my WD easystore hard drive. Creating auto-boot +configurations in the `/etc/fstab` file on my server allows +the hard drives to automatically mount whenever I need to restart my +server. + +> Update: In March 2022, I shucked the hard drives out of their external +> cases, put some Kapton tape on the third power pin to prevent power +> shutdowns, and stuck them inside my server tower using internal SATA +> cables. + +## Netgear Unmanaged Switch + +To manage all the ethernet cords used by my homelab, my desktop, and my +living room media center, I purchased an 8-port gigabit ethernet switch +for \$50 at my local computer store. This is probably much more than I +should have spent on an unmanaged switch, but I am comfortable with the +choice. + +## TP-Link Managed Switch + +Since I use the unmanaged switch to group all living room devices +together, I use the managed switch to configure VLANs and secure my +network. + +## Arris TM1602A Modem & Sagecom Fast 5280 Router + +My default modem and router, provided by my ISP, are fairly standard. 
+The Arris modem supports DOCSIS 3.0, which is something that I +definitely wanted as a minimum. The Sagecom router is also standard, no +fancy bells or whistles. However, it does support DHCP and DHCPv6, which +is something you can use to point all household traffic at a +Pi-hole or firewall. + +## TP-Link EAP + +In order to gain better control over the network, I use my own wireless +access point instead of the one included in the Sagecom router above. +Now I can control and organize all of my ethernet connections through +the VLANs on the managed switch and wireless connections through the +VLANs on the EAP. + +## Generic Printer + +The last piece of my homelab is a standard wireless printer. Nothing +special here. + +# Software + +## Ubuntu Server 20.04 + +While the 20.04 version of Ubuntu was just released, I always like to +experiment with new features (and I don\'t mind breaking my system - it +just gives me more experience learning how to fix things). So, I have +Ubuntu Server 20.04 installed on the Dell Optiplex server and Ubuntu +Server 19.10 installed on the Raspberry Pi. Once I find an acceptable +use for the Pi, I will most likely switch the operating system. + +## Docker + +I am *very* new to Docker, but I have had a lot of fun playing with it +so far. Docker is used to create containers that can hold all the +contents of a system without interfering with other software on the same +system. So far, I have successfully installed pi-hole, GitLab, Gogs, and +Nextcloud in containers. However, I opted to delete all of those so that +I can reconfigure them more professionally at a later time. + +## Plex Media Server + +Plex is media center software that allows you to organize your movies, +TV shows, music, photos, and videos automatically. It will even download +metadata for you so that you can easily browse these collections. 
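A sketch of how a service like Plex can be run as a container, tying the Docker and Plex pieces above together — the image (`lscr.io/linuxserver/plex`), host paths, and IDs below are assumptions to adapt, not my exact setup:

```sh
# Hedged sketch: Plex in a container via the linuxserver image.
# PUID/PGID map the container user to your host user; paths are examples.
docker run -d \
  --name plex \
  --network host \
  -e PUID=1000 -e PGID=1000 \
  -e TZ=America/Chicago \
  -v /srv/plex/config:/config \
  -v /mnt/easystore/media:/media \
  --restart unless-stopped \
  lscr.io/linuxserver/plex:latest
```

With `--network host`, Plex's discovery and streaming ports are exposed directly on the server, which keeps local device discovery simple.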
+ +## Pi-hole + +Pi-hole is an alternative ad-blocker that runs at the DNS level, +allowing you to block traffic when it hits your network, so that you can +reject any traffic you deem to be bad. Pi-hole uses blacklists and +whitelists to decide which traffic to block and, luckily, there are a lot +of pre-made lists out there on Reddit, GitHub, etc. + +## Nextcloud + +While I had trouble with the Docker version of Nextcloud, I was very +successful when setting up the snap version. Using this, I was able to +map Nextcloud to a subdomain of a domain I own in Namecheap. +Additionally, Nextcloud has an integration with Let\'s Encrypt that +allows me to issue certificates automatically to any new domain I +authorize. + +## Webmin + +To monitor my servers, and the processes running on them, I use the +Webmin dashboard. This was fairly painless to set up, and I currently +access it straight through the server\'s IP address. In the future, I +will be looking to configure Webmin to use a custom domain just like +Nextcloud. diff --git a/content/blog/2020-05-03-homelab.org b/content/blog/2020-05-03-homelab.org deleted file mode 100644 index 1b2b12e..0000000 --- a/content/blog/2020-05-03-homelab.org +++ /dev/null @@ -1,136 +0,0 @@ -#+title: An Inside Look at My Homelab -#+date: 2020-05-03 -#+description: A retrospective on the first iteration of my home lab. -#+filetags: :sysadmin: - -* What is a Homelab? -Starting as a developer, I have largely stayed away from hardware-based hobbies -(other than building a gaming desktop). However, as the quarantine for COVID-19 -stretches out further and further, I found myself bored and in search of new -hobbies. After spending the last few months browsing the [[https://www.reddit.com/r/homelab/][r/homelab]] subreddit, I -decided it was time to jump in and try things out for myself. - -Since I am a beginner and just recently graduated from college, everything I've -done so far in my homelab is fairly low-budget. 
- -* Hardware -#+caption: HomeLab Diagram -[[https://img.cleberg.net/blog/20200503-homelab/homelab-min.png]] - -*** Raspberry Pi 4 -Luckily, I had actually purchased a [[https://www.raspberrypi.org/products/raspberry-pi-4-model-b/][Raspberry Pi 4]] before the quarantine started -so that I could try to keep Plex Media Center running, even while my desktop -computer was turned off. I started here, using the Pi to hold Plex and Pi-hole -until I grew tired with the slow performance. - -Here are the specifications for the Pi 4: - -- Broadcom BCM2711, Quad core Cortex-A72 (ARM v8) 64-bit SoC @ 1.5GHz -- 4GB LPDDR4-3200 SDRAM -- Gigabit Ethernet -- H.265 (4kp60 decode), H264 (1080p60 decode, 1080p30 encode) -- 64 GB MicroSD Card - -** Dell Optiplex 5040 -Since I wasn't happy with the Pi as my main server, I turned to Craigslist. I -know a lot of other homelabbers use Ebay, but I can't seem to ever trust it -enough to purchase items on there. So I used Craigslist and found a Dell -Optiplex 5040 desktop computer on sale for $90. While this computer might be -underpowered, it was one of the few computers under $100 that was available -during quarantine. - -Here are the specifications for the Dell Optiplex 5040: - -- Intel Core i3 6100 -- 8GB RAM DDR3 -- Intel HD Graphics -- Gigabit Ethernet -- 500GB Hard Drive - -While this hardware would be awful for a work computer or a gaming rig, it -turned out to be wonderful for my server purposes. The only limitation I have -found so far is the CPU. The i3-6100 only has enough power for a single 4k video -transcode at a time. I haven't tested more than three 1080p streams at a time, -but the maximum amount of streams I've ever actually used is two. - -** WD easystore 10TB & 8TB -Application storage and temporary files are stored on the internal hard drive of -the server, but all media files (movies, tv, games, books, etc) are stored -externally on my WD easystore hard drive. 
Creating auto-boot configurations in -the =/etc/fstab= file on my server allows the hard drives to automatically mount -whenever I need to restart my server. - -#+begin_quote -Update: In March 2022, I shucked the hard drives out of their external cases, -put some Kapton tape on the third power pin to prevent power shutdowns, and -stuck them inside my server tower using internal SATA cables. -#+end_quote - -** Netgear Unmanaged Switch -To manage all the ethernet cords used by my homelab, my desktop, and my living -room media center, I purchased an 8-port gigabit ethernet switch for $50 at my -local computer store. This is probably much more than I should have spent on an -unmanaged switch, but I am comfortable with the choice. - -** TP-Link Managed Switch -Since I use the unmanaged switch to group all living room devices together, I -use the managed switch to configure VLANs and secure my network. - -** Arris TM1602A Modem & Sagecom Fast 5280 Router -My default modem and router, provided by my ISP, are fairly standard. The Arris -modem supports DOCSIS 3.0, which is something that I definitely wanted as a -minimum. The Sagecom router is also standard, no fancy bells or whistles. -However, it does support DHCP and DHCPv6, which is something you can use to -route all household traffic through a pi-hole or firewall. - -** TP-Link EAP -In order to gain better control over the network, I use my own wireless access -point instead of the one included in the Sagecom router above. Now I can control -and organize all of my ethernet connections through the VLANs on the managed -switch and wireless connections through the VLANS on the EAP. - -** Generic Printer -The last piece to my homelab is a standard wireless printer. Nothing special -here. 
- -* Software -** Ubuntu Server 20.04 -While the 20.04 version of Ubuntu was just released, I always like to experiment -with new features (and I don't mind breaking my system - it just gives me more -experience learning how to fix things). So, I have Ubuntu Server 20.04 installed -on the Dell Optiplex server and Ubuntu Server 19.10 installed on the Raspberry -Pi. Once I find an acceptable use for the Pi, I will most likely switch the -operating system. - -** Docker -I am /very/ new to Docker, but I have had a lot of fun playing with it so far. -Docker is used to create containers that can hold all the contents of a system -without interfering with other software on the same system. So far, I have -successfully installed pi-hole, GitLab, Gogs, and Nextcloud in containers. -However, I opted to delete all of those so that I can reconfigure them more -professionally at a later time. - -** Plex Media Server -Plex is a media center software that allows you to organize your movies, TV -shows, music, photos, and videos automatically. It will even download metadata -for you so that you can easily browse these collections. - -** Pi-hole -Pi-hole is an alternative ad-blocker that runs at the DNS level, allowing you to -block traffic when it hits your network, so that you can reject any traffic you -deem to be bad. Pi-hole uses blacklists and whitelists to decide which traffic -block and, luckily, there are a lot of pre-made lists out there on Reddit, -GitHub, etc. - -** Nextcloud -While I had trouble with the Docker version of Nextcloud, I was very successful -when setting up the snap version. Using this, I was able to map Nextcloud to a -subdomain of a domain I own in Namecheap. Additionally, Nextcloud has an -integration with Let's Encrypt that allows me to issue certificates -automatically to any new domain I authorize. - -** Webmin -To monitor my servers, and the processes running on them, I use the Webmin -dashboard. 
This was fairly painless to set up, and I currently access it -straight through the server's IP address. In the future, I will be looking to -configure Webmin to use a custom domain just like Nextcloud. diff --git a/content/blog/2020-05-19-customizing-ubuntu.md b/content/blog/2020-05-19-customizing-ubuntu.md new file mode 100644 index 0000000..6137467 --- /dev/null +++ b/content/blog/2020-05-19-customizing-ubuntu.md @@ -0,0 +1,211 @@ ++++ +date = 2020-05-19 +title = "A Beginner's Guide: Customizing Ubuntu" +description = "" +draft = false ++++ + +# More Information + +For inspiration on designing your \*nix computer, check out the +[r/unixporn](https://libredd.it/r/unixporn) subreddit! + +# Customizing Ubuntu + +New to Linux and want to add a personal touch to your machine? One of +the best perks of Linux is that it is **extremely** customizable. You +can change the styles of the windows, shell (status bars/docks), icons, +fonts, terminals, and more. + +In this post, I\'m going to go through customization on Ubuntu 20.04 +(GNOME) since most new users tend to choose Ubuntu-based distros. If +you\'ve found a way to install Arch with i3-gaps, I\'m assuming you know +how to find more advanced tutorials out there on customizations. + +## Required Tools + +![Gnome +Tweaks](https://img.cleberg.net/blog/20200519-customizing-ubuntu/gnome-tweaks-min.png) + +Ubuntu 20.04 ships with the default desktop environment +[Gnome](https://www.gnome.org/), which includes the handy +`gnome-tweaks` tool to quickly change designs. To install +this, just open your terminal and enter the following command: + +```sh +sudo apt install gnome-tweaks +``` + +After you\'ve finished installing the tool, simply launch the Tweaks +application, and you\'ll be able to access the various customization +options available by default on Ubuntu. You might even like some of the +pre-installed options. 
+ +## GNOME Application Themes + +To change the themes applied to applications in GNOME, you will need to +change the Applications dropdown in the Appearance section of Tweaks. To +add more themes, you will have to find your preferred theme online and +follow the steps below to have it show up in the Tweaks tool. While you +may find themes anywhere, one of the most popular sites for GNOME themes +is [gnome-look.org](https://www.gnome-look.org/). This website contains +themes for applications, shells, icons, and cursors. + +Steps to import themes into Tweaks: + +1. Download the theme. +2. These files are usually compressed (.zip, .tar.gz, .tar.xz), so you + will need to extract the contents. This is easiest when opening the + file explorer, right-clicking the compressed file, and choosing + \"Extract here.\" +3. Move the theme folder to `/usr/share/themes/`. You can do + so with the following command: + `sudo mv theme-folder/ /usr/share/themes/`. + - Icons and cursors will be moved to the + `/usr/share/icons/` folder. + - Fonts will be moved to the `/usr/share/fonts/` folder. + Alternatively, you can move them to the + `/usr/share/fonts/opentype/` or + `/usr/share/fonts/truetype/` folders, if you have a + specific font type. +4. Close Tweaks if it is open, then re-open it and your new theme will + be available in the Applications dropdown in the Appearance section + of Tweaks. + +If the theme is not showing up after you\'ve moved it into the themes +folder, you may have uncompressed the folder into a sub-folder. You can +check this by entering the theme folder and listing the contents: + +```sh +cd /usr/share/themes/Mojave-Dark && ls -la +``` + +This is an example of what the contents of your theme folder should look +like. If you just see another folder there, you should move that folder +up into the `/usr/share/themes/` folder. 
+ +```sh +cinnamon COPYING gnome-shell gtk-2.0 gtk-3.0 index.theme metacity-1 plank xfwm4 +``` + +## GNOME Shell Themes + +To change the appearance of the title bar, default dock, app menu, and +other parts of the GNOME shell, you\'ll need to install the [user +themes](https://extensions.gnome.org/extension/19/user-themes/) +extension on [Gnome Extensions](https://extensions.gnome.org/). To be +able to install extensions, you will first need to install the browser +extension that the website instructs you to install. See this screenshot +for the blue box with a link to the extension. + +![Gnome +Extensions](https://img.cleberg.net/blog/20200519-customizing-ubuntu/gnome-extensions-min.png) + +After the browser extension is installed, you will need to install the +native host connector: + +```sh +sudo apt install chrome-gnome-shell +``` + +Finally, you can go to the [user +themes](https://extensions.gnome.org/extension/19/user-themes/) +extension page and click the installation button. This will enable the +Shell option in Tweaks. Now you can move shell themes to the +`/usr/share/themes` directory, using the same steps mentioned +in the previous section, and enable the new theme in Tweaks. + +## Icons & Cursors + +Icons and cursors are installed exactly the same way, so I\'m grouping +these together in this post. Both of these items will need to follow the +same process as installing themes, except you will want to move the +icon or cursor folders to the `/usr/share/icons/` directory instead. + +## Fonts + +Fonts are one of the overlooked parts of customization, but a good font +can make the whole screen look different. For example, I have installed +the [IBM Plex](https://github.com/IBM/plex/releases) fonts on my system. +This follows the same process as installing themes, except you will want +to move your font folders to the `/usr/share/fonts/` +directory instead. 
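If you'd rather keep fonts out of the system directories, a per-user install works too; here is a small sketch (the source path is an example — point it at wherever you extracted the fonts):

```sh
# Per-user font install: no sudo required, visible only to your account.
mkdir -p ~/.local/share/fonts
cp ~/Downloads/plex/TrueType/*.ttf ~/.local/share/fonts/  # example extract path
fc-cache -f ~/.local/share/fonts  # rebuild the font cache so apps pick them up
```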
+ +## Terminal + +If you spend a lot of time typing commands, you know how important the +style and functionality of the terminal is. After spending a lot of time +using the default GNOME terminal with +[bash](https://en.wikipedia.org/wiki/Bash_(Unix_shell)), I decided to +try some different options. I ended up choosing +[Terminator](https://terminator-gtk3.readthedocs.io/en/latest/) with +[zsh](https://en.wikipedia.org/wiki/Z_shell). + +Terminator is great if you need to open multiple terminals at one time +by simply right-clicking and splitting the screen into as many terminals +as you want. While this project hasn\'t been updated in a while, [it is +coming under new +development](https://github.com/gnome-terminator/terminator/issues/1). +However, this terminal is great and I haven\'t experienced any errors +yet. + +For the shell, I chose zsh after trying it out on a +fresh Manjaro installation. Zsh is great if you like to change the +themes of your terminal, include icons, or add plugins. + +The desktop uses the +[zsh-autosuggestions](https://github.com/zsh-users/zsh-autosuggestions) +plugin to suggest past commands as you type. In addition, it suggests +corrections if you misspell a command. Lastly, it uses the +`af-magic` theme, which adds dashed lines between commands, +moves the user@host tag to the right side of the terminal, and changes +the colors. There are plenty of plugins and themes to choose from. Just +figure out what you like and add it to your `~/.zshrc` file! 
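Putting those pieces together, the relevant lines of a `~/.zshrc` for this setup would look something like the following sketch (assuming Oh My Zsh, which the steps below install):

```sh
# Excerpt of a ~/.zshrc for the setup described above (assumes Oh My Zsh).
export ZSH="$HOME/.oh-my-zsh"
ZSH_THEME="af-magic"               # dashed separators, user@host on the right
plugins=(git zsh-autosuggestions)  # history-based suggestions as you type
source "$ZSH/oh-my-zsh.sh"
```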
+ +### Steps to Replicate My Terminal + +To install zsh on Ubuntu, enter the following command into a terminal: + +```sh +sudo apt install zsh +``` + +Then, enter the next command to activate zsh: + +```sh +sudo chsh -s $(which zsh) $(whoami) +``` + +To install Terminator on Ubuntu: + +```sh +sudo apt install terminator +``` + +To install Oh My Zsh on Ubuntu: + +```sh +sh -c "$(curl -fsSL https://raw.github.com/ohmyzsh/ohmyzsh/master/tools/install.sh)" +``` + +To install zsh-autosuggestions via Oh My Zsh: + +```sh +git clone https://github.com/zsh-users/zsh-autosuggestions ${ZSH_CUSTOM:-~/.oh-my-zsh/custom}/plugins/zsh-autosuggestions +``` + +Then, add the following plugin wording to your `~/.zshrc` +file (the default config usually has the `git` plugin +activated, so just add any other plugins to the parentheses separated by +a space): + +```sh +nano ~/.zshrc +``` + +```sh +plugins=(git zsh-autosuggestions) +``` + +Finally, you need to log out of your computer and log back in so your +user shell can refresh. diff --git a/content/blog/2020-05-19-customizing-ubuntu.org b/content/blog/2020-05-19-customizing-ubuntu.org deleted file mode 100644 index 468edc8..0000000 --- a/content/blog/2020-05-19-customizing-ubuntu.org +++ /dev/null @@ -1,178 +0,0 @@ -#+title: Beginner's Guide: Customizing Ubuntu -#+date: 2020-05-19 -#+description: A beginner's guide to customizing the Ubuntu operating system. -#+filetags: :linux: - -* More Information -For inspiration on designing your *nix computer, check out the -[[https://libredd.it/r/unixporn][r/unixporn]] subreddit! - -* Customizing Ubuntu -New to Linux and want to add a personal touch to your machine? One of the best -perks of Linux is that it is *extremely** customizable. You can change the -styles of the windows, shell (status bars/docks), icons, fonts, terminals, and -more. - -In this post, I'm going to go through customization on Ubuntu 20.04 (GNOME) -since most new users tend to choose Ubuntu-based distros. 
If you've found a way -to install Arch with i3-gaps, I'm assuming you know how to find more advanced -tutorials out there on customizations. - -** Required Tools -#+caption: Gnome Tweaks -[[https://img.cleberg.net/blog/20200519-customizing-ubuntu/gnome-tweaks-min.png]] - -Ubuntu 20.04 ships with the default desktop environment [[https://www.gnome.org/][Gnome]], which includes -the handy =gnome-tweaks= tool to quickly change designs. To install this, just -open your terminal and enter the following command: - -#+begin_src sh -sudo apt install gnome-tweaks -#+end_src - -After you've finished installing the tool, simply launch the Tweaks application, -and you'll be able to access the various customization options available by -default on Ubuntu. You might even like some of the pre-installed options. - -** GNOME Application Themes -To change the themes applied to applications in GNOME, you will need to change -the Applications dropdown in the Appearance section of Tweaks. To add more -themes, you will have to find your preferred theme online and follow the steps -below to have it show up in the Tweaks tool. While you may find themes anywhere, -one of the most popular sites for GNOME themes is [[https://www.gnome-look.org/][gnome-look.org]]. This website -contains themes for applications, shells, icons, and cursors. - -Steps to import themes into Tweaks: - -1. Download the theme. -2. These files are usually compressed (.zip, .tar.gz, .tar.xz), so you will need - to extract the contents. This is easiest when opening the file explorer, - right-clicking the compressed file, and choosing "Extract here." -3. Move the theme folder to =/usr/share/themes/=. You can do so with the - following command: =sudo mv theme-folder/ /usr/share/themes/=. - - Icons and cursors will be moved to the =/usr/share/icons/= folder. 
- - Fonts will be moved to the =/usr/share/fonts/= folder Alternatively, you - can move them to the =/usr/share/fonts/opentype/= or - =/usr/share/fonts/opentype/= folders, if you have a specific font type. -4. Close tweaks if it is open. Re-open Tweaks and your new theme will be - available in the Applications dropdown in the Appearance section of Tweaks. - -If the theme is not showing up after you've moved it into the themes folder, you -may have uncompressed the folder into a sub-folder. You can check this by -entering the theme folder and listing the contents: - -#+begin_src sh -cd /usr/share/themes/Mojave-Dark && ls -la -#+end_src - -This is an example of what the contents of your theme folder should look like. -If you just see another folder there, you should move that folder up into the -=/usr/share/themes/= folder. - -#+begin_src sh -cinnamon COPYING gnome-shell gtk-2.0 gtk-3.0 index.theme metacity-1 plank xfwm4 -#+end_src - -** GNOME Shell Themes -To change the appearance of the title bar, default dock, app menu, and other -parts of the GNOME shell, you'll need to install the [[https://extensions.gnome.org/extension/19/user-themes/][user themes]] extension on -[[https://extensions.gnome.org/][Gnome Extensions]]. To be able to install extensions, you will first need to -install the browser extension that the website instructs you to. See this -screenshot for the blue box with a link to the extension. - -#+caption: Gnome Extensions -[[https://img.cleberg.net/blog/20200519-customizing-ubuntu/gnome-extensions-min.png]] - -After the browser extension is installed, you will need to install the native -host connector: - -#+begin_src sh -sudo apt install chrome-gnome-shell -#+end_src - -Finally, you can go the [[https://extensions.gnome.org/extension/19/user-themes/][user themes]] extension page and click the installation -button. This will enable the Shell option in Tweaks. 
Now you can move shell -themes to the =/usr/share/themes= directory, using the same steps mentioned in -the previous section, and enable the new theme in Tweaks. - -** Icons & Cursors -Icons and cursors are installed exactly the same way, so I'm grouping these -together in this post. Both of these items will need to follow the same process -as installing themes, except you will want to move your font folders to the -=/usr/share/icons/= directory instead. - -** Fonts -Fonts are one of the overlooked parts of customization, but a good font can make -the whole screen look different. For example, I have installed the [[https://github.com/IBM/plex/releases][IBM Plex]] -fonts on my system. This follows the same process as installing themes, except -you will want to move your font folders to the =/usr/share/fonts/= directory -instead. - -** Terminal -If you spend a lot of time typing commands, you know how important the style and -functionality of the terminal is. After spending a lot of time using the default -GNOME terminal with [[https://en.wikipedia.org/wiki/Bash_(Unix_shell)][unix shell]], I decided to try some different options. I ended -up choosing [[https://terminator-gtk3.readthedocs.io/en/latest/][Terminator]] with [[https://en.wikipedia.org/wiki/Z_shell][zsh]]. - -Terminator is great if you need to open multiple terminals at one time by simply -right-clicking and splitting the screen into as many terminals as you want. -While this project hasn't been updated in a while, [[https://github.com/gnome-terminator/terminator/issues/1][it is coming under new -development]]. However, this terminal is great and I haven't experienced any -errors yet. - -For the shell choice, I decided to choose zsh after trying it out on a fresh -Manjaro installation. Zsh is great if you like to change the themes of your -terminal, include icons, or add plugins. 
- -The desktop uses the [[https://github.com/zsh-users/zsh-autosuggestions][zsh-autosuggestions]] to suggest past commands as you type. -In addition, it suggests corrections if you misspell a command. Lastly, it uses -the =af-magic= theme, which adds dashed lines between commands, moving the -user@host tag to the right side of the terminal, and changes the colors. There -are plenty of plugins and themes to choose from. Just figure out what you like -and add it to your =~/.zshrc= file! - -*** Steps to Replicate My Terminal -To install zsh on Ubuntu, enter the following command into a terminal: - -#+begin_src sh -sudo apt install zsh -#+end_src - -Then, enter the next command to activate zsh: - -#+begin_src sh -sudo chsh -s $(which zsh) $(whoami) -#+end_src - -To install Terminator on Ubuntu: - -#+begin_src sh -sudo apt install terminator -#+end_src - -To install Oh My Zsh on Ubuntu: - -#+begin_src sh -sh -c "$(curl -fsSL https://raw.github.com/ohmyzsh/ohmyzsh/master/tools/install.sh)" -#+end_src - -To install zsh-autosuggestions via Oh My Zsh: - -#+begin_src sh -git clone https://github.com/zsh-users/zsh-autosuggestions ${ZSH_CUSTOM:-~/.oh-my-zsh/custom}/plugins/zsh-autosuggestions -#+end_src - -Then, add the following plugin wording to your =~/.zshrc= file (the default -config usually has the =git= plugin activated, so just add any other plugins to -the parentheses separated by a space): - -#+begin_src sh -nano ~/.zshrc -#+end_src - -#+begin_src sh -plugins=(git zsh-autosuggestions) -#+end_src - -Finally, you need to log out of your computer and log back in so your user shell -can refresh. 
diff --git a/content/blog/2020-07-20-video-game-sales.md b/content/blog/2020-07-20-video-game-sales.md
new file mode 100644
index 0000000..24ca14b
--- /dev/null
+++ b/content/blog/2020-07-20-video-game-sales.md
@@ -0,0 +1,183 @@
++++
+date = 2020-07-20
+title = "Data Exploration: Video Game Sales"
+description = "Exploring and visualizing data with Python."
+draft = false
++++
+
+# Background Information
+
+This dataset (obtained from
+[Kaggle](https://www.kaggle.com/gregorut/videogamesales/data)) contains
+a list of video games with sales greater than 100,000 copies. It was
+generated by a scrape of vgchartz.com.
+
+Fields include:
+
+- Rank: Ranking of overall sales
+- Name: The game name
+- Platform: Platform of the game release (e.g., PC, PS4)
+- Year: Year of the game's release
+- Genre: Genre of the game
+- Publisher: Publisher of the game
+- NA_Sales: Sales in North America (in millions)
+- EU_Sales: Sales in Europe (in millions)
+- JP_Sales: Sales in Japan (in millions)
+- Other_Sales: Sales in the rest of the world (in millions)
+- Global_Sales: Total worldwide sales (in millions)
+
+There are 16,598 records; 2 records were dropped due to incomplete
+information.
+
+# Import the Data
+
+``` python
+# Import the Python libraries we will be using
+import pandas as pd
+import numpy as np
+import seaborn as sns; sns.set()
+import matplotlib.pyplot as plt
+
+# Load the file using the path to the downloaded file
+file = r'video_game_sales.csv'
+df = pd.read_csv(file)
+df
+```
+
+![Dataframe
+Results](https://img.cleberg.net/blog/20200720-data-exploration-video-game-sales/01_dataframe-min.png)
+
+# Explore the Data
+
+``` python
+# With the describe function, we can see the basic stats. For example, we can
+# also see that the 'Year' column has some incomplete values.
+df.describe()
+```
+
+![df.describe()](https://img.cleberg.net/blog/20200720-data-exploration-video-game-sales/02_describe-min.png)
+
+``` python
+# This function shows the rows and columns of NaN values.
For example, df[179,3] = nan
+np.where(pd.isnull(df))
+
+(array([179, ..., 16553], dtype=int64),
+ array([3, ..., 5], dtype=int64))
+```
+
+# Visualize the Data
+
+``` python
+# This function plots the global sales by platform
+sns.catplot(x='Platform', y='Global_Sales', data=df, jitter=False).set_xticklabels(rotation=90)
+```
+
+![Plot of Global Sales by
+Platform](https://img.cleberg.net/blog/20200720-data-exploration-video-game-sales/03_plot-min.png)
+
+``` python
+# This function plots the global sales by genre
+sns.catplot(x='Genre', y='Global_Sales', data=df, jitter=False).set_xticklabels(rotation=45)
+```
+
+![Plot of Global Sales by
+Genre](https://img.cleberg.net/blog/20200720-data-exploration-video-game-sales/04_plot-min.png)
+
+``` python
+# This function plots the global sales by year
+sns.lmplot(x='Year', y='Global_Sales', data=df).set_xticklabels(rotation=45)
+```
+
+![Plot of Global Sales by
+Year](https://img.cleberg.net/blog/20200720-data-exploration-video-game-sales/05_plot-min.png)
+
+``` python
+# This function plots four different lines to show sales from different regions.
+# The global sales plot line is commented-out, but can be included for comparison
+df2 = df.groupby('Year').sum()
+years = range(1980,2019)
+
+a = df2['NA_Sales']
+b = df2['EU_Sales']
+c = df2['JP_Sales']
+d = df2['Other_Sales']
+# e = df2['Global_Sales']
+
+fig, ax = plt.subplots(figsize=(12,12))
+ax.set_ylabel('Region Sales (in Millions)')
+ax.set_xlabel('Year')
+
+ax.plot(years, a, label='NA_Sales')
+ax.plot(years, b, label='EU_Sales')
+ax.plot(years, c, label='JP_Sales')
+ax.plot(years, d, label='Other_Sales')
+# ax.plot(years, e, label='Global_Sales')
+
+ax.legend()
+plt.show()
+```
+
+![Plot of Regional Sales by
+Year](https://img.cleberg.net/blog/20200720-data-exploration-video-game-sales/06_plot-min.png)
+
+## Investigate Outliers
+
+``` python
+# Find the game with the highest sales in North America
+df.loc[df['NA_Sales'].idxmax()]
+
+Rank 1
+Name Wii Sports
+Platform Wii
+Year 2006
+Genre Sports
+Publisher Nintendo
+NA_Sales 41.49
+EU_Sales 29.02
+JP_Sales 3.77
+Other_Sales 8.46
+Global_Sales 82.74
+Name: 0, dtype: object
+
+# Explore statistics in the year 2006 (highest selling year)
+df3 = df[(df['Year'] == 2006)]
+df3.describe()
+```
+
+![Descriptive Statistics of 2006
+Sales](https://img.cleberg.net/blog/20200720-data-exploration-video-game-sales/07_2006_stats-min.png)
+
+``` python
+# Plot the results of the previous dataframe (games from 2006) - we can see the year's results were largely carried by Wii Sports
+sns.catplot(x="Genre", y="Global_Sales", data=df3, jitter=False).set_xticklabels(rotation=45)
+```
+
+![Plot of 2006
+Sales](https://img.cleberg.net/blog/20200720-data-exploration-video-game-sales/08_plot-min.png)
+
+``` python
+# We can see 4 outliers in the graph above, so let's get the top 5 games from that dataframe
+# The results below show that Nintendo had all top 5 games (3 on the Wii and 2 on the DS)
+df3.sort_values(by=['Global_Sales'], ascending=False).head(5)
+```
+
+![Outliers of 2006
+Sales](https://img.cleberg.net/blog/20200720-data-exploration-video-game-sales/09_outliers-min.png)
+
+# Discussion
+
+The purpose of exploring datasets is to ask questions, answer questions,
+and discover intelligence that can be used to inform decision-making.
+So, what have we found in this dataset?
+
+Today we simply explored a publicly available dataset to see what kind
+of information it contained. During that exploration, we found that
+video game sales peaked in 2006. That peak was largely due to Nintendo,
+which sold the top 5 games in 2006 and has a number of games in the top-10
+list for the years 1980-2020. Additionally, the top four platforms by
+global sales (Wii, NES, GB, DS) are owned by Nintendo.
+
+We didn't explore everything this dataset has to offer, but we can tell
+from a brief analysis that Nintendo seems to rule sales in the video
+gaming world. Further analysis could provide insight into which genres,
+regions, publishers, or world events are correlated with sales.
diff --git a/content/blog/2020-07-20-video-game-sales.org b/content/blog/2020-07-20-video-game-sales.org
deleted file mode 100644
index 2967c17..0000000
--- a/content/blog/2020-07-20-video-game-sales.org
+++ /dev/null
@@ -1,173 +0,0 @@
-#+title: Data Exploration: Video Game Sales
-#+date: 2020-07-20
-#+description: Exploring and visualizing data with Python.
-#+filetags: :data:
-
-* Background Information
-This dataset (obtained from [[https://www.kaggle.com/gregorut/videogamesales/data][Kaggle]]) contains a list of video games with sales
-greater than 100,000 copies. It was generated by a scrape of vgchartz.com.
-
-Fields include:
-
-- Rank: Ranking of overall sales
-- Name: The game name
-- Platform: Platform of the game release (i.e. PC,PS4, etc.)
-- Year: Year of the game's release -- Genre: Genre of the game -- Publisher: Publisher of the game -- NA_{Sales}: Sales in North America (in millions) -- EU_{Sales}: Sales in Europe (in millions) -- JP_{Sales}: Sales in Japan (in millions) -- Other_{Sales}: Sales in the rest of the world (in millions) -- Global_{Sales}: Total worldwide sales. - -There are 16,598 records. 2 records were dropped due to incomplete information. - -* Import the Data -#+begin_src python -# Import the Python libraries we will be using -import pandas as pd -import numpy as np -import seaborn as sns; sns.set() -import matplotlib.pyplot as plt - -# Load the file using the path to the downloaded file -file = r'video_game_sales.csv' -df = pd.read_csv(file) -df -#+end_src - -#+caption: Dataframe Results -[[https://img.cleberg.net/blog/20200720-data-exploration-video-game-sales/01_dataframe-min.png]] - -* Explore the Data -#+begin_src python -# With the description function, we can see the basic stats. For example, we can -# also see that the 'Year' column has some incomplete values. -df.describe() -#+end_src - -#+caption: df.describe() -[[https://img.cleberg.net/blog/20200720-data-exploration-video-game-sales/02_describe-min.png]] - -#+begin_src python -# This function shows the rows and columns of NaN values. 
For example, df[179,3] = nan -np.where(pd.isnull(df)) - -(array([179, ..., 16553], dtype=int64), - array([3, ..., 5], dtype=int64)) -#+end_src - -* Visualize the Data -#+begin_src python -# This function plots the global sales by platform -sns.catplot(x='Platform', y='Global_Sales', data=df, jitter=False).set_xticklabels(rotation=90) -#+end_src - -#+caption: Plot of Global Sales by Platform -[[https://img.cleberg.net/blog/20200720-data-exploration-video-game-sales/03_plot-min.png]] - -#+begin_src python -# This function plots the global sales by genre -sns.catplot(x='Genre', y='Global_Sales', data=df, jitter=False).set_xticklabels(rotation=45) -#+end_src - -#+caption: Plot of Global Sales by Genre -[[https://img.cleberg.net/blog/20200720-data-exploration-video-game-sales/04_plot-min.png]] - -#+begin_src python -# This function plots the global sales by year -sns.lmplot(x='Year', y='Global_Sales', data=df).set_xticklabels(rotation=45) -#+end_src - -#+caption: Plot of Global Sales by Year -[[https://img.cleberg.net/blog/20200720-data-exploration-video-game-sales/05_plot-min.png]] - -#+begin_src python -# This function plots four different lines to show sales from different regions. 
-# The global sales plot line is commented-out, but can be included for comparison -df2 = df.groupby('Year').sum() -years = range(1980,2019) - -a = df2['NA_Sales'] -b = df2['EU_Sales'] -c = df2['JP_Sales'] -d = df2['Other_Sales'] -# e = df2['Global_Sales'] - -fig, ax = plt.subplots(figsize=(12,12)) -ax.set_ylabel('Region Sales (in Millions)') -ax.set_xlabel('Year') - -ax.plot(years, a, label='NA_Sales') -ax.plot(years, b, label='EU_Sales') -ax.plot(years, c, label='JP_Sales') -ax.plot(years, d, label='Other_Sales') -# ax.plot(years, e, label='Global_Sales') - -ax.legend() -plt.show() -#+end_src - -#+caption: Plot of Regional Sales by Year -[[https://img.cleberg.net/blog/20200720-data-exploration-video-game-sales/06_plot-min.png]] - -** Investigate Outliers -#+begin_src python -# Find the game with the highest sales in North America -df.loc[df['NA_Sales'].idxmax()] - -Rank 1 -Name Wii Sports -Platform Wii -Year 2006 -Genre Sports -Publisher Nintendo -NA_Sales 41.49 -EU_Sales 29.02 -JP_Sales 3.77 -Other_Sales 8.46 -Global_Sales 82.74 -Name: 0, dtype: object - -# Explore statistics in the year 2006 (highest selling year) -df3 = df[(df['Year'] == 2006)] -df3.describe() -#+end_src - -#+caption: Descriptive Statistics of 2006 Sales -[[https://img.cleberg.net/blog/20200720-data-exploration-video-game-sales/07_2006_stats-min.png]] - -#+begin_src python -# Plot the results of the previous dataframe (games from 2006) - we can see the year's results were largely carried by Wii Sports -sns.catplot(x="Genre", y="Global_Sales", data=df3, jitter=False).set_xticklabels(rotation=45) -#+end_src - -#+caption: Plot of 2006 Sales -[[https://img.cleberg.net/blog/20200720-data-exploration-video-game-sales/08_plot-min.png]] - -#+begin_src python -# We can see 4 outliers in the graph above, so let's get the top 5 games from that dataframe -# The results below show that Nintendo had all top 5 games (3 on the Wii and 2 on the DS) -df3.sort_values(by=['Global_Sales'], ascending=False).head(5) 
-#+end_src
-
-#+caption: Outliers of 2006 Sales
-[[https://img.cleberg.net/blog/20200720-data-exploration-video-game-sales/09_outliers-min.png]]
-
-* Discussion
-The purpose of exploring datasets is to ask questions, answer questions, and
-discover intelligence that can be used to inform decision-making. So, what have
-we found in this dataset?
-
-Today we simply explored a publicly-available dataset to see what kind of
-information it contained. During that exploration, we found that video game
-sales peaked in 2006. That peak was largely due to Nintendo, who sold the top 5
-games in 2006 and has a number of games in the top-10 list for the years
-1980-2020. Additionally, the top four platforms by global sales (Wii, NES, GB,
-DS) are owned by Nintendo.
-
-We didn't explore everything this dataset has to offer, but we can tell from a
-brief analysis that Nintendo seems to rule sales in the video gaming world.
-Further analysis could provide insight into which genres, regions, publishers,
-or world events are correlated with sales.
diff --git a/content/blog/2020-07-26-business-analysis.md b/content/blog/2020-07-26-business-analysis.md
new file mode 100644
index 0000000..7f96ac1
--- /dev/null
+++ b/content/blog/2020-07-26-business-analysis.md
@@ -0,0 +1,389 @@
++++
+date = 2020-07-26
+title = "Algorithmically Analyzing Local Businesses"
+description = "Exploring and visualizing data with Python."
+draft = false
++++
+
+# Background Information
+
+This project aims to help investors learn more about a random city in
+order to determine optimal locations for business investments. The data
+used in this project was obtained using Foursquare's developer API.
+
+Fields include:
+
+- Venue Name
+- Venue Category
+- Venue Latitude
+- Venue Longitude
+
+There are 232 records found using the center of Lincoln as the area of
+interest with a radius of 10,000 meters.
+
+# Import the Data
+
+The first step is the simplest: import the applicable libraries. We will
+be using the libraries below for this project.
+
+``` python
+# Import the Python libraries we will be using
+import pandas as pd
+import requests
+import folium
+import math
+import json
+from pandas.io.json import json_normalize
+from sklearn.cluster import KMeans
+```
+
+To begin our analysis, we need to import the data for this project. The
+data we are using in this project comes directly from the Foursquare
+API. The first step is to get the latitude and longitude of the city
+being studied (Lincoln, NE) and set up the folium map.
+
+``` python
+# Define the latitude and longitude, then map the results
+latitude = 40.806862
+longitude = -96.681679
+map_LNK = folium.Map(location=[latitude, longitude], zoom_start=12)
+
+map_LNK
+```
+
+![Blank
+Map](https://img.cleberg.net/blog/20200726-ibm-data-science/01_blank_map-min.png)
+
+Now that we have defined our city and created the map, we need to fetch
+the business data. The Foursquare API limits the results to 100 per
+API call, so we use our first API call below to determine the total
+results that Foursquare has found. Since the total results are 232, we
+perform the API fetching process three times (100 + 100 + 32 = 232).
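
Since the three requests below differ only in their `offset` parameter, the paging arithmetic can be sketched on its own (a standalone illustration — `offsets_for` is a hypothetical helper, not part of the original notebook):

``` python
import math

def offsets_for(total_results, limit=100):
    """Return the offset for each page needed to cover total_results items."""
    pages = math.ceil(total_results / limit)
    return [page * limit for page in range(pages)]

# For the 232 venues found here, three pages are needed:
print(offsets_for(232))  # [0, 100, 200]
```

Each offset would then be substituted into the same URL template used in the calls below.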
+
+``` python
+# Foursquare API credentials
+CLIENT_ID = 'your-client-id'
+CLIENT_SECRET = 'your-client-secret'
+VERSION = '20180604'
+
+# Set up the URL to fetch the first 100 results
+LIMIT = 100
+radius = 10000
+url = 'https://api.foursquare.com/v2/venues/explore?&client_id={}&client_secret={}&v={}&ll={},{}&radius={}&limit={}'.format(
+    CLIENT_ID,
+    CLIENT_SECRET,
+    VERSION,
+    latitude,
+    longitude,
+    radius,
+    LIMIT)
+
+# Fetch the first 100 results
+results = requests.get(url).json()
+
+# Determine the total number of results needed to fetch
+totalResults = results['response']['totalResults']
+totalResults
+
+# Set up the URL to fetch the second 100 results (101-200)
+LIMIT = 100
+offset = 100
+radius = 10000
+url2 = 'https://api.foursquare.com/v2/venues/explore?&client_id={}&client_secret={}&v={}&ll={},{}&radius={}&limit={}&offset={}'.format(
+    CLIENT_ID,
+    CLIENT_SECRET,
+    VERSION,
+    latitude,
+    longitude,
+    radius,
+    LIMIT,
+    offset)
+
+# Fetch the second 100 results (101-200)
+results2 = requests.get(url2).json()
+
+# Set up the URL to fetch the final results (201 - 232)
+LIMIT = 100
+offset = 200
+radius = 10000
+url3 = 'https://api.foursquare.com/v2/venues/explore?&client_id={}&client_secret={}&v={}&ll={},{}&radius={}&limit={}&offset={}'.format(
+    CLIENT_ID,
+    CLIENT_SECRET,
+    VERSION,
+    latitude,
+    longitude,
+    radius,
+    LIMIT,
+    offset)
+
+# Fetch the final results (201 - 232)
+results3 = requests.get(url3).json()
+```
+
+# Clean the Data
+
+Now that we have our data in three separate dataframes, we need to
+combine them into a single dataframe and make sure to reset the index so
+that we have a unique ID for each business. The
+`get_category_type` function below will pull the categories
+and name from each business's entry in the Foursquare data
+automatically. Once all the data has been labeled and combined, the
+results are stored in the `nearby_venues` dataframe.
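
One caveat for anyone re-running this notebook on a current stack: `json_normalize` moved from `pandas.io.json` to the top-level `pandas` namespace in pandas 1.0, and `DataFrame.append` (used below) was removed in pandas 2.0 in favor of `pd.concat`. A minimal sketch of the same flattening on a hand-made record (illustrative data, not a real API response):

``` python
import pandas as pd

# A hand-made item shaped like a Foursquare 'explore' result (illustrative only)
items = [
    {"venue": {"name": "Example Cafe",
               "categories": [{"name": "Coffee Shop"}],
               "location": {"lat": 40.8, "lng": -96.7}}},
]

# In pandas >= 1.0, json_normalize lives at the top level and
# flattens nested dicts into dot-separated column names
flat = pd.json_normalize(items)
print(sorted(flat.columns))
```

The resulting `venue.name`, `venue.location.lat`, and `venue.location.lng` columns are exactly the ones filtered out below.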
+
+``` python
+# This function will extract the category of the venue from the API dictionary
+def get_category_type(row):
+    try:
+        categories_list = row['categories']
+    except:
+        categories_list = row['venue.categories']
+
+    if len(categories_list) == 0:
+        return None
+    else:
+        return categories_list[0]['name']
+
+# Get the first 100 venues
+venues = results['response']['groups'][0]['items']
+nearby_venues = json_normalize(venues)
+
+# filter columns
+filtered_columns = ['venue.name', 'venue.categories', 'venue.location.lat', 'venue.location.lng']
+nearby_venues = nearby_venues.loc[:, filtered_columns]
+
+# filter the category for each row
+nearby_venues['venue.categories'] = nearby_venues.apply(get_category_type, axis=1)
+
+# clean columns
+nearby_venues.columns = [col.split(".")[-1] for col in nearby_venues.columns]
+
+# ---
+
+# Get the second 100 venues
+venues2 = results2['response']['groups'][0]['items']
+nearby_venues2 = json_normalize(venues2) # flatten JSON
+
+# filter columns
+filtered_columns2 = ['venue.name', 'venue.categories', 'venue.location.lat', 'venue.location.lng']
+nearby_venues2 = nearby_venues2.loc[:, filtered_columns2]
+
+# filter the category for each row
+nearby_venues2['venue.categories'] = nearby_venues2.apply(get_category_type, axis=1)
+
+# clean columns
+nearby_venues2.columns = [col.split(".")[-1] for col in nearby_venues2.columns]
+nearby_venues = nearby_venues.append(nearby_venues2)
+
+# ---
+
+# Get the rest of the venues
+venues3 = results3['response']['groups'][0]['items']
+nearby_venues3 = json_normalize(venues3) # flatten JSON
+
+# filter columns
+filtered_columns3 = ['venue.name', 'venue.categories', 'venue.location.lat', 'venue.location.lng']
+nearby_venues3 = nearby_venues3.loc[:, filtered_columns3]
+
+# filter the category for each row
+nearby_venues3['venue.categories'] = nearby_venues3.apply(get_category_type, axis=1)
+
+# clean columns
+nearby_venues3.columns = [col.split(".")[-1] for col in nearby_venues3.columns]
+
+nearby_venues = nearby_venues.append(nearby_venues3)
+nearby_venues = nearby_venues.reset_index(drop=True)
+nearby_venues
+```
+
+![Clean
+Data](https://img.cleberg.net/blog/20200726-ibm-data-science/02_clean_data-min.png)
+
+# Visualize the Data
+
+We now have a complete, clean data set. The next step is to visualize
+this data onto the map we created earlier. We will be using folium's
+`CircleMarker()` function to do this.
+
+``` python
+# add markers to map
+for lat, lng, name, categories in zip(nearby_venues['lat'], nearby_venues['lng'], nearby_venues['name'], nearby_venues['categories']):
+    label = '{} ({})'.format(name, categories)
+    label = folium.Popup(label, parse_html=True)
+    folium.CircleMarker(
+        [lat, lng],
+        radius=5,
+        popup=label,
+        color='blue',
+        fill=True,
+        fill_color='#3186cc',
+        fill_opacity=0.7,
+    ).add_to(map_LNK)
+
+map_LNK
+```
+
+![Initial data
+map](https://img.cleberg.net/blog/20200726-ibm-data-science/03_data_map-min.png)
+
+# Clustering: *k-means*
+
+To cluster the data, we will be using the *k-means* algorithm. This
+algorithm is iterative and will automatically make sure that data points
+in each cluster are as close as possible to each other, while being as
+far as possible from other clusters.
+
+However, we first have to figure out how many clusters to use (defined
+as the variable *k*). To do so, we will use the next two functions
+to calculate the sum of squares within clusters and then return the
+optimal number of clusters.
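
The knee-point rule implemented below picks the k whose WCSS value lies farthest from the straight line joining the first and last points of the elbow curve. That geometry can be sanity-checked on a synthetic curve (a standalone sketch — `knee_of` is a hypothetical name and the numbers are made up):

``` python
import math

def knee_of(wcss, k_start=2):
    """Return the k whose point lies farthest from the chord between the curve's endpoints."""
    x1, y1 = k_start, wcss[0]
    x2, y2 = k_start + len(wcss) - 1, wcss[-1]
    denom = math.sqrt((y2 - y1) ** 2 + (x2 - x1) ** 2)
    best_k, best_d = k_start, -1.0
    for i, y0 in enumerate(wcss):
        x0 = k_start + i
        # perpendicular distance from (x0, y0) to the chord
        d = abs((y2 - y1) * x0 - (x2 - x1) * y0 + x2 * y1 - y2 * x1) / denom
        if d > best_d:
            best_k, best_d = x0, d
    return best_k

# Steep drop until k=6, then a long flat tail — the bend sits at 6
wcss = [400, 300, 210, 130, 60, 20, 15, 12, 10, 9]
print(knee_of(wcss))  # 6
```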
+
+``` python
+# This function will return the sum of squares found in the data
+def calculate_wcss(data):
+    wcss = []
+    for n in range(2, 21):
+        kmeans = KMeans(n_clusters=n)
+        kmeans.fit(X=data)
+        wcss.append(kmeans.inertia_)
+
+    return wcss
+
+# Drop 'str' cols so we can use k-means clustering
+cluster_df = nearby_venues.drop(columns=['name', 'categories'])
+
+# calculate the within-cluster sum of squares for k = 2 through 20
+sum_of_squares = calculate_wcss(cluster_df)
+
+# This function will return the optimal number of clusters
+def optimal_number_of_clusters(wcss):
+    x1, y1 = 2, wcss[0]
+    x2, y2 = 20, wcss[len(wcss)-1]
+
+    distances = []
+    for i in range(len(wcss)):
+        x0 = i+2
+        y0 = wcss[i]
+        numerator = abs((y2-y1)*x0 - (x2-x1)*y0 + x2*y1 - y2*x1)
+        denominator = math.sqrt((y2 - y1)**2 + (x2 - x1)**2)
+        distances.append(numerator/denominator)
+
+    return distances.index(max(distances)) + 2
+
+# calculate the optimal number of clusters
+n = optimal_number_of_clusters(sum_of_squares)
+```
+
+Now that we have found that our optimal number of clusters is six, we
+need to perform k-means clustering. When this clustering occurs, each
+business is assigned a cluster number from 0 to 5 in the dataframe.
+
+``` python
+# set number of clusters equal to the optimal number
+kclusters = n
+
+# run k-means clustering
+kmeans = KMeans(n_clusters=kclusters, random_state=0).fit(cluster_df)
+
+# add clustering labels to dataframe
+nearby_venues.insert(0, 'Cluster Labels', kmeans.labels_)
+```
+
+Success! We now have a dataframe with clean business data, along with a
+cluster number for each business. Now let's map the data using six
+different colors.
+
+``` python
+# create map with clusters
+map_clusters = folium.Map(location=[latitude, longitude], zoom_start=12)
+colors = ['#0F9D58', '#DB4437', '#4285F4', '#800080', '#ce12c0', '#171717']
+
+# add markers to the map
+for lat, lng, name, categories, cluster in zip(nearby_venues['lat'], nearby_venues['lng'], nearby_venues['name'], nearby_venues['categories'], nearby_venues['Cluster Labels']):
+    label = '[{}] {} ({})'.format(cluster, name, categories)
+    label = folium.Popup(label, parse_html=True)
+    folium.CircleMarker(
+        [lat, lng],
+        radius=5,
+        popup=label,
+        color=colors[int(cluster)],
+        fill=True,
+        fill_color=colors[int(cluster)],
+        fill_opacity=0.7).add_to(map_clusters)
+
+map_clusters
+```
+
+![Clustered
+Map](https://img.cleberg.net/blog/20200726-ibm-data-science/04_clusters-min.png)
+
+# Investigate Clusters
+
+Now that we have figured out our clusters, let's do a little more
+analysis to provide more insight into the clusters. With the information
+below, we can see which clusters are more popular for businesses and
+which are less popular. The results below show us that clusters 0
+through 3 are popular, while clusters 4 and 5 are not very popular at
+all.
+
+``` python
+# Show how many venues are in each cluster
+color_names = ['Dark Green', 'Red', 'Blue', 'Purple', 'Pink', 'Black']
+for x in range(0,6):
+    print("Color of Cluster", x, ":", color_names[x])
+    print("Venues found in Cluster", x, ":", nearby_venues.loc[nearby_venues['Cluster Labels'] == x, nearby_venues.columns[:]].shape[0])
+    print("---")
+```
+
+![Venues per
+Cluster](https://img.cleberg.net/blog/20200726-ibm-data-science/05_venues_per_cluster-min.png)
+
+Our last piece of analysis is to summarize the categories of businesses
+within each cluster. With these results, we can clearly see that
+restaurants, coffee shops, and grocery stores are the most popular.
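
The per-cluster summaries below lean on pandas `groupby`; the same tally can be sketched with the standard library alone (the venue tuples here are made-up sample data, not the real Foursquare results):

``` python
from collections import Counter

# (cluster label, venue category) pairs — made-up sample data
venues = [
    (0, "Coffee Shop"), (0, "Coffee Shop"), (0, "Pizza Place"),
    (1, "Grocery Store"), (1, "Grocery Store"), (1, "Bank"),
]

# Tally categories within each cluster
per_cluster = {}
for cluster, category in venues:
    per_cluster.setdefault(cluster, Counter())[category] += 1

print(per_cluster[0].most_common(1))  # [('Coffee Shop', 2)]
```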
+
+``` python
+# Calculate how many venues there are in each category
+# Sort from largest to smallest
+temp_df = nearby_venues.drop(columns=['name', 'lat', 'lng'])
+
+cluster0_grouped = temp_df.loc[temp_df['Cluster Labels'] == 0].groupby(['categories']).count().sort_values(by='Cluster Labels', ascending=False)
+cluster1_grouped = temp_df.loc[temp_df['Cluster Labels'] == 1].groupby(['categories']).count().sort_values(by='Cluster Labels', ascending=False)
+cluster2_grouped = temp_df.loc[temp_df['Cluster Labels'] == 2].groupby(['categories']).count().sort_values(by='Cluster Labels', ascending=False)
+cluster3_grouped = temp_df.loc[temp_df['Cluster Labels'] == 3].groupby(['categories']).count().sort_values(by='Cluster Labels', ascending=False)
+cluster4_grouped = temp_df.loc[temp_df['Cluster Labels'] == 4].groupby(['categories']).count().sort_values(by='Cluster Labels', ascending=False)
+cluster5_grouped = temp_df.loc[temp_df['Cluster Labels'] == 5].groupby(['categories']).count().sort_values(by='Cluster Labels', ascending=False)
+
+# show how many venues there are in each cluster (> 1)
+with pd.option_context('display.max_rows', None, 'display.max_columns', None):
+    print("\n\n", "Cluster 0:", "\n", cluster0_grouped.loc[cluster0_grouped['Cluster Labels'] > 1])
+    print("\n\n", "Cluster 1:", "\n", cluster1_grouped.loc[cluster1_grouped['Cluster Labels'] > 1])
+    print("\n\n", "Cluster 2:", "\n", cluster2_grouped.loc[cluster2_grouped['Cluster Labels'] > 1])
+    print("\n\n", "Cluster 3:", "\n", cluster3_grouped.loc[cluster3_grouped['Cluster Labels'] > 1])
+    print("\n\n", "Cluster 4:", "\n", cluster4_grouped.loc[cluster4_grouped['Cluster Labels'] > 1])
+    print("\n\n", "Cluster 5:", "\n", cluster5_grouped.loc[cluster5_grouped['Cluster Labels'] > 1])
+```
+
+![Venues per Cluster, pt.
+1](https://img.cleberg.net/blog/20200726-ibm-data-science/06_categories_per_cluster_pt1-min.png)
+
+![Venues per Cluster, pt.
+2](https://img.cleberg.net/blog/20200726-ibm-data-science/07_categories_per_cluster_pt2-min.png)
+
+# Discussion
+
+In this project, we gathered location data for Lincoln, Nebraska, USA
+and clustered the data using the k-means algorithm in order to identify
+the unique clusters of businesses in Lincoln. Through these actions, we
+found that there are six unique business clusters in Lincoln and that
+two of the clusters are likely unsuitable for investors. The remaining
+four clusters have a variety of businesses, but are largely dominated by
+restaurants and grocery stores.
+
+Using this project, investors can now make more informed decisions when
+deciding the location and category of business in which to invest.
+
+Further studies may involve other attributes for business locations,
+such as population density, average wealth across the city, or crime
+rates. In addition, further studies may include additional location data
+and businesses by utilizing multiple sources, such as Google Maps and
+OpenStreetMap.
diff --git a/content/blog/2020-07-26-business-analysis.org b/content/blog/2020-07-26-business-analysis.org
deleted file mode 100644
index 098dce7..0000000
--- a/content/blog/2020-07-26-business-analysis.org
+++ /dev/null
@@ -1,374 +0,0 @@
-#+title: Algorithmically Analyzing Local Businesses
-#+date: 2020-07-26
-#+description: Exploring and visualizing data with Python.
-#+filetags: :data:
-
-* Background Information
-This project aims to help investors learn more about a random city in order to
-determine optimal locations for business investments. The data used in this
-project was obtained using Foursquare's developer API.
-
-Fields include:
-
-- Venue Name
-- Venue Category
-- Venue Latitude
-- Venue Longitude
-
-There are 232 records found using the center of Lincoln as the area of interest
-with a radius of 10,000.
-
-* Import the Data
-The first step is the simplest: import the applicable libraries.
We will be -using the libraries below for this project. - -#+begin_src python -# Import the Python libraries we will be using -import pandas as pd -import requests -import folium -import math -import json -from pandas.io.json import json_normalize -from sklearn.cluster import KMeans -#+end_src - -To begin our analysis, we need to import the data for this project. The data we -are using in this project comes directly from the Foursquare API. The first step -is to get the latitude and longitude of the city being studied (Lincoln, NE) and -setting up the folium map. - -#+begin_src python -# Define the latitude and longitude, then map the results -latitude = 40.806862 -longitude = -96.681679 -map_LNK = folium.Map(location=[latitude, longitude], zoom_start=12) - -map_LNK -#+end_src - -#+caption: Blank Map -[[https://img.cleberg.net/blog/20200726-ibm-data-science/01_blank_map-min.png]] - -Now that we have defined our city and created the map, we need to go get the -business data. The Foursquare API will limit the results to 100 per API call, so -we use our first API call below to determine the total results that Foursquare -has found. Since the total results are 232, we perform the API fetching process -three times (100 + 100 + 32 = 232). 
- -#+begin_src python -# Foursquare API credentials -CLIENT_ID = 'your-client-id' -CLIENT_SECRET = 'your-client-secret' -VERSION = '20180604' - -# Set up the URL to fetch the first 100 results -LIMIT = 100 -radius = 10000 -url = 'https://api.foursquare.com/v2/venues/explore?&client_id={}&client_secret={}&v={}&ll={},{}&radius={}&limit={}'.format( - CLIENT_ID, - CLIENT_SECRET, - VERSION, - latitude, - longitude, - radius, - LIMIT) - -# Fetch the first 100 results -results = requests.get(url).json() - -# Determine the total number of results needed to fetch -totalResults = results['response']['totalResults'] -totalResults - -# Set up the URL to fetch the second 100 results (101-200) -LIMIT = 100 -offset = 100 -radius = 10000 -url2 = 'https://api.foursquare.com/v2/venues/explore?&client_id={}&client_secret={}&v={}&ll={},{}&radius={}&limit={}&offset={}'.format( - CLIENT_ID, - CLIENT_SECRET, - VERSION, - latitude, - longitude, - radius, - LIMIT, - offset) - -# Fetch the second 100 results (101-200) -results2 = requests.get(url2).json() - -# Set up the URL to fetch the final results (201 - 232) -LIMIT = 100 -offset = 200 -radius = 10000 -url3 = 'https://api.foursquare.com/v2/venues/explore?&client_id={}&client_secret={}&v={}&ll={},{}&radius={}&limit={}&offset={}'.format( - CLIENT_ID, - CLIENT_SECRET, - VERSION, - latitude, - longitude, - radius, - LIMIT, - offset) - -# Fetch the final results (201 - 232) -results3 = requests.get(url3).json() -#+end_src - -* Clean the Data -Now that we have our data in three separate dataframes, we need to combine them -into a single dataframe and make sure to reset the index so that we have a -unique ID for each business. The =get~categorytype~= function below will pull -the categories and name from each business's entry in the Foursquare data -automatically. Once all the data has been labeled and combined, the results are -stored in the =nearby_venues= dataframe. 
- -#+begin_src python -# This function will extract the category of the venue from the API dictionary -def get_category_type(row): - try: - categories_list = row['categories'] - except: - categories_list = row['venue.categories'] - - if len(categories_list) == 0: - return None - else: - return categories_list[0]['name'] - -# Get the first 100 venues -venues = results['response']['groups'][0]['items'] -nearby_venues = json_normalize(venues) - -# filter columns -filtered_columns = ['venue.name', 'venue.categories', 'venue.location.lat', 'venue.location.lng'] -nearby_venues = nearby_venues.loc[:, filtered_columns] - -# filter the category for each row -nearby_venues['venue.categories'] = nearby_venues.apply(get_category_type, axis=1) - -# clean columns -nearby_venues.columns = [col.split(".")[-1] for col in nearby_venues.columns] - ---- - -# Get the second 100 venues -venues2 = results2['response']['groups'][0]['items'] -nearby_venues2 = json_normalize(venues2) # flatten JSON - -# filter columns -filtered_columns2 = ['venue.name', 'venue.categories', 'venue.location.lat', 'venue.location.lng'] -nearby_venues2 = nearby_venues2.loc[:, filtered_columns] - -# filter the category for each row -nearby_venues2['venue.categories'] = nearby_venues2.apply(get_category_type, axis=1) - -# clean columns -nearby_venues2.columns = [col.split(".")[-1] for col in nearby_venues.columns] -nearby_venues = nearby_venues.append(nearby_venues2) - ---- - -# Get the rest of the venues -venues3 = results3['response']['groups'][0]['items'] -nearby_venues3 = json_normalize(venues3) # flatten JSON - -# filter columns -filtered_columns3 = ['venue.name', 'venue.categories', 'venue.location.lat', 'venue.location.lng'] -nearby_venues3 = nearby_venues3.loc[:, filtered_columns] - -# filter the category for each row -nearby_venues3['venue.categories'] = nearby_venues3.apply(get_category_type, axis=1) - -# clean columns -nearby_venues3.columns = [col.split(".")[-1] for col in nearby_venues3.columns] - 
-nearby_venues = nearby_venues.append(nearby_venues3)
-nearby_venues = nearby_venues.reset_index(drop=True)
-nearby_venues
-#+end_src
-
-#+caption: Clean Data
-[[https://img.cleberg.net/blog/20200726-ibm-data-science/02_clean_data-min.png]]
-
-* Visualize the Data
-We now have a complete, clean data set. The next step is to visualize this data
-on the map we created earlier. We will be using folium's =CircleMarker()=
-function to do this.
-
-#+begin_src python
-# add markers to map
-for lat, lng, name, categories in zip(nearby_venues['lat'], nearby_venues['lng'], nearby_venues['name'], nearby_venues['categories']):
-    label = '{} ({})'.format(name, categories)
-    label = folium.Popup(label, parse_html=True)
-    folium.CircleMarker(
-        [lat, lng],
-        radius=5,
-        popup=label,
-        color='blue',
-        fill=True,
-        fill_color='#3186cc',
-        fill_opacity=0.7,
-    ).add_to(map_LNK)
-
-map_LNK
-#+end_src
-
-#+caption: Initial data map
-[[https://img.cleberg.net/blog/20200726-ibm-data-science/03_data_map-min.png]]
-
-* Clustering: /k-means/
-To cluster the data, we will be using the /k-means/ algorithm. This algorithm is
-iterative and will automatically make sure that data points in each cluster are
-as close as possible to each other, while being as far as possible away from
-other clusters.
-
-However, we first have to figure out how many clusters to use (defined as the
-variable /'k'/). To do so, we will use the next two functions to calculate the
-sum of squares within clusters and then return the optimal number of clusters.
- -#+begin_src python -# This function will return the sum of squares found in the data -def calculate_wcss(data): - wcss = [] - for n in range(2, 21): - kmeans = KMeans(n_clusters=n) - kmeans.fit(X=data) - wcss.append(kmeans.inertia_) - - return wcss - -# Drop 'str' cols so we can use k-means clustering -cluster_df = nearby_venues.drop(columns=['name', 'categories']) - -# calculating the within clusters sum-of-squares for 19 cluster amounts -sum_of_squares = calculate_wcss(cluster_df) - -# This function will return the optimal number of clusters -def optimal_number_of_clusters(wcss): - x1, y1 = 2, wcss[0] - x2, y2 = 20, wcss[len(wcss)-1] - - distances = [] - for i in range(len(wcss)): - x0 = i+2 - y0 = wcss[i] - numerator = abs((y2-y1)*x0 - (x2-x1)*y0 + x2*y1 - y2*x1) - denominator = math.sqrt((y2 - y1)**2 + (x2 - x1)**2) - distances.append(numerator/denominator) - - return distances.index(max(distances)) + 2 - -# calculating the optimal number of clusters -n = optimal_number_of_clusters(sum_of_squares) -#+end_src - -Now that we have found that our optimal number of clusters is six, we need to -perform k-means clustering. When this clustering occurs, each business is -assigned a cluster number from 0 to 5 in the dataframe. - -#+begin_src python -# set number of clusters equal to the optimal number -kclusters = n - -# run k-means clustering -kmeans = KMeans(n_clusters=kclusters, random_state=0).fit(cluster_df) - -# add clustering labels to dataframe -nearby_venues.insert(0, 'Cluster Labels', kmeans.labels_) -#+end_src - -Success! We now have a dataframe with clean business data, along with a cluster -number for each business. Now let's map the data using six different colors. 
- -#+begin_src python -# create map with clusters -map_clusters = folium.Map(location=[latitude, longitude], zoom_start=12) -colors = ['#0F9D58', '#DB4437', '#4285F4', '#800080', '#ce12c0', '#171717'] - -# add markers to the map -for lat, lng, name, categories, cluster in zip(nearby_venues['lat'], nearby_venues['lng'], nearby_venues['name'], nearby_venues['categories'], nearby_venues['Cluster Labels']): - label = '[{}] {} ({})'.format(cluster, name, categories) - label = folium.Popup(label, parse_html=True) - folium.CircleMarker( - [lat, lng], - radius=5, - popup=label, - color=colors[int(cluster)], - fill=True, - fill_color=colors[int(cluster)], - fill_opacity=0.7).add_to(map_clusters) - -map_clusters -#+end_src - -#+caption: Clustered Map -[[https://img.cleberg.net/blog/20200726-ibm-data-science/04_clusters-min.png]] - -* Investigate Clusters -Now that we have figured out our clusters, let's do a little more analysis to -provide more insight into the clusters. With the information below, we can see -which clusters are more popular for businesses and which are less popular. The -results below show us that clusters 0 through 3 are popular, while clusters 4 -and 5 are not very popular at all. - -#+begin_src python -# Show how many venues are in each cluster -color_names = ['Dark Green', 'Red', 'Blue', 'Purple', 'Pink', 'Black'] -for x in range(0,6): - print("Color of Cluster", x, ":", color_names[x]) - print("Venues found in Cluster", x, ":", nearby_venues.loc[nearby_venues['Cluster Labels'] == x, nearby_venues.columns[:]].shape[0]) - print("---") -#+end_src - -#+caption: Venues per Cluster -[[https://img.cleberg.net/blog/20200726-ibm-data-science/05_venues_per_cluster-min.png]] - -Our last piece of analysis is to summarize the categories of businesses within -each cluster. With these results, we can clearly see that restaurants, coffee -shops, and grocery stores are the most popular. 
- -#+begin_src python -# Calculate how many venues there are in each category -# Sort from largest to smallest -temp_df = nearby_venues.drop(columns=['name', 'lat', 'lng']) - -cluster0_grouped = temp_df.loc[temp_df['Cluster Labels'] == 0].groupby(['categories']).count().sort_values(by='Cluster Labels', ascending=False) -cluster1_grouped = temp_df.loc[temp_df['Cluster Labels'] == 1].groupby(['categories']).count().sort_values(by='Cluster Labels', ascending=False) -cluster2_grouped = temp_df.loc[temp_df['Cluster Labels'] == 2].groupby(['categories']).count().sort_values(by='Cluster Labels', ascending=False) -cluster3_grouped = temp_df.loc[temp_df['Cluster Labels'] == 3].groupby(['categories']).count().sort_values(by='Cluster Labels', ascending=False) -cluster4_grouped = temp_df.loc[temp_df['Cluster Labels'] == 4].groupby(['categories']).count().sort_values(by='Cluster Labels', ascending=False) -cluster5_grouped = temp_df.loc[temp_df['Cluster Labels'] == 5].groupby(['categories']).count().sort_values(by='Cluster Labels', ascending=False) - -# show how many venues there are in each cluster (> 1) -with pd.option_context('display.max_rows', None, 'display.max_columns', None): - print("\n\n", "Cluster 0:", "\n", cluster0_grouped.loc[cluster0_grouped['Cluster Labels'] > 1]) - print("\n\n", "Cluster 1:", "\n", cluster1_grouped.loc[cluster1_grouped['Cluster Labels'] > 1]) - print("\n\n", "Cluster 2:", "\n", cluster2_grouped.loc[cluster2_grouped['Cluster Labels'] > 1]) - print("\n\n", "Cluster 3:", "\n", cluster3_grouped.loc[cluster3_grouped['Cluster Labels'] > 1]) - print("\n\n", "Cluster 4:", "\n", cluster4_grouped.loc[cluster4_grouped['Cluster Labels'] > 1]) - print("\n\n", "Cluster 5:", "\n", cluster5_grouped.loc[cluster5_grouped['Cluster Labels'] > 1]) -#+end_src - -#+caption: Venues per Cluster, pt. 1 -[[https://img.cleberg.net/blog/20200726-ibm-data-science/06_categories_per_cluster_pt1-min.png]] - -#+caption: Venues per Cluster, pt. 
2 -[[https://img.cleberg.net/blog/20200726-ibm-data-science/07_categories_per_cluster_pt2-min.png]] - -* Discussion -In this project, we gathered location data for Lincoln, Nebraska, USA and -clustered the data using the k-means algorithm in order to identify the unique -clusters of businesses in Lincoln. Through these actions, we found that there -are six unique business clusters in Lincoln and that two of the clusters are -likely unsuitable for investors. The remaining four clusters have a variety of -businesses, but are largely dominated by restaurants and grocery stores. - -Using this project, investors can now make more informed decisions when deciding -the location and category of business in which to invest. - -Further studies may involve other attributes for business locations, such as -population density, average wealth across the city, or crime rates. In addition, -further studies may include additional location data and businesses by utilizing -multiple sources, such as Google Maps and OpenStreetMap. diff --git a/content/blog/2020-08-22-redirect-github-pages.md b/content/blog/2020-08-22-redirect-github-pages.md new file mode 100644 index 0000000..f92ae07 --- /dev/null +++ b/content/blog/2020-08-22-redirect-github-pages.md @@ -0,0 +1,137 @@ ++++ +date = 2020-08-22 +title = "Redirect GitHub Pages from Subdomain to the Top-Level Domain" +description = "" +draft = false ++++ + +# Short answer + +## Step 1 + +Add a new file CNAME to your GitHub Pages repository containing only one +line: your top-level domain name. E.g.: `example.com` + +## Step 2 + +\[Optional\] but highly recommended + +2.1: Remove all other top-level records (prefixed with @) of type A from +your DNS configuration. + +2.2: Remove a CNAME record for the second-level domain www if it is +present. 
+
+## Step 3
+
+Add these 5 entries to the very top of your DNS configuration:
+
+``` txt
+@ A 185.199.108.153
+@ A 185.199.109.153
+@ A 185.199.110.153
+@ A 185.199.111.153
+www CNAME your_github_username.github.io.
+```
+
+Replace `your_github_username` with your actual GitHub username.
+
+## Step 4
+
+Wait for your DNS changes to propagate. DNS changes aren't effective
+immediately. They can take up to a full day to propagate.
+
+# Long answer
+
+This issue has two sides. One is the DNS configuration itself. The
+other is the way GitHub Pages forwards HTTP requests.
+
+We need to know a few things to understand what GitHub is trying to say
+in its documentation.
+
+## DNS Entry Types
+
+There are two types of DNS records that interest us: `CNAME` and `A`.
+
+An `A` record is also known as an `Apex` or sometimes a `root entry`.
+It forwards requests to a specified fixed IP address. A `CNAME` record
+forwards requests to a specified URL (an actual plain-text URL, not an
+IP address).
+
+## DNS Load balancing
+
+GitHub has one central URL that accepts all DNS requests for GitHub
+Pages: `http://username.github.io`. That URL resolves to different IP
+addresses based on your geographical location. A website hosted on
+GitHub Pages is a simple collection of `HTML`, `CSS`, and `JS` files.
+GitHub distributes these files to servers across the globe so that when
+your browser sends a request from Europe, it receives data from a
+server in Europe. The same holds for requests from Asia and the USA.
+
+## What GitHub is trying to say
+
+Since `A` records in DNS must contain IP addresses, and those must be
+one of `185.199.108.153`, `185.199.109.153`, `185.199.110.153`, or
+`185.199.111.153`, there is no way to forward requests to a server
+located somewhere in Europe or Asia. Your website hosted at GitHub
+Pages will be downloaded from a central GitHub Pages server.
+There is a minor risk that if the GitHub Pages DNS servers
+(`x.x.x.153`) are down for some reason, all custom domains that use
+fixed GitHub Pages IP addresses will not be accessible (their DNS
+requests will not be resolvable).
+
+That is why GitHub strongly suggests either using a subdomain for your
+GitHub Pages (e.g., `blog.example.com`) or using a DNS service provider
+that supports an `ALIAS` record type, which acts like an `A` record but
+forwards requests to a URL (e.g., `username.github.io`) instead of a
+fixed IP address.
+
+## How GitHub Pages treats HTTP requests
+
+After a DNS request for `your_github_username.github.io` is resolved
+into an IP address, e.g., `185.199.108.153`, your browser sends an HTTP
+request to that server with an HTTP header `Host`. Below are `curl`
+examples that load the same website (these examples might not work if
+you are behind a proxy server):
+
+```sh
+curl --header "Host: your_github_username.github.io" http://185.199.108.153/
+curl --header "Host: www.example.com" http://185.199.108.153/
+curl --header "Host: example.com" http://185.199.108.153/
+```
+
+This is how GitHub Pages servers know which user's website to serve.
+
+> The GitHub Pages server will automatically redirect HTTP requests to
+> the top-level domain if your `CNAME` file contains `example.com` but
+> `www.example.com` is requested.
+>
+> The same applies if your `CNAME` file contains `www.example.com` but
+> the `Host` header in the HTTP request contains `example.com`.
+
+## Why can't I add a `CNAME` record entry that accepts a top-level request (`@`) to my DNS configuration?
+
+Quote from the GitHub Pages documentation:
+
+> Warning: Do not create a CNAME record for your custom apex domain!
+> Doing so may cause issues with other services, such as email, on that
+> domain.
+
+## References:
+
+1.  [Setting up a custom domain with GitHub
+    Pages](https://docs.github.com/en/github/working-with-github-pages/configuring-a-custom-domain-for-your-github-pages-site)
+2.  [My custom domain isn't
+    working](https://docs.github.com/en/github/working-with-github-pages/troubleshooting-custom-domains-and-github-pages)
+3.  [Cannot access my GitHub Pages website by IP
+    Address](https://serverfault.com/questions/589370/cannot-access-my-github-pages-website-by-ip-address)
+4.  [How do I set up GitHub Pages to redirect DNS requests from a
+    subdomain (e.g. www) to the top-level domain (TLD, Apex
+    record)?](https://stackoverflow.com/questions/23375422/how-do-i-set-up-github-pages-to-redirect-dns-requests-from-a-subdomain-e-g-www)
diff --git a/content/blog/2020-08-22-redirect-github-pages.org b/content/blog/2020-08-22-redirect-github-pages.org
deleted file mode 100644
index 562249d..0000000
--- a/content/blog/2020-08-22-redirect-github-pages.org
+++ /dev/null
@@ -1,110 +0,0 @@
-#+title: Redirect GitHub Pages from Subdomain to the Top-Level Domain
-#+date: 2020-08-22
-#+description: Learn how to redirect Github pages to the TLD.
-#+filetags: :sysadmin:
-
-* Short answer
-** Step 1
-Add a new file CNAME to your GitHub Pages repository containing only one line:
-your top-level domain name. E.g.: =example.com=
-
-** Step 2
-[Optional] but highly recommended
-
-2.1: Remove all other top-level records (prefixed with @) of type A from your
-DNS configuration.
-
-2.2: Remove a CNAME record for the second-level domain www if it is present.
-
-** Step 3
-Add these 5 entries to the very top of your DNS configuration:
-
-#+begin_src txt
-@ A 185.199.108.153
-@ A 185.199.109.153
-@ A 185.199.110.153
-@ A 185.199.111.153
-www CNAME your_github_username.github.io.
-#+end_src
-
-Replace =your_github_username= with your actual GitHub username.
-
-** Step 4
-Wait for your DNS changes to propagate. DNS changes aren't effective
-immediately. They can take up to a full day to propagate.
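While waiting for propagation, you can check whether your apex domain already resolves to the GitHub Pages addresses listed in Step 3. A minimal sketch using only the Python standard library; the `points_to_github_pages` helper name is mine, not part of any GitHub tooling:

```python
import socket  # only needed for the live lookup at the bottom

# GitHub Pages' published A-record addresses (the entries from Step 3).
GITHUB_PAGES_IPS = {
    "185.199.108.153",
    "185.199.109.153",
    "185.199.110.153",
    "185.199.111.153",
}

def points_to_github_pages(resolved_ips):
    # True only if the lookup returned something and every resolved
    # address is one of the GitHub Pages IPs.
    return bool(resolved_ips) and set(resolved_ips) <= GITHUB_PAGES_IPS

# Live check (requires network access); replace example.com with your domain:
# _, _, ips = socket.gethostbyname_ex("example.com")
# print(points_to_github_pages(ips))
```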
- -* Long answer -This issue has two sides. One is the DNS configuration itself. Another one is -the way GitHub Pages will forward HTTP requests. - -We need to know a few things to understand what GitHub is trying to say in their -documentation. - -** DNS Entry Types -There are two types of DNS records which interest us: CNAME and A. - -=A= is also known as =Apex= or sometimes as =root entry=. It forwards requests -to a specified fixed IP address. =CNAME= entry forwards requests to a specified -URL (actual valid plain text URL, not an IP address). - -** DNS Load balancing -GitHub has one central URL address which accepts all DNS requests for GitHub -Pages: =http://username.github.io=. That URL is resolved to different IP -addresses based on your geographical location. Website hosted on GitHub Pages is -a simple collection of =HTML=, =CSS= and =JS= files. GitHub distributes these -files to different servers across the globe. So that when your browser sends a -request from Europe, it receives data from a server in Europe. The same is valid -for the requests from Asia and the USA. - -** What GitHub is trying to say -Since =A= records in DNS must contain IP addresses, and they must be either -=185.199.108.153= or =185.199.109.153= or =185.199.110.153= or -=185.199.111.153=, there is no way to forward requests to a server located -somewhere in Europe or Asia. Your website hosted at GitHub Pages will be -downloaded from a central GitHub Pages server. There is a minor risk that if -GitHub Pages DNS servers (=x.x.x.153=) are down for some reason, all custom -domains which use fixed GitHub Pages IP addresses will not be accessible (their -DNS requests will not be resolvable). - -That is why GitHub strongly suggests to either use a second-level domain for -your GitHub Pages (e.g. =blog.example.com=) or use a DNS service provider that -supports a record type =ALIAS= that acts as =A= record but forwards request to a -URL address (e.g. 
=username.github.io=) instead of a fixed IP address. - -** How GitHub Pages treats HTTP requests -After a DNS request for =your_github_username.github.io= is resolved into an IP -address, e.g. =185.199.108.153= your browser sends an HTTP request to that -server with an HTTP header =Host=. Below are =curl= examples that load the same -website (these examples might not work if you are behind a proxy server): - -#+begin_src sh -curl --header "Host: your_github_username.github.io" http://185.199.108.153/ -curl --header "Host: www.example.com" http://185.199.108.153/ -curl --header "Host: example.com" http://185.199.108.153/ -#+end_src - -This way GitHub Pages servers know which user website to serve. - -#+begin_quote -GitHub Pages server will automatically redirect HTTP requests to the top-level -domain if your =CNAME= file contains =example.com= but =www.example.com= is -requested. - -The same is valid if your =CNAME= file contains =www.example.com= but the header -=Host= in the =HTTP= request contains =example.com=. -#+end_quote - -** Why can't I add a =CNAME= record entry that accepts a top-level request (=@=) to my DNS configuration? -Quote from the GitHub Pages documentation: - -#+begin_quote -Warning: Do not create a CNAME record for your custom apex domain! Doing so may -cause issues with other services, such as email, on that domain. -#+end_quote - -** References: -1. [[https://docs.github.com/en/github/working-with-github-pages/configuring-a-custom-domain-for-your-github-pages-site][Setting up a custom domain with GitHub Pages]] -2. [[https://docs.github.com/en/github/working-with-github-pages/troubleshooting-custom-domains-and-github-pages][My custom domain isn't working]] -3. [[https://serverfault.com/questions/589370/cannot-access-my-github-pages-website-by-ip-address][Cannot access my GitHub Pages website by IP Address]] -4. 
[[https://stackoverflow.com/questions/23375422/how-do-i-set-up-github-pages-to-redirect-dns-requests-from-a-subdomain-e-g-www][How do I set up GitHub Pages to redirect DNS requests from a subdomain (e.g. - www) to the top-level domain (TLD, Apex record)?]] diff --git a/content/blog/2020-08-29-php-auth-flow.md b/content/blog/2020-08-29-php-auth-flow.md new file mode 100644 index 0000000..499eada --- /dev/null +++ b/content/blog/2020-08-29-php-auth-flow.md @@ -0,0 +1,196 @@ ++++ +date = 2020-08-29 +title = "PHP Authentication Flow" +description = "" +draft = false ++++ + +# Introduction + +When creating websites that will allow users to create accounts, the +developer always needs to consider the proper authentication flow for +their app. For example, some developers will utilize an API for +authentication, some will use OAuth, and some may just use their own +simple database. + +For those using pre-built libraries, authentication may simply be a +problem of copying and pasting the code from their library\'s +documentation. For example, here\'s the code I use to authenticate users +with the Tumblr OAuth API for my Tumblr client, Vox Populi: + +``` php +// Start the session +session_start(); + +// Use my key/secret pair to create a new client connection +$consumer_key = getenv('CONSUMER_KEY'); +$consumer_secret = getenv('CONSUMER_SECRET'); +$client = new Tumblr\API\Client($consumer_key, $consumer_secret); +$requestHandler = $client->getRequestHandler(); +$requestHandler->setBaseUrl('https://www.tumblr.com/'); + +// Check the session and cookies to see if the user is authenticated +// Otherwise, send user to Tumblr authentication page and set tokens from Tumblr's response + +// Authenticate client +$client = new Tumblr\API\Client( + $consumer_key, + $consumer_secret, + $token, + $token_secret +); +``` + +However, developers creating authentication flows from scratch will need +to think carefully about when to make sure a web page will check the +user\'s authenticity. 
+
+In this article, we're going to look at a simple authentication flow
+using a MySQL database and PHP.
+
+# Creating User Accounts
+
+Any type of user authentication begins with creating a user account.
+This process can take many forms, but the simplest is to accept user
+input from a form (e.g., username and password) and send it over to
+your database. The snippet below shows how to read the username and
+password parameters that arrive when a user submits a form to your PHP
+script.
+
+**Note**: Ensure that your password column is large enough to hold the
+hashed value (at least 60 characters or longer).
+
+``` php
+// Get the values from the form submission
+$username = $_POST['username'];
+$raw_password = $_POST['password'];
+
+// Hash the password
+// password_hash() will create a random salt if one isn't provided, and this is generally the easiest and most secure approach.
+$password = password_hash($raw_password, PASSWORD_DEFAULT);
+
+// Save database details as variables
+// Note: these are named differently from the user's credentials above
+// so that they don't overwrite them.
+$servername = "localhost";
+$db_username = "username";
+$db_password = "password";
+$dbname = "myDB";
+
+// Create connection to the database
+$conn = new mysqli($servername, $db_username, $db_password, $dbname);
+
+// Check connection
+if ($conn->connect_error) {
+    die("Connection failed: " . $conn->connect_error);
+}
+
+// Use a prepared statement so user input is never interpolated into SQL
+$stmt = $conn->prepare("INSERT INTO users (username, password) VALUES (?, ?)");
+$stmt->bind_param("ss", $username, $password);
+
+if ($stmt->execute()) {
+    echo "New record created successfully";
+} else {
+    echo "Error: " . $stmt->error;
+}
+
+$stmt->close();
+$conn->close();
+```
+
+## Validate Returning Users
+
+Verifying that a returning user has a valid username and password in
+your database is as simple as having users fill out a form and
+comparing their inputs to your database.
+
+``` php
+// Query the database for the user's hashed password
+// ...
+
+if (password_verify($password_input, $hashed_password)) {
+    // The input password matched the hashed password in the database.
+    // Do something, e.g., log the user in.
+}
+
+// Else, redirect them back to the login page.
+// ...
+```
+
+# Storing Authentication State
+
+Once you've created the user's account, you're ready to initialize the
+user's session. **You will need to do this on every page you load while
+the user is logged in.** To do so, simply enter the following code
+snippet:
+
+``` php
+session_start();
+```
+
+Once you've initialized the session, the next step is to store the
+session in a cookie so that you can access it later.
+
+``` php
+setcookie(session_name());
+```
+
+Now that the session name has been stored, you'll be able to check if
+there's an active session whenever you load a page.
+
+``` php
+if (isset($_COOKIE[session_name()])) {
+    // The session is active
+}
+```
+
+## Removing User Authentication
+
+The next logical step is to give your users the option to log out once
+they are done using your application. This can be tricky in PHP since a
+few of the standard ways do not always work.
+
+``` php
+// Initialize the session.
+// If you are using session_name("something"), don't forget it now!
+session_start();
+
+// Delete authentication cookies
+unset($_COOKIE[session_name()]);
+setcookie(session_name(), "", time() - 3600, "/logged-in/");
+unset($_COOKIE["PHPSESSID"]);
+setcookie("PHPSESSID", "", time() - 3600, "/logged-in/");
+
+// Unset all of the session variables.
+$_SESSION = array();
+session_unset();
+
+// If it's desired to kill the session, also delete the session cookie.
+// Note: This will destroy the session, and not just the session data! +if (ini_get("session.use_cookies")) { + $params = session_get_cookie_params(); + setcookie(session_name(), '', time() - 42000, + $params["path"], $params["domain"], + $params["secure"], $params["httponly"] + ); +} + +// Finally, destroy the session. +session_destroy(); +session_write_close(); + +// Go back to sign-in page +header('Location: https://example.com/logged-out/'); +die(); +``` + +# Wrapping Up + +Now you should be ready to begin your authentication programming with +PHP. You can create user accounts, create sessions for users across +different pages of your site, and then destroy the user data when +they\'re ready to leave. + +For more information on this subject, I recommend reading the [PHP +Documentation](https://www.php.net/). Specifically, you may want to look +at [HTTP Authentication with +PHP](https://www.php.net/manual/en/features.http-auth.php), [session +handling](https://www.php.net/manual/en/book.session.php), and +[hash](https://www.php.net/manual/en/function.hash.php). diff --git a/content/blog/2020-08-29-php-auth-flow.org b/content/blog/2020-08-29-php-auth-flow.org deleted file mode 100644 index 558ad2c..0000000 --- a/content/blog/2020-08-29-php-auth-flow.org +++ /dev/null @@ -1,181 +0,0 @@ -#+title: PHP Authentication Flow -#+date: 2020-08-29 -#+description: Learn how to establish and maintain a basic user authentication flow in PHP. -#+filetags: :dev: - -* Introduction -When creating websites that will allow users to create accounts, the developer -always needs to consider the proper authentication flow for their app. For -example, some developers will utilize an API for authentication, some will use -OAuth, and some may just use their own simple database. - -For those using pre-built libraries, authentication may simply be a problem of -copying and pasting the code from their library's documentation. 
For example, -here's the code I use to authenticate users with the Tumblr OAuth API for my -Tumblr client, Vox Populi: - -#+begin_src php -// Start the session -session_start(); - -// Use my key/secret pair to create a new client connection -$consumer_key = getenv('CONSUMER_KEY'); -$consumer_secret = getenv('CONSUMER_SECRET'); -$client = new Tumblr\API\Client($consumer_key, $consumer_secret); -$requestHandler = $client->getRequestHandler(); -$requestHandler->setBaseUrl('https://www.tumblr.com/'); - -// Check the session and cookies to see if the user is authenticated -// Otherwise, send user to Tumblr authentication page and set tokens from Tumblr's response - -// Authenticate client -$client = new Tumblr\API\Client( - $consumer_key, - $consumer_secret, - $token, - $token_secret -); -#+end_src - -However, developers creating authentication flows from scratch will need to -think carefully about when to make sure a web page will check the user's -authenticity. - -In this article, we're going to look at a simple authentication flow using a -MySQL database and PHP. - -* Creating User Accounts -The beginning to any type of user authentication is to create a user account. -This process can take many formats, but the simplest is to accept user input -from a form (e.g., username and password) and send it over to your database. For -example, here's a snippet that shows how to get username and password parameters -that would come when a user submits a form to your PHP script. - -*Note*: Ensure that your password column is large enough to hold the hashed -value (at least 60 characters or longer). - -#+begin_src php -// Get the values from the URL -$username = $_POST['username']; -$raw_password = $_POST['password']; - -// Hash password -// password_hash() will create a random salt if one isn't provided, and this is generally the easiest and most secure approach. 
-$password = password_hash($raw_password, PASSWORD_DEFAULT); - -// Save database details as variables -$servername = "localhost"; -$username = "username"; -$password = "password"; -$dbname = "myDB"; - -// Create connection to the database -$conn = new mysqli($servername, $username, $password, $dbname); - -// Check connection -if ($conn->connect_error) { - die("Connection failed: " . $conn->connect_error); -} - -$sql = "INSERT INTO users (username, password) -VALUES ('$username', '$password')"; - -if ($conn->query($sql) === TRUE) { - echo "New record created successfully"; -} else { - echo "Error: " . $sql . "
" . $conn->error; -} - -$conn->close(); -#+end_src - -** Validate Returning Users -To be able to verify that a returning user has a valid username and password in -your database is as simple as having users fill out a form and comparing their -inputs to your database. - -#+begin_src php -// Query the database for username and password -// ... - -if(password_verify($password_input, $hashed_password)) { - // If the input password matched the hashed password in the database - // Do something, log the user in. -} - -// Else, Redirect them back to the login page. -... -#+end_src - -* Storing Authentication State -Once you've created the user's account, now you're ready to initialize the -user's session. *You will need to do this on every page you load while the user -is logged in.** To do so, simply enter the following code snippet: - -#+begin_src php -session_start(); -#+end_src - -Once you've initialized the session, the next step is to store the session in a -cookie so that you can access it later. - -#+begin_src php -setcookie(session_name()); -#+end_src - -Now that the session name has been stored, you'll be able to check if there's an -active session whenever you load a page. - -#+begin_src php -if(isset(session_name())) { - // The session is active -} -#+end_src - -** Removing User Authentication -The next logical step is to give your users the option to log out once they are -done using your application. This can be tricky in PHP since a few of the -standard ways do not always work. - -#+begin_src php -// Initialize the session. -// If you are using session_name("something"), don't forget it now! -session_start(); - -// Delete authentication cookies -unset($_COOKIE[session_name()]); -setcookie(session_name(), "", time() - 3600, "/logged-in/"); -unset($_COOKIE["PHPSESSID"]); -setcookie("PHPSESSID", "", time() - 3600, "/logged-in/"); - -// Unset all of the session variables. 
-$_SESSION = array(); -session_unset(); - -// If it's desired to kill the session, also delete the session cookie. -// Note: This will destroy the session, and not just the session data! -if (ini_get("session.use_cookies")) { - $params = session_get_cookie_params(); - setcookie(session_name(), '', time() - 42000, - $params["path"], $params["domain"], - $params["secure"], $params["httponly"] - ); -} - -// Finally, destroy the session. -session_destroy(); -session_write_close(); - -// Go back to sign-in page -header('Location: https://example.com/logged-out/'); -die(); -#+end_src - -* Wrapping Up -Now you should be ready to begin your authentication programming with PHP. You -can create user accounts, create sessions for users across different pages of -your site, and then destroy the user data when they're ready to leave. - -For more information on this subject, I recommend reading the [[https://www.php.net/][PHP Documentation]]. -Specifically, you may want to look at [[https://www.php.net/manual/en/features.http-auth.php][HTTP Authentication with PHP]], [[https://www.php.net/manual/en/book.session.php][session -handling]], and [[https://www.php.net/manual/en/function.hash.php][hash]]. diff --git a/content/blog/2020-09-01-visual-recognition.md b/content/blog/2020-09-01-visual-recognition.md new file mode 100644 index 0000000..8329a5a --- /dev/null +++ b/content/blog/2020-09-01-visual-recognition.md @@ -0,0 +1,205 @@ ++++ +date = 2020-09-01 +title = "IBM Watson Visual Recognition" +description = "" +draft = false ++++ + +# What is IBM Watson? + +If you\'ve never heard of [Watson](https://www.ibm.com/watson), this +service is a suite of enterprise-ready AI services, applications, and +tooling provided by IBM. Watson contains quite a few useful tools for +data scientists and students, including the subject of this post today: +visual recognition. 
+
+If you\'d like to view the official documentation for the Visual
+Recognition API, visit the [API
+Docs](https://cloud.ibm.com/apidocs/visual-recognition/visual-recognition-v3?code=python).
+
+# Prerequisites
+
+To be able to use Watson Visual Recognition, you\'ll need the following:
+
+1. Create a free account on [IBM Watson
+   Studio](https://www.ibm.com/cloud/watson-studio).
+2. Add the [Watson Visual
+   Recognition](https://www.ibm.com/cloud/watson-visual-recognition)
+   service to your IBM Watson account.
+3. Get your API key and URL. To do this, first go to the [profile
+   dashboard](https://dataplatform.cloud.ibm.com/home2?context=cpdaas)
+   for your IBM account and click on the Watson Visual Recognition
+   service you created. This will be listed in the section titled
+   **Your services**. Then click the **Credentials** tab and open the
+   **Auto-generated credentials** dropdown. Copy your API key and URL
+   so that you can use them in the Python script later.
+4. **\[Optional\]** While not required, you can also create the
+   Jupyter Notebook for this project right inside [Watson
+   Studio](https://www.ibm.com/cloud/watson-studio). Watson Studio will
+   save your notebooks inside an organized project and allow you to use
+   their other integrated products, such as storage containers, AI
+   models, documentation, external sharing, etc.
+
+# Calling the IBM Watson Visual Recognition API
+
+Okay, now let\'s get started.
+
+To begin, we need to install the proper Python package for IBM Watson.
+
+```sh
+pip install --upgrade --user "ibm-watson>=4.5.0"
+```
+
+Next, we need to specify the API key, version, and URL given to us when
+we created the Watson Visual Recognition service.
+
+``` python
+apikey = ""
+version = "2018-03-19"
+url = ""
+```
+
+Now, let\'s import the necessary libraries and authenticate our service.
+ +``` python +import json +from ibm_watson import VisualRecognitionV3 +from ibm_cloud_sdk_core.authenticators import IAMAuthenticator + +authenticator = IAMAuthenticator(apikey) +visual_recognition = VisualRecognitionV3( + version=version, + authenticator=authenticator +) + +visual_recognition.set_service_url(url) +``` + +**\[Optional\]** If you\'d like to tell the API not to use any data to +improve their products, set the following header. + +``` python +visual_recognition.set_default_headers({'x-watson-learning-opt-out': "true"}) +``` + +Now we have our API all set and ready to go. For this example, I\'m +going to include a `dict` of photos to load as we test out +the API. + +``` python +data = [ + { + "title": "Grizzly Bear", + "url": "https://example.com/photos/image1.jpg" + }, + { + "title": "Nature Lake", + "url": "https://example.com/photos/image2.jpg" + }, + { + "title": "Welcome Sign", + "url": "https://example.com/photos/image3.jpg" + }, + { + "title": "Honey Badger", + "url": "https://example.com/photos/image4.jpg" + }, + { + "title": "Grand Canyon Lizard", + "url": "https://example.com/photos/image5.jpg" + }, + { + "title": "Castle", + "url": "https://example.com/photos/image6.jpg" + } +] +``` + +Now that we\'ve set up our libraries and have the photos ready, let\'s +create a loop to call the API for each image. The code below shows a +loop that calls the URL of each image and sends it to the API, +requesting results with at least 60% confidence. The results are output +to the console with dotted lines separating each section. + +In the case of an API error, the codes and explanations are output to +the console. 
+
+``` python
+from ibm_watson import ApiException
+
+for x in range(len(data)):
+    try:
+        url = data[x]["url"]
+        images_filename = data[x]["title"]
+        classes = visual_recognition.classify(
+            url=url,
+            images_filename=images_filename,
+            threshold='0.6',
+            owners=["IBM"]).get_result()
+        print("-----------------------------------------------")
+        print("Image Title: ", data[x]["title"], "\n")
+        print("Image URL: ", data[x]["url"], "\n")
+        classification_results = classes["images"][0]["classifiers"][0]["classes"]
+        for result in classification_results:
+            print(result["class"], "(", result["score"], ")")
+        print("-----------------------------------------------")
+    except ApiException as ex:
+        print("Method failed with status code " + str(ex.code) + ": " + ex.message)
+```
+
+# The Results
+
+Here we can see the full result set of our function above. If you view
+each of the URLs that we sent to the API, you\'ll be able to see that it
+was remarkably accurate. To be fair, these are clear, high-resolution
+photos shot with a professional camera. In reality, you will most likely
+be processing images that are lower quality and may have a lot of noise
+in the photo.
+
+However, we can clearly see the benefit of being able to call this API
+instead of attempting to write our own image recognition function. Each
+of the classifications returned was a fair description of the image.
+
+If you wanted to restrict the results to those that are at least 90%
+confident, you would simply adjust the `threshold`
+in the `visual_recognition.classify()` function.
+
+When your program runs, it should show the output below for each photo
+you provide.
+ +``` txt +---------------------------------------------------------------- +Image Title: Grizzly Bear +Image URL: https://example.com/photos/image1.jpg + +brown bear ( 0.944 ) +bear ( 1 ) +carnivore ( 1 ) +mammal ( 1 ) +animal ( 1 ) +Alaskan brown bear ( 0.759 ) +greenishness color ( 0.975 ) +---------------------------------------------------------------- +``` + +# Discussion + +Now, this was a very minimal implementation of the API. We simply +supplied some images and looked to see how accurate the results were. +However, you could implement this type of API into many machine learning +(ML) models. + +For example, you could be working for a company that scans their +warehouses or inventory using drones. Would you want to pay employees to +sit there and watch drone footage all day in order to identify or count +things in the video? Probably not. Instead, you could use a +classification system similar to this one in order to train your machine +learning model to correctly identify items that the drones show through +video. More specifically, you could have your machine learning model +watch a drone fly over a field of sheep in order to count how many sheep +are living in that field. + +There are many ways to implement machine learning functionality, but +hopefully this post helped inspire some deeper thought about the tools +that can help propel us further into the future of machine learning and +AI. diff --git a/content/blog/2020-09-01-visual-recognition.org b/content/blog/2020-09-01-visual-recognition.org deleted file mode 100644 index 1e0f3b5..0000000 --- a/content/blog/2020-09-01-visual-recognition.org +++ /dev/null @@ -1,186 +0,0 @@ -#+title: IBM Watson Visual Recognition -#+date: 2020-09-01 -#+description: Exploring and visualizing data with Python. -#+filetags: :dev: - -* What is IBM Watson? -If you've never heard of [[https://www.ibm.com/watson][Watson]], this service is a suite of enterprise-ready AI -services, applications, and tooling provided by IBM. 
Watson contains quite a few -useful tools for data scientists and students, including the subject of this -post today: visual recognition. - -If you'd like to view the official documentation for the Visual Recognition API, -visit the [[https://cloud.ibm.com/apidocs/visual-recognition/visual-recognition-v3?code=python][API Docs]]. - -* Prerequisites -To be able to use Watson Visual Recognition, you'll need the following: - -1. Create a free account on [[https://www.ibm.com/cloud/watson-studio][IBM Watson Studio]]. -2. Add the [[https://www.ibm.com/cloud/watson-visual-recognition][Watson Visual Recognition]] service to your IBM Watson account. -3. Get your API key and URL. To do this, first go to the [[https://dataplatform.cloud.ibm.com/home2?context=cpdaas][profile dashboard]] for - your IBM account and click on the Watson Visual Recognition service you - created. This will be listed in the section titled *Your services*. Then - click the *Credentials** tab and open the *Auto-generated credentials** - dropdown. Copy your API key and URL so that you can use them in the Python - script later. -4. *[Optional]** While not required, you can also create the Jupyter Notebook - for this project right inside [[https://www.ibm.com/cloud/watson-studio][Watson Studio]]. Watson Studio will save your - notebooks inside an organized project and allow you to use their other - integrated products, such as storage containers, AI models, documentation, - external sharing, etc. - -* Calling the IBM Watson Visual Recognition API -Okay, now let's get started. - -To begin, we need to install the proper Python package for IBM Watson. - -#+begin_src sh -pip install --upgrade --user "ibm-watson>=4.5.0" -#+end_src - -Next, we need to specify the API key, version, and URL given to us when we -created the Watson Visual Recognition service. - -#+begin_src python -apikey = "" -version = "2018-03-19" -url = "" -#+end_src - -Now, let's import the necessary libraries and authenticate our service. 
- -#+begin_src python -import json -from ibm_watson import VisualRecognitionV3 -from ibm_cloud_sdk_core.authenticators import IAMAuthenticator - -authenticator = IAMAuthenticator(apikey) -visual_recognition = VisualRecognitionV3( - version=version, - authenticator=authenticator -) - -visual_recognition.set_service_url(url) -#+end_src - -*[Optional]* If you'd like to tell the API not to use any data to improve their -products, set the following header. - -#+begin_src python -visual_recognition.set_default_headers({'x-watson-learning-opt-out': "true"}) -#+end_src - -Now we have our API all set and ready to go. For this example, I'm going to -include a =dict= of photos to load as we test out the API. - -#+begin_src python -data = [ - { - "title": "Grizzly Bear", - "url": "https://example.com/photos/image1.jpg" - }, - { - "title": "Nature Lake", - "url": "https://example.com/photos/image2.jpg" - }, - { - "title": "Welcome Sign", - "url": "https://example.com/photos/image3.jpg" - }, - { - "title": "Honey Badger", - "url": "https://example.com/photos/image4.jpg" - }, - { - "title": "Grand Canyon Lizard", - "url": "https://example.com/photos/image5.jpg" - }, - { - "title": "Castle", - "url": "https://example.com/photos/image6.jpg" - } -] -#+end_src - -Now that we've set up our libraries and have the photos ready, let's create a -loop to call the API for each image. The code below shows a loop that calls the -URL of each image and sends it to the API, requesting results with at least 60% -confidence. The results are output to the console with dotted lines separating -each section. - -In the case of an API error, the codes and explanations are output to the -console. 
- -#+begin_src python -from ibm_watson import ApiException - -for x in range(len(data)): -try: - url = data[x]["url"] - images_filename = data[x]["title"] - classes = visual_recognition.classify( - url=url, - images_filename=images_filename, - threshold='0.6', - owners=["IBM"]).get_result() - print("-----------------------------------------------") - print("Image Title: ", data[x]["title"], "\n") - print("Image URL: ", data[x]["url"], "\n") - classification_results = classes["images"][0]["classifiers"][0]["classes"] - for result in classification_results: - print(result["class"], "(", result["score"], ")") - print("-----------------------------------------------") -except ApiException as ex: - print("Method failed with status code " + str(ex.code) + ": " + ex.message) -#+end_src - -* The Results -Here we can see the full result set of our function above. If you view each of -the URLs that we sent to the API, you'll be able to see that it was remarkably -accurate. To be fair, these are clear high-resolution, clear photos shot with a -professional camera. In reality, you will most likely be processing images that -are lower quality and may have a lot of noise in the photo. - -However, we can clearly see the benefit of being able to call this API instead -of attempting to write our own image recognition function. Each of the -classifications returned was a fair description of the image. - -If you wanted to restrict the results to those that are at least 90% confident -or greater, you would simply adjust the =threshold= in the -=visual_recognition.classify()= function. - -When your program runs, it should show the output below for each photo you -provide. 
- -#+begin_src txt ----------------------------------------------------------------- -Image Title: Grizzly Bear -Image URL: https://example.com/photos/image1.jpg - -brown bear ( 0.944 ) -bear ( 1 ) -carnivore ( 1 ) -mammal ( 1 ) -animal ( 1 ) -Alaskan brown bear ( 0.759 ) -greenishness color ( 0.975 ) ----------------------------------------------------------------- -#+end_src - -* Discussion -Now, this was a very minimal implementation of the API. We simply supplied some -images and looked to see how accurate the results were. However, you could -implement this type of API into many machine learning (ML) models. - -For example, you could be working for a company that scans their warehouses or -inventory using drones. Would you want to pay employees to sit there and watch -drone footage all day in order to identify or count things in the video? -Probably not. Instead, you could use a classification system similar to this one -in order to train your machine learning model to correctly identify items that -the drones show through video. More specifically, you could have your machine -learning model watch a drone fly over a field of sheep in order to count how -many sheep are living in that field. - -There are many ways to implement machine learning functionality, but hopefully -this post helped inspire some deeper thought about the tools that can help -propel us further into the future of machine learning and AI. diff --git a/content/blog/2020-09-22-internal-audit.md b/content/blog/2020-09-22-internal-audit.md new file mode 100644 index 0000000..b700062 --- /dev/null +++ b/content/blog/2020-09-22-internal-audit.md @@ -0,0 +1,264 @@ ++++ +date = 2020-09-22 +title = "Who is Internal Audit?" 
+description = "" +draft = false ++++ + +![Internal Audit +Overview](https://img.cleberg.net/blog/20200922-what-is-internal-audit/internal-audit-overview.jpg) + +# Definitions + +One of the many reasons that Internal Audit needs such thorough +explaining to non-auditors is that Internal Audit can serve many +purposes, depending on the organization\'s size and needs. However, the +Institute of Internal Auditors (IIA) defines Internal Auditing as: + +> Internal auditing is an independent, objective assurance and +> consulting activity designed to add value and improve an +> organization\'s operations. It helps an organization accomplish its +> objectives by bringing a systematic, disciplined approach to evaluate +> and improve the effectiveness of risk management, control, and +> governance processes. + +However, this definition uses quite a few terms that aren\'t clear +unless the reader already has a solid understanding of the auditing +profession. To further explain, the following is a list of definitions +that can help supplement understanding of internal auditing. + +## Independent + +Independence is the freedom from conditions that threaten the ability of +the internal audit activity to carry out internal audit responsibilities +in an unbiased manner. To achieve the degree of independence necessary +to effectively carry out the responsibilities of the internal audit +activity, the chief audit executive has direct and unrestricted access +to senior management and the board. This can be achieved through a +dual-reporting relationship. Threats to independence must be managed at +the individual auditor, engagement, functional, and organizational +levels. + +## Objective + +Objectivity is an unbiased mental attitude that allows internal auditors +to perform engagements in such a manner that they believe in their work +product and that no quality compromises are made. 
Objectivity requires +that internal auditors do not subordinate their judgment on audit +matters to others. Threats to objectivity must be managed at the +individual auditor, engagement, functional, and organizational levels. + +## Assurance + +Assurance services involve the internal auditor\'s objective assessment +of evidence to provide opinions or conclusions regarding an entity, +operation, function, process, system, or other subject matters. The +internal auditor determines the nature and scope of an assurance +engagement. Generally, three parties are participants in assurance +services: (1) the person or group directly involved with the entity, +operation, function, process, system, or other subject - (the process +owner), (2) the person or group making the assessment - (the internal +auditor), and (3) the person or group using the assessment - (the user). + +## Consulting + +Consulting services are advisory in nature and are generally performed +at the specific request of an engagement client. The nature and scope of +the consulting engagement are subject to agreement with the engagement +client. Consulting services generally involve two parties: (1) the +person or group offering the advice (the internal auditor), and (2) the +person or group seeking and receiving the advice (the engagement +client). When performing consulting services, the internal auditor +should maintain objectivity and not assume management responsibility. + +## Governance, Risk Management, & Compliance (GRC) + +The integrated collection of capabilities that enable an organization to +reliably achieve objectives, address uncertainty and act with integrity. + +# Audit Charter & Standards + +First, it\'s important to note that not every organization needs +internal auditors. In fact, it\'s unwise for an organization to hire +internal auditors unless they have regulatory requirements for auditing +and have the capital to support the department. 
Internal audit is a cost
+center that can only affect revenue indirectly.
+
+Once an organization determines the need for internal assurance
+services, they will hire a Chief Audit Executive and create the audit
+charter. This charter is a document, approved by the company\'s
+governing body, that will define internal audit\'s purpose, authority,
+responsibility, and position within the organization. Fortunately, the
+IIA has model charters available to IIA members for those developing or
+improving their charter.
+
+Beyond the charter and organizational documents, internal auditors
+follow a few different standards in order to perform their job. First is
+the International Professional Practices Framework (IPPF) by the IIA,
+which is the model of standards for internal auditing. In addition,
+ISACA\'s Information Technology Assurance Framework (ITAF) helps guide
+auditors in reference to information technology (IT) compliance and
+assurance. Finally, additional standards such as FASB, GAAP, and
+industry-specific standards are used when performing internal audit
+work.
+
+# Three Lines of Defense
+
+[The IIA](https://theiia.org) released the original Three Lines of
+Defense model in 2013, but released an updated version in 2020.
+Here is what the Three Lines of Defense model has historically looked
+like:
+
+![2013 Three Lines of Defense
+Model](https://img.cleberg.net/blog/20200922-what-is-internal-audit/three_lines_model.png)
+
+I won\'t go into depth about the changes made to the model in this
+article. Instead, let\'s take a look at the most current model.
+
+![2020 Three Lines of Defense
+Model](https://img.cleberg.net/blog/20200922-what-is-internal-audit/updated_three_lines_model.png)
+
+The updated model abandons the strict idea of areas performing their own
+functions or line of defense. Instead of talking about management, risk,
+and internal audit as 1-2-3, the new version creates a more fluid and
+cooperative structure.
+ +Looking at this model from an auditing perspective shows us that +auditors will need to align, communicate, and collaborate with +management, including business area managers and chief officers, as well +as reporting to the governing body. The governing body will instruct +internal audit *functionally* on their goals and track their progress +periodically. + +However, the internal audit department will report *administratively* to +a chief officer in the company for the purposes of collaboration, +direction, and assistance with the business. Note that in most +situations, the governing body is the audit committee on the company\'s +board of directors. + +The result of this structure is that internal audit is an independent +and objective function that can provide assurance over the topics they +audit. + +# Audit Process + +A normal audit will generally follow the same process, regardless of the +topic. However, certain special projects or abnormal business areas may +call for changes to the audit process. The audit process is not set in +stone, it\'s simply a set of best practices so that audits can be +performed consistently. + +![The Internal Audit +Process](https://img.cleberg.net/blog/20200922-what-is-internal-audit/internal-audit-process.jpg) + +While different organizations may tweak the process, it will generally +follow this flow: + +## 1. Risk Assessment + +The risk assessment part of the process has historically been performed +annually, but many organizations have moved to performing this process +much more frequently. In fact, some organizations are moving to an agile +approach that can take new risks into the risk assessment and +re-prioritize risk areas on-the-go. To perform a risk assessment, +leaders in internal audit will research industry risks, consult with +business leaders around the company, and perform analyses on company +data. 
+
+Once a risk assessment has been documented, the audit department has a
+prioritized list of risks that can be audited. This is usually in the
+form of auditable entities, such as business areas or departments.
+
+## 2. Planning
+
+During the planning phase of an audit, auditors will meet with the
+business area to discuss the various processes, controls, and risks
+applicable to the business. This helps the auditors determine the scope
+limits for the audit, as well as timing and subject-matter experts.
+Certain documents will be created in this phase that will be used to
+keep the audit on-track and in-scope as it goes forward.
+
+## 3. Testing
+
+The testing phase, also known as fieldwork or execution, is where
+internal auditors will take the information they\'ve discovered and test
+it against regulations, industry standards, company rules, and best
+practices, as well as validate that any processes are complete and
+accurate. For example, an audit of HR would most likely examine
+processes such as employee on-boarding, employee termination, security
+of personally identifiable information (PII), or the IT systems involved
+in these processes. Company standards would be examined and compared
+against how the processes are actually being performed day-to-day, as
+well as compared against regulations such as the Equal Employment
+Opportunity (EEO), Americans with Disabilities Act, and National Labor
+Relations Act.
+
+## 4. Reporting
+
+Once all the tests have been completed, the audit will enter the
+reporting phase. This is when the audit team will conclude on the
+evidence they\'ve collected, interviews they\'ve held, and any opinions
+they\'ve formed on the controls in place. A summary of the audit
+findings, conclusions, and specific recommendations are officially
+communicated to the client through a draft report. Clients have the
+opportunity to respond to the report and submit an action plan and time
+frame. 
These responses become part of the final report which is
+distributed to the appropriate level of administration.
+
+## 5. Follow-Up
+
+After audits have been completed and management has formed action plans
+and time frames for audit issues, internal audit will follow up once
+that due date has arrived. In most cases, the follow-up will simply
+consist of a meeting to discuss how the action plan has been completed
+and to request documentation to prove it.
+
+# Audit Department Structure
+
+While an internal audit department is most often thought of as a team of
+full-time employees, there are actually many different ways in which a
+department can be structured. As the world becomes more digital and
+fast-paced, outsourcing has become a more attractive option for some
+organizations. Internal audit can be fully outsourced or partially
+outsourced, allowing for flexibility in cases where turnover is high.
+
+In addition, departments can implement a rotational model. This allows
+for interested employees around the organization to rotate into the
+internal audit department for a period of time, allowing them to obtain
+knowledge of risks and controls and allowing the internal audit team to
+obtain more business area knowledge. This program is popular in very
+large organizations, but organizations tend to rotate lower-level audit
+staff instead of managers. This helps prevent any significant knowledge
+loss as auditors rotate out to business areas.
+
+# Consulting
+
+Consulting is not an easy task at any organization, especially for a
+department that can have negative perceptions within the organization as
+the \"compliance police.\" However, once an internal audit department
+has delivered value to the organization, adding consulting to their
+suite of services is a smart move. In most cases, Internal Audit can
+insert themselves into a consulting role without affecting the process
+of project management at the company. 
This means that internal audit can
+add objective assurance and opinions to business areas as they develop
+new processes, instead of coming in periodically to audit an area and
+file issues that could have been fixed at the beginning.
+
+# Data Science & Data Analytics
+
+![Data Science Skill
+Set](https://img.cleberg.net/blog/20200922-what-is-internal-audit/data-science-skillset.png)
+
+One major piece of the internal audit function in the modern world is
+data science. While the discipline itself is data science, most auditors
+will refer to anything in this realm as data analytics. Hot topics such
+as robotic process automation (RPA), machine learning (ML), and data
+mining have taken over the auditing world in recent years. These
+technologies have been immensely helpful with increasing the
+effectiveness and efficiency of auditors.
+
+For example, mundane and repetitive tasks can be automated in order for
+auditors to make more room in their schedules for labor-intensive work.
+Further, auditors will need to adopt technologies like machine learning
+in order to extract more value from the data they\'re using to form
+conclusions.
diff --git a/content/blog/2020-09-22-internal-audit.org b/content/blog/2020-09-22-internal-audit.org
deleted file mode 100644
index b90b461..0000000
--- a/content/blog/2020-09-22-internal-audit.org
+++ /dev/null
@@ -1,232 +0,0 @@
-#+title: What is Internal Audit?
-#+date: 2020-09-22
-#+description: Learn about the Internal Audit function and their purpose.
-#+filetags: :audit:
-
-#+caption: Internal Audit Overview
-[[https://img.cleberg.net/blog/20200922-what-is-internal-audit/internal-audit-overview.jpg]]
-
-* Definitions
-One of the many reasons that Internal Audit needs such thorough explaining to
-non-auditors is that Internal Audit can serve many purposes, depending on the
-organization's size and needs. 
However, the Institute of Internal Auditors (IIA) -defines Internal Auditing as: - -#+begin_quote -Internal auditing is an independent, objective assurance and consulting activity -designed to add value and improve an organization's operations. It helps an -organization accomplish its objectives by bringing a systematic, disciplined -approach to evaluate and improve the effectiveness of risk management, control, -and governance processes. - -#+end_quote - -However, this definition uses quite a few terms that aren't clear unless the -reader already has a solid understanding of the auditing profession. To further -explain, the following is a list of definitions that can help supplement -understanding of internal auditing. - -** Independent -Independence is the freedom from conditions that threaten the ability of the -internal audit activity to carry out internal audit responsibilities in an -unbiased manner. To achieve the degree of independence necessary to effectively -carry out the responsibilities of the internal audit activity, the chief audit -executive has direct and unrestricted access to senior management and the board. -This can be achieved through a dual-reporting relationship. Threats to -independence must be managed at the individual auditor, engagement, functional, -and organizational levels. - -** Objective -Objectivity is an unbiased mental attitude that allows internal auditors to -perform engagements in such a manner that they believe in their work product and -that no quality compromises are made. Objectivity requires that internal -auditors do not subordinate their judgment on audit matters to others. Threats -to objectivity must be managed at the individual auditor, engagement, -functional, and organizational levels. - -** Assurance -Assurance services involve the internal auditor's objective assessment of -evidence to provide opinions or conclusions regarding an entity, operation, -function, process, system, or other subject matters. 
The internal auditor -determines the nature and scope of an assurance engagement. Generally, three -parties are participants in assurance services: (1) the person or group directly -involved with the entity, operation, function, process, system, or other -subject - (the process owner), (2) the person or group making the assessment - -(the internal auditor), and (3) the person or group using the assessment - (the -user). - -** Consulting -Consulting services are advisory in nature and are generally performed at the -specific request of an engagement client. The nature and scope of the consulting -engagement are subject to agreement with the engagement client. Consulting -services generally involve two parties: (1) the person or group offering the -advice (the internal auditor), and (2) the person or group seeking and receiving -the advice (the engagement client). When performing consulting services, the -internal auditor should maintain objectivity and not assume management -responsibility. - -** Governance, Risk Management, & Compliance (GRC) -The integrated collection of capabilities that enable an organization to -reliably achieve objectives, address uncertainty and act with integrity. - -* Audit Charter & Standards -First, it's important to note that not every organization needs internal -auditors. In fact, it's unwise for an organization to hire internal auditors -unless they have regulatory requirements for auditing and have the capital to -support the department. Internal audit is a cost center that can only affect -revenue indirectly. - -Once an organization determines the need for internal assurance services, they -will hire a Chief Audit Executive and create the audit charter. This charter is -a document, approved by the company's governing body, that will define internal -audit's purpose, authority, responsibility, and position within the -organization. 
Fortunately, the IIA has model charters available to IIA members -for those developing or improving their charter. - -Beyond the charter and organizational documents, internal auditors follow a few -different standards in order to perform their job. First is the International -Professional Practices Framework (IPPF) by the IIA, which is the model of -standards for internal auditing. In addition, ISACA's Information Technology -Assurance Framework (ITAF) helps guide auditors in reference to information -technology (IT) compliance and assurance. Finally, additional standards such as -FASB, GAAP, and industry-specific standards are used when performing internal -audit work. - -* Three Lines of Defense -[[https://theiia.org][The IIA]] released the original Three Lines of Defense model in 2013, but have -released an updated version in 2020. Here is what the Three Lines of Defense -model has historically looked like: - -#+caption: 2013 Three Lines of Defense Model -[[https://img.cleberg.net/blog/20200922-what-is-internal-audit/three_lines_model.png]] - -I won't go into depth about the changes made to the model in this article. -Instead, let's take a look at the most current model. - -#+caption: 2020 Three Lines of Defense Model -[[https://img.cleberg.net/blog/20200922-what-is-internal-audit/updated_three_lines_model.png]] - -The updated model forgets the strict idea of areas performing their own -functions or line of defense. Instead of talking about management, risk, and -internal audit as 1-2-3, the new model creates a more fluid and cooperative -model. - -Looking at this model from an auditing perspective shows us that auditors will -need to align, communicate, and collaborate with management, including business -area managers and chief officers, as well as reporting to the governing body. -The governing body will instruct internal audit /functionally/ on their goals -and track their progress periodically. 
However, the internal audit department will report /administratively/ to a chief officer in the company for the purposes of collaboration, direction, and assistance with the business. Note that in most situations, the governing body is the audit committee on the company's board of directors.

The result of this structure is that internal audit is an independent and objective function that can provide assurance over the topics it audits.

* Audit Process
A normal audit will generally follow the same process, regardless of the topic. However, certain special projects or abnormal business areas may call for changes to the audit process. The audit process is not set in stone; it's simply a set of best practices so that audits can be performed consistently.

#+caption: The Internal Audit Process
[[https://img.cleberg.net/blog/20200922-what-is-internal-audit/internal-audit-process.jpg]]

While different organizations may tweak the process, it will generally follow this flow:

** 1. Risk Assessment
The risk assessment part of the process has historically been performed annually, but many organizations have moved to performing this process much more frequently. In fact, some organizations are moving to an agile approach that can take new risks into the risk assessment and re-prioritize risk areas on the go. To perform a risk assessment, leaders in internal audit will research industry risks, consult with business leaders around the company, and perform analyses on company data.

Once a risk assessment has been documented, the audit department has a prioritized list of risks that can be audited. This is usually in the form of auditable entities, such as business areas or departments.

** 2. Planning
During the planning phase of an audit, auditors will meet with the business area to discuss the various processes, controls, and risks applicable to the business.
This helps the auditors determine the scope limits for the audit, as well as timing and subject-matter experts. Certain documents will be created in this phase that will be used to keep the audit on track and in scope as it goes forward.

** 3. Testing
The testing phase, also known as fieldwork or execution, is where internal auditors take the information they've discovered and test it against regulations, industry standards, company rules, and best practices, as well as validate that any processes are complete and accurate. For example, an audit of HR would most likely examine processes such as employee on-boarding, employee termination, security of personally identifiable information (PII), or the IT systems involved in these processes. Company standards would be examined and compared against how the processes are actually being performed day-to-day, as well as compared against regulations such as the Equal Employment Opportunity (EEO) laws, the Americans with Disabilities Act, and the National Labor Relations Act.

** 4. Reporting
Once all the tests have been completed, the audit will enter the reporting phase. This is when the audit team will conclude on the evidence they've collected, interviews they've held, and any opinions they've formed on the controls in place. A summary of the audit findings, conclusions, and specific recommendations is officially communicated to the client through a draft report. Clients have the opportunity to respond to the report and submit an action plan and time frame. These responses become part of the final report, which is distributed to the appropriate level of administration.

** 5. Follow-Up
After audits have been completed and management has formed action plans and time frames for audit issues, internal audit will follow up once each due date has arrived.
In most cases, the follow-up will simply consist of a meeting to discuss how the action plan has been completed and to request documentation to prove it.

* Audit Department Structure
While an internal audit department is most often thought of as a team of full-time employees, there are actually many different ways in which a department can be structured. As the world becomes more digital and fast-paced, outsourcing has become a more attractive option for some organizations. Internal audit can be fully or partially outsourced, allowing for flexibility in cases where turnover is high.

In addition, departments can implement a rotational model. This allows interested employees around the organization to rotate into the internal audit department for a period of time, letting them build knowledge of risks and controls while the internal audit team gains more business area knowledge. This program is popular in very large organizations, but organizations tend to rotate lower-level audit staff instead of managers. This helps prevent any significant knowledge loss as auditors rotate out to business areas.

* Consulting
Consulting is not an easy task at any organization, especially for a department that can have negative perceptions within the organization as the "compliance police." However, once an internal audit department has delivered value to the organization, adding consulting to its suite of services is a smart move. In most cases, internal audit can insert itself into a consulting role without affecting the process of project management at the company. This means that internal audit can add objective assurance and opinions to business areas as they develop new processes, instead of coming in periodically to audit an area and file issues that could have been fixed at the beginning.
* Data Science & Data Analytics
#+caption: Data Science Skill Set
[[https://img.cleberg.net/blog/20200922-what-is-internal-audit/data-science-skillset.png]]

One major piece of the internal audit function in the modern world is data science. While the discipline itself is data science, most auditors will refer to anything in this realm as data analytics. Hot topics such as robotic process automation (RPA), machine learning (ML), and data mining have taken over the auditing world in recent years. These technologies have been immensely helpful in increasing the effectiveness and efficiency of auditors.

For example, mundane and repetitive tasks can be automated in order for auditors to make more room in their schedules for labor-intensive work. Further, auditors will need to adopt technologies like machine learning in order to extract more value from the data they're using to form conclusions.

diff --git a/content/blog/2020-09-25-happiness-map.md b/content/blog/2020-09-25-happiness-map.md
new file mode 100644
index 0000000..a9ba336

+++
date = 2020-09-25
title = "Data Visualization: World Choropleth Map of Happiness"
description = ""
draft = false
+++

# Background Information

The dataset (obtained from [Kaggle](https://www.kaggle.com/unsdsn/world-happiness)) used in this article contains a list of countries around the world, their happiness rankings and scores, as well as other national scoring measures.

Fields include:

- Overall rank
- Country or region
- GDP per capita
- Social support
- Healthy life expectancy
- Freedom to make life choices
- Generosity
- Perceptions of corruption

There are 156 records. Since there are ~195 countries in the world, around 40 countries will be missing from this dataset.

# Install Packages

As always, run the `install` command for all packages needed to perform analysis.
``` python
!pip install folium geopandas matplotlib numpy pandas
```

# Import the Data

We only need a couple of packages to create a choropleth map. We will use [Folium](https://python-visualization.github.io/folium/), which provides map visualizations in Python. We will also use geopandas and pandas to wrangle our data before we put it on a map.

``` python
# Import the necessary Python packages
import folium
import geopandas as gpd
import pandas as pd
```

To get anything to show up on a map, we need a file that specifies the boundaries of each country. Luckily, GeoJSON files exist (for free!) on the internet. To get the boundaries of every country in the world, we will use the GeoJSON link shown below.

GeoPandas will take this data and load it into a dataframe so that we can easily match it to the data we're trying to analyze. Let's look at the GeoJSON dataframe:

``` python
# Load the GeoJSON data with geopandas
geo_data = gpd.read_file('https://raw.githubusercontent.com/datasets/geo-countries/master/data/countries.geojson')
geo_data.head()
```

![GeoJSON Dataframe](https://img.cleberg.net/blog/20200925-world-choropleth-map/geojson_df.png)

Next, let's load the data from the Kaggle dataset. I've downloaded this file, so update the file path if you have it somewhere else. After loading, let's take a look at this dataframe:

``` python
# Load the world happiness data with pandas
happy_data = pd.read_csv(r'~/Downloads/world_happiness_data_2019.csv')
happy_data.head()
```

![Happiness Dataframe](https://img.cleberg.net/blog/20200925-world-choropleth-map/happiness_df.png)

# Clean the Data

Some countries need to be renamed, or they will be lost when you merge the happiness and GeoJSON dataframes. This is something I discovered when the map below showed empty countries. I searched both data frames for the missing countries to see the naming differences.
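Rather than eyeballing both data frames, the mismatched names can be surfaced programmatically. This is a sketch of one way to do it (the `find_unmatched` helper is hypothetical, not from the original post), using pandas' `indicator` flag on an outer merge of the `geo_data` and `happy_data` dataframes from above:

```python
import pandas as pd

# Hypothetical helper: list country names that appear in only one dataframe.
# An outer merge with indicator=True labels each row 'both', 'left_only', or
# 'right_only', so the unmatched names on each side fall out directly.
def find_unmatched(geo_df, happy_df):
    merged = geo_df[['ADMIN']].merge(
        happy_df[['Country or region']],
        left_on='ADMIN',
        right_on='Country or region',
        how='outer',
        indicator=True,
    )
    geo_only = merged.loc[merged['_merge'] == 'left_only', 'ADMIN']
    happy_only = merged.loc[merged['_merge'] == 'right_only', 'Country or region']
    return sorted(geo_only), sorted(happy_only)
```

Comparing the two resulting lists side by side makes renames such as "United States" versus "United States of America" easy to spot.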
Any countries that do not have records in the `happy_data` df will not show up on the map.

``` python
# Rename some countries to match our GeoJSON data

# Rename USA
usa_index = happy_data.index[happy_data['Country or region'] == 'United States']
happy_data.loc[usa_index, 'Country or region'] = 'United States of America'

# Rename Tanzania
tanzania_index = happy_data.index[happy_data['Country or region'] == 'Tanzania']
happy_data.loc[tanzania_index, 'Country or region'] = 'United Republic of Tanzania'

# Rename the Congo
republic_congo_index = happy_data.index[happy_data['Country or region'] == 'Congo (Brazzaville)']
happy_data.loc[republic_congo_index, 'Country or region'] = 'Republic of Congo'

# Rename the DRC
democratic_congo_index = happy_data.index[happy_data['Country or region'] == 'Congo (Kinshasa)']
happy_data.loc[democratic_congo_index, 'Country or region'] = 'Democratic Republic of the Congo'
```

# Merge the Data

Now that we have clean data, we need to merge the GeoJSON data with the happiness data. Since we've stored them both in dataframes, we just need to call the `.merge()` function.

We will also rename a couple of columns, just so that they're a little easier to use when we create the map.

``` python
# Merge the two previous dataframes into a single geopandas dataframe
merged_df = geo_data.merge(happy_data, left_on='ADMIN', right_on='Country or region')

# Rename columns for ease of use
merged_df = merged_df.rename(columns={'ADMIN': 'GeoJSON_Country'})
merged_df = merged_df.rename(columns={'Country or region': 'Country'})
```

![Merged Dataframe](https://img.cleberg.net/blog/20200925-world-choropleth-map/merged_df.png)

# Create the Map

The data is finally ready to be added to a map. The code below shows the simplest way to find the center of the map and create a Folium map object. The important part is to remember to reference the merged dataframe for our GeoJSON data and value data.
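One caveat about the `.centroid` call used below: GeoPandas warns that centroids computed in a geographic (latitude/longitude) CRS are only approximate. A sketch of a more careful centering (the `map_center` helper is an assumption, not part of the original post) is to project to a planar CRS first:

```python
import geopandas as gpd

# Hypothetical alternative to calling .centroid directly on lat/lon geometry:
# project to a planar CRS (Web Mercator here), compute centroids there, and
# convert back to geographic coordinates for folium.
def map_center(gdf):
    projected = gdf.to_crs(epsg=3857)         # planar coordinates in meters
    centers = projected.geometry.centroid     # accurate planar centroids
    centers = centers.to_crs(epsg=4326)       # back to lat/lon
    return centers.y.mean(), centers.x.mean() # folium wants [lat, lon]
```

For a world map centered at zoom level 2, the simple mean of raw centroids is close enough, but the projected version avoids the GeoPandas warning.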
The columns specify which geo data and value data to use.

``` python
# Assign centroids to map
x_map = merged_df.centroid.x.mean()
y_map = merged_df.centroid.y.mean()
print(x_map, y_map)

# Create a map object
world_map = folium.Map(location=[y_map, x_map], zoom_start=2, tiles=None)
folium.TileLayer('CartoDB positron', name='Dark Map', control=False).add_to(world_map)

# Create the choropleth map
folium.Choropleth(
    geo_data=merged_df,
    name='Choropleth',
    data=merged_df,
    columns=['Country', 'Overall rank'],
    key_on='feature.properties.Country',
    fill_color='YlOrRd',
    fill_opacity=0.6,
    line_opacity=0.8,
    legend_name='Overall happiness rank',
    smooth_factor=0,
    highlight=True
).add_to(world_map)
```

Let's look at the resulting map.

![Choropleth Map](https://img.cleberg.net/blog/20200925-world-choropleth-map/map.png)

# Create a Tooltip on Hover

Now that we have a map set up, we could stop. However, I want to add a tooltip so that I can see more information about each country. The `tooltip_data` code below will show a popup on hover with all the data fields shown.
``` python
# Adding labels to map
style_function = lambda x: {'fillColor': '#ffffff',
                            'color': '#000000',
                            'fillOpacity': 0.1,
                            'weight': 0.1}

tooltip_data = folium.features.GeoJson(
    merged_df,
    style_function=style_function,
    control=False,
    tooltip=folium.features.GeoJsonTooltip(
        fields=['Country',
                'Overall rank',
                'Score',
                'GDP per capita',
                'Social support',
                'Healthy life expectancy',
                'Freedom to make life choices',
                'Generosity',
                'Perceptions of corruption'],
        aliases=['Country: ',
                 'Happiness rank: ',
                 'Happiness score: ',
                 'GDP per capita: ',
                 'Social support: ',
                 'Healthy life expectancy: ',
                 'Freedom to make life choices: ',
                 'Generosity: ',
                 'Perceptions of corruption: '],
        style=('background-color: white; color: #333333; '
               'font-family: arial; font-size: 12px; padding: 10px;')
    )
)
world_map.add_child(tooltip_data)
world_map.keep_in_front(tooltip_data)
folium.LayerControl().add_to(world_map)

# Display the map
world_map
```

The final image below shows what the tooltip looks like whenever you hover over a country.

![Choropleth Map Tooltip](https://img.cleberg.net/blog/20200925-world-choropleth-map/tooltip_map.png)

diff --git a/content/blog/2020-09-25-happiness-map.org b/content/blog/2020-09-25-happiness-map.org
deleted file mode 100644
index 1f2b56f..0000000

#+title: Data Visualization: World Choropleth Map of Happiness
#+date: 2020-09-25
#+description: Exploring and visualizing data with Python.
#+filetags: :data:

diff --git a/content/blog/2020-10-12-mediocrity.md b/content/blog/2020-10-12-mediocrity.md
new file mode 100644
index 0000000..c13a5ea

+++
date = 2020-10-12
title = "On the Pursuit of Mediocrity"
description = ""
draft = false
+++

# Perfect is the Enemy of Good

As the saying goes, "the best is the enemy of the good." As we strive for perfection, we often fail to realize the implications of such an undertaking. Attempting to reach perfection is often unrealistic. Even worse, it can get in the way of achieving a good outcome.
In certain situations, we try so hard to achieve the ideal solution that we burn the bridges that would have allowed us to reach a lesser yet still superb solution.

Philosophers throughout history have inspected this plight from many viewpoints. Greek mythology speaks of the [golden mean](https://en.wikipedia.org/wiki/Golden_mean_(philosophy)), which uses the story of Icarus to illustrate that sometimes "the middle course" is the best solution. In this story, Daedalus, a famous artist of his time, built feathered wings for himself and his son so that they might escape the clutches of King Minos. Daedalus warns his beloved son to "fly the middle course", between the sea spray and the sun's heat. Icarus did not heed his father; he flew up and up until the sun melted the wax off his wings. For not heeding the middle course, he fell into the sea and drowned.

More recently, management scholars have explored the [Pareto principle](https://en.wikipedia.org/wiki/Pareto_principle) and found that as we increase the frequency of something, or strive to perform actions to achieve some form of perfection, we run into [diminishing returns](https://en.wikipedia.org/wiki/Diminishing_returns).

Even further, Harold Demsetz is noted as coining the term [the Nirvana fallacy](https://en.wikipedia.org/wiki/Nirvana_fallacy) in 1969, which shows the fallacy of comparing actual things with unrealistic, idealized alternatives. This is another trap that we may fall into, where we are constantly thinking of the ultimate solutions to problems when something more realistic needs to be considered.

Over and over throughout history, we've found that perfection is often unrealistic and unachievable. However, we push ourselves and our peers to "give 100%" or "go the extra mile," when it may be that the better course is to give a valuable level of effort while considering the effects of further effort on the outcome.
Working harder does not always help us achieve loftier goals.

This has presented itself to me most recently during my time studying at my university. I was anxious and feeling the stresses of my courses, career, and personal life for quite a while, which greatly affected how well I was doing at school and my level of effort at work. One day, I happened to be talking to my father when he said something simple that hit home:

> All you can do is show up and do your best. Worrying about the outcomes won't affect the outcome itself.

The thought was extremely straightforward and uncomplicated, yet it was something that I had lost sight of during my stress-filled years at school. Ever since then, I've found myself pausing and remembering that quote every time I get anxious or stressed. It helps to stop and think, "Can I do anything to affect the outcome, or am I simply worrying over something I can't change?"

# When Mediocrity Isn't Enough

One problem with the philosophies presented in this post is that they are implemented far too often in situations where mediocrity simply isn't adequate. For example, let's take a look at digital user data, specifically personally identifiable information (PII). As a cybersecurity auditor in the United States, I have found that most companies are concerned more with compliance than with any actual safeguards over the privacy or protection of user data. Other than companies that have built their reputation on privacy and security, most companies will use [satisficing](https://en.wikipedia.org/wiki/Satisficing) as their primary decision-making strategy around user data.

> Satisficing is a decision-making strategy or cognitive heuristic that entails searching through the available alternatives until an acceptability threshold is met.

This means that possible solutions are considered for each decision only until one of them meets the company's minimum acceptable standards.
For companies that deal with user data, the minimum acceptable standards come from three areas:

1. Laws and regulations
2. Competitive pressure
3. Risk of monetary or reputational loss

For those working in project management or auditing, the primary concern here is the risk of legal ramifications. Since the primary risk comes from laws and regulations, companies will require that any project involving user data follow all the rules of those laws so that the company can protect itself from fines or other penalties.

Following this, companies will consider best practices in order to place themselves in a competitive position (e.g., Google vs. Apple) and review any recent or ongoing litigation against companies regarding user data. In a perfect company, management would then consider the ethical responsibilities of their organization and discuss their responsibilities over things like personally identifiable information.

However, as we mentioned above, most companies follow the idea of satisficing, deciding that they have met the minimum acceptable standards and can now move on to other decisions. Modern business culture in the United States dictates that profits are the golden measure of how well a company or manager is performing, so we often don't think about our responsibilities beyond these basic standards.

Not all situations demand excellence, but I believe that applying any philosophy as a broad stroke across one's life can be a mistake. We must be able to think critically about what we are doing as we do it and ask ourselves a few questions. Have I done everything I can in this situation? Is mediocrity an acceptable outcome, or should we strive for perfection, even if we can't attain it?

Taking a few moments to think critically throughout our day, as we make decisions, can have a tremendous effect on the outcomes we create.
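The satisficing heuristic quoted above is simple enough to express in code. This is a minimal sketch (the functions and scores are hypothetical, not from the original post): satisficing takes the first alternative that clears the acceptability threshold, while optimizing searches every alternative for the best one:

```python
# Satisficing: return the first alternative whose score meets the threshold,
# in the order the alternatives are considered -- no further search is done.
def satisfice(alternatives, score, threshold):
    for option in alternatives:
        if score(option) >= threshold:
            return option
    return None  # no alternative was acceptable

# Optimizing, by contrast, examines every alternative for the maximum score.
def optimize(alternatives, score):
    return max(alternatives, key=score, default=None)
```

With alternatives scored 3, 7, and 9 (in that order) and a threshold of 5, satisficing stops at the 7-scoring option while optimizing keeps searching and returns the 9.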
diff --git a/content/blog/2020-10-12-mediocrity.org b/content/blog/2020-10-12-mediocrity.org deleted file mode 100644 index a653f80..0000000 --- a/content/blog/2020-10-12-mediocrity.org +++ /dev/null @@ -1,122 +0,0 @@ -#+title: On the Pursuit of Mediocrity -#+date: 2020-10-12 -#+description: Musings on mediocrity. -#+filetags: :personal: - -* Perfect is the Enemy of Good -As the saying goes, "the best is the enemy of the good." As we strive -for perfection, we often fail to realize the implications of such an -undertaking. Attempting to reach perfection is often unrealistic. Even -worse, it can get in the way of achieving a good outcome. In certain -situations, we try so hard to achieve the ideal solution that we have -burned the bridges that would have allowed us to reach a lesser yet -still superb solution. - -Philosophers throughout history have inspected this plight from many -viewpoints. Greek mythology speaks of the -[[https://en.wikipedia.org/wiki/Golden_mean_(philosophy)][golden mean]], -which uses the story of Icarus to illustrate that sometimes "the middle -course" is the best solution. In this story, Daedalus, a famous artist -of his time, built feathered wings for himself and his son so that they -might escape the clutches of King Minos. Daedalus warns his beloved son -whom he loved so much to "fly the middle course", between the sea spray -and the sun's heat. Icarus did not heed his father; he flew up and up -until the sun melted the wax off his wings. For not heeding the middle -course, he fell into the sea and drowned. - -More recently, management scholars have explored the -[[https://en.wikipedia.org/wiki/Pareto_principle][Pareto principle]] and -found that as we increase the frequency of something, or strive to -perform actions to achieve some form of perfection, we run into -[[https://en.wikipedia.org/wiki/Diminishing_returns][diminishing -returns]]. 
- -Even further, Harold Demsetz is noted as coining the term -[[https://en.wikipedia.org/wiki/Nirvana_fallacy][the Nirvana fallacy]] -in 1969, which shows the fallacy of comparing actual things with -unrealistic, idealized alternatives. This is another trap that we may -fall into, where we are constantly thinking of the ultimate solutions to -problems, when something more realistic needs to be considered. - -Over and over throughout history, we've found that perfection is often -unrealistic and unachievable. However, we push ourselves and our peers -to "give 100%" or "go the extra mile," while it may be that the better -course is to give a valuable level of effort while considering the -effects of further effort on the outcome. Working harder does not always -help us achieve loftier goals. - -This has presented itself to me most recently during my time studying at -my university. I was anxious and feeling the stresses of my courses, -career, and personal life for quite a while, which was greatly affecting -how well I was doing at school and my level of effort at work. One day, -I happened to be talking to my father when he said something simple that -hit home: - -#+begin_quote -All you can do is show up and do your best. Worrying about the outcomes -won't affect the outcome itself. -#+end_quote - -The thought was extremely straightforward and uncomplicated, yet it was -something that I had lost sight of during my stress-filled years at -school. Ever since then, I've found myself pausing and remembering that -quote every time I get anxious or stressed. It helps to stop and think -"Can I do anything to affect the outcome, or am I simply worrying over -something I can't change?" - -* When Mediocrity Isn't Enough -One problem with the philosophies presented in this post is that they -are implemented far too often in situations where mediocrity simply -isn't adequate. 
For example, let's take a look at digital user data, -specifically personally-identifiable information (PII). As a -cybersecurity auditor in the United States, I have found that most -companies are concerned more with compliance than any actual safeguards -over the privacy or protection of user data. Other than companies who -have built their reputation on privacy and security, most companies will -use [[https://en.wikipedia.org/wiki/Satisficing][satisficing]] as their -primary decision-making strategy around user data. - -#+begin_quote -Satisficing is a decision-making strategy or cognitive heuristic that -entails searching through the available alternatives until an -acceptability threshold is met. -#+end_quote - -This means that each decision will be met with certain possible -solutions until one of the solutions meets their minimum acceptable -standards. For companies that deal with user data, the -minimum-acceptable standards come from three areas: - -1. Laws and regulations -2. Competitive pressure -3. Risk of monetary or reputation loss - -Working with project management or auditing, the primary concern here is -the risk of legal ramifications. Since the primary risk comes from laws -and regulations, companies will require that any project that involves -user data must follow all the rules of those laws so that the company -can protect itself from fines or other penalties. - -Following this, companies will consider best practices in order to place -itself in a competitive position (e.g. Google vs. Apple) and review any -recent or ongoing litigation against companies regarding user data. In a -perfect company, management would then consider the ethical -responsibilities of their organization and discuss their -responsibilities over things like personally-identifiable information. 
 - -However, as we mentioned above, most companies follow the idea of -satisficing, stopping once they have met the minimum acceptable -standards and moving on to other decisions. Modern business -culture in the United States dictates that profits are the golden -measure of how well a company or manager is performing, so we often -don't think about our responsibilities beyond these basic standards. - -Not all situations demand excellence, but I believe that applying any -philosophy as a broad stroke across one's life can be a mistake. We must -be able to think critically about what we are doing as we do it and ask -ourselves a few questions. Have I done everything I can in this -situation? Is mediocrity an acceptable outcome, or should we strive for -perfection, even if we can't attain it? - -Taking a few moments to think critically throughout our day, as we make -decisions, can have a tremendous effect on the outcomes we create. diff --git a/content/blog/2020-12-27-website-redesign.md b/content/blog/2020-12-27-website-redesign.md new file mode 100644 index 0000000..9224348 --- /dev/null +++ b/content/blog/2020-12-27-website-redesign.md @@ -0,0 +1,104 @@ ++++ +date = 2020-12-27 +title = "Redesigning My Website: The 5 KB Result" +description = "" +draft = false ++++ + +# A Brief History + +As a form of continuous learning and entertainment, I\'ve been running a +handful of websites since 2016 when I took my first programming courses +in college. I maintain one main website, the place I consider the +official website to represent me. Under this site, I have a handful of +subdirectories and subdomains. + +One of the parts I\'ve enjoyed the most about web development is the +aspect of designing an identity for a web page and working to find +exciting ways to display the site\'s content. Inevitably, this means +I\'ve changed the designs for my websites more times than I could +possibly count. 
Since I don\'t really host anything on my main webpage +that\'s vital, it allows me the freedom to change things as inspiration +strikes. + +Historically, I\'ve relied on core utilities for spacing, components, +and layouts from [Bootstrap](https://getbootstrap.com) and added custom +CSS for fonts, accents, colors, and other items. I also tend to create +sites with no border radius on items, visible borders, and content that +takes up the entire screen (using whitespace inside components instead +of whitespace around my components). + +# The Redesign Process + +About a week ago, I found myself wishing for a new design yet again. The +prior design was largely inspired by IBM\'s [Carbon Design +System](https://www.carbondesignsystem.com) and relied on jQuery, +Bootstrap, along with some compressed +[.webp](https://en.wikipedia.org/wiki/WebP) images. + +To anyone who knows my preferences toward web design - and even in my +personal life - it should be no surprise that I immediately started +looking for inspiration on minimalism. While there are some decent +minimalistic designs on sites like +[Dribbble](https://dribbble.com/search/shots/popular/web-design?q=minimalism), +people seem to mostly discuss [brutalist web +design](https://brutalist-web.design) when you ask about minimalism. +While brutalist web design doesn\'t have to be minimal, it often is. + +I suppose, in a way, I did create a brutalist website since my HTML is +semantic and accessible, hyperlinks are colored and underlined, and all +native browser functions like scrolling and the back button work as +expected. However, I didn\'t think about brutalism while designing these +sites. + +The new design followed a simple design process. I walked through the +screens on my blog and asked myself: \"Is this element necessary for a +user?\" This allowed me to first start by removing all javascript, which +had the sole purpose of allowing users to open a collapsed navbar on +mobile. 
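A navbar that never collapses needs no script at all. The sketch below is hypothetical (the class name is illustrative, not from my actual stylesheet), but it shows the general idea: let the links wrap instead of hiding them behind a toggle.

```css
/* Hypothetical sketch: a non-collapsing navbar. Links wrap onto
   new rows on narrow screens, so no JavaScript toggle (and no
   jQuery) is required. */
.site-nav ul {
  display: flex;
  flex-wrap: wrap;
  gap: 0.5rem 1.5rem;
  list-style: none;
  margin: 0;
  padding: 0;
}
```

On a phone-width viewport the links simply flow onto a second row, which is all a minimal site needs.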
Replacing the collapsible navbar allowed me to remove both +jQuery and Bootstrap\'s javascript. + +Next, I removed things like author names (since I\'m literally the only +person who will ever write on this site), multiple ways to click on a +blog post card, blog post descriptions, and the scroll-to-top button. It +also helped to move all categories to a single page, rather than have +each category on its own page. + +The final big piece to finish the +\"[KonMari](https://en.wikipedia.org/wiki/Marie_Kondo#KonMari_method)\"-like +part of my process was to remove Bootstrap CSS in its entirety. However, +this meant pulling out a few very useful classes, such as +`.img-fluid` and the default font stacks to keep in my custom +CSS. + +After removing all the unnecessary pieces, I was finally able to +reorganize my content and add a very small amount of custom CSS to make +everything pretty. This took a brief amount of time, effectively just +consisting of me converting `
div` tags into things like +`ul
    ` lists and choosing accent colors. + +# The Results + +## Reflection + +So, what did all of this reorganizing do to my webpages? Well, first, my +websites are now **ridiculously fast**. Since the prior designs were +also minimal and didn\'t have many images, they measured in at around +300 KB to 600 KB in Firefox\'s Network Monitor. After making the +changes, my main site is at 5 KB transferred (22 KB total), and my blog +is at 6.5 KB transferred (13 KB total). **That means the redesigned +pages are less than 2% the size of the old designs.** + +Google Lighthouse ranks the new webpage at 100 in performance, +accessibility, and best practices, with SEO at 92 since they think tap +targets are not sized appropriately for mobile users. First Contentful +Paint times for the pages are under 0.8 seconds with 0 ms of blocking +time. However, the blog subdomain ranks at 100 for all four categories! +First Contentful Paint times for the blog homepage are under 1.0 seconds +with 0 ms of blocking time, because the CSS for my blog lives in a +separate CSS file, while the CSS for my main website is embedded +directly in the HTML file. + +Now that everything is complete, I can confidently say I\'m happy with +the result and proud to look at the fastest set of websites I\'ve +created so far. diff --git a/content/blog/2020-12-27-website-redesign.org b/content/blog/2020-12-27-website-redesign.org deleted file mode 100644 index cfa8795..0000000 --- a/content/blog/2020-12-27-website-redesign.org +++ /dev/null @@ -1,96 +0,0 @@ -#+title: Redesigning My Website: The 5 KB Result -#+date: 2020-12-27 -#+description: A retrospective on my recent website redesign. -#+filetags: :sysadmin: - -* A Brief History -As a form of continuous learning and entertainment, I've been running a -handful of websites since 2016 when I took my first programming courses -in college. I maintain one main website, the place I consider the -official website to represent me. 
Under this site, I have a handful of -subdirectories and subdomains. - -One of the parts I've enjoyed the most about web development is the -aspect of designing an identity for a web page and working to find -exciting ways to display the site's content. Inevitably, this means I've -changed the designs for my websites more times than I could possibly -count. Since I don't really host anything on my main webpage that's -vital, it allows me the freedom to change things as inspiration strikes. - -Historically, I've relied on core utilities for spacing, components, and -layouts from [[https://getbootstrap.com][Bootstrap]] and added custom -CSS for fonts, accents, colors, and other items. I also tend to create -sites with no border radius on items, visible borders, and content that -takes up the entire screen (using whitespace inside components instead -of whitespace around my components). - -* The Redesign Process -About a week ago, I found myself wishing for a new design yet again. The -prior design was largely inspired by IBM's -[[https://www.carbondesignsystem.com][Carbon Design System]] and relied -on jQuery, Bootstrap, along with some compressed -[[https://en.wikipedia.org/wiki/WebP][.webp]] images. - -To anyone who knows my preferences toward web design - and even in my -personal life - it should be no surprise that I immediately started -looking for inspiration on minimalism. While there are some decent -minimalistic designs on sites like -[[https://dribbble.com/search/shots/popular/web-design?q=minimalism][Dribbble]], -people seem to mostly discuss [[https://brutalist-web.design][brutalist -web design]] when you ask about minimalism. While brutalist web design -doesn't have to be minimal, it often is. - -I suppose, in a way, I did create a brutalist website since my HTML is -semantic and accessible, hyperlinks are colored and underlined, and all -native browser functions like scrolling and the back button work as -expected. 
However, I didn't think about brutalism while designing these -sites. - -The new design followed a simple design process. I walked through the -screens on my blog and asked myself: "Is this element necessary for a -user?" This allowed me to first start by removing all javascript, which -had the sole purpose of allowing users to open a collapsed navbar on -mobile. Replacing the collapsible navbar allowed me to remove both -jQuery and Bootstrap's javascript. - -Next, I removed things like author names (since I'm literally the only -person who will ever write on this site), multiple ways to click on a -blog post card, blog post descriptions, and the scroll-to-top button. It -also helped to move all categories to a single page, rather than have -each category on its own page. - -The final big piece to finish the -"[[https://en.wikipedia.org/wiki/Marie_Kondo#KonMari_method][KonMari]]"-like -part of my process was to remove Bootstrap CSS in its entirety. However, -this meant pulling out a few very useful classes, such as =.img-fluid= -and the default font stacks to keep in my custom CSS. - -After removing all the unnecessary pieces, I was finally able to -reorganize my content and add a very small amount of custom CSS to make -everything pretty. This took a brief amount of time, effectively just -consisting of me converting =
div= tags into things like =ul
      = lists -and choosing accent colors. - -* The Results -** Reflection -So, what did all of this reorganizing do to my webpages? Well, first, my -websites are now *ridiculously fast*. Since the prior designs were also -minimal and didn't have many images, they measured up in Firefox's -Network Monitor around 300 KB - 600KB. After making the changes, my main -site is at 5 KB transferred (22 KB total), and my blog is at 6.5 KB -transferred (13 KB total). *That means the redesigned pages are less -than 2% the size of the old designs.* - -Google Lighthouse ranks the new webpage as 100 in performance, -accessibility, and best practices, with SEO at 92 since they think tap -targets are not sized appropriately for mobile users. First contextual -paints of the pages are under 0.8 seconds with 0 ms of blocking time. -However, the blog subdomain ranks at 100 for all four categories! First -contextual paints of the blog homepage are under 1.0 seconds with 0 ms -of blocking time, due to the fact that the CSS for my blog is within a -separate CSS file, and the CSS for my main website is simply embedded in -the HTML file. - -Now that everything is complete, I can confidently say I'm happy with -the result and proud to look at the fastest set of websites I've created -so far. diff --git a/content/blog/2020-12-28-neon-drive.md b/content/blog/2020-12-28-neon-drive.md new file mode 100644 index 0000000..bce7223 --- /dev/null +++ b/content/blog/2020-12-28-neon-drive.md @@ -0,0 +1,95 @@ ++++ +date = 2020-12-28 +title = "Neon Drive: A Nostalgic 80s Arcade Racing Game" +description = "" +draft = false ++++ + +# Game Description + +[Neon Drive](https://store.steampowered.com/app/433910/Neon_Drive/) +presents itself as a simple arcade-style game inspired by the arcade +race games of the 1980s, yet it has managed to take up hours of my life +without much effort. 
The game description, directly from the Steam page, +is intriguing enough to entice anyone who\'s been looking for a good +arcade racing game: + +> Neon Drive is a slick retro-futuristic arcade game that will make your +> brain melt. You\'ve been warned. From beautiful cityscapes and ocean +> roads to exploding enemy spaceships, Neon Drive has it all. + +# Gameplay + +The game holds true to the +[retro-futurism](https://en.wikipedia.org/wiki/Retrofuturism) style, +including chrome female robots, pixelated arcade machines, and +[outrun](https://teddit.net/r/outrun/) aesthetics. + +Each level of the game is shown as a separate arcade machine. Each +arcade machine lets you play on Normal, Hard, Insane, Practice, and Free +Run. To beat each arcade, you must reach the end of the level without +crashing your car into the various obstacles on the course. Basic levels +let you move left or right to avoid blocks in the road. Later levels put +you through other tests, such as dodging traffic or blasting asteroids. + +The game uses synthwave music to keep you on track to make the correct +moves by timing the beats of the songs to the correct moves on the +screen. It reminds me of the early Guitar Hero games, as well as mobile +apps like VOEZ - repetition and staying on-beat is the only way to win. + +# In-Game Screenshots + +Taking a look at the main menu, you can see that Neon Drive plays into +every stereotype you can think of around retro-futuristic, synthwave +arcades (in a good way). + +![Neon Drive +Menu](https://img.cleberg.net/blog/20201228-neon-drive/neon_drive_menu.png) + +Once you get into the first level, we see that the choice of car fits +right in with the stereotypical cars of the 80s, like the +[DeLorean](https://en.wikipedia.org/wiki/DMC_DeLorean) or the [Ferrari +F40](https://en.wikipedia.org/wiki/Ferrari_F40). Each new level comes +with new color schemes and cars, so you should never get tired of the +aesthetic. 
+ +![Neon Drive +Race](https://img.cleberg.net/blog/20201228-neon-drive/neon_drive_race.png) + +Personally, I love the orange and blue colors used in level 2: + +![Level +2](https://img.cleberg.net/blog/20201228-neon-drive/neon_drive_level_2.png) + +If you\'re the competitive type and getting 100% on all arcade machines +isn\'t enough, there are leaderboards for both the regular game and the +endurance game mode. + +![Leaderboard](https://img.cleberg.net/blog/20201228-neon-drive/neon_drive_leaderboard.png) + +# Other Suggestions + +Neon Drive sits nicely within the well-founded cult genre of Outrun. +Other games that I\'ve enjoyed in this same spectrum are: + +- [Far Cry 3: Blood + Dragon](https://store.steampowered.com/app/233270/Far_Cry_3__Blood_Dragon/) +- [Retrowave](https://store.steampowered.com/app/1239690/Retrowave/) +- [Slipstream](https://store.steampowered.com/app/732810/Slipstream/) + +Although these games aren\'t necessarily in the same genre, they do have +aspects that place them close enough to interest gamers who enjoyed +Neon Drive: + +- [Black Ice](https://store.steampowered.com/app/311800/Black_Ice/) +- [Cloudpunk](https://store.steampowered.com/app/746850/Cloudpunk/) +- [Need for Speed: + Heat](https://store.steampowered.com/app/1222680/Need_for_Speed_Heat/) +- [VirtuaVerse](https://store.steampowered.com/app/1019310/VirtuaVerse/) + +Of course, if all you really care about is the arcade aspect of these +games, you can check out the [Atari +Vault](https://store.steampowered.com/app/400020/Atari_Vault/) or any of +the other classic games sold on Steam by companies like Namco and Atari. +For Nintendo games, you\'d have to settle for buying used classic +consoles or delving into the world of emulation. 
diff --git a/content/blog/2020-12-28-neon-drive.org b/content/blog/2020-12-28-neon-drive.org deleted file mode 100644 index 9160b7f..0000000 --- a/content/blog/2020-12-28-neon-drive.org +++ /dev/null @@ -1,83 +0,0 @@ -#+title: Neon Drive: A Nostalgic 80s Arcade Racing Game -#+date: 2020-12-28 -#+description: A video game review for Neon Drive. -#+filetags: :gaming: - -* Game Description -[[https://store.steampowered.com/app/433910/Neon_Drive/][Neon Drive]] presents itself as a simple arcade-style game inspired by the arcade -race games of the 1980s, yet it has managed to take up hours of my life without -much effort. The game description, directly from the Steam page, is intriguing -enough to entice anyone who's been looking for a good arcade racing game: - -#+begin_quote -Neon Drive is a slick retro-futuristic arcade game that will make your brain -melt. You've been warned. From beautiful cityscapes and ocean roads to exploding -enemy spaceships, Neon Drive has it all. -#+end_quote - -* Gameplay -The game holds true to the [[https://en.wikipedia.org/wiki/Retrofuturism][retro-futurism]] style, including chrome female robots, -pixelated arcade machines, and [[https://teddit.net/r/outrun/][outrun]] aesthetics. - -Each level of the game is shown as a separate arcade machine. Each arcade -machine lets you play on Normal, Hard, Insane, Practice, and Free Run. To beat -each arcade, you must reach the end of the level without crashing your car into -the various obstacles on the course. Basic levels let you move left or right to -avoid blocks in the road. Later levels put you through other tests, such as -dodging traffic or blasting asteroids. - -The game uses synthwave music to keep you on track to make the correct moves by -timing the beats of the songs to the correct moves on the screen. It reminds me -of the early Guitar Hero games, as well as mobile apps like VOEZ - repetition -and staying on-beat is the only way to win. 
- -* In-Game Screenshots -Taking a look at the main menu, you can see that Neon Drive plays into every -stereotype you can think of around retro-futuristic, synthwave arcades (in a -good way). - -#+caption: Neon Drive Menu -[[https://img.cleberg.net/blog/20201228-neon-drive/neon_drive_menu.png]] - -Once you get into the first level, we see that the choice of car fits right in -with the stereotypical cars of the 80s, like the [[https://en.wikipedia.org/wiki/DMC_DeLorean][DeLorean]] or the [[https://en.wikipedia.org/wiki/Ferrari_F40][Ferrari F40]]. -Each new level comes with new color schemes and cars, so you should never get -tired of the aesthetic. - -#+caption: Neon Drive Race -[[https://img.cleberg.net/blog/20201228-neon-drive/neon_drive_race.png]] - -Personally, I love the orange and blue colors used in level 2: - -#+caption: Level 2 -[[https://img.cleberg.net/blog/20201228-neon-drive/neon_drive_level_2.png]] - -If you're the competitive type and getting 100% on all arcade machines isn't -enough, there are leaderboards for the regular part of the game, and the -endurance game mode. - -#+caption: Leaderboard -[[https://img.cleberg.net/blog/20201228-neon-drive/neon_drive_leaderboard.png]] - -* Other Suggestions -Neon Drive sits nicely within the well-founded cult genre of Outrun. 
Other games -that I've enjoyed in this same spectrum are: - -- [[https://store.steampowered.com/app/233270/Far_Cry_3__Blood_Dragon/][Far - Cry 3: Blood Dragon]] -- [[https://store.steampowered.com/app/1239690/Retrowave/][Retrowave]] -- [[https://store.steampowered.com/app/732810/Slipstream/][Slipstream]] - -Although these games aren't necessarily in the same genre, they do have aspects -that place them close enough to interest gamers that enjoyed Neon Drive: - -- [[https://store.steampowered.com/app/311800/Black_Ice/][Black Ice]] -- [[https://store.steampowered.com/app/746850/Cloudpunk/][Cloudpunk]] -- [[https://store.steampowered.com/app/1222680/Need_for_Speed_Heat/][Need - for Speed: Heat]] -- [[https://store.steampowered.com/app/1019310/VirtuaVerse/][VirtuaVerse]] - -Of course, if all you really care about is the arcade aspect of these games, you -can check out the [[https://store.steampowered.com/app/400020/Atari_Vault/][Atari Vault]] or any of the other classic games sold on Steam by -companies like Namco, Atari. For something like Nintendo, you'd have to settle -for buying used classic consoles or delve into the world of emulation. diff --git a/content/blog/2020-12-29-zork.md b/content/blog/2020-12-29-zork.md new file mode 100644 index 0000000..06ccd77 --- /dev/null +++ b/content/blog/2020-12-29-zork.md @@ -0,0 +1,96 @@ ++++ +date = 2020-12-29 +title = "Zork: Let's Explore a Classic" +description = "" +draft = false ++++ + +# Download (Free) + +Before we get into the game itself, you should know that you can +download Zork for free from Infocom\'s [download +page](http://infocom-if.org/downloads/downloads.html). So feel free to +boot it up and take a ride back to the 1980s with this masterpiece. + +# Game Description + +Zork is an interactive, text-based computer game originally released in +1980. This series, split into three separate games, introduced a robust +and sophisticated text parser to gamers. 
People were largely used to the +simple commands used in the popular game [Colossal Cave +Adventure](https://en.wikipedia.org/wiki/Colossal_Cave_Adventure), but +Zork allowed users to send more complex commands that included +prepositions and conjunctions. + +Zork tracks your score as you explore the map, find tools, and collect +trophy items (e.g., a jewel-encrusted egg). When you place your trophy +items in the trophy case found in the Living Room area, you gain score +points. Collecting the Twenty Treasures of Zork and placing them within +the trophy case wins the game. However, you must explore the map, solve +puzzles, and avoid being eaten by a grue to collect these treasures. + +# The Map + +Since Zork is a vast and complex game, it helps to have a map as you +explore and collect your trophies. However, if you want to play the game +as it was truly intended, you should try to play it without using the +map. + +![Zork Map](https://img.cleberg.net/blog/20201229-zork/zork_map.png) + +*[Map Source](https://www.filfre.net/2012/01/exploring-zork-part-1/)* + +# In-Game Screenshots + +After playing the game (for the first time ever) for several weeks +around 2014, I was finally able to beat the game with some online help +to find the last couple items. As I was writing this post, I installed +the game again to grab some screenshots to show off the true glory of +this game. As noted in [Jimmy Maher\'s +playthrough](https://www.filfre.net/2012/01/exploring-zork-part-1/), the +original Zork games looked quite a bit different due to the older +hardware of computers like the Apple II and multiple bug fixes that +Infocom pushed out after the game\'s initial release. My play-through +uses the [Zork +Anthology](https://store.steampowered.com/app/570580/Zork_Anthology/) +version, which utilizes DOSBox on Windows. + +The first screenshot here shows the introductory information, which +doesn\'t include instructions of any kind for the player. 
If you +haven\'t played text adventures before, try to use simple commands like +\"go west,\" \"look around,\" or \"hit troll with elvish sword.\" + +![Zork Screen, pt. +1](https://img.cleberg.net/blog/20201229-zork/zork_01.png) + +In this second screenshot, we see the player has entered the house and +found the trophy case in the living room. The lantern and sword in this +room allow the player to explore dark areas and attack enemies. If you +don\'t use the lantern, you won\'t be able to see anything in dark +areas, and you may be eaten by a grue. + +![Zork Screen, pt. +2](https://img.cleberg.net/blog/20201229-zork/zork_02.png) + +Finally, we see that the player has found the first treasure: a +jewel-encrusted egg. These treasures can be taken back to the house and +placed in the trophy case or carried until you feel like you want to put +things away. + +![Zork Screen, pt +3.](https://img.cleberg.net/blog/20201229-zork/zork_03.png) + +# Conclusion + +It\'s been quite a few years since I first played Zork, but I clearly +remember the late nights and bloodshot eyes that helped me find all the +treasures. This game is well worth the time and effort, even though the +text-based aspect may be off-putting to gamers who didn\'t have to grow +up playing games without graphics. However, I believe that the strategy +and skills learned in early video games like Zork can actually help you, +even when playing newer games. + +If you do decide to play Zork, you can download Zork I, II, and III from +Infocom\'s [download +page](http://infocom-if.org/downloads/downloads.html) for free or search +the internet for an online version. diff --git a/content/blog/2020-12-29-zork.org b/content/blog/2020-12-29-zork.org deleted file mode 100644 index 5225517..0000000 --- a/content/blog/2020-12-29-zork.org +++ /dev/null @@ -1,77 +0,0 @@ -#+title: Zork: Let's Explore a Classic -#+date: 2020-12-29 -#+description: A video game review for Zork. 
-#+filetags: :gaming: - -* Download (Free) -Before we get into the game itself, you should know that you can download Zork -for free from Infocom's [[http://infocom-if.org/downloads/downloads.html][download page]]. So feel free to boot it up and take a -ride back to the 1980s with this masterpiece. - -* Game Description -Zork is an interactive, text-based computer game originally released in 1980. -This series, split into three separate games, introduced a robust and -sophisticated text parser to gamers. People were largely used to the simple -commands used in the popular game [[https://en.wikipedia.org/wiki/Colossal_Cave_Adventure][Colossal Cave Adventure]], but Zork allowed -users to send more complex commands that included prepositions and conjunctions. - -Zork tracks your score as you explore the map, find tools, and collect trophy -items (e.g., a jewel-encrusted egg). When you place your trophy items in the -trophy case found in the Living Room area, you gain score points. Collecting the -Twenty Treasures of Zork and placing them within the trophy case wins the game. -However, you must explore the map, solve puzzles, and avoid being eaten by a -grue to collect these treasures. - -* The Map -Since Zork is a vast and complex game, it helps to have a map as you explore and -collect your trophies. However, if you want to play the game as it was truly -intended, you should try to play it without using the map. - -#+caption: Zork Map -[[https://img.cleberg.net/blog/20201229-zork/zork_map.png]] - -/[[https://www.filfre.net/2012/01/exploring-zork-part-1/][Map Source]]/ - -* In-Game Screenshots -After playing the game (for the first time ever) for several weeks around 2014, -I was finally able to beat the game with some online help to find the last -couple items. As I was writing this post, I installed the game again to grab -some screenshots to show off the true glory of this game. 
As noted in [[https://www.filfre.net/2012/01/exploring-zork-part-1/][Jimmy -Maher's playthrough]], the original Zork games looked quite a bit different due to -the older hardware of computers like the Apple II and multiple bug fixes that -Infocom pushed out after the game's initial release. My play-through uses the -[[https://store.steampowered.com/app/570580/Zork_Anthology/][Zork Anthology]] version, which utilizes DOSBox on Windows. - -The first screenshot here shows the introductory information, which doesn't -include instructions of any kind for the player. If you haven't played text -adventures before, try to use simple commands like "go west," "look around," or -"hit troll with elvish sword." - -#+caption: Zork Screen, pt. 1 -[[https://img.cleberg.net/blog/20201229-zork/zork_01.png]] - -In this second screenshot, we see the player has entered the house and found the -trophy case in the living room. The lantern and sword in this room allow the -player to explore dark areas and attack enemies. If you don't use the lantern, -you won't be able to see anything in dark areas, and you may be eaten by a grue. - -#+caption: Zork Screen, pt. 2 -[[https://img.cleberg.net/blog/20201229-zork/zork_02.png]] - -Finally, we see that the player has found the first treasure: a jewel-encrusted -egg. These treasures can be taken back to the house and placed in the trophy -case or carried until you feel like you want to put things away. - -#+caption: Zork Screen, pt 3. -[[https://img.cleberg.net/blog/20201229-zork/zork_03.png]] - -* Conclusion -It's been quite a few years since I first played Zork, but I clearly remember -the late nights and bloodshot eyes that helped me find all the treasures. This -game is well worth the time and effort, even though the text-based aspect may be -off-putting to gamers who didn't have to grow up playing games without graphics. 
However, I believe that the strategy and skills learned in early video games -like Zork can actually help you, even when playing newer games. - -If you do decide to play Zork, you can download Zork I, II, and III from -Infocom's [[http://infocom-if.org/downloads/downloads.html][download page]] for free or search the internet for an online version. diff --git a/content/blog/2021-01-01-seum.md b/content/blog/2021-01-01-seum.md new file mode 100644 index 0000000..49cac88 --- /dev/null +++ b/content/blog/2021-01-01-seum.md @@ -0,0 +1,93 @@ ++++ +date = 2021-01-01 +title = "SEUM: Speedrunners from Hell" +description = "" +draft = false ++++ + +# Game Description + +[SEUM: Speedrunners from +Hell](https://store.steampowered.com/app/457210/SEUM_Speedrunners_from_Hell/) +is an incredibly fast-paced mashup of a puzzle game and a free-running +game. Created by [Pine Studio](https://pinestudio.co) in early 2016 and +first released as a [demo on +GameJolt](https://gamejolt.com/games/seum-speedrunners-from-hell-demo/154868), +this game was successfully green-lit on Steam and has amassed a cult +following on multiple consoles. + +Here\'s the game description straight from the developers: + +> Like a bastard child of Quake 3 and Super Meat Boy, SEUM: Speedrunners +> from Hell is truly hardcore and focuses on speed and fast reaction. + +# Story + +SEUM does a great job setting the stage when you open the game for the +first time, playing an animated video in the form of comic book images. +You see Marty, the protagonist, sitting around drinking his beer as +Satan busts down the door, cuts off Marty\'s hand, and steals all his +beer and bacon. As Satan is leaving, Marty whips a vinyl record across +the room and cuts off one of Satan\'s hands. This hand is what allows +you to use all the powers in the game. 
+ +Check out the screenshot below for one of the first panels of the +storyline: + +![SEUM Story](https://img.cleberg.net/blog/20210101-seum/seum_story.png) + +# Gameplay + +To accomplish each level, you will need to get to the final blue portal +under a certain time limit. You can beat a level by getting to the +portal before the time meter runs out or \"Dominate\" a level by beating +it within a shorter time limit (noted by a bright red/orange color in +the time meter). + +The portal is usually set behind locked gates, protected by dangerous +defenses, or in hard-to-reach places. To reach the portal, you\'ll need +to go through any existing orange portals, light all yellow beacons, +avoid things like fireballs and blades, or use any satanic power orbs +lying around. These special abilities include: + +- Gravity +- Teleport +- Rewind +- Spawn platform +- Roar (DLC) +- Rocket (DLC) +- Shadow world (DLC) + +For the main storyline, there are nine floors to beat. Each floor +contains nine regular levels, one boss level, and one bonus level; +although you don\'t technically need to beat all levels to advance to +the next floor. + +![SEUM Floor +Menu](https://img.cleberg.net/blog/20210101-seum/seum_floor.png) + +# In-Game Screenshots + +The main menu gives you plenty of options for tuning your system, +playing main levels, playing the DLC, or exploring the other game modes. + +![SEUM Main +Menu](https://img.cleberg.net/blog/20210101-seum/seum_menu.png) + +Once you enter a level, you can look around and strategize before +starting. Clicking any button will start the menu, and you\'ll have to +restart if you die. One of the great things about SEUM is that it has +great keyboard shortcuts. You can quickly smash `R` to +restart the level or `M` to return to the level menu. + +![SEUM Level](https://img.cleberg.net/blog/20210101-seum/seum_level.png) + +Once you\'re able to win a level, you\'ll see the high scores and can +watch replays of the best scores. 
+ +![SEUM Win](https://img.cleberg.net/blog/20210101-seum/seum_win.png) + +Each regular level contains a beer in a disguised location that may take +some out-of-the-box thinking. + +![SEUM Beer](https://img.cleberg.net/blog/20210101-seum/seum_beer.png) diff --git a/content/blog/2021-01-01-seum.org b/content/blog/2021-01-01-seum.org deleted file mode 100644 index 4e483a8..0000000 --- a/content/blog/2021-01-01-seum.org +++ /dev/null @@ -1,92 +0,0 @@ -#+title: SEUM: Speedrunners from Hell -#+date: 2021-01-01 -#+description: A video game review for SEUM: Speedrunners from Hell. -#+filetags: :gaming: - -* Game Description -[[https://store.steampowered.com/app/457210/SEUM_Speedrunners_from_Hell/][SEUM: -Speedrunners from Hell]] is an incredibly fast-paced mashup of a puzzle -game and a free-running game. Created by [[https://pinestudio.co][Pine -Studio]] in early 2016 and first released as a -[[https://gamejolt.com/games/seum-speedrunners-from-hell-demo/154868][demo -on GameJolt]], this game was successfully green-lit on Steam and has -amassed a cult following on multiple consoles. - -Here's the game description straight from the developers: - -#+begin_quote -Like a bastard child of Quake 3 and Super Meat Boy, SEUM: Speedrunners -from Hell is truly hardcore and focuses on speed and fast reaction. -#+end_quote - -* Story -SEUM does a great job setting the stage when you open the game for the -first time, playing an animated video in the form of comic book images. -You see Marty, the protagonist, sitting around drinking his beer as -Satan busts down the door, cuts his hand off, and steals all his beer -and bacon. As Satan is leaving, Marty whips a vinyl record across the -room and cuts off one of Satan's hands. This hand is what allows you to -use all the powers in the game. 
- -Check out the screenshot below for one of the first panels of the -storyline: - -#+caption: SEUM Story -[[https://img.cleberg.net/blog/20210101-seum/seum_story.png]] - -* Gameplay -To accomplish each level, you will need to get to the final blue portal -under a certain time limit. You can beat a level by getting to the -portal before the time meter runs out or "Dominate" a level by beating -it within a shorter time limit (noted by a bright red/orange color in -the time meter). - -The portal is usually set behind locked gates, protected by dangerous -defenses, or in hard-to-reach places. To reach the portal, you'll need -to go through any existing orange portals, light all yellow beacons, -avoid things like fireballs and blades, or use any satanic power orbs -lying around. These special abilities include: - -- Gravity -- Teleport -- Rewind -- Spawn platform -- Roar (DLC) -- Rocket (DLC) -- Shadow world (DLC) - -For the main storyline, there are nine floors to beat. Each floor -contains nine regular levels, one boss level, and one bonus level; -although you don't technically need to beat all levels to advance to the -next floor. - -#+caption: SEUM Floor Menu -[[https://img.cleberg.net/blog/20210101-seum/seum_floor.png]] - -* In-Game Screenshots -The main menu gives you plenty of options for tuning your system, -playing main levels, playing the DLC, or exploring the other game modes. - -#+caption: SEUM Main Menu -[[https://img.cleberg.net/blog/20210101-seum/seum_menu.png]] - -Once you enter a level, you can look around and strategize before -starting. Clicking any button will start the menu, and you'll have to -restart if you die. One of the great things about SEUM is that it has -great keyboard shortcuts. You can quickly smash =R= to restart the level -or =M= to return to the level menu. 
- -#+caption: SEUM Level -[[https://img.cleberg.net/blog/20210101-seum/seum_level.png]] - -Once you're able to win a level, you'll see the high scores and can -watch replays of the best scores. - -#+caption: SEUM Win -[[https://img.cleberg.net/blog/20210101-seum/seum_win.png]] - -Each regular level contains a beer in a disguised location that may take -some out-of-the-box thinking. - -#+caption: SEUM Beer -[[https://img.cleberg.net/blog/20210101-seum/seum_beer.png]] diff --git a/content/blog/2021-01-04-fediverse.md b/content/blog/2021-01-04-fediverse.md new file mode 100644 index 0000000..e663f45 --- /dev/null +++ b/content/blog/2021-01-04-fediverse.md @@ -0,0 +1,107 @@ ++++ +date = 2021-01-04 +title = "A Simple Guide to the Fediverse" +description = "" +draft = false ++++ + +# What is the Fediverse? + +The fediverse is a federated universe of servers commonly used for +sharing content, like social media. So, instead of having to rely on a +single organization to run the server (e.g. Facebook), the fediverse is +a giant collection of servers across the world, owned by many people and +organizations. + +Take a look at this depiction of a federated network. Each server in +this photo is owned and run by different administrators/owners. +Federated networks are best explained as email servers: you have an +email account that exists on a server (e.g. Outlook), your friend has an +account on a different server (e.g. GMail), and another friend has an +account on a third server (e.g. ProtonMail). All three of you can talk +and communicate back and forth without having to be on the same server. +However, responsible email admins are there to set rules and control the +traffic going in/out of the server. + +![Federated services +diagram](https://img.cleberg.net/blog/20210104-a-simple-guide-to-the-fediverse/federated-example.svg) + +The main objective of this architecture is to decentralize the control +within the internet connections. 
For example, if you run your own
+Mastodon instance, you and your users can\'t be censored or impacted in
+any way by authorities of another Mastodon instance. Some users have
+praised these features due to recent criticism of popular social media
+websites that may be over-censoring their users.
+
+This strategy is great for making sure the social web isn\'t
+controlled by a single organization, but it also has some downsides. If
+I create a Mastodon instance and get a ton of users to sign up, I can
+shut the server down at any time. That means you\'re at risk of losing
+the content you\'ve created unless you back it up, or the server backs
+it up for you. Also, depending on the software used (e.g. Mastodon,
+Pixelfed, etc.), censorship may still be an issue if the server admins
+decide they want to censor their users. Now, censorship isn\'t always a
+bad thing and can even benefit the community as a whole, but you\'ll
+want to determine which servers align with your idea of proper
+censorship.
+
+However, these are risks that we take when we sign up for any online
+platform. Whatever your reason is for trying out federated social
+networks, they are part of the future of the internet. Still, the
+popularity of these services has yet to be determined, especially
+given the difficulty of understanding and signing up for these
+platforms. Perhaps increased regulation and litigation against current
+social media sites will push more users into the fediverse.
+
+# Federated Alternatives to Popular Sites
+
+The list below is a small guide that will show you federated
+alternatives to current popular websites. There are many more out there,
+so go and explore: you might just find the perfect home.
+ +## Reddit + +- [Lemmy](https://lemmy.ml/instances) + +## Twitter/Facebook/Tumblr + +- [Mastodon](https://joinmastodon.org) +- [Diaspora](https://diasporafoundation.org) +- [Friendica](https://friendi.ca) +- [GNU Social](https://gnusocial.network) +- [Pleroma](https://pleroma.social) + +## Instagram + +- [Pixelfed](https://pixelfed.org) + +## Slack/Discord + +- [Matrix](https://element.io) + +## Youtube/Vimeo + +- [Peertube](https://joinpeertube.org) + +## Spotify/Soundcloud + +- [Funkwhale](https://funkwhale.audio) + +## Podcasting + +- [Pubcast](https://pubcast.pub) + +## Medium/Blogger + +- [WriteFreely](https://writefreely.org) + +# Get Started + +The best way to get started is to simply sign up and learn as you go. If +you\'re comfortable signing up through a Mastodon, Pleroma, or Friendica +server, here is [a list of themed +servers](https://fediverse.party/en/portal/servers) to choose from. If +you\'re looking for something else, try a web search for a federated +alternative to your favorite sites. + +Find a server that focuses on your passions and start there! diff --git a/content/blog/2021-01-04-fediverse.org b/content/blog/2021-01-04-fediverse.org deleted file mode 100644 index 5224b17..0000000 --- a/content/blog/2021-01-04-fediverse.org +++ /dev/null @@ -1,92 +0,0 @@ -#+title: A Simple Guide to the Fediverse -#+date: 2021-01-04 -#+description: Learn about the basics of the Fediverse. -#+filetags: :social: - -* What is the Fediverse? -The fediverse is a federated universe of servers commonly used for -sharing content, like social media. So, instead of having to rely on a -single organization to run the server (e.g. Facebook), the fediverse is -a giant collection of servers across the world, owned by many people and -organizations. - -Take a look at this depiction of a federated network. Each server in -this photo is owned and run by different administrators/owners. 
-Federated networks are best explained as email servers: you have an -email account that exists on a server (e.g. Outlook), your friend has an -account on a different server (e.g. GMail), and another friend has an -account on a third server (e.g. ProtonMail). All three of you can talk -and communicate back and forth without having to be on the same server. -However, responsible email admins are there to set rules and control the -traffic going in/out of the server. - -#+caption: Federated services diagram -[[https://img.cleberg.net/blog/20210104-a-simple-guide-to-the-fediverse/federated-example.svg]] - -The main objective of this architecture is to decentralize the control -within the internet connections. For example, if you run your own -Mastodon instance, you and your users can't be censored or impacted in -any way by authorities of another Mastodon instance. Some users have -praised these features due to recent criticism of popular social media -websites that may be over-censoring their users. - -This strategy is great for making sure control of the social web isn't -controlled by a single organization, but it also has some downsides. If -I create a Mastodon instance and get a ton of users to sign up, I can -shut the server down at any time. That means you're at risk of losing -the content you've created unless you back it up, or the server backs it -up for you. Also, depending on the software used (e.g. Mastodon, -Pixelfed, etc.), censorship may still be an issue if the server admins -decide they want to censor their users. Now, censorship isn't always a -bad thing and can even benefit the community as a whole, but you'll want -to determine which servers align with your idea of proper censorship. - -However, these are risks that we take when we sign up for any online -platform. Whatever your reason is for trying out federated social -networks, they are part of the future of the internet. 
However, the -popularity of these services is still yet to be determined, especially -with the increased difficulty understanding and signing up for these -platforms. Perhaps increased regulation and litigation against current -social media sites will push more users into the fediverse. - -* Federated Alternatives to Popular Sites -The list below is a small guide that will show you federated -alternatives to current popular websites. There are many more out there, -so go and explore: you might just find the perfect home. - -** Reddit -- [[https://lemmy.ml/instances][Lemmy]] - -** Twitter/Facebook/Tumblr -- [[https://joinmastodon.org][Mastodon]] -- [[https://diasporafoundation.org][Diaspora]] -- [[https://friendi.ca][Friendica]] -- [[https://gnusocial.network][GNU Social]] -- [[https://pleroma.social][Pleroma]] - -** Instagram -- [[https://pixelfed.org][Pixelfed]] - -** Slack/Discord -- [[https://element.io][Matrix]] - -** Youtube/Vimeo -- [[https://joinpeertube.org][Peertube]] - -** Spotify/Soundcloud -- [[https://funkwhale.audio][Funkwhale]] - -** Podcasting -- [[https://pubcast.pub][Pubcast]] - -** Medium/Blogger -- [[https://writefreely.org][WriteFreely]] - -* Get Started -The best way to get started is to simply sign up and learn as you go. If -you're comfortable signing up through a Mastodon, Pleroma, or Friendica -server, here is [[https://fediverse.party/en/portal/servers][a list of -themed servers]] to choose from. If you're looking for something else, -try a web search for a federated alternative to your favorite sites. - -Find a server that focuses on your passions and start there! 
diff --git a/content/blog/2021-01-07-ufw.md b/content/blog/2021-01-07-ufw.md
new file mode 100644
index 0000000..da62aac
--- /dev/null
+++ b/content/blog/2021-01-07-ufw.md
@@ -0,0 +1,225 @@
++++
+date = 2021-01-07
+title = "Secure Your Network with the Uncomplicated Firewall (ufw)"
+description = ""
+draft = false
++++
+
+# Uncomplicated Firewall
+
+Uncomplicated Firewall, also known as ufw, is a convenient and
+beginner-friendly way to enforce OS-level firewall rules. For those who
+are hosting servers or any device that is accessible to the world (i.e.,
+by public IP or domain name), it\'s critical that a firewall is properly
+implemented and active.
+
+Ufw is available by default in all Ubuntu installations after 8.04 LTS.
+For other distributions, you can look to install ufw or check if there
+are alternative firewalls installed already. There are usually
+alternatives available, such as Fedora\'s `firewalld` and the
+package available on most distributions: `iptables`. Ufw is
+considered a beginner-friendly front-end to iptables.
+
+[Gufw](https://gufw.org) is available as a graphical user interface
+(GUI) application for users who are uncomfortable setting up a firewall
+through a terminal.
+
+![Gufw
+Screenshot](https://img.cleberg.net/blog/20210107-secure-your-network-with-the-uncomplicated-firewall/gufw.png)
+
+# Getting Help
+
+If you need help figuring out commands, remember that you can use the
+`--help` flag to get a list of options.
+
+```sh
+sudo ufw --help
+```
+
+# Set Default State
+
+The proper way to run a firewall is to set a strict default state and
+slowly open up ports that you want to allow. This helps prevent anything
+malicious from slipping through the cracks. The following command
+prevents all incoming traffic (other than the rules we specify later),
+but you can also set this for outgoing connections, if necessary.
+ +```sh +sudo ufw default deny incoming +``` + +You should also allow outgoing traffic if you want to allow the device +to communicate back to you or other parties. For example, media servers +like Plex need to be able to send out data related to streaming the +media. + +```sh +sudo ufw default allow outgoing +``` + +# Adding Port Rules + +Now that we\'ve disabled all incoming traffic by default, we need to +open up some ports (or else no traffic would be able to come in). If you +need to be able to `ssh` into the machine, you\'ll need to +open up port 22. + +```sh +sudo ufw allow 22 +``` + +You can also issue more restrictive rules. The following rule will allow +`ssh` connections only from machines on the local subnet. + +```sh +sudo ufw allow proto tcp from 192.168.0.0/24 to any port 22 +``` + +If you need to set a rule that isn\'t tcp, just append your connection +type to the end of the rule. + +```sh +sudo ufw allow 1900/udp +``` + +# Enable ufw + +Now that the firewall is configured and ready to go, you can enable the +firewall. + +```sh +sudo ufw enable +``` + +A restart may be required for the firewall to begin operating. + +```sh +sudo reboot now +``` + +# Checking Status + +Now that the firewall is enabled, let\'s check and see what the rules +look like. + +```sh +sudo ufw status numbered +``` + +``` txt +Status: active + + To Action From + -- ------ ---- +[ 1] 22 ALLOW IN Anywhere +[ 2] 22 (v6) ALLOW IN Anywhere (v6) +``` + +# Deleting Rules + +If you need to delete a rule, you need to know the number associated +with that rule. Let\'s delete the first rule in the table above. You\'ll +be asked to confirm the deletion as part of this process. + +```sh +sudo ufw delete 1 +``` + +# Managing App Rules + +Luckily, there\'s a convenient way for installed applications to create +files that ufw can easily implement so that you don\'t have to search +and find which ports your application requires. 
To see if your device
+has any applications with pre-installed ufw rules, execute the following
+command:
+
+```sh
+sudo ufw app list
+```
+
+The results should look something like this:
+
+``` txt
+Available applications:
+  OpenSSH
+  Samba
+  plexmediaserver
+  plexmediaserver-all
+  plexmediaserver-dlna
+```
+
+If you want to get more information on a specific app rule, use the
+`info` command.
+
+```sh
+sudo ufw app info plexmediaserver-dlna
+```
+
+You\'ll get a blurb of info back like this:
+
+``` txt
+Profile: plexmediaserver-dlna
+Title: Plex Media Server (DLNA)
+Description: The Plex Media Server (additional DLNA capability only)
+
+Ports:
+  1900/udp
+  32469/tcp
+```
+
+You can add or delete app rules the same way that you\'d add or delete
+specific port rules.
+
+```sh
+sudo ufw allow plexmediaserver-dlna
+```
+
+```sh
+sudo ufw delete RULE|NUM
+```
+
+# Creating App Rules
+
+If you\'d like to create your own app rule, you\'ll need to create a file
+in the `/etc/ufw/applications.d` directory. Within the file
+you create, you need to make sure the content is properly formatted.
+
+For example, here are the contents of my `plexmediaserver` file,
+which creates three distinct app rules for ufw:
+
+``` config
+[plexmediaserver]
+title=Plex Media Server (Standard)
+description=The Plex Media Server
+ports=32400/tcp|3005/tcp|5353/udp|8324/tcp|32410:32414/udp
+
+[plexmediaserver-dlna]
+title=Plex Media Server (DLNA)
+description=The Plex Media Server (additional DLNA capability only)
+ports=1900/udp|32469/tcp
+
+[plexmediaserver-all]
+title=Plex Media Server (Standard + DLNA)
+description=The Plex Media Server (with additional DLNA capability)
+ports=32400/tcp|3005/tcp|5353/udp|8324/tcp|32410:32414/udp|1900/udp|32469/tcp
+```
+
+So, if I wanted to create a custom app rule called \"mycustomrule,\"
+I\'d create a file and add my content like this:
+
+```sh
+sudo nano /etc/ufw/applications.d/mycustomrule
+```
+
+``` config
+[mycustomrule]
+title=My Custom Rule
+description=This is a temporary ufw app rule.
+ports=88/tcp|9100/udp
+```
+
+Then, I would just enable this rule in ufw.
+
+```sh
+sudo ufw allow mycustomrule
+```
diff --git a/content/blog/2021-01-07-ufw.org b/content/blog/2021-01-07-ufw.org
deleted file mode 100644
index b1e9adf..0000000
--- a/content/blog/2021-01-07-ufw.org
+++ /dev/null
@@ -1,213 +0,0 @@
-#+title: Secure Your Network with the Uncomplicated Firewall (ufw)
-#+date: 2021-01-07
-#+description: A simple guide to the UFW.
-#+filetags: :sysadmin:
-
-* Uncomplicated Firewall
-Uncomplicated Firewall, also known as ufw, is a convenient and
-beginner-friendly way to enforce OS-level firewall rules. For those who
-are hosting servers or any device that is accessible to the world (i.e.,
-by public IP or domain name), it's critical that a firewall is properly
-implemented and active.
-
-Ufw is available by default in all Ubuntu installations after 8.04 LTS.
-For other distributions, you can look to install ufw or check if there
-are alternative firewalls installed already.
There are usually -alternatives available, such as Fedora's =firewall= and the package -available on most distributions: =iptables=. Ufw is considered a -beginner-friendly front-end to iptables. - -[[https://gufw.org][Gufw]] is available as a graphical user interface -(GUI) application for users who are uncomfortable setting up a firewall -through a terminal. - -#+caption: Gufw Screenshot -[[https://img.cleberg.net/blog/20210107-secure-your-network-with-the-uncomplicated-firewall/gufw.png]] - -* Getting Help -If you need help figuring out commands, remember that you can run the -=--help= flag to get a list of options. - -#+begin_src sh -sudo ufw --help -#+end_src - -* Set Default State -The proper way to run a firewall is to set a strict default state and -slowly open up ports that you want to allow. This helps prevent anything -malicious from slipping through the cracks. The following command -prevents all incoming traffic (other than the rules we specify later), -but you can also set this for outgoing connections, if necessary. - -#+begin_src sh -sudo ufw default deny incoming -#+end_src - -You should also allow outgoing traffic if you want to allow the device -to communicate back to you or other parties. For example, media servers -like Plex need to be able to send out data related to streaming the -media. - -#+begin_src sh -sudo ufw default allow outgoing -#+end_src - -* Adding Port Rules -Now that we've disabled all incoming traffic by default, we need to open -up some ports (or else no traffic would be able to come in). If you need -to be able to =ssh= into the machine, you'll need to open up port 22. - -#+begin_src sh -sudo ufw allow 22 -#+end_src - -You can also issue more restrictive rules. The following rule will allow -=ssh= connections only from machines on the local subnet. 
- -#+begin_src sh -sudo ufw allow proto tcp from 192.168.0.0/24 to any port 22 -#+end_src - -If you need to set a rule that isn't tcp, just append your connection -type to the end of the rule. - -#+begin_src sh -sudo ufw allow 1900/udp -#+end_src - -* Enable ufw -Now that the firewall is configured and ready to go, you can enable the -firewall. - -#+begin_src sh -sudo ufw enable -#+end_src - -A restart may be required for the firewall to begin operating. - -#+begin_src sh -sudo reboot now -#+end_src - -* Checking Status -Now that the firewall is enabled, let's check and see what the rules -look like. - -#+begin_src sh -sudo ufw status numbered -#+end_src - -#+begin_src txt -Status: active - - To Action From - -- ------ ---- -[ 1] 22 ALLOW IN Anywhere -[ 2] 22 (v6) ALLOW IN Anywhere (v6) -#+end_src - -* Deleting Rules -If you need to delete a rule, you need to know the number associated -with that rule. Let's delete the first rule in the table above. You'll -be asked to confirm the deletion as part of this process. - -#+begin_src sh -sudo ufw delete 1 -#+end_src - -* Managing App Rules -Luckily, there's a convenient way for installed applications to create -files that ufw can easily implement so that you don't have to search and -find which ports your application requires. To see if your device has -any applications with pre-installed ufw rules, execute the following -command: - -#+begin_src sh -sudo ufw app list -#+end_src - -The results should look something like this: - -#+begin_src txt -Available applications: - OpenSSH - Samba - plexmediaserver - plexmediaserver-all - plexmediaserver-dlna -#+end_src - -If you want to get more information on a specific app rule, use the -=info= command. 
- -#+begin_src sh -sudo ufw app info plexmediaserver-dlna -#+end_src - -You'll get a blurb of info back like this: - -#+begin_src txt -Profile: plexmediaserver-dlna -Title: Plex Media Server (DLNA) -Description: The Plex Media Server (additional DLNA capability only) - -Ports: - 1900/udp - 32469/tcp -#+end_src - -You can add or delete app rules the same way that you'd add or delete -specific port rules. - -#+begin_src sh -sudo ufw allow plexmediaserver-dlna -#+end_src - -#+begin_src sh -sudo ufw delete RULE|NUM -#+end_src - -* Creating App Rules -If you'd like to create you own app rule, you'll need to create a file -in the =/etc/ufw/applications.d= directory. Within the file you create, -you need to make sure the content is properly formatted. - -For example, here are the contents my =plexmediaserver= file, which -creates three distinct app rules for ufw: - -#+begin_src config -[plexmediaserver] -title=Plex Media Server (Standard) -description=The Plex Media Server -ports=32400/tcp|3005/tcp|5353/udp|8324/tcp|32410:32414/udp - -[plexmediaserver-dlna] -title=Plex Media Server (DLNA) -description=The Plex Media Server (additional DLNA capability only) -ports=1900/udp|32469/tcp - -[plexmediaserver-all] -title=Plex Media Server (Standard + DLNA) -description=The Plex Media Server (with additional DLNA capability) -ports=32400/tcp|3005/tcp|5353/udp|8324/tcp|32410:32414/udp|1900/udp|32469/tcp -#+end_src - -So, if I wanted to create a custom app rule called "mycustomrule," I'd -create a file and add my content like this: - -#+begin_src sh -sudo nano /etc/ufw/applications.d/mycustomrule -#+end_src - -#+begin_src config -[mycustomrule] -title=My Custom Rule -description=This is a temporary ufw app rule. -ports=88/tcp|9100/udp -#+end_src - -Then, I would just enable this rule in ufw. 
- -#+begin_src sh -sudo ufw allow mycustomrule -#+end_src diff --git a/content/blog/2021-02-19-macos.md b/content/blog/2021-02-19-macos.md new file mode 100644 index 0000000..26fdaa7 --- /dev/null +++ b/content/blog/2021-02-19-macos.md @@ -0,0 +1,215 @@ ++++ +date = 2021-02-19 +title = "macOS: Testing Out a New OS" +description = "" +draft = false ++++ + +# Diving into macOS + +After spending nearly 15 years working with Windows and 8 years on +Linux, I have experienced macOS for the first time. By chance, my spouse +happened to buy a new MacBook and gifted me their 2013 model. Of course, +I still consider my Linux desktop to be my daily driver and keep Windows +around for gaming needs, but over the past week I\'ve found myself using +the MacBook more and more for things that don\'t require gaming specs or +advanced dev tools. + +# Initial Thoughts + +Before I move on to the technical aspects of my set-up, I want to take +some time and express my thoughts on the overall OS. + +![macOS +Desktop](https://img.cleberg.net/blog/20210219-macos-testing-out-a-new-os/macos-desktop.png) + +As expected, the initial computer setup is a breeze with Mac\'s guided +GUI installer. + +The desktop itself reminds me of GNOME more than anything else I\'ve +seen: even Pantheon from [ElementaryOS](https://elementary.io/), which +people commonly refer to as the closest Linux distro to macOS. The +desktop toolbar is great and far surpasses the utility of the GNOME +toolbar due to the fact that the extensions and icons *actually work*. I +launch macOS and immediately see my shortcuts for Tresorit, Bitwarden, +and Mullvad pop up as the computer loads. + +Even further, the app dock is very useful and will be yet another +familiarity for GNOME users. I know many people like panels instead of +docks, but I\'ve always found docks to have a more pleasing UI. 
However,
+I had to disable the \"Show recent applications in Dock\" preference; I
+can\'t stand items taking up precious screen space if I\'m not currently
+using them. On that same note, it\'s taking me some time to get used to
+the fact that I have to manually quit an app or else it will still stay
+open/active in the dock, even if I\'ve closed out all windows for that
+app (e.g. Firefox).
+
+Overall, I\'m having a lot of fun, and for users who spend a large
+majority of their time performing basic tasks like web browsing,
+writing, watching media, etc., macOS is a fantastic option.
+
+The rest of this post explains the technicalities of how I set up my CLI
+environment to make me feel more at-home, similar to the environments I
+set up on Fedora, Ubuntu, etc.
+
+# Making it Feel Like Home
+
+If you\'re someone who uses Linux primarily, no doubt your first thought
+when booting macOS will be the same as mine was: \"Where is the terminal
+and how do I set up my favorite utilities?\"
+
+Luckily, macOS hasn\'t completely hidden away the development tools from
+the average user. You can easily find the Terminal app in the Launchpad
+area, but it\'s probably not what you\'re used to. I was surprised (and
+happy) to see that the default shell is `zsh`, the shell I
+use on all of my Linux distros. However, the commands are not the same,
+not even the ones you may think are native to the shell. Commands like
+`dir` do not exist, so other native commands like
+`ls -la` or `pwd` are more useful here.
+
+With only a few minutes of installing and tweaking a few packages, I was
+able to recreate a terminal environment that I feel very comfortable
+using. See the image below for a preview of the iTerm2 app with a split
+view between my macOS desktop shell and an SSH session into my server.
+
+![iTerm2](https://img.cleberg.net/blog/20210219-macos-testing-out-a-new-os/iterm2.png)
+
+# Xcode
+
+My first step was to search the web for any hints on how to get
+`zsh` back up to the state I like, with extensions, themes,
+etc. From there, I installed the CLI tools for
+[Xcode](https://developer.apple.com/xcode/), Apple\'s suite of
+development tools.
+
+```sh
+sudo xcode-select -r
+```
+
+```sh
+sudo xcode-select --install
+```
+
+# Homebrew
+
+Next up is to install [Homebrew](https://brew.sh), a nifty package
+manager for macOS.
+
+```sh
+/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)"
+```
+
+I ran into a permission error when installing Homebrew:
+
+``` txt
+Error: Failed to link all completions, docs and manpages:
+  Permission denied @ rb_file_s_symlink - (../../../Homebrew/completions/zsh/_brew, /usr/local/share/zsh/site-functions/_brew)
+Failed during: /usr/local/bin/brew update --force --quiet
+```
+
+I found that the following permission modification worked like a charm.
+However, I noted that some users online discussed the fact that this
+solution may not work if your system has multiple users who use
+Homebrew.
+
+```sh
+sudo chown -R $(whoami) $(brew --prefix)/*
+```
+
+Next up is to ensure Homebrew is updated and cleaned.
+
+```sh
+brew update
+```
+
+```sh
+brew cleanup
+```
+
+# iTerm2
+
+Now that I\'ve installed the basic utilities for development, I moved
+on to installing iTerm2, a much better terminal than the default.
+
+```sh
+brew install --cask iterm2
+```
+
+I also used the `Make iTerm2 Default Term` and
+`Install Shell Integration` options in the iTerm2 application
+menu to make sure I don\'t run into any issues later on with different
+terminals.
+
+We will also install `zsh` so we can use it in iTerm2.
+
+```sh
+brew install zsh
+```
+
+# Oh-My-Zsh
+
+I\'ve shown the great aspects of [Oh My Zsh](https://ohmyz.sh) in other
+blog posts, so I\'ll skip over that speech for now.
Simply install it
+and run an update.
+
+```sh
+sh -c "$(curl -fsSL https://raw.githubusercontent.com/robbyrussell/oh-my-zsh/master/tools/install.sh)"
+```
+
+```sh
+omz update
+```
+
+Finally, restart the iTerm2 application to ensure all changes go into
+effect.
+
+# Oh-My-Zsh Themes
+
+Let\'s change the theme of the terminal to make it a little more
+friendly.
+
+```sh
+open ~/.zshrc
+```
+
+The third section of this file should contain a line like the code
+below. Change that theme to [any theme you
+want](https://github.com/ohmyzsh/ohmyzsh/wiki/Themes), save the file,
+and exit.
+
+```sh
+ZSH_THEME="af-magic"
+```
+
+After changing the `.zshrc` file, you\'ll need to close your
+terminal and re-open it to see the changes. Optionally, just open a new
+tab if you\'re using iTerm2, and you\'ll see the new shell config.
+
+# Oh-My-Zsh Plugins
+
+Of course, my customization of `zsh` would not be complete
+without
+[zsh-autosuggestions](https://github.com/zsh-users/zsh-autosuggestions).
+This will bring up commands you\'ve run in the past as you type them.
+For example, if you\'ve run `ssh user@192.168.1.99` before,
+the terminal will show this command as soon as you start typing it (e.g.
+`ssh u`), and you can hit the right arrow to autocomplete the
+command.
+
+```sh
+git clone https://github.com/zsh-users/zsh-autosuggestions ${ZSH_CUSTOM:-~/.oh-my-zsh/custom}/plugins/zsh-autosuggestions
+```
+
+```sh
+open ~/.zshrc
+```
+
+```sh
+# Scroll down the script and edit this line to add zsh-autosuggestions
+plugins=(git zsh-autosuggestions)
+```
+
+Remember: After changing the `.zshrc` file, you\'ll need to
+close your terminal and re-open it to see the changes. Optionally, just
+open a new tab if you\'re using iTerm2, and you\'ll see the new shell
+config.
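
Putting the theme and plugin changes together, the relevant section of my `.zshrc` ends up looking roughly like the sketch below. This shows only the lines discussed in this post; the file that the Oh My Zsh installer generates contains many more commented options:

```sh
# Sketch of the .zshrc lines relevant to this post; other generated
# defaults are omitted.
export ZSH="$HOME/.oh-my-zsh"      # set by the Oh My Zsh installer

ZSH_THEME="af-magic"               # theme chosen in the Themes section

plugins=(git zsh-autosuggestions)  # enables the autosuggestions plugin

source $ZSH/oh-my-zsh.sh           # loads the theme and plugins
```

As before, open a new terminal or iTerm2 tab after saving for the changes to take effect.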
diff --git a/content/blog/2021-02-19-macos.org b/content/blog/2021-02-19-macos.org deleted file mode 100644 index 37aca9d..0000000 --- a/content/blog/2021-02-19-macos.org +++ /dev/null @@ -1,200 +0,0 @@ -#+title: macOS: Testing Out A New OS -#+date: 2021-02-19 -#+description: A retrospective on my migration from Linux to macOS. -#+filetags: :apple: - -* Diving into macOS -After spending nearly 15 years working with Windows and 8 years on -Linux, I have experienced macOS for the first time. By chance, my spouse -happened to buy a new MacBook and gifted me their 2013 model. Of course, -I still consider my Linux desktop to be my daily driver and keep Windows -around for gaming needs, but over the past week I've found myself using -the MacBook more and more for things that don't require gaming specs or -advanced dev tools. - -* Initial Thoughts -Before I move on to the technical aspects of my set-up, I want to take -some time and express my thoughts on the overall OS. - -#+caption: macOS Desktop -[[https://img.cleberg.net/blog/20210219-macos-testing-out-a-new-os/macos-desktop.png]] - -As expected, the initial computer setup is a breeze with Mac's guided -GUI installer. - -The desktop itself reminds me of GNOME more than anything else I've -seen: even Pantheon from [[https://elementary.io/][ElementaryOS]], which -people commonly refer to as the closest Linux distro to macOS. The -desktop toolbar is great and far surpasses the utility of the GNOME -toolbar due to the fact that the extensions and icons /actually work/. I -launch macOS and immediately see my shortcuts for Tresorit, Bitwarden, -and Mullvad pop up as the computer loads. - -Even further, the app dock is very useful and will be yet another -familiarity for GNOME users. I know many people like panels instead of -docks, but I've always found docks to have a more pleasing UI. 
However, -I had to disable the "Show recent applications in Dock" preference; I -can't stand items taking up precious screen space if I'm not currently -using them. On that same note, it's taking me some time to get use to -the fact that I have to manually quit an app or else it will still stay -open/active in the dock, even if I've closed out all windows for that -app (e.g. Firefox). - -Overall, I'm having a lot of fun and for users who spend a large -majority of their time performing basic tasks like web browsing, -writing, watching media, etc., macOS is a fantastic option. - -The rest of this post explains the technicalities of how I set up my CLI -environment to make me feel more at-home, similar to the environments I -set up on Fedora, Ubuntu, etc. - -* Making it Feel Like Home -If you're someone who uses Linux primarily, no doubt your first thought -when booting macOS will be the same as mine was: "Where is the terminal -and how do I set up my favorite utilities?" - -Luckily, macOS hasn't completely hidden away the development tools from -the average user. You can easily find the Terminal app in the Launchpad -area, but it's probably not what you're used to. I was surprised (and -happy) to see that the default shell is =zsh=, the shell I use on all of -my Linux distros. However, the commands are not the same - even the ones -you may think are native to the shell. Commands like =dir= do not exist, -so other native commands like =ls -la= or =pwd= are more useful here. - -With only a few minutes of installing and tweaking a few packages, I was -able to recreate a terminal environment that I feel very comfortable -using. See the image below for a preview of the iTerm2 app with a split -view between my macOS desktop shell and an SSH session into my server. 
- -#+caption: iTerm2 -[[https://img.cleberg.net/blog/20210219-macos-testing-out-a-new-os/iterm2.png]] - -* Xcode -My first step was to search the web for any hints on how to get =zsh= -back up to the state I like, with extensions, themes, etc. My first step -was to install the CLI tools for -[[https://developer.apple.com/xcode/][Xcode]], Apple's suite of -development tools. - -#+begin_src sh -sudo xcode-select -r -#+end_src - -#+begin_src sh -sudo xcode-select --install -#+end_src - -* Homebrew -Next up is to install [[https://brew.sh][Homebrew]], a nifty package -manager for macOS. - -#+begin_src sh -/bin/bash -c "$(curl -fsSL https://raw.githubusercontent.com/Homebrew/install/HEAD/install.sh)" -#+end_src - -I ran into a permission error when installing Homebrew: - -#+begin_src sh -Error: Failed to link all completions, docs and manpages: - Permission denied @ rb_file_s_symlink - (../../../Homebrew/completions/zsh/_brew, /usr/local/share/zsh/site-functions/_brew) -Failed during: /usr/local/bin/brew update --force --quiet -#+end_src - -I found that the following permission modification worked like a charm. -However, I noted that some users online discussed the fact that this -solution may not work if your system has multiple users who use -Homebrew. - -#+begin_src sh -sudo chown -R $(whoami) $(brew --prefix)/* -#+end_src - -Next up is to ensure Homebrew is updated and cleaned. - -#+begin_src sh -brew update -#+end_src - -#+begin_src sh -brew cleanup -#+end_src - -* iTerm2 -Now that I've installed the basic utilities for development, I moved -onto installing iTerm2, a much better terminal than the default. - -#+begin_src sh -brew install --cask iterm2 -#+end_src - -I also used the =Make iTerm2 Default Term= and -=Install Shell Integration= options in the iTerm2 application menu to -make sure I don't run into any issues later on with different terminals. - -We will also install =zsh= so we can use it in iTerm2. 
- -#+begin_src sh -brew install zsh -#+end_src - -* Oh-My-Zsh -I've shown the great aspects of [[https://ohmyz.sh][Oh My Zsh]] in other -blog posts, so I'll skip over that speech for now. Simply install it and -run an update. - -#+begin_src sh -sh -c "$(curl -fsSL https://raw.githubusercontent.com/robbyrussell/oh-my-zsh/master/tools/install.sh)" -#+end_src - -#+begin_src sh -omz update -#+end_src - -Finally, restart the iTerm2 application to ensure all changes go into -effect. - -* Oh-My-Zsh Themes -Let's change the theme of the terminal to make it a little more -friendly. - -#+begin_src sh -open ~/.zshrc -#+end_src - -The third section of this file should contain a line like the code -below. Change that theme to -[[https://github.com/ohmyzsh/ohmyzsh/wiki/Themes][any theme you want]], -save the file, and exit. - -#+begin_src sh -ZSH_THEME="af-magic" -#+end_src - -After changing the =.zshrc= file, you'll need to close your terminal and -re-open it to see the changes. Optionally, just open a new tab if you're -using iTerm2, and you'll see the new shell config. - -* Oh-My-Zsh Plugins -Of course, my customization of =zsh= would not be complete without -[[https://github.com/zsh-users/zsh-autosuggestions][zsh-autosuggestions]]. -This will bring up commands you've run in the past as you type them. For -example, if you've run =ssh user@192.168.1.99= before, the terminal will -show this command as soon as you start typing it (e.g. =zsh u=), and you -can hit the right arrow to autocomplete the command. - -#+begin_src sh -git clone https://github.com/zsh-users/zsh-autosuggestions ${ZSH_CUSTOM:-~/.oh-my-zsh/custom}/plugins/zsh-autosuggestions -#+end_src - -#+begin_src sh -open ~/.zshrc -#+end_src - -#+begin_src sh -# Scroll down the script and edit this line to add zsh-autosuggestions -plugins=(git zsh-autosuggestions) -#+end_src - -Remember: After changing the =.zshrc= file, you'll need to close your -terminal and re-open it to see the changes. 
Optionally, just open a new
-tab if you're using iTerm2, and you'll see the new shell config.
diff --git a/content/blog/2021-03-19-clone-github-repos.md b/content/blog/2021-03-19-clone-github-repos.md
new file mode 100644
index 0000000..f5db436
--- /dev/null
+++ b/content/blog/2021-03-19-clone-github-repos.md
@@ -0,0 +1,156 @@
++++
+date = 2021-03-19
+title = "How to Clone All Repositories from a GitHub or Sourcehut Account"
+description = ""
+draft = false
++++
+
+# Cloning from GitHub
+
+If you\'re like me and use a lot of different devices (and sometimes
+decide to just wipe your device and start with a new OS), you probably
+know the pain of cloning all your old code repositories down to your
+local file system.
+
+If you\'re using GitHub, you can easily clone all of your code back down
+in just seconds. First, create a bash script. I do so by opening a new
+file in `nano`, but you can use `gedit`,
+`vim`, or something else:
+
+```sh
+nano clone_github_repos.sh
+```
+
+Next, paste in the following information. Note that you can replace the
+word `users` in the first line with `orgs` and
+type an organization\'s name instead of a user\'s name.
+
+```sh
+CNTX=users; NAME=YOUR-USERNAME; PAGE=1
+curl "https://api.github.com/$CNTX/$NAME/repos?page=$PAGE&per_page=100" |
+    grep -e 'git_url*' |
+    cut -d \" -f 4 |
+    xargs -L1 git clone
+```
+
+Finally, save the bash script and make it executable.
+
+```sh
+chmod a+x clone_github_repos.sh
+```
+
+Now you can run the script and should see the cloning process begin.
+
+```sh
+./clone_github_repos.sh
+```
+
+# Cloning from Sourcehut
+
+I haven\'t fully figured out how to directly incorporate Sourcehut\'s
+GraphQL API into a bash script yet, so this one will take two steps.
+
+First, log in to Sourcehut and go to their [GraphQL playground for
+Git](https://git.sr.ht/graphql).
Next, paste the following query into
+the left box:
+
+```sh
+query {
+    me {
+        canonicalName
+        repositories {
+            cursor
+            results {
+                name
+            }
+        }
+    }
+}
+```
+
+The output on the right side will give you an object of all your
+repositories. Just grab that text and remove all the characters such as
+quotation marks and curly brackets. You will need a single-line list of
+space-separated values for the next step.
+
+Now let\'s create the bash script:
+
+```sh
+nano clone_sourcehut_repos.sh
+```
+
+Next, paste the following bash script in with the list of repositories
+you obtained above and replace `your-username` with your
+username.
+
+Note that this uses the SSH-based Git cloning method (e.g.
+`git@git...`), so you\'ll need to ensure you have set up
+Sourcehut with your SSH key.
+
+```sh
+repos=(repo1 repo2 repo3)
+
+# Loop through each repository in the list
+for repo in "${repos[@]}"
+do
+    # Clone
+    git clone git@git.sr.ht:~your-username/$repo
+done
+```
+
+Finally, save the bash script and make it executable.
+
+```sh
+chmod a+x clone_sourcehut_repos.sh
+```
+
+Now you can run the script and should see the cloning process begin.
+
+```sh
+./clone_sourcehut_repos.sh
+```
+
+# Moving Repositories to a New Host
+
+Now that you have all of your code repositories cloned to your local
+computer, you may want to change the remote host (e.g., moving from
+GitHub to GitLab). To do this, let\'s create another bash script:
+
+```sh
+nano change_remote_urls.sh
+```
+
+Paste the following information and be sure to change the URL information
+to whichever host you are moving to. For this example, I am looping
+through all of my cloned GitHub directories and changing them to
+Sourcehut (e.g. `` -\>
+`git@git.sr.ht:~myusername`).
+
+```sh
+# List all sub-directories in the current directory
+for dir in */
+do
+    # Remove the trailing "/"
+    dir=${dir%*/}
+    # Enter sub-directory
+    cd $dir
+    # Change remote Git URL
+    git remote set-url origin /"${dir##*/}"
+    # Push code to new remote
+    git push
+    # Go back to main directory
+    cd ..
+done
+```
+
+Finally, save the bash script and make it executable.
+
+```sh
+chmod a+x change_remote_urls.sh
+```
+
+Now you can run the script and should see each remote URL update and the
+repositories push to their new host.
+
+```sh
+./change_remote_urls.sh
+```
diff --git a/content/blog/2021-03-19-clone-github-repos.org b/content/blog/2021-03-19-clone-github-repos.org
deleted file mode 100644
index 3814e9f..0000000
--- a/content/blog/2021-03-19-clone-github-repos.org
+++ /dev/null
@@ -1,148 +0,0 @@
-#+title: How to Clone All Repositories from a GitHub or Sourcehut Account
-#+date: 2021-03-19
-#+description: Learn how to clone all GitHub or Sourcehut repositories.
-#+filetags: :dev:
-
-* Cloning from GitHub
-If you're like me and use a lot of different devices (and sometimes
-decide to just wipe your device and start with a new OS), you probably
-know the pain of cloning all your old code repositories down to your
-local file system.
-
-If you're using GitHub, you can easily clone all of your code back down
-in just seconds. First, create a bash script. I do so by opening a new
-file in =nano=, but you can use =gedit=, =vim=, or something else:
-
-#+begin_src sh
-nano clone_github_repos.sh
-#+end_src
-
-Next, paste in the following information. Note that you can replace the
-word =users= in the first line with =orgs= and type an organization's
-name instead of a user's name.
-
-#+begin_src sh
-CNTX=users; NAME=YOUR-USERNAME; PAGE=1
-curl "https://api.github.com/$CNTX/$NAME/repos?page=$PAGE&per_page=100" |
-    grep -e 'git_url*' |
-    cut -d \" -f 4 |
-    xargs -L1 git clone
-#+end_src
-
-Finally, save the bash script and make it executable.
- -#+begin_src sh -chmod a+x clone_github_repos.sh -#+end_src - -Now you can run the script and should see the cloning process begin. - -#+begin_src sh -./clone_github_repos.sh -#+end_src - -* Cloning from Sourcehut -I haven't fully figured out how to directly incorporate Sourcehut's -GraphQL API into a bash script yet, so this one will take two steps. - -First, log-in to Sourcehut and go to their -[[https://git.sr.ht/graphql][GraphQL playground for Git]]. Next, paste -the following query into the left box: - -#+begin_src sh -query { - me { - canonicalName - repositories() { - cursor - results { - name - } - } - } -} -#+end_src - -The output on the right side will give you an object of all your -repositories. Just grab that text and remove all the characters such as -quotation marks and curly brackets. You will need a single-line list of -space-separated values for the next step. - -Now let's create the bash script: - -#+begin_src sh -nano clone_sourcehut_repos.sh -#+end_src - -Next, paste the following bash script in with the list of repositories -you obtained above and replace =your-username= with your username. - -Note that this uses the SSH-based Git cloning method -(e.g. =git@git...=), so you'll need to ensure you have set up Sourcehut -with your SSH key. - -#+begin_src sh -repos=(repo1 repo2 repo3) - -# List all sub-directories in the current directory -for repo in "${repos[@]}" -do - # Clone - git clone git@git.sr.ht:~your-username/$repo -done -#+end_src - -Finally, save the bash script and make it executable. - -#+begin_src sh -chmod a+x clone_sourcehut_repos.sh -#+end_src - -Now you can run the script and should see the cloning process begin. - -#+begin_src sh -./clone_sourcehut_repos.sh -#+end_src - -* Moving Repositories to a New Host -Now that you have all of your code repositories cloned to your local -computer, you may want to change the remote host (e.g., moving from -GitHub to GitLab). 
To do this, let's create another bash script:
-
-#+begin_src sh
-nano change_remote_urls.sh
-#+end_src
-
-Past the following information and be sure to change the URL information
-to whichever host you are moving to. For this example, I am looping
-through all of my cloned GitHub directories and changing them to
-Sourcehut (e.g. == -> =git@git.sr.ht:~myusername=).
-
-#+begin_src sh
-# List all sub-directories in the current directory
-for dir in */
-do
-    # Remove the trailing "/"
-    dir=${dir%*/}
-    # Enter sub-directory
-    cd $dir
-    # Change remote Git URL
-    git remote set-url origin /"${dir##*/}"
-    # Push code to new remote
-    git push
-    # Go back to main directory
-    cd ..
-done
-#+end_src
-
-Finally, save the bash script and make it executable.
-
-#+begin_src sh
-chmod a+x change_remote_urls.sh
-#+end_src
-
-Now you can run the script and should see the cloning process begin.
-
-#+begin_src sh
-./change_remote_urls.sh
-#+end_src
diff --git a/content/blog/2021-03-28-gemini-capsule.md b/content/blog/2021-03-28-gemini-capsule.md
new file mode 100644
index 0000000..0fb2a2c
--- /dev/null
+++ b/content/blog/2021-03-28-gemini-capsule.md
@@ -0,0 +1,186 @@
++++
+date = 2021-03-28
+title = "Launching a Gemini Capsule"
+description = ""
+draft = false
++++
+
+# What is Gemini?
+
+[Gemini](https://gemini.circumlunar.space/) is an internet protocol
+introduced in June 2019 as an alternative to HTTP(S) or Gopher. In
+layman\'s terms, it\'s an alternative way to browse sites (called
+capsules) that requires a special browser. Since Gemini is not an
+internet standard, normal web browsers won\'t be able
+to load a Gemini capsule. Instead, you\'ll need to use [a
+Gemini-specific
+browser](https://gemini.circumlunar.space/clients.html).
+
+The content found within a Gemini page is called
+[Gemtext](https://gemini.circumlunar.space/docs/cheatsheet.gmi) and is
+*extremely* basic (on purpose). Gemini only processes the text, no media
+content like images.
However, you\'re able to style 3 levels of
+headings, regular text, links (which will display on their own line),
+quotes, and an unordered list.
+
+Here\'s a complete listing of valid Gemtext:
+
+```` txt
+# Heading 1
+## Heading 2
+### Heading 3
+
+Regular text! Lorem ipsum dolor sit amet.
+
+=> https://example.com My Website
+=> gemini://example.com My Gemini Capsule
+
+> "If life were predictable it would cease to be life, and be without flavor." - Eleanor Roosevelt
+
+My List:
+* Item
+* Item
+
+```Anything between three backticks will be rendered as code.```
+````
+
+# Free Option
+
+There are probably numerous websites that allow you to create your
+personal Gemini capsule, but I\'m going to focus on the two sites that I
+have personally tested. The first option below, Midnight Pub, allows you
+to create/edit any Gemini files you want in your account. This is
+essentially a GUI option with a built-in text box for editing. The
+second option below, Sourcehut, allows you to use a Git repository and
+automatic build process to deploy your personal Gemini capsule every
+time you push a commit.
+
+## Midnight Pub - Beginner Friendly
+
+[Midnight Pub](https://midnight.pub/) is a small, virtual community
+meant to reflect the atmosphere of wandering into a small alley pub. The
+site is built in Gemtext and has a server-side process to convert
+Gemtext to HTML if someone loads the site in an HTTP(S) browser.
+
+To create an account, you\'ll need to email the owner of the website to
+obtain a key. You can find their email on the Midnight Pub homepage.
+Once registered, head to [your account](https://midnight.pub/account)
+and select [manage site](https://midnight.pub/site). This is the screen
+where you can upload or create any files to be displayed on the
+internet.
+
+For example, I\'ve created both an HTML file and a Gemini file. Remember
+that Gemini is automatically converted to HTML on the Pub, so you don\'t
+need an HTML version.
For example, I created an HTML version to add in
+some extra styling.
+
+All you need to do is create a page like `index.gmi` and use
+your Gemini browser to head over to your-username.midnight.pub to see
+the result.
+
+That\'s all there is to it! Easy enough, right? Let\'s check out a more
+advanced version in the next section.
+
+# Paid Option
+
+As of 2021, Sourcehut has decided to require users to have a paid
+account in order to utilize their automated build system. For now, paid
+accounts can be as low as \$2/month.
+
+## Sourcehut
+
+[Sourcehut](https://sourcehut.org/) is a collection of software
+development tools, but mostly centers on their hosted Git repository
+service. Simply put, it\'s a minimal and more private alternative to
+services like GitHub.
+
+This walkthrough is more advanced and involves things like Git, SSH,
+and the command line. If you don\'t think you know enough to do this,
+check out my walkthrough on creating a Gemini capsule for the Midnight
+Pub instead.
+
+The first thing you\'ll need to do is create an SSH key pair, if you
+don\'t already have one on your system. Once created, grab the contents
+of `id_rsa.pub` and add it to your Sourcehut account
+settings - this will allow you to push and pull code changes without
+using a username/password.
+
+```sh
+ssh-keygen
+```
+
+Next up, let\'s create a repository with the proper name so that the
+Sourcehut build system will know we want them to host a website for us.
+Use the following format exactly:
+
+```sh
+mkdir your-username.srht.site && cd your-username.srht.site
+```
+
+Now that we\'ve created the repo, let\'s initialize Git and add the
+proper remote URL.
+ +```sh +git init +``` + +```sh +git remote add origin git@git.sr.ht:~your-username/your-username.srht.site +``` + +Now that our repository is set up and configured, we will need to create +at least two files: + +- `index.gmi` +- `.build.yml` + +For your `.build.yml` file, use the following content and be +sure to update the `site` line with your username! + +``` yaml +image: alpine/latest +oauth: pages.sr.ht/PAGES:RW +environment: + site: your-username.srht.site +tasks: + - package: | + cd $site + tar -cvz . > ../site.tar.gz + - upload: | + acurl -f https://pages.sr.ht/publish/$site -Fcontent=@site.tar.gz -Fprotocol=GEMINI +``` + +For the `index.gmi` file, put whatever you want in there and +save it. You could even just copy and paste the Gemtext cheatsheet. + +If you want to serve both HTML and Gemini files from this repository, +just add a second command to the `upload` section: + +``` yaml +- upload: | + acurl -f https://pages.sr.ht/publish/$site -Fcontent=@site.tar.gz -Fprotocol=GEMINI + acurl -f https://pages.sr.ht/publish/$site -Fcontent=@site.tar.gz +``` + +Lastly, commit your changes and push them to the remote repo. + +```sh +git add .; git commit -m "initial commit"; git push --set-upstream origin HEAD +``` + +If you\'ve successfully created the files with the proper format, +you\'ll see the terminal print a message that lets you know where the +automatic build is taking place. For example, here\'s what the terminal +tells me: + +```sh +remote: Build started: +remote: https://builds.sr.ht/~user/job/689803 [.build.yml] +``` + +Now that you\'ve properly built your Sourcehut page, you can browse to +your-username.srht.site in a Gemini browser and view the final results. +Take a look at the image below for my Sourcehut Gemini capsule. 
+ +![Gemini page on the amfora +browser](https://img.cleberg.net/blog/20210328-launching-a-gemini-capsule/amfora.png) diff --git a/content/blog/2021-03-28-gemini-capsule.org b/content/blog/2021-03-28-gemini-capsule.org deleted file mode 100644 index 69fd8f2..0000000 --- a/content/blog/2021-03-28-gemini-capsule.org +++ /dev/null @@ -1,177 +0,0 @@ -#+title: Launching a Gemini Capsule -#+date: 2021-03-28 -#+description: A guide to self-hosting a Gemini capsule on your own server. -#+filetags: :dev: - -* What is Gemini? -[[https://gemini.circumlunar.space/][Gemini]] is an internet protocol -introduced in June 2019 as an alternative to HTTP(S) or Gopher. In -layman's terms, it's an alternative way to browse sites (called -capsules) that requires a special browser. Since Gemini is not -standardized as an internet standard, normal web browsers won't be able -to load a Gemini capsule. Instead, you'll need to use -[[https://gemini.%20circumlunar.space/clients.html][a Gemini-specific -browser]]. - -The content found within a Gemini page is called -[[https://gemini.circumlunar.space/docs/cheatsheet.gmi][Gemtext]] and is -/extremely/ basic (on purpose). Gemini only processes the text, no media -content like images. However, you're able to style 3 levels of headings, -regular text, links (which will display on their own line), quotes, and -an unordered list. - -Here's a complete listing of valid Gemtext: - -#+begin_src txt -# Heading 1 -## Heading 2 -### Heading 3 - -Regular text! Lorem ipsum dolor sit amet. - -=> https://example.com My Website -=> gemini://example.com My Gemini Capsule - -> "If life were predictable it would cease to be life, and be without flavor." - Eleanor Roosevelt - -My List: -,** Item -,** Item - -```Anything between three backticks will be rendered as code.``` -#+end_src - -*** Free Option -There are probably numerous websites that allow you to create your -personal Gemini capsule, but I'm going to focus on the two sites that I -have personally tested. 
The first option below, Midnight Pub, allows you -to create/edit any Gemini files you want in your account. This is -essentially a GUI option with a built-in text box for editing. The -second option below, Sourcehut, allows you to use a Git repository and -automatic build process to deploy your personal Gemini capsule every -time you push a commit. - -** Midnight Pub - Beginner Friendly -[[https://midnight.pub/][Midnight Pub]] is a small, virtual community -meant to reflect the atmosphere of wandering into a small alley pub. The -site is built in Gemtext and has a server-side process to convert -Gemtext to HTML if someone loads the site in an HTTP(S) browser. - -To create an account, you'll need to email the owner of the website to -obtain a key. You can find their email on the Midnight Pub homepage. -Once registered, head to [[https://midnight.pub/account][your account]] -and select [[https://midnight.pub/site][manage site]]. This is the -screen where you can upload or create any files to be displayed on the -internet. - -For example, I've created both an HTML file and a Gemini file. Remember -that Gemini is automatically converted to HTML on the Pub, so you don't -need an HTML version. For example, I created an HTML version to add in -some extra styling. - -All you need to do is create a page like =index.gmi= and use your Gemini -browser to head over to your-username.midnight.pub to see the result. - -That's all there is to it! Easy enough, right? Let's check out a more -advanced version in the next section. - -* Paid Option -As of 2021, Sourcehut has decided to require users to have a paid -account in order to utilize their automated build system. For now, paid -accounts can be as low as $2/month. - -** Sourcehut -[[https://sourcehut.org/][Sourcehut]] is a collection of software -development tools, but mostly surrounds their hosted Git repository -service. Simply put, it's a minimal and more private alternative to -services like GitHub. 
- -This walkthrough is more advanced and involves things like Git, SSH, the -command line. If you don't think you know enough to do this, check out -my walkthrough on creating a Gemini capsule for the Midnight Pub -instead. - -The first thing you'll need to do is create an SSH key pair, if you -don't already have one on your system. Once created, grab the contents -of =id_rsa.pub= and add it to your Sourcehut account settings - this -will allow you to push and pull code changes without using a -username/password. - -#+begin_src sh -ssh keygen -#+end_src - -Next up, let's create a repository with the proper name so that the -Sourcehut build system will know we want them to host a website for us. -Use the following format exactly: - -#+begin_src sh -mkdir your-username.srht.site && cd your-username.srht.site -#+end_src - -Now that we've created the repo, let's initialize Git and add the proper -remote URL. - -#+begin_src sh -git init -#+end_src - -#+begin_src sh -git remote add origin git@git.sr.ht:~your-username/your-username.srht.site -#+end_src - -Now that our repository is set up and configured, we will need to create -at least two files: - -- =index.gmi= -- =.build.yml= - -For your =.build.yml= file, use the following content and be sure to -update the =site= line with your username! - -#+begin_src yaml -image: alpine/latest -oauth: pages.sr.ht/PAGES:RW -environment: - site: your-username.srht.site -tasks: - - package: | - cd $site - tar -cvz . > ../site.tar.gz - - upload: | - acurl -f https://pages.sr.ht/publish/$site -Fcontent=@site.tar.gz -Fprotocol=GEMINI -#+end_src - -For the =index.gmi= file, put whatever you want in there and save it. -You could even just copy and paste the Gemtext cheatsheet. 
- -If you want to serve both HTML and Gemini files from this repository, -just add a second command to the =upload= section: - -#+begin_src yaml -- upload: | - acurl -f https://pages.sr.ht/publish/$site -Fcontent=@site.tar.gz -Fprotocol=GEMINI - acurl -f https://pages.sr.ht/publish/$site -Fcontent=@site.tar.gz -#+end_src - -Lastly, commit your changes and push them to the remote repo. - -#+begin_src sh -git add .; git commit -m "initial commit"; git push --set-upstream origin HEAD -#+end_src - -If you've successfully created the files with the proper format, you'll -see the terminal print a message that lets you know where the automatic -build is taking place. For example, here's what the terminal tells me: - -#+begin_src sh -remote: Build started: -remote: https://builds.sr.ht/~user/job/689803 [.build.yml] -#+end_src - -Now that you've properly built your Sourcehut page, you can browse to -your-username.srht.site in a Gemini browser and view the final results. -Take a look at the image below for my Sourcehut Gemini capsule. - -#+caption: Gemini page on the amfora browser -[[https://img.cleberg.net/blog/20210328-launching-a-gemini-capsule/amfora.png]] diff --git a/content/blog/2021-03-28-vaporwave-vs-outrun.md b/content/blog/2021-03-28-vaporwave-vs-outrun.md new file mode 100644 index 0000000..1ac49e9 --- /dev/null +++ b/content/blog/2021-03-28-vaporwave-vs-outrun.md @@ -0,0 +1,133 @@ ++++ +date = 2021-03-28 +title = "Vaporware vs. Outrun" +description = "" +draft = false ++++ + +# Overview + +When it comes to an aesthetic that originated primarily online, there +tends to be a lot of confusion around what falls into each specific +genre. This post discusses Vaporwave and Outrun, which have become +almost synonymous in the minds of those online who aren\'t hardcore into +these genres of media. More specifically, Outrun is largely an unknown +term while all aesthetics in these two genres are usually attributed to +Vaporwave. 
For example, take a look at the image search results for
+Vaporwave: the results include **a lot** of Outrun-themed images.
+You\'ll find a similar trend almost everywhere.
+
+![Vaporwave Search
+Results](https://img.cleberg.net/blog/20210328-vaporwave-vs-outrun/vaporwave-search-results.png)
+
+# Vaporwave
+
+![Macintosh Plus Album
+Cover](https://img.cleberg.net/blog/20210328-vaporwave-vs-outrun/macintosh-plus.png)
+
+Okay, so what is Vaporwave? I\'m going to just copy-and-paste some
+general info from the Wikipedia article on
+[Vaporwave](https://en.wikipedia.org/wiki/Vaporwave), so that I\'m not
+repeating everything you can already search for online:
+
+> Vaporwave is a microgenre of electronic music, a visual art style, and
+> an Internet meme that emerged in the early 2010s. It is defined partly
+> by its slowed-down, chopped and screwed samples of smooth jazz,
+> elevator, R&B, and lounge music from the 1980s and 1990s. The
+> surrounding subculture is sometimes associated with an ambiguous or
+> satirical take on consumer capitalism and pop culture, and tends to be
+> characterized by a nostalgic or surrealist engagement with the popular
+> entertainment, technology and advertising of previous decades.
+> Visually, it incorporates early Internet imagery, late 1990s web
+> design, glitch art, anime, 3D-rendered objects, and cyberpunk tropes
+> in its cover artwork and music videos.
+
+This is an excellent summary, and it helps address my point here: there
+are specific aspects that make Vaporwave unique:
+
+## Time Frame
+
+The time frame for references, logos, etc. focuses mostly on the 1990s
+in Vaporwave. You\'ll see old school Pepsi logos, Windows 95 screens,
+tropical plants, classic marble sculptures, and many references from
+Japan\'s influence in the 90s.
+
+## Art
+
+The color scheme is generally a soft, light palette that uses pastel
+colors. The backdrop will often be in a light-pink or teal blue.
+
+## Music
+
+The musical genre of Vaporwave incorporates soft tunes, lounge music,
+and sound effects that will make the listener reminisce about the 90s.
+The sounds of the genre are generally slower-paced and calming. The
+major breakthrough artist for Vaporwave was [Macintosh
+Plus](https://en.wikipedia.org/wiki/Vektroid), who released the album
+[Floral
+Shoppe](https://archive.org/details/MACINTOSHPLUS-FLORALSHOPPE_complete)
+in 2011. Another more recent example is the artist
+[sadbert](https://sadbert.bandcamp.com/), whose latest album
+incorporates the upbeat tones of the 1999 Dilbert TV series.
+
+Notice that Vaporwave doesn\'t include things like racing cars,
+futuristic technology, chrome, or the deep orange/purple color scheme.
+Vaporwave is a focus on the idyllic state of the world when technology
+was becoming common in households, a reality that we have already
+experienced. A focus on the most aesthetically pleasing parts of that
+past is a large part of Vaporwave.
+
+# Outrun
+
+![Outrun](https://img.cleberg.net/blog/20210328-vaporwave-vs-outrun/outrun.png)
+
+Now, let\'s get to Outrun. This one is a little trickier since the genre
+has largely been lumped under the Vaporwave title for so long. However,
+it stems from the [Synthwave](https://en.wikipedia.org/wiki/Synthwave)
+music genre and is likely named after the 1986 racing game, [Out
+Run](https://en.wikipedia.org/wiki/Out_Run).
+
+## Time Frame
+
+Outrun can be thought of as a retro-futuristic aesthetic born from the
+1980s.
+
+## Art
+
+The color scheme uses a very dark color palette with the main colors
+being deep oranges, blues, and purples. Red edges are common around
+objects in Outrun art. The background of the Outrun aesthetic is almost
+always a neon grid like you\'d expect to see in Tron or a 1980s arcade
+machine.
+
+Classic sports cars, chrome robots, computer-generated graphics and
+fonts, and the occasional use of rain or palm trees can be found in
+Outrun art.
+
+## Music
+
+This aesthetic has a more aggressive and fast-paced style of music,
+which tends to match the subject of the art in this aesthetic.
+
+Outrun enthusiasts love what people in the 1980s thought the future
+would look like. Take a look at a common video game discussed in Outrun
+circles, Far Cry 3: Blood Dragon:
+
+![Far Cry 3: Blood
+Dragon](https://img.cleberg.net/blog/20210328-vaporwave-vs-outrun/far-cry.png)
+
+Another example that doesn\'t force the color scheme as hard as some
+online art does is Kung Fury:
+
+![Kung
+Fury](https://img.cleberg.net/blog/20210328-vaporwave-vs-outrun/kung-fury.png)
+
+![Kung Fury Hacker
+Scene](https://img.cleberg.net/blog/20210328-vaporwave-vs-outrun/kung-fury-hacker.png)
+
+# Conclusion
+
+While Vaporwave and Outrun share similarities, they are two distinct
+aesthetics with many important differences. Someone who enjoys one may
+not necessarily enjoy the other, so it\'s important to make sure we
+properly describe the aesthetic we\'re looking for.
diff --git a/content/blog/2021-03-28-vaporwave-vs-outrun.org b/content/blog/2021-03-28-vaporwave-vs-outrun.org
deleted file mode 100644
index e268a3d..0000000
--- a/content/blog/2021-03-28-vaporwave-vs-outrun.org
+++ /dev/null
@@ -1,124 +0,0 @@
-#+title: Vaporwave vs Outrun
-#+date: 2021-03-28
-#+description: Learn the differences between the vaporwave and Outrun aesthetics.
-#+filetags: :personal:
-
-* Overview
-When it comes to an aesthetic that originated primarily online, there
-tends to be a lot of confusion around what falls into each specific
-genre. This post discusses Vaporwave and Outrun, which have become
-almost synonymous in the minds of those online who aren't hardcore into
-these genres of media. More specifically, Outrun is largely an unknown
-term while all aesthetics in these two genres are usually attributed to
-Vaporwave. For example, take a look at the image search results for
-Vaporwave: the results include *a lot** of Outrun-themed images.
You'll -find a similar trend almost everywhere. - -#+caption: Vaporwave Search Results -[[https://img.cleberg.net/blog/20210328-vaporwave-vs-outrun/vaporwave-search-results.png]] - -* Vaporwave -#+caption: Macintosh Plus Album Cover -[[https://img.cleberg.net/blog/20210328-vaporwave-vs-outrun/macintosh-plus.png]] - -Okay, so what is Vaporwave? I'm going to just copy-and-paste some -general info from the Wikipedia article on -[[https://en.wikipedia.org/wiki/Vaporwave][Vaporwave]], so that I'm not -repeating everything you can already search for online: - -#+begin_quote -Vaporwave is a microgenre of electronic music, a visual art style, and -an Internet meme that emerged in the early 2010s. It is defined partly -by its slowed-down, chopped and screwed samples of smooth jazz, -elevator, R&B, and lounge music from the 1980s and 1990s. The -surrounding subculture is sometimes associated with an ambiguous or -satirical take on consumer capitalism and pop culture, and tends to be -characterized by a nostalgic or surrealist engagement with the popular -entertainment, technology and advertising of previous decades. Visually, -it incorporates early Internet imagery, late 1990s web design, glitch -art, anime, 3D-rendered objects, and cyberpunk tropes in its cover -artwork and music videos. -#+end_quote - -This is an excellent summary, and it helps address my point here: there -are specific aspects that make Vaporwave unique: - -** Time Frame -The time frame for references, logos, etc. focuses mostly on the 1990s -in Vaporwave. You'll see old school Pepsi logos, Microsoft 95 screens, -tropical plants, classic marble sculptures, and many references from -Japan's influence in the 90s. - -** Art -The color scheme is generally a soft, light palette that uses pastel -colors. The backdrop will often be in a light-pink or teal blue. - -** Music -The musical genre of Vaporwave incorporates soft tunes, lounge music, -and sound effects that will make the listener reminisce of the 90s. 
The -sounds of the genre are generally slower-paced and calming. The major -breakthrough artist for Vaporwave was -[[https://en.wikipedia.org/wiki/Vektroid][Macintosh Plus]], who released -the album -[[https://archive.org/details/MACINTOSHPLUS-FLORALSHOPPE_complete][Floral -Shoppe]] in 2011. Another more recent example is the artist -[[https://sadbert.bandcamp.com/][sadbert]], whose latest album -incorporates the upbeat tones of the 1999 Dilbert TV series. - -Notice that Vaporwave doesn't include things like racing cars, -futuristic technology, chrome, or the deep orange/purple color scheme. -Vaporwave is a focus on the idyllic state of the world when technology -was becoming common in households, a reality that we have already -experienced. Focus on the most aesthetically-pleasing parts of that past -is a large part of Vaporwave. - -* Outrun -#+caption: Outrun -[[https://img.cleberg.net/blog/20210328-vaporwave-vs-outrun/outrun.png]] - -Now, let's get to Outrun. This one is a little trickier since the genre -has largely been lumped under the Vaporwave title for so long. However, -it stems from the [[https://en.wikipedia.org/wiki/Synthwave][Synthwave]] -music genre and is likely named after the 1986 racer game, -[[https://en.wikipedia.org/wiki/Out_Run][Out Run]]. - -** Time Frame -Outrun can be thought of as a retro-futuristic aesthetic born from the -1980s. - -** Art -The color scheme uses a very dark color palette with the main colors -being deep oranges, blues, and purples. Red edges are common around -objects in Outrun art. The background of the Outrun aesthetic is almost -always a neon grid like you'd expect to see in Tron or a 1980s arcade -machine. - -Classic sports cars, chrome robots, computer generated graphics and -fonts, and the occasional use of rain or palm trees can be found in -Outrun art. - -** Music -This aesthetic has a more aggressive and fast-paced style of music, -which tends to match the subject of the art in this aesthetic. 
- -Outrun enthusiasts love what people in the 1980s thought the future -would look like. Take a look at a common video game discussed in Outrun -circles, Far Cry 3: Blood Dragon: - -#+caption: Fry Cry 3: Blood Dragon -[[https://img.cleberg.net/blog/20210328-vaporwave-vs-outrun/far-cry.png]] - -Another example that doesn't force the color scheme as hard as some -online art does is Kung Fury: - -#+caption: Kung Fury -[[https://img.cleberg.net/blog/20210328-vaporwave-vs-outrun/kung-fury.png]] - -#+caption: Kung Fury Hacker Scene -[[https://img.cleberg.net/blog/20210328-vaporwave-vs-outrun/kung-fury-hacker.png]] - -* Conclusion -While Vaporwave and Outrun share similarities, they are two distinct -aesthetics with many important distinctions. Someone who enjoys one may -not necessarily enjoy the other, so it's important to make sure we -properly describe the aesthetic we're looking for. diff --git a/content/blog/2021-03-30-vps-web-server.md b/content/blog/2021-03-30-vps-web-server.md new file mode 100644 index 0000000..f3a29bf --- /dev/null +++ b/content/blog/2021-03-30-vps-web-server.md @@ -0,0 +1,418 @@ ++++ +date = 2021-03-30 +title = "How to Set Up a VPS Web Server" +description = "" +draft = false ++++ + +# Shared Hosting vs. VPS + +Choosing a place to host a website is one of the most confusing +decisions for beginner web developers. Even for experienced web devs, +choosing between different forms of web hosting can be a daunting +choice. + +First, let\'s take a look at [shared web +hosting](https://en.wikipedia.org/wiki/Shared_web_hosting_service). +Shared web hosting is a product where you are purchasing a small piece +of a web server that is being shared between many websites. As a result, +the cost of shared hosting is extremely low. You won\'t have access to +the server itself, which means you can\'t install your own software on +the server, such as Docker. 
Usually, you are simply allowed to connect
+your domains to the server, set up domain security, and use other small
+utilities.
+
+In contrast, a [virtual private
+server](https://en.wikipedia.org/wiki/Virtual_private_server) (VPS) is a
+virtual machine that replicates the environment of having a dedicated
+server to yourself. You are able to control the virtual server\'s host
+names, base file system, package manager, etc. Another great upside of a
+VPS is that since it\'s virtual, the company providing the VPS can
+dynamically increase the disk size, RAM size, or number of CPUs at any
+time. However, the virtual server is still physically located on a
+server that is shared between multiple virtual servers.
+
+The choice between shared hosting and VPS mostly depends on your skill
+level with system administration. If you\'re comfortable working on a
+server that is mostly left up to you (or you\'re willing to learn), then
+a VPS is usually a better option. However, shared hosting is a fantastic
+option for people who don\'t want to have to learn how to manage their
+server.
+
+# My Situation
+
+I had used shared hosting for approximately 5 years before trying my
+first VPS. I manage a homelab and have had success running a server and
+performing typical sysadmin duties, but I was still hesitant to get a
+VPS. One fear was that I always struggled to properly set up the
+networking part of a server - DNS and hostname configurations were not
+my friend.
+
+As a little bit of background, I originally used
+[Siteground](https://www.siteground.com) for my initial shared hosting
+and stayed on that platform for at least a year. However, the UI was
+clunky, and I didn\'t like how they handled certain technical aspects,
+so I switched to [Namecheap](https://www.namecheap.com). Namecheap was
+great because it is the service I primarily use for purchasing domain
+names, which made it incredibly easy to link them to my hosting service.
+However, it was still mediocre shared hosting, and Namecheap is +notorious for not letting you use [Let\'s +Encrypt](https://letsencrypt.org) to obtain free SSL/TLS certificates; +Namecheap wants to make you purchase certificates through their store. + +Finally, I settled down with [iWebFusion](https://www.iwebfusion.net) +for about the last year of my shared hosting. This service was pretty +great, came with free SSL/TLS, and I never had any complaints. + +However, I finally grew tired of not being able to install software on +my own web server. I wanted to be able to try out things like +[Postmill](https://postmill.xyz) or [Matrix](https://matrix.org). This +is possible with a VPS, so I decided to grab a new domain name to try it +out. + +# Getting Started: Buying a VPS + +The first step to moving over to a VPS is (you guessed it): finding a +VPS provider. For my VPSs, I use [1984](https://1984hosting.com) and +prefer their services much more than any alternative, due to their +location (Iceland), their [privacy +policy](https://1984hosting.com/GDPR/), their respect for GDPR, and the +ability to remain anonymous if you pay in Bitcoin or Monero. + +[Njalla](https://njal.la) is another good, privacy-oriented option for +VPS services. + +You\'ll have to decide what specifications you want on your VPS. For me, +I only build and deploy low-resource HTML, PHP, and Python websites. +This means I can survive on the smallest VPS: 1 CPU, 1GB of RAM, and +25GB SSD for \$5.00 per month. + +As noted above, the great thing about a VPS is you can request your +provider to increase the resources at any time. + +# Configuring DNS Settings + +Okay, so now let\'s get into some actual work that has to be done to get +content moved from a shared host to a VPS. At this point, I\'m assuming +you have a shared host with website content that you can still access, +and you\'ve purchased a new VPS and can SSH into that server. 
+ +The first change is minor, but it should be done immediately in order to +get things moving: DNS settings. Go to wherever your DNS settings are +handled. If your shared host also managed your DNS settings, you\'ll +need to first move that DNS over to your new VPS provider. For me, I +route my DNS through [Gandi](https://www.gandi.net). + +Once you know where your DNS settings are, go ahead and update the +`A` records to match the public IP address of your VPS. For +example: + +``` txt +A example.com xxx.xxx.xxx.xxx +A subdomain xxx.xxx.xxx.xxx +CNAME www example.com. +``` + +If you have any other records that require updates, such as MX or TXT +records for a mail server, be sure to update those accordingly. +Personally, I don\'t host my own mail server. I route all mail on my +custom domains to [Migadu](https://www.migadu.com). Hosting your own +email server can become complex quickly and is not for beginners. + +DNS changes can take up to 48 hours to propagate, so be sure to give it +some time before assuming you\'ve made an error. + +# Server Updates and Packages + +Now that the DNS settings have been changed, let\'s set up our server +while we wait for the DNS to propagate. First up is to ssh into your +server. If you\'ve signed up with a service like DigitalOcean, you can +add your SSH key to your account and to your VPS droplet so that you +don\'t need a password in order to SSH. + +```sh +ssh root@xxx.xxx.xxx.xxx +``` + +The VPS that is used in this blog post runs Ubuntu 20.04 with an Apache +web server. If you\'re working on a different operating system (OS) or +want a different web server, such as Nginx, you\'ll have to use +different commands to set it up. + +First, let\'s update and upgrade our server. + +**NOTE:** Since we have logged in to the server as `root` for +now, we don\'t need to use the `sudo` modifier before our +commands. 
+
+```sh
+apt update && apt upgrade -y
+```
+
+# Create A User Account
+
+While being able to use `root` can be beneficial at times,
+you shouldn\'t use `root` unless you have to.
+
+So let\'s set up a new user in our system. On Debian-based systems, the
+`adduser` command below creates a home directory for the new user by
+default.
+
+```sh
+adduser USERNAME
+```
+
+Now, create a password for that user.
+
+```sh
+passwd USERNAME
+```
+
+Finally, add the user to the sudoers file, so they can perform
+privileged commands.
+
+```sh
+usermod -a -G sudo USERNAME
+```
+
+If you are using SSH keys and not passwords, you\'ll need to copy your
+SSH key from your local machine to the VPS. If you haven\'t disabled
+password-based SSH yet, the easiest way to do this is
+`ssh-copy-id` from your local computer (not from the VPS):
+
+```sh
+ssh-copy-id testuser@xxx.xxx.xxx.xxx
+```
+
+If you\'ve disabled password-based SSH, you\'ll need to manually copy
+your SSH key into the `~/.ssh/authorized_keys` file.
+
+# Install Software
+
+Our goal here is to host a web server, so the next step is to install
+the Apache web server and any other packages we need.
+
+From this point on, I will be logged in as a user (not
+`root`) and will need to use the `sudo` modifier
+for most commands.
+
+```sh
+sudo apt update; sudo apt upgrade -y; sudo apt autoremove -y
+sudo apt install apache2
+```
+
+If you need other language support, such as PHP, you\'ll need to install
+that too. Note that the Apache module name matches the installed PHP
+version (`php7.4` on Ubuntu 20.04).
+
+```sh
+sudo apt install libapache2-mod-php php-dom
+sudo a2enmod php7.4
+sudo systemctl restart apache2
+```
+
+# Website Files & Folders
+
+Next up is to create the directories for the domain(s) we want to be
+hosted on this web server.
+
+```sh
+cd /var/www
+sudo mkdir example.com
+```
+
+We have a folder for `example.com` now, so let\'s add an
+`index.html` file and put it within a specific
+`public_html` folder.
You don\'t need this
+`public_html` if you don\'t want it, but it helps with
+organizing items related to `example.com` that you don\'t
+want to publish to the internet.
+
+```sh
+cd example.com
+sudo mkdir public_html && cd public_html
+sudo nano index.html
+```
+
+You can put anything you want in this `index.html` file. If
+you can\'t think of anything, paste this in there:
+
+``` html
+<!DOCTYPE html>
+<html lang="en">
+  <head>
+    <meta charset="utf-8">
+    <title>Hello, world!</title>
+  </head>
+  <body>
+    <p>Hello, world!</p>
+  </body>
+</html>
+```
+
+If you want something to be served at
+`example.com/page01/file.txt`, you\'ll have to create the
+`page01` directory under the `example.com`
+directory. For example:
+
+```sh
+cd /var/www/example.com/public_html
+sudo mkdir page01
+sudo nano page01/file.txt
+```
+
+# Apache Configuration
+
+Now, let\'s set up the files that will tell the server where to find the
+files for `example.com`. We will copy the default
+configuration file and create our own.
+
+```sh
+cd /etc/apache2/sites-available
+sudo cp 000-default.conf example.com.conf
+sudo nano example.com.conf
+```
+
+This configuration file will have a few default lines, but you\'ll need
+to edit it to look similar to this (settings may change based on your
+personal needs):
+
+``` config
+<VirtualHost *:80>
+    ServerAdmin your-email@email-provider.com
+    ServerName example.com
+    ServerAlias www.example.com
+    DocumentRoot /var/www/example.com/public_html
+    ErrorLog ${APACHE_LOG_DIR}/error.log
+    CustomLog ${APACHE_LOG_DIR}/access.log combined
+</VirtualHost>
+```
+
+Now, enable the configuration for your new site, disable the default
+configuration, and reload the web server.
+
+```sh
+sudo a2ensite example.com.conf
+sudo a2dissite 000-default.conf
+sudo systemctl reload apache2
+```
+
+You can always run a test to make sure no errors or warnings are found
+in your configuration files.
+
+```sh
+sudo apache2ctl configtest
+```
+
+Now, restart the web server entirely. After this, you should be able to
+browse to `http://example.com` and see the HTML content you
+provided earlier. Note that SSL/TLS has not been enabled yet, so you
+won\'t be able to use the secure version yet
+(`https://example.com`).
+
+```sh
+sudo systemctl restart apache2
+```
+
+You can repeat this for as many websites as you need. Just create the
+domain folders in `/var/www/`, add the configuration file,
+enable the configuration, and restart `apache2`.
+
+# SSL/TLS Certificates: Serve Websites Over HTTPS
+
+In order to serve secure content, you\'ll need to obtain SSL/TLS
+certificates. Luckily, there\'s a free tool called
+[Certbot](https://certbot.eff.org) that helps us with the process.
+
+The first step is to install `snapd` and `core`
+for Ubuntu.
+
+```sh
+sudo apt install snapd
+sudo snap install core
+sudo snap refresh core
+```
+
+Next, install the `certbot` snap package.
+
+```sh
+sudo snap install --classic certbot
+```
+
+Execute the following command to ensure that the `certbot`
+command can be run.
+
+```sh
+sudo ln -s /snap/bin/certbot /usr/bin/certbot
+```
+
+Finally, you can run `certbot` one of two ways:
+
+1. run it and let it alter your Apache configuration files
+   automatically to enable HTTPS redirects.
+2. run it and only allow it to create certificates. You\'ll need to
+   manually alter the config files to enable HTTPS redirects.
+
+Run certbot and allow automatic config changes:
+
+```sh
+sudo certbot --apache
+```
+
+Run certbot for certificates only and don\'t allow it to alter config
+files:
+
+```sh
+sudo certbot certonly --apache
+```
+
+The Certbot packages on your system come with a cron job or systemd
+timer that will renew your certificates automatically before they
+expire. You will not need to run Certbot again unless you change your
+configuration. You can test automatic renewal for your certificates by
+running this command:
+
+```sh
+sudo certbot renew --dry-run
+```
+
+Now, test your domains by going to `https://example.com`.
+
+# Firewall Security
+
+To enable better security on your server, you\'ll need to set up a basic
+firewall. For Ubuntu, we\'ll use [the uncomplicated
+firewall](../secure-your-network-with-the-uncomplicated-firewall/).
+
+Now, add the following rules to the firewall to allow SSH, Apache, and
+HTTP(S) connections. If you need to, you can enable different ports for
+specific applications, SFTP, etc.
+
+```sh
+sudo ufw default deny incoming
+sudo ufw default allow outgoing
+sudo ufw allow OpenSSH
+sudo ufw allow Apache
+sudo ufw allow proto tcp from any to any port 80,443
+```
+
+Once you\'ve added all the rules you need, enable the firewall.
+
+```sh
+sudo ufw enable
+```
+
+# Troubleshooting
+
+If you run into any issues during your VPS set-up, be sure to walk back
+through your actions and make sure you didn\'t miss any steps.
+
+Many websites have fantastic guides to setting up various web servers.
+This is one of the areas [where DigitalOcean
+shines](https://www.digitalocean.com/community/tutorials). For
+simpler or more Linux-oriented questions, I suggest using
+[Linuxize](https://linuxize.com).
+
+If you\'re getting certain errors (e.g.
+`500 Internal Server Error`) and need to debug locally, you
+can view the `access.log` and `error.log` files in
+the `/var/log/apache2/` directory.
diff --git a/content/blog/2021-03-30-vps-web-server.org b/content/blog/2021-03-30-vps-web-server.org
deleted file mode 100644
index d0c1b4f..0000000
--- a/content/blog/2021-03-30-vps-web-server.org
+++ /dev/null
@@ -1,399 +0,0 @@
-#+title: How to Set Up a VPS Web Server
-#+date: 2021-03-30
-#+description: A beginner's guide to setting up a virtual private server (VPS).
-#+filetags: :sysadmin:
-
-* Shared Hosting vs. VPS
-Choosing a place to host a website is one of the most confusing
-decisions for beginner web developers. Even for experienced web devs,
-choosing between different forms of web hosting can be a daunting
-choice.
-
-First, let's take a look at
-[[https://en.wikipedia.org/wiki/Shared_web_hosting_service][shared web
-hosting]]. Shared web hosting is a product where you are purchasing a
-small piece of a web server that is being shared between many websites.
-As a result, the cost of shared hosting is extremely low. You won't have
-access to the server itself, which means you can't install your own
-software on the server, such as Docker.
Usually, you are simply allowed -to connect your domains to the server, set up domain security, and other -small utilities. - -In contrast, a -[[https://en.wikipedia.org/wiki/Virtual_private_server][virtual private -server]] (VPS) is a virtual machine that replicates the environment of -having a dedicated server to yourself. You are able to control the -virtual server's host names, base file system, package manager, etc. -Another great upside of a VPS is that since it's virtual, the company -providing the VPS can dynamically increase the disk size, RAM size, or -number of CPUs at any time. However, the virtual server is still -physically located on a server that is shared between multiple virtual -servers. - -The choice between shared hosting and VPS mostly depends on your skill -level with system administration. If you're comforting working on a -server that is mostly left up to you (or you're willing to learn), then -a VPS is usually a better option. However, shared hosting is a fantastic -option for people who don't want to have to learn how to manage their -server. - -* My Situation -I had used shared hosting for approximately 5 years before trying my -first VPS. I manage a homelab and have had success running a server and -performing typical sysadmin duties, but I was still hesitant to get a -VPS. One fear was that I always struggled to properly set up the -networking part of a server - DNS and hostname configurations were not -my friend. - -As a little bit of background, I originally used -[[https://www.siteground.com][Siteground]] for my initially shared -hosting and stayed on that platform for at least a year. However, the UI -was clunky, and I didn't like how they handled certain technical -aspects, so I switched to [[https://www.namecheap.com][Namecheap]]. -Namecheap was great because it is the service I primarily use for -purchasing domain names, which made it incredibly easy to link them to -my hosting service. 
However, it was still mediocre shared hosting, and -Namecheap is notorious for not letting you use -[[https://letsencrypt.org][Let's Encrypt]] to obtain free SSL/TLS -certificates; Namecheap wants to make you purchase certificates through -their store. - -Finally, I settled down with [[https://www.iwebfusion.net][iWebFusion]] -for about the last year of my shared hosting. This service was pretty -great, came with free SSL/TLS, and I never had any complaints. - -However, I finally grew tired of not being able to install software on -my own web server. I wanted to be able to try out things like -[[https://postmill.xyz][Postmill]] or [[https://matrix.org][Matrix]]. -This is possible with a VPS, so I decided to grab a new domain name to -try it out. - -* Getting Started: Buying a VPS -The first step to moving over to a VPS is (you guessed it): finding a -VPS provider. For my VPSs, I use [[https://1984hosting.com][1984]] and -prefer their services much more than any alternative, due to their -location (Iceland), their [[https://1984hosting.com/GDPR/][privacy -policy]], their respect for GDPR, and the ability to remain anonymous if -you pay in Bitcoin or Monero. - -[[https://njal.la][Njalla]] is another good, privacy-oriented option for -VPS services. - -You'll have to decide what specifications you want on your VPS. For me, -I only build and deploy low-resource HTML, PHP, and Python websites. -This means I can survive on the smallest VPS: 1 CPU, 1GB of RAM, and -25GB SSD for $5.00 per month. - -As noted above, the great thing about a VPS is you can request your -provider to increase the resources at any time. - -* Configuring DNS Settings -Okay, so now let's get into some actual work that has to be done to get -content moved from a shared host to a VPS. At this point, I'm assuming -you have a shared host with website content that you can still access, -and you've purchased a new VPS and can SSH into that server. 
- -The first change is minor, but it should be done immediately in order to -get things moving: DNS settings. Go to wherever your DNS settings are -handled. If your shared host also managed your DNS settings, you'll need -to first move that DNS over to your new VPS provider. For me, I route my -DNS through [[https://www.gandi.net][Gandi]]. - -Once you know where your DNS settings are, go ahead and update the =A= -records to match the public IP address of your VPS. For example: - -#+begin_src txt -A example.com xxx.xxx.xxx.xxx -A subdomain xxx.xxx.xxx.xxx -CNAME www example.com. -#+end_src - -If you have any other records that require updates, such as MX or TXT -records for a mail server, be sure to update those accordingly. -Personally, I don't host my own mail server. I route all mail on my -custom domains to [[https://www.migadu.com][Migadu]]. Hosting your own -email server can become complex quickly and is not for beginners. - -DNS changes can take up to 48 hours to propagate, so be sure to give it -some time before assuming you've made an error. - -* Server Updates and Packages -Now that the DNS settings have been changed, let's set up our server -while we wait for the DNS to propagate. First up is to ssh into your -server. If you've signed up with a service like DigitalOcean, you can -add your SSH key to your account and to your VPS droplet so that you -don't need a password in order to SSH. - -#+begin_src sh -ssh root@xxx.xxx.xxx.xxx -#+end_src - -The VPS that is used in this blog post runs Ubuntu 20.04 with an Apache -web server. If you're working on a different operating system (OS) or -want a different web server, such as Nginx, you'll have to use different -commands to set it up. - -First, let's update and upgrade our server. - -*NOTE:* Since we have logged in to the server as =root= for now, we -don't need to use the =sudo= modifier before our commands. 
- -#+begin_src sh -apt update && apt upgrade -y -#+end_src - -* Create A User Account -While being able to use =root= can be beneficial at times, you shouldn't -use =root= unless you have to. - -So let's set up a new user in our system. The =-m= option below tells -the OS to create a home directory for the new user. - -#+begin_src sh -adduser USERNAME -#+end_src - -Now, create a password for that user. - -#+begin_src sh -passwd USERNAME -#+end_src - -Finally, add the user to the sudoers file, so they can perform -priveleged commands. - -#+begin_src sh -usermod -a -G sudo USERNAME -#+end_src - -If you are using SSH keys and not passwords, you'll need to copy your -SSH key from your local machine to the VPS. If you haven't disabled -password-based SSH yet, the easiest way to do this is =ssh-copy-id= from -your local computer (not from the VPS): - -#+begin_src sh -ssh-copy-id testuser@xxx.xxx.xxx.xxx -#+end_src - -If you've disabled password-based SSH, you'll need to manually copy your -SSH key into the =~/.ssh/authorized_keys= file. - -* Install Software -Our goal here is to host a web server, so the next step is to install -the Apache web server and any other packages we need. - -From this point on, I will be logged in as a user (not =root=) and will -need to use the =sudo= modifier for most commands. - -#+begin_src sh -sudo apt update; sudo apt upgrade -y; sudo apt autoremove -y -sudo apt install apache2 -#+end_src - -If you need other language support, such as PHP, you'll need to install -that too. - -#+begin_src sh -sudo apt install libapache2-mod-php php-dom -sudo a2enmod php -sudo systemctl restart apache2 -#+end_src - -* Website Files & Folders -Next up is to create the directories for the domain(s) we want to be -hosted on this web server. - -#+begin_src sh -cd /var/www -sudo mkdir example.com -#+end_src - -We have a folder for =example.com= now, so let's add an =index.html= -file and put it within a specific =public_html= folder. 
You don't need -this =public_html= if you don't want it, but it helps with organizing -items related to =example.com= that you don't want to publish to the -internet. - -#+begin_src sh -cd example.com -sudo mkdir public_html && cd public_html -sudo nano index.html -#+end_src - -You can put anything you want in this =index.html= file. If you can't -think of anything, paste this in there: - -#+begin_src html - - - - - - Hello, world! - - -

      Hello, world!

      - - -#+end_src - -If you want something to be served at =example.com/page01/file.txt=, -you'll have to create the =page01= directory under the =example.com= -directory. For example: - -#+begin_src sh -cd /var/www/example.com/public_html -sudo mkdir page01 -sudo nano file.txt -#+end_src - -* Apache Configuration -Now, let's set up the files that will tell the server where to find the -files for =example.com=. We will copy the default configuration file and -create our own. - -#+begin_src sh -cd /etc/apache2/sites-available -sudo cp 000-default.conf example.com.conf -sudo nano example.com.conf -#+end_src - -This configuration file will have a few default lines, but you'll need -to edit it to look similar to this (settings may change based on your -personal needs): - -#+begin_src config - - ServerAdmin your-email@email-provider.com - ServerName example.com - ServerAlias www.example.com - DocumentRoot /var/www/example.com/public_html - ErrorLog ${APACHE_LOG_DIR}/error.log - CustomLog ${APACHE_LOG_DIR}/access.log combined - -#+end_src - -Now, enable the configuration for your new site, disable the default -configuration, and reload the web server. - -#+begin_src sh -sudo a2ensite example.com.conf -sudo a2dissite 000-default.conf -sudo systemctl reload apache2 -#+end_src - -You can always run a test to make sure no errors or warnings are found -in your configuration files. - -#+begin_src sh -sudo apache2ctl configtest -#+end_src - -Now, restart the web server entirely. After this, you should be able to -browse to =http://example.com= and see the HTML content you provided -earlier. Note that SSL/TLS has not been enabled yet, so you won't be -able to use the secure version yet (=https://example.com=). - -#+begin_src sh -sudo systemctl restart apache2 -#+end_src - -You can repeat this for as many websites as you need. Just create the -domain folders in =/var/www/=, add the configuration file, enable the -configuration, and restart =apache2=. 
-
-* SSL/TLS Certificates: Serve Websites Over HTTPS
-In order to serve secure content, you'll need to obtain SSL/TLS
-certificates. Luckily, there's a free tool called
-[[https://certbot.eff.org][Certbot]] that helps us with the process.
-
-The first step is to install =snapd= and =core= for Ubuntu.
-
-#+begin_src sh
-sudo apt install snapd
-sudo snap install core
-sudo snap refresh core
-#+end_src
-
-Next, install the =certbot= snap package.
-
-#+begin_src sh
-sudo snap install --classic certbot
-#+end_src
-
-Execute the following command to ensure that the =certbot= command can
-be run.
-
-#+begin_src sh
-sudo ln -s /snap/bin/certbot /usr/bin/certbot
-#+end_src
-
-Finally, you can run =certbot= one of two ways:
-
-1. run it and let it alter your Apache configuration files automatically
-   to enable HTTPS redirects.
-2. run it and only allow it to create certificates. You'll need to
-   manually alter the config files to enable HTTPS redirects.
-
-Run certbot and allow automatic config changes:
-
-#+begin_src sh
-sudo certbot --apache
-#+end_src
-
-Run certbot for certificates only and don't allow it to alter config
-files:
-
-#+begin_src sh
-sudo certbot certonly --apache
-#+end_src
-
-The Certbot packages on your system come with a cron job or systemd
-timer that will renew your certificates automatically before they
-expire. You will not need to run Certbot again unless you change your
-configuration. You can test automatic renewal for your certificates by
-running this command:
-
-#+begin_src sh
-sudo certbot renew --dry-run
-#+end_src
-
-Now, test your domains by going to =https://example.com=.
-
-* Firewall Security
-To enable better security on your server, you'll need to enable a basic
-firewall. For Ubuntu, we'll use
-[[../secure-your-network-with-the-uncomplicated-firewall/][the
-uncomplicated firewall]].
-
-Now, add the following rules to the firewall to allow SSH, Apache, and
-HTTP(S) connections.
-If you need to, you can enable different ports for specific
-applications, SFTP, etc.
-
-#+begin_src sh
-sudo ufw default deny incoming
-sudo ufw default allow outgoing
-sudo ufw allow OpenSSH
-sudo ufw allow Apache
-sudo ufw allow proto tcp from any to any port 80,443
-#+end_src
-
-Once you've added all the rules you need, enable the firewall.
-
-#+begin_src sh
-sudo ufw enable
-#+end_src
-
-* Troubleshooting
-If you run into any issues during your VPS set-up, be sure to walk back
-through your actions and make sure you didn't miss any steps.
-
-Many websites have fantastic guides to setting up various web servers.
-This is one of the areas
-[[https://www.digitalocean.com/community/tutorials][where DigitalOcean
-shines]]. For simpler or more Linux-oriented questions, I suggest using
-[[https://linuxize.com][Linuxize]].
-
-If you're getting certain errors (e.g. =500 Internal Server Error=) and
-need to debug locally, you can view the =access.log= and =error.log=
-files in the =/var/log/apache2/= directory.
diff --git a/content/blog/2021-04-17-gemini-server.md b/content/blog/2021-04-17-gemini-server.md
new file mode 100644
index 0000000..7b4aafe
--- /dev/null
+++ b/content/blog/2021-04-17-gemini-server.md
@@ -0,0 +1,165 @@
++++
+date = 2021-04-17
+title = "Hosting a Gemini Server"
+description = ""
+draft = false
++++
+
+# Similar Article Available
+
+To read more about Gemini and ways to test out this new protocol without
+your own server, see my previous post [Launching a Gemini
+Capsule](../launching-a-gemini-capsule/).
+
+# Preparation
+
+This guide assumes you have access to a server accessible to the world
+through a public IP address and that you own a domain name used for this
+Gemini capsule.
+
+# Getting Started with Agate
+
+We are going to use [Agate](https://github.com/mbrubeck/agate) for this
+tutorial. This is a basic Gemini server written in Rust. It takes very
+little time and maintenance to get it running.
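
Before diving into the installation, it may help to see what Agate will actually serve: a Gemini capsule is just a directory of Gemtext files, a deliberately simple line-oriented format. A quick sketch — the file name and contents here are illustrative, written locally rather than on the server:

```sh
# Gemtext is line-oriented: '#' begins a heading, '=>' begins a link
# line (URL first, optional label after), everything else is plain text.
cat > index.gmi <<'EOF'
# My Capsule
Welcome to my Gemini capsule.

=> gemini://example.com/blog/ My blog
=> https://example.com The same site over HTTPS
EOF

# Quick sanity check: count the link lines.
grep -c '^=>' index.gmi    # prints 2
```

A file like this is exactly what you will later drop into Agate's content directory.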
+
+# Install Dependencies
+
+First, you will need to install the Rust package for your system. On
+Ubuntu, use the following commands (remember to use `sudo` if you are
+not the root user). The Rust installation will give you options to
+customize the installation; I used the default installation options.
+
+```sh
+sudo apt update && sudo apt upgrade -y
+curl https://sh.rustup.rs -sSf | sh
+```
+
+Remember to configure your shell with the new configuration:
+
+```sh
+source $HOME/.cargo/env
+```
+
+Before we install agate, make sure you have the `gcc` package
+installed:
+
+```sh
+sudo apt install gcc
+```
+
+Next, you\'ll need to install the agate executable with Rust\'s Cargo
+package manager:
+
+```sh
+cargo install agate
+```
+
+# Create Symlinks
+
+Once Cargo has finished installing all the required packages, symlink
+the executable to your \$PATH.
+
+```sh
+sudo ln -s $HOME/.cargo/bin/agate /usr/local/bin/agate
+```
+
+# Using Agate\'s Built-In Installation Tool
+
+If you\'re running Ubuntu or Debian, use the Debian installation script
+found in Agate\'s GitHub repository, under the `tools/debian`
+folder.
+
+```sh
+git clone https://github.com/mbrubeck/agate
+cd agate/tools/debian
+sudo ./install.sh
+```
+
+# Configure the Gemini Service
+
+We have a little more to do, but since this script tries to immediately
+run the service, it will likely fail with an exit code. Let\'s add our
+finishing touches. Edit the following file and replace the hostname with
+your desired URL. You can also change the directory where content will
+be served.
+
+```sh
+sudo nano /etc/systemd/system/gemini.service
+```
+
+```sh
+# Edit these lines to whatever you want - see the next code block for my personal configuration.
+WorkingDirectory=/srv/gemini
+ExecStart=agate --hostname $(uname -n) --lang en
+```
+
+This is my personal config:
+
+```sh
+WorkingDirectory=/var/gemini/
+ExecStart=agate --hostname gemini.example.com --lang en
+```
+
+Since we\'ve altered the systemd configuration files, we have to reload
+the daemon. Let\'s do that, restart our service, and check its status.
+
+```sh
+sudo systemctl daemon-reload
+sudo systemctl restart gemini.service
+sudo systemctl status gemini.service
+```
+
+# Fixing Systemd Errors
+
+If you\'re still getting errors, the installation process may not have
+properly enabled the gemini service. Fix it with the following commands.
+
+```sh
+sudo systemctl enable gemini.service
+sudo systemctl restart gemini.service
+sudo systemctl status gemini.service
+```
+
+# Firewall Rules
+
+Great! Our server is now functional and running. The first consideration
+now is that you need to be able to access port 1965 on the server. If
+you have a firewall enabled, you\'ll need to open that port up.
+
+```sh
+sudo ufw allow 1965
+sudo ufw reload
+```
+
+# Creating Content
+
+Let\'s create the Gemini capsule. Note that wherever you set the
+WorkingDirectory variable to earlier, Agate will expect you to put your
+Gemini capsule contents in a sub-folder called \"content.\" So, I place
+my files in \"/var/gemini/content.\" I\'m going to create that folder
+now and put a file in there.
+
+```sh
+sudo mkdir /var/gemini/content
+sudo nano /var/gemini/content/index.gmi
+```
+
+You can put whatever you want in the \"index.gmi\" file, just make sure
+it\'s valid Gemtext.
+
+# The Results
+
+Here are some screenshots of the Gemini page I just created in the
+[Lagrange](https://gmi.skyjake.fi/lagrange/) browser and the
+[amfora](https://github.com/makeworld-the-better-one/amfora) browser.
+
+![GUI Gemini
+browser](https://img.cleberg.net/blog/20210417-hosting-a-gemini-server/lagrange.png)
+
+*Lagrange*
+
+![CLI Gemini
+browser](https://img.cleberg.net/blog/20210417-hosting-a-gemini-server/amfora.png)
+
+*Amfora*
diff --git a/content/blog/2021-04-17-gemini-server.org b/content/blog/2021-04-17-gemini-server.org
deleted file mode 100644
index fd50c20..0000000
--- a/content/blog/2021-04-17-gemini-server.org
+++ /dev/null
@@ -1,150 +0,0 @@
-#+title: Hosting a Gemini Server
-#+date: 2021-04-17
-#+description: A guide to self-hosting a Gemini web server on your own server.
-#+filetags: :sysadmin:
-
-* Similar Article Available
-To read more about Gemini and ways to test out this new protocol without
-your own server, see my previous post
-[[../launching-a-gemini-capsule/][Launching a Gemini Capsule]].
-
-* Preparation
-This guide assumes you have access to a server accessible to the world
-through a public IP address and that you own a domain name used for this
-Gemini capsule.
-
-* Getting Started with Agate
-We are going to use [[https://github.com/mbrubeck/agate][Agate]] for
-this tutorial. This is a basic Gemini server written in Rust. It takes
-very little time and maintenance to get it running.
-
-* Install Dependencies
-First, you will need to install the Rust package for your system. On
-Ubuntu, use the following commands (remember to use =sudo= if you are
-not the root user). The Rust installation will give you options to
-customize the installation; I used the default installation options.
-
-#+begin_src sh
-sudo apt update && sudo apt upgrade -y
-curl https://sh.rustup.rs -sSf | sh
-#+end_src
-
-Remember to configure your shell with the new configuration:
-
-#+begin_src sh
-source $HOME/.cargo/env
-#+end_src
-
-Before we install agate, make sure you have the =gcc= package installed:
-
-#+begin_src sh
-sudo apt install gcc
-#+end_src
-
-Next, you'll need to install the agate executable with Rust's Cargo
-package manager:
-
-#+begin_src sh
-cargo install agate
-#+end_src
-
-* Create Symlinks
-Once Cargo has finished installing all the required packages, symlink
-the executable to your $PATH.
-
-#+begin_src sh
-sudo ln -s $HOME/.cargo/bin/agate /usr/local/bin/agate
-#+end_src
-
-* Using Agate's Built-In Installation Tool
-If you're running Ubuntu or Debian, use the Debian installation script
-found in Agate's GitHub repository, under the =tools/debian= folder.
-
-#+begin_src sh
-git clone https://github.com/mbrubeck/agate
-cd agate/tools/debian
-sudo ./install.sh
-#+end_src
-
-* Configure the Gemini Service
-We have a little more to do, but since this script tries to immediately
-run the service, it will likely fail with an exit code. Let's add our
-finishing touches. Edit the following file and replace the hostname with
-your desired URL. You can also change the directory where content will
-be served.
-
-#+begin_src sh
-sudo nano /etc/systemd/system/gemini.service
-#+end_src
-
-#+begin_src sh
-# Edit these lines to whatever you want - see the next code block for my personal configuration.
-WorkingDirectory=/srv/gemini
-ExecStart=agate --hostname $(uname -n) --lang en
-#+end_src
-
-This is my personal config:
-
-#+begin_src sh
-WorkingDirectory=/var/gemini/
-ExecStart=agate --hostname gemini.example.com --lang en
-#+end_src
-
-Since we've altered the systemd configuration files, we have to reload
-the daemon. Let's do that, restart our service, and check its status.
-
-#+begin_src sh
-sudo systemctl daemon-reload
-sudo systemctl restart gemini.service
-sudo systemctl status gemini.service
-#+end_src
-
-* Fixing Systemd Errors
-If you're still getting errors, the installation process may not have
-properly enabled the gemini service. Fix it with the following commands.
-
-#+begin_src sh
-sudo systemctl enable gemini.service
-sudo systemctl restart gemini.service
-sudo systemctl status gemini.service
-#+end_src
-
-* Firewall Rules
-Great! Our server is now functional and running. The first consideration
-now is that you need to be able to access port 1965 on the server. If
-you have a firewall enabled, you'll need to open that port up.
-
-#+begin_src sh
-sudo ufw allow 1965
-sudo ufw reload
-#+end_src
-
-* Creating Content
-Let's create the Gemini capsule. Note that wherever you set the
-WorkingDirectory variable to earlier, Agate will expect you to put your
-Gemini capsule contents in a sub-folder called "content." So, I place my
-files in "/var/gemini/content." I'm going to create that folder now and
-put a file in there.
-
-#+begin_src sh
-sudo mkdir /var/gemini/content
-sudo nano /var/gemini/content/index.gmi
-#+end_src
-
-You can put whatever you want in the "index.gmi" file, just make sure
-it's valid Gemtext.
-
-* The Results
-Here are some screenshots of the Gemini page I just created in the
-[[https://gmi.skyjake.fi/lagrange/][Lagrange]] browser and the
-[[https://github.com/makeworld-the-better-one/amfora][amfora]] browser.
-
-#+caption: GUI Gemini browser
-[[https://img.cleberg.net/blog/20210417-hosting-a-gemini-server/lagrange.png]]
-
-/Lagrange/
-
-#+caption: CLI Gemini browser
-[[https://img.cleberg.net/blog/20210417-hosting-a-gemini-server/amfora.png]]
-
-/Amfora/
diff --git a/content/blog/2021-04-23-php-comment-system.md b/content/blog/2021-04-23-php-comment-system.md
new file mode 100644
index 0000000..f369d0c
--- /dev/null
+++ b/content/blog/2021-04-23-php-comment-system.md
@@ -0,0 +1,277 @@
++++
+date = 2021-04-23
+title = "Roll Your Own Static Commenting System in PHP"
+description = ""
+draft = false
++++
+
+# The Terrible-ness of Commenting Systems
+
+The current state of affairs regarding interactive comment systems is,
+well, terrible. It is especially awful if you\'re a privacy-conscious
+person who does not generally load third-party scripts or frames on the
+websites you visit.
+
+Even further, many comment systems charge exorbitant fees for
+something that should be standard.
+
+Of course, there are some really terrible options:
+
+- Facebook Comments
+- Discourse
+
+There are some options that are better but still use too many scripts,
+frames, or social integrations on your web page that could impact some
+users:
+
+- Disqus
+- Isso
+- Remark42
+
+Lastly, I looked into a few unique ways of generating blog comments,
+such as using Twitter threads or GitHub issues to automatically post
+comments. However, these both rely on external third-party sites that I
+don\'t currently use.
+
+# Stay Static with Server-Side Comments
+
+The main issue for my personal use-case is that my blog is completely,
+100% static. I use PHP on the back-end but website visitors only see
+HTML and a single CSS file. No external JavaScript and no embedded
+frames.
+
+So, how do we keep a site static and still allow users to interact with
+blog posts? The key is actually pretty simple - I\'m already using PHP,
+so why not rely on the classic HTML `<form>` and a PHP script to
+save the comments somewhere? As it turns out, this was a perfect
+solution for me.
+
+The second issue for my personal use-case is that I am trying to keep
+the contents of my website accessible over time, as described by
+@brandur, in his post entitled [Blog with Markdown + Git, and degrade
+gracefully through
+time](https://brandur.org/fragments/graceful-degradation-time).
+
+This means I cannot rely on a database for comments, since I do not rely
+on a database for any other part of my websites.
+
+I blog in plain Markdown files, commit all articles to Git, and ensure
+that future readers will be able to see the source data long after I\'m
+gone, or the website has gone offline. However, I still haven\'t
+committed any images served on my blog to Git, as I\'m not entirely sold
+on Git LFS yet - for now, images can be found at
+[img.cleberg.net](https://img.cleberg.net).
+
+Saving my comments back to the Git repository ensures that another
+aspect of my site will degrade gracefully.
+
+# Create a Comment Form
+
+Okay, let\'s get started. The first step is to create an HTML form that
+users can see and utilize to submit comments. This is fairly easy and
+can be changed depending on your personal preferences.
+
+Take a look at the code block below for the form I currently use. Note
+that the URL placeholder is replaced automatically in PHP with the
+current post\'s URL, so that my PHP script used later will know which
+blog post the comment is related to.
+
+The form contains the following structure:
+
+1. `<form action="...">` - This is the form and will determine which PHP
+   script to send the comment to.
+2. `