Just a quick little quarantine project that I thought turned out okay. I went down a rabbit hole of watching some marble run videos over a few days. Since I was staring at my television while watching them, the idea of having the marbles go back and forth over it popped into my head. That expanded into having a coaster around my shelves of figures and other things, so I ordered in a few sets of Gravitrax, and found some automated lifts and other pieces on a couple of Etsy shops.
I had hoped that maybe the vertical piece connectors from Gravitrax would hold the bridge across the gap, but alas they are not designed for that, so I used some Command Strip Shelves to hold them up. Hopefully you enjoy the video.
References:
- MarbleTracks Etsy Shop: https://www.etsy.com/ca/shop/MarbleTracks
- MarbleMania3D Etsy Shop: https://www.etsy.com/ca/shop/MarbleMania3D
- Gravitrax: https://www.ravensburger.org/uk/discover/gravitrax/index.html
Also on Odysee
Just a short follow-up to the previous article, How Did I Setup Calibre/Calibre-Web for eBooks. I had a couple of people ask/comment about running this from a Raspberry Pi. Since I had a Pi 3 sitting here, I thought I would give it a try and document the differences. Once you get it set up, it should work pretty much the same, but there were two differences in the install process.
First of all, the install script from the Calibre site doesn't work because the Raspberry Pi uses the ARM architecture instead of x86. Luckily, if you are using Raspbian on your Pi, there is a Calibre build in the standard repositories. Or you can build it from source. Secondly, I ran into problems when running the latest code from the Calibre-Web repository. I am not sure if that is a general issue due to ongoing work in the current branch, or if something was introduced that specifically caused an issue on the Pi. It could also be a compatibility issue if the version of Calibre in the Raspbian repository is older, as I didn't dig too deep. Either way, going back to the 0.6.11 tag of Calibre-Web solved the issue.
Here are all the commands needed to get a basic setup going on an up-to-date Raspbian install. These steps just put everything in folders under the home directory; you may want to change this for a more permanent setup.
#update package database and install Calibre
sudo apt update
sudo apt install calibre

#Create basic Calibre database with a single book I downloaded
mkdir Calibre
calibredb add ~/Downloads/book.epub --with-library ~/Calibre/

#Test start of built in Calibre server (available at http://localhost:8080)
/usr/bin/calibre-server --with-library ~/Calibre

#Clone the 0.6.11 branch of Calibre-Web and install prerequisites
git clone --branch 0.6.11 https://github.com/janeczku/calibre-web
cd calibre-web
pip3 install --system --target vendor -r requirements.txt

#Start Calibre-Web (available at http://localhost:8083)
python3 cps.py
Watch my full playthrough here
- Released: 1992
- Published By: Ultra Games
- Owned: Physical (Cartridge only)
I recently rewatched the original series of Star Trek, so I thought it would be fun to play through some of the games I remembered as well. In 1992 Ultra Games released Star Trek 25th Anniversary for the Nintendo Entertainment System (NES). Ultra Games was a subsidiary of Konami created to get around Nintendo's limit on the number of games a single company could release in a year. To coincide with the anniversary, there were also games released on the Nintendo Game Boy and PC platforms, published by Ultra Games and Interplay respectively.
Story
The game is a sequel to the events of the original series episode "A Piece of the Action". While it is not necessary to have seen the episode before playing, it does reference characters from that story as well as others like Harry Mudd. At the start of the game the Enterprise arrives at the Sigma Iotia system, where they start encountering fluctuations in the gravitational fields. While investigating, a tear in space opens up and flings the Enterprise off to an unknown part of space. In the process, the ship is damaged and the dilithium crystals are drained, leaving the Enterprise stranded in a decaying orbit around an unidentified planet. You must take control of the crew in order to repair, refuel, and return the ship to Federation space so that you can investigate the cause of the disturbance.
Gameplay
Star Trek mostly plays like an adventure game. You explore areas, collect items, and solve puzzles. When beaming down to planets, you can choose from Spock, McCoy, or a number of experts to form your three person away team. These experts will be able to give you specific information depending on the situation, and also help with things related to their field. For instance the security officer is able to provide some extra firepower in a tough situation. A phaser and tricorder are the only items you start with, the rest being found along the way. While on the ship, you can use the D-Pad to turn and face different crew members, and the A-Button to interact with them. If you are in ship to ship combat, the D-Pad controls the ship, the A-Button fires phasers, and the B-Button fires photon torpedoes. When on an away mission, the D-Pad moves Kirk (the rest of the away team will follow), the A-Button uses the tricorder on nearby objects, and the B-Button fires your phaser. In all cases the Start button brings up another menu from where you can use items from your inventory, and perform some other tasks.
Notes
The symbols in the temple on the first planet are the same symbols used on the album Led Zeppelin IV.
I am not 100% sure, but I think the store on Sigma Iotia II has Nintendo Entertainment Systems for sale...
Subjectively...
As a kid I remember renting this game multiple times. I don't think I ever beat it back then, but I do have fond memories of it, so I was glad when I found a physical copy to buy. Replaying it now, I was pleasantly surprised at how well it held up. The graphics on away missions are very impressive for the NES (although the graphics on the ship range between good and a little scary...), and the sounds/music match the iconic cues of the source material pretty well. From a difficulty perspective, none of the puzzles are particularly difficult, especially if you follow the old adventure game strategy of writing things down. The story had me engaged even if it was fairly short. Unfortunately, being an adventure game, there is not much replayability to this title, and if you know what you are doing, it can be completed in under 90 minutes. There are lots of places you can visit on the starmap, but sadly most of them aren't relevant to the story. Having said that, the game feels like playing through a good Star Trek episode, and it is worth returning to when I want a nice, compact Trek story to play.
I have been playing some games for upcoming Game Cats on the Digital Lethargia Twitch channel.
References
- Moby Games: https://www.mobygames.com/game/star-trek-25th-anniversary__
- GameFAQs: https://gamefaqs.gamespot.com/nes/563461-star-trek-25th-anniversary/data
- Wikipedia: https://en.wikipedia.org/wiki/Star_Trek:_25th_Anniversary_(NES_video_game)
While I do still use services like Amazon and Google for certain movies, books, and comics, I like to retain actual ownership of my digital media whenever possible. In that spirit I run a Plex server in my home network for my collection of movies and music. After using (and loving) a Kindle (Keyboard model) for over 10 years it was finally time for a new eReader. I have tried reading from a computer/tablet screen and definitely notice a difference in eye strain compared to eInk.
For my work and other projects I still hand write a lot of notes. I think the actual act of writing helps cement the information in my head more than typing does, but referring back to or searching old notes becomes difficult with stacks of notebooks. With that in mind I decided to go with an eInk tablet that runs Android and has full stylus note taking with handwriting to text recognition.
Once that decision was made, I started looking into possible changes to my setup for storing/consuming eBooks and comics. I already had a decent amount of DRM-free eBooks from many of the Humble Bundles, and since the new eInk tablet runs Android, it can run the Kindle and Google Play reader apps for any books that you can't find being sold DRM free (plus there are ways around the DRM of those platforms, but I won't go into that here...). What I wanted was a system similar to Plex that could catalogue, track, and serve out my books so that I could consume them in whatever way I wanted (mostly from my eReader, but it is nice to have options from the browser, or just to download the file). The most active project I could find was an eBook manager called Calibre; it is open source and seemed to do everything I wanted. In most of the reading I had done, the biggest complaint seemed to be that the web interface was not that great, which is why another open source project called Calibre-Web was started. Both packages also support the OPDS standard, which a lot of eReading apps use to sync/download eBooks.
It was a little unclear if both packages were needed, so let's find out.
Calibre
My home server is currently still on Ubuntu 18.04 LTS. Luckily, this is still supported by Calibre, and they have easy install instructions here. Calibre is designed first and foremost as a desktop app, which I knew might cause some difficulty installing it on a headless system, but it turned out to be an easy process thanks to some documentation I found.
Prerequisites
Calibre requires xdg-utils, wget, xz-utils, and a version of Python greater than 2.6. I already had newer versions of Python and wget, so I installed the other two using the apt command:
sudo apt install xdg-utils
sudo apt install xz-utils
Install
Once you have the prerequisites installed, they give you a simple one line install command for the Calibre package:
sudo -v && wget -nv -O- https://download.calibre-ebook.com/linux-installer.sh | sudo sh /dev/stdin install_dir=/opt
There are also options for an isolated install or one for systems without updated CA certificates.
Setup
Calibre is normally used as a desktop app that has a server component. Since I was installing this on a headless server, I would not be able to do the setup that way. There is a full command line interface, and I found a guide at Digital Ocean that helped with this process. Note that if you watch the video, I mistakenly followed the 14.04 guide, so the "xvfb-run" part of the commands is no longer necessary and I have omitted it from the text version here.
The first thing you need is some eBooks. If you do not currently have any, you can download some from Project Gutenberg, which distributes public domain books. Once you have some, you will need to create a Calibre database and add your books to it. To do so we can use the calibredb command, which will create a new database/folder structure at the location you specify if one does not already exist. An example of adding a single eBook would be:
calibredb add ~/Downloads/MyBook.epub --with-library /mnt/storage/calibre/
Or you can add a full directory structure of books with the --recurse option:
calibredb add /mnt/storage/Books --library-path /mnt/storage/calibre --recurse
It will add any known file types to your database, and from a basic setup perspective that is all we need to do.
Automatically Add Books
I kept following the steps from the Digital Ocean tutorial to set up a folder that I can copy new books into; every 15 minutes the system checks it and adds any new books into the Calibre database. The crontab entry is as follows:
*/15 * * * * /opt/calibre/calibredb add /mnt/storage/Documents/Books/ToAdd --recurse --library-path /mnt/storage/calibre && rm /mnt/storage/Documents/Books/ToAdd/*
Note that you can also add new books via the web interface, but this is a good way for bulk additions, like when you purchase a new bundle of eBooks.
Web Interface
Calibre comes with its own web interface. It seems to work well and supports OPDS, but I found it clunky to use and not that nice to look at. It is simple to start with the calibre-server command and the path to your Calibre database:
/opt/calibre/calibre-server /mnt/storage/calibre
You can also make it run as a service, similarly to what we will do with the calibre-web package below.
Calibre-Web
I had come across the calibre-web project while researching and it looked pretty good to me. It is a fork of the Library project which aims to provide a modern web interface to a Calibre database and is Python based. It requires Python 3 and is very easy to install. I did not have the Python3 version of pip (Basically a package manager for Python) so I first installed that:
sudo apt install python3-pip
Then I cloned the calibre-web repository using git:
git clone https://github.com/janeczku/calibre-web
They have a handy file that you can run pip against to install all the prerequisites:
pip3 install --system --target vendor -r requirements.txt
And then you can simply start the server with:
python3 cps.py
When you first browse to the site, it will ask you for the location of your Calibre database, and then you can login with the default admin/admin123 username and password.
Start As Service/daemon
Now that I have confirmed the system is working, I want it to start automatically as a service/daemon. My system is using systemd for daemons so following the instructions from the wiki I created a file called cps.service under /etc/systemd/system with the following:
[Unit]
Description=Calibre-Web

[Service]
Type=simple
ExecStart=/usr/bin/python3 /home/lemac/calibre/calibre-web/cps.py
WorkingDirectory=/home/lemac/calibre/calibre-web

[Install]
WantedBy=multi-user.target
Note that for my testing I left the folder underneath my home folder; it may be wise to move this somewhere more global for a final setup. Once that file is created, you can enable the service with the systemctl command, and similarly stop/start it as necessary:
sudo systemctl enable cps.service
sudo systemctl start cps.service
sudo systemctl stop cps.service
Usage/Experience
I found the interface of Calibre-Web to be smoother than the built-in one that comes with Calibre. At first, finding books through categories and authors was a bit messy. This seemed to be because the tags read in from the books themselves were either overly complicated or not present at all, so I took some time going through my collection and cleaning up the tags. The system can fetch info and cover images from online databases, which also helped, although I would still cut down the number of tags because I like simpler, more general categories.
A really good feature is the ability to convert your ebooks between a number of different formats (e.g. convert mobi to epub). The conversion leaves the original format intact as well, which helps when you are using different devices/apps to read your books.
The system also allows multiple users, so you can add other people from your household, share your books, and allow them to upload theirs. Virtual shelves can also be created to organize your collection however you like, and these shelves can be made public to other users on your system. For instance, I have created public shelves for all comic books, my favourite books, and the books I have finished reading.
Reading
Reading books using this system is pretty easy. From the browser itself you can read any book in a supported format, and it will keep track of your reading progress. You can also download the file in any of the available formats to read on your computer, or copy over to your reader/tablet via USB or other means.
I prefer doing my reading on eInk screens. I spend so much of my time in front of brightly lit screens that reading on eInk is definitely more comfortable on my eyes. For my old Kindle (or if you use the Kindle app), Calibre-Web supports the send-to-Kindle option as long as you have an SMTP/mail server set up. This makes it as easy as clicking on the book, and then clicking the "Send Mobi to Kindle" option. The next time you turn on your Kindle, the book will be downloaded.
For my new Onyx Boox device, I wanted to try out the OPDS support that Calibre-Web has, so I started using the Moon+ Reader app. Inside the app you simply create a new "Net Library" and point it to the address of your Calibre-Web server with /opds added to the end. You can then browse through all categories and shelves you have created, select a book you want to read, and tap "Download". The book downloads to your device and you can start reading.
Finding DRM free eBooks and Comics
To get more books into your collection, they will need to be DRM free, which can be a little more inconvenient but not too bad. There are a bunch of publishers/authors who have specified that their books be sold without DRM. If you have a particular store you buy from, do a search to see if they have a DRM free section. Also, the Humble book and comic bundles are generally all DRM free. Here are some links to DRM free sources I have found:
- Ebooks.com DRM free section (store) - https://www.ebooks.com/en-ca/drm-free/
- Project Gutenberg (Free public domain ebooks) - https://www.gutenberg.org/
- Comixology DRM free section (Comics store) - Link
- Current Humble Books/Comics bundles - https://www.humblebundle.com/books
There are also ways to get your current Kindle collections and other DRM protected books into Calibre, but I will not cover that here.
ToDo
That is basically it for a setup of my new book server. There are some things I still want to tweak, and a couple decisions to make. I want to add a certificate and make the service run over SSL so that it is encrypted. I could possibly move this setup to my online host, so that my collection is available from anywhere, but I think I will keep it local as it is a rare occasion I am out somewhere and need to download a book.
As I said before I am calling these articles "How Did I"s instead of "How To"s because I am by no means an expert, and would like these articles to encourage people to give these things a try and let me know how I could improve my processes. You can participate via the message boards on the BBS, the Discord server (invite link on your user dashboard), via Twitter, or through the comments on the video available on Odysee/LBRY and YouTube.
References
- Calibre - https://calibre-ebook.com/about
- Calibre-Web - https://github.com/janeczku/calibre-web
- OPDS - https://en.wikipedia.org/wiki/Open_Publication_Distribution_System
- Digital Ocean Guide: Link
Now that Digital Lethargia and the Diglet RSS Reader have been live for a while, I figured there was a need to break up the content that gets loaded into pages or something similar. This will help keep the scroll length and the memory usage of the browser down. I also noticed that performance of fetching unread RSS articles was starting to suffer, and retrieving fewer rows at a time could help with that, although that problem was better helped by creating some extra indices on the databases as you will read below.
Let's start with pagination, which turns out to be a lot more complicated than you would think. It is especially hard to do on data that comes in from multiple sources and is mixed together into one view. I spent a long time trying to get some traditional methods to work, as well as some add-ons to GORM and GoLang in general, but got nowhere with it. Another method would be the dynamic loading that a lot of social media sites use, where new content is automatically loaded as you scroll towards the bottom of the page. This is a behaviour that I really do not like, as it gets in the way of reaching links at the bottom of the page (where a lot of RSS feed links are!) and I just don't like the idea of constantly chasing the end. The method that ended up working for me is a kind of hybrid of this dynamic model and Cursor Pagination. So let's start by defining the two common methods of paginating results from a database:
Limit/Offset Pagination
This is the most common and basic method of pagination. It basically uses the formula of "Give me a list of items starting at the row number equal to the page number multiplied by the number of items per page". As you serve out the data, your "Limit" is the number of items per page, and your "Offset" is the Limit multiplied by the page number, giving you a query like the following:
SELECT * FROM items ORDER BY date DESC LIMIT 10 OFFSET 100
This works great on smaller data sets that are fairly static and easily ordered. But it does not scale very well: once the offsets start getting large, the queries have to step over all the rows you are skipping. Also, if items are inserted or deleted while someone is paging through the data, rows are easily missed as the row numbers shift.
Cursor Pagination
The concepts of Cursor Pagination are similar to Limit/Offset, except that instead of just dealing with the starting row and number of rows, you are keeping track of an identifier for the last item returned. For the next page, you are querying for values greater/less than that value. From a database perspective this is much more efficient, as finding your starting point using a WHERE clause is much quicker than giving an offset of rows to skip over. Another advantage to this method is that it prevents items being missed on active/dynamic datasets. If rows are being inserted/removed while people are trying to page through them, they won't be missed because we are continuing on from the value of where we left off, instead of a static number of rows. An example of a query statement for Cursor Pagination would be:
SELECT * FROM items WHERE date < cursorValue ORDER BY date DESC LIMIT 10
While it looks very close to the original one, it will be much faster in larger datasets, but you need to keep track of the values for the cursor between loads. One of the downsides to Cursor Pagination is that it is a bit more complex, especially if you also want to be able to return to previous pages.
This is basically the method I ended up using. Since you can mark items as read or saved for later, and each feed's articles are returned in publish-date order as they get added, the data being returned is fairly dynamic and would be fairly inaccurate with the Limit/Offset method. While it is possible to keep track of previous pages in Cursor Pagination, the hidden read/saved items in my dataset made that functionality really complex, so I decided to abandon "pages" of data for a method closer to the dynamic loading I mentioned earlier. Instead of automatically loading the next set of data when you scroll, I added a "Load More" button that stays at the bottom. When clicked, an AJAX request uses the cursor value to return the next set of data, which is dynamically added to the end of the page.
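To make that concrete, here is a minimal sketch of the kind of GORM query that could sit behind the "Load More" request. It assumes GORM v2 import paths, and the struct fields and column names are illustrative only, not the actual Diglet schema.

package feeds

import (
    "time"

    "gorm.io/gorm"
)

// Article is a simplified stand-in model; the real schema will differ.
type Article struct {
    ID          uint
    FeedID      uint
    Title       string
    PublishedAt time.Time
}

// LoadMore returns the next pageSize articles older than the cursor value,
// mirroring: SELECT * FROM articles WHERE published_at < ? ORDER BY published_at DESC LIMIT ?
func LoadMore(db *gorm.DB, cursor time.Time, pageSize int) ([]Article, error) {
    var articles []Article
    err := db.
        Where("published_at < ?", cursor). // continue from the last item the client saw
        Order("published_at DESC").
        Limit(pageSize).
        Find(&articles).Error
    return articles, err
}

The client just has to remember the publish date of the last article it received and send that back as the cursor with the next request.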
How to Use
By default, the page size is set to 25 items. You can change this using the dropdown at the top right:
If there are more items than that count, a "Load More" link will appear below the last article. Simply click on that link to retrieve the next set of articles.
The sorting icon and behaviour have changed a little. Click the icon () at the top right to toggle the date sorting between descending (default) and ascending order.
Indices
Splitting the data up into pages did help with the rendering time and memory usage in the browser, but did not help much with the loading times overall. Since I am not very experienced on the database design side, I figured there was a great deal that could be improved there. I started by doing some research into creating indices in MySQL to see if that would help. An index in MySQL is basically a separately maintained BTree (in most cases) of the values you want, kept in order to allow for faster lookups. Indices can slow down writes to a table, since the index also gets updated with every INSERT, but they can greatly increase the speed of lookups. There were no issues with write speed in the system, so I continued.
When creating the list of articles to display on a user's front page, the system gets all the articles for each feed they follow and omits any marked as read or saved by the user, via a JOIN to a separate table. I assumed that this was the piece slowing things down the most, so I created a multi-column index on the articles table based on the feed ID, published status, and publish date. I also created one on the table that tracks read/saved items, based on the user ID and the article ID. As soon as the indices were populated, the load times of the front page improved by over 5x. Success!
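For reference, here is a rough sketch of how composite indices like these could be declared if you let GORM manage the schema. It assumes GORM v2's composite index tags, and the struct and column names are made up rather than taken from the real tables; the same thing can of course be done directly in MySQL with CREATE INDEX statements.

package feeds

import "time"

// Fields that share an index name in the gorm tag become one multi-column index.
// These models are illustrative only.
type Article struct {
    ID          uint
    FeedID      uint      `gorm:"index:idx_feed_pub,priority:1"`
    Published   bool      `gorm:"index:idx_feed_pub,priority:2"`
    PublishedAt time.Time `gorm:"index:idx_feed_pub,priority:3"`
}

type ReadState struct {
    ID        uint
    UserID    uint `gorm:"index:idx_user_article,priority:1"`
    ArticleID uint `gorm:"index:idx_user_article,priority:2"`
}

Running db.AutoMigrate over these structs would create the indices alongside the tables.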
Changes committed since last Dev Diary
- You can now collapse the sidebar using the () icon
- Added social links (YouTube, Twitch, and Twitter) to menu
- Pagination implemented (See above for instructions)
- Fix for overflow on long article titles.
- Added BBS and Help pages
- Added support tickets for logged in users
- Fixed security bug with public groups
- Log cleanup job.
References
- Limit/Offset and Cursor Pagination: https://dev.to/jackmarchant/offset-and-cursor-pagination-explained-b89
- Cursor Pagination: https://medium.com/swlh/how-to-implement-cursor-pagination-like-a-pro-513140b65f32
- Indices: https://www.tutorialspoint.com/mysql/mysql-indexes.htm
Instead of "How To"s I am calling articles like these "How Did I"s. I am doing so for two reasons, the first is just to be difficult, and the second is that I don't consider myself an expert on any of these subjects, so I want to encourage discussion on what I could have done differently or better. Hopefully these can help someone overall, or possibly help me to improve my own techniques, so feel free to leave feedback (Hoping to have a comment system written for the website, but until then feel free to use the BBS message boards, Discord server, or video comments).
Recently, the Digital Lethargia BBS went live. Since I have separated out the database, web server, and app server portions of Digital Lethargia into Docker containers, I wanted to do the same for the BBS. This helps keep things modular, easily updated with change control, and somewhat sandboxed. Plus, with telnet being a somewhat insecure protocol, I liked the idea of it going to a container rather than directly to the host itself. I chose Synchronet as the BBS system to use because it is still actively maintained, has a good community, and has good management tools. This article assumes a small level of experience with building packages and using Docker.
Prerequisites
The first thing I needed to do was get my system ready to build and test the Synchronet system. I am currently using an Ubuntu 20.04 system, so these steps may differ depending on your distribution. The Synchronet Wiki lists the needed prerequisites (with Debian package names) for Linux systems here. Personally I was missing the ncurses dev library (libncursesw5-dev) and the Netscape Portable Runtime library (libnspr4-dev), so I had to install them using apt:
sudo apt install libncursesw5-dev
sudo apt install libnspr4-dev
Since I planned to be running some old DOS doors with the BBS, I was going to need dosemu to run them. There didn't seem to be a package for dosemu in the official Ubuntu repositories, so I ended up finding a package built for 19.10 on Launchpad and installing that using apt (you can use dpkg as well).
wget http://launchpadlibrarian.net/363141379/dosemu_1.4.0.7+20130105+b028d3f-2build1_amd64.deb
sudo apt install ./dosemu_1.4.0.7+20130105+b028d3f-2build1_amd64.deb
Building
Now that I had the prerequisites installed, I was able to proceed with building the package from source. Luckily, the Synchronet Wiki has very good instructions for this and a wonderful Makefile that even handles the code checkout for you, so generally all you need to do is obtain the Makefile and build:
wget https://gitlab.synchro.net/main/sbbs/-/raw/master/install/GNUmakefile
make install USE_DOSEMU=1
Note that if you are just building Synchronet to run outside of a container, I would suggest following the instructions and using the symlink flag (e.g. "make install SYMLINK=1 USE_DOSEMU=1"), as that will make subsequent updates/upgrades to the system easier in the future. Since I will be rebuilding the container for updates, I chose not to use the symlink flag.
I got a number of warnings during the build, but luckily no errors. At this point I ran a quick test to see if the scfg utility would load, to ensure the build worked. A reminder that by default, Synchronet looks for its config files under the /sbbs/ctrl folder, so you will need to set the SBBSCTRL environment variable if you are not using that path:
export SBBSCTRL=/home/lemac/src/sbbs/ctrl
Containerization
I already had a Docker environment set up on my machine, so I won't go into that setup here. I am still somewhat of a beginner on the Docker side of things, so I am sure there are ways I can improve this. I started by creating a docker folder in my sbbs folder (the git repository is one level down, so this doesn't conflict). I also created folders and moved in needed items like the dosemu package and termcap file to make things easier.
Within the docker folder I created a Dockerfile, which you can think of as a Makefile for docker containers. It tells docker all the information and pieces it needs to build a container.
Here is the final Dockerfile I ended up with:
#Use Ubuntu 20.04
FROM ubuntu:20.04
LABEL Description="Synchronet BBS server"
EXPOSE 23 22
RUN mkdir /sbbs
COPY ./ /sbbs/
COPY ./dosemu/dosemu.deb /tmp/dosemu.deb
COPY ./termcap/ansi.bbs /tmp/ansi.bbs
RUN apt update
RUN apt install -y \
    libnspr4 \
    libncursesw5 \
    zip
RUN apt install /tmp/dosemu.deb -y
RUN rm -rf /var/lib/apt/lists/*
RUN chmod -R 0755 /sbbs/exec
RUN tic /tmp/ansi.bbs
RUN mkdir /media/cdrom
COPY ./dosemu/dosemu.conf /etc/dosemu/dosemu.conf
ENV LD_LIBRARY_PATH="/sbbs/exec:${LD_LIBRARY_PATH}"
ENV LANG="en_US"
WORKDIR /sbbs/exec
ENTRYPOINT ["/sbbs/exec/sbbs"]
FROM ubuntu:20.04 - tells Docker which base image to start from and generally needs to be the first line. Since I used Ubuntu 20.04 to build the system, I chose that as my base image.
LABEL - Defines a description for the container
EXPOSE - These are the ports the container will be listening on. These can be mapped to host ports at runtime.
RUN/COPY - These commands are pretty self-explanatory: they run a command within the container, or copy items into it. I am doing the following set of steps here:
- Create the /sbbs folder
- Copy the sbbs folder structure in from my dev directory
- Copy the dosemu deb package into the tmp folder
- Copy the ansi-bbs termcap file into the tmp folder
- Run the "apt-update" command to populate the list of available Ubuntu packages
- Install the needed packages (Note that these are the binary versions of the dev prerequisites I installed before building)
- Install the dosemu deb package
- Remove the package list files to save space now that everything is installed
- Make sure everything in the /sbbs/exec folder is executable
- Compile the ansi-bbs termcap info file
- Copy my modified dosemu config file over the default
- Add the /sbbs/exec folder to the LD_LIBRARY_PATH so that the executables can locate their needed libraries
- Ensure the LANG variable is set to "en_US"
WORKDIR - This line defines the working folder the container will be in at runtime. In this case the exec folder of sbbs
ENTRYPOINT - This line defines the command that will be executed when the container starts. In this case, the sbbs server.
Once I had everything defined, I built it with the "docker build" command. I use the -t argument to tag the container (my naming convention is repository:ServiceDateBuild), the -f argument to specify which Dockerfile to use, and finally the required context argument as "." to use the current directory:
docker build -t="diglet:bbs12152001" -f docker/Dockerfile .
The command will download the base image you specified in the FROM line, apply all the changes/commands specified in the rest of the Dockerfile, and package it up as a new container image.
Running/Testing
To test the container I first just ran it in the foreground to see all the output with the "docker run" command, using the --name argument to name the container, followed by the tag of the container image to run:
docker run --name diglet-bbs-test diglet:bbs12122001
At this point I saw the familiar Synchronet output and things were looking good, so it was time to test it out. Because of the way I started the container, it will only be accessible from the local machine, which is fine for initial testing. To do so, I needed to find out what IP address Docker gave the container within its own network range, so I used the "docker inspect" command and looked for the "IPAddress" line in the output:
docker inspect diglet-bbs-test
...
"IPAddress": "172.17.0.2",
...
With that information I was able to load up SyncTerm, and connect to the BBS using telnet. Success!
The nice thing about Synchronet is that you can run the scfg utility right from within a session to the BBS. There may also be situations where you need to run a command from within the container itself. To do so, I execute the bash shell within the container in interactive mode (-i) using a pseudo-terminal (-t):
docker exec -it diglet-bbs-test /bin/bash
This gives you a prompt to run anything you need to.
Persistent config and external access
At this point I have a container image that I can deploy a default Synchronet setup from at any time. Once deployed, I can connect to it and configure it. The next issue I needed to solve was the fact that all the data and config are stored within the container itself, so any time I deploy an updated container, all that info will be wiped out. Luckily, Docker allows you to map folders within your container to folders on the host (-v), even if they already exist within the container itself. Note that you will have to copy the contents of these folders from your initial build folder to the host-mapped folders the first time. To allow external access, you can tell Docker to map an exposed port on the container to a port on the host (-p). And finally, you can tell Docker to run the container "detached" (-d) so that it keeps running in the background. Here is an example of the run command similar to my prod host:
docker run --name diglet-bbs \
  -v /root/dockerdata/sbbs/ctrl:/sbbs/ctrl \
  -v /root/dockerdata/sbbs/text:/sbbs/text \
  -v /root/dockerdata/sbbs/xtrn:/sbbs/xtrn \
  -v /root/dockerdata/sbbs/data:/sbbs/data \
  -v /root/dockerdata/sbbs/mods:/sbbs/mods \
  -p 23:23 -p 22:22 \
  -d reponame/diglet:bbs12122001
Note that I also mapped the entire xtrn folder to my host, which means I would have to copy any updates done to the xtrn folder when upgrading the system as well. A better practice might be to have a separate folder for doors that you add to your system.
And that is basically it. I can now delete and redeploy the container, and as long as I keep the same mapped folders, my config/data will persist.
References
- Synchronet: https://www.synchro.net/
- Synchronet Wiki: http://wiki.synchro.net/index
- Docker: https://www.docker.com/
- DOSEMU: http://www.dosemu.org/
- Ubuntu LaunchPad: https://launchpad.net/ubuntu
I have been using and improving the RSS reader I built over the past while and I feel it is probably up to about a Beta level of quality. So rather than fall into my old trap of keeping something private until I think it is as perfect as it can be, which ends up being never, I am going to open it up to the world to break and provide some feedback on.
For a more detailed explanation, I suggest reading Dev Diary 5, but in basic terms, this feature will allow you to follow and add content feeds, aggregating them all into one place (This site!). So you can add feeds for news, comics, and podcasts etc to consume as you like, choosing what gets mixed into the front page and what articles to save for later.
A little rough around the edges
There is definitely some clunkiness in the system currently. I am not great at user interface design and have a lot of general improvements I will be trying to make over the coming weeks. To help get you going I have written three help articles with videos:
These can all be viewed from the Help page accessed from the menu as well.
Pagination
One thing that quickly became apparent was the need to paginate the content. Things are fine when it is just a few local articles, but loading hundreds of them at once definitely slows the browser down. Paginating is a surprisingly tricky thing to do well, especially when combined with the read and save functionality, and it will probably be the subject of an upcoming Dev Diary as I try to implement the feature. As a workaround, I limit the front page to loading 100 articles at most; individual feed pages are not limited because of how I currently do the "Read All" functionality, but I will work to paginate all sections.
Podcast Feeds
Podcasts are also a little clunky right now, but they work. The nice thing is that I embed a player as the article loads so you can play it right on the site, but of course as soon as you navigate elsewhere it stops playing. As a workaround, I added a "Pop out player" link under each embedded item that will open the article/podcast in a separate tab/window. The system also doesn't save your place if you quit in the middle of an episode. There are some potential JavaScript workarounds to this that I am researching, as well as some possibilities for downloading episodes to mobile devices that don't require an app.
There will be bugs
As I said, the system is still somewhat messy, and that means there will be problems. One such issue is malformed RSS feeds. I have found a few so far, and when I do I code in workarounds for them, so please report any errors you find and I can take a look. On top of that I am sure there are dozens of bugs I haven't even discovered yet because someone will think of trying something I haven't.
Speaking of which, I have a couple of methods for providing feedback. The first is to submit a problem/question ticket from your User Dashboard. Simply click on the "Problem/Question" button in the Actions section, fill in as many details as possible, and submit it. Any responses I add to the ticket will display on the front page of the site for you.
The second way is through our Discord server. You can join the server by clicking the "Join our Discord Server" button in the actions section of your User Dashboard. In the server we have a "help-and-feedback" channel you can ask questions in.
Just keep in mind that Digital Lethargia is a personal project, so as such I may not be able to fix issues right away.
Onwards and Upwards
With the RSS reader in a somewhat working state, I can put some more attention into other upcoming features, the next of which is the Digital Lethargia BBS:
One of the main features for Digital Lethargia that I planned from the start was an RSS/Atom reader/aggregator, and I have finally made some progress towards that in the past couple of weeks. Even though it is an old technology, and is nowhere near as prevalent as it once was, I still feel that it is the best way to receive the content I want, the way I want it. I am hoping that I can provide a system here that other people may actually enjoy using, giving them a method of receiving news/content from sources they pick in a way that is not filtered or tracked through the algorithms of the big social networks.
What is RSS?
Depending on which point in history you are looking at or what you are using it for, RSS can stand for Really Simple Syndication, Rich Site Summary, or RDF Site Summary. It was first designed by people at Netscape as a means for them to build channels into the My Netscape Portal, pulling in headlines from different sources. After the AOL takeover of Netscape shifted the focus, development of the RSS specification was picked up by others who disagreed on what the purpose of RSS should be. Today we are basically left with two similar standards still in use. RSS 2.0 which took on the "Really Simple Syndication" name, and the Atom Syndication format (See "The Rise and Demise of RSS" for a more detailed history).
Today RSS is probably most widely used for the distribution of podcasts, but most news sites, comics, and blogs still serve out feeds. Basically, an RSS feed is an XML file that provides a listing of headlines, summaries, and sometimes the full content from a source like a website, comic, or podcast. It provides a way for content creators to syndicate their items to people so that they can consume the content however they like. This is usually done via applications called readers/aggregators, but the format provides an easy way for any application to use.
Why is RSS "dead"?
Many people will say the rise of social networks was a big killer of RSS, and that is true to a certain extent. Things like Facebook and Twitter make a convenient place to get most of your news while following content creators and friends at the same time. Some of my concerns with consuming content through the social networks include the privacy issues around using those services, the trust issues around the spread of misinformation just because things are shared by a large number of people/bots, and the fact that the algorithms of those sites are designed to feed you specific types of content just to keep you clicking more items within their ecosystem.
Another reason RSS has declined over the years is that it is not the easiest thing to understand, and it has not evolved in any significant way. Especially after Google shut down its Reader app and the major browsers dropped support, RSS became that much more inconvenient to use, and it can take a bit of hunting to find the RSS feeds for the sources you trust, as sites don't prominently display the feed icons anymore.
Digital Lethargia RSS
When I started work to add support for what I am calling "Remote Feeds" (RSS/Atom, others in the future) into Diglet, my plan was to use the built-in XML support that GoLang has to read the files, and then do the mapping of the items myself. I did something similar in PHP for the last version of Digital Lethargia, but then I came across a library called gofeed that is still being maintained. GoFeed removes a lot of the tedious steps I was going to have to go through myself, and if it turns out I eventually need to write my own library for this, it is separated enough that I can do so without having to change much existing code.
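As a quick illustration of why gofeed appealed to me, fetching and walking a feed looks roughly like this (the URL is just a placeholder):

package main

import (
    "fmt"
    "log"

    "github.com/mmcdole/gofeed"
)

func main() {
    fp := gofeed.NewParser()
    // Works for both RSS and Atom; the parser figures out the format.
    feed, err := fp.ParseURL("https://example.com/feed.xml")
    if err != nil {
        log.Fatal(err)
    }

    fmt.Println(feed.Title)
    for _, item := range feed.Items {
        // Each item carries the fields needed to build a local article record.
        fmt.Println(item.Title, item.Link, item.Published)
    }
}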
When complete, you will be able to go to a page that shows all the feeds you are currently following and lists all the available ones you are not. Categories and filtering will be available to help narrow down the selections, and you will be able to preview the current articles within a feed. If you don't see what you are looking for, you can simply paste a link to the RSS feed of your choice to follow it. Once you follow a feed, by default all the articles from that feed will be added to the aggregated list of articles you see on the front page, mixed in with the local Digital Lethargia content. You will be able to change this and pick which feeds get shown together vs which ones you have to select from the navigation menu and view separately (e.g. if you don't want to see podcasts or comics on the main page, you can remove them). Remote feeds will be updated hourly, with the fetches spread out evenly in 15-minute batches, and removal of old items will be done automatically. Some other features:
- Marking articles as "read". This will hide them from the list, but you can view hidden items if needed
- Marking articles as "saved". This will hide them from the list, but place them in a separate "Saved for later" feed and make them exempt from cleanup
- Ability to play podcasts right from the article. Looking into options to pop out a player to a separate tab so that you can continue browsing.
Work in Progress
Some examples of how this is working in the current development build of diglet:
Work in progress article full view
Work in progress preview card view.
References
- GoFeed: https://github.com/mmcdole/gofeed/
- The Rise and Demise of RSS: https://www.vice.com/en_us/article/a3mm4z/the-rise-and-demise-of-rss
- RSS: https://en.wikipedia.org/wiki/RSS
- Atom: https://en.wikipedia.org/wiki/Atom_(Web_standard)
- XML: https://en.wikipedia.org/wiki/XML
It didn't take long for me to break the second lesson from Dev Diary 1 about sticking to more frequent dev sessions and trying to sit down for at least 30-60 minutes a day to work on something productive for myself. But that is ok, excuses and life aside, I will use another movie quote that is helpful when you don't meet your goals or fail to stick to a plan:
Every passing minute is another chance to turn it all around
So on that note I am back working on Digital Lethargia, and thought I would just write a short Dev Diary about a couple of added features.
Content list views
For sections that have a list of items/articles, you can now choose between three views by using the buttons at the top right of the page (). Your choice will persist across your session, and will eventually persist in a user preferences table once I have a reason for people to have accounts here. There is a full view where the articles are all fully displayed, a preview view where cards showing the splash image and a snippet of the articles are displayed, and a headline view where just a list of headlines is displayed. For the preview and headline views, you can click on the title to view the full article. A "Read more..." link is also at the bottom of the preview cards.
Admin Dashboard
I have also created a base admin dashboard that will act as my starting point for all settings and content on the site. It can drill down through the different content types, with links to edit each individual item. This can also be used as a template for the user dashboards, where you will be able to manage your personal preferences, content (if you are an author in a feed), and get statistics.
JQUERY and AJAX
The views and dashboard were made using jQuery and AJAX calls. jQuery is a JavaScript library designed to help with reading and manipulating the HTML DOM (Document Object Model). Basically, it allows you to make decisions and change things within the current page without having to reload the entire document each time. AJAX is also JavaScript, and allows you to send and receive data from the server asynchronously. Combining the two, you can have jQuery listening for something the user does (e.g. clicking the change view button), have AJAX request new data from the server, and pass that back to jQuery to change attributes on the currently displayed document. I have not used these libraries in a long time, but I was pleasantly surprised to find that the knowledge all came back fairly quickly. Also, it is very easy to add scripts with the multi-template package within Gin for GoLang.
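On the server side, each of those AJAX calls just hits another Gin handler that hands back data instead of a full page. This is only a rough sketch with a hypothetical route and payload, not the actual Diglet endpoints:

package main

import (
    "net/http"

    "github.com/gin-gonic/gin"
)

func main() {
    r := gin.Default()
    // A jQuery click handler would request this endpoint via AJAX
    // and swap the returned data into the page without a full reload.
    r.GET("/api/headlines", func(c *gin.Context) {
        c.JSON(http.StatusOK, gin.H{
            "headlines": []string{"First headline", "Second headline"},
        })
    })
    r.Run(":8080")
}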
Changes committed since last Dev Diary
- Base admin dashboard
- Fix for view transitions
- Base implementation of content list views
- Splash image handling
References
- JQUERY: https://jquery.com/
- AJAX: https://api.jquery.com/category/ajax/
With the past explored, and the present documented, it is time to look to the future. In this entry, I will go over some of the planned features for Digital Lethargia, as well as some of the content ideas I am playing with. Of course, the future is not yet written, so plans may change and new ideas may take priority.
Short Term Fixes/Additions
There are some immediate things that I have on my list to get going before I can comfortably say I have a good base to build off of. Here are some of the highlights:
Styling and Navigation
One of my biggest weaknesses is making things look good (Just ask my mirror...). I can generally move/transform data, and solve most functional problems well, but aesthetics/user experience is a difficult area for me. As you can currently see, the site doesn't look the greatest, from both a layout and a visual sense. Also, I have not done much to make the site great on mobile devices yet. It is readable, but has a lot of room for improvement. I want people to be able to switch between views as they prefer, and have things be more dynamic depending on the devices being used.
For navigation, I have a base created, but since there is only one feed with content currently, nothing has been implemented on the front end. If you are viewing this on a big enough screen right now, you can see a list of feeds you have access to in the left pane. These will eventually become clickable and serve as the context menu. At the top left is a dropdown menu that will have extra items and serve as the full menu for mobile devices. My next steps are to refresh my knowledge of JQuery to accomplish some of the more dynamic tasks, while making sure things will still work for people who have javascript disabled.
Planned work in this area:
- Selectable views in content area (e.g. Headlines, preview cards, read-more cards etc)
- Sorting options to not only show newest first
- Pagination (So that the site doesn't show all the content at once)
- Better mobile styling
General Admin and Content Creation
The general operation of the site feels fairly solid now. I have separate admin pages for security, feeds, articles etc., and they have the basic operations implemented. I would like to bring these into a more centralized admin dashboard that can perform the basic tasks for each section, give statistics, and link to the full admin pages as needed. As I write new features, I am trying to get the admin functions done at the same time to avoid having to directly manipulate the database at any time.
On the content creation side, I am currently using Quill as a WYSIWYG editor, and I am undecided about it. It is super easy to set up and use, but I keep finding items that I have to write workarounds for, which is not ideal. It is fine for now, but I may revisit this, especially if other people come on board to start creating articles for Digital Lethargia. Having a dashboard for authors that shows drafts, published articles, and other items is also an important thing for me to get set up.
Planned work in this area:
- Admin Dashboard
- User Dashboard
- Improve authoring experience
- Autogenerate RSS feeds for the local content
Short-Medium Term Features
To try and keep things in manageable chunks, I have decided on two features to focus on first while completing the above improvements.
Feed Aggregation
One of the pieces I used a lot in one of the earlier revisions of Digital Lethargia was the RSS feed reader. I would like to bring that back here and actually make it available to everyone, in the hopes that it might be useful for others. In basic terms, a lot of sites and services publish feeds that list and link to their recent content. I want people to be able to add feeds that they follow into this site, as well as browse and follow feeds other people have added. This puts all of your news, webcomics, podcasts etc. into one place. As you read items they get hidden from your list, or you can save them for later consumption. There are a few products out there that already do this, but for some reason I can never find one I really like, so maybe I can create something that fits the bill.
Planned Features:
- Add new or follow existing feeds in the system
- Choose which feeds get blended into your main feed, and which are kept separate
- Mark items as read or saved
- Play podcasts via built-in HTML5 players
Feature Ideas to explore in the future
- Keep track of location in podcasts, allow offline saving to mobile (Seen some javascript for the location, and some possible cache options for offline, but not sure if doable)
- Add YouTube channels as feeds
Digital Lethargia BBS
I chose to work on the BBS before SlackTicker because of the estimated size of the jobs. The Feed Aggregation is a fairly involved project, while I feel the BBS is a lot smaller of a job than SlackTicker, so I can pair a larger and a smaller project together. The main point of the BBS is nostalgia: playing the old games we did back in junior high, and playing around with some fun messaging options like DoveNet. I want to have the BBS running out of a Docker container to keep the telnet access isolated, as well as to provide an easy way to copy the system for testing changes.
Planned Features:
- Selection of the usual games (LORD, Usurper, BRE etc)
- Have the scores/stats integrated into the website so that you can view standings at any time.
- Provide an integrated HTML5 terminal so that you can login from the browser or your favorite client
Medium-Long Term Features
There are of course an endless amount of items and ideas I could work on, but there is only so much time. However there are some ideas I know that I would like to get to.
SlackTicker 2.0
I had a beta version of SlackTicker 2.0 written in PHP and running on a previous revision of Digital Lethargia. Porting this over to Golang should not be too complicated, but it will take some time. Here are the features of that version, which may or may not change as I rewrite it:
- Global long term game that everyone is a part of and can take one turn per day
- Smaller games that you can setup or join with other people, that either allow one turn per day, or more if everyone has already completed their turn.
- Winning the smaller games can earn you cash or bonuses in the global game.
- Leaderboards for points earned in smaller games.
Fully Featured API
Digital Lethargia is currently being written with an API in mind. By keeping the model and security functions separate from the front end handlers, I can reuse most of the code for the API and just have a different set of handlers. This opens up the possibility to write other Apps and services based on Digital Lethargia.
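As a rough illustration of what that separation could allow (everything here is hypothetical, not actual Digital Lethargia code), the same model-layer function could back both an HTML handler and a JSON API handler:

package main

import (
    "net/http"

    "github.com/gin-gonic/gin"
)

// listArticles stands in for a shared model-layer function.
func listArticles() []string {
    return []string{"First article", "Second article"}
}

func main() {
    r := gin.Default()
    r.LoadHTMLGlob("templates/*") // assumes templates/articles.html exists

    // Front-end handler renders a template from the shared model data.
    r.GET("/articles", func(c *gin.Context) {
        c.HTML(http.StatusOK, "articles.html", gin.H{"Articles": listArticles()})
    })

    // API handler returns the same data as JSON.
    r.GET("/api/v1/articles", func(c *gin.Context) {
        c.JSON(http.StatusOK, gin.H{"articles": listArticles()})
    })

    r.Run(":8080")
}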
Possible features:
- Fully RESTful interface
- Possible mobile apps
- NCurses terminal apps (I love NCurses based apps and use them a lot)
Misc Ideas
- Discord server associated with Digital Lethargia (the channels and discussions there could act like a forum)
Content Ideas
Since the base system currently works well as a blogging system, it would be nice to also start producing some local content for Digital Lethargia. My personal goals in creating content would be to start flexing my tired old writing muscles, to provide an additional creative outlet to coding, and to learn some new skills. One new skill I am interested in is video editing, so I would like some of the content to have a video component to it while staying text friendly overall. I would also like to bring other friends to create their own content as well since the system supports different feeds and security groups. Here are three initial categories I am thinking of:
Dev Diaries
The Dev Diaries will continue, but will mostly be shorter and more focused from here on out (I hope...). I would like to pick a specific problem or feature in each one, describe how I tackled it, and the lessons learned while doing so.
Sysadmin and Tech tips
My day job in IT and my hobbies in computing/linux provide me lots of opportunities to problem solve and play with new systems. I am going to try and capture certain things I find interesting and pass them along. This should also help develop my technical writing skills to feed back into my career.
Gaming
I have put together a halfway decent video game and console collection over the years (as shown in the images below). I think putting together content based on that will provide a good way of cataloguing it. We could probably even try some live streams of the items currently being played for recorded/written content. I know that retro-gaming content on the internet is not exactly an original idea, but maybe I can come up with something interesting.
Thanks to everyone for reading. Hopefully you are looking forward to some of these features.
Changes committed since last Dev Diary
- Fixed overflow on mobile for code blocks
- Group admin handlers
- User admin handlers
- Security admin handlers
Now that we have explored the past, let's go through some of the platforms and tools I am using for this revision of the site. I will try to explain the reasoning behind the decisions, as well as lessons learned along the way. The main goals for me on the technology side were to start flexing the old programming muscles again (I do a lot of scripting at work, which helps, but it is not quite the same) and to learn some new skills that will be useful for both my career and personal projects.
Golang
I think my experience with C programming is what drew me to Go. My mindset seems to fit better with statically typed languages written in a structured rather than object-oriented style. Go aims to be more readable than C/C++ as well, and while some scripting languages may be even more readable, I find Go quite easy to follow. The other nice part of Go is the large number of built-in packages you can use to accomplish most tasks, as well as the large selection of community packages available to do almost anything. A good example of the power built into Go is how easy it is to set up a basic web server (example from yourbasic.org):
package main

import (
	"fmt"
	"net/http"
)

func main() {
	http.HandleFunc("/", HelloServer)
	http.ListenAndServe(":8080", nil)
}

func HelloServer(w http.ResponseWriter, r *http.Request) {
	fmt.Fprintf(w, "Hello, %s!", r.URL.Path[1:])
}
While using third-party packages adds external dependencies and some risk to your project, they can save you a lot of time and effort that would otherwise be spent implementing everything yourself. Here are a couple that I have used:
Gin
Gin is a web framework for Go that claims to be up to 40 times faster than comparable frameworks. I mostly chose it for the ease of use and the extra features that you can plug into it. It has modules to easily handle sessions, SSL/TLS, templating, and more. I especially like the multitemplate module, where you can stack template files together and it automatically combines them into the served page, dynamically filling in content from passed-in variables. This allows the reuse of common files across your templates. Here is an example:
renderer := multitemplate.NewRenderer()
renderer.AddFromFiles("front",
	"templates/base.html",
	"templates/sidebar.html",
	"templates/base.css",
	"templates/base.js",
	"templates/front.html")
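And to show roughly how that plugs into Gin and how variables get filled in, here is a minimal sketch (the template files and variable names are examples, not the ones this site actually uses):

package main

import (
	"net/http"

	"github.com/gin-contrib/multitemplate"
	"github.com/gin-gonic/gin"
)

func main() {
	// Stack a shared layout with a page-specific template
	renderer := multitemplate.NewRenderer()
	renderer.AddFromFiles("front", "templates/base.html", "templates/front.html")

	router := gin.Default()
	router.HTMLRender = renderer

	router.GET("/", func(c *gin.Context) {
		// Values passed here are filled into the template placeholders
		c.HTML(http.StatusOK, "front", gin.H{"Title": "Digital Lethargia"})
	})
	router.Run(":8080")
}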
GORM
GORM has truly been a time saver for me. It is an Object-Relational Mapping (ORM) library for Go. One of the biggest pains in creating systems is properly moving data back and forth between your backend data store (usually a database) and your user interface. You have to create structures or objects within your application, and write functions that convert those data types into the proper queries to read and write data to/from the database. An ORM library handles the data mapping and connections for you, allowing you to deal only with the structures themselves. GORM is the first library of this type I have used, so I am not sure how it compares, but I am very happy with the results so far. All I have to do is define a structure with all the fields I want (e.g. I have a struct type called Article with various fields), and pass it to GORM's AutoMigrate function. When the application starts up, GORM checks to see if there is a table in your database for each of the structs, and creates it if it doesn't exist. Similarly, you just populate one of the structures and call GORM's Create or Save functions to add or modify data. It also handles your associations (has many, many-to-many, etc.) and takes care of primary/foreign keys for you, with the option of declaring them manually as well. One other great feature is that it can connect to many different backends, allowing you to switch systems without changing your code. I use this for testing: when testing/developing locally, I have GORM create an SQLite file on my local disk and populate it, while the staging or production versions connect to a MySQL database.
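To make that concrete, here is a minimal sketch assuming GORM v2 and its SQLite driver; the Article fields here are placeholders rather than the site's real model:

package main

import (
	"gorm.io/driver/sqlite"
	"gorm.io/gorm"
)

// Article is a simplified stand-in for the real struct.
type Article struct {
	gorm.Model // adds ID, CreatedAt, UpdatedAt, DeletedAt
	Title      string
	Body       string
	Published  bool
}

func main() {
	// Locally this can be an SQLite file; production would open a MySQL connection instead.
	db, err := gorm.Open(sqlite.Open("diglet.db"), &gorm.Config{})
	if err != nil {
		panic(err)
	}

	// Create the articles table if it does not already exist.
	db.AutoMigrate(&Article{})

	// Insert a row by populating the struct and calling Create.
	article := Article{Title: "Hello", Body: "First post", Published: true}
	db.Create(&article)

	// Read it back by primary key.
	var found Article
	db.First(&found, article.ID)
}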
Auth0
Security and account management is always a tough thing to do well. I did not want to be storing any passwords or sensitive information within my system. I am just not enough of an expert to feel comfortable against attacks, so I don't want to keep anything more than an email address and preferences. After trying a couple of different approaches I decided on Auth0 for identity management. They have a good free tier for smaller projects, and provide easy-to-follow examples for using their service in Go from both a web server and API service perspective. When this site eventually has features for users that will require a sign in, they can use an existing social login account (like Google or Facebook) or sign up for an Auth0 account specific to this site.
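For a rough idea of what the sign-in flow looks like on the Go side, here is a sketch of a standard OAuth2 authorization-code handshake using golang.org/x/oauth2; the tenant domain, client values, and callback path are placeholders, and Auth0's own Go examples layer OpenID Connect token validation on top of something like this:

package main

import (
	"context"
	"fmt"
	"net/http"

	"golang.org/x/oauth2"
)

var conf = &oauth2.Config{
	ClientID:     "CLIENT_ID",     // placeholder
	ClientSecret: "CLIENT_SECRET", // placeholder
	RedirectURL:  "https://example.com/callback",
	Scopes:       []string{"openid", "profile", "email"},
	Endpoint: oauth2.Endpoint{
		AuthURL:  "https://YOUR_TENANT.auth0.com/authorize",
		TokenURL: "https://YOUR_TENANT.auth0.com/oauth/token",
	},
}

// login redirects the browser to the identity provider.
func login(w http.ResponseWriter, r *http.Request) {
	// A real app generates and later verifies a random state value.
	http.Redirect(w, r, conf.AuthCodeURL("state"), http.StatusTemporaryRedirect)
}

// callback exchanges the returned code for tokens.
func callback(w http.ResponseWriter, r *http.Request) {
	token, err := conf.Exchange(context.Background(), r.URL.Query().Get("code"))
	if err != nil {
		http.Error(w, "token exchange failed", http.StatusUnauthorized)
		return
	}
	fmt.Fprintf(w, "signed in, access token expires at %v", token.Expiry)
}

func main() {
	http.HandleFunc("/login", login)
	http.HandleFunc("/callback", callback)
	http.ListenAndServe(":8080", nil)
}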
Docker
Containerization has been a big trend over the past few years, much like virtualization was the big trend before that. To oversimplify things, virtualization takes a whole server and runs it on a host simulating hardware. Containerization takes that a step further: a container bundles only the user-space pieces of an operating system needed to run an application (it shares the host's Linux kernel) and runs as an isolated process on a host server. This lets you run applications with a much smaller footprint, makes it easier to separate and scale tasks, and gives you a nice way of tearing down/redeploying entire applications without worrying about server configs etc. I am using Docker containers for each service on my production host. On my development machine I have a script that builds a Docker container based on my compiled Go code, and pushes it up to a private repository on DockerHub. On the production host I have a script that stops and deletes the currently running container and replaces it with the one from DockerHub matching the tag I specify. This allows me to deploy new versions of Digital Lethargia with only 5-10 seconds of downtime.
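The scripts themselves boil down to a handful of Docker CLI calls, roughly along these lines (the image name, tag, container name, and port are made-up placeholders rather than my actual setup):

#On the development machine: build an image around the compiled binary and push it
docker build -t myuser/diglet:0.1.0 .
docker push myuser/diglet:0.1.0

#On the production host: replace the running container with the newly tagged image
docker pull myuser/diglet:0.1.0
docker stop diglet && docker rm diglet
docker run -d --name diglet -p 8080:8080 --restart unless-stopped myuser/diglet:0.1.0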
Git
For source code control and versioning I am using Git with a private repository hosted on GitLab. Source control allows you to clone a copy of your source code locally to work on. It keeps track of your changes, and when they are ready you commit them back to the repository for deployment to your production environments. Since the changes are tracked, you can easily revert to previous versions anytime something breaks, and you can create branches etc. to explore different ideas or features without disrupting regular work and fixes. It also allows multiple people to work on the same project at once, helping to identify and merge any conflicts that arise.
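For anyone following along, the day-to-day loop looks roughly like this (the repository URL, branch name, and commit hash are just examples):

#Clone the repository locally to work on
git clone https://gitlab.com/username/project.git
cd project

#Do a piece of work on its own branch, then commit and push it back
git checkout -b new-feature
git add .
git commit -m "Describe the change"
git push origin new-feature

#Roll back a bad commit if something breaks
git revert <commit-hash>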
Linode
For my host I decided to go with Linode. Through work I have some experience with Amazon's AWS and Microsoft's Azure. While their prices are fairly competitive, and they offer free tiers (at least for a limited time), the storage and network transfer costs can sneak up on you. Linode's plans for hosted Linux servers seemed reasonably priced, with included storage and network transfer all as one line item. So far I am happy with the performance and management options, but I have not pushed the system at all yet.
Changes committed since last Dev Diary
- Added max-width style to images for resizing on mobile
- Added published and security check to article view handler
- Added check for future publish date on article view
- Added local feed creation and editing
- Changed html sanitizer to allow code block style
- Fixed unused package import
References
- Golang: https://golang.org/
- Gin: https://gin-gonic.com/docs/introduction/
- GORM: https://gorm.io/docs/
- Auth0: https://auth0.com/
- Docker: https://www.docker.com/
- Git: https://git-scm.com/
- GitLab: https://about.gitlab.com/
- Linode: https://www.linode.com/
For this first development diary I am going to review some of the past items and false starts that have happened over the years before bringing this version of Digital Lethargia online. I think it will be a good way to document some history and put in writing some of the lessons I have learned.
My Background
I have been interested in computers since my family first brought home a Commodore VIC-20 in the early 80s. My sisters and I would play games and try to enter programs in BASIC from magazines. This interest has stuck with me, and I eventually went to a technology institute to earn a diploma in computer technology. After that I spent 8 years working as a C/C++ developer on the Solaris and Windows NT platforms, followed by the past 12 years working as a Systems Administrator/Operations Analyst. I am a Linux enthusiast and also like to play video games, both on consoles and PC. You could probably accuse me of being stuck in the past to a certain extent as well...
Slackcrew.com
Some of my facts may be fuzzy from this time so cut me some "slack" if some things are incorrect, but sometime around the end of high school and the start of college, some friends put together a website located at slackcrew.com. Before all the social media of today, it provided a place for my friends and me to share links, pictures, and our immature thoughts. At some point in 2002 the site got rewritten in PHP with a MySQL backend. It also started to incorporate some more modern ideas still used today, like Cascading Style Sheets. It is around this time that I started to help out with the development a little bit. I wrote a new image gallery, a calendar system, and a stock ticker game, and helped out with other random items. The site continued as such until we finally closed up shop sometime around 2010-2011.
Digital-Lethargia.com
Around the time that slackcrew was shutting down, a group of us wanted to do a 24 hour gaming marathon for charity and needed a site to promote it. Since I didn't control the slackcrew domain myself, I decided to take one of my favorite taglines for the site that my friend came up with (Digital Lethargia), and use that as a domain. It is a bit long and clunky as a domain name (hence why you can now use www.diglet.ca), but I still like it. Since we needed something up quickly, I threw together a template using Squarespace (I was watching a lot of Revision3 at the time...) with the idea that I would develop something for the domain myself after the event was over. Thus began the next 10 years of false starts, rewrites, and letting life get in the way.
Revision 1: Drupal with custom PHP
My original thinking was that I did not want to write content management code from the ground up, so I looked around at the different content management systems out there and decided on Drupal, because it seemed to offer more support for including custom PHP than WordPress. This worked well as a starting point for getting a basic blogging system with a few custom pages up quickly. I even managed to get a new version of the stock ticker game from slackcrew (SlackTicker) integrated into Drupal. But getting the custom PHP code to actually fit into Drupal's security and themes always seemed to take much more effort, on tasks that I really didn't enjoy. Eventually I stopped working on this version and never brought it online.
Revision 2: PHP with MySQL
The second attempt came when the feed aggregator I was using at the time shut down and I couldn't find a new one that I really liked. So I started from scratch in PHP with a MySQL backend. I got the system to a point where you could add and manage RSS feeds. You could view each feed separately, or view them all merged into one stream ordered by published date. You could mark entries as read and it would hide them from your stream, or mark them as saved for later consumption. The backend would split the feeds into 4 groups and update a group every 15 minutes. I actually used this as my own personal feed aggregator for around 2-3 years. The trap I fell into with this version was that I kept telling myself I would make it available to the public as soon as I got the user interface perfect and the other features implemented. But this was during a time when my life became fairly busy and did not leave much energy to put towards those features.
Revision 3: Golang with Angular
More recently, I am now back in a situation where I actually have some time for personal projects. I became interested in Google's Go language (Golang) as it allows you to do a lot of powerful things without a huge amount of code. It is aimed mostly at microservices, but you can use it for almost anything. I figured since Golang was more for services, I would use a separate front-end framework running on a proper web server, with Golang handling the API backend. The separate API would also allow other apps and services to talk to it without any extra development. A couple of our systems at work used Angular for their front ends and I really liked what I saw, so I chose to write the front end in that. My main issue with this revision was trying to take on too many new things at once. I was learning both Golang and Angular, and got frustrated with the roadblocks I would hit. On the security side I was trying to use social media logins so that I would not be storing anyone's account info, which meant learning OAuth and JSON Web Tokens in both systems. I would fix something on one side that would end up breaking the other, which turned into an almost endless loop. Developing in this way meant I never really built a good base of skills in either system.
Current Revision: Golang only
I will cover the specifics of this revision more in my next Dev Diary, but earlier this year I started again using just Golang. Golang has its own built-in web server, plus many packages that extend upon it and add other features. I started with the roadblocks from my last attempt and was able to solve them with much less frustration by only using one framework. I now have the system up as a simple blogging system, with the aim of adding the features from revisions 1 and 2 in a more incremental approach.
Lessons Learned
Progress, not perfection
I am using one of my friend's favorite Denzel quotes as the name of this lesson. This has been (and continues to be) one of the hardest lessons for me to learn in life. I tend to be scared of showing things to people until I feel they are completely ready and feature complete. This also causes me to focus too much on the big picture of a project, which is intimidating and leads to procrastination. To combat this I am trying to learn and adopt more of a continuous integration/continuous deployment approach, where things are broken up into much smaller features and deployed to production on a frequent basis.
Shorter but more frequent work sessions
Another mistake I would make was putting off work on personal projects if I didn't have a whole afternoon or evening to dedicate to them. What I have been trying to do this year is to sit down every day for 30-60 minutes and do something productive for myself before getting sucked into TV or games. If I end up spending more time on it then great, and if during that 30-60 minutes I don't get much done then that is okay as well. It doesn't have to be the same project every day as I always have a couple of things on the go, but it is also important not to start too many things at the same time. The goal behind this is building habits, and using my energy before the couch sucks it away. If I end up sitting in front of a project not doing much, then I also have to recognize it is okay to stop there before getting frustrated and burnt out.
Proper Development environment and pipelines
When building things for personal projects, it is easy to just do all the work and testing in production. This makes it harder to identify and fix bugs, puts up mental blocks against trying new ideas out, and of course leads to more disruptions to your production system. The tools today make it a lot easier to have a local dev environment, a staging test environment, and a production environment with deployment pipelines. In my next Dev Diary I will outline some of the tools I am using.
Thank you for your patience while I led you on this trip through the past with me. I imagine that future Dev Diaries will be shorter as the changes will be smaller and hopefully more frequent as time goes on.
-Lemac