Website recursion: A website to build more websites

I finally did it! I abandoned my previous Jekyll setup, which auto-built my blog from Markdown files, in favor of a fully customized system. But there's more to it than that! This system doesn't just let me build my current blog; it lets me manage any number of personal sites from one place with minimal manual work on the server side (setting up the listener and SSL certificate, as well as buying the domain).

There are quite a few components to the full system, so here are the core concepts I'll address in explaining how I built it:

  • An admin editor that creates the sites in the database, syncs them to the production server, and determines which modules are enabled.
  • A Less CSS compiler where each site can choose which components it needs to have available.
  • A templating engine built around Laravel's Blade templates with default suggestions per module.
  • Editors for the various types of content that can be added to the site.
  • A live preview site that is only accessible over the intranet where the website builder is housed.
  • A synchronization mechanism so that the internal server can communicate with the production server to publish content once it's ready.

That probably sounds like a lot, especially considering how vaguely I described all those components. I won't lie, this was a huge project that probably consumed more time than anything else I've built into my intranet thus far. Even after building all of that, the thought of writing a blog post to show it off was daunting - there's so much to cover! So let's get started.

Creating sites in the database

I had only two goals when I first started thinking through how this system would look:

  1. Allow the sites to be modular.
  2. Keep the system simple.

What can I say, I like brevity and I didn't expand on these two concepts much. But in my experience working with a lot of frameworks in the past, the desire to be modular was often incompatible with simplicity. In my early days, I fiddled a lot with PHP-Nuke, and a lot of the things I tried to build for it just barely worked. There's a reason for that, of course. Every person using your framework has different use cases that you need to account for to make your product great. Fortunately, I don't ever plan to publicly release all the code for everything I've done here, so I don't have to worry about it.

In the end, I created a fairly basic table for Websites which contains the following columns: Id, Title, TitleSlug, Domain, UserId (to determine which intranet user sees it), and LessFile (more on that later). It also has an extra column for each feature that can be enabled: EnableBlog, EnableInventory, and EnableTimeline. I hate that method of doing it and will end up redoing it at some point. I should be storing each module in some sort of WebsiteFeatures table, with a separate table linking each enabled feature to a website along with its additional options. The Enable columns also don't hold a bit as you'd expect. They're actually strings that represent the path where those features will be displayed on the site. So EnableBlog can be set to "blog" and all blogs will be shown at "{Domain}/blog" on the live site. Again, terrible database design that I'll change later. But at the time, each module only had one configurable option, so it was an easy way to get things moving forward.
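To give a rough idea, here's a minimal sketch of that table expressed as a Laravel migration. The column names come from the description above; the types and nullability are illustrative rather than the exact schema.

use Illuminate\Database\Schema\Blueprint;
use Illuminate\Support\Facades\Schema;

// Illustrative migration for the Websites table; column types are assumed.
Schema::create('Websites', function (Blueprint $table) {
    $table->id('Id');
    $table->string('Title');
    $table->string('TitleSlug');
    $table->string('Domain');
    $table->unsignedBigInteger('UserId'); // which intranet user sees the site
    $table->string('LessFile');           // compiled stylesheet (covered below)
    // The "Enable" columns hold the path where the module lives, or null when disabled.
    $table->string('EnableBlog')->nullable();
    $table->string('EnableInventory')->nullable();
    $table->string('EnableTimeline')->nullable();
});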

Because of the simplicity of how modules are enabled, the on-site editor for websites is very straightforward. It's just a title box, a select menu for which user should own it, and a list of the modules with a box for the path. If the path is filled in, the module is enabled. If it's empty, it's disabled.

Compiling the CSS

I wanted to improve my sanity by switching all my existing CSS to Less CSS as part of this project. Technically that was outside the scope of my original intentions, but it felt necessary in order to split up my styles, organize them better, and make them a heck of a lot easier to update moving forward. The process also revealed a bunch of obsolete CSS that I had written at one point for a very specific purpose, then changed my mind and went a different direction but never deleted from the file. Some of it was also overly complex and got simplified a lot.

But there is an inherent problem with using Less CSS with PHP: PHP doesn't need to be compiled, so there is no built-in build step where my Less could be compiled into CSS. The way I work at home, I use a combination of live testing and version control. Whenever I make changes to my code, the save triggers the file to be copied to my local server via SSH automatically and the change is live there immediately. Once I've gotten a piece of code to a working state that I like, I commit it to my GitHub repository as part of the history of the code. There is no point where I could just tie in a Less compiler server-side because the files are just being copied and going live.

So I needed to build a compiler. And by building my own, I don't mean literally from scratch. I did use the existing JavaScript library available from Less to simplify things a lot, but that still needs to be tied into something so that it can be triggered. The basic concept here is to just list each site out. You click on whichever site needs to have its CSS updated and it displays further options. There is also a button that goes to a Live Compiler which lists all of the individual Less files so they can be individually compiled for debugging purposes.

A collapsed view of the websites list in the CSS compiler.

Expanding one of the websites shows all the components that can be added to the site's CSS when compiled, because not every site needs every component applied to it. The "custom" folder also allows including only one custom file that may have been created just for that site. What the compiler actually does here is a bit more complex. When you click Compile, it grabs all the inputs that are currently checked and forwards them as JSON data to the server for processing. The server consumes this data and compiles it into the Less file specified in the box at the bottom via @import statements.

// Build the entry Less file from the folder => files selection sent by the client
$allFiles = json_decode($input['Files'], true);
$result = '';
foreach ($allFiles as $folder => $files):
  $result .= '/* '.$folder.' */'."\n";
  foreach ($files as $file) $result .= '@import "'.$file.'";'."\n";
  $result .= "\n";
endforeach;
Note: I know this code doesn't sanitize file names to prevent folder structure changes. But I'm not concerned about this. These files are being saved into a folder for the CDN, which is already restricted to the web server. At worst, a malicious submission here would attempt to load some other CSS, JS, or image file off the CDN when run through the client-side compiler.

The client-side application waits for the server to respond saying the file has been saved, then triggers the Less.js compiler to load that file and compile it. Once it's been compiled by the client, the client sends the full compilation to the server again to be stored in the file specified. Upon saving, the new CSS is immediately live on the preview site (covered later) so that the changes can be viewed. When satisfied with the changes made, you can click the Sync button to copy the saved CSS file over to the production server.
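The receiving end of that final save is nothing fancy. As a minimal sketch (the route, field names, and CDN folder here are placeholders rather than the real code), it amounts to something like this:

use Illuminate\Http\Request;
use Illuminate\Support\Facades\Route;

// Sketch of the endpoint that stores the client-compiled CSS; names are placeholders.
Route::post('/css-compiler/save', function (Request $request) {
    // Write the compiled CSS into the CDN folder so the preview site serves it immediately
    $path = public_path('cdn/css/'.$request->input('Filename'));
    file_put_contents($path, $request->input('Css'));
    return response()->json(['Saved' => true]);
});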

An expanded view of one website in the CSS compiler.

There is also an upload box here for saving a new favicon for the site. There is no preview for this option - it just saves the new favicon to both the preview and production site immediately.

Working with the templates

I honestly don't have a ton to say about the templating system. It's a fairly standard setup that mostly extends what Laravel already makes available, just with some convenience features on top. Most notable is that I have set up "default" templates, which are the ones my code expects to exist for a module to function correctly. As in, attempting to view something on the live site without this template will just throw an error. These get inserted into the list of existing templates with an error icon to emphasize that they haven't been created yet.

Otherwise, the templating system is a simplistic code editor that runs through CodeMirror. You can call them whatever you want and include them inside other templates to reduce code repetition. The only major thing missing here is documentation to tell the user which variables are being passed into the template depending on which module is being used, but that's an issue to tackle some other time. Who ever writes documentation before they've had the chance to forget how it works?

Creating content in each module

This is the bulk of the UI that I will regularly see and interact with. I'm going to start with a screenshot of the page before diving into it a bit.

Snapshot of the list of pages and blogs for a website.

The two modules I have built so far are Pages and Blog. Pages are an always-on feature (hence why there was no enable column for them earlier). They function very similarly to templates behind the scenes. They use the same code editor for modifying the page and have pretty much unlimited flexibility in what you want to do with them. The main difference is that they have a path and a publish status, which determine how to access them and whether they're available on the production site.

The blog uses a modified code editor that is tuned for a combination of Markdown and HTML rather than PHP. Each blog has to be assigned to a category and similarly has visibility options for publishing. The categories are listed at the bottom, with the number of blogs in each one and a status bar that visualizes the breakdown of blogs per status.
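Behind the scenes, that breakdown could be produced with a simple grouped count. Here's a rough sketch of the query, with the table and column names being guesses for illustration rather than the real schema:

use Illuminate\Support\Facades\DB;

// Count blogs per category and per publish status for the current site (illustrative names)
$counts = DB::table('Blogs')
    ->select('CategoryId', 'Status', DB::raw('count(*) as Total'))
    ->where('SiteId', CurrentSite::get('Id'))
    ->groupBy('CategoryId', 'Status')
    ->get();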

The bar of buttons at the top also includes some extra options. The red tuning icon configures the enabled modules for the current site (available to admins only). The others let you manage images that have been uploaded to the site, edit the site's title, visit the preview site, and visit the production (live) site.

Previewing before going live

Whenever I'm editing content, saving changes only stores them locally for the preview site. Changes are available for previewing immediately upon saving, and each piece of content has a link that will load the page directly on the preview site to see how it looks. These options are available directly from the sidebar. The Publish button is available whenever there are new changes that haven't been pushed to the live site yet, while the Preview button is always available. Once the content has been published at least once, you can also use the Hide/Unhide button to quickly remove it from view on the live site if necessary, or use the Live button to be taken to the live page.

Options for publishing, hiding, and previewing a blog post on the preview and live servers.

The preview site itself is essentially a copy of the live site just with a few parameters altered. It adds a "[Preview]" prefix to the title to emphasize it's not the live site but otherwise displays all the pages exactly as they would appear if the current changes were published. Because the preview site also hosts all sites in the database from one domain, whereas the live sites each live on separate domains, there were other behind-the-scenes things that had to be done to ensure it runs smoothly. On preview, each site runs out of a virtual sub-folder identified by its TitleSlug instead. Here's how these two modules get routed in Laravel:

$prefix = CurrentSite::isProduction() ? '/' : CurrentSite::get('TitleSlug');
Route::prefix($prefix)->group(function() {
    Route::any('/', function () {
        $page = Page::where('Path', '')->where('SiteId', CurrentSite::get('Id'))->firstOrFail();
        if (CurrentSite::isProduction() && $page->IsHidden) abort(404);
        CurrentPage::addTitle($page->Title);
        return view('pages/'.$page->UUID);
    });
    if (CurrentSite::get('EnableBlog') !== null):
        Route::prefix(CurrentSite::get('EnableBlog'))->group(function() {
            Route::any('/', function() {
                CurrentPage::addTitle('Blog');
                return view('blog/index');
            });
            Route::any('/category/{blogCategory}', function(BlogCategory $blogCategory) {
                CurrentPage::addTitle('Blog');
                CurrentPage::addTitle($blogCategory->Title);
                return view('blog/category', compact('blogCategory'));
            });
            Route::any('/view/{blog}', function(Blog $blog) {
                CurrentPage::addTitle('Blog');
                if (CurrentSite::isProduction() && $blog->IsHidden) abort(404);
                CurrentPage::addTitle($blog->Title);
                return view('blog/post', compact('blog'));
            });
        });
    endif;
    Route::any('/{page}', function(Page $page) {
        if (CurrentSite::isProduction() && $page->IsHidden) abort(404);
        CurrentPage::addTitle($page->Title);
        return view('pages/'.$page->UUID);
    });
});

You'll notice a couple of places that explicitly check the IsHidden variable. This is done so that hidden content can still be viewed on the preview server and only gets hidden (returning a 404 instead) when attempting to view it on the live site. Because unpublished (new) content doesn't exist in the database on the live server at all, there's no extra preview-versus-production check needed for it (more on that in the synchronization section next).

Synchronizing to the production server

Because the preview and production servers are completely separate, I needed a way to push the changes live. I also realized that not all the information on the internal server even needs to be pushed live and that the database can be a bit slimmed down. Very little information actually exists on the production server - only what is necessary to display the site.

Getting the information there uses a private API with only one valid key assigned to the preview server. When you click to publish a template, page, blog, or whatever else, the preview server sends a call to production with a copy of everything currently saved for that content. The production server sees this and just... saves it by creating a copy there. To simplify this process even more on the database end, I opted to also use Eloquent models that are built into Laravel. I don't use these on the preview server because some of the things I do are a bit complicated for Eloquent models to handle, but I don't need to do those things on the production side. This especially eliminates the need for me to write extra code to determine whether I'm doing an insert or update with the information given to production, as Eloquent already does that for you. Eloquent also makes it easy to pull in lists of blogs for a specific category with minimal effort so that the variable can be given to a template.
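From the preview server's side, publishing boils down to one authenticated POST per content type. Here's a minimal sketch of that call, assuming Laravel's HTTP client, with the endpoint, header name, and config keys as placeholders:

use Illuminate\Support\Facades\Http;

// Push everything currently saved for a blog post to production (placeholder names throughout)
Http::withHeaders(['X-Api-Key' => config('sync.api_key')])
    ->post(config('sync.production_url').'/sync/blog', [
        'Data' => [ $blogData ], // the full row plus Markdown content for this post
    ])
    ->throw(); // fail loudly if production rejects the key or doesn't respond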

In fact, the production site only has a single controller called the SyncController - that's all it needs. This controller has a separate method for each type of data that can be received via synchronization and does some slightly different things depending on what it's being given. As an example, here's the method that handles a blog being published:

public function Blog()
{
  $data = $this->GetData();
  foreach ($data as $blog):
    $site = Website::where('Id', $blog['SiteId'])->firstOrFail();
    // updateOrCreate handles both a first publish (insert) and a republish (update)
    Blog::updateOrCreate([ 'Id' => $blog['Id'] ], $blog);
    if ($blog['Content']):
      // The Markdown body gets written out as a file under the site's own folder
      static::$FileFolder = sprintf('/%s/blogs', $site['TitleSlug']);
      static::$FileExtension = 'md';
      $this->WriteFile(strtolower($blog['UUID']), $blog['Content']);
    endif;
  endforeach;
  $this->JsonSuccess();
}

And with the click of a button, the content is live for everyone to view!

Reflection

Obviously, this was a lot more effort than just continuing to use Jekyll, but I always felt so trapped using Jekyll. It was quite limited in what I could do and was a clunky command-line tool. If you've ever spent time in the command line, you know that tools there break constantly when their dependencies have been updated. I barely ever touched my blog back then because every time I went to do something, it felt like I had to spend an hour updating Jekyll and Firebase to get them functional again before I could publish any changes. So I kind of just gave up.

And no, WordPress or other blog software was not a viable alternative. Jekyll wouldn't have cut it either, because I wanted to expand the types of things I put onto my site. In particular, there's an Inventory module I plan to build (you may have wondered about it when you saw the EnableInventory column at the beginning). I don't have details on what this is going to be yet, but it's a feature that could not be easily implemented in a blogging system. My blog was just the obvious thing to get working and test the system with before moving on to more aspirational projects.

Most importantly, I look forward to doing things with my websites again now that it's not so tedious to do those things. Here's to the future!

Development
June 19, 2023