Moving HTMLy to a new server

One of the reasons I host using HTMLy is that it is quite simple: no DBs to manage, just PHP and some .md files.

I have been upgrading many of my servers that run on 18.04, and sometimes PHP doesn't make this as easy as it should be. My old instance ran on Apache. Nothing against Apache; I am just increasingly familiar with NGINX. So when a do-release-upgrade failed, rather than take the time to properly fix the post-upgrade issues, I just rolled a new VPS (and saved some cash) and moved my files.

The process was fairly simple though:

  1. Build the new server; install and configure SSH, the firewall, users, etc.
  2. Install php-fpm and nginx
  3. Configure nginx
  4. Copy the root webserver files from the old source to the new.
  5. Set up my rsync accounts and reconfigure my backups

This is all pretty boilerplate stuff. I use UFW for the firewall and simply limit SSH connections to the known IPs I tend to work from. I configure SSH with the normal hardening: limiting access to groups, only using key-based auth, etc.
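For reference, the UFW side ends up looking something like this (the source IP and the group name are placeholders for your own setup):

# SSH only from a known admin IP
sudo ufw allow from 203.0.113.10 to any port 22 proto tcp
# web traffic stays open to everyone
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
sudo ufw enable

# and the usual /etc/ssh/sshd_config hardening:
#   PasswordAuthentication no
#   PermitRootLogin no
#   AllowGroups sshusers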

PHP-FPM needs nothing special, though NGINX did need some light tweaking. The recommended config, at least for Debian/Ubuntu, works out of the box once the PHP location block is uncommented. I am not using php-cgi or TCP sockets, but rather the default Unix sockets.
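For completeness, the install itself is just the two packages (PHP 8.1 here, matching the socket path used in the config below):

sudo apt install nginx php8.1-fpm
# confirm the Unix socket php-fpm is listening on
ls /run/php/
# php8.1-fpm.sock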

Here is the sample config that works:

server {
    listen 80;
    listen [::]:80;

    server_name wallyswiki.com www.wallyswiki.com;
    root /var/www/html;

    # Add index.php to the list if you are using PHP
    index index.php index.html index.htm index.nginx-debian.html;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log error;

    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    # Commented out as it broke the admin pages
    #location ~ /config/ {
    #    deny all;
    #}

    # pass PHP scripts to the FastCGI server
    location ~ \.php$ {
        include snippets/fastcgi-php.conf;

        # With php-fpm (or other unix sockets):
        fastcgi_pass unix:/run/php/php8.1-fpm.sock;
        # With php-cgi (or other tcp sockets):
        #fastcgi_pass 127.0.0.1:9000;

        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }

    # deny access to .htaccess files, if Apache's document root
    # concurs with nginx's one
    location ~ /\.ht {
        deny all;
    }
}
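With the config saved under sites-available (the filename here is just what I would call it), enabling it follows the usual Debian/Ubuntu routine:

sudo ln -s /etc/nginx/sites-available/wallyswiki.com /etc/nginx/sites-enabled/
# test the config before reloading
sudo nginx -t
sudo systemctl reload nginx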

After this, it's simply a matter of setting up certbot/Let's Encrypt.
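Using the nginx plugin, that amounts to something like:

sudo apt install certbot python3-certbot-nginx
sudo certbot --nginx -d wallyswiki.com -d www.wallyswiki.com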

Finally, since there is no DB or anything else to work off of, you can simply copy over your files and make sure the permissions (namely the www-data:www-data user and group) are set properly.

For me, I simply created a tar file, used SCP to copy it over, and extracted it. You could easily do this with rsync as well, which would be especially useful if you are trying to keep multiple locations in sync.
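Roughly, assuming the default /var/www/html webroot on both ends and a placeholder hostname:

# on the old server
cd /var/www
sudo tar czf /tmp/htmly-backup.tar.gz html
scp /tmp/htmly-backup.tar.gz user@newserver:/tmp/

# on the new server
cd /var/www
sudo tar xzf /tmp/htmly-backup.tar.gz
sudo chown -R www-data:www-data /var/www/html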

UPDATE

I noticed RSS wasn't working. Nginx was throwing the following in error.log:

2023/05/12 00:33:36 [error] 16320#16320: *1090 FastCGI sent in stderr: "PHP message: PHP Fatal error:  Uncaught Error: Class "SimpleXMLElement" not found in /var/www/html/system/vendor/suin/php-rss-writer/src/Suin/RSSWriter/SimpleXMLElement.php:9

Therefore I simply installed php8.1-xml, and the issue was fixed:

sudo apt install php8.1-xml
sudo service php8.1-fpm restart
sudo service nginx restart
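You can confirm the extension actually loaded with a quick check from the CLI (the php8.1-xml package enables it for both the CLI and FPM):

php -m | grep -i simplexml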

Migrating a Synology NAS to BTRFS

I recently upgraded my NAS, and in the process wanted a few changes that the simple migration methods didn't support.

Here were the requirements:

  1. Migrate to BTRFS from ext4
  2. Upgrade total storage (from 2TB disks to 4TB)
  3. Change the name of the NAS.
  4. Consolidate my increasingly segmented media shares

It's also worth noting that I had acquired some new (to me) hardware for this project: specifically a DS1815+ (with the C2000 cap fix) chock full of 4TB disks. The disks had about 3600 hours on them, but they were HGST Ultrastars, which have a pretty solid reputation.

Because I wanted to migrate the filesystem from ext4 to BTRFS, the Synology Migration Assistant was not an option.

Because I wanted a new name, backup and restore wasn't really an option either. And because I needed to take time (a week or so) to make the transition, it wasn't terribly feasible anyway.

Ultimately I used Shared Folder Sync to migrate the bulk of the file share data. This is easy enough:

  1. Turn on rsync on the destination by going to Control Panel>File Services>rsync
  2. At the source go to Control Panel>Shared folder sync>Task List>Create
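Shared Folder Sync runs over rsync (hence enabling the rsync service on the destination), so if you ever just need a one-off manual copy instead, something like this works too (the paths and hostname here are placeholders):

rsync -avh --progress /volume1/media/ admin@newnas.local:/volume1/media/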

Things to note with Shared Folder Sync:

  • While the syncing is running, you cannot edit/write to the destination. Any changes made there will be wiped.
  • Use the original shares for any new changes until you cut over
  • You can STILL move any services that read from the shares (e.g. Plex) during the transition.
  • You CAN fix any share permissions and test them at the destination

For the last point: I have a mix of shares set up on fairly legacy versions of DSM (e.g. 4.3, 5.0, 5.1), and I did have to go through and edit some share permissions.

Specifically, some of the older shares had an extra settings option under

Control Panel>Shared Folder>Edit

In the Advanced section, these shares had some extra options under "Advanced Share Permissions" that merely needed to be unchecked.

These options no longer appear on newly created shares, but they are akin to the "Share Permissions" section on a Microsoft file server. Best I can surmise, there used to be a ton of forum questions about why SMB shares wouldn't work, and nine times out of ten the share permissions weren't set. Samba has had plenty of changes over the years, and it appears Synology now structures new shares to just work off the NTFS-style ACLs on a folder.

Once the syncs were complete, it was simply a matter of updating the places where I had shares mapped: Group Policy, user accounts for home drives, etc.

I also went ahead and rebuilt the Synology Cloud Sync tasks to Backblaze B2 and allowed them to re-upload. This did take some time thanks to Comcast's data caps.