Managing Teams in Multiple Tenants

So I work in multiple M365 tenants, and even after years of UserVoice feedback and complaints, Teams is dead awful at working between two tenants. It only gets worse if you ever decide to try guest accounts in Azure, which really just add MORE accounts to switch between.

That said, I work primarily in two tenants, and I was recently going over with some of my team how I manage both. So here goes.

First, some short pointers.

  1. Choose your primary tenant. This is the one you sign into the desktop apps with.
  2. Chromium-based Edge with signed-in profiles can help this effort.
  3. Signing into the Azure tenant on your Windows machine via ADAL can also ease the pain.

Sidenote: using ADAL, where you sign into Azure via Windows, is probably not required. It can make things a bit easier when you HAVE to switch between profiles, since Edge can leverage the sign-in you already have.

With that said, I have seen issues setting up a secondary synced profile in Edge that isn't first attached via ADAL IF your first profile IS synced via ADAL. So if you run into issues, adding the "work account" in Windows Settings can help.

Once you have a primary selected and signed in, you should also sign into Edge and sync the profile with your M365 account. It makes moving machines and upgrades easier, and since this is typically a work use case, privacy concerns here basically go out the window.

Once the initial setup is done, it's time to set up your secondary account. The basic steps here are to:

  1. Sign into a secondary profile in Edge and sync it with your other tenant.
  2. Enable automatic profile switching.
  3. Configure the default profile for external links to use your primary profile. This will make sure any links clicked in places like Outlook open in your primary profile.
  4. Go to profile preferences for sites and add any overrides for your secondary tenant. Some often-used ones include shortname-my.sharepoint.com for OneDrive links. Anything that uses the secondary M365 tenant can go here.

So now any links going to your secondary tenant should open in a profile that is signed into said tenant with cached credentials. This avoids the warning that gets thrown in an open browser session when you switch accounts in another tab. And what's worse, if you hit "refresh" on that warning you pretty much just get dumped to a generic landing page in whatever tenant you are signed into.

Now for the Teams part. Edge has the ability to make any website an "app" in Windows. It's basically just Edge in a hybrid kiosk mode, but it makes the application launchable from the Start menu and pinnable to your taskbar.

So open your secondary profile in Edge, head to teams.microsoft.com, and log in. Then hit the ... menu, go to Apps, and choose "Install this site as an app."

Click through the confirmation, select whether you want this to auto-start, pin to the desktop, etc., and hit Allow.
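
Incidentally, you can get the same result from the command line, which is handy if you like scripting shortcuts. A sketch, not the official method; the install path and profile directory ("Profile 2") are assumptions, so check edge://version inside the secondary profile for the real values:

REM Launch Teams as a standalone app window under the secondary Edge profile.
REM "Profile 2" and the install path are placeholders; see edge://version.
"C:\Program Files (x86)\Microsoft\Edge\Application\msedge.exe" --profile-directory="Profile 2" --app=https://teams.microsoft.com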

If you ever want to remove this, you can simply right-click on the app in the Start menu and hit Uninstall.

The next change worth making is going back to the browser version of Teams in your Edge profile, clicking the lock icon in the address bar, and setting the site permissions for the microphone and camera to Allow. This lets you join meetings and such without any prompts to allow the mic etc.

One last thing I do is modify my notification settings in any Teams tenant that isn't my primary and set the missed activity emails to ASAP. This ensures I don't miss any messages, within reason, as I should also see them in email.

And voila, I now have two independent versions of Teams on my desktop that I can use from my taskbar.

Moving HTMLy to a new server

One of the reasons I host using HTMLy is that it is quite simple. No DBs to manage, just PHP and some .md files.

I have been upgrading many of my servers that run 18.04, and sometimes PHP doesn't make this as easy as it should be. My old instance ran on Apache. Nothing against Apache; I am just increasingly more familiar with NGINX. So when a do-release-upgrade failed, rather than take the time to properly fix the post-upgrade issues, I just rolled a new VPS (and saved some cash) and moved my files.

The process was fairly simple though:

  1. Build new server, install and configure SSH, firewall, user etc.
  2. Install php-fpm and nginx
  3. Configure nginx
  4. Copy the root webserver files from the old source to the new.
  5. Set up my rsync accounts and re-configure my backups

This is all pretty boilerplate stuff. I use UFW for the firewall and simply limit SSH connections to the known IPs I tend to work from. I configure SSH with the normal settings: limiting access to groups, only using key-based auth, etc.
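
As a rough sketch of what that looks like with UFW; the source IP here is a placeholder for wherever you work from:

# Allow web traffic to the server
sudo ufw allow 80/tcp
sudo ufw allow 443/tcp
# Only accept SSH from a known IP (placeholder), then turn the firewall on
sudo ufw allow from 203.0.113.10 to any port 22 proto tcp
sudo ufw enable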

PHP-FPM needs nothing special, though NGINX did need some tweaking. The recommended config, at least for Debian/Ubuntu, mostly works out of the box. I am not using php-cgi or TCP sockets, but rather the default Unix sockets.

Here is a sample config that works:

server {
    listen 80;
    listen [::]:80;

    server_name wallyswiki.com www.wallyswiki.com;
    root /var/www/html;

    # Add index.php to the list if you are using PHP
    index index.php index.html index.htm index.nginx-debian.html;

    access_log /var/log/nginx/access.log;
    error_log /var/log/nginx/error.log error;

    location / {
        try_files $uri $uri/ /index.php?$args;
    }

    # Commented out as it broke the admin pages
    #location ~ /config/ {
    #    deny all;
    #}

    # Pass PHP scripts to the FastCGI server.
    # Note the escaped dot; an unescaped "~ .php$" would match any character.
    location ~ \.php$ {
        include snippets/fastcgi-php.conf;

        # With php-fpm (or other unix sockets):
        fastcgi_pass unix:/run/php/php8.1-fpm.sock;
        # With php-cgi (or other tcp sockets), use this instead:
        #fastcgi_pass 127.0.0.1:9000;

        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }

    # Deny access to .htaccess files, in case Apache's document root
    # concurs with nginx's
    location ~ /\.ht {
        deny all;
    }
}

After this, it's simply a matter of setting up certbot/Let's Encrypt.
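
On Ubuntu that is roughly the following; the nginx plugin rewrites the server block for HTTPS and sets up auto-renewal:

# Install certbot with the nginx plugin and request a cert for both names
sudo apt install certbot python3-certbot-nginx
sudo certbot --nginx -d wallyswiki.com -d www.wallyswiki.com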

Finally, since there is no DB or anything else to work off of, you can simply copy over your files and make sure the permissions (namely the www-data:www-data user and group) are set properly.

For me, I simply created a tar file, used scp to copy it over, and extracted it. Though you could easily do this with rsync as well, which would be especially useful if you are trying to keep multiple locations in sync.
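
Something like the following, assuming the web root is /var/www/html on both ends and newhost stands in for the new server:

# On the old server: pack up the web root and copy it over
tar -czf htmly.tar.gz -C /var/www/html .
scp htmly.tar.gz user@newhost:/tmp/

# On the new server: extract into place and fix ownership
sudo tar -xzf /tmp/htmly.tar.gz -C /var/www/html
sudo chown -R www-data:www-data /var/www/html

# Or do it in one shot with rsync (also handy for keeping copies in sync)
rsync -az /var/www/html/ user@newhost:/var/www/html/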

UPDATE

I noticed RSS wasn't working. NGINX was throwing the following in error.log:

2023/05/12 00:33:36 [error] 16320#16320: *1090 FastCGI sent in stderr: "PHP message: PHP Fatal error:  Uncaught Error: Class "SimpleXMLElement" not found in /var/www/html/system/vendor/suin/php-rss-writer/src/Suin/RSSWriter/SimpleXMLElement.php:9

Therefore I simply installed php8.1-xml and the issue was fixed:

sudo apt install php8.1-xml
sudo service php8.1-fpm restart
sudo service nginx restart

Downgrading Unifi AP Firmware

So you might have tried upgrading your Unifi devices via the controller and for some reason it did not work. Usually this means the download of the firmware file failed, and chances are this happens with one of your switches. The same manual process works for a downgrade; just grab the .bin for the version you want. In my use case I am manually upgrading the US-8-60W.

First, download the .bin file you want from the Downloads page. Make sure to put it in the Downloads folder on your MacBook in order to follow the steps I have specified below; otherwise adjust accordingly.

Open the Terminal app and copy the firmware over:

scp ./US.bcm5334x_5.43.23+12533.201223.0319.bin ubnt@10.0.1.2:/tmp/fwupdate.bin

If this is the first time establishing an SSH connection to the device, it will ask you if you want to continue connecting. Type 'yes' and press Enter.

Enter the password.

If you don’t remember the username/password to ssh into your devices you can check them in your controller. Go to Settings > System Settings > Controller Configuration > Device SSH Authentication. If you did not set it up yet you can do so now.

The Terminal will show the progress of uploading the .bin file to /tmp/fwupdate.bin, which for me took about 20 seconds.
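
If you want to be sure the upload wasn't corrupted before flashing, you can compare checksums. This step is optional, and assumes md5sum exists on the device (it is part of BusyBox on most Unifi gear):

# On the Mac: checksum of the local file
md5 ./US.bcm5334x_5.43.23+12533.201223.0319.bin

# On the device: checksum of the uploaded copy; the two hashes should match
ssh ubnt@10.0.1.2 md5sum /tmp/fwupdate.bin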

After the file is uploaded, you need to SSH into the device and execute the upgrade command:

ssh ubnt@10.0.1.2

Enter the password (the same user/password you used before to upload the file). Then:

syswrapper.sh upgrade2 &

You will see the progress again; in total just give it 5 minutes, then reload the controller to verify the upgraded firmware version.

That’s it.

Source: https://www.carlobloks.com/blog/manually-upgrading-unifi-firmware/

HOW TO: Fix “Home” and “End” keys in the Mac Terminal

With macOS, certain simple things can be quite.....perplexing. Like three-finger middle click on a touchpad; you need an app for that. Such is the way with many of the seemingly "default" options in macOS, and probably the reason so much is "appified" in that ecosystem. The Apple devs probably just don't worry about such "minutia" that has made its way into other OS defaults, because they can rely on a Swift developer to make an app and charge $0.99 in the App Store to enable something that would otherwise be default. So simple things like keyboard compatibility, how responsive you want the dock to be, or snapping windows to a side of the screen all need these weird customization settings. That's enough ranting though; it is what it is in that ecosystem, and Apple does some things pretty well. Their security defaults are solid, especially if you aren't someone prone to caring much about things like full disk encryption because you simply don't understand it.

Anyhow, that's not the topic for today. The macOS terminal isn't bad, comparable to most of the others I have used. But one thing that nagged me about the Terminal app is how the Home/End keys basically don't behave.

Here's a quick fix.

Pull up the Terminal Settings, navigate to your default profile, and go to the keyboard options.

Add a key mapping for Home that sends the text \033OH, and one for End that sends \033OF, then hit OK.

Here are some other *nix-friendly keycodes:

  • home :: \033OH
  • end :: \033OF
  • F1 :: \033[11~
  • F2 :: \033[12~
  • F3 :: \033[13~
  • F4 :: \033[14~
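
As a complementary fix on the shell side, you can also teach readline what those sequences mean. A sketch for ~/.inputrc, assuming bash or another readline-based shell:

# ~/.inputrc -- map the Home/End escape sequences to cursor movement
"\eOH": beginning-of-line
"\eOF": end-of-line
# Some terminals send the [H / [F variants instead
"\e[H": beginning-of-line
"\e[F": end-of-line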

Source is from an old blog post that seems to have been hijacked to advertise spam. But here is the archive of the post.

When df and du don't match

So I had a weird issue the other day, just after I had created a new extent and added it to a volume group to extend my Plex disk (mostly for more scratch space for the transcoder to do its job).

Anyway, I got an alert in Zabbix that the disk was nearly full.

This was odd, since I had just increased the disk by 30+ GB. A quick look at the server and, sure enough, df showed the space was in fact used:

df -h

The output showed the main root mount point with unusually high usage.

Note: I fixed this before I wrote this post, so I artificially re-created the condition for the examples here.

I have a small script that I often use. I call it treesize, but in reality it's just du with some extra formatting, and I often use it to find where storage is being eaten up.

Here's the script:

#!/bin/sh
# "treesize": du for the current directory, largest entries first,
# with sizes converted to human-readable units.
du -k --max-depth=1 | sort -nr | awk '
    BEGIN {
        split("KB,MB,GB,TB", Units, ",");
    }
    {
        # du -k reports KB; scale the figure up through the unit list
        u = 1;
        while ($1 >= 1024) {
            $1 = $1 / 1024;
            u += 1;
        }
        $1 = sprintf("%.1f %s", $1, Units[u]);
        print $0;
    }
'
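
To use it, drop the script somewhere on your PATH, make it executable, and run it from the directory you want to inspect; the install path here is just a suggestion:

# Install the script (path is arbitrary) and run it against /
sudo install -m 755 treesize /usr/local/bin/treesize
cd / && sudo treesize | head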

The output is one line per directory entry, size first, with the largest at the top.

Now it's worth noting a few things at this point.

  1. Plex uses cifs mounts set up via fstab for its actual library content.
  2. The NAS that stores those mounts is backed up nightly.
  3. Plex itself is a Virtual Machine and also backed up nightly.

As a result, Plex will sometimes oddly lose its cifs mounts. This is likely due to the VM being "stunned" during backup, but could also happen during the btrfs snapshot and backup done on the NAS. Frankly, I have never had the time nor the inclination to look too far into it, because simply updating the server nightly and rebooting fixes the issue in 95% of cases.

So I got to googling (well, "duckduckgoing") and came across some of the standard advice, such as checking that inodes aren't exhausted, or running lsof to make sure a deleted file isn't being held open by a running process. But then I got to this page about bind mounts, and finally this one that really got me thinking.
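
The bind mount trick is worth spelling out, because it lets you look underneath a live mountpoint without stopping anything. A sketch, with placeholder paths:

# Bind-mount the root filesystem to a scratch location; files hidden
# *underneath* active mountpoints become visible through the bind mount
sudo mkdir -p /mnt/rootfs
sudo mount --bind / /mnt/rootfs

# Check what is actually on the root disk under a mountpoint (placeholder path)
sudo du -sh /mnt/rootfs/mnt/media

# Clean up
sudo umount /mnt/rootfs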

I do have Plex set up to record some over-the-air (OTA) TV, mostly local news and kids shows. What would happen if Plex inexplicably lost its cifs shares AND did a ton of recording to that same path before I had time to catch it?

So I stopped the Plex service, unmounted all the shares, and re-ran my du/treesize script.

Sure enough, some extra storage utilization appears, and it all appears in the very share that I used Plex to record over-the-air TV to.

Note: As stated, in this example I used fallocate to create some large files as stand-ins. My actual directory looked similar, based on a snapshot I grabbed from backups.

Sure enough, that's exactly what happened. So in this case, it's worth looking at any mounts you may be using and making sure you don't have files inexplicably piled up in the folder locations you typically use as mountpoints...

Or just rebuild the server, I am not your parent.

Synology Drive Compatibility

Self note:

I have been starting the process of looking at new options for my older Synology units. Within the last year or two I temporarily upgraded to an 1815+ with the resistor fix, but frankly I have been looking at ways to consolidate some of the services a bit (i.e., Docker).

Along the way you hear about the anti-competitive behavior of drive whitelisting. At heart, I despise this move. In a practical sense, though, if there's an easy workaround, it may be easier than re-inventing the wheel. Anyway, this isn't about the business or politics of the issue.

Possible workarounds worth trying:

Option 1

SSH into the Synology and become root:

sudo -i

Navigate to the folder where the drive list is stored.

cd /var/lib/disk-compatibility

Now if you list the directory, you should see some .db files named after your model.

Step 2

We want to edit *_host.db and *_host.db.new and add your drive. In our case we had to add this line to rs2821rp+_host.db and rs2821rp+_host.db.new (can be done with vim):

{"model":"WD102KRYZ","firmware":"01.01H01","rec_intvl”:[1]},

Some drives in that file don't have a firmware defined, so you could try copying such an entry and adding your drive's model number. Another option is to look in the .db for the expansion unit for your drive; we found our config there. Just make sure "rec_intvl" is set to 1 if you copy an entry. Ours was 3 by default and that didn't work.

Save and quit both files with :wq and then reboot the NAS. If everything is done correctly, the NAS should say the disk status is normal.
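
Since these .db files are just JSON, it's worth sanity-checking them after editing; a malformed file could cause bigger problems than an unsupported drive. This assumes python3 is available on the NAS, which isn't guaranteed on stock DSM:

# Prints a parse error and exits non-zero if the edit broke the JSON
python3 -m json.tool /var/lib/disk-compatibility/rs2821rp+_host.db > /dev/null && echo OK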

I hope this helps some people out, and I hope Synology will remove this drive "whitelist".

DISCLAIMER: I have no idea how this will work long term or with other drives or NASes; I'm just sharing my experience. Try this at your own risk.

Option 2

Does anyone know if this works? https://www.reddit.com/r/synology/comments/tjgba0/comment/i1kalis/?utm_source=share&utm_medium=web2x&context=3

Edit /etc.defaults/synoinfo.conf and change

support_disk_compatibility="yes"

to "no", then reboot. Then all drives can be used without error messages.
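
If it does work, the change could be scripted like so. A sketch I haven't tested on my own units, so back the file up first:

# Back up the config, flip the flag in place, and reboot
sudo cp /etc.defaults/synoinfo.conf /etc.defaults/synoinfo.conf.bak
sudo sed -i 's/^support_disk_compatibility="yes"/support_disk_compatibility="no"/' /etc.defaults/synoinfo.conf
sudo reboot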

Tunneling services through SSH

While it's never a great idea to expose RDP to the world directly, some may argue that using filtered ports is okay; that is, allowing RDP only from specific external networks or IPs. Regardless, it's known that RDP is a target for brute-force attacks and, in some cases, protocol exploits like BlueKeep. In fact, RDP is routinely reported as a common vector for ransomware.

And some networks may just filter RDP outbound entirely, at the network layer or even the application layer.

But there are ways to work around these issues, namely with SSH. SSH, if set up correctly (i.e., key-based authentication), is often exposed to the internet. And while there are definitely threat models that may warrant extra precautions, it has generally been okay. And it's quite powerful.

For example, you can use it to set up a SOCKS5 proxy for things like proxying browser traffic with ssh -D.

Well, here's another quick trick: ssh -L.

The format is ssh -L <local port>:<internal IP>:<internal service port> username@sshhost
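
A concrete example; the SSH host name is a placeholder, while the ports and internal IP match the scenario below:

# Listen on local port 33389 and forward it, through ssh.example.com,
# to the RDP service (3389) on internal host 192.168.1.10
ssh -L 33389:192.168.1.10:3389 user@ssh.example.com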

In the example above, once the SSH session is connected, you can open your RDP client, point it at localhost on the local port (33389 in this case), and connect to the Remote Desktop session on the internal IP (in my example, 192.168.1.10).

It's worth noting that for the local port I used an unassigned port. Anything above 49152 is dynamic and not assigned by IANA. Also, some OSes don't allow non-admins to bind to privileged ports (below 1024) for security reasons, so I just make a habit of using a high, unassigned port.

CVE-2021-1675

Proof of Concepts and Initial Reports

First attempts seen at claiming a post-patch exploit: https://twitter.com/RedDrip7/status/1409353110187757575

Original PoC pulled

First public PoC of the exploit, forked from the one pulled above: https://github.com/cube0x0/CVE-2021-1675

More efforts to show the PoC:

Microsoft may be pulling the more easily usable PoCs from GitHub.

Mitigations

MS Documentation on Print Spooler:

Possible GPO-based mitigation for non-print servers: https://github.com/LaresLLC/CVE-2021-1675

Possible Mitigation for Print Servers: https://blog.truesec.com/2021/06/30/fix-for-printnightmare-cve-2021-1675-exploit-to-keep-your-print-servers-running-while-a-patch-is-not-available/
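
And for hosts that don't need to print at all, the blunt interim mitigation was simply disabling the Print Spooler service from an elevated prompt:

REM Stop the Print Spooler and prevent it from starting again
net stop spooler
sc config spooler start= disabled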

Roku Shortcut Cheatsheet

  1. System Information - Things like CPU temps, clock speeds, etc.

    Press Home x5 > Fast Forward> Down > Rewind > Down > Fast Forward

  2. Wireless Settings - Things like signal strength, drops/retries, etc. Can be used to adjust your 2.4 GHz strength to just the right level (anything better than -70 dBm seems optimal)

    Press Home x5 > Up > Down > Up > Down > Up

  3. Limit streaming bandwidth - Handy if you have data caps and want to manage a heavy streamer, etc.

    Press Home x5 > Rewind x3 > Fast Forward x2

  4. Random Secret Screens - Disable scrolling ads etc.

    Press Home x5 > Fast Forward x3 > Rewind x2

    Press Home x5 > Up > Right > Down > Left > Up

  5. Developer Options - Webserver for taking screenshots of Rokus, etc.

    Press Home x3 > Up x2 > Right > Left > Right > Left > Right

  6. Force Restart - When you are too lazy to walk up and power cycle it.

    Press Home x5 > Up > Rewind x2 > Fast Forward x2.