
Remote Kodi video playback (Part 2 – OpenSSH)

In part 1 I explained that I wanted to watch my media on a remote Raspberry Pi 2 running OpenELEC, and that it turned out to be quite a bit more of a hassle than foreseen.

I talked about swapping out OpenVPN for OpenSSH, but since talking alone doesn’t get you very far, I built a demo setup with another Pi 2 running OpenELEC that sits in the same LAN as the NAS. This made it easier for me to test various things, so that all the configuration would be done before I visited the remote Pi and setup time there would be reduced.


SMB over OpenSSH

Since I liked the fine-grained access control to my NAS that Samba provided, I wanted to keep SMB, so the setup I went for was SMB over an OpenSSH tunnel instead of the previously explained SMB over OpenVPN.

The reason I didn’t go with SSHFS instead is twofold:

  • Access control is harder to set up because the options in OpenSSH are somewhat limited
  • SSHFS simply isn’t available in the OpenELEC addon repositories for ARM processors (which is what our Pi 2 has).

To make sure that only the kittens were exposed to my Pi and not the puppies (they don’t mix well), I only allowed the forwarding of port 139 (which is Samba) and disallowed everything else. This was done by prepending the appropriate options to the key’s line in the authorized_keys file, so that the Pi (which logs in with a private key) has no way of snooping around on my NAS:

no-pty,command="/bin/echo 'This account can only be used for forwarding Samba'",permitopen="localhost:139" ssh-rsa [...] root@remote-openelec

This kinda works like a firewall where only port 139 is available to the Pi on the other side. I also set the shell of remoteuser to /sbin/nologin, just to be sure.
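For reference, locking down the shell is a one-liner on most Linux-based NAS systems (assuming a usermod binary is present and a dedicated remoteuser account — adjust for your distribution):

```
# Prevent interactive logins for the tunnel-only account (run as root on the NAS)
usermod -s /sbin/nologin remoteuser
```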

On the Pi, I could install autossh from the unofficial addons, which opens a normal OpenSSH tunnel and then monitors whether the tunnel still passes data. If not, the old tunnel is destroyed and a new SSH tunnel is set up. This is a somewhat crude way of guaranteeing that an SSH tunnel is always available, but hey, if it works, it works.

So I added the right keypair on the Pi, added a simple SSH config that would always use this keypair when connecting to my NAS, and created a small systemd service:


ExecStart=/storage/.kodi/addons/tools.autossh/bin/autossh -M 20000 my.nas.com -f -N -L 139:localhost:139
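Fleshed out, the full unit file might look roughly like this. The addon path and monitor port are from my setup; the restart policy, and dropping -f so autossh stays in the foreground under systemd, are assumptions rather than verbatim from my config:

```
[Unit]
Description=autossh tunnel for Samba over SSH
After=network-online.target

[Service]
Environment=AUTOSSH_GATETIME=0
ExecStart=/storage/.kodi/addons/tools.autossh/bin/autossh -M 20000 -N -L 139:localhost:139 my.nas.com
Restart=always
RestartSec=5

[Install]
WantedBy=multi-user.target
```

And the “simple SSH config” boils down to a Host entry pinning the key (path and options here are illustrative):

```
Host my.nas.com
    User remoteuser
    IdentityFile /storage/.ssh/id_nas
    ExitOnForwardFailure yes
    ServerAliveInterval 30
```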


I let systemd start this service on boot, and after a reboot I was able to add my media by using smb://localhost/videos as a source in Kodi.
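If you want to sanity-check the tunnel before pointing Kodi at it, something like this should list the shares through the forwarded port (assuming smbclient is installed on the machine running the tunnel, and a valid Samba user — both assumptions on my part):

```
# List the NAS shares via the local end of the tunnel
smbclient -L localhost -p 139 -U myuser
```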


Now that I no longer used OpenVPN, I could tell straight away that this setup ran much more smoothly: I was able to play back 1080p media on the test Pi via the SSH tunnel without a hitch.

Looking at htop on the Pi confirmed that a single core was no longer overloaded with the task of decrypting the tunnelled traffic; instead, all four cores were happily sharing the workload, ending up at around 35% load on each core.

A few days later (I still hadn’t implemented the solution on the remote Pi; that was planned for the following weekend) I decided to give the shiny new SMB-over-SSH setup a field test, by simply running the same SSH tunnel command on my work laptop at the office and playing a 1080p video from smb://localhost there. This was the first time the new setup was actually tested over the internet instead of a local network.

Much to my chagrin, it turned out that 1080p videos still showed the same choppy behaviour as with the OpenVPN setup. Only this time it couldn’t be a CPU limitation, and with a network bandwidth limitation already crossed off the list, it had to be something else. But what?

After a bit of Googling it turned out that SMB simply isn’t well suited for network connections that have a bit of latency – like the setup I’m trying to get working here. We were amplifying the problem by running TCP (Samba) over TCP (OpenSSH), which means a lot of frickin’ packets going back and forth, and Samba just couldn’t cope with the delay this introduces.
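A rough back-of-envelope shows why latency, not bandwidth, becomes the ceiling. If we assume the client issues one synchronous 64 KiB read per round trip (a simplification of how chatty SMB is — real transfers add extra metadata round trips on top), then a 20 ms RTT caps throughput no matter how fat the pipe is:

```shell
# One outstanding 64 KiB read request per round trip (assumed, simplified):
BLOCK=$((64 * 1024))              # bytes transferred per round trip
TRIPS_PER_SEC=$((1000 / 20))      # 20 ms RTT -> 50 round trips per second
echo $((BLOCK * TRIPS_PER_SEC))   # prints 3276800 (bytes/s, ~3.3 MB/s)
```

That idealised ceiling already sits uncomfortably close to high-bitrate 1080p material, and the extra chatter pushes the effective rate well below it.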

Once I got home I could easily confirm this theory by adding a bit of latency to the network adapter on my NAS like so:

# tc qdisc add dev eth0 root netem delay 20ms

and the network performance decreased dramatically to the point where I was no longer able to play 1080p videos without choppiness. Removing the latency made 1080p streaming smooth as butter again.
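For completeness, the artificial latency can be inspected and removed again like so (same interface name as above, run as root):

```
# Show the current queueing discipline on eth0
tc qdisc show dev eth0
# Remove the netem delay again
tc qdisc del dev eth0 root netem
```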

So now what?

We already dropped SMB via OpenVPN in part 1, and now must come to the conclusion that SMB via OpenSSH isn’t suited for 1080p playback via the interwebz either.

From this point, I could basically try two more things: remove SMB from the equation in favour of NFS (which has the added benefit of sending UDP over TCP instead of TCP over TCP, if I’m not mistaken), or simply stop jamming another protocol through a tunnel and start using SSHFS.

Stay tuned for part 3, where I explain what I did next, and whether I was able to solve this problem once and for all.