I have the base components of my new UNRAID server. I got a Chinese X99 dual-CPU motherboard off eBay, the ZX-DU99D4 V1.11, which I will be running with two Xeon E5-2650L v3 CPUs and 64GB of RAM (both used, off eBay). This will be a great base system for playing with VMs, Docker containers, and so on in UNRAID, as well as for running backups.
Lightworks and openSUSE Leap 15.2
Quick post on how to get Lightworks running on openSUSE Leap 15.2, as I ran into a few problems. I discovered that openSUSE Leap 15.2 is not officially supported by Lightworks; however, it was relatively simple to get it working. The error I was getting was because the glibc version installed by openSUSE is too old. I think it is something like version 2.26, and Lightworks needs version 2.29 or above. Why the version in openSUSE is so old I do not know; we are on version 2.33 now, I think.
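You can check which glibc version a system actually has from the terminal; the first line of the output includes the version:
ldd --version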
So what I did was download the RPM of glibc version 2.33 from here, and extract it as an archive. I then found the required library that Lightworks needed, “libm.so.6”, which is a symlink to “libm-2.33.so”.
All that was then required was to copy both of those files into “/usr/lib/lightworks/”, as that is one of the first places that Lightworks looks for libraries. This seems to have solved the problem. I did not install all of glibc 2.33, as I think this would have caused problems elsewhere, and it seems this was the only library that Lightworks was missing.
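For reference, the whole process was roughly the following (the RPM filename is illustrative, and depending on the package layout the library may sit under lib64/ or usr/lib64/ in the extracted tree):
rpm2cpio glibc-2.33-*.x86_64.rpm | cpio -idmv
sudo cp -P usr/lib64/libm.so.6 usr/lib64/libm-2.33.so /usr/lib/lightworks/
The -P flag copies the symlink as a symlink, so libm.so.6 still points at libm-2.33.so in the new location.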
I haven’t had any problems since.
Unraid and BackInTime over SMB
I have recently set up a secondary server using Unraid to run backups (including with BackInTime). (I will do a full post about it at some point.) I will use this server as a BackInTime server for my Linux workstation and as the Time Machine server for my Mac laptops. Getting it working with the Macs was a bit of a pain, but I got there in the end. Today, though, I shall talk about getting it working with BackInTime, which I think I have now done.
I ran into a problem: the standard rsync configuration used by BackInTime was not able to copy files starting with an ‘_’ from the workstation over to the SMB share on the Unraid server. This was because when rsync made its temporary ‘._*’ file, the copy would fail, as the SMB share doesn’t allow that file name. This was a problem because some of the software libraries I was using for projects have files that start with ‘_’, so the snapshots were missing those files and completing with errors. Not ideal. The actual number of files affected is fairly low, and I could probably live with it, but really it would be better if a solution could be found.
The solution
It turned out that the solution was fairly straightforward in the end. I had to disable the vfs objects fruit and streams_xattr for the share; then it worked perfectly. The problem is that Unraid sets those automatically for each share, so I have to manually delete them and then restart Samba each time I reboot the server. Fortunately that will not be very often.
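To illustrate, the share definition in the Samba config that Unraid generates ends up looking something like this (share name and path are just examples); the ‘vfs objects’ line is the one I delete before restarting Samba:
[backups]
    path = /mnt/user/backups
    vfs objects = fruit streams_xattr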
Still, it would be better to find a more permanent solution. I am currently looking for one, and have posted to a couple of forums to see if anyone knows whether it is possible to have custom settings for a share that are not overwritten each time the array is started.
Cloning System Disk with Clonezilla
Home Server Upgrade
Lenovo ThinkPad X1 Carbon (20KH) 6th Gen Linux (Ubuntu 18.04)
I have received a new Lenovo ThinkPad X1 Carbon (6th generation; mine is the 20KH model with NFC and the higher-res screen), which I have been trying to get Kubuntu 18.04 LTS running on. This has been largely successful, with a few problematic areas, mostly the trackpad. I am a fan of mouse button emulation, which allows you to do a one-finger tap for left click, two fingers for right click, and three for middle click. This is what I haven’t really been able to get working.
The Trackpad
The problems with the trackpad are that it would be slow to start working on boot, and often when waking from sleep. Also, mouse button emulation intermittently failed, accompanied by CPU usage spikes that I think were some sort of driver crash or hung state.
Edit: This is probably the fix we are looking for! Install xserver-xorg-input-synaptics (which for me is xserver-xorg-input-synaptics-hwe-18.04). Then leave ‘i2c_i801’ commented out of ‘/etc/modprobe.d/blacklist.conf’. This seems to have got it: I now have a trackpad that works on boot and after waking from sleep, and I also have the config controls in the settings. So I think this is the way forward. In addition, as the physical buttons did occasionally stop working after a reboot, I suggest running the following commands at wake-up, which I put in a bash script.
#!/bin/bash
# Reconnects the trackpad after sleep.
# systemd calls this with $1 = "pre" (going to sleep) or "post" (waking up).
case "$1" in
    post)
        # detach and re-attach the trackpad's serio driver to reset it
        # (runs as root under systemd, so no sudo is needed)
        echo -n "none" > /sys/bus/serio/devices/serio1/drvctl
        echo -n "reconnect" > /sys/bus/serio/devices/serio1/drvctl
        ;;
esac
The best place for the bash script in Ubuntu 18.04 LTS is in a file in /lib/systemd/system-sleep/, which I called trackpad. Make sure you make it executable.
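Installing it is then just a copy and a chmod (assuming you saved the script above as ‘trackpad’ in your current directory):
sudo cp trackpad /lib/systemd/system-sleep/trackpad
sudo chmod +x /lib/systemd/system-sleep/trackpad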
Possible fix one: Getting the trackpad working might require that ‘i2c_i801’ is commented out of ‘/etc/modprobe.d/blacklist.conf’. This will fix the problems with it being detected on start and when coming out of sleep.
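In that file the fix is just prefixing the relevant line with a ‘#’ so the module can load again; after editing, the line should look something like:
# blacklist i2c_i801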
However… possible fix two: You might have more success leaving ‘i2c_i801’ on the blacklist and adding ‘psmouse.synaptics_intertouch=1’ to your GRUB config (‘/etc/default/grub’) and then running ‘sudo update-grub’. This also seems to work.
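For reference, the parameter goes on the GRUB_CMDLINE_LINUX_DEFAULT line in ‘/etc/default/grub’ (the ‘quiet splash’ part below is just the Ubuntu default; keep whatever values you already have), then regenerate the config:
GRUB_CMDLINE_LINUX_DEFAULT="quiet splash psmouse.synaptics_intertouch=1"
sudo update-grub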
This is the trouble with intermittent faults: getting to the bottom of both the problem and the solution is very difficult. So I would suggest you try both and see which works better.
It will not, however, fix the problem of mouse click emulation intermittently failing. I have not found a fix for that, and instead switched it off altogether; you are then left with using the physical mouse buttons.
Sleeping
To get sleep working correctly you need to make sure that in the BIOS the sleep mode is set to Linux instead of Windows. That is it. Without that, the system will not go in and out of sleep properly.
Other than these issues, and the WWAN modem not working (no driver), the laptop seems to work well.
Moving Windows 10 install to an SSD
My fairly old Windows 10 machine was running very slowly, which was extremely annoying, and I didn’t want to invest in a new machine. So moving the Windows 10 install to an SSD seemed like an option worth trying.
I didn’t want to get a new machine for two reasons really: I didn’t want to have to move all the data and apps etc. to a new computer, and I don’t use Windows much (so motivation was lacking), meaning a new machine was not really a high priority. I looked at the performance monitoring tools in Windows to try to figure out what the problem was. The machine at the time had 16GB of DDR2 RAM and two quad-core AMD Opteron 2386 SE processors (2.8GHz), which, although not super high spec (any more), is not bad.
What was the problem?
Looking at the performance monitoring, it became clear that the culprit was the old system disk, a 320GB hard drive, which was always at 100% use and just being hammered all the time. So I decided that it was worth a shot to replace it with an SSD, to see if that would improve things enough to make the machine usable. If not, I could use the SSD in a new system anyway.
The SSD Upgrade
I bought a 250GB SSD, which was plenty big enough, and cost about £30. The next question was the best way to move the system drive over to the new disk, in such a way that nothing would then need to be re-installed. On Linux this is fairly easy; there is a load of free software that can do it. I wasn’t so sure about Windows. However, after a quick search I found EaseUS Todo Backup Free 11.5, which worked very well. The tool has a disk or partition clone mode, which is the bit you want. Here you have to make sure that you select the whole system drive, as there are a bunch of recovery partitions etc. So in the picture below I would choose Hard Disk 0, which selects all the partitions.
On the next screen you select the target drive. In my case I could not pick the target, as it is not there; you, however, would select the SSD. For an SSD there is an additional step, where you should click ‘advanced options’ and select ‘optimise for SSD’.
Once you have picked the target, click next, and then it is a simple process to start cloning the system drive onto the new SSD. This will take a little while, and you can set the machine to shut itself down at the end.
You then just swap the drives over, and you might need to tell your computer to boot off the new drive in the BIOS, depending on whether you have a load of other drives in the machine. All being well, the computer will just boot up as normal, only, with a bit of luck, a lot quicker.
What was the outcome?
When I swapped the drive over I also put in different memory. I had a lot of memory from a server, so I put 32GB of RAM in the machine at the same time, meaning it got the SSD and 32GB of RAM together. The difference is huge! It was well worth it; the machine is very much usable again. It boots in under a minute and works well for what I use Windows for. This is a great option to put some life into an old machine. The CPUs and GPU in this computer are old, but the machine will do pretty much anything most people need.
You might have problems with updates after this, saying that they cannot update the System Reserved partition. If you do, check out this guide from Microsoft about how to fix it.
TensorFlow GPU Kubuntu LTS
I wanted to install TensorFlow GPU on my Kubuntu system (currently 16.04 LTS, but I believe this also works for 18.04 LTS). I am interested in trying TensorFlow with both Java and Python; I have done some testing with Python so far, and will try Java later.
I first updated my system to ensure that all the packages were up to date, and I also used the Driver utility to install the Nvidia driver, version 396.54. TensorFlow uses CUDA 9.0 at the minute, so you have to install that version; you can get it here. I use the local runfile, as it is an easy way of getting CUDA installed in /usr/local and the samples installed in your home directory. If, like me, you have already installed the Nvidia drivers, skip the step to install the drivers as it is not required.
Once you have downloaded it, run it, remembering to skip the driver install if you already have the drivers installed.
sudo ~/Downloads/cuda_9.0.176_384.81_linux-run
TensorFlow also needs Nvidia cuDNN to work. You have to join as an Nvidia developer, and then you can download it; pick the version for CUDA 9.0.
This can easily be installed in the terminal.
tar xvzf cudnn-9.0-linux-x64-v7.2.1.38.tgz
sudo cp -P cuda/include/cudnn.h /usr/local/cuda-9.0/include
sudo cp -P cuda/lib64/libcudnn* /usr/local/cuda-9.0/lib64
sudo chmod a+r /usr/local/cuda-9.0/include/cudnn.h /usr/local/cuda-9.0/lib64/libcudnn*
Make sure that the .bash_profile file in your home dir contains something like the following (I am not sure the last one is needed; it seems to have been in there from ages ago):
export PATH=/usr/local/cuda/bin:$PATH
export LD_LIBRARY_PATH=/usr/local/cuda/lib64:$LD_LIBRARY_PATH
export LD_LIBRARY_PATH="$LD_LIBRARY_PATH:/usr/local/cuda/lib64:/usr/local/cuda/extras/CUPTI/lib64"
export CUDA_HOME=/usr/local/cuda
Note that I have a symlink from /usr/local/cuda to /usr/local/cuda-9.0/. That is CUDA installed; you can check it is working by compiling one of the samples that use it.
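For example, assuming the runfile put the samples in their default location in your home directory, deviceQuery makes a quick test; it should print your GPU's details and end with "Result = PASS":
cd ~/NVIDIA_CUDA-9.0_Samples/1_Utilities/deviceQuery
make
./deviceQuery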
Next, install TensorFlow. I did this in a Python virtual environment; the full details are here. As I have a Tesla K20 in this system, I installed the version with GPU support. I use Python 3.5.
#install pip and virtualenv
$ sudo apt-get install python3-pip python3-dev python-virtualenv
#upgrade pip
$ sudo pip install -U pip
#make a dir to work out of
$ mkdir ~/tensorflow
$ cd ~/tensorflow
#I chose a Python3 environment for the environment directory:
$ virtualenv --system-site-packages -p python3 environment_dir
#activate the environment
$ source ~/tensorflow/environment_dir/bin/activate
#in the virtual environment upgrade pip
(environment_dir)$ pip install -U pip
#then install tensorflow with gpu support
(environment_dir)$ pip install -U tensorflow-gpu
#check it works
(environment_dir)$ python -c "import tensorflow as tf; print(tf.__version__)"
That should be it!
Akroma Mining with Tesla K40
Following on from my post about mining ZCash, I decided to mine some Akroma coin. I like the look of the project, as it has some nice features, and who knows, maybe it will take off! I don’t have a lot of mining hardware, but I have a test system for messing about with this stuff. I do mining more for interest, and I buy some coins as investments, which are doing OK.
I downloaded the miner from here; it is a version that has the dev fee, but I am fine with that, as making software is hard work. You need to get a wallet; I chose the web wallet, as that seems like a reasonable option at the minute. In general I prefer standalone wallets, but this is fine until there is a better option for Akroma.
I am running the miner in eth only mode on Ubuntu Linux using the following command:
./ethdcrminer64 -epool stratum+tcp://geo.pool.akroma.eu:8001 -ewal YOURWALLETADDRESS -eworker workerx -epsw x -asm 1 -allpools 1 -di 1
On the K40 I get about 11.6 Mh/s, which is slightly less than I get with a K20 (12.14 Mh/s). No idea why that might be; it could be differences in the CUDA install perhaps, as I think the system with the K20 is more up to date. If I leave both systems running I will get about 7-8 coins in 24hrs; I will probably only run them together for a day or so, and then leave one going for about 4 coins a day.
I will leave this mining for a while, generate some coins, and then perhaps go back to ZCash, or try something new! I would like to build a small rig; I have an old system that could take about 3 GPUs. That might be the place to start, though it would be cool to get one of those multi-GPU cases that can hold about 4 GPUs.
OS X High Sierra Allow Apps in Security & Privacy
I use Mac laptops; up till now they have been close enough to Linux, and usable enough, that I like them. However, they are increasingly starting to annoy me, and my next laptop might be Linux only. One example of how annoying they are is this new ‘feature’ in OS X High Sierra where, if you are installing something like a system extension, when you go to Security & Privacy to ‘allow’ it, pressing the ‘Allow’ button doesn’t work.
In my case I was enabling a file system feature of an app, and in order to do this I had to allow the app by going to Security & Privacy and allowing the extension to install. However, I go there, unlock the settings, and press ‘Allow’. Nothing! Nothing happens; the button remains and the system extension isn’t allowed. So annoying. This is my computer! Have I no control!?
There is a ridiculous fix, and that is to use an AppleScript to ‘click’ in the correct place. The code is below; to get the coordinates you need, use cmd-shift-4 to bring up the screenshot crosshairs, which will give you the coordinates that go in the {}.
osascript -e 'tell application "System Events" to click at {584, 819}'