Akroma Mining with Tesla K40

Following on from my post about mining ZCash, I decided to mine some Akroma coin. I like the look of the project as it has some nice features, and who knows, maybe it will take off! I don’t have a lot of mining hardware, but I have a test system for messing about with this stuff. I mine more out of interest, and I buy some coins as investments, which are doing OK.

I downloaded the miner from here; it is a version that has a dev fee, but I am fine with that, as making software is hard work. You need a wallet. I chose the web wallet, as that seems like a reasonable option at the minute; in general I prefer standalone wallets, but this is fine until there is a better option for Akroma.

I am running the miner in eth only mode on Ubuntu Linux using the following command:

./ethdcrminer64 -epool stratum+tcp://geo.pool.akroma.eu:8001 -ewal YOURWALLETADDRESS -eworker workerx -epsw x -asm 1 -allpools 1 -di 1

On the K40 I get about 11.6 Mh/s, which is slightly less than I get with a K20 (12.14 Mh/s). No idea why that might be; it could be differences in the CUDA install, perhaps, as I think the system with the K20 is more up to date. If I leave both systems running I get about 7-8 coins in 24 hours. I will probably only run them together for a day or so and then leave one going, for about 4 coins a day.

I will leave this mining for a while, generate some coins, and then perhaps go back to ZCash, or try something new! I would like to build a small rig; I have an old system that could take about 3 GPUs. That might be the place to start. It would be cool to get one of those multi-GPU cases that hold about 4 GPUs.

Posted in cryptocurrency, Linux Tips | Comments Off on Akroma Mining with Tesla K40

OS X High Sierra Allow Apps in Security & Privacy

I use Mac laptops; up till now they have been close enough to Linux, and usable enough, that I like them. However, they are increasingly starting to annoy me, and my next laptop might be Linux only. One example of how annoying they are is this new ‘feature’ in OS X High Sierra: if you are installing something like a system extension and go to Security & Privacy to ‘allow’ it, pressing the ‘Allow’ button doesn’t work.

In my case I was enabling a file system feature of an app; to do this I had to allow the app by going to Security & Privacy and allowing the extension to install. So I go there, unlock the settings and press ‘Allow’. Nothing! Nothing happens, the button remains and the system extension isn’t allowed. So annoying; this is my computer! Have I no control!?

There is a ridiculous fix, and that is to use an AppleScript to ‘click’ in the correct place. The code is below. To get the coordinates you need, press cmd-shift-4 to get the screenshot crosshairs, which show the coordinates that go in the {}.

osascript -e 'tell application "System Events" to click at {584, 819}'

Posted in OSX | Comments Off on OS X High Sierra Allow Apps in Security & Privacy

Twitter4J Extended Mode – 280 Character Tweets

Looking to use Twitter4J to download the full 280 characters of extended tweets? It can easily be done. You need a relatively recent version to enable extended mode; I have tested this with version 4.0.6 and it works well.

It’s pretty straightforward: you enable it at the configuration builder phase.
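As a sketch of that configuration step (the OAuth keys are placeholders, and getHomeTimeline() is just an example call; this assumes Twitter4J 4.0.6 on the classpath):

```java
import twitter4j.Status;
import twitter4j.Twitter;
import twitter4j.TwitterFactory;
import twitter4j.conf.ConfigurationBuilder;

public class ExtendedTweets {
    public static void main(String[] args) throws Exception {
        ConfigurationBuilder cb = new ConfigurationBuilder();
        cb.setOAuthConsumerKey("CONSUMER_KEY")          // placeholders: your app credentials
          .setOAuthConsumerSecret("CONSUMER_SECRET")
          .setOAuthAccessToken("ACCESS_TOKEN")
          .setOAuthAccessTokenSecret("ACCESS_SECRET")
          .setTweetModeExtended(true);                  // request the full 280-character text

        Twitter twitter = new TwitterFactory(cb.build()).getInstance();
        for (Status status : twitter.getHomeTimeline()) {
            // with extended mode on, getText() returns the untruncated tweet
            System.out.println(status.getText());
        }
    }
}
```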


That’s it; setTweetModeExtended(true) is all you need.

Posted in Programming | Comments Off on Twitter4J Extended Mode – 280 Character Tweets

Graph Visualisation with Neo4J and Alchemy

I have been using the Alchemy.js library to do some graph visualisation for networks. It’s pretty good; my default template is below, as I had some trouble with those I found around the internet.

It makes a simple graph with node labels visible, and takes the data as a .json file. I produce the data file from an R script that extracts data from the Neo4J database; the basic code for that is below. One thing to remember is that the data file containing the network is easily accessible to anyone who might want to download it. So if it constitutes a lot of work and you don’t want anyone to get hold of it (e.g. if it is research you intend to publish at a later point), this isn’t the method for you.
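For reference, the data.json file Alchemy loads is a simple nodes/edges structure. A sketch of the shape (the field names match the queries in the R code below; the values are invented):

```json
{
  "nodes": [
    {"id": 0, "name": "Alice", "type": ["Person"], "cluster": 1},
    {"id": 1, "name": "Bob",   "type": ["Person"], "cluster": 1}
  ],
  "edges": [
    {"source": 0, "target": 1}
  ]
}
```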

<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/alchemyjs/0.4.2/alchemy.min.css" />
    <h1>Network Visualisation</h1>
    <div class="alchemy" id="alchemy"></div>

    <script src="https://d3js.org/d3.v3.min.js" charset="utf-8"></script>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/alchemyjs/0.4.2/alchemy.min.js"></script>
    <script src="https://cdnjs.cloudflare.com/ajax/libs/alchemyjs/0.4.2/scripts/vendor.js"></script>
    <script type="text/javascript">
         var config = {
              dataSource: 'data.json',
              forceLocked: false,
              graphHeight: function(){ return 1000; },
              graphWidth: function(){ return 1000; },
              linkDistance: function(){ return 40; },
              nodeCaptionsOnByDefault: true,
              nodeCaption: 'name',
              nodeMouseOver: 'name'
         };

         var alchemy = new Alchemy(config);
    </script>

Here is the R code that generates the json data file from the Neo4J database; it’s really simple. Connect to the locally running Neo4J database, run a query to get the nodes, run a query to get the relationships (edges), and then do some quick clustering analysis in iGraph. You need to do a bit of work to get the communities into the dataframe if you cluster, and then a little more work to get the json file. It’s not a lot of code for a nice visualisation from your Neo4J database.

library(RNeo4j)   # provides startGraph() and cypher()
library(igraph)

neo4j = startGraph("http://localhost:7474/db/data/")

nodes_query = "MATCH (a) RETURN DISTINCT ID(a) AS id, a.name AS name, labels(a) AS type"
nodes = cypher(neo4j, nodes_query)

rels_query = "MATCH (a1)--(a2) RETURN ID(a1) AS source, ID(a2) AS target"
rels = cypher(neo4j, rels_query)

ig = graph.data.frame(rels,directed=TRUE,nodes)
communities = edge.betweenness.community(ig)

# communities$names holds the vertex names, which here are the node ids
memb = data.frame(id = as.integer(communities$names), cluster = communities$membership)
nodes = merge(nodes, memb)

nodes_json = paste0("\"nodes\":", jsonlite::toJSON(nodes))
edges_json = paste0("\"edges\":", jsonlite::toJSON(rels))
all_json = paste0("{", nodes_json, ",", edges_json, "}")

sink(file = 'data.json')
cat(all_json)
sink()
Posted in Programming, Technology | Comments Off on Graph Visualisation with Neo4J and Alchemy

ZCash Mining With Nvidia Tesla

I have two Nvidia Tesla cards that I use for work; they are good for certain tasks around data mining and machine learning, and I have been interested in trying to use them for ZCash mining. I have an Antminer U3 that mines Bitcoins (slowly, and not that well sometimes), but I fancied a go at mining ZCash too. How hard could it be? Harder than it should have been.

Being CUDA cards, they would obviously need a CUDA-enabled miner. I already mine with Antpool, and they allow users to mine ZCash, so that bit was easy. What wasn’t easy was getting a miner to compile. I tried nheqminer: that wouldn’t compile, and the newest version doesn’t work with the Tesla cards I have anyway, as they are not compute capability 5. The older versions wouldn’t compile either. I also had a few problems getting CUDA running on Kubuntu 16.04, as I needed to upgrade the NVidia drivers, which was a pain!


I got there in the end, as I found a binary of nanopool’s ewbf-miner that works with CUDA cards! This works great: I get about 70 Sol/s on the K20 and 95 Sol/s on the K40. That is pretty good; you would get about 40 Sol/s on an i7-6700K CPU @ 4.00GHz. I am sure that with a newer Tesla or CPU you would get more. This is only an experiment, however, not a mining operation, so I am happy.
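For reference, invoking the EWBF miner looks roughly like this (the server, port and wallet are placeholders; check your pool’s details, and note the flag set may vary between miner versions):

```shell
# Placeholders throughout: substitute your pool host/port and wallet address
./miner --server pool.example.com --port 3333 \
        --user YOURWALLETADDRESS.worker1 --pass x \
        --cuda_devices 0
```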

Posted in cryptocurrency, Linux Tips, Technology | Comments Off on ZCash Mining With Nvidia Tesla

Kubuntu 16.04 LTS

One of my Opensuse workstations fell over: a weird problem with the login, where the sddm-greeter was crashing on start. I reinstalled a bunch of things, but it would not come back to life. I decided that the system could do with a fresh start anyway; the partitioning was a mess, the bootloader was badly installed, and the system hadn’t been reinstalled for a long time.

I decided that I need this system to be stable, and Opensuse gets updated too frequently for some of the software I run, leaving me solving problems with missing libraries or incompatible versions of software. A Linux distro with a longer lifecycle was in order.

My initial thought was to install Centos 7: it’s RPM based (what I am used to) and has the required stability. However, I had problems with Centos and the NVidia drivers! Once they were installed the system wouldn’t login! Rather than spend ages figuring out how to fix it, I decided to switch to Ubuntu, or rather Kubuntu (I am a fan of KDE). As a user of Debian for my servers, I wasn’t expecting any problems. By the way, the partitioner in the Centos installer is a mess; they need to sort that out.

Kubuntu was straightforward to install: no problems, and the partitioner is easy (take note, Centos). I did have a problem that I have encountered before with Ubuntu, where the GUI crashed shortly after system start. This seems to be a problem with the open-source NVidia driver; if you drop to runlevel 3 you can install the proprietary driver, and it’s fine, which is OK if you don’t mind using it. So far getting the system set up has been painless, and I am happy to have it going again without problems. Now to restore all the data and apps!

Posted in Linux Tips, Technology | Comments Off on Kubuntu 16.04 LTS


Syncthing

I seem to be constantly trying out different ways of syncing data. I have had no end of problems with Unison: apart from it being slow, getting build versions to match is a constant source of frustration.

However, I have found Syncthing, which seems great so far. I have OwnCloud, which is great for syncing documents (think Dropbox that you own), but I have some data sets that are very large and don’t change much, and OwnCloud would just get clogged up looking after them. I could use rsync, which is fine; however, Syncthing has some nice additional features.

Syncthing is a bidirectional sync tool that can also do versioning if you want it to. It works by running as a web service on your computers and is accessible via a web GUI. It keeps directories up to date over a number of computers; there is no copy of the files in the cloud unless you put one there, so if you delete everything off one computer it will disappear from them all (eventually). It plays well with dynamic IP addresses and firewalls. All traffic between the computers is encrypted, and it is fast. It doesn’t use up too much resource; I set the sync frequency on my files to 1 hour, as they are unlikely to change that often, and only I use them. These are mainly large data sets of source material, plus photos that I edit on two computers.

Setup is pretty straightforward, the trickiest bit being auto-starting the service on different computers/operating systems. It works well.
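For the auto-start step, on Linux systems with systemd the Syncthing package ships service units, so something like the following works (the unit names assume the distro package; swap in your own username):

```shell
# Run Syncthing for a given user as a system service:
sudo systemctl enable --now syncthing@myuser.service

# Or, as a user unit with no root needed (starts when you log in):
systemctl --user enable --now syncthing.service
```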

Posted in Linux Tips, Technology | Comments Off on Syncthing

Lightworks, PortAudio, and Opensuse Leap 42.2

I had some more messing about with PortAudio and Lightworks. This time I decided to build my own PortAudio, as using a random build is silly.

It’s easy after all. Download the PortAudio source, then build with:

./configure --enable-cxx=yes

Then you can either make install as root (and check where it puts the libraries, as it might be the wrong place) or get them out of the build folders and drop them into /usr/lib/lightworks:

libportaudiocpp is in bindings/cpp/lib/.libs/
libportaudio is in lib/.libs/

Make sure you copy the symlinks over too. It should then work without problems.
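Putting those steps together, a rough sketch of the build-and-copy approach (run from the PortAudio source directory; the exact .so names may differ between versions):

```shell
./configure --enable-cxx=yes
make

# Copy the built libraries into the directory Lightworks searches first.
# -a preserves the symlinks, which must come along too.
sudo cp -a lib/.libs/libportaudio.so* /usr/lib/lightworks/
sudo cp -a bindings/cpp/lib/.libs/libportaudiocpp.so* /usr/lib/lightworks/
```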

Posted in Linux Tips | Comments Off on Lightworks, PortAudio, and Opensuse Leap 42.2

Lightworks 12.6 and PortAudio – Opensuse 42.2

I had a lot of problems with Opensuse Leap 42.2 and Lightworks. This time it was PortAudio causing all the trouble: it would either hang Lightworks during loading, at the point where PortAudio tried to detect the audio interfaces, or it would just be a mess of buffer overruns.

I fixed it in a weird way. I downloaded builds of libportaudio2 and libportaudiocpp from Ubuntu, for the right architecture, and then unpacked them into /usr/lib/lightworks, which is where Lightworks looks first for libraries. This worked, with one additional requirement: you have to use GlobalSettings.txt in your ~/Lightworks directory to make sure that PortAudio uses PulseAudio as the interface. This avoids the buffer problems. Do all that and Lightworks 12.6 is happy on Opensuse Leap 42.2.

I am about to try Lightworks 14, to see how that works before I pay for the upgrade.

Posted in Linux Tips | Comments Off on Lightworks 12.6 and PortAudio – Opensuse 42.2

OwnCloud – its awesome!

I gave up on SpiderOak in the end; it was too resource-heavy for my laptops, and their batteries were really struggling. The constant encrypting and decrypting was too much. It’s a shame, because as a backup system for large amounts of data it was really very good, especially if that data was on desktops or didn’t change much. I couldn’t justify keeping it for this use only; it was too expensive at $25 a month (which is about £20 now!). So I looked back into OwnCloud.

I tried OwnCloud a number of years ago, as I run my own home server anyway. Back then the desktop sync client was just not up to the job: it would take forever to sort out changes and sync them, often using up entire CPU cores on both the server and the client machines. I am pleased to say that problem has been solved, and now OwnCloud is a great choice if you are running your own server somewhere anyway. The sync seems to be fast, and the setup is easy. I am using a self-signed SSL certificate, which doesn’t cause much in the way of annoying warnings. OwnCloud has the other obvious advantage that you can really limit access to the data, and it supports encryption of the data on the server too; or you can encrypt the drives, etc. I have been using it for about 4 months without problems. The only issue I had was when there was a power cut in my house while I was away. Doh! That killed the server, obviously.
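On the self-signed certificate, a minimal way to generate one with OpenSSL (the CN is a placeholder for your server’s hostname):

```shell
# Creates a private key and a certificate valid for one year,
# with no passphrase on the key (-nodes) and no interactive prompts (-subj)
openssl req -x509 -nodes -days 365 -newkey rsa:2048 \
    -subj "/CN=cloud.example.home" \
    -keyout owncloud.key -out owncloud.crt
```

You then point your web server (and OwnCloud’s vhost) at the resulting key and certificate.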

The downside, of course, is that with SpiderOak I had a ‘cloud’ copy of all my important data at a different location to my house. This is no longer the case. I don’t use OwnCloud to sync all the large data sets across my computers; I sync those with Unison instead. So to avoid a catastrophic data loss in the event of disaster, I have synced all my important data and docs to my office computer using Unison: about 1.7TB of data onto a 5TB RAID array. The first sync took ages, but this data doesn’t change very much, so keeping it up to date is quick. I also use Amazon Glacier for data that can essentially be archived off and forgotten about; it’s super cheap.

Posted in Linux Tips, Technology | Comments Off on OwnCloud – its awesome!