Machine Learning at an Industrial Scale

Our Machine Learning Factory API

Posted by Sam Lacey on June 11, 2019 | Reading time: 8 minutes

This post will demonstrate how anybody can use our platform to easily produce hundreds of machine learning algorithms with just a few simple commands.

So let’s get started...

Setting up singularity-cli

The API is secured using SSL & HMAC. While HMAC secures the API brilliantly, generating the signatures can be fiddly and nigh impossible when using cURL. To help with this, I built an easy-to-use CLI and Python API (repositories can be found here and here) which handle all the security for you. The best type of security is the one that doesn't get in the way of your productivity while efficiently blocking bad actors.

If you’d rather build your own library, our full API documentation can be found here.

I strongly recommend you use the CLI, as it combines several endpoints and includes a number of useful extras.

Installation couldn’t be easier:
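
A minimal sketch, assuming the CLI is distributed as a Python package (the exact package name and install method are an assumption; check the singularity-cli repository README):

    # Assumed install command; see the repository README for the real package name
    $ pip install singularity-cli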

Upload data

What’s Machine Learning without a big data set or 10?

We have an early-stage technique we call data sharding, which allows us to perform some incredible operations in our backend, vastly increasing efficiency and keeping user costs down.

The end goal for data sharding is technology that can take the distributed nature of blockchain and apply it to the enormity of big data. Data sets are only going to get bigger and more distributed with time, so the technology industry needs a solution to manage that.

With our API, you can upload as many different data sets as you like and use them to train as many Machine Learning models as you want.

For this demo I’m using a data set that consists of 25,000 images of dogs and cats (and we’ll build a model that can tell the difference). I’ll upload three versions, each containing images at a different size: 64x64, 128x128 and 256x256.

Below is the 128x128 upload, where the CLI automatically breaks the data set down into easier-to-handle shards and sends them to the API:
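
As a rough sketch only (the subcommand and flag names are illustrative assumptions, not the CLI's documented syntax), the upload looks something like:

    # Illustrative syntax only; the real subcommands and flags are in the CLI/API docs
    $ singularity-cli upload --name cats-vs-dogs-128 --path ./data/cats_dogs_128/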

Inspect your data

Once a data set has been successfully uploaded, you can find its unique identifier like so:
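
Something along these lines (again, the exact subcommand is an assumption):

    # Hypothetical listing command: shows each uploaded data set with its unique identifier
    $ singularity-cli dataset list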

Do some Machine Learning already!

This technology is built to train any Machine Learning algorithm, using any framework, in any quantity, in parallel in the cloud. The only limitation is how much computational power our cloud provider has!

Put more simply: if you’re using machine learning in your organisation, our technology can train it. It doesn’t matter whether you’re training a Deep Neural Network, some crazy Evolutionary Algorithm, or need to run thousands of Reinforcement Learning trials; this is a complete solution and it can handle all of it!

It’s quite a bold claim, so let’s do it.

Firstly, create a batch file where you’ll list each experiment you want to run:

Each experiment just requires a few simple things:

  • Docker image and tag
  • The command to run
  • The data set to use

The batch file I used for this post contains 7 experiments and can be viewed in full here.
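
As an illustration of the structure only (the JSON format and field names here are assumptions; the real schema is in the API documentation linked above), a single entry covering the three items above might look like this, saved as batch.json:

    [
      {
        "image": "<docker-image>:<tag>",
        "command": "python train.py --image-size 128",
        "dataset": "<dataset-id>"
      }
    ]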

I’ll be making another post detailing how we help simplify your Docker image build process too!

Now we can start training. You simply state how much computational power you need and we’ll fulfill it, giving engineers unfettered access to the power of the cloud like never before, with no fiddly instance types or idling risks.

The current GPU we use is the powerful NVIDIA Tesla V100.

For this post we'll instruct the system to train each of these algorithms in parallel with 1 GPU and 4 CPUs per job:
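
A sketch of what that might look like (the flag names here are assumptions for illustration):

    # Illustrative syntax: launch every experiment in batch.json in parallel with the stated resources
    $ singularity-cli batch run --file batch.json --gpus-per-job 1 --cpus-per-job 4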

If you forget a batch ID

No worries. To get a quick summary of all batches you are running or have run, just do the following:
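
For example (hypothetical subcommand):

    # Illustrative only: lists every batch you have launched, with its ID and current status
    $ singularity-cli batch list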

Inspect a batch

To inspect a batch, just type the command below. You can see how each training experiment is doing, how much training time has been spent so far, and view the results in real time:
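
Something like the following (the subcommand name is an assumption):

    # Illustrative only: shows per-experiment status, elapsed training time and latest results
    $ singularity-cli batch inspect <batch-id>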

Cancel a job

The keen-eyed among you will have noticed that one of the experiments is going to be vastly overtrained and will probably waste a lot of time and money. Canceling a job or batch is really easy and is done like so:
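
Roughly (illustrative syntax only):

    # Illustrative only: cancels a single running job by its ID
    $ singularity-cli job cancel <job-id>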

(Batches can be canceled in full with a similar command)

Is it done yet?

So hopefully you’ll find a good use for all the extra time saved by training everything in parallel, but now let’s see how our experiments went.

Once again, we’ll inspect the batch and see the results:

Inspect an interesting job result

Of the seven jobs we ran:

  • Five performed pretty poorly
  • One we canceled, and
  • One seems to be heading in the right direction, with an accuracy of 70%

Furthermore, the reported training time is recorded to millisecond precision, and this demo accrued an overall training time equating to 3.44 hours. Based on our soon-to-be-announced pricing, this would cost under $14, which works out to roughly $4 per GPU-hour and is pretty cheap for GPU rental!

The format of the results is user-defined, so engineers can track whatever they are interested in and even automate it. Easy.

From here we can either launch a production training batch to train the promising model for longer, run another series of experiments to investigate other architectures and data sets, or start optimising hyperparameters.

To inspect a job in more detail, just do the following:
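
For example (hypothetical subcommand):

    # Illustrative only: shows the full detail for a single job, including its configuration, timings and results
    $ singularity-cli job inspect <job-id>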

Download a model

If you want to download a model from any job (to check for overfitting, or because you can’t believe it’s been trained this easily), just use the following command:
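
Along these lines (the subcommand and flag are assumptions):

    # Illustrative only: downloads the trained model artifact for a given job to a local directory
    $ singularity-cli model download <job-id> --output ./models/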

That's all there is to it!

If you think that seemed pretty straightforward, then I've done my job well. Machine Learning at its core is a simple idea, but the field has become complicated by many competing frameworks and ideas. This system is built to cut through all the noise and simply give engineers the resources they need to build the Machine Learning systems of tomorrow.

How can I start using this?

I’m currently looking for companies to run pilot schemes with. If you’re interested, just drop us an email at:

info@singularity-technologies.io

and let’s have a chat!

What’s next?

The technology demonstrated here is extremely powerful and, I believe, about 4 or 5 years ahead of its time. It simplifies cloud computing and gives engineers true access to its vast power like never before, all in a completely ephemeral and cost-effective way.

I’ve come up with some pretty significant innovations in the backend to turn the extremely complex and varied task of machine learning into something that can be produced on an industrial scale.

Furthermore, this is just the start: our next engineering phase will focus on building more tools to help engineers produce better algorithms, and on further increasing our computational reach to become the solution for all machine learning problems worldwide.

The end goal is to produce a completely autonomous solution for machine learning.

Sam Lacey
Founder, CTO and CEO
Singularity Technologies