5 emerging trends from the AWS Summit, London

Luca Marchesotti (Guest Author)

· News and Events, AI and Machine Learning, AWS and Serverless

12,000 people gathered at the ExCeL exhibition centre in London on the 6th of May for the 2019 Amazon Web Services one-day event!

The real focus of attention was AI, and the huge number of new services AWS has launched in the last six months to become *the* reference platform for AI development.

Here are five trends that emerged as common themes across the many AI and machine learning talks:

1. Machine learning is for everyone

Whatever your proficiency level with AI, Amazon wants to be your friend, providing a broad range of services that are, quite honestly, a no-brainer. Three tiers of technology are available:

  1. AWS AI services (APIs) for vision, natural language processing, and interaction: great if you are an AI newbie.
  2. Amazon SageMaker, the option if you are already proficient in machine learning.
  3. AWS machine learning frameworks (TensorFlow, Theano, etc.) if you are a machine learning pro.

The lesson learned: don't code a thing before checking whether the model or ML algorithm you need is already available in Amazon SageMaker or in the AWS Marketplace.

2. AWS is bridging the gap between Deep Learning models and Amazon hardware

It is no news that Amazon is pushing for deeper integration between hardware and software. The challenge now is taking the next steps forward.

Take Amazon hardware such as AWS DeepLens: you can deploy your bespoke deep learning model in one click and train it to detect whatever is interesting for your use case. Another example is AWS DeepRacer: a fully autonomous, 1/18th-scale race car driven by reinforcement learning, complete with a 3D racing simulator and a global racing league. A-mazing (and fun stuff).

3. Optimized Deep Learning is the new normal

AWS is going the extra mile to make your deep learning inference cheaper (and better). Not only are they building dedicated hardware, they have also optimised TensorFlow to make it much more efficient in terms of computational load. In particular, the AWS Deep Learning AMIs now support distributed training of TensorFlow deep learning models with near-linear scaling efficiency on up to 256 GPUs: a ResNet-50 model trained with TensorFlow and Horovod finishes in just under 15 minutes!
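
To see what "near-linear scaling efficiency" means in numbers, here is a quick back-of-the-envelope sketch. The 256-GPU, ~15-minute ResNet-50 figure is AWS's claim; the single-GPU baseline below is a purely illustrative number we made up for the arithmetic:

```python
# Scaling efficiency = actual speedup / ideal (linear) speedup.
# The single-GPU time here is hypothetical, for illustration only.

def scaling_efficiency(t_single, t_multi, n_workers):
    """How close a distributed run gets to perfect linear scaling."""
    speedup = t_single / t_multi
    return speedup / n_workers

# Hypothetical: 3,200 minutes on 1 GPU vs ~15 minutes on 256 GPUs.
eff = scaling_efficiency(t_single=3200.0, t_multi=15.0, n_workers=256)
print(f"speedup: {3200.0 / 15.0:.0f}x, efficiency: {eff:.0%}")
# -> speedup: 213x, efficiency: 83%
```

Anything above roughly 80% at 256 workers is what people usually mean by "near-linear".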

4. Bigger, better and cheaper instances for AI development and deployment

Essentially all of this was announced back in November 2018, but a couple of services are worth mentioning again, as they can make life much better for you:

  • Amazon Elastic Inference: lets you attach low-cost GPU-powered acceleration to Amazon EC2 and Amazon SageMaker instances. The upside? You can reduce the cost of running deep learning inference by up to 75%!
  • AWS Inferentia: a machine learning inference chip custom-designed by AWS that delivers high-throughput, low-latency inference at extremely low cost. Nice!
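
To make the "up to 75%" saving concrete, here is the arithmetic with hypothetical hourly prices (these are illustrative figures, not real AWS pricing): instead of renting a full GPU instance, you pair a cheap CPU instance with a small Elastic Inference accelerator.

```python
# Hypothetical hourly prices, for illustration only (not AWS pricing).
full_gpu_hourly = 3.00     # dedicated GPU instance, $/hour
cpu_hourly = 0.50          # plain CPU instance, $/hour
accelerator_hourly = 0.25  # Elastic Inference accelerator, $/hour

ei_hourly = cpu_hourly + accelerator_hourly
saving = 1 - ei_hourly / full_gpu_hourly
print(f"${ei_hourly:.2f}/h vs ${full_gpu_hourly:.2f}/h -> {saving:.0%} saved")
# -> $0.75/h vs $3.00/h -> 75% saved
```

The trick is that inference often needs only a slice of a GPU, so you pay for that slice instead of the whole card.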

5. Machine learning is becoming more and more "contained" and easily deployable to the edge

With services like AWS IoT Greengrass, you can now easily containerise your deep learning models and deploy them to devices. This is key if you want to build decentralised machine learning architectures and solutions that keep working even without an internet connection.
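
Here is a minimal sketch of the offline-first pattern this enables, in plain Python (this is not the Greengrass SDK; `local_predict` and `handle_reading` are hypothetical stand-ins): inference runs on the device, and results are only synced when the cloud happens to be reachable.

```python
# Sketch of edge-first inference: the model lives on the device,
# so predictions keep working with no internet connection.

def local_predict(features):
    """Hypothetical stand-in for a model deployed on the device."""
    return "anomaly" if sum(features) > 10 else "normal"

def handle_reading(features, cloud_is_reachable, upload):
    label = local_predict(features)  # works fully offline
    if cloud_is_reachable:           # sync is best-effort, not required
        upload({"features": features, "label": label})
    return label

# Offline: inference still works, nothing is uploaded.
print(handle_reading([1, 2, 3], cloud_is_reachable=False, upload=print))
# -> normal
```

The point of Greengrass is that AWS manages the deployment and update of that local model for you, instead of you hand-rolling it per device.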

Hope you enjoyed this blog post. If you want to know more about these five machine learning trends, or anything else we learned at the summit, just send us a message!
