Blog

Donut or Not! - Deploying Deep Learning Inference as Serverless Functions

There is a common myth that performing deep learning requires high-compute, GPU-enabled devices. While this is largely true for training deep learning models, it is often entirely possible to run inference on simple CPU-based architectures in compute- and memory-constrained environments, such as serverless functions. This blog takes you through my journey of deploying a simple donut vs bagel vs vada classifier as an AWS Lambda function!
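To make that concrete, here is a minimal sketch of what such a CPU-only inference handler can look like. The details are assumptions, not the post's actual code: it supposes the trained classifier was exported to ONNX as model.onnx and bundled with the function, that the model takes a 1x3x224x224 float tensor named "input", and that the request payload carries pre-processed pixel values.

```python
# handler.py - a minimal sketch of CPU-only deep learning inference in Lambda.
# Assumptions (hypothetical, not from the post): the classifier is exported to
# ONNX as model.onnx, expects a 1x3x224x224 float32 tensor named "input", and
# the class order below matches how the model was trained.
import json

import numpy as np
import onnxruntime as ort

CLASSES = ["donut", "bagel", "vada"]

# Load the model once, at module import time, so warm invocations reuse it.
session = ort.InferenceSession("model.onnx")


def handler(event, context):
    # Expect a pre-processed image as a nested list in the invocation payload.
    pixels = np.asarray(event["pixels"], dtype=np.float32).reshape(1, 3, 224, 224)
    logits = session.run(None, {"input": pixels})[0]
    prediction = CLASSES[int(np.argmax(logits))]
    return {"statusCode": 200, "body": json.dumps({"class": prediction})}
```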

AWS Lambda Functions - authoring with Docker images

At re:Invent 2020, AWS announced support for authoring, shipping, and deploying Lambda functions as Docker images, with images up to 10 GB in size. As many observers noted, this is a game changer, particularly for the scientific Python community, since it lets us author machine learning and even deep learning inference functions on AWS Lambda. This blog takes a quick look at authoring a "hello world" style Lambda using Docker.
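As a taste of what that post covers, below is a minimal "hello world" handler of the kind that gets baked into such a container image. The file name app.py and the base image noted in the comments follow AWS's documented convention for Python Lambda images, but the exact layout here is an illustrative assumption, not the post's actual code.

```python
# app.py - a minimal "hello world" Lambda handler for a container image build.
# The Dockerfile (not shown) would start FROM AWS's public Python base image,
# public.ecr.aws/lambda/python, COPY this file in, and set CMD to app.handler.
import json


def handler(event, context):
    """Echo a greeting; Lambda passes the invocation payload as `event`."""
    name = event.get("name", "world")
    return {
        "statusCode": 200,
        "body": json.dumps({"message": f"hello, {name}!"}),
    }
```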

How many snakes do you need? - An introduction to concurrency and parallelism in Python

Performance matters

At some point, every Python developer wonders whether it is their program that is slow, or Python itself. In most cases, it is the program. Although Python gets a bad rap for being slower than compiled languages like C and C++, developers can use concurrency and parallelism to achieve significant speedups.
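As a quick illustration of the distinction the post explores, the sketch below uses the standard library's concurrent.futures: threads to overlap I/O-bound waiting (the GIL is released while blocking) and processes to spread CPU-bound work across cores. The workloads are made-up stand-ins, not examples from the post.

```python
# A minimal sketch contrasting threads (I/O-bound) with processes (CPU-bound).
import time
from concurrent.futures import ProcessPoolExecutor, ThreadPoolExecutor


def io_bound_task(_):
    time.sleep(0.5)  # stands in for a network call or disk read


def cpu_bound_task(n):
    return sum(i * i for i in range(n))  # stands in for heavy computation


if __name__ == "__main__":
    # Threads overlap the waiting: roughly 0.5s total instead of ~2s serially.
    with ThreadPoolExecutor(max_workers=4) as pool:
        list(pool.map(io_bound_task, range(4)))

    # Processes sidestep the GIL, running the computation on multiple cores.
    with ProcessPoolExecutor(max_workers=4) as pool:
        print(list(pool.map(cpu_bound_task, [2_000_000] * 4)))
```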