Random Winner Photo Application with Microsoft Face API and Azure Functions

In this post I am going to explain how I used the Microsoft Face API, Azure Functions, and Azure Storage to create a random winner photo application.

Every month at Toronto .Net Meetup we give our members swag from our sponsors like Telerik or Pluralsight. But we always struggle to pick winners, as not everyone who RSVPs on meetup.com shows up, and people who never registered online often attend our meetups. So any attempt at using Excel and the exported list of attendees usually fails. To be fair to our attendees, I usually come up with random techniques to pick a winner from the crowd, like asking attendees to come up with a number and counting people in the room until I hit a winner. But to be honest, it is usually a bit chaotic.

So I decided to use technology to solve the problem. How about taking pictures of the people in the room (something we normally do to update the meetup photos), using image recognition to find the people in the image, and finally picking a winner among them at random? Sounds complicated? Well, I thought so too, but a little digging around Microsoft Azure technologies made it surprisingly easy.

Obtain an Azure Face API Key

First we need a Microsoft Face API key. To create one, navigate to the Azure Portal and create a Face API resource. You can use the F0 pricing tier, which, at the time of writing this post, allows you to make 20 calls per minute.

Azure Portal Face API

Once you have created the resource, take note of the Endpoint in the Overview tab and one of the API keys under the Keys tab.

Face API Endpoint and Key
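If you prefer the command line, the same resource can be provisioned with the Azure CLI. This is only a sketch: the resource name, resource group, and region below are placeholders you will need to adjust.

```shell
# Create a Face API resource on the free F0 tier
# (my-face-api, my-resource-group, and westus are placeholders)
az cognitiveservices account create \
  --name my-face-api \
  --resource-group my-resource-group \
  --kind Face \
  --sku F0 \
  --location westus \
  --yes

# List the API keys for the new resource
az cognitiveservices account keys list \
  --name my-face-api \
  --resource-group my-resource-group
```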

Create an Azure Function for Face Detection

Next, I am going to create an Azure Function to detect faces in an image and randomly choose a winner among those faces. To make things easy and cloudy, I am going to create a function which is triggered whenever an image is uploaded to Azure Blob storage. Then, using the Face API, the faces in the image will be detected and stored in Azure Table storage. Next, I will draw blue rectangles around all the faces. Finally, with a simple random number generator, I will pick one of the detected faces and draw a red rectangle around it.

I am going to use VS Code for development, and the project targets .NET Core. I also had the Azure Functions extension for VS Code installed during this implementation.

Create a Storage Account

An Azure Function that is triggered by a file upload to blob storage needs a storage account. I am also going to use the same storage account for the Azure Function to write its logs. Follow the official tutorial at Microsoft Docs to create a storage account, and take note of the connection string for the next step:

Azure Storage Connection String

Create a Function with Blob Trigger

Open the VS Code Command Palette and type the Azure Functions: Create Function command. This will walk you step by step through bootstrapping an Azure Function using the Azure Functions extension.

I have provided the following values:

  • Function Name: FaceDetect
  • Blob Container Template: facedetectscontainer/{name}
  • New Storage Connection Name: face_detect_storage
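The face_detect_storage connection name maps to an app setting. For local development that setting lives in local.settings.json; here is a sketch, where the placeholder values are the connection string copied from the storage account earlier:

```json
{
  "IsEncrypted": false,
  "Values": {
    "AzureWebJobsStorage": "<storage connection string>",
    "face_detect_storage": "<storage connection string>"
  }
}
```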

Here is a snapshot of the generated code:

using System.IO;
using Microsoft.Azure.WebJobs;
using Microsoft.Azure.WebJobs.Host;

namespace RandomWinnerPhotoApp
{
    public static class FaceDetect
    {
        [FunctionName("FaceDetect")]
        public static void Run([BlobTrigger("facedetectscontainer/{name}", Connection = "face_detect_storage")]Stream imageStream, string name, TraceWriter log)
        {
            log.Info($"C# Blob trigger function Processed blob\n Name:{name} \n Size: {imageStream.Length} Bytes");
        }
    }
}

Use Face API to Detect Image Faces

To use the Microsoft Face API, we can either construct the API calls manually or use one of the available SDKs. Run the following command to get the latest version of the Microsoft Cognitive Services Face API client from NuGet:

dotnet add package Microsoft.ProjectOxford.Face --version 1.4.0

Note: you will get a few warning messages once this package is installed. Microsoft.ProjectOxford.Face is not fully compatible with netstandard2.0, which is the target of the .NET Core project. However, the project will still build successfully (tested on Windows 10 and macOS).
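The dotnet add package command above simply adds a package reference to the project file; equivalently, you can edit the .csproj yourself:

```xml
<ItemGroup>
  <PackageReference Include="Microsoft.ProjectOxford.Face" Version="1.4.0" />
</ItemGroup>
```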

Let's create an image service which uses the SDK to detect faces on an image:

using System;
using System.Collections.Generic;
using System.IO;
using System.Threading.Tasks;
using Microsoft.ProjectOxford.Face;
using Microsoft.ProjectOxford.Face.Contract;
using Microsoft.WindowsAzure.Storage.Table;

public class ImageService
{
    public async Task<List<FaceLocation>> DetectImageFaces(Stream image, string imageName)
    {
        // Replace the placeholders with the key and endpoint from the Azure Portal.
        IFaceServiceClient faceServiceClient = new FaceServiceClient("FACE API KEY", "Face API Endpoint");
        List<FaceLocation> faceLocations = new List<FaceLocation>();
        try
        {
            Face[] faces = await faceServiceClient.DetectAsync(image, returnFaceId: true, returnFaceLandmarks: false);
            foreach (Face face in faces)
            {
                var faceRectangle = new FaceLocation()
                {
                    Top = face.FaceRectangle.Top,
                    Left = face.FaceRectangle.Left,
                    Height = face.FaceRectangle.Height,
                    Width = face.FaceRectangle.Width
                };
                faceRectangle.RowKey = Guid.NewGuid().ToString();
                faceRectangle.PartitionKey = imageName;
                faceLocations.Add(faceRectangle);
            }
        }
        catch (Exception ex)
        {
            Console.WriteLine(ex.Message);
        }
        return faceLocations;
    }
}

public class FaceLocation : TableEntity
{
    public int Left { get; set; }
    public int Top { get; set; }
    public int Width { get; set; }
    public int Height { get; set; }
}

Using the SDK makes the code very straightforward:

  • Create an instance of FaceServiceClient.
  • Call DetectAsync function by passing the image stream.
  • Return a collection of detected face locations.

Draw Rectangles Around Faces

The next step is to use a drawing library to draw rectangles around people's faces. Since this Azure Function is written in .NET Core, we need to find a cross-platform drawing library. I am going to use CoreCompat.System.Drawing:

dotnet add package CoreCompat.System.Drawing --version 1.0.0-beta006

The following function takes an image stream, a list of face locations, and an output image stream to save the modified image. Using the drawing library, I draw a blue rectangle around each detected face. Then I randomly pick a face from the list and draw a red rectangle around it.

public void DrawRectangleOnImage(Stream image, List<FaceLocation> faces, Stream outImage)
{
    Bitmap faceBitmap = new Bitmap(image);
    using (var g = Graphics.FromImage(faceBitmap))
    {
        foreach(var face in faces)
        {
            var faceRect = new Rectangle(face.Left, face.Top, face.Width, face.Height);
            Pen skyBluePen = new Pen(Brushes.DeepSkyBlue);
            skyBluePen.Width = 4.0F;
            skyBluePen.LineJoin = System.Drawing.Drawing2D.LineJoin.Bevel;
            g.DrawRectangle(skyBluePen, faceRect);
            skyBluePen.Dispose();
        }

        var random = new Random();
        // Random.Next's upper bound is exclusive, so faces.Count gives every face a chance to win.
        var randomWinnerNumber = random.Next(faces.Count);
        var randomWinner = faces[randomWinnerNumber];
        var winnerFaceRect = new Rectangle(randomWinner.Left, randomWinner.Top, randomWinner.Width, randomWinner.Height);
        Pen winnerPen = new Pen(Brushes.OrangeRed);
        winnerPen.Width = 8.0F;
        winnerPen.LineJoin = System.Drawing.Drawing2D.LineJoin.Bevel;
        g.DrawRectangle(winnerPen, winnerFaceRect);
        winnerPen.Dispose();
    }

    faceBitmap.Save(outImage, ImageFormat.Jpeg);
}

Implement The Azure Function

Finally, let's use the ImageService in the Azure Function. I have modified the function generated by the Azure Functions VS Code extension to log the detected faces to Azure Table storage and to write the face-detected image to Azure Blob storage.

public static class FaceDetect
{
    [FunctionName("FaceDetect")]
    public static async Task Run(
        [BlobTrigger("facedetectscontainer/{name}", Connection = "face_detect_storage")] Stream imageStream,
        [Table("imagefacelocations", Connection = "face_detect_storage")] IAsyncCollector<FaceLocation> outTable,
        [Blob("facedetectscontainer-edits/{name}", FileAccess.Write, Connection = "face_detect_storage")] Stream outBlob,
        string name,
        TraceWriter log)
    {
        log.Info($"C# Blob trigger function Processed blob\n Name:{name} \n Size: {imageStream.Length} Bytes");

        string lowerCaseName = name.ToLower();
        ImageService imageService = new ImageService();
        List<FaceLocation> facesResult = await imageService.DetectImageFaces(imageStream, lowerCaseName);
        if (facesResult.Count > 0)
        {
            foreach (var face in facesResult)
            {
                await outTable.AddAsync(face);
            }
            
            // Rewind the stream before drawing: the Face API call has already read it to the end.
            imageStream.Position = 0;
            imageService.DrawRectangleOnImage(imageStream, facesResult, outBlob);
        }
    }
}

And this is enough to make my random winner app work. While deploying the (.NET Core) function to Azure, I went through a couple of challenges, which will be the topic of another post. But after I deployed the function to Azure, a simple image upload to the facedetectscontainer blob container triggered the function, and after a few seconds I had a winner selected. Fair and square.
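For reference, publishing the function from the command line with Azure Functions Core Tools looks roughly like this (the function app name is a placeholder, and the app must already exist in Azure):

```shell
# Publish the local project to an existing function app (name is a placeholder)
func azure functionapp publish my-face-detect-app
```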

Face Detection Sample