Image classification, looking forward (and sideways)

A couple of Fridays have passed since the last post, but it’s not for lack of activity on our side. We’ve been busily working away to grow our capability. One of the features we had in mind from the outset of FatigueM8 was to add context to the ECG observations by overlaying additional data sets. This week we’ve brought an additional data stream online, leveraging the AWS Rekognition service.

As we’ve explored in previous posts, we’ve had our fair share of challenges with our forward-facing camera, and adding in the Rekognition classifications threw up a couple more curve balls. The pictures below are from two (2) of our trucks where the FatigueM8 unit had shifted since being installed. Being prototypes, we’d used “suckers” to attach the unit to the windscreen, and with the changing temperatures the suction was lost, leaving the cameras pointing every which way.

A quick trip to the local hardware store had us back up and pointing in the right direction! A 25 cm elastic strap and a couple of cable ties put the install back on track.

The resulting photos (below) are back to being captured correctly and are ready for processing through Rekognition. Now, when an image is uploaded from the truck to the cloud, it’s classified by Rekognition and, depending on what’s found, boxes are drawn around the detected objects.
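For the curious, the classification step can be sketched roughly like so. This is a minimal illustration, not our actual code: the function names (`detect_objects`, `to_pixel_box`) are made up for this post, and it assumes boto3 with AWS credentials already configured. Rekognition returns bounding boxes as fractions of the image, so we convert them to pixels before drawing.

```python
def to_pixel_box(bbox, img_width, img_height):
    """Convert Rekognition's relative BoundingBox dict to pixel (left, top, right, bottom)."""
    left = int(bbox["Left"] * img_width)
    top = int(bbox["Top"] * img_height)
    right = left + int(bbox["Width"] * img_width)
    bottom = top + int(bbox["Height"] * img_height)
    return (left, top, right, bottom)

def detect_objects(image_bytes, min_confidence=70.0):
    """Classify one uploaded frame with Rekognition and return its labels.

    Each label may carry 'Instances' with relative bounding boxes that
    to_pixel_box() can turn into drawable pixel coordinates.
    """
    import boto3  # imported here so the helper above works without AWS installed
    client = boto3.client("rekognition")
    resp = client.detect_labels(
        Image={"Bytes": image_bytes},
        MaxLabels=20,
        MinConfidence=min_confidence,
    )
    return resp["Labels"]
```

The caller would then loop over each label’s instances, convert the boxes to pixels, and draw rectangles onto the image before storing it.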

If a person is detected in the image, we blur them out or discard the image entirely, just to be safe.
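The privacy check itself is straightforward given the labels Rekognition returns. The sketch below is a hypothetical version of that decision step (the names and the area threshold are our own illustration, not production code): collect any "Person" instances so their regions can be blurred, and drop the frame if a person dominates it.

```python
def person_boxes(labels):
    """Relative bounding boxes of every detected 'Person' instance."""
    return [
        inst["BoundingBox"]
        for label in labels
        if label["Name"] == "Person"
        for inst in label.get("Instances", [])
    ]

def keep_frame(labels, max_person_area=0.5):
    """Keep the frame only if no single person covers more than max_person_area
    of the image; otherwise the safe choice is to discard it outright."""
    return all(
        box["Width"] * box["Height"] <= max_person_area
        for box in person_boxes(labels)
    )
```

Frames that pass `keep_frame` would still have each `person_boxes` region blurred before the image is used downstream.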

As we run all the images through this process, we’ll be able to build up a picture (no pun intended) of the traffic, road and environmental conditions our drivers are driving in, adding more context to the ECG observations we’re collecting.

Until next time, stay safe.