To protect ocean ecosystems, The Nature Conservancy monitors unregulated and unreported tuna fishing in the Western and Central Pacific Ocean. The organization equipped commercial boats with recording cameras and GPS devices to verify that catches comply with the rules. However, the footage from a ten-hour trip takes nearly six hours to review. To speed up the process, The Nature Conservancy needed an AI-based solution that would detect fish species automatically.
Tough Call: Image Detection Approach
Our team of three engineers entered the contest along with 2,300 teams from around the globe. All teams were provided with a dataset of 3,700 images sliced from video footage of fishing practices, and we had five months to develop the algorithm. First, we turned to tried-and-true algorithms for image detection, such as HOG and the Viola-Jones method, but ultimately decided that these would not be effective.
In machine learning, convolutional neural networks have outstanding potential for analyzing visual imagery, so we moved from classical computer vision to object detection approaches based on convolutional neural networks. Our team evaluated current state-of-the-art object detection approaches such as OverFeat, the Single Shot Detector (SSD), and the R-CNN family (Fast R-CNN and Faster R-CNN).
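Whatever the backbone, these detector families share a common post-processing step: merging overlapping candidate boxes with non-maximum suppression (NMS), which keeps the highest-scoring box and discards neighbors that overlap it too much. Below is a minimal, generic sketch of IoU-based NMS in NumPy, shown for illustration only; it is not our competition code.

```python
import numpy as np

def iou(box, boxes):
    """Intersection-over-union of one box against an array of boxes.

    Boxes are [x1, y1, x2, y2] in pixel coordinates.
    """
    x1 = np.maximum(box[0], boxes[:, 0])
    y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2])
    y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.maximum(0.0, x2 - x1) * np.maximum(0.0, y2 - y1)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter)

def nms(boxes, scores, threshold=0.5):
    """Greedy non-maximum suppression; returns indices of kept boxes."""
    order = np.argsort(scores)[::-1]  # highest score first
    keep = []
    while order.size:
        i = order[0]
        keep.append(int(i))
        # drop remaining boxes that overlap the kept box too strongly
        overlaps = iou(boxes[i], boxes[order[1:]])
        order = order[1:][overlaps <= threshold]
    return keep
```

For two nearly coincident detections of the same fish, only the higher-scoring box survives, while a box elsewhere in the frame is untouched.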
A Pretty Kettle of Fish on Our Way
We had to take all of the following into account:
Low-quality photos from some boats, caused by water drops and sheen on the camera lenses.
Poor lighting conditions. Night images had shifted white balance with green and blue dominance.
Unbalanced initial image selection. Only 20% of the 3,700 photos were relevant, and the eight classes were represented in very different quantities.
Obstructions such as fish-like objects on the boat and moving people.
Very often, fish were butchered right on board, which meant a once-detected fish could later reappear in a different condition.
Each image was labeled with only one fish category, so when several fish appeared in a frame, it was important to understand which one took priority.
Fish size differed substantially throughout the dataset. For example, very small fish used as bait didn't have to be taken into account.
Models could learn to classify fish from the provided images of particular ships, but when applied to other ships with different features, they were noticeably less accurate.
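Some of these issues can be mitigated in preprocessing. For the green-and-blue cast of the night frames, for instance, a gray-world white-balance correction is one simple option; the sketch below is a generic illustration, not necessarily the exact preprocessing used in our pipeline.

```python
import numpy as np

def gray_world_balance(image):
    """Rescale each color channel so its mean matches the overall mean.

    `image` is an HxWx3 float array with values in [0, 1]. The gray-world
    assumption: on average the scene is neutral gray, so a dominant green
    or blue channel indicates a color cast to remove.
    """
    channel_means = image.reshape(-1, 3).mean(axis=0)
    gain = channel_means.mean() / channel_means  # boost dim channels, damp strong ones
    return np.clip(image * gain, 0.0, 1.0)
```

After correction, the per-channel means of a color-cast frame converge toward a common gray level, which makes night and day footage look more alike to a downstream model.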
The algorithm for fish detection and classification.
Outcome and Scoreboard Result
Our team built two algorithms: a fish detection algorithm and a whole-image classification algorithm. The detection algorithm could classify eight categories: six fish species (albacore tuna, bigeye tuna, yellowfin tuna, Mahi Mahi, opah, and sharks), "other," and "nothing," with 95% precision, while the classification algorithm could detect fish on a boat with 93% accuracy. During the second stage of the competition, we had to apply the developed algorithms to more than 12,000 images.
Ultimately, our team hit bronze in the Nature Conservancy's contest. We scored a logarithmic loss of 1.5 on the contest's main performance metric, while the best score in the competition was 1.1 (for log loss, lower is better).
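For reference, multiclass logarithmic loss penalizes confident wrong predictions heavily, which is why calibrated probabilities matter more than raw accuracy here. The following is a generic implementation for illustration; the contest used its own scoring script.

```python
import numpy as np

def multiclass_log_loss(y_true, y_pred, eps=1e-15):
    """Multiclass logarithmic loss.

    y_true: integer class indices, shape (n,)
    y_pred: predicted class probabilities, shape (n, n_classes)
    """
    p = np.clip(y_pred, eps, 1 - eps)
    p = p / p.sum(axis=1, keepdims=True)  # renormalize after clipping
    # average negative log-probability assigned to the true class
    return -np.mean(np.log(p[np.arange(len(y_true)), y_true]))
```

A model that spreads probability uniformly over the eight classes scores ln(8) ≈ 2.08, so both 1.5 and the winning 1.1 are well below the uninformed baseline.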
As more data is processed, the algorithm's accuracy should keep improving. The Nature Conservancy plans to embed the fish-identifying algorithm into automated monitoring software.