There is broad consensus that successful training of deep networks requires many thousands of annotated training samples.
The difficulty: deep learning typically needs thousands of images, so with fewer than about a thousand images, deep learning is usually not viable.
In this paper, we present a network and training strategy that relies on the strong use of data augmentation to use the available annotated samples more efficiently.
What they did: propose a network and a data augmentation strategy that use the annotated data more efficiently.
The architecture consists of a contracting path to capture context and a symmetric expanding path that enables precise localization.
We show that such a network can be trained end-to-end from very few images and outperforms the prior best method (a sliding-window convolutional network) on the ISBI challenge for segmentation of neuronal structures in electron microscopic stacks.
Advantages: trained end-to-end, needs very few images, and beats the best methods of the time (2015).
Using the same network trained on transmitted light microscopy images (phase contrast and DIC) we won the ISBI cell tracking challenge 2015 in these categories by a large margin. Moreover, the network is fast. Segmentation of a 512x512 image takes less than a second on a recent GPU.
Advantages: fast on a GPU, and the top performer in the challenge.
The full implementation (based on Caffe) and the trained networks are available at http://lmb.informatik.uni-freiburg.de/people/ronneber/u-net
U-Net is a convolutional neural network that was developed for biomedical image segmentation at the Computer Science Department of the University of Freiburg, Germany. The network is based on the fully convolutional network and its architecture was modified and extended to work with fewer training images and to yield more precise segmentations. Segmentation of a 512 × 512 image takes less than a second on a modern GPU.
Summary - it is a CNN, specifically a fully convolutional network, and it is fast.
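The abstract above does not give layer-level details, but the contracting/expanding structure implies a specific size bookkeeping. As a sketch, assuming the original U-Net configuration (unpadded 3×3 convolutions in pairs, four 2×2 max-pool stages mirrored by four 2×2 up-convolutions), the spatial size of the feature maps can be traced like this:

```python
def unet_output_size(n, depth=4):
    """Trace the spatial size of a square input through a U-Net
    with unpadded (valid) 3x3 convolutions.

    `depth=4` is an assumption taken from the original U-Net paper,
    not stated in the notes above.
    """
    def conv_pair(s):
        # two 3x3 valid convolutions: each loses 2 pixels per side-length
        return s - 4

    s = n
    # contracting path: conv pair, then 2x2 max pool
    for _ in range(depth):
        s = conv_pair(s)
        assert s % 2 == 0, "size must be even before max-pooling"
        s //= 2
    s = conv_pair(s)  # bottleneck conv pair
    # expanding path: 2x2 up-convolution, then conv pair
    for _ in range(depth):
        s *= 2
        s = conv_pair(s)
    return s

print(unet_output_size(572))  # -> 388
```

This shows why, with valid convolutions, the output segmentation map is smaller than the input tile (572 → 388 in the original paper); a padded variant would instead keep the 512×512 size mentioned above unchanged.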
Summary - a full implementation with lots of code is available; something that could help me graduate.
Translation note - I think this technique could help me graduate.