Today’s deep learning methods focus on designing objective functions that make a model’s predictions as close as possible to the ground truth. Meanwhile, an appropriate neural network architecture must also be designed.
Existing methods ignore the fact that when input data undergoes layer-by-layer feature transformation, a large amount of information is lost.
This paper delves into the important issues of the information bottleneck and reversible functions. We propose the concept of programmable gradient information (PGI) to cope with the various changes required by deep networks to achieve multiple objectives.
PGI can provide complete input information for the target task when calculating the objective function, so that reliable gradient information can be obtained to update network parameters. In addition, we design a lightweight network architecture, the Generalized Efficient Layer Aggregation Network (GELAN).
The design of GELAN confirms that PGI achieves superior results on lightweight models. We verified the proposed GELAN and PGI on the MS COCO object detection dataset.
The results show that GELAN, using only conventional convolution operators, achieves better parameter utilization than state-of-the-art methods built on depth-wise convolution.
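To make the "conventional convolution operators" claim concrete, below is a minimal sketch of what a GELAN-style aggregation block could look like: the input channels are split, part of them pass through stacked plain 3x3 convolution stages, and every intermediate output is concatenated and fused. The module names, channel splits, and number of stages here are illustrative assumptions, not the paper's exact configuration.

```python
import torch
import torch.nn as nn


class ConvBNAct(nn.Module):
    """Conventional 3x3 (or 1x1) convolution + BatchNorm + SiLU; no depth-wise conv."""
    def __init__(self, c_in, c_out, k=3, s=1):
        super().__init__()
        self.conv = nn.Conv2d(c_in, c_out, k, s, k // 2, bias=False)
        self.bn = nn.BatchNorm2d(c_out)
        self.act = nn.SiLU()

    def forward(self, x):
        return self.act(self.bn(self.conv(x)))


class GELANStyleBlock(nn.Module):
    """Illustrative aggregation block: split channels, run one half through
    stacked conv stages, keep every intermediate output, concatenate, fuse."""
    def __init__(self, c_in, c_out, n_stages=2):
        super().__init__()
        c_hidden = c_in // 2
        self.stem = ConvBNAct(c_in, c_in, k=1)
        self.stages = nn.ModuleList(
            ConvBNAct(c_hidden, c_hidden) for _ in range(n_stages)
        )
        # the two split halves plus one output per stage are concatenated
        self.fuse = ConvBNAct(c_in + n_stages * c_hidden, c_out, k=1)

    def forward(self, x):
        a, b = self.stem(x).chunk(2, dim=1)   # split channels into two paths
        outs = [a, b]
        for stage in self.stages:
            b = stage(b)
            outs.append(b)                     # aggregate every stage output
        return self.fuse(torch.cat(outs, dim=1))


if __name__ == "__main__":
    block = GELANStyleBlock(c_in=64, c_out=128)
    print(block(torch.randn(1, 64, 32, 32)).shape)  # torch.Size([1, 128, 32, 32])
```

Because every layer is a standard dense convolution, such a block maps directly onto hardware and libraries optimized for conventional convolution, which is one way a design like this can trade parameters for accuracy more efficiently than depth-wise alternatives.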
PGI can be used for a variety of models, from lightweight to large. It can be used to obtain complete information, so that models trained from scratch can achieve better results than state-of-the-art models pre-trained on large datasets; the comparison results are shown in the figure above.
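As an illustration of how an auxiliary gradient signal of this kind might be wired up in practice, the sketch below assumes PGI is realized as a training-only auxiliary branch whose loss provides an additional gradient path back to the shared backbone and which is dropped at inference. All module names, the loss, and the auxiliary weight are placeholders, not the paper's implementation.

```python
import torch
import torch.nn as nn

# Shared feature extractor and two heads: the main branch is kept at inference,
# the auxiliary branch exists only to shape gradients during training.
backbone = nn.Sequential(nn.Conv2d(3, 16, 3, padding=1), nn.SiLU())
main_head = nn.Conv2d(16, 80, 1)   # main prediction branch (kept at inference)
aux_head = nn.Conv2d(16, 80, 1)    # auxiliary branch (training only)

criterion = nn.MSELoss()           # placeholder objective
optimizer = torch.optim.SGD(
    [*backbone.parameters(), *main_head.parameters(), *aux_head.parameters()],
    lr=0.01,
)

x = torch.randn(2, 3, 64, 64)
target = torch.randn(2, 80, 64, 64)

features = backbone(x)
# Both branches see the same features; the auxiliary loss contributes an
# additional, more direct gradient signal to the backbone parameters.
loss = criterion(main_head(features), target) \
     + 0.25 * criterion(aux_head(features), target)
optimizer.zero_grad()
loss.backward()
optimizer.step()

# At inference only backbone + main_head are used, so the auxiliary branch
# adds no runtime cost.
```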