This article presents an investigation into the problem of 3D radar echo extrapolation in precipitation nowcasting, using recent AI advances together with a viewpoint from Computer Vision. While Deep Learning methods, especially convolutional recurrent neural networks, have been developed to perform extrapolation, most works use 2D radar images rather than 3D volumes. In addition, the few works that do try 3D data do not present a clear picture of their results. Through this study, we found a potential problem in convolution-based prediction of 3D data, which resembles the cross-talk effect in multi-channel radar processing but has not been well documented in the literature, and we identified its root cause. The problem is that, when different channels are generated from one receptive field, information in one channel, especially observation errors, may unexpectedly affect the other channels. We found that, when the early-stopping technique is used to avoid over-fitting, the receptive field does not learn enough to cancel this unnecessary information. Increasing the number of training iterations reduces the effect, but may worsen over-fitting. We therefore proposed a new output generation block that generates each channel separately and demonstrated the resulting improvement. Moreover, we also found that common image augmentation techniques from Computer Vision are helpful for radar echo extrapolation, improving the test mean squared error of the employed models by at least 20% in our experiments.
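To make the proposed remedy concrete, the following is a minimal sketch in PyTorch of an output head that generates each channel through its own convolutional branch, contrasted with a conventional head that projects all channels from one shared set of features. The layer sizes, kernel sizes, and the choice of a 3x3-then-1x1 branch are illustrative assumptions; the abstract only states that each channel is generated separately, not the exact architecture.

```python
import torch
import torch.nn as nn

class SharedOutputHead(nn.Module):
    """Conventional head: one convolution maps the shared features to all
    output channels at once, so every channel reads the same receptive field."""
    def __init__(self, hidden_channels: int, out_channels: int):
        super().__init__()
        self.proj = nn.Conv2d(hidden_channels, out_channels, kernel_size=1)

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        return self.proj(h)

class PerChannelOutputBlock(nn.Module):
    """Sketch of the per-channel idea: each output channel (e.g. each vertical
    level of the 3D radar volume) is generated by its own small branch, so
    unwanted information is less likely to leak from one channel into another."""
    def __init__(self, hidden_channels: int, out_channels: int):
        super().__init__()
        self.branches = nn.ModuleList(
            nn.Sequential(
                nn.Conv2d(hidden_channels, hidden_channels, kernel_size=3, padding=1),
                nn.ReLU(inplace=True),
                nn.Conv2d(hidden_channels, 1, kernel_size=1),
            )
            for _ in range(out_channels)
        )

    def forward(self, h: torch.Tensor) -> torch.Tensor:
        # Concatenate the independently generated channels along the channel dim.
        return torch.cat([branch(h) for branch in self.branches], dim=1)

# Hypothetical usage: a 64-feature hidden state from a ConvRNN decoder,
# predicting 10 vertical levels on a 128 x 128 grid.
h = torch.randn(2, 64, 128, 128)
print(PerChannelOutputBlock(64, 10)(h).shape)  # torch.Size([2, 10, 128, 128])
```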
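The augmentation finding can be sketched in the same spirit. The abstract does not specify which transformations were used, so the random flips and 90-degree rotations below are only plausible examples of common Computer Vision augmentations, applied consistently to a radar input sequence and its extrapolation target.

```python
import random
import torch

def augment_sequence(inputs: torch.Tensor, targets: torch.Tensor):
    """Apply one random spatial transform to every frame of the input sequence
    and its target sequence (shape: time, channels, height, width), so that the
    depicted storm motion stays geometrically consistent across frames."""
    if random.random() < 0.5:  # horizontal flip
        inputs = torch.flip(inputs, dims=[-1])
        targets = torch.flip(targets, dims=[-1])
    if random.random() < 0.5:  # vertical flip
        inputs = torch.flip(inputs, dims=[-2])
        targets = torch.flip(targets, dims=[-2])
    k = random.randint(0, 3)   # rotate both sequences by k * 90 degrees
    inputs = torch.rot90(inputs, k, dims=(-2, -1))
    targets = torch.rot90(targets, k, dims=(-2, -1))
    return inputs, targets
```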