Convolutional Neural Networks (CNNs) can have a large number of parameters and therefore high storage and computational requirements, which resource-constrained edge devices typically cannot satisfy. Current industry practice for making decisions at the edge is thus to transfer visual data from the edge to cloud nodes, run a CNN on that data in the cloud, and return the output to the edge device. This approach has two problems: sending visual data from the edge to the cloud requires high bandwidth between them, and the computational resources available at the edge go unused. One solution is to split the CNN between the edge and the cloud, but how to split a CNN efficiently has yet to be investigated in detail. In this paper, we propose a novel CNN splitting algorithm that efficiently splits a CNN between edge and cloud with the sole objective of reducing bandwidth consumption. We consider parameters such as the task load at the edge, the input image dimensions, and bandwidth constraints in order to choose the best splitting layer. Through experiments, we show that to optimize our objective function, the CNN should be split only at layers whose output dimensions are smaller than the input image dimensions; a random partitioning of layers between edge and cloud can increase bandwidth consumption. The proposed algorithm dynamically chooses the best splitting layer and moves CNN layers between edge and cloud as required, thus allowing multitasking at the edge while optimizing bandwidth consumption. These tasks are performed without any loss of prediction accuracy, since we do not modify the pretrained CNN architecture that we use.
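
To make the layer-selection criterion concrete, the following is a minimal illustrative sketch, not the full algorithm proposed in the paper: it selects the split layer whose activation is smallest among those smaller than the raw input image, and ignores the task-load and bandwidth terms that the full algorithm also considers. All layer names and sizes are hypothetical.

```python
# Illustrative sketch (hypothetical layer names/sizes): pick a split layer so that
# the data transmitted from edge to cloud is smaller than sending the raw image.

def choose_split_layer(input_bytes, layer_output_bytes):
    """Return the name of the layer after which to split, or None to send the raw image.

    input_bytes        -- size of the input image in bytes
    layer_output_bytes -- ordered list of (layer_name, activation size in bytes)
    """
    # Only layers whose output is smaller than the input can reduce bandwidth;
    # among those, pick the one with the smallest activation to transmit.
    candidates = [(name, size) for name, size in layer_output_bytes if size < input_bytes]
    if not candidates:
        return None  # no layer helps: offload the whole CNN to the cloud
    return min(candidates, key=lambda item: item[1])[0]


if __name__ == "__main__":
    # Hypothetical per-layer activation sizes (bytes) for a small CNN.
    layers = [
        ("conv1", 1_204_224),
        ("pool1", 301_056),
        ("conv2", 602_112),
        ("pool2", 150_528),
        ("fc1", 16_384),
    ]
    image_bytes = 224 * 224 * 3  # 150,528-byte RGB input
    print(choose_split_layer(image_bytes, layers))  # -> "fc1"
```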