Cerebral palsy (CP) is a motor dysfunction caused by brain injury or brain malformation occurring in utero, during birth, or shortly after birth. Objective evaluation of gait function is clinically important for patients with this kind of motor dysfunction. The gait deviation index (GDI) is one important measure and is typically obtained with optical motion capture: markers are attached to various positions on the patient's body, and the patient performs specific movements in a well-equipped laboratory environment. However, wearing markers interferes with the patient's natural gait, and the measurement requires such a well-equipped environment. Thanks to advances in artificial intelligence (AI) and image processing, it is now possible to estimate a person's landmarks (pose estimation) from videos taken with an ordinary camera, such as a smartphone, without specialized equipment or well-defined markers. The ability to obtain important measures such as the GDI without specialized equipment can allow patient data to be collected in the comfort of their homes and can support early detection of CP. Many AI-driven pose estimation algorithms, including the state-of-the-art OpenPose, use deep convolutional neural networks (CNNs) for robust landmark estimation. However, the public data sets used to train current CNN-based models, such as COCO and MPII, do not truly reflect the poses found in low-resolution at-home videos. In addition, typical videos of CP patients may contain information about the heel and toe only. Therefore, unless additional data sets are used, the accuracy of these algorithms is often evaluated only up to the ankles. In this work, we present our research efforts on evaluating various AI-driven video-based gait analysis methods for CP patients, and we illustrate problems in real-world data that are not represented in the public data sets underlying current AI video gait analysis models.
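
To make the pose-estimation pipeline concrete: gait measures such as the GDI are derived from joint kinematics, which in a marker-free setting must be reconstructed from the 2D landmark coordinates a pose estimator returns. The sketch below (an illustration, not the method evaluated in this work) shows how a sagittal-plane knee flexion angle could be computed from hip, knee, and ankle keypoints in a single video frame; the pixel coordinates are hypothetical.

```python
import math

def joint_angle(a, b, c):
    """Angle at point b (in degrees) between segments b->a and b->c.

    Points are (x, y) image coordinates, e.g. 2D pose-estimator keypoints.
    """
    v1 = (a[0] - b[0], a[1] - b[1])
    v2 = (c[0] - b[0], c[1] - b[1])
    dot = v1[0] * v2[0] + v1[1] * v2[1]
    norm = math.hypot(*v1) * math.hypot(*v2)
    # Clamp to [-1, 1] to guard against floating-point round-off
    cos_t = max(-1.0, min(1.0, dot / norm))
    return math.degrees(math.acos(cos_t))

# Hypothetical hip, knee, and ankle pixel coordinates from one frame
hip, knee, ankle = (320, 200), (330, 300), (325, 400)

# Knee flexion is commonly expressed as deviation from a straight leg
knee_flexion = 180.0 - joint_angle(hip, knee, ankle)
```

Tracking such angles across a full gait cycle, for each of the joints a given estimator reports, is what would feed a GDI-style summary; the limitation noted above (landmarks often reliable only down to the ankle, with heel and toe missing from training data) directly constrains which of these angles can be recovered from at-home video.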